\section{Introduction} \label{introsec} The characterisation of giant molecular clouds (GMCs) -- the sites of nearly all star formation activity in the local Universe -- is an important step towards understanding how stars are born. Molecular hydrogen (H$_2$) is the most abundant molecule in the interstellar medium (ISM), but its rotational emission lines are not excitable at the temperatures found in most GMCs. However, the second most abundant molecule is carbon monoxide (CO), which has rotational transitions that are easily excitable within typical GMCs, making CO a good tracer of molecular gas. Additionally, the lower rotational transitions of CO emit at frequencies that can be observed from the ground. Therefore, CO has become the favoured tracer for studying molecular gas in GMCs \citep{Liszt:1998tx, Dame:2001bg,McKee:2007bd,2008ApJ...680..428G,Bolatto:2013hl,2016SAAS...43...85K}. CO is not without problems as a tracer of molecular gas. Its emission is highly sensitive to environmental conditions \citep{2012A&A...541A..58L}, and traces only a limited range of column densities. At low column densities, CO is rapidly photodissociated by the interstellar radiation field (ISRF) \citep{1988ApJ...334..771V}, while at high column densities, its emission lines become optically thick. For the $J = 1-0$ line of $^{12}$CO, this occurs at a CO column density of $N_{\rm CO} \approx 10^{16} \: {\rm cm^{-2}}$, corresponding to a visual extinction of only a few magnitudes \citep{Liszt:1998tx}. Observations of this line therefore do not directly probe the highest density regions of the cloud. Despite this, CO emission still contains plenty of information about the cloud's conditions and structure. A study by \citet{1990A&A...234..469C} illustrates this by investigating the emission ratio ($R_{2-1/1-0}$) for CO's lowest two rotational transitions, $J = 2-1$ and $J=1-0$.
This ratio is conventionally defined as \begin{equation} R_{2-1/1-0}=\frac{W_{2-1}}{W_{1-0}}, \label{eq:ratio} \end{equation} where $W_{1-0}$ and $W_{2-1}$ are the velocity-integrated brightness temperatures of the $J=1-0$ and $J=2-1$ rotational transition lines of CO, expressed in units of ${\rm K \, km \, s^{-1}}$. These two transitions have energy separations $E_{10} / k_{\rm B} = 5.5$~K and $E_{21} / k_{\rm B} = 11.04$~K, respectively, where $k_{\rm B}$ is Boltzmann's constant. Their critical densities in fully molecular gas with a temperature $T = 10$~K are $n_{\rm crit, 1-0} \simeq 2000 \: {\rm cm^{-3}}$ and $n_{\rm crit, 2-1} \simeq 10000 \: {\rm cm^{-3}}$. As most of the gas in a molecular cloud has a density $n_{\rm H_{2}} < n_{\rm crit, 2-1}$, the value of $R_{2-1/1-0}$ is sensitive to both the density and the temperature structure of the gas, as well as the optical depth of the two lines. The behaviour of $R_{2-1/1-0}$ on small scales within molecular clouds has been examined by \citet{1994ApJ...425..641S} and more recently by \citet{2015ApJS..216...18N}. They studied how the line ratio varies within the Orion GMC, finding that $R_{2-1/1-0} \sim 1$ towards the centre of the cloud, but that it declines towards the outskirts where $R_{2-1/1-0} \sim 0.5$. \citet{1994ApJ...425..641S} argue that the observed variations can be understood as a consequence of the density variations within the cloud. This is a reasonable assumption if the CO-emitting gas is isothermal, but we know from numerical simulations of molecular clouds that this is only approximately true and that temperature variations of a factor of two or more within CO-rich gas are not uncommon \citep[see e.g.][]{Glover:2010bu}. 
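The transition energies quoted above follow directly from the rest frequencies of the two lines (approximately 115.271 GHz and 230.538 GHz). As a quick consistency check, here is a minimal sketch using standard values of the physical constants; the variable names are illustrative, not taken from any code used in this paper:

```python
# Photon energies of the CO J=1-0 and J=2-1 lines in temperature units,
# T = h*nu / k_B, using the standard rest frequencies of the two transitions.
H = 6.62607015e-34   # Planck constant [J s]
K_B = 1.380649e-23   # Boltzmann constant [J / K]

NU_10 = 115.2712018e9  # CO J=1-0 rest frequency [Hz]
NU_21 = 230.5380000e9  # CO J=2-1 rest frequency [Hz]

def photon_temperature(nu_hz):
    """Equivalent temperature h*nu/k_B of a photon with frequency nu_hz."""
    return H * nu_hz / K_B

T_10 = photon_temperature(NU_10)  # ~5.5 K
T_21 = photon_temperature(NU_21)  # ~11.1 K
print(f"E_10/k_B = {T_10:.2f} K, E_21/k_B = {T_21:.2f} K")
```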
Further complicating matters is the fact that the variations in density and temperature are not independent: the density structure depends sensitively on the temperature of the gas, while the temperature depends both on the density, and also on other factors such as the local extinction, the metallicity of the gas and the strength of the ISRF. In order to better understand what the CO line ratio can tell us about the physics of the cloud, we make use of numerical models which satisfactorily reproduce the irregular structure of the gas. This has become practical within the last few years with the advent of 3D dynamical models of GMCs that account for the chemical and thermal evolution of the gas, the non-isotropic nature of the attenuated radiation field, and the complex morphology of the cloud whilst still being computationally tractable (see e.g.\ \citealt{Glover:2010bu}, \citeauthor{Clark_etal_2012}~2012a, \citealt{2012MNRAS.427.2100B}, \citealt{Offner:2013du}). In this paper, we make use of these techniques to study the behaviour of $R_{2-1/1-0}$ within a turbulent molecular cloud. We perform a 3D hydrodynamical simulation of a representative cloud that self-consistently follows the thermal and chemical evolution of the gas. We then post-process the results of this simulation to generate synthetic $^{12}$CO 1-0 and 2-1 emission maps.\footnote{From this point on, when we refer to CO, we mean $^{12}$CO, unless otherwise noted.} The resulting maps allow us to study in detail the relationship between the line ratio and the physical conditions in the cloud. The structure of the paper is as follows. In Section \ref{metsec}, we outline our method for modelling a molecular cloud and also describe how we post-process the simulations to generate synthetic emission maps. Section \ref{ressec} presents the results from our simulations and the analysis of the emission lines and $R_{2-1/1-0}$.
Section \ref{dissec} discusses possible explanations for the physical processes driving the behaviour of $R_{2-1/1-0}$ and how they are consistent with our findings. Finally, we summarise all of our findings in Section \ref{consec}. \section{Method}\label{metsec} \subsection{Numerical setup} \subsubsection{Hydrodynamics and chemistry} To model the gas in this study, we use a modified version of the publicly available smoothed particle hydrodynamics (SPH) code, GADGET-2 \citep{Springel:2005cz}. The changes include a time-dependent chemical network that follows the formation and destruction of H$_2$ \citep{Glover:2007gr,Glover:2007wq} and CO \citep{NelsonLanger1999}, more details of which can be found in \citet{Glover:2012et}, which also includes the photodissociation rates that we adopt in this study. We adopt the same radiative heating and cooling rates, and cosmic ray heating rate, as described in \citet{Glover:2007gr} and \citet{Glover:2012dd}. To treat the attenuation of the ISRF we use the {\sc TreeCol} algorithm developed by \citeauthor{Clark_etal_2012}~(2012a). \subsubsection{Initial conditions} \label{sec:ics} Our initial setup uses a $10^4 \: {\rm M_{\odot}}$ uniform sphere $(R \sim 8.84 \: {\rm pc})$, with an initial volume density of $n = 100 \: {\rm cm^{-3}}$ ($n$ is given for a mean molecular weight of $\mu = 1.4$) and $2\times 10^6$ SPH particles. We impose a turbulent velocity field with a power spectrum of $P(k) \propto k^{-4}$, in which the energy is partitioned into a natural mixture of solenoidal and compressive modes. The energy in the turbulent velocity field is set such that $E_{\rm pot} / E_{\rm kin} = \epsilon = 2$ (i.e.\ the cloud is gravitationally bound). This kinetic energy is allowed to decay freely via shock dissipation. We adopt solar metallicity (${\rm Z} = {\rm Z}_{\odot}$) and a standard dust-to-gas ratio of 0.01.
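The quoted radius follows directly from the cloud mass and the initial number density. A short consistency check, assuming $\mu = 1.4$ and standard values for the hydrogen mass, solar mass and parsec (function and constant names are our own, for illustration only):

```python
import math

# Radius of a uniform sphere of 1e4 Msun at n = 100 cm^-3 with mu = 1.4.
M_SUN = 1.989e33     # solar mass [g]
M_H   = 1.6726e-24   # hydrogen mass [g]
PC    = 3.0857e18    # parsec [cm]

def uniform_sphere_radius_pc(mass_msun, n_cm3, mu=1.4):
    """Radius [pc] of a uniform-density sphere of the given mass and number density."""
    rho = mu * M_H * n_cm3              # mass density [g / cm^3]
    volume = mass_msun * M_SUN / rho    # [cm^3]
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0) / PC

print(f"R = {uniform_sphere_radius_pc(1e4, 100):.2f} pc")  # ~8.84 pc
```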
For the ISRF, we use a spectral shape taken from \citet{Draine1978} at ultraviolet wavelengths and \citet{Black1994} at longer wavelengths. The strength of the ISRF is scaled such that $G_{0} = 1.7$ in \citet{Habing1968} units, where $G_{0}$ is the energy density in the range 6--13.6~eV. At the beginning of the simulation, hydrogen is assumed to be fully molecular, i.e.\ $f({\rm H}_2)=1$, oxygen is in its atomic form, and carbon is assumed to be in the form of C$^+$. \citeauthor{Clark2012b}~(2012b) demonstrated that the H$_2$ fraction rises sharply to near unity in the compression events that form molecular clouds. However, it has also been shown that the initial chemical state of the cloud has little effect on the global evolution \citep{Glover:2012et,Glover:2012dd,2015MNRAS.452.2057C}. In this study, we analysed our results for clouds that started both fully atomic and fully molecular, finding no significant difference. In the interest of clarity, we present only the results from the clouds with $f({\rm H}_2) = 1$ initially. As we are interested in the properties of the gas, and not the star formation that takes place inside the cloud, we stop the simulation just as the collapse of the first pre-stellar core occurs. This takes place at about $1.91 \: {\rm Myr}$ for our simulated cloud. At this point, we produce a snapshot containing the positions, velocities, temperatures, dust densities and molecular number densities for each SPH particle. This snapshot contains the necessary data to perform radiative transfer simulations and produce synthetic emission maps. \subsection{Post-processing} \subsubsection{Radiative transfer simulations} To produce synthetic observations of the CO emission, we need to post-process our final simulation snapshot with a line radiative transfer code. In this study we use the publicly available radiative transfer code RADMC-3D \citep{2012ascl.soft02015D}.
The high optical depth of CO means that the populations in the first and second energy levels are often close to those expected for molecules in local thermal equilibrium (LTE). However, this is not always the case, in particular in gas that has a low density or low optical depth. Therefore, we use the large velocity gradient (LVG) approximation \citep{1957SvA.....1..678S} to account for the non-LTE level populations in these regions. A detailed description of the implementation of the LVG algorithm in RADMC-3D can be found in \citet{Shetty:2011eh}. For the level population calculations, RADMC-3D requires the number density of CO, the number density of its dominant collision partner H$_{2}$, the temperature and the velocity of the gas, all of which come directly from our hydrodynamic simulation. Additionally, the molecular properties for CO are taken from the Leiden Atomic and Molecular Database \citep{2005A&A...432..369S}. The collisional excitation rates that we adopt come originally from the study of \citet{2010ApJ...718.1062Y}. Finally, we include a microturbulent velocity dispersion of $v = 0.2 \: {\rm km \, s^{-1}}$ to account for small-scale broadening of the spectral lines by unresolved velocity fluctuations. The magnitude of this microturbulent velocity is chosen to be consistent with the \citet{1981MNRAS.194..809L} size-linewidth relation. \subsubsection{Grid interpolation} \label{gridsubsec} To post-process the SPH data in RADMC-3D, one first needs to map the unstructured SPH particle distribution onto a Cartesian grid. Interpolation onto a uniform Cartesian grid (see e.g.\ \citealt{Glover:2012jo} and \citealt{2014MNRAS.445.4055S}) is straightforward, but has the limitation that it is not well suited to account for the varying spatial resolution that exists in GADGET-2's particle distribution.
In high density regions, the Lagrangian nature of SPH means that the particles are closely spaced, but this information can be lost if they are interpolated onto a grid with a cell size that is larger than the inter-particle spacing. The obvious solution to this problem is to require the cell size to be smaller than the smallest particle separation, but to achieve this with a uniform grid is computationally infeasible and would require a grid resolution of around $4096^{3}$ for the simulation we present here. In our present study, we therefore make use of an alternative solution. RADMC-3D is capable of constructing and utilising oct-tree grids (similar to those used in some adaptive mesh refinement codes, such as FLASH; see e.g.\ \citealt{Fryxell00}), and this structure is a much more natural fit to the disordered SPH particle distribution. We therefore interpolate the data from the SPH particles onto a suitably-constructed oct-tree grid, ensuring that no data are lost during the interpolation process. Full details of our methodology can be found in Appendix~\ref{appsec}. \begin{figure*} \includegraphics[width=\textwidth]{Images/r_h2/NCO_WCO_R_MAPS} \caption{Top left: Column density of hydrogen nuclei, $N_{\rm gas}$, at the end of the simulation. Since the gas in the cloud is primarily molecular, the H$_{2}$ column density is given approximately by $N_{\rm H_{2}} \simeq N_{\rm gas} / 2$. Top right and bottom right: the velocity-integrated intensity of the cloud at the same time, for the $J=1-0$ and $J=2-1$ emission lines, respectively. Bottom left: the emission line ratio $R_{2-1/1-0}$, as defined in Equation~\ref{eq:ratio}.} \label{fig:MAPS} \end{figure*} \section{The CO 2-1 / CO 1-0 line ratio}\label{ressec} \subsection{CO emission maps} The column density map in the upper-left panel of Fig.~\ref{fig:MAPS} gives an overview of the gas distribution and density of the cloud at the end of the simulation.
The column density shown is the total column density of H nuclei, $N_{\rm gas} = N_{\rm H} + N_{\rm H^{+}} + 2 N_{\rm H_{2}}$. However, as the gas is predominantly molecular, we find that in practice, $N_{\rm gas} \simeq 2 N_{\rm H_{2}}$. The column density map clearly shows the filamentary structure produced in the cloud by turbulence and self-gravity. Maps of the velocity-integrated intensity of the CO 1-0 and 2-1 emission lines, $W_{10}$ and $W_{21}$, are also shown in Fig.~\ref{fig:MAPS}, in the upper-right and lower-right panels, respectively. Comparing the column density map and the integrated intensities for both lines, we see that the bulk of the gas in the cloud is well-traced by the emission. Many of the filamentary structures visible in the column density map are flattened out due to line saturation. However, their locations are still visible in the maps of $W_{10}$ and $W_{21}$. Towards the denser regions of the filaments, the emission is much brighter and more structure can be observed. Note, however, that the change in the column density as we move from one of the filaments to the surrounding gas is much smaller than the corresponding change in the CO integrated intensity; much of the CO in the lower density gas is photodissociated by the ISRF. Comparing $W_{10}$ and $W_{21}$, we see that both maps show similar structure, with the most obvious difference being that in general $W_{10}$ is slightly brighter than $W_{21}$. This is particularly apparent towards the centre of the filaments, or at the outskirts of the cloud, where the gas is most diffuse. Nevertheless, the overall integrated intensity of both lines is very similar. \subsection{The value of $R_{2-1/1-0}$} \begin{figure} \centering \includegraphics[width=1.\linewidth]{Images/r_h2/R_HISTOGRAM} \caption{(i) Probability density function (PDF) of $R_{2-1/1-0}$, illustrating its bimodal behaviour. (ii) Cumulative PDF of $R_{2-1/1-0}$.
(iii) PDF of $R_{2-1/1-0}$, weighted by the integrated brightness temperature $W_{\rm CO}$ of the 1-0 line (red solid) or the 2-1 line (blue dashed-dotted). (iv) Cumulative version of (iii).} \label{fig:Histograms} \end{figure} The images in Fig.~\ref{fig:MAPS} discussed above show that the emission from both the CO 1-0 and CO 2-1 lines is very similar. In order to highlight the differences that do exist, we compute the ratio $R_{2-1/1-0}$, as defined in Equation~\ref{eq:ratio}, for each pixel in the synthetic image. The resulting distribution of intensity ratios is shown in the bottom left panel of Fig.~\ref{fig:MAPS}. This figure suggests that the distribution of $R_{2-1/1-0}$ could be bimodal. To better quantify the variations in the line ratio, we construct a probability density function (PDF) for $R_{2-1/1-0}$, which is shown in panel (i) of Fig.~\ref{fig:Histograms}. This PDF is constructed using area weighting, meaning that each pixel in the synthetic images is weighted equally. The figure confirms the bimodal behaviour of $R_{2-1/1-0}$: there are two distinct peaks, one centred at $R_{2-1/1-0} \sim 0.7$ and the other at $R_{2-1/1-0} \sim 0.3$. If we consider the cumulative PDF, as shown in panel (ii) of Fig.~\ref{fig:Histograms}, then we see that the high ratio peak represents around 60\% of the total cloud area, and the low ratio peak represents the remaining 40\%. Observationally, measurements of $R_{2-1/1-0}$ for real molecular clouds or collections of molecular clouds typically recover values similar to those that we find for the high ratio peak. For example, \citet{1994ApJ...425..641S} report mean values of $R_{2-1/1-0}$ of 0.77 for the Orion A molecular cloud and 0.66 for the Orion B cloud. On larger scales, \citet{Koda:2012ht} report values of around 0.7--0.9 for clouds in the spiral arms of M51, although for the inter-arm clouds they find somewhat lower values of 0.4--0.6.
It is also worth noting that a common value adopted in the literature for converting from $W_{21}$ to $W_{10}$ is $R_{2-1/1-0}\sim0.7$ \citep{1990ApJ...348..434E,1991A&A...251....1C,1995A&A...303..851B,1997ApJ...486..276S,1997IAUS..170...39H,2001ApJS..136..189S,2008AJ....136.2846B,2009AJ....137.4670L,2011MNRAS.416.1250B}. The reason why these previous studies have not detected or discussed the lower ratio peak becomes clear when we examine the emission-weighted PDF of $R_{2-1/1-0}$, shown in panel (iii) of Fig.~\ref{fig:Histograms}. It is evident from this plot that although both peaks in the area-weighted PDF correspond to similar areas, they correspond to very different total intensities. The high ratio peak corresponds to regions of the cloud that are bright in CO, and hence shows up clearly in the emission-weighted PDF. The low ratio peak, however, is produced by emission from regions with very low CO brightness, and hence essentially disappears in the emission-weighted PDF, remaining visible only as a small wing on the left-hand side of the main peak. We see also that we recover the same behaviour regardless of whether we weight the PDF using the integrated intensity of the (1-0) transition (the red curve in panel (iii) of Fig.~\ref{fig:Histograms}) or the (2-1) transition (the blue curve in the same Figure). To get a feel for the sensitivity that would be required to observe the low ratio peak, in Fig.~\ref{fig:TB2peak} we show how $R_{2-1/1-0}$ varies as a function of the integrated intensity of the 1-0 line. This plot again demonstrates that lines-of-sight with low values of $R_{2-1/1-0}$ have low integrated intensities. For example, essentially all of the lines-of-sight that have $R_{2-1/1-0} \sim 0.3$ have CO 1-0 integrated intensities that are less than $1 \: {\rm K \, km \, s^{-1}}$. Observations with sensitivities of $\ga 1 \: {\rm K \, km \, s^{-1}}$ will therefore simply not detect the emission from these regions.
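The construction of the area- and emission-weighted PDFs, and the effect of a finite sensitivity cut, can be sketched as follows. This is a minimal illustration, not the pipeline used in this paper: the maps here are random placeholder arrays, whereas in practice $W_{10}$ and $W_{21}$ come from the synthetic observations.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder integrated-intensity maps [K km/s]; real maps come from RADMC-3D.
w10 = rng.lognormal(mean=0.0, sigma=1.5, size=(256, 256))
w21 = 0.7 * w10 * rng.uniform(0.5, 1.2, size=w10.shape)

# Mimic a finite sensitivity: keep only pixels detected in both lines.
sensitivity = 1.0  # [K km/s]
detected = (w10 > sensitivity) & (w21 > sensitivity)
ratio = w21[detected] / w10[detected]

bins = np.linspace(0.0, 1.5, 60)

# Area-weighted PDF: every detected pixel counts equally.
pdf_area, _ = np.histogram(ratio, bins=bins, density=True)

# Emission-weighted PDF: pixels weighted by their 1-0 integrated intensity,
# so faint (low-ratio) sight-lines are suppressed relative to bright ones.
pdf_emission, _ = np.histogram(ratio, bins=bins, weights=w10[detected],
                               density=True)
```

Raising the sensitivity threshold removes the faint sight-lines first, which is why a shallow survey preferentially recovers the high ratio peak.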
To put these values into context, note that the CO (1-0) map used in the \citet{1994ApJ...425..641S} study of Orion A and B (taken originally from \citealt{Maddalena:1986}) has a brightness temperature sensitivity of around 0.8~K, while the CO (2-1) map made by \citet{1994ApJ...425..641S} themselves has roughly a factor of two better sensitivity. If we assume a cloud velocity width of a few km~s$^{-1}$, this corresponds to a minimum integrated intensity of a few K~km~s$^{-1}$ for both lines. It is therefore unsurprising that they recover only high values for $R_{2-1/1-0}$. Interestingly, the more recent study of Orion A and B by \citet{2015ApJS..216...18N}, which had a $3\sigma$ sensitivity of $0.24 \: {\rm K \, km \, s^{-1}}$, recovers values of $R_{2-1/1-0} \sim 0.4$ or lower in some regions of the cloud (particularly towards the left side of the ridge, away from the OB association), consistent with our argument above that high sensitivity is required in order to observe regions with low $R_{2-1/1-0}$. \begin{figure} \centering \includegraphics[width=\linewidth]{Images/r_h2/Tb_R_graphCO} \caption{$R_{2-1/1-0}$, plotted as a function of the integrated intensity of the CO 1-0 line, $W_{10}$. Values are plotted for all pixels in the synthetic emission maps that have $W_{10} > 0.01 \: {\rm K \, km \, s^{-1}}$ and $W_{21} > 0.01 \: {\rm K \, km \, s^{-1}}$. The diagonal red dashed line indicates this selection criterion. Note that in practice, $R < 0.4$ when $W_{10} < 0.1 \: {\rm K \, km \, s^{-1}}$, so a number of points are removed that have $W_{21} < 0.01 \: {\rm K \, km \, s^{-1}}$ but $W_{10} > 0.01 \: {\rm K \, km \, s^{-1}}$.} \label{fig:TB2peak} \end{figure} \subsection{The dependence of $R_{2-1/1-0}$ on density and temperature} Considering the distribution of $R_{2-1/1-0}$ in our maps in Fig.~\ref{fig:MAPS}, it is worth exploring how it relates to physical quantities within the cloud such as temperature or density.
The bimodal behaviour we see in the area-weighted PDF suggests that the local conditions of the cloud are changing such that emission from one or both lines is affected, creating two peaks in $R_{2-1/1-0}$. We investigate how $R_{2-1/1-0}$ varies as a function of a number of different quantities computed for each line of sight: the mass-weighted mean temperature $\langle T \rangle = \sum_{i} T_i m_i / \sum_{i} m_i$ (where the sums run over the SPH particles along the line of sight), the mean number density $\langle n \rangle$, and the H$_{2}$ and CO column densities (N$_{\rm H_2}$ and N$_{\rm CO}$). We also examine how these quantities correlate with each other. The results are shown in Fig.~\ref{fig:quantities}. Note that the colour map used in these plots to indicate $R_{2-1/1-0}$ is the same as that of Fig.~\ref{fig:MAPS}. \begin{figure} \centering \includegraphics[width=1\linewidth]{Images/r_h2/quantities} \caption{Different physical quantities are plotted and colour coded with $R_{2-1/1-0}$: (i) N$_{\rm CO}$ vs $\langle T \rangle$; (ii) N$_{\rm H_2}$ vs $\langle T \rangle$; (iii) $\langle n \rangle$ vs $\langle T \rangle$; (iv) $\langle n \rangle$ vs N$_{\rm CO}$.} \label{fig:quantities} \end{figure} Panel (i) in Fig.~\ref{fig:quantities} shows N$_{\rm CO}$ plotted against $\langle T \rangle$. Although there is a clear inverse correlation between these two quantities, there is also significant scatter in the mean temperature associated with any given CO column density. This is a consequence of the fact that other than at the very highest CO column densities, warm, CO-poor gas makes a significant contribution to $\langle T \rangle$ but has little influence on N$_{\rm CO}$. This is unsurprising, as it follows from the heating and cooling processes of the gas \citep{1965ApJ...142..531F,1969ApJ...155L.149F, Glover:2012et}. Therefore, two sight-lines that probe similar amounts of CO but differing amounts of warm gas can have quite different mean temperatures associated with the same CO column density.
Consequently, $\langle T \rangle$ is only a good measure of the temperature of the CO-emitting gas when the CO column density is large. We also see that $R_{2-1/1-0}$ has a strong dependence on N$_{\rm CO}$: there is a clear rapid shift at N$_{\rm CO}\sim 10^{15}$ {\rm cm}$^{-2}$\ separating gas with low $R_{2-1/1-0}$ from gas with high $R_{2-1/1-0}$. On the other hand, $R_{2-1/1-0}$ depends only weakly on $\langle T \rangle$, largely because $\langle T \rangle$ in general is not a good measure of the temperature of the CO-emitting gas. Panel (ii) in Fig.~\ref{fig:quantities} depicts N$_{\rm H_2}$ plotted against $\langle T \rangle$. Again, there is a clear inverse correlation, reflecting the fact that lines-of-sight with high H$_{2}$ column densities preferentially sample dense gas that is well-shielded from the ISRF and that hence is cold. Looking at the behaviour of $R_{2-1/1-0}$ in this plot, we see that although it is low when N$_{\rm H_{2}}$ is small (N$_{\rm H_{2}} \sim 10^{21} \: {\rm cm^{-2}}$) and high when N$_{\rm H_{2}}$ is large (N$_{\rm H_{2}} > 10^{22} \: {\rm cm^{-2}}$), at intermediate column densities there is no clear correlation between $R_{2-1/1-0}$ and N$_{\rm H_{2}}$. In panel (iii) of Fig.~\ref{fig:quantities}, we illustrate the relationship between $\langle n \rangle$ and $\langle T \rangle$. In this case, there is a relatively tight inverse correlation, showing that lines-of-sight with low mean density probe primarily warm gas, while lines-of-sight with high mean density probe cold gas. Once again, there is a clear bimodality in the behaviour of $R_{2-1/1-0}$: low values correlate well with low mean densities and high mean temperatures, while high values correlate with high mean densities and low mean temperatures. Finally, in panel (iv) of Fig.~\ref{fig:quantities}, we present $\langle n \rangle$ against N$_{\rm CO}$. 
We see from this plot that although there is a clear correlation between the mean density along a sight-line and the CO column density of that sight-line, there is a substantial scatter in this relationship for values of $\langle n \rangle$ around $\langle n \rangle \sim 100 \: {\rm cm^{-3}}$. As $R_{2-1/1-0}$ correlates more strongly with N$_{\rm CO}$ than with $\langle n \rangle$, the result is that there is only a weak relationship between the mean density and the value of $R_{2-1/1-0}$ for mean densities close to $100 \: {\rm cm^{-3}}$. However, it is also clear that $R_{2-1/1-0}$ is always large when $n \gg 100 \: {\rm cm^{-3}}$, and always small if $n \ll 100 \: {\rm cm^{-3}}$. Putting this all together, we see that there are clear links between the bimodal structure visible in the distribution of $R_{2-1/1-0}$ and the mean values of the physical conditions (density, temperature, etc.) within the cloud. Lines-of-sight with high $R_{2-1/1-0}$ preferentially probe regions with high CO column densities, high mean densities and low temperatures. Conversely, lines-of-sight with low $R_{2-1/1-0}$ probe regions with low CO column densities, low mean densities and high mean temperatures. However, these relationships do not explain why the transition from low $R_{2-1/1-0}$ to high $R_{2-1/1-0}$ occurs so suddenly, or why it is the CO column density in particular that best predicts when this transition will occur. To understand why this happens, we need to look at how the CO line opacities vary within the cloud. \subsection{Optical depth effects} \begin{figure} \includegraphics[width=1\linewidth]{Images/r_h2/6tauRTex_Tpanel} \caption{Panels (i) and (ii) show maps of the intensity-weighted optical depth for the two lines ($\tau_{W10}$ and $\tau_{W21}$, respectively). Panels (iii) and (iv) show $\tau_{\rm W}$, plotted as a function of $R_{2-1/1-0}$ for both lines, illustrating how the ratio and opacity are correlated.
Panels (v) and (vi) show T$_{\rm ex}$/T$_{\rm kin}$, plotted as a function of $\tau_{\rm W}$ for each line and colour coded with $R_{2-1/1-0}$, using the same colour scale as Fig.~\ref{fig:quantities}.} \label{fig:taus} \end{figure} To investigate the influence that line opacity has on the value of $R_{2-1/1-0}$, we have computed the optical depths of both CO lines using {\sc radmc-3d}. For each line of sight, we first compute the optical depth individually for each velocity channel. We then average these values to produce a single representative value of $\tau$. To construct this average, we weight the contribution of each velocity channel by the contribution it makes to the velocity-integrated brightness temperature of the line, i.e. \begin{equation} \tau_{\rm W} = \frac{\sum_{i} T_{{\rm b}, i} \, \tau_i \, {\rm d}v}{W_{\rm CO}}, \end{equation} where $T_{{\rm b}, i}$ is the brightness temperature in velocity channel $i$, $\tau_i$ is the corresponding optical depth, and ${\rm d}v$ is the width of the channel. It is worth noting that the way $\tau_i$ is computed is analogous to how an image is computed in {\sc radmc-3d} \citep{2012ascl.soft02015D}. Therefore, $\tau_i$ is not the mean opacity seen by a single cell along the line of sight, but rather the total integrated opacity along the line of sight for each velocity channel. As such, the resulting image is dependent on the 3D velocity field, in the same way our integrated intensity maps are, given that we use the LVG approximation to account for non-LTE effects. In Fig.~\ref{fig:taus}, panels (i) and (ii) show how $\tau_{\rm W}$ varies as a function of position for the CO 1-0 and 2-1 lines, respectively. A quick comparison with Fig.~\ref{fig:MAPS} suggests a correlation between optically thick lines of sight ($\tau > 1$) and $R_{2-1/1-0} \sim 0.7$. Additionally, along these lines of sight, $\tau_{W21}$ seems both to be larger and to increase faster than $\tau_{W10}$.
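For a uniform channel width, the intensity-weighted optical depth defined above reduces to a brightness-weighted mean of the channel optical depths. A minimal per-pixel sketch (the function and array names are illustrative, not taken from our pipeline):

```python
import numpy as np

def intensity_weighted_tau(t_b, tau, dv):
    """
    Intensity-weighted optical depth along one line of sight:
    tau_W = sum_i(T_b,i * tau_i * dv) / W_CO, with W_CO = sum_i(T_b,i * dv).
    t_b : brightness temperature per velocity channel [K]
    tau : integrated optical depth per velocity channel
    dv  : channel width [km/s]
    """
    w_co = np.sum(t_b) * dv
    return np.sum(t_b * tau) * dv / w_co

# Example: two channels; the brighter channel dominates the weighted average.
t_b = np.array([1.0, 9.0])   # [K]
tau = np.array([0.1, 2.0])
print(intensity_weighted_tau(t_b, tau, dv=0.1))  # (1*0.1 + 9*2.0)/10 = 1.81
```

Note that the constant channel width cancels, so $\tau_{\rm W}$ is simply the mean of $\tau_i$ weighted by the channel brightness.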
The correlation between $\tau$ and $R_{2-1/1-0}$ becomes more evident in panels (iii) and (iv) of Fig.~\ref{fig:taus}. Values of $R_{2-1/1-0} \sim 0.7$ can be mostly attributed to emission coming from optically thick lines of sight, whereas values of $R_{2-1/1-0} \sim 0.3$ originate from optically thin lines of sight. The influence of $\tau$ on the level populations is further emphasised in panels (v) and (vi), where we plot the ratio of the CO excitation temperature $T_{\rm ex}$ and the kinetic temperature $T_{\rm kin}$ as a function of $\tau_{\rm W}$. Panel (v) shows the results for the 1-0 line and panel (vi) shows the results for the 2-1 line. We see that along most lines of sight with $\tau < 1$, both transitions are strongly sub-thermal, with excitation temperatures that are much less than the kinetic temperature. This is to be expected: we have already seen that most of this emission comes from gas with a density $n < 100 \: {\rm cm^{-3}}$, far below the critical density of even the 1-0 line, and since $\tau < 1$, even if a small amount of radiative trapping occurs, it is insufficient to change this conclusion. Along lines of sight with $\tau > 1$, the behaviour is more complex. On physical grounds, we expect that $T_{\rm ex} \rightarrow T$ as $\tau \rightarrow \infty$, where $T$ is the kinetic temperature of the gas. From Fig.~\ref{fig:taus}, we see that we do indeed recover this behaviour for some of the gas along the sight-lines with high optical depth. However, we also see that there are other regions along each high $\tau$ sight-line where the emission is sub-thermal. The key to understanding this behaviour is that the quantity directly responsible for influencing the level populations is the local, angle-averaged optical depth that controls the degree of radiative trapping, which is not the same quantity that we compute and plot as $\tau_{\rm W}$.
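The interplay between density and radiative trapping can be made explicit with a toy two-level escape-probability model. This is a sketch only: it neglects the background radiation field, and the rate coefficients are representative round numbers rather than the tabulated Leiden database values used in our actual calculations.

```python
import math

# Toy two-level LVG model for the CO J=1-0 transition.
A_10 = 7.2e-8    # Einstein A coefficient [1/s] (representative value)
C_10 = 3.3e-11   # collisional de-excitation rate coefficient with H2 [cm^3/s]
T_0 = 5.53       # transition energy h*nu/k_B [K]
G_RATIO = 3.0    # statistical weight ratio g_1/g_0

def beta(tau):
    """Escape probability for a line of optical depth tau."""
    return 1.0 if tau < 1e-8 else (1.0 - math.exp(-tau)) / tau

def t_ex(n_h2, tau, t_kin=10.0):
    """Excitation temperature [K] from two-level statistical equilibrium:
    n_0 * C_up = n_1 * (C_down + beta * A), with detailed balance for C_up."""
    c_down = C_10 * n_h2
    c_up = c_down * G_RATIO * math.exp(-T_0 / t_kin)
    pop_ratio = c_up / (c_down + beta(tau) * A_10)   # n_1 / n_0
    return T_0 / math.log(G_RATIO / pop_ratio)

# Optically thin gas at n = 100 cm^-3 is strongly sub-thermal ...
print(t_ex(100.0, 0.1))    # ~1.5 K, far below t_kin = 10 K
# ... but radiative trapping at tau >> 1 pushes T_ex towards T_kin.
print(t_ex(100.0, 100.0))  # ~7.4 K
```

Trapping effectively lowers the critical density by a factor of roughly the escape probability, which is why optically thick sight-lines at modest densities can still approach thermalised level populations.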
Although there is a correlation between the mean optical depth along a sight-line and the angle-averaged optical depths probed by that sight-line, the highly inhomogeneous structure of the cloud causes this correlation to be fairly weak \citep[see e.g.][]{2014MNRAS.444.2396C}. Therefore, the emission that we see along the $\tau > 1$ sight-lines comes from a mix of sub-thermal and thermalised gas, explaining why we do not simply recover a value of 1 for $R_{2-1/1-0}$. \section{Discussion}\label{dissec} Our synthetic images recover the expected peak in the CO 2-1/1-0 line ratio at $R_{2-1/1-0} \sim 0.7$, but also indicate the existence of a second peak in the line ratio distribution at $R_{2-1/1-0} \sim 0.3$. The first peak is produced by emission from CO in cold, dense regions of the cloud, and our analysis in the previous section shows that much of this CO is optically thick. On the other hand, the second peak is produced by emission from optically thin, sub-thermally excited CO in warm, diffuse regions of the cloud. Our analysis therefore suggests that the value of $R_{2-1/1-0}$ can potentially be used as a probe of the physical conditions within a molecular cloud. By detecting both CO lines and determining whether $R_{2-1/1-0}$ is found in the high peak or the low peak, we can place constraints on the density, temperature and optical depth of the CO-emitting gas. Validating the existence of the low ratio peak with observational studies will therefore be important to establish $R_{2-1/1-0}$ as a potential observational tool for confidently distinguishing the different regions within a GMC. However, we caution that it remains to be seen whether the $R_{2-1/1-0}$ distribution that we see in this particular cloud is universal or is a result of our choice of initial conditions and/or ISRF. Nevertheless, what is clear is that for this particular cloud -- i.e.
its physical properties and environmental conditions -- $R_{2-1/1-0}$ has a well-defined bimodal structure that corresponds to the physical state of the gas, such as its temperature, density and resulting level populations. \section{Conclusions}\label{consec} We have used a numerical simulation of a turbulent molecular cloud to investigate the behaviour of the ratio of the velocity-integrated brightness temperatures of the first two emission lines of CO, defined as $R_{2-1/1-0}=W_{2-1}/W_{1-0}$. Our simulated cloud has properties similar to those found in nearby star-forming clouds. We have used SPH to model the chemical, thermal and dynamical evolution of the cloud, and then post-processed the simulation output using a radiative transfer code to generate synthetic CO emission maps. Our main findings can be summarised as follows: \begin{enumerate} \item The area-weighted PDF of $R_{2-1/1-0}$ has a bimodal distribution with two main peaks, at $\sim 0.7$ and $\sim 0.3$. This clear bimodal structure correlates well with the optical depths of the CO lines. Along optically thin lines of sight, the CO excitation is strongly sub-thermal and the resulting value of $R_{2-1/1-0}$ is small. On the other hand, along optically thick lines of sight, we probe a mix of sub-thermal and thermal emission, resulting in a much higher value of the line ratio. \item The high-ratio peak primarily traces the cold ($T \leq 40$~K) and dense ($n \geq 10^3 \: {\rm cm^{-3}}$) molecular gas within the molecular cloud. This value is similar to the ``canonical'' value of $R_{2-1/1-0}$ often quoted in the literature, and also to the values measured in local molecular clouds \citep{1994ApJ...425..641S,2015ApJS..216...18N}. \item The low-ratio peak traces more diffuse ($n \leq 10^3 \: {\rm cm^{-3}}$) and warmer ($T \geq 40$~K) molecular gas within the cloud. This gas contains much less CO and so the emission from these regions is much fainter, requiring high sensitivity to detect. 
We note that \citet{2015ApJS..216...18N} reported values of $R_{2-1/1-0} \sim 0.4-0.5$ towards the outskirts of Orion, consistent with the range of values we find at their limiting sensitivity. \end{enumerate} As such, the value of $R_{2-1/1-0}$ can be indicative of the physical conditions in a particular region of a cloud. Further study, exploring a wide range of environmental conditions, is required to see whether the result we present here is universal. We will follow up on this in a future paper. \section*{Acknowledgements} We thank the anonymous referee for a constructive report that improved the manuscript. We would also like to thank Cornelis Dullemond for his help with the RADMC-3D refinement. PCC acknowledges support from the Science and Technology Facilities Council (under grant ST/N00706/1) and the European Community's Horizon 2020 Programme H2020-COMPET-2015, through the StarFormMapper project (number 687528). SCOG and RSK acknowledge financial support from the Deutsche Forschungsgemeinschaft via SFB 881, ``The Milky Way System'' (sub-projects B1, B2 and B8) and SPP 1573, ``Physics of the Interstellar Medium''. They also acknowledge support from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013) via the ERC Advanced Grant STARLIGHT (project number 339177).
\section{Introduction} Far-forward calorimetry, covering the region between 5 and 50 milliradians from the on-energy beam axis, is envisioned as a component of both the ILD~\cite{ref:ILD_DBD} and SiD~\cite{ref:SiD_DBD} detector concepts for the proposed International Linear Collider (ILC). The BeamCal tungsten sampling calorimeter proposed to cover this angular region is expected to absorb approximately 10 TeV of electromagnetic radiation per beam crossing from e$^+$e$^-$ beamstrahlung pairs, leading to expected annual radiation doses of 100 Mrad for the most heavily-irradiated portions of the instrument. While the deposited energy is expected to arise primarily from minimum-ionizing electrons and positrons in the induced electromagnetic showers, radiation damage to calorimeter sensors may be dominated by hadrons produced in nuclear interactions of shower photons, which are much more likely to contribute to the non-ionizing energy loss that has been observed to damage sensors exposed to hadronic radiation. We report here on the latest results of SLAC Experiment T-506, for which several different types of solid-state sensors were exposed to doses of up to 300 Mrad at the approximate maxima of electromagnetic showers induced in a tungsten radiator by electrons of energy 3.5-13.3 GeV, similar to the energies of electrons and positrons from ILC beamstrahlung pairs. Bulk damage leading to the suppression of the electron/hole charge-collection efficiency (CCE) is generally thought to be proportional to the non-ionizing energy loss (`NIEL') component of the energy deposited by the incident radiation. Observations from early studies of electromagnetically-induced damage to solar cells~\cite{ref:TaeSung5,ref:TaeSung6,ref:TaeSung7} suggested that p-type bulk sensors were more tolerant to damage from electromagnetic sources, due to an apparent departure from NIEL scaling, particularly for electromagnetic particles of lower incident energy. 
Several more recent studies have explored the radiation tolerance of silicon diodes to incident fluxes of electrons. A study assessing the capacitance vs. bias voltage (CV) characteristics of sensors exposed to as much as 1 Grad of incident 2 MeV electrons~\cite{ref:Rafi_09} suggested approximately 35 times less damage to n-type magnetic Czochralski sensors than that expected from NIEL scaling. A study of various n-type sensor types exposed to 900 MeV electrons showed charge-collection loss of as little as 3\% for exposures up to 50 Mrad~\cite{ref:Dittongo2004}; for exposures of 150 Mrad, the degree of damage was observed to be as small as one-fourth that expected from NIEL scaling~\cite{ref:Dittongo2005}. These discrepancies have been attributed to the different types of defects created by lattice interactions: electrons tend to create point-like defects that are more benign than the clusters formed by hadronic interactions. Finally, in studies of sensors exposed to large doses of hadron-induced radiation, p-type bulk silicon was found to be more radiation-tolerant than n-type bulk silicon, an observation that has been attributed to the absence of type inversion and the collection of an electron-based signal~\cite{ref:TaeSung3,ref:TaeSung4}. However, n-type bulk devices have certain advantages, such as natural inter-electrode isolation with commonly used passivation materials such as silicon oxide and silicon nitride. More recently, a number of non-diode solid-state sensors have been proposed as possible radiation-tolerant alternatives to silicon sensors, the latter of which can develop a large dark current after significant irradiation. Here, we report on an exploration of the radiation tolerance of silicon-diode pad sensors and several bulk solid-state pad sensors, including gallium arsenide (GaAs), silicon carbide (SiC) and industrial sapphire sensors. 
The sensors' radiation tolerance is assessed via direct measurements of the median collected charge deposited by minimum-ionizing particles, as well as leakage current measurements. Note that while diode sensors must be operated with the sense of bias voltage that provides a reverse bias, bulk sensors can in principle be operated with either sign of the bias voltage. When results are presented below for bulk sensors with only one sign of the bias voltage, they are presented for the sign that provided the best performance. Two n-type and one p-type bulk silicon diode sensors were explored. The p-bulk sensor (`WSI-P4' or `PF'), from a test structure associated with a prototype sensor developed for the ATLAS upgrade tracker~\cite{ref:nobu}, was fabricated by the Hamamatsu Photonics Corporation (HPK) from a float-zone crystal with a thickness of 320 $\mu$m, and had an un-irradiated depletion voltage of approximately 180 V. The first of the n-bulk sensors (`N6906' or `NF'), from a test structure associated with sensors developed for the Fermi satellite~\cite{ref:Ohsugi}, was also manufactured by HPK from a float-zone crystal, in this case with a thickness of 400 $\mu$m, and an un-irradiated depletion voltage of less than 40 V. In addition, the radiation tolerance of a 320 $\mu$m-thick n-type pad structure designed for use in the ILC Luminosity Calorimeter (LumiCal), which would cover the angular region just outside that of the BeamCal, was explored. The sensor was manufactured by HPK; further details about the sensor design can be found in~\cite{ref:Lumi}. This study made use of a fragment of a LumiCal sensor that had been accidentally broken; once the broken sensor was obtained by SCIPP, a study was done to determine, qualitatively, how the leakage current of a given pad depended upon that pad's proximity to the guard ring and the cleaved edge. 
Based on this understanding, the fragment was intentionally cleaved into two small pieces that could be mounted on the daughter boards used in the radiation target and charge-collection-efficiency apparatus. After a regimen of exposing the sensor to elevated temperature, and exposing the cleaved edge to ultraviolet light, it was found that the sensor could be biased to over 200 V without breaking down (i.e. developing a leakage current greater than several $\mu$A), well above the depletion voltage of 40 V. This allowed pre-irradiation charge-collection data to be taken at full depletion. The irradiation process further raised the breakdown voltage, allowing post-irradiation data to be taken at bias voltages as high as 400 V. The GaAs sensor used in the study was produced by means of the Liquid Encapsulated Czochralski method, doped by a shallow donor (Sn or Te; Sn was used for the sensor in this study)~\cite{ref:GaAs}, and had a bulk thickness of 300 $\mu$m. The SiC sensor used in the study featured double-layer nickel-gold Schottky-barrier contacts mounted on a 4H-SiC crystal structure. The sensor was 420 $\mu$m thick, with a high-quality epitaxial (active) layer of thickness 70 $\mu$m and an inactive substrate thickness of 350 $\mu$m. More details about this sensor and its performance can be found in~\cite{ref:SiC}. The industrial sapphire sensor used in the study had a thickness of 520 $\mu$m, and was produced by Crystal GmbH, Berlin. A three-layer metal contact consisting of aluminum (at the crystal surface) overlain with platinum and then gold was applied at the GSI laboratory in Darmstadt. Prior results on the radiation tolerance of the GaAs sensor, as well as of silicon diode sensors, including a p-type float-zone sensor irradiated to a dose of 270 Mrad, are presented in~\cite{ref:T506_NIM} and~\cite{ref:lcws2015}. 
New to this report are results for n-type float-zone and sapphire sensors irradiated to 300 Mrad, and for a 77 Mrad irradiation of the SiC sensor. Extended annealing studies are also presented for the GaAs sensor and for the p-type float-zone sensor that was irradiated to 270 Mrad. While the radiation dose was initiated by electromagnetic processes (electrons showering in tungsten), the placement of the sensors near shower max ensures that the shower incorporates an appropriate component of hadronic irradiation arising from neutron spallation, photoproduction, and the excitation of the $\Delta$ resonance. Particularly for the case that NIEL scaling suppresses electromagnetically-induced radiation damage, the small hadronic component of the electromagnetic shower might dominate the rate of damage to the sensor. However, the size and effect of this component is difficult to estimate reliably, and so we choose to study radiation damage in a configuration that naturally incorporates all components present in an electromagnetic shower. \section{Experimental Setup} Un-irradiated sensors were subjected to current vs. bias voltage (IV) and capacitance vs. bias voltage (CV) tests, the results of which allowed a subset of them to be selected for irradiation based on their breakdown voltage (typically above 1000 V for selected sensors) and low level of leakage current. The sensors were placed on carrier printed-circuit `daughter boards' and wire-bonded to a readout connector. The material of the daughter boards was milled away in the region to be irradiated in order to facilitate the charge collection measurement (described below) and minimize radio-activation. The median collected charge was measured with the Santa Cruz Institute for Particle Physics (SCIPP) charge-collection (CC) apparatus (also described below) before irradiation. 
The sensors remained mounted to their individual daughter boards throughout irradiation and the follow-up tests, simplifying their handling and reducing uncontrolled annealing. Additionally, this allowed a reverse-bias voltage to be maintained across the sensor during irradiation. This voltage was kept small (at the level of a few volts) to avoid possible damage to the devices from a large instantaneous charge during the spill. Sensors were irradiated with beam provided by the End Station Test Beam (ESTB) facility at the SLAC National Accelerator Laboratory. Parameters of the beam provided by the ESTB facility are shown in Table~\ref{tab:ESTB}. The beam was incident upon a series of tungsten radiators. An initial 7 mm-thick (2.0 radiation-length) tungsten plate (`R1') served to initiate the electromagnetic shower. The small number of radiation lengths of this initial radiator permitted the development of a small amount of divergence of the shower relative to the straight-ahead beam direction without significant development of the largely isotropic hadronic component of the shower. \begin{table}[h] \begin{centering} \caption{Parameters of the beam delivered by the ESTB facility during the T-506 experiment.} \label{tab:ESTB} \vspace {5mm} \begin{tabular}{cc} Parameter & Value \\ \hline Energy & 3.5-14.5 GeV \\ Repetition Rate & 5-10 Hz \\ Charge per Pulse & 150-180 pC \\ Spot Size (radius) & $\sim 1$ mm \\ \end{tabular} \par\end{centering} \end{table} This plate was followed by an open length of approximately half a meter, which allowed a degree of spreading of the shower before it impinged upon a second, significantly thicker (4.0 radiation-length) tungsten plate (`R2'), which was followed immediately by the sensor undergoing irradiation. This was closely followed, in turn, by an 8.0 radiation-length tungsten plate. 
Immediately surrounding the sensor by tungsten radiators that both initiated and absorbed the great majority of the electromagnetic shower ensured that the sensor would be illuminated by a flux of hadrons commensurate with that experienced by a calorimeter sensor close to the maximum of a tungsten-induced shower. More precise values of the location of the various radiator elements and sensor, for each of the four years of running of T-506, are given in Table~\ref{tab:radiator}. \begin{table}[h] \begin{centering} \caption{Location of the various radiator elements and the sensor under irradiation, for the three successive phases of T-506 running. The R1 radiator had a thickness of 2 $X_0$, while the thickness of the R2 radiator was 4 $X_0$. The apparent increased geometrical thickness of R2 in Year 1 was due to the presence of a 6mm air gap mid-way through the radiator.} \label{tab:radiator} \vspace {5mm} \begin{tabular}{lccc} & Year 1 & Year 2 & Year 3-4 \\ & (2013) & (2014) & (2015-6) \\ Surface & Location & Location & Location \\ & (cm) & (cm) & (cm) \\ \hline R1 Entrance & 0.0 & 0.0 & 0.0 \\ R1 Exit & 0.7 & 0.7 & 0.7 \\ R2 Entrance & 55.0 & 45.7 & 46.6 \\ R2 Exit & 57.0 & 47.1 & 48.0 \\ Sensor & 57.7 & 47.6 & 48.5 \\ \end{tabular} \par\end{centering} \end{table} Although initiating the shower significantly upstream of the sensor promoted a more even illumination of the sensor than would otherwise have been achieved, the half-width of the resulting electron-positron fluence distribution at the sensor plane was less than 0.5 cm. On the other hand, the aperture of the CC apparatus (to be described below) was of order 0.7 cm. Thus, in order to ensure that the radiation dose was well understood over the region of exposure to the CC apparatus source, it was necessary to achieve a uniform illumination over a region of approximately 1 cm$^2$. 
This was done by `rastering' the detector across the beam spot through a range of 1 cm in both dimensions transverse to that of the incident beam. According to Monte Carlo simulation studies, this is expected to generate a region of approximately 1 cm$^2$ over which the illumination is uniform to within $\pm 20$\%. To account for potential millimeter-level misalignments of the beamline center with the sensor, a `targeting factor' of $(90 \pm 10)$\% is included in the final dose-rate calculations. \section{Dose Rates} During the 120 Hz operation of the SLAC Linac Coherent Light Source (LCLS), 5-10 Hz of beam was deflected by a pulsed kicker magnet into the End Station transfer line. The LCLS beam was very stable with respect to both current and energy. Electronic pickups and ionization chambers measured the beam current and beam loss through the transfer line aperture, ensuring that good transfer efficiency could be established and maintained. The transfer efficiency was estimated to be ($95 \pm 5$)\%. To calculate the dose rate through the sensor, it is necessary to determine the `shower conversion factor' $\alpha$ that provides the mean fluence of minimum-ionizing particles (predominantly electrons and positrons), in particles per cm$^2$, per incoming beam electron. This factor is dependent upon the radiator configuration and incident beam energy, as well as the rastering pattern used to provide an even fluence across the sensor (as stated above, the detector was translated continuously across the beam centerline in a 1 cm$^2$ square pattern). To estimate $\alpha$, the Electron-Gamma-Shower (EGS) Monte Carlo program~\cite{ref:EGS} was used to simulate showers through the radiator configuration and into the sensor. 
The radiator configuration was input to the EGS program, and a mean fluence profile (particles per cm$^2$ through the sensor as a function of transverse distance from the nominal beam trajectory) was accumulated by simulating the showers of 1000 incident electrons of a given energy. To simulate the rastering process, the center of the simulated profile was then moved across the face of the sensor in 0.5 mm steps, and an estimated mean fluence per incident electron as a function of position on the sensor (again, relative to the nominal beam trajectory) was calculated. This resulted in a mean fluence per incident electron that was uniform to within (as stated above) $\pm$20\% anywhere inside the boundary of the rastering region. The value of $\alpha$ used for subsequent irradiation dose estimates was taken to be the value found at the intersection of the nominal beam trajectory with the sensor plane. The simulation was repeated for various values of the incident electron energy, producing the values of $\alpha$ shown in Table~\ref{tab:alpha_2013} (Table~\ref{tab:alpha_2014-15}) for the 2013 (2014-16) radiator configuration. For years 2014 through 2016, the spacings of the radiator and sensor were similar enough that a single mean value of $\alpha$ sufficed. \begin{table}[h] \begin{centering} \caption{Shower conversion factor $\alpha$, giving the mean fluence at the sensor per incident electron, as a function of electron energy, for the 2013 radiator configuration. These values include the effect of rastering over a 1 cm$^2$ area surrounding the nominal beam trajectory. Also shown is the number of rads per nC of delivered charge, at the given energy, corresponding to the given value of $\alpha$. 
} \label{tab:alpha_2013} \vspace {5mm} \begin{tabular}{ccc} Beam & 2013 Shower & Dose per nC \\ Energy & Conversion & Delivered \\ (GeV) & Factor $\alpha$ & Charge (krad) \\ \hline 2 & 2.1 & 0.34 \\ 4 & 9.4 & 1.50 \\ 6 & 16.5 & 2.64 \\ 8 & 23.5 & 3.76 \\ 10 & 30.2 & 4.83 \\ 12 & 36.8 & 5.89 \\ \end{tabular} \par\end{centering} \end{table} \begin{table}[h] \begin{centering} \caption{Shower conversion factor $\alpha$, giving the mean fluence at the sensor per incident electron, as a function of electron energy, for the 2014-16 radiator configuration. These values include the effect of rastering over a 1 cm$^2$ area surrounding the nominal beam trajectory. Also shown is the number of rads per nC of delivered charge, at the given energy, corresponding to the given value of $\alpha$. For 2014 through 2016, the spacings of the radiator and sensor were similar enough that a single mean value of $\alpha$ sufficed. } \label{tab:alpha_2014-15} \vspace {5mm} \begin{tabular}{ccc} Beam & 2014-16 Shower & Dose per nC \\ Energy & Conversion & Delivered \\ (GeV) & Factor $\alpha$ & Charge (krad) \\ \hline 3 & 4.6 & 0.73 \\ 5 & 10.0 & 1.60 \\ 7 & 15.5 & 2.48 \\ 9 & 21.1 & 3.38 \\ 11 & 26.7 & 4.27 \\ 13 & 31.8 & 5.09 \\ 15 & 37.7 & 6.03 \\ 17 & 43.0 & 6.88 \\ \end{tabular} \par\end{centering} \end{table} To convert this number to rads per nC of delivered charge, a mean energy loss in silicon of 3.7 MeV/cm was assumed, leading to a fluence-to-rad conversion factor of 160 rad per nC/cm$^2$. It should be noted that while this dose rate considers only the contribution from electrons and positrons, these two sources dominate the overall energy absorbed by the sensor. In addition, the BeamCal dose-rate spec of 100 Mrad per year considered only the contribution from electrons and positrons. 
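The `Dose per nC' columns in the two tables follow directly from $\alpha$ and this fluence-to-rad conversion. The arithmetic can be sketched as follows; the silicon density of 2.33 g/cm$^3$ is the standard value, not one quoted in the text:

```python
# Reproduce the "Dose per nC" column from the shower conversion factor
# alpha (MIPs per cm^2 per incident beam electron), assuming, as in the
# text, a mean energy loss of 3.7 MeV/cm in silicon.
ELECTRONS_PER_NC = 1e-9 / 1.602e-19   # beam electrons per nC of charge
MEV_PER_G_PER_RAD = 6.24e7            # 1 rad = 100 erg/g = 6.24e7 MeV/g
SI_DENSITY = 2.33                     # g/cm^3 (standard value for silicon)

def dose_krad_per_nC(alpha):
    mass_stopping = 3.7 / SI_DENSITY                       # MeV cm^2 / g
    rad_per_unit_fluence = mass_stopping / MEV_PER_G_PER_RAD  # rad per (MIP/cm^2)
    return alpha * ELECTRONS_PER_NC * rad_per_unit_fluence / 1e3  # krad per nC

# alpha = 9.4 (4 GeV, 2013 configuration) gives ~1.5 krad per nC,
# matching the table and the quoted ~160 rad per nC/cm^2 conversion.
```

The same factor of roughly 160 rad per nC per unit $\alpha$ reproduces every row of both tables to within rounding.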
To confirm the adequacy of the dose-calibration simulation, in 2013 an in-situ measurement of the dose was made using a radiation-sensing field-effect transistor (`RADFET')~\cite{ref:radfet} positioned on a daughter board at the expected position of the nominal beam trajectory at the center of the rastering pattern. Beam was delivered in 150 pC pulses of 4.02 GeV electrons; a total of 1160 pulses were directed into the target over a period of four minutes, during which the sensor was rastered quickly through its 1 cm$^2$ pattern. The RADFET was then read out, indicating a total accumulated dose of 230 krad, with an uncertainty of roughly 10\%. Making use of the dose rate calibration of Table~\ref{tab:alpha_2013}, interpolating to the exact incident energy of 4.02 GeV, and taking into account the ($95 \pm 5$)\% transfer efficiency of the ESTB beamline, leads to an expected dose of 250 krad, within the $\sim$10\% uncertainty of the RADFET measurement. \section{Sensor Irradiation Levels} This proceeding reports on the study of three types of silicon diode sensors. Two pad sensors with p-type and n-type bulk doping (denoted ``PF'' and ``NF'', respectively) were produced from float-zone crystals. A third n-type silicon diode sensor (denoted ``LUMI'') was designed for use in the ILC Luminosity Calorimeter. In addition, irradiated bulk (non-diode) GaAs, SiC and industrial sapphire sensors were studied. Once a sensor was irradiated in a $0^{\circ}-5^{\circ}$ C environment at the ESTB, it was placed in a sub-freezing environment and not irradiated again. Up to four sensors of each type were irradiated and chilled until they could be brought back to the University of California, Santa Cruz campus for the post-irradiation CC and leakage current measurements. In addition, the sub-freezing environment was maintained both during and after the CCE and current measurements, so that controlled annealing studies could be performed. 
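The arithmetic behind this cross-check can be laid out explicitly. This is a sketch; the linear interpolation between the 4 and 6 GeV calibration entries is an assumption about how the quoted expectation was obtained:

```python
# Expected dose for the 2013 RADFET run: 1160 pulses of 150 pC of
# 4.02 GeV electrons, using the 2013 dose-per-nC calibration
# interpolated to 4.02 GeV and the (95 +/- 5)% transfer efficiency.
pulses = 1160
charge_per_pulse_nC = 0.150
delivered_nC = pulses * charge_per_pulse_nC        # 174 nC into the line

# linear interpolation between the 4 GeV (1.50 krad/nC) and
# 6 GeV (2.64 krad/nC) entries of the 2013 calibration table
krad_per_nC = 1.50 + (4.02 - 4.0) / (6.0 - 4.0) * (2.64 - 1.50)

expected_krad = 0.95 * delivered_nC * krad_per_nC  # ~250 krad
```

The resulting expectation of about 250 krad agrees with the 230 krad RADFET reading to within its roughly 10\% uncertainty.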
Table~\ref{tab:dose} displays the dose parameters of the irradiated sensors. The $(95 \pm 5)$\% transfer line efficiency and the $(90 \pm 10)$\% targeting factor have been taken into account in these estimates. \begin{table*}[h] \begin{centering} \caption{Dose parameters of the irradiated sensors. A $(95 \pm 5)$\% transfer line efficiency and a $(90 \pm 10)$\% targeting factor have been taken into account in the final dose estimates.} \label{tab:dose} \begin{tabular}{lcccc} Sensor & Year & Beam Energy & Delivered & Dose \\ & & (GeV) & Charge ($\mu$C) & (Mrad) \\ \hline \hline WSI-P4 (PF) & 2015 & 13.3 & 50.9 & 269 \\ N6906 (NF) & 2015 & 14.6 & 60.0 & 290 \\ LUMI & 2016 & (14.5,13.4) & (30.3,37.0) & 316 \\ \hline GaAs-09 & 2014 & 3.90 & 21.7 & 20.8 \\ \hline SiC & 2015 & 13.3 & 17.2 & 76.6 \\ \hline Sapphire-04 & 2016 & (14.5,13.4) & (13.1,53.8) & 307 \\ \hline \end{tabular} \par\end{centering} \end{table*} \section{Charge Collection Measurement} The SCIPP CC apparatus incorporates a $^{90}$Sr source whose secondary $\beta$-decay, with an end-point energy of 2.28 MeV, illuminates the sensor under study; the $\beta$ particles pass through to a scintillator immediately behind the sensor that is read out by a photomultiplier tube. For assessing the CCE of pad sensors, a two-stage, single-channel amplifier was constructed from discrete components, based on a design of Fabris, Madden and Yaver~\cite{ref:jfet_amp}. For the first stage, a cascode of two NXP BF862 JFETs is used. The source of the second JFET was connected to the non-inverting input of an LM6171 operational amplifier, chosen for its high slew rate and low input noise contribution. The output of this opamp was then fed back to the input of the first JFET through a $0.05$ pF capacitor shunted by a 10 M$\Omega$ resistor, completing the negative feedback loop. An external network, including a 32 dB Sonoma Instrument 310 SDI amplifier, was used to further amplify the pulse and shape it to a rise-time of 290 ns. 
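As a point of reference for the signal sizes this chain must resolve, the median charge deposited by a through-going MIP in fully depleted silicon can be estimated from the sensor thickness. The figure of roughly 75 collected electron-hole pairs per micron is an assumed round number, not a value quoted in the text:

```python
# Rough expectation for the median MIP signal from a fully depleted
# silicon sensor, assuming ~75 collected e-h pairs per micron (an
# assumed round number; the precise value depends on thickness and
# readout shaping).
E_CHARGE_FC = 1.602e-4   # electron charge in fC

def median_mip_charge_fC(thickness_um, pairs_per_um=75.0):
    return thickness_um * pairs_per_um * E_CHARGE_FC

# e.g. the 320 um p-bulk float-zone sensor -> roughly 3.8 fC,
# roughly an order of magnitude above the ~250-electron readout noise.
```

Under this assumption the 320 $\mu$m and 400 $\mu$m diode sensors yield signals of order 4-5 fC, to be compared with the sub-fC signals of the sapphire and SiC devices discussed below.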
Upon receiving a trigger from the scintillator, the signal from the amplifier was read out by a Tektronix DPO 4054 digital storage oscilloscope, and the digitized waveforms were written out and stored on the disk of a dedicated data-acquisition computer. After the waveforms were accumulated on the computer, a narrow temporal window was set around the peak of the average excitation pulse from through-going beta particles, and a histogram was made of the resulting pulse-height distribution; a typical distribution is shown in Figure~\ref{fig:PH_dist}. Since not all $\beta$ particles that trigger the scintillator go through the pad, the distribution shows contributions from both the Landau deposition of the through-going $\beta$ particles, as well as that of the noise pedestal, allowing for an in-situ subtraction of the mean pedestal. The amplification system was calibrated by reading out an unirradiated silicon diode sensor of known thickness, and comparing the median charge of the resulting Landau distribution (after subtracting off the mean pedestal) to that expected for an unirradiated sensor of that thickness. The measured gain exhibited only a small dependence on load capacitance. The width of the pedestal distribution then provides a measurement of the readout noise, which was found to be approximately 250 electrons at room temperature. \begin{figure}[h] \begin{center} \includegraphics[width=0.45\textwidth]{fig_pad_ph_dist.pdf} \end{center} \caption{ Histogram of pulse height for photomultiplier-triggered data events for the single-channel readout. Both the Landau distribution due to through-going $\beta$ particles as well as the noise pedestal (for triggers for which the $\beta$ particle did not traverse the sensor) are seen. 
\label{fig:PH_dist} } \end{figure} \section{Charge Collection and Leakage Current Results} The daughter boards containing the irradiated sensors were designed with connectors that allowed them to be attached to the CC apparatus readout board without handling the sensors. The median CC was measured as a function of bias voltage for each sensor both before and after irradiation, typically after a series of hour-long annealing steps at successively higher temperatures. \subsection{Results for bulk (non-diode) sensors} GaAs has been used in sensors designed specifically for the BeamCal instrument. The GaAs test structure described in Section~1 was irradiated to a level of 21 Mrad in 2014. Figure~\ref{fig:GaAs_traj} exhibits the observed CCE for the GaAs sensor as a function of bias voltage and annealing temperature. Figure~\ref{fig:GaAs_slice} shows the CCE as a function of annealing temperature. The sensor exhibited a significant loss in CCE, which worsened after low-temperature annealing but then recovered somewhat after higher-temperature annealing. Figure~\ref{fig:GaAs_current} shows the sensor's pre- and post-irradiation leakage current as a function of temperature, for a bias voltage of $V_B$ = -600 V. A significant dependence on sensor temperature is observed, as well as a degradation (increase) after irradiation. This increased leakage current was not observed to improve significantly with annealing. \begin{figure}[h] \begin{center} \includegraphics[width=0.50\textwidth]{fig_medQ_GaAs.pdf} \end{center} \caption{ Dependence of the median collected charge from a GaAs sensor upon bias voltage and annealing temperature, after exposure to a dose of 21 Mrad of electromagnetically-induced radiation. Also shown is the median collected charge as a function of bias voltage prior to irradiation. 
\label{fig:GaAs_traj} } \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.50\textwidth]{fig_medQ_T_GaAs.pdf} \end{center} \caption{ Dependence of the median collected charge from a GaAs sensor upon annealing temperature for a bias of $V_B$ = -600 V, before and after exposure to a dose of 21 Mrad of electromagnetically-induced radiation. \label{fig:GaAs_slice} } \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.40\textwidth]{fig_cur_T_GaAs.pdf} \end{center} \caption{ Leakage current vs. temperature for unirradiated and irradiated (21 Mrad) GaAs sensors. The study was done with a bias of $V_B$ = -600 V. The irradiated GaAs sensor had been annealed for an hour at a temperature of 75$^{\circ}$ C. \label{fig:GaAs_current} } \end{figure} Industrial sapphire has been proposed as a possible sensor technology for the BeamCal, due to an expectation that it will exhibit radiation tolerance similarly favorable to that of industrial diamond, which is much more costly than industrial sapphire. The intrinsic CCE is low, however, presumably due to a short mean free path of carriers in the sensor bulk. After exposure to 307 Mrad of electromagnetically-induced radiation, the leakage current remained in the nanoamp range over the $\sim$1 cm$^2$ area of the sensor. As illustrated in Figure~\ref{fig:Sap_traj}, the median collected charge before irradiation was measured to be only of order 0.3 fC for a detector bias as high as 1000 V, with a significant drop in CCE observed after irradiation. Figure~\ref{fig:Sap_slice} shows the measured CCE as a function of annealing temperature for biases of $\pm 1000$ V; no improvement is observed over the course of the annealing process. 
\begin{figure}[h] \begin{center} \includegraphics[width=0.50\textwidth]{fig_medQ_Sap.pdf} \end{center} \caption{ Dependence of the median collected charge from an industrial sapphire sensor upon bias voltage and annealing temperature, after exposure to a dose of 307 Mrad of electromagnetically-induced radiation. Also shown is the median collected charge as a function of bias voltage prior to irradiation. \label{fig:Sap_traj} } \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.50\textwidth]{fig_medQ_T_Sap.pdf} \end{center} \caption{ Dependence of the median collected charge from an industrial sapphire sensor upon annealing temperature for biases of $\pm 1000$ V, before and after exposure to a dose of 307 Mrad of electromagnetically-induced radiation. \label{fig:Sap_slice} } \end{figure} As illustrated in Figure~\ref{fig:SiC_traj}, the sample sensor of 4H silicon carbide with an epitaxial (active) layer thickness of 70 $\mu$m exhibited charge collection of approximately 0.5 fC before irradiation. After an electromagnetically-induced irradiation of 77 Mrad, substantial loss of CCE was observed at lower bias levels ($V_B \simeq 200$ V); however, the CCE was approximately 2/3 of its unirradiated value for $V_B = 1000$ V. As illustrated in Figure~\ref{fig:SiC_slice}, little improvement was observed after hour-long annealing episodes at successively higher temperature. Post-irradiation leakage current, measured at approximately $-15^{\circ}$ C, remained at the nanoamp level. \begin{figure}[h] \begin{center} \includegraphics[width=0.50\textwidth]{fig_medQ_SiC.pdf} \end{center} \caption{ Dependence of the median collected charge from a SiC sensor upon bias voltage and annealing temperature, after exposure to a dose of 77 Mrad of electromagnetically-induced radiation. Also shown is the median collected charge as a function of bias voltage prior to irradiation. 
\label{fig:SiC_traj} } \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.50\textwidth]{fig_medQ_T_SiC.pdf} \end{center} \caption{ Dependence of the median collected charge from a SiC sensor upon annealing temperature for a bias of $V_B$ = 600 V, before and after exposure to a dose of 77 Mrad of electromagnetically-induced radiation. \label{fig:SiC_slice} } \end{figure} \subsection{Results for silicon diode sensors} CCE and leakage current were measured for three types of Si diode sensors after doses of electromagnetically-induced radiation of order 300 Mrad, including both p-bulk (PF) and n-bulk (NF) float-zone pad sensors as well as an n-bulk sensor (LUMI) designed for use in the ILC Luminosity Calorimeter. Figures~\ref{fig:PF_traj} and~\ref{fig:PF_slice} exhibit the CCE for the PF sensor before and after a 270 Mrad irradiation, with the post-irradiation CCE exhibited after several successive annealing episodes. Significant CCE loss was observed at lower ($V_B = -200$ V) bias voltages, but for $V_B = -600$ V, the CCE was observed to exceed 80\% of its pre-irradiation value after annealing at moderate temperature. Figure~\ref{fig:PF_curr} shows the PF sensor leakage current observed after irradiation, measured as a function of bias voltage at a temperature of $-10^{\circ}$ C. Taking into account the $0.025$ cm$^2$ area of the sensor, the current density was measured to be approximately 80 $\mu$A/cm$^2$, with little dependence upon annealing temperature. Figure~\ref{fig:PF_curr_T} exhibits the post-irradiation leakage current as a function of temperature for a bias voltage of $V_B = -600$ V; the exponential behavior, with the leakage current doubling for every $5-10^{\circ}$ C increase in temperature, is typical for Si diode sensors.
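The doubling behavior just described can be captured by a simple exponential model, $I(T) = I(T_0)\,2^{(T-T_0)/\Delta T}$. The following sketch illustrates the implied growth; the reference current density and doubling interval are taken from the approximate values quoted above, and the model is only an approximation to the true temperature dependence of silicon-diode leakage current.

```python
def leakage_current(T, I0=80.0, T0=-10.0, doubling_interval=7.5):
    """Leakage current density (uA/cm^2), assuming it doubles every
    `doubling_interval` degrees C relative to the reference point
    I(T0) = I0.  Parameter values here are illustrative approximations
    of the measurements quoted in the text, not fitted results."""
    return I0 * 2.0 ** ((T - T0) / doubling_interval)

for T in (-10.0, -2.5, 5.0):
    print(f"T = {T:6.1f} C : I ~ {leakage_current(T):6.1f} uA/cm^2")
```

With a 7.5$^{\circ}$ C doubling interval, a 15$^{\circ}$ C warming quadruples the current, which is why the measurements reported here were taken at reduced temperature.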
\begin{figure}[h] \begin{center} \includegraphics[width=0.50\textwidth]{fig_medQ_PF.pdf} \end{center} \caption{ Dependence of the median collected charge from a p-bulk float-zone technology silicon diode sensor upon bias voltage and annealing temperature, after exposure to a dose of 270 Mrad of electromagnetically-induced radiation. Also shown is the median collected charge as a function of bias voltage prior to irradiation. \label{fig:PF_traj} } \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.50\textwidth]{fig_medQ_T_PF.pdf} \end{center} \caption{ Dependence of the median collected charge from a p-bulk, float-zone technology silicon diode sensor as a function of annealing temperature for a bias of $V_B = -600$ V, before and after exposure to a dose of 270 Mrad of electromagnetically-induced radiation. \label{fig:PF_slice} } \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.45\textwidth]{fig_cur_PF.pdf} \end{center} \caption{ Dependence of the leakage current through a p-bulk float-zone technology silicon diode sensor upon bias voltage and annealing temperature, after exposure to a dose of 270 Mrad of electromagnetically-induced radiation. \label{fig:PF_curr} } \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.40\textwidth]{fig_cur_T_PF.pdf} \end{center} \caption{ Dependence of the leakage current through a p-bulk float-zone technology silicon diode sensor upon temperature after exposure to a dose of 270 Mrad of electromagnetically-induced radiation. The sensor was biased to $V_B = -600$ V. \label{fig:PF_curr_T} } \end{figure} Figures~\ref{fig:NF_traj} and~\ref{fig:NF_slice} exhibit the CCE for the NF sensor before and after a 290 Mrad irradiation, with the post-irradiation CCE exhibited after several successive annealing episodes. 
Again, significant CCE loss was observed at lower ($V_B = 200$ V) bias voltages, while for $V_B = 600$ V, the CCE was observed to approach 60\% of its pre-irradiation value after annealing at moderate temperature. The post-irradiation leakage current density was found to be similar in magnitude and temperature dependence to that of the PF sensor discussed above. \begin{figure}[h] \begin{center} \includegraphics[width=0.50\textwidth]{fig_medQ_NF.pdf} \end{center} \caption{ Dependence of the median collected charge from an n-bulk float-zone technology silicon diode sensor upon bias voltage and annealing temperature, after exposure to a dose of 290 Mrad of electromagnetically-induced radiation. Also shown is the median collected charge as a function of bias voltage prior to irradiation. \label{fig:NF_traj} } \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.50\textwidth]{fig_medQ_T_NF.pdf} \end{center} \caption{ Dependence of the median collected charge from an n-bulk, float-zone technology silicon diode sensor as a function of annealing temperature for a bias of $600$ V, before and after exposure to a dose of 290 Mrad of electromagnetically-induced radiation. \label{fig:NF_slice} } \end{figure} Finally, Figure~\ref{fig:Lumi_traj} exhibits the CCE for the LUMI sensor before and after a 316 Mrad irradiation, with the post-irradiation CCE again exhibited after several successive annealing episodes. The results are qualitatively similar to those for the PF and NF sensors, with significant CCE loss observed for lower bias voltages, but with substantial improvement to the CCE observed for higher bias voltages, particularly after annealing at moderate temperature. Because of the breakdown experienced at bias voltages above several hundred volts (due, again, to the compromised integrity of the broken and cleaved sensor, and not expected for an intact sensor), the high-bias CCE behavior could not be explored.
Nonetheless, the behavior was observed to be qualitatively consistent with that of the NF sensor discussed above. Given the possibility that significant current was passing across the cleaved edge of the sensor, only an upper bound could be determined for the post-irradiation leakage current for the LUMI sensor. Again, though, this bound suggested qualitative consistency with the NF and PF sensor results. \begin{figure}[h] \begin{center} \includegraphics[width=0.43\textwidth]{fig_medQ_Lumi.pdf} \end{center} \caption{ Dependence of the median collected charge from a fragment of an n-bulk silicon diode sensor designed for use with the ILC Luminosity Calorimeter, as a function of bias voltage and annealing temperature, after exposure to a dose of 316 Mrad of electromagnetically-induced radiation. Also shown is the median collected charge as a function of bias voltage prior to irradiation. \label{fig:Lumi_traj} } \end{figure} \section{Summary and Conclusions} Using the End Station Test Beam facility at the SLAC National Accelerator Laboratory, we have explored the radiation tolerance of three different types of silicon diode sensors as well as bulk (non-diode) sensors formed from gallium arsenide, silicon carbide, and industrial sapphire crystals. These sensors were exposed to doses of electromagnetically-induced radiation that varied from 21 Mrad (for the gallium arsenide sensor) to of order 300 Mrad (for the silicon diode and sapphire sensors). At these dose levels, the silicon diode sensors were observed to develop leakage currents of several tens of $\mu$A/cm$^2$ at operating temperatures of $-10^{\circ}$C, increasing with temperature with a doubling interval of $5-10^{\circ}$C.
After moderate-temperature annealing, and operating at a reverse bias of 400-600 V, the silicon diode sensors were observed to retain charge-collection efficiency above 50\%, with the p-bulk sensor retaining somewhat better charge-collection efficiency than the two n-bulk sensors that were irradiated. The silicon carbide and sapphire sensors, after absorbing doses of 77 and 307 Mrad, respectively, did not develop significant leakage current at an operating temperature of $-10^{\circ}$C. Operating at a bias voltage of $V_B$ = 1000 V, the silicon carbide sensor retained somewhat over 50\% of its unirradiated charge-collection efficiency, while the sapphire sensor retained only about 20\% of its unirradiated charge-collection efficiency. Neither sensor showed improvement with annealing. Finally, after a 21 Mrad dose and moderate-temperature annealing, the gallium arsenide sensor retained approximately 50\% of its original charge-collection efficiency. Somewhat unexpectedly, the sensor was observed to develop a leakage current of approximately 1 $\mu$A/cm$^2$ at an operating temperature of $-10^{\circ}$C, with a fractional rate of increase with temperature similar to that of the silicon diode sensors. These gallium arsenide results await confirmation with a sensor that has been irradiated to 100 Mrad but has not yet been characterized. \section{Acknowledgments} We are grateful to Leszek Zawiejski, INP, Krakow for supplying us with the tungsten plates needed to form our radiator, Georgy Shelkov, JINR, Dubna for supplying us with GaAs sensors for irradiation studies, Bohumir Zatko, Slovak Academy of Sciences, for supplying us with the SiC sensor, and Sergej Schuwalow, DESY Zeuthen, for supplying us with the industrial sapphire sensor. We would also like to express our gratitude to the SLAC Laboratory, and particularly the End Station Test Beam delivery and support personnel, who made the run possible and successful.
Finally, we would like to thank our SCIPP colleague Hartmut Sadrozinski for the numerous helpful discussions and guidance he provided us. \section{Role of the Funding Source} The work described in this article was supported by the United States Department of Energy, DOE contract DE-AC02-7600515 (SLAC) and grant DE-FG02-04ER41286 (UCSC/SCIPP). The funding agency played no role in the design, execution, interpretation, or documentation of the work described herein. \nocite{*} \bibliographystyle{elsarticle-num}
\section{Introduction\label{sec:Introduction}} \IEEEPARstart{W}{ireless} multi-hop networks, in various forms, e.g. wireless sensor networks, underwater networks, vehicular networks, mesh networks and unmanned aerial vehicle formations, and under various names, e.g. ad-hoc networks, hybrid networks, delay tolerant networks and intermittently connected networks, are being increasingly used in military and civilian applications. Studying the capacity of these networks is an important problem. Since the seminal work of Gupta and Kumar \cite{Gupta00the}, extensive research has been done in the area. Particularly, in \cite{Gupta00the} Gupta and Kumar considered an ad-hoc network with a total of $n$ nodes uniformly and \emph{i.i.d.} on an area of unit size. Furthermore, each node is capable of transmitting at $W$ bit/s and using a fixed and identical transmission range. They showed that the transport capacity and the achievable per-node throughput, when each node randomly and independently chooses another node in the network as its destination, are $\Theta\left(W\sqrt{\frac{n}{\log n}}\right)$ and $\Theta\left(\frac{W}{\sqrt{n\log n}}\right)$ respectively% \footnote{The following notations are used throughout the paper. 
For two positive functions $f\left(x\right)$ and $h\left(x\right)$: \begin{itemize} \item $f\left(x\right)=o\left(h\left(x\right)\right)$ iff (if and only if) $\lim_{x\rightarrow\infty}\frac{f\left(x\right)}{h\left(x\right)}=0$; \item $f\left(x\right)=\omega\left(h\left(x\right)\right)$ iff $h\left(x\right)=o\left(f\left(x\right)\right)$; \item $f\left(x\right)=\Theta\left(h\left(x\right)\right)$ iff there exist a sufficiently large $x_{0}$ and two positive constants $c_{1}$ and $c_{2}$ such that for any $x>x_{0}$, $c_{1}h\left(x\right)\geq f\left(x\right)\geq c_{2}h\left(x\right)$; \item $f\left(x\right)\sim h\left(x\right)$ iff $\lim_{x\rightarrow\infty}\frac{f\left(x\right)}{h\left(x\right)}=1$; \item $f\left(x\right)=O\left(h\left(x\right)\right)$ iff there exist a sufficiently large $x_{0}$ and a positive constant $c$ such that for any $x>x_{0}$, $f\left(x\right)\leq ch\left(x\right)$; \item $f\left(x\right)=\Omega\left(h\left(x\right)\right)$ iff $h\left(x\right)=O\left(f\left(x\right)\right)$; \item An event $\xi$ is said to occur almost surely if its probability equals one; \item An event $\xi_{x}$ depending on $x$ is said to occur asymptotically almost surely (a.a.s.) if its probability tends to one as $x\rightarrow\infty$. \end{itemize} The above definition applies whether the argument $x$ is continuous or discrete, e.g. assuming integer values.% }. When the nodes are optimally and deterministically placed to maximize throughput, the transport capacity and the achievable per-node throughput become $\Theta\left(W\sqrt{n}\right)$ and $\Theta\left(\frac{W}{\sqrt{n}}\right)$ respectively. In \cite{Franceschetti07Closing}, Franceschetti \emph{et al.} considered essentially the same random network as that in \cite{Gupta00the} except that nodes in the network are allowed to use two different transmission ranges. The link capacity between a pair of directly connected nodes is determined by their SINR through the Shannon\textendash{}Hartley theorem. 
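To make the gap between the random-placement and optimized-placement per-node throughput scalings concrete, the following sketch evaluates the two scalings quoted above. It is illustrative only: the constant prefactors hidden by the $\Theta$ notation are dropped, and $W$ is set to unity.

```python
import math

def per_node_random(n, W=1.0):
    # Theta(W / sqrt(n log n)): random i.i.d. placement (prefactor dropped)
    return W / math.sqrt(n * math.log(n))

def per_node_optimal(n, W=1.0):
    # Theta(W / sqrt(n)): optimal deterministic placement (prefactor dropped)
    return W / math.sqrt(n)

for n in (10**2, 10**4, 10**6):
    ratio = per_node_optimal(n) / per_node_random(n)  # equals sqrt(log n)
    print(f"n = {n:>9}: optimal/random throughput ratio ~ {ratio:.2f}")
```

The ratio grows only as $\sqrt{\log n}$, so optimal placement buys a modest logarithmic-order factor over random placement even as $n$ becomes very large.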
They showed that by having each source-destination pair transmitting via the so-called ``highway system'', formed by nodes using the smaller transmission range, the transport capacity and the per-node throughput can also reach $\Theta\left(\sqrt{n}\right)$ and $\Theta\left(\frac{1}{\sqrt{n}}\right)$ respectively even when nodes are randomly deployed. The existence of such highways was established using the percolation theory \cite{Meester96Continuum}. In \cite{Grossglauser02Mobility} Grossglauser and Tse showed that in mobile networks, by leveraging on the nodes' mobility, a per-node throughput of $\Theta\left(1\right)$ can be achieved at the expense of unbounded delay. Their work \cite{Grossglauser02Mobility} has sparked huge interest in studying the capacity-delay tradeoffs in mobile networks assuming various mobility models and the obtained results often vary greatly with the different mobility models being considered, see \cite{Dousse04latency,Gamal04Throughput-Delay,Jacquet12On,Kong08Connectivity,Li09Capacity,Neely05Capacity} and references therein for examples. In \cite{Chen09Order}, Chen \emph{et al.} studied the capacity of wireless networks under a different traffic distribution. In particular, they considered a set of $n$ randomly deployed nodes transmitting to single sink or multiple sinks where the sinks can be either regularly deployed or randomly deployed. They showed that with single sink, the transport capacity is given by $\Theta\left(W\right)$; with $k$ sinks, the transport capacity is increased to $\Theta\left(kW\right)$ when $k=O(n\log n)$ or $\Theta\left(n\log nW\right)$ when $k=\Omega\left(n\log n\right)$. Furthermore, there is also a significant amount of work studying the impact of infrastructure nodes \cite{Zemlianov05Capacity} and multiple-access protocols \cite{Chau11Capacity} on the capacity and the multicast capacity \cite{Li09Multicast}. We refer readers to \cite{Haenggi09Stochastic} for a comprehensive review of related work.
The above efforts have led to many sophisticated and customized analytical studies on the capacity of particular networks. The obtained results often vary greatly with even a slight change in the scenario being investigated. While most of the analyses are intellectually challenging, they lack universal properties that can be extended to study the capacity of a different network. In this paper, we sift through these capacity-impacting parameters, e.g. mobility, traffic distribution, spatial node distribution, the capability of nodes to adjust their transmission power, the presence of infrastructure nodes, multiple-access protocols and scheduling algorithms, and present a simple relationship that can be used to estimate the capacity of wireless multi-hop networks. In addition to capacity, delay is also an important performance metric that has been extensively investigated. In this paper we focus on the study of capacity. We refer readers to \cite{Dousse04latency,Gamal04Throughput-Delay,Jacquet12On,Kong08Connectivity,Li09Capacity} for relevant work on delay. The main contribution of this paper is the development of a simple relationship for estimating the capacity of wireless multi-hop networks applicable to various different scenarios. The following is a detailed summary of our contributions: \begin{itemize} \item Considering an arbitrary network, we show that the network capacity is determined by the link capacity, the average number of simultaneous transmissions, and the average number of transmissions required to deliver a packet to its destination; \item We extend the above relationship for arbitrary networks to random networks; \item We apply our new result to determine the asymptotic capacity of several typical random networks considered in the literature \cite{Gupta00the,Franceschetti07Closing,Grossglauser02Mobility,Neely05Capacity,Zemlianov05Capacity,Li09Multicast,Chau11Capacity}. 
The capacity analysis using the aforementioned relationship often becomes simpler; \item Based on the intuitive understanding gained from our result, we point out limitations of some existing results and suggest further improvements; \item Furthermore, using our result, the capacity analysis for different networks can be transformed into the analysis of the three key parameters, i.e. the link capacity, the average number of simultaneous transmissions, and the average number of transmissions required to deliver a packet to its destination. Therefore our work makes important contributions towards developing a generic methodology for network capacity analysis that is applicable to a variety of different scenarios. \end{itemize} The rest of the paper is organized as follows: Section \ref{sec:Network-Models} gives a formal definition of the network models and notations considered in the paper. Section \ref{sec:Capacity-of-Static-Networks} gives the main results in this paper on the capacity of arbitrary networks and random networks. In Section \ref{sec:Applicability of the result}, we demonstrate wide applications of our result by using it to analyze the asymptotic capacity of various random networks considered in the literature \cite{Gupta00the,Franceschetti07Closing,Grossglauser02Mobility,Neely05Capacity,Zemlianov05Capacity,Li09Multicast,Chau11Capacity}. Finally Section \ref{sec:Conclusion-and-Further} concludes this paper. \section{Network Models\label{sec:Network-Models}} We consider two classes of networks in this paper: \emph{arbitrary networks} and \emph{random networks}. \subsection{Arbitrary networks} We use the term arbitrary network to refer to a network with a total of $n$ nodes arbitrarily and deterministically (i.e. not randomly) placed in a bounded area $A$ initially. These nodes may be either stationary or moving following arbitrary and fixed (i.e. not random) trajectories.
A node may choose an arbitrary and fixed number of other nodes as its destination(s). In the case that a source node has multiple destination nodes, the source node may transmit the same packets to its destinations, viz. multicast, or transmit different portions of its packets to different destinations, viz. unicast. Packets are transmitted from a source to its destination(s) via multiple intermediate relay nodes. Each node can be either a source, a relay, a destination or a mixture. It is assumed that there are always packets waiting at the source nodes to be transmitted, viz. a so-called saturated traffic scenario is considered. Let $V_{n}$ be the node set. Let $E$ be the set of links. The establishment of a link between a pair of nodes may follow either the protocol model or the physical model \cite{Gupta00the}. Our analysis does not depend on the particular way a link is established. When nodes are mobile, the link between a pair of nodes may only exist temporarily and the link set at a particular time instant $t$ may be more appropriately denoted by $E_{t}$ to emphasize its temporal dependence. In this paper, we drop the subscript $t$ for convenience. It is assumed that there is a spatial and temporal path between every source and destination pair. Without loss of generality\cite{Gupta00the,Grossglauser02Mobility,Neely05Capacity,Zemlianov05Capacity,Li09Multicast,Chau11Capacity}, we further assume that each node can transmit at a fixed and known data rate of $W$ bits per second over a common wireless channel. Following the same analytical approach as that in \cite{Gupta00the}, it can be shown that it is immaterial to our result if the channel is broken into several subchannels of capacity $W_{1},W_{2},\cdots,W_{M}$ bits per second, as long as $\sum_{m=1}^{M}W_{m}=W$. This assumption allows us to ignore some physical layer details and focus on the topological aspects of the network that determine the capacity. 
Our result, however, can be readily extended to incorporate the situation that each link has a different and known capacity. We do not consider the impact of erroneous transmissions in our analysis. Transmission errors will cause a decrease in the effective link capacity and their impact can be captured in the parameter $W$, which is assumed to be known. Denote the above network by $G\left(V_{n},E\right)$ and in this paper, we study the capacity of $G\left(V_{n},E\right)$. In the following paragraphs, we give a formal definition of the capacity of $G\left(V_{n},E\right)$. Let $v_{i}\in V_{n}$ be a source node and let $b_{i,j}$ be the $j^{th}$ bit transmitted from $v_{i}$ to its destination. Let $d\left(v_{i},j\right)$ be the destination of $b_{i,j}$. For unicast transmission, $d\left(v_{i},j\right)$ represents a single destination; for multicast transmission, $d\left(v_{i},j\right)$ represents the set of all destinations of $b_{i,j}$. Let $N_{i,T}^{\chi}$ be the number of bits transmitted by $v_{i}$ that reached, i.e. were successfully received by, their respective destinations during a time interval $\left[0,T\right]$, with $T$ being an arbitrarily large number. The superscript $\chi\in\Phi$ denotes the spatial and temporal scheduling algorithm used in the network and $\Phi$ denotes the set of all scheduling algorithms. If the same bit is transmitted from a source to multiple destinations, e.g. in the case of multicast, it is counted as one bit in the calculation of $N_{i,T}^{\chi}$. It is assumed that the network is \emph{stable} $\forall\chi\in\Phi$. A network is called \emph{stable} if and only if for any fixed $n$, assuming that each node has an infinite queue, the queue length in any intermediate relay node storing packets in transit does not grow towards infinity as $T\rightarrow\infty$, or equivalently the long-term incoming traffic rate into the network equals the long-term outgoing traffic rate.
It is further assumed that there is no traffic loss due to queue overflow. The transport capacity when using the spatial and temporal scheduling algorithm $\chi$, denoted by $\eta^{\chi}\left(n\right)$, is defined as: \begin{equation} \eta^{\chi}\left(n\right)\triangleq\lim_{T\rightarrow\infty}\frac{\sum_{i=1}^{n}N_{i,T}^{\chi}}{T}\label{eq:definition of capacity using phi} \end{equation} and the transport capacity of the network is defined as \begin{equation} \eta\left(n\right)\triangleq\max_{\chi\in\Phi}\eta^{\chi}\left(n\right)\label{eq:definition of capacity} \end{equation} Obviously $\eta\left(n\right)\geq\eta^{\chi}\left(n\right),\forall\chi\in\Phi$. An important special case occurs when the scheduling algorithm divides the transport capacity equally among all source-destination pairs asymptotically over time. Denote by $\Phi^{f}\subseteq\Phi$ the set of \emph{fair} scheduling algorithms that divide the transport capacity equally among all $m$ source-destination pairs asymptotically over time. The throughput per source-destination pair is defined as \begin{equation} \lambda_{m}\triangleq\max_{\chi\in\Phi^{f}}\frac{\eta^{\chi}\left(n\right)}{m}\label{eq:definition of throughput arbitrary network} \end{equation} The above definitions of the transport capacity and throughput capacity are valid for both finite $n$ and asymptotically infinite $n$. \subsection{Random networks} In addition to arbitrary networks, random networks have also been extensively studied in the literature, particularly the asymptotic properties of random networks as the number of nodes $n$ approaches infinity \cite{Gupta00the,Franceschetti07Closing,Grossglauser02Mobility,Neely05Capacity,Zemlianov05Capacity,Li09Multicast,Chau11Capacity}. By a random network, we mean a network with a total of $n$ nodes and each node is i.i.d. in a bounded area $A$ initially following a known distribution. If these nodes are mobile, their trajectories may also be random and i.i.d. 
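As a minimal, purely illustrative sketch of sampling such a random network instance, the following places $n$ nodes uniformly i.i.d. on a unit square and forms links under a common fixed transmission range in the spirit of the protocol model. The range scaling $r(n)\propto\sqrt{\log n/n}$ and its prefactor are arbitrary choices made here for illustration, not values taken from the works cited.

```python
import math
import random

def sample_random_network(n, seed=0):
    """Sample n nodes uniformly i.i.d. on the unit square and connect
    every pair whose distance is within a common transmission range
    r(n) ~ sqrt(log n / n).  The prefactor 1.5 is an arbitrary
    illustrative choice."""
    rng = random.Random(seed)
    nodes = [(rng.random(), rng.random()) for _ in range(n)]
    r = 1.5 * math.sqrt(math.log(n) / n)
    links = [(i, j)
             for i in range(n)
             for j in range(i + 1, n)
             if math.dist(nodes[i], nodes[j]) <= r]
    return nodes, links

nodes, links = sample_random_network(200)
print(f"{len(nodes)} nodes, {len(links)} links")
```

A fresh seed yields a different instance, which is why capacity statements for random networks are made asymptotically almost surely rather than for any particular realization.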
A link between a pair of nodes in a random network may be established following either the protocol model or the physical model \cite{Gupta00the}. Denote the above random network by $G_{n}$ to distinguish it from the arbitrary network considered in the previous subsection. Given the randomness involved in the problem statement, the above definitions of throughput capacity for arbitrary networks need to be modified to account for ``vanishingly small probabilities'' \cite{Gupta00the}. Particularly, for asymptotic random networks whose number of nodes $n$ is sufficiently large, we say that under the spatial and temporal scheduling algorithm $\chi$, the transport capacity of $G_{n}$ is $\eta^{\chi}\left(n\right)$ if and only if $\eta^{\chi}\left(n\right)$ is the \emph{maximum} transport capacity that can be achieved \emph{asymptotically almost surely (a.a.s.)} as\emph{ $n\rightarrow\infty$} under $\chi$. Given the above modification on $\eta^{\chi}\left(n\right)$, the transport capacity of an arbitrary network defined in (\ref{eq:definition of capacity}) can still be used for random networks. The most extensively studied traffic distribution in random networks involves each node choosing another node independently as its destination and the transport capacity being divided equally among all source-destination pairs. In that case, the total number of source-destination pairs equals $n$ and the capacity of the network is often studied using the metric known as the \emph{per-node throughput} or the \emph{throughput capacity}. Denote by $\Phi^{f}\subseteq\Phi$ the set of \emph{fair }scheduling algorithms that divide the transport capacity equally among all $n$ source-destination pairs asymptotically over time. 
The per-node throughput (or throughput capacity) is defined as \begin{equation} \lambda\left(n\right)\triangleq\max_{\chi\in\Phi^{f}}\frac{\eta^{\chi}\left(n\right)}{n}\label{eq:definition of throughput} \end{equation} Intuitively, a scheduling algorithm $\chi$ is fair if it divides the transport capacity equally among all source-destination pairs asymptotically over time and also distributes traffic evenly across $A$ such that there is no traffic hot spot. For nodes uniformly and i.i.d. on $A$ with each node choosing another node independently as its destination, which is the scenario studied in Sections \ref{sub:Capacity-of-Random} and \ref{sec:Applicability of the result}, the technique for establishing the (asymptotic) fairness of a scheduling algorithm $\chi$, or for constructing an (asymptotically) fair scheduling algorithm, is well known. It typically involves partitioning $A$ into a set of equal-size sub-areas, allocating transmission opportunities equally among all sub-areas and then demonstrating that using $\chi$, the number of source-destination pairs crossing each sub-area varies by at most a constant factor. The conclusion readily follows that the throughput obtainable by each source-destination pair varies by at most a constant factor and each source-destination pair has access to throughput of the same order asymptotically, see \cite{Franceschetti07Closing,Li12Capacity} for examples. The scheduling algorithms analyzed in Section \ref{sec:Applicability of the result} are known to be fair in the sense that \emph{a.a.s.}, each source-destination pair can achieve a throughput of the same order. Note that the above definitions of transport capacity and throughput capacity for random networks are consistent with those in \cite{Gupta00the,Franceschetti07Closing,Grossglauser02Mobility,Neely05Capacity,Zemlianov05Capacity,Li09Multicast,Chau11Capacity}.
Particularly in \cite{Gupta00the}, a throughput capacity of $\lambda\left(n\right)$ bits per second is called \emph{feasible} if there is a spatial and temporal scheme for scheduling transmissions such that every node can send $\lambda\left(n\right)$ bits per second on average to its chosen destination \cite{Gupta00the}. The throughput capacity of random networks with $n$ nodes is of order $\Theta\left(f\left(n\right)\right)$ bits per second if there are deterministic constants $c>0$ and $c'<+\infty$ such that \[ \lim_{n\rightarrow\infty}\Pr\left(\lambda\left(n\right)=cf\left(n\right)\text{ is feasible}\right)=1 \] \[ \liminf_{n\rightarrow\infty}\Pr\left(\lambda\left(n\right)=c'f\left(n\right)\text{ is feasible}\right)<1 \] \section{Capacity of Arbitrary and Random Networks\label{sec:Capacity-of-Static-Networks}} In this section, we analyze the capacity of arbitrary networks and the capacity of random networks respectively. \subsection{Capacity of Arbitrary Networks\label{sub:Capacity-of-Arbitrary}} The following theorem on the capacity of arbitrary networks summarizes a major result of the paper: \begin{thm} \label{thm:Capacity relationship for policy pi}Consider an arbitrary network $G\left(V_{n},E\right)$. Let $\chi$ be the spatial and temporal scheduling algorithm used in $G\left(V_{n},E\right)$. Let $k^{\chi}\left(n\right)$ be the average number of transmissions required to deliver a randomly chosen bit to its destination.
Let $Y^{\chi}\left(n\right)$ be the average number of simultaneous transmissions in $G\left(V_{n},E\right)$. Then the transport capacity $\eta^{\chi}\left(n\right)$ satisfies: \begin{equation} \eta^{\chi}\left(n\right)=\frac{Y^{\chi}\left(n\right)W}{k^{\chi}\left(n\right)}\label{eq:Capacity of mobile and static networks} \end{equation} \end{thm} \begin{IEEEproof} Recall from Section \ref{sec:Network-Models} that $v_{i}\in V_{n}$ represents a source node, $b_{i,j}$ represents the $j^{th}$ bit transmitted from $v_{i}$ to its destination(s), $d\left(v_{i},j\right)$, and $N_{i,T}^{\chi}$ is the number of bits successfully transmitted by $v_{i}$ during a time interval $\left[0,T\right]$. Let $h_{i,j}^{\chi}$ be the number of transmissions required to deliver $b_{i,j}$ to its destination (or all destination nodes in $d\left(v_{i},j\right)$ in the case of multicast) when the spatial and temporal scheduling algorithm $\chi\in\Phi$ is used. Let $Y_{t}^{\chi}\left(n\right)$ be the number of simultaneous transmissions in the network $G\left(V_{n},E\right)$ at time $t$. It follows from the definitions of $k^{\chi}\left(n\right)$ and $Y^{\chi}\left(n\right)$ that \begin{eqnarray} k^{\chi}\left(n\right) & = & \lim_{T\rightarrow\infty}\frac{\sum_{i=1}^{n}\sum_{j=1}^{N_{i,T}^{\chi}}h_{i,j}^{\chi}}{\sum_{i=1}^{n}N_{i,T}^{\chi}}\label{eq:definition of average number of transmissions} \end{eqnarray} and \begin{equation} Y^{\chi}\left(n\right)=\lim_{T\rightarrow\infty}\frac{\int_{0}^{T}Y_{t}^{\chi}\left(n\right)dt}{T}\label{eq:definition of EY} \end{equation} Let $\tau_{i,j,l}$, $1\leq l\leq h_{i,j}^{\chi}$ be the time required to transmit $b_{i,j}$ in the $l^{th}$ transmission and assume that the transmitting node is active during the entire $\tau_{i,j,l}$ interval. As each node transmits at the same data rate $W$, $\tau_{i,j,l}=\frac{1}{W}$. Given the above definitions, we are now ready to prove the theorem.
\begin{rem} The technique used in the proof is based on first considering the \emph{total transmission time}, viz. the amount of traffic transmitted, measured in bits, multiplied by the time required to transmit each bit, in the network on the individual node level by aggregating the transmissions at different nodes, viz. $\sum_{i=1}^{n}\sum_{j=1}^{N_{i,T}^{\chi}}\sum_{l=1}^{h_{i,j}^{\chi}}\tau_{i,j,l}$ shown in the equations below, and then evaluating the total transmission time in the network on the network level by considering the number of simultaneous transmissions in the entire network, viz. $\int_{0}^{T}Y_{t}^{\chi}\left(n\right)dt$ shown in the equations below. Obviously, the two values must be equal. On the basis of this observation, the theorem can be established. \end{rem} At time $T$, the total transmission time during $\left[0,T\right]$ is given by \begin{equation} \sum_{i=1}^{n}\sum_{j=1}^{N_{i,T}^{\chi}}\sum_{l=1}^{h_{i,j}^{\chi}}\tau_{i,j,l}+q_{T}^{\chi}=\frac{1}{W}\sum_{i=1}^{n}\sum_{j=1}^{N_{i,T}^{\chi}}h_{i,j}^{\chi}+q_{T}^{\chi}\label{eq:average transmissions} \end{equation} where $\sum_{i=1}^{n}\sum_{j=1}^{N_{i,T}^{\chi}}\sum_{l=1}^{h_{i,j}^{\chi}}\tau_{i,j,l}$ accounts for the transmission time for traffic that has reached its destination and $q_{T}^{\chi}$ accounts for the transmission time for traffic still in transit at time $T$. Let $p_{max}^{\chi}$ be the maximum length, measured in the number of hops, of all routes in $G\left(V_{n},E\right)$ under $\chi$; obviously $p_{max}^{\chi}<n$. Furthermore, since the network is stable, there exists a positive constant $C_{1}$, independent of $T$, such that the total amount of traffic in transit is bounded by $C_{1}n$.
Therefore \begin{equation} q_{T}^{\chi}\leq\frac{p_{max}^{\chi}C_{1}n}{W}<\frac{C_{1}n^{2}}{W}\label{eq:inequality on maximum queue length} \end{equation} On the other hand, the total transmission time during $\left[0,T\right]$ evaluated on the network level equals $\int_{0}^{T}Y_{t}^{\chi}\left(n\right)dt$. Obviously \[ \sum_{i=1}^{n}\sum_{j=1}^{N_{i,T}^{\chi}}\sum_{l=1}^{h_{i,j}^{\chi}}\tau_{i,j,l}+q_{T}^{\chi}=\int_{0}^{T}Y_{t}^{\chi}\left(n\right)dt \] When $T$ is sufficiently large and the network is \emph{stable}, it follows from (\ref{eq:inequality on maximum queue length}) that the amount of traffic in transit is negligibly small compared with the amount of traffic that has already reached its destination. Therefore, the following relationship can be established: \begin{equation} \lim_{T\rightarrow\infty}\frac{\sum_{i=1}^{n}\sum_{j=1}^{N_{i,T}^{\chi}}\sum_{l=1}^{h_{i,j}^{\chi}}\tau_{i,j,l}}{\int_{0}^{T}Y_{t}^{\chi}\left(n\right)dt}=1\label{eq:fundamental relation between capacity and average transmission} \end{equation} Noting that $\tau_{i,j,l}=\frac{1}{W}$, Equation (\ref{eq:Capacity of mobile and static networks}) follows readily by combining (\ref{eq:definition of capacity using phi}), (\ref{eq:definition of average number of transmissions}), (\ref{eq:definition of EY}) and (\ref{eq:fundamental relation between capacity and average transmission}).\end{IEEEproof} \begin{rem} Equation (\ref{eq:Capacity of mobile and static networks}) can also be obtained using Little's formula \cite{Kleinrock75Queueing}. Intuitively, defining the \emph{system} as consisting of the set of all wireless channels in $G\left(V_{n},E\right)$, the long-term average effective arrival rate into the system equals $k^{\chi}\left(n\right)\eta^{\chi}\left(n\right)$, the long-term average amount of traffic in the system equals $Y^{\chi}\left(n\right)$ and the average time in the system equals $\frac{1}{W}$.
Equation (\ref{eq:Capacity of mobile and static networks}) then readily follows using Little's formula. \end{rem} Equation (\ref{eq:Capacity of mobile and static networks}) is obtained under a very generic setting and is applicable to networks of any size. It reveals that the network capacity can be readily determined by evaluating the average number of simultaneous transmissions $Y^{\chi}\left(n\right)$, the average number of transmissions $k^{\chi}\left(n\right)$ required for a bit to reach its destination(s), and the link capacity $W$. The two parameters $Y^{\chi}\left(n\right)$ and $k^{\chi}\left(n\right)$ are often related. For example, in a network where each node transmits using a fixed transmission range $r\left(n\right)$, reducing $r\left(n\right)$ (while keeping the network connected) will increase both $Y^{\chi}\left(n\right)$ and $k^{\chi}\left(n\right)$, and vice versa. On the other hand, $Y^{\chi}\left(n\right)$ and $k^{\chi}\left(n\right)$ also have their independent significance, and can be optimized and studied independently of each other. For example, compared with shortest-path routing, an optimally designed routing algorithm can distribute traffic evenly and avoid creating bottlenecks, which significantly increases $Y^{\chi}\left(n\right)$ at the expense of only a slightly increased $k^{\chi}\left(n\right)$. The following corollary is an easy consequence of Theorem \ref{thm:Capacity relationship for policy pi}: \begin{cor} \label{cor:capacity of arbitrary networks upper bound}Under the same setting as that in Theorem \ref{thm:Capacity relationship for policy pi}, \[ \eta\left(n\right)=\max_{\chi\in\Phi}\frac{Y^{\chi}\left(n\right)W}{k^{\chi}\left(n\right)}\leq\frac{\max_{\chi\in\Phi}Y^{\chi}\left(n\right)W}{\min_{\chi\in\Phi}k^{\chi}\left(n\right)} \] \end{cor} Corollary \ref{cor:capacity of arbitrary networks upper bound} allows the two key parameters that determine the capacity of $G\left(V_{n},E\right)$, viz.
$Y^{\chi}\left(n\right)$ and $k^{\chi}\left(n\right)$, to be studied separately. Parameter $\max_{\chi\in\Phi}Y^{\chi}\left(n\right)$ is determined by the maximum number of transmissions that can be accommodated in the network area. Assuming that each node transmits using a fixed transmission range $r\left(n\right)$, each transmission will then ``consume'' a disk area of radius at least $\frac{C_{2}r\left(n\right)}{2}$, in the sense that two simultaneously active transmitters must be separated by an Euclidean distance of at least $C_{2}r\left(n\right)$, where $C_{2}>1$ is a constant determined by the interference model \cite{Gupta00the}. The problem of finding the maximum number of simultaneous transmissions, viz. $\max_{\chi\in\Phi}Y^{\chi}\left(n\right)$, can thus be converted into one of finding the maximum number of non-overlapping equal-radius circles that can be packed into $A$, and then studied as a densest circle packing problem (see \cite{Yang12Connectivity} for an example). Parameter $Y^{\chi}\left(n\right)$ can also be studied as the transmission capacity of networks \cite{Weber10An}. For unicast transmission, $k^{\chi}\left(n\right)$ becomes the average number of hops between a randomly chosen source-destination pair and has been studied extensively \cite{Mao10Probability}. As will also be shown in Section \ref{sec:Applicability of the result}, $Y^{\chi}\left(n\right)$ and $k^{\chi}\left(n\right)$ can be optimized separately to maximize the network capacity. \subsection{Capacity of Random Networks\label{sub:Capacity-of-Random}} We now consider the capacity of random networks. Note the connection between random networks and arbitrary networks: an instance of a random network forms an arbitrary network. Accordingly, the following result on the capacity of a random network can be obtained from Theorem \ref{thm:Capacity relationship for policy pi}. \begin{cor} \label{cor:capacity of random networks}Consider a random network $G_{n}$.
Let $\chi\in\Phi^{f}$ be the spatial and temporal scheduling algorithm used in $G_{n}$. Let $k^{\chi}\left(n\right)$ be the average number of transmissions required to deliver a randomly chosen bit to its destination in an instance of $G_{n}$. Let $Y^{\chi}\left(n\right)$ be the average number of simultaneous transmissions in an instance of $G_{n}$. Both $k^{\chi}\left(n\right)$ and $Y^{\chi}\left(n\right)$ are random variables associated with a particular (random) instance of $G_{n}$. If there exist two positive functions $f\left(n\right)$ and $g\left(n\right)$ such that \[ \Pr\left(\lim_{n\rightarrow\infty}\frac{k^{\chi}\left(n\right)}{f\left(n\right)}=1\right)=1 \] and \[ \Pr\left(\lim_{n\rightarrow\infty}\frac{Y^{\chi}\left(n\right)}{g\left(n\right)}=1\right)=1 \] then the throughput capacity $\lambda^{\chi}\left(n\right)$ satisfies: \begin{equation} \Pr\left(\lim_{n\rightarrow\infty}\frac{\lambda^{\chi}\left(n\right)}{\frac{g\left(n\right)W}{nf\left(n\right)}}=1\right)=1\label{eq:capacity of random networks} \end{equation} \end{cor} \begin{IEEEproof} Using the union bound, \begin{alignat*}{1} & 1-\Pr\left(\lim_{n\rightarrow\infty}\frac{\lambda^{\chi}\left(n\right)}{\frac{g\left(n\right)W}{nf\left(n\right)}}=1\right)\\ \leq & \left(1-\Pr\left(\lim_{n\rightarrow\infty}\frac{k^{\chi}\left(n\right)}{f\left(n\right)}=1\right)\right)+\left(1-\Pr\left(\lim_{n\rightarrow\infty}\frac{Y^{\chi}\left(n\right)}{g\left(n\right)}=1\right)\right) \end{alignat*} The result in the corollary readily follows from Theorem \ref{thm:Capacity relationship for policy pi}. \end{IEEEproof} In practice, the two functions $f\left(n\right)$ and $g\left(n\right)$ required by Corollary \ref{cor:capacity of random networks} do not necessarily exist, or may be very difficult to find. Therefore the asymptotic capacity of random networks is more commonly studied by investigating its upper and lower bounds. The following two corollaries give respectively an upper and a lower bound on the asymptotic capacity of random networks.
These two corollaries are used in Section \ref{sec:Applicability of the result} to examine the asymptotic capacity of random networks. \begin{cor} \label{cor:an upper bound on per-node throughput}Consider a random network $G_{n}$. Let $\chi\in\Phi^{f}$ be the spatial and temporal scheduling algorithm used in $G_{n}$. Let $f\left(n\right)$ be a positive function such that \[ \lim_{n\rightarrow\infty}\Pr\left(\min_{\chi\in\Phi^{f}}k^{\chi}\left(n\right)\geq f\left(n\right)\right)=1 \] and let $g\left(n\right)$ be a positive function such that \[ \lim_{n\rightarrow\infty}\Pr\left(\max_{\chi\in\Phi^{f}}Y^{\chi}\left(n\right)\leq g\left(n\right)\right)=1 \] Then the throughput capacity of $G_{n}$ satisfies: \begin{equation} \lim_{n\rightarrow\infty}\Pr\left(\lambda\left(n\right)\leq\frac{g\left(n\right)W}{nf\left(n\right)}\right)=1\label{eq:capacity of random networks-upper bound} \end{equation} \end{cor} \begin{cor} \label{cor:lower bound on per-node throughput}Consider a random network $G_{n}$. Let $\chi\in\Phi^{f}$ be the spatial and temporal scheduling algorithm used in $G_{n}$. Let $f\left(n\right)$ and $g\left(n\right)$ be two positive functions such that \[ \lim_{n\rightarrow\infty}\Pr\left(k^{\chi}\left(n\right)\leq f\left(n\right)\right)=1 \] and \[ \lim_{n\rightarrow\infty}\Pr\left(Y^{\chi}\left(n\right)\geq g\left(n\right)\right)=1 \] Then the throughput capacity of $G_{n}$ satisfies: \begin{equation} \lim_{n\rightarrow\infty}\Pr\left(\lambda\left(n\right)\geq\frac{g\left(n\right)W}{nf\left(n\right)}\right)=1,\;\;\forall\chi\in\Phi^{f}\label{eq:capacity of random networks-lower bound} \end{equation} \end{cor} As implied in Corollaries \ref{cor:capacity of arbitrary networks upper bound} and \ref{cor:an upper bound on per-node throughput}, finding the throughput capacity upper bound of $G_{n}$ is achieved by analyzing the upper bound of $Y^{\chi}\left(n\right),\;\forall\chi\in\Phi^{f}$, viz.
$\max_{\chi\in\Phi^{f}}Y^{\chi}\left(n\right)$, and the lower bound of $k^{\chi}\left(n\right),\;\forall\chi\in\Phi^{f}$, viz. $\min_{\chi\in\Phi^{f}}k^{\chi}\left(n\right)$, separately. An upper bound of $\max_{\chi\in\Phi^{f}}Y^{\chi}\left(n\right)$ can usually be found by analyzing the maximum number of simultaneous transmissions that can be accommodated in $A$, which is in turn determined by parameters such as the SINR threshold or the transmission range, independent of $\chi$. A lower bound of $\min_{\chi\in\Phi^{f}}k^{\chi}\left(n\right)$ can often be found by analyzing the average number of hops along the shortest path between a randomly chosen source-destination pair, which is mainly determined by the network topology and node distribution, and is independent of $\chi$. Finding the throughput capacity lower bound of $G_{n}$ often involves a constructive technique, i.e. constructing a particular scheduling algorithm $\chi\in\Phi^{f}$ and analyzing the throughput capacity $\lambda^{\chi}\left(n\right)$ under $\chi$ via the associated parameters $k^{\chi}\left(n\right)$ and $Y^{\chi}\left(n\right)$. \section{Applications of the Relationship to Determine the Capacity of Random Networks\label{sec:Applicability of the result}} In this section, to demonstrate the usage and applicability of the results developed in Section \ref{sec:Capacity-of-Static-Networks}, we use them to re-derive some well-known results in the literature that were originally obtained for different networks through intellectually challenging, customized techniques \cite{Gupta00the,Franceschetti07Closing,Grossglauser02Mobility,Neely05Capacity,Zemlianov05Capacity,Li09Multicast,Chau11Capacity}. Due to the large amount of existing work in the area, it is not possible for us to include all of them.
Therefore the random networks considered in \cite{Gupta00the,Franceschetti07Closing,Grossglauser02Mobility,Neely05Capacity,Zemlianov05Capacity,Li09Multicast,Chau11Capacity} are chosen as typical examples only. We show that the use of our result often leads to a simpler analysis. Furthermore, through the intuitive understanding revealed by our result of the interactions of these capacity-impacting parameters, we point out limitations in some existing results and suggest further improvements. \subsection{Capacity of static ad-hoc networks with uniform transmission capability\label{sub:Capacity-of-static-Kumar}} In \cite{Gupta00the}, Gupta and Kumar first considered a random network with $n$ nodes uniformly and \emph{i.i.d.} distributed on a unit square $A$, where each node is capable of transmitting at a fixed rate of $W$ bit/s over a common channel. Every node chooses its destination randomly and independently of other nodes and transmits using a fixed and identical transmission range $r\left(n\right)$. Both the protocol model and the physical model are considered for modeling the interference. As shown in \cite{Gupta00the}, results obtained assuming the protocol model can be readily extended to those assuming the physical model. Therefore, in this paper, we focus on the protocol model only. In the protocol model, a direct transmission from a transmitter $v_{i}$ located at $X_{i}$ to a receiver $v_{j}$ located at $X_{j}$ is successful if the Euclidean distance between $v_{i}$ and $v_{j}$ is smaller than or equal to $r\left(n\right)$ \emph{and} for every other node $v_{k}$ simultaneously transmitting over the same channel, $\left\Vert X_{k}-X_{j}\right\Vert \geq\left(1+\triangle\right)r\left(n\right)$, where the parameter $\triangle>0$ defines a guard zone which prevents a nearby node from transmitting on the same channel at the same time and $\left\Vert \bullet\right\Vert $ denotes the Euclidean norm.
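The protocol-model condition just described can be illustrated with a short sketch (the function and parameter names are ours, for illustration only):

```python
import math

def protocol_model_ok(tx, rx, active_tx, r, delta):
    """Return True if the transmission tx -> rx succeeds under the
    protocol model: rx lies within range r of tx, and every *other*
    simultaneously active transmitter is at Euclidean distance at
    least (1 + delta) * r from rx."""
    if math.dist(tx, rx) > r:
        return False
    return all(math.dist(other, rx) >= (1 + delta) * r
               for other in active_tx if other != tx)

# Two concurrent transmitters on the unit square, r = 0.2, delta = 1:
active = [(0.1, 0.1), (0.9, 0.9)]
print(protocol_model_ok((0.1, 0.1), (0.2, 0.1), active, 0.2, 1.0))  # True
# A third transmitter close to the receiver violates the guard zone:
print(protocol_model_ok((0.1, 0.1), (0.2, 0.1),
                        active + [(0.3, 0.1)], 0.2, 1.0))  # False
```

In effect, each transmission excludes other transmitters from a disk around its receiver, which is the disjoint-disk packing argument exploited next.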
Given the above setting, it is straightforward to show that each transmitter defines a disk of radius $\frac{1}{2}\triangle r\left(n\right)$ centered at itself, such that the disks associated with a set of concurrent transmitters do not overlap. Therefore, each transmitter located in $A$ ``consumes'' an area of at least $\frac{1}{4}\pi\left(\frac{1}{2}\triangle r\left(n\right)\right)^{2}=\frac{\pi}{16}\triangle^{2}r^{2}\left(n\right)$ in $A$ (the worst case occurs for a transmitter located at a corner of $A$, where only one quarter of the disk falls within $A$). It follows that \begin{equation} \max_{\chi\in\Phi}Y^{\chi}\left(n\right)\leq\frac{1}{\frac{\pi}{16}\triangle^{2}r^{2}\left(n\right)}\label{eq:upper bound on E(Y) Kumar} \end{equation} We now establish a lower bound of $\min_{\chi\in\Phi^{f}}k^{\chi}\left(n\right)$. Let $A_{1}$ be a $\frac{1}{4}\times\frac{1}{4}$ square located at the lower left corner of $A$ and let $A_{2}$ be a $\frac{1}{4}\times\frac{1}{4}$ square located at the upper right corner of $A$. Using the property that nodes are uniformly and i.i.d. distributed on $A$, it can be shown that \emph{a.a.s.} the expected fraction of source-destination pairs with the source located in $A_{1}$ (or $A_{2}$) and the destination located in $A_{2}$ (or $A_{1}$) equals $2\times\frac{1}{16}\times\frac{1}{16}=\frac{1}{128}$. The minimum Euclidean distance between these source-destination pairs is $\frac{\sqrt{2}}{2}$ and thus the minimum number of hops between these source-destination pairs is $\frac{\sqrt{2}}{2r\left(n\right)}$.
It then follows that \begin{equation} \lim_{n\rightarrow\infty}\Pr\left(\min_{\chi\in\Phi^{f}}k^{\chi}\left(n\right)\geq\frac{\sqrt{2}}{256}\times\frac{1}{r\left(n\right)}\right)=1\label{eq:lower bound on k(n) Kumar} \end{equation} Noting that $\Phi^{f}\subseteq\Phi$, the following lemma can be obtained as an easy consequence of Corollary \ref{cor:an upper bound on per-node throughput}, (\ref{eq:upper bound on E(Y) Kumar}) and (\ref{eq:lower bound on k(n) Kumar}). \begin{lem} \label{lem:capacity upper bound Gupta Kumar}In the random network considered by Gupta and Kumar \cite{Gupta00the} and assuming the protocol model, the per-node throughput satisfies \[ \lim_{n\rightarrow\infty}\Pr\left(\lambda\left(n\right)\leq\frac{2048\sqrt{2}}{\pi\triangle^{2}}W\frac{1}{nr\left(n\right)}\right)=1 \] \end{lem} In Lemma \ref{lem:capacity upper bound Gupta Kumar}, the upper bound of $\lambda\left(n\right)$ is expressed as a function of the transmission range $r\left(n\right)$, and an increase in $r\left(n\right)$ will reduce the upper bound. As the minimum transmission range required for the network to be \emph{a.a.s.} connected is well known to be $r\left(n\right)=\sqrt{\frac{\log n+f\left(n\right)}{\pi n}}$, where $f\left(n\right)=o\left(\log n\right)$ and $f\left(n\right)\rightarrow\infty$ as $n\rightarrow\infty$ \cite{Gupta98Critical}, the conclusion readily follows that $\lim_{n\rightarrow\infty}\Pr\left(\lambda\left(n\right)\leq\frac{2048\sqrt{2}}{\triangle^{2}}W\frac{1}{\sqrt{\pi n\log n}}\right)=1$. We now proceed to obtain a lower bound of $\lambda\left(n\right)$. The lower bound is obtained constructively. Specifically, using the scheduling algorithm $\chi\in\Phi^{f}$ presented in \cite{Gupta00the}, we will analyze the associated $k^{\chi}\left(n\right)$ and $Y^{\chi}\left(n\right)$ and then obtain a lower bound of $\lambda^{\chi}\left(n\right)$.
The lower bound obtained under a particular scheduling algorithm is of course also a lower bound of $\lambda\left(n\right)$. We first recall the scheduling algorithm used in \cite{Gupta00the}. In \cite{Gupta00the}, the network area $A$ is partitioned into a set of Voronoi cells such that every Voronoi cell contains a disk of radius $\rho\left(n\right)=\sqrt{\frac{100\log n}{\pi n}}$ and is contained in a disk of radius $2\rho\left(n\right)$. Packets are relayed sequentially from a node in a Voronoi cell to another node in an adjacent Voronoi cell along the Voronoi cells intersecting the direct line connecting the source and the destination. Denote the above scheduling scheme by $\chi$. The following result on a lower bound of $Y^{\chi}\left(n\right)$ is required for obtaining the lower bound of $\lambda^{\chi}\left(n\right)$: \begin{lem} \label{lem:number of simultaneous trans lower bound}In the random network considered by Gupta and Kumar \cite{Gupta00the} and assuming the protocol model, there exists a small positive constant $c_{1}$ such that the average number of simultaneous transmissions using $\chi$ satisfies \[ \lim_{n\rightarrow\infty}\Pr\left(Y^{\chi}\left(n\right)\geq c_{1}\frac{n}{\log n}\right)=1 \] \end{lem} Note that each Voronoi cell has an area of at most $\frac{400\log n}{n}$. Therefore the total number of Voronoi cells in $A$ is at least $\frac{n}{400\log n}$. The result in Lemma \ref{lem:number of simultaneous trans lower bound} follows readily from \cite[Lemma 4.4]{Gupta00the}% \footnote{Strictly speaking, the result in \cite[Lemma 4.4]{Gupta00the} was derived for nodes on the surface of a sphere. However the result can be readily modified for a planar area with due consideration to the boundary effect. Thus we ignore the difference and use the result directly.% } which states that \emph{a.a.s.} there exists a positive \emph{constant} $c_{2}$ such that in every $\left(1+c_{2}\right)$ slots, each cell gets at least one slot in which to transmit.
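The $\Theta\left(\frac{n}{\log n}\right)$ cell count underlying this argument can be made concrete with a small numerical sketch (the reuse factor $c_{2}$ below is illustrative, and natural logarithms are used, as in the text):

```python
import math

def voronoi_cells_lower_bound(n):
    """Every Voronoi cell is contained in a disk of radius 2*rho(n),
    so its area is at most pi*(2*rho(n))**2 = 400*log(n)/n, and the
    unit square therefore holds at least n/(400*log n) cells."""
    return n / (400 * math.log(n))

def simultaneous_tx_lower_bound(n, c2):
    """If each cell gets at least one slot in every (1 + c2) slots,
    then on average at least cells/(1 + c2) transmissions proceed
    simultaneously, i.e. Y = Omega(n / log n)."""
    return voronoi_cells_lower_bound(n) / (1 + c2)

for n in (10**4, 10**6, 10**8):
    print(n, round(simultaneous_tx_lower_bound(n, c2=3), 1))
```

Doubling the exponent of $n$ multiplies the bound by roughly $100\times\frac{2}{3}$ rather than $100$, which is exactly the logarithmic loss relative to linear growth.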
In addition to Lemma \ref{lem:number of simultaneous trans lower bound}, we also need the following lemma, which provides an upper bound of $k^{\chi}\left(n\right)$. \begin{lem} \label{lem:average number of hops upper bound Kumar}Under the same setting as that in Lemma \ref{lem:number of simultaneous trans lower bound}, there exists a positive constant $c_{3}$ such that \[ \lim_{n\rightarrow\infty}\Pr\left(k^{\chi}\left(n\right)\leq c_{3}\sqrt{\frac{n}{\log n}}\right)=1 \] \end{lem} \begin{IEEEproof} In \cite[Lemma 4.4]{Gupta00the}, it was shown that for every line connecting an arbitrary source-destination pair, denoted by $L$, and every Voronoi cell $V\in\Gamma_{n}$, where $\Gamma_{n}$ denotes the set of Voronoi cells, there exists a positive constant $c_{4}$ such that $\Pr\left(L\text{ intersects }V\right)\leq c_{4}\sqrt{\frac{\log n}{n}}$. Since each Voronoi cell has an area of at least $\frac{100\log n}{n}$, the maximum number of Voronoi cells is bounded by $\frac{n}{100\log n}$. Denoting by $N\left(L\right)$ the expected number of cells intersected by a randomly chosen source-destination line and summing the above probability over all cells, it follows that $N\left(L\right)\leq\frac{c_{4}}{100}\sqrt{\frac{n}{\log n}}$.
This result, together with the result in \cite[Lemma 4.8]{Gupta00the}, which shows that there exists a sequence $\delta\left(n\right)\rightarrow0$ as $n\rightarrow\infty$ such that $\Pr\left(\text{Every cell }V\in\Gamma_{n}\text{ contains at least one node}\right)\geq1-\delta\left(n\right)$, allows us to conclude that there exists a positive constant $c_{3}=\frac{c_{4}}{100}$ such that \[ \lim_{n\rightarrow\infty}\Pr\left(k^{\chi}\left(n\right)\leq c_{3}\sqrt{\frac{n}{\log n}}\right)=1 \] \end{IEEEproof} Combining the results in Lemmas \ref{lem:number of simultaneous trans lower bound} and \ref{lem:average number of hops upper bound Kumar}, and also using Corollary \ref{cor:lower bound on per-node throughput}, the following result can be shown: \begin{lem} \label{lem:per-node capacity lower bound Gupta and Kumar}In the random network considered by Gupta and Kumar \cite{Gupta00the} and assuming the protocol model, there exists a positive constant $c_{5}$ such that the per-node throughput satisfies \[ \lim_{n\rightarrow\infty}\Pr\left(\lambda\left(n\right)\geq c_{5}W\sqrt{\frac{1}{n\log n}}\right)=1 \] \end{lem} Combining Lemmas \ref{lem:capacity upper bound Gupta Kumar} and \ref{lem:per-node capacity lower bound Gupta and Kumar}, the conclusion readily follows that \emph{a.a.s.} $\lambda\left(n\right)=\Theta\left(W\sqrt{\frac{1}{n\log n}}\right)$. In \cite{Gupta00the}, Gupta and Kumar also investigated the capacity of arbitrary networks and showed that by placing nodes optimally and deterministically to maximize the capacity, e.g. on grid points, $\lambda\left(n\right)=\Theta\left(W\sqrt{\frac{1}{n}}\right)$. Note that when nodes are optimally placed, a reduced transmission range of $r\left(n\right)=\Theta\left(\sqrt{\frac{1}{n}}\right)$ suffices for the network to be connected.
Following an analysis similar to that leading to Lemma \ref{lem:capacity upper bound Gupta Kumar} and using Theorem \ref{thm:Capacity relationship for policy pi} and (\ref{eq:definition of throughput}), the result readily follows that $\lambda\left(n\right)\leq\frac{2048\sqrt{2}}{\pi\left(1+\triangle\right)^{2}}W\frac{1}{nr\left(n\right)}$ and hence $\lambda\left(n\right)=O\left(W\sqrt{\frac{1}{n}}\right)$. To obtain a lower bound of $\lambda\left(n\right)$, first it can be shown that when $r\left(n\right)=\Theta\left(\sqrt{\frac{1}{n}}\right)$, a scheduling algorithm $\chi$ can be easily constructed such that $Y^{\chi}\left(n\right)=\Theta\left(n\right)$ and $k^{\chi}\left(n\right)=\Theta\left(\sqrt{n}\right)$ (for example, an algorithm that first routes packets along a horizontal line to a node at the same vertical height as the destination node and then routes packets along a vertical line to the destination). The conclusion then follows that $\lambda^{\chi}\left(n\right)=\Theta\left(W\sqrt{\frac{1}{n}}\right)$ and $\lambda\left(n\right)=\Omega\left(W\sqrt{\frac{1}{n}}\right)$. Combining the lower and the upper bounds, it follows using Theorem \ref{thm:Capacity relationship for policy pi} that for an arbitrary network with optimally placed nodes, $\lambda\left(n\right)=\Theta\left(W\sqrt{\frac{1}{n}}\right)$. The above results on the throughput capacity of arbitrary networks and random networks are, unsurprisingly, consistent with those in \cite{Gupta00the}. In addition to the above rigorous analysis, we also offer the following intuitive explanation of the capacity results in \cite{Gupta00the} using the relationship revealed in Section \ref{sec:Capacity-of-Static-Networks}. In the network considered by Gupta and Kumar, each node transmits using a fixed and identical transmission range $r\left(n\right)$.
Therefore each transmission consumes a disk of radius $\Theta\left(r\left(n\right)\right)$ and $Y\left(n\right)=O\left(\frac{1}{r^{2}\left(n\right)}\right)$. Here we drop the superscript $\chi$ when discussing $k\left(n\right)$ and $Y\left(n\right)$ generally, i.e. when the result does not depend on the particular scheduling algorithm being used. Furthermore, a scheduling algorithm can be readily constructed that distributes the transmissions evenly across $A$ such that $Y\left(n\right)=\Theta\left(\frac{1}{r^{2}\left(n\right)}\right)$. Given that the average Euclidean distance between a randomly chosen pair of source-destination nodes equals a constant, independent of $n$ \cite{Philip07The}, it can be shown that $k\left(n\right)=\Theta\left(\frac{1}{r\left(n\right)}\right)$. Thus it follows that the throughput capacity $\lambda\left(n\right)=\Theta\left(\frac{W}{nr\left(n\right)}\right)$, viz. a smaller transmission range will result in a larger throughput. The minimum transmission range required for a random network to be \emph{a.a.s.} connected is known to be $r\left(n\right)=\Theta\left(\sqrt{\frac{\log n}{n}}\right)$, while the minimum transmission range required for a network with optimally and deterministically deployed nodes is known to be $r\left(n\right)=\Theta\left(\sqrt{\frac{1}{n}}\right)$. Accordingly, the throughput capacities of random networks and of arbitrary networks with optimally placed nodes are $\Theta\left(\frac{W}{\sqrt{n\log n}}\right)$ and $\Theta\left(\frac{W}{\sqrt{n}}\right)$ respectively. Therefore the $\frac{1}{\sqrt{\log n}}$ factor is the price, in reduced network capacity, paid for placing nodes randomly instead of optimally. \subsection{Capacity of static networks with non-uniform transmission capability} In \cite{Franceschetti07Closing}, Franceschetti \emph{et al.} considered a network with $n$ nodes uniformly and \emph{i.i.d.} distributed on a $\sqrt{n}\times\sqrt{n}$ square.
A node $v_{i}$ can transmit to another node $v_{j}$ directly at a rate of \[ R\left(v_{i},v_{j}\right)=\log\left(1+\frac{Pl\left(X_{i},X_{j}\right)}{N_{0}+\sum_{k\in\Gamma_{i}}Pl\left(X_{k},X_{j}\right)}\right) \] where $\Gamma_{i}$ denotes the set of indices of the nodes that are simultaneously active with $v_{i}$, $l\left(X_{i},X_{j}\right)$ denotes the power attenuation function, given by $l\left(X_{i},X_{j}\right)=\min\left\{ 1,e^{-\gamma\left\Vert X_{i}-X_{j}\right\Vert }/\left\Vert X_{i}-X_{j}\right\Vert ^{\alpha}\right\} $, where either $\gamma>0$, or $\gamma=0$ and $\alpha>2$, and $N_{0}$ represents the background noise. It is assumed that all nodes transmit at the same power level $P$. Each node chooses its destination randomly and independently of other nodes. \begin{rem} Strictly speaking, the results derived in Section \ref{sec:Capacity-of-Static-Networks} can only be used when the link capacity $W$ is fixed. However, it is straightforward to extend these results to study the capacity of the network considered in \cite{Franceschetti07Closing}, where the link capacity depends on the SINR and is variable. More specifically, given the two functions $g\left(n\right)$ and $f\left(n\right)$ defined in Corollary \ref{cor:an upper bound on per-node throughput}, if a third function $h\left(n\right)$ can be found such that $W=O\left(h\left(n\right)\right)$, it can be readily shown using Corollary \ref{cor:an upper bound on per-node throughput} that $\lambda\left(n\right)=O\left(\frac{g\left(n\right)h\left(n\right)}{nf\left(n\right)}\right)$. Similarly, given the two functions $g\left(n\right)$ and $f\left(n\right)$ defined in Corollary \ref{cor:lower bound on per-node throughput}, if a third function $h\left(n\right)$ can be found such that $W=\Omega\left(h\left(n\right)\right)$, then $\lambda\left(n\right)=\Omega\left(\frac{g\left(n\right)h\left(n\right)}{nf\left(n\right)}\right)$. \end{rem} We first introduce the scheduling algorithm used in \cite{Franceschetti07Closing}.
The network area is partitioned into non-overlapping squares of size $c^{2}$, called cells hereinafter. These cells are grouped into $l^{2}$ non-overlapping sets of cells, where $l=2\left(d+1\right)$ and, within each set, adjacent cells are separated by an Euclidean distance of $\left(l-1\right)c$; see Fig. 4 of \cite{Franceschetti07Closing} for an illustration. Parameter $d$ is a positive integer to be specified later. Time is also divided into $l^{2}$ time slots, which are equally distributed among the $l^{2}$ sets of cells. Within each time slot, at most one node in a cell can transmit. Furthermore, nodes located in cells belonging to the same set can transmit at the same time and nodes located in cells of different sets must use different time slots to transmit. The following result, established in \cite{Franceschetti07Closing}, on the transmission rate between a pair of directly connected transmitter and receiver will be used in the later analysis: \begin{lem} \label{lem:Franceschetti Results on Single Hop}Using the above scheduling algorithm, for any integer $d>0$, there exists a $W\left(d\right)>0$ such that a.a.s., when a node is scheduled to transmit, the node can transmit directly to any other node located within an Euclidean distance of $\sqrt{2}c\left(d+1\right)$ at rate $W\left(d\right)$. Furthermore, as $d$ tends to infinity, we have \[ W\left(d\right)=\Omega\left(d^{-\alpha}e^{-\gamma\sqrt{2}cd}\right) \] \end{lem} Lemma \ref{lem:Franceschetti Results on Single Hop} is essentially the same as Theorem 3 in \cite{Franceschetti07Closing}, except that in \cite[Theorem 3]{Franceschetti07Closing}, $W\left(d\right)$ was further multiplied by the fraction of time a cell is scheduled to be active, i.e. $1/l^{2}$, and the data rate was given in terms of rate per cell, whereas in Lemma \ref{lem:Franceschetti Results on Single Hop}, $W\left(d\right)$ corresponds to the link rate, i.e.
$W$ in Theorem \ref{thm:Capacity relationship for policy pi} and Corollaries \ref{cor:capacity of arbitrary networks upper bound}, \ref{cor:capacity of random networks}, \ref{cor:an upper bound on per-node throughput} and \ref{cor:lower bound on per-node throughput}. In addition to the above result, the capacity analysis in \cite{Franceschetti07Closing} also relies on percolation theory. More specifically, the $\sqrt{n}\times\sqrt{n}$ square is partitioned into $L=\left\lceil \frac{\sqrt{n}}{\kappa\log\left(\sqrt{n}\right)}\right\rceil $ non-overlapping horizontal slabs, where $\kappa$ is a positive constant and each slab is of size $\frac{\sqrt{n}}{L}\times\sqrt{n}$. By symmetry, the $\sqrt{n}\times\sqrt{n}$ square can also be partitioned into $L=\left\lceil \frac{\sqrt{n}}{\kappa\log\left(\sqrt{n}\right)}\right\rceil $ non-overlapping vertical slabs, each of size $\sqrt{n}\times\frac{\sqrt{n}}{L}$. Using percolation theory, it was shown that there exist positive constants $c_{1}$ and $c_{2}$ such that, by directly connecting only nodes separated by an Euclidean distance of at most $c_{1}$, \emph{a.a.s.} there are at least $c_{2}\log\left(\sqrt{n}\right)$ \emph{disjoint} left-to-right (top-to-bottom) crossing paths within every horizontal (vertical) slab as $n\rightarrow\infty$ \cite[Theorem 5]{Franceschetti07Closing}. These crossing paths are termed the ``highway'' in \cite{Franceschetti07Closing}. Furthermore, it was shown that nodes not part of the highway can access their respective nearest highway nodes in single hops of length at most proportional to $\log\left(\sqrt{n}\right)$, i.e. the Euclidean distance between non-highway nodes and their respective nearest highway nodes is $O\left(\log\left(\sqrt{n}\right)\right)$. On the basis of the above results, the following scheduling algorithm was used in \cite{Franceschetti07Closing} to deliver a packet from its source to its destination.
The algorithm uses four separate phases, and in each phase time is divided into $l^{2}=4\left(d+1\right)^{2}$ slots, where the value of $d$ varies from phase to phase. The first phase is used by source nodes to access their nearest highway nodes; in the second phase, information is transported on the horizontal highways; in the third phase, information is transported on vertical highways to the highway nodes nearest the respective destinations; and in the fourth phase, information is delivered to the respective destinations. The first and fourth phases use direct transmissions between the source and destination nodes and their respective highway nodes, located within an Euclidean distance $O\left(\log\left(\sqrt{n}\right)\right)$ away, while the second and third phases use multiple hops to deliver information hop-by-hop along the highway, with each hop spanning an Euclidean distance of at most $c_{1}$. Denote the above scheduling algorithm by $\xi$. The following result on the throughput capacity can be established: \begin{lem} \label{lem:capacity of ad hoc networks using highway}Using the scheduling algorithm $\xi$, the throughput capacity in the random networks considered in \cite{Franceschetti07Closing} satisfies $\lambda^{\xi}\left(n\right)=\Omega\left(\frac{1}{\sqrt{n}}\right)$.\end{lem} \begin{IEEEproof} Denote the per-node throughput in the four different phases by $\lambda_{1}^{\xi}\left(n\right)$, $\lambda_{2}^{\xi}\left(n\right)$, $\lambda_{3}^{\xi}\left(n\right)$ and $\lambda_{4}^{\xi}\left(n\right)$ respectively. We analyze $\lambda_{1}^{\xi}\left(n\right)$, $\lambda_{2}^{\xi}\left(n\right)$, $\lambda_{3}^{\xi}\left(n\right)$ and $\lambda_{4}^{\xi}\left(n\right)$ separately in the following paragraphs to obtain $\lambda^{\xi}\left(n\right)$, where $\lambda^{\xi}\left(n\right)=\min\left\{ \lambda_{1}^{\xi}\left(n\right),\lambda_{2}^{\xi}\left(n\right),\lambda_{3}^{\xi}\left(n\right),\lambda_{4}^{\xi}\left(n\right)\right\} $. We first analyze the link capacity in phase 1.
From the earlier result that the Euclidean distance between non-highway nodes and their respective nearest highway nodes is $O\left(\log\left(\sqrt{n}\right)\right)$, there exists a positive constant $c_{3}$ such that \emph{a.a.s.} this distance is smaller than or equal to $c_{3}\log n$. Choosing $d$ to be the smallest integer satisfying $\sqrt{2}c\left(d+1\right)\geq c_{3}\log n$ and using Lemma \ref{lem:Franceschetti Results on Single Hop}, it follows that each non-highway node can transmit to its nearest highway node at a rate of $\Omega\left(d^{-\alpha}e^{-\gamma\sqrt{2}cd}\right)=\Omega\left(\left(\log n\right)^{-\alpha}n^{-c\gamma\frac{\sqrt{2}}{2}}\right)$ \emph{a.a.s.} using $\xi$. Now we analyze the number of simultaneous transmissions in phase 1. Note that each highway node is separated from its nearest highway node by at most a Euclidean distance $c_{1}$. Therefore, if a node has no other node located within a Euclidean distance of $c_{1}$ from itself, that node must be a non-highway node. Let $N_{h}$ be the number of cells with at least one non-highway node, let $N_{o}$ be the number of cells with exactly one non-highway node, and let $N_{iso}$ be the number of cells where each cell has exactly one node \emph{and} that node has no other node located within a Euclidean distance of $c_{1}$ from itself. It follows from the above observation that \begin{equation} N_{h}\geq N_{o}\geq N_{iso}\label{eq:inequality on N_h} \end{equation} Now we further analyze the asymptotic property of $N_{iso}$. Let $\Gamma$ denote the set of all cells. Let $I_{i}$ be an indicator random variable such that $I_{i}=1$ if the $i^{th}$ cell, denoted by $C_{i}$, has exactly one node \emph{and} that node has no other node located within a Euclidean distance of $c_{1}$ from itself; otherwise $I_{i}=0$. 
It follows from the definition of $N_{iso}$ that $N_{iso}=\sum_{C_{i}\in\Gamma}I_{i}$. Using the property that nodes are uniformly and i.i.d. distributed, it can be shown that $\lim_{n\rightarrow\infty}E\left(I_{i}\right)=p=c^{2}e^{-c^{2}}e^{-\pi c_{1}^{2}}$, where $c^{2}e^{-c^{2}}$ is the probability that $C_{i}$ has exactly one node and $e^{-\pi c_{1}^{2}}$ is the probability that the node has no other node located within a Euclidean distance of $c_{1}$ from itself. Furthermore, $Var\left(I_{i}\right)=E\left(I_{i}^{2}\right)-E^{2}\left(I_{i}\right)=E\left(I_{i}\right)-E^{2}\left(I_{i}\right)$ and $\lim_{n\rightarrow\infty}Var\left(I_{i}\right)=p-p^{2}$. Note that $I_{i}$ and $I_{j}$ are asymptotically independent as $n\rightarrow\infty$ if the associated cells $C_{i}$ and $C_{j}$ are separated by a Euclidean distance greater than or equal to $2c_{1}$. Denote by $\Gamma_{ind}$ a maximal set of cells in which adjacent cells are separated by a Euclidean distance $\mu=\left\lceil \frac{2c_{1}}{c}\right\rceil c$. It can be readily shown that $\left|\Gamma_{ind}\right|\geq\left(\frac{\sqrt{n}}{\mu+c}\right)^{2}$, where $\left|\Gamma_{ind}\right|$ denotes the cardinality of $\Gamma_{ind}$. Therefore, using the central limit theorem, \[ \lim_{n\rightarrow\infty}\Pr\left(\sum_{C_{i}\in\Gamma_{ind}}I_{i}\geq\frac{n}{\left(\mu+c\right)^{2}}-h\left(n\right)\right)=1 \] where $h\left(n\right)$ is an arbitrary positive function satisfying $h\left(n\right)=o\left(n\right)$ and $\lim_{n\rightarrow\infty}h\left(n\right)=\infty$. Noting that $N_{iso}=\sum_{C_{i}\in\Gamma}I_{i}\geq\sum_{C_{i}\in\Gamma_{ind}}I_{i}$ and using inequality (\ref{eq:inequality on N_h}) and the above equation, \emph{a.a.s.} $N_{h}=\Omega\left(n\right)$ as $n\rightarrow\infty$. Using $\xi$, every $l^{2}=4\left(d+1\right)^{2}$ time slots, each cell gets one time slot to transmit. 
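The limiting value $p=c^{2}e^{-c^{2}}e^{-\pi c_{1}^{2}}$ can be checked numerically: it is the product of the Poisson single-occupancy probability of a $c\times c$ cell at unit node density and the void probability of a disc of radius $c_{1}$. The sketch below uses purely illustrative constants, not the values of $c$ and $c_{1}$ fixed elsewhere in the paper.

```python
import math

def p_isolated(c, c1):
    """Limiting probability that a c x c cell holds exactly one node
    and that node has no neighbour within distance c1 (unit density):
    Poisson(mean c**2) single occupancy times the void probability
    of a disc of radius c1."""
    single = c ** 2 * math.exp(-c ** 2)    # P(exactly one node in cell)
    void = math.exp(-math.pi * c1 ** 2)    # P(no other node within c1)
    return single * void

# illustrative constants only
p = p_isolated(1.0, 1.0)
# a constant fraction of cells is isolated, which is what drives
# N_iso = Omega(n) in the argument above
assert 0 < p < 1
```

For any fixed $c$ and $c_{1}$ the value $p$ is a positive constant independent of $n$, which is all the lower bound $N_{h}=\Omega\left(n\right)$ needs.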
Therefore \emph{a.a.s.} the average number of simultaneous transmissions in phase 1 equals $\Omega\left(\frac{n}{4\left(d+1\right)^{2}}\right)$. Note that in phase 1, only direct transmission is allowed. It then follows from Corollary \ref{cor:lower bound on per-node throughput} that in the first phase, each node can achieve a per-node throughput of $\lambda_{1}^{\xi}\left(n\right)$ where \[ \lambda_{1}^{\xi}\left(n\right)=\Omega\left(\frac{n}{4\left(d+1\right)^{2}}\right)\times\frac{\Omega\left(\left(\log n\right)^{-\alpha}n^{-c\gamma\frac{\sqrt{2}}{2}}\right)}{n} \] or equivalently $\lambda_{1}^{\xi}\left(n\right)=\Omega\left(\left(\log n\right)^{-\alpha-2}n^{-c\gamma\frac{\sqrt{2}}{2}}\right)$. Using a similar analysis, it can be shown that $\lambda_{4}^{\xi}\left(n\right)=\Omega\left(\left(\log n\right)^{-\alpha-2}n^{-c\gamma\frac{\sqrt{2}}{2}}\right)$. Now we analyze the throughput capacity in phases 2 and 3, considering phase 2 first. In phase 2, $d$ is chosen to be the smallest integer satisfying $\sqrt{2}c\left(d+1\right)\geq c_{1}$. It follows from Lemma \ref{lem:Franceschetti Results on Single Hop} that \emph{a.a.s.} there exists a positive constant $c_{4}$ such that each highway node can transmit at a rate of at least $c_{4}$ bits per second, i.e. $W>c_{4}$ in phase 2. As introduced earlier, \emph{a.a.s.} each horizontal slab of size $\frac{\sqrt{n}}{L}\times\sqrt{n}$ has at least $c_{2}\log\left(\sqrt{n}\right)$ \emph{disjoint} highways where $L=\left\lceil \frac{\sqrt{n}}{\kappa\log\left(\sqrt{n}\right)}\right\rceil $. Two nodes belonging to two disjoint highways are separated by a Euclidean distance of at least $c_{1}$. Therefore the number of disjoint highways that can cross a cell is at most $\left\lceil \frac{c^{2}}{\frac{1}{4}\pi c_{1}^{2}}\right\rceil $. Each horizontal slab has $\frac{\sqrt{n}}{L}\times\frac{\sqrt{n}}{c^{2}}$ cells. 
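With the exponent $c\gamma\frac{\sqrt{2}}{2}$ kept strictly below $\frac{1}{2}$ (the condition imposed at the end of the proof), $\lambda_{1}^{\xi}\left(n\right)=\left(\log n\right)^{-\alpha-2}n^{-c\gamma\sqrt{2}/2}$ eventually dominates $\frac{1}{\sqrt{n}}$, since the polynomial gap absorbs the polylogarithmic factor. A numerical illustration with arbitrary (not the paper's) parameter values:

```python
import math

def lam1(n, alpha, expo):
    """Phase-1 per-node throughput scaling (log n)**(-alpha-2) * n**(-expo),
    where expo stands for c * gamma * sqrt(2) / 2."""
    return math.log(n) ** (-alpha - 2) * n ** (-expo)

alpha, expo = 3.0, 0.4          # expo < 1/2, as required in the proof
# lam1(n) * sqrt(n) grows without bound, so phase 1 (and by the same
# argument phase 4) is not the bottleneck against the Omega(1/sqrt(n))
# target; very large n is used because the polylog decays first
ratios = [lam1(n, alpha, expo) * math.sqrt(n) for n in (1e50, 1e150, 1e300)]
assert ratios[0] < ratios[1] < ratios[2]
```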
Thus each horizontal highway crosses at most $\frac{\sqrt{n}}{L}\times\frac{\sqrt{n}}{c^{2}}\times\left\lceil \frac{c^{2}}{\frac{1}{4}\pi c_{1}^{2}}\right\rceil /\left(c_{2}\log\left(\sqrt{n}\right)\right)=O\left(\sqrt{n}\right)$ cells. A packet moves by at least one cell in each hop. Therefore the average number of hops traversed by a packet in phase 2 is $O\left(\sqrt{n}\right)$. Furthermore, \emph{a.a.s.} the total number of disjoint horizontal highways is at least $c_{2}L\log\left(\sqrt{n}\right)>\frac{c_{2}}{\kappa}\sqrt{n}$ and each horizontal highway crosses at least $\frac{\sqrt{n}}{c}$ cells, where $\sqrt{n}$ is the minimum length of a left-to-right line in $A$. The number of disjoint highways that can cross a cell is at most $\left\lceil \frac{c^{2}}{\frac{1}{4}\pi c_{1}^{2}}\right\rceil $. Therefore, \emph{a.a.s.} the number of cells where each cell contains at least one highway node is at least $\frac{c_{2}}{\kappa}\sqrt{n}\times\frac{\sqrt{n}}{c}/\left\lceil \frac{c^{2}}{\frac{1}{4}\pi c_{1}^{2}}\right\rceil $. Using $\xi$, every $l^{2}=4\left(d+1\right)^{2}$ time slots, each cell gets one time slot to transmit. It follows that \emph{a.a.s.} the average number of simultaneous transmissions in phase 2 is greater than or equal to $\frac{c_{2}}{\kappa}\sqrt{n}\times\frac{\sqrt{n}}{c}\times\frac{1}{l^{2}}/\left\lceil \frac{c^{2}}{\frac{1}{4}\pi c_{1}^{2}}\right\rceil =c_{5}n$, where $c_{5}=\frac{c_{2}}{\kappa}\times\frac{1}{c}\times\frac{1}{l^{2}}/\left\lceil \frac{c^{2}}{\frac{1}{4}\pi c_{1}^{2}}\right\rceil $ is a positive constant independent of $n$. It follows from the above analysis and Corollary \ref{cor:lower bound on per-node throughput} that \[ \lambda_{2}^{\xi}\left(n\right)=\Omega\left(\frac{1}{\sqrt{n}}\right) \] By symmetry, $\lambda_{3}^{\xi}\left(n\right)=\Omega\left(\frac{1}{\sqrt{n}}\right)$. By choosing the value of $c$ such that $c\gamma\frac{\sqrt{2}}{2}<\frac{1}{2}$, the conclusion in the lemma readily follows. 
\end{IEEEproof} Lemma \ref{lem:capacity of ad hoc networks using highway} allows us to conclude that the throughput capacity in the random network considered by Franceschetti et al. satisfies $\lambda\left(n\right)=\Omega\left(\frac{1}{\sqrt{n}}\right)$, which is consistent with the result in \cite{Franceschetti07Closing}. In \cite{Franceschetti07Closing}, nodes are essentially allowed to use two transmission ranges, viz. a smaller transmission range of $\Theta\left(1\right)$ for nodes forming the highways and a larger transmission range of $O\left(\log\left(\sqrt{n}\right)\right)$ for non-highway nodes to access their respective nearest highway nodes. Most transmissions are carried over the highways using the smaller transmission range, while the larger transmission range is used only for the last mile in phases 1 and 4. It can be shown that phases 1 and 4 do not become the bottleneck in determining the throughput capacity. Therefore both $Y\left(n\right)$ and $k\left(n\right)$ are dominated by the smaller transmission range and accordingly $Y\left(n\right)=\Theta\left(n\right)$, $k\left(n\right)=\Theta\left(\sqrt{n}\right)$. Furthermore, as a consequence of Lemma \ref{lem:Franceschetti Results on Single Hop}, $W=\Omega\left(1\right)$. It then readily follows that $\lambda\left(n\right)=\Omega\left(\frac{1}{\sqrt{n}}\right)$. This higher throughput capacity, compared with that in \cite{Gupta00the}, is achieved by allowing nodes to adjust their transmission capabilities as required. In \cite{Chau11Capacity}, Chau, Chen and Liew showed that the higher throughput capacity of $\lambda\left(n\right)=\Omega\left(\frac{1}{\sqrt{n}}\right)$ can also be achieved in large-scale CSMA wireless networks if wireless nodes performing CSMA operations are allowed to use two different carrier-sensing ranges. 
The capacity analysis in \cite{Chau11Capacity} is based on two findings: a) by adjusting the count-down rate (a tunable parameter in CSMA protocols) of each node, a distributed and randomized CSMA scheme can achieve the same capacity as a centralized deterministic scheduling scheme \cite{Jiang10A}; b) by using the highway system defined in \cite{Franceschetti07Closing}, a higher throughput capacity of $\lambda\left(n\right)=\Omega\left(\frac{1}{\sqrt{n}}\right)$ can be achieved using a centralized deterministic scheduling algorithm. Using \cite[Lemma 9]{Chau11Capacity}, which states that in CSMA schemes there exists a set of count-down rates such that the throughput of each and every link is not smaller than what can be achieved with a centralized deterministic scheduling scheme, together with an analysis similar to the one above for the networks in \cite{Franceschetti07Closing}, the result in \cite{Chau11Capacity} can also be obtained using the relationship established in this paper. Except for some analysis of particular details of CSMA networks, e.g. the hidden-node problem and the distributed nature of CSMA protocols, the analysis is similar to the analysis earlier in this section and hence is omitted from the paper. Observing that in a large network a much smaller transmission range suffices to connect most nodes (i.e. to form a giant component), whereas the larger transmission range of $\Theta\left(\sqrt{\frac{\log n}{n}}\right)$ is required only to connect the few hard-to-reach nodes \cite{Ta09On}, a routing scheme can be designed that achieves a per-node throughput of $\lambda\left(n\right)=\Theta\left(\frac{1}{\sqrt{n}}\right)$ without using the highway system: a node uses the smaller transmission range for most communications and resorts to the larger transmission range only when the next-hop node cannot be reached with the smaller one. 
\subsection{Capacity of mobile ad-hoc networks} In \cite{Grossglauser02Mobility}, Grossglauser and Tse considered mobile ad hoc networks consisting of $n$ nodes initially distributed uniformly and i.i.d. on a unit square $A$. Nodes are mobile and the spatial distribution of nodes is stationary and ergodic with stationary distribution uniform on $A$. The trajectories of nodes are i.i.d. Each node chooses its destination randomly and independently of other nodes. At time $t$, a node $v_{i}$ can transmit directly to another node $v_{j}$ at rate $W$ if the SINR at $v_{j}$ is above a prescribed threshold $\beta$: \[ \frac{P_{i}\left(t\right)\gamma_{ij}\left(t\right)}{N_{0}+\frac{1}{L}\sum_{k\in\Gamma_{i}\left(t\right)}P_{k}\left(t\right)\gamma_{kj}\left(t\right)}>\beta \] where $N_{0}$ is the background noise power, $L$ is the processing gain, $\Gamma_{i}\left(t\right)$ is the set of nodes, not including $v_{i}$ itself, simultaneously transmitting with $v_{i}$ at time $t$ and $P_{i}\left(t\right)$ is the transmitting power of $v_{i}$ at time $t$. The transmitting power $P_{i}\left(t\right)$ is determined by the scheduling algorithm and is chosen to be a constant independent of $n$. For a narrowband system, $L=1$. The parameter $\gamma_{ij}\left(t\right)$ is the channel gain, given by $\gamma_{ij}\left(t\right)=\left\Vert X_{i}\left(t\right)-X_{j}\left(t\right)\right\Vert ^{-\alpha}$ where $X_{i}\left(t\right)$ represents the location of $v_{i}$ at time $t$ and $\alpha$ is a parameter greater than $2$. A two-hop relaying strategy is adopted. In the first phase, a source transmits a packet to a nearby node (acting as a relay). As the source moves around, different packets are transmitted to different relay nodes. In the second phase, either the source or a relay transmits the packet when it is close to the destination and is scheduled to transmit. 
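The SINR condition above can be encoded directly. The following helper (with purely illustrative parameter values) evaluates the SINR using the channel gain $\gamma_{ij}=\left\Vert X_{i}-X_{j}\right\Vert ^{-\alpha}$.

```python
def sinr(tx, rx, interferers, P=1.0, N0=1e-4, L=1.0, alpha=3.0):
    """SINR at receiver rx for transmitter tx; interferers is the set
    Gamma_i of other simultaneous transmitters. Gain = distance**(-alpha)."""
    def gain(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** (-alpha / 2)
    interference = sum(P * gain(o, rx) for o in interferers)
    return P * gain(tx, rx) / (N0 + interference / L)

beta = 1.0
# a short link survives a distant interferer ...
assert sinr((0.0, 0.0), (0.1, 0.0), [(0.5, 0.5)]) > beta
# ... while a long link with the same interferer does not
assert sinr((0.0, 0.0), (0.9, 0.0), [(0.5, 0.5)]) < beta
```

This is why the strategy only schedules transmissions over short (nearest-neighbour) links: with $\alpha>2$, short links keep the SINR above $\beta$ even in the presence of a constant density of interferers.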
Within each time slot, the set of concurrent transmissions is scheduled randomly and independently of transmissions in the previous time slot. More specifically, a parameter $\theta\in\left(0,0.5\right)$, called the transmitter density, is fixed first. Then $n_{S}=\theta n$ nodes are randomly designated as transmitters and the remaining nodes are designated as \emph{potential receivers}. Denote the set of potential receivers by $R_{t}$. Each transmitter transmits its packets to its nearest neighbor among all nodes in $R_{t}$. Among all the $n_{S}$ sender-receiver pairs, only those whose SINR is above $\beta$ are retained. Denote the number of such pairs by $N_{t}$. Note that the set of transmitter-receiver pairs is random in each time slot (thus $N_{t}$ is a random integer) and depends on the time-varying locations of nodes. Denote the above scheduling algorithm by $\chi$. From the above description of the scheduling algorithm $\chi$, obviously $1\leq k^{\chi}\left(n\right)\leq2$. Furthermore, it can be shown \cite[Theorem III-4]{Grossglauser02Mobility} that $Y^{\chi}\left(n\right)=E\left(N_{t}\right)$ and that there exists a positive constant $c$ such that \begin{equation} \lim_{n\rightarrow\infty}\Pr\left(\frac{Y^{\chi}\left(n\right)}{n}\geq c\right)=1\label{eq:MANET lower bound on simultaneous transmission} \end{equation} The following result on the asymptotic throughput capacity of the random mobile ad hoc networks considered in \cite{Grossglauser02Mobility} readily follows: \begin{lem} \label{lem:throughput capacity of mobile ad hoc networks}In the random mobile ad hoc network considered by Grossglauser and Tse \cite{Grossglauser02Mobility}, \emph{a.a.s.} $\lambda\left(n\right)=\Theta\left(1\right)$.\end{lem} \begin{IEEEproof} We first consider an upper bound of $\lambda\left(n\right)$. It can be easily shown that $\min_{\chi\in\Phi^{f}}k^{\chi}\left(n\right)=\Omega\left(1\right)$ and $\max_{\chi\in\Phi^{f}}Y^{\chi}\left(n\right)=O\left(n\right)$. 
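One slot of $\chi$ can be simulated directly. The sketch below uses illustrative constants and a fixed seed (not the values analyzed in \cite{Grossglauser02Mobility}): it designates $\theta n$ senders, pairs each with its nearest potential receiver, and retains the pairs whose SINR exceeds $\beta$.

```python
import random

def one_slot(n, theta=0.25, beta=1.0, P=1.0, N0=1e-6, L=1.0,
             alpha=3.0, seed=0):
    """Simulate one slot of the scheduler chi: theta*n random senders,
    each aimed at its nearest potential receiver; return N_t, the
    number of sender-receiver pairs whose SINR exceeds beta."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    ns = int(theta * n)
    senders, receivers = pts[:ns], pts[ns:]

    def gain(a, b):
        d2 = (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
        return d2 ** (-alpha / 2)

    kept = 0
    for s in senders:
        # nearest potential receiver = smallest squared distance
        r = min(receivers,
                key=lambda q: (s[0] - q[0]) ** 2 + (s[1] - q[1]) ** 2)
        interference = sum(P * gain(t, r) for t in senders if t is not s)
        if P * gain(s, r) / (N0 + interference / L) > beta:
            kept += 1
    return kept

n_t = one_slot(200)
assert 0 <= n_t <= 50   # at most n_S = theta * n pairs can be retained
```

The claim behind (\ref{eq:MANET lower bound on simultaneous transmission}) is that $N_{t}$ scales linearly with $n$; the simulation only illustrates the mechanics of one slot.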
It then follows using Corollary \ref{cor:an upper bound on per-node throughput} that $\lambda\left(n\right)=O\left(1\right)$. Now we consider the lower bound. Using the two-phase scheduling algorithm $\chi$ introduced above, $1\leq k^{\chi}\left(n\right)\leq2$. Using this result, (\ref{eq:MANET lower bound on simultaneous transmission}) and Corollary \ref{cor:lower bound on per-node throughput}, the conclusion readily follows that $\lim_{n\rightarrow\infty}\Pr\left(\lambda\left(n\right)\geq\frac{c}{2}W\right)=1$ where $W$ is a constant independent of $n$. \end{IEEEproof} The capacity result in \cite{Grossglauser02Mobility} and the use of the two-hop relaying strategy can be intuitively explained as follows. Obviously the two-hop relaying strategy helps to cap $k^{\chi}\left(n\right)$ at $2$. Compared with a one-hop strategy where a source is only allowed to transmit when it is close to its destination, the two-hop relaying strategy also helps to spread the traffic stream between a source-destination pair over a large number of intermediate relay nodes such that in steady state, the packets of every source node will be distributed across all the nodes in the network. This arrangement ensures that every node in the network will have packets buffered for every other node. Therefore a node always has a packet to send when a transmission opportunity is available. Thus the role of the two-hop relaying strategy, compared with a one-hop strategy, is to maximize $Y^{\chi}\left(n\right)$ such that $Y^{\chi}\left(n\right)=\Theta\left(n\right)$ \cite{Grossglauser02Mobility} at the expense of a slightly increased $k^{\chi}\left(n\right)$. A lower bound on $\lambda\left(n\right)$ readily results using $Y^{\chi}\left(n\right)=\Theta\left(n\right)$, $k^{\chi}\left(n\right)\leq2$ and Corollary \ref{cor:lower bound on per-node throughput}. An upper bound on $\lambda\left(n\right)$ can be easily obtained using Corollary \ref{cor:an upper bound on per-node throughput}. 
The conclusion for $\lambda\left(n\right)$ then readily follows. The capacity of mobile ad-hoc networks under other mobility models and routing strategies \cite{Neely05Capacity} can also be obtained analogously. Given the insight revealed in Theorem \ref{thm:Capacity relationship for policy pi} and Corollaries \ref{cor:capacity of arbitrary networks upper bound}, \ref{cor:capacity of random networks}, \ref{cor:an upper bound on per-node throughput} and \ref{cor:lower bound on per-node throughput}, it can be readily shown that in a network with a different traffic model than that in \cite{Grossglauser02Mobility}, e.g. one in which each node has an infinite stream of packets for every other node in the network, a one-hop strategy can also achieve a transport capacity of $\eta\left(n\right)=\Theta\left(n\right)$. Therefore the insight revealed in Theorem \ref{thm:Capacity relationship for policy pi} and Corollaries \ref{cor:capacity of arbitrary networks upper bound}, \ref{cor:capacity of random networks}, \ref{cor:an upper bound on per-node throughput} and \ref{cor:lower bound on per-node throughput} helps in designing the optimal routing strategy for different scenarios of mobile ad-hoc networks. \subsection{Multicast capacity} In the previous three subsections, we have used Theorem \ref{thm:Capacity relationship for policy pi} and Corollaries \ref{cor:capacity of arbitrary networks upper bound}, \ref{cor:capacity of random networks}, \ref{cor:an upper bound on per-node throughput} and \ref{cor:lower bound on per-node throughput} established in Section \ref{sec:Capacity-of-Static-Networks} to analyze the capacity of the random static and mobile networks considered in \cite{Gupta00the,Franceschetti07Closing,Grossglauser02Mobility}. An upper bound on the throughput capacity can often be readily obtained using Corollary \ref{cor:an upper bound on per-node throughput}. 
For the lower bound, the procedure generally involves using existing results and scheduling algorithms already established in \cite{Gupta00the,Franceschetti07Closing,Grossglauser02Mobility} to obtain $k^{\chi}\left(n\right)$ and $Y^{\chi}\left(n\right)$, and then using Corollary \ref{cor:lower bound on per-node throughput} to obtain the throughput capacity lower bound. The use of Theorem \ref{thm:Capacity relationship for policy pi} and Corollaries \ref{cor:capacity of arbitrary networks upper bound}, \ref{cor:capacity of random networks}, \ref{cor:an upper bound on per-node throughput} and \ref{cor:lower bound on per-node throughput} often results in simpler analysis. Similar methods can also be used to obtain the multicast capacity and the capacity of hybrid networks considered in this subsection and the next. To avoid repetition and to focus on the main ideas, in these two subsections we choose to give an intuitive explanation of the results on the multicast capacity and the capacity of hybrid networks using only Theorem \ref{thm:Capacity relationship for policy pi} and Corollaries \ref{cor:capacity of arbitrary networks upper bound}, \ref{cor:capacity of random networks}, \ref{cor:an upper bound on per-node throughput} and \ref{cor:lower bound on per-node throughput}. In \cite{Li09Multicast}, Li considered the multicast capacity of a network with $n$ nodes uniformly and i.i.d. on an $a\times a$ square, denoted by $A$. It is assumed that all nodes have the same transmission range $r\left(n\right)=\Theta\left(\sqrt{\frac{\log n}{n}}\right)$ and are capable of transmitting at $W$ bits per second over a common channel. Furthermore, a protocol interference model is assumed and two concurrent transmitters must be separated by a Euclidean distance of at least $\left(1+\Delta\right)r\left(n\right)$. 
A subset $S\subseteq V_{n}$ of $n_{s}=\left|S\right|$ nodes is randomly chosen to serve as the source nodes of $n_{s}$ multicast sessions, where $n_{s}$ is assumed to be sufficiently large. Each node $v_{i}\in S$ chooses a set of $l-1$ points randomly and independently from $A$ and multicasts its data to the node nearest each point. Denote by $\Phi^{f}$ the set of scheduling algorithms that allocate the transport capacity equally among all multicast sessions. Denote by $\eta^{\chi}\left(n\right)$ the maximum transport capacity that can be achieved \emph{a.a.s.} using $\chi$. The multicast capacity $\eta\left(n\right)$ is the maximum transport capacity that can be achieved \emph{a.a.s.} over all $\chi\in\Phi^{f}$: $\eta\left(n\right)=\max_{\chi\in\Phi^{f}}\eta^{\chi}\left(n\right)$. Note that a bit multicast to $l-1$ destinations is counted as a single bit in the calculation of the multicast transport capacity. Therefore our definition of transport capacity in Section \ref{sec:Network-Models} is consistent with the definition of the multicast transport capacity in \cite{Li09Multicast} and the results established in Section \ref{sec:Capacity-of-Static-Networks} can be used directly here. We first consider the situation where $l=O\left(\frac{n}{\log n}\right)$ and obtain an upper bound on the multicast transport capacity. It can be readily shown that $\max_{\chi\in\Phi^{f}}Y^{\chi}\left(n\right)=O\left(\frac{1}{r^{2}\left(n\right)}\right)$. Furthermore, it can be shown that \emph{a.a.s.} any multicast tree spanning $l$ nodes that are randomly placed in $A$ has a total edge length of at least $ca\sqrt{l}$ \cite[Lemma 9]{Li09Multicast}, where $c$ is a positive constant. It follows that $\min_{\chi\in\Phi^{f}}k^{\chi}\left(n\right)=\Omega\left(\frac{ca\sqrt{l}}{r\left(n\right)}\right)$. 
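Combining these estimates of $Y\left(n\right)$ and $k\left(n\right)$ through the capacity relationship of Theorem \ref{thm:Capacity relationship for policy pi} (transport capacity on the order of $Y\cdot W/k$) yields the upper bound. A quick numerical sanity check with illustrative constants (here $c=a=1$, purely for the arithmetic):

```python
import math

def multicast_upper_bound(n, l, W=1.0, a=1.0, c=1.0):
    """Order estimates for the multicast upper bound when l = O(n/log n):
    r ~ sqrt(log n / n), Y = O(1/r**2) simultaneous transmissions and
    k = Omega(c*a*sqrt(l)/r) transmissions per multicast packet, so
    eta = O(Y*W/k) = O((W/sqrt(l)) * sqrt(n/log n)) up to constants."""
    r = math.sqrt(math.log(n) / n)
    Y = 1.0 / r ** 2
    k = c * a * math.sqrt(l) / r
    return Y * W / k

n = 10 ** 6
eta = multicast_upper_bound(n, l=16)
# agrees with the closed form (W/sqrt(l)) * sqrt(n/log n)
closed = (1.0 / math.sqrt(16)) * math.sqrt(n / math.log(n))
assert abs(eta - closed) < 1e-9 * closed
```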
Therefore, as an easy consequence of Corollary \ref{cor:an upper bound on per-node throughput}, $\eta\left(n\right)=O\left(\frac{1}{r\left(n\right)\sqrt{l}}W\right)=O\left(\frac{W}{\sqrt{l}}\sqrt{\frac{n}{\log n}}\right)$. To obtain a lower bound on the multicast transport capacity, a scheduling algorithm $\chi$ is constructed (see \cite{Li09Multicast} for a detailed description of the scheduling algorithm $\chi$). More specifically, $A$ is partitioned into non-overlapping squares, each of size $\frac{r\left(n\right)}{\sqrt{5}}\times\frac{r\left(n\right)}{\sqrt{5}}$. These squares are called cells; the total number of cells equals $\frac{5a^{2}}{r^{2}\left(n\right)}$. Furthermore, nodes located in adjacent cells are directly connected, where two cells are \emph{adjacent} if they have at least one point in common. Using the property that nodes are uniformly and i.i.d. distributed, \emph{a.a.s.} \emph{every} cell has at least one node \cite[Lemma 18]{Li09Multicast}. Dividing time into time slots of equal length, it can be shown that there exists a positive integer $c_{1}$, independent of $n$, such that every $c_{1}$ time slots, \emph{every} cell gets at least one time slot to transmit. Using the above results, \emph{a.a.s.} $Y^{\chi}\left(n\right)\geq\frac{5a^{2}}{c_{1}r^{2}\left(n\right)}$. Choosing one node from each cell, it can be shown that these nodes form a connected component, termed a \emph{connected dominating set}. All other nodes are directly connected to at least one node in the connected dominating set. Multicast traffic is routed using the connected dominating set. Using the result that for an arbitrary cell, \emph{a.a.s.}, the probability that a randomly chosen multicast flow is routed via the cell is at most $c_{2}\sqrt{l}r\left(n\right)/a$ \cite[Lemma 20]{Li09Multicast}, \emph{a.a.s.} the number of cells crossed by a randomly chosen multicast flow is at most $\frac{c_{2}\sqrt{l}r\left(n\right)}{a}\times\frac{5a^{2}}{r^{2}\left(n\right)}=5c_{2}a\frac{\sqrt{l}}{r\left(n\right)}$. Therefore \emph{a.a.s.} $k^{\chi}\left(n\right)=O\left(\frac{\sqrt{l}}{r\left(n\right)}\right)$ and $\eta^{\chi}\left(n\right)=\Omega\left(\frac{1}{r\left(n\right)\sqrt{l}}W\right)=\Omega\left(\frac{W}{\sqrt{l}}\sqrt{\frac{n}{\log n}}\right)$. Combining the upper and lower bounds on the transport capacity, we conclude that when $l=O\left(\frac{n}{\log n}\right)$, \emph{a.a.s.} $\eta\left(n\right)=\Theta\left(\frac{W}{\sqrt{l}}\sqrt{\frac{n}{\log n}}\right)$. When $l=\Omega\left(\frac{n}{\log n}\right)$, the situation becomes slightly different. More specifically, the density of the multicast destination nodes becomes high enough that the probability that a single transmission will deliver the packet to more than one multicast destination node becomes high. In fact, using the above connected dominating set, it can be shown that \emph{a.a.s.} the number of transmissions required to deliver a packet to all nodes (hence to the $l-1$ multicast destination nodes) is at most $\frac{5a^{2}}{r^{2}\left(n\right)}$, which is independent of $l$. Consequently $k\left(n\right)=\Theta\left(\frac{1}{r^{2}\left(n\right)}\right)$. The conclusion then readily follows that when $l=\Omega\left(\frac{n}{\log n}\right)$, $\eta\left(n\right)=\Theta\left(W\right)$. \subsection{Capacity of hybrid networks\label{sub:Capacity-of-Hybrid}} Now we consider the impact of infrastructure nodes on network capacity. In addition to $n$ ordinary nodes uniformly and i.i.d. on a unit square $A$, a set of $M$ infrastructure nodes is regularly or randomly placed in the same area $A$, where $M\leq n$. These infrastructure nodes act as relay nodes only and do not generate their own traffic. 
Following the same setting as that in \cite{Zemlianov05Capacity}, it is assumed that the infrastructure nodes have the same transmission range $r\left(n\right)=\Theta\left(\sqrt{\frac{\log n}{n}}\right)$ and link capacity $W$ when they communicate with the ordinary nodes, and that these infrastructure nodes are inter-connected via a backbone network with much higher capacity. Furthermore, a protocol interference model is adopted. The routing algorithm used in the above network \cite{Zemlianov05Capacity} has been optimized such that the infrastructure nodes do not become the bottleneck, which could otherwise be caused by a poorly designed routing algorithm diverting an excessive amount of traffic to the infrastructure nodes. First consider the case when $M=o\left(\frac{1}{r^{2}\left(n\right)}\right)=o\left(\frac{n}{\log n}\right)$. In this situation, the number of transmissions involving an infrastructure node as a transmitter or receiver is small and has little impact on $Y\left(n\right)$, which has been shown in previous subsections to be $\Theta\left(\frac{1}{r^{2}\left(n\right)}\right)$. Furthermore, it can be shown that the average Euclidean distance between a randomly chosen pair of infrastructure nodes is $\Theta\left(1\right)$ \cite{Philip07The}. That is, a packet transmitted between two infrastructure nodes moves by a Euclidean distance of $\Theta\left(1\right)$ whereas a packet transmitted by a pair of directly connected ordinary nodes moves by a Euclidean distance of $\Theta\left(r\left(n\right)\right)$. Therefore a transmission between two infrastructure nodes \emph{is equivalent to} $\Theta\left(\frac{1}{r\left(n\right)}\right)$ transmissions between ordinary nodes and the \emph{equivalent} average number of simultaneous ordinary-node transmissions equals $\Theta\left(\left(\frac{1}{r^{2}\left(n\right)}-M\right)+\frac{M}{r\left(n\right)}\right)=\Theta\left(\frac{1}{r^{2}\left(n\right)}+\frac{M}{r\left(n\right)}\right)$. 
It follows, using a procedure similar to that outlined in Section \ref{sub:Capacity-of-static-Kumar}, that \[ \eta\left(n\right)=\Theta\left(\frac{\left(\frac{1}{r^{2}\left(n\right)}+\frac{M}{r\left(n\right)}\right)W}{\frac{1}{r\left(n\right)}}\right)=\Theta\left(\left(\sqrt{\frac{n}{\log n}}+M\right)W\right) \] Therefore when $M=o\left(\sqrt{\frac{n}{\log n}}\right)$, the infrastructure nodes have little impact on the order of $\eta\left(n\right)$; when $M=\Omega\left(\sqrt{\frac{n}{\log n}}\right)$ (and $M=o\left(\frac{n}{\log n}\right)$), the infrastructure nodes start to have a dominant impact on the network capacity and the above equation on the transport capacity reduces to $\eta\left(n\right)=\Theta\left(MW\right)$. Note that the fundamental reason why infrastructure nodes improve capacity is that they help a pair of ordinary nodes separated by a large Euclidean distance to bypass a long multi-hop path, thereby reducing $k\left(n\right)$. Therefore the same result in the above equation can also be obtained by analyzing the reduction in $k\left(n\right)$ directly, although that analysis is more complicated. When $M=\Omega\left(\frac{n}{\log n}\right)$, assuming that the transmission range stays the same as when $M=o\left(\frac{n}{\log n}\right)$, i.e. $r\left(n\right)=\Theta\left(\sqrt{\frac{\log n}{n}}\right)$, the number of simultaneously active infrastructure nodes becomes limited by the transmission range. More specifically, only $\Theta\left(\frac{1}{r^{2}\left(n\right)}\right)=\Theta\left(\frac{n}{\log n}\right)$ infrastructure nodes can be active simultaneously. Furthermore, \emph{a.a.s.} each ordinary node can access its nearest infrastructure node in $\Theta\left(1\right)$ hops. Following an analysis similar to that in the last paragraph, it can be shown that $\eta\left(n\right)=\Theta\left(\frac{nW}{\log n}\right)$ when $M=\Omega\left(\frac{n}{\log n}\right)$. The above results are consistent with the results in \cite{Zemlianov05Capacity}. 
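The regime change at $M=\Theta\left(\sqrt{\frac{n}{\log n}}\right)$ can be made concrete with a toy evaluation of the order expression $\eta\left(n\right)=\Theta\left(\left(\sqrt{\frac{n}{\log n}}+M\right)W\right)$, with all hidden constants suppressed:

```python
import math

def hybrid_eta(n, M, W=1.0):
    """Order-of-magnitude transport capacity with M infrastructure
    nodes, in the M = o(n/log n) regime: (sqrt(n/log n) + M) * W,
    constants suppressed."""
    return (math.sqrt(n / math.log(n)) + M) * W

n = 10 ** 6
base = hybrid_eta(n, 0)          # ad hoc term only, ~ sqrt(n/log n)
# below M ~ sqrt(n/log n), infrastructure changes only the constant ...
assert hybrid_eta(n, 10) < 2 * base
# ... while M >> sqrt(n/log n) makes eta scale linearly with M
assert abs(hybrid_eta(n, 10 ** 5) / 10 ** 5 - 1.0) < 0.01
```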
However, we further note that when $M=\Omega\left(\frac{n}{\log n}\right)$, a smaller transmission range of $r\left(n\right)=\Theta\left(\frac{1}{\sqrt{M}}\right)$ is sufficient for an ordinary node to reach its nearest infrastructure node and hence to achieve connectivity. A smaller transmission range helps to increase $Y\left(n\right)$: it has been shown previously that $Y\left(n\right)=\Theta\left(\frac{1}{r^{2}\left(n\right)}\right)$, while $k\left(n\right)=\Theta\left(1\right)$. Therefore the achievable transport capacity using the smaller transmission range is $\eta\left(n\right)=\Theta\left(MW\right)=\Omega\left(\frac{nW}{\log n}\right)$, which is better than the result $\eta\left(n\right)=\Theta\left(\frac{nW}{\log n}\right)$ in \cite{Zemlianov05Capacity}. Moreover, in contrast to the conclusion in \cite{Zemlianov05Capacity} that when $M=\Omega\left(\frac{n}{\log n}\right)$ further investment in infrastructure nodes will not lead to an improvement in capacity, our result suggests that even when $M=\Omega\left(\frac{n}{\log n}\right)$, capacity keeps increasing linearly with $M$. This capacity improvement is achieved by reducing the transmission range as $M$ increases. \section{Conclusion and Further Work \label{sec:Conclusion-and-Further}} In this paper, we showed that the network capacity can be determined by estimating three parameters, viz. the average number of simultaneous transmissions, the link capacity and the average number of transmissions required to deliver a packet to its destination. Our result is valid for both finite networks and asymptotically infinite networks. We have demonstrated the applicability of our result by using it to analyze the capacity of a number of different networks studied in the literature. The use of our result often simplifies the analysis. More importantly, we showed that the same methodology can be used to analyze the capacity of networks under different conditions. 
Therefore our work makes important contributions towards developing a generic methodology for network capacity analysis that is applicable to a variety of different scenarios. Furthermore, as illustrated in Section \ref{sub:Capacity-of-Hybrid}, the simple capacity-determining relationship revealed in this paper can be used as a powerful and convenient tool to quickly estimate the capacity of networks based on an intuitive understanding of them. However, we readily acknowledge that estimating the three parameters, viz. the average number of simultaneous transmissions, the link capacity and the average number of transmissions required to deliver a packet to its destination, may still require some customized analysis that takes into account the details that distinguish one network from another. For asymptotically infinite random networks, the use of our result to estimate the capacity often involves estimating the capacity upper bound and the capacity lower bound separately. The capacity upper bound can be readily obtained by estimating the maximum number of simultaneously active transmissions satisfying the interference constraints that can be accommodated in the network area and the minimum number of transmissions required to deliver a packet. The capacity lower bound is more difficult to find. It usually involves constructing a spatial and temporal scheduling algorithm for the particular network and demonstrating that the network capacity is achievable using that algorithm. It remains to be investigated whether a generic technique can be found such that the capacity lower bound can be obtained without resorting to designing a customized algorithm for a particular network. In this paper, we have ignored physical layer details by assuming that each node is capable of transmitting at a fixed and identical data rate. This assumption allows us to focus on the topological aspects of networks that determine capacity.
It remains to be investigated how to develop a generic methodology to incorporate the impact of physical layer techniques, e.g. coding and MIMO, on capacity. We refer readers to recent work by Jiang et al. \cite{Jiang12Towards}, which suggests a possible direction for extending our result to incorporate physical layer details. \bibliographystyle{ieeetr}
\section{Introduction} The story begins by considering power series of the form given in \eqref{eq:perf1} with coefficients in the perfectoid field $K$, denoted by $K\lr{X}_\infty$. Perfectoid fields are defined in \cite{scholze_1}, and most foundational details can be found in \cite{Kiran_AWS}. \begin{equation}\label{eq:perf1} \sum_{n\geq 0}a_nX^n,~n\in\ds{Z}[1/p]\text{ and } \abs{a_n}\rightarrow 0 \text{ as } n\rightarrow\infty \end{equation} The above comes equipped with a Gauss norm, the valuation ring $R=\{a\in K:\abs{a}\leq 1\}$, a maximal ideal $\id{m}=\{a\in K:\abs{a}< 1\}$ and the residue field $k=R/\id{m}$. Let $R\lr{X}_\infty$ denote the restricted ring of series $f$ with $\abs{f}\leq 1$ (Gauss norm). The terms of such a series can be ordered by observing the countability of the rationals. The reduction map takes power series and converts them into polynomials \begin{equation} \begin{aligned} \pi:R&\rightarrow k\\ \pi: R\lr{X}_\infty&\rightarrow k[X,X^{1/p},\ldots,X^{1/p^i},\ldots ]\\ f&\mapsto \til{f} \end{aligned} \end{equation} In particular, $g\in K\lr{X}_\infty$ is a unit iff its reduction $\til{g}\in\tio{k}$ is a unit. All the standard properties of Tate algebras as described in \cite[pp 15]{bosch2014lectures} hold here, and have been proved in \cite{Bedi2018}. The above helps us define the order of a power series (the analogue of the degree of a polynomial). \begin{mdef} A power series $g\in K\lr{X}_\infty$ with $\abs{g}=1$ is distinguished of order $s$ iff its reduction is of the form \begin{equation} \til{g}=a_0+\ldots+a_iX^{j/p^i}+\ldots+a_sX^s, a_i\in\tio{K}, s\in\ds{Z}[1/p] \end{equation} \end{mdef} \begin{theorem}[Weierstra{\ss} preparation theorem] Let $g\in K\lr{X}_\infty$ be of order $s$; then there is a unique monic polynomial $h$ of degree $s$ such that $g=uh$, where $u$ is a unit in $K\lr{X}_\infty$.
\end{theorem} Notice that an element $g\in K\lr{X}_\infty$ has only finitely many zeros, since $h$ in the theorem above can be made into a polynomial with integer degree by a change of variable. For example, $X^5+X^{1/p^2}$ has degree $5p^2$. Thus, for any such polynomial, look for the minimum power, which is of the form $a/p^i$ ($i=0$ for an integer power), and then change the variable $X^{1/p^i}\mapsto X.$ This result is the key to doing algebraic geometry on perfectoid spaces: start with series, do the computations, and reduce to polynomial form whenever one wants to talk about zeros or poles. \section{Perfectoid Abel Jacobi Theorem} \begin{mdef} Given a point $\alpha\in K$ we define $\ensuremath{\mathrm{ord}}_\alpha g$ as the highest power of $\varphi_\alpha(X)$ dividing $h$, where $\varphi_\alpha(X)$ is the irreducible polynomial of $\alpha$ over $K$. \end{mdef} Put $\ensuremath{\mathrm{ord}}_\alpha(0)=+\infty$ to obtain an additive valuation $g\mapsto \ensuremath{\mathrm{ord}}_\alpha(g)\in\ds{Z}[1/p]$; this can be extended to rational functions of the form $f=g_1/g_2$ with $g_i\in K\lr{X}_\infty$. The rational functions will also be called \emph{meromorphic functions}. The numbers $\ensuremath{\mathrm{ord}}_\alpha(f)$ satisfy the following: \begin{enumerate} \item {[Finiteness Condition]} There are only finitely many $\alpha$ with $\ensuremath{\mathrm{ord}}_\alpha\neq 0$ in every region $0<r\leq |\alpha|\leq r'$. \item {[Rationality Condition]} If $\alpha$ and $\beta$ are conjugate over $K$, then $\ensuremath{\mathrm{ord}}_\alpha=\ensuremath{\mathrm{ord}}_\beta$ (they share the same minimal polynomial). \end{enumerate} The collection $\{\ensuremath{\mathrm{ord}}_\alpha(f)\}$ is called the divisor of $f$. More generally, a collection $\{ m_\alpha\},\alpha\in \alg{K}$ with $m_\alpha\in\ds{Z}[1/p]$ satisfying the above conditions is called a divisor; divisors add componentwise and thus form an additive group.
Given a divisor of the form $\sum_im_{\alpha_i}[\alpha_i]$, with finitely many $\alpha_i$, the corresponding function $f$ with $\ensuremath{\mathrm{Div}}(f)=\sum_im_{\alpha_i}[\alpha_i]$ is given as \begin{equation} \prod_{\abs{\alpha_i}\leq 1}\left(1-\frac{\alpha_i}{X}\right)^{e_{\alpha_i}m_{\alpha_i}}\prod_{\abs{\alpha_i}> 1}\left(1-\frac{X}{\alpha_i}\right)^{e_{\alpha_i}m_{\alpha_i}} \end{equation} where $e_\alpha$ is the degree of inseparability of $\alpha$ over $K$. Combining the conjugates, we get \begin{equation} \left(\frac{\varphi_\alpha(X)}{X^{n_\alpha}} \right)^{m_\alpha} \text{ if }\abs{\alpha}\leq 1\text{ and }\left(\frac{\varphi_\alpha(X)}{a_0} \right)^{m_\alpha} \text{ if }\abs{\alpha}> 1 \end{equation} where $n_\alpha=[K(\alpha):K]$ and $a_0=N_{K(\alpha)/K}(\alpha)$, as in \cite[pp 11-12]{roquette1970analytic}. Closely following \cite[chapter 1]{roquette1970analytic}, one can adapt the results to the case of exponents in $\ds{Z}[1/p]$. A meromorphic function on $\alg{K}$ has period $q$ if it satisfies the functional equation $f(q^{-1}X)=f(X)$; these functions form a subfield, denoted $F_K(q)$ and called the elliptic function field over $K$. If a function is $q$ periodic, then $\ensuremath{\mathrm{Div}}( f)$ is $q$ periodic too. A non-zero meromorphic function $f$ is determined by $\ensuremath{\mathrm{Div}}(f)$ up to a factor of the form $cX^d$ with $c\in\tio{K}$ and $d\in\ds{Z}[1/p]$; this helps define theta functions \begin{equation} f\left(\frac{X}{q}\right)=\frac{(-X)^df(X)}{a}\text{ where }a\in\tio{K}, d\in\ds{Z}[1/p] \end{equation} where $d$ is called the degree of $f$ and $a$ is called the multiplicator of $f$. If another function $g$ satisfies $\ensuremath{\mathrm{Div}} g=\ensuremath{\mathrm{Div}} f$, it becomes a theta function with the same degree as $f$ \begin{equation}\label{Rq3} g\left(\frac{X}{q}\right)=\frac{(-X)^dg(X)}{aq^k},\qquad k\in\ds{Z}[1/p] \end{equation} and its multiplicator differs by a power of $q$.
Thus, $d$ is uniquely determined by the divisor (say $\id{d}=\ensuremath{\mathrm{Div}} f=\ensuremath{\mathrm{Div}} g$). The degree is uniquely determined as \begin{equation} \deg_q\id{d}=d. \end{equation} The multiplicator $a$ is uniquely determined up to a power of $q$; we denote its residue class in $\tio{K}/q^k, k\in\ds{Z}[1/p]$ by $\Phi_q(\id{d})$ (the \emph{Jacobi image}), and write \begin{equation} a\equiv \Phi_q(\id{d}){\mod^\times } q^{p^{-\infty}} \end{equation} If $f(X/q)=f(X)$, then \eqref{Rq3} gives \begin{equation}\label{Roq5} \deg_q(\id{d})=0\text{ and }\Phi_q(\id{d})\equiv 1 \end{equation} Conversely, if the above is satisfied, then for any rational function with $\ensuremath{\mathrm{Div}} f=\id{d}$ we have $d=0,a=q^{-k}$ for some $k\in\ds{Z}[1/p]$. Then $g(X)$ defined below is $q$ periodic and is uniquely determined up to a factor in $\tio{K}$. \begin{equation} g(X)=cX^kf(X), \qquad c\in\tio{K} \end{equation} Thus, the conditions in \eqref{Roq5} are necessary and sufficient for $\id{d}$ to be the divisor of a $q$ periodic function. The formulas in \eqref{Roq12} explicitly give the Jacobi image (of a $q$ periodic divisor $\id{d}$) and the degree. We have the Perfectoid-Abel-Jacobi Theorem {\cite[pp 15]{roquette1970analytic}}: \begin{proposition}[Perfectoid-Abel-Jacobi Theorem]\label{prop1} A $q$ periodic divisor $\id{d}$ is the divisor of a $q$ periodic function $f$ iff it satisfies \begin{equation} \deg_q(\id{d})=0,\qquad \Phi_q(\id{d})\equiv 1\mod^{\times}q^{p^{-\infty}} \end{equation} where the degree and Jacobi image $\Phi_q$ are given in \eqref{Roq12}. Furthermore, $f$ is uniquely determined by $\id{d}$ up to a factor in $\tio{K}$. \end{proposition} The fundamental theta function for $\id{d}$ gives an explicit formula for computing the degree and Jacobi image.
\begin{equation}\label{funtheta} \Theta(X)=\prod_{n\geq 0}\left(1-\frac{q^n}{X}\right)\prod_{n<0}\left(1-{q^{-n}}{X}\right),\qquad n\in\ds{Z}[1/p] \end{equation} The above function satisfies the functional equation $\Theta(X/q)=-X\Theta(X)$, which follows from the observation \begin{equation} \frac{\Theta(X)}{\Theta(X/q)}=\underset{n\geq 0}{\underbrace{\left(1-\frac{1}{X}\right)}}\cdot \overset{n< 0}{\overbrace{\left(\frac{1}{1-X}\right)}}=-\frac{1}{X}\end{equation} Rewriting the functional equation explicitly in \eqref{thetaFunc}, observe that the degree of $\Theta$ is one and its multiplicator is also one. \begin{equation}\label{thetaFunc} \Theta(X/q)=-X\Theta(X) \end{equation} For every $\alpha\in\tio{\alg{K}}$ define \begin{equation} \Theta_\alpha(X)=\Theta(\alpha^{-1}X) \end{equation} whose divisor is $q$ periodic with multiplicity one; its functional equation is \begin{equation} \Theta_\alpha(q^{-1}X)=\alpha^{-1}(-X)\Theta_\alpha(X) \end{equation} which gives degree one and multiplicator $\alpha$. Set \begin{equation} \Theta_{\id{d}}=\prod_{|q|<|\alpha|\leq 1}\Theta_\alpha^{e_\alpha m_{\alpha}} \end{equation} where $e_\alpha$ is the degree of inseparability of $\alpha$ over $K$ and $\id{d}=\{m_\alpha\}$ is a $q$ periodic divisor; $\Theta_\id{d}$ satisfies the functional equation \begin{equation} \begin{aligned} \Theta_\id{d}(X/q)&=\frac{(-X)^d}{a}\Theta_{\id{d}}(X)\\ \text{where}\qquad\qquad\qquad &\\ d=\sum_{|q|<|\alpha|\leq 1}{e_\alpha m_\alpha}&=\deg_q\id{d}\\ a=\prod_{|q|<|\alpha|\leq 1}\alpha^{e_\alpha m_\alpha}&\cong \Phi_q(\id{d}){\mod^\times }q^{p^{-\infty}} \end{aligned} \end{equation} The rationality conditions ensure that we can write the above in the form below, with the prime denoting that the sum (resp. product) runs over conjugacy classes (conjugate elements have the same absolute value).
\begin{equation} \begin{aligned}\label{Roq12} \deg_q\id{d}&={\sum'}_{|q|<|\alpha|\leq 1}[K(\alpha):K]m_\alpha\\ \Phi_q(\id{d})&\cong \prod'_{|q|<|\alpha|\leq 1} N_{K(\alpha)|K}(\alpha)^{m_\alpha}\mod^\times q^{p^{-\infty}} \end{aligned} \end{equation} The perfectoid version of the Corollary at \cite[pp 15]{roquette1970analytic} is: \begin{corollary}\label{coro1} For every $\alpha\in\tio{\alg{K}}$ there is a $q$ periodic function $f$ with $\ensuremath{\mathrm{ord}}_\alpha(f)=1$. If $\beta$ is not a $K$ conjugate of $\alpha\mod^\times q^{p^{-\infty}}$, then we can choose $f$ such that $\ensuremath{\mathrm{ord}}_\beta(f)=0$. \end{corollary} \begin{proof} Following \eqref{Roq12} we construct a divisor $\id{p}_\alpha$ for the elements conjugate to $\alpha$ \begin{equation} \deg_q\id{p}_\alpha=[K(\alpha):K],\qquad \Phi_q(\id{p}_\alpha)\equiv N_{K(\alpha)/K}(\alpha)\mod^\times q^{p^{-\infty}} \end{equation} with $\ensuremath{\mathrm{ord}}_\alpha(f)=1$ (for all $K$-conjugates of $\alpha$) and $\ensuremath{\mathrm{ord}}_\beta(f)=0$, by setting $m_\alpha=1$ (thus there are no fractional powers to be considered in this proof). There are two cases to consider: either $\alpha\in\tio{K}$, or $\alpha\notin \tio{K}$, that is $\alpha\in\tio{\alg{K}}\ensuremath{\backslash}\tio{K}$.\\ Case 1: For $\alpha\in \tio{K}$ choose elements $u,v\in\tio{K}$ such that $\alpha,uv,\alpha u, v$ are all different $\mod^\times q^{p^{-\infty}}$.
The divisor \begin{equation} \begin{aligned} \id{d}&=\id{p}_\alpha+\id{p}_{uv}-\id{p}_{\alpha u}-\id{p}_v,\\ \deg\id{d}&=0\text{ and }\Phi_q(\id{d})\equiv\frac{\alpha uv}{\alpha uv}\equiv 1\mod^\times q^{p^{-\infty}} \end{aligned} \end{equation} satisfies the conditions of Proposition \ref{prop1}. Thus $\id{d}=\ensuremath{\mathrm{Div}}(f)$ for a $q$ periodic function $f$ with $\ensuremath{\mathrm{ord}}_\alpha(f)=1$, and $u,v$ can be so chosen that $uv,\alpha u,v$ are all different from $\beta\mod^\times q^{p^{-\infty}}$, giving $\ensuremath{\mathrm{ord}}_\beta(f)=0$.\\ Case 2: Let $\alpha\notin \tio{K}$ and $d=[K(\alpha):K],a=N_{K(\alpha)|K}(\alpha), u\in\tio{K}, v=u^{1-d}a$; then the divisor \begin{equation} \begin{aligned} \id{d}&=\id{p}_\alpha-(d-1)\id{p}_u-\id{p}_v\\ \deg{\id{d}}&=0\text{ and }\Phi_q\equiv \frac{a}{u^{d-1}v}\equiv 1\mod^\times q^{p^{-\infty}}. \end{aligned} \end{equation} Thus $\ensuremath{\mathrm{Div}} f=\id{d}$ with $f$ a $q$ periodic function, and by construction $\id{d}$ has multiplicity one at $\alpha$; an appropriate choice of $u,v\not\equiv\beta\mod^{\times}q^{p^{-\infty}}$ gives $\ensuremath{\mathrm{ord}}_\beta(f)=0$. \end{proof} \begin{mdef} The vector space attached to a $q$ periodic divisor $\id{d}$ is denoted as \begin{equation} \begin{aligned} L_K(q|\id{d})&:=\{\text{$q$ periodic functions $f$ such that }\ensuremath{\mathrm{Div}}(f)\geq -\id{d}\}\\ \ell_K(q|\id{d})&:=\dim L_K(q|\id{d}) \end{aligned} \end{equation} \end{mdef} \begin{rem}\label{rem1} Proposition \ref{prop1} gives the dimension \begin{equation} \ell_K(q|\id{d})= \begin{cases} 0 \text{ if }\deg_q{\id{d}}<0\\ 1 \text{ if }\deg_q{\id{d}}=0\\ \end{cases} \end{equation} Note that in the perfectoid world $\deg\in\ds{Z}[1/p]$. \end{rem} In order to prove the Riemann-Roch Theorem in the perfectoid setting, Corollary \ref{coro1} has to be recast for a perfectoid power $1/p^i$ ($i,p$ fixed) in place of $1$.
\begin{corollary}\label{coro2} For every $\alpha\in\tio{\alg{K}}$ and chosen $i,p$ there is a $q$ periodic function $f$ with $\ensuremath{\mathrm{ord}}_\alpha(f)=1/p^i$. If $\beta$ is not a $K$ conjugate of $\alpha\mod^\times q^{p^{-\infty}}$, then we can choose $f$ such that $\ensuremath{\mathrm{ord}}_\beta(f)=0$. \end{corollary} \begin{proof} Following Corollary \ref{coro1} we construct a divisor $\id{p}_\alpha$ for the elements conjugate to $\alpha$ \begin{equation} \deg_q\id{p}_\alpha=[K(\alpha):K]\cdot\frac{1}{p^i},\qquad \Phi_q(\id{p}_\alpha)\equiv N_{K(\alpha)/K}(\alpha)^{{1}/{p^i}}\mod^\times q^{p^{-\infty}} \end{equation} with $\ensuremath{\mathrm{ord}}_\alpha(f)={1}/{p^i}$ (for all $K$-conjugates of $\alpha$) and $\ensuremath{\mathrm{ord}}_\beta(f)=0$, by setting $m_\alpha={1}/{p^i}$ (thus fractional powers do need to be considered in this proof). The divisor $\id{p}_\alpha$ has multiplicity $1/p^i$ at $\alpha$. There are two cases to consider: either $\alpha\in\tio{K}$, or $\alpha\notin \tio{K}$, that is $\alpha\in\tio{\alg{K}}\ensuremath{\backslash}\tio{K}$.\\ Case 1: For $\alpha\in \tio{K}$ choose elements $u,v\in\tio{K}$ such that $\alpha,uv,\alpha u, v$ are all different $\mod^\times q^{p^{-\infty}}$.
The divisor \begin{equation} \begin{aligned} \id{d}&=\id{p}_\alpha+\id{p}_{uv}-\id{p}_{\alpha u}-\id{p}_v,\\ \deg\id{d}&=0\text{ and }\Phi_q(\id{d})\equiv\frac{\alpha uv}{\alpha uv}\equiv 1\mod^\times q^{p^{-\infty}} \end{aligned} \end{equation} satisfies the conditions of Proposition \ref{prop1}. Thus $\id{d}=\ensuremath{\mathrm{Div}}(f)$ for a $q$ periodic function $f$ with $\ensuremath{\mathrm{ord}}_\alpha(f)=1/p^i$, and $u,v$ can be so chosen that $uv,\alpha u,v$ are all different from $\beta\mod^\times q^{p^{-\infty}}$, giving $\ensuremath{\mathrm{ord}}_\beta(f)=0$.\\ Case 2: Let $\alpha\notin \tio{K}$ and $d=[K(\alpha):K],a=N_{K(\alpha)|K}(\alpha)^{1/p^i}, u\in\tio{K}, v=u^{1-d}a$; then the divisor \begin{equation} \begin{aligned} \id{d}&=\id{p}_\alpha-(d-1)\id{p}_u-\id{p}_v\\ \deg{\id{d}}&=0\text{ and }\Phi_q\equiv \frac{a}{u^{d-1}v}\equiv 1\mod^\times q^{p^{-\infty}}. \end{aligned} \end{equation} Thus $\ensuremath{\mathrm{Div}} f=\id{d}$ with $f$ a $q$ periodic function, and by construction $\id{d}$ has multiplicity $1/p^i$ at $\alpha$; an appropriate choice of $u,v\not\equiv\beta\mod^{\times}q^{p^{-\infty}}$ gives $\ensuremath{\mathrm{ord}}_\beta(f)=0$. \end{proof} \subsection{Perfectoid Riemann-Roch} Let a divisor be $1[x_1]+2[x_2]+3/p^i[x_3]$; this is rewritten as $1p^i/p^i[x_1]+2p^i/p^i[x_2]+3/p^i[x_3]$ and called a divisor with denominator $1/p^i$. The Riemann-Roch theorem gives the dimension $\ell_K(q|\id{d})$ as the degree ($\times p^i$) of the divisor. Setting $i=0$ gives the standard case as in \cite[Proposition 2, pp 16]{roquette1970analytic}. \begin{theorem}\label{RR} If $\id{d}$ is a $q$ periodic divisor with denominator $1/p^i$ (where $i,p$ are fixed) and $\deg_q(\id{d})>0$, then \begin{equation} \ell_K(q|\id{d})=\deg_q(\id{d})\cdot p^i \end{equation} \end{theorem} \begin{proof} For $\deg_q(\id{d})=0$ the result holds from Proposition \ref{prop1}; thus we may assume $d=\deg_q(\id{d})>0$ and induct on the numerator of $d$.
Start by choosing an element $a\in\tio{K},a\equiv\Phi_q(\id{d})\mod^\times q^{p^{-\infty}}$; now choose $b\not\equiv 1, a\mod^\times q^{p^{-\infty}} $ such that $\id{d}$ has multiplicity zero at $b$. The divisor $\id{p}_b$ as constructed in Corollary \ref{coro2} has multiplicity $1/p^i$. Notice the divisor \begin{equation} \id{d}'=\id{d}-\id{p}_b\text{ where }\deg\id{d}'=(d-1)/p^i,\qquad \Phi_q(\id{d}')\equiv\frac{a}{b}\not\equiv 1 \mod^\times q^{p^{-\infty}} \end{equation} By induction, $\ell_K(q|\id{d}')=d-1$. Consider the map $f\rightarrow f(b)$; the kernel of this map is the space $L_K(q|\id{d}')$ of functions vanishing at $b$. Since we need the dimension to be $d$, we need to show that there is an $f\in L_K(q|\id{d})$ with $f(b)\neq 0$. Let \begin{equation} \ensuremath{\mathrm{Div}} f=\id{z}-\id{d},\text{ hence }\id{z}\geq 0 \end{equation} and $\id{z}$ has multiplicity zero at $b$. Proposition \ref{prop1} requires us to show \begin{equation} \deg_q(\id{z})=d/p^i, \Phi_q(\id{z})\equiv a\mod^\times q^{p^{-\infty}} \end{equation} which is given by the divisor below \begin{equation} \id{z}=\id{p}_a+(d-1)\id{p}_1 \end{equation} where $\id{p}_a$ and $\id{p}_1$ are as given in Corollary \ref{coro2} (to carry the factor $1/p^i$) and $b\neq 1,a\mod^\times q^{p^{-\infty}}$ as assumed above. \end{proof} \section{Perfectoid Tate Curve} Define the perfectoid Tate curve as $\mathcal{T}:=\ds{G}_{m,K}/\lr{q}$ with $0<\abs{q}<1$, where $\lr{q}$ is the subgroup of $\tio{K}$ generated by $q$. This is precisely the definition in \cite[pp 121]{fresnel2012rigid}. It is possible to define the above as $\mathcal{T}:=\ds{G}_{m,K}/\lr{q}$ with fractional powers of $q$, that is, to quotient by the subgroup generated by $q^{\ds{Z}[1/p]}$, but this would give us a different model.
In the standard model \cite[pp 220]{bosch2014lectures} the gluing is induced via multiplication by $q$; if we went by the fractional power case, we would have to consider gluing induced by multiplication by the vector $(q,q^{1/p},q^{1/p^2},\ldots,q^{1/p^i},\ldots)$, and define admissible sets according to each $q^{1/p^i}$. In order to simplify the situation, we want to keep the same admissible sets as for the standard Tate curve but put a different sheaf on them. Let $\ds{B}(r_1,r_2)$ denote an annulus with inner radius $r_1$ and outer radius $r_2$, that is, $|r_1|\leq |r_2|$. Following Definition 1.3.1 of \cite[pp6]{lutkebohmert2016rigid}, the ring corresponding to $\ds{B}(r_1,r_2)$ is $K\lr{X/r_2,r_1/X}$. We can replace $\ds{B}(r_1,r_2)$ with $\ds{B}(r_1^{1/p},r_2^{1/p})$, with corresponding ring $K\lr{(X/r_2)^{1/p},(r_1/X)^{1/p}}$, and use the direct image sheaf to transfer the ring $K\lr{(X/r_2)^{1/p},(r_1/X)^{1/p}}$ to the annulus $\ds{B}(r_1,r_2)$. Notice the inverse system below, which is analogous to the one given at \cite[pp16]{lutkebohmert2016rigid}, \begin{equation} \cdots\rightarrow\ds{B}(r_1^{1/p^2},r_2^{1/p^2})\xra{(\cdot)^p}\ds{B}(r_1^{1/p},r_2^{1/p})\xra{(\cdot)^p}\ds{B}(r_1,r_2) \end{equation} which gives us a direct system (all maps are inclusions) \begin{equation} \cdots\leftarrow K\lr{(X/r_2)^{1/p^2},(r_1/X)^{1/p^2}}\leftarrow K\lr{(X/r_2)^{1/p},(r_1/X)^{1/p}}\leftarrow K\lr{X/r_2,r_1/X} \end{equation} Thus, we can construct the inverse and direct limits of the systems above and call them the perfectoid versions, denoting the space by $\ds{B}_\infty$ and the corresponding ring by $K\lr{(X/r_2)^{1/p^\infty},(r_1/X)^{1/p^\infty}}$. \[K\lr{X/r_2,r_1/X}_\infty:=K\lr{(X/r_2)^{1/p^\infty},(r_1/X)^{1/p^\infty}}=\cup_{i\geq 0}K\lr{(X/r_2)^{1/p^i},(r_1/X)^{1/p^i}}\] We will not worry much about limits; instead we directly associate the ring $K\lr{X/r_2,r_1/X}_\infty$ to the annulus $\ds{B}(r_1,r_2)$.
We will follow chapter $5$ of \cite{fresnel2012rigid} and define the open sets as $U_0=\ds{B}(q,q^{-1}),U_1=\ds{B}(q^2,q),U_{0,1,+}=\ds{B}(q,q),U_{0,1,-}=\ds{B}(q^2,q^2),U_{0+}=\ds{B}(q^{-1},q^{-1}) $. The corresponding rings are given below (with coefficients tending to zero as $n\rightarrow \infty$). Notice that we are closely following \cite[pp 122]{fresnel2012rigid}, replacing $z$ with $X$ and $\pi$ with $q$, and explicitly writing the constants (and, of course, $n\in\ds{Z}[1/p]$ instead of the usual $\ds{Z}$). \begin{landscape} {\begin{equation}\label{eq:Tate1} \begin{aligned} \curly{O}(U_0)&=\left\{K\lr{qX,q/X}_\infty\text{ or }\sum_{n> 0}a_n(q X)^n+b_0+\sum_{n>0}b_n\left(\frac{q}{X}\right)^n\text{ with }\lim a_n=0,\lim b_n=0 \right\}\\ \curly{O}(U_1)&=\left\{K\lr{X/q,q^2/X}_\infty\text{ or }\sum_{n> 0}c_n\left(\frac{X}{q}\right)^n+c_0+\sum_{n>0}d_n\left(\frac{q^2}{X}\right)^n \text{ with }\lim c_n=0,\lim d_n=0\right\}\\ \curly{O}(U_{0,1,+})&=\left\{K\lr{X/q,q/X}_\infty\text{ or }\sum_{n> 0}e'_n\left(\frac{X}{q}\right)^n+e_0+\sum_{n> 0}e''_n\left(\frac{q}{X}\right)^n\text{ or }\sum_{n\in\ds{Z}[1/p]}e_n\left(\frac{X}{q}\right)^n\text{ with }\lim e_n=0\right\}\\ \curly{O}(U_{0,1,-})&=\left\{K\lr{X/q^2,q^2/X}_\infty\text{ or }\sum_{n> 0}f'_n\left(\frac{X}{q^2}\right)^n+f_0+\sum_{n> 0}f''_n\left(\frac{q^2}{X}\right)^n\text{ or }\sum_{n\in\ds{Z}[1/p]}f_n\left(\frac{X}{q^2}\right)^n\text{ with }\lim f_n=0\right\}\\ \curly{O}(U_{0+})&=\left\{K\lr{qX,1/qX}_\infty\text{ or }\sum_{n> 0}g'_n\left(qX\right)^n+g_0+\sum_{n> 0}g''_n\left(\frac{1}{qX}\right)^n\text{ or }\sum_{n\in\ds{Z}[1/p]}g_n\left(qX\right)^n \text{ with }\lim g_n=0\right\}\\ \curly{O}(U_{0,1})&=\curly{O}(U_{0,1,+})\oplus\curly{O}(U_{0,1,-}) \end{aligned} \end{equation} } \end{landscape} \subsection{Gluing the Sets} The inner boundary of $U_0$ is $U_{0,1,+}$ and its outer boundary is $U_{0+}$, while the outer boundary of $U_1$ is $U_{0,1,+}$ and its inner boundary is $U_{0,1,-}$.
We can identify $U_{0,1,-}=\ds{B}(q^2,q^2)$ with $U_{0+}=\ds{B}(q^{-1},q^{-1})$ by multiplying by $1/q^3$. In terms of the ring map $\curly{O}(U_{0+})\rightarrow\curly{O}(U_{0,1,-})$, the mapping is $X\mapsto X/q^3$ (or $qX\mapsto X/q^2$). \begin{equation} \ds{B}(q^2,q^2)=U_{0,1,-}\xra{1/q^3}U_{0+}=\ds{B}(q^{-1},q^{-1}) \end{equation} The above gluing is necessary for the identification of rings in the \v{C}ech complex. The mapping from $\curly{O}(U_1)$ to $\curly{O}(U_{0,1,+})\oplus \curly{O}(U_{0,1,-})$ carries the terms with coefficients $c_n$ to $e'_n$ and $d_n$ to $f''_n$; we rewrite this as $(c_n,d_n)\mapsto(e'_n,f''_n) $. Similarly, we have the mapping from $\curly{O}(U_0)$ to $\curly{O}(U_{0,1,+})\oplus \curly{O}(U_{0+})=\curly{O}(U_{0,1,-})$, where $(a_n,b_n)\mapsto (g'_n,e''_n)\mapsto (f'_n,e''_n)$. \begin{figure}[H] \centering \begin{tikzpicture} \matrix (m) [ matrix of math nodes, row sep=0.5em, column sep=7.5em, ] { |[name=aa]|\curly{O}(U_0) & |[name=ab]|\curly{O}(U_{0,1,+}) \\ |[name=ka]| \oplus &|[name=kb]| \oplus \\ |[name=qa]| \curly{O}(U_1) & |[name=qb]|\curly{O}(U_{0,1,-}) \\ |[name=wa]| \sum_{n> 0}a_n(q X)^n & |[name=wb]|\sum_{n> 0}e'_n\left(\dfrac{X}{q}\right)^n \\ |[name=ea]| +b_0 & |[name=eb]|+e_0 \\ |[name=ra]|+\sum_{n>0}b_n\left(\dfrac{q}{X}\right)^n & |[name=rb]|+\sum_{n> 0}e''_n\left(\dfrac{q}{X}\right)^n \\ \oplus & \oplus \\ |[name=ta]| \sum_{n> 0}c_n\left(\dfrac{X}{q}\right)^n & |[name=tb]|\sum_{n> 0}f'_n\left(\dfrac{X}{q^2}\right)^n \\ |[name=ya]| +c_0 & |[name=yb]|+f_0 \\ |[name=ua]|+ \sum_{n>0}d_n\left(\dfrac{q^2}{X}\right)^n & |[name=ub]|+\sum_{n> 0}f''_n\left(\dfrac{q^2}{X}\right)^n \\ }; \path[overlay,->, font=\scriptsize,>=latex] (wa) edge [out=355,in=175,looseness=1] (tb) (ra) edge (rb) (ua) edge (ub) ; \path[overlay,->,color=gray, font=\scriptsize,>=latex] (ya) edge [out=355,in=195,looseness=1.5] (eb) (ea) edge (eb) ; \path[overlay,->,color=blue, font=\scriptsize,>=latex] (ta) edge [out=355,in=175,looseness=1] (wb) (ua) edge (ub) ;
\end{tikzpicture} \caption{Restriction maps: sets restricted to their boundaries}\label{check8} \end{figure} The \v{C}ech complex is given as \begin{equation} \begin{aligned} \curly{O}(U_0)\oplus \curly{O}(U_1)&\xra{d} \curly{O}(U_{0,1,+})\oplus \curly{O}(U_{0,1,-}) \xra{d_1} 0\\ (a_n,b_n)\oplus (c_n,d_n)&\xra{d}(b_n-c_n,a_n-d_n)\\ ~~&~~~~~(e''_n-e'_n,f'_n-f''_n)\\ (K,0)\oplus (K,0)&\xra{d}(0,0) \end{aligned} \end{equation} Notice that $\kr d$ consists of the elements with $b_0=c_0$ constant and all other coefficients zero. Hence we get the global sections $H^0(\mathcal{T}_p,\curly{O}_{\mathcal{T}_p})=K$. \begin{equation} \begin{aligned} \curly{O}(U_0)\oplus \curly{O}(U_1)\xra{d} \curly{O}(U_{0,1,+})\oplus \curly{O}(U_{0,1,-}) &\xra{d_1} 0\\ (e'_n+e''_n\oplus f'_n+f''_n)&\xra{d_1}(0,0) \end{aligned} \end{equation} But the above terms can be lifted to $(b_n,-c_n,a_n,-d_n)$, i.e. $(a_n,b_n)\oplus(-c_n,-d_n)$. We now consider the constant terms $(e_0,f_0)$: $e_0$ can be lifted to $(b_0-c_0)$, but $f_0$ cannot be lifted. Thus, we get $\dim_KH^1(\mathcal{T}_p,\curly{O}_{\mathcal{T}_p})=1$. \begin{theorem} The cohomology groups of the perfectoid Tate curve are given as \begin{equation} H^i(\mathcal{T}_p,\curly{O}_{\mathcal{T}_p})= \begin{cases} K & \text{~for~} i=0,1\\ 0 & \text{~for~} i\geq 2 \end{cases} \end{equation} \end{theorem} For some computations we want $d$ to be surjective, which can be accomplished by multiplying $\curly{O}(U_1)$ by $(X-1)$; this introduces the constant term $d_{-1}$ (in which we absorb $-c_0$) and makes the map $d$ onto in the following \v{C}ech complex. \begin{equation}\label{surjComp} \begin{aligned} \curly{O}(U_0)\oplus (X-1)\curly{O}(U_1)\xra{d} \curly{O}(U_{0,1,+})\oplus \curly{O}(U_{0,1,-}) &\xra{d_1} 0\\ (K,0)\oplus (0,K)&\xra{d}(K,K) \end{aligned} \end{equation} In place of $(X-1)$ we can also use a factor $X^n-1$, where $n\in\ds{Z}[1/p]$, and work with the constant term $d_{-n}$.
\section{Divisors on $\mathcal{T}_p$} A divisor $D$ on $\mathcal{T}$ is a finite formal sum of the form \begin{equation} D=\sum_{i=1}^sn_i[x_i]\text{ where }n_i\in\ds{Z}[1/p], x_i\in\mathcal{T}\text{ and }\deg D=\sum_in_i \end{equation} A positive divisor has all $n_i\geq 0$ and is denoted by $D\geq 0$. The sheaf of meromorphic functions ($\curly{M}(U_i)$) is defined as the ring of quotients of $\curly{O}(U_i)$. The divisor of a non-zero meromorphic function $f$ is defined as \begin{equation} \ensuremath{\mathrm{Div}}(f)=\sum_{x\in\mathcal{T}}\ensuremath{\mathrm{ord}}_xf[x] \end{equation} Notice that a function $f$ has only a finite number of zeros, and thus $\ensuremath{\mathrm{Div}}(f)$ makes sense and gives a finite sum. Also notice that $\curly{O}(U_0)$ is not a PID, but a Bezout domain. There exists a holomorphic function $h_0$ on $U_0$ such that the divisor of $h_0$ on $U_0$ is $D$. This is possible because we can first work in the Tate case and get a function $h_0$ as described at \cite[pp 124]{fresnel2012rigid}, and then replace integer powers with the desired fractional powers. We define the sheaf associated to a divisor as \begin{equation} \curly{L}(D)(U)=\{f\in\curly{M}(U)\text{ such that } \ensuremath{\mathrm{Div}}(f)\geq -D|_U\} \end{equation} For a positive divisor $D$ we have a short exact sequence (SES) with $\curly{Q}$ a coherent sheaf with finite support (a skyscraper sheaf). \begin{equation} 0\rightarrow\curly{O}_\mathcal{T}\rightarrow\curly{L}(D)\rightarrow\curly{Q}\rightarrow 0 \end{equation} We know that $H^i(\mathcal{T},\curly{Q})=0$ for $i\geq 1$ and $H^i(\mathcal{T},\curly{O}_{\mathcal{T}_p})=0$ for $i\geq 2$. We get that $H^i(\mathcal{T}, \curly{L}(D))=0$ for $i\geq 2$. Considering Euler characteristics for the SES of sheaves, we get \begin{align} \chi(\curly{L}(D))=\chi(\curly{O}_\mathcal{T})+\chi(\curly{Q}) \end{align} Since $\chi(\curly{O}_\mathcal{T})=0$ (the dimensions in degrees zero and one are both $1$ and cancel out), we get $\chi(\curly{L}(D))=\chi(\curly{Q})$.
We have proved the following \begin{theorem}\label{thm5.1.2.1} For any perfectoid divisor on $\mathcal{T}$ we have the following \begin{enumerate} \item $H^i(\mathcal{T}, \curly{L}(D))=0$ for $i\geq 2$ \item $\dim H^0(\mathcal{T}, \curly{L}(D))-\dim H^1(\mathcal{T}, \curly{L}(D))=\dim H^0(\mathcal{T},\curly{Q})$ \end{enumerate} \end{theorem} The Perfectoid Riemann-Roch theorem \ref{RR} gives $\dim H^0(\mathcal{T},\curly{Q})=\deg D\cdot p^i$ for the $i,p$ fixed in the divisor $D$. This should be thought of as a simple change of variable $X^{1/p^i}\mapsto X$. \begin{theorem}\label{thm5.1.2.2} For any perfectoid divisor on $\mathcal{T}$, $H^1(\mathcal{T}, \curly{L}(D))=0$ for $\deg D>0$. \end{theorem} The two theorems give $\dim H^0(\mathcal{T}, \curly{L}(D))=\deg D\cdot p^i$, which can be used to construct perfectoid elliptic curves. The proof closely follows \cite[pp 125]{fresnel2012rigid}. \begin{proof} From Theorem \ref{thm5.1.2.1}, there is a non-zero meromorphic function $f$ such that $\ensuremath{\mathrm{Div}} f\geq -D$ (for $D>0$), and $f$ provides an isomorphism between the line bundles $\curly{L}(D)$ and $\curly{L}(\tilde{D})$, where $\tilde{D}=D+\ensuremath{\mathrm{Div}} f$. Hence, it suffices to work with any $\deg D>0$. Let $D\geq 1/p^i[t]$ for any $t\in \mathcal{T}$ and consider the exact sequence \begin{equation} 0\rightarrow\curly{L}(1/p^i[t])\rightarrow\curly{L}(D)\rightarrow\curly{Q}\rightarrow 0 \end{equation} with $\curly{Q}$ a skyscraper sheaf. Hence, it suffices to consider the case $D=1/p^i[t]$. Let $a\in\ds{G}_{m,K}$ be the element corresponding to $t\in\mathcal{T}$, and let $e$ correspond to $1$. Since multiplication by $a$ induces an isomorphism on $\mathcal{T}$, there is an isomorphism between the complexes for $\curly{L}(1/p^i[e])$ and $\curly{L}(1/p^i[t])$; hence we may consider the case $t=e$, that is, we can now work with the factor $(X-1)$.
Now consider the \v{C}ech complex \eqref{surjComp}, where it is shown that $d$ is surjective, giving $H^j(\mathcal{T},\curly{L}(1/p^i[e]))=0$ for $j\geq 1$. \end{proof} \subsection{Perfectoid Elliptic Curves} The above can be used to construct spaces of the form $\curly{L}(n[e])$ and obtain elliptic curves (with $\lambda_1\neq 0$) as given in \cite[pp 126]{fresnel2012rigid} or in \cite[pp 59]{silverman2013arithmetic}. \begin{equation}\label{ellipticCurve} \begin{aligned} Y^2+\lambda_1X^3+\lambda_2XY+\lambda_3X^2+\lambda_4Y+\lambda_5X+\lambda_6&=0\text{ with }i=0\\ Y^{2/p}+\lambda_1X^{3/p}+\lambda_2X^{1/p}Y^{1/p}+\lambda_3X^{2/p}+\lambda_4Y^{1/p}+\lambda_5X^{1/p}+\lambda_6&=0\text{ with }i=1\\ Y^{2/p^2}+\lambda_1X^{3/p^2}+\lambda_2X^{1/p^2}Y^{1/p^2}+\lambda_3X^{2/p^2}+\lambda_4Y^{1/p^2}+\lambda_5X^{1/p^2}+\lambda_6&=0\text{ with }i=2\\ \vdots\hspace{50mm}&=\vdots\qquad\qquad\vdots \end{aligned} \end{equation} \section{Theta Function} The basic theta function described in \eqref{funtheta} or at \cite[pp 128]{fresnel2012rigid} is adapted to the perfectoid case by setting $n\in\ds{Z}[1/p]$ \begin{equation} \Theta(X)=\prod_{n\geq 0}\left(1-\frac{q^n}{X}\right)\prod_{n>0}(1-q^n X) \end{equation} The corresponding divisor is $\sum_{n\in\ds{Z}[1/p]}[q^n]$, with functional equation $\Theta(X/q)=-X\Theta(X)$. In particular, for a divisor $D=\sum_in_i[x_i]$ with $n_i\in\ds{Z}[1/p]$, one defines $\Theta_D=\prod_i\Theta^{n_i}_{x_i}$ with divisor $\sum_{i,n\in\ds{Z}[1/p]}n_i[q^nx_i]$.
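As a quick sanity check in the classical case ($n\in\ds{Z}$, with $q$ a real number, $0<q<1$), one can truncate the two products and verify the functional equation $\Theta(X/q)=-X\Theta(X)$ numerically. The Python sketch below is ours and purely illustrative; the truncation error of the $N$-term products is of order $q^N$.

```python
def theta(X, q=0.3, N=60):
    """Truncated Tate theta product
    Theta(X) = prod_{n>=0} (1 - q^n / X) * prod_{n>0} (1 - q^n X)
    for real 0 < q < 1; the truncation error is O(q^N)."""
    value = 1.0
    for n in range(N):          # n >= 0 factors
        value *= (1.0 - q**n / X)
    for n in range(1, N):       # n > 0 factors
        value *= (1.0 - q**n * X)
    return value
```

For instance, with $q=0.3$ and $X=1.7$, `theta(X / q)` and `-X * theta(X)` agree to well beyond ten significant digits, in line with the functional equation.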
This can be used to prove that the following sequence (corresponding to Proposition 5.1.7 of \cite[p.~128]{fresnel2012rigid}) is exact for $K$ perfectoid \begin{equation} 0\rightarrow \tio{K}\rightarrow\curly{M}(\mathcal{T})\xra{\text{div}}\mathrm{Div}(\mathcal{T})\rightarrow\ds{Z}[1/p]\times \mathcal{T}\rightarrow 0 \end{equation} \section{Weierstra{\ss} Equations} The Weierstra{\ss} series is given as \begin{equation} \begin{aligned} \wp(X)&=\sum_{n\in\ds{Z}[1/p]}\frac{q^nX}{(1-q^nX)^2}\\ \frac{d}{dX}\frac{q^nX}{(1-q^nX)^2}&=\frac{q^n+q^{2n}X}{(1-q^nX)^3}\\ \wp'(X)&=\frac{1}{2}\left( X\frac{d}{dX}\wp(X)-\wp(X)\right)=\sum_{n\in\ds{Z}[1/p]}\frac{q^{2n}X^2}{(1-q^nX)^3} \end{aligned} \end{equation} The bijective map $\ds{Z}[1/p]\rightarrow\ds{Z}[1/p]$ given by $n\mapsto n-1$ shows that $\wp(q^{-1}X)=\wp(X)$ and $\wp'(q^{-1}X)=\wp'(X)$. The only problem is that the above series might not even converge for $n\in\ds{Z}[1/p]$, although it converges for $n\in\ds{Z}$. Hence, an equation of the form \eqref{Roq24} is not meaningful in general. \begin{equation}\label{Roq24} \wp'^2+\wp\wp'=\wp^3+A\wp^2+B\wp+C\qquad A,B,C\in K \end{equation} But we can still interpret the above as the sum of all elliptic curves as given in \eqref{ellipticCurve}, where the correspondence is given by choosing $q^{1/p^i}$: for $i=0$ in $\wp$ we get the standard curve, for $i=1$, i.e. $q^{1/p}$, the curve with powers $1/p$, for $i=2$, i.e. $q^{1/p^2}$, the curve with powers $1/p^2$, and so on. Further work can be carried out by extending the results in \cite[Chapter V, \S3-4]{silverman2013advanced} to the perfectoid case by setting $n\in\ds{Z}[1/p]$. \bibliographystyle{apalike}
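As a numerical illustration of the periodicity $\wp(q^{-1}X)=\wp(X)$ and of the closed form for $\wp'$ in the convergent classical case $n\in\ds{Z}$ (real $|q|<1$; the perfectoid range $n\in\ds{Z}[1/p]$ is precisely where this convergence fails), one can evaluate the truncated sums directly. The helper names and parameter choices are ours.

```python
# Truncated Weierstrass-type sums over n in Z, real |q| < 1, as a sanity check
# of the identities stated in the text (classical convergent case only).

def wp(x, q, N=80):
    return sum(q**n * x / (1 - q**n * x) ** 2 for n in range(-N, N + 1))

def wp_prime(x, q, N=80):
    return sum(q ** (2 * n) * x**2 / (1 - q**n * x) ** 3 for n in range(-N, N + 1))

q, x, h = 0.3, 1.7, 1e-6

# periodicity under X -> X/q (the index shift n -> n-1)
assert abs(wp(x / q, q) - wp(x, q)) < 1e-9

# wp'(X) = (1/2) * (X * dwp/dX - wp(X)), with dwp/dX by central differences
dwp = (wp(x + h, q) - wp(x - h, q)) / (2 * h)
assert abs(0.5 * (x * dwp - wp(x, q)) - wp_prime(x, q)) < 1e-6
```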
\section{Introduction} The motion of charged particles accelerated across a gap is of wide interest in fields such as high power diodes and vacuum microelectronics. Child and Langmuir first studied the space charge limited emission for two infinite parallel plane electrodes at fixed voltage $V_0$ in vacuum separated by a distance $D$.\cite{child,lang} The supply of charges at the cathode is made to increase (e.g. by increasing the temperature in a thermionic diode or by increasing the power of a laser in the case of a photocathode) until further emission is prevented. The cloud of electrons near the cathode constitutes a space charge that will depress the potential gradient to the extent that equilibrium is achieved, and the current is said to be space charge limited. A useful approximation for the amount of current flowing in such cases is the Child-Langmuir expression for transmitted current. For electrodes having a potential difference $V$ and separated by a distance $D$, the Child-Langmuir law is obtained by solving Poisson's equation \begin{equation} \frac{d^2V}{dz^2}=-\frac{\rho}{\epsilon_0} \label{eq1} \end{equation} where $V$ is the electrostatic potential, $\rho$ is the volume charge density and $\epsilon_0$ is the permittivity of free space.\cite{poll} Because this diode is assumed to have infinite extension in the x-y directions, we can define the current density by \begin{equation} J(z)=\rho(z)v(z)=-J_{CL} \label{eq2} \end{equation} where $v$ is the velocity of the electrons. By charge conservation the current density cannot vary with $z$; hence it is constant. Now we can find the velocity of the electrons by conservation of energy \begin{equation} \frac{mv^2}{2}-eV=0 \label{eq3} \end{equation} where $m$ and $e$ are the electron's mass and charge, respectively. In Eq.(\ref{eq3}) we have assumed that the electron is initially at rest in the grounded cathode. 
Solving Eq.(\ref{eq3}) for the velocity and substituting in Eq.(\ref{eq2}) we obtain the volume charge density as a function of the current density and the electrostatic potential \begin{equation} \rho(z)=-\frac{J_{CL}}{\sqrt{2eV/m}} \label{eq4} \end{equation} Substituting Eq.(\ref{eq4}) into Eq.(\ref{eq1}) we have a second order nonlinear differential equation for the electrostatic potential \begin{equation} \frac{d^2V}{dz^2}=\frac{J_{CL}}{\epsilon_0\sqrt{2eV/m}} \label{eq5} \end{equation} with the following boundary conditions \begin{equation} \frac{dV}{dz}\Bigg|_{z=0}=0 \quad \mbox{and} \quad V(z)\Bigg|_{z=0}=0 \label{eq6} \end{equation} The solution of Eq.(\ref{eq5}) is given by \begin{equation} V(z)=V_0\left(\frac{z}{D}\right)^{4/3} \label{eq6a} \end{equation} and the volume charge density in the gap is \begin{equation} \rho(z)=-\frac{4\epsilon_0V_0}{9D^2}\left(\frac{D}{z}\right)^{2/3} \label{eq6b} \end{equation} Substituting Eq.(\ref{eq6a}) and Eq.(\ref{eq6b}) into Eq.(\ref{eq4}) we find that the space charge limited current density is given by \begin{equation} J_{CL}=\frac{4\epsilon_0}{9D^2}\sqrt{\frac{2e}{m}}V_0^{3/2} \label{eq6c} \end{equation} Equation (\ref{eq6c}) is known as the Child-Langmuir law, which states that the current density is proportional to the three-halves power of the bias potential and inversely proportional to the square of the gap distance between the electrodes.\\ Since the derivation of this fundamental law, many important and useful variations on the classical Child-Langmuir law have been investigated to account for special geometries,\cite{lang1,lang2,page} relativistic electron energies,\cite{jory} nonzero initial electron velocities,\cite{lang3,jaffe} quantum mechanical effects,\cite{lau,ang,gg1} nonzero electric field at the cathode surface,\cite{barbour} and slowly varying charge density.\cite{gg2}\\ \section{New approach} Consider now that the electrostatic potential is given as a function of the volume charge 
density and the current density, i.e. $V=V(\rho,J)$. This means that the electric field is given by \begin{equation} E=-\frac{dV}{dz}=-\frac{dV}{d\rho}\frac{d\rho}{dz} \label{eq7} \end{equation} and Gauss's law is given by \begin{equation} \frac{\rho}{\epsilon_0}=\frac{dE}{d\rho}\frac{d\rho}{dz} \label{eq8} \end{equation} Using equation (\ref{eq8}) we obtain \begin{equation} z(\rho,J)=\int_{-\infty}^{\rho}\frac{\epsilon_0}{\rho}\frac{dE}{d\rho}d\rho \label{eq8a} \end{equation} Combining equation (\ref{eq7}) and equation (\ref{eq8}) we have \begin{equation} \epsilon_0E\frac{dE}{d\rho}=-\rho\frac{dV}{d\rho} \label{eq9} \end{equation} Solving equation (\ref{eq9}) for the electrostatic potential we have \begin{equation} V(\rho,J)=-\int\frac{1}{\rho}\frac{d}{d\rho}\left(\frac{\epsilon_0}{2}E^2\right)d\rho \label{eq10} \end{equation} If we know the electric field as a function of the volume charge density and the current density, we can use equation (\ref{eq10}) and equation (\ref{eq8a}) to obtain the electrostatic potential as a function of position. Our task then is to find $E=E(\rho,J)$; to do this we use Poisson's equation \begin{equation} \frac{d^2V}{dz^2}=-\frac{\rho}{\epsilon_0}=-\frac{J}{\epsilon_0\sqrt{2/m}}\frac{1}{\sqrt{K_0+eV}} \label{eq11} \end{equation} where $K_0=mv_0^2/2$ is the initial kinetic energy. Multiplying equation (\ref{eq11}) by $dV/dz$ and integrating from zero to $z$ we have \begin{equation} \frac{1}{2}E^2=-\frac{2J}{e\epsilon_0\sqrt{2/m}}\sqrt{K_0+eV}+C \label{eq12} \end{equation} where $C=E_0^2/2+Jmv_0/e\epsilon_0$ is a constant of integration given as a function of the value of the electrostatic field $E_0$ and velocity $v_0$ at $z=0$. Multiplying equation (\ref{eq12}) by $\epsilon_0\rho$ we have \begin{equation} \left(\frac{\epsilon_0}{2}E^2-\frac{\epsilon_0}{2}E_0^2\right)\rho=-\frac{mJ^2}{e}+\frac{Jmv_0}{e}\rho \label{eq13} \end{equation} where we have used equation (\ref{eq11}) in the last step. 
Using the relation $J=\rho v$ in equation (\ref{eq13}) we end up with \begin{equation} \Delta\delta_E=-\frac{J}{e}\Delta p \label{eq14} \end{equation} where $\delta_E=\epsilon_0E^2/2$ is the electrostatic energy density and $p=mv$ is the linear momentum. Equation (\ref{eq14}) is the microscopic Child-Langmuir law, which states that the change in electrostatic energy density is proportional to the change in linear momentum. For the case when $E_0=v_0=0$ we have \begin{equation} \delta_E=-\frac{J}{e}mv=-\frac{J^2m}{e\rho} \label{eq15} \end{equation} Substituting equation (\ref{eq15}) into equation (\ref{eq10}) and integrating we obtain the electrostatic potential \begin{equation} V=\frac{J^2m}{2e\rho^2}=\frac{m}{2e}v^2 \label{eq16} \end{equation} Note that the electrostatic potential in equation (\ref{eq16}) equals the kinetic energy per unit charge. Substituting equation (\ref{eq15}) into equation (\ref{eq8a}) and integrating we obtain $z=z(\rho,J)$ \begin{equation} z=\frac{2}{3}\sqrt{\frac{\epsilon_0J^2m}{2e}}(-\rho)^{-3/2} \label{eq17} \end{equation} Solving equation (\ref{eq17}) for $\rho$ and substituting into equation (\ref{eq16}) we end up with \begin{equation} V=\left(\frac{9Jz^2}{4\epsilon_0}\sqrt{\frac{m}{2e}}\right)^{2/3} \label{eq18} \end{equation} If we evaluate equation (\ref{eq18}) at $z=D$ and solve for the current density, we find the space charge limited current density given in equation (\ref{eq6c}).\\ An interesting case is when the initial velocity at $z=0$ is nonzero, i.e. $v_0\neq 0$. In this case the microscopic Child-Langmuir law is given by \begin{equation} \delta_E=-\frac{J}{e}(mv-mv_0)=-\frac{J^2m}{e\rho}+\frac{Jmv_0}{e} \label{eq19} \end{equation} The electrostatic potential will be given by \begin{equation} V=\frac{m}{2e}v^2-\frac{K_0}{e}=\frac{J^2m}{2e\rho^2} \label{eq20} \end{equation} where we have not included the constant term $K_0/e$ in the electrostatic potential energy since it has no physical relevance. 
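The zero-initial-velocity results above can be verified numerically: with the classical profile (\ref{eq6a})--(\ref{eq6c}) one checks that $\delta_E=-(J/e)mv$ holds pointwise in the gap and that equation (\ref{eq18}) evaluated at $z=D$ returns $V_0$ exactly when $J=J_{CL}$. The numbers below (1 kV across 1 cm) are purely illustrative, with the sign convention $J<0$ for electron flow.

```python
import math

e, m, eps0 = 1.602e-19, 9.109e-31, 8.854e-12   # approximate SI values
V0, D = 1000.0, 1e-2                            # illustrative: 1 kV, 1 cm gap

J_CL = 4 * eps0 / (9 * D**2) * math.sqrt(2 * e / m) * V0**1.5   # Eq. (eq6c)

for z in (0.2 * D, 0.5 * D, 0.9 * D):
    V = V0 * (z / D) ** (4 / 3)                              # Eq. (eq6a)
    rho = -4 * eps0 * V0 / (9 * D**2) * (D / z) ** (2 / 3)   # Eq. (eq6b)
    v = math.sqrt(2 * e * V / m)
    J = -J_CL                                  # signed current density of electrons
    delta_E = -(J / e) * m * v                 # microscopic CL law, Eq. (eq15)
    # compare with the field energy density eps0*E^2/2, where E = -dV/dz
    E = -(4 * V0) / (3 * D) * (z / D) ** (1 / 3)
    assert abs(delta_E - eps0 * E**2 / 2) < 1e-9 * abs(delta_E)
    assert abs(rho * v - J) < 1e-9 * abs(J)

# Eq. (eq18) at z = D reproduces V0 when the magnitude of J equals J_CL:
V_D = (9 * J_CL * D**2 / (4 * eps0) * math.sqrt(m / (2 * e))) ** (2 / 3)
assert abs(V_D - V0) < 1e-9 * V0
```

For these parameters $J_{CL}\approx 738~\mathrm{A/m^2}$, consistent with the familiar $2.33\times 10^{-6}\,V^{3/2}/D^2$ form of the law.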
Substituting equation (\ref{eq19}) into equation (\ref{eq8a}) and integrating we have \begin{eqnarray} \label{eq21} z&=&\frac{J^2m}{e}\sqrt{\frac{\epsilon_0}{2}}\left[-\frac{4v_0^{3/2}}{3}\sqrt{\frac{e}{mJ^5}}+\right.\\ \nonumber & & \left.\frac{2mJ^2}{3e}\left(-\frac{e}{mJ^2\rho}\right)^{3/2}\left(1+2\frac{v_0}{v}\right)\sqrt{1-\frac{v_0}{v}}\right] \end{eqnarray} If we now make the assumption that $v_0\ll v$ then we can approximate equation (\ref{eq21}) by the following expression \begin{equation} z\approx\frac{J^2m}{e}\sqrt{\frac{\epsilon_0}{2}}\left[-\frac{4v_0^{3/2}}{3}\sqrt{\frac{e}{mJ^5}}+\frac{2}{3}\sqrt{\frac{e}{J^2m}}\left(-\rho\right)^{-3/2}\right] \label{eq22} \end{equation} Solving for $\rho$ in equation (\ref{eq22}) we end up with \begin{equation} \rho(z)=-\left[\frac{3}{J}\sqrt{\frac{e}{2m\epsilon_0}}z+2\left(\frac{v_0}{J}\right)^{3/2}\right]^{-2/3} \label{eq23} \end{equation} Substituting equation (\ref{eq23}) into equation (\ref{eq20}) we have the electrostatic potential \begin{equation} V=\frac{J^2m}{2e}\left[\frac{3}{J}\sqrt{\frac{e}{2m\epsilon_0}}z+2\left(\frac{v_0}{J}\right)^{3/2}\right]^{4/3} \label{eq24} \end{equation} If we evaluate equation (\ref{eq24}) at $z=D$ and solve for the current density, we find the space charge limited current density for nonzero initial velocity, which is given by \begin{equation} J=J_{CL}\left[1-2\left(\frac{K_0}{eV_0}\right)^{3/4}\right]^2 \label{eq25} \end{equation} Note that equation (\ref{eq25}) reduces to the Child-Langmuir result when $v_0=0$. Equation (\ref{eq25}) resembles the one given in Ref.~\cite{liu} for the space charge limited current with nonzero initial velocity. \section{Conclusions} The new method of deriving the Child-Langmuir law presented in this article avoids the need to solve a nonlinear differential equation and offers new insight into the charge dynamics inside a vacuum tube diode. 
We found what we call the microscopic Child-Langmuir law, which states that the change in electrostatic energy density is proportional to the change in linear momentum. We have shown that one has to use this microscopic Child-Langmuir law together with Gauss's law in order to obtain the space charge limited current for the cases of zero and nonzero initial velocities. \section{Acknowledgments} This work was supported by the program ``C\'atedras CONACYT" and by the project ``Centro Mexicano de Innovaci\'on en Energ\'ia Solar" from Fondo Sectorial CONACYT-Secretar\'ia de Energ\'ia-Sustentabilidad Energ\'etica, and by the National Labs program funded by CONACyT through the Terahertz Science and Technology National Lab (LANCyTT).\\
\section{Introduction} The classical and widely studied {\em sphere packing problem} asks for a non-overlapping arrangement of equally sized spheres in a Euclidean space, such that the fraction of space covered by spheres is maximized. The problem arose from the arithmetical study of positive definite quadratic forms. By the works of Thue \cite{thue-1910} and Hales \cite{hales-2005}, the optimal arrangements of spheres are known in dimensions~$2$ and~$3$. We refer to \cite{gl-1987}, \cite{cs-1998}, \cite{martinet-2003} and \cite{schuermann-2009} for details and further reading. For reasons related to the historical roots of the sphere packing problem, special attention has been paid to {\em (point) lattices} as the discrete set of sphere centers. In dimension~$2$ the {\em hexagonal lattice} and in dimension~$3$ the {\em face-centered-cubic lattice} yield optimal sphere packings. For the restriction of the sphere packing problem to lattices, the optimal configurations are known up to dimension~$8$ and in dimension~$24$ (see Table~\ref{tab:sphere-packing-results}). Here, solutions are given by fascinating objects, the so-called {\em root lattices} and the {\em Leech lattice}. We refer to \cite{cs-1998}, \cite{martinet-2003} and \cite{ns-2008} for further information on these exceptional objects. A major {\bf open problem} in the theory of sphere packing is to find a dimension in which there is a non-lattice packing that is denser than any lattice packing. In dimension~$10$ there exists a non-lattice sphere packing which is conjectured to have a higher density than any lattice sphere packing (see \cite{ls-1970}). As shown in Table~\ref{tab:sphere-packing-results}, below dimension~$24$ similar sphere packings have been found in dimensions~$11$, $13$, $18$, $20$ and $22$. All of them are {\em periodic}, that is, a finite union of translates of a lattice sphere packing. 
By a well-known conjecture, attributed by Gruber \cite{gruber-2007} to Zassenhaus, the optimal sphere packing density can always be attained by periodic sphere packings. It is known that their density comes arbitrarily close to the optimal value (see for example~\cite[Appendix A]{ce-2003}). A natural idea to obtain a better non-lattice sphere packing is to ``locally modify'' one of the optimal known lattice sphere packings in dimensions $d=4,\ldots,8$. In this paper we show that such modifications are not possible within the set of all periodic sphere packings (see Corollary \ref{cor:root_lattice_periodic_extreme}). We show more generally in Theorem \ref{thm:main-periodic} that such modifications are not possible for {\em perfect, strongly eutactic lattices}. One may wonder why the restriction to periodic structures is necessary. One could also consider more general discrete sets. However, within the set of all discrete sets, we are not aware of any notion of a ``local modification'' that on the one hand could potentially lead to an improved sphere packing density, but on the other hand would allow us to generalize the result of this paper. For instance, a natural approach is to define the $\epsilon$-neighborhood of a discrete set as the collection of sets that can be obtained by changing the position of elements by at most an $\epsilon$~distance. However, such a local modification would not even change the sphere packing density. It is equal to a constant multiple of the average number of points per unit volume, which could not be changed in such an $\epsilon$-neighborhood. In contrast to that, the local changes of periodic sets considered in this paper allow arbitrarily large displacements of points if they are far enough from the origin. The paper is organized as follows. In Section~\ref{sec:background} we recall some necessary background on lattices and positive definite quadratic forms. 
In Section~\ref{sec:ryshkov} we introduce the so-called Ryshkov polyhedron, and based on it we give a geometrical interpretation of Voronoi's characterization of locally optimal lattice sphere packings. This viewpoint allows a natural generalization to the study of locally optimal periodic sphere packings. For their study we introduce a parameter space in Section~\ref{sec:periodic-parameter-space}. We give characterizations of locally optimal periodic sphere packings with up to $m$~lattice translates in Section~\ref{sec:local-analysis}. Based on these general characterizations we obtain one of the main results of this paper in Section~\ref{sec:periodic-extreme}: We show that perfect, strongly eutactic lattices cannot locally be modified to yield a better periodic sphere packing -- they are {\em periodic extreme} (see Definition~\ref{def:periodic-extreme}). \begin{bigtab} \label{tab:sphere-packing-results} \begin{tabular}{c|c|c|c} $d$ & point set & $\quad\delta/\vol B^d\quad$ & author(s) \\ \hline $2$ & ${\mathsf A}_2$ & $0.2886\ldots$ & Lagrange, 1773, \cite{lagrange-1773}\\ $3$ & ${\mathsf A}_3={\mathsf D}_3, \mathsf{\ast}$ & $0.1767\ldots$ & Gau{\ss}, 1840, \cite{gauss-1840}\\ $4$ & ${\mathsf D}_4$ & $0.125\phantom{0\ldots}$ & Korkine \& Zolotareff, 1877, \cite{kz-1877}\\ $5$ & ${\mathsf D}_5,\mathsf{\ast}$ & $0.0883\ldots$ & Korkine \& Zolotareff, 1877, \cite{kz-1877}\\ $6$ & ${\mathsf E}_6,\mathsf{\ast}$ & $0.0721\ldots$ & Blichfeldt, 1935, \cite{blichfeldt-1934}\\ $7$ & ${\mathsf E}_7,\mathsf{\ast}$ & $0.0625\phantom{\ldots}$ & Blichfeldt, 1935, \cite{blichfeldt-1934}\\ $8$ & ${\mathsf E}_8$ & $0.0625\phantom{\ldots}$ & Blichfeldt, 1935, \cite{blichfeldt-1934}\\ $9$ & ${\mathsf \Lambda}_9,\mathsf{\ast}$ & $0.0441\ldots$ & \\ $10$ & ${\mathsf P}_{10c}$ & $0.0390\ldots$ & Leech \& Sloane, 1970, \cite{ls-1970}\\ $11$ & ${\mathsf P}_{11a}$ & $0.0351\ldots$ & Leech \& Sloane, 1970, \cite{ls-1970}\\ $12$ & ${\mathsf K}_{12}$ & $0.0370\ldots$ & \\ $13$ & ${\mathsf P}_{13a}$ & 
$0.0351\ldots$ & Leech \& Sloane, 1970, \cite{ls-1970}\\ $14$ & ${\mathsf \Lambda}_{14},\mathsf{\ast}$ & $0.0360\ldots$ & \\ $15$ & ${\mathsf \Lambda}_{15},\mathsf{\ast}$ & $0.0441\ldots$ & \\ $16$ & ${\mathsf \Lambda}_{16},\mathsf{\ast}$ & $0.0625\phantom{\ldots}$ & \\ $17$ & ${\mathsf \Lambda}_{17},\mathsf{\ast}$ & $0.0625\phantom{\ldots}$ & \\ $18$ & ${\mathsf V}_{18}$ & $0.0750\ldots$ & Bierbrauer \& Edel, 1998, \cite{be-2000}\\ $19$ & ${\mathsf \Lambda}_{19},\mathsf{\ast}$ & $0.0883\ldots$ & \\ $20$ & ${\mathsf V}_{20}$ & $0.1315\ldots$ & Vardy, 1995, \cite{vardy-1995}\\ $21$ & ${\mathsf \Lambda}_{21},\mathsf{\ast}$ & $0.1767\ldots$ & \\ $22$ & ${\mathsf V}_{22}$ & $0.3325\ldots$ & Conway \& Sloane, 1996, \cite{cs-1996}\\ $23$ & ${\mathsf \Lambda}_{23}$ & $0.5\phantom{000\ldots}$ & \\ $24$ & ${\mathsf \Lambda}_{24}$ & $1\phantom{.0000\ldots}$ & Cohn \& Kumar, 2004, \cite{ck-2004}\\ \end{tabular} \par\medskip \textbf{Table \arabic{tab}. } Point sets defining the best known sphere packings up to dimension $24$. In dimensions $d\leq 8$ and $d=24$ the corresponding authors solved the lattice sphere packing problem. The other mentioned authors found the listed, densest known periodic sphere packings. The asterisk~$\mathsf{\ast}$ indicates that an equally dense, periodic non-lattice sphere packing is known. \end{bigtab} \section{Background on lattices and quadratic forms} \label{sec:background} \subsection*{Lattices and Periodic Sets} A (full rank) {\em lattice} $L$ in $\mathbb R^d$ is a discrete subgroup $L=\mathbb Z \vec{a}_1 + \ldots + \mathbb Z \vec{a}_d$ generated by $d$ linearly independent (column) vectors $\vec{a}_i\in\mathbb R^d$. We say that these vectors form a {\em basis} of $L$ and associate it with the matrix $A=(\vec{a}_1,\ldots, \vec{a}_d)\in\gldr$. We write $L=A\mathbb Z^d$. It is well-known that $L$ is generated in this way precisely by the matrices $AU$ with $U\in\GL_d(\Z)$. We refer to \cite{gl-1987} for details and more background on lattices. 
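The basis independence just mentioned is easy to verify numerically: a basis $A$ of the hexagonal lattice and the changed basis $AU$ with $U\in\GL_2(\Z)$ generate the same lattice, hence have the same determinant and the same shortest-vector length. The brute-force search below is only a sketch; the coefficient range $R$ is an assumption that suffices for this small example.

```python
import itertools, math

A = [[1.0, 0.5],
     [0.0, math.sqrt(3) / 2]]   # a basis of the hexagonal lattice A2
U = [[1, 3],
     [0, 1]]                    # unimodular change of basis: det U = 1

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def shortest(M, R=6):
    """Length of a shortest nonzero vector of M Z^2 (brute force, small coefficients)."""
    best = float("inf")
    for c1, c2 in itertools.product(range(-R, R + 1), repeat=2):
        if (c1, c2) == (0, 0):
            continue
        x = M[0][0] * c1 + M[0][1] * c2
        y = M[1][0] * c1 + M[1][1] * c2
        best = min(best, math.hypot(x, y))
    return best

B = matmul(A, U)
assert abs(abs(det(A)) - abs(det(B))) < 1e-12
assert abs(shortest(A) - shortest(B)) < 1e-12
assert abs(shortest(A) - 1.0) < 1e-12   # minimal vectors of A2 have length 1 here
```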
Given a lattice $L$ and {\em translational vectors} $\vec{t}_i$, for say $i=1,\ldots,m$, the discrete set \begin{equation} \label{eqn:periodic-set} \Lambda = \bigcup_{i=1}^m \left( \vec{t}_i + L \right) \end{equation} is called a {\em periodic (point) set}. The {\em sphere packing radius} $\lambda(\Lambda)$ of a discrete set~$\Lambda$ (not necessarily periodic) in the Euclidean space $\mathbb R^d$ (with norm $\|\cdot \|$) is defined as the infimum of half the distances between distinct points: \begin{equation*} \lambda(\Lambda) = \frac{1}{2} \inf_{\vec{x},\vec{y}\in\Lambda, \vec{x}\not=\vec{y}} \|\vec{x}-\vec{y}\|. \end{equation*} The sphere packing radius is the largest possible radius $\lambda$ such that solid spheres of radius~$\lambda$ and with centers in~$\Lambda$ do not overlap. Denoting the solid unit sphere by $B^d$, the {\em sphere packing} defined by $\Lambda$ is the union of non-overlapping spheres $$ \bigcup_{\vec{x}\in \Lambda} \left( \vec{x} + \lambda(\Lambda) B^d \right). $$ Its {\em density} $\delta(\Lambda)$ is, loosely speaking, defined as the fraction of space covered by spheres. We can make this definition more precise by considering a cube $$ C=\{\vec{x}\in \mathbb R^d : |x_i| \leq 1/2 \} $$ and setting $$ \delta(\Lambda) = \lambda(\Lambda)^d \vol B^d \cdot \liminf_{\lambda\to\infty} \frac{\card (\Lambda \cap \lambda C)}{\vol \lambda C} . $$ If the limit inferior above is a true limit, the cube in the definition can be replaced by any other compact set~$C$ that is the closure of its interior, without the value of $\delta$ changing. We say that a corresponding set $\Lambda$ is {\em uniformly dense} in that case. It can be shown that the supremum of $\delta(\Lambda)$ over all discrete sets is attained by a uniformly dense set $\Lambda$. We refer to \cite{groemer-1963} and \cite[Appendix A]{ce-2003} for further reading. For general discrete sets, it may be difficult to compute the density, respectively the limit inferior in the definition. 
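For a concrete (in fact lattice) periodic set the limit inferior can be approximated directly by counting points in a large cube, as a sketch: for the hexagonal lattice the computed density approaches $\pi/(2\sqrt{3})=0.9069\ldots$, i.e. $\delta/\vol B^2=0.2886\ldots$ as in Table~\ref{tab:sphere-packing-results}. The names and the cutoff below are illustrative choices of ours.

```python
import math

# Approximate delta(Lambda) for the hexagonal lattice A2 from the definition,
# counting lattice points m*a1 + n*a2 inside the cube [-L, L]^2.
a2y = math.sqrt(3) / 2          # basis: a1 = (1, 0), a2 = (1/2, a2y)
lam_pack = 0.5                  # packing radius: half the minimal distance 1
L = 200.0                       # half the side length of the scaled cube

count = 0
n_max = int(L / a2y) + 1
m_max = int(L) + n_max          # |m| <= L + |n|/2 suffices
for n in range(-n_max, n_max + 1):
    y = n * a2y
    if abs(y) > L:
        continue
    for m in range(-m_max, m_max + 1):
        x = m + 0.5 * n
        if abs(x) <= L:
            count += 1

density = lam_pack**2 * math.pi * count / (2 * L) ** 2
assert abs(density - math.pi / (2 * math.sqrt(3))) < 0.02
```

The boundary error decays like $1/L$, so the counted value already agrees with $\pi/(2\sqrt 3)$ to about three decimal places at this cutoff.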
For a lattice the limit inferior can simply be replaced by $1/\det L$, where $\det L = |\det A|$ is the {\em determinant} of the lattice~$L=A\mathbb Z^d$. Note that the determinant of $L$ is independent of the particular choice of the basis~$A$. For periodic sets $\Lambda$ as in \eqref{eqn:periodic-set} we get the estimate $$ \delta(\Lambda) \leq \frac{m \lambda(\Lambda)^d \vol B^d}{\det L} , $$ with equality if and only if the lattice translates $\vec{t}_i+L$ are pairwise disjoint. \subsection*{Positive definite quadratic forms} \label{sec:background-pqf} Among similarity classes of lattices, hence in the space $O_d(\mathbb R)\backslash \gldr / \GL_d(\Z)$, there exist only finitely many local maxima of~$\delta$ up to scaling. In order to characterize and to work with them, i.e., enumerate them, it is convenient to use the language of real {\em positive definite quadratic forms} (PQFs for short). These are simply identified with the set ${\mathcal S}^d_{>0}$ of real symmetric, positive definite matrices. Given a matrix $Q\in{\mathcal S}^d_{>0}$, we set $Q[\vec{x}]=\vec{x}^tQ\vec{x}$ for $\vec{x}\in\mathbb R^d$, defining a corresponding PQF. Note that every matrix $Q\in{\mathcal S}^d_{>0}$ can be decomposed into $Q=A^tA$ with $A\in\gldr$ and therefore ${\mathcal S}^d_{>0}$ can be identified with the space $O_d(\mathbb R)\backslash \gldr$ of lattice bases up to orthogonal transformations. Two PQFs (respectively matrices) $Q$ and $Q'$ are called {\em arithmetically equivalent} (or {\em integrally equivalent}) if there exists a matrix $U\in \GL_d(\Z)$ with $Q'=U^tQU$. Thus arithmetical equivalence classes of PQFs are in one-to-one correspondence with similarity classes of lattices. The {\em arithmetical minimum} $\lambda(Q)$ of a PQF $Q$ is defined by $$ \lambda(Q) = \min_{\vec{x}\in \mathbb Z^d\setminus\{\vec{0}\}} Q[\vec{x}] . 
$$ If $L=A\mathbb Z^d$ with $A\in\gldr$ satisfying $Q=A^tA$ is a corresponding lattice, there is an immediate relation to the packing radius of $L$: We have $\lambda(Q)=(2\lambda(L))^2$ and therefore $$ \delta(L)= {\mathcal H}(Q)^{d/2} \frac{\vol B^d}{2^d}, $$ where $$ {\mathcal H}(Q) = \frac{\lambda(Q)}{(\det Q)^{1/d}} $$ is the so-called {\em Hermite invariant} of~$Q$. Note that ${\mathcal H}(\cdot)$ is invariant with respect to scaling. A classical problem in the arithmetic theory of quadratic forms is the determination of the {\em Hermite constant} $$ {\mathcal H}_d=\sup_{Q\in{\mathcal S}^d_{>0}} {\mathcal H}(Q) . $$ By the relation described above, it corresponds to determining the supremum of possible lattice sphere packing densities. Local maxima of the Hermite invariant on ${\mathcal S}^d_{>0}$ and corresponding lattices are called {\em extreme}. \section{Voronoi's characterization of extreme forms} \label{sec:ryshkov} \subsection*{The Ryshkov polyhedron} Since the Hermite invariant is invariant with respect to scaling, a natural approach to maximizing it is to consider all forms with a fixed arithmetical minimum, say~$1$, and minimize the determinant among them. We may even relax the condition on the arithmetical minimum and only require that it is at least~$1$. In other words, we have $$ {\mathcal H}_d = 1 / \inf_{{\mathcal R}} (\det Q)^{1/d} , $$ where \begin{equation} \label{eqn:ryshkov-polyhedron} {\mathcal R} = \left\{ Q\in{\mathcal S}^d_{>0} : \lambda(Q)\geq 1 \right\} . \end{equation} We refer to ${\mathcal R}$ as {\em Ryshkov polyhedron}, as it was Ryshkov \cite{ryshkov-1970} who noticed that this view on Hermite's constant allows a simplified description of Voronoi's theory, to be sketched below. We denote by ${\mathcal S}^d$ the space of real symmetric matrices, respectively of real quadratic forms in $d$~variables. 
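As a small worked example of these definitions (names ours): for the PQF $Q$ with entries $q_{11}=q_{22}=2$, $q_{12}=1$, associated with the hexagonal lattice ${\mathsf A}_2$, one has $\lambda(Q)=2$ and $\det Q=3$, hence ${\mathcal H}(Q)=2/\sqrt 3$, which is the Hermite constant ${\mathcal H}_2$; the density relation above then gives $\pi/(2\sqrt 3)$.

```python
import itertools, math

Q = [[2.0, 1.0], [1.0, 2.0]]     # Gram matrix of the hexagonal lattice A2

def arithmetical_minimum(Q, R=5):
    """min Q[x] over nonzero integer vectors x with small coefficients (sketch)."""
    best = float("inf")
    for x1, x2 in itertools.product(range(-R, R + 1), repeat=2):
        if (x1, x2) == (0, 0):
            continue
        best = min(best, Q[0][0]*x1*x1 + 2*Q[0][1]*x1*x2 + Q[1][1]*x2*x2)
    return best

lam = arithmetical_minimum(Q)
detQ = Q[0][0]*Q[1][1] - Q[0][1]*Q[1][0]
H = lam / detQ ** (1 / 2)                 # H(Q) = lam(Q) / (det Q)^(1/d), d = 2
assert abs(lam - 2.0) < 1e-12
assert abs(H - 2 / math.sqrt(3)) < 1e-12

delta = H ** (2 / 2) * math.pi / 2**2     # delta = H^{d/2} * vol(B^d) / 2^d
assert abs(delta - math.pi / (2 * math.sqrt(3))) < 1e-12
```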
It is a Euclidean vector space of dimension $\binom{d+1}{2}$ with the usual inner product defined by $$ \langle Q, Q' \rangle = \sum_{i,j=1}^d q_{ij} q'_{ij} = \trace(Q\cdot Q') . $$ Because of the fundamental identity \[ Q[\vec{x}] = \langle Q,\vec{x}\vec{x}^t \rangle , \] quadratic forms $Q\in {\mathcal S}^d$ attaining a fixed value on a given $\vec{x}\in\mathbb R^d\setminus\{\vec{0}\}$ all lie in a {\em hyperplane} ({\em affine subspace of co-dimension~$1$}). Thus Ryshkov polyhedra ${\mathcal R}$ are intersections of infinitely many {\em halfspaces}: \begin{equation} \label{eqn:Plambda} {\mathcal R} = \{ Q\in{\mathcal S}^d_{>0} : \langle Q, \vec{x}\vec{x}^t \rangle \geq 1 \mbox{ for all } \vec{x}\in\mathbb Z^d\setminus\{\vec{0}\} \} . \end{equation} It can be shown that ${\mathcal R}$ is ``locally like a polyhedron'', meaning that any intersection with a {\em polytope} (convex hull of finitely many vertices) is itself a polytope. For a proof we refer to \cite[Theorem~3.1]{schuermann-2009}. As a consequence ${\mathcal R}$ has {\em vertices}, {\em edges}, {\em facets} and in general {\em $k$-dimensional faces} ({\em $k$-faces}). For details on terminology and basic properties of polytopes we refer to \cite{ziegler-1998}. \subsection*{Perfect forms} The vertices $Q$ of the Ryshkov polyhedron are called {\em perfect forms}. Such forms are characterized by the fact that they are determined uniquely by their arithmetical minimum (here $1$) and its representatives $$ \Min Q =\{\vec{x}\in \mathbb Z^d : Q[\vec{x}] = \lambda(Q) \} . $$ A corresponding lattice is called perfect too. The following proposition due to Minkowski implies that the Hermite constant can only be attained among perfect forms, i.e., the maximal lattice sphere packing density can only be attained by perfect lattices. \begin{proposition}[Minkowski~\cite{minkowski-1905}] \label{prop:concave-det} $(\det Q)^{1/d}$ is a strictly concave function on~${\mathcal S}^d_{>0}$. 
\end{proposition} For a proof see for example \cite[\textsection~39.2]{gl-1987}. Note that, in contrast to~$(\det Q)^{1/d}$, the function $\det Q$ is not a concave function on~${\mathcal S}^d_{>0}$ (see \cite{nelson-1974}). However Minkowski's theorem implies that the set \begin{equation} \label{eqn:det-greater-equal-D} \{Q\in {\mathcal S}^d_{>0} : \det Q \geq D \} \end{equation} is strictly convex for $D>0$. Another property of perfect forms which we use later is the following. \begin{proposition} \label{prop:prefect-implies-d-linear-indep} If $Q\in{\mathcal S}^d$ is perfect, then $\Min Q$ spans $\mathbb R^d$. \end{proposition} The existence of $d$ linearly independent vectors in $\Min Q$ for a perfect form $Q$ follows from the observation that the rank-$1$ forms $\vec{x}\vec{x}^t$ with $\vec{x}\in\Min Q$ have to span ${\mathcal S}^d$, since they uniquely determine $Q$ through the linear equations $\langle Q, \vec{x}\vec{x}^t \rangle = \lambda(Q)$. If, however, $\Min Q$ does not span $\mathbb R^d$, then these rank-$1$ forms span at most a $\binom{d}{2}$-dimensional subspace of ${\mathcal S}^d$. \subsection*{Finiteness up to equivalence} The arithmetical equivalence operation $Q\mapsto U^t Q U$ of $\GL_d(\Z)$ on ${\mathcal S}^d_{>0}$ leaves $\lambda(Q)$, $\Min Q$ and also ${\mathcal R}$ invariant. In fact, $\GL_d(\Z)$ acts on the sets of faces of a given dimension, thus in particular on the sets of vertices, edges and facets of ${\mathcal R}$. The following theorem shows that the Ryshkov polyhedron ${\mathcal R}$ contains only finitely many arithmetically inequivalent vertices. By Proposition~\ref{prop:concave-det} this implies in particular that ${\mathcal H}_d$ is actually attained, namely by some perfect forms. For a proof we refer to \cite[Theorem~3.4]{schuermann-2009}. \begin{theorem}[Voronoi \cite{voronoi-1907}] \label{thm:voronoi} Up to arithmetical equivalence and scaling there exist only finitely many perfect forms in a given dimension~$d\geq 1$. 
\end{theorem} Thus the classification of perfect forms in a given dimension, respectively the enumeration of vertices of the Ryshkov polyhedron up to arithmetical equivalence, yields the Hermite constant. Perfect forms have been classified up to dimension~$8$ (see \cite{dsv-2007b}). \subsection*{Characterization of extreme forms} From dimension~$6$ onwards not every perfect form is extreme (see \cite{martinet-2003}). In order to characterize extreme forms within the set of perfect forms the notion of {\em eutaxy} is used: A PQF $Q$ is called {\em eutactic} if its inverse~$Q^{-1}$ is contained in the (relative) interior $\relint {\mathcal V}(Q)$ of its {\em Voronoi domain} $$ {\mathcal V}(Q) = \cone \{ \vec{x}\vec{x}^t : \vec{x}\in\Min Q \} . $$ Here $\cone M $ denotes the {\em conic hull} $$ \left\{ \sum_{i=1}^n \alpha_i \vec{x}_i : n\in \mathbb N \mbox{ and } \vec{x}_i \in M, \alpha_i\geq 0 \mbox{ for } i=1,\ldots, n \right\} $$ of a set $M$. Note that the Voronoi domain is full-dimensional (of dimension $\binom{d+1}{2}$) if and only if $Q$ is perfect. Note also that the rank-$1$ forms $\vec{x}\vec{x}^t$ give inequalities $\langle Q, \vec{x}\vec{x}^t \rangle \geq 1$ defining the Ryshkov polyhedron and by this the Voronoi domain of $Q$ is equal to the {\em normal cone} \begin{equation} \label{eqn:normal-cone} \{N \in {\mathcal S}^d : \langle N, Q/\lambda(Q) \rangle \leq \langle N , Q' \rangle \mbox{ for all } Q'\in {\mathcal R} \} \end{equation} of ${\mathcal R}$ at its boundary point $Q/\lambda(Q)$. Algebraically the eutaxy condition $Q^{-1} \in \relint {\mathcal V}(Q)$ is equivalent to the existence of positive $\alpha_{\vec{x}}$ with \begin{equation} \label{eqn:eutaxy-algebraic} Q^{-1} = \sum_{\vec{x}\in\Min Q} \alpha_{\vec{x}} \vec{x}\vec{x}^t . 
\end{equation} Thus computationally eutaxy of~$Q$ can be tested by solving the {\em linear program} \begin{equation} \label{eqn:eutaxy-lp} \max \alpha_{\min} \quad \mbox{such that $\alpha_{\vec{x}}\geq \alpha_{\min}$ and \eqref{eqn:eutaxy-algebraic} holds.} \end{equation} The form $Q$ is eutactic if and only if the maximum is greater than $0$. Voronoi \cite{voronoi-1907} showed that perfection together with eutaxy implies extremality and vice versa: \begin{theorem}[Voronoi \cite{voronoi-1907}] \label{thm:voronoi-packing} A PQF $Q\in{\mathcal S}^d_{>0}$ is extreme if and only if $Q$ is perfect and eutactic. \end{theorem} We here give a proof providing a geometrical viewpoint that turns out to be quite useful for the intended generalization discussed in the following sections. \begin{proof} The function~$\det Q$ is a positive real valued polynomial on ${\mathcal S}^d$, depending on the $\binom{d+1}{2}$ different coefficients $q_{ij}$ of $Q$. Using the Laplace expansion we obtain $$ \det Q = \sum_{i=1}^d q_{ji}^\# q_{ij} $$ for any fixed column index $j\in\{ 1,\dots,d \}$. Here, $q_{ij}^\# = (-1)^{i+j} \det Q_{ij}$ (with $Q_{ij}$ the minor matrix of $Q$, obtained by removing row~$i$ and column~$j$) denote the coefficients of the {\em adjoint form} $Q^\#=(\det Q)Q^{-1}\in{\mathcal S}^d_{>0}$ of $Q$. Thus \begin{equation} \label{eqn:gradient-det} \grad \det Q = (\det Q)Q^{-1} \end{equation} and the tangent hyperplane $T$ in $Q$ of the smooth {\em determinant-$\det Q$-surface} $$ S = \{ Q'\in{\mathcal S}^d_{>0} : \det Q' = \det Q \} $$ is given by $$ T = \{ Q'\in{\mathcal S}^d : \langle Q^{-1},Q'\rangle = \langle Q^{-1},Q \rangle \} . $$ Or in other words, $Q^{-1}$ is a normal vector of the tangent plane $T$ of $S$ at $Q$. 
By Proposition~\ref{prop:concave-det} and the observation that~\eqref{eqn:det-greater-equal-D} is convex, we know that $S$ is contained in the halfspace \begin{equation} \label{eqn:tangent-plane} \{Q'\in{\mathcal S}^d : \langle Q^{-1} , Q'-Q \rangle \geq 0 \} , \end{equation} with $Q$ being the unique intersection point of $S$ and $T$. As a consequence, a perfect form $Q$ attains a local minimum of $\det Q$ (hence is extreme) if and only if the halfspace \eqref{eqn:tangent-plane} contains the Ryshkov polyhedron ${\mathcal R}$, and its boundary meets ${\mathcal R}$ only in $Q$. This is easily seen to be equivalent to the condition that the normal cone (Voronoi domain) ${\mathcal V}(Q)$ of ${\mathcal R}$ at $Q$ contains $Q^{-1}$ in its interior. \end{proof} Note that eutaxy alone does not suffice for extremality. However, there exist only finitely many eutactic forms in every dimension and they can (in principle) be enumerated too (see \cite[Section~9.5]{martinet-2003}). Nevertheless, this seems computationally more difficult than the enumeration of perfect forms (see \cite{stogrin-1974}, \cite{bm-1996}, \cite{batut-2001}, \cite{egs-2002}). By the geometry of~$S$ and~$T$ a eutactic form always attains a unique minimum of~$\delta$ (maximum of $\det$) on its face of the Ryshkov polyhedron. However, not all faces of the Ryshkov polyhedron contain a eutactic form. \section{Parameter spaces for periodic sets} \label{sec:periodic-parameter-space} We want to study the more general situation of periodic sphere packings. Recall from \eqref{eqn:periodic-set} that a periodic set with $m$~lattice translates (an {\em $m$-periodic set}) in $\mathbb R^d$ is of the form \begin{equation} \label{eqn:periodic-set-2} \Lambda' = \bigcup_{i=1}^{m} \left( \vec{t}'_i + L \right) , \end{equation} with a lattice $L\subset \mathbb R^d$ and translation vectors $\vec{t}'_i\in \mathbb R^d$, $i=1,\ldots ,m$.
We want to work with a parameter space for $m$-periodic sets similar to ${\mathcal S}^d_{>0}$ for lattices. For this, we consider $\Lambda'$ as a linear image $\Lambda'=A\Lambda_{\vec{t}}$ of a {\em standard periodic set} \begin{equation} \label{eqn:standard-periodic-set-def} \Lambda_{\vec{t}} = \bigcup_{i=1}^{m} \left( \vec{t}_i + \mathbb Z^d \right) . \end{equation} Here, $A\in\gldr$ satisfies in particular $L=A\mathbb Z^d$. Since we are only interested in properties of periodic sets up to isometries, we encode $\Lambda'$ by $Q=A^tA\in{\mathcal S}^d_{>0}$, together with the $m$ translation vectors $\vec{t}_1,\ldots, \vec{t}_m$. Since every property of periodic sets we deal with here is invariant under translations, we may assume without loss of generality that $\vec{t}_m=\vec{0}$. Thus we consider the parameter space \begin{equation} \label{eqn:def-sdmo} {\mathcal S}^{d,m}_{>0} = {\mathcal S}^d_{>0} \times \mathbb R^{d\times (m-1)} \end{equation} for $m$-periodic sets (up to isometries). In particular, this generalizes the space ${\mathcal S}^{d,1}_{>0}={\mathcal S}^d_{>0}$ in a natural way. We call the elements of ${\mathcal S}^{d,m}_{>0}$ {\em periodic forms} and usually denote them by $X=(Q,\vec{t})$, where $Q\in{\mathcal S}^d_{>0}$ and $$ \vec{t}=(\vec{t}_1,\ldots,\vec{t}_{m-1})\in \mathbb R^{d\times (m-1)} $$ is a real matrix whose $m-1$ columns are the vectors $\vec{t}_i\in\mathbb R^d$. One should keep in mind: although we omit $\vec{t}_m=\vec{0}$, we implicitly keep it as a translation vector. Note that a periodic set $\Lambda'$ as in \eqref{eqn:periodic-set-2} has many {\em representations} by periodic forms. In particular, $m$ may vary and we have different choices for $A$. A similar approach for periodic sets in dimension~$3$ has been considered in~\cite{pz-1998}. The parameter space ${\mathcal S}^{d,m}_{>0}$ is contained in the space \begin{equation} \label{eqn:def-sdm} {\mathcal S}^{d,m} = {\mathcal S}^d \times \mathbb R^{d\times (m-1)}.
\end{equation} It can be turned into a Euclidean space with inner product $\langle \cdot , \cdot \rangle$, defined for $X=(Q,\vec{t})$ and $X'=(Q',\vec{t}')$ by $$ \label{not:inner_product_sdm} \langle X, X'\rangle = \langle Q, Q' \rangle + \sum_{i=1}^{m-1} \vec{t}_i^t \vec{t}'_i . $$ Note that, for the sake of simplicity, we use the same symbol for the inner products on all spaces ${\mathcal S}^{d,m}$. We extend the definition of the arithmetical minimum $\lambda$ by defining the {\em generalized arithmetical minimum} $$ \label{not:arith_min_periodic_sets} \lambda(X) = \min \{ Q[\vec{t}_i-\vec{t}_j-\vec{v}] : 1\leq i,j \leq m \mbox{ and } \vec{v}\in \mathbb Z^d, \mbox{ with } \vec{v}\not=\vec{0} \mbox{ if } i=j \} $$ for the periodic form~$X=(Q,\vec{t})\in{\mathcal S}^{d,m}_{>0}$. Note that we have $\lambda(X)=0$ in the case of intersecting lattice translates $(\vec{t}_i+\mathbb Z^d) \cap (\vec{t}_j+\mathbb Z^d) \not= \emptyset$ with $i\not=j$. The set of {\em representations of the generalized arithmetical minimum} $\Min X$ is the set of all $\vec{w}=\vec{t}_i-\vec{t}_j-\vec{v}$ \label{not:Min-X} attaining $\lambda(X)$. Computationally, $\Min X$ and $\lambda(X)$ can be obtained by solving a sequence of {\em closest vector problems} (CVPs), one for each pair $i,j$ with $i\not= j$. In addition one shortest vector problem (SVP) has to be solved, taking care of the cases where $i=j$. Implementations of algorithms solving CVPs and SVPs are provided for example in {\tt MAGMA} \cite{magma} or {\tt GAP} \cite{gap}. In order to define the sphere packing density function $\delta:{\mathcal S}^{d,m}_{>0} \to \mathbb R$ we set $\det X = \det Q$ for periodic forms~$X=(Q,\vec{t})$. Then \begin{equation} \label{eqn:periodic-delta} \delta(X) = \left(\frac{\lambda (X)}{(\det X)^{1/d}}\right)^{\frac{d}{2}} m \vol B^d / 2^d .
\end{equation} In analogy to the lattice case, we call a periodic form~$X\in{\mathcal S}^{d,m}_{>0}$ {\em $m$-extreme} if it attains a local maximum of~$\delta$ within~${\mathcal S}^{d,m}_{>0}$. The relation~\eqref{eqn:periodic-delta} shows that the supremum of~$\delta$ among $m$-periodic sphere packings is up to some power and a constant factor equal to the ``Hermite-like constant'' $$ \sup_{X\in{\mathcal S}^{d,m}_{>0}} \lambda(X) / (\det X)^{1/d} = 1 / \inf_{X\in{\mathcal R}_{m}} (\det X)^{1/d} , $$ where the set ${\mathcal R}_{m}$ on the right-hand side is the {\em (generalized) Ryshkov set} \begin{equation} \label{eqn:def-Pmlambda} {\mathcal R}_{m} = \left\{ X\in{\mathcal S}^{d,m}_{>0} : \lambda(X)\geq 1 \right\} . \end{equation} The condition $\lambda(X)\geq 1$ gives infinitely many linear inequalities $$ p_{\vec{v}}(X) = Q[\vec{v}] = \langle X , (\vec{v}\vec{v}^t,0) \rangle \geq 1 $$ for $\vec{v}\in\mathbb Z^d\setminus\{\vec{0}\}$, as in the case $m=1$. For $m>1$ we additionally have the infinitely many polynomial inequalities \begin{equation} \label{eqn:def-pijv} p_{i,j,\vec{v}}(X) = Q[\vec{t}_i-\vec{t}_j-\vec{v}] \geq 1 , \end{equation} where $i,j\in\{1,\ldots, m\}$ with $i\not=j$ and $\vec{v}\in\mathbb Z^d$. These polynomials are of degree~$3$ in the parameters $q_{kl}$, $t_{kl}$ of~$X$. Note that they are linear for a fixed~$\vec{t}$. Observe also that $p_{i,m,\vec{v}}$ and $p_{m,j,\vec{v}}$ are special due to our assumption $\vec{t}_m=\vec{0}$ and that there is a symmetry $p_{i,j,\vec{v}}=p_{j,i,-\vec{v}}$ by which we may restrict our attention to polynomials with $i\leq j$. For $i=j$ we have the linear function $p_{i,j,\vec{v}}=p_{\vec{v}}$.
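To make the definitions concrete, the generalized arithmetical minimum can, for small instances, be computed by brute force instead of the CVP/SVP machinery mentioned above. The following Python sketch (the function name and the enumeration bound are our own choices, and the box must be taken large enough for the answer to be correct) evaluates $\lambda(X)$ and $\Min X$ for the $2$-periodic ``checkerboard'' set $\mathbb Z^2 \cup \left((\tfrac{1}{2},\tfrac{1}{2})^t + \mathbb Z^2\right)$:

```python
import itertools
import numpy as np

def generalized_minimum(Q, ts, box=2):
    """Brute-force lambda(X) and Min X for a periodic form X = (Q, ts).

    ts lists all m translation vectors (including t_m = 0).  Only lattice
    vectors v with coordinates in [-box, box] are tried, which suffices
    whenever box is large enough compared to lambda(X).
    """
    Q = np.asarray(Q, dtype=float)
    d = Q.shape[0]
    best, reps = np.inf, []
    for i, ti in enumerate(ts):
        for j, tj in enumerate(ts):
            for v in itertools.product(range(-box, box + 1), repeat=d):
                v = np.array(v, dtype=float)
                if i == j and not v.any():
                    continue                      # exclude v = 0 when i = j
                w = np.asarray(ti) - np.asarray(tj) - v
                val = w @ Q @ w                   # Q[t_i - t_j - v]
                if val < best - 1e-12:
                    best, reps = val, [w]
                elif abs(val - best) <= 1e-12:
                    reps.append(w)
    return best, reps

# 2-periodic "checkerboard" set: Z^2 together with Z^2 + (1/2, 1/2)
lam, reps = generalized_minimum(np.eye(2), [[0.5, 0.5], [0.0, 0.0]])
print(lam)   # 0.5, attained by the 8 vectors (+-1/2, +-1/2)
```

For serious computations one would of course fall back on the CVP and SVP routines of {\tt MAGMA} or {\tt GAP} mentioned above.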
\section{Local analysis of periodic sphere packings} \label{sec:local-analysis} \subsection*{Characterizing local optima} Before we generalize perfection and eutaxy to a notion of {\em $m$-perfection} and {\em $m$-eutaxy} (from which we obtain a sufficient condition for a periodic form to be $m$-extreme) we discuss a rather general setting: Assume we want to minimize a smooth function on a {\em basic closed semialgebraic set}, that is, on a region which is described by finitely many (non-strict) polynomial inequalities. Let $E$ denote a Euclidean space with inner product $\langle \cdot , \cdot \rangle$. Further, let $f:E\to \mathbb R$ be {\em smooth} (infinitely differentiable) and $g_1,\ldots, g_k$ be (real-valued) polynomials on~$E$. Assume we want to determine whether or not we have a local minimum of $f$ at $X_0$ on the boundary of \begin{equation} \label{eqn:setG-definition} G=\{ X \in E : g_i(X) \geq 0 \mbox{ for } i=1,\ldots,k \} . \end{equation} For simplicity, we further assume $(\grad f)(X_0) \not= 0$ and $g_i(X_0)=0$, as well as $(\grad g_i)(X_0)\not= 0$, for $i=1,\dots, k$. Then, in a sufficiently small neighborhood of $X_0$, the function $f$ as well as the polynomials $g_i$ can be approximated arbitrarily well by corresponding affine functions. For example, $f$ is approximated by the {\em beginning of its Taylor series} $$ f(X_0) + \langle (\grad f)(X_0) , X-X_0 \rangle . $$ From this one easily derives the following well-known criterion (see for example \cite[Theorem~4.2.2]{bss-1993}) for an isolated local minimum of $f$ at $X_0$, depending on the normal cone $${\mathcal V}(X_0)=\cone \{ (\grad g_i)(X_0) : i=1,\ldots,k \}.$$ The function $f$ attains an isolated local minimum on $G$ at $X_0$ if \begin{equation} \label{eqn:grad-criterion} (\grad f)(X_0) \in \interior {\mathcal V}(X_0) , \end{equation} and $f$ does not attain a local minimum if \begin{equation} \label{eqn:grad-criterion2} (\grad f)(X_0) \not\in {\mathcal V}(X_0) .
\end{equation} The behavior in the case $(\grad f)(X_0)\in \bd \cone {\mathcal V}(X_0)$ depends on the involved functions $f$ and $g_i$ and has to be treated depending on the specific problem. For the lattice sphere packing problem we have $E={\mathcal S}^d$ and $f=\det^{1/d}$. For $Q_0\in{\mathcal S}^d_{>0}$ we set $g_i(Q)=Q[\vec{v}_i]-\lambda(Q_0)$ with $(\grad g_i)(Q)= \vec{v}_i\vec{v}_i^t$ for each pair $\pm \vec{v}_i$ in $\Min Q_0$. By Theorem \ref{thm:voronoi-packing} we have a local minimum of $f(Q)=(\det Q)^{1/d}$ at $Q_0$ on $G$ (as in \eqref{eqn:setG-definition}) if and only if $Q_0$ is perfect and eutactic, respectively if ${\mathcal V}(Q_0)$ is full-dimensional and $(\grad f)(Q_0)\in \interior {\mathcal V}(Q_0)$. Here, $(\grad f)(Q_0)$ is a positive multiple of~$Q_0^{-1}$. Thus in this special case (due to Proposition~\ref{prop:concave-det}) we do not have a local minimum of~$f$ where $(\grad f)(Q_0)\in \bd \cone {\mathcal V}(Q_0)$. Let us consider the case of $m$-periodic sets, hence of $E={\mathcal S}^{d,m}$ with $m>1$. We want to know if a periodic form~$X_0\in{\mathcal S}^{d,m}_{>0}$ attains a local minimum of $f=\det^{1/d}$. We may assume $\lambda(X_0) > 0$. The set $\Min X_0$ is finite and moreover, for $X=(Q,\vec{t})$ in a small neighborhood of $X_0=(Q_0,\vec{t}^0)$, every $\vec{t}_i-\vec{t}_j-\vec{v}\in\Min X$ corresponds to a $\vec{t}^0_i-\vec{t}^0_j-\vec{v}\in\Min X_0$. Thus locally at $X_0$, the generalized Ryshkov set ${\mathcal R}_{m}$ is given by the basic closed semialgebraic set~$G$ defined by the inequalities $p_{i,j,\vec{v}}(X)-\lambda(X_0)\geq 0$, one for each pair $\pm (\vec{t}^0_i-\vec{t}^0_j-\vec{v})$ in $\Min X_0$. As explained in Section~\ref{sec:periodic-parameter-space}, we may assume $1\leq i\leq j\leq m$ and $\vec{t}^0_j=\vec{0}$ if $j=m$. 
An elementary calculation yields \begin{equation} \label{eqn:pijv-gradient} (\grad p_{i,j,\vec{v}})(X) = ( \vec{w}\vec{w}^t, \vec{0},\dots,\vec{0}, 2Q\vec{w}, \vec{0},\dots,\vec{0}, -2Q\vec{w}, \vec{0},\dots,\vec{0} ), \end{equation} where we set $X=(Q,\vec{t})$ and use $\vec{w}$ to abbreviate $\vec{t}_i-\vec{t}_j-\vec{v}$. This is to be understood as a vector in ${\mathcal S}^{d,m}={\mathcal S}^d\times \mathbb R^{d\times (m-1)}$, with its ``${\mathcal S}^d$-component'' being the rank-$1$ form $\vec{w}\vec{w}^t$ and its ``translational component'' containing the zero-vector~$\vec{0}$ in all but the $i$th and $j$th column. If $j=m$, the $j$th column is omitted and if $i=j$ the corresponding column is $\vec{0}$. For $(\grad f)(X)$ we obtain a positive multiple of $(Q^{-1},\vec{0})$. \subsection*{A sufficient condition for local \texorpdfstring{$\mathbf{m}$}{m}-periodic sphere packing optima} Generalizing the notion of perfection, we say a periodic form~$X=(Q,\vec{t})\in {\mathcal S}^{d,m}_{>0}$ (and a corresponding periodic set represented by $X$) is {\em $m$-perfect} if the {\em generalized Voronoi domain} \begin{equation} \label{eqn:periodic-voronoi-domain} {\mathcal V}(X) = \cone \{(\grad p_{i,j,\vec{v}})(X) : \vec{t}_i-\vec{t}_j-\vec{v}\in \Min X \mbox{ for some } \vec{v}\in\mathbb Z^d \} \end{equation} is full-dimensional, that is, if $\dim {\mathcal V}(X)= \dim {\mathcal S}^{d,m} = \binom{d+1}{2} + (m-1)d$. Generalizing the notion of eutaxy, we say that $X$ (and a corresponding periodic set) is {\em $m$-eutactic} if $$ (Q^{-1},\vec{0})\in\relint {\mathcal V}(X). $$ So the general discussion at the beginning of this section yields the following sufficient condition for a periodic form~$X$ to be {\em isolated $m$-extreme}, that is, for $X$ to have the property that any sufficiently small change which preserves~$\lambda(X)$ necessarily lowers~$\delta(X)$.
\begin{theorem} \label{thm:m-extreme-characterization} If a periodic form~$X\in {\mathcal S}^{d,m}_{>0}$ is $m$-perfect and $m$-eutactic, then $X$ is isolated $m$-extreme. \end{theorem} Note that the theorem gives a computational tool to certify isolated $m$-extremeness of a given periodic form~$X=(Q,\vec{t})\in {\mathcal S}^{d,m}_{>0}$: First, we compute $\Min X$ and use equation~\eqref{eqn:pijv-gradient} to obtain generators of the generalized Voronoi domain ${\mathcal V}(X)$. From the generators it can be easily checked if the domain is full-dimensional, hence if $X$ is $m$-perfect. Next, we can computationally test whether $(Q^{-1},\vec{0})$ is in ${\mathcal V}(X)$ or not; for example by solving a linear program similar to~\eqref{eqn:eutaxy-lp}. If we find $(Q^{-1},\vec{0}) \in \relint {\mathcal V}(X)$ (or equivalently in $\interior {\mathcal V}(X)$ as ${\mathcal V}(X)$ is assumed to be full-dimensional), the periodic form~$X$ represents an isolated $m$-extreme periodic set. If $(Q^{-1},\vec{0})\not\in{\mathcal V}(X)$, the periodic form~$X$ does not represent an $m$-extreme periodic set. In this situation, we can even find a ``direction'' $N\in{\mathcal S}^{d,m}$, for which we can improve the sphere packing density of the periodic form~$X$, that is, such that $\delta(X+\epsilon N)>\delta(X)$ for all sufficiently small $\epsilon>0$. \begin{remark} Let $X\in{\mathcal S}^{d,m}_{>0}$ with $(Q^{-1},\vec{0})\not\in{\mathcal V}(X)$. Then we can improve the sphere packing density of~$X$ in direction~$N$ given by the nearest point to $-(Q^{-1},\vec{0})$ in the polyhedral cone \begin{equation} \label{eqn:linear-cone-approx} {\mathcal P}(X)= \{ N \in {\mathcal S}^{d,m} : \langle V, N \rangle \geq 0 \, \mbox{for all} \, V\in {\mathcal V}(X) \} . \end{equation} \end{remark} Note that the cone~${\mathcal P}(X)$ is dual to the generalized Voronoi domain~${\mathcal V}(X)$ and (added to~$X$) gives locally a linear approximation of the generalized Ryshkov set~${\mathcal R}_{m}$. 
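The linear programming test just described can be sketched with an off-the-shelf LP solver. The following Python fragment is our own helper, written on top of {\tt scipy.optimize.linprog} (in practice one would prefer exact rational arithmetic): it maximizes $\alpha_{\min}$ as in \eqref{eqn:eutaxy-lp} over conic combinations of given generators, and a positive optimum certifies membership of the target in the relative interior of the generated cone. We test it on the hexagonal form, for which the eutaxy coefficients are all equal to $1/3$:

```python
import numpy as np
from scipy.optimize import linprog

def eutaxy_margin(target, generators):
    """Largest a_min with target = sum_k a_k g_k and all a_k >= a_min >= 0.

    Returns the optimal a_min (None if infeasible); a positive value
    certifies that target lies in the relative interior of the cone
    generated by the g_k.  Matrices are flattened to plain vectors.
    """
    b = np.asarray(target, dtype=float).ravel()
    G = np.column_stack([np.asarray(g, dtype=float).ravel()
                         for g in generators])
    k = G.shape[1]
    # variables: a_1, ..., a_k >= 0 and a free auxiliary t = a_min
    c = np.zeros(k + 1)
    c[-1] = -1.0                                     # maximize t
    A_eq = np.hstack([G, np.zeros((G.shape[0], 1))])
    A_ub = np.hstack([-np.eye(k), np.ones((k, 1))])  # t - a_k <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(k), A_eq=A_eq, b_eq=b,
                  bounds=[(0, None)] * k + [(None, None)])
    return -res.fun if res.status == 0 else None

# hexagonal form: Q^{-1} = (1/3) sum of x x^t over Min Q (strong eutaxy)
Q = np.array([[2.0, 1.0], [1.0, 2.0]])
gens = [np.outer(x, x) for x in ([1, 0], [0, 1], [1, -1])]
margin = eutaxy_margin(np.linalg.inv(Q), gens)
print(margin > 0)  # True: the hexagonal form is eutactic
```

For a periodic form~$X$ one applies the same routine with the flattened gradients \eqref{eqn:pijv-gradient} as generators and $(Q^{-1},\vec{0})$ as target.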
\subsection*{Fluid diamond packings} For general $m$ we are confronted with a difficulty which does not show up in the lattice case $m=1$: There may be non-isolated $m$-extreme sets, which are not $m$-perfect. The {\em fluid diamond packings} in dimension $9$, described by Conway and Sloane in \cite{cs-1995}, give such an example. \begin{example} The {\em root lattice $\mathsf{D}_d$} can be defined by $$ \mathsf{D}_d = \{ \vec{x}\in \mathbb Z^d : \sum_{i=1}^d x_i \equiv 0 \!\! \mod 2 \} . $$ The {\em fluid diamond packings} are $2$-periodic sets $$ \mathsf{D}_9\langle \vec{t} \rangle = \mathsf{D}_9 \cup (\mathsf{D}_9+\vec{t}) $$ with $\vec{t}\in\mathbb R^9$ such that the minimal distance among elements is equal to the minimum distance $\sqrt{2}$ of $\mathsf{D}_9$ itself. We may choose for example~$\vec{t} = \vec{t}_{\alpha}=(\tfrac{1}{2},\ldots,\tfrac{1}{2},\alpha)^t$ with any $\alpha \in \mathbb R$. For integers~$\alpha$ we obtain the densest known packing lattice $\Lambda_9=\mathsf{D}_9\langle \vec{t}_{\alpha} \rangle$ in dimension $9$, showing that it is part of a family of uncountably many equally dense $2$-periodic sets. The sets $\mathsf{D}_9\langle \vec{t}_{\alpha} \rangle$ give examples of non-isolated $2$-extreme sets, which are $2$-eutactic, but not $2$-perfect. In order to see this, let us consider a representation $X_{\alpha}\in{\mathcal S}^{9,2}_{>0}$ for $\mathsf{D}_9\langle \vec{t}_{\alpha} \rangle$. We choose a basis $A$ of $\mathsf{D}_9$. Then $X_\alpha = (Q, A^{-1} \vec{t}_{\alpha})$, with $Q=A^t A$, is a representation of $\mathsf{D}_9\langle \vec{t}_{\alpha} \rangle$. For non-integral $\alpha$ we find $\Min X_{\alpha}=\Min Q$ (using {\tt MAGMA} for example). It follows (for example by Lemma~\ref{lem:strong-eutaxy} below) that $X_\alpha$ is $2$-eutactic, but not $2$-perfect. For integral $\alpha$ we find $$ \Min X_{\alpha} = \Min Q \cup \{(x_1,\ldots,x_8,0)^t \in\{0,1\}^9 : \sum_{i=1}^8 x_i \equiv 0 \!\! \mod 2 \} . 
$$ Thus the vectors in $\Min X_{\alpha} \setminus \Min Q$ span only an $8$-dimensional space. Therefore $X_{\alpha}$ is not $2$-perfect. Nevertheless, a corresponding calculation shows that $X_{\alpha}$ is $2$-eutactic, as in the case of non-integral $\alpha$. In order to see that $X_{\alpha}$ is non-isolated $2$-extreme, we can apply Proposition~\ref{prop:sufficient-for-m-extreme} below. One easily checks that for integral~$\alpha$ (hence for the lattice $\Lambda_9$) we have only one degree of freedom for a local change of~$\vec{t}_{\alpha}$ giving an equally dense sphere packing. For non-integral~$\alpha$ we have nine degrees of freedom for such a modification. \newline\rightline{$\Box$} \end{example} Non-isolated $m$-extreme sets as in this example can occur for periodic forms~$X\in{\mathcal S}^{d,m}_{>0}$ only if $(Q^{-1},\vec{0})\in \bd {\mathcal V}(X)$ (which is for example always the case if $X$ is $m$-eutactic, but not $m$-perfect). In this case it is in general not clear what an infinitesimal change of $X$ in a direction $N\in{\mathcal S}^{d,m}$ leads to (even assuming that it is orthogonal to $(Q^{-1},\vec{0})$ and lies in the boundary of the set ${\mathcal P}(X)$ in~\eqref{eqn:linear-cone-approx}). If ${\mathcal F}(X)$ denotes the unique face of ${\mathcal V}(X)$ containing $(Q^{-1},\vec{0})$ in its relative interior, then this ``set of uncertainty'' is equal to the face of ${\mathcal P}(X)$ dual to ${\mathcal F}(X)$, that is, equal to \begin{equation} \label{eqn:set-of-uncertainty} {\mathcal U}(X)= \{ N \in {\mathcal P}(X) : \langle V, N \rangle = 0 \; \mbox{for all} \, V\in {\mathcal F}(X) \} . \end{equation} In other words, the set ${\mathcal U}(X)$ is the intersection of ${\mathcal P}(X)$ with the hyperplane orthogonal to $(Q^{-1},\vec{0})$. Note that it is possible to determine ${\mathcal F}(X)$ (and hence a description of ${\mathcal U}(X)$ by linear inequalities) computationally, using linear programming techniques.
\subsection*{Purely translational changes} Below we give an additional sufficient condition for $m$-extremeness. For this we consider the case when all directions in ${\mathcal U}(X)$ are ``{\em purely translational changes}'' $N = (0,\vec{t}^N) \in {\mathcal S}^{d,m}$. A vivid interpretation of a purely translational change can be given by thinking of the corresponding modification of a periodic sphere packing. The spheres of each lattice translate are jointly moved. If in such a local change all contacts among spheres are lost, we can increase their radius and obtain a new sphere packing with larger density. If some contacts among spheres are preserved, however, the sphere packing density remains the same. The latter case is captured in the following proposition, which gives an easily testable criterion for $m$-extremeness. We apply this proposition in Section~\ref{sec:periodic-extreme}, where we consider potential local improvements of best known packing lattices to periodic non-lattice sets. \begin{proposition} \label{prop:sufficient-for-m-extreme} For a periodic form~$X=(Q,\vec{t})\in {\mathcal S}^{d,m}_{>0}$ with $(Q^{-1},\vec{0})\in \bd {\mathcal V}(X)$, let ${\mathcal U}(X)$ be contained in $$ \{ (\vec{0},\vec{t}^N)\in{\mathcal S}^{d,m} : \vec{t}^N_i=\vec{t}^N_j \; \mbox{for at least one} \, \vec{t}_i-\vec{t}_j-\vec{v}\in\Min X \;\mbox{with}\; \vec{v}\in\mathbb Z^d \} . $$ Then $X$ is (possibly non-isolated) $m$-extreme. \end{proposition} Note that if $X$ is $m$-eutactic (but possibly not $m$-perfect), the set ${\mathcal U}(X)$ is the {\em orthogonal complement} ${\mathcal V}(X)^\perp$ of the linear hull of ${\mathcal V}(X)$. Note also that Proposition~\ref{prop:sufficient-for-m-extreme} includes in particular the special case where some $\vec{v}\in\mathbb Z^d$ are in $\Min X$ (and therefore $\vec{t}_i=\vec{t}_j=\vec{0}$ for $i=j=m$). This situation occurs for the $2$-periodic, fluid diamond packings in the example above.
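The gradient formula \eqref{eqn:pijv-gradient} and the inner products $\langle (\grad p_{i,j,\vec{v}})(X), N\rangle$ derived from it are easy to get wrong by a sign or a factor, so a numerical sanity check is worthwhile. The following Python sketch (all names are our own) compares the formula against a central finite difference of $p_{i,j,\vec{v}}$ along a random direction $N$, for a random $2$-periodic form in dimension $3$:

```python
import numpy as np

rng = np.random.default_rng(0)

def p(Q, ts, i, j, v):
    """p_{i,j,v}(X) = Q[t_i - t_j - v] for a periodic form X = (Q, ts)."""
    w = ts[i] - ts[j] - v
    return w @ Q @ w

# random 2-periodic form in dimension 3; ts[1] plays the role of t_m = 0
d = 3
A = rng.normal(size=(d, d))
Q = A.T @ A + d * np.eye(d)          # positive definite
ts = [rng.normal(size=d), np.zeros(d)]
i, j, v = 0, 1, np.array([1.0, -1.0, 0.0])
w = ts[i] - ts[j] - v

# direction N = (QN, tN); QN symmetric, the last translation stays 0
B = rng.normal(size=(d, d))
QN = B + B.T
tN = [rng.normal(size=d), np.zeros(d)]

# <grad p_{i,j,v}(X), N> according to the gradient formula:
# <ww^t, QN> + 2 (tN_i - tN_j)^t Q w
predicted = w @ QN @ w + 2 * (tN[i] - tN[j]) @ Q @ w

# central finite difference of p along N
eps = 1e-6
plus = p(Q + eps * QN, [t + eps * s for t, s in zip(ts, tN)], i, j, v)
minus = p(Q - eps * QN, [t - eps * s for t, s in zip(ts, tN)], i, j, v)
numeric = (plus - minus) / (2 * eps)
print(abs(predicted - numeric) < 1e-6)  # True
```

The same pattern extends to the Hessian used below, by comparing second differences.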
From the sphere packing interpretation of the proposition its assertion is clear. Nevertheless, we give a proof below, based on a local analysis in ${\mathcal S}^{d,m}_{>0}$. Going beyond what is strictly needed for the proof, we analyze how $\delta$ changes locally at a periodic form~$X\in{\mathcal S}^{d,m}_{>0}$ in a direction $N\in{\mathcal U}(X)$. As a byproduct, we obtain tools allowing a computational analysis of possible local optimality for a given periodic form (not necessarily covered by the proposition). These can for example be used in a numerical search for good periodic sphere packings. \begin{proof}[Proof of Proposition \ref{prop:sufficient-for-m-extreme}] The generalized Voronoi domain ${\mathcal V}(X)$ is spanned by gradients $(\grad p_{i,j,\vec{v}})(X)$ (as given in \eqref{eqn:pijv-gradient}), one for each pair of vectors $\pm\vec{w}\in\Min X$. The assumption that a direction $N=(Q^N,\vec{t}^N)$ is in ${\mathcal U}(X)$ for a periodic form~$X=(Q,\vec{t})$ implies $\langle Q^{-1}, Q^N \rangle = 0$. Moreover, for the unique maximal face ${\mathcal F}(X)$ of ${\mathcal V}(X)$ with $(Q^{-1},\vec{0})\in \relint {\mathcal F}(X)$, the condition that $N$ is orthogonal to some $(\grad p_{i,j,\vec{v}})(X)$ in ${\mathcal F}(X)$ translates into \begin{equation} \label{eqn:gradient-condition} \langle (\grad p_{i,j,\vec{v}})(X), N \rangle = Q^N[\vec{w}] + 2 (\vec{t}_i^N-\vec{t}_j^N)^t Q \vec{w} = 0 , \end{equation} with $\vec{w}=(\vec{t}_i-\vec{t}_j-\vec{v})$. Recall that in the special case $i=j$ (and for $m=1$ anyway) $p_{i,j,\vec{v}}$ is linear and \eqref{eqn:gradient-condition} reduces to the condition $Q^N[\vec{w}]=0$; if $N$ then satisfies this linear condition, $p_{i,j,\vec{v}}(X+\epsilon N)$ is a constant function in $\epsilon$. When $p_{i,j,\vec{v}}(X+\epsilon N)$ is a cubic polynomial in $\epsilon$, we need to use higher-order information in order to judge its behavior.
An elementary calculation yields for the {\em Hessian} \begin{equation} \label{eqn:hessian-evaluation} (\hess p_{i,j,\vec{v}})(X)[N] = 2 Q [\vec{t}_i^N-\vec{t}_j^N] + 4 (\vec{t}_i^N-\vec{t}_j^N)^t Q^N \vec{w} . \end{equation} Now how does $\delta$ change at~$X$ in direction~$N$, assuming it is in the set of uncertainty ${\mathcal U}(X)$? Among the polynomials $p_{i,j,\vec{v}}$ with $N$ satisfying \eqref{eqn:gradient-condition}, the fastest decreasing polynomial in direction $N$ determines $\lambda(X+\epsilon N)$ for small enough $\epsilon$. Thus for the local change of $\delta$ in direction $N$, we may restrict our attention to a polynomial $p_{i,j,\vec{v}}$ with the smallest value \eqref{eqn:hessian-evaluation} of its Hessian. By Proposition~\ref{prop:concave-det} we know that $\det^{1/d}$ decreases strictly at $X$ in a direction~$N\in{\mathcal U}(X)$ if and only if $Q^N\not=0$. For a purely translational change with $Q^N=0$, the function $\det^{1/d}$ remains constant. On the other hand, because of \eqref{eqn:hessian-evaluation} and since $Q$ is positive definite, we have $(\hess p_{i,j,\vec{v}})(X)[N]\geq 0$, with equality if and only if $\vec{t}_i^N-\vec{t}_j^N=\vec{0}$. The latter implies that $p_{i,j,\vec{v}}(X+\epsilon N)$ is a constant function of $\epsilon$. Thus for purely translational changes $N=(0,\vec{t}^N)\in {\mathcal U}(X)$, the density function $\delta(X+\epsilon N)$ is constant for small enough $\epsilon \geq 0$, if $\vec{t}_i^N=\vec{t}_j^N$ for some pair $(i,j)$ with $\vec{t}_i-\vec{t}_j-\vec{v}\in\Min X$ (for a suitable $\vec{v}\in\mathbb Z^d$). This proves the proposition. \end{proof} Note that our argument in the proof also shows that $\delta(X+\epsilon N)$ increases for small $\epsilon > 0$, for a purely translational change $N=(0,\vec{t}^N)\in {\mathcal U}(X)$ with $\vec{t}_i^N\not=\vec{t}_j^N$ for all pairs $(i,j)$ with $\vec{t}_i-\vec{t}_j-\vec{v}\in\Min X$ (for some $\vec{v}\in\mathbb Z^d$).
This case corresponds to a modification of a periodic sphere packing in which all contacts among spheres are lost. \section{Periodic extreme sets} \label{sec:periodic-extreme} A given periodic set has many representations by periodic forms, in spaces ${\mathcal S}^{d,m}_{>0}$ with varying $m$. For example, by choosing some sublattice of $\mathbb Z^d$, we can add additional translational parts. It could happen that a periodic set $\Lambda$ with a given representation $X\in{\mathcal S}^{d,m}_{>0}$ is $m$-extreme, whereas a second representation $X'\in{\mathcal S}^{d,m'}_{>0}$ is not $m'$-extreme. We are, however, not aware of such an example. Nevertheless, in some cases we are certain that the packing density of no representation of~$\Lambda$ can locally be improved. \begin{definition} \label{def:periodic-extreme} A periodic set is {\em periodic extreme} if it is $m$-extreme for all possible representations $X\in{\mathcal S}^{d,m}_{>0}$. \end{definition} Theorem \ref{thm:main-periodic} below gives a sufficient condition for a lattice to be periodic extreme. For its statement we need the notion of {\em strong eutaxy} for lattices, respectively PQFs: A form $Q\in{\mathcal S}^d_{>0}$ (and a corresponding lattice) is called {\em strongly eutactic} if \begin{equation} \label{eqn:strong-eutaxy-condition} Q^{-1} = \alpha \sum_{\vec{x}\in\Min Q} \vec{x}\vec{x}^t \end{equation} for some $\alpha>0$, i.e., if the coefficients in the eutaxy condition \eqref{eqn:eutaxy-algebraic} are all equal. It is well known that a PQF~$Q$ is strongly eutactic if and only if the vectors in $\Min Q$ form a so-called {\em spherical $2$-design} with respect to $Q$ (see \cite{venkov-2001}, \cite[Corollary~16.1.3]{martinet-2003}). \begin{lemma} \label{lem:strong-eutaxy} Any representation $X\in{\mathcal S}^{d,m}_{>0}$ of a strongly eutactic lattice (respectively PQF) is $m$-eutactic.
\end{lemma} \begin{proof} Let $Q\in{\mathcal S}^d_{>0}$ be strongly eutactic, satisfying \eqref{eqn:strong-eutaxy-condition} for some $\alpha>0$. Let $X=(Q^X,\vec{t}^X)\in{\mathcal S}^{d,m}_{>0}$ be some representation of~$Q$, possibly with $m>1$. Let the corresponding strongly eutactic lattice be denoted by~$\Lambda$. Then $Q^X$ is the Gram matrix of a basis $A\in \gldr$ of a sublattice~$L$ of~$\Lambda$. The columns of $\vec{t}^X$ are the coordinates of lattice points of~$\Lambda$ relative to~$A$. For a fixed $\vec{w}\in \Min X$ we define an abstract graph, whose vertices are the indices in $\{1,\ldots,m\}$. Two vertices $i$ and $j$ are connected by an edge whenever there is some $\vec{v}\in \mathbb Z^d$ such that $\vec{w}=\vec{t}^X_i-\vec{t}^X_j-\vec{v}$. In other words, the graph reflects via an edge $(i,j)$ that spheres of packing radius $\frac{1}{2}\sqrt{\lambda(X)}$ around points of the two sublattice translates $A(\vec{t}^X_i+\mathbb Z^d)$ and $A(\vec{t}^X_j+\mathbb Z^d)$ touch. For $\vec{z}\in \mathbb Z^d$, the sphere with center~$A(\vec{t}^X_j+\vec{z})$ touches the sphere with center~$A(\vec{t}^X_j+\vec{z}+\vec{w})$. Since the periodic form~$X$ represents a lattice $\Lambda$, we find a chain of touching spheres at centers~$A(\vec{t}^X_j+\vec{z}+k\vec{w})$, with $k=0,1,\ldots$. Modulo some natural number less than or equal to~$m$, these centers belong to the same lattice translate of~$L$. As a consequence, we find that the graph defined above is a disjoint union of cycles. So $\vec{w}$ induces a partition $(I_1,\dots,I_k)$ of $\{1,\ldots,m\}$. Let~$I$ be an index set of this partition (containing the indices of a fixed cycle of the defined graph).
Summing over all triples $(i,j,\vec{v})$ with $i,j\in I$ and $\vec{v}\in\mathbb Z^d$ such that $\vec{w}=\vec{t}^X_i-\vec{t}^X_j-\vec{v}\in\Min X$, we find (using \eqref{eqn:pijv-gradient}): $$ \sum_{ \genfrac{}{}{0pt}{2}{(i,j,\vec{v}) \in I^2\times\mathbb Z^d}{\mbox{\tiny with} \; \vec{v}=\vec{t}^X_i-\vec{t}^X_j-\vec{w}}} (\grad p_{i,j,\vec{v}})(X) = 2|I| (\vec{w}\vec{w}^t, \vec{0}) . $$ The factor $2$ comes from the symmetry $\grad p_{i,j,\vec{v}}=\grad p_{j,i,-\vec{v}}$. Summation over all index sets $I$ of the partition yields \begin{equation} \label{eqn:summing_gradients} \sum_{\genfrac{}{}{0pt}{2}{(i,j,\vec{v}) \in \{1,\ldots,m\}^2\times\mathbb Z^d}{\mbox{\tiny with} \; \vec{v}=\vec{t}^X_i-\vec{t}^X_j-\vec{w}}} (\grad p_{i,j,\vec{v}})(X) = 2 m (\vec{w}\vec{w}^t, \vec{0}) . \end{equation} As a consequence we find by the strong eutaxy condition~\eqref{eqn:strong-eutaxy-condition} that $$ (Q^{-1},\vec{0}) \; = \; (\alpha/2m) \sum_{\genfrac{}{}{0pt}{2}{\vec{w}\in\Min X, (i,j,\vec{v}) \in \{1,\ldots,m\}^2\times\mathbb Z^d}{\mbox{\tiny with} \; \vec{v}=\vec{t}^X_i-\vec{t}^X_j-\vec{w} }} (\grad p_{i,j,\vec{v}})(X) , $$ with a suitable $\alpha>0$. Thus $X$ is $m$-eutactic. \end{proof} Not all strongly eutactic PQFs (or lattices) are perfect. For example, the lattices~$\mathbb Z^n$ for $n \geq 2$ are of this kind. But if a strongly eutactic PQF is in addition also perfect, then the following theorem shows that this is sufficient for it to be periodic extreme. Note that this applies in particular to so-called {\em strongly perfect} lattices and PQFs. For these lattices the vectors in $\Min Q$ form a spherical $4$-design with respect to~$Q$ (see \cite{nebe-2002} or \cite[Chapter~16]{martinet-2003} for further details). \begin{theorem} \label{thm:main-periodic} Perfect, strongly eutactic lattices (respectively PQFs) are periodic extreme. \end{theorem} \begin{proof} Let $Q\in{\mathcal S}^d_{>0}$ be perfect and strongly eutactic.
Hence the vectors in $\Min Q$ span $\mathbb R^d$ (by Proposition \ref{prop:prefect-implies-d-linear-indep}) and satisfy \eqref{eqn:strong-eutaxy-condition} for some $\alpha>0$. Let $X=(Q^X,\vec{t}^X)\in{\mathcal S}^{d,m}_{>0}$ be a representation of $Q$. By Lemma~\ref{lem:strong-eutaxy}, $X$ is $m$-eutactic. If $X$ is $m$-perfect as well, we know by Theorem \ref{thm:m-extreme-characterization} that $X$ is also $m$-extreme. So let us assume that $X$ is not $m$-perfect; hence the generalized Voronoi domain~${\mathcal V}(X)$ is not full-dimensional. We want to apply Proposition~\ref{prop:sufficient-for-m-extreme}. For this we choose $$ N=(Q^N,\vec{t}^N)\in {\mathcal U}(X)={\mathcal V}(X)^{\perp} \quad \mbox{with} \quad N\not=0 . $$ (Recall the definition of ${\mathcal U}(X)$ from~\eqref{eqn:set-of-uncertainty} and that ${\mathcal U}(X)={\mathcal V}(X)^{\perp}$ if $X$ is $m$-eutactic.) By this assumption we have in particular $$\langle N, (\grad p_{i,j,\vec{v}})(X) \rangle = 0$$ for all triples $(i,j,\vec{v})$ with $\vec{w}=\vec{t}^X_i-\vec{t}^X_j-\vec{v}\in \Min X$. Using equation~\eqref{eqn:summing_gradients}, which we obtained in the proof of Lemma~\ref{lem:strong-eutaxy}, we get $\langle N, (\vec{w}\vec{w}^t,\vec{0}) \rangle = Q^N[\vec{w}]=0$ for every fixed $\vec{w}\in \Min X$. By Proposition~\ref{prop:prefect-implies-d-linear-indep} there exist $d$~linearly independent $\vec{w}$ in $\Min X$, which implies $Q^N=0$. Using \eqref{eqn:gradient-condition}, we obtain \begin{equation} \label{eqn:orthogonality-reduced-to} 0 = \langle N, (\grad p_{i,j,\vec{v}})(X) \rangle = 2 (\vec{t}_i^N-\vec{t}_j^N)^t Q \vec{w} . \end{equation} If $\vec{t}_i^N-\vec{t}_j^N=\vec{0}$ for some pair $(i,j)$ we can apply Proposition \ref{prop:sufficient-for-m-extreme}. Note that this includes in particular the case $i=j=m$ ($\vec{t}_i^N=\vec{t}_j^N=\vec{0}$) if $\vec{v}\in\mathbb Z^d\cap \Min X$. So we may assume that such $\vec{v}$ do not exist. 
We choose~$d$~linearly independent vectors $\vec{w}_1,\ldots , \vec{w}_d \in \Min X$ (which exist by Proposition~\ref{prop:prefect-implies-d-linear-indep}). By the assumption that none of the $\vec{w}_i$ is integral and by the assumption that $X$ represents a lattice, each $\vec{w}_i$ connects the origin~$\vec{t}^X_m=\vec{0}$ to another translation vector $\vec{t}^X_j$ (with $j\not=m$) via $\vec{w}_i = \vec{t}^X_j -\vec{v}$ for some $\vec{v}\in \mathbb Z^d$. In the same way each of the chosen minimal vectors connects the translation vector with index~$i$ to other translation vectors. We denote by~$I$ the subset of $\{1,\dots, m\}$ that is connected to the index~$m$ (respectively to the origin~$\vec{0}$) via a sequence of such links through the chosen $d$~minimal vectors. For each index~$i\in I$ we get from the minimal vectors $d$~independent linear conditions~\eqref{eqn:orthogonality-reduced-to} for the differences $\vec{t}_i^N-\vec{t}_j^N$, with suitable $j\in I\setminus\{i\}$. Overall we obtain $d|I|$~independent equations for~$d|I|$ differences. We deduce that all of them vanish. Moreover, as $\vec{t}_m^N = \vec{0}$ we even find $\vec{t}_i^N = \vec{0}$ for all indices~$i\in I$. \end{proof} The root lattices $\mathsf{A}_d$, $\mathsf{D}_d$ and $\mathsf{E}_d$, as well as the Leech lattice are known to be perfect and strongly eutactic (cf.~\cite{martinet-2003}). These lattices are known to solve the lattice sphere packing problem in dimensions $d\leq 8$ and $d=24$ (see Table~\ref{tab:sphere-packing-results}). As an immediate consequence of Theorem~\ref{thm:main-periodic}, we find that they cannot locally be improved to a periodic non-lattice set with greater sphere packing density. \begin{corollary} \label{cor:root_lattice_periodic_extreme} The lattices $\mathsf{A}_d$, for $d\geq 2$, $\mathsf{D}_d$, for $d\geq 3$, and $\mathsf{E}_d$, for $d=6,7,8$, as well as the Leech lattice are periodic extreme.
\end{corollary} We also checked whether or not Theorem~\ref{thm:main-periodic} can be applied to other dimensions~$d \leq 24$. For these dimensions the so-called {\em laminated lattices}~$\Lambda_d$ and {\em sections $K_d$ of the Leech lattice} give the densest known lattice sphere packings. The lattices~$K_d$ are different from $\Lambda_d$ (and at the same time give the densest known lattice sphere packings) only in dimensions $d=11,12,13$. For these $d$, the lattice $K_d$ is strongly eutactic only for $d=12$, when $K_d$ is also known as the {\em Coxeter-Todd lattice}. The laminated lattices $\Lambda_d$ give the densest known packing lattices in dimensions $d=9,10$ and $d=14,\dots,24$ (for $d=18,\dots,24$ they coincide with $K_d$). Among those values for $d$, the laminated lattices $\Lambda_d$ are strongly eutactic if and only if $d=15,16$ or $d\geq 20$. Concluding, we cannot exclude that the densest known lattice sphere packings in dimensions $d\in\{9,10,11,13,14,17,18,19\}$ can locally be improved to better periodic sphere packings. Further analysis is required here. \section{Floating and strict periodic extreme lattices} The last step of the proof of Theorem~\ref{thm:main-periodic} has a vivid interpretation if we think of a sphere packing described by the given lattice. Let $X=(Q^X,\vec{t}^X)$ be one of its representations and let $A$ denote a sublattice basis with Gram matrix $Q^X$. Then the sublattice translates $A(\vec{t}^X_i+\mathbb Z^d)$ with $i\in I$ form a ``rigid component'' of the sphere packing. If we do not want to decrease the sphere packing density in a local deformation we have to move all of its translates simultaneously. This rigid component may actually be larger than the one used in the proof of Theorem~\ref{thm:main-periodic}. It may even consist of the whole packing.
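The eutaxy computations above are easy to verify numerically. The following sketch checks, for the Gram matrices of $\mathbb Z^2$ and of the root lattice $\mathsf{A}_2$, whether $Q^{-1} = \alpha \sum_{\vec{w}\in\Min Q} \vec{w}\vec{w}^t$ holds for some $\alpha>0$ (the form of the strong eutaxy condition used in the proof of Lemma~\ref{lem:strong-eutaxy}); the brute-force search for $\Min Q$ over a small coordinate box is an illustrative shortcut that suffices for these two examples.

```python
import itertools
import numpy as np

def minimal_vectors(Q, box=2):
    """Brute-force Min Q: nonzero integer vectors of minimal Q-norm in a small box."""
    cands = [np.array(v) for v in itertools.product(range(-box, box + 1), repeat=2)
             if v != (0, 0)]
    norms = [v @ Q @ v for v in cands]
    m = min(norms)
    return [v for v, n in zip(cands, norms) if abs(n - m) < 1e-9]

def strongly_eutactic(Q, tol=1e-9):
    """Test Q^{-1} = alpha * sum_{w in Min Q} w w^t for some alpha > 0."""
    S = sum(np.outer(w, w) for w in minimal_vectors(Q))
    Qinv = np.linalg.inv(Q)
    alpha = Qinv[0, 0] / S[0, 0]          # candidate proportionality factor
    return alpha > 0 and np.allclose(Qinv, alpha * S, atol=tol)

Z2 = np.eye(2)                            # Z^2: strongly eutactic, but not perfect
A2 = np.array([[2.0, 1.0], [1.0, 2.0]])   # Gram matrix of the root lattice A_2
print(strongly_eutactic(Z2), strongly_eutactic(A2))   # True True
```

For the $\mathsf{A}_2$ example the six minimal vectors of norm $2$ give $\sum \vec{w}\vec{w}^t = \begin{pmatrix}4&-2\\-2&4\end{pmatrix}$ and hence $\alpha = 1/6$.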
A maximal rigid component of translates can be described via an abstract graph with vertices in $\{1,\ldots,m\}$: $(i,j)$ is an edge whenever there is some $\vec{v}\in \mathbb Z^d$ such that $\vec{t}^X_i-\vec{t}^X_j-\vec{v}\in\Min X$. Let $I$ be the set of indices~$i$ (vertices of the graph) connected by a path to~$m$. If $|I|=m$ the whole packing forms one rigid component. If $I$ is a strict subset of $\{1,\ldots,m\}$ we say a corresponding packing or lattice is {\em floating}. In a floating packing each connected component of the graph above corresponds to a union of translates which can jointly locally be moved without changing~$\lambda (X)$ and~$\delta(X)$ respectively. Examples are the fluid diamond packings described in the example of Section~\ref{sec:local-analysis}. The same applies to their higher-dimensional generalizations $\mathsf{D}_d^+ = \mathsf{D}_d \cup \left( \mathsf{D}_d + ( \tfrac{1}{2},\ldots, \tfrac{1}{2} ) \right)$ for $d\geq 10$ (see \cite[Section~4.7.3]{cs-1998}). For even~$d$ these $2$-periodic sets are actually lattices (hence $1$-periodic). In fact $\mathsf{E}_8 = \mathsf{D}_8^+$. \bigskip We note that Theorem~\ref{thm:main-periodic} and Corollary~\ref{cor:root_lattice_periodic_extreme} give statements about local optimality of lattices, but not about strict local optimality. With the assumptions of Theorem~\ref{thm:main-periodic} alone strict local optimality cannot be ensured, as shown by floating lattices like $\mathsf{D}_d^+$ for even~$d\geq 10$. These lattices have the same minimal vectors as the corresponding root lattice~$\mathsf{D}_d$ and therefore they give a series of perfect and strongly eutactic lattices that are periodic extreme by Theorem~\ref{thm:main-periodic}. However, they can locally be modified to other $2$-periodic sets of the same density. We think that a strengthening of Corollary~\ref{cor:root_lattice_periodic_extreme} is possible for certain lattices that are non-floating, perfect and strongly eutactic.
These include in particular the $\mathsf{E}_8$ root lattice and the Leech lattice. We think it is possible to show that these lattices are {\em strict periodic extreme}, meaning they are isolated $m$-extreme for all possible representations $X\in{\mathcal S}^{d,m}_{>0}$. By Lemma~\ref{lem:strong-eutaxy} and Theorem~\ref{thm:m-extreme-characterization} one has to show that a given non-floating, perfect and strongly eutactic lattice is $m$-perfect for every~$m$. Here some further work is required... \section*{Acknowledgement} The author thanks Henry Cohn, Renaud Coulangeon, Jacques Martinet, Frank Vallentin and Giovanni Zanzotto for many useful discussions. He moreover thanks an anonymous referee for several helpful suggestions. \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{Introduction} In deep reinforcement learning, building high-quality policies is challenging when the feature space of states is small and the training data is limited. In many real-world applications, however, datasets from clients are often privacy sensitive \cite{DBLP:conf/nips/DuchiJW12} and it is often difficult for such a data center to guarantee building high-quality models. To deal with the issue, Konecny et al. propose a new learning setting, namely federated learning, whose goal is to train a classification or clustering model with training data involving texts, images or videos distributed over a large number of clients \cite{DBLP:journals/corr/KonecnyMYRSB16,DBLP:conf/aistats/McMahanMRHA17}. Different from previous federated learning settings (cf. \cite{DBLP:journals/tist/YangLCT19}), we propose a novel federated learning framework based on reinforcement learning \cite{DBLP:books/lib/SuttonB98,Mnih2015Human,pmlr-v80-co-reyes18a}, i.e., \emph{\textbf{Fed}erated deep \textbf{R}einforcement \textbf{L}earning} ({{\tt FedRL}}), which aims to learn a private Q-network policy for each agent by sharing limited information (i.e., output of the Q-network) among agents. The information is ``encoded'' when it is sent to others and ``decoded'' when it is received by others. We assume that some agents have \emph{rewards} corresponding to states and actions, while others have only observed states without \emph{rewards}. Without rewards, those agents are unable to build decision policies on their own information. We claim that all agents benefit from joining the federation in building decision policies. There are many applications regarding federated reinforcement learning. \emph{For example, in the manufacturing industry, producing products may involve various factories which produce different components of the products. Factories' decision policies are private and will not be shared with each other.
On the other hand, building high-quality individual decision policies on their own is often difficult due to their limited businesses and lack of rewards (for some factories). It is thus helpful for them to learn decision policies federatively under the condition that private data is not given away. Another example is building medical treatment policies for patients in hospitals. Patients may be treated in some hospitals and never give feedback on the treatments, which indicates these hospitals are unable to collect rewards based on the treatments given to the patients and build treatment decision policies for patients. In addition, data records about patients are private and may not be shared among hospitals. It is thus necessary to learn treatment policies for hospitals federatively. } Our {{\tt FedRL}} framework is different from multi-agent reinforcement learning, which is concerned with a set of autonomous agents that observe global states (or partial states which are directly shared to make ``global'' states), select an individual action and receive a team reward (or each agent receives an individual reward but shares it with other agents) \cite{DBLP:journals/corr/TampuuMKKKAAV15,DBLP:conf/atal/LeiboZLMG17,DBLP:conf/nips/FoersterAFW16}. {{\tt FedRL}} assumes agents do not share their partial observations and some agents are unable to receive rewards. Our {{\tt FedRL}} framework is also different from transfer learning in reinforcement learning, which aims to transfer experience gained in learning to perform one task to help improve learning performance in a related but different task or agent, assuming observations are shared with each other \cite{Taylor:2009,NIPS2018_7856}, while {{\tt FedRL}} assumes states cannot be shared among agents. Our {{\tt FedRL}} framework functions in three phases. Initially, each agent collects output values of Q-networks from other agents, which are ``encrypted'' with Gaussian differential privacy.
Then it builds a shared value network, e.g., MLP (multilayer perceptron), to compute a global Q-network output with its own Q-network output and the encrypted values as input. Finally, it updates both the shared value network and its own Q-network based on the global Q-network output. Note that the MLP is shared among agents while agents' own Q-networks are unknown to others and should not be inferred based on the encrypted Q-network output shared in the training process. In the remainder of the paper, we first review previous work related to our {{\tt FedRL}} framework, and then present the problem formulation of {{\tt FedRL}}. After that, we introduce our {{\tt FedRL}} framework in detail. Finally we evaluate our {{\tt FedRL}} framework in the Grid-World domain with various sizes and the Text2Actions domain. \ignore{ Deep reinforcement learning aims to learn complex skills or decision policies from observations \cite{DBLP:journals/corr/MnihBMGLHSK16,DBLP:journals/jmlr/LevineFDA16}. Despite the success of previous approaches, however, it is often difficult or time-consuming to learn an individual policy of high-quality due to its limited observed data (numbers of features and instances are both limited) in many complex domains involving multiple agents. A higher-level policy that is provided with temporally extended and intelligent behaviours can reason at a higher level of abstraction and solve more temporally-extended tasks. Furthermore, the same lower-level skills could be reused to accomplish multiple tasks efficiently. \cite{ijcai2018-774} \cite{DBLP:conf/aaai/Bou-AmmarERT15} Deep reinforcement learning has been demonstrated effective in many applications, such as ... It is challenging when there are limited training data or exploration-exploitation experiences from environments.
There have been many approaches to learn policies from limited training data, such as transfer reinforcement learning \cite{}, multi-agent reinforcement learning \cite{} distributed stochastic gradient descent \cite{}, distributed multi-task reinforcement learning \cite{}. Current solutions cannot handle the issues of model or data privacies. We thus need to build a federation of agents to do reinforcement learning. Present an example to motivate federated reinforcement learning: medical diagnosis. Present our solutions. } \section{Related Work} The nascent field of federated learning considers training statistical models directly on devices \cite{DBLP:journals/corr/KonecnyMR15,DBLP:conf/aistats/McMahanMRHA17}. The aim in federated learning is to fit a model to data generated by distributed nodes. Each node collects data in a non-IID manner across the network, with data on each node being generated by a distinct distribution. There are typically a large number of nodes in the network, and communication is often a significant bottleneck. Different from previous work that train a single global model across the network \cite{DBLP:journals/corr/KonecnyMR15,DBLP:journals/corr/KonecnyMYRSB16,DBLP:conf/aistats/McMahanMRHA17}, Smith et al. propose to learn separate models for each node which is naturally captured through a multi-task learning (MTL) framework, where the goal is to consider fitting separate but related models simultaneously \cite{DBLP:conf/nips/SmithCST17}. Different from those federated learning approaches, we consider federated settings in reinforcement learning. Our work is also related to multi-agent reinforcement learning (MARL) which involves a set of agents in a shared environment. A straightforward way to MARL is to extend the single-agent RL approaches. 
Q-learning has been extended to cooperative multi-agent settings, namely Independent Q-learning (IQL), in which each agent observes the global state, selects an individual action and receives a team reward \cite{DBLP:journals/corr/TampuuMKKKAAV15,DBLP:conf/atal/LeiboZLMG17}. One challenge of MARL is that multi-agent domains are non-stationary from each agent's perspective, due to other agents' interactions in the shared environment. To address this issue, \cite{DBLP:conf/icml/OmidshafieiPAHV17} propose to explore Concurrent Experience Replay Trajectories (CERTs) structures, which store different agents' histories, and align them together based on the episode indices and time steps. Because the joint action space grows exponentially with the number of agents, learning becomes very difficult under the partial observability caused by limited communication when the number of agents is large. \cite{DBLP:conf/nips/LoweWTHAM17} thus propose to solve the MARL problem through a Centralized Critic and a Decentralized Actor, and \cite{DBLP:conf/icml/RashidSWFFW18} propose to exploit a linear decomposition of the joint value function across agents. Different from MARL, our {{\tt FedRL}} framework assumes agents do not share their partial observations and some agents are unable to receive rewards, instead of assuming observations are sharable and all agents are able to receive rewards. \ignore{ \cite{pmlr-v80-co-reyes18a} Reinforcement learning (RL) together with supervised learning and unsupervised learning are the fundamental branches of machine learning. RL problems can be typically formalized by Markov Decision Processes (MDPs) in which the agent continuously interact with the environment and receive some feedback (reward) from the environment. The agent's task is to maximize the cumulative reward (the total reward it obtains in the long run).
The model of the MDPs, which describes the dynamics of the environment, is usually unknown, so the methods that solve the MDPs are called model-free methods. DQN \cite{Mnih2015Human}, which adopts a convolutional neural network to approximate the Q-function, is one of the most popular model-free method. There are many extensions of DQN, such as DRQN \cite{DBLP:conf/aaaifs/HausknechtS15} which make use of recurrent neural networks when approximating the Q-function. There are another line of model-free approaches that consider not only the value function but also the explicit policy. They are called Actor-Critic \cite{DBLP:books/lib/SuttonB98} because they consider the value function as a critic and the policy as an actor. Recently, there are some varieties, e.g. A3C \cite{DBLP:conf/icml/MnihBMGLHSK16} which considers the advantageous action-value function.} \section{Problem Definition} A Markov Decision Process (MDP) can be defined by $\langle S, A, T, r\rangle$, where $S$ is the state space, and $A$ is the action space. $T$ is the transition function: $S \times A \times S \rightarrow [0,1]$, i.e., $T(s,a,s') = P(s'|s,a)$, specifying the probability of next state $s' \in S$ given current state $s \in S$ and action $a \in A$ applied on $s$. $r$ is the reward function: $S \rightarrow \mathcal{R}$, where $\mathcal{R}$ is the space of real numbers. Given a policy $\pi:S\rightarrow A$, the value function $V^{\pi}(s)$ and the Q-function $Q^{\pi}(s,a)$ at step $t+1$ can be updated from their values at step $t$: \[V_{t+1}^{\pi}(s)=r(s)+\sum_{s'\in S}T(s,\pi(s),s')V_t^{\pi}(s'),\] and \[Q_{t+1}^{\pi}(s,a)=r(s)+\sum_{s'\in S}T(s,a,s')V_t^{\pi}(s'),\] for $t\in\{0,\ldots,K-1\}$. The solution to an MDP problem is the best policy $\pi^*$ such that $V^{\pi^*}(s)=\max_{\pi}V^{\pi}(s)$ or $Q^{\pi^*}(s,\pi^*(s))=\max_{\pi}Q^{\pi}(s,\pi(s))$.
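The finite-horizon updates above translate directly into code. The sketch below runs the optimal-policy variant of the $Q$-update, obtained by replacing $V_t^{\pi}(s')$ with $\max_{a'}Q_t(s',a')$, on a hypothetical two-state MDP whose transition matrices and state-only reward are illustrative assumptions.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP: action 0 stays put, action 1 swaps states.
# T[a] is the |S| x |S| transition matrix of action a; r depends on the state only.
T = np.array([[[1.0, 0.0], [0.0, 1.0]],   # a = 0: stay
              [[0.0, 1.0], [1.0, 0.0]]])  # a = 1: swap
r = np.array([0.0, 1.0])                  # only state 1 is rewarding
K = 5                                     # horizon, t in {0, ..., K-1}

# Optimal-policy variant of the update above:
# Q_{t+1}(s,a) = r(s) + sum_{s'} T(s,a,s') * max_{a'} Q_t(s',a')
Q = np.zeros((2, 2))                      # Q[s, a], with Q_0 = 0
for _ in range(K):
    V = Q.max(axis=1)                     # V_t(s') = max_{a'} Q_t(s', a')
    Q = r[:, None] + (T @ V).T            # (T @ V)[a, s] = sum_{s'} T(s,a,s') V(s')

pi = Q.argmax(axis=1)                     # pi*(s) = argmax_a Q(s, a)
print(pi)                                 # [1 0]: swap out of state 0, stay in state 1
```

Already at $K=5$ the greedy policy swaps out of the unrewarding state and stays in the rewarding one.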
In DQN (Deep Q-Network), given transition function $T$ is unknown, the Q-function is represented by a Q-network $Q(s,a; \theta)$ with $\theta$ as parameters of the network, and updated by \[Q_{t+1}(s,a;\theta)=E_{s'}\Big\{r(s)+\gamma\max_{a'\in A}Q_t(s',a';\theta)|s,a\Big\},\] as done by \cite{Mnih2015Human}. To learn the parameters $\theta$, one way is to store transitions $\langle s,a,s',r\rangle$ in replay memories $\Omega$ and exploit a mini-batch sampling to repeatedly update $\theta$ \cite{Mnih2015Human}. Once $\theta$ is learnt, the policy $\pi^*$ can be extracted from $Q(s,a;\theta)$, \[\pi^*(s)=\arg\max_{a\in A}Q(s,a;\theta).\] \ignore{ Suppose there are two agents $\alpha$ and $\beta$ making decisions in two different environments, e.g., two doctors treating patients at two different hospitals, respectively. Agent $\alpha$ is able to collect reward $r$ and new state $s'$ from the environment given current state $s$ and action $a$, i.e., agent $\alpha$ is capable of building complete transitions $\langle s,a,s',r\rangle$ from the environment (e.g., patients are treated in the hospital). On the other hand, agent $\beta$ can only collect current state $s$ and action $a$, without any information about next state $s'$ and reward $r$ from the environment. For example, patients that doctor $\beta$ treats never come back such that $\beta$ cannot collect new state $s'$ about the patients and reward $r$ based on the treatment (i.e., action $a$) $\beta$ gave to the patient (i.e., current state $s$). } We define our \emph{federated} deep reinforcement learning problem by: \emph{given transitions $\mathcal{D}_{\alpha} = \{\langle s_{\alpha},a_{\alpha},s'_{\alpha},r_{\alpha}\rangle\}$ collected by agent $\alpha$, and pairs of states and actions $\mathcal{D}_{\beta}=\{\langle s_{\beta},a_{\beta}\rangle\}$ collected by agent $\beta$, we aim to federatively build policies $\pi^*_{\alpha}$ and $\pi^*_{\beta}$ for agents $\alpha$ and $\beta$, respectively. 
} Note that in this paper we consider a federation with two members for simplicity. The setting can be extended to many agents by exploiting the same federated mechanism between each pair of agents. We denote states, actions, Q-functions, and policies with respect to agents $\alpha$ and $\beta$, by ``$s_{\alpha}\in S_{\alpha}$, $a_{\alpha}\in A_{\alpha}$, $Q_{\alpha}$, $\pi^*_{\alpha}$'' and ``$s_{\beta}\in S_{\beta}$, $a_{\beta}\in A_{\beta}$, $Q_{\beta}$, $\pi^*_{\beta}$'', respectively. In our federated deep reinforcement learning problem, we assume: \textbf{A1:} The feature spaces of states $s_{\alpha}$ and $s_{\beta}$ are \emph{different} between agents $\alpha$ and $\beta$. For example, a state $s_{\alpha}$ denotes a patient's cardiogram in hospital $\alpha$, while another state $s_{\beta}$ denotes the same patient's electroencephalogram in hospital $\beta$, indicating the feature spaces of $s_{\alpha}$ and $s_\beta$ are different. \textbf{A2:} Transitions $\mathcal{D}_\alpha$ and $\mathcal{D}_\beta$ cannot be shared directly between $\alpha$ and $\beta$ when they learn their own models. The \emph{correspondences} between transitions from $\mathcal{D}_\alpha$ and $\mathcal{D}_\beta$ are, however, known to each other. In other words, agent $\alpha$ can send the ``ID'' of a transition to agent $\beta$, and agent $\beta$ can use that ``ID'' to find its corresponding transition in $\mathcal{D}_\beta$. For example, in a hospital, an ``ID'' can correspond to a specific patient. \ignore{ states $s_{\alpha}$ and rewards $r_{\alpha}$ cannot be shared with agent $\beta$, and states $s_{\beta}$ cannot be shared with agent $\beta$ as well. That is to say, } \textbf{A3:} The output of functions $Q_{\alpha}$ and $Q_{\beta}$ \emph{can} be shared with each other under the condition that they are protected by some privacy protection mechanism.
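As a concrete picture of the machinery each agent runs locally, the DQN-style loop from the previous section (replay memory $\Omega$, mini-batch sampling toward the target $r+\gamma\max_{a'}Q(s',a')$, and greedy extraction of $\pi^*$) can be sketched in tabular form; the chain environment, the hyper-parameters, and the Q-table standing in for $Q(s,a;\theta)$ are all illustrative assumptions.

```python
import random
import numpy as np

random.seed(0)
n_states, n_actions, gamma, lr = 5, 2, 0.9, 0.5
Q = np.zeros((n_states, n_actions))   # tabular stand-in for Q(s, a; theta)
memory = []                           # replay memory Omega of <s, a, s', r> tuples

def step(s, a):
    """Hypothetical chain: action 1 moves right, action 0 moves left; the end rewards."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, 1.0 if s2 == n_states - 1 else 0.0

for episode in range(500):
    s = 0
    for _ in range(12):
        a = random.randrange(n_actions)            # pure exploration, for brevity
        s2, r = step(s, a)
        memory.append((s, a, s2, r))               # store the transition in Omega
        batch = random.sample(memory, min(32, len(memory)))
        for (bs, ba, bs2, br) in batch:            # mini-batch update of the table
            target = br + gamma * Q[bs2].max()     # r + gamma * max_a' Q(s', a')
            Q[bs, ba] += lr * (target - Q[bs, ba])
        s = s2

policy = Q.argmax(axis=1)   # pi*(s) = argmax_a Q(s, a; theta)
print(policy)               # converges to "always move right" on this chain
```

In {{\tt FedRL}} it is exactly this target computation that agent $\beta$ cannot perform alone, since it never observes $r$.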
\ignore{ the networks of their own are complex enough and unknown to each other (the input of them is unknown to each other as well based on \textbf{A2}), such that $\alpha$ and $\beta$ cannot induce each other's networks.} Based on \textbf{A1-A3}, we aim to learn high-quality policies $\pi^*_{\alpha}$ and $\pi^*_{\beta}$ for agents $\alpha$ and $\beta$ while preserving the privacy of their own data and models. \ignore{ which is better than the one $\alpha$ learns by itself, and $\beta$ gains a high-quality policy $\pi^*_{\beta}$ by joining in the federation, instead of ``zero'' by itself ($\beta$ cannot build policies based on its own data $\{\langle s_{\beta}, a_{\beta}\rangle\}$). } \ignore{ In this paper, we suppose there are two agents in an environment, namely $\alpha$ and $\beta$, respectively. We would like to operate the FRL task under the following constraints: \begin{itemize} \item \textbf{\emph{Partial Observation}}: This is a fundamental setting in a POMDP. At each time step $t$, the full state of the environment, namely $s^t$ is unknown to each agent. Agent $\alpha$ observes $s_\alpha^t$ and agent $\beta$ observes $s_\beta^t$. \item \textbf{\emph{Different Receptivity}}: The abilities of receiving the feedback from the environment vary from agent to agent. Without loss of generality, we suppose that agent $\alpha$ cannot receive the instant reward while agent $\beta$ can. $r^t = R(s^t, a^t)$ is the instant reward at time $t$ based on the full state $s_t$ and the joint action $a_t = \{a_\alpha^t, a_\beta^t\} \in A$. \item \textbf{\emph{Limited Communication}}: These two agents cannot directly share their own observations with each other because the data is privacy sensitive or large in size, but they need the information from each other to build a complete model. \end{itemize} The goal of FRL is to learn a joint model that takes advantage of the information, i.e.
the states and rewards, from both agents without sharing them, which guarantees the data privacy and the cost of communication. \hankz{Give an example of the problem here!} } \section{Our {{\tt FedRL}} Approach} \ignore{ In value-based reinforcement learning methods, e.g. DQN \cite{Mnih2015Human}, the value function $V(s)$ (or action-value function $Q(s, a)$) can be seen as a mapping operation that encodes the states (or states and actions). A simple linear value function is not a good choice since it can be decoded by guessing inputs and weights. Therefore, we adopt neural networks which have complicated structures and some non-linear operations to encode the observations. We adopt the deep Q-networks \cite{Mnih2015Human} which is an extension of Q-Learning in reinforcement learning. } In this section we present our {{\tt FedRL}} framework in detail. An overview of {{\tt FedRL}} is shown in Figure \ref{alpha-beta}, where the left part is the model of agent $\alpha$, and the right part is the model of agent $\beta$. Each model is composed of a local Q-network with parameters $\theta_\alpha$ for agent $\alpha$, $\theta_\beta$ for agent $\beta$, and a global MLP module with parameters $\theta_g$ for both $\alpha$ and $\beta$. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{alpha-beta} \caption{The networks of agents $\alpha$ and $\beta$} \label{alpha-beta} \end{figure} \ignore{ \begin{figure} \centering \begin{subfigure}[b]{0.46\textwidth} \centering \includegraphics[width=\textwidth]{alpha-beta} \caption{The networks of agents $\alpha$ and $\beta$} \label{alpha-beta} \end{subfigure} \hfill \begin{subfigure}[b]{0.53\textwidth} \centering \includegraphics[width=\textwidth]{train-test} \caption{The training and testing procedures of agents $\alpha$ and $\beta$} \label{train-test} \end{subfigure} \end{figure} } \ignore{ \begin{figure}[!ht] \centering \includegraphics[width=0.39\textwidth]{FRL_framework.pdf} \caption{{{\tt FedRL}} framework.
$s_\alpha^t, s_\beta^t$ are observations of agents at time step $t$. $\theta_\alpha, \theta_\beta$ are parameters of Q-networks. $Q_\alpha, Q_\beta$ are two real-valued vectors which indicate the basic q-values corresponding to the two Q-networks, $d_a$ is the dimension of the action space. $N(0, \sigma^2)$ is the Gaussian noise. $Q_f$ is the federated Q-values and $a_t$ is the predicted action at $t$.} \label{framework} \end{figure} } \paragraph{Basic Q-networks} We build two Q-networks for agents $\alpha$ and $\beta$, denoted by $Q_\alpha(s_\alpha, a_\alpha; \theta_\alpha)$ and $Q_\beta(s_\beta, a_\beta; \theta_\beta)$, respectively, where $\theta_\alpha$ and $\theta_\beta$ are parameters of the Q-networks. The outputs of these two basic Q-networks are not directly used to predict the actions, but taken as input of the MLP module. \paragraph{Gaussian differential privacy} To avoid agents ``inducing'' models of each other according to repeatedly received outputs of each other's Q-networks during training, we consider using differential privacy \cite{DBLP:journals/fttcs/DworkR14} to protect the output of each agent's Q-network. There are various mechanisms of differential privacy such as the Gaussian mechanism \cite{DBLP:conf/ccs/AbadiCGMMT016} and the Binomial mechanism \cite{DBLP:conf/nips/AgarwalSYKM18}. In this paper, we exploit the Gaussian mechanism since the output of the MLP with Gaussian input is Gaussian itself. In previous federated learning settings, the mechanism is applied to the gradients of agents (clients) before being sent to the server or other agents (clients). In our {{\tt FedRL}} framework, we send the output of one agent's Q-network to another, so the Gaussian noise is added to the output rather than the gradients.
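Concretely, the mechanism amounts to adding independent $N(0,\sigma^2)$ noise to every entry of the Q-network's output vector before it leaves the agent; in the sketch below the Q-values and $\sigma$ are illustrative.

```python
import numpy as np

def gaussian_mechanism(q_values, sigma, rng):
    """Perturb a Q-network output vector with i.i.d. N(0, sigma^2) noise."""
    return q_values + rng.normal(0.0, sigma, size=q_values.shape)

rng = np.random.default_rng(0)
q_alpha = np.array([0.2, 1.5, -0.3])   # illustrative output of Q_alpha(s, . ; theta_alpha)
shared = gaussian_mechanism(q_alpha, sigma=0.1, rng=rng)  # what agent beta receives
print(shared.shape)                    # (3,): one perturbed Q-value per action
```

The receiving agent only ever sees the perturbed vector, never the clean Q-values or the parameters that produced them.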
The mechanism is defined by \begin{eqnarray} \hat{Q}_\alpha(s_\alpha, a_\alpha; \theta_\alpha) = Q_\alpha(s_\alpha, a_\alpha; \theta_\alpha) + N(0, \sigma^2) \label{add noise} \\ \hat{Q}_\beta(s_\beta, a_\beta; \theta_\beta) = Q_\beta(s_\beta, a_\beta; \theta_\beta) + N(0, \sigma^2) \end{eqnarray} where $N(0, \sigma^2)$ is the Gaussian distribution with mean 0 and standard deviation $\sigma$. \paragraph{Federated Q-network} We build a new Q-network, namely \emph{federated} Q-network denoted by $Q_f$, to leverage the outputs of the two basic Q-networks with Gaussian noise based on MLP, which is defined by \begin{eqnarray} && Q_f(\cdot; \theta_\alpha, \theta_\beta, \theta_g) = \notag \\ && MLP([\hat{Q}_\alpha(s_\alpha, a_\alpha; \theta_\alpha)|\hat{Q}_\beta(s_\beta, a_\beta; \theta_\beta)];\theta_g) \label{federated} \end{eqnarray} where $\theta_g$ is the parameters of MLP and $[\cdot | \cdot]$ indicates the concatenation operation. Note that parameters of an MLP can be shared between agents. Once MLP is updated, the updated parameters are shared with the other agent. \ignore{ \fengwf{To protect the data privacy, a common way is to consider the \textbf{differential privacy} mechanism \cite{DBLP:journals/fttcs/DworkR14}, i.e. add noise to the values (e.g. gradients) before transmission. A popular mechanism is the Gaussian mechanism \cite{DBLP:conf/ccs/AbadiCGMMT016} since the sum of Gaussian is Gaussian itself. There is also another way which applies the Binomial mechanism \cite{DBLP:conf/nips/AgarwalSYKM18}.} } \ignore{ Our {{\tt FedRL}} approach can be seen as a linear combination of $Q_\alpha(s_\alpha, a; \theta_\alpha)$ and $Q_\beta(s_\beta, a; \theta_\beta)$, \begin{equation} Q_f(s_\alpha, s_\beta, a; \theta_\alpha, \theta_\beta) = \lambda Q_\alpha(s_\alpha, a; \theta_\alpha) + (1 - \lambda)Q_\beta(s_\beta, a; \theta_\beta), \label{federated} \end{equation} where $\lambda$ is a hyper-parameter that controls the importance of Q-values of both agents. 
Figure \ref{framework} is an illustration of our approach. } \ignore{ The federated Q-function $Q_f$ does not directly take the observations $s_\alpha$ and $s_\beta$ as inputs, but the Q-values of both Q-networks. The Q-values $Q_\alpha, Q_\beta \in R^{d_a}$, where $d_a$ is the dimension of the joint action space $A$. Although the two agents share the output Q-values with each other, they cannot decrypt the original information of observations $s_\alpha, s_\beta$, because the structures of Q-networks are unshared and can be different from each other. The weights and input states of the Q-networks cannot be deduced by output Q-values. } \ignore{ We assume that agents $\alpha$ and $\beta$ share their Q-values using Gaussian differential privacy and MLP modules but have their own private basic Q-networks. Due to Gaussian differential privacy protection and private basic Q-networks (both weights and structures of Q-networks are unknown to each other), Agents $\alpha$ and $\beta$ are unable to decode the Q-networks of each other when learning parameters $\theta_\alpha$, $\theta_\beta$ and $\theta_g$. Specifically, Agent $\alpha$ updates its own parameters $\theta_\alpha$ and parameters $\theta_g$ of MLP module by viewing the output of basic Q-networks of agent $\beta$ as a constant, while agent $\beta$ updates its own parameters $\theta_\beta$ and parameters $\theta_g$ of MLP module by viewing the output of basic Q-networks of agent $\alpha$ as a constant. 
We thus define the final outputs of Q-values with respect to agents $\alpha$ and $\beta$ are:} With respect to agents $\alpha$ and $\beta$, we define each agent's federated Q-networks by viewing the other agent's basic Q-network (with Gaussian noise) as a fixed constant when updating its own basic Q-network, as shown below, \begin{eqnarray} Q_f^\alpha(\cdot,C_\beta;\theta_\alpha,\theta_g)=MLP([\hat{Q}_\alpha(\cdot; \theta_\alpha)| C_\beta]; \theta_g) \label{QAlpha} \\ Q_f^\beta(\cdot,C_\alpha;\theta_\beta,\theta_g)=MLP([C_\alpha | \hat{Q}_\beta(\cdot; \theta_\beta)]; \theta_g) \label{QBeta} \end{eqnarray} where $C_{\alpha} = \hat{Q}_\alpha(s_\alpha, a_\alpha; \theta_\alpha)$ and $C_{\beta} = \hat{Q}_\beta(s_\beta, a_\beta; \theta_\beta)$ are fixed constants when updating agent $\beta$'s basic Q-network and agent $\alpha$'s basic Q-network, respectively. \ignore{ To estimate the optimal action-value function, the federated framework performs value iteration, and Q-values are updated by iteratively applying Bellman updates, \begin{eqnarray} Q_f^\alpha(s_\alpha, a_\alpha,C_\beta; \theta_\alpha,\theta_g) = r + \gamma \max_{a'} Q_f^\alpha(s'_\alpha, a'_\alpha,C_\beta; \theta_\alpha,\theta_g) \label{QAlphaUpdate} \\ Q_f^\beta(s_\beta, a_\beta,C_\alpha; \theta_\beta,\theta_g) = r + \gamma \max_{a'} Q_f^\beta(s'_\beta, a'_\beta,C_\alpha; \theta_\beta,\theta_g) \label{QBetaUpdate} \end{eqnarray} } The Q-networks are trained by minimizing the square error loss $L^j_\alpha(\theta_\alpha,\theta_g)$ and $L^j_\beta(\theta_\beta,\theta_g)$ of agents $\alpha$ and $\beta$, \begin{eqnarray} L^j_\alpha(\theta_\alpha,\theta_g) = \mathbb{E} \bigg[ (Y^j - Q_f^\alpha(s_\alpha^j, a_\alpha^j,C_{\beta}; \theta_\alpha,\theta_g) )^2 \bigg] \label{loss} \\ L^j_\beta(\theta_\beta,\theta_g) = \mathbb{E} \bigg[ (Y^j - Q_f^\beta(s_\beta^j, a_\beta^j,C_\alpha; \theta_\beta,\theta_g) )^2 \bigg] \label{loss2} \end{eqnarray} where $Y^j = r^j + \gamma \max \limits_a Q_f^\alpha(s_\alpha^j, a,C_\beta; 
\theta_\alpha,\theta_g)$. Note that agent $\beta$ is unable to compute $Y^j$ since it does not have the reward $r$; $Y^j$ is computed by agent $\alpha$ and shared with agent $\beta$. \begin{figure}[!ht] \centering \includegraphics[width=0.48\textwidth]{train-test} \caption{The training and testing procedures} \label{train-test} \end{figure} \paragraph{Overview of training and testing} The training and testing procedures of agents $\alpha$ and $\beta$ are shown in Figure \ref{train-test}. When training, agent $\alpha$ computes $Y^j$ and sends it to agent $\beta$. Agent $\beta$ updates $\theta_\beta$ and $\theta_g$, computes $C_\beta$, and sends $\theta_g$ and $C_\beta$ to agent $\alpha$. After that, agent $\alpha$ updates $\theta_\alpha$ and $\theta_g$, computes $C_\alpha$, and sends $\theta_g$ and $C_\alpha$ to agent $\beta$. When testing, both agents compute and send $C_\alpha$ and $C_\beta$ to each other for computing $Q_f^\alpha$ and $Q_f^\beta$. The detailed training procedure is given in Algorithms \ref{FRL-alpha} and \ref{FRL-beta}. In Steps 1 and 2 of Algorithm \ref{FRL-alpha}, we initialize the basic Q-network and replay memory of agent $\alpha$ and the MLP module. In Step 3 of Algorithm \ref{FRL-alpha}, we call the function from Algorithm \ref{FRL-beta} to initialize the Q-network and replay memory of agent $\beta$. In Step 6 of Algorithm \ref{FRL-alpha}, we obtain agent $\alpha$'s observation $s_\alpha^t$. In Step 7 of Algorithm \ref{FRL-alpha}, we call the function in Algorithm \ref{FRL-beta} to obtain agent $\beta$'s observation, select the corresponding action, and calculate the output of agent $\beta$'s basic Q-network, as shown in Steps 6 to 10 of Algorithm \ref{FRL-beta}. In Steps 8 to 11 of Algorithm \ref{FRL-alpha}, we perform $\epsilon$-greedy exploration and exploitation, obtain new observations, and store the transitions in the replay memory.
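As a concrete illustration, the forward pass of Eq. (\ref{QAlpha}) and the loss of Eq. (\ref{loss}) can be sketched in a few lines of numpy. Everything below is an illustrative stand-in, not the paper's implementation: the basic Q-networks are random linear maps, the shared MLP has two layers of arbitrary width, and, for brevity, the TD target reuses the current state's Q-values instead of the next state's.

```python
import numpy as np

rng = np.random.default_rng(0)
d_a = 4   # size of the joint action space

# Stand-ins for the private basic Q-networks (random linear maps here):
# alpha sees a flattened 3x3 patch, beta a flattened 5x5 patch.
W_alpha = rng.normal(scale=0.1, size=(9, d_a))
W_beta = rng.normal(scale=0.1, size=(25, d_a))

def q_alpha(s):           # \hat{Q}_alpha(s; theta_alpha)
    return s @ W_alpha

def q_beta(s):            # \hat{Q}_beta(s; theta_beta)
    return s @ W_beta

# Shared two-layer MLP (theta_g): [Q_alpha | C_beta] -> joint Q-values.
W1 = rng.normal(scale=0.1, size=(2 * d_a, 32))
W2 = rng.normal(scale=0.1, size=(32, d_a))

def mlp(x):
    return np.maximum(x @ W1, 0.0) @ W2

s_a, s_b = rng.normal(size=9), rng.normal(size=25)
C_beta = q_beta(s_b)      # shared by agent beta; treated as a constant
Q_f_alpha = mlp(np.concatenate([q_alpha(s_a), C_beta]))

# TD target Y^j, computed by alpha (only alpha observes the reward r);
# the next state is taken to be the current one purely for brevity.
r, gamma = 1.0, 0.9
Y = r + gamma * Q_f_alpha.max()
a = int(Q_f_alpha.argmax())
loss_alpha = (Y - Q_f_alpha[a]) ** 2   # one-sample version of the loss
```

In a full implementation, gradients of this loss would flow into $\theta_\alpha$ and $\theta_g$ only, which is exactly what treating $C_\beta$ as a constant achieves.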
In Steps 12 and 13 of Algorithm \ref{FRL-alpha}, we sample a record $j$ from the memory and call the function $ComputeQBeta(j)$ of Algorithm \ref{FRL-beta} to calculate the output of agent $\beta$'s basic Q-network based on the index $j$. In Steps 14 and 15 of Algorithm \ref{FRL-alpha}, we update parameters $\theta_\alpha$ and $\theta_g$. In Steps 16 and 17 of Algorithm \ref{FRL-alpha}, we compute the output of agent $\alpha$'s basic Q-network, pass it to agent $\beta$, and call agent $\beta$'s function $UpdateQ$ to update agent $\beta$'s basic Q-network and the MLP, as shown in Steps 19 to 21 of Algorithm \ref{FRL-beta}. Note that Algorithms \ref{FRL-alpha} and \ref{FRL-beta} are executed by agents $\alpha$ and $\beta$ separately. \begin{algorithm}[!ht] \caption{\tt {{\tt FedRL}}-ALPHA} \label{FRL-alpha} \textbf{Input:} state space $S_\alpha$, action space $A_\alpha$, rewards $r$ \\ \textbf{Output:} $\theta_\alpha$,$\theta_g$ \small \begin{algorithmic}[1] \STATE Initialize $Q_\alpha, Q_f$ with random values for $\theta_\alpha, \theta_g$ \STATE Initialize replay memory $D_\alpha$ \STATE Call {{\tt {{\tt FedRL}}-BETA}.$Init()$} \FOR{episode = 1: $M$} \REPEAT \STATE Observe $s_\alpha^t$ \STATE Call {$C_{\beta}$=\tt {{\tt FedRL}}-BETA.$ComputeQBeta()$} \STATE Select a random action $a^t$ with probability $\epsilon$ \STATE Otherwise $a^t = \arg\max \limits_a Q_f^\alpha(s_\alpha^t, a, C_{\beta}; \theta_\alpha,\theta_g)$ \STATE Execute action $a^t$, obtain reward $r^t$ and state $s^{t+1}$ \STATE Observe $s_\alpha^{t+1}$, store $(s_\alpha^t, a^t, r^t, s_\alpha^{t+1})$ in $D_\alpha$ \STATE Sample $(s_\alpha^j, a^j, r^j, s_\alpha^{j+1})$ from $D_\alpha$ \STATE Call {$C_{\beta}$=\tt {{\tt FedRL}}-BETA.$ComputeQBeta(j)$} \STATE $Y^j = r^j + \gamma \max \limits_a Q_f^\alpha(s_\alpha^j, a,C_{\beta}; \theta_\alpha,\theta_g)$ \STATE Update $\theta_\alpha, \theta_g$ according to Eq.
(\ref{QAlpha}), (\ref{loss}) \STATE $C_{\alpha} = \hat{Q}_\alpha(s_\alpha^j, a^j; \theta_\alpha)$ \STATE Call {$\theta_g$=\tt {{\tt FedRL}}-BETA.$UpdateQ(Y^j,j,C_{\alpha},\theta_g)$} \UNTIL{terminal $t$} \ENDFOR \end{algorithmic} \end{algorithm} \begin{algorithm}[!ht] \caption{\tt FedRL-BETA} \label{FRL-beta} \small \textbf{Input:} state space $S_{\beta}$, action space $A_{\beta}$ \\ \textbf{Output:} $\theta_\beta$, $\theta_g$ \begin{algorithmic}[1] \FUNCTION {$Init()$}{} \STATE Initialize $Q_\beta$ with random values for $\theta_\beta$ \STATE Initialize replay memory $D_\beta$ \ENDFUNCTION \FUNCTION{$ComputeQBeta()$}{} \STATE Observe $s_\beta$ \STATE Select a random action $a_\beta\in A_{\beta}$ with probability $\epsilon$ \STATE Otherwise $a_{\beta} = \arg\max \limits_{a_{\beta}} Q_\beta(s_\beta, a_\beta; \theta_\beta)$ \STATE Store $(s_\beta,a_\beta)$ in $D_\beta$ \STATE Let $C_\beta=\hat{Q}_\beta(s_\beta, a_\beta; \theta_\beta)$ \STATE \textbf{return} $C_\beta$ \ENDFUNCTION \FUNCTION{$ComputeQBeta(j)$}{} \STATE Select $(s_\beta,a_\beta)$ from $D_\beta$ based on index $j$ \STATE Let $C_\beta=\hat{Q}_\beta(s_\beta, a_\beta; \theta_\beta)$ \STATE \textbf{return} $C_\beta$ \ENDFUNCTION \FUNCTION{$UpdateQ(Y^j,j,C_{\alpha},\theta_g)$}{} \STATE Select $(s^j_\beta,a^j_\beta)$ from $D_\beta$ based on index $j$ \STATE Update $\theta_\beta$, $\theta_g$ based on Eq.
(\ref{QBeta}), (\ref{loss2}) \STATE \textbf{return} $\theta_g$ \ENDFUNCTION \end{algorithmic} \end{algorithm} \section{Experiments} In the experiment, we evaluate {{\tt FedRL}}\footnote{The source code and datasets are available from https://github.com/FRL2019/FRL} in the following two aspects.
Firstly, we would like to see if agent $\alpha$ can learn better policies by joining the federation than without joining it (agent $\alpha$ can build policies by itself since it has complete transitions $\{\langle s,a,s',r\rangle\}$, while agent $\beta$ cannot build policies without joining the federation since it has only pairs of states and actions $\{\langle s, a\rangle\}$). Secondly, we would like to see if our {{\tt FedRL}} approach can learn high-quality policies that are close to the ones learnt by directly combining the data of agents $\alpha$ and $\beta$, neglecting the data-privacy issue between $\alpha$ and $\beta$. To do this, we compared our {{\tt FedRL}} approach to the following baselines: \begin{itemize} \item \textbf{DQN-alpha}: a deep Q-network \cite{Mnih2015Human} trained with agent $\alpha$'s data only. It takes observations $s_\alpha$ as input and outputs actions corresponding to $s_\alpha$. \item \textbf{DQN-full}: a deep Q-network trained by directly putting together the data from both agents $\alpha$ and $\beta$, i.e., neglecting data privacy between agents $\alpha$ and $\beta$. \item \textbf{FCN-alpha}: a fully convolutional network (FCN) trained with agent $\alpha$'s data only, similar to DQN-alpha. FCN-alpha builds policies via supervised learning \cite{DBLP:conf/aaai/FurutaIY19}, i.e., viewing states as input and actions as labels. \item \textbf{FCN-full}: a fully convolutional network trained with all the data of agents $\alpha$ and $\beta$ put together directly, similar to DQN-full. \end{itemize} In our experiment, we used kernels of size $3 \times 3$ in the CNN and two fully-connected layers of sizes 32 and 4 in the MLP. We set the standard deviation $\sigma$ of the Gaussian differential privacy mechanism to 1. We adopted the Adam optimizer with learning rate 0.001 and the ReLU activation for all models.
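The Gaussian perturbation applied to shared Q-values can be sketched as follows; the function name is ours, the setting $\sigma = 1$ matches the experiments, and for simplicity no sensitivity clipping is shown:

```python
import numpy as np

def share_with_gaussian_noise(q_values, sigma=1.0, rng=None):
    """Perturb a Q-value vector with N(0, sigma^2) noise before sharing it.

    Illustrative helper, not from the paper's code; sigma=1 matches the
    experimental setting and no sensitivity clipping is applied.
    """
    rng = np.random.default_rng() if rng is None else rng
    return q_values + rng.normal(0.0, sigma, size=q_values.shape)

q = np.array([0.2, -1.3, 0.7, 0.0])
C_beta = share_with_gaussian_noise(q, sigma=1.0, rng=np.random.default_rng(42))
```

The receiving agent only ever sees the noisy vector, so neither the weights nor the inputs of the sender's basic Q-network can be recovered from it.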
We evaluated our {{\tt FedRL}} in comparison with the baselines in two domains, i.e., Grid-World and Text2Action. \subsection{Evaluation in Grid-World Domain} We first conducted experiments in the Grid-World domain with randomly placed obstacles \cite{DBLP:conf/nips/TamarLAWT16}. The two agents $\alpha$ and $\beta$ were randomly placed in the environment as well, as shown in Figure \ref{grid world example}. The two agents aim to navigate along optimal paths (i.e., the shortest paths that avoid obstacles) to meet each other. In this domain, we define states, actions and rewards as follows. \begin{figure}[!ht] \centering \includegraphics[width=0.26\textwidth]{32x32-alpha-beta.pdf} \caption{A $32 \times 32$ grid-world domain with agents $\alpha$ and $\beta$. The rectangles in blue are the boundaries of the agents' observations.} \label{grid world example} \end{figure} \begin{itemize} \item \textbf{States:} The domain is represented by an $N_g \times N_g$ binary-valued matrix (0 for obstacle, 1 otherwise), where $N_g$ is the size of the domain. We evaluated our approach with respect to different sizes, i.e., $N_g = 8, 16, 32$. The observed state of agent $\alpha$, denoted by $s_\alpha$, was set to be a $3 \times 3$ matrix with the current position of agent $\alpha$ in the center of the matrix. The observed state of agent $\beta$, denoted by $s_\beta$, was set to be a $5 \times 5$ matrix with the current position of agent $\beta$ in the center of the matrix. \ignore{Since the single $3 \times 3$ or $5 \times 5$ grid information may be insufficient for an agent to confirm its location and make a decision in a partially observed environment, i.e. there are many $3 \times 3$ or $5 \times 5$ grids which are the same, we adopt a sequence of observations to construct an observation, i.e. $s_\alpha^t = [s_\alpha^{t-H+1}, s_\alpha^{t-H+2}, \dots, s_\alpha^t]$, where $s_\alpha^t$ is a $3 \times 3$ grid observation at time $t$ of agent $\alpha$ and $H$ is the length of history.
We calculate the average length of optimal paths for each domain, and find that each agent needs to move 2.4 steps in the $8 \times 8$ grid, 4.8 steps in the $16 \times 16$ grid and 9.8 steps in the $32 \times 32$ grid on average. Therefore, we empirically set $H = 2, 4, 8$ for the $8 \times 8, 16 \times 16, 32 \times 32$ domains, respectively.} \item \textbf{Actions:} There are 4 actions for each agent, i.e., moving in 4 directions, denoted by $\{east, south, west, north\}$. \item \textbf{Rewards:} The reward is composed of two parts, i.e., a local reward $r_l$ and a global reward $r_g$. When an agent hits an obstacle, $r_l$ is set to -10; when an agent meets the other agent, $r_l$ is set to +50; otherwise, $r_l$ is set to -1. Considering that the goal of the task is to make the two agents eventually meet each other, we exploited an additional reward based on the distance between the two agents, namely the global reward $r_g = c / md(\alpha, \beta)$, where $md(\alpha, \beta)$ is the Manhattan distance between the two agents, and $c$ is a regularization factor which is set to the dimension of the domain, i.e., $c = N_g$. The final reward $r$ is calculated by $r=r_l+r_g$. \ignore{ \textbf{Q-Network:} We devise a CNN-based Q-Network, with a convolutional layer of 32 kernels of size $3 \times 3$ and paddings. The outputs of the convolutional layer are then flatten and fed to two fully connected layers with 256 and 4 } \ignore{ \textbf{Baseline:} DQN-alpha only uses the alpha Q-network. CNN-alpha has the same network structure as DQN-alpha, except that it is trained in the supervised way. DQN-full uses both networks, i.e. $s_\alpha$ and $s_\beta$ will be input to two different convolutional layer with $3 \times 3$ sized kernels, then the outputs will be flatten, concatenated and input into the fully connected layers with size 256 and 4. CNN-full has the same network structure as DQN-full, except that it is trained in the supervised way.
\textbf{{\tt FedRL}} federates the two networks according to Eq (\ref{add noise}) - Eq(\ref{federated}), where the MLP module comprises two fully-connected layers with size 32 and 4. The noise probability $p$ is set to 0.5, $\sigma$ (the standard deviation of Gaussian noise) is set to 1. For all models, we adopt the Adam optimizer with learning rate 0.001 and the ReLU activation. } \item \textbf{Dataset:} We generated 8000 different maps (or matrices) for each size of $8 \times 8, 16 \times 16$ and $32 \times 32$. In each map, we randomly chose two positions for the two agents, and computed the optimal path (shortest path) which was compared to the paths predicted by our {{\tt FedRL}} and baselines. We randomly split the 8000 maps into 6400 for training, 800 for validation, and 800 for testing. \item \textbf{Criteria:} In each training episode, we first took the initial state and predicted an action with a model. We then got a new state and took it as the new input of the model at the second time step. We repeated the procedure until the two agents met each other, which is indicated as \emph{a successful episode}, or it exceeded the maximum time step $T_m$, which is indicated as \emph{a failure episode}. We set $T_m$ to be twice the length of the longest optimal path, i.e. $T_m = 38, 86, 178$ for each size of $8 \times 8, 16 \times 16$ and $32 \times 32$, respectively. 
We finally computed the success rate $SuccRate$ and the average reward $AvgRwd$\ignore{, and trajectory difference $TrajDiff$}, i.e., \[SuccRate = \frac{\# SuccessfulEpisodes}{\# TotalEpisodes},\] and \[AvgRwd = \frac{TotalCumulativeReward}{\# TotalEpisodes},\]\ignore{ and $TrajDiff = \frac{| LengthOfPredictedPaths - LengthOfShortestPaths |}{LengthOfShortestPaths}$} where $\# SuccessfulEpisodes$ is the number of successful episodes, $\# TotalEpisodes$ is the total number of episodes, and $TotalCumulativeReward$ is the total reward over all episodes\ignore{, $LengthOfPredictedPaths$ indicates the total number of steps of predicted paths (a path is a sequence of actions predicted by a model), $LengthOfShortestPaths$ indicates the total number of steps of optimal paths}. \end{itemize} \begin{table}[!ht] \begin{minipage}[t]{0.49\textwidth} \begin{small} \centering \caption{Comparison with baselines in Grid-World} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{\textbf{Metric}} & \multirow{2}{*}{\textbf{Method}} & \multicolumn{3}{c|}{\textbf{Domain} w.r.t.
various sizes} \\ \cline{3-5} & & $8 \times 8$ & $16 \times 16$ & $32 \times 32$ \\ \hline \multirow{6}{*}{$SuccRate$} & FCN-alpha & 69.73\% & 48.04\% & 41.73\% \\ \cline{2-5} & DQN-alpha & 88.27\% & 76.20\% & 71.41\% \\ \cline{2-5} & FedRL-1 & 92.52\% & 79.83\% & 77.88\% \\ \cline{2-5} & FedRL-2 & \textbf{95.06\%} & \textbf{84.31\%} & \textbf{82.02\%} \\ \cline{2-5} & \grey{FCN-full} & \grey{72.16\%} & \grey{56.44\%} & \grey{50.15\%} \\ \cline{2-5} & \grey{DQN-full} & \grey{93.69\%} & \grey{83.40\%} & \grey{79.73\%} \\ \hline \hline \multirow{4}{*}{$AvgRwd$} & DQN-alpha & 13.781 & -112.084 & -285.946 \\ \cline{2-5} & FedRL-1 & 18.152 & -94.193 & -226.583 \\ \cline{2-5} & FedRL-2 & \textbf{19.101} & \textbf{-84.139} & \textbf{-189.756} \\ \cline{2-5} & \grey{DQN-full} & \grey{31.286} & \grey{-38.114} & \grey{-52.72} \\ \hline \end{tabular} \label{grid-world} \end{small} \end{minipage} \end{table} \paragraph{Experimental Results w.r.t. Domain Sizes} We ran our {{\tt FedRL}} and the baselines five times and calculated the average $SuccRate$ (as well as $AvgRwd$). We report two variants of our {{\tt FedRL}}, denoted by {{\tt FedRL}}-1 and {{\tt FedRL}}-2, which correspond to ``not adding Gaussian differential privacy to the local Q-network'' and ``adding Gaussian differential privacy to the local Q-network'' on the testing data, respectively. The results are shown in Table \ref{grid-world}. From Table \ref{grid-world}, we can see that the $SuccRate$ of both {{\tt FedRL}}-1 and {{\tt FedRL}}-2 is much better than that of DQN-alpha and FCN-alpha for all three domain sizes, which indicates that agent $\alpha$ can indeed get help from agent $\beta$ via learning federatively with agent $\beta$.
Compared with DQN-full, we can see that the $SuccRate$ of both {{\tt FedRL}}-1 and {{\tt FedRL}}-2 is close to that of DQN-full, which indicates that our federated learning framework can indeed take advantage of the training data of both agents $\alpha$ and $\beta$, even though the data are protected locally with Gaussian differential privacy (the reason why {{\tt FedRL}}-2 is slightly better than DQN-full is that the hierarchical structure of our federated learning framework with three components may be more suitable for this domain than a single DQN framework). In addition, we can also see that the $SuccRate$ generally decreases as the size of the domain increases, which is consistent with our intuition, since the larger the domain is, the more difficult the task is (and the more training data are required to build high-quality models). \ignore{DQN-based models perform better than CNN-based classification models. When considering the \emph{trajectory difference} metric, our {{\tt FedRL}} models outperform both FCN-alpha and FCN-full. Compared to DQN-alpha, {{\tt FedRL}} models induce a bit increment of trajectory difference in all domains. Meanwhile, DQN-full also induces the trajectory difference in $16 \times 16$ and $32 \times 32$ domains. Because these models try more steps to success and get more rewards in the long run, which can be demonstrated by the results of \emph{average cumulative reward} (CNN-alpha and CNN-full do not involve in this metric). } From the metric of $AvgRwd$, we can see that both {{\tt FedRL}}-1 and {{\tt FedRL}}-2 are better than DQN-alpha, which indicates that agent $\alpha$ can indeed get help from agent $\beta$ in gaining rewards via federated learning with agent $\beta$. We can also find that {{\tt FedRL}}-2 outperforms {{\tt FedRL}}-1 in both $AvgRwd$ and $SuccRate$.
This is because our model is trained with Gaussian differential privacy, so the $SuccRate$ on testing data with Gaussian differential privacy, as in {{\tt FedRL}}-2, should be better than that on testing data without Gaussian differential privacy, as in {{\tt FedRL}}-1. \ignore{ Regarding the different settings of Gaussian mechanism, the FRL-1 model performs more successful episodes and gets more reward than the FRL-2 model, while it induces a little bit more trajectory difference. The result shows that our {{\tt FedRL}} model is more suitable to Q-values with noise than original Q-values, which means that our approach can protect data privacy better.} \begin{figure*}[!ht] \begin{center} \includegraphics[width=0.28\textwidth]{8x8_succ.pdf} \includegraphics[width=0.28\textwidth]{16x16_succ.pdf} \includegraphics[width=0.34\textwidth]{succ.pdf} \includegraphics[width=0.28\textwidth]{8x8_rwd.pdf} \includegraphics[width=0.28\textwidth]{16x16_rwd.pdf} \includegraphics[width=0.34\textwidth]{rwd.pdf} \caption{Results on the impact of the history length.} \label{results of history length} \end{center} \end{figure*} \paragraph{Experimental Results w.r.t. History Length} To study when {{\tt FedRL}} works, we consider the amount of information that an agent uses. The observation input at each time step is a sequence of observations, i.e., an observation history. Intuitively, the longer the observation history, the more information we have, and the more complex the neural network is. We fixed the structures of all models and only changed the length of the observation history. We tested history lengths from 2 to 32 in all domains. The results are shown in Figure \ref{results of history length}. In the first row of Figure \ref{results of history length}, we can observe that the success rates improve as the history length increases.
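Varying $H$ simply means stacking the last $H$ local observations into one network input; a minimal sketch (the class name and the zero-padding scheme for the first $H-1$ steps are our assumptions):

```python
from collections import deque

import numpy as np

class ObservationHistory:
    """Keep the last H local observations; zero-pad until H have been seen."""

    def __init__(self, h, obs_shape):
        self.h, self.obs_shape = h, obs_shape
        self.buf = deque(maxlen=h)   # old observations fall off automatically

    def push(self, obs):
        self.buf.append(np.asarray(obs, dtype=float))

    def stacked(self):
        pad = [np.zeros(self.obs_shape)] * (self.h - len(self.buf))
        return np.stack(pad + list(self.buf))   # shape (H, *obs_shape)

hist = ObservationHistory(h=4, obs_shape=(3, 3))   # H = 4, 3x3 patches
hist.push(np.ones((3, 3)))
x = hist.stacked()   # (4, 3, 3); the first three frames are zero padding
```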
In the $8 \times 8$ and $16 \times 16$ domains, the results converge when the history length is longer than 16, while in the $32 \times 32$ domain, they have not converged even at a history length of 32. The reason is that the $32 \times 32$ domain is more complicated than the other two domains, so it needs more information (i.e., $H > 32$) to learn a high-quality model. We can also find that when the history length is short (i.e., $H = 2$ for $16 \times 16$ and $H \leq 4$ for $32 \times 32$), DQN-alpha, DQN-full and {{\tt FedRL}}-1 perform poorly. {{\tt FedRL}}-1 and DQN-full do not show their advantages even though they take both $s_\alpha$ and $s_\beta$ as input, directly or indirectly. However, {{\tt FedRL}}-2, which applies differential privacy to both training and testing samples, scales well even with a limited amount of history. In small domains, such as $8 \times 8$, a few steps are enough to explore the whole environment. Therefore, DQN-alpha, which takes only a single observation as input, can perform as well as DQN-full and the {{\tt FedRL}} models. \ignore{We conjecture it is because the models have not been well-learned with limited input information. The experiments demonstrate that our {{\tt FedRL}} can perform as well as DQN-full in all metrics.} \subsection{Text2Action Domain} In this experiment, we evaluated our {{\tt FedRL}} in another domain, i.e., Text2Action \cite{DBLP:conf/ijcai/FengZK18}, which aims to extract action sequences from texts. For example, consider the text ``\emph{Cook the rice the day before, or use leftover rice in the refrigerator. The important thing to remember is not to heat up the rice, but keep it cold.}", which describes the procedure of making egg fried rice. The task is to extract the words composing an action sequence ``cook(rice), keep(rice, cold)'', or ``use(leftover rice), keep(rice, cold)''.
\ignore{An overall process of extracting action sequences is shown in Figure \ref{eas example}.}\ignore{We assume that there are two agents, $\alpha$ and $\beta$, where agent $\alpha$ has a rich word embedding model trained by a sufficiently large corpus, agent $\beta$ has a POS-tagger that can generate the part of speech of each word from a text.} We define states, actions and rewards as follows. \ignore{ \begin{figure}[!ht] \begin{center} \includegraphics[width=0.5\textwidth]{eas_overview.pdf} \end{center} \caption{Illustration of the overall process of the Text2Action task. ``Name extractor'' is a module for extracting action names. ``Argument extractor'' is a module for extracting action arguments based on the extracted action names and their context in the text.} \label{eas example} \end{figure} } \begin{table}[!ht] \begin{small} \caption{Comparison with baselines in Text2Action} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{\textbf{Metric}} & \multirow{2}{*}{\textbf{Method}} & \multicolumn{3}{c|}{\textbf{Dataset}} \\ \cline{3-5} & & WHS & CT & WHG \\ \hline \multirow{6}{*}{F1} & FCN-alpha & 86.19\% & 65.44\% & 55.18\% \\ \cline{2-5} & DQN-alpha & 92.11\% & 74.64\% & 66.37\% \\ \cline{2-5} & FedRL-1 & 93.76\% & \textbf{85.05\%} & 75.64\% \\ \cline{2-5} & FedRL-2 & \textbf{94.41\%} & 84.92\% & \textbf{75.85\%} \\ \cline{2-5} & \grey{FCN-full} & \grey{91.42\%} & \grey{79.03\%} & \grey{68.93\%} \\ \cline{2-5} & \grey{DQN-full} & \grey{94.55\%} & \grey{83.39\%} & \grey{74.63\%} \\ \hline \multirow{4}{*}{$AvgRwd$} & DQN-alpha & 54.762 & 47.375 & 46.510 \\ \cline{2-5} & FedRL-1 & 55.623 & \textbf{50.472} & 48.359 \\ \cline{2-5} & FedRL-2 & \textbf{55.894} & 50.452 & \textbf{48.373} \\ \cline{2-5} & \grey{DQN-full} & \grey{56.192} & \grey{50.307} & \grey{48.154} \\ \hline \end{tabular} \label{Text2Action} \end{center} \end{small} \end{table} \begin{figure*}[!ht] \centering \includegraphics[width=0.8\textwidth]{kernels.pdf} \caption{Results about
the impact of the number of convolutional kernels.} \label{results of model complexity} \end{figure*} \begin{itemize} \item \textbf{States:} $s_\alpha \in \mathbb{R}^{N_w \times K_1}$ is a real-valued matrix that describes the part-of-speech tags of the words, and $s_\beta \in \mathbb{R}^{N_w \times K_2}$ is a real-valued matrix that describes the embeddings of the words. $N_w$ is the number of words in the text, $K_1$ is the dimension of a part-of-speech vector, and $K_2$ is the dimension of the word vectors. In our experiments, the part-of-speech vectors are randomly initialized and trained together with the Q-network, while the word vectors are generated from pre-trained word embeddings and are not changed during the training of the Q-network. \item \textbf{Actions:} There are two actions for each agent, i.e., $\{select, neglect\}$, indicating selecting a word as an action name (or a parameter), or neglecting a word (which means the corresponding word is neither an action name nor a parameter). \item \textbf{Rewards:} The instant reward includes a basic reward and an additional reward, where the basic reward indicates whether the agent selects a word correctly or not, and the additional reward encodes the prior knowledge of the domain, i.e., the proportion of words that are related to action sequences \cite{DBLP:conf/ijcai/FengZK18}. We assume that only agent $\alpha$ knows the rewards, while agent $\beta$ does not. \item \textbf{Dataset:} We conducted experiments on three datasets, i.e., the ``Microsoft Windows Help and Support'' (WHS) documents \cite{DBLP:conf/acl/BranavanCZB09}, and two datasets collected from ``WikiHow Home and Garden''\footnote{https://www.wikihow.com/Category:Home-and-Garden} (WHG) and ``CookingTutorial''\footnote{http://cookingtutorials.com/} (CT), respectively. In CT, there are 116 labeled texts and 134,000 words, with 10.37\% being action names and 7.44\% being action arguments.
In WHG, there are 150 labeled texts and 34,000,000 words, with 7.61\% being action names and 6.3\% being action arguments. In WHS, there are 154 labeled texts and 1,500 words, with 19.47\% being action names and 15.45\% being action arguments. \item \textbf{Criteria:} For evaluation, we first fed texts to each model to obtain the selected words. We then compared the outputs to the corresponding ground truth and calculated $\# TotalTruth$ (the total number of ground-truth words), $\# TotalRight$ (the total number of correctly selected words), and $\# TotalSelected$ (the total number of selected words). After that, we computed the metrics $precision = \frac{\#TotalRight}{\#TotalSelected}$, $recall = \frac{\#TotalRight}{\#TotalTruth}$, and \[F1 = \frac{2 \times precision \times recall}{precision + recall}.\] We used the F1 metric for all baselines. For the reinforcement learning methods, we also computed the average cumulative reward \[AvgRwd = \frac{TotalCumulativeReward}{\# TotalTimeSteps},\] where $TotalCumulativeReward$ indicates the total cumulative reward over all testing texts and $\# TotalTimeSteps$ indicates the total number of steps over all testing texts. \end{itemize} We adopted the same TextCNN structure as \cite{DBLP:conf/ijcai/FengZK18}, with four convolutional layers corresponding to bi-grams, tri-grams, four-grams and five-grams, each with 32 kernels of size $n \times m$, where $n$ refers to the $n$-gram and $m = 50$. Each convolutional layer is followed by a max-pooling layer of size $(N_w - n + 1, 1)$, where $N_w - n + 1$ is the first dimension of the outputs of the $n$-gram convolutional layer. The max-pooling outputs are concatenated and fed to two fully connected layers of size $128\times 2$, where $128$ equals $4 \times 32$, i.e., 4 types of $n$-grams with 32 kernels each, and 2 is the size of the action space. The MLP module of {{\tt FedRL}} is composed of two fully-connected layers of size $4\times 2$. The standard deviation $\sigma$ of Gaussian differential privacy was set to 1.
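The TextCNN forward pass described above can be sketched with numpy as follows. The random weights and the text length $N_w = 40$ are illustrative (the real network is trained), and a single linear layer stands in for the two fully connected layers, but the shapes follow the description: four $n$-gram branches, max-pooling over word positions, a 128-dimensional concatenated feature, and 2 output Q-values.

```python
import numpy as np

rng = np.random.default_rng(0)
N_w, m, n_kernels = 40, 50, 32   # words, embedding dim, kernels per n-gram

# One weight tensor per n-gram size: 32 kernels of size n x 50 each.
kernels = {n: rng.normal(scale=0.1, size=(n_kernels, n, m)) for n in (2, 3, 4, 5)}
W_fc = rng.normal(scale=0.1, size=(4 * n_kernels, 2))   # 128 -> 2 actions

def text_cnn_q(x):
    """x: (N_w, m) matrix of word vectors -> Q-values over {select, neglect}."""
    feats = []
    for n, K in kernels.items():
        # "Valid" convolution over word positions: (N_w - n + 1, n_kernels).
        conv = np.array([[(x[i:i + n] * K[k]).sum() for k in range(n_kernels)]
                         for i in range(N_w - n + 1)])
        feats.append(np.maximum(conv, 0.0).max(axis=0))   # max-pool over time
    return np.concatenate(feats) @ W_fc   # 128-dim feature -> 2 Q-values

q = text_cnn_q(rng.normal(size=(N_w, m)))
```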
For all models, we adopted the Adam optimizer with learning rate 0.001 and ReLU activation\footnote{Detailed settings can be found in the source code: https://github.com/FRL2019/FRL}. \paragraph{Experimental Results} We ran our {{\tt FedRL}} and the baselines five times to calculate the average F1 (as well as $AvgRwd$). The results are shown in Table \ref{Text2Action}. From Table \ref{Text2Action} we can see that both {{\tt FedRL}}-1 and {{\tt FedRL}}-2 outperform both FCN-alpha and DQN-alpha on all three datasets under both the F1 and $AvgRwd$ metrics, which indicates that agent $\alpha$ can learn better policies via federated learning with agent $\beta$ than by learning on its own. Comparing our {{\tt FedRL}}-1 and {{\tt FedRL}}-2 with DQN-full in both F1 and $AvgRwd$, we can see that their performances are close to each other, suggesting that our federated learning framework performs as well as the DQN model directly built from all the training data of both agents $\alpha$ and $\beta$. \ignore{ In addition, Texts of the WHS dataset are quite short and most verbs of the texts are exactly the words of actions to be selected, which makes the Text2Action task easy to solve, i.e. WHS is a naive domain. Therefore, all baselines perform well in WHS, and our {{\tt FedRL}} only outperforms the DQN-alpha, CNN-full and CNN-alpha by around $2\% \sim 8\%$. When processing more complicated texts which contain much more redundant verbs and sentences, e.g. texts in CT and WHG datasets, the performance of our {{\tt FedRL}} model is dominant, improving the F1 score around $10\% \sim 20\%$ absolutely compared to the baselines. Under all metrics, our {{\tt FedRL}} performs as well as DQN-full. Generally speaking, {{\tt FedRL}}-2 excels {{\tt FedRL}}-1, but the advantage is not so obvious.
} To see the impact of model complexity, we varied the number of convolutional kernels from 2 to 32 \ignore{(the last fully connected layer will change from $8 = 4 \times 2$ to $128 = 4 \times 32$, respectively)} with the other parameters fixed. The results are shown in Figure \ref{results of model complexity}. We can see that all models generally perform better as the number of convolutional kernels (filters) increases, and become stable after 8 kernels. We can also see that our {{\tt FedRL}} models outperform DQN-alpha, FCN-alpha and FCN-full for all numbers of kernels, which indicates that agent $\alpha$ can indeed learn better policies via federated learning with agent $\beta$. Both {{\tt FedRL}}-1 and {{\tt FedRL}}-2 are close to DQN-full for all numbers of filters, suggesting that our federated learning framework is effective even though the data of agents $\alpha$ and $\beta$ are not shared with each other. \section{Conclusion} We propose a novel reinforcement learning approach, Federated deep Reinforcement Learning ({{\tt FedRL}}), which takes privacy into account and federatively builds a Q-network for each agent with the help of the other agents. To protect the privacy of data and models, we apply Gaussian differential privacy to the information the agents share with each other when updating their local models. We demonstrate that the proposed {{\tt FedRL}} approach is effective in building high-quality policies for agents under the condition that training data are not shared between agents. In the future, it would be interesting to study federations with more members whose background knowledge is represented by (probably incomplete) action models \cite{DBLP:journals/ai/ZhuoM014,DBLP:journals/ai/Zhuo014,DBLP:journals/ai/ZhuoK17}. \newpage
\section{Introduction} Moving groups (MGs) or dynamic streams are stars that share the same velocity components but have no spatial concentration or overdensity center discernible from the field stars. MGs are often called kinematic structures, since the stars that belong to these streams share the same motion. The concept of MGs originates from the work of Eggen (1996 and references therein). With the data from large surveys like Hipparcos (Perryman et al. 1997; van Leeuwen 2007), the Geneva-Copenhagen Survey (Nordstr\"{o}m et al. 2004), the Radial Velocity Experiment (RAVE; Steinmetz et al. 2006) and the Sloan Digital Sky Survey (SDSS; York et al. 2000), dozens of MGs have been confirmed. Two main hypotheses have been proposed for the origin of MGs. MGs in the halo are generally attributed to the relics of satellite galaxies that have been disrupted by the Galaxy's tidal potential (Helmi et al. 1999). Those in the thin disk are thought to have formed through dynamical interactions (Dehnen 1998) or from dissolved clusters (e.g., HR1614; De Silva 2007). Among MGs known to date, two are in the thick disk: the Arcturus stream (Navarro et al. 2004) and AF06 (Arifyanto $\&$ Fuchs 2006). However, the origins of these MGs are still disputed. Navarro et al. (2004) reanalyzed the group of stars kinematically associated with the Arcturus stream and concluded that they constitute a peculiar grouping of metal-poor stars with similar apocentric radius, common angular momentum, and distinct metal abundance patterns. Their results indicate the angular momentum of the Arcturus stream is too low to arise from dynamical perturbations induced by the Galactic bar, suggesting a tidal origin for this MG. This hypothesis is supported by the well-defined sequence of abundance ratios of member stars taken from the available Gratton et al. (2003) catalog. 
Arifyanto $\&$ Fuchs (2006), who later detected the Arcturus stream in a compilation of various catalogs using their (\textit{V}, $\sqrt{U^{2}+2V^{2}}$) method, suggested an external origin as well, based on the goodness of theoretical 12 Gyr isochrone fits to the putative member stars. However, recent detailed abundance studies including various $\alpha$-process and other elements have cast doubt on the hypothesis that the Arcturus stream is composed of a homogeneous stellar population (Williams et al. 2009). According to this more recent work, the putative members do not differ in abundance patterns from the surrounding disk stars. This either indicates a progenitor system that was massive enough to self-enrich to [Fe/H] = -0.6 or, more likely, a dynamical origin for this group (Ramya et al. 2012; Bensby et al. 2014). To find more MGs in the thick disk and to understand their origins, we are conducting MG detection using data from the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST, also called the Guoshoujing Telescope). LAMOST is a National Major Scientific Project undertaken by the Chinese Academy of Sciences (Wang et al. 1996; Cui et al. 2012; Zhao et al. 2012). The LAMOST pilot survey, conducted from October 2011 to May 2012, obtained several hundred thousand spectra (Luo et al. 2012). Since September 2012, LAMOST has been conducting a general survey, observing about one million stars per year. LAMOST has the capability to observe large, deep and dense regions of the Milky Way, enabling a range of research topics exploring the evolution and structure of the Galaxy. In Section 2 we describe the data used for MG detection. Section 3 discusses our detection strategy. The analysis of the detected MGs is presented in Section 4. A summary of our results is given in Section 5. \section{The Data} The LAMOST spectra have a resolving power of R $\sim$ 2000 spanning 3700$\rm \AA \sim $ 9000$\rm\AA$. 
Two arms of each spectrograph cover this wavelength range with an overlap of 200 $\rm \AA$. The blue spectral coverage is 3700$\rm \AA \sim$ 5900$\rm\AA$ and the red is 5700$\rm \AA \sim$ 9000 $\rm\AA$. The raw data were reduced with the LAMOST 2D and 1D pipelines (Luo et al. 2004). These pipelines include bias subtraction, cosmic-ray removal, spectral trace and extraction, flat-fielding, wavelength calibration, sky subtraction, and combination. The radial velocities are measured by cross-correlation between the observed spectra and template spectra from the Elodie library (Moultaka et al. 2004). Stellar parameters (including $T\rm_{eff}$, log $g$, and [Fe/H]) are derived with the ULySS\footnote[1]{Available at: http://ulyss.univ-lyon1.fr} software package (Wu et al. 2011). This package performs full spectral fitting to determine the stellar atmospheric parameters. It minimizes the $\chi^{2}$ between an observed spectrum and a template spectrum, with the fit performed in pixel space. The method determines all the free parameters in a single fit in order to properly handle the degeneracy between temperature and metallicity. Our initial sample was obtained by cross-referencing LAMOST DR1 with SDSS DR9. Stars without SDSS photometry and proper motions were eliminated because we need SDSS colors to estimate photometric distances. Next, we selected F, G and K main sequence (MS) stars with high signal to noise (S/N $>$ 20) from the initial sample based on color: 0.3 $<$ $(g-r)_{0}$\footnote[2]{The subscript 0 denotes dereddened colors.} $<$ 1.3 and log $g$ $\geqslant$ 3.5. To eliminate M stars, we added further color constraints, i.e., $i-z$ $<$ 0.3 and $r-i$ $<$ 0.53. With the above constraints, 209,563 stars were selected. In order to estimate photometric parallaxes of MS stars, we adopted the relation from Ivezi\'{c} et al. (2008), which gives the absolute magnitude in the $r$ band, $M\rm_{r}$, as a function of \textit{g-i} and \textit{[Fe/H]}. 
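The photometric-parallax step can be sketched as follows in Python. The distance-modulus inversion is standard; the polynomial form follows the Ivezi\'{c} et al. (2008) relation, but the specific coefficients below are illustrative and should be verified against their Appendix A before scientific use.

```python
def absolute_magnitude_r(g_i, feh):
    """M_r from (g-i) color and [Fe/H], after Ivezic et al. (2008).

    The coefficients below are illustrative placeholders in the form of
    that paper's relation; verify against the published appendix.
    """
    x = g_i
    # color term (quintic polynomial in g-i)
    m_r0 = -5.06 + 14.32*x - 12.97*x**2 + 6.127*x**3 - 1.267*x**4 + 0.0967*x**5
    # metallicity correction term
    delta_m_r = 4.50 - 1.11*feh - 0.18*feh**2
    return m_r0 + delta_m_r

def photometric_distance_pc(r_mag, m_r_abs):
    """Distance in parsecs from the distance modulus r - M_r = 5 log10(d) - 5."""
    return 10.0**((r_mag - m_r_abs + 5.0) / 5.0)
```

Given the dereddened $r$ magnitude and the estimated $M_{r}$, the second function returns the photometric distance used to build the $d < 2$ kpc sample.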
Stars with proper motion errors larger than 6 mas yr$^{-1}$ were also discarded. The rectangular velocity components relative to the Sun for these stars were then computed and transformed into Galactic velocity components \textit{U}, \textit{V}, and \textit{W}, and corrected for the peculiar solar motion (\textit{U}, \textit{V}, \textit{W}) = (-10.0, 5.2, 7.2) km s$^{-1}$ (Dehnen $\&$ Binney 1998). The UVW-velocity components are defined as a right-handed system with \textit{U} positive in the direction radially outward from the Galactic center, \textit{V} positive in the direction of Galactic rotation, and \textit{W} positive perpendicular to the plane of the Galaxy in the direction of the north Galactic pole. Our typical uncertainties in \textit{U}, \textit{V} and \textit{W} are no more than $\sim$20 km s$^{-1}$. Following Equations (1)-(3) of Bensby et al. (2003) and the characteristic velocity dispersions and asymmetric drift values given in their Table 1, we determined the likelihood that each star belongs to the thin disk, the thick disk and the halo based on its kinematics alone. Ratios of these likelihoods were then used to find stars that are more likely to belong to the thin disk than the thick disk (i.e., TD/D $<$ 0.50) and stars that are more likely to belong to the thick disk or halo than the thin disk (i.e., TD/D $>$ 10). This strategy yielded a thick disk and halo sample (\textit{d} $<$ 2 kpc) of 7,993 stars. \section{Moving group detection} MGs are of two main types. The first is dynamical streams, i.e., groups of stars that are trapped in a small region of phase space by dynamical resonances (such as spiral waves or a bar). The distribution of stars in such streams in velocity space does not depend on their origin, age, or type (Dehnen 1998; Famaey et al. 2005). The second type of MG results from tidal streams, where the stars originated in the same bound object, such as a cluster or a satellite galaxy. 
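The TD/D kinematic membership criterion used for the sample selection above can be sketched in Python. The Gaussian velocity-ellipsoid form follows Bensby et al. (2003); the dispersions, asymmetric drifts, and normalisations below are approximate values of the kind tabulated in their Table 1 and should be checked against that table before use.

```python
import math

# Velocity dispersions (sigma_U, sigma_V, sigma_W), asymmetric drifts V_asym,
# and local normalisations X for each population, in the style of Bensby et
# al. (2003, Table 1).  These numbers are approximate placeholders.
POPULATIONS = {
    "thin":  dict(X=0.94,   sU=35.0,  sV=20.0, sW=16.0, Vasym=-15.0),
    "thick": dict(X=0.06,   sU=67.0,  sV=38.0, sW=35.0, Vasym=-46.0),
    "halo":  dict(X=0.0015, sU=160.0, sV=90.0, sW=90.0, Vasym=-220.0),
}

def _f(U, V, W, p):
    """Gaussian velocity-ellipsoid probability density for one population."""
    k = 1.0 / ((2.0 * math.pi)**1.5 * p["sU"] * p["sV"] * p["sW"])
    arg = (U**2 / (2 * p["sU"]**2)
           + (V - p["Vasym"])**2 / (2 * p["sV"]**2)
           + W**2 / (2 * p["sW"]**2))
    return k * math.exp(-arg)

def td_over_d(U, V, W):
    """Relative thick-to-thin-disk probability TD/D for (U, V, W) in km/s."""
    thin, thick = POPULATIONS["thin"], POPULATIONS["thick"]
    return (thick["X"] * _f(U, V, W, thick)) / (thin["X"] * _f(U, V, W, thin))
```

A slow-rotating star with a large vertical velocity yields TD/D well above 10 and would enter the thick disk and halo sample, while a kinematically cold star yields TD/D below 0.5.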
But once they become unbound, the stars have slightly different orbits and, hence, frequencies, so that they phase-mix. This leads to a broadening of the velocity distribution, which is more prominent in the \textit{W}-component because the vertical period is shorter than the horizontal periods. One method for identifying MGs is to search for overdensities in phase space. Helmi et al. (1999) and Helmi $\&$ de Zeeuw (2000) presented an approach that searches for streams in the space spanned by the integrals of motion, namely the energy E and the angular momenta L$_{z}$ and L$_{\bot}$ $\equiv$ (L$_{x}^{2}$ + L$_{y}^{2})^{1/2}$. This method is often used to detect halo streams (Kepley et al. 2007). Dynamical streams tend to be clustered in (\textit{U}, \textit{V}) space (Zhao et al. 2009; Antoja et al. 2008). Klement et al. (2008) and Arifyanto $\&$ Fuchs (2006) detected streams in \textit{V} and $\sqrt{U^2+2V^2}$ space. Their strategy for finding nearby stellar streams in velocity space is based on the Keplerian approximation for orbits developed by Dekker (1976). In this paper we follow a similar approach. Fig. 1 presents our detection procedure. The top-left panel plots the distribution of the 7,993 stars in \textit{V} and $\sqrt{U^2+2V^2}$ as contours. The bin size is 10 km s$^{-1}$. Color represents the number density in each bin. Several clumps appear in this contour plot. Next, MGs are detected using the wavelet method (Skuljan et al. 1999; Zhao et al. 2009; Klement et al. 2008, 2009). The wavelet transform provides an easily interpretable visual representation of clumps. We adopt the `Mexican hat' mother wavelet, the second derivative of a Gaussian, which performs well at detecting singularities. The scale of our wavelet transform is adopted as 20 km s$^{-1}$, which is the typical error of the velocity measurements. The top-right panel displays the positive wavelet coefficients. 
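The wavelet step above can be sketched in Python: the binned (\textit{V}, $\sqrt{U^2+2V^2}$) histogram is convolved with a two-dimensional Mexican-hat kernel whose scale matches the velocity errors. The kernel normalisation and truncation radius here are illustrative choices, not the exact ones used for Fig. 1.

```python
import numpy as np

def mexican_hat_2d(scale_bins, half_size=None):
    """2-D Mexican-hat kernel (negative Laplacian of a Gaussian).

    `scale_bins` is the Gaussian width in bin units; a 20 km/s scale on a
    10 km/s grid corresponds to scale_bins = 2.  Normalisation is arbitrary.
    """
    if half_size is None:
        half_size = int(4 * scale_bins)  # truncate at 4 sigma
    y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
    r2 = (x**2 + y**2) / scale_bins**2
    return (2.0 - r2) * np.exp(-r2 / 2.0)

def wavelet_coefficients(hist2d, scale_bins):
    """Convolve a binned velocity histogram with the Mexican-hat kernel.

    The kernel is symmetric, so correlation and convolution coincide.
    """
    h = np.asarray(hist2d, dtype=float)
    k = mexican_hat_2d(scale_bins)
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(h, ((ph, ph), (pw, pw)), mode="constant")
    out = np.empty_like(h)
    for i in range(h.shape[0]):
        for j in range(h.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out
```

Positive coefficients mark bins where the local density exceeds the background on the chosen scale, which is what the top-right panel of Fig. 1 displays.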
An obvious overdensity is located at \textit{V} $\sim$ -100 km s$^{-1}$ and $\sqrt{U^2+2V^2}$ = 150 km s$^{-1}$, corresponding to the Arcturus group detected by Navarro et al. (2004). There are some other overdensities in this plot. To test their significance, we performed 250 Monte Carlo (MC) simulations, each using the same number of stars as in our sample, randomly drawn from a Galactic model consisting of Schwarzschild distributions (Binney $\&$ Tremaine 1987; see Equation 1). The goal was to create a `smooth' reference model velocity distribution that matches the overall velocities of our sample. To make the distribution of this random sample consistent with our observed sample, we adopted the dispersions ($\sigma_{U}$, $\sigma_{V}$) and the rotational offset from the Local Standard of Rest (LSR) as (74, 38, -80) km s$^{-1}$. Fig. 2 shows a comparison of the velocity distribution of our sample (solid line) and the MC sample (dashed line). \begin{eqnarray} f \varpropto \exp\left\{-\frac{1}{2}\left[\left(\frac{U}{\sigma_{U}}\right)^2 + \left(\frac{V-V_{0}}{\sigma_{V}}\right)^2\right]\right\} \end{eqnarray} The bottom-left panel of Fig. 1 shows the contour distribution of the wavelet coefficients for a simulated sample. It is clear that there are some spurious features, arising from Poisson noise or other limitations of the method. For each simulated sample, we obtained a wavelet coefficient map. Then the average (wc$_{m}$) and standard deviation ($\sigma wc_{m}$) of the 250 wavelet coefficient maps were computed. The final wavelet coefficient (wc$_{f}$) of our real sample was obtained through Equation 2, where wc$_{o}$ represents the coefficient of the observed sample. \begin{eqnarray} wc_{f}&=&\frac{wc_{o}-wc_{m}}{\sigma wc_{m}} \end{eqnarray} The bottom-right panel of Fig. 1 shows the resulting contour distribution of wc$_{f}$. 
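Equation 2 amounts to a per-bin z-score of the observed wavelet map against the Monte Carlo ensemble; a minimal Python sketch (the choice of sample versus population standard deviation is immaterial for 250 realisations):

```python
import numpy as np

def significance_map(wc_observed, wc_simulations):
    """Per-bin significance wc_f = (wc_o - wc_m) / sigma_wc_m.

    `wc_simulations` is a stack of wavelet-coefficient maps, one per Monte
    Carlo realisation of the smooth reference model.
    """
    sims = np.asarray(wc_simulations, dtype=float)
    wc_m = sims.mean(axis=0)            # mean map of the realisations
    sigma = sims.std(axis=0, ddof=1)    # per-bin standard deviation
    return (np.asarray(wc_observed, dtype=float) - wc_m) / sigma
```

Bins where `significance_map` exceeds 2 are the features retained in the bottom-right panel of Fig. 1.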
Only bins with wc$_{f}$ $>$ 2 are plotted. Three groups are apparent, encircled with black lines and labeled V1, V2 and V3. V1, at $\sim$ -100 km s$^{-1}$, is consistent with the Arcturus stream (Navarro et al. 2004). The range of V1 is from -110 to -70 km s$^{-1}$. The AF06 stream is located at -80 km s$^{-1}$. Tables 1 and 2 in Arifyanto $\&$ Fuchs (2006) show that the velocity and metallicity distributions of the members of the AF06 and Arcturus streams are practically identical. Arifyanto $\&$ Fuchs (2006) suggest that the stars of the Arcturus stream and AF06 might originate from the same population, based on the colour-magnitude diagrams shown in their Fig. 4. The orbits of these two streams are very similar, suggesting the streams are probably related to each other. Thus, we regard V1 as a combination of the Arcturus stream and AF06. V2, centered at -10 km s$^{-1}$, possibly corresponds to the Hyades-Pleiades stream. V3, centered at -180 km s$^{-1}$, may be connected with the stream (\textit{V} $\sim$ -160 km s$^{-1}$) found by Klement et al. (2008) using RAVE data (K08 hereafter); however, V3 and K08 differ in L$_{z}$ and L$_{\bot}$. From Fig. 15 in Klement et al. (2008), the center of L$_{\bot}$ for K08 is at 850 kpc km s$^{-1}$, while L$_{\bot}$ for V3 spans [200, 500] kpc km $\rm s^{-1}$ (see Fig. 3). Thus, we regard V3 as a previously undetected stream. \section{Moving group analysis} Additional work to understand the formation mechanism of our candidate MGs was undertaken, but was hindered by contamination from field stars. Initial member stars were extracted by their range of \textit{V} velocity in the bottom-right panel of Fig. 1. Fig. 3 shows a second member-selection criterion. The top-left panel shows the initial members in L$_{z}$ and L$_{\bot}$. Green plus signs represent V1; blue asterisks represent V2; violet diamonds represent V3. It is clear that the initial members clump in L$_{z}$. 
However, in the direction of L$_{\bot}$, larger scatter appears in V1 and V3, perhaps caused by contamination from field stars. The other panels in Fig. 3 show histograms of L$_{\bot}$. Generally, MGs should be clustered both in [\textit{V}, $\sqrt{U^2+2V^2}$] and in [L$_{z}$, L$_{\bot}$]. The peaks of the L$_{\bot}$ histograms for each MG are bracketed by dashed lines. Thus, the final members of each MG are those within the two dashed lines: V1 [100, 500] kpc km s$^{-1}$; V2 [600, 800] kpc km s$^{-1}$; V3 [200, 500] kpc km s$^{-1}$. Fig. 4 presents these three groups in a different way. The symbols have the same meaning as in Fig. 3. The small black dots represent the non-MG stars in the whole sample. The top panel shows their distribution in [Fe/H] and \textit{W}. Most stars in V3 are metal poor ([Fe/H] $<$ -0.5 dex). The \textit{W} velocity distributions of V1 and V3 show larger dispersions than that of V2. The middle panel presents the distribution in [\textit{U}, \textit{V}]. V1 and V3 show very broad dispersions in \textit{U}, while V2 has a small dispersion, perhaps because the stars in V2 have been perturbed by the Galactic bar. The bottom panel of Fig. 4 plots [Fe/H] versus vertical distance for these MGs. The distance of most stars in our sample is larger than 200 pc. However, stars in V2 are relatively nearby. All these MGs extend to about 1.5 kpc. Fig. 5 presents the eccentricity distributions of the three candidate streams. Galactic orbits were calculated with the Milky Way potential model of Allen $\&$ Santillan (1991). The model is time-independent, axisymmetric, fully analytic, and consists of a spherical central bulge, a disk, and a massive spherical halo, with a total mass of 9$\times$10$^{11}$ solar masses. The output parameters are the minimum and maximum distances from the Galactic center, R$_{min}$ and R$_{max}$ (i.e., the peri- and apocenter), the maximum distance from the Galactic plane, Z$_{max}$, and the eccentricity, e=(R$_{max}$-R$_{min}$)/(R$_{max}$+R$_{min}$). 
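Given an orbit integrated in any such potential, the quoted output parameters reduce to simple extrema of the sampled coordinates; a Python sketch (assuming the orbit has been integrated long enough to sample its radial extremes):

```python
import numpy as np

def orbit_summary(R, z):
    """Peri/apocentre, eccentricity, and Z_max from a sampled orbit.

    `R` is the Galactocentric cylindrical radius and `z` the height above
    the plane along a pre-integrated orbit; units are arbitrary but must
    be consistent.  e = (R_max - R_min) / (R_max + R_min), as in the text.
    """
    R = np.asarray(R, dtype=float)
    r_min, r_max = R.min(), R.max()
    ecc = (r_max - r_min) / (r_max + r_min)
    return dict(R_min=r_min, R_max=r_max, e=ecc, Z_max=np.abs(z).max())
```

Applying this per star, the histograms of `e` for the V1, V2, and V3 members are what Fig. 5 compares against the total sample.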
The first three panels in Fig. 5 show the distribution of the total sample in eccentricity vs. L$_{z}$ and in eccentricity vs. [Fe/H]. The eccentricity and metallicity of our sample show very wide dispersions. The last three panels present the normalized histograms for our three streams. The black solid lines represent the total sample, while the green dashed lines denote our candidate MGs. As expected, the MGs have very narrow eccentricity distributions. The peak of V2 is at e $\sim$ 0.1, suggesting this stream is not debris of an accretion event. The peak of V3 is at e $\sim$ 0.8, as would be expected for structure resulting from an accretion event. The peak of V1 is at e $\sim$ 0.4, similar to the peak of the total sample. Generally, dynamical instabilities in the Galactic disc have been found to involve velocities in the range \textit{U} and \textit{V} $\sim \pm$ 50 km s$^{-1}$. At this time there is no evidence that spiral or bar structure can cause high-velocity streams like Arcturus-AF06, which is why they are usually attributed to merger events. The Arcturus stream has been interpreted as originating from the debris of a disrupted satellite (Navarro, Helmi $\&$ Freeman 2004; Helmi et al. 2006). AF06 was assigned a similar origin, based on its kinematics. However, a detailed investigation by Williams et al. (2009) found that the chemical abundances are consistent with a dynamical origin but do not entirely rule out a merger origin. We conclude that V1 and V3 have different origins than V2. In the L$_{z}$ and L$_{\bot}$ distributions for the stars extracted by \textit{V} velocity (Fig. 3), V2 shows a small dispersion, while V1 and V3 have very broad dispersions. Moreover, the \textit{U} component of V1 spans [-100, +100] km s$^{-1}$, which could not be induced by a bar. Thus, we tentatively regard V1 and V3 as products of accretion events. \section{Conclusion} Three candidate moving groups were detected with high confidence from a sample of 7,993 thick disk and halo stars drawn from LAMOST DR1 data. 
The sample spans distances to $\sim$2 kpc from the Sun. Using a wavelet technique, three significant local kinematic groups were detected. Two of them correspond to the Hyades-Pleiades and Arcturus-AF06 streams. The other is a possible new stream centered at \textit{V} $\sim$ -180 km s$^{-1}$. Among these three MGs, the Arcturus-AF06 stream is the most prominent and dominates the whole sample. The MGs show different \textit{W} components and very narrow eccentricity distributions. We tentatively conclude that the Hyades-Pleiades stream has a dynamical origin, perhaps the result of perturbation by the Galactic bar or spiral arms. The new stream is metal poor and has a high eccentricity, suggesting it is debris of a satellite accretion event. Although the origin of Arcturus-AF06 is not definitively determined, we provide some evidence that this stream may also be debris from a satellite accretion event. \acknowledgments This study is supported by the National Natural Science Foundation of China under grant Nos. 11390371, 11233004, 11222326, 11150110135, 11103034 and the National Key Basic Research Program of China (973 program) 2014CB845701 and 2014CB845703. Support from the US National Science Foundation (AST-1358787) to Embry-Riddle Aeronautical University is acknowledged. Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.
\section{Introduction} Extremely rapid growth in the number of known transiting planets, thanks in large part to the Kepler and K2 missions \citep[e.g.][]{Batalha2013, Thompson2018, Crossfield2016, Kruse_2019}, has enabled population-level studies based on bulk properties like orbital period and size. One of the striking features of the population is that the mode of the radius distribution lies near 2.5$R_\oplus$ -- a class of planets absent from the architecture of our own solar system. These super-Earths and/or mini-Neptunes bridge the size domain between the terrestrial planets and the ice giants in our solar system. Occurrence rate distributions of these ``bridge'' planets exhibit additional features in the period-radius plane. For example, a feature of particular relevance to this work is the so-called ``radius cliff'', a steep drop in planet occurrence between 2.5 and 4.0 R$_\oplus$ \citep{Borucki2011, Howard2012, Fulton2017} for planets interior to 1 AU. Both planets described in this paper fall in this radius regime. In addition, the so-called ``super-Earth'' (1-1.8 R$_\oplus$) and ``sub-Neptune'' (1.8-4 R$_\oplus$) populations are the modes of the known planet radius distribution among these bridge planets \citep{Fulton2017}, a conclusion supported by modelling \citep[e.g.][]{Owen_2017}. Description of the population in relation to the radius ``cliff'' and ``valley'' has become ubiquitous since their discovery. As theoretical understanding has matured with the acquisition of mass information, a variety of origins for the radius cliff have been proposed. \cite{Kite2019} propose atmospheric sequestration into magma as the cause, as larger atmospheres achieve the critical base pressure necessary to dissolve H$_2$ from the atmosphere into the core. Work is still ongoing to understand how atmospheric observables vary across these features in order to better grasp their underlying physics. 
Atmospheric characterization of the planets described in this paper could help test theories of the causes of the radius cliff. Though confirmed multi-planet systems are only a subset of the full planet sample, substantial work has been done to understand the properties of such systems. \cite{Weiss_2018} identify the ``peas in a pod'' phenomenon in the Kepler sample, which describes the fact that planets of a given size are more likely to have neighbors of a similar size than of a random size. The nearly identical planets in the HD 63935 system conform to this trend. They also identify a trend towards denser inner planets, which may be related to photoevaporation. Mass measurements are resource intensive to obtain, in particular for the faint host stars common in the Kepler sample, but they are crucial to understanding the population in detail. Early mass measurements by \cite{Weiss2014} demonstrated that Kepler planets above $\sim$1.8$R_\oplus$ whose stars were sufficiently bright for RV followup appear to retain substantial H/He envelopes. This is in agreement with theoretical predictions for that size regime \citep{Lopez2014}. There has since been a great deal of additional work establishing relations between planetary mass and radius, though for any given planetary radius there is a wide spread in masses, implying substantial compositional diversity \citep[e.g.][]{Rogers_2015, Wolfgang2016, ChenKippingMR2017, Zeng2019Composition}. Efforts to understand the sub-Neptune population are being supported by the \textit{Transiting Exoplanet Survey Satellite} (TESS) mission \citep{Ricker2014tess}. Because TESS is observing bright, nearby stars, we expect many high-quality atmospheric targets among the planets it identifies. One of the TESS Level 1 science goals is to ensure that the masses of 50 planets with radii less than 4 $R_\oplus$ are measured. 
In doing so, we will begin to better understand the processes shaping the underlying distributions of exoplanets. The first TESS catalog paper was recently released, documenting 2241 transiting candidates from the mission \citep{guerrero2021toicatalog}. Ongoing followup work, including that undertaken by our group, the TESS-Keck Survey, continues to release well-determined masses for TESS planets. The TESS-Keck Survey (TKS) is a consortium performing precise radial velocity followup of TESS planet candidates \citep{Dalba2020TKSI, Weiss2021TKSII, Dai2020TKSIII, Rubenzahl2021TKSIV}. One of our group's primary science goals is measuring a diverse set of planet masses at high enough precision to be suitable for atmospheric characterization \citep{Batalha2019}, in particular with \textit{JWST}, which is what led us to observe HD 63935. One relevant axis of diversity is in host star spectral type. So far, only one sub-Neptune sized planet around a G star has been the subject of an atmospheric characterization study (HD 3167 c; \cite{mikalevans2020}). Because of their relatively small transit signals, such planets represent more challenging targets than similar-sized planets around M or K dwarfs. However, the brightness of the host stars can compensate for this, and a number of compelling targets for atmospheric characterization with JWST around G star hosts have already emerged from TESS \citep[e.g.][Lubin et al., in prep.; Turtelboom et al., in prep.]{Gandolfi_2018, Mann_2020, weiss2020}. HD 63935 b, the subject of this paper, is one of the planets we identified as a compelling atmospheric target around a G-type host star. HD 63935 is a bright (V$_{mag}=8.58$) G5 star at a distance of 49 pc. Study of planets around this class of star is valuable for understanding how differences in host star characteristics shape planet formation. 
With atmospheric data, we will be able to test theoretical predictions like those of \cite{Lopez2014}, that H/He mass fraction is primarily a function of radius. Characterizing these planets will also be valuable for comparative studies with our own solar system. Although we have not discovered any planetary systems that closely resemble our own, the reasons for this are likely observational \citep{Martin2015}. A better understanding of what makes our solar system unique (or not) is important to the search for life. In this paper, we present the confirmation of the sub-Neptune planets HD 63935 b and c. Planet b is uniquely well-suited to atmospheric characterization, being the second-best target on the radius cliff and the best in its niche of sub-Neptune-sized (2.6 $R_\oplus$ $<$ $R_p$ $<$ 4 $R_\oplus$), moderately-irradiated (100 $F_\oplus$ $<$ $F_p$ $<$ 1000 $F_\oplus$) planets around G stars. Planet c is also amenable to atmospheric characterization. We also discuss evidence for a longer-period planetary-mass companion to the two confirmed planets, though we ultimately adopt a two-planet model. In Section \ref{sec:selectionalgorithm}, we describe the selection algorithm that our consortium's atmospheres working group uses to select high-quality targets. In Section \ref{sec:stellarcharacterization} we describe our efforts to characterize the planets' host star. In Section \ref{sec:obs}, we describe the observations we undertook to confirm this planetary system. In Section \ref{sec:analysis}, we provide details of our analysis and results, and in Section \ref{sec:discussion} we discuss their implications. \section{The TESS-Keck Survey: Atmospheric Target Selection} \label{sec:selectionalgorithm} This work is based on data obtained as part of the TESS-Keck Survey (TKS), which performs precise radial velocity (PRV) follow-up of TESS planet candidates using the Keck telescope on Mauna Kea and the APF telescope at Lick Observatory. 
As part of TKS, we are interested in selecting the TESS planet candidates that represent the best prospects for atmospheric characterization. TESS has produced (and continues to produce) far too many promising atmospheric targets for one consortium to follow up. Consequently, we developed an algorithm to prioritize targets for our PRV observing list; see Chontos et al. (in prep.) for more details about the general selection of TKS targets. Our algorithm aims to find high-quality atmospheric targets in regions of parameter space mostly bereft of them. We mostly select planets in the sub-Neptune regime, as many highly observable giant planets are already known and terrestrial planets are, with a few exceptions, not accessible with JWST. As a quantification of ``under-populated parameter space,'' our selection algorithm bins planets in stellar effective temperature, planet radius, and insolation flux. We then select targets that stand out in bins without any characterized planets. A detailed explanation of the algorithm follows. The inputs to our algorithm are the star and planet properties from the NASA Exoplanet Archive\footnote{\url{https://exoplanetarchive.ipac.caltech.edu/}} and the TESS TOI list\footnote{\url{https://tev.mit.edu/data/collection/193}}. We subject these lists to certain cuts as well as manual inspections of the data validation reports \citep{Twicken2018}. We exclude TOIs with declination $<-20^{\circ}$ or V$_{\textrm{mag}}>12$ for visibility at our facilities and to ensure acceptable SNR, respectively. We also cut planets with R$_p > 10$ R$_\oplus$, as the Jovian population is already reasonably well-sampled (for the atmospheres science case only -- TKS as a whole does follow up some large planet candidates). We also exclude stars with T$_{\textrm{eff}} >$ 6500 K. 
After our initial culling of the sample, we calculate an estimated mass for each TESS planet candidate based on the fitting formulae of \cite{ChenKippingMR2017} and \cite{Weiss2014}. With that, we calculate a Transmission Spectroscopy Metric (TSM) value \citep{Kempton2018}. This value is an estimate of the signal-to-noise ratio (SNR) with which a planet candidate's atmosphere could be observed with the NIRISS instrument on the James Webb Space Telescope. For our purposes, it serves as a proxy for relative observability. Our selection algorithm then computes a final parameter, equal to the TSM value normalized by the expected exposure time on HIRES required to obtain a mass precision better than 20\%, estimated based on \cite{plavchan2015}. This metric was chosen as our ranking parameter in order to select a reasonably large sample of planets. Ranking by TSM alone results in HIRES time spent on M stars, which is sub-optimal for a visible-light spectrograph. Cool stars are better suited to characterization by instruments at other facilities like MAROON-X \citep{seifahrt2018maroonx} or CARMENES \citep{quirrenbach2020carmenes}, which have more sensitivity in the red. See Fig. \ref{fig:teff-insolation-sample}, which plots the most promising atmospheric targets from TESS with their estimated time to measure the mass with a precision of 20\%, for a visual representation of this. With all relevant metrics calculated, our algorithm divides planets into bins along three axes in parameter space: planet radius, stellar effective temperature, and insolation flux. We use five log-uniform radius bins (which conveniently include edges at 1.7 R$_\oplus$, approximately the location of the radius gap, and 4 R$_\oplus$, dividing the sub- and super-Neptune populations), five log-uniform insolation flux bins, and three stellar effective temperature bins, resulting in 75 bins total, some of which were unpopulated. 
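The TSM computation described above can be sketched in Python. The functional form and the radius-binned scale factors follow \cite{Kempton2018}; the numerical values below are quoted from memory of that paper's Table 1 and should be verified against it before use.

```python
import math

# Empirical scale factors keyed by the upper edge of each planet-radius bin
# (Earth radii), in the style of Kempton et al. (2018, Table 1).  Verify
# these numbers against the paper; radii above 10 Earth radii are cut
# earlier in the selection, so no bin is provided for them.
_SCALE_FACTORS = [(1.5, 0.190), (2.75, 1.26), (4.0, 1.28), (10.0, 1.15)]

def equilibrium_temperature(t_star, r_star, a_au):
    """Zero-albedo, full-redistribution T_eq in K.

    t_star in K, r_star in solar radii, a_au in AU.
    """
    r_star_au = r_star * 0.00465047  # one solar radius in AU
    return t_star * math.sqrt(r_star_au / a_au) * 0.25**0.25

def tsm(r_p, m_p, r_star, t_eq, j_mag):
    """Transmission Spectroscopy Metric of Kempton et al. (2018).

    r_p in Earth radii, m_p in Earth masses, r_star in solar radii,
    t_eq in K, j_mag the apparent J-band magnitude of the host.
    """
    scale = next(s for edge, s in _SCALE_FACTORS if r_p <= edge)
    return scale * r_p**3 * t_eq / (m_p * r_star**2) * 10.0**(-j_mag / 5.0)
```

For a candidate without a measured mass, `m_p` would come from the mass-radius fitting formulae cited above; the resulting TSM is then divided by the estimated HIRES exposure time to form the ranking parameter.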
TOIs that had higher X metric values than the other TOIs and known planets in their bin were prioritized for our PRV observing list. There were enough planet candidates meeting this criterion that we could not observe them all, and as a result we focused on those with the highest X metrics. Our algorithm permitted the removal of candidates deemed observationally unsuitable, e.g. by spectral signatures suggestive of an eclipsing binary, or by substantial stellar activity making continued observations infeasible. This selection process is what led us to observe HD 63935, the subject of this paper, which remains the highest-ranked target in its bin by X metric. \begin{figure}[h] \centering \includegraphics[width=0.46\textwidth]{teff-insol-tks-sample-labelsadded.png} \caption{Sub-Neptune TESS planet candidates with estimated TSM values higher than 84 (the value suggested by \cite{Kempton2018} as the cutoff for followup in this size regime), colored by their estimated time to a mass precision of 20\% with HIRES. The figure also plots known planets with published atmospheric transmission spectra (most from Hubble's Wide Field Camera 3) as open black circles. Population-level studies are more attainable for the sub-Neptunes with G star hosts, since we can obtain high-precision mass measurements of such planets quickly. TOIs included in the TKS atmospheric sample (sub-Neptune sized planets only) are identified with stars and labelled with their TOI numbers, highlighting the sample's focus on G-type host stars. HD 63935 b is colored green. HD 63935 c is not shown, as it was not known as a planet candidate when our target list was finalized.} \label{fig:teff-insolation-sample} \end{figure} \section{Host Star Characterization} \label{sec:stellarcharacterization} We obtained high resolution spectra of HD 63935 (HIP 38374, TIC 453211454; other aliases in Table \ref{tab:stellar-traits}). 
We used SpecMatch-Syn \citep{Petigura2017} to obtain the stellar effective temperature, $\log g$, and metallicity. We then used these values, combined with the luminosity and parallax from Gaia \citep{gaiamission, gaiaEDR3}, as priors to obtain the stellar mass, radius, and age using \texttt{isoclassify} \citep{isoclassify-huber-2017, isoclassify-berger-2020} (Table \ref{tab:planet-params}). \texttt{isoclassify} functions by using the input parameters stated above to fit the star to a curve of constant age (isochrone) in the relevant parameter space, allowing the calculation of radius, mass, and age values. We add a 4\% and 5\% systematic uncertainty to the stellar radius and mass values, respectively, to account for isochrone grid uncertainty, following \cite{tayar2020guide}. Note that our derived age does not account for these uncertainties. The values we obtain for mass and radius are consistent with those provided by SpecMatch-Syn, those derived by the \textit{Gaia } mission \citep{gaiaEDR3}, and those from our spectral energy distribution fitting. The star is slightly smaller than the Sun (R$_*$=0.959R$_\odot$, M$_*$=0.933M$_\odot$), and its \texttt{isoclassify}-derived age suggests that the system is older than the Sun as well, at $6.8\pm1.8$ Gyr. This is consistent with our nominally low value of $v \sin i = 0.24^{+1.0}_{-0.24}$ km s$^{-1}$ and low activity indicator $\log$ R'$_{HK}$ = -5.06, suggestive of an old, relatively inactive star. The two sectors of TESS photometry do not provide enough information to obtain a reliable rotation period estimate, though we discuss other methods for obtaining this value in Section \ref{sec:gyrochrone}. \begin{table}[h!]
\begin{center} \caption{HD 63935 Identifiers \& Gaia Solution} \label{tab:stellar-traits} \begin{tabular}{lc} \hline \hline \multicolumn{2}{c}{Aliases}\\ HIP ID & 38374\\ TIC ID & 453211454\\ Tycho ID & 783-536-1\\ \textit{Gaia } EDR3 ID & 3145754895088191744\\ \hline \multicolumn{2}{c}{\textit{Gaia } 6D Solution}\\ Right Ascension & 07:51:42.04\\ Declination & +09:23:11.40\\ Parallax & 20.470 $\pm$ 0.019 mas\\ RA Proper Motion & -78.696 $\pm$ 0.022 mas yr$^{-1}$\\ Dec Proper Motion & -188.512 $\pm$ 0.013 mas yr$^{-1}$\\ Radial Velocity & -20.34 $\pm$ 0.19 km s$^{-1}$\\ \hline \end{tabular} \end{center} \end{table} \subsection{Spectral Energy Distribution and Activity} \label{sec:sed} As an independent determination of the stellar parameters, we also performed an analysis of the broadband spectral energy distribution (SED) of the star together with the {\it Gaia\/} EDR3 parallaxes, in order to determine an empirical measurement of the stellar radius, following the procedures described in \citet{Stassun:2016,Stassun:2017,Stassun:2018}. We obtained the $B_T V_T$ magnitudes from {\it Tycho-2}, the Str\"omgren $uvby$ magnitudes from \citet{Paunzen2015}, the $JHK_S$ magnitudes from {\it 2MASS}, the W1--W4 magnitudes from {\it WISE}, the $G G_{\rm BP} G_{\rm RP}$ magnitudes from {\it Gaia}, and the FUV and NUV magnitudes from {\it GALEX}. Together, the available photometry spans the full stellar SED over the wavelength range 0.15--22~$\mu$m (see Figure~\ref{fig:sed}). \begin{figure}[!ht] \centering \includegraphics[width=0.95\linewidth]{toi509sed-correct.png} \caption{Spectral energy distribution of HD 63935. Red symbols represent the observed photometric measurements, where the horizontal bars represent the effective width of the passband. 
Blue symbols are the model fluxes from the best-fit Kurucz atmosphere model (black).} \label{fig:sed} \end{figure} We performed a fit using Kurucz stellar atmosphere models, with the effective temperature ($T_{\rm eff}$), metallicity ([Fe/H]), and surface gravity ($\log g$) adopted from the spectroscopic analysis. The only additional free parameter is the extinction ($A_V$), which we restricted to the maximum line-of-sight value from the dust maps of \citet{Schlegel:1998}. The resulting fit is very good (Figure~\ref{fig:sed}) with a reduced $\chi^2$ of 1.1 and best-fit $A_V = 0.02 \pm 0.02$. Integrating the (unreddened) model SED gives the bolometric flux at Earth, $F_{\rm bol} = 1.060 \pm 0.012 \times 10^{-8}$ erg~s$^{-1}$~cm$^{-2}$. Taking the $F_{\rm bol}$ and $T_{\rm eff}$ together with the {\it Gaia\/} EDR3 parallax gives the stellar radius, $R_\star = 0.967 \pm 0.035 R_\odot$. In addition, we can use the $R_\star$ together with the spectroscopic $\log g$ to obtain an empirical mass estimate of $M_\star = 0.82 \pm 0.20 M_\odot$, which is consistent with that obtained via the empirical relations of \citet{Torres:2010}, $M_\star = 1.02 \pm 0.06 M_\odot$. These parameters are also consistent with those we derive from \texttt{isoclassify}. Finally, the $R_\star$ and $M_\star$ together yield a mean stellar density of $\rho_\star = 1.59 \pm 0.20$~g~cm$^{-3}$. \subsection{Predicted Rotation Period from Gyrochronology} \label{sec:gyrochrone} Before photometry confirmed the existence of HD 63935 c as a transiting planet, we were interested in obtaining the stellar rotation period in order to rule it out as the source of the radial velocity signal at 21 days. Although the TESS Sector 34 photometry has since confirmed that candidate, we include the following gyrochronology analysis as it provides novel information about the host star.
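As a quick numerical cross-check of the SED-based radius in Section~\ref{sec:sed}, the Stefan--Boltzmann law can be applied directly to the quoted $F_{\rm bol}$, $T_{\rm eff}$, and parallax; a minimal sketch in cgs units (the inputs are the values quoted above, the constants are standard):

```python
import math

# R* from bolometric flux, Teff, and parallax via the Stefan-Boltzmann law.
SIGMA_SB = 5.6704e-5   # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
PC_CM = 3.0857e18      # centimeters per parsec
R_SUN = 6.957e10       # solar radius [cm]

f_bol = 1.060e-8       # bolometric flux at Earth [erg s^-1 cm^-2]
t_eff = 5534.0         # effective temperature [K]
plx_mas = 20.470       # Gaia EDR3 parallax [mas]

d = (1000.0 / plx_mas) * PC_CM                        # distance [cm]
r_star = math.sqrt(f_bol / (SIGMA_SB * t_eff**4)) * d  # stellar radius [cm]
print(round(r_star / R_SUN, 3))   # ~0.967 R_sun, matching the SED value
```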
We used Markov Chain Monte Carlo (MCMC) within \texttt{kiauhoku} \citep{Claytor_2020} to obtain a posterior probability distribution of stellar parameters for HD 63935. For input we used Gaussian priors based on the spectroscopic effective temperature and metallicity, as well as the \texttt{isoclassify}-derived age. Assuming a gyrochronological model, we were then able to predict the rotation period. We performed MCMC using two different braking laws on the same grid of stellar models: one (``fastlaunch'') uses the magnetic braking law presented by \cite{van_Saders_2013}, while the other (``rocrit'') uses the stalled braking law of \cite{van_Saders_2016}. With the fastlaunch model, we predicted $P_\mathrm{rot} = 32.4 \pm 6.5$ days, while we predicted $P_\mathrm{rot} = 31.1 \pm 4.3$ days using the rocrit model. Both of these are consistent with a star somewhat older than the Sun, as implied by our measured $\log$ R'$_{HK}$ value and the age derived via \texttt{isoclassify}, as well as with our non-detection of a rotation period in the TESS data. \section{Observations} \label{sec:obs} \subsection{TESS Photometry} \label{sec:tess-photometry} HD 63935 (TIC 453211454, TOI 509) was observed by the TESS mission in Sector 7 between UT 2019 January 7 and 2019 February 2, and Sector 34 between UT 2021 January 13 and 2021 February 9. The star was imaged by CCD 4 of Camera 1. The data consist of 33208 data points with integration times of two minutes each. The Science Processing Operations Center \citep{Jenkins2016spoc} processed the data, generated light curves using Simple Aperture Photometry \citep[SAP;][]{Twicken2010, Morris2020}, and removed known instrumental systematics using the Presearch Data Conditioning (PDC) algorithm \citep{Smith_2012, Stumpe_2012, Stumpe_2014}. The Sector 7 data contained two transits of planet b, and the Sector 34 data contained two additional transits of planet b and two transits of planet c.
The transit of planet c that occurred during Sector 7 happened during a gap in the TESS lightcurve (see Section \ref{sec:tessphotanalysis}). For the analysis described here, we downloaded the PDCSAP flux data from the publicly-accessible Mikulski Archive for Space Telescopes (MAST)\footnote{\url{https://archive.stsci.edu/tess/}}. The full light curve is plotted in Figure \ref{fig:lc_transit-in-gap} and the phase-folded transits in Figure \ref{fig:phase-folded-transits}. \begin{figure*}[h] \includegraphics[width=0.95\textwidth]{full-lc-incls34-rasterized.png} \caption{The systematics-corrected TESS light curve of HD 63935 in TESS Sectors 7 and 34. Planet b's transits are marked in red and planet c's in blue. Note that the Sector 7 data gap contains a transit of each planet. The transit-like event near Time=1506 in Sector 7 is an artifact introduced into the light curve during background subtraction. \label{fig:lc_transit-in-gap}} \end{figure*} \begin{figure*}[h] \includegraphics[width=0.95\textwidth]{LC-phasefold-v1_1-rasterized-1.png} \caption{Phase-folded transits of HD 63935 b and c. The normalized PDCSAP flux is shown in grey. Our best fit transit model is shown in red. Residuals from each fit are plotted in the lower panels.} \label{fig:phase-folded-transits} \end{figure*} \subsection{AO Imaging} As part of our standard process for validating transiting exoplanets, and to assess the possible contamination of the derived planetary radii by bound or unbound companions \citep{Ciardi_2015}, we observed TOI-509 with high-resolution near-infrared adaptive optics (AO) imaging at Palomar and Keck Observatories. The Palomar Observatory observations were made with the PHARO instrument \citep{Hayward_2001} behind the natural guide star AO system P3K \citep{Dekany_2013} on 2019~Apr~18 UT in a standard 5-point quincunx dither pattern with steps of 5\arcsec\ in the narrow-band $Br-\gamma$ filter ($\lambda_o = 2.1686~\mu$m, $\Delta\lambda = 0.0326~\mu$m).
Each dither position was observed three times, offset in position from each other by 0.5\arcsec\ for a total of 15 frames; with an integration time of 1.4 seconds per frame, the total on-source integration time was 21 seconds. PHARO has a pixel scale of $0.025\arcsec$ per pixel for a total field of view of $\sim25\arcsec$. These observations were taken at an airmass of 1.1858. The Keck Observatory observations were made with the NIRC2 instrument on Keck-II behind the natural guide star AO system \citep{Wizinowich_2000} on 2019-Mar-25 UT in the standard 3-point dither pattern that is used with NIRC2 to avoid the lower left quadrant of the detector, which is typically noisier than the other three quadrants. The dither pattern step size was $3\arcsec$ and was repeated twice, with each dither offset from the previous dither by $0.5\arcsec$. NIRC2 was used in the narrow-angle mode ($1024\times1024$ pixels) with a full field of view of $\sim10\arcsec$ and a pixel scale of approximately $0.0099442\arcsec$ per pixel. The Keck observations were made in the narrow-band $Br-\gamma$ filter ($\lambda_o = 2.1686~\mu$m, $\Delta\lambda = 0.0326~\mu$m) with an integration time of 0.5 seconds per frame for a total of 4.5 seconds on target, and at an airmass of 1.43. The AO data were processed and analyzed with a custom set of IDL tools. The science frames were flat-fielded and sky-subtracted. The flat fields were generated from a median average of dark-subtracted flats taken on-sky. The flats were normalized such that the median value of the flats is unity. The sky frames were generated from the median average of the 15 dithered science frames; each science image was then sky-subtracted and flat-fielded.
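The per-frame reduction just described (a unit-median flat, a sky frame from the median of the dithers, then sky subtraction and flat fielding) amounts to a few array operations. A generic sketch of the procedure, not the custom IDL tools themselves:

```python
import numpy as np

def reduce_dithers(frames, flat):
    """Reduce a stack of dithered science frames as described in the text:
    normalize the flat to unit median, form a sky frame from the median of
    the dithered science frames, then sky-subtract and flat-field each one."""
    flat = flat / np.median(flat)          # unit-median flat
    sky = np.median(frames, axis=0)        # median sky from the dithers
    return [(f - sky) / flat for f in frames]
```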
The reduced science frames were combined into a single image using an intra-pixel interpolation that conserves flux, shifts the individual dithered frames by the appropriate fractional pixels, and median-coadds the frames (Figure \ref{fig:ao_fullfov}). The final resolution of the combined dithers was determined from the full width at half maximum of the point spread function: 0.094\arcsec\ and 0.050\arcsec\ for the Palomar and Keck observations, respectively. \subsection{Ground-Based Photometry} The TESS pixel scale is $\sim 21\arcsec$ pixel$^{-1}$, and photometric apertures typically extend out to roughly 1 arcminute, which generally results in multiple stars blending in the TESS aperture. An eclipsing binary in one of the nearby blended stars could mimic a transit-like event in the large TESS aperture. We conducted ground-based photometric follow-up observations as part of the TESS Follow-up Observing Program (TFOP)\footnote{https://tess.mit.edu/followup} with much higher spatial resolution to confirm that the transit signal of HD 63935 b is occurring on-target, or in a star so close to HD 63935 that it was not detected by Gaia DR2. \subsubsection{MuSCAT} We observed one partial transit of HD 63935 b on UTC 2019 March 24, from 10:29 to 15:09, covering the expected egress, with the multi-color simultaneous camera MuSCAT \citep{Narita2015}, which is mounted on the 1.88 m telescope of the Okayama Astronomical Observatory in Okayama, Japan. MuSCAT has three optical channels, each equipped with a $1024\times1024$ pixel CCD camera, enabling g-, r-, and z$_s$-band simultaneous imaging. Each camera has a pixel scale of $0 \farcs 358$ per pixel, providing a field of view (FOV) of $6.1\arcmin\times6.1\arcmin$. The exposure times were 10, 3, and 3 sec in the g, r, and z$_s$ bands, respectively. We performed standard aperture photometry using the custom photometry pipeline described in detail in \cite{Fukui2011}.
The adopted aperture was 20 pixels (7\arcsec), which excludes any nearby stars as the source of the signal of HD 63935 b. Our precision was not sufficient to detect the expected 0.09\% transit signal on target in each band, but the data ruled out eclipses in all nearby stars within the field of view deep enough to reproduce the transit depth observed by TESS. \subsubsection{LCOGT} \label{sec:lco-obs} We observed a full transit of HD 63935 b in the Pan-STARRS $Y$ filter (central wavelength 1004 nm) on UTC 2020-11-20 from the Las Cumbres Observatory Global Telescope (LCOGT) \citep{Brown2013} 1.0\,m network node at McDonald Observatory. We used the {\tt TESS Transit Finder}, which is a customized version of the {\tt Tapir} software package \citep{Jensen:2013}, to schedule our transit observations. The $4096\times4096$ LCOGT SINISTRO cameras have an image scale of $0\farcs389$ per pixel, resulting in a $26\arcmin\times26\arcmin$ field of view. The images were calibrated by the standard LCOGT {\tt BANZAI} pipeline \citep{McCully:2018}, and photometric data were extracted with {\tt AstroImageJ} \citep{Collins:2017}. The images were focused and have typical stellar point-spread functions with a full width at half maximum (FWHM) of $\sim 1\farcs6$, and circular apertures with radius $\sim 7\farcs 8$ were used to extract the differential photometry. The light curve is presented in Figure \ref{fig:lco-lightcurve}. \begin{figure} \centering \includegraphics[width=0.47\textwidth]{LCO_lightcurve_fit-1.png} \caption{Transit of HD 63935 b observed by the LCOGT 1m telescope at McDonald Observatory on UT 2020-11-20. Top: Observed differential photometry (grey) with data binned in 5-minute intervals shown in color. Bottom: Transit model fit, with detrending on airmass and BJD; the shaded region shows the 68.3\% credible interval of the posterior from the MCMC fit (see Sec.\ \ref{sec:lco-fit}). The error bars in the lower plot include a jitter term determined by the fit.
\label{fig:lco-lightcurve}} \end{figure} \subsection{Ground-Based Spectroscopy} \subsubsection{LCO NRES} We obtained a spectrum of the target with the automated LCOGT 1m/NRES optical (380--860 nm) spectrograph \citep{Brown2013, Siverd2018}, in order to characterize the star and look for signs of a stellar binary system. The observations were made on UT 2019 March 22, at the McDonald Observatory node of the LCOGT network. We observed the target with two consecutive 20-minute exposures that were processed by the LCO Banzai reduction pipeline \citep{McCully:2018} and then stacked together, yielding a spectrum with an effective exposure time of 40 minutes and an SNR of 73. The reduced spectrum was processed by the SpecMatch-Syn pipeline \citep{Petigura2017}, which derived spectral and stellar parameters while accounting for the target's distance derived from the Gaia DR2 parallax \citep{Gaia18}. The spectrum did not show evidence for a second set of lines, and the SpecMatch-Syn analysis showed the target is a slowly rotating ($v\sin i = 0.24^{+1.0}_{-0.24}$ km s$^{-1}$) Sun-like star with an absolute RV ($-20.6 \pm 0.1$ km s$^{-1}$) consistent with the Gaia DR2 RV ($-20.34 \pm 0.19$ km s$^{-1}$). \subsubsection{TRES} We obtained two spectra, on UT 2019 March 28 and 2019 April 04, using the Tillinghast Reflector Echelle Spectrograph (TRES) on the 1.5m telescope at the Whipple Observatory on Mt. Hopkins in Arizona. TRES is an optical echelle spectrograph with a wavelength range of $385-910$ nm and a resolution of $R=44,000$ \citep{gaborthesis, TRES}. The two TRES observations were well-separated in orbital phase (4.21 and 4.59) of the photometric ephemeris and were used to derive relative radial velocities. Using the stronger of the two spectra as a template, the second spectrum was cross-correlated order-by-order in the wavelength range $426-628$ nm.
The template spectrum was assigned a velocity of zero; the velocity difference between the two spectra was only 13 m/s, which ruled out a stellar or brown-dwarf companion as the source of the transit-like events. The TRES observations also revealed a spectrum very similar to that of the Sun, with line broadening due to rotation of less than 4 km/s and no indication of surface activity, such as emission at Ca II H$\&$K, thus confirming that this target was well suited for PRV work. The stellar effective temperature ($T_\textrm{eff}$), metallicity ([Fe/H]), surface gravity ($\log g$), and rotation ($v\sin i$) were also determined using the Stellar Parameter Classification tool (SPC) on the TRES spectra \citep{Buchhave2012}. SPC cross-correlates the observed spectra against a library of synthetic spectra calculated using Kurucz model atmospheres \citep{Kurucz} and performs a multi-dimensional fit for the stellar parameters that give the highest peak correlation value. These stellar parameter estimates are in 1$\sigma$ agreement with the results from the SpecMatch-Syn analysis of the PRV observations. \subsubsection{TNG/HARPS-N} Between UT 2019 April 2 and 2019 April 29 we collected 11 spectra of TOI-509 with the HARPS-N spectrograph \citep[383--693 nm, R$\approx$115\,000;][]{2012SPIE.8446E..1VC} mounted at the 3.58-m Telescopio Nazionale Galileo (TNG) of the Roque de los Muchachos Observatory in La Palma, Spain, under the observing programmes CAT19A\_162 (PI: Nowak) and CAT19A\_96 (PI: Pall\'e). The exposure time was set to 600--900 seconds, based on weather conditions and scheduling constraints, leading to an SNR per pixel of 59--119 at 5500\,\AA. The spectra were extracted using the off-line version of the HARPS-N DRS pipeline \citep{2014SPIE.9147E..8CC}, version 3.7.
Doppler measurements and spectral activity indicators (the bisector inverse slope (BIS), the full width at half maximum (CCF\_FWHM) and contrast (CCF\_CTR) of the cross-correlation function (CCF), the Mount Wilson S-index, and the $\log\mathrm{R'_{HK}}$ index) were measured using an on-line version of the DRS, the YABI tool\footnote{Available at \url{http://ia2-harps.oats.inaf.it:8000}.}, by cross-correlating the extracted spectra with a G2 mask \citep{1996A&AS..119..373B}. We measure a $\log\mathrm{R'_{HK}}$ value of $-4.94$. We also used {\tt serval}\footnote{\url{https://github.com/mzechmeister/serval}} \citep{2018A&A...609A..12Z} to measure relative RVs, the chromatic index (CRX), differential line width (dLW), and H$\alpha$ index. The uncertainties of the relative RVs measured with {\tt serval} are in the range 0.5--1.4 m s$^{-1}$, with a mean value of 0.83 m s$^{-1}$. \subsubsection{Keck HIRES} Between 2019 August and 2021 March, we obtained 51 high-resolution spectra of HD 63935 with the HIRES instrument \citep{Vogt1994} on the 10m Keck I telescope at the W.M. Keck Observatory on Maunakea, Hawai'i. We obtained spectra with the C2 decker, which has dimensions of 14" x 0.86" and spectral resolution R$\approx$60000 at 500 nm. The chosen exposure meter setting yields an S/N of approximately 200 per pixel, and the resulting median exposure time is 185 s. We obtained radial velocity measurements from the spectra using the method described in \cite{Howard2010}. The RMS value of our radial velocities before fitting for any planets was 4.22 m s$^{-1}$, and the median internal uncertainty was 1.1 m s$^{-1}$. Our measured $\log\mathrm{R'_{HK}}$ value is $-5.04$, indicating a relatively low-activity star and consistent with the value measured by HARPS-N.
\subsubsection{APF-Levy} Between 2019 August and 2021 February, we obtained 100 spectra of HD 63935 with the Levy Spectrograph, a high-resolution slit-fed optical (500--620 nm) echelle spectrograph \citep{Radovan2010APF} on the Automated Planet Finder Telescope (APF) at Lick Observatory \citep{Vogt2014}. We observed the star using the W decker, which has dimensions of 3" x 1" and R$\approx$114000 between 374 and 970 nm. Our median exposure time with APF was 1200 seconds. We acquired one or two observations per night, and nightly observations were binned to improve the RV precision. The RMS value of our radial velocities before fitting for any planets was 8.27 m s$^{-1}$, and the median uncertainty was 1.8 m s$^{-1}$. We excluded data points with RV uncertainties $>5$ m s$^{-1}$, which resulted in the removal of five points. All such points were clear outliers and had $<800$ counts on the detector, indicating low data quality. \begin{table}[h!] \begin{center} \caption{Radial Velocities} \label{tab:RVs} \begin{tabular}{lccc} \hline \hline Time (BJD) & RV (m s$^{-1}$) & RV Unc. (m s$^{-1}$) & Inst.\\ \hline 2458733.13862 & -6.81 & 1.29 & HIRES\\ 2458744.13959 & -8.04 & 1.11 & HIRES\\ 2458777.06304 & 0.28 & 1.16 & HIRES\\ 2458788.11234 & -11.38 & 1.06 & HIRES\\ 2458795.00493 & -5.68 & 1.00 & HIRES\\ \hline \end{tabular} \end{center} \footnotesize{A sample of the radial velocities, uncertainties, and instruments for our data on HD 63935. The full table of radial velocity data is available online, which includes the Mt. Wilson S-value activity indicators.} \end{table} \section{Analysis and Results} \label{sec:analysis} \subsection{TESS Photometry Analysis} \label{sec:tessphotanalysis} We used \texttt{juliet} \citep{Espinoza2019juliet} to model the light curve data available for HD 63935. \texttt{juliet} serves as a wrapper for a variety of existing publicly-available tools.
Our transit modelling used functions based on \texttt{batman} \citep{Kreidberg2015} for transit fitting and \texttt{PyMultiNest} \citep{Buchner2014} for the sampling of parameter space. We fit for both planets' periods, crossing times, transit depths, and impact parameters, as well as two quadratic limb darkening parameters (following \cite{Kipping2013}), the out-of-transit flux, and a TESS jitter term. We keep all other parameters fixed, including the mean stellar density, which we set to the value derived in Section \ref{sec:stellarcharacterization}. We impose normal priors centered on the SPOC values for the period and crossing time \citep{Jenkins_2002, Jenkins2010, Li_2019}, all with widths of 0.1 days, and constrain the occultation fraction and impact parameter to a uniform range between 0 and 1. When we began to study this system, the gap in the TESS Sector 7 photometry (Sector 34 data had not been obtained yet), combined with only two observed transits, meant that two periods were possible for planet b: $\sim$18 days, the one initially reported by SPOC, and $\sim$9 days, which we ultimately selected based on RV measurements and which was later confirmed by TESS Sector 34. The Sector 7 light curve also provided no indication of the existence of planet c, as its transit fell in the data gap as well. We selected the correct period (9 days) and identified planet c as a candidate based on our radial velocity observations (see Section \ref{sec:RVanalysis}). Both of our predictions were validated by the Sector 34 light curve, which confirmed the 9 day period as the correct one for planet b and provided two transits of planet c at almost exactly the period predicted by our radial velocity data (21.40 days compared to our predicted value of 21.35 days). Both planets have high-SNR transits (18.6 and 23.0 for planets b and c, respectively). The results of these fits are reported in Table \ref{tab:planet-params}.
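The period ambiguity caused by the Sector 7 gap is easy to enumerate: two observed transits separated by $\Delta T$ are consistent with any period $\Delta T/n$ for integer $n$. A short sketch, where the second transit time is illustrative (placed two 9.06-day cycles after the first):

```python
def candidate_periods(t1, t2, p_min=2.0):
    """All periods P = (t2 - t1) / n, n = 1, 2, ..., down to p_min days,
    consistent with exactly two observed transit times t1 and t2."""
    dt = t2 - t1
    periods = []
    n = 1
    while dt / n >= p_min:
        periods.append(dt / n)
        n += 1
    return periods

# Two transits 18.118 d apart (second time illustrative) admit both the
# ~18.1 d period first reported by SPOC and the true ~9.06 d period (n = 2).
aliases = candidate_periods(1494.446, 1512.564)
```

Only external information, here the RVs and later the Sector 34 transits, breaks the degeneracy.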
\begin{figure}[h] \centering \includegraphics[width=0.46\textwidth]{full_fov.jpg} \caption{Final combined full field-of-view image of the Palomar dithers, showing no companions within the TESS pixels.} \label{fig:ao_fullfov} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.46\textwidth]{contrast_fov-stacked.jpg} \caption{Companion sensitivity for the near-infrared adaptive optics imaging at Palomar (above) and Keck (below). The black points represent the 5$\sigma$ limits and are separated in steps of 1 FWHM; the purple represents the azimuthal dispersion (1$\sigma$) of the contrast determinations (see text). The inset image is of the primary target showing no additional close-in companions.}\label{fig:ao_contrast} \end{figure} \subsubsection{LCO Photometry Analysis} \label{sec:lco-fit} As a check on the orbital solution for planet b (in particular the orbital period), we performed an independent fit of the LCO lightcurve (Fig.\ \ref{fig:lco-lightcurve}, Sec.\ \ref{sec:lco-obs}) using \textsf{exoplanet} \citep{exoplanet:exoplanet}. We fixed $T_0$ at the value found from the TESS lightcurve fit. The fit to the LCO lightcurve gives 68\% credible intervals of $P = 9.058875^{+0.000021}_{-0.000034}$ days, $b = 0.60^{+0.20}_{-0.41}$, $T_d = 3.39 \pm 0.14$ hr, and $R_p/R_* = 0.0325^{+0.0040}_{-0.0043}$. These are consistent with the values from the TESS fit (Table \ref{tab:planet-params}), though the period values are consistent only within the 2-$\sigma$ errors, and not within the 1-$\sigma$ errors. \subsection{High Resolution Imaging} To within the limits of the AO observations, no stellar companions were detected. The sensitivities of the final combined AO image were determined by injecting simulated sources azimuthally around the primary target every $20^\circ $ at separations of integer multiples of the central source's FWHM \citep{Furlan_2017, Lund2020}.
The brightness of each injected source was scaled until standard aperture photometry detected it with $5\sigma $ significance. The resulting brightness of the injected sources relative to TOI~509 set the contrast limits at that injection location. The final $5\sigma $ limit at each separation was determined from the average of all of the determined limits at that separation, and the uncertainty on the limit was set by the rms dispersion of the azimuthal slices at a given radial distance (Figure~\ref{fig:ao_contrast}). \begin{figure*} \begin{center} \includegraphics[width=0.85\textwidth]{current-jump-2021-04-02_2pl_rv_multipanel-1.png} \end{center} \caption{Our radial velocity points obtained for HD 63935. a) The complete RV timeseries, including data from HARPS-N (maroon), APF (green), and HIRES (yellow) and our best fit model (blue). b) Our model residuals. Note that the residuals appear to exhibit some structure; we discuss this in Section \ref{sec:planet3?}. c) Phase-folded radial velocity curve of HD 63935 b. d) Phase-folded radial velocity curve of HD 63935 c. \label{fig:phase-folded-rv}} \end{figure*} \subsection{Radial Velocity Analysis} \label{sec:RVanalysis} We used the \texttt{RadVel}\footnote{\url{https://radvel.readthedocs.io/en/latest/}} package \citep{Fulton2018} to model the radial velocity measurements of HD 63935. \texttt{RadVel} uses the Markov chain Monte Carlo (MCMC) sampler \texttt{emcee} \citep{ForemanMackey2013} to sample the posterior space of the model's parameters. In these fits, we fix the period and time of inferior conjunction to the values derived from photometry (previous section). We enforce circular orbits in our fits, as allowing eccentricity to vary produced fits that were not preferred by the information criterion analysis: fits varying only $e_b$ or only $e_c$ were ``somewhat disfavored'' with $\Delta$AICc$=2.28$ and $\Delta$AICc$=3.34$, respectively, while a fit varying both eccentricities was ``strongly disfavored'' with $\Delta$AICc$=6.02$.
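For reference, the comparisons above use the small-sample corrected criterion ${\rm AICc} = {\rm AIC} + 2k(k+1)/(n-k-1)$, where ${\rm AIC} = 2k - 2\ln\mathcal{L}$, $k$ is the number of free parameters, and $n$ the number of data points. A minimal helper (the $k$ and $n$ below are illustrative, not the exact counts from our fits):

```python
def aicc(lnL, k, n):
    """Small-sample corrected Akaike information criterion: lnL is the
    maximum log-likelihood, k the number of free parameters, n the
    number of data points."""
    aic = 2.0 * k - 2.0 * lnL
    return aic + 2.0 * k * (k + 1.0) / (n - k - 1.0)

# With the likelihood unchanged, freeing one extra parameter (k = 8 -> 9)
# at n = 162 raises the AICc by ~2.24, so a free eccentricity must improve
# ln L by more than ~1.1 to be favored at all.
penalty = aicc(-200.0, 9, 162) - aicc(-200.0, 8, 162)
```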
If the eccentricities are allowed to vary, we find 1$\sigma$ upper limits of 0.16 and 0.29 for planets b and c, respectively. Our preliminary single-planet fits of this system were dominated by a signal at $\sim$21 days; we identified this signal as corresponding to an additional planet candidate, HD 63935 c, which was confirmed by two transits in TESS Sector 34. The final results of our radial velocity fits are displayed in Figure \ref{fig:phase-folded-rv}.
\begin{table*}[t] \begin{center} \caption{Complete table of properties used in this analysis}
\begin{threeparttable} \label{tab:planet-params} \bgroup \def\arraystretch{1}
\begin{tabular}{llccc}
\hline \hline
\multicolumn{1}{l}{\textbf{Parameter}} & \multicolumn{1}{l}{\textbf{Symbol}} & \multicolumn{2}{c}{\textbf{Value}} & \multicolumn{1}{c}{\textbf{Units}}\\
\hline
\textbf{Stellar Parameters} & & & &\\
Mass\tnote{1} & $M_*$ & \multicolumn{2}{c}{0.933$\pm0.054$} & M$_\odot$\\%CORRECT 3/10
Radius\tnote{1} & $R_*$ & \multicolumn{2}{c}{0.959$\pm0.042$} & R$_\odot$\\%CORRECT 3/10
Age\tnote{1} & & \multicolumn{2}{c}{6.8$^{+1.8}_{-1.9}$} & Gyr\\%CORRECT 3/15
Stellar Effective Temperature\tnote{2} & $T_{\textrm{eff}}$ & \multicolumn{2}{c}{5534$\pm$100}& K \\
Surface Gravity\tnote{2} & $\log g$ & \multicolumn{2}{c}{$4.38\pm$0.1} & cm s$^{-2}$ \\%CORRECT 4/12
Metallicity\tnote{1} & [Fe/H] &\multicolumn{2}{c}{$0.07\pm0.06$} & dex\\
Activity Index\tnote{2} & $\log$ R'$_{HK}$ & \multicolumn{2}{c}{-5.06} & \\
V-band Magnitude\tnote{3} & $V_{\textrm{mag}}$ & \multicolumn{2}{c}{8.58} & \\
J-band Magnitude\tnote{3} & $J_{\textrm{mag}}$ & \multicolumn{2}{c}{7.30} & \\
K-band Magnitude\tnote{3} & $K_{\textrm{mag}}$ & \multicolumn{2}{c}{6.88} & \\
Distance\tnote{4} & \textit{d} & \multicolumn{2}{c}{$48.8\pm0.8$} & pc\\%strictly speaking this is from TEV which converted the parallax internally I think
Luminosity\tnote{4} & \textit{L} & \multicolumn{2}{c}{$0.798\pm0.002$} & L$_\odot$\\
Limb Darkening Parameters\tnote{5} & q$_1$, q$_2$ &
\multicolumn{2}{c}{$0.27^{+0.28}_{-0.15}$, $0.26^{+0.32}_{-0.18}$} & \\
\hline
\textbf{Transit Parameters}\tnote{5} & & \textit{Planet b}& \textit{Planet c}& \\
Period & P & $9.058811^{+0.000017}_{-0.000016}$ & $21.4023^{+0.00189}_{-0.00194}$ & days\\%CORRECT 3/15
Transit Crossing Time & T$_0$ & 1494.4462$\pm0.0010$ & 2231.8280$\pm0.0014$ & TJD\tnote{6} \\%CORRECT 3/15
Occultation Fraction & $R_p/R_*$ & 0.0285$\pm0.0004$ & 0.0277$\pm0.0004$ & \\%CORRECT 3/15
Orbital Separation & a/R$_*$ & 18.64$\pm1.1$ & 33.06$\pm2.0$ & \\%CORRECT 3/15 TODO ADD ERRORS
Inclination & $i$ & 88.49$\pm0.0018$ & 88.241$^{+0.0011}_{-0.0013}$ & $^\circ$\\%CORRECT 3/16 TODO ADD ERRORS
Transit duration & $T_d$ & 3.36$\pm 0.10$ & 4.85$\pm0.33$ & hours\\%CORRECT 3/15 TODO ADD ERRORS
Transit SNR & SNR & 18.6 & 23.0 & \\%CORRECT 3/15 TODO ADD SOURCE (TEV LIST) TO FOOTNOTES
\hline
\textbf{RV Parameters}\tnote{7} & & \textit{Planet b} & \textit{Planet c} & \\
Planet Semi-Amplitude & K$_\textrm{amp}$ & $3.18^{+0.55}_{-0.52}$ & $2.71^{+0.52}_{-0.49}$ & m s$^{-1}$\\%CORRECT 3/16
Eccentricity & $e$ & 0 (fixed) & 0 (fixed) & \\%CORRECT 3/16
Argument of Periastron & $\omega$ & 0 (fixed) & 0 (fixed) & radians\\%CORRECT 3/16
\textit{Instrumental and GP Parameters} & & & & \\%CORRECT 3/16
Linear HIRES Offset & $\gamma_{\rm hires_j}$ & \multicolumn{2}{c}{$3.55^{+0.46}_{-0.40}$} & m s$^{-1}$\\
Linear APF Offset & $\gamma_{\rm apf}$ & \multicolumn{2}{c}{$5.49^{+0.44}_{-0.4}$} & m s$^{-1}$\\
Linear HARPS-N Offset & $\gamma_{\rm HARPS-N}$ & \multicolumn{2}{c}{$0.22^{+3.0}_{-3.1}$} & m s$^{-1}$\\
\hline
\textbf{Derived Parameters} & & \textit{Planet b}& \textit{Planet c}& \\
Planet Radius & $R_p$ & $2.99\pm0.14$ & $ 2.90\pm0.13$ & $R_\oplus$\\%CORRECT 3/16
Impact Parameter & b & $0.49 \pm 0.02$ & $0.30^{+0.03}_{-0.04}$ & \\%CORRECT 3/16
Planet Mass & $M_p$ & $\bmass \pm 1.8$ & $\cmass \pm 2.4$ & $M_\oplus$\\%CORRECT 3/16
Planet Density & $\rho_p$ & $2.2 \pm 0.5$ & $ 2.5 \pm \cdensityerr$ & g
cm$^{-3}$\\%CORRECT 3/21 Insolation Flux & $F_p$ & 115.6$\pm4.6$ & 36.7$\pm1.4$ & $S_\oplus$\\%CORRECT 3/21 Equilibrium Temperature\tnote{8} & T$_{eq}$ & 911$\pm27$ & 684$\pm21$ & K\\%CORRECT 3/21 Transmission Spectroscopy Metric & TSM & 108.8 $\pm$ 35.8& 72.5 $\pm$ 21.3 & \\%CORRECT 3/21 Emission Spectroscopy Metric & ESM & 10.5 $\pm$ 1.6 & 4.8 $\pm$ 0.7 & \\%CORRECT 3/21 \hline \end{tabular} \begin{tablenotes} \item[1] \texttt{isoclassify} \item[2] Specmatch-Syn \item[3] exofop \item[4] \textit{Gaia} \item[5] \texttt{juliet} \item[6] BJD-2457000 \item[7] \texttt{RadVel} \item[8] Assumes zero albedo and full day-night heat redistribution \end{tablenotes} \egroup \end{threeparttable} \end{center} \end{table*} \subsection{Is There a Third Planet?} \label{sec:planet3?} The photometry provides clear evidence for two planet candidates, whose presence we confirm with radial velocity followup. The residuals of a two-planet fit, however, show substantial structure. Because the HIRES and APF points in the residuals show similar behavior to each other, instrumental effects are unlikely to be the cause. This suggested that there was something still unaccounted for in our model, which could be a third planet. To test this, we generated a Lomb-Scargle periodogram of the radial velocity residuals from our two-planet fit (Figure \ref{fig:periodogram}). There are two primary visible peaks at longer periods, at $\sim$59 and $\sim$102 days. Because the peaks in the periodograms of the radial velocity residuals and s-values (activity indicators) do not correspond to each other, we consider it unlikely that this signal is caused by stellar activity. The lack of significant periodicity in the S-values suggests no motivation to adopt a Gaussian Process model for our data, consistent with our low value of $\log$R'HK. 
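To illustrate the kind of residual search described above, the sketch below implements a classic normalized Lomb-Scargle periodogram (Scargle 1982) in pure Python and recovers a synthetic $\sim$59 d signal from irregularly sampled mock residuals. The epochs, amplitude, and noise level are invented for illustration only; they are not our actual HIRES/APF/HARPS-N data.

```python
import math
import random

random.seed(1)

def lomb_scargle(t, y, periods):
    """Classic normalized Lomb-Scargle periodogram (Scargle 1982)."""
    n = len(y)
    ybar = sum(y) / n
    var = sum((v - ybar) ** 2 for v in y) / (n - 1)
    dy = [v - ybar for v in y]
    power = []
    for p in periods:
        w = 2.0 * math.pi / p
        # phase offset tau that makes the sine and cosine terms orthogonal
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        c = [math.cos(w * (ti - tau)) for ti in t]
        s = [math.sin(w * (ti - tau)) for ti in t]
        yc = sum(d * ci for d, ci in zip(dy, c))
        ys = sum(d * si for d, si in zip(dy, s))
        cc = sum(ci * ci for ci in c)
        ss = sum(si * si for si in s)
        power.append((yc * yc / cc + ys * ys / ss) / (2.0 * var))
    return power

# mock residual RVs: a 59 d sinusoid sampled at 80 irregular epochs over 400 d
t = sorted(random.uniform(0.0, 400.0) for _ in range(80))
y = [2.5 * math.sin(2 * math.pi * ti / 59.0) + random.gauss(0.0, 1.0) for ti in t]

periods = [10.0 + 0.2 * k for k in range(500)]   # 10-110 d search grid
power = lomb_scargle(t, y, periods)
best = periods[power.index(max(power))]           # peak near 59 d
```

In practice one would use a library implementation (e.g. \texttt{astropy}'s \texttt{LombScargle}) with proper false-alarm levels, but the mechanics are as above.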
We have also performed an independent analysis (Section \ref{sec:gyrochrone}) to identify the stellar rotation period, arriving at the conclusion that this period is roughly 30--35 days, and therefore inconsistent with both of the longer-period RV periodogram peaks. To examine the significance of these peaks, we performed a bootstrap analysis: we resampled our entire RV dataset with replacement 10,000 times, each time calculating the power at the periods where the peaks are highest in the true dataset. Selecting a $p$-value threshold of 0.05, we are unable to reject the null hypothesis for either signal. Therefore, in this paper we have adopted the two-planet model, as the evidence for a third planet does not rise to the level of statistical significance we require. Importantly, this choice does not substantially impact the mass precision of our two confirmed planets, and the masses from the two- and three-planet models are consistent with each other at the 1$\sigma$ level. Further radial velocity monitoring or photometric followup could provide clarity as to the source of this additional signal. \begin{table}[H] \begin{center} \caption{Mass Comparison for Different Models} \label{tab:diff_model_masses} \begin{tabular}{cccc} \hline \hline $N_{\rm planets}$ & $M_b$ ($M_\oplus$) & $M_c$ ($M_\oplus$) & $M_d \sin i$ ($M_\oplus$) \\ \hline 2 & $10.8 \pm 1.8$ & $11.1 \pm 2.4$ & n/a\\ 3 ($\textrm{P}_d=58.7$ d) & $11.1^{+1.6}_{-1.5}$ & $12.8^{+2.1}_{-2.0}$ & $16.8^{+2.7}_{-2.6}$\\ 3 ($\textrm{P}_d=101.7$ d) & $9.9^{+1.8}_{-1.7}$ & $11.2^{+2.2}_{-2.0}$ & $20\pm4$\\ \hline \end{tabular} \end{center} \footnotesize{The mass values for planets b and c for different possible models, as well as the $M\sin i$ for a possible third planet where relevant.
Note that the masses for the two transiting planets are consistent within 1$\sigma$ no matter which model is selected.} \end{table} \begin{figure}[H] \includegraphics[width=0.49\textwidth]{periodogram_v1.png} \caption{Lomb-Scargle periodograms of the radial velocity data for HD 63935, the residuals of the RV data with the signals corresponding to the transiting planets removed, and the Mt. Wilson S-values (activity indicators) for the system. False Alarm Probabilities of 0.1, 0.01, and 0.001 are shown as horizontal grey lines in each plot. The signals corresponding to a potential long-period companion (purple) do not rise to the required level of statistical significance, but also do not correspond to peaks in the S-values. The period near 17 days in the middle panel is ruled out as a transiting planet by the TESS photometry but may be an alias of the stellar rotation period. \label{fig:periodogram}} \end{figure} \section{Discussion} \label{sec:discussion} \subsection{Examining Plausible Compositions} \label{sec:compositions} In order to better understand this planetary system, we investigated a range of possible compositions based on the planets' bulk densities and corresponding positions in mass-radius space. This is of particular interest given the substantial compositional degeneracies that exist for planets in this radius range, primarily between ice/volatile-dominated planets and rock-dominated interiors with substantial H/He envelopes \citep{Lopez2014}. To do this, we used the public tool \texttt{smint}\footnote{\url{https://github.com/cpiaulet/smint}} \citep{Piaulet_2021}, developed by Caroline Piaulet, which utilizes the models of \cite{Lopez2014} and \cite{Zeng2016} to perform a Markov Chain Monte Carlo exploration of the posterior space of planetary interior compositions based on their mass, radius, age, and insolation flux. We use 25 MCMC walkers with 10,000 steps each in this analysis.
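\texttt{smint} wraps its sampling around the \cite{Lopez2014} and \cite{Zeng2016} model grids; as a minimal illustration of the sampling step alone, the toy sketch below runs a single-chain Metropolis-Hastings walk (not \texttt{smint}'s ensemble sampler) over a one-dimensional Gaussian stand-in posterior for the H/He mass fraction. The target mean and width are invented placeholders, not the actual model grids.

```python
import math
import random
import statistics

random.seed(42)

def log_post(f_env):
    """Toy log-posterior for the H/He mass fraction: a Gaussian stand-in
    (mean 0.036 and width 0.008 are illustrative placeholders)."""
    if not 0.0 < f_env < 1.0:
        return float("-inf")
    return -0.5 * ((f_env - 0.036) / 0.008) ** 2

def metropolis(n_steps, f0=0.10, step=0.005):
    """Plain Metropolis-Hastings random walk over the 1D posterior."""
    chain, f, lp = [], f0, log_post(f0)
    for _ in range(n_steps):
        g = f + random.gauss(0.0, step)       # symmetric proposal
        lq = log_post(g)
        if math.log(random.random()) < lq - lp:   # accept/reject
            f, lp = g, lq
        chain.append(f)
    return chain

chain = metropolis(10000)[2000:]   # discard burn-in
f_mean = statistics.mean(chain)    # posterior mean of the envelope fraction
f_std = statistics.stdev(chain)    # posterior width
```

Real analyses replace the toy posterior with a likelihood that interpolates the model grids in mass, radius, age, and insolation, and use many walkers to diagnose convergence.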
Our results from \texttt{smint} indicate that HD 63935 b and c have H/He mass fractions of $3.6\pm0.8$\% and $3.4\pm0.9$\%, respectively. Both planets could have cores that are intermediate between ice-dominated and rock-dominated. However, neither planet is sufficiently dense to be a pure ``water world''; in other words, both are expected to have substantial (few percent by mass) H/He envelopes. The compositional degeneracies that exist in this part of parameter space are one of the reasons that sub-Neptune-sized planets are so compelling for atmospheric characterization. Study of these planets' atmospheres can potentially reveal more about their composition, which in turn can contribute to our understanding of how these planets form and why no analogs exist in our own solar system. \begin{figure*}[t] \includegraphics[width=\textwidth]{MR-multipanel-509-v6_2.png} \caption{The position of HD 63935 b in mass-radius space. Top: the planet sample from \cite{Zeng2019Composition}, colored by equilibrium temperature, as well as a simple subset of composition curves from \cite{Zeng2016} to contextualize the image. Bottom: a density estimation of the same planet sample, with host star spectral type indicated by marker symbol (F, G, K, M). The bimodal distribution of planets around the radius gap at $\sim$1.8 R$_\oplus$ is clearly visible. HD 63935 b and c are emphasized and colored red, with planet b the upper of the two red points. Planets with TSM values higher than HD 63935 b are colored cyan (from top to bottom, GJ 1214 b, HD 97658 b, $\pi$ Mensae c, GJ 9827 d, 55 Cancri e, HD 219134 b, and HD 219134 c), and planets which have been subjected to atmospheric characterization in the past are circled in black. Those which have published atmospheric spectra but lower TSM values (from top to bottom, HD 3167 c, K2-18 b, and HD 3167 b) are colored white. Planets meeting neither criterion are shown in faded grey.
The dark red lines correspond to a subset of plausible composition curves for HD 63935 b, from \cite{Lopez2014} and \cite{Zeng2016}. This figure emphasizes the planet's uniqueness as a quality atmospheric target in parameter space. \label{fig:MR_multipanel}} \end{figure*} \subsection{Nearly Twins: How Does HD 63935 Fit in the ``Peas in a Pod'' Structure?} \cite{Weiss_2018} identified the tendency of Kepler planets in multi-planet systems to be more similar to one another than planets drawn at random from the overall distribution. They refer to this as ``peas in a pod''. The two confirmed planets in the HD 63935 system appear to be in line with this trend, as both planets have masses and radii that are consistent with each other within 1$\sigma$. Higher precision measurements have the potential to distinguish differences between the two planets. In particular, if planet c is found to be denser (as nominally appears to be the case, though the difference is not presently statistically significant), this would be interesting, because \cite{Weiss2014} note that in their sample the larger, lower-density planet tends to be the exterior one. They suggest that this result could be explained by photoevaporation. In this case, however, the nominally denser planet is the cooler and less-irradiated one, implying that a different explanation is required. We calculate the $\Lambda$ parameter described by \cite{Fossati_2017}, which estimates whether atmospheric erosion is relevant for a given planet. $\Lambda < 25$ at $T_\textrm{eq} = 1000$ K and $\Lambda < 35$ at $T_\textrm{eq} = 500$ K (Fig. 4 in \cite{Fossati_2017}) delimit the regimes in which atmospheric erosion is relevant for G-star hosts. The respective $\Lambda$ values for HD 63935 b and c are 30 and 42, suggesting that neither planet is likely to experience significant atmospheric Jeans escape.
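The restricted Jeans escape parameter of \cite{Fossati_2017}, $\Lambda = G M_p m_{\rm H} / (k_{\rm B} T_{\rm eq} R_p)$, can be evaluated directly from the values in Tables \ref{tab:planet-params} and \ref{tab:diff_model_masses}; the short sketch below reproduces the quoted values of 30 and 42.

```python
# Restricted Jeans escape parameter, Lambda = G * M_p * m_H / (k_B * T_eq * R_p),
# evaluated with SI constants and the planet parameters quoted in this paper.
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
k_B = 1.381e-23      # Boltzmann constant [J K^-1]
m_H = 1.673e-27      # hydrogen atom mass [kg]
M_EARTH = 5.972e24   # Earth mass [kg]
R_EARTH = 6.371e6    # Earth radius [m]

def jeans_lambda(mass_me, radius_re, t_eq):
    """Lambda of Fossati et al. (2017) for a planet of given mass [M_earth],
    radius [R_earth], and equilibrium temperature [K]."""
    return (G * mass_me * M_EARTH * m_H) / (k_B * t_eq * radius_re * R_EARTH)

lam_b = jeans_lambda(10.8, 2.99, 911.0)   # HD 63935 b -> ~30
lam_c = jeans_lambda(11.1, 2.90, 684.0)   # HD 63935 c -> ~42
```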
In the absence of atmospheric loss, \cite{Zeng2019Composition} propose that denser outer planets could be explained via impacts of ice-rich planetesimals \citep[see also][]{Marcus2009CollisionalStripping}. Also potentially of interest are the differing equilibrium temperatures of these planets ($\sim$911 K and $\sim$684 K for b and c, respectively). Given their otherwise similar properties, they could serve as an experimental testing ground for the role of insolation in planetary composition and nature. As we describe in more detail in the following section, planets cooler than 1000 K are likely to show spectral features of decreasing amplitude in transmission spectra as a result of increased haze formation \citep{Gao_2020}. Observation of such a phenomenon in the HD 63935 system would provide further evidence for this trend, and would be of particular significance because the planets orbit the same star (eliminating a possible confounding variable). \subsection{Assessing Atmospheric Observability} As described above in Section \ref{sec:selectionalgorithm}, we identified HD 63935 b as the best target for atmospheric characterization followup in its region of parameter space (radii between 2.6 and 4 $R_\oplus$, insolation fluxes between 10 and 100 $S_\oplus$, and stellar effective temperatures between 5200 and 6500 K). The TSM value for HD 63935 b, incorporating our measured mass, is 108.8 $\pm$ 35.8. In addition to its uniqueness within our algorithm's defined parameter space bins, planet b is also of interest as a tool to probe the radius cliff, which refers to the apparent drop-off in planet occurrence rate above $\sim 2.5\,R_\oplus$. In Figure \ref{fig:MR_multipanel}, we identify planets with TSM values equal to or greater than that of HD 63935 b, and find that only one (HD 191939 b, Lubin et al., submitted) falls on the radius cliff.
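The TSM quoted above follows the definition of Kempton et al. (2018); plugging in the values from Tables \ref{tab:planet-params} and \ref{tab:diff_model_masses} recovers the quoted numbers for both planets, as the sketch below shows.

```python
def tsm(r_p, m_p, t_eq, r_star, j_mag, scale=1.28):
    """Transmission Spectroscopy Metric (Kempton et al. 2018).
    r_p [R_earth], m_p [M_earth], t_eq [K], r_star [R_sun], j_mag [mag];
    scale = 1.28 is the factor for planets with 2.75 < R_p < 4 R_earth."""
    return scale * r_p**3 * t_eq / (m_p * r_star**2) * 10.0**(-j_mag / 5.0)

tsm_b = tsm(2.99, 10.8, 911.0, 0.959, 7.30)   # -> ~108.8
tsm_c = tsm(2.90, 11.1, 684.0, 0.959, 7.30)   # -> ~72.5
```

The quoted uncertainties ($\pm$35.8 and $\pm$21.3) then follow from propagating the errors on the inputs, with the mass uncertainty dominating.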
The physical causes of the radius cliff are hypothesized to be related to atmospheric sequestration \citep{Kite2019}, signs of which may be visible in an atmosphere, further enhancing the target's desirability for characterization. Note also that of the planets in that figure, four (HD 219134 b, HD 219134 c, 55 Cnc e, and $\pi$ Mensae c) have host stars bright enough to saturate \textit{JWST}, meaning they are not suitable targets for transmission spectroscopy with \textit{JWST}. To validate the observability of HD 63935 b, we simulated planetary transmission spectra using \texttt{CHIMERA} \citep{line2012info,line2013systematic,line2014systematic}, then simulated transit observations of the planets using \texttt{PandEXO} \citep{Batalha2017pandexo}. A sample of our simulations (cloud-free cases only) is shown in Figure \ref{fig:63935b-full-simulated-spectrum}. \begin{figure*}[t] \includegraphics[width=0.9\textwidth]{63935b-simulated-spectrum-1transit-v1_0.png} \caption{Simulated near-infrared transmission spectrum of HD 63935 b based on a single transit with each of the NIRISS SOSS and NIRSpec G395H instruments (note that the full dataset shown in this plot would therefore require two transits to obtain, one per instrument and corresponding wavelength range). Different colors indicate different metallicities relative to the solar value. The true model spectra are shown as solid lines, while the points are simulated observations. \label{fig:63935b-full-simulated-spectrum}} \end{figure*} The results for planet b support our selection of the planet as one with an exceptionally high potential for atmospheric characterization. We simulated a range of metallicities and cloud opacities, then calculated the SNR of the water feature at 1.5 $\mu$m. We did this by simulating the spectrum 1000 times, determining the equivalent width of the feature in each realization, and finding the standard deviation of the resulting distribution.
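The equivalent-width Monte Carlo just described can be sketched as follows. The toy spectrum (a Gaussian 1.5 $\mu$m band on a flat baseline) and the per-channel noise level below are invented placeholders rather than \texttt{CHIMERA}/\texttt{PandEXO} output; only the procedure (simulate, measure, take the scatter) mirrors our analysis.

```python
import math
import random
import statistics

random.seed(0)

# Toy transmission spectrum: flat baseline plus a Gaussian "water band" at 1.5 um.
wav = [1.30 + 0.002 * k for k in range(200)]       # wavelength grid [micron]
depth0, amp, mu, sig = 800.0, 60.0, 1.50, 0.04     # baseline/feature [ppm]
noise = 15.0                                       # per-channel noise [ppm]

def model(w):
    return depth0 + amp * math.exp(-0.5 * ((w - mu) / sig) ** 2)

def equivalent_width(spec):
    """Continuum from feature-free channels, then a direct sum over the band."""
    cont = statistics.mean(
        d for w, d in zip(wav, spec) if abs(w - mu) > 4 * sig)
    return sum((d - cont) * 0.002
               for w, d in zip(wav, spec) if abs(w - mu) <= 4 * sig)

# Simulate the noisy spectrum 1000 times and measure the feature each time.
ews = [equivalent_width([model(w) + random.gauss(0.0, noise) for w in wav])
       for _ in range(1000)]

snr = statistics.mean(ews) / statistics.stdev(ews)   # feature-detection SNR
```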
Our results are displayed in Figure \ref{fig:waterheight_planetb}. We emphasize that this is a conservative estimate of the quality of atmospheric spectra, but that even with moderately optimistic assumptions about metallicity and cloudiness, an observation of a single transit with \textit{JWST} can produce useful spectra of this planet, and more observations would of course produce correspondingly more precise spectra. \begin{figure}[h] \includegraphics[width=0.45\textwidth]{509b-1transit-SNR-v2_0.png} \caption{The SNR of the 1.5 $\mu$m water feature equivalent width of HD 63935 b, calculated after a single \textit{JWST} transit observation for a variety of metallicities and cloud opacities (log(kcld)). log(kcld) is a model parameter that encapsulates cloud opacity in a single greying parameter; the range used here spans the gamut from cloud-free in light green to opaque atmospheres in black. Optimistic-but-reasonable atmospheric clearness assumptions are between $-30$ and $-29$ (so SNR $\sim$10 for the 100$\times$ solar metallicity case). Obviously, an opaque atmosphere would produce no visible water features, though we describe in the text why there are reasons not to expect such a scenario. Note as well that this is a very conservative estimate of the quality of atmospheric observations: there are other water features available for obtaining a water detection, the equivalent width calculation used here ignores binning as well as more advanced retrieval techniques, and the data here come from only a single \textit{JWST} transit. SNR should scale roughly with $\sqrt{N_{\rm obs}}$ if more transits are added. \label{fig:waterheight_planetb}} \end{figure} Additionally, there is reason to expect that the atmosphere of HD 63935 b will have high-amplitude atmospheric features in its transmission spectrum.
Although cloud formation in exoplanetary atmospheres is not fully understood, extensive modelling and analysis of existing transmission spectra have been undertaken to determine which atmospheres will be dominated by clouds or hazes. Most recently, \cite{Gao_2020} used an aerosol microphysics model to predict the dominant opacity sources in giant exoplanet transmission spectra. Their results suggest that the height of spectral features (specifically the 1.4 $\mu$m water band) increases with increasing temperature until $\sim$950 K, with opacity in this temperature regime dominated by high-altitude photochemical hazes derived from hydrocarbons. Above $\sim$950 K, silicate condensate clouds become the dominant opacity source. These clouds rise higher into the atmosphere as temperature increases, resulting in reduced spectral feature amplitude with increasing temperature from $\sim$950 K until $\sim$1800 K. These predictions are highly relevant to this work, as HD 63935 b sits quite close to the critical equilibrium temperature of 950 K, which we expect to be a local maximum of feature amplitude. This prediction is also in line with earlier work by \cite{CrossfieldKreidberg2017}, which analyzed the published transmission spectra of Neptune-sized exoplanets and found a positive correlation between feature height and equilibrium temperature in their domain of 500--1100 K. Taken together, these results suggest that HD 63935 b is in a region of parameter space that is maximally likely to produce observable spectral features. Combined with its other desirable qualities as an atmospheric target (bright host star, large TSM value), it is clear that HD 63935 b has exceptional promise as a target for atmospheric characterization. Finally, we note that HD 63935 is quite Sun-like ($M_* \sim 0.933\,\textrm{M}_\odot$, [Fe/H] $= 0.07 \pm 0.06$), making characterization of its planetary system of additional interest for comparative planetology with our own solar system.
We also emphasize that only one sub-Neptune planet around such a similar star has a published atmospheric characterization study \citep[HD 3167 c;][]{mikalevans2020}. Together with other such planets with G-type host stars being discovered by TESS, it could form part of a robust population study. \section{Conclusions} We have described the discovery of two planets around the bright G star HD 63935. \begin{itemize} \item HD 63935 b and c have periods of $9.058811^{+0.000017}_{-0.000016}$ and $21.4023\pm0.0019$ days, radii of $2.99 \pm 0.14$ and $2.90 \pm 0.13$ $R_\oplus$, and masses of $\bmass \pm 1.8$ and $\cmass \pm 2.4$ $M_\oplus$. \item Both confirmed planets have radii larger than that of the median sub-Neptune and fall on the ``radius cliff''. \item Planet b is an outstanding target for transmission spectroscopy, being the best target in its parameter space niche and second-best among targets on the radius cliff, while planet c is also amenable to atmospheric characterization. This quality for followup is exceptionally enticing given the compositional degeneracies that exist in this region of parameter space, which could be broken with high-quality atmospheric characterization. \item Atmospheric characterization of this system could provide valuable input to theories of planetary interiors, formation, and evolution, especially given that two planets similar in all observed properties except insolation flux are as close to a variable-controlled experimental setup as exoplanet astronomy typically comes. \end{itemize} HD 63935 b and c attest to the bright future of exoplanet astronomy and we expect this system to be an excellent test case for studying exoplanetary atmospheres in coming years.
\textit{Facilities:} Automated Planet Finder (Levy), HARPS-N (TNG), HIRES (Keck I), Las Cumbres Observatory Global Telescope (LCOGT), NIRC2 (Keck II), PHARO (Palomar), TESS \textit{Software:} \texttt{AstroImageJ} \citep{Collins:2017}, \texttt{Astropy} \citep{astropy2013}, \texttt{batman} \citep{Kreidberg2015}, \texttt{emcee} \citep{emcee}, \texttt{exoplanet} and its dependencies \citep{exoplanet:exoplanet, exoplanet:agol20, exoplanet:arviz, exoplanet:luger18, exoplanet:pymc3, exoplanet:theano}, \texttt{isoclassify} \citep{isoclassify-huber-2017}, \texttt{juliet} \citep{Espinoza2019juliet}, \texttt{Jupyter} \citep{jupyter2016}, \texttt{kiauhoku} \citep{Claytor_2020}, \texttt{matplotlib} \citep{Hunter2007matplotlib}, \texttt{numpy} \citep{numpy}, \texttt{pandas} \citep{pandas}, \texttt{Radvel} \citep{Fulton2018}, \texttt{smint} \citep{Piaulet_2021}, \texttt{SpecMatch-Syn} \citep{Petigura2017}, \texttt{Transit Least Squares} \citep{TLS} \section{Acknowledgments} We thank the anonymous referee for their helpful feedback that improved the quality of this work. We thank the time assignment committees of the University of California, the California Institute of Technology, NASA, and the University of Hawaii for supporting the TESS-Keck Survey with observing time at Keck Observatory and on the Automated Planet Finder. We thank NASA for funding associated with our Key Strategic Mission Support project. We gratefully acknowledge the efforts and dedication of the Keck Observatory staff for support of HIRES and remote observing. We recognize and acknowledge the cultural role and reverence that the summit of Maunakea has within the indigenous Hawaiian community. We are deeply grateful to have the opportunity to conduct observations from this mountain. We thank Ken and Gloria Levy, who supported the construction of the Levy Spectrometer on the Automated Planet Finder.
We thank the University of California and Google for supporting Lick Observatory and the UCO staff for their dedicated work scheduling and operating the telescopes of Lick Observatory. This paper is based on data collected by the TESS mission. Funding for the TESS mission is provided by NASA's Science Mission Directorate. We acknowledge the use of public TESS data from pipelines at the TESS Science Office and at the TESS Science Processing Operations Center. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center for the production of the SPOC data products. This paper includes data collected by the TESS mission that are publicly available from the Mikulski Archive for Space Telescopes (MAST). This research has made use of the NASA Exoplanet Archive and Exoplanet Follow-up Observation Program website, which are operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. Based on observations made with the Italian Telescopio Nazionale Galileo (TNG) operated on the island of La Palma by the Fundaci\'on Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias under programmes CAT19A\_162 and CAT19A\_96. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}).
Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This work is partly supported by JSPS KAKENHI Grant Numbers JP17H04574 and JP18H05439, JST PRESTO Grant Number JPMJPR1775, and Grant-in-Aid for JSPS Fellows, Grant Number JP20J21872. This work makes use of observations from the LCOGT network. D. D. acknowledges support from the TESS Guest Investigator Program grant 80NSSC19K1727 and NASA Exoplanet Research Program grant 18-2XRP18\_2-0136. J.M.A.M. is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1842400. J.M.A.M. acknowledges the LSSTC Data Science Fellowship Program, which is funded by LSSTC, NSF Cybertraining Grant No. 1829740, the Brinson Foundation, and the Moore Foundation; his participation in the program has benefited this work. M.R.K. is supported by the NSF Graduate Research Fellowship, grant No. DGE 1339067. P.D. acknowledges support from a National Science Foundation Astronomy and Astrophysics Postdoctoral Fellowship under award AST-1903811.
\chapter*{} \textit{To my parents.} \includepdf{blank} \includepdf{abstract-EN} \includepdf{blank} \includepdf{abstract-FR} \chapter*{Preface.\addcontentsline{toc}{part}{Preface.}} {This manuscript summarises the research conducted during my Ph.D. studies, from October 2015 to September 2018, under the supervision of Jean-Christophe Wallet at the Laboratoire de Physique Th\'{e}orique of the Universit\'{e} de Paris-Sud XI in Orsay, France. The material presented here is based on the following papers: \begin{itemize} \item{T. Juri\'{c}, T. Poulain, and J.-C. Wallet, \href{https://doi.org/10.1007/JHEP05(2016)146}{\textit{``Closed star product on noncommutative $\mathbb{R}^3$ and scalar field dynamics,"}} J. High Energy Phys. \textbf{2016}(05):146, 2016;} \item{T. Juri\'{c}, T. Poulain, and J.-C. Wallet, \href{https://doi.org/10.1007/JHEP07(2017)116}{\textit{``Involutive representations of coordinate algebras and quantum spaces,"}} J. High Energy Phys. \textbf{2017}(07):116, 2017;} \item{T. Poulain, and J.-C. Wallet, \href{https://doi.org/10.1088/1742-6596/965/1/012032}{\textit{``Quantum spaces, central extensions of Lie groups and related quantum field theories,"}} J. Phys.: Conf. Ser. \textbf{965}:012032, 2017;} \item{T. Poulain, and J.-C. Wallet, \href{https://doi.org/10.1103/PhysRevD.98.025002}{\textit{``$\kappa$-Poincar\'{e} invariant quantum field theories with Kubo-Martin-Schwinger weight,"}} Phys. Rev. D\textbf{98}:025002, 2018;} \item{T. Juri\'{c}, T. Poulain, and J.-C. Wallet, \href{http://arxiv.org/abs/arXiv:1805.09027}{\textit{``Vacuum energy and the cosmological constant problem in $\kappa$-Poincar\'{e} invariant field theories,"}} Preprint arXiv:1805.09027, 2018;} \item{T. Poulain, and J.-C. 
Wallet, \href{http://arxiv.org/abs/arXiv:1808.00350}{\textit{``$\kappa$-Poincar\'{e} invariant orientable field theories at one-loop: scale-invariant couplings,"}} Preprint arXiv:1808.00350, 2018.} \end{itemize}\bigskip \paragraph{Note for French readers:} \noindent{A summary, in French, of my thesis work is given in the appendix; \textit{cf.} Appendix \ref{app-resumefr}.} } \chapter*{Acknowledgements.} {I am deeply grateful to Jean-Christophe Wallet for his patience and guidance during the three years of my Ph.D. I am also grateful to him for having introduced me to noncommutative geometry and quantum field theory, and for sharing his expertise on both their physical and mathematical aspects. \bigskip I warmly thank M. Dubois-Violette, G. Lechner, F. Lizzi, G. Amelino-Camelia, F. Besnard, and A. Sitarz for refereeing my work, and for having accepted to be part of my defense committee.\bigskip I would like to acknowledge COST Action MP1405 for financially supporting my visits at the Ru{\dj}er Bo{\v{s}}kovi{\'c} Institute in Zagreb (Croatia), in September 2016, and at the Ettore Pancini Department of Physics of the Universit{\`a} Federico II in Naples (Italy), in December 2017. I warmly thank S. Meljanac, A. Samsarov, and T. Juri{\'c}, as well as P. Vitale, and F.
Lizzi, for their hospitality during my visits, and for the helpful and stimulating discussions we had on these occasions.\bigskip Finally, I would like to acknowledge the \textit{{\'E}cole Doctorale n$^\circ$564}, and the \textit{Laboratoire de Physique Th{\'e}orique} -- UMR 8627 Universit{\'e} de Paris-Sud and CNRS -- for funding my research activities during the past three years.} \tableofcontents \cleardoublepage \phantomsection \addcontentsline{toc}{part}{\listtablename.} \listoftables \part*{Introduction.\addcontentsline{toc}{part}{Introduction.}\pagenumbering{arabic}} The failure of classical mechanics to describe physical phenomena involving atomic systems ultimately led to the discovery of quantum mechanics in the 1920s. One of the most characteristic conceptual features of quantum mechanics, which contrasts sharply with the (at the time well-established) classical determinism, is the fundamental limit to the precision with which the (classical) properties of a physical system can be known. This fact, known as the Heisenberg uncertainty principle, is reflected at the mathematical level by the introduction of noncommuting variables acting on some Hilbert space, which replace the classical dynamical quantities at the quantum level. An important example of this type is provided by the classical phase space, whose point coordinates $(x,p)$ are replaced, at the quantum level, by noncommuting operators $(\hat{x},\hat{p})$, \textit{i.e.} such that $\hat{x}\hat{p}-\hat{p}\hat{x}\neq0$, therefore precluding the use of the classical notion of trajectory.
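For the canonical phase-space pair, this noncommutativity is quantified by the standard commutation relation and the resulting uncertainty bound, recalled here for completeness:

```latex
\begin{equation}
[\hat{x},\hat{p}] := \hat{x}\hat{p}-\hat{p}\hat{x} = i\hbar\,\mathbb{1},
\qquad
\Delta\hat{x}\,\Delta\hat{p} \geq \frac{\hbar}{2}.
\end{equation}
```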
Consequently, the classical idea of spacetime, as the place of the events, is, in some sense, already lost in quantum mechanics, as spacetime cannot be reconstructed as a continuum from the measurement of successive positions of a point particle; even though, within this picture, the coordinate operators still commute among themselves.\smallskip The idea to extend the concept of noncommutative (phase) space to spacetime itself dates back to the early days of the relativistic quantum theory of fields and is probably due to the founders of quantum mechanics themselves; see, e.g., \cite{Letter1,Letter2}.\footnote{See also, e.g., \cite{Peierls:1933,Dunne:1993}, where noncommutativity of spacelike coordinates of a quantum mechanical system emerges as a consequence of the presence of a (strong, constant) magnetic field background.} One of the initial motivations \cite{Snyder:1947,Snyder:1947b,Yang:1947} was to cure (at least some of) the ultraviolet (UV) divergences plaguing the theory of quantum electrodynamics by means of the introduction of a fundamental length scale $\ell>0$,\footnote{In this context, the parameter $\ell$ plays a role similar to that of the Planck constant $\hbar$ in quantum mechanics.} such that a natural UV cutoff would be provided by $\Lambda\sim\ell^{-1}$. (Un)fortunately, the pioneering works which ultimately led to the powerful and fruitful renormalisation procedure arose at about the same time (1946--49), overshadowing the idea of noncommutative spacetime for a while.
Nevertheless, some of the central ideas behind the reintroduction of the concept of noncommutative spacetime to reconcile gravity with quantum mechanics were already present in Snyder's original paper.\footnote{In the following, we shall use the terminology ``quantum gravity," in a possibly broader sense than the usual one, to designate any approach aiming to reconcile, or unify, gravity with quantum mechanics.} In particular, it was mentioned that ``the roots of the trouble in field theory [could] lie in the assumption of point interactions between matter and fields," or, equivalently, that the classical description of spacetime, as a continuum, might ``not provide a suitable framework within which interacting matter and fields can be described." And indeed, noncommutative field theory (NCFT), namely field theory on a noncommutative background, can usually be regarded as ordinary field theory with nonlocal interactions; see Part \ref{part-ncft}. As we are going to see, however, it is by now known that the introduction of a fundamental length scale is, in general, not sufficient to fully regularise a quantum field theory. Instead, it is conjectured that the ``ultraviolet behaviour of a field theory on noncommutative spaces is sensitive to the topology of the spacetime considered, namely to its compactness"; see, e.g., \cite{Chaichian:2000}. Typical examples of this idea are provided by NCFT built on the Moyal space \cite{Filk:1996}, as well as on the $\kappa$-Minkowski space \cite{moi:2018a}, for which the one-loop 2-point functions of the $\phi^4$ scalar field theory are still UV divergent, while NCFT built on $\mathbb{R}^3_\theta$, a class of quantum spaces with $\mathfrak{su}(2)$ noncommutativity, are found to be finite, the deformation parameter $\theta\sim\ell$ playing the role of a natural UV cutoff \cite{JCW:2013b,moi:2016}; see Table \ref{tableau}. \begin{table}[h!]
\centering \includegraphics[scale=0.46]{table1ter.jpg} {\caption{\label{tableau} \small \noindent Examples of quantum spacetimes, and some properties of related models of noncommutative $\vert\phi\vert^4$ scalar field theories. Note that the expressions for the momentum conservation laws are given by the Baker-Campbell-Hausdorff formula associated with the corresponding Lie algebras of coordinate operators, while the UV behaviour of the NCFT depends strongly on the compactness of the Lie group underlying the quantum spacetime. Here, $\Theta_{ij}$ is a skew-symmetric constant tensor, while $\theta,\kappa>0$ are real.}} \end{table} From another perspective, it has been suggested, as early as the birth of Einstein's theory of general relativity, that quantum effects should lead, in one way or another, to modifications in the description of the gravitational interaction, which could be reflected in the abandonment of the classical concept of (Riemannian) geometry. For a comprehensive historical review of the development of quantum gravity see, e.g., \cite{Stachel:1999,Rovelli:2002,Prugovecki:1996}, and references therein. Besides the many attempts to quantise the gravitational field, the minimal length scale hypothesis, as a way to reconcile gravity with quantum mechanics, slowly gained strength.
This was initially justified by heuristic arguments based on some kind of gravitational Heisenberg microscope: the energy used to probe a sufficiently small region of spacetime might -- according to basic arguments of general relativity and quantum mechanics -- give rise to the formation of a black hole horizon, hence rendering the measurement process meaningless unless there exists a fundamental length scale below which spacetime cannot be probed;\footnote{In the context of quantum gravity, this length scale is often interpreted as the Planck length, \textit{i.e.} \begin{equation} \ell_p:=\sqrt{\frac{\hbar G}{c^3}}\approx 1.6\times10^{-35}\text{m}, \end{equation} the (theoretical) scale at which both quantum and gravitational effects become (equally) important.} see, e.g., \cite{Mead:1964,Maggiore:1993a,Maggiore:1993b,Doplicher:1994,Doplicher:1995}. More significantly, a renewed interest in the possible noncommutative structure of spacetime arose in the 1990s from the theoretical evidence for the existence of a minimal length in both the context of string theory \cite{Amati:1989} and that of loop quantum gravity \cite{Ashtekar:1992}; note that such theoretical evidence was already pointed out, e.g., in \cite{Born:1938}, as early as 1938. More recently, a proposal \cite{Amelino:2000,Amelino:2001} to incorporate a fundamental (observer independent) length scale into a theory compatible with the Einstein relativity principle has emerged, thereby providing a good starting point for investigating testable (phenomenological) scenarios aiming to confront quantum gravity models with observations, for instance by measuring deviations from the usual dispersion relations \cite{Amelino:1998}; for a review on doubly special relativity, see, e.g., \cite{Kowalski:2005,Amelino:2010}.
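The heuristic argument can be made explicit as follows. The Heisenberg uncertainty relation requires $\delta x\gtrsim \hbar/\delta p$, while the requirement that the Schwarzschild radius associated with the probe energy does not engulf the probed region requires $\delta x\gtrsim G\,\delta p/c^3$. The optimal resolution is obtained when both bounds saturate simultaneously, \textit{i.e.}
\begin{equation}
\frac{\hbar}{\delta p}\sim\frac{G\,\delta p}{c^3}\ \Longrightarrow\ \delta x\gtrsim\sqrt{\frac{\hbar G}{c^3}}=\ell_p,
\end{equation}
in accordance with the minimal length scale hypothesis.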
Although we will not, strictly speaking, consider models which include gravity in the present study, we do believe that the above examples provide reasonable motivations for studying noncommutative spacetimes and noncommutative field theory.\smallskip While the way forward to reconcile classical gravity with quantum mechanics is far from unique, the noncommutative structure of spacetime at some (possibly Planckian) scale appears to be a common feature shared by most of the current approaches to quantum gravity. In this respect, noncommutative geometry \cite{Connes:1990,Landi:1997,Madore:1999,Gracia:2001} provides a suitable mathematical framework for undertaking the study of the quantum structure of spacetime and its consequences for the description of physical phenomena. At the commutative level, algebraic geometry provides us with a full dictionary between geometric objects and algebraic ones. For instance, the Gelfand-Naimark theorem \cite{Gelfand:1943} states that there is a one-to-one correspondence between commutative C*-algebras and locally compact Hausdorff spaces $X$. In particular, the points of $X$ are identified with the characters on the commutative C*-algebra $C_0(X)$ of continuous functions vanishing at infinity. Conversely, any abstract commutative C*-algebra $\mathcal{A}$ can be regarded as the algebra of continuous functions over some topological space. Hence, the topology of a space can be encoded in an algebraic language. To capture the geometry of the space, more ingredients are needed. It has been proven by A. Connes that certain commutative algebras supplemented with additional structures, the so-called spectral triples $(\mathcal{A},\mathcal{H},\mathcal{D})$, which consist of an algebra $\mathcal{A}$ acting on some Hilbert space $\mathcal{H}$ together with a Dirac operator $\mathcal{D}$, are in one-to-one correspondence with Riemannian spin manifolds.
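To fix ideas, recall that the characters realising the points of $X$ are nothing but the evaluation maps, namely
\begin{equation}
\chi_x:C_0(X)\to\mathbb{C},\qquad \chi_x(f):=f(x),
\end{equation}
for $x\in X$, every nonzero *-homomorphism $C_0(X)\to\mathbb{C}$ being of this form. The space $X$, together with its topology, can thus be recovered as the spectrum of the algebra $C_0(X)$.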
The idea of noncommutative geometry is then to replace commutative algebras by noncommutative ones, and to interpret the result as encoding some of the characteristics of what could be a noncommutative spacetime, or quantum geometry. Among the celebrated examples of physical applications of this (noncommutative) spectral triple approach to geometry, one should mention the Connes-Chamseddine description of the standard model of particle physics and gravity \cite{Chamseddine:1997,Chamseddine:2007}. However, it is another approach to noncommutative geometry that we adopt in the present dissertation, and no use of spectral triples is made. We will be mainly concerned with noncommutative quantum field theory, using the star product formulation of noncommutative geometry. We will investigate one-loop quantum properties of various models of scalar field theory with quartic interactions, whose dynamics is characterised by various choices of kinetic operators chosen to be squares of Dirac operators. For this purpose, the most important picture is that of Gelfand-Naimark: namely, to interpret noncommutative algebras of functions (endowed with star products) as hypothetical quantum spacetimes.\smallskip The renewed interest in noncommutative field theory slowly appeared in the physics literature from the middle of the 1980s \cite{Witten:1986,Dubois:1990a,Dubois:1990b,Madore:1991,Grosse:1992}. This interest was further increased by the observation that NCFT might emerge from some regime of string theory, and matrix theory, in external (magnetic) backgrounds \cite{Seiberg:1999,Schomerus:1999,Connes:1998}. From the beginning of the 2000s, the unusual renormalisation properties of the NCFT built on the Moyal space $\mathbb{R}_\theta^4$ \cite{Minwalla:2000,Chepelev:2000} triggered a growing interest, in particular in coping with UV/IR mixing.
Schematically, this phenomenon results from the existence of nonplanar diagrams which, albeit UV finite, become singular at exceptionally low external momenta. This generates UV divergences in higher order diagrams in which they are involved as subdiagrams, signalling that UV and IR scales are nontrivially related. Recall that one presentation of $\mathbb{R}_\theta^4$ is provided by $\mathbb{C}[\hat{x}_i]/\mathcal{R}$, the quotient of the free algebra generated by four Hermitian coordinates $\hat{x}_i$ by the relation $\mathcal{R}$ defined by $[\hat{x}_i,\hat{x}_j]=i\Theta_{ij}$, where $\Theta_{ij}$ is a skew-symmetric constant tensor. This deformation of $\mathbb{R}^4$ can be described as a (suitable) algebra of functions on $\mathbb{R}^4$ equipped with the celebrated Groenewold-Moyal product \cite{Groenewold:1946,Moyal:1949} obtained from the Weyl quantisation scheme. For a review of the various presentations of the Moyal space, as well as related NCFT, see, e.g., \cite{Douglas:2001,Szabo:2003,Wallet:2008}. The investigation of the various properties of NCFT built on $\mathbb{R}^4_\theta$ has generated many works, leading to the first all order renormalisable scalar field theory with quartic interactions \cite{Grosse:2003,Grosse:2005,Grosse:2005b}, in which the UV/IR mixing was rendered innocuous through the introduction of a harmonic term; see also, e.g., \cite{Grosse:2004,Disertori:2007,Langmann:2003,Langmann:2004,degoursac:2008,degoursac:2011}. Other examples of quantum spaces can be obtained from algebras of coordinates which are of Lie algebra type; see, e.g., \cite{Gracia:2002,Kupriyanov:2008,moi:2017a,moi:2018b}. One such example is provided by quantum spaces with $\mathfrak{su}(2)$ noncommutativity. This type of noncommutativity was proposed, for example, in the context of spin network theory \cite{Penrose:1971}, that of $2+1$ quantum gravity \cite{Freidel:2006,Freidel:2008,Guedes:2013}, and that of brane models \cite{Alekseev:1999}.
This space, known as $\mathbb{R}^3_\theta$ \cite{Hammou:2002}, can be related to the universal enveloping algebra of $\mathfrak{su}(2)$, \textit{i.e.} $\mathcal{U}\big(\mathfrak{su}(2)\big):=\mathbb{C}[\hat{x}_i]/\mathcal{R}'$, where the relation $\mathcal{R}'$ is defined by $[\hat{x}_i,\hat{x}_j]=i\theta\varepsilon_{ijk}\hat{x}_k$, $\theta>0$. Scalar field theories built on $\mathbb{R}^3_\theta$ have been studied, e.g., in \cite{JCW:2013b,Vitale:2014,moi:2016}, and will be discussed in more detail in Chap. \ref{sap-ncftsu2}. Some of these NCFT have been shown to be free of perturbative UV/IR mixing and are characterised by the occurrence of a natural UV cutoff, both stemming from the group algebraic structure underlying $\mathbb{R}^3_\theta$ \cite{JCW:2016}.\smallskip Another famous example of a Lie algebra type noncommutative spacetime is provided by the $\kappa$-Minkowski space. The latter appears in the physics literature to be one of the most studied noncommutative spaces with Lie algebra type noncommutativity, and is sometimes regarded as a good candidate for a quantum spacetime to be involved in a description of quantum gravity, at least in some limit \cite{Amelino:2004,Freidel:2004,Cianfrani:2016}. Informally, the $d+1$ dimensional $\kappa$-Minkowski space may be viewed as the enveloping algebra of the Lie algebra generated by the $d+1$ operators $\hat{x}_\mu$ satisfying $[\hat{x}_0,\hat{x}_i]=i\kappa^{-1} \hat{x}_i,\ [\hat{x}_i,\hat{x}_j]=0,\ i,j=1,\cdots, d$, where the deformation parameter $\kappa$ has dimension of a mass. This algebra of coordinates was characterised long ago in \cite{Majid:1994} by exhibiting the Hopf algebra bicrossproduct structure of the $\kappa$-Poincar\'{e} quantum algebra \cite{Lukierski:1991,Lukierski:1992}, which (co)acts covariantly on $\kappa$-Minkowski and may be viewed as describing its quantum symmetries.
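To illustrate the Lie algebraic structure at work, note that the above commutation relations imply $e^{ip_0\hat{x}_0}\hat{x}_i\,e^{-ip_0\hat{x}_0}=e^{-p_0/\kappa}\hat{x}_i$, so that (time-to-the-right) ordered plane waves compose according to
\begin{equation}
e^{i\vec{p}\cdot\hat{\vec{x}}}e^{ip_0\hat{x}_0}\ e^{i\vec{q}\cdot\hat{\vec{x}}}e^{iq_0\hat{x}_0}=e^{i(\vec{p}+e^{-p_0/\kappa}\vec{q})\cdot\hat{\vec{x}}}\ e^{i(p_0+q_0)\hat{x}_0},
\end{equation}
reflecting the characteristic deformed (nonabelian) composition law of momenta, $(p\oplus q)_0=p_0+q_0$, $(p\oplus q)_i=p_i+e^{-p_0/\kappa}q_i$. Conventions for the ordering, hence for the precise form of $\oplus$, vary in the literature.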
A considerable amount of literature has been devoted to the exploration of algebraic aspects related to $\kappa$-Minkowski and $\kappa$-Poincar\'e (see, e.g., \cite{Pachol:2011}, and references therein), in particular dealing with concepts inherited from quantum groups, and (twist) deformations. For a comprehensive recent review of these algebraic developments see, e.g., \cite{Lukierski:2017}, and references therein. Besides, the possibility of having testable/observable consequences from related phenomenological models has attracted growing interest and has resulted in many works dealing, for example, with doubly special relativity, modified dispersion relations, and relative locality \cite{Amelino:2000,Amelino:2001,Amelino:2002,Amelino:2009,Gubitosi:2013,Amelino:2011}, as well as in the study of the classical properties of noncommutative field theories built on $\kappa$-Minkowski \cite{Agostini:2004,Agostini:2004b,Agostini:2007,Dimitrijevic:2003,Dimitrijevic:2004,Borowiec:2009,Meljanac:2011,Meljanac:2011b}. In contrast, their quantum properties have, surprisingly, not been so widely explored compared to the present status of the above mentioned NCFT built on $\mathbb{R}^3_\theta$ and $\mathbb{R}^4_\theta$. This is probably due to the nonunimodularity of the Lie group underlying the construction of $\kappa$-Minkowski, together with the fact that requiring the $\kappa$-Poincar{\'e} invariance of the action functional (encoding the dynamics of interacting scalar fields) induces a loss of cyclicity of the Lebesgue integral, \textit{i.e.} $\int f\star g\neq\int g\star f$. These technical difficulties have been further amplified by the complicated expressions of the star products used in these studies. Nevertheless, the UV/IR mixing within some scalar field theories on $\kappa$-Minkowski has been examined in \cite{Grosse:2006} and found to possibly occur.
The corresponding analysis was based on a star product for the $\kappa$-deformation derived in \cite{Dimitrijevic:2004c} from a general relationship between the Kontsevich formula and the Baker-Campbell-Hausdorff formula that can be conveniently used when the noncommutativity is of Lie algebra type \cite{Kathotia:1998}. However, the cumbersome expression of this star product leads to very involved formulas which drastically restrict the study and the analysis of the quantum properties of NCFT built on $\kappa$-Minkowski. In the present dissertation, we will use another star product to investigate the quantum properties of $\kappa$-Poincar{\'e} invariant interacting scalar field theories. This product -- which is equivalent to the star product derived in \cite{Durhuus:2013} -- is based on a generalisation of the Weyl quantisation scheme; see Chap. \ref{sec-Minkowski}. The relatively simple expression of this star product (see eq. \eqref{star-4d}) enabled us to provide the first complete analysis of the one-loop quantum behaviour of various models of $\kappa$-Poincar{\'e} invariant scalar field theory with quartic interactions, thereby bypassing the above mentioned difficulties; see Chap. \ref{ch-ncft}.\smallskip The NCFT we consider in Chap. \ref{ch-ncft} are $\kappa$-Poincar\'e invariant, which is a physically reasonable requirement keeping in mind the important role played by Poincar\'e invariance in ordinary field theories, together with the fact that the $\kappa$-Poincar\'e algebra can be viewed as describing the quantum symmetries of the $\kappa$-Minkowski spacetime. Indeed, an important question to address when considering NCFT is the fate of the symmetries of a noncommutative spacetime. This has triggered a lot of works using various approaches, which basically depend on whether one insists on preserving (almost all of) the classical symmetries or considers deformed ones.
For example, in \cite{Doplicher:1994}, the attention was focused on preserving the classical (undeformed) Lorentz or Poincar\'e symmetries for the Moyal space, as well as in \cite{Dkabrowski:2010,Dkabrowski:2011b} for the $\kappa$-Minkowski space. In the latter works, the authors ensure classical covariance of the $\kappa$-Minkowski space starting from a generalised version of it introduced in \cite{Lukierski:2002}, \textit{i.e.} $[\hat{x}_\mu,\hat{x}_\nu]=i\kappa^{-1}(v_\mu \hat{x}_\nu - v_\nu \hat{x}_\mu)$. The authors of \cite{Lukierski:2002} show that, under some assumptions, deformed (quantum) symmetries are not the only viable and consistent solution for treating such models. Note, however, that the original $\kappa$-Minkowski space, $[\hat{x}_0,\hat{x}_i]=i\kappa^{-1} \hat{x}_i,\ [\hat{x}_i,\hat{x}_j]=0$, which we consider in the present study, does not fit in that description and breaks the classical relativity principle. This leads us to the other approach widely studied in the literature, namely the extension of the usual notion of Lie algebra symmetries to that of (deformed) Hopf algebra symmetries, aiming to encode the new (canonical) symmetries of the quantum spacetimes. This point of view is motivated by the fact that, in the commutative case, the Minkowski spacetime can be regarded as the homogeneous space on which the Poincar\'e symmetry group acts transitively. Hence, a deformation of the former should (in principle) imply a deformation of the latter, and vice versa. This idea underlies the original derivation of $\kappa$-Minkowski as the homogeneous space associated to $\kappa$-Poincar\'e \cite{Majid:1994}. Another interesting example (to put in perspective with \cite{Doplicher:1994}) is given in \cite{Chaichian:2004,Chaichian:2005}, where it is shown that the symmetries of the Moyal space can be obtained through formal (Drinfeld) twist deformation of the Lorentz sector of the Poincar\'e algebra, while translations remain undeformed.
Finally, similar questions have been addressed for $\mathbb{R}^3_\theta$, e.g., in \cite{Majid:1988,Majid:1991}. General discussions on the fate of the Poincar\'e symmetries within the context of noncommutative spacetimes can be found, e.g., in \cite{Amelino:2002c}, and references therein.\smallskip \paragraph{Outline.} The first part of this dissertation is devoted to the study of various classes of Lie algebra type noncommutative spaces. We construct families of star products associated with such spaces, which we may eventually use in the study of noncommutative field theory. The guideline underlying our constructions lies in abstract harmonic analysis on, and representation theory of, Lie groups. In Chapter \ref{sec-Minkowski}, we construct a star product associated with $\kappa$-Minkowski using standard tools from harmonic analysis and group C*-algebras. The derivation is performed in the spirit of the Weyl quantisation scheme leading to the celebrated Groenewold-Moyal product in quantum mechanics. Here, the Heisenberg group is replaced (in the 2-dimensional case) by the $ax+b$ group. The group and its algebra are characterised. In particular, we point out that the group is nonunimodular. The star product is defined as usual through the introduction of an invertible quantisation map which, in the Weyl quantisation scheme, is given (up to Fourier transform) by a bounded *-representation of the convolution algebra of the $ax+b$ group. The construction is then extended to any (spatial) dimension, in which case the group is given by the semidirect product of two Abelian groups, namely $\mathcal{G}_{d+1}:=\mathbb{R}\ltimes\mathbb{R}^d$. The C*-algebra of fields modelling $\kappa$-Minkowski is presented. In particular, due to the nonunimodularity of $\mathcal{G}_{d+1}$, the natural involution on $\kappa$-Minkowski (\textit{i.e.} the one compatible with the structure of group C*-algebra) does not coincide with ordinary complex conjugation.
This plays an important role in the construction of action functionals aiming to describe the dynamics of $\mathbb{C}$-valued interacting scalar fields on a $\kappa$-Minkowski background. Finally, the relation between $\kappa$-Minkowski and the $\kappa$-Poincar\'{e} algebra is recalled, emphasising the canonical action of the latter on the former, which can be interpreted as the action of a symmetry group on its homogeneous space. Basics on abstract harmonic analysis, and on $\kappa$-Poincar\'{e}, are collected in Appendix \ref{sap-harmonic} and Appendix \ref{sap-poincare}, respectively.\smallskip In Chapter \ref{sec-products}, we consider various families of quantum spaces whose algebras of coordinate operators, say $\mathfrak{g}$, are semisimple Lie algebras. Note that, being a solvable Lie algebra, the $\kappa$-Minkowski algebra of coordinates does not belong to this category. In particular, the groups associated with $\mathfrak{g}$, say $\mathcal{G}$, are unimodular. Therefore, a natural involution to be used in the construction of noncommutative field theories on these spaces is provided by complex conjugation. The approach adopted to construct the star products associated with these spaces is slightly different from the one adopted in Chap. \ref{sec-Minkowski}. The construction is done in two steps. First, we show that the abstract Lie algebra $\mathfrak{g}$ can be conveniently represented as an algebra of differential operators acting on some Hilbert space of functions. In view of the important role played by the involutive structures in both the construction of the algebra of fields and the study of noncommutative field theory, we require the differential representations to define morphisms of *-algebras. It is worth mentioning that this requirement is not always taken into account in the literature.
We show that, under these assumptions (\textit{i.e.} the differential representations are morphisms of Lie algebras preserving the involutions), the admissible representations are classified by a set of four differential equations we call ``master equations." Next, independently of the above set of master equations, we turn to the construction of star products, which we define as usual by $f\star g:=Q^{-1}\big(Q(f)Q(g)\big)$. We show that the quantisation maps $Q$ -- which are regarded as differential operators when applied to functions -- are fully determined by their evaluation on the plane waves, namely $Q\big(e^{ip\cdot x}\big)$, which we call deformed plane waves. Then, we show that, keeping in mind the Weyl quantisation scheme, the deformed plane waves can be interpreted as projective representations of $\mathcal{G}$. Thus, we highlight the fact that, under this assumption, families of inequivalent star products (which merely result from the multiplication of deformed plane waves) are classified by the second cohomology group, $H^2(\mathcal{G},\mathcal{A})$, of $\mathcal{G}$ with value in an Abelian group $\mathcal{A}$. However, the study of the quantum properties of noncommutative field theory usually necessitates an explicit expression for the star product. This amounts to choosing a representative in one of the equivalence classes belonging to $H^2(\mathcal{G},\mathcal{A})$, which is further facilitated by using the above mentioned differential representation of the algebra of coordinate operators.\smallskip In Section \ref{sec-su2}, we apply this procedure to the case of $SU(2)$. We show that explicit expressions for the deformed plane waves can be obtained upon using $SO(3)$-equivariant differential *-representations of $\mathfrak{su}(2)$, together with the polar decomposition of the deformed plane waves and the Wigner theorem for $SU(2)$.
Making use of the master equations, we show that $SO(3)$-equivariant differential *-representations are labelled by three real functionals of the Laplacian on $\mathbb{R}^3$. Finally, we find that the deformed plane waves are characterised by two (representation dependent) functions of the momenta, themselves defined by two Volterra integrals. As a consequence, we show that the Wigner-Weyl quantisation map associated with the symmetric ordering, \textit{i.e.} such that $W(e^{ip\cdot x})=e^{ip\cdot \hat{x}}$, cannot be obtained within such an approach. More precisely, it has to be modulated by some weight, $W(e^{ip\cdot x})\to \omega(p)e^{ip\cdot \hat{x}}$. In Section \ref{sukont}, we select a specific representation among the family of $SO(3)$-equivariant differential *-representations, and show that the corresponding star product is equivalent to the Kontsevich product for the Poisson manifold dual to $\mathfrak{su}(2)$, namely closed for the trace functional defined by the usual Lebesgue integral $\int d^3x$. We use this product in Chap. \ref{sap-ncftsu2} for studying noncommutative field theory on $\mathbb{R}^3_\theta$, a deformation of $\mathbb{R}^3$ with $\mathfrak{su}(2)$ noncommutativity.\smallskip The second part of the dissertation is devoted to the study of the quantum properties of various models of noncommutative field theory built from the star products constructed in Part \ref{ch-ncst}. In Chapter \ref{ch-ncft}, we give the first comprehensive derivation of the one-loop order corrections to both the 2-point and 4-point functions for various models of $\kappa$-Poincar\'{e} invariant $\mathbb{C}$-valued scalar field theory with quartic interactions. In Section \ref{sec-action}, we discuss and analyse the properties that a physically reasonable action functional, aiming to describe the dynamics of scalar fields on a $\kappa$-Minkowski background, should satisfy.
First of all, owing to the natural action of the $\kappa$-deformed Poincar\'{e} (Hopf) algebra on the $\kappa$-Minkowski space -- the $\kappa$-Poincar\'{e} algebra playing the role of the algebra of symmetries of the quantum space -- together with the important role played by the Poincar\'{e} algebra in ordinary quantum field theory, it is physically relevant to require the $\kappa$-Poincar\'{e} invariance of any physically reasonable action functional. This is supported by the fact that both the $\kappa$-Minkowski space and the $\kappa$-Poincar\'{e} algebra tend, respectively, to the ordinary Minkowski spacetime and Poincar\'{e} algebra in the commutative (low energy) limit, $\kappa\to\infty$. It is known that the Lebesgue integral is $\kappa$-Poincar\'{e} invariant. The latter, however, is not cyclic with respect to the star product constructed in Chap. \ref{sec-Minkowski}. We emphasise that the Lebesgue integral defines a twisted trace instead. But whenever there is a twisted trace, there is a related KMS condition. We show that the positive linear functional given by $\zeta(f):=\int d^4x f(x)$ actually defines a KMS weight on the C*-algebra of fields modelling $\kappa$-Minkowski, which is equivalent to having a KMS condition. The related modular group and Tomita modular operator are characterised. To summarise, enforcing the $\kappa$-Poincar\'{e} invariance of the action functional trades the cyclicity of the Lebesgue integral for a KMS condition. This new interpretation we give to the loss of cyclicity sheds new light on the possible role played by $\kappa$-Minkowski, $\kappa$-Poincar\'{e}, and related noncommutative field theories in the description of physics at the Planck scale. After recalling the initial motives for introducing the KMS condition in quantum statistical mechanics, we discuss the possible applications and implications of this KMS condition in the context of quantum gravity and Planckian physics.
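Schematically, the twisted trace property referred to above takes the form
\begin{equation}
\int d^4x\,(f\star g)(x)=\int d^4x\,\big((\sigma\triangleright g)\star f\big)(x),
\end{equation}
where $\sigma$ is a nontrivial automorphism of the algebra, the modular twist, whose explicit form depends on the conventions for the star product and which reduces to the identity in the commutative limit $\kappa\to\infty$, thereby restoring the usual cyclicity.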
Next, decomposing the action functional into a kinetic term and an interaction term, we discuss their admissible expressions. In particular, we choose the kinetic operator to be related to the square of some Dirac operator. We restrict our attention to polynomial interactions which tend to the usual $\vert\phi\vert^4$ model in the commutative limit. We find that there are essentially four inequivalent interactions.\smallskip In Section \ref{sec-2point}, we compute the one-loop 2-point functions for two kinetic operators, one given by the first Casimir of the $\kappa$-Poincar\'{e} algebra, the other given by the square of an $\mathcal{U}_\kappa(\text{iso}(4))$-equivariant Dirac operator. (A third case, for which the kinetic operator is given by the square of a modular Dirac operator, is also briefly discussed.) We show that, thanks to the relatively simple (integral) expression of the star product, we can identify the noncommutative field theories with ordinary, albeit nonlocal, field theories. This enables us to use standard techniques from path integral quantisation and perturbation theory. The related material is recalled in Appendix \ref{sap-perturbation} for completeness. We find that one model has milder UV divergences than its commutative counterpart. The other model is found to diverge slightly worse. In both cases, whenever the interaction considered is nonorientable, we find UV/IR mixing. In Section \ref{sec-4point}, we restrict our attention to the model with equivariant kinetic operator and orientable interactions. We compute the corresponding 4-point function at one-loop order and show that all the contributions are UV finite, with no IR singularity. One-loop renormalisation is briefly discussed.\smallskip In Chapter \ref{sap-ncftsu2}, we study the quantum behaviour of both real and complex scalar field theories (both massive and massless), with quartic interactions, built on $\mathbb{R}^3_\theta$.
Using the Kontsevich product derived in $\S$\ref{sec-kont}, we exhibit two types of contributions to the one-loop 2-point function. In all cases, we find that the contributions are UV and IR finite, the deformation parameter $\theta$ playing the role of a natural UV and IR cutoff for the models. No UV/IR mixing is found within these models.\smallskip We finally summarise and comment on our results in the last part of the dissertation. \part{Noncommutative spacetimes and star products.}\label{ch-ncst} In the spirit of the Gelfand-Naimark theorem, it is common to define quantum spaces as noncommutative, associative, C*-algebras of functions. One natural way to construct such an algebra amounts to deforming, in some \textit{smooth way}, (the algebraic structures of) the commutative algebra of $\mathbb{C}$-valued smooth functions associated with the classical spacetime,\footnote{Here, by ``classical spacetime" we mean a smooth manifold $\mathcal{M}$.} say $\mathcal{A}:=\big(\mathfrak{F}(\mathcal{M}),\cdot\big)$, into a noncommutative algebra of $\mathbb{C}$-valued smooth functions, $\mathcal{A}_\theta:=\big(\mathfrak{F}(\mathcal{M}),\star_\theta\big)$.\footnote{We insist on the fact that, for a real deformation parameter $\theta>0$, only the algebraic structure of $\mathcal{A}$ is deformed, \textit{i.e.} $\cdot\mapsto\star_\theta$, while the structure of linear space is not. In other words, the fields remain classical (in both the sense of $\hbar$ and $\theta$); only the way they compose is modified. Moreover, note that the space of functions, $\mathfrak{F}(\mathcal{M})$, we start from may differ from one case to the other, depending on both the nature of $\mathcal{M}$ and the nature of the deformation, \textit{i.e.} which type of noncommutative algebra of coordinate operators we consider.
In practice, a good starting point is to consider the space of Schwartz functions, then to enlarge this space by successive completions; see below.} Formally, the associative (noncommutative) star product resulting from the deformation of $\mathcal{A}$ may be defined in such a way that it differs from the usual pointwise product by terms of order, at least, $\theta$; namely \begin{equation}\label{star-expansion} (f\star_\theta g)(x)=(f\cdot g)(x)+\mathcal{O}(\theta), \end{equation} with $(f\cdot g)(x)=f(x)g(x)$, for any $f,g\in\mathfrak{F}(\mathcal{M})$. Furthermore, the involutive structure, as well as the C*-norm, on $\mathcal{A}$ is expected to be deformed in accordance with the star product, so that the whole structure of C*-algebra is transferred to $\mathcal{A}_\theta$. In any case, the noncommutative algebra $\mathcal{A}_\theta$ is required to reduce to the commutative algebra $\mathcal{A}$ we start from when taking the commutative limit, $\theta\to0$. In particular, this implies $(f\star_\theta g)(x)\to f(x)g(x)$ when $\theta\to0$, a condition which is formally satisfied when considering star products of the form of \eqref{star-expansion}.\smallskip The latter requirement simply reflects the desire to recover a known physical theory in some limit of the noncommutative model, for example when the typical length scale of the physical system under consideration becomes large compared with $\theta$. In a quantum gravity prospect -- identifying the deformation parameter $\theta$ with the Planck length $\ell_p:=\sqrt{\hbar G/c^3}$ -- this requirement would be to recover either a known quantum field theory in the limit $G\to0$ or Einstein's theory of gravitation in the limit $\hbar\to0$, both limits corresponding to $\ell_p\to0$. From a phenomenological point of view, this approach enables us to compensate for the lack of experimental/observational data to guide the construction of new physical models.
Moreover, the semicommutative limit, namely (small) departures from the commutative theory, can easily be studied (in a controlled manner) upon expanding the star product in powers of the deformation parameter. \smallskip Of course, the way to deform $\mathcal{A}$ is not unique, and to one commutative algebra may correspond several, possibly inequivalent, noncommutative algebras $\mathcal{A}_\theta$, each algebra being characterised by a specific star product. The characterisation and classification of such deformation procedures have been the subject of many mathematical studies and led to a huge amount of literature. Once more, the initial motivation finds its origin in quantum mechanics, more precisely in the quantisation of classical systems, and dates back to the early works of H. Weyl \cite{Weyl:1927}, E. Wigner \cite{Wigner:1932}, and J. von Neumann \cite{vNeumann:1931}, which ultimately led to the celebrated Groenewold-Moyal star product \cite{Groenewold:1946,Moyal:1949}. It is instructive to recall the main steps leading to the Groenewold-Moyal product, as we are going to use a similar approach to construct star products associated with $\kappa$-Minkowski.\medskip Recall that one important feature of this scheme is the notion of ``twisted convolution" of two functions on (the 2-dimensional) phase space, denoted by $f\hat{\circ}g$, whose explicit expression was first given in \cite{vNeumann:1931}; for a more recent treatment see, e.g., \cite{Hennings:2010}.\\ This product is defined by \begin{subequations}\label{Weyl-Heisenberg} \begin{equation} W(f\hat{\circ}g):=W(f)W(g),\ \forall f,g\in L^1(\mathbb{R}^2), \end{equation} where the Weyl operator is given by $W(f):=\int d\xi_1d\xi_2 f(\xi_1,\xi_2) e^{i(\xi_1\hat{p}+\xi_2\hat{q})}$, in which the unitary operator in the integrand can be viewed as (a unitary representation of) an element of the unimodular simply connected Heisenberg group $\mathcal{G}$ obtained by exponentiating the Heisenberg algebra,
\textit{i.e.} a central extension of the Abelian Lie algebra, $[\hat{q},\hat{p}]=0$, say $[\hat{q},\hat{p}]=i\hbar$, where $\hbar$ is a central element.\\ From this follows the expression of the Groenewold-Moyal product defining the deformation of $\mathbb{R}^2$. It reads \begin{equation} f\star_\hbar g:=\mathcal{F}^{-1}\big(\mathcal{F}f\hat{\circ}\mathcal{F}g\big), \end{equation} where the invertible Wigner-Weyl quantisation map is given by \begin{equation} Q:L^2(\mathbb{R}^2)\to\mathcal{L}\big(L^2(\mathbb{R})\big),\ Q(f):=W(\mathcal{F}f), \end{equation} \end{subequations} and $\mathcal{F}$ is the ordinary Fourier transform on $\mathbb{R}^2$.\footnote{Note that, within the above derivation, we have implicitly identified functions on the Heisenberg group with functions on $\mathbb{R}^2$. This simply reflects the fact that we have parametrised the group elements of the Heisenberg group in terms of real parameters.}\\ In fact, this picture fits perfectly within the theory of harmonic analysis on locally compact groups, the relevant group being, in the present case, provided by the Heisenberg group. Moreover, the Wigner-Weyl quantisation map provides an isometric isomorphism between the group C*-algebra of the Heisenberg group and the C*-algebra of Hilbert-Schmidt operators on $L^2(\mathbb{R})$, thus ensuring the equivalence between the probabilistic (star product) interpretation of quantum mechanics and the (more conventional) Hilbert space approach. Basics on harmonic analysis are recalled in Appendix \ref{sap-harmonic} for completeness.\medskip The generalisation of the above quantisation scheme to spaces of functions equipped with a more general Poisson structure than that of the classical symplectic phase space led to the theory of deformation quantisation; for a review see, e.g., \cite{Sternheimer:1998,Dito:2002}, and references therein.
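As a quick cross-check of the above derivation, the Groenewold-Moyal product can be evaluated symbolically in its equivalent bidifferential form (a standard rewriting of the twisted-convolution definition). The snippet below is a minimal sketch, assuming sympy is available and truncating the series at a finite order; it verifies that $q\star_\hbar p-p\star_\hbar q=i\hbar$ and that the leading correction to the pointwise product is the Poisson bracket term.

```python
import sympy as sp

q, p, hbar = sp.symbols('q p hbar')

def d(f, var, n):
    # n-th derivative, with the n = 0 case handled explicitly
    return f if n == 0 else sp.diff(f, var, n)

def moyal(f, g, order=2):
    """Groenewold-Moyal product on 2d phase space, truncated at `order`:
    f * g = sum_n (i hbar / 2)^n / n! * C_n(f, g), with
    C_n(f, g) = sum_k (-1)^k binom(n, k) (d_q^{n-k} d_p^k f)(d_p^{n-k} d_q^k g)."""
    total = 0
    for n in range(order + 1):
        Cn = sum((-1)**k * sp.binomial(n, k)
                 * d(d(f, q, n - k), p, k) * d(d(g, p, n - k), q, k)
                 for k in range(n + 1))
        total += (sp.I*hbar/2)**n / sp.factorial(n) * Cn
    return sp.expand(total)

# q * p = qp + i hbar / 2, so the star commutator reproduces [q, p] = i hbar
assert sp.simplify(moyal(q, p) - (q*p + sp.I*hbar/2)) == 0
assert sp.simplify(moyal(q, p) - moyal(p, q) - sp.I*hbar) == 0

# leading correction to the pointwise product is (i hbar / 2){f, g}
f0, g0 = q**2*p, p**2
pb = sp.diff(f0, q)*sp.diff(g0, p) - sp.diff(f0, p)*sp.diff(g0, q)
assert sp.simplify(moyal(f0, g0, order=1) - f0*g0 - sp.I*hbar/2*pb) == 0
```

The asserts only involve terms the truncation captures exactly; for polynomial inputs of low degree the series terminates.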
Note that, in this case, the star product \eqref{star-expansion} takes the form \begin{equation}\label{star-poisson} (f\star_\theta g)(x)=f(x)g(x)+\frac{i\theta}{2}\lbrace f,g\rbrace+\mathcal{O}(\theta^2), \end{equation} where $\lbrace f,g\rbrace$ is the Poisson bracket characterising the Poisson manifold to be deformed. It is worth noting that, as pointed out in \cite{Rieffel:1990a}, to an arbitrary star product of the form \eqref{star-poisson} there does not always correspond a closed (integral) expression, but rather a formal (not necessarily convergent) power series in the deformation parameter $\theta$. Therefore, there may exist deformations for which $f\star_\theta g\notin\mathfrak{F}(\mathcal{M})$, \textit{i.e.} the algebra $\mathcal{A}_\theta$ is not closed under star multiplication, which is troublesome for defining a reasonable notion of quantum spacetime. In addition, star products which admit an integral expression are easier to manipulate in (noncommutative) quantum field theory than formal power series. In view of eq. \eqref{Weyl-Heisenberg}, see also Appendix \ref{sap-harmonic}, this problem can in principle be bypassed, at least in the case where the algebra $\mathfrak{g}$ of coordinate operators is of Lie algebra type, by adapting (up to technicalities) the above Weyl quantisation scheme to the harmonic analysis on the Lie group $\mathcal{G}$, related to $\mathfrak{g}$, in order to derive a suitable expression for the star product. A suitable definition for the C*-algebra of fields modelling the quantum spacetime, which is characterised by $\mathfrak{g}$, is then provided by the group C*-algebra $C^*(\mathcal{G})$ of $\mathcal{G}$. Early considerations supporting this interpretation can be found in \cite{Rieffel:1990b}, see also \cite{Majid:1988,Majid:1990,Majid:1994}.\bigskip In Chapter \ref{sec-Minkowski}, we extend the above Weyl quantisation scheme to the construction of a star product associated with $\kappa$-Minkowski.
This is achieved by replacing the Heisenberg group by the nonunimodular $ax+b$ group. Note that this approach has already been used in \cite{Durhuus:2013} to construct a star product for $\kappa$-Minkowski. This product will be used in Part \ref{part-ncft} to construct action functionals aiming to describe the dynamics of various families of 4-dimensional $\kappa$-Poincar\'{e} invariant interacting scalar field theories. We will see that the relatively simple expression of this star product leads to very tractable expressions for the propagator and interaction potential characterising the NCFT. This will enable us to compute radiative corrections without resorting to expansions in $\kappa$. In another (independent) chapter, Chap. \ref{sec-products}, we present the construction of various families of star products in the case where the Lie algebra of coordinate operators is semisimple. We show that the construction can conveniently be carried out by representing the abstract coordinate operators as differential operators acting on some Hilbert space. Families of quantisation maps, giving rise to the star products, are characterised by their action on the plane waves together with the polar decomposition of operators. We finally emphasise that inequivalent families of star products can be obtained from considerations stemming from group cohomology. Although differential representations have been widely used in the literature (much more than the group algebraic approach we adopt in Chap. \ref{sec-Minkowski}), it is worth mentioning that many of these studies do not address the preservation of the involutive structures while constructing star products. In contrast, in our derivation particular attention is paid to the preservation of the various involutive structures underlying the quantum spaces, throughout the various steps leading to the expressions of the star products.
It seems to us that this requirement is of primary importance, for both mathematical and physical purposes, in order to prevent any inconsistency in the derivation of the star products and the algebras of fields. This will be discussed in more detail in Chap. \ref{sec-products}. \chapter{\texorpdfstring{$\kappa$}{k}-Minkowski as a group algebra.}\label{sec-Minkowski} In this chapter, we focus on the specific example of $\kappa$-Minkowski, whose algebra $\mathfrak{m}_\kappa$ of coordinate operators is given, in the $(d+1)$-dimensional case, by \begin{equation}\label{kappa-Lie} [\hat{x}_0,\hat{x}_i]=\frac{i}{\kappa}\hat{x}_i,\ [\hat{x}_i,\hat{x}_j]=0,\ i,j=1,\dots,d, \end{equation} where $\kappa>0$ is a real, dimensionful, parameter labelling the deformation, and the coordinates $\hat{x}_\mu$ are assumed to be selfadjoint operators acting on some Hilbert space $\mathcal{H}$.\smallskip The real interest of $\kappa$-Minkowski, among all of the other (currently known) possible choices of quantum spacetimes, lies in its relation to the $\kappa$-Poincar\'{e} algebra \begin{equation}\label{bicrossproduct} \mathfrak{P}_\kappa=\mathcal{U}\big(\mathfrak{so}(1,3)\big)\triangleright\!\!\!\blacktriangleleft \mathfrak{T}_\kappa. \end{equation} The $\kappa$-Poincar\'{e} algebra was originally obtained in \cite{Lukierski:1991,Lukierski:1992} by In\"{o}n\"{u}-Wigner contraction \cite{Inonu:1953} of $SO_q(3,2)$.\footnote{With $SO_q(3,2)$ a $q$-deformation of $SO(3,2)$, the isometry group of the classical anti-de Sitter space.} Another presentation of $\mathfrak{P}_\kappa$ amounts to exhibiting its bicrossproduct structure \cite{Majid:1994}, eq. \eqref{bicrossproduct}, which merely reflects the fact that deforming the action of the Lorentz sector on the translations induces a backreaction of the latter on the former.
Within this picture, $\kappa$-Minkowski naturally arises as the Hopf dual of the translation Hopf subalgebra $\mathfrak{T}_\kappa\subseteq\mathfrak{P}_\kappa$ which acts covariantly on it, and, in fact, because of the bicrossproduct structure of $\mathfrak{P}_\kappa$, the whole $\kappa$-Poincar\'{e} algebra can be shown to act covariantly on $\mathfrak{m}_\kappa$. For more details on this derivation, as well as algebraic properties of $\mathfrak{P}_\kappa$, see Appendix \ref{sap-poincare}, and references therein. Therefore, the $\kappa$-Poincar\'{e} algebra can be interpreted as describing the (quantum) symmetries of $\kappa$-Minkowski. This implies that not only is the Minkowski spacetime transposed at the noncommutative level, but so is the whole (dual) picture $\lbrace\text{spacetime}+\text{symmetries}\rbrace$, with the nice property that $\kappa$-Minkowski (resp. $\kappa$-Poincar\'{e}) tends to the ordinary Minkowski spacetime (resp. Poincar\'{e} algebra) in the commutative (low energy) limit $\kappa\to\infty$.\smallskip In addition to this, it is worth mentioning that the deformation parameter $\kappa$ has mass dimension. Hence, this deformation, eq. \eqref{kappa-Lie} and \eqref{bicrossproduct}, provides a natural energy scale at which its effects, \textit{i.e.} departures from the commutative regime, should become significant. This scale has been interpreted in the literature as being the Planck scale or, at least, some intermediate quantum gravity scale. One of the first (successful) attempts to (provide a framework to) consistently incorporate such a dimensionful (observer-independent) parameter into a physical theory, in a way compatible with the relativity principle, is provided by the theory of doubly special relativity \cite{Amelino:2000,Amelino:2001}.
Of course, due to relativistic effects such as length contraction, it is not possible to (naively) incorporate a fundamental scale of mass or length dimension within the framework of Einstein's theory of special relativity. One way to proceed, then, consists in deforming the relativity symmetry group of special relativity, hence the ordinary dispersion relation, in such a way that both the speed of light $c$ and the new (quantum gravity) scale are now observer independent. It turns out \cite{Kowalski:2001,Bruno:2001} that this can be achieved in the framework of the $\kappa$-Poincar\'{e} Hopf algebra, with the benefit that the ordinary symmetry group, dispersion relations, and relativity principle can be recovered smoothly by simply taking the limit $\kappa\to\infty$, thanks to the quantum group structure. Finally, note that, geometrically, the $\kappa$-deformation of the Poincar\'{e} algebra implies that the corresponding (group) manifold of energy-momentum is curved, the curvature being set by $\kappa$, hence reflecting the non-Abelian structure of the group whose vector fields are (locally) provided by the $\kappa$-Minkowski algebra of coordinate operators \cite{Majid:1994,Kowalski:2013}. For a recent comprehensive review on the development of $\kappa$-Poincar\'{e}, $\kappa$-Minkowski and their possible physical implications see, e.g., \cite{Borowiec:2010,Kowalski:2017}, and references therein. We will come back to these points later on when constructing $\kappa$-Poincar\'{e} invariant action functionals aiming to describe the dynamics of interacting scalar fields on a $\kappa$-Minkowski background; see Chap. \ref{ch-ncft}. For the moment, let us restrict our attention to eq.
\eqref{kappa-Lie} and construct the corresponding C*-algebra of fields, endowed with star product, modelling the $\kappa$-Minkowski space.\smallskip As already mentioned, a convenient presentation of the $\kappa$-Minkowski space is obtained by exploiting standard objects from the framework of harmonic analysis and group C*-algebras. This approach, which has been used in \cite{Durhuus:2013} to derive a star product for the 2-dimensional $\kappa$-Minkowski space, is the one we mainly follow in this chapter. It can be mentioned that, besides the use of this approach to derive the celebrated Groenewold-Moyal product, eq. \eqref{Weyl-Heisenberg}, associated with the Moyal plane, this framework has also been used in recent studies on $\mathbb{R}^3_\theta$, a deformation of $\mathbb{R}^3$, related to the convolution algebra of the compact Lie group $SU(2)$; see, e.g., \cite{JCW:2015,JCW:2016} and $\S$\ref{sec-su2} for more details. The material presented in this chapter is published in \cite{moi:2018a}. Additional details on harmonic analysis on locally compact groups can be found in Appendix \ref{sap-harmonic}. \section{Characterisation of the convolution algebra.}\label{sec-convolution} Let us define, for any $n\in\mathbb{N}$, the Lie subalgebra $\mathfrak{m}_\kappa^{(n+1)}:=[\mathfrak{m}_\kappa^{(n)},\mathfrak{m}_\kappa^{(n)}]\subseteq\mathfrak{m}_\kappa$, with initial condition $\mathfrak{m}_\kappa^{(0)}:=\mathfrak{m}_\kappa$. From eq. \eqref{kappa-Lie}, we easily infer that the derived Lie algebra $\mathfrak{m}_\kappa^{(1)}=\text{span}(\hat{x}_i)_{i=1,...,d}$, as a vector space, is a nilpotent ideal of $\mathfrak{m}_\kappa$, while $\mathfrak{m}_\kappa^{(n)}=0$, $\forall n\geq 2$.
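Before proceeding, the solvability claims above can be cross-checked numerically from the structure constants, using Cartan's criterion (a Lie algebra is solvable iff its Killing form vanishes on the derived algebra). The sketch below assumes numpy and works with the real form $[e_0,e_1]=e_1$ of the $d=1$ algebra ($\kappa$ set to $1$ and the factor $i$ absorbed into the generators, an illustrative choice of basis):

```python
import numpy as np

# Real structure constants C[a, b, c] for [e_a, e_b] = sum_c C[a, b, c] e_c,
# with [e_0, e_1] = e_1 (d = 1, kappa = 1, factor i absorbed into generators).
C = np.zeros((2, 2, 2))
C[0, 1, 1], C[1, 0, 1] = 1.0, -1.0

# Derived algebra m^(1): span of all brackets; here it is spanned by e_1 alone
derived = C.reshape(-1, 2)                       # each row = one bracket result
assert np.linalg.matrix_rank(derived) == 1       # dim m^(1) = 1
assert not derived[:, 0].any()                   # no e_0 component

# m^(2) = [m^(1), m^(1)] = 0 since [e_1, e_1] = 0
assert not C[1, 1].any()

# Cartan's solvability criterion: the Killing form K_ab = tr(ad_a ad_b)
# vanishes on the derived algebra
ad = np.array([C[a].T for a in range(2)])        # (ad_a)_{cb} = C[a, b, c]
K = np.einsum('aij,bji->ab', ad, ad)
assert np.allclose(K[:, 1], 0.0)                 # K(., m^(1)) = 0
```

The same routine applied to a semisimple algebra (nondegenerate Killing form) would fail the last check, which is the content of the comparison made later with the semisimple case.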
Hence, the derived series \begin{subequations}\label{kappa-alg} \begin{equation} \mathfrak{m}_\kappa\supseteq\mathfrak{m}_\kappa^{(1)}\supseteq0, \end{equation} forms an elementary sequence, indicating that the Lie algebra $\mathfrak{m}_\kappa$ is (split) solvable.\\ In addition, $\mathfrak{m}_\kappa$ decomposes into a semidirect product of Lie algebras, \textit{i.e.} \begin{equation}\label{alg-decomposition} \mathfrak{m}_\kappa=\mathbb{R}\hat{x}_0\oplus_{d\tau}\mathfrak{m}_\kappa^{(1)}, \end{equation} \end{subequations} where the Lie algebra homomorphism $d\tau:\mathbb{R}\hat{x}_0\to\text{Der}\ \mathfrak{m}_\kappa^{(1)}$ characterises the (adjoint) action of $\hat{x}_0$ on the ``spacelike directions" $\hat{x}_i$.\footnote{See, e.g., propositions 1.22-3 in ref. \cite{Knapp:2002}.}\smallskip It turns out that the group $\mathcal{G}_{d+1}$, obtained by exponentiating $\mathfrak{m}_\kappa$, is a solvable, simply connected, Lie group, diffeomorphic to a Euclidean space.\footnote{See, e.g., theorem 5.9 in ref. \cite{Onishchik:1993}.} Hence, $\mathcal{G}_{d+1}$ is amenable.\footnote{See, e.g., theorem 2.3.3 in ref. \cite{Greenleaf:1969}.} Moreover, because of the very structure of $\mathfrak{m}_\kappa$, eq. \eqref{kappa-alg}, $\mathcal{G}_{d+1}$ satisfies the sequence \begin{subequations} \begin{equation} \mathcal{G}_{d+1}\supseteq\mathcal{G}_{d+1}^{(1)}\supseteq\lbrace1\rbrace, \end{equation} entailing the semidirect product structure for $\mathcal{G}_{d+1}$, \textit{i.e.} \begin{equation}\label{grp-decomposition} \mathcal{G}_{d+1}=\mathbb{R}\ltimes_\tau\mathcal{G}_{d+1}^{(1)}, \end{equation} \end{subequations} where $\mathcal{G}^{(1)}_{d+1}=\mathbb{R}^d$ is normal in $\mathcal{G}_{d+1}$.\footnote{See, e.g., corollary 1.126 in ref.
\cite{Knapp:2002}.} Recall that a group, say $A$, is said to act on another group, say $B$, by automorphisms if there exists a smooth map $\tau:A\times B\to B$ such that $g\mapsto\tau(g,\cdot)$ is a group homomorphism between $A$ and $\text{Aut}B$, the group of automorphisms of $B$. Hence, we call the semidirect product of $A$ and $B$, and we write $C:=A\ltimes_\tau B$, the Lie group, with Cartesian product topology, whose composition law and inverse are given, for any $a_i\in A$ and $b_i\in B$, by \begin{equation}\label{group-law2} (a_1,b_1)(a_2,b_2)=\big(a_1a_2,\tau(a_1,b_2)b_1\big),\ (a_3,b_3)^{-1}=\big(a_3^{-1},\tau(a_3^{-1},b_3^{-1})\big). \end{equation} If $C=BA$ with $A,B\subseteq C$ as subgroups, $B$ normal in $C$, and $A\cap B=\lbrace1\rbrace$, then $\tau$ is nothing but the adjoint action of $A$ on $B$, namely $\tau(a,b)=aba^{-1}$; conditions that are fulfilled by the group, eq. \eqref{grp-decomposition}, underlying the construction of $\kappa$-Minkowski.\smallskip Note that the decompositions \eqref{alg-decomposition} and \eqref{grp-decomposition} hold independently of the dimension $d$ of $\mathfrak{m}_\kappa^{(1)}$. Therefore, it is convenient to begin with the construction of the star product for $d=1$, then extend the construction to the desired dimension.\smallskip Let us set $d=1$. In two dimensions, there exists, up to isomorphism, only one non-Abelian Lie algebra, of which eq. \eqref{kappa-Lie} with $d=1$ provides a specific choice of basis. The corresponding group is given by $\mathcal{G}_{2}=\mathbb{R}\ltimes_\tau\mathbb{R}$, isomorphic to the $ax+b$ group, widely studied in the mathematical literature. For basic mathematical details see, e.g., \cite{Khalil:1974,Williams:2007}, and references therein.
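As an elementary illustration, the semidirect-product law \eqref{group-law2} and the adjoint action $\tau$ can be checked against matrix multiplication in the standard upper-triangular realisation of the $ax+b$ group; the numerical values below are arbitrary, and the sketch assumes numpy:

```python
import numpy as np

def M(a, b):
    """ax+b group element (a, b), a > 0, as an upper-triangular matrix."""
    return np.array([[a, b], [0.0, 1.0]])

a1, b1, a2, b2 = 2.0, -0.5, 0.4, 3.0   # arbitrary group parameters

# Matrix multiplication reproduces the semidirect-product law
# (a1, b1)(a2, b2) = (a1 a2, tau(a1, b2) + b1), with tau(a, b) = a b and
# the group operation in B = (R, +) written additively.
assert np.allclose(M(a1, b1) @ M(a2, b2), M(a1*a2, a1*b2 + b1))

# tau is the adjoint action of A on the normal subgroup B:
# (a, 0)(1, b)(a^{-1}, 0) = (1, a b)
assert np.allclose(M(a1, 0.0) @ M(1.0, b2) @ M(1.0/a1, 0.0), M(1.0, a1*b2))

# inverse: (a, b)^{-1} = (a^{-1}, tau(a^{-1}, -b)) = (a^{-1}, -b/a)
assert np.allclose(np.linalg.inv(M(a1, b1)), M(1.0/a1, -b1/a1))
```

The same upper-triangular realisation reappears below, with $a$ and $b$ expressed in terms of the momenta $p^\mu$.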
In view of the decomposition \eqref{grp-decomposition}, this group can be conveniently characterised by \begin{equation}\label{group-elem} W(p^0,p^1):=e^{ip^1\hat{x}_1}e^{ip^0\hat{x}_0}, \end{equation} where the parameters $p^0,p^1\in\mathbb{R}$ will be interpreted as (Fourier) momenta in due time. Note that the above group elements, eq. \eqref{group-elem}, can be related to the more conventional exponential form of the Lie algebra \eqref{kappa-Lie} through a mere redefinition of $p^1$. To see this, use the simplified Baker-Campbell-Hausdorff formula $e^Xe^Y=e^{\lambda(u)X+Y}$, valid whenever $[X,Y]=uX$, to obtain \begin{equation}\label{group-elem2} W(p^0,p^1)=e^{i(p^0\hat{x}_0+\lambda(p^0/\kappa)p^1\hat{x}_1)}, \end{equation} where $\lambda(u)=ue^u(e^u-1)^{-1}$; see, e.g., \cite{Vanbrunt:2015}. For the ensuing computations, however, the group elements are easier to manipulate in the parametrisation \eqref{group-elem} than in \eqref{group-elem2}.\smallskip Now, using the identity $e^Xe^Y=e^{Y}e^{e^uX}$, which holds true whenever $[X,Y]=uX$, we easily obtain the composition law for $\mathcal{G}_2$, which is given by \begin{subequations} \begin{equation}\label{group-law} W(p^0,p^1)W(q^0,q^1)=W(p^0+q^0,p^1+e^{-p^0/\kappa}q^1). \end{equation} It follows that the unit element and inverse are given by \begin{equation}\label{group-unit-inverse} 1=W(0,0),\ W^{-1}(p^0,p^1)=W(-p^0,-e^{p^0/\kappa}p^1). \end{equation} \end{subequations} Equation \eqref{group-law} provides us with what will be identified with the energy-momentum composition law when we consider NCFT on a $\kappa$-Minkowski background in Chap. \ref{ch-ncft}. This essentially reflects the nontrivial coproduct structure of the $\kappa$-Poincar\'{e} algebra; see eq. \eqref{hopf1}.
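The composition law \eqref{group-law}, the inverse \eqref{group-unit-inverse}, and the BCH reparametrisation \eqref{group-elem2} can all be verified numerically in a faithful (non-unitary, hence purely illustrative) $2\times2$ matrix representation of the $d=1$ algebra; $\kappa$ and the momenta below are arbitrary choices, and numpy is assumed:

```python
import numpy as np

kappa = 2.0   # arbitrary numerical value of the deformation parameter

# Faithful (non-unitary) 2x2 matrices satisfying [X0, X1] = (i/kappa) X1,
# mimicking the d = 1 kappa-Minkowski coordinates (an illustrative choice):
X0 = (1j/kappa) * np.diag([1.0, 0.0])
X1 = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)

def expm(M, terms=40):
    """Matrix exponential by truncated power series (adequate for these
    small, well-conditioned matrices)."""
    out, term = np.eye(2, dtype=complex), np.eye(2, dtype=complex)
    for n in range(1, terms):
        term = term @ M / n
        out += term
    return out

def W(p0, p1):
    """Group element W(p0, p1) = exp(i p1 x1) exp(i p0 x0)."""
    return expm(1j*p1*X1) @ expm(1j*p0*X0)

lam = lambda u: u*np.exp(u)/(np.exp(u) - 1.0)

p0, p1, q0, q1 = 0.7, -1.3, 0.4, 2.1

# BCH reparametrisation: W(p0, p1) = exp(i(p0 x0 + lambda(p0/kappa) p1 x1))
assert np.allclose(W(p0, p1), expm(1j*(p0*X0 + lam(p0/kappa)*p1*X1)))

# composition law and inverse
assert np.allclose(W(p0, p1) @ W(q0, q1), W(p0 + q0, p1 + np.exp(-p0/kappa)*q1))
assert np.allclose(W(p0, p1) @ W(-p0, -np.exp(p0/kappa)*p1), np.eye(2))
```

Note that the chosen matrices are not selfadjoint, so this check probes only the group-theoretic identities, not the *-structure.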
The more usual composition law for the $ax+b$ group is easily recovered by representing the group elements \eqref{group-elem} as upper triangular matrices \begin{equation}\label{grp-matrix} W(p^0,p^1)\mapsto (a,b):=\begin{pmatrix}a&b\\0&1\end{pmatrix}, \end{equation} such that $a:=e^{-p^0/\kappa}$ and $b:=p^1$. Then, from the decomposition \begin{equation} \begin{pmatrix}a&b\\0&1\end{pmatrix}=\begin{pmatrix}1&b\\0&1\end{pmatrix}\begin{pmatrix}a&0\\0&1\end{pmatrix}, \end{equation} we easily infer the semidirect product structure of the $ax+b$ group, namely $\mathcal{G}_2\cong BA$, with $A:=\big\lbrace (a,0),\ a>0\big\rbrace$ and $B:=\big\lbrace (1,b),\ b\in\mathbb{R}\big\rbrace$, together with group law \eqref{group-law2} and group action $\tau:A\times B\to B$ given by $\big((a,0),(1,b)\big)\mapsto (a,0)(1,b)(a^{-1},0)$. In view of \eqref{group-law}, this latter (adjoint) action is reflected at the level of the parameters $p^\mu$ in \begin{equation}\label{group-action} \tau:\mathbb{R}\times\mathcal{G}^{(1)}_{2}\to\mathcal{G}^{(1)}_{2},\ \tau(p^0,p^1)=e^{-p^0/\kappa}p^1, \end{equation} with $\mathcal{G}^{(1)}_{2}=\mathbb{R}$.\smallskip According to the results of Appendix \ref{sap-harmonic}, the convolution algebra of $\mathcal{G}_2$ is characterised, for any functions $f,g\in L^1(\mathbb{R}^2)$, by the product \begin{subequations}\label{set-convolution-2d} \begin{equation} (f\hat{\circ}g)(p_1^0,p_1^1)=\int_{\mathbb{R}^2} f\big(p_1^0-p^0_2,p^1_1-p^1_2e^{-(p_1^0-p^0_2)/\kappa}\big)g\big(p^0_2,p^1_2\big)\ dp_2^0dp_2^1, \end{equation} together with the involution\footnote{Note that the definition of the involution, eq. \eqref{involution2-ap}, has to be slightly adjusted when working with right invariant Haar measure, see eq. \eqref{broll1}. 
In this case we have $f^*(x):=\Delta_{\mathcal{G}}(x)\overline{f^\flat(x)}$.} \begin{equation}\label{invol-Fourier} f^*(p^0,p^1)=e^{p^0/\kappa}\bar{f}(-p^0,-e^{p^0/\kappa}p^1), \end{equation} and the modular function \begin{equation}\label{function-modulaire-2d} \Delta_{\mathcal{G}_2}(p^0,p^1)=e^{p^0/\kappa}, \end{equation} \end{subequations} where we have identified functions on $\mathcal{G}_2$ with functions on $\mathbb{R}^2$ in view of \eqref{grp-decomposition} and the parametrisation \eqref{group-elem}. In particular, the right invariant Haar measure $d\nu(p^0,p^1)$ coincides with the usual Lebesgue measure on $\mathbb{R}^2$, while the left invariant Haar measure is given by $d\mu(p^0,p^1)=e^{p^0/\kappa}dp^0dp^1$. From now on, unless otherwise stated, we shall work with the right invariant measure. \section{Weyl quantisation map and related star product.} Let $\pi_U:\mathcal{G}_2\to\mathcal{B}(\mathcal{H}_\pi)$ be a (strongly continuous) unitary representation of $\mathcal{G}_2$ on some Hilbert space $\mathcal{H}_\pi$, and $\mathcal{B}(\mathcal{H}_\pi)$ be the C*-algebra of bounded operators on $\mathcal{H}_\pi$.\footnote{We can think, for example, of the right regular representation $\pi_U:\mathcal{G}_2\to\mathcal{B}\big(L^2(\mathbb{R}^2)\big)$ defined by $(\pi_U(s)f)(t)=f(ts)$.} Accordingly, any representation of the convolution algebra defined, for any $f\in L^1(\mathbb{R}^2)$, by \begin{equation}\label{rep-minkowski} \pi:L^1(\mathbb{R}^2)\to \mathcal{B}(\mathcal{H}_\pi),\ \pi(f):=\int_{\mathbb{R}^2} f(p^0,p^1)\pi_U(p^0,p^1) dp^0dp^1, \end{equation} is a nondegenerate bounded *-representation.\footnote{In the case where $\pi_U$ is the right regular representation, we have $\pi(f)g=f\hat{\circ}g$.} Indeed, let $\langle\cdot,\cdot\rangle$ denote the Hilbert product on $\mathcal{H}_\pi$, such that \begin{equation} \langle u,\pi(f)v \rangle=\int_{\mathcal{G}_2} f(s) \langle u,\pi_U(s)v \rangle d\nu(s),\ u,v\in\mathcal{H}_\pi, f\in L^1(\mathcal{G}_2).
\end{equation} On the one hand, we have \begin{subequations}\label{broll1} \begin{equation} \langle u,\pi(f^*)v \rangle=\int_{\mathcal{G}_2} \Delta_{\mathcal{G}_2}(s)\bar{f}(s^{-1}) \langle u,\pi_U(s)v \rangle d\nu(s), \end{equation} while, on the other hand, $\langle u,\pi(f)^\dag v \rangle:=\langle \pi(f)u,v \rangle=\overline{\langle v,\pi(f)u \rangle}$ such that \begin{equation} \langle u,\pi(f)^\dag v \rangle=\int_{\mathcal{G}_2}\bar{f}(s) \langle \pi_U(s) u,v \rangle d\nu(s)=\int_{\mathcal{G}_2}\bar{f}(s) \langle u,\pi_U(s^{-1})v \rangle d\nu(s), \end{equation} \end{subequations} from which we conclude that $\pi(f)^\dag=\pi(f^*)$.\smallskip We now turn to the construction of the quantisation map from which a star product associated with $\kappa$-Minkowski can formally be defined by $Q(f\star g):=Q(f)Q(g)$.\\ In the following, we shall denote by \begin{equation} \mathcal{F}f(p^0,p^1):=\int_{\mathbb{R}^2}f(x_0,x_1)e^{-i(p^0x_0+p^1x_1)}dx_0dx_1, \end{equation} the Fourier transform of $f\in L^1(\mathbb{R}^2)$, and by $S_c(\mathbb{R}^2)$ the space of Schwartz functions on $\mathbb{R}^2=\mathbb{R}\times\mathbb{R}$ with compact support in the first (\textit{i.e.} timelike) variable.\\ Following the lines of the Weyl quantisation scheme, eq. \eqref{Weyl-Heisenberg}, we define $Q$ by \begin{equation}\label{Weyl-Minkowski} Q(f):=\pi(\mathcal{F}f),\ \forall f\in L^1(\mathbb{R}^2)\cap \mathcal{F}^{-1}\big(L^1(\mathbb{R}^2)\big), \end{equation} where $\pi$ is a representation given by eq. \eqref{rep-minkowski}; see, e.g., refs. \cite{Durhuus:2013,moi:2018a}. Notice that, in view of eq. \eqref{group-elem}, the functions appearing in eq. \eqref{set-convolution-2d}, and \eqref{rep-minkowski}, are interpreted as Fourier transforms of functions of spacetime coordinates. This interpretation is supported by the fact that, taking the formal commutative limit, $\kappa\to\infty$, in eq.
\eqref{group-elem}, and \eqref{set-convolution-2d}, we recover all of the usual notions of plane waves, convolution and involution. Hence the occurrence of $\mathcal{F}f$ in the right-hand-side of eq. \eqref{Weyl-Minkowski}.\\ Requiring $Q$ to define a morphism of *-algebras, we can write \begin{subequations} \begin{align} &Q(f\star g)=Q(f)Q(g)=\pi(\mathcal{F}f)\pi(\mathcal{F}g)=\pi\big(\mathcal{F}f\hat{\circ}\mathcal{F}g\big),\label{q-star}\\ &Q(f^\ddagger)=\pi(\mathcal{F}f^*). \end{align} \end{subequations} Identifying eq. \eqref{q-star} with $Q(f\star g)=\pi\big(\mathcal{F}(f\star g)\big)$, we finally obtain the expressions for the star product and the involution \begin{subequations}\label{star-general} \begin{align} &f\star g=\mathcal{F}^{-1}\big(\mathcal{F}f\hat{\circ}\mathcal{F}g\big),\\ &f^\ddagger=\mathcal{F}^{-1}\big(\mathcal{F}f^*\big),\label{invol-general} \end{align} \end{subequations} where $\mathcal{F}^{-1}$ is the inverse Fourier transform on $\mathbb{R}^2$. Observe that both the star product and the involution are representation independent despite the fact that $Q$ depends on $\pi$.\smallskip Now, upon combining the explicit expressions for the convolution and involution, eq. \eqref{set-convolution-2d}, with eq. \eqref{star-general}, we find that, for any $f,g\in\mathcal{F}^{-1}\big(S_c(\mathbb{R}^2)\big)$, \begin{subequations}\label{involstar-2d} \begin{align} &(f\star g)(x_0,x_1)=\int \frac{dp^0}{2\pi} dy_0\ e^{-iy_0p^0}f(x_0+y_0,x_1)g(x_0,e^{-p^0/\kappa}x_1),\label{star-2d}\\ &f^\ddagger(x_0,x_1)= \int \frac{dp^0}{2\pi} dy_0\ e^{-iy_0p^0}{\bar{f}}(x_0+y_0,e^{-p^0/\kappa}x_1),\label{invol-2d} \end{align} \end{subequations} with $f\star g, f^\ddagger\in\mathcal{F}^{-1}(\mathcal{S}_c)$, which coincide with the star product and involution of \cite{Durhuus:2013}.\smallskip Before proceeding to the extension of the above results to $d=3$, it is worth mentioning that it has been shown in ref. \cite{Durhuus:2013} that eq.
\eqref{involstar-2d} extends to (a subalgebra of) the multiplier algebra $\mathcal{N}_c(\mathbb{R}^2)$ of $\mathcal{F}^{-1}\big(S_c(\mathbb{R}^2)\big)$ involving the smooth functions on $\mathbb{R}^2$, with compact support in the first variable, which satisfy standard polynomial bounds, together with all their derivatives.\footnote{More precisely, any $f\in \mathcal{N}_c(\mathbb{R}^2)$ satisfies polynomial bounds of the form \begin{equation} \vert\partial_0^n\partial_1^m f(p^0,p^1)\vert\leq c_{n,m}(1+\vert p^0\vert)^{N_n}(1+\vert p^1\vert)^{M_{n,m}},\ n,m\in\mathbb{N}, \end{equation} where $N_n$, $M_{n,m}$, and $c_{n,m}$ are some constants, with $c_{n,m}>0$; for more details see ref. \cite{Durhuus:2013}. } In particular, $x_0$, $x_1$ and the unit function belong to $\mathcal{N}_c(\mathbb{R}^2)$. Therefore, from \eqref{star-2d} and \eqref{invol-2d}, we easily obtain \begin{equation} x_0\star x_1=x_0x_1+\frac{i}{\kappa}x_1,\ x_1\star x_0=x_0x_1,\ x_\mu^\ddagger=x_\mu,\ \mu=0,1, \end{equation} consistent with the defining relation \eqref{kappa-Lie} for $d=1$.\smallskip In view of eqs. \eqref{alg-decomposition} and \eqref{grp-decomposition}, the extension of the above construction to the 4-dimensional ($d=3$) case is straightforward and merely amounts to substituting $\vec{p}$ for $p^1$ in the various above expressions. Explicitly, we have $\mathcal{G}_4=\mathbb{R}\ltimes_\tau\mathbb{R}^3$ with $\tau(p^0,\vec{p})=e^{-p^0/\kappa}\vec{p}$ and \begin{subequations}\label{group4d-parametrization} \begin{equation} W(p^0,\vec{p}):=e^{i\vec{p}\cdot \hat{\vec{x}}}e^{ip^0\hat{x}_0}. \end{equation} The group law \eqref{group-law} becomes \begin{equation} W(p^0,\vec{p})W(q^0,\vec{q})=W(p^0+q^0,\vec{p}+e^{-p^0/\kappa}\vec{q}), \end{equation} while unit and inverse \eqref{group-unit-inverse} are now given by \begin{equation} 1=W(0,\vec{0}),\ W^{-1}(p^0,\vec{p})=W(-p^0,-e^{p^0/\kappa}\vec{p}).
\end{equation} \end{subequations} Then, the construction leading to \eqref{involstar-2d} can be thoroughly reproduced, replacing $\mathbb{R}^2$ by $\mathbb{R}^4$ and \eqref{function-modulaire-2d} by \begin{equation}\label{function-modulaire-4d} \Delta_{\mathcal{G}_4}(p^0,\vec{p})=e^{3p^0/\kappa}. \end{equation} Setting for short $x:=(x_0,\vec{x})$, we obtain, for any functions $f,g\in\mathcal{F}^{-1}\big(\mathcal{S}_c(\mathbb{R}^4)\big)$, \begin{subequations}\label{starinvol4d} \begin{align} &(f\star g)(x)=\int \frac{dp^0}{2\pi} dy_0\ e^{-iy_0p^0}f(x_0+y_0,\vec{x})g(x_0,e^{-p^0/\kappa}\vec{x}),\label{star-4d}\\ &f^\ddagger(x)= \int \frac{dp^0}{2\pi} dy_0\ e^{-iy_0p^0}{\bar{f}}(x_0+y_0,e^{-p^0/\kappa}\vec{x}),\label{invol-4d} \end{align} \end{subequations} with $f\star g,\, f^\ddagger\in\mathcal{F}^{-1}\big(\mathcal{S}_c(\mathbb{R}^4)\big)$. Moreover, we can show that \begin{equation}\label{antivolution} (f\star g)^\ddagger=g^\ddagger\star f^\ddagger. \end{equation} For later convenience, note that, thanks to the Paley-Wiener theorem, functions in $\mathcal{F}^{-1}\big(\mathcal{S}_c(\mathbb{R}^4)\big)$ are by construction analytic in the (timelike) variable $x_0$, being Fourier transforms of functions with compact support in the (timelike) variable $p^0$.\smallskip Finally, it is instructive to get more insight into $C^*(\mathcal{G}_4)$, the group C*-algebra which models the (4-dimensional) $\kappa$-Minkowski space. In view of the discussion given at the end of Appendix \ref{sap-harmonic}, the completion of $L^1(\mathcal{G}_4)$ with respect to the norm related to the right regular representation on $L^2(\mathcal{G}_4)$ yields the reduced group C*-algebra $C_r^*(\mathcal{G}_4)$.
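Since a short computation with \eqref{star-4d} shows that plane waves multiply according to the group law (the $y_0$ and $p^0$ integrations produce a delta function enforcing the momentum composition), the antihomomorphism property \eqref{antivolution} can be illustrated purely at the level of momenta as $(p\oplus q)^{-1}=q^{-1}\oplus p^{-1}$. A small numerical sketch, with numpy and arbitrary values assumed:

```python
import numpy as np

kappa = 1.5   # arbitrary value of the deformation parameter

def compose(p, q):
    """Momentum composition p + q read off from W(p)W(q) = W(p + q):
    (p0 + q0, p_vec + exp(-p0/kappa) q_vec)."""
    return np.concatenate(([p[0] + q[0]], p[1:] + np.exp(-p[0]/kappa)*q[1:]))

def inverse(p):
    """Momentum inverse read off from W^{-1}(p0, p_vec)."""
    return np.concatenate(([-p[0]], -np.exp(p[0]/kappa)*p[1:]))

p = np.array([0.3, 1.0, -2.0, 0.5])
q = np.array([-0.8, 0.2, 0.0, 1.1])
r = np.array([1.1, -0.4, 0.7, 0.0])

# group structure: associativity and inverse
assert np.allclose(compose(compose(p, q), r), compose(p, compose(q, r)))
assert np.allclose(compose(p, inverse(p)), np.zeros(4))

# momentum-space counterpart of (f * g)^dd = g^dd * f^dd on plane waves
assert np.allclose(inverse(compose(p, q)), compose(inverse(q), inverse(p)))
```

The non-Abelian character of the composition (the exponential twist on the spatial momenta) is what makes the order reversal in \eqref{antivolution} nontrivial.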
Furthermore, since $\mathcal{G}_4$ is amenable, as any solvable Lie group, we have $C_r^*(\mathcal{G}_4)\cong C^*(\mathcal{G}_4)$, involving as dense *-subalgebra the set of Schwartz functions with compact support equipped with the above convolution product.\smallskip For the ensuing construction of (4-dimensional) $\kappa$-Poincar\'{e} invariant NCFT, in Chap. \ref{ch-ncft}, it will be sufficient to consider the algebra $\mathcal{F}^{-1}\big(\mathcal{S}_c(\mathbb{R}^4)\big)$, which we denote by $\mathcal{M}_\kappa$.\smallskip For now, we turn to the construction of various families of star products associated with quantum spaces whose algebras of coordinate operators are given by semisimple Lie algebras. \chapter{Other examples of quantum spaces.}\label{sec-products} In the previous chapter, we have derived a star product for $\kappa$-Minkowski by adapting the Weyl quantisation scheme to the $ax+b$ group. This approach -- which easily extends to the construction of star products associated with any other quantum spacetime of Lie algebra type noncommutativity -- appears to be well suited to the study of NCFT, as it provides us with (almost) all the needed ingredients for constructing an action functional aiming to describe the dynamics thereof. More precisely, let $\mathfrak{g}$ denote the noncommutative Lie algebra of coordinate operators characterising the quantum space and $\mathcal{G}$ the corresponding Lie group. We have seen that the C*-algebra of fields modelling the quantum space can conveniently be identified with the group C*-algebra $C^*(\mathcal{G})$ of $\mathcal{G}$. Natural candidates for a star product, an involution, and a measure of integration are then canonically provided by the harmonic analysis on $\mathcal{G}$; see Chap.
\ref{sec-Minkowski}.\smallskip Another approach for constructing star products, based on the use of differential representations, has been (much more) extensively studied in the physics literature, often disregarding the above group algebraic structure underlying the quantum space.\\ Although the star product is still defined through the introduction of an invertible linear morphism of algebras $Q$, \textit{i.e.} $f\star g:=Q^{-1}\big(Q(f)Q(g)\big)$, it is now assumed to associate with any function $f\in\mathfrak{F}(\mathbb{R}^n)$ a differential operator $Q(f)$ such that $(f\star g)(x):=Q(f)\triangleright g(x)$, where $\triangleright$ denotes the left action of operators. In addition, we require that $f\star1=1\star f=f$.\\ In practice, quantisation maps fulfilling the above requirements can be determined by first representing the abstract involutive algebra $\mathfrak{g}$ of coordinate operators as an involutive algebra $\mathfrak{h}$ of differential operators acting on some Hilbert space $\mathcal{H}$, then formally constructing the differential operators $Q(f)$ as functions of the generators of $\mathfrak{h}$. Thus, $Q$ is characterised by a choice of differential representation for $\mathfrak{g}$, this choice being itself strongly constrained by the requirement $Q(f)\triangleright 1=f$. In fact, because of the linearity of $Q$, it is in principle sufficient to determine the action of the quantisation map on the plane waves in order to fully fix the expression of the star product.\smallskip The purpose of this section is to present a method for deriving expressions for the deformed plane waves, $E_k(\hat{x}):=Q(e^{ik\cdot x})$, appearing in the expression of the star product. Our derivation broadly follows the lines sketched above, insisting however on the preservation of the various involutive structures.
Moreover, we informally make the link between the objects appearing within this approach and those of harmonic analysis, hence keeping track of the group algebraic structures underlying the quantum space. This leads us to the conclusion that families of inequivalent star products can be obtained, in principle, from group cohomological considerations. The material presented in this section is published in \cite{moi:2017a}. \section{Deformed plane waves, star products, and group cohomology.}\label{sec-dpw} Let $\mathfrak{g}$ be a semisimple Lie algebra with Lie bracket \begin{equation}\label{Lie-bracket} [\hat{x}_\mu,\hat{x}_\nu]=i\theta C_{\mu \nu}^{\hspace{11pt} \rho}\hat{x}_\rho, \end{equation} where $\theta>0$ is a dimensionful real parameter, $[\theta]=L$, and $C_{\mu\nu\rho}\in\mathbb{R}$ are the structure constants determining $\mathfrak{g}$.\footnote{Note that the algebra $\mathfrak{m}_\kappa$, eq. \eqref{kappa-Lie}, characterising $\kappa$-Minkowski is not semisimple. Indeed, as a solvable Lie algebra, $\text{rad}(\mathfrak{m}_\kappa)\neq0$. Recall that the radical $\text{rad}(\mathfrak{g})$ of a Lie algebra $\mathfrak{g}$ is the unique solvable ideal of $\mathfrak{g}$ containing every solvable ideal of $\mathfrak{g}$, while a (nonzero) Lie algebra $\mathfrak{g}$ is said to be semisimple if $\text{rad}(\mathfrak{g})=0$.} It follows that the (connected component of the) corresponding Lie group $\mathcal{G}$ is semisimple, hence unimodular.\footnote{See, e.g., proposition 2.29 of ref. \cite{Folland:1995}.} In view of eq. \eqref{involution2-ap}, this means that a natural involution to be involved in the construction of an action functional for NCFT on such a quantum space (\textit{i.e.} of semisimple Lie algebra type) is provided by the complex conjugation $f\mapsto \bar{f}$.
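The semisimplicity requirement recalled in the footnote can be tested concretely through Cartan's criterion: a Lie algebra is semisimple if, and only if, its Killing form $K_{ab}=\mathrm{tr}(\mathrm{ad}_a\,\mathrm{ad}_b)$ is nondegenerate. A minimal numerical sketch, comparing the $\mathfrak{su}(2)$-type structure constants $\varepsilon_{abc}$ with the two-dimensional solvable algebra $[X_0,X_1]=X_1$ of $ax+b$ type (the code and normalisations are illustrative assumptions, not part of the construction above):

```python
import numpy as np

def killing_form(C):
    """Killing form K_ab = tr(ad_a ad_b) from structure constants C[a, b, c] = C_ab^c."""
    n = C.shape[0]
    ad = np.array([C[a].T for a in range(n)])  # (ad_a)_{c b} = C_ab^c
    return np.array([[np.trace(ad[a] @ ad[b]) for b in range(n)] for a in range(n)])

# su(2)-type case: real structure constants C_ab^c = eps_abc
eps = np.zeros((3, 3, 3))
for (i, j, k), s in {(0, 1, 2): 1, (1, 2, 0): 1, (2, 0, 1): 1,
                     (0, 2, 1): -1, (2, 1, 0): -1, (1, 0, 2): -1}.items():
    eps[i, j, k] = s
K_su2 = killing_form(eps)          # equals -2 * identity: nondegenerate -> semisimple

# 2d solvable algebra [X_0, X_1] = X_1 (ax+b type, as for kappa-Minkowski in d = 2)
C_solv = np.zeros((2, 2, 2))
C_solv[0, 1, 1] = 1.0
C_solv[1, 0, 1] = -1.0
K_solv = killing_form(C_solv)      # equals [[1, 0], [0, 0]]: degenerate -> not semisimple

print(np.linalg.det(K_su2), np.linalg.det(K_solv))
```

The nondegenerate (resp. degenerate) Killing form signals the semisimple (resp. solvable) case, in line with the distinction drawn here between the present chapter and the $\kappa$-Minkowski one.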
In particular, the reality of the action functional can be conveniently controlled by introducing the Hilbert product on $\mathcal{H}=L^2(\mathbb{R}^n)$, $\langle f,g \rangle_2:=\int f(x) \bar{g}(x)d^n x$. \subsection{Differential *-representations.} Explicit expressions for the deformed plane waves can be obtained by representing the abstract coordinate operators as differential operators acting on some suitable Hilbert space $\mathcal{H}$. Let \begin{equation}\label{diff-rep} \pi:\mathfrak{g}\to\mathfrak{h}:=\pi(\mathfrak{g}),\ \ \pi(ab)=\pi(a)\pi(b),\ \forall a,b\in\mathfrak{g}, \end{equation} be such a differential representation. It follows that the Lie algebraic structure \eqref{Lie-bracket} of $\mathfrak{g}$ is automatically transferred to $\mathfrak{h}$, namely $[\pi(\hat{x}_\mu),\pi(\hat{x}_\nu)]=i\theta C_{\mu \nu}^{\hspace{11pt} \rho}\pi(\hat{x}_\rho)$. From now on, we shall write $\hat{x}_\mu$ for designating both the abstract operators and their representations $\pi(\hat{x}_\mu)$. A requirement for this representation, as natural as it is important, is that it define a morphism of *-algebras, \textit{i.e.} that it satisfy $\pi(a^{*})=\pi(a)^\dag$, $\forall a\in\mathfrak{g}$, where $^*$ ($^\dag$) denotes an involution of $\mathfrak{g}$ ($\mathfrak{h}$). This requirement ensures that the involutive structures of the algebra of fields modelling the noncommutative space are preserved under representation and, in particular, that selfadjoint (resp. unitary) operators made of $\hat{x}_\mu$ are represented as selfadjoint (resp. unitary) differential operators. Besides this purely algebraic motivation, the requirement makes it possible to implement reasonable reality conditions when constructing an action functional for NCFT.
\\ It is worth mentioning that this condition is not always satisfied in the literature.\smallskip It is further convenient to consider representations of the form \begin{equation}\label{diff-rep2} \hat{x}_\mu=x^\nu\varphi_{\nu\mu}(\partial)+\chi_\mu(\partial), \end{equation} where the functionals $\varphi_{\nu\mu}$ and $\chi_\mu$ are viewed as formal expansions in the deformation parameter $\theta$.\footnote{In the Poincar\'{e}-Birkhoff-Witt basis, the functionals $\varphi_{\nu\mu}$ and $\chi_\mu$ formally take the form \begin{equation} \varphi_{\nu\mu}(\partial)=\sum (\tilde{\varphi}_{k_1\cdots k_n})_{\nu\mu}\theta^{\sum_i k_i}\partial_1^{k_1}\cdots\partial_n^{k_n},\ \ \chi_\mu(\partial)=\sum (\tilde{\chi}_{k_1\cdots k_n})_{\mu}\theta^{\sum_i k_i}\partial_1^{k_1}\cdots\partial_n^{k_n}.\nonumber \end{equation} } Requiring $\hat{x}_\mu\to x_\mu$ when $\theta\to0$ implies \begin{equation} \varphi_{\nu\mu}(\partial)=\delta_{\nu\mu}+\mathcal{O}(\theta),\ \chi_\mu(\partial)=\mathcal{O}(\theta). \end{equation} Combining \eqref{diff-rep2} with \eqref{Lie-bracket}, and using the algebraic relation $[x_\lambda,h(x,\partial)]=-\partial h/\partial(\partial^\lambda)$, valid for any functional $h$ depending on $x$ and $\partial$, we find that the Lie algebraic structure of $\mathfrak{g}$ is preserved under representation provided the functionals $\varphi_{\nu\mu}$ and $\chi_\mu$ satisfy \begin{subequations}\label{diff-rep3} \begin{align} &\frac{\partial \varphi_{\lambda\mu}}{\partial (\partial_\rho)} \varphi_{\rho\nu} - \frac{\partial \varphi_{\lambda\nu}}{\partial (\partial_\rho)} \varphi_{\rho\mu} = i\theta C_{\mu \nu}^{\hspace{11pt} \rho} \varphi_{\lambda\rho},\label{diff-rep3a}\\ &\frac{\partial\chi_\mu}{\partial(\partial_\rho)}\varphi_{\rho\nu}-\frac{\partial\chi_\nu}{\partial(\partial_\rho)}\varphi_{\rho\mu}= i\theta C_{\mu \nu}^{\hspace{11pt} \rho}\chi_\rho.
\end{align} \end{subequations} The above set of differential equations admits infinitely many solutions for the representation $\pi$ defined by eq. \eqref{diff-rep2}. Further constraints on $\varphi_{\nu\mu}$ and $\chi_\mu$ stem from the requirement that the differential representation $\pi$ be a *-representation. This is achieved by requiring that $\langle f,\hat{x}_\mu g \rangle=\langle \hat{x}_\mu f,g \rangle$ for any $f,g\in\mathcal{H}$, namely that $\hat{x}_\mu$ be selfadjoint, $\hat{x}_\mu^\dag=\hat{x}_\mu$. Upon using $\partial^\dag_\mu=-\partial_\mu$ and $h^\dag(\partial)=\bar{h}(-\partial)$, we can compute \begin{align}\label{diff-rep-calcul} \langle f,\hat{x}^\dag_\mu g \rangle &= \langle\big(x^\alpha\varphi_{\alpha\mu}(\partial)+\chi_\mu(\partial)\big)f,g \rangle=\langle f,{\bar{\varphi}}_{\alpha\mu}(-\partial)x^\alpha g\rangle+\langle f,{\bar{\chi}}_\mu(-\partial)g\rangle\\ &=\langle f,x^\alpha{\bar{\varphi}}_{\alpha\mu}(-\partial) g\rangle+\langle f,\frac{\partial{\bar{\varphi}}_{\alpha\mu}(-\partial)}{\partial(\partial_\alpha)}g\rangle+\langle f,{\bar{\chi}}_\mu(-\partial)g\rangle.\nonumber \end{align} Comparing the last line of \eqref{diff-rep-calcul} with $\langle f,\hat{x}_\mu g \rangle=\langle f, \big(x^\alpha\varphi_{\alpha\mu}(\partial)+\chi_\mu(\partial)\big)g \rangle$, we conclude that the representation \eqref{diff-rep2} is selfadjoint if, and only if, \begin{subequations}\label{diff-rep4} \begin{align} &{\bar{\varphi}}_{\alpha\mu}(-\partial)= \varphi_{\alpha\mu}(\partial),\label{diff-rep4a}\\ &\frac{\partial{\bar{\varphi}}_{\alpha\mu}(-\partial)}{\partial(\partial_\alpha)}= \chi_\mu(\partial)-{\bar{\chi}}_\mu(-\partial).
\end{align} \end{subequations} From \eqref{diff-rep4a}, we readily infer that $\varphi_{\alpha\mu}$ must have the following decomposition \begin{equation}\label{diff-phi-decomposition} \varphi_{\alpha \mu} (\partial) = \Phi_{\alpha \mu}(\partial) + i \Psi_{\alpha \mu}(\partial) \ , \end{equation} with the real functional $\Phi_{\alpha \mu}$ (resp. $\Psi_{\alpha \mu}$) of even (resp. odd) degree in $\partial$.\smallskip The four master equations, eq. \eqref{diff-rep3} and \eqref{diff-rep4}, provide us with a set of differential equations from which admissible expressions for the differential *-representation $\pi$ can be derived. Obviously, the solution is not unique and depends on the Lie algebra of coordinate operators we start from. This will be exemplified in $\S$\ref{sec-su2} when considering the case of the $\mathfrak{su}(2)$ Lie algebra. For the moment, we turn to the construction of the quantisation maps from which expressions for the star products can be derived. \subsection{Quantisation maps and related star products.}\label{sec-qmapsu2} As already mentioned at the beginning of this chapter, the quantisation map $Q$, a morphism of *-algebras, can be characterised by its action on plane waves, \textit{i.e.} $Q(e^{ik\cdot x})$. Consequently, the corresponding star product, defined for any $f,g\in\mathfrak{F}(\mathbb{R}^n)$ by \begin{equation}\label{sustar} (f\star g)(x)=\int \frac{d^nk_1}{(2\pi)^n}\frac{d^nk_2}{(2\pi)^n}\ \tilde{f}(k_1)\tilde{g}(k_2)Q^{-1}\big(Q(e^{ik_1\cdot x})Q(e^{ik_2\cdot x})\big), \end{equation} where $\tilde{f}(k):=\int d^nxf(x)e^{-ik\cdot x}$, is fully characterised once the deformed plane waves \begin{equation}\label{qpw} E_k(\hat{x}):=Q(e^{ik\cdot x}), \end{equation} together with the inverse map $Q^{-1}$, are determined.\smallskip First, we observe that, for a given differential *-representation, eq.
\eqref{diff-rep}, the determination of $Q^{-1}$ can be conveniently carried out by enforcing the condition \begin{equation}\label{q-act} Q(f)\triangleright 1=f, \end{equation} since, for $Q$ invertible, we have by definition $Q^{-1}\big(Q(f)\big)=f=Q(f)\triangleright 1$.\smallskip The next step consists in observing that the expression \begin{equation}\label{q-fourier} Q(f)(\hat{x})=\int \frac{d^nk}{(2\pi)^n}\ \tilde{f}(k) E_k(\hat{x}) \end{equation} looks like eq. \eqref{induced-rep} defining the induced representation of the convolution algebra of $\mathcal{G}$ we used in Chap. \ref{sec-Minkowski}. It follows that the deformed plane waves can informally be interpreted as (stemming from) a representation of $\mathcal{G}$, namely \begin{equation}\label{themap} E:\mathcal{G}\to\mathcal{L}(\mathcal{H}),\ E:g\mapsto E(g):=E_k(\hat{x}), \end{equation} with $E(g^\dag)=E(g)^\dag$ and where $\mathcal{L}(\mathcal{H})$ is the set of linear operators acting on $\mathcal{H}$.\\ In eq. \eqref{diff-rep}, the group representation used was unitary. However, the operators $E(g)$ decompose in general into an angular part and a radial part, both stemming from the polar decomposition of operators. Explicitly, we can write \begin{equation}\label{polar} E(g)=U(g)|E(g)|, \end{equation} where $U(g)$ is a unitary operator and $|E(g)|:=\sqrt{E(g)^\dag E(g)} \neq 0$.\\ Note that the previous situation, considered in Chap. \ref{sec-Minkowski}, is recovered if $|E(g)|=1$. Indeed, in this case, $E(g)=U(g)$ is unitary.\\ In view of Stone's theorem, it is legitimate to parametrise the unitary operator as \begin{equation}\label{unitpw} U(g)=e^{i\xi_g^\mu\hat{x}_\mu}, \end{equation} where $\xi_g^\mu\in\mathbb{R}$ can be regarded as a kind of generalised Fourier parameter for the deformed plane waves appearing in eq. \eqref{q-fourier}.
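The polar decomposition \eqref{polar} underlying the split of $E(g)$ into angular and radial parts can be realised concretely in finite dimensions. A toy sketch, assuming a hypothetical $2\times2$ unitary standing in for $U(g)$ and a positive constant $\omega=1.7$ standing in for $|E(g)|$ (none of which is tied to a specific group $\mathcal{G}$):

```python
import numpy as np
from scipy.linalg import expm, polar

# a 2x2 unitary U = exp(i xi . sigma), standing in for U(g) = exp(i xi_g . xhat)
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])
xi = np.array([0.2, -0.4, 0.1])
U = expm(1j * np.einsum('m,mab->ab', xi, sigma))

omega = 1.7                      # hypothetical positive radial part
E = omega * U                    # E(g) = U(g) |E(g)| with |E(g)| = omega * identity

u, h = polar(E)                  # right polar decomposition: E = u @ h, h >= 0
print(np.allclose(u, U), np.allclose(h, omega * np.eye(2)))
```

For an invertible operator the decomposition is unique, so the angular and radial parts are recovered exactly.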
Hence, the $U(g)$ define grouplike elements which compose as \begin{equation} e^{i\xi_{g_1}\cdot\hat{x}}e^{i\xi_{g_2}\cdot\hat{x}}=e^{iB(\xi_{g_1},\xi_{g_2})\cdot\hat{x}}, \end{equation} where $B(\xi_{g_1},\xi_{g_2})\in\mathbb{R}^n$ stems from the Baker-Campbell-Hausdorff (BCH) formula for the Lie algebra $\mathfrak{g}=\text{Lie}(\mathcal{G})$ and satisfies \begin{equation}\label{Bproperties} B(\xi_{g_1},\xi_{g_2})=-B(-\xi_{g_2},-\xi_{g_1}) , \ B(\xi_g,0)=\xi_g . \end{equation} As we are going to see, $B(\xi_{g_1},\xi_{g_2})$ can actually be interpreted as the deformed composition law between momenta associated with the deformed plane waves. This explains why the use of the star product formalism is not always the most suitable for studying NCFT, even though integral expressions for the star product can formally be derived. Indeed, the BCH formula generally provides us with an expression for $B(\xi_{g_1},\xi_{g_2})$ which is an infinite sum of elements of $\mathfrak{g}$, and exact expressions for $B(\xi_{g_1},\xi_{g_2})$ do not always exist. This is for instance the case for the $\mathfrak{su}(2)$ Lie algebra. For an illustration of such difficulties, see, e.g., Chap.
\ref{sap-ncftsu2}.\smallskip The mapping $U:\mathcal{G}\to\mathcal{L}(\mathcal{H})$ clearly defines a unitary representation of $\mathcal{G}$, \textit{i.e.} \begin{equation}\label{projectif-su2} U(g_1)U(g_2)=U(g_1g_2), \end{equation} which holds up to unitary equivalence, as a mere application of the Wigner theorem.\\ On the other hand, we demand that $E:\mathcal{G}\to\mathcal{L}(\mathcal{H})$ be a projective representation of $\mathcal{G}$, namely \begin{equation}\label{projective} E(g_1)E(g_2)=\Omega(g_1,g_2)E(g_1g_2), \end{equation} with $\Omega:\mathcal{G}\times\mathcal{G}\to{\mathbb{C}\hspace{-2pt}\setminus\hspace{-2pt}\lbrace0\rbrace}$ obeying a 2-cocycle condition, \textit{i.e.} \begin{equation} \Omega(g_1,g_2)\Omega(g_1g_2,g_3)=\Omega(g_1,g_2g_3) \Omega(g_2,g_3), \end{equation} such that the associativity of the related star product is ensured.\smallskip Recall that projectively inequivalent representations of a group $\mathcal{G}$ are classified by $H^2\big(\mathcal{G},{\mathbb{C}\hspace{-2pt}\setminus\hspace{-2pt}\lbrace0\rbrace}\big)$, the second cohomology group of $\mathcal{G}$ with values in ${\mathbb{C}\hspace{-2pt}\setminus\hspace{-2pt}\lbrace0\rbrace}$; see, e.g., \cite{Gannon:2006}. Since eq. \eqref{projective} actually reflects the composition of the plane waves which eventually determine the expression of the star product, we infer from the above remark that inequivalent families of star products are classified by group cohomology.
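The deformed composition law $B$ of eq. \eqref{Bproperties} and the failure of momenta to add linearly can be probed numerically. A sketch in the spin-$\frac{1}{2}$ matrix realisation $\hat{x}_\mu=\theta\sigma_\mu$, which satisfies an $\mathfrak{su}(2)$-type bracket $[\hat{x}_\mu,\hat{x}_\nu]=2i\theta\varepsilon_{\mu\nu}{}^{\rho}\hat{x}_\rho$ (the matrices and parameter values are illustration-only assumptions; only the unitary part $U(g)$ is represented here):

```python
import numpy as np
from scipy.linalg import expm, logm

theta = 0.3
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])          # Pauli matrices
xhat = theta * sigma                            # [xhat_mu, xhat_nu] = 2 i theta eps xhat

def U(xi):
    """Unitary part U(g) = exp(i xi . xhat) of the deformed plane wave."""
    return expm(1j * np.einsum('m,mab->ab', xi, xhat))

def B(xi1, xi2):
    """Deformed momentum composition: exp(i xi1.xhat) exp(i xi2.xhat) = exp(i B.xhat)."""
    M = logm(U(xi1) @ U(xi2))                   # principal log, fine for small xi
    # extract components via tr(sigma_mu sigma_nu) = 2 delta_{mu nu}
    return np.real(np.einsum('mab,ba->m', sigma, M) / (2j * theta))

xi1 = np.array([0.10, 0.20, -0.05])
xi2 = np.array([0.15, -0.10, 0.20])

b12 = B(xi1, xi2)
print(b12, b12 - (xi1 + xi2))                   # composition is deformed: B != xi1 + xi2
```

The extracted $B(\xi_1,\xi_2)$ satisfies both properties of eq. \eqref{Bproperties} to machine precision, while differing from $\xi_1+\xi_2$ at order $\theta$, in accordance with the BCH expansion.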
\section{Focus on \texorpdfstring{$\mathfrak{su}(2)$}{su(2)} noncommutative space.}\label{sec-su2} We now apply the procedure developed in $\S$\ref{sec-dpw} to quantum spaces whose algebras of coordinate operators are characterised by the $\mathfrak{su}(2)$ Lie algebra,\footnote{Such quantum spaces are generally called $\mathbb{R}^3_\theta$, or $\mathbb{R}^3_\lambda$, in the literature and can be regarded as deformations of $\mathbb{R}^3$.} \textit{i.e.} satisfying \begin{equation}\label{su2per} [\hat{x}_\mu,\hat{x}_\nu]=i2\theta\varepsilon_{\mu \nu}^{\hspace{11pt} \rho}\hat{x}_\rho. \end{equation} The corresponding set of master equations is easily obtained upon substituting $C_{\mu\nu\rho}$ with $2\varepsilon_{\mu\nu\rho}$ in eq. \eqref{diff-rep3} and \eqref{diff-rep4}. Standard computations yield \begin{subequations}\label{master} \begin{align} &i 2\theta \varphi_{\alpha \rho}= \varepsilon_{\rho}^{\hspace{4pt} \mu \nu} \frac{\partial \varphi_{\alpha \mu}}{\partial (\partial_\beta)} \varphi_{\beta \nu},\ \ \varphi^\dagger_{\alpha\rho}= \varphi_{\alpha \rho},\label{master1}\\ &i 2\theta \chi_\rho= \varepsilon_{\rho}^{\hspace{4pt} \mu \nu} \frac{\partial \chi_\mu}{\partial (\partial_\alpha)} \varphi_{\alpha \nu},\ \ \frac{\partial \varphi^\dagger_{\alpha \rho}}{\partial (\partial_\alpha)}= \chi_\rho - \chi_\rho^\dagger,\label{master2} \end{align} \end{subequations} where we have used the algebraic relation $\delta_{\mu \gamma} \delta_\nu^{\hspace{4pt} \sigma} - \delta_\mu^{\hspace{4pt} \sigma} \delta_{\nu \gamma} = \varepsilon_{\mu \nu}^{\hspace{11pt} \rho} \varepsilon_{\rho \gamma}^{\hspace{11pt} \sigma}$. \subsection{\texorpdfstring{$SO(3)$}{SO(3)}-equivariant *-representations.}\label{repsu} More insight into the actual expressions of $\varphi_{\mu\nu}$ and $\chi_\mu$ can be obtained by observing that $\mathbb{R}^3_\theta\subseteq\mathcal{U}\big(\mathfrak{su}(2)\big)$ supports a natural action of $SU(2)/\mathbb{Z}_2\cong SO(3)$.
Therefore, it seems natural to require the differential representation to be compatible with this structure, namely $\pi$ to be $SO(3)$-equivariant. We look for such representations in the sequel.\\ Mere application of the Schur-Weyl decomposition theorem for $SO(3)$ yields \begin{equation}\label{phi-so3a} \varphi_{\alpha \mu}(\partial) = \alpha(\Delta) \delta_{\alpha \mu} + \beta(\Delta) \left( \frac{1}{3} \delta_{\alpha \mu} \Delta - \partial_\alpha \partial_\mu \right) + \gamma(\Delta) \varepsilon_{\alpha\mu}^{\hspace{11pt} \rho} \partial_\rho, \end{equation} where $\alpha$, $\beta$ and $\gamma$ are $SO(3)$-invariant functionals depending on the Laplacian $\Delta$, to be determined below.\footnote{See, for instance, H. Weyl, \textit{The Classical Groups}, Princeton University Press, 1946.} It will be further assumed that $\alpha$ and $\beta$ (resp. $\gamma$) are real (resp. purely imaginary) functionals so that \eqref{diff-phi-decomposition} is satisfied. This motivates the following parametrisation \begin{subequations}\label{rep-so3} \begin{equation}\label{phi-so3b} \varphi_{\alpha \mu} (\partial) = f(\Delta) \delta_{\alpha \mu} + g(\Delta) \partial_\alpha \partial_\mu + i h(\Delta) \varepsilon_{\alpha\mu}^{\hspace{11pt} \rho} \partial_\rho, \end{equation} where the real $SO(3)$-invariant functionals $f$, $g$, $h$ can easily be related to the old quantities $\alpha, \beta, \gamma$ by mere comparison of \eqref{phi-so3a} with \eqref{phi-so3b}. Similarly, \begin{equation} \chi_\mu(\partial) = \ell (\Delta) \partial_\mu, \end{equation} \end{subequations} where $\ell(\Delta)$ is a complex $SO(3)$-invariant functional to be determined. Hence, every $SO(3)$-equivariant differential *-representation $\pi$ is labelled by $f, g, h$ and $\ell$.\smallskip Combining eq. \eqref{rep-so3} with eq. \eqref{master}, we find that the master equations reduce to two systems of differential equations for $f$, $g$, $h$ and $\ell$.
On the one hand, the first system \begin{subequations}\label{rep-sys1} \begin{align} &(f+g \Delta)h' - (h - \theta)g= 0,\label{rep-sys1a}\\ &(f+g \Delta)h' \Delta + (h - \theta)f= 0,\label{rep-sys1b}\\ &2(f+g \Delta)f' + (h - 2\theta)h - gf= 0,\label{rep-sys1c} \end{align} \end{subequations} provides us with differential representations compatible with the $\mathfrak{su}(2)$ commutation relations. In particular, a linear combination of the first two equations in eq. \eqref{rep-sys1} yields \begin{equation}\label{rep-sys1d} (f+g\Delta)(h - \theta) = 0. \end{equation} On the other hand, the second system \begin{subequations}\label{rep-sys2} \begin{align} &h= \theta,\label{rep-sys2a} \\ &2(f+g\Delta)' + 2g= \ell + \ell^\dag,\label{rep-sys2b} \end{align} \end{subequations} selects, among the solutions of \eqref{rep-sys1}, those defining *-representations.\footnote{In eq. \eqref{rep-sys1} and \eqref{rep-sys2}, the prime $'$ denotes the derivative of the functions under consideration with respect to their arguments.}\smallskip Before giving the general expression for admissible $SO(3)$-equivariant differential *-representations, one comment is in order. As long as $\chi_\mu = 0$, the first equation in \eqref{master2} is trivially satisfied and gives no constraint on $f$, $g$, or $h$. In particular, \eqref{rep-sys2a} does not necessarily hold true. Assuming $h\neq\theta$, we find from \eqref{rep-sys1d} that $f+g\Delta=0$ and that there exist only two solutions compatible with the remaining equations.\footnote{Whenever $\chi_\mu=0$, eq.
\eqref{rep-sys1c} and \eqref{rep-sys2b} prevent $f+g\Delta=0$ and $h=\theta$ from being satisfied simultaneously.} Namely, either $\hat{x}_\mu = 0$ or $\hat{x}_\mu = i 2\theta x^\sigma \varepsilon_{\sigma \mu}^{\hspace{11pt} \rho}\partial_\rho$, solutions we shall disregard since in both cases $\hat{x}_\mu \triangleright 1 \neq x_\mu$ and $\hat{x}_\mu \triangleright f(x)\rightarrow 0$ when $\theta\rightarrow 0$ for any function $f\in\mathcal{H}$. Therefore, we now consider differential *-representations for which $h=\theta$ (with either $\chi_\mu\ne 0$ or $\chi_\mu=0$).\smallskip The family of $SO(3)$-equivariant differential *-representations is finally found to be given by \begin{subequations}\label{general_rep} \begin{equation} \hat{x}_\mu = x^\alpha \left[ f(\Delta) \delta_{\alpha \mu} + g(\Delta) \partial_\alpha \partial_\mu + i\theta \varepsilon_{\alpha \mu}^{\hspace{11pt} \rho} \partial_\rho \right] + \ell(\Delta) \partial_\mu, \end{equation} where the $SO(3)$-invariant functionals $f(\Delta)$, $g(\Delta)$ and $\ell(\Delta)$ satisfy, $f,g$ real, \begin{align} &2\left[(f+g\Delta)' + g \right]= \ell + \ell^\dagger, \label{1st_condition} \\ &2(f+g\Delta)f'= gf + \theta^2.\label{2nd_condition} \end{align} \end{subequations} \subsection{Determination of the deformed plane waves.} From eq. \eqref{projective}, we see that whenever the group under consideration is unitary, \textit{i.e.} $g^\dag g=1$ for any $g\in\mathcal{G}$, the radial part of the deformed plane waves is automatically determined by the 2-cocycle $\Omega$. Indeed, in this case, eq. \eqref{projective} becomes \begin{equation}\label{calcul13} E(g^\dag)E(g)=\Omega(g^\dag,g)E(1),\ E(1)={\text{\usefont{U}{dsss}{m}{n}\char49}}. \end{equation} Therefore, $\Omega(g^\dag,g)$ is real and positive $\forall g\in\mathcal{G}$, and $\vert E(g)\vert=\sqrt{\Omega(g^\dag,g)}\ {\text{\usefont{U}{dsss}{m}{n}\char49}}$ such that \begin{equation}\label{cond-central} [|E(g)|,U(g)]=0.
\end{equation} Combining \eqref{cond-central} with \eqref{polar} and \eqref{projectif-su2}, we easily obtain \begin{equation}\label{intermediaire} E(g_1)E(g_2)=|E(g_1)||E(g_2)|U(g_1g_2)=|E(g_1)||E(g_2)||E(g_1g_2)|^{-1}E(g_1g_2). \end{equation} Setting for short $\omega_g:=\sqrt{\Omega(g^\dag,g)}$, we find that the plane waves compose as \begin{subequations}\label{compopw} \begin{align} &E(g_1)E(g_2)=(\omega_{g_1}\omega_{g_2}\omega^{-1}_{g_1g_2})E(g_1g_2),\label{planew-multiplic}\\ &E(g_1g_2)=\omega_{g_1g_2}e^{iB(\xi_{g_1},\xi_{g_2})\cdot\hat{x}}. \end{align} \end{subequations} We are now in a position to fully determine the expression of the plane waves. To do so, it is convenient to reintroduce the explicit dependence of the deformed plane waves on the momenta. This is achieved by formally identifying $E(g)=\omega_g e^{i\xi_g\cdot\hat{x}}$ with \begin{equation}\label{generalform-ncexpo} E_p(\hat{x})=\omega(p)e^{i\xi(p)\cdot\hat{x}}. \end{equation} Next, the two functions $\omega(p)$ and $\xi(p)$ can be obtained by taking full advantage of the family of differential representations \eqref{general_rep} derived in $\S$\ref{repsu}, applied to eq. \eqref{q-act}. \subsubsection{Derivation of the phase\texorpdfstring{ $\xi$.}{.}} Let us first derive the expression for $\xi$. Combining eq. \eqref{qpw} with \eqref{q-act}, we obtain \begin{equation} \label{expo_appendix} e^{i\xi(p)\cdot\hat{x}}\triangleright 1 = \frac{e^{ip\cdot x}}{\omega(p)}.
\end{equation} Applying the same procedure to $e^{-i\xi(p)\cdot \hat{x}} \partial_\mu e^{i\xi(p) \cdot\hat{x}}$, we compute \begin{align} e^{-i\xi(p)\cdot\hat{x}} \partial_\mu e^{i\xi(p)\cdot \hat{x}} \triangleright 1 &= e^{-i\xi(p)\cdot \hat{x}} \partial_\mu \triangleright \frac{e^{ip\cdot x}}{\omega(p)} \\ &= e^{-i\xi(p)\cdot \hat{x}} \triangleright (ip_\mu) \frac{e^{ip\cdot x}}{\omega(p)}= (ip_\mu) e^{-i\xi(p)\cdot \hat{x}} e^{i\xi(p) \cdot\hat{x}} \triangleright 1\nonumber \end{align} which, combined with $e^{-i\xi(p)\cdot\hat{x}} e^{i\xi(p)\cdot\hat{x}} \equiv {\text{\usefont{U}{dsss}{m}{n}\char49}}$, yields \begin{equation} \label{operator_identity} e^{-i\xi(p)\cdot\hat{x}} \partial_\mu e^{i\xi(p)\cdot\hat{x}} = (ip_\mu) {\text{\usefont{U}{dsss}{m}{n}\char49}},\ \forall p\in \mathbb{R}^3. \end{equation} In particular, if we rescale $p \mapsto \lambda p$, $\lambda \in \mathbb{R}$, in \eqref{operator_identity}, and take the derivative with respect to the parameter $\lambda$, we easily obtain $ip_\mu$ for the right-hand side of \eqref{operator_identity}, while differentiating the left-hand side leads to \begin{equation} \frac{d}{d\lambda} \left[ e^{-i\xi(\lambda p)\cdot\hat{x}} \partial_\mu e^{i\xi(\lambda p)\cdot\hat{x}} \right] = i \frac{d}{d\lambda} \left[ \xi^\nu(\lambda p) \right] \left( e^{-i\xi(\lambda p)\cdot\hat{x}} \varphi_{\mu \nu}(\partial) e^{i\xi(\lambda p)\cdot\hat{x}} \right), \end{equation} where we have explicitly used the expression of the representation \eqref{diff-rep2} through the relation $[\partial_\mu, \hat{x}_\nu] = [\partial_\mu,x^a\varphi_{a\nu}] = \varphi_{\mu \nu}$.
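Both the relation $[\partial_\mu,\hat{x}_\nu]=\varphi_{\mu\nu}$ and the $\mathfrak{su}(2)$ bracket can be checked symbolically on the lowest-order truncation of \eqref{general_rep}, $\hat{x}_\mu=x_\mu+i\theta\,\varepsilon_{\alpha\mu}{}^{\rho}x^\alpha\partial_\rho$ (taking $f=1$, $g=0$, $\ell=0$; the constraints then hold only to first order in $\theta$, so the bracket is recovered at $\mathcal{O}(\theta)$, while the commutator with $\partial_\mu$ is exact for this truncation). A sympy sketch:

```python
import sympy as sp

th = sp.symbols('theta', real=True)
X = sp.symbols('x1:4', real=True)
F = sp.Function('F')(*X)

def xhat(mu, expr):
    # lowest-order truncation: xhat_mu = x_mu + i*theta*eps_{alpha mu rho} x^alpha d_rho
    out = X[mu] * expr
    for a in range(3):
        for r in range(3):
            out += sp.I * th * sp.LeviCivita(a, mu, r) * X[a] * sp.diff(expr, X[r])
    return out

def phi(mu, nu, expr):
    # phi_{mu nu} = delta_{mu nu} + i*theta*eps_{mu nu rho} d_rho, acting on expr
    out = (1 if mu == nu else 0) * expr
    for r in range(3):
        out += sp.I * th * sp.LeviCivita(mu, nu, r) * sp.diff(expr, X[r])
    return out

# [d_mu, xhat_nu] F = phi_{mu nu} F  (exact for this truncation)
for m in range(3):
    for n in range(3):
        comm = sp.diff(xhat(n, F), X[m]) - xhat(n, sp.diff(F, X[m]))
        assert sp.simplify(comm - phi(m, n, F)) == 0

# [xhat_mu, xhat_nu] F = 2 i theta eps_{mu nu rho} xhat_rho F + O(theta^2)
for m in range(3):
    for n in range(3):
        comm = xhat(m, xhat(n, F)) - xhat(n, xhat(m, F))
        rhs = sum(2 * sp.I * th * sp.LeviCivita(m, n, r) * xhat(r, F) for r in range(3))
        d = sp.expand(comm - rhs)
        assert d.coeff(th, 0) == 0 and sp.simplify(d.coeff(th, 1)) == 0
```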
In view of \eqref{operator_identity}, we readily infer that \begin{equation} e^{-i\xi(\lambda p)\cdot\hat{x}} \varphi_{\mu \nu}(\partial) e^{i\xi(\lambda p)\cdot\hat{x}} = \varphi_{\mu\nu}(i\lambda p) {\text{\usefont{U}{dsss}{m}{n}\char49}}, \end{equation} from which we conclude that \begin{equation}\label{diff-xi} \varphi_{\mu\nu}(i\lambda p) \frac{d}{d\lambda} \left[ \xi^\nu(\lambda p) \right] = p_\mu , \end{equation} indicating that the function $\xi$ is merely determined by a first-order differential equation. To solve the latter, it remains to invert $\varphi_{\mu\nu}$. In view of the Schur-Weyl decomposition for $SO(3)$, we are looking for solutions of the form \begin{subequations} \begin{equation} \label{phi-inverse-general} (\varphi^{-1})_{\mu\nu}(\partial) = X(\Delta) \delta_{\mu \nu} + Y(\Delta) \partial_\mu \partial_\nu + Z(\Delta) \varepsilon_{\mu \nu}^{\hspace{11pt} \rho} \partial_\rho, \end{equation} such that $\varphi_{\mu\nu} (\varphi^{-1})^{\nu\sigma} = \delta_\mu^{\hspace{3pt}\sigma}$. Standard computations lead to the following system \begin{equation} fX - i\theta \Delta Z = 1,\ (f+\Delta g)Y + gX + i\theta Z = 0,\ fZ + i\theta X = 0, \end{equation} which admits the following unique solution, assuming $f^2\neq\theta^2\Delta$,\footnote{In the case $f^2-\theta^2\Delta=0$, $\varphi_{\mu\nu}$ is not invertible.} \begin{equation}\label{XYZ-coord} X(\Delta)=\frac{f(\Delta)}{f^2(\Delta)-\theta^2\Delta},\ Y(\Delta)= - \frac{2f'(\Delta)}{f^2(\Delta)-\theta^2\Delta},\ Z(\Delta)= - \frac{i \theta}{f^2(\Delta)-\theta^2\Delta}, \end{equation} \end{subequations} where we have used eq. \eqref{2nd_condition} to simplify the expression of $Y$.
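The solution \eqref{XYZ-coord} can be verified symbolically: substituting $X$, $Y$, $Z$ into the linear system and eliminating $f'$ through \eqref{2nd_condition} reduces each equation to zero. A minimal sympy sketch (treating $f$, $g$, $f'$ and $\Delta$ as commuting symbols):

```python
import sympy as sp

f, g, fp, D, theta = sp.symbols('f g fprime Delta theta')
den = f**2 - theta**2 * D

X = f / den
Y = -2 * fp / den
Z = -sp.I * theta / den

# constraint (2nd_condition): 2 (f + g Delta) f' = g f + theta^2
fp_sol = (g * f + theta**2) / (2 * (f + g * D))

eqs = [f * X - sp.I * theta * D * Z - 1,            # f X - i theta Delta Z = 1
       (f + D * g) * Y + g * X + sp.I * theta * Z,  # (f + Delta g) Y + g X + i theta Z = 0
       f * Z + sp.I * theta * X]                    # f Z + i theta X = 0

checks = [sp.simplify(e.subs(fp, fp_sol)) for e in eqs]
print(checks)   # [0, 0, 0]
```

The second equation is the only one that requires the constraint; the other two hold identically.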
We conclude that \begin{subequations} \begin{equation}\label{phi-inverse} (\varphi^{-1})^{\mu\nu}(ip) = \frac{1}{f^2+\theta^2 p^2} \left( f \delta^{\mu\nu} + 2f'p^\mu p^\nu + \theta \varepsilon^{\mu \nu \rho} p_\rho \right), \end{equation} where $f$ and its derivative are (real) functions of $-p^2$.\smallskip Finally, integrating $d\xi^\mu = (\varphi^{-1})^{\mu\nu}_{\vert_{i\lambda p}} p_\nu d\lambda$ between 0 and 1 on both sides of the equality, we obtain \begin{equation} \label{solution-xi} \xi^\mu(p) = \int_0^1 (\varphi^{-1})^{\mu\nu}_{\vert_{i\lambda p}} p_\nu\ d\lambda, \end{equation} \end{subequations} where the initial condition $\xi^\mu(0)=0$ stems from $E(1)=E_0(\hat{x})={\text{\usefont{U}{dsss}{m}{n}\char49}}$.\footnote{The notation $\varphi^{-1}_{\vert_y}$ means that the function $\varphi^{-1}$ is evaluated at $y$.}$^,$\footnote{Observe that $\xi$ depends only on one of the three functionals characterising the differential representation.} \subsubsection{Derivation of the radius\texorpdfstring{ $\omega$.}{.}} In order to fully characterise $E_p(\hat{x})$, it remains to determine its radial part, namely to derive the expression for $\omega(p)$. Again, the strategy amounts to rescaling $p\mapsto \lambda p$ by some real parameter $\lambda$, then differentiating with respect to $\lambda$.
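As an illustration of \eqref{solution-xi}, consider the choice $f=1$, $g=-\theta^2$, which satisfies \eqref{2nd_condition} identically (both sides vanish). Then $f'=0$, so the $Y$-term of \eqref{phi-inverse} drops out, the $Z$-term vanishes upon contraction with $p_\nu$ by antisymmetry, and \eqref{solution-xi} integrates to $\xi^\mu(p)=p^\mu\arctan(\theta\|p\|)/(\theta\|p\|)$. A numerical sketch checking this closed form against direct quadrature (parameter values are arbitrary, illustration-only choices):

```python
import numpy as np
from scipy.integrate import quad

theta = 0.4
p = np.array([0.7, -1.2, 0.5])
pn = np.linalg.norm(p)

def phi_inv_contracted(lam):
    # (phi^{-1})^{mu nu}|_{i lam p} p_nu for f = 1, g = -theta^2 (so f' = 0):
    # the Y-term vanishes (f' = 0) and the Z-term vanishes by antisymmetry
    # (eps^{mu nu rho} p_nu p_rho = 0), leaving p^mu / (1 + theta^2 lam^2 |p|^2).
    return p / (1 + theta**2 * lam**2 * pn**2)

xi = np.array([quad(lambda lam: phi_inv_contracted(lam)[mu], 0, 1)[0] for mu in range(3)])
xi_closed = p * np.arctan(theta * pn) / (theta * pn)
print(xi, xi_closed)   # the two agree
```

Note that $\xi(p)\to p$ as $\theta\to0$, while for $\theta\neq0$ the generalised Fourier parameter is a nonlinear function of the momentum.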
On the one hand, we compute \begin{equation} \frac{d}{d \lambda} \left[ e^{i\xi(\lambda p)\cdot\hat{x}} \right] = i \frac{d}{d\lambda} \left[ \xi^\mu(\lambda p) \right] \hat{x}_\mu e^{i\xi(\lambda p)\cdot\hat{x}} = i (\varphi^{-1})^{\mu\nu}_{\vert_{i\lambda p}} p_\nu \hat{x}_\mu e^{i\xi(\lambda p)\cdot\hat{x}}, \end{equation} such that, evaluating this expression on 1, we have \begin{align}\label{calcul11} \frac{d}{d \lambda} \left[ e^{i\xi(\lambda p)\cdot\hat{x}} \right] \triangleright 1 &= i (\varphi^{-1})^{\mu\nu}_{\vert_{i\lambda p}} p_\nu \left(x^\alpha \varphi_{\alpha \mu}(\partial) + \chi_\mu(\partial) \right) \triangleright \frac{e^{i\lambda p\cdot x}}{\omega(\lambda p)} \\ &= i (\varphi^{-1})^{\mu\nu}_{\vert_{i\lambda p}} p_\nu \left(x^\alpha \varphi_{\alpha \mu}(i\lambda p) + \chi_\mu(i\lambda p) \right) \frac{e^{i\lambda p\cdot x}}{\omega(\lambda p)} \nonumber\\ &= i \left( x^\nu + \chi_\mu (\varphi^{-1})^{\mu\nu}_{\vert_{i\lambda p}} \right) p_\nu \frac{e^{i\lambda p\cdot x}}{\omega(\lambda p)}.\nonumber \end{align} On the other hand, we find \begin{equation}\label{calcul10} \frac{d}{d \lambda} \left[ \frac{e^{i\lambda p\cdot x}}{\omega(\lambda p)} \right] = \left( ix^\nu p_\nu - \frac{1}{\omega(\lambda p)} \frac{d}{d \lambda} \left[ \omega(\lambda p) \right] \right) \frac{e^{i\lambda p\cdot x}}{\omega(\lambda p)}. \end{equation} Owing to the relation\footnote{Indeed, let $g(\lambda,x)=\hat{A}(\lambda)f(x)$, with $f\in\text{Dom}(\hat{A})$. Then, \begin{equation} \frac{dg}{d\lambda} = \lim_{\epsilon \rightarrow 0} \frac{g(\lambda + \epsilon)-g(\lambda)}{\epsilon}= \lim_{\epsilon \rightarrow 0} \left( \frac{\hat{A}(\lambda + \epsilon)-\hat{A}(\lambda)}{\epsilon} \right) f(x), \end{equation} from which we conclude that \begin{equation} \frac{d}{d\lambda} \left[ \hat{A} f(x) \right] = \frac{d\hat{A}}{d\lambda} f(x).
\end{equation}} \begin{equation} \frac{d}{d\lambda} \left[ e^{i\xi(\lambda p)\cdot\hat{x}} \triangleright 1 \right] = \frac{d}{d\lambda} \left[ e^{i\xi(\lambda p)\cdot\hat{x}} \right] \triangleright 1, \end{equation} we can identify eq. \eqref{calcul10} with \eqref{calcul11}, to obtain \begin{equation} i \left( x^\nu + \chi_\mu (\varphi^{-1})^{\mu\nu}_{\vert_{i\lambda p}} \right) p_\nu = ix^\nu p_\nu - \frac{1}{\omega(\lambda p)} \frac{d}{d \lambda} \left[ \omega(\lambda p) \right] , \end{equation} or equivalently \begin{equation} \frac{1}{\omega(\lambda p)} \frac{d}{d \lambda} \left[ \omega(\lambda p) \right] = - i \chi_\mu (\varphi^{-1})^{\mu\nu}_{\vert_{i\lambda p}} p_\nu. \end{equation} Integrating the above differential equation, we find the following solution for $\omega$ \begin{equation}\label{solution-omega} \omega(p) = e^{-i \int_0^1 d\lambda \ \chi_\mu(i\lambda p) (\varphi^{-1})^{\mu\nu}_{\vert_{i\lambda p}} p_\nu}. \end{equation} This concludes the derivation of the deformed plane waves. \subsubsection{Summary.} Let us summarise and comment on our results. \begin{enumerate}[label={$\textit{(\roman*)}$}] \item{First, let us recap the main lines of our construction. We have seen, eq. \eqref{sustar}, that quantisation maps, hence star products, are fully characterised by the so-called deformed plane waves \eqref{qpw}. Having in mind the Weyl quantisation scheme, which essentially identifies quantisation maps with group representations, we have then identified the deformed plane waves with projective representations of the group \begin{equation} E(g_1)E(g_2)=\Omega(g_1,g_2)E(g_1g_2),\ [\Omega]\in H^2\big(\mathcal{G},\mathbb{C}\!\setminus\!\!\lbrace0\rbrace\big),\ g_1,g_2\in\mathcal{G}. \end{equation} This led us to the conclusion that group cohomology could be used to classify the various inequivalent families of star products.
However, exhibiting a representative of such deformed plane waves based only on cohomological considerations may not be an easy task in general. In practice, we have shown that an explicit expression for the deformed plane waves can be conveniently obtained by representing the abstract coordinate operators as differential ones. Recall that such a representative is needed for performing actual computations in the context of NCFT;} \item{In the case where the group underlying the quantum space is $SU(2)$, we have shown that the star products can be indexed by three real functionals stemming from the requirement that the differential *-representation $\hat{x}_\mu$, eq. \eqref{general_rep}, define an $SO(3)$-equivariant morphism of *-algebras. In this case, upon using the explicit expression of $\varphi^{-1}$, eq. \eqref{phi-inverse-general}, in both the expressions of $\xi$ and $\omega$, eq. \eqref{solution-xi} and \eqref{solution-omega}, we have found that the deformed plane waves \begin{equation} E(g)\mapsto E_p(\hat{x})=\omega(p)e^{i\xi(p)\cdot\hat{x}}, \end{equation} are fully determined by a set of two Volterra integrals \begin{subequations}\label{volterra} \begin{align} &\xi^\mu(p) = \int_{-p^2}^0 \frac{dt}{2\|\vec{p}\| \sqrt{-t}} \ \left[X(t) + t Y(t)\right] p^\mu,\\ &\omega(p) = e^{\int_{-p^2}^0 dt \left[X(t) + t Y(t)\right]\ell(t)}, \end{align} \end{subequations} in which $X$ and $Y$, eq. \eqref{XYZ-coord}, are representation dependent;} \item{Next, from the positivity of $\omega$, eq. \eqref{calcul13}, together with the reality of $X$ and $Y$ (which depend only on the real functional $f$), it follows that $\ell(t)$ has to be real $\forall t\in\mathbb{R}$. This is achieved by requiring $\ell^\dag=\ell$. Hence, eq.
\eqref{1st_condition}, entering the definition of the family of $^*$-representations \eqref{general_rep}, reduces to \begin{equation} \ell=(f+g\Delta)^\prime+g, \end{equation} therefore constraining the expression for $\ell$ once $f$ and $g$ satisfying \eqref{2nd_condition} are determined;} \item{Finally, we conclude this chapter with the following important remark. Let us focus for a moment on the solution $\xi_\mu(p)=p_\mu$, $\omega(p)=1$, for all $p\in\mathbb{R}^3$. Thus, the deformed plane wave reduces to $E_p(\hat{x})=e^{ip\cdot\hat{x}}=:W(e^{ip\cdot{x}})$, which is nothing but the Wigner-Weyl quantisation map corresponding to the symmetric ordering of operators. Then, we can easily show that it is not possible to find any representation belonging to the family \eqref{general_rep} for which $W(e^{ip\cdot{x}})\triangleright1=e^{ip\cdot{x}}$. Indeed, let us assume that such a representation exists. Then, $\xi$ and $\omega$ are such that \begin{subequations} \begin{align} &\int_{-p^2}^0\ dt(X(t)+tY(t))\ell(t)=0,\label{integralone}\\ &\int_{-p^2}^0\ \frac{dt}{2\|\vec{p}\|\sqrt{-t}}(X(t)+tY(t))=1.\label{integraltwo} \end{align} \end{subequations} Whenever $\ell(t)\ne0$, it can be easily checked that \eqref{integralone} and \eqref{integraltwo} cannot be simultaneously satisfied. Indeed, \eqref{integralone} implies $(X(t)+tY(t))\ell(t)=0$ and therefore $X(t)+tY(t)=0$, which clearly contradicts \eqref{integraltwo}. Despite this fact, it is worth mentioning that such a Wigner-Weyl quantisation map has been used in the literature within approaches similar to the one presented here.} \end{enumerate} \section{Kontsevich product for \texorpdfstring{$\mathfrak{su}(2)$}{su(2)} noncommutative space.}\label{sukont} In this section, we adapt the material of $\S$\ref{sec-su2} to the derivation of a suitable star product for $\mathbb{R}^3_\theta$.
This product is obtained from a subfamily of $SO(3)$-equivariant differential *-representations indexed by only one real functional of $\Delta$, the ordinary Laplacian on $\mathbb{R}^3$. The expression of the deformed plane waves is given. Finally, making use of the Harish-Chandra map \cite{Harish:1951}, the corresponding star product is shown to be equivalent to the Kontsevich product \cite{Kontsevich:2003} for the Poisson manifold dual to the finite dimensional Lie algebra $\mathfrak{su}(2)$, namely closed for the trace functional defined by the usual Lebesgue integral $\int d^3x$. This product will be used in Chap. \ref{sap-ncftsu2} to construct an action functional aiming to describe the dynamics of interacting scalar fields on $\mathbb{R}^3_\theta$. \subsection{A suitable family of differential *-representations.} Setting $f+g\Delta=:R(\Delta)$, with $R(\Delta)$ a real functional of $\Delta$, we see that eq. \eqref{2nd_condition} gives rise to a Riccati equation, \textit{i.e.} \begin{equation}\label{riccati} 2g' = \left(\frac{2R'(\Delta)}{\Delta} - \frac{\theta^2}{\Delta R(\Delta)} \right) - \frac{3}{\Delta}g + \frac{1}{R(\Delta)} g^2. \end{equation} The family we are looking for is characterised by the constraint $R(\Delta)=1$. Equation \eqref{riccati} reduces to \begin{equation}\label{equadiff-reduced} 2t \frac{dG}{dt} + 3\left(G(t)+1 \right) - \frac{t}{6} G^2(t) = 0,\ g(t)=:\frac{\theta^2}{3} G(2\theta^2 t). \end{equation} It can be shown \cite{Kupriyanov:2015} that this equation admits the following solution \begin{equation}\label{kv-g} G(t) = -6\sum_{n=1}^\infty \frac{2^n B_{2n}}{(2n)!}\ t^{n-1}, \end{equation} where the $B_{2n}$'s are the Bernoulli numbers. On the other hand, eq. \eqref{1st_condition} reduces to \begin{equation}\label{kv-chi} \ell(\Delta)=g(\Delta).
\end{equation} Hence, \begin{subequations}\label{kv-star} \begin{align} &\hat{x}_\mu=x^\alpha\left[(1-g(\Delta)\Delta)\delta_{\alpha\mu}+g(\Delta)\partial_\alpha\partial_\mu+ i\theta\varepsilon_{\alpha\mu}^{\hspace{11pt}\rho}\partial_\rho\right]+g(\Delta)\partial_\mu, \\ &g(\Delta)=-\sum_{n=1}^\infty \frac{(2\theta)^{2n} B_{2n}}{(2n)!}\ \Delta^{n-1},\label{gkv-star} \end{align} \end{subequations} which selects a subfamily of *-representations among those given by \eqref{general_rep}. \subsection{Deformed plane waves and Kontsevich star product.}\label{sec-kont} The deformed plane waves associated with \eqref{kv-star} are easily obtained from eq. \eqref{volterra}. Upon combining $f=1-tg$, which is equivalent to the constraint $R(t)=1$, with eq. \eqref{2nd_condition}, we find that $2tf'=f-f^2+\theta^2t$. It follows that \begin{equation} X(t)+tY(t)=\frac{f-2tf'}{f^2-\theta^2t}=1, \end{equation} where $X$ and $Y$ are defined in eq. \eqref{XYZ-coord}. Using this last equation in \eqref{volterra}, we obtain \begin{subequations} \begin{align} &\xi_\mu(p)=p_\mu,\\ &\omega(p) = \exp \left( \int_{-p^2}^0 g(t) dt\right). \end{align} \end{subequations} Now, observe that $g$, eq. \eqref{gkv-star}, formally converges to \begin{equation}\label{closed_g} g(\Delta) = - \Delta^{-1} \left( \theta\sqrt{\Delta} \coth(\theta\sqrt{\Delta}) - 1 \right) . \end{equation} Passing from hyperbolic to trigonometric functions and performing the change of variable $x=\theta \sqrt{-t}$, we can rewrite $\omega$ as \begin{equation} \omega(p) = \exp\left(2 \int^{\theta |p|}_0 \left( \cot(x) - \frac{1}{x} \right) dx \right). \end{equation} Integrating this latter expression, we finally obtain the following expression for the deformed plane waves \begin{equation} \label{pw-example} Q(e^{ip\cdot x})=E_p(\hat{x}) = \left( \frac{\sin(\theta |p|)}{\theta |p|} \right)^2 e^{ip\cdot\hat{x}}. \end{equation} According to the discussion of $\S$\ref{sec-qmapsu2} together with eq.
\eqref{compopw}, the corresponding star product is readily obtained from \begin{subequations} \begin{align} &e^{ip\cdot x} \star_Q e^{iq\cdot x} = \mathcal{W}^2(p,q) e^{iB(p,q)\cdot x},\\ &\mathcal{W}(p,q) := \frac{|B(p,q)|}{\theta |p||q|}\frac{\sin(\theta |p|)\sin(\theta |q|)}{\sin(\theta |B(p,q)|)},\label{theweight} \end{align} \end{subequations} with $B(p,q)$ stemming from the Baker-Campbell-Hausdorff formula for $\mathfrak{su}(2)$. We have \begin{equation} (f\star_Q g)(x) = \int \frac{d^3p}{(2\pi)^3}\frac{d^3q}{(2\pi)^3}\tilde{f}(p)\tilde{g}(q) \mathcal{W}^2(p,q) e^{iB(p,q)\cdot x},\ f,g \in \mathfrak{F}(\mathbb{R}^3). \end{equation} To relate the above star product to the Kontsevich product, it is convenient to define a new quantisation map. Let $\mathcal{K}:\mathfrak{F}(\mathbb{R}^3) \to \mathcal{L}(\mathcal{H})$ be defined by \begin{subequations} \begin{align} &\mathcal{K} := Q \circ H,\\ &H := \frac{\theta \sqrt{\Delta}}{\sinh(\theta \sqrt{\Delta})}.\label{Kontsevich} \end{align} \end{subequations} The operator $H$ is such that \begin{equation} H(f\star_\mathcal{K}g)=H(f)\star_QH(g),\ f,g \in \mathfrak{F}(\mathbb{R}^3), \end{equation} which obviously defines an equivalence relation between the star products $\star_Q$ and $\star_\mathcal{K}$.\\ A standard calculation yields \begin{equation}\label{checkpoint2} \mathcal{K}(e^{ip\cdot x}) = \frac{\sin(\theta |p|)}{\theta |p|} e^{ip\cdot\hat{x}}.
\end{equation} Hence, the star product $\star_\mathcal{K}$ associated with $\mathcal{K}$, which is ($H$-)equivalent to $\star_Q$, is obtained from \begin{equation}\label{duflopw} e^{ip\cdot x} \star_\mathcal{K} e^{iq\cdot x} = \mathcal{W}(p,q) e^{iB(p,q)\cdot x}, \end{equation} and we can write \begin{equation}\label{kontsev-product} (f\star_\mathcal{K}g)(x)=\int \frac{d^3p}{(2\pi)^3}\frac{d^3q}{(2\pi)^3}\tilde{f}(p)\tilde{g}(q) \mathcal{W}(p,q) e^{iB(p,q)\cdot x},\ f,g \in \mathfrak{F}(\mathbb{R}^3), \end{equation} where $\mathcal{W}(p,q)$ is still given by \eqref{theweight}.\smallskip This star product $\star_\mathcal{K}$ coincides with the Kontsevich product \cite{Kontsevich:2003}. It has been derived for $\mathbb{R}^3_\theta$ within a different approach in \cite{Freidel:2008,Guedes:2013,Kupriyanov:2015}, namely via the relation \begin{equation}\label{deriv-konts} \mathcal{K}=W\circ j^{\frac{1}{2}}(\Delta), \end{equation} where $W$ is the Weyl quantisation map and \begin{equation}\label{duflo} j^{\frac{1}{2}}(\Delta)=\frac{\sinh(\theta \sqrt{\Delta})}{\theta \sqrt{\Delta}}, \end{equation} is the Harish-Chandra map \cite{Harish:1951,Duflo:1970,Duflo:1977}. Recall that $\star_\mathcal{K}$ is closed for the trace functional defined by the Lebesgue integral on $\mathbb{R}^3$, namely \begin{equation}\label{starclos} \int d^3x\ (f \star_\mathcal{K} g)(x) = \int d^3x\ f(x) g(x). \end{equation} Finally, comparing \eqref{Kontsevich} and \eqref{duflo}, we infer \begin{equation} j^{\frac{1}{2}}(\Delta)=H^{-1}. \end{equation} Hence, $H$ is the inverse of the Harish-Chandra map. Notice that, by using \eqref{pw-example} combined with \eqref{checkpoint2}, we have \begin{equation} \mathcal{K}(e^{ip\cdot x})\triangleright 1=\frac{\theta|p|}{\sin(\theta|p|)}e^{ip\cdot x}, \end{equation} while \eqref{checkpoint2} and \eqref{deriv-konts} yield \begin{equation} W(e^{ip\cdot x})=\frac{\theta|p|}{\sin(\theta|p|)}\mathcal{K}(e^{ip\cdot x})=e^{ip\cdot\hat{x}}.
\end{equation} \part{Noncommutative quantum field theory.}\label{part-ncft} Spacetime plays a central role in the current description of physical phenomena, as it provides the mathematical framework upon which the construction of physical models is based.\footnote{Here, by ``spacetime'' we designate either the spacetime as a whole, or space and time separately.} It is therefore to be expected that modifications in the structure of spacetime, such as those occurring in noncommutative geometry, induce modifications in the description of the physical phenomena themselves. This is all the more true as modern physics is mainly based on the notion of field, a mathematical object which associates with each point in space and time a physical quantity, dynamical or not.\smallskip It is precisely the aim of noncommutative field theory (NCFT) to study the interplay between quantum spacetime and field dynamics or, in other words, to study the possibly new (both classical and quantum) behaviours of fields on a noncommutative background. Unfortunately, as far as we know, there is no canonical way of carrying out such a study, as no experimental or observational evidence exists to guide the construction of a reasonable action functional describing the dynamics of interacting fields on a noncommutative background. Nevertheless, based on our current understanding of both ordinary quantum field theory and classical gravity, together with the assumption that known physical theories should be recovered in some limit of the NCFT, (what we believe to be) reasonable requirements that the action functional should satisfy can be postulated to guide its derivation. This last statement will be clarified throughout this part. Note that, in the present dissertation, we are only concerned with the quantum properties of NCFT, unfortunately leaving aside the study of their classical ($\hbar=0$) properties.
Furthermore, although attempts to extend the canonical quantisation scheme to the context of field theory have been studied in the literature, see, e.g., \cite{Fiore:2010,Bu:2006,Arzano:2007}, and references therein, we here adopt the point of view of path integral quantisation. \smallskip One way to investigate the quantum properties of a NCFT is to represent the latter as a matrix model. This has been done in the Moyal case, see, e.g., \cite{Grosse:2003,Grosse:2005,Grosse:2008,JCW:2008a,JCW:2008b,JCW:2013a}, as well as for $\mathbb{R}_\theta^3$, see, e.g., \cite{JCW:2013b,JCW:2015,JCW:2016}. For a review see, e.g., \cite{Lizzi:2014}, and references therein. One of the advantages of this approach is that, formulated in the matrix basis, the expression of the interaction potential may become very simple. On the other hand, the expression of the kinetic operator may become cumbersome and difficult to invert. Therefore, albeit powerful, the matrix model formulation of a NCFT may become unusable in practice due to severe technical difficulties. Another, widely used, framework for studying the properties of NCFT is to take advantage of star products and the deformation theory approach, either from the standard viewpoint of formal deformations extending the quantisation approach of classical phase space, or by taking advantage of underlying Hopf algebra structures and the related twists. We adopt the former viewpoint in this dissertation. Examples of star products -- which we are going to use to investigate the quantum properties of scalar NCFT built on $\mathbb{R}^3_\theta$ and $\kappa$-Minkowski -- have been derived in Part \ref{ch-ncst}. The star product formulation of NCFT is often convenient for the fast construction of reasonable action functionals, but may lead to technical difficulties whenever the star product is represented by a complicated formula.
This is the case, e.g., when the algebra of coordinates associated with the quantum spacetime is isomorphic to a semisimple Lie algebra; see Chap. \ref{sap-ncftsu2} for an example of such difficulties. On the contrary, whenever the algebra of coordinates is isomorphic to a nilpotent or solvable Lie algebra, the corresponding BCH formula (from which the momentum conservation law, as well as the composition of plane waves, can be read off) admits a simplified expression, and the corresponding star product may admit a relatively simple expression as well.\smallskip Although the quantum properties of NCFT on Moyal space, as well as on $\mathbb{R}^3_\theta$, have been widely studied in the literature, this is not the case for NCFT built on $\kappa$-Minkowski. This is probably due to the structure of the algebra of fields modelling $\kappa$-Minkowski, which is very different from that of Moyal space or $\mathbb{R}^3_\theta$. Indeed, for the latter two spaces, the corresponding groups are unimodular. This is not the case for $\kappa$-Minkowski. As we are going to see, the nonunimodularity of the Lie group underlying $\kappa$-Minkowski is reflected at the level of the action functional by the loss of cyclicity of the integral involved in it. To be more precise, requiring the action functional to be $\kappa$-Poincar\'{e} invariant forces the Lebesgue integral to define a twisted trace, the twist being related to the modular function, eq. \eqref{function-modulaire-4d}. As far as we know, there is only one other paper in the literature \cite{Grosse:2006} dealing with interacting scalar field theories on $\kappa$-Minkowski. In this paper, the NCFT was built from another (albeit presumably equivalent) star product and a different kinetic operator was used. The conclusions we obtain seem to qualitatively agree with those obtained in \cite{Grosse:2006}.
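Returning for a moment to the remark on solvable groups made above: for the group $\mathbb{R}\ltimes\mathbb{R}^3$ underlying $\kappa$-Minkowski, the BCH formula closes into a simple deformed composition law for the momenta. In one common parametrisation, $(p\oplus q)_0=p_0+q_0$ and $(p\oplus q)_i=p_i+e^{-p_0/\kappa}q_i$; the sign and placement of the exponential factor depend on the chosen group parametrisation, so the snippet below is only an illustrative sketch. Associativity is inherited from the group law, while commutativity is lost:

```python
import math

kappa = 1.0

def oplus(p, q):
    # Deformed momentum composition read off the BCH formula for the solvable
    # group R x| R^3 (one common convention; normalisations are ours):
    # (p (+) q)_0 = p_0 + q_0,  vec(p (+) q) = vec p + e^{-p_0/kappa} vec q
    p0, pv = p
    q0, qv = q
    return (p0 + q0, tuple(pi + math.exp(-p0 / kappa) * qi for pi, qi in zip(pv, qv)))

p = (0.3, (1.0, -0.5, 0.2))
q = (-0.7, (0.4, 0.1, -1.1))
r = (0.5, (-0.2, 0.9, 0.3))

# associativity: (p (+) q) (+) r = p (+) (q (+) r), inherited from the group law
lhs = oplus(oplus(p, q), r)
rhs = oplus(p, oplus(q, r))
assert abs(lhs[0] - rhs[0]) < 1e-12
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs[1], rhs[1]))

# noncommutativity: p (+) q differs from q (+) p in general
assert oplus(p, q) != oplus(q, p)
```

The exponential factor in the spatial components is precisely where the nonunimodularity discussed above enters: composing to the left or to the right rescales the spatial momenta differently.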
However, the precise comparison between both works is drastically complicated by the technical approach used in \cite{Grosse:2006}, which leads to very involved formulas.\smallskip This part is organised in two independent chapters. The first chapter (Chap. \ref{ch-ncft}) is devoted to the study of the quantum properties of various models of $\kappa$-Poincar\'{e} invariant noncommutative field theories. We give the full derivation of the one-loop order corrections to both the 2-point and 4-point functions for different families of (quartic) interactions and kinetic operators. As we are going to see, the relatively simple expression of the star product \eqref{star-4d} makes the computation of the various contributions very tractable. In Chap. \ref{sap-ncftsu2}, we use the Kontsevich star product derived in $\S$\ref{sec-kont} to study the one-loop 2-point function for two models of noncommutative field theory with quartic interactions. In this case, we find that the deformation parameter $\theta$ provides a natural UV cutoff which regularises both the UV and the IR. On the contrary, $\kappa$ does not play a role similar to $\theta$ for the NCFT built from \eqref{starinvol4d}. We find that, for the three propagators considered, the NCFT on $\kappa$-Minkowski are not finite. In one case, however, the one-loop 2-point function has a milder behaviour than its commutative counterpart, and the one-loop 4-point function is found to be finite. These results are summarised in Table \ref{tableau2}. Finally, we give an interpretation of the noncyclicity of the Lebesgue integral as reflecting the occurrence of a KMS condition at the level of the algebra of fields. This is discussed in $\S$\ref{sec-KMS}.\smallskip \noindent In the following, we work with Euclidean signature.
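As a closing aside, the $su(2)$ material that will be used in Chap. \ref{sap-ncftsu2} lends itself to a direct numerical cross-check: the weight of the deformed plane waves \eqref{pw-example} derived in $\S$\ref{sec-kont} can be recovered by naive quadrature of $\omega(p)=\exp\big(2\int_0^{\theta|p|}(\cot x-1/x)\,dx\big)$. A minimal sketch (the values of $\theta$ and $|p|$ are arbitrary test choices):

```python
import math

def omega_closed(p, theta=0.5):
    # closed form of the weight: (sin(theta |p|) / (theta |p|))^2
    x = theta * p
    return (math.sin(x) / x) ** 2

def omega_integral(p, theta=0.5, n=20000):
    # omega = exp(2 * int_0^{theta p} (cot x - 1/x) dx) by the trapezoidal rule;
    # the integrand extends continuously to 0 at x = 0 (cot x - 1/x ~ -x/3)
    b = theta * p
    h = b / n
    total = 0.0
    for k in range(n + 1):
        x = k * h
        f = 0.0 if x == 0.0 else (math.cos(x) / math.sin(x) - 1.0 / x)
        w = 0.5 if k in (0, n) else 1.0  # trapezoidal endpoint weights
        total += w * f
    return math.exp(2.0 * h * total)

# the quadrature reproduces the closed form (valid for theta |p| < pi)
assert abs(omega_integral(2.0) - omega_closed(2.0)) < 1e-6
```

The agreement simply reflects $\int_0^a(\cot x-1/x)\,dx=\ln(\sin a/a)$, which is the integration step performed analytically in $\S$\ref{sec-kont}.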
\chapter{\texorpdfstring{$\kappa$}{k}-Poincar\'{e} invariant scalar field theories.}\label{ch-ncft} \section{Construction of real action functionals from KMS weight.}\label{sec-action} The presentation of the $\kappa$-Minkowski space $\mathcal{M}_\kappa$ given in Chap. \ref{sec-Minkowski} provides us with all the needed ingredients for constructing physically reasonable expressions for an action functional $\mathcal{S}_{\kappa,\star}$ aiming to describe the dynamics of interacting complex scalar fields on a noncommutative $\kappa$-Minkowski background. In this framework, the (classical) fields are merely the elements of the group C*-algebra $C^*(\mathcal{G}_4)$ modelling $\kappa$-Minkowski, while a canonical measure of integration to be involved in the action functional is provided by the (right invariant) Haar measure of the (nonunimodular) locally compact Lie group $\mathcal{G}_4=\mathbb{R}\ltimes\mathbb{R}^{3}$. Recall that this measure of integration coincides in the group parametrisation \eqref{group4d-parametrization} with the Lebesgue measure on $\mathbb{R}^4$, \textit{i.e.} $\int d^4x$. Nevertheless, the construction of the action functional remains equivocal and additional assumptions are needed to guide its full derivation. \subsection{Preliminary considerations.} \noindent For this purpose, we demand the action functional to obey the following two conditions: \begin{description} \item[{\footnotesize(SP)}]{First, to be invariant under the action of the $\kappa$-Poincar\'{e} algebra;} \item[{\footnotesize(CP)}]{Next, to reduce to a known field theory in the commutative (low energy) limit.} \end{description} The former condition (SP) constitutes the core principle upon which our construction of noncommutative field theories is based.
It is physically motivated by the important role played by the Poincar\'{e} invariance in ordinary quantum field theory together with the fact that the $\kappa$-Poincar\'{e} algebra can be viewed as describing the (quantum) symmetries of the $\kappa$-Minkowski space; see Appendix \ref{sap-poincare}. Hence, a reasonable requirement for an action functional aiming to describe the dynamics of elementary particles on a $\kappa$-Minkowski background is to be compatible with the symmetries of this deformed spacetime.\\ Straightforward computations show that the Lebesgue measure is invariant under the action of the $\kappa$-Poincar\'{e} Hopf algebra $\mathfrak{P}_\kappa$ in the sense that\footnote{Since every element of $\mathfrak{P}_\kappa$ can be written as a linear combination of the generators $\mathcal{E}$, $P_i$, $M_i$, and $N_i$, it is sufficient to check the invariance of the measure under the action of these generators to prove its $\kappa$-Poincar\'{e} invariance.} \begin{equation}\label{measure-invariance} \int d^4x\left(h\triangleright f\right)(x)=\epsilon(h)\int d^4x\ f(x),\ \forall h\in\mathfrak{P}_\kappa,\ \forall f\in\mathcal{M}_\kappa, \end{equation} where the action $\triangleright$ is given by eq. \eqref{module-action} and $\epsilon$ is the counit, eq. \eqref{hopf3}, of $\mathfrak{P}_\kappa$.\smallskip Since the (star) product of any two functions $f,g\in\mathcal{M}_\kappa$ still belongs to $\mathcal{M}_\kappa$, as well as $f^\ddagger\in\mathcal{M}_\kappa$ and $\mathcal{O}f\in\mathcal{M}_\kappa$ for any operator $\mathcal{O}:\mathcal{M}_\kappa\to\mathcal{M}_\kappa$ with dense domain in $\mathcal{M}_\kappa$, it follows from the $\kappa$-Poincar\'{e} invariance of the Lebesgue measure, eq.
\eqref{measure-invariance}, that any action functional of the form \begin{equation} S_{\kappa,\star}[\phi,\phi^\ddagger]:=\int d^4x\ \mathcal{L}_\kappa[\phi,\phi^\ddagger](x), \end{equation} is $\kappa$-Poincar\'{e} invariant provided that the Lagrangian density $\mathcal{L}_\kappa$ is made of polynomials in the fields $\phi$ and $\phi^\ddagger$, together with terms of the form $\mathcal{O}\phi$ and $\mathcal{O}\phi^\ddagger$. Namely, \begin{equation}\label{action-invariance} h\blacktriangleright S_{\kappa,\star}[\phi,\phi^\ddagger]:=\int d^4x\left(h\triangleright\mathcal{L}_\kappa[\phi,\phi^\ddagger]\right)(x)=\epsilon(h)S_{\kappa,\star}[\phi,\phi^\ddagger],\ \forall h\in\mathfrak{P}_\kappa, \end{equation} which is a Hopf algebraic formulation of the more familiar $\delta S=0$; see, e.g., \cite{Agostini:2004}.\smallskip The second condition (CP) is a guideline to offset the lack of observational and experimental data which would ideally constrain the admissible expressions for $\mathcal{S}_{\kappa,\star}$. This requirement is supported by the fact that the $\kappa$-deformed Minkowski space (resp. Poincar\'{e} algebra) is obtained from a (smooth) deformation of the classical Minkowski spacetime (resp. Poincar\'{e} algebra), the real, positive, dimensionful parameter $\kappa^{-1}$ increasing from zero as one goes from the commutative to the noncommutative setting.
Consequently, any $\kappa$-Poincar\'{e} invariant noncommutative field theory can be interpreted as providing a high energy extension of some Poincar\'{e} invariant field theory we should recover when $\kappa^{-1}\to0$.\\ In the following, we restrict our analysis to $\kappa$-Poincar\'{e} invariant action functionals describing the dynamics of a $\mathbb{C}$-valued scalar field, with quartic interactions, admitting the ordinary $\vert\phi\vert^4$ model as commutative limit, namely \begin{equation}\label{action-limit} \lim\limits_{\kappa\to\infty}S_{\kappa,\star}[\phi,\phi^\ddagger]=\int d^4x\left(\frac{1}{2}\bar{\phi}(x)(-\partial_\mu\partial^\mu+\bar{m}_0^2)\phi(x)+\frac{\bar{g}}{4!}\vert\phi(x)\vert^4\right), \end{equation} where $\bar{m}_0$ is the (bare) rest mass of the complex scalar field $\phi$ and $\bar{g}$ the corresponding (bare) coupling constant.\bigskip The determination of admissible expressions for $S_{\kappa,\star}$, satisfying all the above-mentioned requirements, is further facilitated by the introduction of a Hilbert product on $\mathcal{M}_\kappa$. Let $\langle\cdot,\cdot\rangle_\star:\mathcal{M}_\kappa\times\mathcal{M}_\kappa\to\mathbb{C}$ be a positive definite Hermitian form defined, $\forall f,g\in\mathcal{M}_\kappa$, by \begin{equation}\label{hilbert-def2} \langle f,g\rangle_\star:=\int d^4x\left(f\star g^\ddagger\right)(x). \end{equation} Upon developing the expressions of the star product and involution, eq. \eqref{starinvol4d}, in the left-hand side of the above expression, we easily obtain \begin{equation}\label{hilbert-prop2} \int d^4x \left(f\star g^\ddagger\right)(x)=\int d^4x f(x)\bar{g}(x),\ \int d^4x f^\ddagger(x)=\int d^4x \bar{f}(x).
\end{equation} From this, it follows that \begin{equation} \| f\|^2_\star:=\langle f,f\rangle_\star=\int d^4x \vert f(x)\vert^2=\| f\|^2_2\geq0,\ \forall f\in\mathcal{M}_\kappa, \end{equation} with equality if, and only if, $f=0$, while $\langle g,f\rangle_\star=\overline{\langle f,g\rangle}_\star$ where we have used \eqref{antivolution}.\\ The Hilbert product \eqref{hilbert-def2} provides an efficient tool to control the reality of the action functional, including the properties that the kinetic operator has to satisfy to ensure this reality condition. We easily deduce that a sufficient condition to ensure the reality of the action functional consists in considering terms of the form $\langle f,f\rangle_\star$ and $\langle f,\mathcal{O}f\rangle_\star$ provided $f\in\mathcal{M}_\kappa$ is any (star) polynomial in the fields $\phi$, $\phi^\ddagger$, and $\mathcal{O}:\mathcal{M}_\kappa\to\mathcal{M}_\kappa$ is selfadjoint. More details on the actual expressions of $f$ and $\mathcal{O}$ are given in $\S$\ref{sec-kinetic} and the following sections. For convenience, we further assume the action functional decomposes as usual into a kinetic term and an interaction one, \textit{i.e.} \begin{equation}\label{actiondecomposition} S_{\kappa,\star}[\phi,\phi^\ddagger]=S^\text{kin}_{\kappa,\star}[\phi,\phi^\ddagger]+S^\text{int}_{\kappa,\star}[\phi,\phi^\ddagger]. \end{equation} Anticipating the ensuing derivation of $S_{\kappa,\star}$, note that, for the theories under consideration, the mass dimensions of the fields and parameters are $[\phi]=[\phi^\ddagger]=1$, $[g]=0$ and $[m]=1$, where $g$ (resp. $m$) denotes generically a coupling constant (resp. a mass).\smallskip \noindent Before proceeding to this analysis, some comments are in order.
\begin{enumerate}[label={$\textit{(\roman*)}$}] \item{First of all, making use of the $\kappa$-Poincar\'{e} invariance of the Lebesgue integral together with \eqref{dag-hopfoperat} and \eqref{derivtwist}, we can show that both $\mathcal{E}$ and the $P_i$'s are selfadjoint with respect to \eqref{hilbert-def2}, \textit{i.e.} $\langle f,t^\dag g\rangle_\star:=\langle tf, g\rangle_\star=\langle f,t g\rangle_\star$, for any $t\in \mathfrak{T}_\kappa$.\\ For example, we compute \begin{align} \langle P_i f, g\rangle_\star&=-\int d^4x\ (\mathcal{E}^{-1}P_i\triangleright f^\ddagger)\star g=-\int d^4x\ (P_i\triangleright f^\ddagger)\star (\mathcal{E}\triangleright g)\\ &=\int d^4x\ (\mathcal{E}\triangleright f^\ddagger)\star (P_i\mathcal{E}\triangleright g)=\int d^4x\ f^\ddagger\star (P_i\triangleright g)=\langle f,P_ig\rangle_\star;\nonumber \end{align} } \item{Next, we see that our construction involves both $\phi$ and $\phi^\ddagger$ as primary objects. Some might object that the action functional should rather involve $\phi$ and $\bar{\phi}$ as primary objects, as is the case in ordinary quantum field theory (QFT) or even in most of the NCFT in the literature. To this objection, we answer that, unlike in ordinary QFT or NCFT built on Moyal or $\mathbb{R}^3_\theta$, the group underlying the construction of the $\kappa$-Minkowski space is nonunimodular. Recall that, in this context, the involution compatible with the structure of the group C*-algebra underlying the quantum space is defined, eq. \eqref{involution2-ap} and \eqref{invol-general}, by \begin{equation} f^\ddagger:=\mathcal{F}^{-1}\Big(\big(\Delta_{\mathcal{G}_4}\overline{\mathcal{F}f}\hspace{2pt}\big)^\flat\Big). \end{equation} Hence, as already mentioned in Part \ref{ch-ncst}, the fact that for NCFT built on Moyal or $\mathbb{R}^3_\theta$ a natural involution is provided by the complex conjugation $f\mapsto\bar{f}$ simply reflects the fact that the underlying groups are unimodular, namely $\Delta_\mathcal{G}=1$.
This is obviously not the case for $\kappa$-Minkowski. For these reasons, together with the compatibility of $\ddagger$ with the star product, it is necessary to incorporate $\phi^\ddagger$ in the expression of the action functional to ensure its reality. Note that the complex conjugation is still needed since the fields are $\mathbb{C}$-valued functions;} \item{Finally, straightforward computations show that the Lebesgue integral is not cyclic. Indeed, \begin{equation}\label{twistedtrace} \int d^4x\ f\star g=\int d^4x\ (\sigma\triangleright g)\star f, \end{equation} where we have defined for convenience \begin{equation}\label{twist-def} \sigma\triangleright f := \mathcal{E}^3 \triangleright f = e^{-3P_0/\kappa}\triangleright f, \end{equation} with $\mathcal{E}$ given by eq. \eqref{twist0}. Hence, the Lebesgue integral cannot define a trace, but instead, it defines a twisted trace. The next subsection is devoted to this important issue.\label{remark}} \end{enumerate} \subsection{Trading cyclicity for a KMS condition.}\label{sec-KMS} In the previous section, we have discussed some of the general properties a reasonable action functional describing the dynamics of interacting scalar fields on a $\kappa$-Minkowski background must satisfy. We found that requiring the action functional to be both $\mathbb{R}$-valued and invariant under the action of the $\kappa$-Poincar\'{e} algebra strongly constrains its admissible expressions. This has conditioned the definition of the Hilbert product \eqref{hilbert-def2}. More drastically, we found that these requirements lead to the loss of cyclicity of the Lebesgue integral with respect to the star product, eq. \eqref{twistedtrace}.
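The mechanism at work here can already be previewed on a finite-dimensional toy model: for a Gibbs-type functional $A\mapsto\text{Tr}\big(e^{-\beta H}A\big)$, cyclicity fails in exactly the same twisted way, being restored only up to a conjugation by $e^{-\beta H}$. A minimal sketch for a two-level system (all numerical values are arbitrary test choices, and the diagonal Hamiltonian is our simplifying assumption):

```python
import math

# Two-level system H = diag(E0, E1); unnormalised Gibbs weight e^{-beta H}.
E = [0.0, 1.3]
beta = 0.7

def mul(X, Y):
    # product of 2x2 complex matrices given as nested lists
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def tr(X):
    return X[0][0] + X[1][1]

exp_mbH = [[math.exp(-beta * E[0]), 0.0], [0.0, math.exp(-beta * E[1])]]
Z = tr(exp_mbH)

def omega(X):
    # Gibbs expectation value <X>_beta = Tr(e^{-beta H} X)/Z
    return tr(mul(exp_mbH, X)) / Z

def twist(A):
    # Sigma_{-i beta}(A) = e^{beta H} A e^{-beta H}; entrywise for diagonal H
    return [[A[i][j] * math.exp(beta * (E[i] - E[j])) for j in range(2)] for i in range(2)]

A = [[0.2 + 0.1j, 1.0 - 0.3j], [0.4 + 0.5j, -0.7j]]
B = [[1.1 + 0j, 0.3 + 0.2j], [-0.2 + 0.9j, 0.5 + 0j]]

# twisted cyclicity: omega(B A) = omega(twist(A) B), the analogue of the
# twisted trace satisfied by the Lebesgue integral
lhs = omega(mul(B, A))
rhs = omega(mul(twist(A), B))
assert abs(lhs - rhs) < 1e-12
```

Here the twist $e^{\beta H}(\cdot)e^{-\beta H}$ plays the role of $\sigma\triangleright$, an analogy that is made precise below.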
Instead of being a trace, $\text{Tr}(a\star b)=\text{Tr}(b\star a)$, the Lebesgue integral defines a twisted trace, $\text{Tr}(a\star b)=\text{Tr}\big((\sigma\triangleright b)\star a\big)$, on the algebra of fields, the twist being given by $\sigma$.\smallskip Although this loss of cyclicity has been known for a long time in the physics literature, it has often been considered as a troublesome feature of $\kappa$-Poincar\'{e} invariant field theories; this has probably discouraged the pursuit of many studies of their properties at the quantum level. On the other hand, probably as an attempt to avoid this difficulty, a lot of investigations have been undertaken working with momenta instead of spacetime variables. But also in this case, it has been noted that, depending on the ordering chosen to define the deformed plane waves (in the sense of Chap. \ref{sec-products}), various candidates for a measure of integration (over the momenta) are possible. This has sometimes been regarded \cite{Amelino:2000b} as constituting an ambiguity for deriving an expression for the action functional. Recall that, in view of \eqref{group-elem2}, a choice of ordering merely reflects a choice of parametrisation of the group elements, \textit{i.e.} a redefinition of the momenta. It must be emphasised that this loss of cyclicity merely reflects the nonunimodularity of the Lie group underlying the construction of the C*-algebra of fields modelling $\kappa$-Minkowski. One can be convinced of this by comparing the expression of the twist, \textit{i.e.} $\sigma=e^{-3P_0/\kappa}$, with the expression of the modular function, \textit{i.e.} $\Delta_{\mathcal{G}_4}(k)=e^{3k^0/\kappa}$. In fact, the two objects are related via $\mathcal{F}(\sigma\triangleright f)(k)=\Delta_{\mathcal{G}_4}^{-1}(k)\mathcal{F}(f)(k)$. Thus, the two above issues concerning the measure of integration (in both spacetime and momentum space) are actually two sides of the same coin. \smallskip In the following, we show that eq.
\eqref{twistedtrace} can be interpreted as reflecting a Kubo-Martin-Schwinger (KMS) condition for the positive linear functional defined by \begin{equation}\label{zeta} \zeta(f):=\int d^4x f(x),\ f\in\mathcal{M}_\kappa, \end{equation} thus giving a positive interpretation of the issue discussed above. Observe that $\zeta$ and the Hilbert product \eqref{hilbert-def2} are related by $\langle f,g\rangle_\star=\zeta(f\star g^\ddagger)$. \subsubsection{KMS condition in quantum statistical mechanics.} The KMS condition was introduced a long time ago in the context of quantum statistical mechanics \cite{Kubo:1957a,Martin:1959,Haag:1967} as a tool to characterise equilibrium temperature states of quantum systems; see also, e.g., \cite{Bratteli:1981,Haag:2012}. It can be (schematically) illustrated as follows.\smallskip Let us consider an arbitrary quantum mechanical system with time independent Hamiltonian $H$. Let $A\in\mathcal{B}(\mathcal{H})$ be a (bounded) observable acting on the Hilbert space $\mathcal{H}$. In general, a (mixed) state is described by a density matrix, say $\rho$, such that the expectation value of $A$ in this state is given by \begin{equation}\label{statekms0} \langle A\rangle=\text{Tr}(\rho A),\ A\in\mathcal{B}(\mathcal{H}). \end{equation} If the system is in thermal equilibrium at finite temperature $T=\beta^{-1}$, a thermal state is described by $\rho=Z_\beta^{-1}e^{-\beta H}$ and eq. \eqref{statekms0} becomes (Gibbs formula) \begin{equation}\label{statekms1} \langle A\rangle_\beta=Z_\beta^{-1}\text{Tr}\big(e^{-\beta H} A\big),\ Z_\beta:=\text{Tr}\big(e^{-\beta H}\big).
\end{equation} Now, let us define the following objects \cite{Fulling:1987} \begin{subequations}\label{corrKMS} \begin{align} &G_{+}^\beta(t;A,B):=\langle \Sigma_t(A)B\rangle_\beta,\\ &G_{-}^\beta(t;A,B):=\langle B\Sigma_t(A)\rangle_\beta,\ A,B\in\mathcal{B}(\mathcal{H}), \end{align} \end{subequations} where $\Sigma_t(A)$ gives the time translate of $A$ in the Heisenberg picture, \textit{i.e.} \begin{equation}\label{timeA} \Sigma_t(A):=e^{itH}Ae^{-itH},\ t\in\mathbb{R}. \end{equation} Making use of the cyclicity of $\text{Tr}$ in \eqref{statekms1}, together with $[e^{itH},e^{-\beta H}]=0$, we compute \begin{align} G_{+}^\beta(t;A,B)&= Z_\beta^{-1}\text{Tr}\big(e^{-\beta H} e^{itH}Ae^{-itH}B\big)\\ &=Z_\beta^{-1}\text{Tr}\big(e^{-\beta H} Ae^{-itH}Be^{itH}\big)=G_{-}^\beta(-t;B,A).\nonumber \end{align} Equations \eqref{corrKMS} can formally be extended to the complex plane, $z\in\mathbb{C}$, via \begin{subequations}\label{corrKMS2} \begin{align} &G_{+}^\beta(z;A,B)=Z_\beta^{-1}\text{Tr}\big(e^{i(z+i\beta)H}Ae^{-izH}B\big),\label{corrKMS2a}\\ &G_{-}^\beta(z;A,B)=Z_\beta^{-1}\text{Tr}\big(B e^{izH} Ae^{-i(z-i\beta)H}\big).\label{corrKMS2b} \end{align} \end{subequations} To ensure that the exponentials involved in \eqref{corrKMS2a} (resp. \eqref{corrKMS2b}) decay, the imaginary part of $z=t+is$ has to satisfy the bounds $-\beta<s<0$ (resp. $0<s<\beta$). We conclude that $G^\beta_\pm$ define holomorphic functions in these respective strips, and we have, for any $z\in\mathbb{C}$ such that $0\leq \text{Im}(z)\leq\beta$, \begin{equation} G_{-}^\beta(z;A,B)=G_{+}^\beta(z-i\beta;A,B).
\end{equation} Written in terms of the expectation value $\langle\cdot\rangle_\beta$, we finally obtain the celebrated KMS condition at temperature $\beta^{-1}$ \begin{equation}\label{trueKMS} \langle B\Sigma_z(A)\rangle_\beta=\langle\Sigma_{z-i\beta}(A)B\rangle_\beta, \end{equation} which can be related to a periodicity property of the thermal (2-point) correlation function.\smallskip We can link this derivation to the case under study by formally rewriting eq. \eqref{statekms1} as \begin{align} \omega(A):=\langle A\rangle_\beta. \end{align} Hence, eq. \eqref{trueKMS} becomes \begin{equation} \omega\big(B\Sigma_z(A)\big)=\omega\big(\Sigma_{z-i\beta}(A)B\big), \end{equation} which for $z=0$ reads \begin{equation} \omega(BA)=\omega(\Sigma_{-i\beta}(A)B). \end{equation} Recall that, using $\zeta$, eq. \eqref{zeta}, the twisted trace property of the Lebesgue integral, eq. \eqref{twistedtrace}, becomes \begin{equation}\label{twistedtrace2} \zeta\big(f\star g\big)=\zeta\big((\sigma\triangleright g)\star f\big). \end{equation} Hence, identifying $\zeta$ with $\omega$, and $\sigma$ with $\Sigma_{-i\beta}$, we find that eq. \eqref{twistedtrace} looks like a KMS condition for $\zeta$. We now characterise this assertion in more detail. \subsubsection{KMS weight on \texorpdfstring{$\mathcal{M}_\kappa$}{k-Minkowski}.} To show that eq. \eqref{twistedtrace2} actually reflects the occurrence of a KMS condition, we can show that $\zeta$ defines a KMS weight on $\mathcal{M}_\kappa$ for the one-parameter group of *-automorphisms $\lbrace\sigma_t\rbrace_{t\in\mathbb{R}}$, $\sigma_t\in\text{Aut}(\mathcal{M}_\kappa)$, defined by \begin{subequations} \begin{equation}\label{defautomorphism} \sigma_t(f):=e^{\frac{3t}{\kappa}\partial_0}\triangleright f,\ t\in\mathbb{R},\ f\in\mathcal{M}_\kappa. \end{equation} Let us first characterise $\lbrace\sigma_t\rbrace_{t\in\mathbb{R}}$.
Standard computations show that \begin{equation}\label{modulargroup} \sigma_{t_1}\circ\sigma_{t_2}=\sigma_{t_1+t_2},\ \sigma_t^{-1}=\sigma_{-t},\ \forall t,t_1,t_2\in\mathbb{R}, \end{equation} where $\circ$ is the composition of functions, \textit{i.e.} $\sigma_{t_1}\circ\sigma_{t_2}(f)=\sigma_{t_1}\big(\sigma_{t_2}(f)\big)$, together with \begin{equation}\label{modular-sigma} \sigma_t (f\star g)=\sigma_t(f) \star \sigma_t(g),\ \sigma_t(f^\ddagger)=\sigma_t(f)^\ddagger,\ \forall t\in\mathbb{R},\ f,g\in\mathcal{M}_\kappa. \end{equation} \end{subequations} Thus, $\lbrace\sigma_t\rbrace_{t\in\mathbb{R}}$ defines a group of *-automorphisms of $\mathcal{M}_\kappa$.\smallskip In order to link together $\sigma_t$ and the twist $\sigma$ appearing in eq. \eqref{twistedtrace}, we have to extend $t\mapsto\sigma_t$ to the complex plane. We define \begin{subequations} \begin{equation}\label{sigmat-modul-z} \sigma_z(f):=e^{\frac{3z}{\kappa}\partial_0}\triangleright f,\ \forall z\in\mathbb{C},\ f\in\mathcal{M}_\kappa, \end{equation} such that \eqref{modulargroup} and \eqref{modular-sigma} extend respectively to \begin{align} &\sigma_{z_1}\sigma_{z_2}=\sigma_{z_1+z_2},\ \sigma^{-1}_z=\sigma_{-z},\ \forall z,z_1,z_2\in\mathbb{C}, \label{prop-modulargroup-z}\\ &\sigma_z(f\star g)=\sigma_z(f)\star\sigma_z(g),\ \forall z\in\mathbb{C},\label{morphalg-modul-z} \end{align} while $\sigma_z$ no longer defines a $^*$-automorphism. Instead, we have \begin{equation}\label{modulargroup-z} \sigma_z(f^\ddagger)=\sigma_{\bar{z}}(f)^\ddagger,\ \forall z\in\mathbb{C}. \end{equation} \end{subequations} In particular, the twist $\sigma$, eq. \eqref{twist-def}, is recovered for $z=i$, \textit{i.e.} \begin{equation}\label{b10} \sigma=\sigma_{z=i}, \end{equation} and one has $\sigma(f^\ddagger)=\sigma^{-1}(f)^\ddagger$. This type of automorphism is known as a regular automorphism and occurs in the framework of twisted spectral triples.
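As an elementary cross-check of the group law \eqref{modulargroup}, one may realise $\sigma_t$ through its explicit action as a time translation, $\sigma_t(f)(x_0)=f(x_0+\frac{3t}{\kappa})$ (which follows from expanding $e^{\frac{3t}{\kappa}\partial_0}$), and verify the composition rule symbolically. A minimal sketch in Python/SymPy; the sample function is an arbitrary placeholder:

```python
import sympy as sp

x0, t1, t2, kappa = sp.symbols('x0 t1 t2 kappa', real=True)

def sigma(t, f):
    # sigma_t = exp((3t/kappa) d/dx0) acts on smooth functions as a
    # time translation: f(x0) -> f(x0 + 3t/kappa).
    return f.subs(x0, x0 + 3*t/kappa)

f = sp.exp(sp.I*x0) + x0**2   # arbitrary sample function (placeholder)

# Group law: sigma_{t1} o sigma_{t2} = sigma_{t1+t2}, sigma_t^{-1} = sigma_{-t}.
assert sp.simplify(sigma(t1, sigma(t2, f)) - sigma(t1 + t2, f)) == 0
assert sp.simplify(sigma(-t1, sigma(t1, f)) - f) == 0
```

The same substitution rule, extended to complex values of the parameter, mimics the analytic continuation $\sigma_z$ introduced above.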
It has been introduced in \cite{Connes:2008} in conjunction with the assumption of the existence of a distinguished group of *-automorphisms of the algebra indexed by one real parameter, say $t$, \textit{i.e.} the modular group, such that the analytic extension $\sigma_{t=i}$ coincides precisely with the regular automorphism. Here, the modular group linked with the twisted trace is defined by $\lbrace\sigma_t\rbrace_{t\in\mathbb{R}}$ while the twist $\sigma=\sigma_{t=i}$ defines the related regular automorphism.\smallskip Recall that a KMS weight on a C*-algebra $\mathcal{A}$ for a modular group of $^*$-automorphisms $\{\sigma_t\}_{t\in\mathbb{R}}$ is defined \cite{Kustermans:1997} as a (densely defined lower semi-continuous) linear map $\zeta:\mathcal{A}_+\to\mathbb{R}^+$, where $\mathcal{A}_+$ is the set of positive elements of $\mathcal{A}$, such that $\lbrace\sigma_t\rbrace_{t\in\mathbb{R}}$ admits an analytic extension, still a (norm continuous) one-parameter group, $\lbrace\sigma_z\rbrace_{z\in\mathbb{C}}$, acting on $\mathcal{A}$ satisfying the following two conditions \begin{subequations}\label{prop-kmsweight} \begin{align} &\zeta\circ\sigma_z=\zeta,\label{prop-kmsweighta}\\ &\zeta(a^\ddagger \star a)=\zeta\big(\sigma_{\frac{i}{2}}(a)\star\sigma_{\frac{i}{2}}(a)^\ddagger\big),\ a\in\text{Dom}\big(\sigma_{\frac{i}{2}}\big).\label{prop-kmsweightb} \end{align} \end{subequations} The characterisation of the relevant C*-algebra has been discussed in Chap. \ref{sec-Minkowski}, see also Appendix \ref{sap-harmonic}. For our purpose, it will be sufficient to keep in mind that it involves $\mathcal{M}_\kappa$ as a dense *-subalgebra. For more mathematical details on KMS weights see, e.g., \cite{Kustermans:1997}. Note that the notion of KMS weight related to the present twisted trace has already been used in \cite{Matassa:2013,Matassa:2014} to construct a modular spectral triple for $\kappa$-Minkowski space.\smallskip On the one hand, the first property, eq.
\eqref{prop-kmsweighta}, is found to be satisfied by $\zeta$ as a mere consequence of its $\kappa$-Poincar\'{e} invariance (in the sense of \eqref{measure-invariance}). We compute \begin{align} \zeta\big(\sigma_z(f)\big)&=\int d^4x\ \sigma_z(f)(x)=\int d^4x \ (e^{\frac{3z}{\kappa}\partial_0}\triangleright f)(x)\\ &=\int d^4x \ (\mathcal{E}^{-3iz}\triangleright f)(x)=\epsilon(\mathcal{E})^{-3iz}\int d^4x f(x)=\zeta(f)\nonumber. \end{align} On the other hand, using eq. \eqref{morphalg-modul-z}, \eqref{modulargroup-z}, and \eqref{twistedtrace}, we find \begin{align}\label{proof-kmsweight} \zeta(\sigma_{\frac{i}{2}}(f)\star\sigma_{\frac{i}{2}}(f)^\ddagger)&=\int d^4x\ \sigma_{\frac{i}{2}}(f)\star\sigma_{-\frac{i}{2}}(f^\ddagger)=\int d^4x\ \sigma_{\frac{i}{2}}(f\star\sigma_{-i}(f^\ddagger))\\ &=\int d^4x\ f\star\sigma_{-i}(f^\ddagger)=\int d^4x\ \sigma(\sigma_{-i}(f^\ddagger))\star f =\zeta(f^\ddagger\star f).\nonumber \end{align} This shows that the two properties \eqref{prop-kmsweight} are satisfied by \eqref{zeta}. Hence, $\zeta$ defines a KMS weight on $\mathcal{M}_\kappa$. Now, Theorem 6.36 of \cite{Kustermans:1997} guarantees, for each pair of elements $a,b\in\mathcal{A}$, the existence of a bounded continuous function $f:\Sigma\to\mathbb{C}$, where $\Sigma$ is the strip defined by $\{z\in\mathbb{C},\ 0\le\textrm{Im}(z)\le 1\}$, such that one has \begin{equation}\label{KMS-abst} f(t)=\zeta(\sigma_t(a)\star b),\ \ f(t+i)=\zeta(b\star\sigma_t(a)), \end{equation} which is nothing but an abstract version of the KMS condition introduced in eq. \eqref{trueKMS}. Therefore, the requirement of $\kappa$-Poincar\'{e} invariance trades the cyclicity of the Lebesgue integral for a KMS condition.\smallskip Finally, note that $\sigma_t$, eq. \eqref{defautomorphism}, defines ``time translations" since we have $\sigma_t(\phi)(x_0,\vec{x})=\phi(x_0+\frac{3t}{\kappa},\vec{x})$.
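The abstract KMS condition \eqref{KMS-abst} can also be illustrated numerically in the elementary quantum-statistical setting of eq. \eqref{trueKMS}: for any Hermitian Hamiltonian, the Gibbs state satisfies $\omega(BA)=\omega(\Sigma_{-i\beta}(A)B)$. A minimal finite-dimensional sketch in Python (the matrices $H$, $A$, $B$ are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 4, 0.7

def random_matrix():
    return rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n))

# Random Hermitian Hamiltonian and two arbitrary "observables" (placeholders).
M = random_matrix()
H = (M + M.conj().T)/2
A, B = random_matrix(), random_matrix()

# e^{sH}, computed from the eigendecomposition of the Hermitian H.
w, U = np.linalg.eigh(H)
def expH(s):
    return (U*np.exp(s*w)) @ U.conj().T

# Gibbs state, eq. (statekms1): omega(X) = Tr(e^{-beta H} X)/Z_beta.
Z = np.trace(expH(-beta)).real
def omega(X):
    return np.trace(expH(-beta) @ X)/Z

# Imaginary-time translate Sigma_{-i beta}(A) = e^{beta H} A e^{-beta H}.
Sigma_A = expH(beta) @ A @ expH(-beta)

# KMS condition, eq. (trueKMS), evaluated at z = 0.
assert np.allclose(omega(B @ A), omega(Sigma_A @ B))
```

The identity holds exactly by cyclicity of the trace; the numerical check merely makes the mechanism behind \eqref{trueKMS} explicit.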
We can show that this evolution is transferred at the level of the operators stemming from the Weyl quantisation map $Q$, eq. \eqref{Weyl-Minkowski}. To see this, we introduce the Gelfand-Naimark-Segal (GNS) representation of $\mathcal{M}_\kappa$, $\pi_{GNS}:\mathcal{F}(\mathcal{S}_c)\to\mathcal{B}(\mathcal{H})$, defined as usual by $\pi_{GNS}(\phi)\cdot v=\phi\star v$ for any $v\in\mathcal{H}$. Then, we compute \begin{align}\label{compo2} \pi_{GNS}(\sigma_t\phi)\cdot\omega&=(\sigma_t\phi)\star\omega=\sigma_t(\phi\star(\sigma_t^{-1}\omega)) =(\sigma_t\circ\pi_{GNS}(\phi)\circ\sigma_t^{-1})\cdot\omega\\ &=\big((\Delta_T)^{it}\circ\pi_{GNS}(\phi)\circ(\Delta_T)^{-it}\big)\cdot\omega,\nonumber \end{align} for any $\omega\in\mathcal{H}$ and any $\phi\in\mathcal{M}_\kappa$, where $\Delta_T$ is the Tomita operator given by \begin{equation} \Delta_T:=e^{\frac{3P_0}{\kappa}}, \end{equation} which coincides with \eqref{function-modulaire-4d} and such that $\sigma_t=(\Delta_T)^{it}$. Equation \eqref{compo2} indicates that the modular group defined by $\lbrace\sigma_t\rbrace_{t\in\mathbb{R}}$ generates an evolution for the operators stemming from the Weyl quantisation map $Q$. \subsection{Derivation of admissible kinetic terms.}\label{sec-kinetic} We begin with the construction of the kinetic term $\mathcal{S}_{\kappa,\star}^\text{kin}[\phi,\phi^\ddagger]$ encoding the dynamics of two \textit{independent} massive \textit{free} $\mathbb{C}$-valued scalar fields $\phi$ and $\phi^\ddagger$, which we assume to be described by the same kinematics, although they may have different masses which should differ by terms of order (at least) $\kappa^{-1}$ in view of (CP) and eq. \eqref{action-limit}. \subsubsection{General structure.} Making use of the Hilbert product, eq. \eqref{hilbert-def2}, we find that admissible expressions for $\mathcal{S}_{\kappa,\star}^\text{kin}$ are given by terms of the form $\langle\phi_a,T_a\phi_a\rangle_\star$ where $a$ labels the nature of the field (i.e.
either $\phi$ or $\phi^\ddagger$) and $T_a:\mathcal{M}_\kappa\to\mathcal{M}_\kappa$, $T_a:=K+m_a$, denotes any selfadjoint kinetic operator $K:\mathcal{M}_\kappa\to\mathcal{M}_\kappa$, with dense domain in $\mathcal{M}_\kappa$, supplemented by a mass-like term $m_a$. The reality of the action functional follows as a mere consequence of the selfadjointness of $K$. Indeed, $\langle\phi_a,T_a\phi_a\rangle_\star=\langle T_a^\dag\phi_a,\phi_a\rangle_\star=\langle T_a\phi_a,\phi_a\rangle_\star=\overline{\langle\phi_a,T_a\phi_a\rangle}_\star$.\\ The general expression for the kinetic term we consider in the present dissertation is then given by \begin{equation}\label{action-kin} \mathcal{S}^\text{kin}_{\kappa,\star}[\phi,\phi^\ddagger]=\frac{1}{4}\langle\phi,T_1\phi\rangle_\star+\frac{1}{4}\langle\phi^\ddagger,T_2\phi^\ddagger\rangle_\star. \end{equation} Notice that we cannot turn off one of the two terms in eq. \eqref{action-kin} since the omitted term would eventually reappear at the quantum level. This might signal the occurrence of an internal symmetry for the action functional, namely under the exchange $\phi\leftrightarrow\phi^\ddagger$. More evidence pointing in this direction will be given when studying the one-loop quantum properties of the various NCFT considered in the next section.\smallskip We further assume the kinetic operator $K$ to be a differential operator which we identify with a function of the generators $\mathcal{E}$, $P_i$, of the translations' Hopf subalgebra $\mathfrak{T}_\kappa$.\footnote{Recall that the action of $P_\mu$ on the elements of $\mathcal{M}_\kappa$ is defined through the representation $P_\mu=-i\partial_\mu$. For explicit expressions see eq.
\eqref{module-action} in Appendix \ref{sap-poincare}.} Formally, we have $K(\partial)\to K(P)$, which leads to the following expression \begin{equation}\label{kin-kernel} \left(Kf\right)(x)=\int d^4y \left(\int \frac{d^4k}{(2\pi)^4}\ \tilde{K}(k)e^{ik\cdot(x-y)}\right) f(y), \end{equation} valid for any $f\in\mathcal{M}_\kappa\cap\text{Dom}(K)$. Hence, the selfadjointness of $K$ requires its symbol $\tilde{K}$ to be real,\footnote{Indeed, as a differential operator $K$ defines an integral transform $(Kf)(x):=\int d^4y\ K(x,y)f(y)$ whose kernel has to satisfy $K(x,y)=\overline{K(y,x)}$ to ensure the selfadjointness of $K$. This can be shown by developing both expressions $\langle\phi,K\phi\rangle_\star$ and $\langle K\phi,\phi\rangle_\star$. The reality of $\tilde{K}$ follows directly from \eqref{kin-kernel}.} while the compatibility condition, eq. \eqref{pairing-involution}, among the various involutions yields \begin{equation}\label{kin-involution} \left(K(P)\triangleright\phi_a\right)^\ddagger=S\big(K(P)\big)^\dag\triangleright\phi_a^\ddagger=K\big(S(P)\big)\triangleright\phi_a^\ddagger, \end{equation} where we have used the fact that the antipode $S$ defines an algebra (linear) anti-homomorphism together with $[P_\mu,P_\nu]=0$ and \eqref{dag-hopfoperat}. Now, straightforward computations show that \begin{equation}\label{hilbert-prop} \langle f^\ddagger,f^\ddagger\rangle_\star=\langle \sigma f,f\rangle_\star, \end{equation} which, combined with \eqref{kin-involution} in the second term of \eqref{action-kin}, yields \begin{equation} \langle\phi^\ddagger,T_2\phi^\ddagger\rangle_\star=\langle\sigma\left(T_2\phi^\ddagger\right)^\dag,\phi\rangle_\star=\langle\sigma S(T_2)\phi,\phi\rangle_\star=\langle\phi,\sigma S(T_2)\phi\rangle_\star. \end{equation} Hence, the kinematics of the $\phi^\ddagger$'s can actually be described in terms of the $\phi$'s by appropriately adjusting the kinetic operator $T_2\to\sigma S(T_2)$.
It follows that \begin{equation}\label{action-kin2} \mathcal{S}^\text{kin}_{\kappa,\star}[\phi,\phi^\ddagger]=\frac{1}{4}\langle\left(T_1+S(T_2)\sigma\right)\phi,\phi\rangle_\star. \end{equation} As we are going to see throughout this chapter, thanks to the integral representation of both the star product \eqref{star-4d} and the involution \eqref{invol-4d}, any $\kappa$-Poincar\'{e} invariant NCFT involving $\phi$, $\phi^\ddagger$, and the star product can be conveniently represented as an ordinary, albeit nonlocal, complex scalar field theory depending on $\phi$, $\bar{\phi}$, and the pointwise (commutative) product among functions. This will be formally achieved upon identifying $\mathcal{S}_{\kappa,\star}[\phi,\phi^\ddagger]$ with $\mathcal{S}_\kappa[\phi,\bar{\phi}]$. Although trivial, this identification will lead to great simplifications in the computation of the propagator as well as in the analysis of the quantum properties of the various NCFT under consideration as it will enable us to make use of standard techniques from path integral quantisation and perturbation theory, reducing the analysis to ordinary quantum field theory computations.\smallskip Combining the expression of the Hilbert product \eqref{hilbert-def2} with \eqref{hilbert-prop} in \eqref{action-kin2} yields \begin{subequations}\label{kinetic-map} \begin{equation} \mathcal{S}^\text{kin}_{\kappa,\star}[\phi,\phi^\ddagger]\to\mathcal{S}^\text{kin}_{\kappa}[\phi,\bar{\phi}]=\frac{1}{2}\int d^4x_1d^4x_2\ \bar{\phi}(x_1)\mathcal{K}(x_1,x_2)\phi(x_2), \end{equation} where $\mathcal{K}$ denotes the (nontrivial) kinetic operator for the complex scalar field theory characterised by $\mathcal{S}_{\kappa}[\phi,\bar{\phi}]$, illustrating the above discussion.\footnote{Note that $\mathcal{K}$ is symmetric with respect to the (canonical) Hilbert product on $L^2(\mathbb{R}^4)$.} It is defined by \begin{align} &\mathcal{K}(x_1,x_2):=\int\frac{d^4k}{(2\pi)^4}\ \tilde{\mathcal{K}}(k)e^{ik\cdot(x_1-x_2)},\\
&\tilde{\mathcal{K}}(k):=\frac{1}{2}(\tilde{K}(k)+m^2_1)+\frac{1}{2}(\tilde{K}\big(S(k)\big)+m^2_2)e^{-3k^0/\kappa}.\label{kinetic-mapb} \end{align} \end{subequations} From this expression, the Feynman propagator $\Delta_F$ associated with such a NCFT can easily be computed. This is achieved as usual by mere inversion of the kinetic operator $\mathcal{K}$, namely by solving, for any suitable test function $f$, \begin{equation} \int d^4x_2 d^4y\ \Delta_F(x_1,y)\mathcal{K}(y,x_2) f(x_2)=\int d^4 x_2\ \delta^{(\hspace{-1pt}4\hspace{-1pt})}(x_1-x_2)f(x_2). \end{equation} Standard computations yield \begin{equation}\label{gen-propagator} \Delta_F(x_1,x_2)=\int\frac{d^4k}{(2\pi)^4}\ \Delta_F(k)e^{ik\cdot(x_1-x_2)},\ \Delta_F(k):=1/\tilde{\mathcal{K}}(k). \end{equation} Assuming that $S(K)=K$, a condition that is fulfilled by the models considered in the present dissertation, the symbol of the above kinetic operator, eq. \eqref{kinetic-mapb}, takes the convenient form \begin{subequations}\label{kin-hyp} \begin{align} &\tilde{\mathcal{K}}(k)=\frac{1}{2}\left(1+e^{-3k^0/\kappa}\right)\left(\tilde{K}(k)+M^2\right),\label{kin-hypa}\\ &M^2(k^0;m_1,m_2):=\frac{m_1^2+m_2^2e^{-3k^0/\kappa}}{1+e^{-3k^0/\kappa}},\label{full-mass} \end{align} \end{subequations} where the masslike (energy dependent) term, $M^2(k^0)$, is bounded with (energy independent) bounds given by \begin{equation}\label{bounds-mass} \min(m_1^2,m_2^2)\leq M^2(k^0)\leq\max(m_1^2,m_2^2). \end{equation} This last result indicates that it is sufficient to consider the case of a constant $M=m(\kappa)\in\mathbb{R}$ to study the general quantum behaviour of such models of NCFT. Note that this condition is automatically satisfied if $m_1=m_2=m$, in which case eq. \eqref{full-mass} reduces to $M^2=m^2$. Under this assumption, the symbol of the propagator, eq. \eqref{gen-propagator}, becomes \begin{equation} \Delta_F(k):=\frac{2}{\left(1+e^{-3k^0/\kappa}\right)\left(\tilde{K}(k)+m^2\right)}.
\end{equation} Before proceeding to the presentation of the kinetic operators used to investigate the quantum properties of various models of NCFT, two remarks are in order. \begin{enumerate}[label={$\textit{(\roman*)}$}] \item{First, the kinetic operators considered in the present study will be assumed to be squares of Dirac operators, \textit{i.e.} $K=\mathcal{D}_\mu \mathcal{D}^\mu$. This choice is motivated by the early proposal \cite{Chamseddine:1997,Chamseddine:1996,Chamseddine:2007} that a Hilbert-Einstein-Yang-Mills action functional for describing fundamental physics might be provided by the spectral action associated with some spectral triple.\footnote{Recall that the spectral action is essentially a regularised heat kernel expansion of the Dirac operator defining the spectral triple.}$^{,}$\footnote{Note however that, as far as we know, no spectral triple (enjoying all of the required properties to define a spectral triple) \textit{\`{a} la} Connes for $\kappa$-Minkowski has been constructed so far.} The kinetic operator $K(P)$ being a function of $\mathcal{E}$ and $P_i$, so are the $\mathcal{D}_\mu$'s. From this follows the selfadjointness of the $\mathcal{D}_\mu$'s. Indeed, assuming \begin{equation} \mathcal{D}_\mu(P)=\sum_{\alpha_0,\cdots,\alpha_3}\lambda_{\alpha_0,\cdots,\alpha_3;\mu}\ \mathcal{E}^{\alpha_0}P_1^{\alpha_1}\cdots P_3^{\alpha_3},\ \ \lambda_{\alpha_0,\cdots,\alpha_3;\mu}\in\mathbb{R}, \end{equation} we easily find that, for any $f,g\in\mathcal{M}_\kappa$, \begin{equation} \langle f,\mathcal{D}_\mu g\rangle_\star=\sum_{\alpha_0,\cdots,\alpha_3}\lambda_{\alpha_0,\cdots,\alpha_3;\mu}\ \langle f,\mathcal{E}^{\alpha_0}P_1^{\alpha_1}\cdots P_3^{\alpha_3} g\rangle_\star=\langle \mathcal{D}_\mu f,g\rangle_\star, \end{equation} where we have used the selfadjointness of $\mathcal{E}$ and $P_i$ together with $[P_\mu,P_\nu]=0$ to show the last equality.
Finally, observe that from the selfadjointness of $\mathcal{D}_\mu$ follows the pleasant property \begin{equation} \langle f,K_\kappa g\rangle_\star=\langle f,\mathcal{D}^\mu\mathcal{D}_\mu g\rangle_\star=\langle\mathcal{D}^\mu f,\mathcal{D}_\mu g\rangle_\star,\ \ \forall f,g\in\mathcal{M}_\kappa. \end{equation} However, in view of eq. \eqref{derivtwist}, there is no reason for the $\mathcal{D}_\mu$'s to be derivations of the algebra $\mathcal{M}_\kappa$ since, in general, $\mathcal{D}_\mu(f\star g)\neq\mathcal{D}_\mu(f)\star g+f\star\mathcal{D}_\mu(g)$, $f,g\in\mathcal{M}_\kappa$.} \item{Next, we mention that a full family of kinetic operators, hence propagators, associated with $K$, which are still compatible with the desired properties for $\mathcal{S}^\text{kin}_{\kappa,\star}$, can be obtained upon substituting \begin{equation}\label{fields-family} \phi\to\phi_{\alpha}:=\mathcal{E}^{\alpha}\phi,\ \ \phi^\ddagger\to\phi^\ddagger_{\alpha}:=\mathcal{E}^{-\alpha}\phi^\ddagger,\ \ \alpha\in\mathbb{R}, \end{equation} in \eqref{action-kin}. This family of fields, labelled by $\alpha$, is formally obtained from the action of some power of the twist factor \eqref{twist-def} on $\phi$ and $\phi^\ddagger$, all of them admitting the same commutative limit.\footnote{Indeed, setting $\alpha=3\alpha'$ in \eqref{fields-family}, these transformations read $\phi_{\alpha}=\sigma^{\alpha'}\phi$ and $\phi^\ddagger_{\alpha}=\sigma^{-\alpha'}\phi^\ddagger$. }$^,$\footnote{Note that the respective powers of the twist factor in front of $\phi$ and $\phi^\ddagger$ are not independent. This is essential to ensure consistency of relations of the form $\langle\phi,\phi\rangle=\langle\sigma\phi^\ddagger,\phi^\ddagger\rangle$.} Now, let $\mathcal{O}:\mathcal{M}_\kappa\to\mathcal{M}_\kappa$ be a selfadjoint operator with dense domain in $\mathcal{M}_\kappa$ and let $f_\alpha:=\mathcal{E}^\alpha f$ for any $f\in\mathcal{M}_\kappa$.
Then, \begin{equation} \langle f_\alpha,\mathcal{O}f_\alpha\rangle_\star=\langle\mathcal{E}^\alpha f,\mathcal{O}\mathcal{E}^\alpha f\rangle_\star=\langle f,\mathcal{O}_\alpha f\rangle_\star, \end{equation} where we have used the selfadjointness of $\mathcal{E}$ to obtain the last equality and defined \begin{equation}\label{op-family} \mathcal{O}_\alpha:=\mathcal{E}^\alpha\mathcal{O}\mathcal{E}^\alpha,\ \forall\alpha\in\mathbb{R}. \end{equation} Upon applying this latter relation to \eqref{action-kin2}, mere adaptation of the derivation leading to \eqref{kinetic-mapb} yields \begin{equation}\label{kin-family} \tilde{\mathcal{K}}_\alpha(k):=e^{-2\alpha k^0/\kappa}\tilde{\mathcal{K}}(k),\ \alpha\in\mathbb{R}. \end{equation} Assuming $S(K)=K$, the previous factorisation \eqref{kin-hyp} and related discussion still apply, and the corresponding family of propagators is finally found to be given by \begin{equation}\label{family-propagator} \Delta_{F,\alpha}(k):=e^{2\alpha k^0/\kappa}\Delta_{F}(k),\ \alpha\in\mathbb{R}. \end{equation} } \end{enumerate} \subsubsection{Casimir kinetic operator.} The simplest (natural) example of kinetic operator we can think of to generalise the ordinary scalar field theory is that of the first Casimir operator $\mathcal{C}_\kappa$ of the $\kappa$-Poincar\'{e} algebra. The latter is given, in the Majid-Ruegg basis, by \begin{equation}\label{Casimir0} \mathcal{C}_\kappa(P):=4\kappa^2\sinh^2\left(\frac{P_0}{2\kappa}\right) + e^{P_0/\kappa}P_iP^i. \end{equation} For later convenience, it is useful to rewrite the Casimir operator as \begin{equation}\label{Casimir} \mathcal{C}_\kappa(P)=\mathcal{E}^{-1}\left(\mathcal{P}_0^2+P_iP^i\right),\ \mathcal{P}_0:=\kappa\left(1-\mathcal{E}\right). \end{equation} From these expressions, we readily infer that $S(\mathcal{C}_\kappa)=\mathcal{C}_\kappa$, while $\mathcal{C}_\kappa\to P_\mu P^\mu$ in the limit $\kappa\to\infty$.
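The rewriting of \eqref{Casimir0} into \eqref{Casimir}, as well as the commutative limit, can be checked symbolically. A minimal sketch in Python/SymPy (working at the level of symbols, with $k_2$ standing for $P_iP^i$):

```python
import sympy as sp

k0, kappa, k2 = sp.symbols('k0 kappa k2', positive=True)

E  = sp.exp(-k0/kappa)       # symbol of the operator E = e^{-P_0/kappa}
P0 = kappa*(1 - E)           # symbol of the deformed energy P_0 (calligraphic)

# Casimir in the Majid-Ruegg basis, eq. (Casimir0) ...
C1 = 4*kappa**2*sp.sinh(k0/(2*kappa))**2 + sp.exp(k0/kappa)*k2
# ... and its rewriting, eq. (Casimir):
C2 = (P0**2 + k2)/E

assert sp.simplify((C1 - C2).rewrite(sp.exp)) == 0

# Commutative limit: C_kappa -> k0^2 + k2 as kappa -> oo, consistent with
# C_kappa -> P_mu P^mu (with the conventions of the symbols used here).
assert sp.limit(C2, kappa, sp.oo) == k0**2 + k2
```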
Actually, any polynomial in $\mathcal{C}_\kappa$ satisfies these properties and could, in principle, be used as kinetic operator. Observe that, in view of the expression of the first Casimir of the (ordinary) Poincar\'{e} algebra, \textit{i.e.} $\mathcal{C}(P):=P_\mu P^\mu$, it seems natural to interpret the quantity $\mathcal{P}_0$ appearing in eq. \eqref{Casimir}, which reduces to $P_0$ in the low energy limit, as the quantity replacing the standard (undeformed) energy ${P}_0$ in the context of $\kappa$-deformed theories involving deformed dispersion relations. This is supported by the role already played by $\mathcal{E}$ in the description of the $\kappa$-deformed translations' Hopf subalgebra encoding some of the symmetries of $\kappa$-Minkowski. Finally, $\mathcal{C}_\kappa$ can be put into the form $\mathcal{C}_\kappa(P)=\mathcal{D}_0^2+\mathcal{D}_i\mathcal{D}^i$, where the selfadjoint operators $\mathcal{D}_\mu$ are defined by \begin{equation} \mathcal{D}_0:=\kappa \mathcal{E}^{-1/2}(1-\mathcal{E}),\ \mathcal{D}_i:= \mathcal{E}^{-1/2}P_i. \end{equation} Identifying $\tilde{K}(k)$ with $\mathcal{C}_\kappa(k)$ in \eqref{family-propagator} leads to the following expression for the Feynman propagator associated with the Casimir kinetic operator \begin{subequations}\label{c-propagator} \begin{align} &\Delta^c_{F;\alpha}(k)=\frac{e^{(2\alpha-1)k^0/\kappa}}{\left(1+e^{-3k^0/\kappa}\right)}\frac{2}{\|\vec{k}\|^2+\kappa^2\mu_c^2(k^0)},\\ &\mu_c^2(k^0;m):=\left(1-e^{-k^0/\kappa}\right)^2+\left(m/\kappa\right)^2e^{-k^0/\kappa}. \end{align} \end{subequations} Let us investigate in more detail the properties of the above (positive) propagator.\\ On the one hand, its decay properties can be studied by taking the large momentum limit in eq. \eqref{c-propagator}.
Namely, keeping the energy $k^0$ fixed, we find \begin{subequations}\label{cuv-propagator} \begin{equation} \lim_{\|\vec{k}\|\to\infty}\Delta^c_{F;\alpha}(k)=\lim_{\|\vec{k}\|\to\infty}\frac{1}{\|\vec{k}\|^2}=0, \end{equation} while, keeping $\|\vec{k}\|$ fixed, we find \begin{align} &\lim_{k^0\to+\infty}\Delta^c_{F;\alpha}(k)=\|\vec{k}\|^{-2}\lim_{k^0\to+\infty}e^{(2\alpha-1)k^0/\kappa},\label{casimir1c}\\ &\lim_{k^0\to-\infty}\Delta^c_{F;\alpha}(k)=\lim_{k^0\to-\infty}e^{(4+2\alpha)k^0/\kappa}.\label{casimir2c} \end{align} \end{subequations} We can infer from the above results that the propagator vanishes at large (infinite) momenta if, and only if, \begin{equation} -2<\alpha\leq\frac{1}{2}, \end{equation} hence restricting the set of admissible transformations \eqref{fields-family}, and therefore of kinetic operators \eqref{kin-family}. On the other hand, the massless case ($m=0$) is singular in the infrared, as is its commutative counterpart. This is apparent from \begin{equation} \Delta^c_F(k)=\frac{e^{-k^0/\kappa}}{1+e^{-3k^0/\kappa}}\ \frac{2}{\mathcal{P}_0^2(k^0)+\|\vec{k}\|^2}, \end{equation} which diverges when $\mathcal{P}_0$ and $\|\vec{k}\|$ are taken simultaneously to 0. Since $\mathcal{P}_0:\mathbb{R}\to]-\infty,\kappa[$ is in one-to-one correspondence with $k^0$, we conclude that the assertion ``infrared singular'' can be understood in its ordinary sense.
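The window $-2<\alpha\leq\frac{1}{2}$ can be probed directly by evaluating the limits \eqref{cuv-propagator} symbolically. A minimal sketch in Python/SymPy (with $\kappa=1$ and illustrative values $m/\kappa=1/2$, $\|\vec{k}\|^2=4$):

```python
import sympy as sp

u, a = sp.symbols('u alpha', real=True)   # u = k0/kappa (kappa = 1)
K2 = 4                                    # illustrative fixed ||k||^2
m  = sp.Rational(1, 2)                    # illustrative mass m/kappa

# Casimir propagator, eq. (c-propagator):
mu2   = (1 - sp.exp(-u))**2 + m**2*sp.exp(-u)
Delta = sp.exp((2*a - 1)*u)/(1 + sp.exp(-3*u)) * 2/(K2 + mu2)

# At the upper boundary alpha = 1/2 the propagator stays bounded as k0 -> +oo ...
assert sp.limit(Delta.subs(a, sp.Rational(1, 2)), u, sp.oo) == sp.Rational(2, 5)
# ... while just above the window it diverges:
assert sp.limit(Delta.subs(a, 1), u, sp.oo) == sp.oo

# Inside the window the propagator vanishes as k0 -> -oo ...
assert sp.limit(Delta.subs(a, 0), u, -sp.oo) == 0
# ... whereas at the excluded lower boundary alpha = -2 it tends to a constant:
assert sp.limit(Delta.subs(a, -2), u, -sp.oo) == 2
```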
\subsubsection{Equivariant kinetic operator.} A second natural choice for the kinetic operator is provided by the square of the $\mathcal{U}_\kappa(\text{iso}(4))$-equivariant Dirac operator appearing in the construction of an equivariant spectral triple aiming to encode the geometry of $\kappa$-Minkowski \cite{dAndrea:2006}.\footnote{Note however that, as mentioned by the author of \cite{dAndrea:2006}, this Dirac operator does not satisfy the axioms for defining a spectral triple.} It is defined by \begin{equation} \mathcal{D}^\text{eq}_0=\frac{\mathcal{E}^{-1}}{2\kappa}\big(\kappa^2(1-\mathcal{E}^2)-P_iP^i\big),\ \mathcal{D}^\text{eq}_i=\mathcal{E}^{-1}P_i. \end{equation} A useful factorisation of the equivariant kinetic operator $K^\text{eq}$, when supplemented by a masslike term $m$, is given, assuming $m^2\leq\kappa^2$, by \begin{subequations} \begin{align} &\tilde{K}^\text{eq}(k)+m^2 = \frac{e^{2k^0/\kappa}}{4\kappa^2}\left(\|\vec{k}\|^2+\kappa^2\mu^2_{+}(k^0)\right)\left(\|\vec{k}\|^2+\kappa^2\mu^2_{-}(k^0)\right),\\ &\mu^2_{\pm}(k^0;m):=1+ e^{-2k^0/\kappa}\pm 2e^{-k^0/\kappa} \sqrt{1-\left(m/\kappa\right)^2}.
\end{align} \end{subequations} This leads to the following expression for the Feynman propagator \begin{equation}\label{eq-propagator} \Delta_{F;\alpha}^\text{eq}(k)=\frac{e^{2(\alpha-1)k^0/\kappa}}{1+e^{-3k^0/\kappa}}\ \frac{8\kappa^2}{(\|\vec{k}\|^2+\kappa^2\mu^2_{+})(\|\vec{k}\|^2+\kappa^2\mu^2_{-})}, \end{equation} whose decay properties are given by \begin{subequations}\label{eq-decay} \begin{align} &\lim_{\|\vec{k}\|\to\infty}\Delta^\text{eq}_{F;\alpha}(k)=\lim_{\|\vec{k}\|\to\infty}\frac{1}{\|\vec{k}\|^4}=0,\\ &\lim_{k^0\to+\infty}\Delta^\text{eq}_{F;\alpha}(k)=\|\vec{k}\|^{-4}\lim_{k^0\to+\infty}e^{2(\alpha-1)k^0/\kappa},\\ &\lim_{k^0\to-\infty}\Delta^\text{eq}_{F;\alpha}(k)=\lim_{k^0\to-\infty}e^{(5+2\alpha)k^0/\kappa}, \end{align} \end{subequations} indicating that, this time, the propagator vanishes at large momenta if, and only if, \begin{equation} -\frac{5}{2}<\alpha\leq1. \end{equation} Again, the propagator is IR singular, as is apparent from eq. \eqref{eq-Casimir} below.\smallskip It is interesting to notice that the equivariant kinetic operator is related to the Casimir operator through the relation \begin{equation}\label{eq-Casimir} K^\text{eq}=\mathcal{C}_\kappa\left(1+\frac{1}{4\kappa^2}\mathcal{C}_\kappa\right). \end{equation} It follows from this observation that the ``equivariant propagator'', eq. \eqref{eq-propagator}, can actually be regarded as a Pauli-Villars regularised version of the ``Casimir propagator'', eq. \eqref{c-propagator}. This is obvious when considering the massless theory, $m=0$, namely \begin{subequations}\label{c-Villars} \begin{align}\label{c-Pauli} \Delta_F^\text{eq}(k)=\frac{2}{1+e^{-3k^0/\kappa}}\left(\frac{1}{\mathcal{C}_\kappa(k)}-\frac{1}{\mathcal{C}_\kappa(k)+4\kappa^2}\right), \end{align} $2\kappa$ playing the role of a Pauli-Villars cutoff. A similar interpretation is still possible in the massive case.
To do so, it is first convenient to use a partial fraction decomposition to write the propagator \eqref{eq-propagator} as \begin{align} &\Delta_F^\text{eq}(k)=\frac{2\kappa}{\sqrt{\kappa^2-m^2}\left(1+e^{-3k^0/\kappa}\right)}\left(\frac{1}{\mathcal{C}_\kappa+m_{-}^2}-\frac{1}{\mathcal{C}_\kappa+m_{+}^2}\right),\\ &m_\pm(m,\kappa):=2\kappa^2\left(1\pm\sqrt{1-(m/\kappa)^2}\right), \end{align} then to identify $m_{-}$ with the bare mass for the NCFT. In that case, the initial parameter $m$ becomes $m=m_{-}\sqrt{1-(m_{-}/2\kappa)^2}$ and $m_{+}=4\kappa^2-m_{-}$, such that \begin{equation} \Delta_F^\text{eq}(k)=\frac{4\kappa^2}{(2\kappa^2-m_{-}^2)\left(\lambda_1+\lambda_2e^{-3k^0/\kappa}\right)}\left(\frac{1}{\mathcal{C}_\kappa+m_{-}^2}-\frac{1}{\mathcal{C}_\kappa+4\kappa^2-m_{-}^2}\right), \end{equation} \end{subequations} where the cutoff is now $\sqrt{4\kappa^2-m_{-}^2}$ and \eqref{c-Pauli} is recovered in the limit $m_{-}\to0$. \subsubsection{Modular kinetic operator.} A third example of kinetic operator is provided by the square of the Dirac operator introduced in \cite{Matassa:2014,Matassa:2013} in an attempt to construct a modular spectral triple for $\kappa$-Minkowski. This Dirac operator is characterised by \begin{equation} \mathcal{D}^m_0:=\kappa(1-\mathcal{E}),\ \mathcal{D}^m_i:= P_i, \end{equation} such that $K^m(P)=\mathcal{E}\mathcal{C}_\kappa(P)$. Hence, unlike both the Casimir and equivariant kinetic operators, the modular kinetic operator does not satisfy the relation $S(K)=K$. Instead, \begin{equation}\label{lost} S(K^m)=S(\mathcal{C}_\kappa)S(\mathcal{E})=\mathcal{E}^{-1}\mathcal{C}_\kappa=\mathcal{E}^{-2}K^m(P), \end{equation} and the factorisation \eqref{kin-hyp} no longer holds. Going back to eq.
\eqref{kinetic-map}, we find \begin{subequations} \begin{equation} \tilde{\mathcal{K}}^m(k)=\frac{1}{2}\left(1+e^{-k^0/\kappa}\right)\left(\tilde{K}^m(k)+M^2(k^0)\right), \end{equation} where $M^2$ is now given, assuming $m_1=m_2=m$, by\footnote{We made use of the decomposition $1+y^3=(1+y)(1-y+y^2)$.} \begin{equation}\label{massmod} M^2(y;m):=(1-y+y^2) m^2. \end{equation} \end{subequations} Simple inspection of eq. \eqref{massmod} shows that no bounds such as those previously found in eq. \eqref{bounds-mass} can be used in the present case to simplify the computations. In particular, we can no longer treat $M$ as independent of the energy.\footnote{This conclusion would have been the same in the more general case $m_1\neq m_2$.} The corresponding family of propagators is given by \begin{subequations} \begin{align} &\Delta^m_{F;\alpha}(k)=\frac{e^{2\alpha k^0/\kappa}}{1+e^{-k^0/\kappa}}\ \frac{2}{\|\vec{k}\|^2+(\kappa^2+m^2)\mu_\text{m}^2(k^0)},\\ &\mu_\text{m}^2(k^0;m):=1-\left(1+\frac{\kappa^2}{\kappa^2+m^2}\right)e^{-k^0/\kappa}+e^{-2k^0/\kappa}, \end{align} \end{subequations} whose decay properties are given by \begin{subequations} \begin{align} &\lim_{\|\vec{k}\|\to\infty}\Delta^m_{F;\alpha}(k)=\lim_{\|\vec{k}\|\to\infty}\frac{1}{\|\vec{k}\|^2}=0,\\ &\lim_{k^0\to+\infty}\Delta^m_{F;\alpha}(k)=\|\vec{k}\|^{-2}\lim_{k^0\to+\infty}e^{2\alpha k^0/\kappa},\label{mod1c}\\ &\lim_{k^0\to-\infty}\Delta^m_{F;\alpha}(k)=\lim_{k^0\to-\infty}e^{(3+2\alpha)k^0/\kappa},\label{mod2c} \end{align} \end{subequations} indicating that this propagator vanishes at large momentum if, and only if, \begin{equation} -\frac{3}{2}<\alpha\leq0. \end{equation} Let us compare the decay properties of the modular propagator with those of the Casimir propagator. We first observe that they have the same dependence on the spacelike variable, $\vec{k}$.
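As a quick numerical aside, the decay exponents in eq. \eqref{mod1c} and \eqref{mod2c} can be cross-checked by evaluating the modular propagator at fixed spacelike momentum. The sketch below uses arbitrary sample values for $\kappa$, $m$, $\alpha$ and $\|\vec{k}\|$ (all hypothetical, chosen only for illustration):

```python
import math

# Arbitrary sample values (hypothetical): deformation parameter, mass,
# family parameter alpha in (-3/2, 0], and fixed spacelike momentum norm.
kappa, m, alpha, kn = 1.0, 0.2, -0.5, 1.3

def Delta_mod(k0):
    """Modular propagator at fixed ||k||, overall constants included."""
    y = math.exp(-k0 / kappa)
    mu2 = 1.0 - (1.0 + kappa**2 / (kappa**2 + m**2)) * y + y**2
    return math.exp(2 * alpha * k0 / kappa) / (1.0 + y) \
        * 2.0 / (kn**2 + (kappa**2 + m**2) * mu2)

# k0 -> +infinity: Delta ~ const * e^{2 alpha k0 / kappa}
c_plus = [Delta_mod(k0) / math.exp(2 * alpha * k0 / kappa) for k0 in (25.0, 35.0)]
# k0 -> -infinity: Delta ~ const * e^{(3 + 2 alpha) k0 / kappa}
c_minus = [Delta_mod(k0) / math.exp((3 + 2 * alpha) * k0 / kappa) for k0 in (-25.0, -35.0)]
```

Both ratios stabilise to constants, consistent with the exponents that lead to the bound $-\frac{3}{2}<\alpha\leq0$.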
Comparing their respective behaviour at large energy (large $k^0$), namely \eqref{casimir1c} with \eqref{mod1c}, and \eqref{casimir2c} with \eqref{mod2c}, we find \begin{subequations} \begin{align} &\lim_{k^0\to+\infty}\frac{\Delta^m_{F;\alpha}}{\Delta^c_{F;\alpha}}(k)=\lim_{k^0\to+\infty}e^{k^0/\kappa},\\ &\lim_{k^0\to-\infty}\frac{\Delta^m_{F;\alpha}}{\Delta^c_{F;\alpha}}(k)=\lim_{k^0\to-\infty}e^{-k^0/\kappa}. \end{align} \end{subequations} From the above results, we expect the UV behaviour of the NCFT built from the modular kinetic operator to be worse (\textit{i.e.} more divergent) than the UV behaviour of the NCFT built from the Casimir kinetic operator. We have checked that this is indeed the case. This propagator also has a pole at $(0,\vec{0})$ in the massless case. For these reasons, and in order not to overload the presentation, we will not study this model further. \subsection{Derivation of admissible interaction potentials.}\label{sec-interaction} We now turn to the analysis of the interaction term $S_{\kappa,\star}^\text{int}$. As already mentioned at the beginning of this section, a sufficient condition to ensure the reality of the action functional $S_{\kappa,\star}$ consists in considering interaction terms of the form $\langle f,f\rangle_\star$ where $f\in\mathcal{M}_\kappa$ is any (star) polynomial in the fields $\phi$ and $\phi^\ddagger$.
It follows that, in contrast with the commutative $\vert\phi\vert^4$ model for which there exists only one (local) interaction, one can easily exhibit for its ($\kappa$-Poincar\'e invariant) noncommutative counterpart four different (nonlocal) interactions which result from the noncommutativity of the star product together with the noncyclicity of the integral involved in $S_{\kappa,\star}$.\smallskip According to the terminology of NCFT, we distinguish two \textit{orientable interactions} \begin{subequations}\label{interactions} \begin{align}\label{o-int} S_{\kappa,\star}^{\text{int},\textit{o}}[\phi,\phi^\ddagger]:=&\ \frac{g_1}{4!}\langle\phi^\ddagger\star\phi,\phi^\ddagger\star\phi\rangle_\star+\frac{g_2}{4!}\langle\phi\star\phi^\ddagger,\phi\star\phi^\ddagger\rangle_\star,\ \ g_1,g_2\in\mathbb{R}, \end{align} where the $\phi$'s alternate with the $\phi^\ddagger$'s, and two \textit{nonorientable interactions}\footnote{For more technical details on the diagrammatics associated with orientable/nonorientable interactions see, e.g., \cite{Vignes:2006,Vignes:2007} for NCFT on Moyal space and \cite{moi:2016} for the $\mathbb{R}^3_\theta$ case and references therein.} \begin{align}\label{no-int} S_{\kappa,\star}^{\text{int},\textit{no}}[\phi,\phi^\ddagger]:=&\ \frac{g_3}{4!}\langle\phi\star\phi,\phi\star\phi\rangle_\star+\frac{g_4}{4!}\langle\phi^\ddagger\star\phi^\ddagger,\phi^\ddagger\star\phi^\ddagger\rangle_\star,\ \ g_3,g_4\in\mathbb{R}.
\end{align} \end{subequations} Now, identifying $S_{\kappa,\star}^{\text{int},\textit{o}}[\phi,\phi^\ddagger]\to S_{\kappa}^{\text{int},\textit{o}}[\phi,\bar{\phi}]$, the ``orientable model" reduces to \begin{subequations}\label{o-int2} \begin{equation} S_{\kappa}^{\text{int},\textit{o}}(\bar{\phi},\phi)=\frac{1}{4!}\int \prod_{\ell=1}^4\left[\frac{d^4k_\ell}{(2\pi)^4}\right] \bar{\phi}(k_1)\phi(k_2)\bar{\phi}(k_3)\phi(k_4)\mathcal{V}_\textit{o}(k_1,k_2,k_3,k_4), \end{equation} where the nonlocal 4-vertex function $\mathcal{V}_\textit{o}$ is defined by \begin{align} &\mathcal{V}_\textit{o}(k_1,k_2,k_3,k_4):=(2\pi)^4\left(g_1+g_2e^{3k_1^0/\kappa}\right)V_\textit{o}(k_1,k_2,k_3,k_4),\label{o-vertex}\\ &V_\textit{o}(k_1,k_2,k_3,k_4):=\delta(k_4^0-k_3^0+k_2^0-k_1^0)\delta^{(\hspace{-1pt}3\hspace{-1pt})}((\vec{k}_4-\vec{k}_3)e^{k_4^0/\kappa}+(\vec{k}_2-\vec{k}_1)e^{k_1^0/\kappa}).\label{o-delta} \end{align} \end{subequations} In the same way, we find for the ``nonorientable model" \begin{subequations}\label{no-int2} \begin{align} &S_{\kappa}^{\text{int},\textit{no}}(\bar{\phi},\phi)=\frac{1}{4!}\int \prod_{\ell=1}^4\left[\frac{d^4k_\ell}{(2\pi)^4}\right] \bar{\phi}(k_1)\phi(k_2)\bar{\phi}(k_3)\phi(k_4)\mathcal{V}_\textit{no}(k_1,k_2,k_3,k_4),\\ &\mathcal{V}_\textit{no}(k_1,k_2,k_3,k_4):=(2\pi)^4\left(g_3+g_4e^{-3(k_1^0+k_3^0)/\kappa}\right)V_\textit{no}(k_1,k_2,k_3,k_4),\label{no-vertex}\\ &V_\textit{no}(k_1,k_2,k_3,k_4):=\delta(k_4^0-k_3^0+k_2^0-k_1^0)\delta^{(\hspace{-1pt}3\hspace{-1pt})}(\vec{k}_4-\vec{k}_3+e^{-k_4^0/\kappa}\vec{k}_2-e^{-k_3^0/\kappa}\vec{k}_1).\label{no-delta} \end{align} \end{subequations} We are now in a position to compute radiative corrections to both the 2-point and 4-point functions for all of the various models presented above. Before proceeding to the analysis, let us comment on the admissible expressions for the interaction term.
\begin{enumerate}[label={$\textit{(\roman*)}$}] \item{First, taking the limit $\kappa\to\infty$ in \eqref{interactions}, we recover the expected commutative limit, \textit{i.e.} $(\bar{g}/4!)\int d^4x\vert\phi(x)\vert^4$, provided the coupling constants $g_i$ differ from their commutative counterpart $\bar{g}$ by terms of order at least $\kappa^{-1}$;} \item{Next, eq. \eqref{o-delta} and \eqref{no-delta} exhibit the energy-momentum conservation laws for each of these theories. As expected, the conservation law for the energy (timelike momenta) sector is the standard one, while the 3-momentum conservation law is nonlinear. This merely reflects the semidirect product structure of the group underlying the noncommutative C*-algebra of fields modelling $\kappa$-Minkowski, as well as the deformed Hopf algebraic (coproduct) structure of the $\kappa$-Poincar\'{e} algebra underlying its (quantum) symmetries. Note that this has sometimes been geometrically interpreted (for instance in the context of relative locality \cite{Amelino:2011,Gubitosi:2013,Amelino:2013}) as reflecting the curvature of the energy-momentum space at very high (\textit{i.e.} of order $\kappa$) energy;} \item{Finally, as in $\S$\ref{sec-kinetic}, full families of orientable and nonorientable interactions can be obtained upon performing the substitution \eqref{fields-family} in \eqref{interactions}.
The family of 4-vertex functions corresponding to this new set of interactions is then obtained by substituting \begin{subequations} \begin{align} &\mathcal{V}_\textit{o}(k_1,k_2,k_3,k_4)&&\hspace{-3cm}\to e^{-2\alpha(k_1^0+k_3^0)/\kappa}\mathcal{V}_\textit{o}(k_1,k_2,k_3,k_4)\label{ov-gen},\\ &\mathcal{V}_\textit{no}(k_1,k_2,k_3,k_4)&&\hspace{-3cm}\to e^{-4\alpha(k_1^0+k_3^0)/\kappa}\mathcal{V}_\textit{no}(k_1,k_2,k_3,k_4),\label{nov-gen} \end{align} \end{subequations} in \eqref{o-vertex} and \eqref{no-vertex} respectively.} \end{enumerate} \section{One-loop 2-point functions.}\label{sec-2point} In the previous section, we have discussed the admissible expressions for a $\kappa$-Poincar\'{e} invariant action functional aiming to describe the dynamics of a self-interacting $\mathbb{C}$-valued scalar field $\phi$ on a $\kappa$-Minkowski background. We now turn to the study of the quantum properties of various models of NCFT built from the material presented there, each model being characterised by one of the kinetic operators given in $\S$\ref{sec-kinetic} together with a quartic interaction potential to be chosen among the interactions given in $\S$\ref{sec-interaction}.\\ In order to clarify the presentation, we treat the models with orientable interactions separately from those with nonorientable interactions, even though we will be able to gather the various results in the end. This is essentially because to each of these families of interactions correspond nonequivalent conservation laws between the 3-momenta, as is apparent from eq. \eqref{o-delta} and \eqref{no-delta}. The markedly different structures of the corresponding vertex functions actually give rise to very different quantum behaviours already at the level of the one-loop 2-point function. Indeed, the former family of interactions leads only to planar contributions to the one-loop 2-point function, while nonorientable interactions lead to nonplanar contributions as well.
We find that the planar contributions diverge in the UV for all of the models, while the nonplanar contributions diverge only at zero external momenta but are finite otherwise, likely indicating the occurrence of UV/IR mixing for some of the models.\smallskip To deal with the perturbative expansion, we follow the usual route taken in most of the studies of NCFT, which we briefly recall now. The essential point of this derivation lies in the fact that any $\kappa$-Poincar\'e invariant action functional $S_{\kappa,\star}[\phi,\phi^\ddagger]$ involving the star product can be represented as an ordinary, albeit nonlocal, action functional $S_{\kappa}[\phi,\bar{\phi}]$ involving the commutative pointwise product among functions, hence describing the dynamics of an ordinary (self-interacting) complex scalar field. This has already been discussed at length in $\S$\ref{sec-action} and we shall not dwell on it further here, except to recall that it results essentially from the existence of an integral expression for the star product involved in the construction of the action functional. Accordingly, the perturbative expansion related to the NCFT is nothing but the usual perturbative expansion for an ordinary complex scalar field theory and is obtained upon expanding (up to the desired order) the generating functional of connected correlation functions $W$. The latter is defined from the partition function \begin{equation} \mathcal{Z}[J,\bar{J}]:=\int d\bar{\phi}d\phi \ e^{-S_\kappa[\phi,\bar{\phi}]+\int d^4x \left( \bar{J}(x)\phi(x) + J(x)\bar{\phi}(x)\right)}, \end{equation} via the relation $W[J,\bar{J}]:=\ln\left(\mathcal{Z}[J,\bar{J}]\right)$.\smallskip Note that the functional measure appearing in the expression of $\mathcal{Z}$ is merely the ordinary functional measure for a complex scalar field theory, implementing formally the integration over the field configurations $\phi$ and $\bar{\phi}$.
The correlation functions built from $\phi$ and $\bar{\phi}$ are then generated by the repeated action of standard functional derivatives with respect to $\bar{J}$ and $J$, which satisfy the usual functional rules \begin{equation} \frac{\delta J_a(p)}{\delta J_b(q)}=\delta_{ab}\delta^{(\hspace{-1pt}4\hspace{-1pt})}(p-q), \end{equation} where $a$ and $b$ label the nature of the source, \textit{i.e.} either $J$ or $\bar{J}$.\\ As a mere consequence of the above discussion, it follows that there is no need to introduce a notion of noncommutative (star) functional derivative in the present approach.\footnote{For another approach dealing with ``star functional derivatives" in the context of $\kappa$-Minkowski see, e.g., \cite{Amelino:2002b}.}\smallskip To complete the derivation of the one-loop 2-point function, it remains to define the effective action $\Gamma$ as the Legendre transform of $W$ and to read the expressions for the various contributions to the one-loop 2-point function from the expansion of $\Gamma$. These last steps of the calculation are recalled in detail (together with the derivation of the one-loop 4-point function) in Appendix \ref{sap-perturbation}, to which we refer. The expression of the one-loop quadratic part of the effective action is finally found to be given by \begin{subequations} \begin{align} &\Gamma^{(2)}_1[\phi,\bar{\phi}]:=\frac{1}{2}\int\frac{d^4k_1}{(2\pi)^4}\frac{d^4k_2}{(2\pi)^4}\ \bar{\phi}(k_1)\phi(k_2) \Gamma^{(2)}_1(k_1,k_2),\\ &\Gamma^{(2)}_1(k_1,k_2)=\frac{1}{(2\pi)^4}\int\frac{d^4k_3}{(2\pi)^4}\ \Delta_F(k_3)\Big[\mathcal{V}_{3312}+\mathcal{V}_{1233}+\mathcal{V}_{1332}+\mathcal{V}_{3213}\Big],\label{gamma-2pts} \end{align} \end{subequations} where $\Delta_F$ and $\mathcal{V}$ denote respectively any of the propagators and vertex functions presented in $\S$\ref{sec-action}. Also, we have introduced the notation $\mathcal{V}_{abcd}:=\mathcal{V}(k_a,k_b,k_c,k_d)$.
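To illustrate how the contracted vertex functions entering eq. \eqref{gamma-2pts} are handled in practice, the following symbolic sketch (a check, not part of the derivation; one spacelike dimension suffices to exhibit the mechanism) verifies on the orientable vertex \eqref{o-delta} that the delta arguments of $\mathcal{V}_{3312}$ and $\mathcal{V}_{1233}$ are independent of the loop momentum $k_3$, while for $\mathcal{V}_{1332}$ the $k_3$-dependence cancels on the support of the timelike delta:

```python
import sympy as sp

# External momenta k1, k2 and loop momentum k3; each momentum is a pair
# (timelike component, spacelike component). kappa > 0 is the deformation parameter.
k10, k20, k30 = sp.symbols('k10 k20 k30', real=True)
K1, K2, K3 = sp.symbols('K1 K2 K3', real=True)
kappa = sp.symbols('kappa', positive=True)

def Vo_args(a, b, c, d):
    """Arguments of the two deltas in the orientable vertex V_o(k_a, k_b, k_c, k_d)."""
    t = d[0] - c[0] + b[0] - a[0]
    s = (d[1] - c[1]) * sp.exp(d[0] / kappa) + (b[1] - a[1]) * sp.exp(a[0] / kappa)
    return sp.simplify(t), sp.simplify(s)

k1, k2, k3 = (k10, K1), (k20, K2), (k30, K3)
t3312, s3312 = Vo_args(k3, k3, k1, k2)  # loop legs contracted in the first two slots
t1233, s1233 = Vo_args(k1, k2, k3, k3)  # loop legs contracted in the last two slots
t1332, s1332 = Vo_args(k1, k3, k3, k2)  # loop legs in the middle slots
# On the support of the timelike delta (k2^0 = k1^0) the loop momentum drops out:
s_on_shell = sp.simplify(s1332.subs(k20, k10))
```

In each case only a linear delta in the external momenta survives, up to twist factors.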
\subsection{Model with Casimir kinetic operator.}\label{sec-2pt-Casimir} We begin our study of the quantum properties of $\kappa$-Poincar\'{e} invariant NCFT by considering models which are characterised by the Casimir kinetic operator \eqref{c-propagator}. \subsubsection{Orientable model.} The various contributions to the one-loop 2-point function are easily obtained by combining \eqref{o-vertex} with \eqref{gamma-2pts}. Straightforward computations show that all of the Wick contracted vertex functions $\mathcal{V}_\textit{o}$ involved in \eqref{gamma-2pts} reduce to the ordinary (linear) delta of conservation between external momenta, \textit{i.e.} $\delta^{(\hspace{-1pt}4\hspace{-1pt})}(k_2-k_1)$, times some power of the twist factor (which may differ from one case to the other, however). After some trivial manipulations, we find that the one-loop quadratic part of the effective action reduces to \begin{subequations}\label{2-point} \begin{equation} \Gamma_1^{(2)}[\phi,\bar{\phi}]=\frac{1}{2}\int \frac{d^4k}{(2\pi)^4}\ \bar{\phi}(k)\left(\omega_1+\omega_2e^{-3k^0/\kappa}\right)\phi(k), \end{equation} thus indicating that the tree-level structure of the mass operator, \textit{i.e.} $m_1^2+m_2^2\sigma$,\footnote{In the following, we shall refer to the term proportional to the twist factor $\sigma$ as the ``twisted mass" or ``twisted component of the mass operator," while the other term will be called ``ordinary (component of the) mass (operator)."} is preserved by radiative corrections, at least at first order in $\hbar$.\smallskip The corrections are given by \begin{align} &\omega_1:=\int\frac{d^4k}{(2\pi)^4}\ \left(3g_2+g_1e^{-3k^0/\kappa}\right)e^{-2\alpha k^0/\kappa}\Delta_{F;\alpha}(k),\label{2-pointa}\\ &\omega_2:=\int\frac{d^4k}{(2\pi)^4}\ \left(3g_1+g_2e^{3k^0/\kappa}\right)e^{-2\alpha k^0/\kappa}\Delta_{F;\alpha}(k),\label{2-pointb} \end{align} \end{subequations} where, at this stage, $\Delta_{F;\alpha}$ denotes any of the propagators belonging to one of 
the family \eqref{c-propagator} or \eqref{eq-propagator}. In fact, $e^{-2\alpha k^0/\kappa}\Delta_{F;\alpha}(k)=\Delta_{F}(k)$, $\forall\alpha\in\mathbb{R}$. This shows that, when applied consistently in both the kinetic term and the interaction term, the transformations $\phi\to\phi_\alpha$ and $\phi^\ddagger\to\phi^\ddagger_\alpha$, eq. \eqref{fields-family}, do not affect the one-loop quantum corrections to the 2-point function. Therefore, in the following we shall restrict our attention to the case $\alpha=0$, namely to the propagator $\Delta_{F}$.\smallskip Going back to the model with Casimir kinetic operator, one finds that the two contributions, eq. \eqref{2-point}, admit the generic expression \begin{subequations}\label{2pt-co} \begin{align} &\omega_j=\frac{\kappa}{\pi}\int_0^\infty dy\ \Phi_j(y) J(y),\ j=1,2,\\ &\text{with}\ \ \ J(y):=\int_{\mathbb{R}^3}\frac{d^3\vec{k}}{(2\pi)^3}\ \frac{1}{\|\vec{k}\|^2+\kappa^2\mu_c^2(y)}.\label{2pt-spatial} \end{align} The functions $\Phi_j$, which can be read from \eqref{2-point}, are given by \begin{equation}\label{2pt-coy} \Phi_1(y):=\frac{3g_2-g_1}{1+y^3}+g_1,\ \ \Phi_2(y):=\frac{3g_1-g_2}{1+y^3}+\frac{g_2}{y^3}, \end{equation} \end{subequations} where we have used, for later computational convenience, the decompositions \begin{equation} \frac{y^3}{1+y^3}=1-\frac{1}{1+y^3},\ \ \frac{1}{y^3(1+y^3)}=\frac{1}{y^3}-\frac{1}{1+y^3}. \end{equation} Let us go back for a moment to eq. \eqref{full-mass}. Recall that this equation gives the expression of the (energy dependent) masslike term, $M^2(k^0;m_1,m_2)$, which was shown to be bounded from above and below by the masses $m_1$ and $m_2$. To make the presentation (slightly) more complete, let us assume for a moment that $m_1\neq m_2$. Because of our correspondence principle (CP), we know that the two masses, which both admit the same commutative limit, differ from their commutative counterpart, \textit{i.e.} $\bar{m}_0$ in eq.
\eqref{action-limit}, only by terms of order $\kappa^{-1}$. Therefore, we now assume that $\vert m_1^2 -m^2_2\vert=:\varepsilon^2\ll m_1^2,m^2_2$ together with $m_1^2\geq m_2^2$. It follows that \begin{equation} M^2(k^0;m_2)=m_2^2+\frac{\varepsilon^2}{1+y^3}. \end{equation} Under this assumption, eq. \eqref{c-propagator} now reads \begin{equation} \mu_c^2(y)=R_c(y)+\frac{(\varepsilon/\kappa)^2y}{1+y^3},\ \ R_c(y)=1-\left(\frac{2\kappa^2-m_2^2}{\kappa^2}\right) y + y^2.\label{decomposition-muc} \end{equation} We now return to the computation of the one-loop corrections to the 2-point function.\smallskip Observe that we have performed the following change of variables \begin{equation}\label{ychange-variable} k^0\mapsto y:=e^{-k^0/\kappa}, \end{equation} to obtain \eqref{2pt-co} from \eqref{2-point}. First, note that both the lower (0) and upper ($\infty$) bounds of integration in $\int_0^\infty dy$ correspond to the UV (\textit{i.e.} large $\vert k^0\vert$) regime. Then, observe that the $y$ variable is formally related to the quantity $\mathcal{P}_0$ replacing the (ordinary) energy at the level of the $\kappa$-deformed field theory, as is apparent from the expression of the first Casimir of the $\kappa$-Poincar\'{e} algebra; see eq. \eqref{Casimir}.\footnote{To be more precise, $y$ is related to $\mathcal{E}=e^{-P_0/\kappa}\in\mathfrak{T}_\kappa$, eq. \eqref{twist0}.} We have \begin{equation}\label{deformed-energy} \mathcal{P}_0(k^0)=\kappa(1-y), \end{equation} which obviously reduces to $k^0$ in the commutative ($\kappa\to\infty$) limit.\\ Hence, according to the discussion given in $\S$\ref{sec-kinetic}, the $y$-integrals will be regularised with respect to this quantity rather than $k^0$, in a sense explained below.\\ Let $\Lambda_0$ be a cutoff for $\mathcal{P}_0$ defined by $\vert\mathcal{P}_0\vert\leq\Lambda_0$. From eq.
\eqref{deformed-energy}, we easily infer the following bounds for the $y$ variable \begin{equation}\label{ybounds} \frac{\kappa}{\kappa+\Lambda_0} \leq y \leq \frac{\kappa+\Lambda_0}{\kappa}, \end{equation} which will be used to regularise the $y$-integral; see eq. \eqref{2pt-coreg}. Of course, assuming $\vert\mathcal{P}_0\vert\leq\Lambda_0$ automatically implies that the variable $k^0$ is bounded as well. Denoting by $M_\kappa(\Lambda_0)$ the cutoff for the Fourier parameter $k^0$, we find that the two regulators are related via \begin{equation} M_\kappa(\Lambda_0):=\kappa\ln\left(1+\frac{\Lambda_0}{\kappa}\right), \end{equation} with $M_\kappa(\Lambda_0)\to\Lambda_0$ as $\mathcal{P}_0(k^0)\to k^0$ in the limit $\kappa\to\infty$.\smallskip The analysis of the 2-point function can be more conveniently carried out upon using a Pauli-Villars regularisation to extract the singular behaviour of the 3-dimensional (spacelike) integral. This is formally achieved by introducing a cutoff $\Lambda$ (\textit{a priori} different from $\Lambda_0$), then substituting $J(y)\to J_\Lambda(y)$ in \eqref{2pt-co} with \begin{equation} J_{\Lambda}(y):=\int_{\mathbb{R}^3}\frac{d^3\vec{k}}{(2\pi)^3}\left(\frac{1}{\|\vec{k}\|^2+\kappa^2\mu_c^2(y)}-\frac{1}{\|\vec{k}\|^2+\Lambda^2}\right). \end{equation} This latter integral is easily computed using the two relations \begin{subequations}\label{formulassss} \begin{align} \frac{1}{A^aB^b}=\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)} & \int_0^1 du \ \frac{u^{a-1}(1-u)^{b-1}}{\left(uA+(1-u)B\right)^{a+b}},\ \ a,b>0, \\ \int \frac{d^np}{(2\pi)^n} \frac{1}{(p^2+M^2)^{m}}&=M^{n-2m}\frac{\Gamma(m-n/2)}{(4\pi)^{n/2} \Gamma(m)},\ \ m>n/2> 0, \end{align} \end{subequations} where $\Gamma(z)$ is the Euler gamma function.
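Both relations in eq. \eqref{formulassss} are standard, but they can be cross-checked numerically on a sample case. The sketch below (with arbitrary test values $A=2$, $B=5$, $M=0.7$, all hypothetical) verifies the Feynman parametrisation for $a=b=1$, and the second relation for $n=3$, $m=2$, where the right-hand side reduces to $1/(8\pi M)$:

```python
import math
from scipy.integrate import quad

A, B, M = 2.0, 5.0, 0.7  # arbitrary test values

# Feynman parametrisation with a = b = 1:
#   1/(A B) = int_0^1 du / (u A + (1 - u) B)^2
feyn, _ = quad(lambda u: 1.0 / (u * A + (1.0 - u) * B) ** 2, 0.0, 1.0)

# Second relation for n = 3, m = 2, reduced to a radial integral:
#   int d^3p/(2 pi)^3 (p^2 + M^2)^{-2} = (1/(2 pi^2)) int_0^inf p^2 dp / (p^2 + M^2)^2
rad, _ = quad(lambda p: p**2 / (p**2 + M**2) ** 2, 0.0, math.inf)
lhs = rad / (2.0 * math.pi**2)
rhs = M ** (3 - 4) * math.gamma(2 - 1.5) / ((4.0 * math.pi) ** 1.5 * math.gamma(2))
```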
Explicitly, we have \begin{align} J_{\Lambda}(y)&=\int_{\mathbb{R}^3}\frac{d^3\vec{k}}{(2\pi)^3}\frac{\Lambda^2-\kappa^2\mu_c^2(y)}{\big(\|\vec{k}\|^2+\kappa^2\mu_c^2(y)\big)\big(\|\vec{k}\|^2+\Lambda^2\big)}\\ &=\int_0^1du\int_{\mathbb{R}^3}\frac{d^3\vec{k}}{(2\pi)^3}\frac{\Lambda^2-\kappa^2\mu_c^2(y)}{\big(\|\vec{k}\|^2+\Lambda^2+u(\kappa^2\mu_c^2(y)-\Lambda^2)\big)^2}\nonumber\\ &=\frac{1}{8\pi}\int_0^1 du\ \frac{\Lambda^2-\kappa^2\mu_c^2(y)}{\sqrt{\Lambda^2+u(\kappa^2\mu_c^2(y)-\Lambda^2)}}=\frac{1}{4\pi}\big(\Lambda-\kappa\mu_c(y)\big),\nonumber \end{align} exhibiting a linear (UV) divergence in $\Lambda$. It remains to compute \begin{equation}\label{2pt-coreg} \omega_j(\Lambda,\Lambda_0)=\frac{\kappa}{4\pi^2}\int_{\Lambda_0} dy\ \Phi_j(y)\big(\Lambda-\kappa\mu_c(y)\big), \end{equation} where the $y$-integral is understood to be regularised as $\int_0^\infty\to\int_{\Lambda_0}$ by means of the bounds of integration given by \eqref{ybounds}. The integration over the $y$ variable consists of standard integrals that can be found in any handbook of mathematics or table of integrals.\footnote{See e.g. I. S. Gradshteyn and I. M. Ryzhik, \textit{Table of Integrals, Series, and Products} (Boston: Academic Press, 2007).} Nevertheless, for the sake of completeness, we present the full computation of all of these integrals below. We suggest that the reader familiar with such calculations, who may wish to skip the computational details, go directly to the final results and discussion starting just after eq. \eqref{go-to-result1}.\smallskip We begin with the computation of $\omega_1$.
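Before doing so, note that the closed form just obtained for $J_\Lambda$ is easily confirmed numerically. The sketch below (with hypothetical sample values for $M:=\kappa\mu_c(y)$ and $\Lambda$) compares the radially reduced Pauli-Villars subtracted integral with $(\Lambda-M)/(4\pi)$:

```python
import math
from scipy.integrate import quad

M, Lam = 0.8, 50.0  # hypothetical sample values for kappa*mu_c(y) and the cutoff

# Radial reduction of J_Lambda:
#   J_Lambda = (1/(2 pi^2)) int_0^inf k^2 [1/(k^2 + M^2) - 1/(k^2 + Lam^2)] dk
integrand = lambda k: k**2 * (1.0 / (k**2 + M**2) - 1.0 / (k**2 + Lam**2))
val, _ = quad(integrand, 0.0, math.inf, limit=200)
J_num = val / (2.0 * math.pi**2)
J_closed = (Lam - M) / (4.0 * math.pi)
```

The agreement also makes the linear divergence manifest: the subtracted integral grows linearly with the spacelike cutoff $\Lambda$.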
The computation of the first term in \eqref{2pt-coreg}, which involves the cutoff $\Lambda$, amounts to computing \begin{equation} \int_{\Lambda_0} dy\ \frac{1}{1+y^3}=\frac{2\pi}{3\sqrt{3}},\ \ \int_{\Lambda_0} dy=1+\frac{\Lambda_0}{\kappa}, \end{equation} while the second term involves integrals of the form \begin{equation}\label{integral1} \int_{\Lambda_0} dy\ \frac{\mu_c(y)}{1+y^3},\ \ \int_{\Lambda_0} dy\ \mu_c(y). \end{equation} Simple inspection of the decay properties of the integrand of the first integral in \eqref{integral1} shows that it behaves like a constant when $y\to0$ and like $y^{-2}$ when $y\to\infty$, indicating that the first integral is finite. In contrast, a similar analysis shows that the second integral diverges at most quadratically. The singularities are obtained by expanding $\mu_c$, eq. \eqref{decomposition-muc}, in powers of $\varepsilon$. This leads to \begin{equation} \int_{\Lambda_0} dy\ \mu_c(y)=\sum_{n=0}^\infty c_n \left(\frac{\varepsilon}{\kappa}\right)^{2n} \int_{\Lambda_0} dy\ \frac{y^n\sqrt{R_c(y)}}{(1+y^3)^nR_c^n(y)}, \end{equation} where the $c_n$ are the coefficients of the Taylor expansion (around 0) of $x\mapsto\sqrt{1+x}$. We easily find that the integrals involved in the series expansion converge for all $n\geq1$. The diverging part is then given by \begin{align} &\int dy\ \sqrt{R_c(y)}=\frac{\big(2\kappa^2y-(2\kappa^2-m_2^2)\big)\sqrt{R_c(y)}}{4\kappa^2}+\frac{\big(4\kappa^2-m_2^2\big)m_2^2}{8\kappa^4}\int \frac{dy}{\sqrt{R_c(y)}},\\ &\text{with}\ \ \ \int \frac{dy}{\sqrt{R_c(y)}}=\ln\left(2\sqrt{R_c(y)}+2y-\frac{2\kappa^2-m^2_2}{\kappa^2}\right),\nonumber \end{align} which leads to \begin{equation} \int_{\Lambda_0} dy\ \sqrt{R_c(y)}=\frac{\Lambda_0^2}{2\kappa^2}+\frac{m_2^2\Lambda_0}{2\kappa^3}+\frac{(4\kappa^2-m_2^2)m_2^2}{8\kappa^4}\ln\left(1+\frac{\Lambda_0}{\kappa}\right)+\lbrace\text{finite terms}\rbrace.
\end{equation} Putting all these results together, we finally find \begin{subequations}\label{result-co1} \begin{align} \omega_1(\Lambda,\Lambda_0)=&\ \frac{g_1}{4\pi^2}\left(\Lambda-\frac{\Lambda_0}{2}\right)\Lambda_0+\frac{6\pi g_2+(3\sqrt{3}-2\pi)g_1}{12\pi^2\sqrt{3}}\ \kappa\Lambda-\frac{m_2^2g_1}{8\pi^2\kappa}\ \Lambda_0-\\ &-\frac{(4\kappa^2-m_2^2)m_2^2g_1}{32\pi^2\kappa^2}\ln\left(1+\frac{\Lambda_0}{\kappa}\right)+F_1(\kappa,\varepsilon),\nonumber \end{align} where the finite terms are given by \begin{align} F_1(\kappa,\varepsilon):=&-\frac{m_2^2g_1}{16\pi^2}\left(1+4\ln\left(\frac{2\kappa}{m_2}\right)\right)-\frac{(3g_2-g_1)\kappa^2}{4\pi^2}\int dy\ \frac{\mu_c(y)}{1+y^3}-\\ &-\frac{g_1\varepsilon^2}{4\pi^2}\sum_{n=1}^\infty c_n \left(\frac{\varepsilon}{\kappa}\right)^{2n-2} \int dy\ \frac{y^n\sqrt{R_c(y)}}{(1+y^3)^nR_c^n(y)}+\mathcal{O}(\kappa^{-1}).\nonumber \end{align} \end{subequations} The computation of the second contribution $\omega_2$ is quite similar, some of the integrals to compute being the same as for $\omega_1$. The integrals which differ from $\omega_1$ are \begin{equation} \int_{\Lambda_0} \frac{dy}{y^3}=\frac{\kappa^2+2\kappa\Lambda_0+\Lambda_0^2}{2\kappa^2}, \end{equation} for the first term in eq. \eqref{2pt-coreg}, while for the second term we have \begin{equation}\label{integral2} \int_{\Lambda_0} dy\ \frac{\mu_c(y)}{y^3}=\sum_{n=0}^\infty c_n \left(\frac{\varepsilon}{\kappa}\right)^{2n} \int_{\Lambda_0} dy\ \frac{y^n\sqrt{R_c(y)}}{y^3(1+y^3)^nR_c^n(y)}. \end{equation} Simple inspection of the integrand in \eqref{integral2} shows that the integrals involved in the above series are finite for all $n\geq3$.
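As a numerical sanity check, the elementary regularised $y$-integrals used so far can be compared with the quoted expressions, which drop contributions vanishing as $\Lambda_0\to\infty$. A sketch with sample values $\kappa=1$ and $\Lambda_0=100$ (both arbitrary), for which the discrepancies are expected to be of order $\kappa/\Lambda_0\sim10^{-2}$:

```python
import math
from scipy.integrate import quad

kappa, Lam0 = 1.0, 100.0  # arbitrary sample values
a = kappa / (kappa + Lam0)       # lower bound of the regularised y-integral
b = (kappa + Lam0) / kappa       # upper bound

i1, _ = quad(lambda y: 1.0 / (1.0 + y**3), a, b)   # ~ 2 pi / (3 sqrt(3))
i2 = b - a                                         # ~ 1 + Lam0 / kappa
# Split at y = 1 to help the quadrature with the steep 1/y^3 behaviour near a:
i3a, _ = quad(lambda y: y**-3, a, 1.0, limit=400)
i3b, _ = quad(lambda y: y**-3, 1.0, b)
i3 = i3a + i3b   # ~ (kappa^2 + 2 kappa Lam0 + Lam0^2) / (2 kappa^2)
```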
The diverging parts of the remaining integrals are given by \begin{subequations} \begin{align} &\int dy\ \frac{\sqrt{R_c(y)}}{y^3}=\big((2\kappa^2-m_2^2)y-2\kappa^2\big)\frac{\sqrt{R_c(y)}}{4\kappa^2y^2}+\frac{(4\kappa^2-m_2^2)m_2^2}{8\kappa^4}\int\frac{dy}{y\sqrt{R_c(y)}},\\ &\int \frac{dy}{y^2\sqrt{R_c(y)}}=-\frac{\sqrt{R_c(y)}}{y}+\frac{2\kappa^2-m_2^2}{2\kappa^2}\int\frac{dy}{y\sqrt{R_c(y)}},\\ &\int \frac{dy}{y\sqrt{R^3_c(y)}}=\frac{2\left((2\kappa^2-m_2^2)\kappa^2y+(4\kappa^2-m_2^2)m_2^2-2\kappa^4\right)}{(4\kappa^2-m_2^2)m_2^2\sqrt{R_c(y)}}+\int\frac{dy}{y\sqrt{R_c(y)}},\\ &\text{with}\ \ \ \int \frac{dy}{y\sqrt{R_c(y)}}=-\ln\left(\frac{2\kappa^2-(2\kappa^2-m_2^2)y+2\kappa^2\sqrt{R_c(y)}}{\kappa^2y}\right).\nonumber \end{align} \end{subequations} Straightforward computations yield \begin{equation}\label{integral4} \int_{\Lambda_0} dy\ \frac{\mu_c(y)}{y^3}=\frac{\Lambda_0^2}{2\kappa^2}+\frac{m_1^2\Lambda_0}{2\kappa^3}+\frac{(4\kappa^2-m_1^2)m_1^2}{8\kappa^4}\ln\left(1+\frac{\Lambda_0}{\kappa}\right)+\lbrace\text{finite terms}\rbrace. 
\end{equation} Putting all these results together, we finally find \begin{subequations}\label{result-co2} \begin{align} \omega_2(\Lambda,\Lambda_0)=&\ \frac{g_2}{8\pi^2\kappa}\ \Lambda\Lambda_0^2+\frac{g_2}{4\pi^2}\left(\Lambda-\frac{\Lambda_0}{2}\right)\Lambda_0+\frac{12\pi g_1+(3\sqrt{3}-4\pi)g_2}{24\pi^2\sqrt{3}}\ \kappa\Lambda-\\ &-\frac{m_1^2g_2}{8\pi^2\kappa}\ \Lambda_0-\frac{(4\kappa^2-m_1^2)m_1^2g_2}{32\pi^2\kappa^2}\ln\left(1+\frac{\Lambda_0}{\kappa}\right)+F_2(\kappa,\varepsilon),\nonumber \end{align} where the finite terms are given by \begin{align}\label{go-to-result1} F_2(\kappa,\varepsilon):=&\ -\frac{m_1^2g_2}{4\pi^2}\ln\left(\frac{2\kappa}{m_2}\right)-\frac{(\kappa^2+\varepsilon^2)g_2}{8\pi^2}-\frac{(3g_1-g_2)\kappa^2}{4\pi^2}\int dy\ \frac{\mu_c(y)}{1+y^3}-\\ &\ -\frac{g_2\varepsilon^2}{8\pi^2}\int \frac{ydy}{(1+y^3)\sqrt{R_c(y)}}-\frac{g_2\varepsilon^4}{32\pi^2\kappa^2}\int dy\ \frac{y^2(2+y^3)}{(1+y^3)^2\sqrt{R^3_c(y)}}-\nonumber\\ &\ -\frac{g_2\varepsilon^2}{4\pi^2}\sum_{n=3}^\infty c_n \left(\frac{\varepsilon}{\kappa}\right)^{2n-2} \int dy\ \frac{y^n\sqrt{R_c(y)}}{y^3(1+y^3)^nR_c^n(y)}+\mathcal{O}(\kappa^{-1}).\nonumber \end{align} \end{subequations}\smallskip We now summarise the results we have found so far and discuss them in light of the well-known quantum behaviour of the commutative $\vert\phi\vert^4$ field theory. To make the comparison more relevant, let us begin by presenting the exact one-loop quantum corrections $\Omega$ to the corresponding 2-point function, computed within the same regularisation scheme as the one used for the NCFT. We have $\Gamma_1^{(2)}[\phi,\bar{\phi}]=\frac{1}{2}\int \frac{d^4k}{(2\pi)^4}\ \bar{\phi}(k)\Omega\phi(k)$ with \begin{subequations} \begin{equation} \Omega\to\Omega(\Lambda,\Lambda_0):=g\int_{-\Lambda_0}^{\Lambda_0}\frac{dk_0}{2\pi}\int_{\mathbb{R}^3}\frac{d^3\vec{k}}{(2\pi)^3}\left(\frac{1}{\|\vec{k}\|^2+(k_0^2+m_0^2)}-\frac{1}{\|\vec{k}\|^2+\Lambda^2}\right).
\end{equation} Standard computations yield \begin{align}\label{integral3} \Omega(\Lambda,\Lambda_0)&=\frac{g}{8\pi^2}\left(2\Lambda-\sqrt{\Lambda_0^2+m_0^2}\right)\Lambda_0-\frac{m_0^2g}{8\pi^2}\ \text{arcsinh}\left(\frac{\Lambda_0}{m_0}\right)\\ &=\frac{g}{4\pi^2}\left(\Lambda-\frac{\Lambda_0}{2}\right)\Lambda_0-\frac{m_0^2g}{8\pi^2}\ln\left(\frac{2\Lambda_0}{m_0}\right)-\frac{m_0^2g}{16\pi^2},\nonumber \end{align} \end{subequations} where we made use of the series expansion (around 0) of $x\mapsto\sqrt{1+x}$, together with the asymptotic expression of $x\mapsto\text{arcsinh}(x)$, \textit{i.e.} $\text{arcsinh}(x)=\ln(2x)+\mathcal{O}(x^{-2})$, to go from the first line to the second one in \eqref{integral3}. Thus, $\Omega$ diverges quadratically.\smallskip Going back to the present NCFT, we have shown that the first-order radiative corrections to the mass operator, which can be read from the expression of the one-loop quadratic part of the effective action \begin{subequations}\label{result-co} \begin{equation} \Gamma_1^{(2)}[\phi,\bar{\phi}]=\frac{1}{2}\int \frac{d^4k}{(2\pi)^4}\ \bar{\phi}(k)\left(\omega_1+\omega_2e^{-3k^0/\kappa}\right)\phi(k), \end{equation} are given by \begin{align} &\omega_1(\Lambda)=\frac{g_1}{8\pi^2}\ \Lambda^2+\left(\frac{(3g_2-g_1)\kappa}{6\pi\sqrt{3}}+\frac{(2\kappa^2-m_2^2)g_1}{8\pi^2\kappa}\right)\Lambda-\\ &\hspace{1.5cm}-\frac{(4\kappa^2-m_2^2)m_2^2g_1}{32\pi^2\kappa^2}\ln\left(1+\frac{\Lambda}{\kappa}\right)+\lbrace\text{finite terms}\rbrace,\nonumber\\ &\omega_2(\Lambda)=\frac{g_2}{8\pi^2\kappa}\ \Lambda^3+\frac{g_2}{8\pi^2}\ \Lambda^2+\left(\frac{(3g_1-g_2)\kappa}{6\pi\sqrt{3}}+\frac{(\kappa^2-m_1^2)g_2}{8\pi^2\kappa}\right)\Lambda-\\ &\hspace{1.5cm}-\frac{(4\kappa^2-m_1^2)m_1^2g_2}{32\pi^2\kappa^2}\ln\left(1+\frac{\Lambda}{\kappa}\right)+\lbrace\text{finite terms}\rbrace,\nonumber \end{align} \end{subequations} where we have set $\Lambda=\Lambda_0$ for simplicity.\smallskip First of all, in view of the above results, we conclude that
setting $g_2=0$ while keeping $g_1\neq0$, the NCFT behaves (at leading order in the cutoff) in the same way as its commutative counterpart. On the contrary, turning off $g_1$ while restoring $g_2\neq0$, we find that the NCFT diverges cubically, thus slightly worse than its commutative counterpart. In both cases, however, we find that the mass degeneracy is lifted by quantum fluctuations, since the two masses receive one-loop corrections whose respective (leading order) dependences on $\Lambda$ are different. In particular, assuming $g_1,g_2\neq0$, we find \begin{equation} \left\vert\frac{\omega_1-\omega_2}{\omega_1}\right\vert=\frac{g_2}{\kappa g_1}\ \Lambda+\mathcal{O}(1), \end{equation} which is far from being small. This seems to indicate that the ansatz $\varepsilon\ll m_1,m_2$ does not survive the quantum fluctuations. However, as we are going to see when considering the case with nonorientable interactions below, the planar contributions arising in the one-loop 2-point function of the nonorientable model in some sense reverse this situation. In this case, we will find that the one-loop corrections to $m_1$ are proportional (at leading order in $\Lambda$) to $\Lambda^3$ while the corrections to $m_2$ are proportional (at leading order) to $\Lambda^2$. Therefore, the relation $\varepsilon\ll m_{1},m_2$ might be preserved if considering the full theory involving all of the interactions in eq. \eqref{interactions}, \textit{i.e.} both orientable and nonorientable.\smallskip Then, it is interesting to notice that the integral, eq. \eqref{integral4}, appearing in the computation of $\omega_2$, involves terms of the form $m_2^2+\varepsilon^2$, which correspond to $m_1^2$ by definition.
Hence, at least in the approximation $\vert m_1^2-m_2^2\vert\ll m_1^2,m_2^2$, the respective masses of the fields $\phi$ and $\phi^\ddagger$ are in some sense mixed together by quantum fluctuations.\smallskip Finally, let us comment on the peculiar role played by the deformation parameter $\kappa$ within such a model of NCFT. Taking the (formal commutative) limit $\kappa\to\infty$ in eq. \eqref{result-co}, while keeping $\Lambda$ finite,\footnote{Note that similar results would have been obtained, to within unessential rescaling of the coefficients in eq. \eqref{limit-co}, by assuming the ratio $\Lambda/\kappa$ to remain constant in the limit $\kappa\to\infty$.} we find \begin{subequations} \begin{equation}\label{limit-co} \omega_j(\Lambda)\xrightarrow[\kappa\to\infty]{}\frac{g_j}{8\pi^2}\ \Lambda^2+a_j\kappa\Lambda+\lim_{\kappa\to\infty}F_j(\kappa,0), \end{equation} where we have set $\varepsilon=0$ in order to simplify the discussion,\footnote{Indeed, setting $\varepsilon=0$ before taking the commutative limit simplifies significantly the computation of $\lim_{\kappa\to\infty}F_j(\kappa,0)$. Note however that this is formally consistent with the fact that $\varepsilon\to0$ when $\kappa\to\infty$.} and the $a_j$'s are some constant coefficients which can be read from eq. \eqref{result-co}. Surprisingly, although $\kappa$ is not, strictly speaking, a regulator for the NCFT (in the sense that the integrals involved in $\omega_j$ have to be regularised in both timelike and spacelike variables), we find that it plays a role similar to that of $\Lambda$ in the commutative limit, \textit{i.e.} when $\kappa\to\infty$. This can be compared, for instance, with what happens in the context of $\mathbb{R}^3_\theta$. In that case, we will find that the NCFT is (UV) finite at fixed (nonzero) $\theta$, while the usual UV behaviour is recovered when taking the commutative limit $\theta\to0$.
It follows that $\theta$ plays the role of a (natural) UV cutoff within such a model of NCFT on $\mathbb{R}^3_\theta$; see Chap. \ref{sap-ncftsu2}.\\ Going back to the case under consideration, the reason we kept track of the finite terms in eq. \eqref{result-co1} and \eqref{result-co2} is that these terms $F_j(\kappa,\varepsilon)$ become singular in the limit $\kappa\to\infty$. Indeed, we find \begin{align}\label{limit-co2} F_j(\kappa,0)\xrightarrow[\kappa\to\infty]{}b_j\kappa^2-\frac{m^2g_j}{4\pi^2}\ln\left(\frac{2\kappa}{m}\right)-\frac{m^2g_j}{16\pi^2},\ b_j\in\mathbb{R}. \end{align} \end{subequations} Hence, combining eq. \eqref{limit-co} with \eqref{limit-co2}, we recover the same behaviour as for the ordinary (commutative) $\vert\phi\vert^4$-theory. See eq. \eqref{integral3} for a comparison. \subsubsection{Nonorientable model.} We now turn to the analysis of the model with nonorientable interactions. Combining eq. \eqref{no-vertex} with \eqref{gamma-2pts}, we find that the one-loop quadratic part of the effective action decomposes into two families of contributions. The first family involves planar contributions for which a delta function of conservation depending only on the external momenta can be factorised out from the Wick contracted vertex functions appearing in \eqref{gamma-2pts}. These contributions are of the same nature as the contributions encountered in ordinary quantum field theory and are similar to the ones found above for the orientable model. The other family involves nonplanar contributions which result from the Wick contraction of two nonadjacent fields in the expression of the interaction term \eqref{no-int2}. We write \begin{equation} \Gamma_1^{(2)}[\phi,\bar{\phi}]=\Gamma^{(2)}_\text{N}[\phi,\bar{\phi}]+\Gamma^{(2)}_\text{NP}[\phi,\bar{\phi}], \end{equation} where $\Gamma^{(2)}_\text{N}$ (resp.
nonplanar) component of the quadratic part of the effective action.\smallskip Let us begin with the planar contributions. Mere analysis of the expression of $\Gamma_1^{(2)}$ shows that we have to set $\alpha=0$ in eq. \eqref{nov-gen} in order for the tree-level structure of the mass operator to be preserved by quantum fluctuations, a situation we assume from now on. We find \begin{subequations}\label{contributions-cnop} \begin{equation} \Gamma^{(2)}_\text{N}[\phi,\bar{\phi}]=\frac{1}{2}\int \frac{d^4k}{(2\pi)^4}\ \bar{\phi}(k)\left(\omega_3+\omega_4e^{-3k^0/\kappa}\right)\phi(k), \end{equation} with \begin{align} &\omega_3:=\frac{\kappa g_3}{2\pi}\int\frac{dy}{y^4} \frac{d^3\vec{k}}{(2\pi)^3}\left(1+y^3\right)\Delta^{c}_{F}(k),\\ &\omega_4:=\frac{\kappa g_4}{2\pi}\int \frac{dy}{y}\frac{d^3\vec{k}}{(2\pi)^3}\left(1+y^3\right)\Delta^{c}_{F}(k). \end{align} \end{subequations} Setting formally $g_1=3g_2$ in \eqref{2-pointa} and $g_2=3g_1$ in \eqref{2-pointb}, we obtain \begin{equation} \omega_3=\frac{g_3}{g_2}\omega_2,\ \omega_4=\frac{g_4}{g_1}\omega_1. \end{equation} Hence, no additional computations are needed here and we can read the singular UV behaviour of the planar contributions for the nonorientable model from the results already obtained for the orientable model.
Mere adaptation of \eqref{result-co} yields \begin{subequations} \begin{align} &\omega_3(\Lambda)=\frac{g_3}{8\pi^2\kappa}\ \Lambda^3+\frac{g_3}{8\pi^2}\ \Lambda^2+\frac{(\kappa^2-m_1^2)g_3}{8\pi^2\kappa}\ \Lambda-\frac{(4\kappa^2-m_1^2)m_1^2g_3}{32\pi^2\kappa^2}\ln\left(\frac{\Lambda_0}{\kappa}\right)+\\ &\hspace{1.5cm}+\lbrace\text{finite terms}\rbrace,\nonumber\\ &\omega_4(\Lambda)=\frac{g_4}{8\pi^2}\ \Lambda^2+\frac{(2\kappa^2-m_2^2)g_4}{8\pi^2\kappa}\ \Lambda-\frac{(4\kappa^2-m_2^2)m_2^2g_4}{32\pi^2\kappa^2}\ln\left(\frac{\Lambda_0}{\kappa}\right)+\\ &\hspace{1.5cm}+\lbrace\text{finite terms}\rbrace,\nonumber \end{align} \end{subequations} and the same conclusions as for the orientable model still apply here.\smallskip Notice that, unlike the orientable model for which the cubically divergent term ($\sim\Lambda^3$) appears as a correction to the mass of the field $\phi^\ddagger$, within the nonorientable model this term appears as a correction to the mass of the field $\phi$. It follows that, because of the twist factor which splits the mass operator into two components (given at the level of the classical action by $m_1^2$ and $m_2^2$ respectively), the cubic divergence cannot be cancelled out by an appropriate combination of the coupling constants except by setting $g_2=g_3=0$. However, as we are going to see in the next section, $\S$\ref{sec-4point}, when considering the one-loop 4-point function, the quantum fluctuations tend to restore the configuration $g_2,g_3\neq0$. This could have been expected since the relation $g_2=g_3=0$ is not related to a symmetry of the action.
On the other hand, as already mentioned when discussing the results for the orientable model, this helps to restore the balance between the two components of the mass operator and in particular ensures that the relation $\vert m_1^2-m_2^2\vert\ll m_1^2,m^2_2$ is preserved at the quantum level.\smallskip We now turn to the analysis of the nonplanar contributions, which constitute the main difference between the present model and the model involving orientable interactions. We find \begin{subequations}\label{contributionNP} \begin{align} &\Gamma^{(2)}_\text{NP}(k_1,k_2)=\big(\Xi(k_1,k_2)+\Xi(k_2,k_1)\big)\delta(k_2^0-k_1^0),\label{gammaNP}\\ &\Xi(k_a,k_b):=\frac{\kappa}{2\pi}\int\frac{dy}{y}\int\frac{d^3\vec{k}}{(2\pi)^3}\ \big(g_3+g_4e^{-3k_a^0/\kappa}y^3\big)\Delta_F^{c}(k)\times\label{fonctionXI}\\ &\hspace{6cm}\times\delta^{(\hspace{-1pt}3\hspace{-1pt})}\left((1-e^{-k_a^0/\kappa})\vec{k}-\vec{k}_a+y\vec{k}_b\right).\nonumber \end{align} \end{subequations} This time, the internal momentum, $\vec{k}$, does not cancel out in the 3-dimensional delta function. Instead, integrating over $\vec{k}$ yields \begin{subequations} \begin{equation}\label{integrandNP} \Xi(k_a,k_b)=\frac{\kappa e^{-3k_a^0/\kappa}}{(2\pi)^4\vert1-e^{-k_a^0/\kappa}\vert}\int \frac{dy}{c(k_a)-\tilde{c}y+c(k_b) y^2} \left( g_4+\frac{g_3e^{3k_a^0/\kappa}-g_4}{1+y^3} \right), \end{equation} where we have assumed that $k_a^0\neq0$ and defined \begin{equation} c(k):=\|\vec{k}\|^2+\kappa^2(1-e^{-k^0/\kappa})^2,\ \tilde{c}:=2\vec{k}_a\cdot\vec{k}_b+(2\kappa^2-m^2)(1-e^{-k^0/\kappa})^2. \end{equation} \end{subequations} Mere inspection of the integrand in \eqref{integrandNP} shows that $\Xi(k_a,k_b)$ is finite at ``nonexceptional'' external momenta. On the other hand, going back to eq. \eqref{fonctionXI}, we see that setting one of the external momenta to zero, say $(k_a^0,\vec{k}_a)=(0,\vec{0})$, we recover integrals of the same type as the ones involved in the computation of planar contributions.
Namely, \begin{equation} \Xi(0,k_b)=\frac{\kappa}{2\pi}\int\frac{dy}{y}\int\frac{d^3\vec{k}}{(2\pi)^3}\big(g_3+g_4y^3\big)\Delta_F^{c}(k)\delta^{(\hspace{-1pt}3\hspace{-1pt})}(\vec{k}_b), \end{equation} which has to be compared, for instance, with \eqref{2-pointa}. First, note that the conservation law between external momenta is preserved, \textit{i.e.} $k_b\to0$ when $k_a\to0$. Next, in view of \eqref{gammaNP}, similar conclusions would have been obtained setting $k_b$ to zero instead of $k_a$. Finally, we conclude that the nonplanar contributions, albeit finite at nonzero external momenta, diverge in the IR sector. This last phenomenon reflects the existence of UV/IR mixing when considering models with nonorientable interactions. \smallskip The UV/IR mixing is often regarded as problematic since it spoils the renormalisation properties of the quantum field theory. Indeed, although (UV) finite at one-loop order, the above contribution \eqref{contributionNP} may induce (IR) singularities at higher-loop order. One way to cure the theory of the UV/IR mixing would be to extract the singular behaviour (in the external momenta) of \eqref{contributionNP}, then to add a counterterm accordingly. This would necessitate computing \eqref{integrandNP} exactly, then studying carefully the behaviour of $\Xi(k_a,k_b)$ around zero. We leave this analysis to future studies. Nevertheless, insight into the result can easily be obtained under some assumptions.\smallskip Let us set $g_3=g_4$. Expanding $\Xi(k_a,k_b)$ around $k^0_a\sim0$ in \eqref{integrandNP} yields \begin{equation} \Xi(k_a,k_b)\underset{k^0_a\to0}{\sim}\frac{g_3\kappa}{(2\pi)^4}\int \frac{dy}{k_a^2-\tilde{c}y+k_b^2 y^2} \left(\frac{\kappa}{\vert k_a^0\vert}+\mathcal{O}(1)\right).
\end{equation} At leading order, the integration over $y$ can easily be performed to get \begin{equation} \Xi(k_a,k_b)\underset{k^0_a\to0}{\sim}\frac{g_3\kappa^2}{(2\pi)^4}\frac{1}{\vert k_a^0\vert\sqrt{4k_a^2k_b^2-\tilde{c}^2}}\left(\frac{\pi}{2}-\arctan\left(\frac{-\tilde{c}}{\sqrt{4k_a^2k_b^2-\tilde{c}^2}}\right)\right), \end{equation} exhibiting a quadratic singularity. \subsection{Models with equivariant kinetic operator.}\label{sec-2pt-equivariant} In this section, we investigate the quantum properties of another NCFT characterised this time by the equivariant kinetic operator, eq. \eqref{eq-propagator}. The latter is related to the Casimir operator $\mathcal{C}_\kappa$ via the relation $K^\text{eq}=\mathcal{C}_\kappa+\mathcal{C}^2_\kappa/(4\kappa^2)$ and possesses the interesting characteristic of being equivariant under the action of $\kappa$-Poincar\'{e}. Actually, the corresponding propagator can be physically interpreted as a (kind of) Pauli-Villars regularised version of the propagator considered in $\S$\ref{sec-2pt-Casimir} whose cutoff would be related to some function of $\kappa$.\footnote{This is apparent from an appropriate redefinition of the mass parameters in the expression of the Lagrangian density; see eq. \eqref{c-Villars}.} It follows that the analysis of the NCFT with Casimir kinetic operator can be (almost) straightforwardly adapted to the present context. Although the one-loop 2-point function of the equivariant models is still (UV) singular, the actual divergence is milder than for the models with Casimir kinetic operator, as could have been expected from the strong decay properties of the equivariant propagator, which decreases as $\|\vec{k}\|^{-4}$ when $\|\vec{k}\|\to\infty$.\smallskip In view of the negligible benefit obtained in $\S$\ref{sec-2pt-Casimir} under the assumption $m_1\neq m_2$, together with the more involved expression of the equivariant propagator compared to that of the Casimir propagator, we now set $m_1=m_2=m$.
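To illustrate how the quadratic term in $\mathcal{C}_\kappa$ improves the spacelike decay, the following stdlib-only sketch models the large-momentum spacelike behaviour of the Casimir operator by $\|\vec{k}\|^2$ (a simplifying assumption made here for illustration only; the values of $\kappa$ and $m$ are placeholders):

```python
import math

kappa, m = 1.0, 0.2   # placeholder values for illustration

def casimir(k):
    # Assumed large-momentum spacelike behaviour of the Casimir operator.
    return k * k

def propagator_eq(k):
    # Equivariant kinetic operator K_eq = C + C^2/(4 kappa^2), plus a mass term.
    C = casimir(k)
    return 1.0 / (C + C * C / (4.0 * kappa**2) + m * m)

def propagator_casimir(k):
    # Casimir kinetic operator alone, for comparison: decays only like k^-2.
    return 1.0 / (casimir(k) + m * m)

for k in (10.0, 100.0, 1000.0):
    # k^4 * propagator_eq tends to the constant 4 kappa^2, i.e. the
    # equivariant propagator decays like k^-4, while the Casimir one
    # decays like k^-2.
    print(k, k**4 * propagator_eq(k), k**2 * propagator_casimir(k))
```

The faster spacelike decay is what softens the UV behaviour of the one-loop 2-point function with respect to the Casimir case.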
\subsubsection{Orientable model.} The various contributions to the one-loop 2-point function can be read from the (one-loop) quadratic part of the effective action, eq. \eqref{gamma-2pts}, which reduces after some trivial manipulations to \begin{subequations} \begin{equation} \Gamma_1^{(2)}[\phi,\bar{\phi}]=\frac{1}{2}\int \frac{d^4k}{(2\pi)^4}\ \bar{\phi}(k)\left(\omega_1+\omega_2e^{-3k^0/\kappa}\right)\phi(k), \end{equation} where $\omega_1$ and $\omega_2$ are still given by \eqref{2-point}. Upon integrating over $\vec{k}$, we find \begin{equation}\label{2pt-eqo} \omega_j(\Lambda_0):=\frac{\kappa^3}{4\pi^2\sqrt{\kappa^2-m^2}}\int_{\Lambda_0}dy\ \Phi_j(y)\big(\mu_{+}(y)-\mu_{-}(y)\big), \end{equation} \end{subequations} where the functions $\Phi_j$ are still given by \eqref{2pt-coy}. Recall that \begin{equation} \mu^2_{\pm}(y)=1\pm 2y\sqrt{1-\left(\frac{m}{\kappa}\right)^2}+y^2. \end{equation} A mere comparison between eq. \eqref{2pt-eqo} and \eqref{2pt-coreg} shows that the same kind of integrals as in $\S$\ref{sec-2pt-Casimir} have to be computed here. Thus, from a straightforward adaptation of the material presented there, we find \begin{subequations}\label{result-eqo} \begin{equation} \omega_j(\Lambda_0)=\frac{g_j\kappa}{2\pi^2}\ \Lambda_0+F_j(\kappa),\ j=1,2, \end{equation} exhibiting a linear UV divergence.
The finite contributions are given by \begin{align} F_1(\kappa):=&\ \frac{g_1\kappa^2}{4\pi^2}+\frac{m^2g_1\kappa}{8\pi^2\sqrt{\kappa^2-m^2}}\ln\left(\frac{\kappa+\sqrt{\kappa^2-m^2}}{\kappa-\sqrt{\kappa^2-m^2}}\right)+\\ &+\frac{(3g_2-g_1)\kappa^3}{4\pi^2\sqrt{\kappa^2-m^2}}\int dy\ \frac{\mu_{+}(y)-\mu_{-}(y)}{1+y^3},\nonumber\\ F_2(\kappa):=&\ \frac{g_2\kappa^2}{4\pi^2}+\frac{m^2g_2\kappa}{8\pi^2\sqrt{\kappa^2-m^2}}\ln\left(\frac{\kappa+\sqrt{\kappa^2-m^2}}{\kappa-\sqrt{\kappa^2-m^2}}\right)+\\ &+\frac{(3g_1-g_2)\kappa^3}{4\pi^2\sqrt{\kappa^2-m^2}}\int dy\ \frac{\mu_{+}(y)-\mu_{-}(y)}{1+y^3}.\nonumber \end{align} \end{subequations} Again, the expected quantum behaviour for the complex $\vert\phi\vert^4$-model is recovered in the limit $\kappa\to\infty$; see eq. \eqref{integral3} for a comparison. We find \begin{equation} (\omega_1+\omega_2e^{-\frac{3k^0}{\kappa}})\xrightarrow[\kappa\to\infty]{}\frac{g_1+g_2}{4\pi^2}\left(2\kappa\Lambda_0+\kappa^2\big(1+\frac{16\pi}{\sqrt{3}}\big)+m^2\ln\left(\frac{2\kappa}{m}\right)-\frac{4\pi m^2}{\sqrt{3}}\right). \end{equation} Hence, as for the model with Casimir kinetic operator, $\kappa$ once again plays the role of a cutoff for the ordinary quantum field theory. Nevertheless, the situation is a bit different from the previous case considered in $\S$\ref{sec-2pt-Casimir} since $\kappa$ appears to be related to some spacelike (Pauli-Villars) regulator for the model with equivariant kinetic operator, and it seems more natural to interpret $\kappa$ as a cutoff for the (commutative) $\vert\phi\vert^4$-model within the present context. It is important to keep in mind, however, that $\kappa$ is fixed once and for all when working at the level of the NCFT.
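Two ingredients of the expressions for $\omega_j$ and $F_j$ lend themselves to a quick numerical cross-check. First, the combination $\mu_+(y)-\mu_-(y)$ tends to the constant $2\sqrt{1-(m/\kappa)^2}$ at large $y$, which is the origin of the linear divergence in $\Lambda_0$. Second, the companion rational integral $\int_0^\infty dy/(1+y^3)=2\pi/(3\sqrt{3})$ underlies coefficients of the type $\pi/\sqrt{3}$ appearing above. A stdlib-only sketch (the numerical values of $m$ and $\kappa$ are placeholders):

```python
import math

kappa, m = 3.0, 1.0   # placeholder values with m < kappa
c = math.sqrt(1 - (m / kappa) ** 2)

def mu(y, sign):
    # mu_{+/-}(y), with mu_{+/-}^2(y) = 1 +/- 2*c*y + y^2.
    return math.sqrt(1.0 + sign * 2.0 * c * y + y * y)

# mu_+ - mu_- approaches the constant 2c at large y, so the integrand of
# omega_j behaves like a constant there, whence the linear divergence.
for y in (10.0, 1e3, 1e6):
    print(y, mu(y, 1) - mu(y, -1), 2 * c)

def integral_one_over_1py3(n=200001):
    # int_0^infty dy/(1+y^3), mapped to t in [0,1) via y = t/(1-t) and
    # evaluated with Simpson's rule; the transformed integrand is
    # (1-t)/((1-t)^3 + t^3), which is smooth on [0,1].
    h = 1.0 / (n - 1)
    total = 0.0
    for i in range(n):
        t = i * h
        g = (1.0 - t) / ((1.0 - t) ** 3 + t**3) if t < 1.0 else 0.0
        w = 1 if i in (0, n - 1) else (4 if i % 2 == 1 else 2)
        total += w * g
    return total * h / 3.0

print(integral_one_over_1py3(), 2 * math.pi / (3 * math.sqrt(3)))
```

The last printed line shows the quadrature reproducing the closed form $2\pi/(3\sqrt{3})\simeq1.209$.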
In particular, $\kappa$ will not be interpreted as a cutoff in the sense of the renormalisation scheme when deriving the beta function in the next section, $\S$\ref{sec-4point}.\smallskip As could have been expected from the decay properties of the equivariant propagator, see eq. \eqref{eq-decay}, the quantum behaviour of the NCFT with equivariant kinetic operator is milder than that of its commutative counterpart, although the one-loop 2-point function remains singular in the UV. This seems to indicate that NCFT equipped with an equivariant kinetic operator are more suitable for describing realistic physical models of $\kappa$-Poincar\'{e} invariant quantum field theory, at least in comparison with the NCFT characterised by the Casimir kinetic operator. Even more interesting is the apparent symmetry of the one-loop corrections for the two components of the mass operator, namely of the respective masses of the fields $\phi$ and $\phi^\ddagger$, already at the level of the orientable model (provided $g_1,g_2\neq0$). This likely reflects the existence of a symmetry of the action functional under the exchange $\phi\leftrightarrow\phi^\ddagger$, as is apparent from the expression of $\mathcal{S}_\kappa$; see $\S$\ref{sec-action}. Indeed, setting one of the coupling constants (either $g_1$ or $g_2$) to zero would lift the degeneracy between the two masses. As already mentioned in $\S$\ref{sec-2pt-Casimir}, this is supported by the fact that the quantum fluctuations tend to restore the symmetry $\phi\leftrightarrow\phi^\ddagger$, as we are going to see later on when considering the one-loop 4-point function. \subsubsection{Nonorientable model.} Again, the quadratic part of the effective action decomposes into a planar component and a nonplanar one.
On the one hand, the planar contributions are similar to the contributions computed for the orientable model, namely \begin{subequations} \begin{equation} \Gamma^{(2)}_\text{N}[\phi,\bar{\phi}]=\frac{1}{2}\int \frac{d^4k}{(2\pi)^4}\ \bar{\phi}(k)\left(\omega_3+\omega_4e^{-3k^0/\kappa}\right)\phi(k), \end{equation} with \begin{equation} \omega_j(\Lambda_0)=\frac{g_j\kappa}{2\pi^2}\ \Lambda_0+\frac{g_j\kappa^2}{4\pi^2}+\frac{m^2g_j\kappa}{8\pi^2\sqrt{\kappa^2-m^2}}\ln\left(\frac{\kappa+\sqrt{\kappa^2-m^2}}{\kappa-\sqrt{\kappa^2-m^2}}\right),\ j=3,4. \end{equation} \end{subequations} On the other hand, the nonplanar contributions take the form \begin{subequations} \begin{align} &\Gamma^{(2)}_\text{NP}(k_1,k_2)=\big(\Xi(k_1,k_2)+\Xi(k_2,k_1)\big)\delta(k_2^0-k_1^0),\\ &\Xi(k_a,k_b):=\frac{\kappa}{2\pi}\int\frac{dy}{y}\int\frac{d^3\vec{k}}{(2\pi)^3}\ \big(g_3+g_4e^{-3k_a^0/\kappa}y^3\big)\Delta_F^\text{eq}(k)\times\\ &\hspace{6cm}\times\delta^{(\hspace{-1pt}3\hspace{-1pt})}\left((1-e^{-k_a^0/\kappa})\vec{k}-\vec{k}_a+y\vec{k}_b\right),\nonumber \end{align} \end{subequations} analogous to eq. \eqref{contributionNP}. Although the computations are slightly more involved than for the model with Casimir kinetic operator, the nonplanar contributions are found to diverge when one of the external momenta is set to zero, albeit being finite otherwise. We conclude that UV/IR mixing is also present in this case. \section{One-loop 4-point function and beta function.}\label{sec-4point} In the previous section, we considered various models of $\kappa$-Poincar\'{e} invariant scalar field theory. Each model was characterised by a specific choice of kinetic operator together with interaction potential. In view of the results obtained in $\S$\ref{sec-2point}, we now restrict our attention to the model which exhibits the best behaviour, namely the model with equivariant kinetic operator.
Recall that the other model, with Casimir kinetic operator, diverges cubically, \textit{i.e.} worse than the commutative $\vert\phi\vert^4$ model. Moreover, since both the tree-level structure of the 2-point function and the relation $m_1=m_2$ are preserved at one-loop order when considering only the orientable interaction, we now focus on this interaction. Note that preliminary computations for the model with Casimir kinetic operator and orientable interaction indicate that some of the contributions to the one-loop 4-point function are linearly divergent and some others exhibit UV/IR mixing.\bigskip The one-loop corrections to the 4-point function are obtained by expanding the generating functional of the connected correlation functions, $W$, up to second order in the coupling constant. Standard computations, which are recalled in Appendix \ref{sap-perturbation}, yield the following expression for the quartic part of the effective action \begin{subequations}\label{gamma4pts} \begin{align} &\Gamma^{(4)}_1[\phi,\bar{\phi}]:=\frac{1}{4!}\int\prod_{\ell=1}^4\left[\frac{d^4k_\ell}{(2\pi)^4}\right]\ \bar{\phi}(k_1)\phi(k_2)\bar{\phi}(k_3)\phi(k_4)\Gamma^{(4)}_1(k_1,k_2,k_3,k_4),\\ &\Gamma^{(4)}_1(k_1,k_2,k_3,k_4)=\frac{1}{(2\pi)^8}\int \frac{d^4k_5}{(2\pi)^4}\frac{d^4k_6}{(2\pi)^4}\ \Delta_F(k_5)\Delta_F(k_6)\times\\ &\hspace{5cm}\times\Big[2\mathcal{V}_{5462}\mathcal{V}_{3615}+2\mathcal{V}_{5462}\mathcal{V}_{3516}+2\mathcal{V}_{5216}\mathcal{V}_{3465}+\nonumber\\ &\hspace{5.175cm}+2\mathcal{V}_{1652}\mathcal{V}_{3465}+2\mathcal{V}_{5612}\mathcal{V}_{6435}+2\mathcal{V}_{5612}\mathcal{V}_{3564}+\nonumber\\ &\hspace{5.175cm}+2\mathcal{V}_{5216}\mathcal{V}_{3564}+2\mathcal{V}_{5612}\mathcal{V}_{3465}+\hspace{2pt}\mathcal{V}_{5612}\mathcal{V}_{6534}\hspace{3pt}+\nonumber\\ &\hspace{5.175cm}+\hspace{2pt}\mathcal{V}_{5216}\mathcal{V}_{6435}\hspace{3pt}+\hspace{2pt}\mathcal{V}_{1652}\mathcal{V}_{3564}\hspace{3pt}+\mathcal{V}_{1256}\mathcal{V}_{3465}\Big].\nonumber
\end{align} \end{subequations} As we are going to see, at the level of the one-loop 4-point function, even the orientable interaction leads to nonplanar contributions. \smallskip From now on, we focus on the model with orientable interactions, eq. \eqref{o-int}. The symmetries of the 4-vertex function associated with this interaction, regarded as a distribution, can easily be read from eq. \eqref{o-delta}. They are given by \begin{subequations} \begin{align} &V_\textit{o}(k_1,k_2,k_3,k_4)\equiv V_\textit{o}(k_4,k_3,k_2,k_1),\\ &V_\textit{o}(k_1,k_2,k_3,k_4)\equiv e^{3(k^0_3-k_4^0)/\kappa}V_\textit{o}(k_3,k_4,k_1,k_2). \end{align} \end{subequations} Combining the above symmetry properties with the fusion rules \begin{subequations}\label{fusion-rule} \begin{align} &V_\textit{o}(k_1,k_2,\mathbf{k_6},\mathbf{k_5})V_\textit{o}(\mathbf{k_5},\mathbf{k_6},k_3,k_4)\equiv V_\textit{o}(\mathbf{k_5},\mathbf{k_6},k_3,k_4)V_\textit{o}(k_1,k_2,k_3,k_4),\\ &V_\textit{o}(k_1,\mathbf{k_5},\mathbf{k_6},k_4)V_\textit{o}(\mathbf{k_5},k_2,k_3,\mathbf{k_6})\equiv V_\textit{o}(\mathbf{k_5},k_2,k_3,\mathbf{k_6})V_\textit{o}(k_1,k_2,k_3,k_4), \end{align} \end{subequations} where the bold characters denote the Wick contracted momenta we sum over in \eqref{gamma4pts}, we find that $\Gamma_1^{(4)}$ decomposes into four families of contributions, some being planar, the others not. As we are going to see, the tree-level structure of the action functional is preserved by radiative corrections provided both $g_1$ and $g_2$ are different from zero.\\ We write \begin{equation} \Gamma^{(4)}_1=\Gamma^P_1+\Gamma^P_2+\Gamma^{N\!P}_3+\Gamma^{N\!P}_4.
\end{equation} The two families of planar contributions, hereafter denoted by (P1) and (P2), admit respectively the following expressions \begin{subequations}\label{Pcontrib} \begin{align} \Gamma^P_1(k_1,k_2,k_3,k_4):=&\ (2\pi)^{-8}V_{1234}\int \frac{d^4k_5}{(2\pi)^4}\frac{d^4k_6}{(2\pi)^4}\ \Psi_1(k^0_5)\Delta^\text{eq}_F(k_5)\Delta^\text{eq}_F(k_6) V_{5634},\label{planar1}\\ \Gamma^P_2(k_1,k_2,k_3,k_4):=&\ (2\pi)^{-8}V_{1234}\int \frac{d^4k_5}{(2\pi)^4}\frac{d^4k_6}{(2\pi)^4}\ \Psi_2(k^0_5)\Delta^\text{eq}_F(k_5)\Delta^\text{eq}_F(k_6) V_{5236},\label{planar2} \end{align} \end{subequations} where the two functions $\Psi_j$, $j=1,2$, are given by \begin{subequations}\label{CoeffP} \begin{align} &\Psi_1(k^0_5):=a_1+b_1e^{3k_5^0/\kappa}+c_1e^{6k_5^0/\kappa},\\ &\Psi_2(k^0_5):=a_2+b_2e^{3k_5^0/\kappa}+d_2e^{-3k_5^0/\kappa}, \end{align} \end{subequations} where the coefficients depend only on the external momenta.\footnote{The coefficients associated with the planar contributions are the following. For $\Psi_1$ we have \begin{subequations} \begin{align} &a_1:=2g_1^2\big(1+e^{3(k^0_1-k^0_2)/\kappa}\big)+g_1g_2\big(1+e^{3(k^0_1-k^0_3)/\kappa}+2e^{3(k^0_1-k^0_4)/\kappa}\big)e^{3k^0_4/\kappa}+g_2^2e^{3(k^0_1+k^0_4)/\kappa},\\ &b_1:=g_1g_2\big(3+e^{3(k^0_1-k^0_2)/\kappa}\big)+2g_2^2e^{3k^0_1/\kappa},\ c_1:=g_2^2. \end{align} \end{subequations} For $\Psi_2$ we have \begin{subequations} \begin{align} &a_2:=2g_1\big(g_1+g_2e^{3k^0_1/\kappa}\big)+g_1g_2\big(e^{3k^0_1/\kappa}+e^{3k_4^0/\kappa}\big),\ d_2:=g_1^2e^{3k^0_1/\kappa}\\ &b_2:=2g_2\big(g_1+g_2e^{3k^0_1/\kappa}\big)+g_2^2e^{3k^0_4/\kappa}+\big(g_1+g_2e^{3k^0_3/\kappa}\big)\big(g_1+g_2e^{3k^0_1/\kappa}\big)e^{-3k^0_2/\kappa}. 
\end{align} \end{subequations} }\smallskip The nonplanar contributions, hereafter denoted by (NP3) and (NP4), are given by \begin{subequations}\label{NPcontrib} \begin{align} \Gamma^{N\!P}_3(k_1,k_2,k_3,k_4):=&\ (2\pi)^{-8}\int \frac{d^4k_5}{(2\pi)^4}\frac{d^4k_6}{(2\pi)^4}\ \Psi_3(k_5^0)\Delta^\text{eq}_F(k_5)\Delta^\text{eq}_F(k_6)V_{5216}V_{3465},\label{nonplanar1}\\ \Gamma^{N\!P}_4(k_1,k_2,k_3,k_4):=&\ (2\pi)^{-8}\int \frac{d^4k_5}{(2\pi)^4}\frac{d^4k_6}{(2\pi)^4}\ \Psi_4(k_5^0)\Delta^\text{eq}_F(k_5)\Delta^\text{eq}_F(k_6)V_{5163}V_{5462},\label{nonplanar2} \end{align} \end{subequations} with \begin{subequations}\label{CoeffNP} \begin{align} &\Psi_3(k^0_5):=a_3+b_3e^{3k_5^0/\kappa}+c_3e^{6k_5^0/\kappa},\\ &\Psi_4(k^0_5):=a_4+b_4e^{3k_5^0/\kappa}+c_4e^{6k_5^0/\kappa}, \end{align} \end{subequations} where the coefficients depend only on the external momenta.\footnote{The coefficients associated with the nonplanar contributions are the following. For $\Psi_3$ we have \begin{subequations} \begin{align} &a_3:=g_1^2\big(1+e^{3(k_3^0-k_4^0)/\kappa}\big)+g_1g_2e^{3k_3^0/\kappa},\ c_3:=g_1g_2\big(1+e^{-3k_2^0/\kappa}\big)+g_2^2e^{3(k_3^0-k_4^0)/\kappa},\\ &b_3:=g_1^2\big(e^{-3k_1^0/\kappa}+e^{-3k_2^0/\kappa}\big)+g_2^2\big(e^{3k_3^0/\kappa}+e^{3k_4^0/\kappa}\big)+\\ &\hspace{4truecm}+g_1g_2\big(2+e^{3(k_3^0-k_2^0)/\kappa}+e^{3(k_1^0-k_4^0)/\kappa}+e^{3(k_3^0-k_4^0)/\kappa}\big).\nonumber \end{align} \end{subequations} For $\Psi_4$ we have \begin{equation} a_4:=g_1\big(g_1+g_2e^{3k_3^0/\kappa}\big),\ b_4:=g^2_1e^{-3k_1^0/\kappa}+2g_1g_2+g_2^2e^{3k_3^0/\kappa},\ c_4:=g_2\big(g_2+g_1e^{-3k_1^0/\kappa}\big). \end{equation} } \subsection{Planar contributions.} We begin with the study of the planar contributions, eq. \eqref{Pcontrib}. Integrating over $k_6$, we can express the latter in terms of $k_5$ and the external momenta.
Namely, \begin{equation}\label{int4pt} k^0_6=k^0_5+Q_j^0, \ \vec{k}_6= A_j\left(\vec{k}_5+y^{\epsilon_j}\vec{Q}_j\right),\ y=e^{-k_5^0/\kappa},\ j=1,2, \end{equation} where $Q_j^0$ and $\vec{Q}_j$ are functions of the external momenta, which are related to the noncommutative counterparts of the usual $s,t,u$ channels. Their expressions can be read from the delta functions \eqref{o-delta} involved in eq. \eqref{Pcontrib}. The coefficients appearing in eq. \eqref{int4pt} are given by $(A_1,\epsilon_1)=(1,1)$ and $(A_2,\epsilon_2)=(e^{-Q_2^0/\kappa},0)$. Note that, anticipating the computation of \eqref{Pcontrib}, we have introduced the variable $y$ corresponding to the change of variables \eqref{ychange-variable} already discussed in $\S$\ref{sec-2point}.\smallskip Here, we are not interested in the exact expressions for the various contributions. Rather, we are going to show that all of the contributions are finite. To do so, it is convenient to exploit a particular estimate for the equivariant propagator \eqref{eq-propagator}, namely \begin{equation}\label{borne} \Delta_F^\text{eq}(k)\leq\frac{e^{-2k^0/\kappa}}{1+e^{-3k^0/\kappa}} \ \frac{8\kappa^2}{\big(\vec{k}^{\hspace{2pt}2}+\kappa^2\mu^2_{-}(k^0)\big)^2}, \end{equation} which permits us to control the growth of each contribution. Now, upon using the relations \eqref{formulassss} together with \eqref{borne} in \eqref{Pcontrib}, we find after standard computations \begin{align}\label{eq-planar} \Gamma_j^P(k_\text{ext})&\leq\ \frac{e^{-2Q^0_j/\kappa}}{A_j(2\pi)^{10}}\int_0^\infty\frac{y^6\Psi_j(y)\ dy}{(1+y^3)(1+y^3e^{-3Q^0_j/\kappa})}\int_0^1\frac{x(1-x)\ dx}{\left(-\alpha_j(y)x^2+\beta_j(y)x+\mu_{-}^2(y)\right)^{5/2}}\\ &\leq\ \frac{e^{-2Q^0_j/\kappa}}{6A_j(2\pi)^{10}}\int_0^\infty\frac{y^6\Psi_j(y)\ dy}{(1+y^3)(1+y^3e^{-3Q^0_j/\kappa})}\ \max\left[\frac{1}{\mu^2_{-}(y)},\frac{A^2_j}{\mu^2_{-}(ye^{-Q^0_j/\kappa})}\right]^{5/2},\nonumber \end{align} in which the $\Psi_j$'s are given by eq.
\eqref{CoeffP}, and the coefficients appearing in the first line of \eqref{eq-planar} are given by \begin{equation} \alpha_j(y):=\frac{\|\vec{Q}_j\|^{2}}{\kappa^2}\ y^{2\epsilon_j},\ \beta_j(y):=\alpha_j(y)+A_j^{-2}\mu_{-}^2(ye^{-Q^0_j/\kappa})-\mu_{-}^2(y). \end{equation} Now, setting $j=1$, it is easy to see that the integrand in the second line of \eqref{eq-planar} behaves (at leading order) like $\sim y^3$ when $y\to0$, while it behaves like $\sim y^{-5}$ when $y\to\infty$. This indicates that (P1) is finite. Setting $j=2$, we find that the integrand now behaves like a constant when $y\to0$, while it behaves like $\sim y^{-2}$ when $y\to\infty$. Thus, (P2) is also finite and we conclude that all of the planar contributions are UV finite. \subsection{Nonplanar contributions.} We now turn to the study of the nonplanar contributions, eq. \eqref{NPcontrib}. In this case, it is not possible to factorise out a delta function depending only on the external momenta, as was the case for the planar contributions. In particular, it is not possible to use fusion rules like those given in eq. \eqref{fusion-rule}. Rather, after integrating over $k_6$, one of the two 3-momentum delta functions involving $\vec{k}_5$ remains, and the integration with respect to the latter variable can easily be done. However, a simple analysis based on the respective positions of the contracted momenta involved in \eqref{nonplanar1} and \eqref{nonplanar2} shows that the two families (NP3) and (NP4) are of a different nature; see below.\smallskip Integrating over $k_6$ in \eqref{nonplanar1}, we obtain \begin{equation}\label{NP1p2} k^0_6=k^0_5+(k^0_4-k^0_3),\ \ \vec{k}_6=\vec{c}_1y+\vec{c}_2,\ \ \vec{k}_5=\vec{c}_3y+\vec{c}_4.
\end{equation} Making use of the bound \eqref{borne} on the propagator, we find \begin{equation}\label{eqNP1} \Gamma^{N\!P}_3(k_\text{ext})\leq\kappa^5\int\frac{\delta\left(k_4^0-k_3^0+k_2^0-k_1^0\right)y^9(1+y^3)^{-1}(1+c_0^3y^3)^{-1}\Psi_3(y)\ dy}{\left[(\vec{c}_3y+\vec{c}_4)^2+\kappa^2\mu_{-}^2(y)\right]^2\left[(\vec{c}_1y+\vec{c}_2)^2+\kappa^2\mu_{-}^2(c_0y)\right]^2}, \end{equation} where $c_0$ and $\vec{c}_i$, $i=1,\cdots,4$, are (nonvanishing) functions of the external momenta (independent of $y$) whose explicit expressions are inessential for the ensuing analysis. Note that we have dropped the overall $2\pi$ factors. In the limit $y\to0$ we find that the integrand behaves (at leading order) like $\sim y^3$, while it behaves like $\sim y^{-5}$ when $y\to\infty$. Hence, (NP3) is (UV) finite.\smallskip In the same way, integrating over $k_6$ in \eqref{nonplanar2}, we obtain \begin{equation}\label{NP2p2} k^0_6=-k^0_5+(k^0_1+k^0_3),\ \vec{k}_6=\frac{1}{y}\left(\vec{c}_1^{\hspace{3pt}'}y+\vec{c}_2^{\hspace{3pt}'}\right),\ \vec{k}_5=\vec{c}_3^{\hspace{3pt}'}y+\vec{c}_4^{\hspace{3pt}'}, \end{equation} such that \begin{equation}\label{eqNP2} \Gamma^{N\!P}_4(p_\text{ext})\leq\kappa^5\int\frac{\delta\left(p_4^0-p_3^0+p_6^0-p_5^0\right)y^9(1+y^3)^{-1}(y^3+c_0^{'\hspace{2pt}3})^{-1}\Psi_4(y)\ dy}{[(\vec{c}_4^{\hspace{3pt}'}+\vec{c}_3^{\hspace{3pt}'}y)^2+\kappa^2\mu_{-}^2(y)]^2[(\vec{c}_2^{\hspace{3pt}'}+\vec{c}_1^{\hspace{3pt}'}y)^2+\kappa^2y^2\mu_{-}^2(\frac{c_0^{'}}{y})]^2}, \end{equation} where $c^{'}_0$ and $\vec{c}_i^{\hspace{3pt}'}$, $i=1,\cdots,4$, are other (nonvanishing) functions of the external momenta (independent of $y$), and we have dropped the overall $2\pi$ factors. Mere comparison of eq. \eqref{eqNP1} and \eqref{eqNP2} shows that (NP4) is also finite.
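The power counting invoked for (NP3) can be checked numerically on a model integrand of the form appearing in eq. \eqref{eqNP1}, with all momentum-dependent coefficients ($c_0$, the $\vec{c}_i$ and the $\Psi_3$ coefficients) replaced by hypothetical order-one placeholders, and with $\Psi_3$ expressed in the variable $y=e^{-k_5^0/\kappa}$:

```python
import math

# Hypothetical order-one placeholders for the momentum-dependent coefficients;
# kappa and m are likewise placeholders with m > 0, m < kappa.
kappa, m = 1.0, 0.5
cbar = math.sqrt(1 - (m / kappa) ** 2)
a3, b3, c3 = 1.0, 1.0, 1.0
c0 = 1.0

def mu_minus_sq(y):
    # mu_-^2(y) = 1 - 2*y*sqrt(1-(m/kappa)^2) + y^2, positive definite for m > 0.
    return 1.0 - 2.0 * cbar * y + y * y

def integrand(y):
    # Psi_3 written in y = exp(-k_5^0/kappa): exp(3k^0/kappa) = y^{-3}, etc.
    psi3 = a3 + b3 / y**3 + c3 / y**6
    num = y**9 * psi3
    den = ((1.0 + y**3) * (1.0 + (c0 * y) ** 3)
           * ((y + 1.0) ** 2 + kappa**2 * mu_minus_sq(y)) ** 2
           * ((y + 1.0) ** 2 + kappa**2 * mu_minus_sq(c0 * y)) ** 2)
    return num / den

# Claimed power counting: ~y^3 as y -> 0 and ~y^-5 as y -> infinity,
# hence a finite y-integral.
print(integrand(1e-4) / 1e-4**3)  # rescaled small-y behaviour
print(integrand(1e4) * 1e4**5)    # rescaled large-y behaviour
```

Both rescaled values approach a finite constant (here $\simeq1/16$ for these placeholder values), consistent with the claimed $y^3$ and $y^{-5}$ behaviours.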
We conclude that the nonplanar contributions are finite at nonexceptional external momenta.\smallskip It remains to check that the nonplanar contributions do not become singular at specific values of the external momenta. Since $\Gamma^{(4)}_1$ involves four external momenta, the analysis is a bit less straightforward than for the 2-point functions. However, a careful analysis shows that neither (NP3) nor (NP4) becomes singular for any value of the external momenta. To proceed, we have to turn off successively (one by one) the external momenta, and repeat the operation for all possible configurations. It is easy to show that, upon turning off only one of the external momenta in eq. \eqref{NPcontrib}, the contributions (NP3) and (NP4) remain nonplanar and are still finite. When turning off two external momenta, we find that there are two configurations of the external momenta for which (NP3) and (NP4) become planar. For all of the other configurations they remain nonplanar and finite.\smallskip Setting $(k^0_1,\vec{k}_1)=(k^0_2,\vec{k}_2)$ in \eqref{nonplanar1}, one finds \begin{equation} \Gamma^{N\!P}_3(k_1,k_1,k_3,k_4)\leq\frac{4\kappa^{5/2}}{(2\pi)^{10}}V_{1134}\int\ \frac{y^6(1+y^3)^{-2}\Psi_3(y)\ dy}{(\kappa-2\sqrt{\kappa^2-m^2}y+\kappa y^2)^{5/2}}, \end{equation} whose integrand behaves like $\sim y^{-5}$ when $y\to\infty$ and like a constant when $y\to0$, so that the integral is finite. Similar computations show that $\Gamma^{N\!P}_3(k_1,k_2,k_3,k_3)\propto V_{1233}$ is finite.
Thus, no singularity shows up in this case.\smallskip Setting $(k^0_2,\vec{k}_2)=(k^0_3,\vec{k}_3)$ in \eqref{nonplanar2} and using \eqref{borne} together with \eqref{formulassss}, we find \begin{align}\label{eq-nonplanar2} \Gamma^{N\!P}_4(k_1,k_2,k_2,k_4)&\leq\int dy\ \frac{y^6\Psi_4(y)}{(1+y^3)(y^3+c_0^{'\hspace{2pt}3})} \int_0^1 \frac{(1-x)x\ e^{(k_1^0-k^0_2)/\kappa}V_{1224}\ dx}{\left(-h_1(y)x^2+h_2(y)x+\mu^2_{-}(y)\right)^{5/2}},\\ &\leq\int dy\ \frac{y^6\Psi_4(y)}{(1+y^3)(y^3+c_0^{'\hspace{2pt}3})}\frac{e^{(k_1^0-k^0_2)/\kappa}V_{1224}}{(\kappa-2\sqrt{\kappa^2-m^2}y+\kappa y^2)^{5/2}},\nonumber \end{align} where the new coefficients are given by \begin{equation} h_1(y):=\frac{(\vec{k}_4+\vec{k}_2e^{k_2^0/\kappa}y)^2}{\kappa^2},\ h_2(y):=h_1(y)+y^2\mu^2_{-}\left(\frac{e^{(k_2^0+k_4^0)/\kappa}}{y}\right)-\mu^2_{-}(y). \end{equation} From the last inequality of eq. \eqref{eq-nonplanar2}, we conclude that $\Gamma^{N\!P}_4(k_1,k_2,k_2,k_4)$ is finite since the integrand in \eqref{eq-nonplanar2} behaves like $\sim y^{-5}$ when $y\to\infty$ while it behaves like a constant when $y\to0$. A similar conclusion holds when setting $(k_1^0,\vec{k}_1)=(k_4^0,\vec{k}_4)$. Thus, we conclude that no singularity shows up in this case either. \subsection{Beta function.} In the previous section we have shown that the one-loop 4-point function for the model with equivariant kinetic operator and orientable interactions is finite and without UV/IR mixing. This indicates that the (one-loop) beta function vanishes and that the coupling constant is not renormalised. This will be discussed in the conclusion. On the other hand, from the computations of both $\Gamma^{(2)}_1$ and $\Gamma^{(4)}_1$, we can deduce the counterterms entering the definition of the renormalised action. Keeping in mind eqs.
\eqref{kinetic-map} and \eqref{o-int2}, we can write \begin{subequations} \begin{align} \mathcal{S}_{\kappa,r}[\phi,\bar{\phi}]=&\ \int \frac{d^4k}{(2\pi)^4}\left(\bar{\phi}_r(k)\tilde{\mathcal{K}}_0^\text{eq}(k)\phi_r(k)+\bar{\phi}_r(k)(m_1^2+m_2^2e^{-3k_0/\kappa})\phi_r(k)\right)+\\ &\hspace{1truecm}+\int \prod_{i=1}^4\left[\frac{d^4k_i}{(2\pi)^4}\right]\bar{\phi}_r(k_1)\phi_r(k_2)\bar{\phi}_r(k_3)\phi_r(k_4) \mathcal{V}_\textit{o}{(k_1,k_2,k_3,k_4)},\nonumber \end{align} where $\tilde{\mathcal{K}}^\text{eq}_0$ is given by \eqref{kin-hypa} with $M=0$. In particular, neither the wave functions ($\phi_r=\phi$) nor the coupling constant(s) are renormalised. The renormalised mass terms are related to the counterterms $\delta m_i^2$ and the bare quantities involved in the classical action by \begin{align} m_i^2+\delta m_i^2=m^2, \end{align} \end{subequations} where the $\delta m_i^2$ can be read off from \eqref{result-eqo}. Note that $m_1$ and $m_2$ differ from each other only by finite renormalisation terms in such a way that the tree-level structure of the action functional is preserved. \chapter{Quantum field theory on \texorpdfstring{$\mathfrak{su}(2)$}{su(2)} noncommutative spacetime.}\label{sap-ncftsu2} In this chapter, we consider both real and complex scalar field theories with quartic interactions and the massive Laplacian on $\mathbb{R}^3$ as kinetic operator. The models are built from the material introduced in $\S$\ref{sukont}, namely making use of the Kontsevich product associated with $\mathbb{R}^3_\theta$. Because of the unimodularity of the compact Lie group $SU(2)$, a natural notion of involution is provided by the ordinary complex conjugation $f\mapsto\bar{f}$, which we shall use in the following to define a reasonable reality condition for the action functional. Both UV and IR behaviours of the corresponding one-loop 2-point functions are analysed.
By a simple inspection of the perturbative expansion of the effective action $\Gamma$ (see Appendix \ref{sap-perturbation}), it can be easily realised that the one-loop 2-point correlation function for the $\mathbb{R}$-valued field case receives two types of contributions, hereafter called Type-I and Type-II contributions, depending on whether or not the contracted lines giving rise to the propagator are related to two consecutive exponential factors, upon taking into account the cyclicity of the trace $\int d^3x$. When the fields are $\mathbb{C}$-valued, the form of the interaction term determines which type of contributions have to be taken into account for the 2-point function. Only Type-I contributions matter when the interaction is given by $\int d^3x\ \bar{\phi}\star_\mathcal{K}\phi\star_\mathcal{K}\bar{\phi}\star_\mathcal{K}\phi$, while both Type-I and Type-II actually contribute when the interaction is given by $\int d^3x\ \bar{\phi}\star_\mathcal{K}\bar{\phi}\star_\mathcal{K}\phi\star_\mathcal{K}\phi$. As already mentioned in $\S$\ref{sec-interaction}, the first (resp. second) type of interactions is known as orientable (resp. nonorientable) interactions. The terminology invariant (resp. noninvariant) interactions is also sometimes used to designate these interactions. Invariance (or noninvariance) here is with respect to the transformations defined by the natural action of the automorphisms of the algebra viewed as a (right-)module on itself, compatible with the canonical Hermitian structure used here, namely $h(a_1,a_2)=\bar{a}_1\star_\mathcal{K} a_2$. Thus, we can write $\phi^g=g\star_\mathcal{K} \phi$, for any $g$ with $\bar{g}\star_\mathcal{K} g=g\star_\mathcal{K} \bar{g}={\text{\usefont{U}{dsss}{m}{n}\char49}}$ so that $h(\phi_1^g,\phi_2^g)=h(\phi_1,\phi_2)$. In Sec. \ref{sec-typeI}, we first consider the real field case and focus on the analysis of Type-I contributions, showing that they are both IR and UV finite.
The extension to the case of a complex scalar field with interaction $\int d^3x\ \bar{\phi}\star_\mathcal{K}\phi\star_\mathcal{K}\bar{\phi}\star_\mathcal{K}\phi$ is then given. Similar conclusions are obtained for the $\mathbb{C}$-valued NCFT. In Sec. \ref{sec-typeII}, we go back to the real field case and consider Type-II contributions. These are found to be IR finite. The corresponding UV behaviour is then analysed. The case of the complex model with noninvariant interaction is also discussed. The results presented here are published in \cite{moi:2016}. \section{Type-I contributions.}\label{sec-typeI} We first consider an $\mathbb{R}$-valued scalar field theory with quartic interaction whose classical action is defined by \begin{equation}\label{real-clas-action} \mathcal{S}:=\int d^3x\left(\frac{1}{2}\partial_\mu\phi\star_\mathcal{K}\partial_\mu\phi+\frac{1}{2}m^2\phi\star_\mathcal{K}\phi+\frac{\lambda}{4!}\phi\star_\mathcal{K}\phi\star_\mathcal{K}\phi\star_\mathcal{K}\phi\right)(x), \end{equation} where $\star_\mathcal{K}$ is given by \eqref{kontsev-product}. The fields and parameters are assumed to have the usual $\mathbb{R}^3$ mass dimensions, namely $[\phi]=\frac{1}{2}$, $[\lambda]=1$ and $[m]=1$. Thanks to the property \eqref{starclos} of the star product under the integration sign, the kinetic term of \eqref{real-clas-action} reduces to \begin{equation} \mathcal{S}^\text{kin}:=\int d^3x\ \big(\partial_\mu\phi\partial_\mu\phi+m^2\phi\phi\big)(x), \end{equation} where the $\partial_\mu$'s are the usual derivatives (with respect to $x^\mu$) on $\mathbb{R}^3$.\smallskip The interaction term can be conveniently recast into the form \begin{subequations}\label{interaction0} \begin{equation}\label{interaction} \mathcal{S}^\text{int}:=\frac{\lambda}{4!}\int d^3x\int\ \left[\prod_{i=1}^4 \frac{d^3k_i}{(2\pi)^3} \widetilde\phi(k_i)\right](e^{ik_1\cdot x}\star_\mathcal{K} e^{ik_2\cdot x}\star_\mathcal{K} e^{ik_3\cdot x}\star_\mathcal{K} e^{ik_4\cdot x})(x), \end{equation} which can be equivalently written as, using eq.
\eqref{duflopw}, \begin{equation}\label{interaction1} \mathcal{S}^\text{int}=\frac{\lambda}{4!}\int\ \left[\prod_{i=1}^4 \frac{d^3k_i}{(2\pi)^3} \widetilde\phi(k_i)\right]\mathcal{W}(k_1,k_2)\mathcal{W}(k_3,k_4)\delta(B(k_1,k_2)+B(k_3,k_4)). \end{equation} \end{subequations} We will use alternatively \eqref{interaction} and \eqref{interaction1} in the computation of the contributions to the 2-point correlation function. Notice that the standard conservation law of the momenta $\delta(\sum_{i=1}^4k_i)$ of the commutative $\phi^4$ theory on $\mathbb{R}^3$ is replaced by a nonlinear one, as can be seen from the delta function in \eqref{interaction1}. This strongly complicates the perturbative calculations, a difficulty which can however be partly overcome by a suitable use of \eqref{interaction} combined with properties of the plane waves and the cyclicity of the trace.\smallskip A typical contribution of Type-I to the one-loop effective action is easily found to be given by \begin{equation} \Gamma^{(2)}_{1;I}=\int d^3x \left[\prod_{i=1}^4 \frac{d^3k_i}{(2\pi)^3}\right]\widetilde\phi(k_3)\widetilde\phi(k_4)\frac{\delta(k_1+k_2)}{k_1^2+m^2}(e^{ik_1\cdot x}\star_\mathcal{K} e^{ik_2\cdot x}\star_\mathcal{K} e^{ik_3\cdot x}\star_\mathcal{K} e^{ik_4\cdot x})(x)\label{type1} \end{equation} where we dropped the overall constant $\sim\lambda$.
Combining \eqref{type1} with \eqref{Bproperties}, \eqref{theweight} and \begin{equation}\label{plane-norm} (e^{ikx}\star_\mathcal{K} e^{-ikx})(x)=\frac{4}{\theta^2}\frac{\sin^2(\frac{\theta}{2}|k|)}{|k|^2}, \end{equation} we obtain \begin{subequations} \begin{align}\label{amplitude-I} \Gamma^{(2)}_{1;I}&=\int d^3x\frac{d^3k_3}{(2\pi)^3}\frac{d^3k_4}{(2\pi)^3}\widetilde\phi(k_3)\widetilde\phi(k_4)(e^{ik_3x}\star_\mathcal{K} e^{ik_4x})(x) \omega_{I}\\ &=\int d^3x(\phi\star_\mathcal{K}\phi)(x)\omega_{I}=\int d^3x\ \phi(x)\phi(x)\omega_{I},\nonumber \end{align} with \begin{equation} \omega_{I}=\frac{4}{\theta^2}\int\frac{d^3k}{(2\pi)^3}\frac{\sin^2(\frac{\theta}{2}|k|)}{k^2(k^2+m^2)} \label{type1omega}. \end{equation} \end{subequations} This integral can be easily computed using spherical coordinates, \textit{i.e.} \begin{equation}\label{type1-final} \omega_{I}=\frac{4}{\theta^2}\int\frac{d^3k}{(2\pi)^3}\frac{\sin^2(\frac{\theta}{2}|k|)}{k^2(k^2+m^2)}=\frac{1}{\pi^2\theta^{2}}\int^{\infty}_{0}dr\frac{1-\cos(\theta r)}{r^2+m^2}=\frac{1-e^{-\theta m}}{2\pi m\theta^2}, \end{equation} from which we conclude that $\omega_{I}$ is finite whenever $\theta\neq0$. Moreover, expanding the exponential around $m\sim 0$, we find $\omega_{I}=(2\pi\theta)^{-1}+\mathcal{O}(m)$, indicating that the massless case is also not singular. Hence, Type-I contributions are UV finite and do not exhibit any IR singularity. We conclude that whenever $\theta\ne0$, Type-I contributions cannot generate UV/IR mixing. Note that the closed star product structure of the quadratic part of the effective action survives the one-loop quantum corrections, as is apparent from \eqref{amplitude-I}.\smallskip From \eqref{type1-final}, we readily obtain the small $\theta$ expansion of $\omega_{I}$, namely \begin{equation}\label{comutlim1} \omega_{I}\xrightarrow[\theta\rightarrow 0]{}\Lambda+\mathcal{O}(1),\ \Lambda:=\frac{1}{2\pi\theta}.
\end{equation} Thus, we recover the expected linear divergence (showing up when $\Lambda\to\infty$) which occurs in the 2-point function for the commutative theory with $\Lambda$ as the UV cutoff. In physical terms, the present noncommutativity of $\mathfrak{su}(2)$ type gives rise to a natural UV cutoff for the scalar field theory \eqref{real-clas-action} which regularises both the UV and the IR (massless case). \paragraph{Complex scalar field theories.}{} The above one-loop analysis extends easily to Type-I contributions for the 2-point function of the complex scalar field theories with orientable or nonorientable interactions. \\ In the case of invariant interaction, the 2-point function only receives Type-I contributions. The action is \begin{equation} \mathcal{S}=\int d^3x\big[\partial_\mu\bar{\phi}\star_\mathcal{K}\partial_\mu\phi+m^2\bar{\phi}\star_\mathcal{K}\phi+ {\lambda}\bar{\phi}\star_\mathcal{K}\phi\star_\mathcal{K}\bar{\phi}\star_\mathcal{K}\phi\big]\label{complx-clas-action}. \end{equation} The one-loop quadratic part of the effective action is now given by \begin{equation} \Gamma^{(2)}_{1;I}=\int d^3x \left[\prod_{i=1}^4 \frac{d^3k_i}{(2\pi)^3}\right]\widetilde{\bar{\phi}}(k_3)\widetilde{\phi}(k_4)\frac{\delta(k_1+k_2)}{k_1^2+m^2}(e^{ik_1x}\star_\mathcal{K} e^{ik_2x}\star_\mathcal{K} e^{ik_3x}\star_\mathcal{K} e^{ik_4x})(x)\label{type1-complx} \end{equation} where we again dropped the overall constant $\sim\lambda$. Adapting the previous analysis, we immediately obtain \begin{equation}\label{omega-complx} \Gamma^{(2)}_{1;I}=\int d^3x\ \bar{\phi}\phi\ \omega_{I}, \end{equation} with $\omega_{I}$ still given by eq. \eqref{type1omega}. We conclude that the Type-I contribution of the complex case is similar to that of the real case. Hence, it is finite and the same conclusions as before hold.
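The closed form \eqref{type1-final}, its massless limit and the small-$\theta$ behaviour \eqref{comutlim1} are easily checked numerically. The following sketch (the sample values of $\theta$ and $m$ are purely illustrative) compares a direct quadrature of the radial integral with $(1-e^{-\theta m})/(2\pi m\theta^2)$ and with the cutoff $\Lambda=(2\pi\theta)^{-1}$.

```python
import numpy as np
from scipy.integrate import quad

def omega_I_quad(theta, m):
    # omega_I = (pi^2 theta^2)^{-1} int_0^inf (1 - cos(theta r))/(r^2 + m^2) dr.
    # The cosine part is a Fourier integral, handled by quad's QAWF weight.
    lorentz = lambda r: 1.0/(r**2 + m**2)
    cos_part, _ = quad(lorentz, 0.0, np.inf, weight='cos', wvar=theta)
    return (np.pi/(2.0*m) - cos_part) / (np.pi**2 * theta**2)

def omega_I_closed(theta, m):
    return (1.0 - np.exp(-theta*m)) / (2.0*np.pi*m*theta**2)

theta, m = 0.7, 1.3
assert abs(omega_I_quad(theta, m) - omega_I_closed(theta, m)) < 1e-6

# Massless limit: omega_I -> (2 pi theta)^{-1}, i.e. no IR singularity.
assert abs(omega_I_closed(theta, 1e-8) * 2.0*np.pi*theta - 1.0) < 1e-6

# Small-theta limit: omega_I ~ Lambda = (2 pi theta)^{-1}, the linear divergence.
assert abs(omega_I_closed(1e-4, m) * 2.0*np.pi*1e-4 - 1.0) < 1e-3
```

The quadrature uses $\int_0^\infty dr/(r^2+m^2)=\pi/2m$ for the constant part; all three assertions pass for the sample parameters.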
A similar conclusion obviously holds true for the Type-I contributions involved in the 2-point function related to the scalar theory with noninvariant interaction. However, Type-II contributions mentioned at the beginning of this section are also involved in that case. \section{Type-II contributions.}\label{sec-typeII} Let us go back to the real scalar field theory \eqref{real-clas-action}. A typical Type-II contribution to the one-loop quadratic part of the effective action is given by \begin{subequations} \begin{equation}\label{type2} \Gamma^{(2)}_{1;I\hspace{-1pt}I}=\int\frac{d^3k_2}{(2\pi)^3}\frac{d^3k_4}{(2\pi)^3}\widetilde\phi(k_2)\widetilde\phi(k_4)\omega_{I\hspace{-1pt}I}(k_2,k_4), \end{equation} with \begin{align}\label{type2-omega} \omega_{I\hspace{-1pt}I}(k_2,k_4)&=\int d^3x\frac{d^3k_1}{(2\pi)^3}\frac{d^3k_3}{(2\pi)^3}\frac{\delta(k_1+k_3)}{k_1^2+m^2}(e^{ik_1\cdot x}\star_\mathcal{K} e^{ik_2\cdot x}\star_\mathcal{K} e^{ik_3\cdot x}\star_\mathcal{K} e^{ik_4\cdot x})(x)\\ &=\int d^3x\frac{d^3k}{(2\pi)^3}\frac{1}{k^2+m^2}(e^{ik\cdot x}\star_\mathcal{K} e^{ik_2\cdot x}\star_\mathcal{K} e^{-ik\cdot x}\star_\mathcal{K} e^{ik_4\cdot x})(x),\nonumber \end{align} \end{subequations} where the internal momentum involves two nonadjacent exponential factors.\smallskip We first consider the infrared regime of \eqref{type2-omega} corresponding to the small external momenta region, \textit{i.e.} $k_2\sim0,\ k_4\sim0$.
From \eqref{theweight}, we infer \begin{equation}\label{expo-unit} \lim_{k_2\to0}(e^{ik_1\cdot x}\star_\mathcal{K} e^{ik_2\cdot x})(x)=e^{ik_1\cdot x} \end{equation} which simply reflects the fact that $E_{k=0}(\hat{x})={\text{\usefont{U}{dsss}{m}{n}\char49}}$, or equivalently, $Q(1)={\text{\usefont{U}{dsss}{m}{n}\char49}}$.\\ Then, we can write \begin{align}\label{ir-omega2} \omega_{I\hspace{-1pt}I}(0,k_4)&=\int d^3x\frac{d^3k}{(2\pi)^3}\frac{1}{k^2+m^2}(e^{ik\cdot x}\star_\mathcal{K} e^{-ik\cdot x}\star_\mathcal{K} e^{ik_4\cdot x})(x)\\ &=\delta(k_4)\frac{4}{\theta^2}\int\frac{d^3k}{(2\pi)^3}\frac{\sin^2(\frac{\theta}{2}|k|)}{k^2(k^2+m^2)}=\delta(k_4)\ \omega_{I},\nonumber \end{align} where $\omega_{I}$ is given by \eqref{type1-final} and we have used \eqref{plane-norm} to obtain the second equality. From the discussion about Type-I contributions given in $\S$\ref{sec-typeI}, we conclude that \eqref{ir-omega2} is not IR singular (and is also UV finite). A similar result holds true for $\omega_{I\hspace{-1pt}I}(k_2,0)$. The extension to complex scalar field theories is obvious. We conclude that no IR singularity shows up in the 2-point functions for both real and complex (even massless, $m=0$) scalar field theories at one loop, so that these NCFT are free from UV/IR mixing.\smallskip Unfortunately, since the exponentials no longer simplify in \eqref{type2-omega}, we now have to deal with infinite expansions stemming from the $B(k_1,k_2)$ function in \eqref{theweight} and/or delta functions with nonlinear arguments, which considerably complicate the UV analysis of the Type-II contributions. However, the analysis can be slightly simplified by considering a somewhat restricted setting in which the coordinate functions $x_\mu$ satisfying the relation \eqref{su2per} for the Lie algebra $\mathfrak{su}(2)$ are represented as Pauli matrices.
Note that a similar representation is used in models related to quantum gravity \cite{Freidel:2008,Guedes:2013}, which involve noncommutative structures similar to the ones considered here. Namely, we introduce the following algebra morphism $\rho: \mathfrak{su}(2)\to \mathbb{M}_2(\mathbb{C})$ \begin{equation}\label{decadix} \rho(\hat{x}_\mu) = \theta \sigma_\mu,\ \rho({\text{\usefont{U}{dsss}{m}{n}\char49}}) = {\text{\usefont{U}{dsss}{m}{n}\char49}}. \end{equation} From the usual properties of the Pauli matrices, we obtain the following relation \begin{equation} \label{representation_operateur} \rho (\hat{x}_i \hat{x}_j) = \rho(\hat{x}_i) \rho(\hat{x}_j) = \theta^2 \delta_{ij} {\text{\usefont{U}{dsss}{m}{n}\char49}} + i \frac{\theta}{2} \varepsilon_{ij}^{\hspace{5pt} k} \rho (\hat{x}_k), \end{equation} which will give rise to a rather simple expression for the exponential factors occurring in \eqref{type2-omega}. Indeed, after some algebraic manipulations, we obtain \begin{equation}\label{expo-coupe} e^{ik^\mu \rho(\hat{x}_\mu)} = \cos\left(\theta |k| \right) {\text{\usefont{U}{dsss}{m}{n}\char49}} + i \frac{\sin\left(\theta |k| \right)}{\theta |k|} k^\mu \rho(\hat{x}_\mu), \end{equation} which finally implies \begin{equation}\label{commut-expon} [ e^{ik^\mu\rho(\hat{x}_\mu)} , e^{ip^\nu \rho(\hat{x}_\nu)}] = - i \theta \frac{\sin\left(\theta |k| \right)}{\theta |k|} \frac{\sin\left(\theta |p| \right)}{\theta |p|} \varepsilon_{\sigma \nu}^{\hspace{8pt} \rho} k^\sigma p^\nu \rho(\hat{x}_\rho). \end{equation} On the other hand, $\Gamma^{(2)}_{1;I\hspace{-1pt}I}$, eq.
\eqref{type2}, can be conveniently rewritten as \begin{equation} \label{TypeII_asI} \Gamma^{(2)}_{1;I\hspace{-1pt}I} = \Gamma^{(2)}_{1;I} + \int \frac{d^3k_2 }{(2\pi)^3} \frac{d^3k_4 }{(2\pi)^3} \widetilde{\phi}(k_2) \widetilde{\phi}(k_4) I(k_2,k_4), \end{equation} where, in obvious notation, \begin{align}\label{ik2k4} I(k_2,k_4) =& \int \frac{d^3k }{(2\pi)^3} \frac{d^3x}{k^2+m^2} \left[ e^{ik^\nu x_\nu} , e^{ik_2^\nu x_\nu} \right]_{\star_\mathcal{K}} \star_\mathcal{K} e^{-ik^\nu x_\nu} \star_\mathcal{K} e^{ik_4^\nu x_\nu} \\ =& \left(\frac{2}{\theta}\right)^4 \int \frac{d^3k }{(2\pi)^3} \frac{d^3x}{k^2+m^2} \left(\frac{\sin(\frac{\theta}{2}|k|)}{|k|}\right)^2 \frac{\sin(\frac{\theta}{2}|k_2|)}{|k_2|} \frac{\sin(\frac{\theta}{2}|k_4|)}{|k_4|}\nonumber\\ &\times \mathcal{Q}^{-1} \left( \left[ e^{ik^\mu \hat{x}_\mu} , e^{ik_2^\nu \hat{x}_\nu} \right] e^{-ik^\sigma \hat{x}_\sigma} e^{ik_4^\rho \hat{x}_\rho} \right)\nonumber . \end{align} Hence, making use of \eqref{decadix} together with \eqref{commut-expon}, we arrive, after a lengthy computation given in Appendix \ref{appendixb}, at the following expression \begin{align} \label{typeII_spherique} I(k_2,k_4)&\\ = &\ \frac{J(k_2,k_4)}{\pi^3 \theta^4} \int d\alpha d\beta d r \frac{\sin^2(\frac{\theta}{2}r)}{r^2+m^2} \left[ \frac{1}{2} \sin\left(2 \theta r \right) \sin\gamma + \sin^2\left(\theta r \right) \sin^2\frac{\gamma}{2} \right] \sin\alpha,\nonumber \end{align} with \begin{equation}\label{france-j} J(k_2,k_4) = \frac{\sin\left(\theta |k_2| \right) \sin(\frac{\theta}{2}|k_2|)}{|k_2|} u^\mu \delta_\mu^{'}(k_4).
\end{equation} In the above expressions, we have introduced spherical coordinates for the momentum $k$, namely $k=(r=|k|,\alpha,\beta)$, together with the following notations: $u_\mu$ is the $\mu$-component of a unit vector $u$, $\gamma$ is the angle between the momenta $k$ and $k_2$ (depending only on $\alpha$, $\beta$ and $\alpha_2$, $\beta_2$ entering the spherical coordinates for $k_2$) and $\delta^\prime_\mu$ is defined by $\langle \delta^\prime_\mu, f\rangle=-\frac{\partial f}{\partial k_{4}^\mu}$ for any test function $f$. Upon introducing an estimate on the integrand appearing in eq. \eqref{typeII_spherique}, we can check that $I(k_2,k_4)$ is finite; see eq. \eqref{typeIIbounds}. We conclude that the Type-II contributions are UV finite. The exact derivation can be performed as follows.\smallskip The radial integration in \eqref{typeII_spherique} can be performed by further using \begin{subequations}\label{GS-integral-prime} \begin{align} &\int_0^\infty dx\frac{\cos(ax)}{\beta^2+x^2}=\frac{\pi}{2\beta}e^{-a\beta},\ a\geq 0,\ \text{Re}(\beta)>0,\\ &\int_0^\infty dx\frac{\sin(ax)}{\beta^2+x^2} = \frac{1}{2\beta} \left( e^{-a \beta} \overline{\text{Ei} (a \beta)} - e^{a \beta} \text{Ei} (- a \beta) \right),\ a>0,\ \beta >0, \end{align} \end{subequations} where $\text{Ei}$ is the exponential integral function defined by \begin{equation} \text{Ei}(x) = - \underset{\epsilon \rightarrow 0^{+}}{\lim} \left[ \int^{-\epsilon}_{-x} \frac{e^{-t}}{t}dt + \int^\infty_\epsilon \frac{e^{-t}}{t}dt\right],\ x>0, \end{equation} and we have \begin{equation} \text{Ei}(x) = \textbf{C} + \ln |x| +\sum \limits^\infty_{n=1} \frac{x^n}{n\cdot n!},\ x\neq 0, \end{equation} in which $\textbf{C}$ is the Euler--Mascheroni constant.
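Both the matrix identity \eqref{expo-coupe} and the radial integrals \eqref{GS-integral-prime} lend themselves to a quick numerical check. The sketch below (sample momenta and parameters are arbitrary) verifies the exponential of $k^\mu\rho(\hat{x}_\mu)=\theta\,k\cdot\sigma$ against its closed form, and the two Fourier-type integrals against their elementary and $\text{Ei}$ expressions; for real positive arguments, $\overline{\text{Ei}(a\beta)}$ reduces to $\text{Ei}(a\beta)$, available as `scipy.special.expi`.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad
from scipy.special import expi

# --- e^{i k.rho(x)} = cos(theta|k|) 1 + i sin(theta|k|)/(theta|k|) k.rho(x),
#     with rho(x_mu) = theta sigma_mu (Pauli matrices).
sigma = [np.array([[0., 1.], [1., 0.]], dtype=complex),
         np.array([[0., -1j], [1j, 0.]], dtype=complex),
         np.array([[1., 0.], [0., -1.]], dtype=complex)]
theta = 0.6
k = np.array([0.3, -1.1, 0.7])
nk = np.linalg.norm(k)
k_rho = theta * sum(ki*si for ki, si in zip(k, sigma))
lhs = expm(1j*k_rho)
rhs = np.cos(theta*nk)*np.eye(2) + 1j*np.sin(theta*nk)/(theta*nk)*k_rho
assert np.allclose(lhs, rhs)

# --- int_0^inf cos(ax)/(b^2+x^2) dx = (pi/2b) e^{-ab}.
a, b = 1.4, 0.8
cos_int, _ = quad(lambda x: 1.0/(b**2 + x**2), 0.0, np.inf, weight='cos', wvar=a)
assert abs(cos_int - np.pi/(2.0*b)*np.exp(-a*b)) < 1e-7

# --- int_0^inf sin(ax)/(b^2+x^2) dx = (1/2b)(e^{-ab} Ei(ab) - e^{ab} Ei(-ab)).
sin_int, _ = quad(lambda x: 1.0/(b**2 + x**2), 0.0, np.inf, weight='sin', wvar=a)
closed = (np.exp(-a*b)*expi(a*b) - np.exp(a*b)*expi(-a*b)) / (2.0*b)
assert abs(sin_int - closed) < 1e-6
```

The oscillatory integrals are handled by `quad`'s Fourier weights over the semi-infinite interval; all assertions pass for the sample values.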
We obtain \begin{subequations}\label{commutative_typeII} \begin{align}\label{commutative_typeII_1} \frac{1}{2}\int d r \frac{\sin^2(\frac{\theta}{2}r)}{r^2+m^2}&\sin\left(2 \theta r \right)\\ =&\ \frac{1}{16m} \left[ 2e^{-2 \theta m} \overline{\text{Ei} (2 \theta m)} - 2e^{2 \theta m} \text{Ei} (-2 \theta m) - e^{-3 \theta m} \overline{\text{Ei} (3 \theta m)} \right. \nonumber \\ &\hspace{1.5truecm}+ \left. e^{3 \theta m} \text{Ei} (-3 \theta m) - e^{- \theta m} \overline{\text{Ei} ( \theta m)} + e^{ \theta m} \text{Ei} (- \theta m) \right], \nonumber \end{align} and \begin{equation}\label{commutative_typeII_2} \int d r \frac{\sin^2(\frac{\theta}{2}r)}{r^2+m^2} \sin^2\left(\theta r \right) = \frac{\pi}{8m} \left[ 1-\left(1+\sinh(\theta m)\right)e^{-2\theta m} \right]. \end{equation} \end{subequations} In the small $\theta$ limit (\textit{i.e.} the formal commutative limit $\theta\to0$), we infer \begin{subequations}\label{decadix-1} \begin{equation} \frac{J(k_2,k_4)}{\pi^3 \theta^4} = \frac{1}{\pi^3 \theta^2} |k_2| u^n \delta^{'}_n(k_4) + \mathcal{O}(1), \end{equation} and eq. \eqref{commutative_typeII} becomes \begin{align} &\frac{1}{2}\int d r \frac{\sin^2(\frac{\theta}{2}r)}{r^2+m^2}\sin\left(2 \theta r \right)=(6\ln3-8\ln2)\theta+\mathcal{O}(\theta^2),\\ &\int d r \frac{\sin^2(\frac{\theta}{2}r)}{r^2+m^2} \sin^2\left(\theta r \right) =\frac{\pi}{8} \theta + \mathcal{O}(\theta^2). \end{align} \end{subequations} Combining \eqref{decadix-1} with \eqref{typeII_spherique} yields the following small $\theta$ limit: \begin{equation} I(k_2,k_4)=\frac{C(\alpha_2,\beta_2)}{\theta} |k_2| u^n \delta^{'}_n(k_4) + \mathcal{O}(1), \end{equation} where $C(\alpha_2,\beta_2)$ is finite. Hence, as for the Type-I contributions, the $\theta$ expansion of $I(k_2,k_4)$ is \begin{equation} I\xrightarrow[\theta\rightarrow 0]{}\Lambda+\mathcal{O}(1),\ \Lambda=\frac{1}{2\pi\theta}.
\end{equation} Thus, combining this result with the decomposition \eqref{TypeII_asI} of $\Gamma_{1;I \hspace{-1pt} I}^{(2)}$, we recover once more the expected linear divergence when $\Lambda\to\infty$ ($\theta\to0$) occurring in the 2-point function for the commutative theory. Again, the present $\mathfrak{su}(2)$ noncommutativity generates a natural UV cutoff for the scalar field theory. Notice that this holds true even when $m=0$. This result obviously extends to complex scalar field theories. \part*{Summary and outlook.\addcontentsline{toc}{part}{Summary and outlook.}} In the present dissertation, we have considered various candidates for a quantum spacetime to be involved in the description of physical phenomena at some quantum gravity scale. These candidates were characterised by different Lie algebras of coordinate operators, say $\mathfrak{g}=\text{Lie}(\mathcal{G})$, which can be gathered into two different categories. In one case, $\mathfrak{g}$ was semisimple, while in the other case it was solvable. Accordingly, in the former case the corresponding Lie group was unimodular, while in the latter it was not. Starting from their respective algebras of coordinate operators, we have constructed various families of star products within different approaches. Then, focusing on $\mathbb{R}^3 _\theta$ (a deformation of $\mathbb{R}^3$ with $\mathfrak{su}(2)$ noncommutativity) and $\kappa$-Minkowski, we have constructed various models of noncommutative field theory and studied their quantum properties at one-loop order. Part \ref{ch-ncst} was devoted to the construction of star products associated with these spaces. Part \ref{part-ncft} was devoted to the study of the one-loop order quantum properties of various models of noncommutative scalar field theory with quartic interactions.\smallskip In Chap. \ref{sec-products}, we have considered various quantum spaces whose algebras of coordinates were semisimple Lie algebras.
We have shown that, assuming the existence of a linear, invertible, quantisation map $Q$, it is possible to construct star products, eq. \eqref{sustar}, only by determining the expression for the deformed plane waves $Q(e^{ip\cdot x})$. Identifying these deformed plane waves with projective representations of $\mathcal{G}$, eq. \eqref{projective}, we have highlighted that inequivalent families of star products can in principle be classified by the second cohomology group of $\mathcal{G}$ with values in ${\mathbb{C}\hspace{-2pt}\setminus\hspace{-2pt}\lbrace0\rbrace}$, \textit{i.e.} $H^2\big(\mathcal{G},{\mathbb{C}\hspace{-2pt}\setminus\hspace{-2pt}\lbrace0\rbrace}\big)$. The advantage of this point of view is that many examples of $H^2\big(\mathcal{G},\mathcal{A}\big)$, with $\mathcal{A}$ an Abelian group, have already been worked out in the mathematical literature. On the other hand, in view of applications to noncommutative field theories (\textit{i.e.} with actual numbers in the end), it is necessary to exhibit (at least) one representative of the deformed plane waves, \textit{i.e.} one representative of (one of) the equivalence classes classified by $H^2\big(\mathcal{G},\mathcal{A}\big)$. As we have shown, even in the simplest case of $\mathcal{G}=SU(2)$, the derivation of the expression of $Q(e^{ip\cdot x})$ is not trivial and necessitates a few assumptions. This is true despite the fact that $H^2\big(\mathcal{G},\mathcal{A}\big)$ may be trivial. To this end, we have proposed a systematic method to derive explicit expressions for the deformed plane waves, whose main lines we now briefly recall. This approach amounts to representing the abstract coordinate operators (generators of $\mathfrak{g}$) as differential operators acting on some Hilbert space of functions.
Although such an idea is not new, we have emphasised that in the mathematical-physics literature the preservation of the various involutive structures underlying the construction of the noncommutative spacetime is not always taken seriously; namely, the selfadjoint coordinate operators are not represented as selfadjoint differential operators. In our opinion, this may have dramatic consequences on the construction of the algebra of fields modelling the quantum space, as well as on the study of noncommutative field theories (and on any other physical application), as it prevents us from defining a reasonable reality condition for the action functional describing the dynamics of such models. Therefore, in our derivation, particular attention has been paid to the preservation of the various involutive structures all along the various steps leading to the expressions of the star products. We have shown that the admissible expressions for the differential involutive representations can be obtained from a set of four master differential equations, eqs. \eqref{diff-rep3} and \eqref{diff-rep4}. Assuming such a representation to be chosen, we have shown that the expression for $Q(e^{ip\cdot x})$ can be determined by enforcing $Q(f)\triangleright g=f\star g$ and $Q(f)\triangleright1=f$, together with the polar decomposition of the deformed plane waves, eq. \eqref{polar}. Focusing on $\mathfrak{g}=\mathfrak{su}(2)$, we have shown that further assuming the representation to be $SO(3)$-equivariant singles out a particular family of involutive representations indexed by two real $SO(3)$-invariant functionals, eq. \eqref{general_rep}. In this case, the deformed plane waves are found to be characterised by two (representation dependent) functions of the momenta whose expressions are given by two Volterra equations, eq. \eqref{volterra}.
The product of two plane waves is merely given (up to some details) by the Baker-Campbell-Hausdorff formula for $SU(2)$, reflecting the nonlinear composition law between momenta. Among these representations, we have shown that one of them yields a star product equivalent to the Kontsevich product for the Poisson manifold dual to the finite-dimensional Lie algebra $\mathfrak{su}(2)$, namely closed for the trace functional defined by the usual Lebesgue integral $\int d^3x$. Using this product, we have computed in Chap. \ref{sap-ncftsu2} the one-loop 2-point functions for both real and complex, massive and massless, scalar field theories with quartic interactions. We have exhibited two types of contributions, depending on whether or not the Wick-contracted lines giving rise to the propagator involve two adjacent exponential factors; see eq. \eqref{interaction0}. We have found that both Type-I and Type-II contributions were ultraviolet (UV) finite with no infrared (IR) singularity, even in the massless case. This likely indicates that such theories are free of UV/IR mixing. Moreover, we have found that the deformation parameter $\theta$ plays the role of a natural UV cutoff in this context. These results qualitatively agree with previous similar investigations; see, e.g., \cite{JCW:2013b}.\smallskip One of the results of Chap. \ref{sap-ncftsu2} is the highlighting of the limitations of the star product formulation for studying noncommutative field theories, at least for some choices of spacetime noncommutativity. This is clear from the computation of the Type-II contribution in $\S$\ref{sec-typeII}. The reason is the very complicated structure of the Baker-Campbell-Hausdorff formula for semisimple Lie algebras. An alternative to this approach consists in working directly at the level of the operators. However, this presumes the existence of a matrix basis in which the operators can be conveniently decomposed.
This is the case for the Moyal space \cite{Gracia:1988}, as well as for any quantum space whose underlying group is compact. In this latter case, a basis can be obtained by mere application of the Peter-Weyl theorem which states (among other things) that the matrix coefficients of the irreducible unitary representations of $\mathcal{G}$ form an orthonormal basis of $L^2(\mathcal{G})$, the convolution algebra of $\mathcal{G}$. Unfortunately, no such basis exists for $\kappa$-Minkowski (in this case, one has to deal with objects such as direct integrals of representations). Fortunately, because the $\kappa$-Minkowski algebra of coordinates is a solvable Lie algebra, the corresponding Baker-Campbell-Hausdorff formula enjoys nice properties, as illustrated in $\S$\ref{sec-convolution}. This fact holds for any other quantum space with solvable Lie algebra noncommutativity. In the case where $\mathfrak{g}$ is nilpotent, the Baker-Campbell-Hausdorff formula even admits a finite expansion. Unlike for $\mathfrak{su}(2)$, for this type of noncommutativity the star product formulation proves efficient. This has been illustrated in the present dissertation by the computation of the one-loop 2-point and 4-point functions for noncommutative field theories built on $\kappa$-Minkowski.\smallskip Natural extensions of this work would be to consider other Lie groups than $SU(2)$, taking advantage of the natural cohomological setting underlying the construction of the deformed plane waves; see $\S$\ref{sec-dpw}. Indeed, group cohomology with values in an Abelian group is the proper tool to investigate the so-called central extensions of a group. This would enable us to study how noncommutative field theory on a given quantum space is modified when built from inequivalent star products. For instance, we could apply this framework to $SL(2,\mathbb{R})$, of which the $ax+b$ group is a subgroup. Regarding the results obtained in Chap.
\ref{sap-ncftsu2}, it would be interesting to study the full renormalisation properties of noncommutative field theories on $\mathbb{R}^3_\theta$. Intermediate steps would be to compute the one-loop 4-point functions for the $\phi^4$ theory, as well as to consider other polynomial interactions such as $\phi^6$. Indeed, it is known that, in the commutative case, the 3-dimensional $\phi^4$ model is super-renormalisable. Therefore, exploring the $\phi^6$ model would provide a way to test whether the quantum behaviour is really improved on the $\mathfrak{su}(2)$ noncommutative space, and in particular whether the deformation parameter still provides a cutoff. However, for the reason already mentioned, the right approach to undertake such a study should be to use the natural matrix basis stemming from the Peter-Weyl decomposition of $SU(2)$. This approach has already proved useful in the context of gauge theory on $\mathbb{R}^3_\theta$; see, e.g., \cite{JCW:2016}.\smallskip The other main concern of the present study was noncommutative field theory on a $\kappa$-Minkowski background. In Chap. \ref{sec-Minkowski}, we have derived a star product associated with $\kappa$-Minkowski using an approach slightly different from the above-mentioned one. In this case, we have taken advantage of tools from abstract harmonic analysis and group C*-algebras, identifying quantisation maps with *-representations of the convolution algebra of $\mathcal{G}_{d+1}$, the nonunimodular locally compact Lie group obtained by exponentiating the $\kappa$-Minkowski algebra of coordinate operators. This approach was motivated by the original Weyl quantisation scheme used to construct the Groenewold-Moyal product in quantum mechanics. This has been achieved by replacing the Heisenberg group with the $ax+b$ group, which is isomorphic to $\mathcal{G}_{d+1}$ in two dimensions ($d=1$).
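For reference, the $\kappa$-Minkowski coordinate algebra whose exponentiation yields $\mathcal{G}_{d+1}$ can be written (in a widely used convention; signs and factors of $i$ vary in the literature) as
\begin{equation*}
[\hat{x}^0,\hat{x}^i]=\frac{i}{\kappa}\,\hat{x}^i,\qquad [\hat{x}^i,\hat{x}^j]=0,\qquad i,j=1,\dots,d,
\end{equation*}
which makes its solvable (indeed, $ax+b$-type for $d=1$) structure manifest.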
This enables us to construct quite easily a well-controlled star product, indicating that the Weyl quantisation scheme provides a natural and powerful framework to describe $\kappa$-deformations of the Minkowski spacetime. Actually, this approach might be applied profitably to the construction of star products for any other quantum space whose algebra of coordinates is a solvable or nilpotent Lie algebra (note that this is the case for the Heisenberg algebra entering the construction of the Groenewold-Moyal product). Moreover, within this framework, the star product, the involution and the measure of integration are canonically provided from their corresponding notions at the level of the convolution algebra of the group. Note that these structures actually underlie the three most well known examples of quantum spacetime, \textit{i.e.} the Moyal space, $\mathbb{R}^3_\theta$ and $\kappa$-Minkowski.\smallskip In Chap. \ref{ch-ncft}, we have discussed the properties a reasonable action functional describing the dynamics of noncommutative scalar fields on $\kappa$-Minkowski should have. In view of the very important role played by the Poincar\'{e} algebra in ordinary quantum field theory, together with the fact that $\kappa$-Minkowski supports a natural action of the $\kappa$-Poincar\'{e} algebra (a deformation of the ordinary Poincar\'{e} Lie algebra) which plays the role of the algebra of symmetries of the quantum space, it is physically relevant to require the $\kappa$-Poincar\'{e} invariance of any physically reasonable action functional. This dictated the use of the Lebesgue integral in the construction of the action functional. Despite its very simple expression, it turns out that the Lebesgue integral is not a trace for the star product associated with $\kappa$-Minkowski. Instead, we have shown that the Lebesgue integral defines a KMS weight on the algebra of fields modelling $\kappa$-Minkowski.
This indicates that the $\kappa$-Poincar\'{e} invariance of the action functional trades the cyclicity of the Lebesgue integral for a KMS condition. As discussed in $\S$\ref{sec-KMS}, this KMS condition \begin{equation}\label{finaleq} \zeta\big((\sigma_t\triangleright f)\star g\big)=\zeta\big(g\star (\sigma_{t-i}\triangleright f)\big), \end{equation} represents an abstract version of the KMS condition introduced a long time ago as a tool to characterise equilibrium temperature states of quantum systems in field theory and statistical physics. In that context, however, the KMS condition holds at the level of the correlation function $\langle\Sigma_t(A)B\rangle_\beta$ computed from some thermal vacuum, where $A$ and $B$ are now functionals of the fields and $\Sigma_t$ is the Heisenberg evolution operator, hence elements pertaining to the algebra of observables of the theory. But whenever a KMS condition holds true on the algebra of observables of a quantum system or a quantum field theory, the flow generated by the modular group, \textit{i.e.} the Tomita flow, may be used to define a global (observer-independent) time which can be interpreted as the ``physical time." This reflects the deep correspondence between KMS and dynamics, and underlies the interesting proposal about the thermal origin of time introduced in \cite{Connes:1994}, namely the ``emergence of time" from noncommutativity. Therefore, it would be tempting to interpret $\sigma_t$, eq. \eqref{defautomorphism}, as generating a ``physical time" for the present system, akin to the thermal time mentioned above. However, no conclusion can yet be drawn. In fact, eq. \eqref{finaleq}, linked to the modular group, only holds at the level of $\mathcal{M}_\kappa$, the algebra of (classical) fields modelling the $\kappa$-Minkowski space. To show that a natural global time can be defined requires determining whether eq. \eqref{finaleq} forces a KMS condition to hold true at the level of the algebra of observables.
This could be achieved by actually showing the existence of some KMS state on this latter algebra built from the path integral machinery. Also, it might be useful to exploit objects from C*-dynamical systems to reach this goal. Indeed, C*-dynamical systems naturally arise whenever the Lie group underlying the construction of the C*-algebra of fields has a structure of semidirect product, as is the case for $\kappa$-Minkowski. In view of the possibility to associate to $\kappa$-Poincar\'{e} invariant noncommutative field theories a natural global (cosmological) time, a physically appealing property, the implications of the KMS condition \eqref{finaleq} shared by all these theories obviously deserve further study. Finally, notice that if the KMS condition, eq. \eqref{finaleq}, is transferred to the level of the algebra of observables, any $\kappa$-Poincar\'{e} invariant scalar field theory could be interpreted as a thermal field theory whose thermal bath temperature would be given by some function of $\kappa$, thus providing us with a model (possibly) describing the Higgs dynamics at the Planck scale.\smallskip Another interesting feature arising from the construction of the action functional is the central role played by the (canonical) involution $\ddagger$ within such theories. Recall that this involution naturally arises from the construction of the (group) C*-algebra of fields modelling $\kappa$-Minkowski. We have shown that this involution ensures the implementation of a reasonable reality condition for the action functional, namely through the definition of the Hilbert product $\langle f,g\rangle_\star:=\int d^4x (f\star g^\ddagger)(x)$. Note, by the way, that this Hilbert product is related to the positive linear functional (KMS weight) $\zeta$ via $\langle f,g\rangle_\star=\zeta(f\star g^\ddagger)$. Therefore, $\ddagger$ replaces (and differs from) the ordinary complex conjugation of functions.
Thus, it would be interesting to explore more deeply how this could modify the usual charge conjugation at the level of $\kappa$-deformed theories, hence its impact on the CPT theorem. Note that deviation from (or violation of) the CPT theorem (in a quantum gravity perspective) is an active research area; see, e.g., \cite{Mavromatos:2005} for a review.\smallskip Thanks to the above Hilbert product, we were able to construct the expressions for various candidate $\kappa$-Poincar\'{e} invariant action functionals. We have restricted our analysis to two kinetic operators which are related to the square of some Dirac operator, namely the first Casimir operator of the $\kappa$-Poincar\'{e} algebra and the $\mathcal{U}_\kappa(\text{iso}(4))$-equivariant Dirac operator constructed in \cite{dAndrea:2006}. The decay properties of the corresponding propagators have been analysed. On the other hand, we have restricted our attention to polynomial quartic interactions such that the full action functional tends to the ordinary complex $\vert\phi\vert^4$ model in the commutative (low energy) limit, $\kappa\to\infty$. It turned out that the unique interaction of the commutative case is replaced at the level of $\kappa$-Minkowski by four inequivalent interaction potentials. We show that these interactions can actually be gathered into two families of interactions, depending on the relative position of the fields $\phi$ and $\phi^\ddagger$ entering them. One family was identified with orientable interactions, the other with nonorientable interactions.
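The commutative limit can be checked explicitly for the first of these kinetic operators. Expanding the first Casimir operator of the $\kappa$-Poincar\'{e} algebra, $\mathcal{C}_\kappa=4\kappa^2\sinh^2\left(\frac{P_0}{2\kappa}\right)+e^{P_0/\kappa}P_iP^i$, in powers of $1/\kappa$ gives
\begin{equation*}
\mathcal{C}_\kappa = P_0^2 + P_iP^i + \frac{P_0\,P_iP^i}{\kappa} + \mathcal{O}\big(\kappa^{-2}\big)\ \xrightarrow[\kappa\to\infty]{}\ P_0^2 + P_iP^i,
\end{equation*}
so that the associated kinetic term indeed reduces to the ordinary relativistic form in the commutative limit.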
Finally, we have emphasised that, thanks to the relatively simple expression of the star product associated with $\kappa$-Minkowski, it was possible to represent any noncommutative field theory as an ordinary field theory (\textit{i.e.} involving pointwise products among functions) with nonlocal interactions.\smallskip This latter fact enabled us to carry out the first complete study of the one-loop (quantum) properties of the 2-point and 4-point functions for various models of interacting $\kappa$-Poincar\'{e} invariant field theory. In $\S$\ref{sec-2point}, we have computed the one-loop 2-point functions for four different models of noncommutative field theory, resulting from various combinations of the two above kinetic operators with the various interactions. In the case where the kinetic operator was provided by the Casimir operator, we found that it is necessary to consider the full set of interactions (\textit{i.e.} both orientable and nonorientable) to ensure that the two mass terms are renormalised the same way. We have found that in this case the theory diverges cubically, thus slightly worse than in the commutative case. On the contrary, we showed that the models built from the equivariant kinetic operator diverge only linearly, thus slightly milder than in the commutative case. In this case, the two masses are renormalised in the same way already when considering only one type of interaction (\textit{i.e.} either orientable or nonorientable), likely indicating a symmetry of the action functional under the exchange $\phi\leftrightarrow\phi^\ddagger$. For both kinetic operators, we have found that the models were plagued with UV/IR mixing whenever the interaction considered was nonorientable; see Table \ref{tableau2}. In $\S$\ref{sec-4point}, we have computed the one-loop 4-point functions for the model with equivariant kinetic operator and orientable interactions.
Thanks to an estimate for the propagator, we have shown that the one-loop 4-point function is UV finite. We have concluded that the beta function is zero and that only the mass terms have to be renormalised. \begin{table}[h!] \centering \includegraphics[scale=0.46]{table2.jpg} {\caption{\label{tableau2} \small \noindent One-loop quantum properties of the 2-point functions for various models of $\kappa$-Poincar{\'e} invariant scalar field theory with quartic interactions. All of the kinetic operators are functions of the first Casimir operator of the $\kappa$-Poincar{\'e} algebra, \textit{i.e.} $\mathcal{C}_\kappa=4\kappa^2\sinh^2\left(\frac{P_0}{2\kappa}\right)+e^{P_0/\kappa}P_iP^i $.}} \end{table} An immediate extension of this work would be to renormalise the above theory (\textit{i.e.} with equivariant kinetic operator and orientable interactions) to all orders. This would require investigating more closely the expressions of the various amplitudes, and defining a reasonable power counting to characterise all of the (possibly) singular contributions. Already at the level of the one-loop theory, it remains to define a ``physically reasonable" subtraction scheme to complete the one-loop renormalisation. Which subtraction point to choose is not obvious, due to the peculiar role played by the parameter $\kappa$ within such models. Indeed, although $\kappa$ cannot be interpreted as a cutoff ($\kappa$ is fixed), it still regularises the theory in some way. One guideline, however, is provided by the desire to recover the ordinary $\phi^4$ model in the commutative limit. On the one hand, the fact that the beta function is exactly zero in our model indicates that the coupling constant is constant at the scale at which the effects of the $\kappa$-deformations become relevant (\textit{i.e.} $\kappa$). On the other hand, in the commutative case, we know that the coupling constant is increasing with increasing momenta.
Therefore, for the two pictures to coincide, the flow of the (commutative) coupling constant must be bounded from above when the energy scale tends to $\kappa$. This indicates that, within this picture, there must exist a crossover (inflection point) for the two models (low energy and high energy) to be compatible. In particular, the scale-independent (noncommutative) coupling constant must recover its usual growth properties in the commutative limit ($\kappa\to\infty$). Of course, the inflection point should be given by some function of $\kappa$. Hence, we conclude that characterising this inflection point would enable us to provide an estimate for the value of $\kappa$, namely the energy at which the crossover takes place. To conclude, under some physically reasonable subtraction scheme, our model of $\kappa$-Poincar\'{e} invariant scalar field theory would admit as its low energy limit the ordinary scalar field theory which describes the Higgs dynamics in the standard model of particle physics. Put the other way around, our model of $\kappa$-Poincar\'{e} invariant scalar field theory would provide a high energy (quantum gravity) extension for studying the Higgs dynamics at the Planck scale, incidentally resolving the so-called triviality of the $\phi^4$ theory. Accordingly, it would be interesting to study the fate of the Higgs mechanism at the Planck scale within this picture.\smallskip Finally, we conclude by mentioning that the star product considered in the description of $\kappa$-Minkowski could be used in the construction of other (not necessarily $\kappa$-Poincar\'{e} invariant) noncommutative field theories, as well as gauge versions of them. In this latter case, due to the twisted trace property of the Lebesgue integral used in the construction of the action functional, the differential calculus should presumably be adapted to this situation, namely by defining a (reasonable) twisted differential calculus.
In particular, it has been proved that there are no 4-dimensional bicovariant differential calculi that are also Lorentz covariant \cite{Sitarz:1995}; see also \cite{Mercati:2016}. Nevertheless, the study of gauge theories on $\kappa$-Minkowski constitutes one of the important issues to investigate. In particular, a better understanding of the structures of gauge theory compatible with the $\kappa$-Poincar\'{e} invariance of the action functional would provide more insight into the symmetries entering the construction of any $\kappa$-Poincar\'{e} invariant action functional.
\section{Introduction} \label{sec:introduction} \begin{change}Multi-tier distributed systems are systems composed of several distributed nodes organized in layered tiers. Each tier implements a set of conceptually homogeneous functionalities that provides services to the tier above in the layered structure, while using services from the tier below in the layered structure. The distributed computing infrastructure and the connections between the vertical and horizontal structures make multi-tier distributed systems extremely complex and difficult to understand, even for those who developed them. Indeed, runtime failures are becoming the norm rather than the exception in many multi-tier distributed systems, such as ultra large systems~\cite{Feiler:ultraLargSystems:cmu:2006}, systems of systems~\cite{Sommerville:SoS:cacm:2012,Nielsen:SoS:ACM-CR:2015} and cloud systems~\cite{Chen:cloudFailures:ISSRE:2014, Ko:cloudFailures:IEEESpec:2012, Vishwanath:cloudFailures:SoCC:2012}. In these systems, failures become unavoidable due to both their characteristics and the adoption of commodity hardware.\end{change} The characteristics that increase the chances of failures are the increasing size of the systems, the growing complexity of the system--environment interactions, the heterogeneity of the requirements and the evolution of the operative environment. The adoption of low quality commodity hardware is becoming common practice in many contexts, notably in cloud systems~\cite{ETSI:NFV:2014, Bauer:CloudReliability:2012}, and further reduces the overall system reliability.
Limiting the occurrences of runtime failures is extremely important in many common applications, where runtime failures and the consequent reduced dependability negatively impact on the expectations and the loyalty of the customers, and becomes a necessity in systems with strong dependability requirements, such as telecommunication systems that telecom companies are migrating to cloud-based solutions~\cite{ETSI:NFV:2014}. \emph{Predicting failures at runtime} is essential to trigger automatic and operator-driven reactions to either avoid the incoming failures or mitigate their effects, with a positive impact on the overall system reliability. Approaches for predicting failures have been studied in several contexts, such as mobile devices~\cite{Riganelli:PowerOptimization:HASE:2008,Nistor:SunCat:ISSTA:2014}, system performance deviations~\cite{Malik:AutomaticDetection:ICSE:2013,Jin:Performance:2007}, distributed systems~\cite{Williams:BlackBoxPrediction:IPDPS:2007,Fu:EventCorrelation:SC:2007}, online and telecommunication applications~\cite{Ozcelik:Seer:TSE:2016,Salfner:PredictingFailure:IPDPS:2006,Haran:ApplyingClassification:FSE:2005}. Current approaches for predicting failures exploit either anomaly- or signature-based strategies. Anomaly-based strategies consider behaviors that significantly deviate from the normal system behavior as symptoms of failures that may occur in the near future~\cite{Sauvanaud:Anomaly:ISSRE:2016,Jin:Performance:2007,Guan:failurePrediction:ICCCN:2011,Williams:BlackBoxPrediction:IPDPS:2007,Tan:AnomalyPrediction:PODC:2010,Tan:anomalyPrediction:ICDCS:2012,IBM:SmartCloudAnalyticsPI:2015}. Anomaly-based techniques suffer from false positives, because of the difficulty of distinguishing faulty from rare albeit legal behaviors, in the absence of information about failure patterns.
\begin{change}Signature-based strategies rely on known patterns of failure-prone behaviors, called signatures, to predict failures that match the patterns~\cite{Ozcelik:Seer:TSE:2016,Nistor:SunCat:ISSTA:2014,Malik:AutomaticDetection:ICSE:2013,Vilalta:EventPrediction:ICDM:2002,Fu:EventCorrelation:SC:2007,Salfner:PredictingFailure:IPDPS:2006}. By working with known patterns, signature-based techniques cannot cope with emerging failures. Moreover, signature-based techniques usually work with patterns of discrete events, such as error reports and system reboots, and do not cope well with failures that directly impact on performance indicators whose values vary in continuous domains over time. Performance indicators with continuous variables that span a wide range of values are common in multi-tier distributed systems, and signature-based techniques working on simple sample-based discretization often have limited accuracy in the presence of combinations of values not experienced in the past.\end{change} \begin{figure}[th!] \begin{center} \includegraphics[width=1.0\columnwidth]{intuition} \caption{The overall flow of \approach online activities to predict failures} \label{fig:intuitionProud} \end{center} \end{figure} In this paper, we present \approach (PREdicting failures in Multi-tIer distributed SystEms), a novel approach that can accurately predict failures and precisely locate the responsible faults in \begin{change} multi-tier distributed systems. By addressing the challenges that characterize complex multi-tier distributed systems, \approach also addresses the subset of challenges that characterize single-tier systems. \end{change} \approach originally combines signature-based with anomaly-based approaches, to improve the accuracy of signature-based approaches in predicting failures that impact on \begin{change} performance indicators\end{change}.
As illustrated in Figure~\ref{fig:intuitionProud}, \approach \vspace{-0.3cm} \begin{enumerate}[label=-,itemsep=3pt] \item monitors the status of the system \begin{change}by collecting\end{change} (a large set of) performance indicators from the system nodes, for instance \emph{CPU utilization} for each CPU in the system, that we refer to as Key \begin{change}Performance\end{change} Indicators (KPIs) (\emph{KPI monitoring} in the figure), \begin{change} \item identifies deviations from normal behaviors by pinpointing anomalous KPIs with anomaly-based techniques (\emph{Anomaly detection} in the figure), \item identifies incoming failures by recognizing symptomatic anomalous KPI sets with signature-based techniques (\emph{Signature-based failure prediction} in the figure).\end{change} \end{enumerate} \medskip In the \emph{KPI monitoring} activity, \approach collects KPIs from different layers of the target multi-tier distributed system. \begin{change} KPIs are metrics collected on specific resources, and are the performance indicators that failure prediction approaches use to estimate the status of the system.\end{change} In the \emph{anomaly detection} activity, \approach exploits multivariate time series analysis to identify anomalies. In detail, \approach elaborates the KPI values collected during a training phase to produce a baseline model that represents the legal behavior of the system, and relies on time series analysis to combine the information from multiple KPIs provided in the baseline model for revealing anomalous behaviors. For example, the baseline model can identify a simultaneous increase in both memory usage and memory cached as either a symptom of an anomalous behavior when occurring in the presence of a normal workload, or as a normal albeit infrequent behavior when occurring in the presence of a high workload.
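As a rough illustration of the anomaly-detection step, the following sketch flags KPI samples against a per-KPI baseline learned from failure-free data. It is a deliberate simplification: \approach combines multiple KPIs through multivariate time series analysis, whereas this sketch scores each KPI independently, and all names are hypothetical.

```python
from statistics import mean, stdev

def learn_baseline(training):
    """Learn a per-KPI (mean, std) baseline from failure-free data.

    training: dict mapping a KPI name to the list of values observed
    during the training phase.
    """
    return {kpi: (mean(vals), stdev(vals)) for kpi, vals in training.items()}

def anomalous_kpis(baseline, sample, z_max=3.0):
    """Return the KPIs whose current value deviates from the baseline
    by more than z_max standard deviations."""
    flagged = set()
    for kpi, value in sample.items():
        mu, sigma = baseline[kpi]
        if sigma > 0 and abs(value - mu) / sigma > z_max:
            flagged.add(kpi)
    return flagged
```

A memory value far outside its training range would be flagged while a normal workload reading would not, mirroring the memory-usage example above.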
The baseline model accurately reveals anomalies in the behavior of the system as a whole, but cannot \begin{inparaenum}[(i)] \item distinguish between malign and benign anomalies, that is, symptoms of incoming failures from normal albeit uncommon behaviors, \item predict the type of the incoming failures, and \item locate the sources of the incoming failures. \end{inparaenum} In the \emph{failure prediction} activity, \approach exploits signature-based techniques to accurately distinguish malign from benign anomalies: It identifies the incoming failures that correspond to malign anomalies, predicts the type of incoming failures, and locates the sources of incoming failures. More in detail, \approach uses historical data about correct and failing behaviors to learn patterns that correlate malign anomalies to failure types, and to relate failures to failure sources. For example, the signature-based failure prediction activity may discard as benign a series of anomalous combinations of memory usage, memory cached and normal workload, and identify an excessive re-transmission of network packets jointly with a lack of system service response as symptoms of a possible packet loss problem in a network node, a problem that may cause a system failure in the long run. We evaluated \approach on a prototype multi-tier distributed architecture that implements telecommunication services. The experimental data indicate that \approach can predict failures and locate faults with high precision and low false positive rates for some relevant classes of faults, thus confirming our research hypotheses. The main contributions of this paper are: \begin{enumerate}[label=-,itemsep=3pt] \item An approach that combines anomaly- and signature-based techniques to predict failure occurrences and locate the corresponding faults with high precision and low false positive rates, by exploiting information collected from performance indicators in multi-tier distributed systems.
The proposed \approach approach can distinguish anomalous albeit legal behaviors from erroneous behaviors that can lead to failures, and can identify the type and location of the causing faults. \item A set of experimental results obtained on a multi-tier distributed system that hosts a telecommunication system, which resembles an industrial telecommunication infrastructure, and which provides evidence of the precision and accuracy of the approach in the context of cloud systems, a relevant type of multi-tier distributed systems. \end{enumerate} The paper is organized as follows. Section~\ref{sec:systemDesign} introduces the \approach approach. Section~\ref{sec:offline} discusses the offline training of the models. Section~\ref{sec:online} presents the online failure prediction mechanism, based on an original combination of anomaly- and signature-based techniques. Section~\ref{sec:evaluationMethodology} illustrates the methodology that we followed to evaluate the approach, introduces the evaluation metrics and the experimental setting, provides the essential implementation details of the evaluation infrastructure, and presents both the types of faults injected in the system and the reference workload used to evaluate the approach. Section~\ref{sec:results} discusses the experimental results about the effectiveness and the overhead of the proposed approach. Section~\ref{sec:related} overviews the main related approaches, highlighting the original contribution of our approach. Section~\ref{sec:conclusion} summarizes the main contribution of the paper, and indicates the research directions opened by the results documented in this paper. \section{The \approach approach} \label{sec:systemDesign} \approach detects failure symptoms, correlates the detected symptoms to failure types, and locates the resources responsible for the possible failures which may occur in the near future.
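The symptom-to-failure-type-and-location mapping can be pictured, in a highly simplified form, as a lookup over learned signatures. The signature table, KPI names and resource names below are invented for illustration; \approach learns such patterns from historical data rather than hard-coding them.

```python
# Each signature pairs a set of anomalous KPIs -- (resource, metric)
# pairs -- with a predicted failure type and the suspected faulty resource.
SIGNATURES = [
    ({("homer", "PacketRetransmitPerSec"), ("homer", "AbortedRequests")},
     "packet-loss", "homer"),
    ({("sprout", "MemUsedPercent"), ("sprout", "MemCachedPercent")},
     "memory-leak", "sprout"),
]

def predict_failure(anomalies):
    """Return (failure_type, location) for the first signature fully
    contained in the observed anomalous KPI set; None means the
    anomalies are considered benign."""
    for pattern, failure_type, location in SIGNATURES:
        if pattern <= anomalies:  # subset test: all signature KPIs anomalous
            return failure_type, location
    return None
```

An anomaly set that matches no signature is discarded as benign, which is how the false alarms of purely anomaly-based detection are filtered out.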
Several anomalous behaviors of many types can often be observed well in advance with respect to system failures, which can thus frequently be mitigated or avoided, especially in multi-tier distributed systems. For instance, in cloud systems, early observed communication issues can trigger dynamic reallocation of resources to mitigate or avoid failures. Differently from current approaches, which simply report anomalous behaviors~\cite{Tan:anomalyPrediction:ICDCS:2012,Dean:anomalyPrediction:ICAC:2012,Tan:AnomalyPrediction:PODC:2010,Guan:failurePrediction:ICCCN:2011}, \approach \begin{enumerate}[label=-,itemsep=3pt] \item distinguishes anomalous behaviors that are caused by software faults and that can lead to failures from anomalous behaviors that derive from exceptional albeit legal situations and that do not lead to failures, thus reducing the amount of false alarms of current failure prediction approaches, \item correlates anomalous behaviors detected at the system level to specific types of faults, and predicts not only the occurrence but also the type of possible failures, thus simplifying the identification of effective corrective actions, and \item identifies the resources likely responsible for the predicted failure, thus providing the developers with a useful starting point for investigating and solving the problem. \end{enumerate} \medskip As illustrated in Figure~\ref{fig:onlinePredicting}, \approach is composed of an \mbox{offline} model training and an online failure prediction phase. As discussed in detail in the next sections, in the \emph{offline model training} phase, \approach builds baseline and signature models that capture the system behavior, and in the \emph{online failure prediction} phase, \approach uses the baseline and signature models to detect anomalies and predict failures, respectively. \begin{figure*}[ht!]
\begin{center} \includegraphics[width=1.0\columnwidth]{newFailurePrediction} \caption{The \approach learning and predicting phases} \label{fig:onlinePredicting} \end{center} \end{figure*} \section{Offline Model Training} \label{sec:offline} In the offline learning phase, \approach builds a \emph{baseline model} and a \emph{signature model}. The baseline model identifies anomalous behaviors that might be interpreted as symptoms of failures, while the \emph{signature model} associates sets of anomalous behaviors to either legal albeit spurious behaviors or symptoms of future failures, and locates the resources likely responsible for the failure. As illustrated in Figure~\ref{fig:onlinePredicting}, in the offline learning phase, \approach \emph{monitors series of KPIs} over time under normal execution conditions to \emph{learn the baseline model}, and \emph{seeds faults} of the target types to \emph{extract the signature model}. \begin{review} The baseline model is a system model and, as such, it is obtained by modeling only the failure-free behavior, that is, the normal behavior of the system. The model is used to calculate the expected values, to which the measured current values are compared. If the expected and actual values differ significantly, the system is suspected not to behave as intended. The detection of several anomalous values is a relevant indicator of failures that may happen in the future. In contrast with the baseline model, which focuses on failure-free behaviors, the generation of the \emph{signature model} requires training data for both failure-prone and failure-free executions. \approach uses the \emph{signature model} to decide whether sets of anomalies are due to novel but normal behaviors or to specific classes of failures.
\end{review} \approach can build the models from different kinds of KPIs, provided that their values can be monitored as series of values over time, and can train the signature model with different kinds of seeded faults, provided that the consequent failures occur after some system degradation over time. As a simple example, \approach might detect combinations of anomalous rates of packet re-transmission and aborted operations, by means of the baseline model. It may then associate these anomalous behaviors to either a transient situation due to a high and unexpected peak of requests or to a communication problem that will likely cause a system failure in the future, by means of the signature model. It may also identify the subsystems likely responsible for the incoming communication problems, from the information provided with the detected violation patterns. \begin{review}While model incompleteness is both possible and probable, it can be compensated for by incrementally collecting additional evidence about the behavior of the system. For instance, the baseline model can be improved incrementally and the signature model can be retrained regularly.\end{review} \subsection{KPI Monitor} \approach collects an extensive number of KPIs from different tiers of many system components without interfering with the system behavior, by relying on lightweight monitoring infrastructures, often natively available at the different levels~\cite{Case:SNMP:RFC:1996, Openstack:Ceilometer:2015}, and elaborates the collected data on an independent node that executes the data processing routines. In this way, \approach affects the monitored nodes only with the negligible costs of collecting data, and not with the relevant costs of the computation, which is relegated to an independent node. The empirical results reported in Section~\ref{subsec:RQ5} confirm the non-intrusiveness of the monitoring infrastructure.
The values monitored for each KPI are time series data, that is, sequences of numeric values, each associated with a timestamp and a resource of the monitored system. Table~\ref{tab:timeSeriesBytesSentPerSec} reports a sample excerpt of the time series data for the KPI \emph{BytesSentPerSec} collected from a resource named \emph{Homer}\footnote{Resource \emph{Homer} is one of the virtual machines used in our empirical setting, a prototype of a cloud infrastructure used by telecommunication companies to provide SIP services, as described in Section~\ref{sec:implementationDetails}. Resource \emph{Homer} is a standard XML Document Management Server that stores MMTEL (MultiMedia TELephony) service settings of the users.}. Columns \emph{Timestamp}, \emph{Resource} and \emph{BytesSentPerSec} indicate the time information, the name of the resource that produced the KPI value, and the number of bytes sent in the last second, respectively. \begin{review}In the context of this paper, \end{review}KPIs are metrics measured on specific resources. More formally, a KPI is a pair $\langle resource, metric \rangle$. For example, \emph{BytesSentPerSec} collected at \emph{Homer} is a KPI, and the same metric collected at another resource is a different KPI. Thus, the number of collected KPIs is the product of the numbers of monitored metrics and resources. \approach can be customized with different sets of KPIs, collected at heterogeneous levels. The selection of KPIs depends on the characteristics of the target system, and impacts the types of failures that \approach can detect: \approach can detect only failures whose occurrence follows anomalous behaviors that manifest as anomalies in the series of KPIs monitored over time. We experimented with over 600 KPIs of over 90 different types, collected at different abstraction levels ranging from the Linux operating system to the Clearwater application level, and predicted failures of different types, ranging from network to memory failures.
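The KPI representation described above, a $\langle resource, metric \rangle$ pair indexing a time series of timestamped values, can be sketched as follows. The class and method names are illustrative, not taken from the \approach implementation; the sample values come from Table~\ref{tab:timeSeriesBytesSentPerSec}.

```python
from collections import defaultdict

class KPIStore:
    """A KPI is identified by a (resource, metric) pair; its monitored
    values form a time series of (timestamp, value) samples."""
    def __init__(self):
        # (resource, metric) -> list of (timestamp, value)
        self.series = defaultdict(list)

    def record(self, resource, metric, timestamp, value):
        self.series[(resource, metric)].append((timestamp, value))

    def values(self, resource, metric):
        return [v for _, v in self.series[(resource, metric)]]

store = KPIStore()
for t, v in [("22:22:35", 101376), ("22:23:36", 121580), ("22:24:36", 124662)]:
    store.record("Homer", "BytesSentPerSec", t, v)

# BytesSentPerSec at Homer and at another resource are distinct KPIs.
assert store.values("Homer", "BytesSentPerSec") == [101376, 121580, 124662]
assert store.values("Sprout", "BytesSentPerSec") == []
```

The two final assertions illustrate why the number of KPIs grows with the product of metrics and resources: the same metric collected at a different resource is an entirely separate series.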
We discuss the experimental setting and the results in detail in Section~\ref{sec:implementationDetails}. \begin{table}[h] \caption{Sample time series for KPI \emph{BytesSentPerSec} collected at node \emph{Homer}} \label{tab:timeSeriesBytesSentPerSec} \begin{center} \begin{tabular}{|c|c|c|} \hline \textbf{Timestamp} & \textbf{Resource} & \textbf{BytesSentPerSec}\\ \hline \ldots & \ldots & \ldots \\ Dec. 20, 2016 22:22:35 & Homer & 101376 \\ Dec. 20, 2016 22:23:36 & Homer & 121580 \\ Dec. 20, 2016 22:24:36 & Homer & 124662 \\ Dec. 20, 2016 22:25:36 & Homer & 106854 \\ \ldots & \ldots & \ldots \\ \hline \end{tabular} \end{center} \end{table} \subsection{Baseline Model Learner} The \emph{baseline model learner} derives the baseline model from series of KPIs collected under normal system executions; the model thus represents correct system behaviors. The \emph{baseline model learner} generates models with inference solutions that capture temporal relationships in time series data, including trends and seasonalities~\cite{Box:TimeSeriesAnalysis:2008}. In particular, the \emph{baseline model learner} applies Granger causality tests~\cite{Arnold:TCMGranger:KDD:2007} to determine whether a time series variable can predict the evolution of another variable. A time series $x$ is said to be a \emph{Granger cause} of a time series $y$, if and only if the regression analysis of $y$ based on past values of both $y$ and $x$ is statistically more accurate than the regression analysis of $y$ based on past values of $y$ only. The ability of Granger causality analysis to analyze the dependency between KPIs is a key factor for improving the accuracy of the analysis, because many KPIs are correlated. For instance, the CPU load often depends on the rate of incoming requests, and several phenomena can be fully interpreted only by considering multiple time series jointly.
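The Granger criterion above can be illustrated with a minimal numerical sketch: $x$ Granger-causes $y$ if regressing $y$ on past values of both $y$ and $x$ fits significantly better than regressing $y$ on its own past alone. The sketch below uses a single lag and a plain comparison of residual sums of squares; it is not the inference engine used by \approachNoSpace, and a full test would turn the two residuals into an F statistic.

```python
import numpy as np

def granger_improvement(y, x, lag=1):
    """Compare residual sum of squares of y_t ~ y_{t-1} (restricted)
    against y_t ~ y_{t-1} + x_{t-1} (unrestricted)."""
    yt, y_past, x_past = y[lag:], y[:-lag], x[:-lag]
    ones = np.ones_like(y_past)
    # Restricted model: past of y only.
    A_r = np.column_stack([ones, y_past])
    ssr_r = np.sum((yt - A_r @ np.linalg.lstsq(A_r, yt, rcond=None)[0]) ** 2)
    # Unrestricted model: past of y and past of x.
    A_u = np.column_stack([ones, y_past, x_past])
    ssr_u = np.sum((yt - A_u @ np.linalg.lstsq(A_u, yt, rcond=None)[0]) ** 2)
    return ssr_r, ssr_u

rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):          # y is driven by the previous value of x
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

ssr_restricted, ssr_unrestricted = granger_improvement(y, x)
assert ssr_unrestricted < 0.5 * ssr_restricted  # knowing x's past clearly helps
```

In the KPI setting, $x$ and $y$ would be two monitored series, for example the request rate and the CPU load mentioned above.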
For instance, a high CPU load might be anomalous or not depending on the rate of incoming requests that are received by the system. The \approach baseline model includes both models for the single KPIs and models of the correlation among KPIs. Figure~\ref{fig:baselineModel} illustrates a baseline model of a single KPI, namely the model inferred for KPI \emph{BytesSentPerSec} collected from the \emph{Homer} virtual machine. The figure indicates the average value of the time series (dark blue line) and the confidence interval for new samples (light green area around the line). Figure~\ref{fig:casualityGraph} shows an excerpt of a Granger causality graph that represents the causal dependencies among KPIs. Causal dependencies indicate the strength of the correlation between pairs of KPIs, that is, the extent to which changes of one KPI are related to changes of another KPI. Nodes in the causality graph correspond to KPIs, and weighted edges indicate the causal relationships among KPIs, with a weight in the interval $[0,1]$, where higher values indicate stronger correlation. In the example, the values of the \emph{BytesSentPerSec} metric in node \emph{Homer} are strongly correlated with, and can thus be used to predict, the values of the \emph{Sscpuidle} metric in node \emph{Homer}. \begin{figure} \begin{center} \includegraphics[width=12cm]{baselineModel.pdf} \caption{A sample baseline model of a single KPI: The \emph{BytesSentPerSec} KPI for the \emph{Homer} virtual machine} \label{fig:baselineModel} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=7cm]{graph.png} \caption{A baseline model of the correlation among KPIs: an excerpt from a Granger causality graph} \label{fig:casualityGraph} \end{center} \end{figure} \subsection{Fault Seeder} The baseline model captures both the shape of the KPI values over time for single KPIs and the correlation among KPIs under normal execution conditions.
The signature model captures the relations among anomalous KPIs observed both during normal execution conditions and with seeded faults. The Signature Model Extractor can be trained with different types of seeded faults, whose consequent failures occur after some system degradation over time. Being trained with seeded faults, the signature model can capture patterns of anomalies related to failures, and thus distinguish between benign and failure-prone anomalies when monitoring the system execution. The \emph{Fault seeder} injects a fault of a given type in a system location for each system run. \approach can accurately predict failures and locate faults of the types injected in the training phase. Following common practice, we chose the faults to inject according to the Pareto distribution of the frequency and severity of the fault types. The signature model can be extended incrementally to new types of faults. \subsection{Signature Model Extractor} \label{subsec:learningFPModel} The \emph{Signature model extractor} derives the signature model from a sample set of anomalous behaviors that correspond to correct as well as faulty executions. The \emph{Signature model extractor} learns the model from the anomalies that the Anomaly Detector\footnote{We discuss the Anomaly Detector in detail in Section~\ref{sec:online}.} reveals during the training phase, by relying on the baseline model and faulty execution traces. Anomalies are tuples $\langle (a_1,\ldots, a_n), f, r \rangle$, where $(a_1,\ldots, a_n)$ is a (possibly empty) sequence of anomalous KPIs that are detected during an execution window of a fixed length, $f$ is a failure type, and $r$ is the resource responsible for the failure. Thus, anomalous KPIs are KPIs without a timestamp, and indicate that the KPIs assume anomalous values for at least one timestamp within the considered window.
For instance, the tuple $\langle$($\langle BytesReceivedPerSec, Homer\rangle$, $\langle Sprouthomerlatencyhwm, Sprout\rangle$, $\langle Sscpuidle, Sprout\rangle$), $PacketLoss$, $Sprout\rangle$ indicates three correlated KPIs that assume anomalous values in the considered execution window, and signals a predicted packet loss failure in node Sprout with a likelihood encoded in the tuple $\langle 30,1 \rangle$, as discussed later in this section. Both $f$ and $r$ are empty when the execution window corresponds to a normal execution with no active faults. In the training phase, \approach seeds at most one fault per execution window, and considers execution windows that originate with the activation of the fault and slide through the faulty execution up to a maximum size, and thus collects anomalies that occur immediately after the fault activation as well as along a possibly long-lasting failing execution. We discuss the tuning of the size of the sliding window in Section~\ref{sec:results}. \approach records both the type of the failure and the resource seeded with the fault to learn signatures that can predict the failure type and locate the responsible resource. The \emph{Signature Model Extractor} relies on several faults for each type, seeded in different locations, and uses multi-label probabilistic classifiers as signature extractors. Probabilistic classifiers generate probability distributions over sets of class labels from given sets of samples. \approach uses the probability distributions to compute the confidence of the predictions, thus producing a signature model that the \emph{Failure predictor} can exploit to predict both the type of the failure and the location of the resources that are most likely responsible for the failure, and to compute the confidence in the prediction.
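A probabilistic classifier of the kind described above can be sketched as a frequency table over $(failure\ type, resource)$ labels: given a set of anomalous KPIs, it returns a probability distribution whose peak yields both the prediction and its confidence. The sketch below is a toy stand-in for the multi-label classifiers used by \approachNoSpace, and the training tuples are illustrative, not drawn from the experiments.

```python
from collections import Counter

class SignatureClassifier:
    """Toy probabilistic classifier: counts how often each
    (failure_type, resource) label co-occurs with an anomaly signature."""
    def __init__(self):
        self.counts = {}  # frozenset of anomalous KPIs -> Counter of labels

    def train(self, anomalous_kpis, failure_type, resource):
        key = frozenset(anomalous_kpis)
        self.counts.setdefault(key, Counter())[(failure_type, resource)] += 1

    def predict(self, anomalous_kpis):
        counter = self.counts.get(frozenset(anomalous_kpis), Counter())
        total = sum(counter.values())
        # Probability distribution over labels; empty if the signature is unseen.
        return {label: n / total for label, n in counter.items()} if total else {}

clf = SignatureClassifier()
sig = [("BytesReceivedPerSec", "Homer"), ("Sscpuidle", "Sprout")]
for _ in range(3):
    clf.train(sig, "PacketLoss", "Sprout")   # three failure-prone training runs
clf.train(sig, None, None)                   # same anomalies once in a normal run

dist = clf.predict(sig)
assert dist[("PacketLoss", "Sprout")] == 0.75
```

The $0.75$ confidence mirrors the $\langle total, correct \rangle$ annotation of the signature model: three out of four training samples with this anomaly set ended in a packet loss failure at Sprout.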
We empirically investigated signature extractors based on \emph{Support Vector Machine (SVM)}, \emph{Bayesian Network (BN)}, \emph{Best-First Decision Tree (BFDT)}, \emph{Na\"ive Bayes (NB)}, \emph{Decision Table (DT)}, \emph{Logistic Model Tree (LMT)} and \emph{Hidden Na\"ive Bayes (HNB)} algorithms. We introduce the algorithms in Section~\ref{sec:implementationDetails}, and discuss their experimental evaluation in Section~\ref{sec:results}. As an example of a signature model, Figure~\ref{fig:decisionTree} shows an excerpt of a decision tree that \approach inferred for packet loss failures\footnote{We experimented with different models. Here we report a signature in the form of a decision tree because decision trees are easier to visualise and discuss than other models. }. Nodes correspond to KPIs and edges indicate the \emph{anomaly} relation. Leaf nodes are annotated with the confidence level of the prediction, indicated as pairs $\langle total, correct\rangle$, where \emph{total} is the number of samples that reach the node, and \emph{correct} is the number of samples that are correctly classified according to the leaf, that is, the number of samples corresponding to failed executions of the specific type caused by the specific resource, as indicated in the model. The ratio between \emph{correct} and \emph{total} indicates the likelihood of the prediction being correct. The model indicates that anomalous values of \emph{BytesReceivedPerSec} in node \emph{Homer} are a symptom of a possible packet loss failure in node \emph{Homer}, that a combination of non-anomalous values of \emph{BytesReceivedPerSec} in node \emph{Homer} with anomalous values of \emph{Sscpuidle} in node \emph{Sprout} is a symptom of a possible packet loss failure in node \emph{Sprout}, and that the likelihood of a failure increases when both \emph{BytesReceivedPerSec} in \emph{Homer} and \emph{Sscpuidle} in \emph{Sprout} are anomalous.
This may happen because packet loss problems may cause a drop in the number of accesses to the user service settings stored in the \emph{Homer} XDMS server, since a packet loss problem may decrease the frequency of authentication requests received by \emph{Sprout} and thus increase the CPU idle time. The branches of the decision tree not reported in the figure indicate additional relationships between symptoms and failures. \begin{figure} \begin{center} \includegraphics[width=9.5cm]{DecisionTree.pdf} \caption{A sample signature model based on decision trees} \label{fig:decisionTree} \end{center} \end{figure} \section{Online Failure Prediction} \label{sec:online} In the online failure prediction phase, \approach uses the baseline model to detect anomalies and the signature model to predict failures. \subsection{Anomaly Detector} \label{sec:anomalyDetector} Anomalies are behaviors that differ from expectations, and are thus suspicious. The baseline model encodes expected behavior as a collection of time series of single KPIs and as the Granger correlation among KPIs, as illustrated in Figures~\ref{fig:baselineModel} and~\ref{fig:casualityGraph}, respectively. The \emph{Anomaly Detector} signals univariate and multivariate anomalies when the values of the collected or correlated KPIs differ enough from the baseline model. Univariate anomalies depend on single KPIs, while multivariate anomalies depend on the combination of more than one KPI, each of which may or may not be identified as anomalous by the univariate analysis. The \emph{Anomaly Detector} detects univariate anomalies as samples out of range, as shown in Figure~\ref{fig:UnivariateAnomaly}. Given an observed value $y_t$ of a time series $y$ at time $t$, and the corresponding expected value $\hat{y}_t$ in $y$, $y_t$ is anomalous if the squared deviation $\hat{\sigma}^2(y_t,\hat{y}_t)$ is above an inferred threshold.
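The univariate check just described can be sketched as follows: a sample is flagged when its squared deviation from the expected value exceeds a threshold. In this minimal sketch the threshold is derived from the spread of the historical samples, whereas \approach infers it from the baseline model; the numeric values are illustrative, in the spirit of the \emph{BytesSentPerSec} series.

```python
import statistics

def detect_univariate_anomalies(history, samples, k=3.0):
    """Flag the indices of samples whose squared deviation from the
    historical mean exceeds k times the historical variance (a simple
    stand-in for the inferred threshold of the baseline model)."""
    expected = statistics.mean(history)
    threshold = k * statistics.pvariance(history)
    return [t for t, y in enumerate(samples)
            if (y - expected) ** 2 > threshold]

# A stable BytesSentPerSec-like history, then one out-of-range sample.
history = [101376, 121580, 124662, 106854, 118000, 112000]
samples = [115000, 109000, 260000, 117000]
assert detect_univariate_anomalies(history, samples) == [2]
```

Only the third sample (index 2) deviates enough from the expected value to be reported as anomalous; the others fall within the confidence band.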
\begin{figure} \begin{center} \includegraphics[width=11cm]{univariateAnomaly.pdf} \caption{A sample univariate anomalous behavior} \label{fig:UnivariateAnomaly} \end{center} \end{figure} The \emph{Anomaly Detector} detects multivariate anomalies as joint unexpected values of subsets of samples for different variables. It deduces multivariate anomalies among the KPI variables when their relation violates the Granger correlation encoded in the Granger causality graph. For example, it can infer that \emph{successful call rate} and \emph{number of incoming calls} are correlated: the \emph{successful call rate} usually decreases with an increasing \emph{number of incoming calls}, and thus the anomaly detector may signal a multivariate anomaly in the presence of a decreasing \emph{successful call rate} without a corresponding increase of the \emph{number of incoming calls}, regardless of the results of the univariate analysis of the two values. The \emph{Anomaly Detector} identifies multivariate anomalies with the Granger causality test, which checks whether a set of correlated KPIs preserves the inferred causal relationships. \subsection{Failure Predictor} The \emph{Failure Predictor} identifies possible failures as sets of anomalies that find a match in the signature model. As discussed in Section~\ref{subsec:learningFPModel}, \approach trains the signature model with sets of anomalies detected during execution windows of fixed size in terms of anomalous KPI samples. The \approach failure predictor analyzes the sets of anomalies detected in sliding windows of the same size as the windows used during training.
For instance, the \emph{Failure Predictor} can predict an incoming packet loss failure in the presence of an anomalous value of \emph{Sscpuidle} (idle time for the authentication service) in node Sprout when occurring with a normal value of \emph{BytesReceivedPerSec} (number of received requests) in the Homer XDMS server, based on the signature model shown in Figure~\ref{fig:decisionTree}. In fact, the sequence $\langle \langle\emph{BytesReceivedPerSec} \allowbreak \emph{in}~\allowbreak \emph{Homer}, \allowbreak not~anomalous\rangle, \langle\emph{Sscpuidle~in~Sprout}, anomalous \rangle\rangle$ in Figure~\ref{fig:decisionTree} leads to \emph{Likely \allowbreak Packet \allowbreak Loss \allowbreak Failure \allowbreak in \allowbreak Sprout}. \approach generates both \emph{general} and \emph{failure-specific alerts} that correspond to generic failure-prone behaviors and specific failures, respectively. Following common practice in failure prediction solutions that base their predictions on recent observations~\cite{Ozcelik:Seer:TSE:2016}, \approach collects anomalies in overlapping windows sliding over time. Anomalies first occur in the sliding window that includes the first occurrence of the anomalous KPI, and persist in the following windows, until the anomaly falls out of the windows themselves. Right after injecting a fault, sliding windows include mostly anomalies produced during the previous failure-free execution segments and only a few anomalies caused by the injected fault, while forward-moving sliding windows include increasingly many anomalies caused by the activated fault. When sliding windows include only a small portion of anomalies, the prediction might be imprecise. The \emph{Failure Predictor} refines the predicted failure type over time, until the prediction stabilizes.
The \emph{Failure Predictor} copes with this transitory phase by refining an initially general prediction into a failure-specific prediction once the prediction stabilizes, that is, once it predicts the same failure type with a confidence level of at least 90\% four consecutive times. In the training phase, \approach builds signature models starting with data collected just after activating the injected fault, and thus the signature model encodes early predictors, that is, sets of anomalies that occur as early as the fault is activated, often long before the consequent failure. This strategy allows \approach to quickly refine a general prediction into a failure-specific one, as confirmed by the experimental results reported in Section~\ref{sec:predictionTimeResults}. Each prediction indicates the type of expected failure and the set of anomalous KPIs that substantiate the prediction. \approach uses the information about the anomalous KPIs to localize the source of the problem, which might be a resource different from the resources responsible for the anomalous KPIs. For instance, in our experiments, \approach correctly predicted a packet loss failure in a specific virtual machine by analyzing a set of $37$ anomalous KPIs generated by $14$ different resources. This is a good example of the importance of locating the fault given a large set of resources involved in the anomalous samples.
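The refinement rule described above, which turns a general alert into a failure-specific one once the same failure type is predicted with at least 90\% confidence four consecutive times, can be sketched as follows. The function names and prediction tuples are illustrative.

```python
def refine_alerts(window_predictions, confidence=0.90, streak=4):
    """Each element is a (failure_type, confidence) pair produced for one
    sliding window. Emit 'general' until the same failure type reaches the
    confidence level for `streak` consecutive windows, then emit the type."""
    alerts, run_type, run_len = [], None, 0
    for ftype, conf in window_predictions:
        if conf >= confidence and ftype == run_type:
            run_len += 1
        elif conf >= confidence:
            run_type, run_len = ftype, 1   # a new candidate failure type
        else:
            run_type, run_len = None, 0    # low confidence resets the streak
        alerts.append(ftype if run_len >= streak else "general")
    return alerts

preds = [("PacketLoss", 0.60), ("PacketLoss", 0.95), ("MemoryLeak", 0.92),
         ("PacketLoss", 0.93), ("PacketLoss", 0.97), ("PacketLoss", 0.91),
         ("PacketLoss", 0.99)]
assert refine_alerts(preds) == ["general"] * 6 + ["PacketLoss"]
```

Note how the single high-confidence \emph{MemoryLeak} window resets the streak, so the failure-specific alert is only raised after four consecutive confident \emph{PacketLoss} predictions.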
\section{Evaluation Methodology} \label{sec:evaluationMethodology} In this section, we introduce the research questions (Section~\ref{subsec:researchQuestions}), the testbed that implements a realistic telecom cloud-based system (Section~\ref{sec:experimentalSettings}), the prototype implementation that we used in the experiments (Section~\ref{sec:implementationDetails}), the fault seeding strategy that we adopted to collect data about failures caused by different types of faults (Section~\ref{sec:faultSeeding}), the workflow that we simulated in the experiments (Section~\ref{sec:workload}) and the quality measures that we used to evaluate the results (Section~\ref{sec:qualityMetrics}). \subsection{Research Questions} \label{subsec:researchQuestions} We address six research questions that evaluate the effectiveness of \approach, compare \approach with state-of-the-art approaches, and quantify the \approach overhead. \smallskip \subsubsection*{Effectiveness} To evaluate the capability of \approach to successfully predict failures in a realistic cloud-based system, we investigate four research questions: \begin{description} \item[RQ1] \textit{Does the size of the sliding window impact the effectiveness of \approachNoSpace?} We executed \approach with different window sizes, and measured the impact of the window size on the ability to correctly predict failures. We used the results of this experiment to identify an appropriate window size that we used in the other experiments. \item[RQ2] \textit{Can \approach accurately predict failures and localize faults?} We executed \approach with different failure types and failure patterns, and measured its ability to predict failure occurrences, types, and locations. We experimented with several multi-label mining algorithms, and compared their performance and effectiveness in predicting failures. We used the most effective algorithm in the other experiments.
\item[RQ3] \textit{Can \approach correctly identify \begin{review}normal\end{review} behaviors not experienced in the past?} We executed \approach with workflows that differ significantly from the workflow used in the training phase and measured its ability to classify these executions as \begin{review}normal\end{review} executions. \item[RQ4] \textit{How early can \approach predict a failure?} We executed a number of experiments to determine how early \approach can predict failure occurrences, for different types of failures. \end{description} \begin{review}RQ1 is intended to analyze the sensitivity of failure prediction and fault localization to changes in the window parameter. RQ2 focuses on the effectiveness of \approach mainly in the case of faulty executions, while RQ3 studies \approach with perturbed workloads under normal conditions to assess the false positive rate. RQ4 investigates prediction time in faulty executions.\end{review} \subsubsection*{Comparison to state-of-the-art approaches} \begin{change}[R2.1]We compare \approach with \emph{IBM OA-PI --- Operational Analytics - Predictive Insights}~\cite{IBM:SmartCloudAnalyticsPI:2015}, an industrial anomaly-based approach, and with the Grey-Box Detection Approach (G-BDA) of Sauvanaud et al.~\cite{Sauvanaud:Anomaly:ISSRE:2016}, a state-of-the-art signature-based approach. We discuss the following research question: \begin{description} \item[RQ5] \textit{Can \approach predict failures more accurately than state-of-the-art approaches?} We compare \approach to OA-PI and G-BDA by executing all approaches on the same set of normal and failing executions, and comparing their ability to predict failures. Since OA-PI cannot predict the type of failure and locate the corresponding fault, we only evaluated its ability to predict failure occurrences.
\end{description} \end{change} \subsubsection*{Overhead} We investigated the impact of \approach on the overall performance of a cloud-based system by addressing the following research question: \begin{description} \item[RQ6] \textit{What is the overhead of \approach on the performance of the target system?} This question is particularly relevant in the context of multi-tier distributed systems with strict performance requirements, like telecommunication infrastructures. Thus, we designed an experiment referring to such applications. \approach executes the resource-intensive tasks, that is, anomaly detection and failure prediction, on a dedicated physical server, and thus the overhead on the system derives only from monitoring the KPIs. We evaluated the impact of monitoring the KPIs on the system performance by measuring the consumption of the different resources when running the system with and without KPI monitoring active. Both \approach \linebreak and data analytics solutions monitor the same KPIs, and thus share the same performance overhead, but \approach further processes the anomalies revealed with data analytics approaches, and presents more accurate predictions than competing approaches. \end{description} \subsection{Testbed}\label{sec:experimentalSettings} As a representative case of a multi-tier distributed system, we considered a complete cloud-based environment running an industrial-level IP multimedia sub-system. To control the study, we created a private cloud consisting of \begin{inparaenum}[(i)] \item a controller node responsible for running the management services necessary for the virtualization infrastructure, \item six compute nodes that run VM instances, \item a network node responsible for network communication among virtual machines. \end{inparaenum} The characteristics of the different nodes are summarized in Table~\ref{tab:HWconf}.
\begin{table}[h] \caption{Hardware configuration} \label{tab:HWconf} \centering \begin{tabular}{|c|c|l|c|c|} \hline \textbf{Host} & Controller & Network & Compute (x2) & Compute (x4) \\ \hline \textbf{CPU} & \multicolumn{4}{c|}{\textit{\begin{tabular}[c]{@{}c@{}}Intel(R) Core(TM)2 Quad CPU Q9650\\ (12M Cache, 3.00 GHz, 1333 MHz FSB)\end{tabular}}} \\ \hline \textbf{RAM} & 4 GB & 4 GB & 8 GB & 4 GB \\ \hline \textbf{Disk} & \multicolumn{4}{c|}{250 GB SATA hard disk} \\ \hline \textbf{NIC} & \multicolumn{4}{c|}{Intel(R) 82545EM Gigabit Ethernet Controller} \\ \hline \end{tabular} \end{table} We used OpenStack~\cite{Openstack:Cloud:2015} version Icehouse on Ubuntu 14.04 LTS as the open source cloud operating system and KVM~\cite{KVM:Cloud:2015} as the hypervisor. To evaluate our approach, we deployed Clearwater~\cite{Clearwater:CloudCaseStudy:2015} on the cloud-based infrastructure. Clearwater is an open source IP Multimedia Subsystem (IMS) and provides IP-based voice, video and message services. Clearwater is specifically designed to massively scale on a cloud-based infrastructure, and is a product that originates from the current trend of migrating traditional network functions from inflexible and expensive hardware appliances to cloud-based software solutions. Our Clearwater deployment consists of the following virtual machines: \begin{description} \item [Bono:] the entry point for all client communication in the Clearwater system. \item [Sprout:] the handler of client authentications and registrations. \item [Homestead:] a Web service interface to Sprout for retrieving authentication credentials and user profile information. \item [Homer:] a standard XML Document Management Server that stores MMTEL (MultiMedia Telephony) service settings for each user. \item [Ralf:] a service for offline billing capabilities. \item [Ellis:] a service for self sign-up, password management, line management and control of multimedia telephony service settings.
\end{description} Each component runs on a different VM. Each VM is configured with 2 vCPU, 2GB of RAM and 20GB hard disk space, and runs the Ubuntu 12.04.5 LTS operating system. Our multi-tier distributed system is thus composed of eight machines running components from three tiers: the operating system, infrastructure and application tiers, running Linux, OpenStack, and virtual machines with Clearwater, respectively. We refer to this environment as the \emph{testbed}. \subsection{Prototype Implementation} \label{sec:implementationDetails} Our prototype implementation of the monitoring infrastructure collects a total of 633 KPIs of 96 different types from the 14 physical and virtual machines that comprise the prototype. We collected KPIs at three levels: 162 KPIs at the application level with the SNMPv2c (Simple Network Management Protocol) monitoring service for Clearwater~\cite{Case:SNMP:RFC:1996}, 121 KPIs at the IaaS level with the OpenStack telemetry service (Ceilometer) for OpenStack~\cite{Openstack:Ceilometer:2015} and 350 KPIs at the operating system level with a Linux OS agent that we implemented for Ubuntu. We selected the KPIs according to the multi-tier distributed nature of our prototype and the low-impact requirements that characterize most industrial-scale systems. We collected KPIs from all the tiers characterizing the system, by relying on already available services, when possible, and on ad hoc monitors that we built otherwise. We collected only KPIs that can be monitored with no functional impact and negligible performance overhead. As expected, \approach did not impose any limitation on the set of collected and processed KPIs, and we expect this to be valid in general. \begin{review} Studying the impact of noisy and redundant KPIs on the performance of PreMiSE may further improve the technique.
\end{review} At the application tier, \approach collects both standard SNMP KPIs, such as communication latency between virtual machines, and Clearwater-specific KPIs, such as the number of rejected IP-voice calls. At the IaaS tier, \approach collects KPIs about the cloud resource usage, such as the rate of read and write operations executed by OpenStack. At the operating system tier, \approach collects KPIs about the usage of the physical resources, such as consumption of computational, storage, and network resources. In our evaluation, we used a sampling rate of 60 seconds. \approach elaborates KPIs from both simple and aggregated metrics, that is, metrics that can be sampled directly, such as CPU usage, and metrics derived from multiple simple metrics, for example the call success rate, which can be derived from the numbers of total and successful calls. The \emph{KPI Monitor} sends the data collected at each node to the predictor node that runs \approach on a Red Hat Enterprise Linux Server release 6.3 with an Intel(R) Core(TM)2 Quad Q9650 processor at 3 GHz and 16 GB of RAM. We implemented the \emph{Baseline Model Learner} and the \emph{Anomaly Detector} on release 1.3 of OA-PI~\cite{IBM:SmartCloudAnalyticsPI:2015}, a state-of-the-art tool that computes the correlation between pairs of KPIs. OA-PI detects anomalies by implementing the following anomaly detection criteria: normal baseline variance, normal-to-flat variation, variance reduction, Granger causality, unexpected level, out-of-range values, rare values\footnote{\url{https://www.ibm.com/support/knowledgecenter/SSJQQ3_1.3.3/com.ibm.scapi.doc/intro/r_oapi_adminguide_algorithms.html}}, and issues alarms after revealing anomalies in a few consecutive samples.
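Aggregated metrics of the kind mentioned above can be derived at each sampling step from the simple metrics. The sketch below (illustrative, not part of the monitoring stack) derives per-interval call success rates from cumulative call counters sampled every 60 seconds.

```python
def derived_rates(cumulative):
    """Turn cumulative (successful, total) call counters, sampled every
    60 seconds, into per-interval call success rates: a derived KPI
    computed from two simple KPIs."""
    rates = []
    for (s0, t0), (s1, t1) in zip(cumulative, cumulative[1:]):
        ds, dt = s1 - s0, t1 - t0          # calls within the interval
        rates.append(ds / dt if dt else 1.0)  # no calls: vacuously successful
    return rates

# Cumulative counters from four consecutive 60 s samples (illustrative).
counters = [(0, 0), (58, 60), (117, 121), (117, 121)]
assert derived_rates(counters) == [58 / 60, 59 / 61, 1.0]
```

The resulting series can then be fed to the baseline model exactly like a directly sampled KPI.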
\begin{change} OA-PI can analyze large volumes of data in real-time (the official IBM documentation shows an example server configuration to manage $1,000,000$ KPIs)\footnote{\url{https://www.ibm.com/support/knowledgecenter/en/SSJQQ3_1.3.3/com.ibm.scapi.doc/intro/c_tasp_intro_deploymentscenarios.html}}, and thus enables \approach to deal with the amount of KPIs that characterise large-scale distributed systems. The OA-PI learning phase requires data from at least two weeks of normal operation, thus determining the two-week training interval of \approachNoSpace. \end{change} We implemented the \emph{Signature Model Extractor} and the \emph{Failure Predictor} on top of the Weka library~\cite{Weka:DataMining:2015}, a widely used open-source library that supports several classic machine learning algorithms. We empirically compared the effectiveness of seven popular algorithms for solving classification problems when used to generate signatures: \begin{itemize} \item a function-based \emph{Support Vector Machine (SVM)} algorithm that implements a sequential minimal optimization~\cite{Platt:SupportVectorMachine:SMO:1998}, \item a \emph{Bayesian Network (BN)} algorithm based on hill climbing~\cite{Cooper:BayesianNetwork:ML:1992}, \item a \emph{Best-First Decision Tree (BFDT)} algorithm that builds a decision tree using a best-first search strategy~\cite{Friedman:BestFirstDT:ML:2000}, \item a \emph{Na\"ive Bayes (NB)} algorithm that implements a simple form of Bayesian network that assumes that the predictive attributes are conditionally independent, and that no hidden or latent attributes influence the prediction process~\cite{John:NaiveBayes:ML:1995}, \item a \emph{Decision Table (DT)} algorithm based on a decision table format that may include multiple exact condition matches for a data item, computing the result by a majority vote~\cite{Kohavi:DecisionTable:ML:1995}, \item a \emph{Logistic Model Tree (LMT)} algorithm that combines linear logistic regression
and tree induction~\cite{Niels:LMT:ML:2005}, \item a \emph{Hidden Na\"ive Bayes (HNB)} algorithm that uses the mutual information attribute weighted method to weight one-dependence estimators~\cite{Zhang:HNB:ML:2005}. \end{itemize} As discussed in Section~\ref{sec:results} and illustrated in Table~\ref{tab:multiClassPredictions}, the results of our evaluation do not indicate major differences among the considered algorithms, with a slightly better performance of Logistic Model Trees, which we adopt in the experiments. \subsection{Fault Seeding} \label{sec:faultSeeding} In this section, we discuss the methodology that we followed to seed faults in the testbed. Fault seeding consists of introducing faults in a system to reproduce the effects of real faults, and is a common approach to evaluate the dependability of systems and study the effectiveness of fault-tolerance mechanisms~\cite{Bennett:ChaosMonkey:CloudDependabilityTest:2015,Sharma:CloudPD:DSN:2013} in test or production environments~\cite{Sauvanaud:Anomaly:ISSRE:2016,Blohowiak:FaultInjection:ISSRE:2016}. Since we use a cloud-based system to evaluate \approachNoSpace, we identify a set of faults that are representative of the problems that affect cloud-based applications. We analyze a set of issue reports\footnote{We conducted the analysis in July 2014 selecting the most recent issue reports at the time of the inspection.} of some relevant cloud projects to determine the most relevant fault types that threaten cloud applications. We analyze a total of 106 issue reports, 18 about KVM\footnote{https://bugzilla.kernel.org/buglist.cgi?component=kvm}, 62 about OpenStack\footnote{https://bugs.launchpad.net/openstack}, 19 about CloudFoundry\footnote{https://www.pivotaltracker.com/n/projects/956238}, and 7 about Amazon\footnote{http://aws.amazon.com}, and we informally assess the results with our industrial partners that operate in the telecommunication infrastructure domain.
We classify the analyzed faults in thirteen main categories. Figure~\ref{fig:failures} plots the percentage of faults per category in decreasing order of occurrence for the analyzed fault repositories. The figure indicates a gap between the three most frequent categories of faults and the others, and we thus experimented with the three most frequent categories: \textit{Network}, \textit{Resource leaks} and \textit{High overhead} faults. \textit{Network} faults consist of networking issues that typically affect the network and transport layers, such as packet loss problems. \textit{Resource leaks} occur when resources that should be available for executing the system are not obtainable, for instance because a faulty process does not release memory when it is no longer needed. \textit{High overhead} faults occur when a system component cannot meet its overall objectives due to inadequate performance, for instance because of poorly implemented APIs or resource-intensive activities. \begin{figure}[!htbp] \centering \includegraphics[width=10cm]{kindsOfFailures.png} \caption{Occurrences of categories of faults in the analyzed repositories} \label{fig:failures} \end{figure} Based on the results of this analysis, we evaluate \approach with injected faults of six types that characterize the three top-ranked categories of faults in Figure~\ref{fig:failures}: \emph{Network faults} that depend on \emph{Packet loss} due to \emph{hardware} and \emph{excessive workload} conditions, increased \emph{Packet latency} due to network delay, and \emph{Packet corruption} due to errors in packet transmission and reception; \emph{Resource leak faults} that depend on \emph{Memory leaks}; and \emph{High overhead faults} that depend on \emph{CPU hogs}.
In detail, \begin{inparaenum}[(i)] \item a packet loss due to hardware conditions drops a fraction of the network packets, and simulates the degradation of the cloud network; \item a packet loss due to excessive workload conditions corresponds to an extremely intensive workload, and causes an intensive packet loss; \item an increased packet latency and \item a packet corruption, due to channel noise, routing anomalies or path failures, simulate degraded packet delivery performance; \item a memory leak fault periodically allocates some memory without releasing it, simulating a common software bug that severely threatens the dependability of cloud systems; \item a CPU hog fault executes some CPU-intensive processes that consume most of the CPU time and cause poor system performance. \end{inparaenum} We limited our investigation to the most relevant categories of faults to control the size of the experiment, which already involves an extremely large number of executions. The results that we discuss in Section~\ref{sec:results} demonstrate the effectiveness of \approach across all the faults considered in the experiments. We expect comparable results for other fault categories with the same characteristics as the considered ones, namely faults that lead to the degradation of some KPI values over time before a failure. This is the case for most of the fault categories of Figure~\ref{fig:failures}, with the exception of host and guest crashes, which may sometimes occur suddenly and without an observable degradation of KPI values over time. Confirming this hypothesis and thus extending the results to a broader range of fault categories would require additional experiments.
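For intuition, the resource-related faults above can be reproduced with very small injector routines. The following Python sketch illustrates possible memory-leak and CPU-hog injection steps; it is not the injection code used in our experiments, and the chunk size and busy-loop duration are arbitrary illustrative choices.

```python
import time

def memory_leak_step(leaked, chunk_kb=256):
    """One activation of a memory-leak fault: allocate a chunk and keep
    a reference to it, so the memory is never released."""
    leaked.append(bytearray(chunk_kb * 1024))
    return leaked

def cpu_hog_step(duration_s=0.01):
    """One activation of a CPU-hog fault: busy-loop for the given
    wall-clock time, consuming CPU cycles without doing useful work."""
    end = time.perf_counter() + duration_s
    iterations = 0
    while time.perf_counter() < end:
        iterations += 1  # useless work that keeps the CPU busy
    return iterations

# A faulty run invokes an injector step at every fault activation:
leaked = []
for _ in range(10):
    memory_leak_step(leaked)   # ten 256-KB chunks, never released
assert len(leaked) == 10
assert cpu_hog_step(0.001) > 0
```

A real injector would run such steps inside the target VM or host according to the chosen activation pattern.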
We inject packet loss, packet latency, packet corruption, memory leak and CPU hog faults into both the host (OpenStack) and guest (Clearwater) layers, and excessive workload faults by altering the nature of the workload, following the approaches proposed in previous studies on the dependability of cloud systems~\cite{Bennett:ChaosMonkey:CloudDependabilityTest:2015} and on problem determination in dynamic clouds~\cite{Sharma:CloudPD:DSN:2013}. We study a wide range of situations by injecting faults according to three activation patterns: \begin{description} \item[Constant:] the fault is triggered with a constant frequency over time. \item[Exponential:] the fault is activated with a frequency that increases exponentially, resulting in a shorter time to failure. \item[Random:] the fault is activated randomly over time. \end{description} Overall, we seeded 12 faults in different hosts and VMs. Each fault is characterized by a fault type and an activation pattern. \subsection{Workload Characteristics} \label{sec:workload} \begin{review}In practice, it is hard to know if a set of executions is general enough. In our specific settings, we define\end{review} the workload used in the experimental evaluation to replicate the shape of real SIP traffic as experienced by our industrial partners in the telecommunication domain. We carefully tuned the peak of the workload to use as much as 80\% of CPU and memory. We generate the SIP traffic with the SIPp traffic generator~\cite{Gayraud:SIPpTrafficGenerator:2015}, an open-source initiative from Hewlett-Packard (HP) that is the de facto standard for SIP performance benchmarking. SIPp can simulate the generation of multiple calls using a single machine. The generated calls follow user-defined scenarios that include the exact definition of both the SIP dialog and the structure of the individual SIP messages.
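The three activation patterns can be viewed as schedules of fault-activation times within a run. The sketch below is illustrative only; the base period and the halving factor of the exponential pattern are hypothetical parameters, not the values used in the experiments.

```python
import random

def activation_times(pattern, horizon_min=120, base_period_min=10, seed=0):
    """Return the minutes (within a run) at which a seeded fault is
    triggered, for each of the three activation patterns."""
    rng = random.Random(seed)
    times, t, period = [], 0.0, float(base_period_min)
    while True:
        if pattern == "constant":       # same frequency over time
            t += period
        elif pattern == "exponential":  # frequency grows exponentially
            t += period
            period = max(period / 2.0, 0.5)
        elif pattern == "random":       # uniformly random gaps
            t += rng.uniform(0.0, 2.0 * period)
        else:
            raise ValueError(pattern)
        if t >= horizon_min:
            return times
        times.append(t)

# The exponential pattern activates the fault increasingly often,
# which explains its shorter time to failure:
assert len(activation_times("exponential")) > len(activation_times("constant"))
```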
In our evaluation, we use the main SIP call flow dialogs as documented in Clearwater\footnote{http://www.projectclearwater.org/technical/call-flows/}. \begin{figure}[!htbp] \centering \includegraphics[width=12cm]{dailyVariations.png} \caption{Calls per second generated by our workload over a week} \label{fig:weeklyWorkload} \end{figure} Our workload includes a certain degree of randomness, and generates new calls based on a call rate that changes according to calendar patterns. In particular, we consider two workload patterns: \begin{description} \item[Daily variations] The system is busier on certain days of the week. In particular, we consider higher traffic on working days (Monday through Friday) and lower traffic on weekend days (Saturday and Sunday). Figure~\ref{fig:weeklyWorkload} graphically illustrates the structure of our workload over a period of a week. \item[Hourly variations] To resemble daily usage patterns, our workload is lighter during the night and heavier during the day, with two peaks at 9am and 7pm, as graphically illustrated in Figure~\ref{fig:dailyWorkload}. \end{description} \begin{review}In our empirical evaluation, we obtained good results already with the workload that we designed, without the need to introduce extensive variability in the normal executions used for training. This is probably a positive side effect of using anomaly detection and failure prediction in a pipeline: the failure predictor component can compensate for the noise and false positives produced by the anomaly detector.\end{review} \begin{figure}[!htbp] \centering \includegraphics[width=12cm]{hourlyVariations.png} \caption{Calls per second generated by our workload over a day} \label{fig:dailyWorkload} \end{figure} \subsection{Evaluation Measures} \label{sec:qualityMetrics} We addressed the research questions RQ1, RQ2 and RQ5 by using 10-fold cross-validation~\cite{Witten:dataMining:2011}.
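The calendar patterns above can be summarized as a simple rate function. The fragment below is a toy illustration of the weekday/weekend and 9am/7pm peak structure, not our SIPp configuration; the base rate and all multipliers are invented values.

```python
def call_rate(day, hour, base_cps=100):
    """Calls per second for a weekday (0=Monday .. 6=Sunday) and hour
    of day: lighter at night and on weekends, with peaks at 9am and
    7pm.  All numeric parameters are illustrative only."""
    weekday_factor = 1.0 if day < 5 else 0.6   # weekend traffic is lower
    # Two triangular peaks centred at hours 9 and 19, on top of a low
    # night-time floor:
    peak = max(0.0, 1.0 - abs(hour - 9) / 4.0) + \
           max(0.0, 1.0 - abs(hour - 19) / 4.0)
    hourly_factor = 0.2 + 0.8 * min(peak, 1.0)
    return base_cps * weekday_factor * hourly_factor

# Traffic peaks at 9am on a working day and is lowest at night;
# weekend traffic is lower than weekday traffic at the same hour:
assert call_rate(0, 9) > call_rate(0, 3)
assert call_rate(5, 9) < call_rate(0, 9)
```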
\approach analyzes time series data, and collects anomalous KPIs every 5 minutes, to comply with the requirements of IBM OA-PI~\cite{IBM:SmartCloudAnalyticsPI:2015}, the time series analyzer used in \approachNoSpace. Signature-based analysis does not consider the order in which the sliding windows are arranged, thus we collected the \emph{samples} necessary to apply 10-fold cross-validation during the execution of our workload with sliding windows of length $l$. Since each run lasts 120 minutes and the size of the interval in the sliding window is 5 minutes, each workload execution produces $(120-l)/5$ samples that can be used for prediction. In the evaluation, we first studied the impact of $l$ on the results (RQ1), and then used the best value in our context for the other experiments. Overall, we collected samples from a total of 648 runs, which include 24 passing executions and 24 failing executions for each type of failure. A failing execution is characterized by a fault of a given type injected in a given resource with a given activation pattern. As discussed in Section~\ref{sec:faultSeeding}, we injected faults of six different types (packet loss, excessive workload, packet latency, packet corruption, memory leak and CPU hog) following three activation patterns (constant, exponential and random). For all but excessive workload, we injected faults on five different target resources (the Bono, Sprout, and Homestead virtual machines in Clearwater and two compute nodes in OpenStack), resulting in $5\times 3\times 5=75$ failure cases. For excessive workload, we injected faults with the three patterns and no specific target resource, since excessive workload faults target the system as a whole rather than a specific resource. We thus obtained $75+3=78$ failure cases. To avoid biases due to the fault injection pattern, we repeated every experiment 8 times, thus obtaining 624 failing executions for the evaluation.
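The execution counts above follow from simple arithmetic, which the following fragment makes explicit:

```python
# Samples per 120-minute run, with 5-minute intervals and a sliding
# window of length l minutes:
def samples_per_run(l, run_min=120, interval_min=5):
    return (run_min - l) // interval_min

assert samples_per_run(90) == 6   # six samples per run when l = 90

# Failure cases: five fault types injected in five resources with
# three activation patterns, plus excessive workload with three
# patterns and no specific target resource:
failure_cases = 5 * 3 * 5 + 3
assert failure_cases == 78

# Repeating each failure case 8 times yields the failing executions:
assert failure_cases * 8 == 624
```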
The extensive investigation of the different fault types, activation patterns, and affected resources made the set of executions available for the experiment unbalanced between passing executions (24 cases) and failing executions (624 cases). Since we use $l=90$ for RQ2 and RQ5, we obtained a total of 4,782 samples collected from both passing and failing executions. The number of samples available for RQ1 is higher because we tried different values for $l$. To apply 10-fold cross-validation, we split the set of samples into 10 sets of equal size, using nine of them to learn the prediction model and the remaining set to compute the quality of the model. The \approach failure prediction algorithm does not consider the order of the samples in time, since it classifies each sample independently of the others. We evaluated the quality of a prediction model using the standard measures that define contingency tables and that cover the four possible outcomes of failure prediction (see Table~\ref{tab:contingencyTable}). We also measured the following derived metrics: \begin{description} \item[Precision:] the ratio of correctly predicted failures over all predicted failures. This measure can be used to assess the rate of false alarms, and thus the rate of unnecessary reactions that might be triggered by the failure predictor. \item[Recall:] the ratio of correctly predicted failures over actual failures. This measure can be used to assess the percentage of failures that can be predicted with the failure predictor. \item[F-Measure:] the uniformly weighted harmonic mean of precision and recall. This measure captures with a single number the tradeoff between precision and recall. \item[Accuracy:] the ratio of correct predictions over the total number of predictions. The accuracy provides a quantitative measure of the capability to predict both failures and correct executions.
\item[FPR (False Positive Rate):] the ratio of incorrectly predicted failures to the number of all correct executions. The FPR provides a measure of the false warning frequency. \end{description} Table~\ref{tab:contingencyTableMetrics} summarizes the derived metrics that we used by presenting their mathematical formulas and meanings. \begin{table}[h] \caption {Contingency table} \label{tab:contingencyTable} \vspace{-0.4cm} \begin{center} \begin{tabular}{cc|c|c|} \cline{3-4} & & \multicolumn{2}{c|}{\textbf{Predicted}} \\ \cline{3-4} & & Failure & Not-Failure \\ \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Actual}}} & Failure & \begin{tabular}[c]{@{}c@{}}True Positive (TP)\\ \textit{(correct warning)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}False Negative (FN)\\ \textit{(missed warning)}\end{tabular} \\ \cline{2-4} \multicolumn{1}{|c|}{} & Not-failure & \begin{tabular}[c]{@{}c@{}}False Positive (FP)\\ \textit{(false warning)}\end{tabular} & \begin{tabular}[c]{@{}c@{}}True Negative (TN)\\ \textit{(correct no-warning)}\end{tabular} \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[h] \caption {Selected metrics obtained from the contingency table} \label{tab:contingencyTableMetrics} \begin{center} \begin{tabular}{|c|c|c|} \hline Metric & Formula & Meaning \\ \hline & & How many predicted \\ Precision& $\frac{TP}{(TP+FP)}$ & failures are actual \\ & & failures? \\ \hline & & How many actual \\ Recall&$\frac{TP}{(TP+FN)}$ & failures are correctly \\ & & predicted as failures? \\ \hline & & Harmonic mean of \\ F-measure&$2*\frac{(Precision*Recall)}{(Precision+Recall)}$ & $Precision$ and $Recall$ \\ & & \\ \hline & & How many predictions \\ Accuracy&$\frac{(TP+TN)}{(TP+TN+FP+FN)}$ & are correct? \\ & & \\ \hline & & How many correct \\ FPR&$\frac{(FP)}{(TN+FP)}$ & executions are \\ & & predicted as failures? 
\\ \hline \end{tabular} \end{center} \vspace{-0.5cm} \end{table} We addressed the research question RQ3 by computing the \emph{percentage of samples that \approach correctly classifies as failure-free} given a set of samples collected by running workloads that differ significantly from the workload used during the training phase. To this end, we designed two new workloads: \emph{random40} and \emph{random100}. The \emph{random40} workload behaves like the training workload with a uniformly random deviation between 0\% and 40\%, while the \emph{random100} workload behaves with a deviation of up to 100\%. We addressed the research question RQ4 by computing \emph{the time needed to generate a prediction} and \emph{the time between a failure prediction and its occurrence} from a total of 18 faulty runs lasting up to twelve hours. The former measures the capability of \approach to identify and report erroneous behaviors. The latter estimates how early \approach can predict failure occurrences.
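The derived metrics of Table~\ref{tab:contingencyTableMetrics} map directly to code. The sketch below computes them from the four contingency-table counts; the example counts are invented for illustration.

```python
def prediction_metrics(tp, fn, fp, tn):
    """Derived metrics from the failure-prediction contingency table
    (true/false positives and negatives)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    fpr = fp / (tn + fp)          # false positive rate
    return precision, recall, f_measure, accuracy, fpr

# Invented example: a predictor that misses no failure but raises one
# false warning out of ten correct executions:
p, r, f, a, fpr = prediction_metrics(tp=9, fn=0, fp=1, tn=9)
assert r == 1.0 and fpr == 0.1
```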
As shown in Figure~\ref{fig:predictionTime}, we define four specific measures: \begin{description} \item[Time-To-General-Prediction (\emph{TTGP}):] the distance between the time a fault is active for the first time and the time \approach produces a general prediction, \item[Time-To-Failure-Specific-Prediction (\emph{TTFSP}):] the distance between the time a fault is active for the first time and the time \approach predicts a specific failure type, \item[Time-To-Failure for General Prediction (\emph{TTF(GP)}):] the distance between the time \approach predicts a general failure and the time the failure happens, \item[Time-To-Failure for Failure-Specific Prediction (\emph{TTF(FSP)}):] the distance between the time \approach predicts a specific failure type and the time the system fails, \end{description} where the \emph{Fault occurrence} is the time the seeded fault becomes active in the system, the \emph{General prediction} is the first time \approach signals the presence of a failure without indicating the fault yet, that is, it identifies an anomaly with an empty fault and resource, the \emph{Failure-specific prediction} is the first time \approach indicates also the fault type and the faulty resource, and the \emph{Failure} is the time the delivered service deviates from the system function. Failures depend on the seeded faults. In our case, failures manifest either as system crashes or as the success rate dropping below 60\%, as indicated in Section~\ref{sec:results} when discussing RQ4.
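The four measures reduce to differences between event timestamps. The sketch below makes this explicit; the event times in the example are invented for illustration.

```python
def earliness_measures(fault_t, general_t, specific_t, failure_t):
    """The four prediction-time measures, from the timestamps (in
    minutes from the start of the run) of the fault activation, the
    general prediction, the failure-specific prediction and the
    failure."""
    ttgp = general_t - fault_t        # Time-To-General-Prediction
    ttfsp = specific_t - fault_t      # Time-To-Failure-Specific-Prediction
    ttf_gp = failure_t - general_t    # Time-To-Failure (general prediction)
    ttf_fsp = failure_t - specific_t  # Time-To-Failure (specific prediction)
    return ttgp, ttfsp, ttf_gp, ttf_fsp

# Invented example: fault active at t=0, general prediction at t=30,
# failure-specific prediction at t=55, failure at t=120:
assert earliness_measures(0, 30, 55, 120) == (30, 55, 90, 65)
```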
\begin{figure} \begin{center} \includegraphics[width=9.0cm]{predictionTime.pdf} \caption{Prediction time measures} \label{fig:predictionTime} \end{center} \vspace{-0.7cm} \end{figure} To answer RQ6, we measured the resource consumption as \begin{inparaenum}[(i)] \item the percentage of \emph{CPU} used by the monitoring activities, \item the amount of \emph{memory} used by the monitoring activities, \item the amount of \emph{bytes read/written} per second by the monitoring activities, and \item the packets received/sent per second over the \emph{network} interfaces by the monitoring activities. \end{inparaenum} \section{Experimental Results} \label{sec:results} In this section, we discuss the results of the experiments that we executed to answer the research questions introduced in the previous section. \begin{figure} \centering \includegraphics[width=8.7cm]{observationWindow} \caption{Average effectiveness of failure prediction approaches with different sliding window sizes} \label{fig:observationWindow} \vspace{-0.4cm} \end{figure} \begin{figure} \centering \includegraphics[width=8cm]{fprObservationWindow} \caption{Average false positive rate for different sliding window sizes} \label{fig:fprObservationWindow} \vspace{-0.3cm} \end{figure} \begin{table} \caption {Comparative evaluation of the effectiveness of \approach prediction and localization with the different algorithms for generating signatures} \label{tab:multiClassPredictions} \vspace{-0.2cm} \begin{center} \begin{tabular}{|l||r|r|r|r|r|} \hline \emph{Model} & \emph{Precision} & \emph{Recall} & \emph{F-measure} & \emph{Accuracy} & \emph{FPR} \\\hline \hline \emph{BN} & 84.906 & 82.438 & 82.430 & 82.438 & 0.685 \\ \hline \emph{BFDT} & 97.746 & 97.719 & 97.726 & 97.719 & 0.093 \\ \hline \emph{NB} & 83.931 & 80.925 & 80.786 & 80.925 & 0.745 \\ \hline \emph{SVM} & 98.632 & 98.632 & 98.632 & 98.632 & 0.057 \\ \hline \emph{DT} & 92.650 & 88.555 & 89.943 & 88.555 & 0.470 \\ \hline \emph{LMT} & 98.798 &
98.797 & 98.797 & 98.797 & 0.050 \\ \hline \emph{HNB} & 92.307 & 91.831 & 91.765 & 91.831 & 0.325 \\ \hline \end{tabular} \end{center} \vspace{-0.2cm} \end{table} \subsubsection*{RQ1: Sliding window size} \label{sec:slidingWindowResults} \approach builds the prediction models and analyzes anomalies referring to time sliding windows of fixed size. The sliding windows should be big enough to contain a sufficient amount of information to predict failures and small enough to be practical and sensitive to failure symptoms. With this first set of experiments, we investigate the impact of the window size on the effectiveness of the prediction. We experimented with the seven algorithms described in Section~\ref{sec:implementationDetails}, each with sliding windows of size 60, 90 and 120 minutes, to study the impact of the window size and to choose the size for the next experiments. We built a total of 21 prediction models (seven algorithms, each with three window sizes). We executed the prototype tool with the different prediction models both with and without seeded faults, for a total of 24 executions for each of 27 configurations, 26 of which correspond to configurations seeded with a different fault, plus one fault-free configuration, for a total of 648 executions. The configurations correspond to the rows of Table~\ref{tab:LMTperformance} that we discuss later in this section. Figure~\ref{fig:observationWindow} compares the average precision, recall, F-measure and accuracy over all the experiments. These results indicate that the window size has a moderate impact on the predictions, and that a window size of 90 minutes reaches the best prediction effectiveness among the experimented sizes. Figure~\ref{fig:fprObservationWindow} shows the average false positive rates for the different window sizes, and confirms a window of 90 minutes as the best choice among the evaluated sizes. The results collected for the individual algorithms are consistent with the average ones.
In all the remaining experiments, we use 90-minute windows. \subsubsection*{RQ2: Predicting Failures and locating faults} We evaluated the effectiveness of \approach as its ability to predict incoming failures and to identify the kind and location of the related faults. Table~\ref{tab:multiClassPredictions} shows the precision, recall, F-measure, accuracy and False Positive Rate (FPR) of failure prediction and fault localization for \approach with the prediction algorithms presented at the end of Section~\ref{sec:implementationDetails}. The table indicates that \approach performs well with all the algorithms, with slightly better indicators for \emph{LMTs (Logistic Model Trees)}, which we select for the remaining experiments. Table~\ref{tab:LMTperformance} shows the effectiveness of \approach with LMT for the different fault types and locations. The metrics are computed on a per-window basis, since a prediction must be made for each window; windows belonging to both failing and correct executions are therefore taken into account. The results in the table indicate that the approach is extremely accurate: \approach produced only~74 wrong predictions out of~4,782 window samples. \approach can quickly complete the offline training phase. To learn the \emph{baseline model}, the data collected from two weeks of execution required less than 90 minutes of processing time. When the training phase runs in parallel with the data collection process, it completes almost immediately after the data collection process has finished. The \emph{signature model} took less than 15 minutes to learn from the anomalies collected over two weeks.
\begin{table} \begin{center} \begin{scriptsize} \caption {Effectiveness of the Logistic Model Tree (LMT) failure prediction algorithm for fault type and location} \label{tab:LMTperformance} \begin{tabular}{|c|c|c|c|c|c|} \hline \begin{tabular}[c]{@{}c@{}}Fault type (Location) \end{tabular} & Precision & Recall & F-Measure & Accuracy & FPR \\ \hline \begin{tabular}[c]{@{}c@{}}CPU hog (Bono) \end{tabular} & 100\% & 93.529\% & 96.657\% & 0.998 & 0\% \\ \hline \begin{tabular}[c]{@{}c@{}}CPU hog (Sprout) \end{tabular} & 100\% & 97.059\% & 98.507\% & 0.999 & 0\% \\ \hline \begin{tabular}[c]{@{}c@{}}CPU hog (Homestead) \end{tabular} & 100\% & 97.041\% & 98.498\% & 0.999 & 0\% \\ \hline \begin{tabular}[c]{@{}c@{}}CPU hog (Compute \#5) \end{tabular} & 93.820\% & 98.817\% & 96.254\% & 0.997 & 0.236\% \\ \hline \begin{tabular}[c]{@{}c@{}}CPU hog (Compute \#7) \end{tabular} & 96.875\% & 91.716\% & 94.225\% & 0.996 & 0.107\% \\ \hline \begin{tabular}[c]{@{}c@{}}Excessive workload \end{tabular} & 100\% & 100\% & 100\% & 1.000 & 0\% \\ \hline \begin{tabular}[c]{@{}c@{}}Memory leak (Bono) \end{tabular} & 100\% & 98.810\% & 99.401\% & 1.000 & 0\% \\ \hline \begin{tabular}[c]{@{}c@{}}Memory leak (Sprout) \end{tabular} & 100\% & 95.833\% & 97.872\% & 0.999 & 0\% \\ \hline \begin{tabular}[c]{@{}c@{}}Memory leak (Homestead) \end{tabular} & 100\% & 96.429\% & 98.182\% & 0.999 & 0\% \\ \hline \begin{tabular}[c]{@{}c@{}}Memory leak (Compute \#5) \end{tabular} & 76.119\% & 91.071\% & 82.927\% & 0.987 & 1.031\% \\ \hline \begin{tabular}[c]{@{}c@{}}Memory leak (Compute \#7) \end{tabular} & 93.333\% & 75.000\% & 83.168\% & 0.989 & 0.193\% \\ \hline \begin{tabular}[c]{@{}c@{}}Packet corruption (Bono) \end{tabular} & 85.973\% & 99.476\% & 92.233\% & 0.993 & 0.669\% \\ \hline \begin{tabular}[c]{@{}c@{}}Packet corruption (Sprout) \end{tabular} & 87.558\% & 99.476\% & 93.137\% & 0.994 & 0.583\% \\ \hline \begin{tabular}[c]{@{}c@{}}Packet corruption (Homestead) \end{tabular} &
99.429\% & 91.579\% & 95.342\% & 0.996 & 0.022\% \\ \hline \begin{tabular}[c]{@{}c@{}}Packet corruption (Compute \#5) \end{tabular} & 100\% & 100\% & 100\% & 1.000 & 0\% \\ \hline \begin{tabular}[c]{@{}c@{}}Packet corruption (Compute \#7) \end{tabular} & 100\% & 100\% & 100\% & 1.000 & 0\% \\ \hline \begin{tabular}[c]{@{}c@{}}Packet latency (Bono) \end{tabular} & 96.000\% & 100\% & 97.959\% & 0.998 & 0.173\% \\ \hline \begin{tabular}[c]{@{}c@{}}Packet latency (Sprout) \end{tabular} & 76.777\% & 84.375\% & 80.397\% & 0.984 & 1.058\% \\ \hline \begin{tabular}[c]{@{}c@{}}Packet latency (Homestead) \end{tabular} & 72.028\% & 53.646\% & 61.493\% & 0.973 & 0.864\% \\ \hline \begin{tabular}[c]{@{}c@{}}Packet latency (Compute \#5) \end{tabular} & 82.857\% & 75.521\% & 79.019\% & 0.984 & 0.648\% \\ \hline \begin{tabular}[c]{@{}c@{}}Packet latency (Compute \#7) \end{tabular} & 62.069\% & 75.000\% & 67.925\% & 0.972 & 1.900\% \\ \hline \begin{tabular}[c]{@{}c@{}}Packet loss (Bono) \end{tabular} & 100\% & 73.837\% & 84.950\% & 0.991 & 0\% \\ \hline \begin{tabular}[c]{@{}c@{}}Packet loss (Sprout) \end{tabular} & 99.429\% & 100\% & 99.713\% & 1.000 & 0.022\% \\ \hline \begin{tabular}[c]{@{}c@{}}Packet loss (Homestead) \end{tabular} & 94.152\% & 95.833\% & 94.985\% & 0.996 & 0.215\% \\ \hline \begin{tabular}[c]{@{}c@{}}Packet loss (Compute \#5) \end{tabular} & 100\% & 100\% & 100\% & 1.000 & 0\% \\ \hline \begin{tabular}[c]{@{}c@{}}Packet loss (Compute \#7) \end{tabular} & 82.266\% & 99.405\% & 90.027\% & 0.992 & 0.773\% \\ \hline \begin{tabular}[c]{@{}c@{}}Correct execution \end{tabular} & 100\% & 100\% & 100\% & 1.000 & 0\% \\ \hline \end{tabular} \end{scriptsize} \end{center} \vspace{-0.5cm} \end{table} \subsubsection*{RQ3: Detecting Legal Executions} \label{sec:legalExec} While the workload conditions do not alter the failure detection and fault localization capabilities, they may impact the false positive rate in the absence of faults.
We thus experimented with different types of workloads in the absence of faults: workload \emph{random40}, which deviates from the workload used in the training phase by up to 40\%, and \emph{random100}, which deviates by up to 100\%. We generated 72 samples for \emph{random40} and 72 for \emph{random100} by running each workload for 2 hours, producing a total of 144 samples. \approach has been able to correctly classify all the samples as belonging to failure-free executions. Jointly with the results discussed for RQ2, we can say that \approach shows a very low number of false positives, even when analyzing data from normal executions with workloads completely different from those used in the training phase. \subsubsection*{RQ4: Prediction Earliness} \label{sec:predictionTimeResults} We evaluated the earliness of the prediction as the Time-To-General-Prediction (\emph{TTGP}), the Time-To-Failure-Specific-Prediction (\emph{TTFSP}), the Time-To-Failure for General Prediction (\emph{TTF(GP)}) and the Time-To-Failure for Failure-Specific Prediction (\emph{TTF(FSP)}) illustrated in Figure~\ref{fig:predictionTime}. In the experiments, failures correspond to either system crashes or drops of the successful SIP call rate below 60\% for 5 consecutive minutes. Table~\ref{tab:predictionTime} reports the results of the experiment. The columns \emph{from fault occurrence to failure prediction} show the time that \approach needed to predict a general (TTGP) and a specific (TTFSP) failure, respectively. \approach has been able to produce a general failure prediction within minutes: 5 minutes in the best case, less than 31 minutes for most of the faults, and 65 minutes in the worst case. Moreover, \approach has generated the failure-specific prediction a few minutes after the general prediction, with a worst case of 35 minutes from the general to the specific prediction.
Note that we measure the time to prediction starting with the first activation of the seeded fault, which may not immediately lead to observable symptoms. The columns \emph{From failure prediction to failure} indicate that the failures are predicted well in advance, leaving time for a manual resolution of the problem. \approach has produced both the general and the failure-specific prediction at least 48 minutes before the failure, which is usually sufficient for a manual intervention. These results are also valuable for the deployment of self-healing routines, which might be activated well in advance of failures. \approach predicts failures based on the analysis of \emph{OA-PI}, which works with sampling intervals of 5 minutes. Indeed, \approach can effectively predict a failure from a few anomalous samples, and could predict failures in less than 5 minutes with an anomaly detector that uses smaller sampling intervals. \begin{figure}[ht!] \begin{center} \includegraphics[width=12cm]{successrate.png} \caption{Call success rate over time} \label{fig:successrate} \end{center} \end{figure} Faults of different types have very different impacts on the system, and can thus result in largely different patterns. Figure~\ref{fig:successrate} exemplifies the different impact of faults of various types by plotting the percentage of successful calls in the experiments characterized by the longest and shortest Time-to-Failure, which correspond to CPU hog and packet corruption faults, respectively. Packet corruption faults have a gradual impact on the system, while the CPU hog faults do not cause failures in the first three hours of execution for the reported experiment. Overall, \approach demonstrated the ability to effectively predict failures, including their type, well in advance of the failure time for the classes of problems investigated.
\begin{table} \caption {\approach prediction earliness for fault type and pattern} \label{tab:predictionTime} \begin{center} \begin{tabular}{| l| | l | l |l |l|} \hline & \multicolumn{2}{c|}{From fault occurrence} & \multicolumn{2}{c|}{From failure prediction} \\ & \multicolumn{2}{c|}{to failure prediction} & \multicolumn{2}{c|}{to failure}\\ {\it \begin{tabular}[c]{@{}l@{}} Fault Type (Pattern)\end{tabular}} & \emph{TTGP} & \emph{TTFSP} &\emph{TTF (GP)} & \emph{TTF (FSP)} \\ \hline \hline {\it \begin{tabular}[c]{@{}l@{}}CPU hog (Random)\end{tabular}} &65 mins& 80 mins & $>$12 hours & $>$12 hours \\ \hline {\it \begin{tabular}[c]{@{}l@{}}CPU hog (Constant)\end{tabular}} &45 mins& 60 mins & $>$12 hours & $>$12 hours \\ \hline {\it \begin{tabular}[c]{@{}l@{}}CPU hog (Exponential)\end{tabular}} &5 mins& 30 mins & $>$12 hours & $>$12 hours \\ \hline {\it \begin{tabular}[c]{@{}l@{}}Excessive workload \\ (Random)\end{tabular}} &35 mins& 50 mins & 192 mins & 177 mins \\ \hline {\it \begin{tabular}[c]{@{}l@{}}Excessive workload \\ (Constant)\end{tabular}} &40 mins & 55 mins &110 mins &95 mins \\ \hline {\it \begin{tabular}[c]{@{}l@{}}Excessive workload \\ (Exponential)\end{tabular}} &30 mins& 45 mins & 80 mins & 65 mins \\ \hline {\it \begin{tabular}[c]{@{}l@{}}Memory leak (Random)\end{tabular}} &5 mins& 20 mins & 55 mins & 40 mins \\ \hline {\it \begin{tabular}[c]{@{}l@{}}Memory leak (Constant)\end{tabular}} &5 mins& 20 mins & 56 mins & 41 mins \\ \hline {\it \begin{tabular}[c]{@{}l@{}}Memory leak \\ (Exponential)\end{tabular}} &5 mins& 20 mins & 56 mins & 41 mins \\ \hline {\it \begin{tabular}[c]{@{}l@{}}Packet corruption \\ (Random)\end{tabular}} &30 mins &60 mins &121 mins &91 mins \\ \hline {\it \begin{tabular}[c]{@{}l@{}}Packet corruption \\ (Constant)\end{tabular}} &30 mins& 60 mins & 172 mins & 148 mins \\ \hline {\it \begin{tabular}[c]{@{}l@{}}Packet corruption \\ (Exponential)\end{tabular}} &30 mins& 55 mins & 48 mins & 23 mins \\ \hline {\it 
\begin{tabular}[c]{@{}l@{}}Packet latency (Random)\end{tabular}} &45 mins& 70 mins & 132 mins & 107 mins \\ \hline {\it \begin{tabular}[c]{@{}l@{}}Packet latency (Constant)\end{tabular}} &30 mins& 60 mins & 132 mins & 102 mins \\ \hline {\it \begin{tabular}[c]{@{}l@{}}Packet latency \\ (Exponential)\end{tabular}} &45 mins& 60 mins & 59 mins & 44 mins \\ \hline {\it \begin{tabular}[c]{@{}l@{}}Packet loss (Random)\end{tabular}} &50 mins& 65 mins & 142 mins & 127 mins \\ \hline {\it \begin{tabular}[c]{@{}l@{}}Packet loss (Constant)\end{tabular}} &30 mins& 65 mins & 85 mins & 50 mins \\ \hline {\it \begin{tabular}[c]{@{}l@{}}Packet loss \\ (Exponential)\end{tabular}} &50 mins& 65 mins & 52 mins & 37 mins \\ \hline \end{tabular} \medskip \begin{footnotesize} \emph{$>$12 hours} indicates the cases in which no failure has been observed within 12 hours, despite the presence of active faults that would eventually lead to system failures. \end{footnotesize} \end{center} \vspace{-0.5cm} \end{table} \subsubsection*{RQ5: Comparative Evaluation} \label{subsec:rq4} \begin{change}We compare \approach to both \emph{OA-PI} and \emph{G-BDA} on the same testbed. \emph{OA-PI} is a widely adopted industrial anomaly-based tool, while \emph{G-BDA} is a state-of-the-art signature-based approach. We use \emph{OA-PI} as a baseline approach, and \emph{G-BDA} as a relevant representative of competing approaches. Table~\ref{tab:comparison} reports the precision, recall, F-Measure, accuracy and false positive rate of \approachNoSpace, OA-PI and G-BDA. \end{change} OA-PI infers the threshold of normal performance for KPI values, and raises alarms only for persistent anomalies, that is, if the probability that a KPI is anomalous in 3 of the last 6 intervals is above a certain threshold value~\cite{IBM:ITOAPI:Tutorial:2015}. Columns \emph{OA-PI (anomalies)} and \emph{OA-PI (alarms)} of Table~\ref{tab:comparison} report all the anomalies that OA-PI detects and the persistent anomalies that it signals as alarms, respectively.
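For intuition, the persistence rule can be sketched as a boolean filter over the sequence of 5-minute intervals. The fragment below is a simplified illustration, not IBM's implementation, which works on per-KPI anomaly probabilities rather than boolean flags.

```python
from collections import deque

def persistent_alarm(anomaly_flags, window=6, required=3):
    """For each interval, raise an alarm iff at least `required` of the
    last `window` intervals were flagged as anomalous (a simplified
    version of the 3-of-the-last-6 persistence rule)."""
    recent, alarms = deque(maxlen=window), []
    for flag in anomaly_flags:
        recent.append(bool(flag))
        alarms.append(sum(recent) >= required)
    return alarms

# Isolated anomalies do not trigger an alarm; a persistent burst does:
flags = [0, 1, 0, 0, 0, 0, 1, 1, 1, 1]
alarms = persistent_alarm(flags)
assert not alarms[1]   # a single anomaly is filtered out
assert alarms[9]       # four anomalies in the last six intervals
```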
In both cases OA-PI is less effective than \approachNoSpace: OA-PI does not raise any alarm, thus failing to predict the failures (recall = $0\%$, precision and F-measure not computable), and records far too many anomalies, thus signalling all potential failures (recall of $100\%$) diluted in a myriad of false alarms (false positive rate = $100\%$). In a nutshell, OA-PI reports every legal execution as a possible failure. \approach is effective: the high values of the five measures indicate that \approach predicts most failures with a negligible amount of false positives. \begin{change} G-BDA is a signature-based approach that collects VM metrics to detect preliminary symptoms of failures. G-BDA detects both excessive workload and anomalous virtual machines, but analyzes only a single tier of a distributed system. Columns \emph{G-BDA (single-tier)} and \emph{G-BDA (multi-tier)} of Table~\ref{tab:comparison} report precision, recall, F-measure, accuracy and FPR of G-BDA for faults injected in a single VM and for faults injected in different tiers, respectively. In both cases \approach outperforms G-BDA on all five measures, and reduces the FPR from over 2\% to 0.05\%. \end{change} \begin{change} In summary, the \approach combination of anomaly detection and signature-based analysis is more effective than either of the two techniques used in isolation.
\end{change} \begin{table}[] \caption {Comparative evaluation of \approach and state-of-the-art approaches}\label{tab:comparison} \resizebox{\textwidth}{!}{\begin{tabular}{cccccc} \hline \multirow{2}{*}{\textbf{Measures}} & \multirow{2}{*}{\textbf{\approach}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}OA-PI\\ (alarms)\end{tabular}}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}OA-PI \\ (anomalies)\end{tabular}}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}\begin{change}G-BDA\end{change}\\ \begin{change}(single-tier)\end{change}\end{tabular}}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}\begin{change}G-BDA\end{change}\\ \begin{change}(multi-tier)\end{change}\end{tabular}}} \\ & & & & & \\ \hline Precision & 98.798\% & -- & 94.118\% & \begin{change}90.967\%\end{change} & \begin{change}87.933\%\end{change} \\ \hline Recall & 98.797\% & 0\% & 100\% & \begin{change}90.567\%\end{change} & \begin{change}87.533\%\end{change} \\ \hline F-Measure & 98.797\% & -- & 96.970\% & \begin{change}90.367\%\end{change} & \begin{change}87.400\%\end{change} \\ \hline Accuracy & 98.797\% & 5.882\% & 94.118\% & \begin{change}90.567\%\end{change} & \begin{change}87.533\% \end{change} \\ \hline FPR & 0.05\% & 0\% & 100\% & \begin{change}2.833\%\end{change} & \begin{change}2.3\%\end{change} \\ \hline \end{tabular}} \end{table} \subsubsection*{RQ6: Overhead} \label{subsec:RQ5} \approach interacts directly with the system only through the \emph{KPI Monitor}, which in our prototype implementation collects the KPIs by means of the SNMPv2c monitoring service for Clearwater~\cite{Case:SNMP:RFC:1996}, the Ceilometer telemetry service for OpenStack~\cite{Openstack:Ceilometer:2015} and a Linux OS agent that we implemented for Ubuntu. All other computation is performed on a dedicated node, and does not impact the overall performance of the target system.
Thus, the \approach overhead on the running system is limited to the overhead of the monitoring services, which we expect to be very low. \begin{figure}[ht!] \begin{center} \includegraphics[width=10cm]{overhead-comparison.pdf} \caption{\approach overhead} \label{fig:overhead} \end{center} \end{figure} The experimental results confirm the absence of measurable overhead on the target system. We only observe small differences in resource consumption, as reported in Figure~\ref{fig:overhead}, which shows CPU, memory, disk and network consumption when the system is executed with and without the monitoring infrastructure. The monitoring infrastructure has a negligible impact on disk (measured as read and written bytes) and network (measured as number of sent and received packets) usage, accounting for a few hundred bytes out of tens of thousands and a few packets out of thousands, respectively. The impact on CPU and memory usage is also quite low, with an average increase of 2.63\% and 1.91\%, respectively. These results are perfectly compatible with systems with strong performance requirements, such as telecommunication infrastructures. \subsubsection*{Threats To Validity} \label{sec:threat} In this article, we reported our experience with an IMS in a network function virtualization environment. Although our predictors were successful under different runtime conditions, the results may not generalize to other cloud systems. While the approach itself generalises easily, it cannot be assumed in advance that the results of a study generalise beyond the specific environment in which it was conducted. This is an external threat to validity. \begin{change}One way to mitigate this threat is to analyze other real-world cloud systems. However, there is no publicly available benchmark from realistic cloud deployments such as Project Clearwater.
To overcome this limitation, we are partnering with industrial companies to test PreMiSE in their pre-production systems.\end{change} An internal threat to validity is the limited number of faults we examined in the study. We chose popular faults without a priori knowledge of their behavior. However, it is possible that there are faults that do not exhibit any performance deviations. To mitigate this threat, we plan to extend the study with a larger set of experiments, so that statistically significant tests can be meaningfully applied. \section{Related Work} \label{sec:related} State-of-the-art techniques for predicting failures and locating the corresponding faults are designed to support system administrators, enable self-healing solutions or deal with performance issues~\cite{Ibidunmoye:AnomalyDetectionSurvey:2015, Salfner:PredictSurv:ACMCompSurv:2010}. Current techniques to predict system failures derive abstractions of the behavior of a system in the form of models and rules, and exploit either \emph{signature-} or \emph{anomaly-based} strategies to dynamically reveal symptoms of problems, and predict related failures. \emph{Signature-based approaches} capture failure-prone behaviors that indicate how the monitored system behaves when affected by specific faults, and aim to reveal faults of the same kinds at runtime. \emph{Anomaly-based approaches} capture non-failure-prone behaviors that represent the correct behavior of the monitored system, and aim to reveal behaviors that violate these abstractions at runtime. \emph{Performance-related approaches} dynamically identify anomalies and bottlenecks. Signature-based approaches can accurately predict the occurrences of failures whose symptoms are encoded in the models that capture the fault signatures, but are ineffective against failures that do not correspond to signatures encoded in the models.
Anomaly-based approaches can potentially predict any failure, since they reveal violations of models of the positive behaviors of the application, but depend on the completeness of the models, and suffer from many false alarms. In a nutshell, \emph{signature-based approaches may miss several failures while anomaly-based approaches may generate many false alarms}. \approach integrates anomaly- and signature-based approaches to benefit from both the generality of anomaly-based techniques and the accuracy of signature-based techniques. While current signature-based approaches derive failure signatures from application events~\cite{Vilalta:EventPrediction:ICDM:2002,Fu:EventCorrelation:SC:2007,Salfner:PredictingFailure:IPDPS:2006}, the original \approach approach derives failure signatures from anomalies that are good representatives of failure-prone behaviors, and thus particularly effective in predicting failures. \emph{Performance-related approaches} address performance problems by detecting anomalies and bottlenecks that do not affect the functional behavior of the system, and as such are largely complementary to \approach and related approaches. \subsection{Signature-Based Approaches} The main signature-based approaches are Vilalta et al.'s approach~\cite{Vilalta:EventPrediction:ICDM:2002}, hPrefects~\cite{Fu:EventCorrelation:SC:2007}, SEIP~\cite{Salfner:PredictingFailure:IPDPS:2006}, Seer~\cite{Ozcelik:Seer:TSE:2016}, Sauvanaud et al.'s approach~\cite{Sauvanaud:Anomaly:ISSRE:2016}, SunCat~\cite{Nistor:SunCat:ISSTA:2014} and the approach defined by Malik et al.~\cite{Malik:AutomaticDetection:ICSE:2013}. Vilalta et al.\ introduced an approach that mines failure reports to learn associative rules that relate events that frequently occur prior to system failures to the failures themselves, and uses the mined rules to predict failures at runtime, before their occurrence~\cite{Vilalta:EventPrediction:ICDM:2002}.
Fu and Xu's \emph{hPrefects} approach extends Vilalta et al.'s rules to clustered architectures~\cite{Fu:EventCorrelation:SC:2007}. \emph{hPrefects} learns how failures propagate in time and space from failure records, represents temporal correlations with a spherical covariance model and spatial correlations with stochastic models, and includes a cluster-wide failure predictor that uses the learned models to estimate the probability that a failure occurs in the current execution. Salfner et al.'s \emph{SEIP} approach synthesizes a semi-Markov chain model that includes information about error frequency and error patterns~\cite{Salfner:PredictingFailure:IPDPS:2006}, and signals a possible system failure when the model indicates that the probability that the current execution will produce a failure exceeds a given threshold. Ozcelik and Yilmaz's \emph{Seer} technique combines hardware and software monitoring to reduce the runtime overhead, which is particularly important in telecommunication systems~\cite{Ozcelik:Seer:TSE:2016}. \emph{Seer} trains a set of classifiers by labeling the monitored data, such as caller-callee information and the number of machine instructions executed in a function call, as passing or failing executions, and uses the classifiers to identify the signatures of incoming failures. Sauvanaud et al. capture symptoms of service level agreement violations: they collect application-agnostic data, classify system behaviors as normal or anomalous with a Random Forest algorithm, and show that the architectural tier from which data are collected affects the accuracy of the predictions~\cite{Sauvanaud:Anomaly:ISSRE:2016}. Nistor and Ravindranath's \emph{SunCat} approach predicts performance problems in smartphone applications by identifying calling patterns of string getters that may cause performance problems for large inputs, by analyzing similar calling patterns for small inputs~\cite{Nistor:SunCat:ISSTA:2014}.
Malik et al.~\cite{Malik:AutomaticDetection:ICSE:2013} developed an automated approach to detect performance deviations before they become critical problems. The approach collects performance counter variables, extracts performance signatures, and then uses the signatures to predict deviations. Malik et al. built signatures with one supervised and three unsupervised approaches, and provide experimental evidence that the supervised approach is more accurate than the unsupervised ones, even with small and manageable subsets of performance counters. Lin et al.'s~\cite{Lin:PNF:FSE:2018} \emph{MING} technique uses an ensemble of supervised machine learning models to predict failures in cloud systems by analyzing both temporal and spatial data. Compared to PreMiSE, MING does not consider the multi-level nature of cloud systems and can predict failures only at the granularity of the host. El-Sayed et al.~\cite{ElSayed:LearningFromFailure:ICDCS:2017} note that unsuccessful jobs across different clusters exhibit patterns that distinguish them from successful executions. On the basis of this observation, they use random forests to identify signatures of unsuccessful terminations of jobs or tasks running in the cluster. Predictions at the job level are then used to mitigate the effect of unsuccessful job executions. \approach introduces several novel features that improve over current signature-based approaches: \begin{inparaenum}[(i)] \item it creates signatures from anomalies, which better represent failure occurrences than general events, \item it predicts the type of failure that will occur, \item it integrates and correlates data extracted from all layers and components of a multi-tier distributed architecture, and \item it restricts the scope of the location of the causing faults.
\end{inparaenum} \subsection{Anomaly-Based Approaches} The main anomaly-based approaches are the algorithms proposed by Fulp et al.~\cite{Fulp:PredictingFailure:WASL:2008}, Jin et al.~\cite{Jin:Performance:2007} and Guan et al.~\cite{Guan:failurePrediction:ICCCN:2011}, and the \emph{Tiresias}~\cite{Williams:BlackBoxPrediction:IPDPS:2007}, \emph{ALERT}~\cite{Tan:AnomalyPrediction:PODC:2010}, \emph{PREPARE}~\cite{Tan:anomalyPrediction:ICDCS:2012} and \emph{OA-PI}~\cite{IBM:SmartCloudAnalyticsPI:2015} technologies. Fulp et al.'s approach and \emph{PREPARE} address specific classes of failures. Fulp et al. defined a spectrum-kernel support vector machine approach to predict disk failures using system log files~\cite{Fulp:PredictingFailure:WASL:2008}, while \emph{PREPARE} addresses performance anomalies in virtualized systems~\cite{Tan:anomalyPrediction:ICDCS:2012}. Fulp et al. exploit the sequential nature of system messages, the message types and the message tags to distill features, and use support vector machine models to identify message sequences that deviate from the identified features as symptoms of incoming failures. \emph{PREPARE} combines a 2-dependent Markov model to predict attribute values with a tree-augmented Bayesian network to predict anomalies. Differently from both these approaches, which are specific to some classes of failures, \approach is general and can predict multiple types of failures simultaneously. Guan et al. proposed an ensemble of Bayesian network models to characterize the normal execution states of a cloud system~\cite{Guan:failurePrediction:ICCCN:2011}, and to signal incoming failures when detecting states not encoded in the models. \emph{ALERT} introduces the notion of alert states, and exploits a triple-state multi-variant stream classification scheme to capture special alert states and generate warnings about incoming failures~\cite{Tan:AnomalyPrediction:PODC:2010}.
\emph{Tiresias} integrates anomaly detection and Dispersion Frame Technique (DFT) to predict anomalies~\cite{Williams:BlackBoxPrediction:IPDPS:2007}. Jin et al. use benchmarking and production system monitoring to build an analytic model of the system that can then be used to predict the performance of a legacy system under different conditions to avoid unsatisfactory service levels due to load increases~\cite{Jin:Performance:2007}. Anomaly-based approaches are inevitably affected by the risk of generating many false positives as soon as novel legal behaviors emerge in the monitored system. \approach overcomes the issue of false positives by integrating an anomaly detection approach with a signature-based technique that issues alarms only when the failure evidence is precise enough. The results reported in Section~\ref{subsec:rq4} show that \approach dramatically improves over current anomaly-based detection techniques, including modern industrial-level solutions such as IBM \emph{OA-PI}~\cite{IBM:SmartCloudAnalyticsPI:2015}. \subsection{Performance-Related Approaches} Performance anomaly detection approaches predict performance issues and identify bottlenecks in production systems, while performance regression approaches detect performance changes~\cite{Ibidunmoye:AnomalyDetectionSurvey:2015}. \begin{change} Classic performance anomaly detection approaches work with historical data: they build statistical models of low-level system metrics to detect performance issues of distributed applications~\cite{Cohen:PerformanceDetection:2005,Bodik:FDA:2010,Lim:PerformanceIssues:14,He:Log:2018}. They derive formal representations, called signatures, that are easy to compute and retrieve, to capture the state of the system, and quickly identify similar performance issues that occurred in the past. These approaches aim to discriminate different types of performance issues in order to aid root cause analysis. 
The most recent performance anomaly detectors, BARCA, Root and TaskInsight, do not need historical data.\end{change} BARCA monitors system metrics, computes performance indicators, such as mean, standard deviation, skewness and kurtosis, and combines SVMs to detect anomalies with multi-class classification to identify the related anomalous behaviors, such as deadlock, livelock, unwanted synchronization, and memory leaks~\cite{Alvarez:BARCA:DSC:2018}. Root works as a Platform-as-a-Service (PaaS) extension~\cite{Jayathilaka:ROOT:CC:2018}. It detects performance anomalies in the application tier, classifies their cause as either workload change or internal bottleneck, and locates the most likely causes of internal bottlenecks with weighted algorithms. TaskInsight detects performance anomalies in cloud applications by analyzing system-level metrics, such as CPU and memory utilization, with a clustering algorithm that identifies abnormal application threads~\cite{Zhang:TaskInsight:CLOUD:2016}. Differently from \approachNoSpace, these approaches \begin{inparaenum}[(i)] \item do not locate the faulty resource that causes the performance anomaly, and \item cannot detect performance problems at different tiers, which remains largely an open challenge~\cite{Ibidunmoye:AnomalyDetectionSurvey:2015}. \end{inparaenum} \medskip \emph{Performance regression} approaches detect changes in software system performance during development, aiming to prevent performance degradation in the production system~\cite{Chen:PerformanceRegression:ICSME:2017}. They reveal changes in the overall performance of the system under development due to changes in the code. Ghaith et al.~\cite{Ghaith:PerformanceRegression:CSMR:2013} detect performance regression by comparing transaction profiles to reveal performance anomalies that can occur only if the application changes.
Transaction profiles reflect the lower bound of the response time of a transaction under idle conditions, and do not depend on the workload. Foo et al.~\cite{Foo:PerformanceRegressions:ICSE:2015} detect performance regressions in heterogeneous environments in the context of data centers, by building an ensemble of models to detect performance deviations. Foo et al. aggregate performance deviations from different models, using simple voting as well as weighted algorithms, to determine whether the current behavior really deviates from the expected one, and is not a simple environment-specific variation. Performance regression approaches assume a variable system code base and a stable runtime environment, while \approach collects operational data to predict failures and localize faults caused by a variable production environment in an otherwise stable system code base. \section{Conclusions} \label{sec:conclusion} In this paper, we presented \approachNoSpace, an original approach to automatically predict failures and locate the corresponding faults in multi-tier distributed systems, where faults are becoming the norm rather than the exception. Predicting failure occurrences as well as locating the responsible faults produces information that is essential for mitigating the impact of failures and improving the dependability of the systems. Current failure prediction approaches rarely produce enough information to locate the faults corresponding to the predicted failures, and either suffer from false positives (anomaly-based) or work with patterns of discrete events and do not cope well with failures that impact continuous indicators (signature-based). \approach originally blends anomaly- and signature-based techniques to address failures that impact continuous indicators, and to precisely locate the corresponding faults.
It uses data time series analysis and Granger causality tests to accurately reveal anomalies in the behavior of the system as a whole, probabilistic classifiers to distill signatures that can distinguish \begin{review}failure-free\end{review} albeit anomalous behaviors from failure-prone executions, and signature-based techniques to accurately distinguish malign from benign anomalies, predict the type of the incoming failures, and locate their sources. \approach executes on a node independent from the target system, and limits the online interactions with the monitored applications to metric collection. In this paper, we report the results of experiments executed on an implementation of the Clearwater IP Multimedia Subsystem, a system commonly adopted by telecommunication companies for their VoIP (voice over IP), video and message services. The results confirm that \approach can predict failures and locate faults with higher precision and fewer false positives than state-of-the-art approaches, without incurring extra execution costs on the target system. Differently from state-of-the-art approaches, \approach can effectively identify the type of the possible failure and locate the related faults for the kinds of faults and failures used in the training phase. We designed and studied \approach in the context of multi-tier distributed systems, to predict failures and locate faults at the level of the individual tiers of the nodes of the system. Studying the \approach approach in the context of other systems that can be extensively monitored is a promising research direction. \section{Acknowledgements} This work has been partially supported by the H2020 Learn project, which has been funded under the ERC Consolidator Grant 2014 program (ERC Grant Agreement n. 646867), and by the GAUSS national research project, which has been funded by the MIUR under the PRIN 2015 program (Contract 2015KWREMX).
\section{Summary and conclusions} We have shown that magnetohydrodynamic waves such as those found in stellar atmospheres present phase singularities or dislocations, in a similar way as previously found for sound and electromagnetic waves. In the simplified case of an isothermal atmosphere both Alfv\'en and magneto-acoustic waves can carry scalar dislocations of either the edge or vortex kind. We have re-examined observations of the longitudinal component of the velocity in magneto-acoustic waves in sunspots and found the signature of dislocations. We identify these predominantly as edges and gliding edges. Such dislocations can easily be described as the coherent addition of two modes, one without dislocations characterized by an azimuthal number $m=0$ and the other carrying a vortex dislocation ($m=1$). The phase and amplitude relations between both modes describe all observed dislocations with a predominant phase difference of $\pi/2$. The observations show that at least half of the observed periods carry dislocations, and we conclude that the excitation of those two modes with the fixed phase relation between them should be a favoured behaviour of the wave excitation mechanism. In the other half of the observed periods either no dislocation is present, or the dislocation was missed because of the position of the slit or the height of formation of the spectral line observed. We can therefore only set a lower bound of 50\% on the number of wave periods carrying a dislocation {of the type $m=1$, although it is known from theory that modes with $m=1$ and $m=0$ are the most likely to be excited. This is the first time, to our knowledge, that such a result has been verified observationally.} Beyond the presence of dislocations in MHD waves, we wish to stress as a result of this work the importance of observing dislocations in waves, as in the Sun, to determine the nature and modal properties of those waves.
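The coherent two-mode superposition invoked above can be written schematically (with illustrative complex amplitudes $A_0$ and $A_1$, azimuthal angle $\phi$, and relative phase $\delta$; this sketch is ours, not a formula from the analysis) as

```latex
\begin{equation}
  v_z(\phi,t) \;\propto\; \mathrm{Re}\!\left[\,A_0\,e^{-i\omega t}
    \;+\; A_1\,e^{i(\phi+\delta)}\,e^{-i\omega t}\right],
  \qquad \delta \simeq \pi/2 ,
\end{equation}
```

where the first term is the dislocation-free $m=0$ mode, the factor $e^{i\phi}$ in the second term carries the $m=1$ vortex, and the amplitude ratio $A_1/A_0$ together with $\delta$ selects between edge, gliding-edge and vortex dislocations in the combined pattern.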
An example of such diagnosing capabilities has been given here by determining from observations that at least 50\% of the wave periods correspond to a wave with an $m=1$ component. {We have not addressed what the mechanisms might be that generate such dislocations. In many cases, a cylindrical boundary is enough to generate a vortex dislocation. Flux tubes in the solar atmosphere must therefore be prone to show dislocations in the waves that propagate along them. It has also been found that a wave propagating through a fluid with non-zero vorticity acquires an edge dislocation \citep{berry_wavefront_1980}. It is tempting to think that edge dislocations in solar waves can also trace regions of high vorticity along the wave path, and to use them to determine the vorticity in the solar atmosphere. Finally, it is also known }that vortex dislocations carry torque and that this torque can be transferred to an absorbing medium \citep{allen_orbital_1992,simpson_optical_1996,demore_mechanical_2012,volke-sepulveda_transfer_2008}. {In the solar atmosphere the torque that might be transferred to the solar coronal plasma by upward propagating waves could have interesting implications for the dynamics of the corona.}
\section{Introduction\label{sec:Introduction}} Generative Adversarial Networks (GANs) \citep{goodfellow2014generative} are successful in synthesizing high-fidelity real images, such as faces. A GAN consists of two key competing components, i.e., a generator and a discriminator. The generator network learns to map from a latent code, which is often sampled from a random distribution, to a synthesized image. The discriminator network is trained to discriminate the real and the fake images, while the generator network is trained to make the synthesized images good enough to fool the discriminator. A series of works \citep{goetschalckx2019ganalyze, jahanian2019steerability,shen2020interfacegan} suggest that during training, semantics spontaneously emerge within the latent code. Manipulating the corresponding attributes of the latent code leads to semantic changes in the image. To enable editing of real images, it is necessary to obtain the inverted latent code of the real image. A simple minimization of the Mean Squared Error (MSE) between the generated samples and the associated real samples often leads to unsatisfactory latent codes, because this approach ignores the domain constraints on both the inverted codes and the reconstructed images. A small MSE does not imply that the reconstructed images lie in the real image space. \cite{zhu2020domain} propose an \textit{in-domain} GAN to force the reconstructed image to lie in the real image space. Semantic editing based on the inverted code obtained by the \textit{in-domain} GAN is much better than with the naive approach. However, our experiments show that the inverted codes obtained by the \textit{in-domain} GAN often deviate significantly from the latent space. In this work, we further address this in-domain problem for the latent code. Based on the \textit{in-domain} GAN, we propose a \textit{force-in-domain} GAN.
We add a discriminator network for the inverted code, so that the inversion network can be trained to fool this discriminator. The \textit{force-in-domain} GAN works between two domains, i.e., the real image domain and the latent code domain; thus, it can also be interpreted as a cycle-GAN with slight modifications. Our experiments show that the inverted code obtained by the \textit{force-in-domain} GAN faithfully overlaps with the latent space and reconstructs the target images at both the pixel level and the semantic level. In addition, extensive experiments show that our \textit{force-in-domain} GAN performs well for semantic editing. \section{Related Works}\label{section2} {\bfseries Generative Adversarial Networks.} GANs can learn the distribution of real images through adversarial training to generate realistic images \citep{goodfellow2014generative}. It has been found that GANs spontaneously learn semantics in the latent space, which makes it feasible to control and explain the generation process of GANs. In theory, \cite{chen2018metrics, arvanitidis2017latent, kuhnel2018latent} use Riemannian manifolds to study the semantic editing of GANs, while \cite{shen2020interfacegan} focus on GANs that generate face images and explore the connection between the semantic space and actual images. However, due to the limitation of the GANs' structure, it is still difficult to freely edit the semantics of the latent space to change the generated images. \\ {\bfseries GAN Inversion.} GANs learn a map from a random distribution to the real data distribution; hence, they do not directly support making inference on real images. GAN inversion can help us apply the semantic editing of the latent space to real images. Given a fixed GAN model, GAN inversion aims at finding the most accurate latent code to recover the input images. The concept of GAN inversion is described in detail in \cite{perarnau2016invertible, lipton2017precise, creswell2018inverting}.
Prior works on GAN inversion are mainly divided into two categories: one formulates an optimization problem (\cite{abdal2019image2stylegan, ma2019invertibility, abdal2020image2stylegan++}), and the other uses the trained GAN generator to construct an encoder (\cite{dumoulin2016adversarially, donahue2016adversarial, zhu2019lia}). \\ {\bfseries Semantic Faces Editing with GANs.} Semantic face editing aims to manipulate face attributes by manipulating the latent variables of the latent space. In semantic face editing, we hope that during the editing process only the face attributes we operate on change, while the other face attributes remain unchanged. This is the difficulty and the key point of semantic face editing. In order to achieve this goal, different methods have been proposed: \cite{odena2017conditional, chen2016infogan, tran2017disentangled} construct special loss functions to realize the semantic editing of human faces, while \cite{donahue2017semantically, shen2018faceid} construct special GAN structures to achieve semantic editing. \\ \section{\textit{Force-in-domain} GAN inversion} \begin{figure*}[hb] \centering \includegraphics[width=0.8\textwidth]{figure/network/network.png} \caption{Our \textit{force-in-domain} GAN inversion structure. Note that the FC block is from StyleGAN, which maps from a Gaussian distribution to the latent space. Only green blocks are trainable.} \label{fig:structure} \end{figure*} In a GAN model, the semantics refer to the emergent knowledge that the GAN has learned from the observed data. The StyleGAN model generates realistic images based on a disentangled representation in the latent space $\mathcal{W}$, which is obtained from samples of a Gaussian distribution through a fully-connected neural network \citep{karras2019style}. The latent space $\mathcal{W}$ is interpreted as a disentangled space of different image styles.
A good GAN inversion model should not only recover the input image at the pixel level, but also invert the latent code semantically. For this purpose, we propose a \textit{force-domain-guided} encoder that forces the inverted latent code to have the same distribution as the latent space in StyleGAN. In general, the distribution of the latent space is unknown. To force the inverted latent code to align with the original latent space, a new discriminator is added in our model, compared with the in-domain GAN \citep{zhu2020domain}, as shown in Fig. \ref{fig:structure}. \subsection{\textit{Force-in-domain} GAN structure} The \textit{in-domain} GAN consists of the following components: an encoder, $E(\cdot):\mathcal{X}\rightarrow\mathcal{W}$, which obtains the latent code corresponding to the input image; a generator, $G(\cdot):\mathcal{W}\rightarrow\mathcal{X}$, which synthesizes high-quality images; and a discriminator $D(\cdot)$, which distinguishes real data from synthesized data. Our \textit{force-in-domain} GAN adds the following components: the fully connected layer FC from StyleGAN, which maps from the Gaussian distribution $z$ to the latent space $w^{z}$; and a new discriminator $D^{w}(\cdot)$, which distinguishes inverted latent codes from the real distribution of the latent space. The encoder $E$ is the reverse mapping of the generator $G(\cdot)$, i.e., it maps images $\mathbf{x}^{real}$ to latent codes $\mathbf{w}$ that can recover the real images $\mathbf{x}^{real}$. Our purpose is to align $w$ with the $w^{z}$ of the prior knowledge in the pre-trained StyleGAN model. The structure of the \textit{force-in-domain} GAN is depicted in Fig. \ref{fig:structure}. Note that only green blocks are trainable. \subsection{Interpretation by Cycle-GAN} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{figure/network/cycle_gan.png} \caption{Interpretation by Cycle-GAN: the whole structure without the red block is the cycle-GAN.
Our \textit{force-in-domain} GAN is the left part together with the red block, which is the new discriminator we introduce in our network.} \label{cycle-gan} \end{figure*} As shown in Fig. \ref{cycle-gan}, cycle-GAN (without the red dashed block) transfers a variable $x$ in one domain to $w$ in the other domain by $E$ and converts $w$ back to $x$ by $G$. In the context of the \textit{force-in-domain} GAN, the two domains are the image domain and the latent space, respectively. In cycle-GAN, two discriminator networks are imposed to force $\hat{w}$ and $\hat{x}$ to lie in their respective domains. In our work, the generator $G$ is already well-trained and we only focus on the reconstruction of real images; therefore, the \textit{force-in-domain} GAN can be regarded as the left part of cycle-GAN. In addition, a domain discriminator $D$ (in the red dashed block) is equipped for the reconstructed image $x^{\prime}$ to ensure that $x^{\prime}$ lies in the real image domain. \subsection{Adversarial Loss} We apply adversarial losses \citep{goodfellow2014generative} to the two discriminators.
The objectives are: \begin{equation} \begin{aligned} \mathcal{L}_{adv}^{D} &= \mathop{\mathbb{E}}\limits_{\mathbf{x}^{real}\sim P_{data}}[\log(D(\mathbf{x}^{real}))]\\ &+\mathop{\mathbb{E}}\limits_{\mathbf{x}^{real}\sim P_{data}}[\log(1-D(G(E(\mathbf{x}^{real}))))] \\ & +\frac{\gamma}{2}\mathop{\mathbb{E}}_{\mathbf{x}^{real}\sim P_{data}}[\Vert\nabla_{\mathbf{x}}D(\mathbf{x}^{real})\Vert^2], \end{aligned} \end{equation} \begin{equation} \begin{aligned} \mathcal{L}_{adv}^{D^{w}} &= \mathop{\mathbb{E}}\limits_{z\sim \mathcal{N}(0, \mathbf{I})}[\log(D^{w}(FC(z)))]\\ &+\mathop{\mathbb{E}}\limits_{\mathbf{x}^{real}\sim P_{data}}[\log(1-D^{w}(E(\mathbf{x}^{real})))] \\ & +\frac{\gamma}{2}\mathop{\mathbb{E}}_{w\sim FC(\mathcal{N}(0, \mathbf{I}))}[\Vert\nabla_{\mathbf{w}}D^{w}(w)\Vert^2], \end{aligned} \end{equation} where $\gamma$ is the hyper-parameter of the gradient regularization, $P_{data}$ is the distribution of the real image data and $\mathcal{N}(0, \mathbf{I})$ denotes the Gaussian distribution. \subsection{Forward Cycle Consistent Loss} GAN inversion requires the reconstructed image to be close to the original one, i.e. $\mathbf{x}\rightarrow E(\mathbf{x})\rightarrow G(E(\mathbf{x}))\approx \mathbf{x}$. This forward cycle consistent loss can be expressed as follows: \begin{equation} \mathcal{L}_{cyc} = \mathop{\mathbb{E}}\limits_{\mathbf{x}^{real}\sim P_{data}}\Vert\mathbf{x}^{real} - G(E(\mathbf{x}^{real}))\Vert_{2}. \end{equation} \subsection{Perceptual Loss} Previous work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pre-trained networks \citep{johnson2016perceptual}.
In our paper, we choose VGG \citep{simonyan2014very} as the pre-trained network to introduce the perceptual loss: \begin{equation} \mathcal{L}_{vgg} = \mathop{\mathbb{E}}\limits_{\mathbf{x}^{real}\sim P_{data}}\Vert F(\mathbf{x}^{real}) - F(G(E(\mathbf{x}^{real})))\Vert_{2}, \end{equation} where $F(\cdot)$ denotes the VGG feature extraction model. \subsection{Force-in-domain Loss} We train the \textit{force-in-domain} GAN, illustrated in Fig. \ref{fig:structure}, with a loss consisting of the forward cycle consistent loss, the adversarial losses and the perceptual loss. By combining these three kinds of losses, the encoder can invert the input images to the latent space $\mathcal{W}$ and reconstruct the original image at both the pixel level and the semantic level: \begin{equation} \begin{aligned} \mathcal{L}_{E} = \mathcal{L}_{cyc} + \lambda_{adv}\mathcal{L}_{adv}^{D} + \lambda_{adv}^{D^{w}}\mathcal{L}_{adv}^{D^{w}} + \lambda_{vgg}\mathcal{L}_{vgg}, \end{aligned} \end{equation} where the $\lambda$'s are hyper-parameters. \section{Experiments} In this section, we experimentally show that the latent codes obtained by the \textit{force-in-domain} GAN inversion overlap well with the latent space of the original StyleGAN. We also utilize the \textit{force-in-domain} GAN to edit real images. \subsection{Experimental settings} We conduct experiments on the FFHQ dataset \citep{karras2019style}, which contains 70,000 high-quality face images. The generator $G$ to be inverted is from the pre-trained StyleGAN \cite{karras2019style}. When training the \textit{force-in-domain} GAN, the generator $G$ and the FC component are fixed and we update the encoder $E$ and the discriminators $D$ and $D^{w}$. We set $\lambda_{vgg} = 5\times 10^{-5}$, $\lambda_{adv} = \lambda_{adv}^{D^{w}} = 0.1$ and $\gamma = 10$. \subsection{High quality of image reconstruction} \begin{table}[ht] \centering \caption{Quantitative comparison between different inversion methods.
For each model, we invert 10k images for evaluation. $\downarrow$ means lower is better.} \begin{tabular}{|l|c|c|} \hline Method & FID$\downarrow$ & MSE$\downarrow$\\ \hline In-Domain GAN Encoder without BN & 17.55 & 0.49\\ \hline Force-in-domain Encoder &15.06 & 0.067\\ \hline \end{tabular} \label{quantitative_comparison} \end{table} A necessary condition for a good GAN inversion is that the reconstructed image is close to the original image at both the pixel and feature levels. To show that the \textit{force-in-domain} GAN can faithfully reconstruct the original image, we quantitatively compute two common indexes, namely the Fr\'echet Inception Distance (FID) \citep{heusel2017gans} and the Mean-Square Error (MSE). The FID is a metric for assessing the quality of images generated by a GAN, which compares the distribution of generated images with the distribution of the real images that were used to train the generator. The MSE quantifies the difference between the reconstructed images and the original images at the pixel level. We randomly select $10,000$ images from the FFHQ dataset for computation. As shown in Table \ref{quantitative_comparison}, both the FID and the MSE of the \textit{force-in-domain} GAN are much smaller than those of the \textit{in-domain} GAN. Therefore, the \textit{force-in-domain} GAN improves the fidelity of the image reconstruction. Next, we show that, in the latent space, the \textit{force-in-domain} GAN also aligns the inverted latent codes well with the original latent space, thus serving as a good model for real image editing due to the preservation of the semantics in the latent space. \subsection{Image inversion} We found that, in the process of image reconstruction, the \textit{in-domain} GAN produces some strange images, shown in Fig. \ref{fig:their_bad}, which our \textit{force-in-domain} GAN reconstructs well. Note that in our experiments the \textit{force-in-domain} GAN never produced such strange images.
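The pixel-level MSE used in the quantitative comparison above can be sketched as follows; the `reconstruction_mse` helper and its toy arguments are illustrative, not the actual evaluation code:

```python
import numpy as np

def reconstruction_mse(x_real, x_rec):
    """Pixel-level mean-square error averaged over a batch of images."""
    x_real = np.asarray(x_real, dtype=float)
    x_rec = np.asarray(x_rec, dtype=float)
    return float(np.mean((x_real - x_rec) ** 2))

# a perfect reconstruction gives zero error
print(reconstruction_mse([[0.0, 1.0]], [[0.0, 1.0]]))  # → 0.0
```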
\citet{zhu2020domain} recently proposed the BN-\textit{in-domain} GAN to solve this problem, but our further projection results show that the inverted latent codes of the BN-\textit{in-domain} GAN do not lie on the true manifold either. \begin{figure}[hb] \centering \includegraphics[width=0.45\textwidth]{figure/our_training/their_bad/example00029.png} \includegraphics[width=0.45\textwidth]{figure/our_training/their_bad/example00192.png} \caption{Reconstruction of images from FFHQ. Images in the left column are the originals. Images in the middle column are the reconstruction results of the \textit{in-domain} GAN. Images in the right column are the reconstruction results of the \textit{force-in-domain} GAN.} \label{fig:their_bad} \end{figure} \subsection{Good alignment of latent codes} It is reasonable to attribute the high quality of image reconstruction to well-aligned inverted latent codes. Compared with the \textit{in-domain} GAN, the \textit{force-in-domain} GAN brings the inverted latent codes closer to the original latent space in the sense of distribution. To verify this point, we project the high-dimensional latent codes into a two-dimensional space. The procedure is depicted in Fig. \ref{projection}. The original latent code $w^{z}$ is obtained by sampling from the Gaussian distribution and applying the FC component. We pass $w^{z}$ through the generator $G$ to obtain an image and then use the encoder $E$ to obtain the inverted latent code $w$. Then, we project a set of $w^{z}$'s and $w$'s into two-dimensional space by two methods, i.e., principal component analysis (PCA) and t-SNE \citep{van2008visualizing}. For both methods, we draw $100,000$ samples from the Gaussian distribution to ensure sufficient precision. \begin{figure}[hb] \centering \includegraphics[width=0.5\textwidth]{figure/network/project_method.png} \caption{The idea of our projection method.} \label{projection} \end{figure} The projection results of the PCA method are shown in Fig. \ref{fig:pca_original}(a).
There are so many outliers from the \textit{in-domain} GAN that the comparison cannot be visualized clearly. For visualization, we remove the \textit{in-domain} GAN points whose absolute values exceed ten times those of the original projection results. As shown in Fig. \ref{fig:pca_original}(b), the projections of the inverted latent codes of the \textit{in-domain} GAN deviate significantly from the true distribution, with many outliers far away from it. The projection results of the \textit{force-in-domain} GAN are relatively concentrated and cover the true distribution. \begin{figure}[!hb] \centering \subfloat[Original projection results]{\includegraphics[width=0.25\textwidth]{figure/projection/pca/w_scatter_100000.png}} \subfloat[Projection results without outliers]{\includegraphics[width=0.25\textwidth]{figure/projection/pca/w_scatter_100000_without_abnormal.png}} \caption{Projection results using the PCA method. In both figures, ``Original'' denotes the projection of the true distribution of the latent codes (sampled from the Gaussian distribution and mapped to the latent space); ``\textit{Force-in-domain} GAN'' denotes the projection results of the \textit{force-in-domain} GAN and ``In-domain GAN'' denotes the projection results of the \textit{in-domain} GAN. Furthermore, the results of the \textit{force-in-domain} GAN and the \textit{in-domain} GAN are derived from the same samples.} \label{fig:pca_original} \end{figure} For the projection results of the t-SNE method, as shown in Fig. \ref{fig:tsne_original}(a), the projection of the \textit{in-domain} GAN also has many outliers. We similarly remove outliers; as shown in Fig. \ref{fig:tsne_original}(b), the projection of the \textit{in-domain} GAN is significantly different from the projection of the original latent codes, whereas the projection of the \textit{force-in-domain} GAN overlaps with it much better.
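The PCA step of this projection comparison can be sketched as follows; the names and the toy stand-ins for $w^z$ and the inverted codes are hypothetical (the actual study uses $100,000$ samples and also t-SNE):

```python
import numpy as np

def pca_project(reference, codes, k=2):
    """Fit PCA on the reference set (samples from the true latent
    distribution) and project both sets onto the top-k principal axes."""
    mu = reference.mean(axis=0)
    # principal axes from the SVD of the centred reference samples
    _, _, vt = np.linalg.svd(reference - mu, full_matrices=False)
    axes = vt[:k].T
    return (reference - mu) @ axes, (codes - mu) @ axes

rng = np.random.default_rng(0)
w_z = rng.normal(size=(1000, 16))                # stand-in for FC(z) samples
w_inv = w_z + 0.01 * rng.normal(size=w_z.shape)  # stand-in for E(G(w_z))
p_ref, p_inv = pca_project(w_z, w_inv)           # both of shape (1000, 2)
```

An encoder whose codes are well aligned with the latent distribution produces a `p_inv` scatter that overlaps the `p_ref` scatter, which is exactly what the figures above visualize.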
\begin{figure}[h] \centering \subfloat[Original projection results]{\includegraphics[width=0.23\textwidth]{figure/projection/tsne/w_scatter_100000.png}} \subfloat[Projection results without outliers]{\includegraphics[width=0.23\textwidth]{figure/projection/tsne/w_scatter_100000_without_abnormal.png}} \caption{Projection results using the t-SNE method. The illustration is the same as in Fig. \ref{fig:pca_original}.} \label{fig:tsne_original} \end{figure} While preparing this paper, \cite{zhu2020domain} released an improved \textit{in-domain} GAN that adds batch normalization. For convenience, we denote this model by \textit{BN-in-domain} GAN. We perform a similar projection examination for the \textit{BN-in-domain} GAN. Since the FC components in the \textit{BN-in-domain} GAN and the \textit{force-in-domain} GAN are different, the original latent codes of these two models differ for the same Gaussian sample. We display the comparison of the inverted codes and the original latent codes for the \textit{BN-in-domain} GAN in Fig. \ref{fig:tsne_update}(a,b) and for the \textit{force-in-domain} GAN in Fig. \ref{fig:tsne_update}(c), respectively. Again, the inverted codes of the \textit{force-in-domain} GAN align well with the original latent codes, while the \textit{BN-in-domain} GAN fails to align the inverted latent codes with the original latent space. \begin{figure}[hb] \centering \subfloat[Original projection results]{ \includegraphics[width=0.15\textwidth]{figure/projection/tsne_update/tsne_in_domain_gan_100000_with_abnormal.png}} \subfloat[Projection results without outliers] {\includegraphics[width=0.15\textwidth]{figure/projection/tsne_update/tsne_in_domain_gan_100000.png}} \subfloat[Original projection results of our model] {\includegraphics[width=0.15\textwidth]{figure/projection/tsne_update/tsne_force_in_domain_gan_100000.png}} \caption{Projection results using the t-SNE method for the \textit{BN-in-domain} GAN in (a,b) and the \textit{force-in-domain} GAN in (c).
The illustration is similar to Fig. \ref{fig:pca_original}.} \label{fig:tsne_update} \end{figure} From the projection results, we find that our method forces the encoder to map real images much closer to the true latent space, which explains the high-quality image reconstruction and makes our method a better model for semantic editing of real images. \begin{figure}[ht] \centering \includegraphics[width=0.5\textwidth]{figure/semantic/interpolation.png} \caption{Image interpolation results using our \textit{force-in-domain} GAN.} \label{fig:interpolation} \end{figure} \subsection{Real Image Editing} In this section, we apply the \textit{force-in-domain} GAN inversion approach to real image editing tasks, including image interpolation, semantic diffusion and semantic manipulation. \\{\bfseries Image Interpolation.} Given two images, we can interpolate them similarly to the interpolation between two points. Image interpolation aims at semantically interpolating two images, not simply interpolating them at the pixel level. Thus, it is appropriate to interpolate two images through their latent codes. It is reasonable to expect that the semantics of the interpolated image vary continuously with the weights of the two inverted codes. As shown in Fig. \ref{fig:interpolation}, for the two input images (first and last columns), we use the \textit{force-in-domain} GAN to obtain the inverted latent codes. Then, we interpolate the inverted latent codes and feed the interpolated codes into the generator. A smooth variation of the interpolated images is clearly achieved by the \textit{force-in-domain} GAN. \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{figure/semantic/diffusion.png} \caption{Semantic diffusion result using our \textit{force-in-domain} GAN.} \label{fig:diffusion} \end{figure} \\{\bfseries Semantic Diffusion.} Semantic diffusion is a task that diffuses a representative part of a target image into the context of another image.
In our task, we diffuse the center part of a face image into another face image. As shown in Fig. \ref{fig:diffusion}, we mask the context face (second column) by the center of the target face (first column). We obtain the inverted latent code of the mixed face through the \textit{force-in-domain} GAN and reconstruct the mixed face through the generator. As shown in the last column, the reconstructed faces well preserve the identity of the target face and reasonably integrate into the different surroundings. \begin{figure*}[hb] \centering \includegraphics[width=0.7\textwidth]{figure/semantic/semantic-7.png} \caption{Semantic manipulation result using our \textit{force-in-domain} GAN.} \label{fig:semantic} \end{figure*} \\{\bfseries Semantic Manipulation.} An important and interesting task for GAN inversion is image manipulation at the semantic level. As the latent code encodes rich semantic knowledge during training, one can find decision boundaries for different semantic attributes, by which we can manipulate an image by tuning its latent code towards or away from a boundary. Following the procedure in \citep{shen2020interfacegan}, we first use a pre-trained attribute prediction model \footnote{\url{https://www.faceplusplus.com.cn}} to assign attribute scores to synthesized images. Then, for each attribute, we learn a linear SVM, resulting in a decision boundary. The image editing process can then be formulated as \begin{equation}\nonumber \mathbf{x}^{edit} = G(\mathbf{w}^{inv} + \alpha \mathbf{n}), \end{equation} where $\mathbf{w}^{inv}$ is the inverted latent code, $\mathbf{n}$ is the normal direction of the decision boundary for a particular semantic attribute and $\alpha$ is the step size of the manipulation. A continuous variation of the edited image is also expected as the step size $\alpha$ varies. As shown in Fig. \ref{fig:semantic}, we manipulate real images on the semantics of smile, age, pose and glasses based on the \textit{force-in-domain} GAN inversion.
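The manipulation step above can be sketched as follows; the toy `generator` and the four-dimensional code are placeholders for the pre-trained $G$ and the inverted latent code:

```python
import numpy as np

def manipulate(w_inv, normal, alphas, generator):
    """Move an inverted code along the unit normal of a semantic
    decision boundary and decode each step with the generator."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return [generator(w_inv + a * n) for a in alphas]

# toy generator: identity on the code, just to trace the edit path
edits = manipulate(np.zeros(4), [2.0, 0.0, 0.0, 0.0],
                   alphas=[-1.0, 0.0, 1.0], generator=lambda w: w)
```

Sweeping the step size over a range of values, as in the figure, produces the continuous attribute variation described in the text.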
For different examples, we obtain satisfactory manipulation results. \section{Conclusion \label{sec:conclusion}} A key factor that affects the quality of GAN inversion is whether the inverted code lies in the original latent space. In this work, we have proposed a \textit{force-in-domain} GAN that faithfully aligns the inverted codes with the original latent space, thus making it a good model for real image editing. \section{Acknowledgements} We thank Prof. Bolei Zhou and Prof. Lei Zhang for helpful discussions.
\section{Introduction} \label{INTRO} \color{black} In Western countries, cardiovascular diseases are nowadays the first cause of death in the adult population \cite{MB_heart}. Non-invasive experimental techniques, such as phase-contrast magnetic resonance imaging (PC-MRI) and computed tomography (CT) scans, make it possible to inspect the blood fluid dynamics and the displacement of blood vessels. These methods are widely used to better understand the complex physiology of the cardiovascular system as well as to investigate pathological conditions \cite{KWPP_mri,KAT_kinetic}. The diagnosis of cardiovascular diseases can also be assessed through 4D flow magnetic resonance imaging (4D flow MRI) \cite{4DflowMRI}, a tool which provides 3D visualization of the blood flow over time. Differently from standard experimental techniques \cite{ZPL_mri,CMN_leftheart}, 4D flow MRI allows one to measure hemodynamic indicators such as the wall shear stress (WSS) \cite{4DflowMRI_vs_CFD1}. However, such imaging-based techniques - both standard and more advanced - cannot recover the fine spatial and temporal scales of these flows. Hence, they might not accurately capture typical flow features such as small coherent structures, recirculation regions and possible regions of transition to turbulence, as pointed out in \cite{4DflowMRI_vs_CFD1}. For the aforementioned reasons, mathematical modeling and numerical simulations are largely employed to complement the available imaging techniques in an effort to better understand the physiology and pathology of the cardiovascular system \cite{QMC_review,QLRR_review}. The literature is abundant concerning the fluid dynamics of the whole circulatory system, the study of heart valves, specific arteries and biomedical devices \cite{QMC_review,QLRR_review,MKKDL_ventr,WSKH_fsi,NMP_heartvalve, KS_dynamicsheartvalve, TDG_cerebral, Griffith_LVAD, Griffith_LVAD_2020, Peskin_2001}.
By far the most studied part of the heart is the left ventricle (LV), which has been considered from the electro-mechanical and fluid dynamical viewpoints, both for idealized and patient-specific data \cite{MKKDL_ventr,SSB_el-mech,FPS_electro, mittal_diseased, mittal_mitralvalve}. The left atrium (LA) is far less investigated, at least in normal conditions \cite{VGY_leftatrium,KFH_leftatrium, MAFM_atrium, MAFM_atrium_2, MAFM_atrium_3, Nordsletten_2018}. Understanding the blood flow behavior in the LA can shed light on its functioning in physiological conditions and can also be regarded as a valuable step towards the study of the complete left heart. Idealized geometries for the numerical simulation of blood flows offer the possibility of building a parametrized model that yields medical indicators for several patients without the need of performing expensive patient-specific simulations. To take into account the large geometrical inter-patient variability, an accurate idealized computational model of the LA can be parametrized based on patient-specific image acquisitions. Another motivation behind the use of an idealized geometry with a prescribed kinematics, which we deduce from the Wiggers diagram \cite{Wiggers, Wiggers_Mitchell}, lies in the fact that patient-specific data for the atria in normal (physiological) conditions are scarce. Moreover, even if good-quality kinematics images of the LA were available, these would typically be acquired in individuals affected by pathological conditions, such as atrial fibrillation \cite{MAFM_atrium, MAFM_atrium_2, MAFM_atrium_3}. An open issue in blood fluid dynamics is whether a transition to turbulence occurs whenever the blood velocity increases and the interactions among vortices are strong. The Navier-Stokes equations are in principle suitable to model both transitional and turbulent flows.
However, the spatial and temporal resolutions required to fully capture the details of the flow features through a Direct Numerical Simulation (DNS) of the discretized Navier-Stokes equations demand prohibitive computational resources \cite{WIL}. For this reason, a turbulence model is usually employed, such as the Reynolds-Averaged Navier-Stokes (RANS) models and the Large Eddy Simulation (LES) models \cite{WIL,MEN,dynamicles}. From a theoretical point of view, in a fluid flow it is possible to distinguish the eddies on the basis of their kinetic energy \cite{WIL, POPE}. The distribution of the kinetic energy as a function of the eddy length scale (or wave number ${k}$, when a Fourier transform is applied to the energy spectrum) follows some well-established findings in homogeneous and isotropic turbulence, such as the $k^{-5/3}$ rule for the energy spectrum in the inertial range \cite{WIL,POPE}. \color{black} A DNS would resolve the whole range of spatial and temporal scales down to the Kolmogorov scales. \color{black} In RANS models one solves for an average flow field in which only the large-scale eddies containing the highest energy are considered, while the effect of the inertial range and of the fine scales is accounted for by an extra term, called the Reynolds stress, added to the momentum balance equation of the Navier-Stokes equations \cite{WIL, POPE}. When using isotropic models, the overall effect of the Reynolds stress term is to increase the viscosity of the fluid by a turbulent viscosity that is added to the physical one. RANS models may become too dissipative and yield unrealistic flows when used in transitional or even laminar conditions. \color{black} On the other hand, LES models aim at explicitly resolving the large eddies of the flow down to the inertial range, while modeling the effect of the smallest eddies by exploiting self-similarity properties of the flow \cite{WIL, POPE}.
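The $k^{-5/3}$ scaling can be illustrated with a minimal one-dimensional sketch: we synthesize a periodic signal whose Fourier amplitudes decay as $k^{-5/6}$, so that its energy spectrum $E(k)=\frac{1}{2}|\hat u(k)|^2$ follows $k^{-5/3}$, and recover the slope from a log-log fit. This is purely illustrative and is not the 3D turbulent spectrum of the simulations:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
k = np.arange(1, n // 2 + 1).astype(float)

# Fourier amplitudes |u_hat| ~ k^(-5/6)  =>  E(k) = 0.5 |u_hat|^2 ~ k^(-5/3)
amp = k ** (-5.0 / 6.0)
phase = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
u_hat = np.concatenate(([0.0 + 0.0j], amp * np.exp(1j * phase)))
u = np.fft.irfft(u_hat, n=n)           # real periodic "velocity" signal

# recover the spectrum and fit the slope over an inertial-like range
E = 0.5 * np.abs(np.fft.rfft(u)) ** 2
sl = slice(10, 1000)                   # avoid the largest scales and Nyquist
slope = np.polyfit(np.log(k[sl]), np.log(E[1:][sl]), 1)[0]
print(round(slope, 3))                 # ≈ -5/3
```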
\color{black} Stabilization methods for the Navier-Stokes equations, introduced to obtain an inf-sup stable solution free of numerical instabilities, evolved towards the formulation of a Variational Multiscale (VMS) framework, which contextually yields a LES model \cite{BCC_vmsles, Hughes_1995, HCS_2005, HMJ_2000, HOM_2001, HSF_2004}. In this work, we develop a computational model of the human LA based on the incompressible Navier-Stokes equations expressed in the ALE formulation; specifically, we prescribe a law of contraction and relaxation of the LA consistent with the features of the cardiac cycle. Our numerical study allows a full characterization of the blood flow in the LA in normal conditions. Several meaningful fluid dynamics indicators are also provided. We purposely use the VMS-LES method developed in \cite{BCC_vmsles} and later extended in \cite{FD_vmsles} to stabilize the numerical solution of the Navier-Stokes equations in ALE formulation and to simultaneously account for turbulence modeling, see e.g. \cite{ALE_Baz}. In particular, the formulation of \cite{FD_vmsles} considers a space discretization based on the Finite Element Method (FEM) \cite{Hughes_book, AQ_book}, a time discretization based on Backward Differentiation Formulas (BDF) \cite{AQ_FS_RS} and a quasi-static approximation of the fine-scale solutions. We generate a reference solution on a very fine grid and we compare these results with those obtained on coarser mesh levels with the standard Streamline Upwind Petrov-Galerkin (SUPG) method and with the VMS-LES stabilization method \cite{FD_vmsles}. \color{black} We show that the two methods yield similar results in terms of total kinetic energy and enstrophy based on the phase-averaged velocity field; moreover, as the mesh is refined, the effects of the LES model become less evident, as expected.
However, especially for the coarsest mesh used in this study, remarkable differences are observed in the fluctuating kinetic energy, a suitable indicator of transitional effects and a proper measure of cycle-to-cycle variations: the VMS-LES method better captures these variations, which improves the ability of the turbulence model to predict the total kinetic energy peaks based on the instantaneous velocity field. \color{black} The outline of the paper is as follows: in Section \ref{MATH} we recall the mathematical model, the numerical methods and the LA model that we propose based on physiological data. In Section \ref{GRIDGENERATION} we present the three mesh levels adopted, while in Section \ref{RESU} we report and discuss the numerical results obtained from the simulation run on the fine mesh (reference solution) in terms of phase-averaged flow properties. Moreover, we perform a mesh convergence study along with a comparison between the SUPG and VMS-LES stabilization methods. Finally, conclusions are drawn in Section \ref{CONCLU}. \section{Mathematical model and numerical methods} \label{MATH} In this section we first review the Navier-Stokes equations in the ALE framework, then we introduce our numerical methods and the turbulence models. Finally, we discuss the boundary conditions and the LA volume variation in time based on physiological data. \subsection{The Navier-Stokes equations in ALE formulation and its numerical approximation} In large vessels, as well as in the heart chambers, blood behaves as a Newtonian incompressible fluid and the presence of small particles suspended and carried by the plasma can be neglected. In moving domains, the Navier-Stokes equations can be reformulated in an Arbitrary Lagrangian Eulerian (ALE) framework with a mesh-moving technique \cite{DGH_ale,JT_ale}.
In this work, we do not study the interactions between the fluid and the endocardium; rather, we prescribe the velocity of the fluid-solid interface, which equals the fluid velocity through no-slip conditions on the wall. Moreover, we use a standard harmonic extension to propagate the boundary motion into the fluid domain, in order to maintain a good mesh quality while moving the mesh without the need of remeshing \cite{JT_ale}. \subsubsection{The Navier-Stokes equations in ALE framework} \label{section_NS_ALE} Let $\Omega_t \subset \mathbb{R}^d$ be the fluid domain at a given time $t>0$, with a sufficiently regular boundary $\Gamma_t$ oriented by the outward-pointing unit normal vector $\hat {\bm {n}}$. We denote by $\Gamma_{t}^D$ and $\Gamma_{t}^N$ the portions of the boundary where Dirichlet and Neumann boundary conditions are respectively prescribed, with $\Gamma_t = \overline{\Gamma_t^D} \cup \overline{\Gamma_t^N}$ and $\overset{\circ}{\Gamma_t^D} \cap \overset{\circ}{\Gamma_t^N} = \emptyset$. Let $\bm u$ be the fluid velocity and $p$ the pressure field. The incompressible Navier-Stokes equations in ALE framework read: \begin{align} \nabla \cdot \bm u & = 0 & \quad \text{ in } \Omega_t \times (0, T], \label{eq_div}\\ \rho \frac{\hat \partial \bm u}{\partial t} + \rho \left( \left( \bm{u} - \bm{u}^{\text{ALE}} \right) \cdot \nabla \right) \bm{u} - \nabla \cdot \bm{\sigma} (\bm u, p) & = \bm f & \quad \text{ in } \Omega_t \times (0, T], \label{eq_ns} \\ \bm u & = \bm g & \quad \text{ on } \Gamma_t^D \times (0, T], \label{eq_dirichletbc} \\ \bm \sigma (\bm u, p) \bm{\hat n} & = \bm h & \quad \text{ on } \Gamma_t^N \times (0, T], \label{eq_neumannbc} \\ \bm u & = \bm u_0 & \quad \text{ in } \Omega_0 \times \{0\}.
\end{align} In particular, $\frac{\hat \partial \bm u }{\partial t} = \frac{\partial \bm u }{\partial t} + (\bm u^{\text{ALE}} \cdot \nabla ) \bm u $ is the ALE derivative, $\rho$ the fluid density and $\bm{\sigma} (\bm u, p)$ the total stress tensor, defined for Newtonian, incompressible and viscous fluids as $ \bm \sigma (\bm u, p)=-p\bm I + 2 \mu \bm \varepsilon (\bm u), $ where $\mu$ is the dynamic viscosity and $\bm \varepsilon (\bm u)$ the strain-rate tensor, defined as $ \bm \varepsilon (\bm u) = \frac{1}{2} \left ( \nabla \bm u + \left ( \nabla \bm u \right )^T\right). $ The function $\bm f$ is the forcing term, $\bm g$ and $\bm h$ are the Dirichlet and Neumann data, and $\bm u_0$ is the initial condition. We prescribe a velocity $\bm{g}^{\text{ALE}}$ on the whole boundary $\Gamma_t$ and we recover $\bm{u}^{\text{ALE}}$ in the whole domain at each time through a harmonic extension: \begin{equation} \begin{aligned} - \nabla \cdot \left( \bm K \nabla \bm u^{\text {ALE}}\right) = & \, \bm 0 & \quad \text{ in } \Omega_t \times (0, T], \\ \bm u^{\text{ALE}} = & \, \bm g^{\text{ALE}} & \quad \text{ on } \Gamma_t \times (0, T], \end{aligned} \label{ALE_laplacian} \end{equation} where $\bm{K}$ is a positive-definite tensor that can be properly set to tune the harmonic extension operator, for example depending on the local spatial scales, as done in \cite{JT_ale}. Finally, the domain displacement $\bm d (\bm x, t)$ is obtained by integrating the ALE velocity over time: $ \bm d (\bm x, t) = \int_{0}^t \bm u^{\text{ALE}} (\bm x, \tau) d \tau \,
$ \noindent We introduce the infinite dimensional function spaces: \begin{align} \mathcal V_{\bm g} := & \, \{ \bm v \in [H^1(\Omega_t)]^d: \bm v = \bm g \text{ on } \Gamma_t^D\}, \\ \mathcal Q := & \, L^2(\Omega_t), \end{align} to define the weak formulation of the Navier-Stokes equations in ALE framework, which reads: given $\bm u_0$, for any $t \in (0, T]$, find $(\bm u, p) \in \mathcal V_{\bm g} \times \mathcal Q$ such that: \begin{equation} \begin{split} \left ( \bm v, \rho \frac{\hat \partial \bm u}{\partial t}\right ) + \left ( \bm v, \rho (\bm u - \bm u^{\text{ALE}}) \cdot \nabla \bm u\right) + \left (\nabla \bm v, \mu \nabla \bm u \right) - \left ( \nabla \cdot \bm v, p \right) + \left ( q, \nabla \cdot \bm u \right ) = \\ \left (\bm v, \bm f\right ) + \left (\bm v, \bm h\right )_{\Gamma_t^N}, \quad \text{ for all } (\bm v, q) \in \mathcal V_{\bm 0} \times \mathcal Q. \end{split} \label{eq_ns-weak} \end{equation} We have denoted with $(\cdot, \cdot)$ and $(\cdot, \cdot)_{\Gamma_t^N}$ the $L^2$ inner product with respect to $\Omega_t$ and $\Gamma_t^N$ respectively. 
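The harmonic extension in Eq. \eqref{ALE_laplacian} can be illustrated in the simplest setting: a one-dimensional Laplace problem with Dirichlet data at both ends, discretized with second-order finite differences (taking $\bm K = \bm I$). This is a minimal sketch, not the actual 3D solver used in the simulations:

```python
import numpy as np

def harmonic_extension_1d(g_left, g_right, n):
    """Solve -u'' = 0 on (0, 1) with u(0) = g_left, u(1) = g_right,
    using second-order finite differences on n interior nodes."""
    A = (2.0 * np.eye(n)
         - np.eye(n, k=1)
         - np.eye(n, k=-1))
    b = np.zeros(n)
    b[0] += g_left       # Dirichlet data enters the right-hand side
    b[-1] += g_right
    return np.linalg.solve(A, b)

# in 1D, the harmonic extension of the boundary data is linear
u = harmonic_extension_1d(0.0, 1.0, 3)   # → [0.25, 0.5, 0.75]
```

Extending the boundary velocity harmonically, as above, produces a smooth interior field, which is precisely what keeps the mesh quality acceptable as the domain deforms.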
\color{black} Equivalently, in a more compact form, by introducing the function space $ \bm{\mathcal V}_{\bm g} := \mathcal{V}_{\bm g} \times \mathcal{Q}$: given $\bm u_0$, for any $t \in (0, T]$, find $\bm U = \bm U (t) = \{\bm u, p\} \in \bm{\mathcal V}_{\bm g}$ such that: \begin{equation} A(\bm V, \bm U) = F(\bm V), \quad \text{ for all } \bm V = \{\bm v, q\} \in \bm{\mathcal V}_{\bm 0}, \label{weak_compact} \end{equation} being \begin{subequations} \begin{align} A (\bm V, \bm U) & = A_1(\bm V, \bm U) + A_2(\bm V, \bm U, \bm U), \label{formA}\\ A_1(\bm V, \bm U) & = \left ( \bm v, \rho \frac{\hat{\partial} \bm u}{\partial t}\right ) + \left (\nabla \bm v, \mu \nabla \bm u \right) - \left ( \nabla \cdot \bm v, p \right) + \left ( q, \nabla \cdot \bm u \right ), \\ A_2(\bm V, \bm U, \bm W) & = \left ( \bm v, \rho \left ( \left ( \bm u - \bm u^\text{ALE} \right ) \cdot \nabla \right ) \bm w\right), \\ F(\bm V) & = \left (\bm v, \bm f\right ) + \left (\bm v, \bm h\right )_{\Gamma_t^N}. \end{align} \end{subequations} \color{black} \subsubsection{Numerical methods and turbulence modeling} \label{SEC:turbulence} For the space discretization of Eq. \eqref{eq_ns-weak}, we introduce a finite element (FE) discretization with piecewise Lagrange polynomials of degree $r \geq 1$. The function space of the FE discretization is $X_r^h = \{ v^h \in C^0(\overline{ \Omega}_t):\, v^h|_K \in \mathbb{P}_r,\, \forall K \in \mathcal T_h\}$, being $\mathcal T_h$ a triangulation of $\Omega_t$ and $h$ the diameter of a grid element $K \in \mathcal T_h$. \color{black} In the VMS method, one assumes a direct sum decomposition of both the trial and test function spaces into coarse-scale and fine-scale subspaces, $ \bm{\mathcal V}_{\bm g} = \bm{\mathcal V}_{\bm g}^h \oplus \bm{\mathcal V}_{\bm g}'\, , \bm{\mathcal V}_{\bm 0} = \bm{\mathcal V}_{\bm 0}^h \oplus \bm{\mathcal V}_{\bm 0}'$ \cite{Hughes_1995, HCS_2005, HMJ_2000, HOM_2001, HSF_2004}. Specifically, $\bm{\mathcal V}_{\bm g}^h = \mathcal V_{\bm g}^h \times \mathcal Q^h$, $\bm{\mathcal V}_{\bm 0}^h = \mathcal V_{\bm 0}^h \times \mathcal Q^h$ are the coarse-scale function spaces, with $\mathcal V_{\bm g}^h = \mathcal V_{\bm g} \cap [X_r^h]^d$, $\mathcal V_{\bm 0}^h = \mathcal V_{\bm 0} \cap [X_r^h]^d$ and $\mathcal Q^h = \mathcal Q \cap X_r^h$.
The spaces $\bm{\mathcal V}_{\bm g}' = \mathcal V_{\bm g}' \times \mathcal Q'$ and $\bm{\mathcal V}_{\bm 0}' = \mathcal V_{\bm 0}' \times \mathcal Q'$ are instead the infinite-dimensional function spaces representing the fine-scale solution. In this way, we introduce an a priori splitting of the solution (and test functions) into coarse (resolved) and fine (or subgrid, modelled) scales as: \begin{equation} \bm U = \bm U ^ h + \bm U ', \quad \bm V = \bm V ^ h + \bm V ' \, . \label{splitting} \end{equation} Accordingly, the superscripts $(\cdot)^h$ and $(\cdot)'$ denote the projections of $\bm U$ and $\bm V$ on the coarse-scale and fine-scale solution spaces, respectively, with $\bm U^h \in \bm{\mathcal{V}}_{\bm g}^h$, $\bm U' \in \bm{\mathcal{V}}_{\bm g}'$, $\bm V^h \in \bm{\mathcal{V}}_{\bm 0}^h$ and $\bm V' \in \bm{\mathcal{V}}_{\bm 0}'$. Using the decomposition \eqref{splitting} in Eq. \eqref{weak_compact}, one gets the following coupled coarse-scale and fine-scale equations: \begin{align} A(\bm V^h , \bm U^h + \bm U') = F(\bm V^h), \label{coarse_eq} \\ A(\bm V' , \bm U^h + \bm U') = F(\bm V'). \label{fine_eq} \end{align} Following \cite{BCC_vmsles}, it can be shown that Eq. \eqref{coarse_eq} reduces to: given $\bm u_0$, for any $t \in (0, T]$, find $\bm U^h = \bm U^h (t) = \{\bm u^h, p^h\} \in \bm{\mathcal V}_{\bm g}^h$ such that: \begin{equation} \begin{split} \left ( \bm v^h, \rho \frac{\hat \partial \bm u^h}{\partial t}\right ) + \left ( \bm v^h, \rho ( (\bm u^h - \bm u^\text{ALE}) \cdot \nabla) \bm u^h\right) + \left (\nabla \bm v^h, \mu \nabla \bm u^h \right) - \left ( \nabla \cdot \bm v^h, p^h \right) + \left ( q^h, \nabla \cdot \bm u^h \right ) \\ \underbrace{- \left ( \rho (\bm u^h - \bm u^\text{ALE}) \cdot \nabla \bm v^h + \nabla q^h, \bm u'\right) - \left(\nabla \cdot \bm v^h, p'\right)}_{\text{(I)}} \underbrace{- \left(\rho \bm u^h \cdot (\nabla \bm v^h)^T, \bm u' \right)}_{\text{(II)}} \underbrace{- \left(\rho \nabla \bm v^h, \bm u' \otimes \bm u' \right)}_{\text{(III)}} \\ = \left (\bm v^h, \bm f\right ) + \left (\bm v^h, \bm h\right )_{\Gamma_t^N}, \quad \text{ for all } \bm V^h = \{\bm v^h, q^h\} \in \bm{\mathcal V}_{\bm 0}^h. \end{split} \label{coarse_withfinesolution} \end{equation} In Eq. \eqref{coarse_withfinesolution}, the first and last rows contain the standard terms of the Navier-Stokes equations. The second row contains additional stabilization terms, namely (I) the Streamline Upwind Petrov Galerkin (SUPG) term, (II) an additional stabilization term arising from VMS modeling and (III) the LES term which models the Reynolds stress \cite{BCC_vmsles, FD_vmsles}. We also observe that the fine-scale solution $\bm U' = \{ \bm u', p'\}$ is still defined in an infinite-dimensional function space.
Following arguments analogous to those adopted for the coarse-scale equation, the solution $\bm U'$ of the fine-scale equation (Eq. \eqref{fine_eq}) can be represented in terms of the coarse-scale solution $\bm U^h$ and the residual $\bm R(\bm U^h)$ of the coarse-scale equation projected onto the fine-scale space $\bm{\mathcal{V}}_{\bm 0}'$ \cite{BCC_vmsles}: \begin{equation} \bm U ' = \bm{\mathcal{F}'}(\bm U^h,\bm{R}(\bm U^h) ). \end{equation} The latter can be inserted in Eq. \eqref{coarse_eq} to finally close the coarse-scale equation: \begin{equation} \text{find } \, \bm U^h \in \bm{\mathcal{V}}_{\bm g}^h \, : \quad A(\bm V^h , \bm U^h +\bm{\mathcal{F}'}(\bm U^h,\bm{R}(\bm U^h) )) = F(\bm V^h), \quad \text{for all } \bm V^h \in \bm{\mathcal{V}}_{\bm 0}^h. \label{coarse_eq_withres} \end{equation} In order to find a numerical solution of Eq. \eqref{coarse_eq_withres}, one needs to approximate the differential operator with $\widetilde{\bm{\mathcal{F}'}} \approx \bm{\mathcal{F}'}$, which leads to approximations of both the coarse- and fine-scale solutions, namely $\widetilde{\bm U^h} \approx \bm U^h$ and $\widetilde{\bm U'}\approx \bm U'$. For the sake of simplicity, from now on we refer to the differential operator and the solutions without the superscript $\sim$.
In particular, for the approximation of $\bm{\mathcal{F}'}$, we choose a \textit{quasi-static} approach, yielding the following approximation of the fine velocity and pressure scales (or subgrid scales) \cite{BCC_vmsles,FD_vmsles}: \begin{align} \bm u' & \simeq - \tau_{\text M}(\bm u^h) \bm r_{\text M} (\bm u^h, p^h) \label{eq_fine_v}\\ p' & \simeq - \tau_{\text C} (\bm u^h) r_{\text C}(\bm u^h), \label{eq_fine_p} \end{align} where $\bm r_{\text M} (\bm u^h, p^h)$ and $r_{\text C}(\bm u^h)$ are the strong residuals of (\ref{eq_ns}) and (\ref{eq_div}), defined respectively as: \begin{align} \bm r_{\text M} (\bm u^h, p^h) & = \rho \frac{\hat \partial \bm u^h}{\partial t} + \rho \left( \left( \bm{u}^h - \bm{u}^{\text{ALE}} \right) \cdot \nabla \right) \bm{u}^h - \nabla \cdot \bm{\sigma} (\bm u^h, p^h) - \bm f, \\ r_{\text C}(\bm u^h) & = \nabla \cdot \bm{u}^h. \end{align} The stabilization parameters are chosen as in \cite{BCC_vmsles,FD_vmsles}: \begin{align} \tau_{\text M}(\bm{u}^h) &= \left( \frac{\rho^2 }{\Delta t^2} + \rho^2 \, (\bm u^h - \bm u^{\text{ALE}}) \cdot \tilde{\bm{G}}(\bm u^h - \bm u^{\text{ALE}}) + C_r \mu^2 \tilde{\bm{G}} :\tilde{\bm{G}} \right)^{-\frac{1}{2}} \,, \\ \tau_{\text C}(\bm{u}^h) &= \left ( \tau_M(\bm{u}^h) \tilde{\bm{g}} \cdot \tilde{\bm{g}} \right )^{-1} \,, \end{align} where $\Delta t$ is the time step that will be used for the time discretization and $C_r=15\cdot{2^{r}}$ is a constant, obtained from an inverse inequality, which depends on the polynomial degree $r$ \cite{BCC_vmsles, FD_vmsles}.
Moreover, $\tilde{\bm{G}}$ is the metric tensor and $\tilde{\bm{g}}$ the metric vector: \begin{equation} \tilde{G}_{ij} = \sum_{k=1}^d \frac{\partial \xi_k }{\partial x_i} \frac{\partial \xi_k }{\partial x_j}, \quad \tilde{g}_i = \sum_{j=1}^{d} \frac{\partial \xi_j}{\partial x_i}, \label{metric} \end{equation} where, as depicted in Figure \ref{parametric_global}, we denote with $\bm x = \left \{ x_i\right \}_{i=1}^d$ the coordinates of the mesh element $K$ in the physical space and with $\bm \xi = \left \{ \xi_i\right \}_{i=1}^d$ the coordinates of the element $\hat{K}$ in the parametric space. Let $\bm x = \bm x (\bm \xi ): \, \hat{K} \to K$ be a continuous and differentiable mapping from the parametric to the physical space, with a continuously differentiable inverse. $\frac{\partial \bm \xi}{\partial \bm x}$ in Eq. \eqref{metric} is the inverse Jacobian of the mapping \cite{BCC_vmsles}. \begin{figure}[t!] \centering% \includegraphics[trim={0 0 0 0},clip,width=1.0\textwidth]{figures2020/parametric_global.png} \caption{\label{parametric_global} Mapping $\bm x = \bm x (\bm \xi ): \, \hat{K} \to K$ from the parametric element $\hat K$ to the physical one $K$.
} \end{figure} The semi-discrete variational multiscale formulation with LES modeling of the Navier-Stokes equations in the ALE framework reads: given $\bm u_0$, for any $t \in (0, T]$, find $ (\bm {u}^h,p^h) \in \mathcal {V}_{\bm g}^h \times \mathcal Q^{h}$ such that: \begin{equation} \begin{aligned} & \left ( \bm v^h, \rho \frac{\hat \partial \bm u^h}{\partial t}\right ) + \left ( \bm v^h, \rho \left ( (\bm u^h - \bm u^{\text{ALE}}) \cdot \nabla \right ) \bm u^h\right) + \left (\nabla \bm v^h, \mu \nabla \bm u^h \right) - \left ( \nabla \cdot \bm v^h, p^h \right) + \left ( q^h, \nabla \cdot \bm u^h \right ) \\ & + {\left ( \rho (\bm u^h - \bm u^{\text{ALE}}) \cdot \nabla \bm v^h + \nabla q^h , \, \tau_{\text M}(\bm u^h ) \bm r_{\text M} (\bm u^h, p^h)\right ) + \left (\nabla \cdot \bm v^h, \tau_{\text C}(\bm u^h)r_{\text C}(\bm u^h)\right)} \\ & + {\left(\rho \bm u^h \cdot (\nabla \bm v^h)^T , \tau_{\text M}(\bm u^h)\bm r_{\text M}(\bm u^h, p^h) \right )} \\ & - {\left( \rho \nabla \bm v^h, \tau_{\text M}(\bm u^h)\bm r_{\text M}(\bm u^h, p^h) \otimes \tau_{\text M}(\bm u^h)\bm r_{\text M}(\bm u^h, p^h) \right ) } = \\ & \left (\bm v^h, \bm f\right ) + \left (\bm v^h, \bm h\right )_{\Gamma_t^N}, \quad \text{for all } (\bm v^h, q^h) \in \mathcal V_{\bm 0}^h \times \mathcal Q^h. \end{aligned}\label{semidiscretevmsles} \end{equation} We use the Backward Euler method to discretize the problem in time and we extrapolate $\bm{u}^h$ in the non-linear terms by means of the Newton-Gregory backward polynomials of order one. This yields a single linear problem at each time step. For more details on this implementation and on its strengths and limitations, we refer the interested reader to \cite{FD_vmsles}. We partition the time interval into $N_t$ subintervals of equal size $\Delta t = \frac{T}{N_t}$, with $t_n = n \Delta t $, and we denote with the subscript $n$ quantities related to the time step $n$, with $n=0, \dots, N_t$.
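To fix ideas, the element-wise quantities $\tilde{\bm G}$, $\tilde{\bm g}$, $\tau_{\text M}$ and $\tau_{\text C}$ can be evaluated once the inverse Jacobian of the map $\bm x(\bm \xi)$ is available. The following Python sketch mirrors the formulas above; it is an illustration only (not code from the solver), and the element geometry and flow data used below are hypothetical:

```python
import numpy as np

def stabilization_parameters(dxi_dx, u_rel, rho, mu, dt, r=1):
    """Evaluate tau_M and tau_C for one element, following the formulas above.

    dxi_dx : (d, d) inverse Jacobian d(xi)/d(x), with dxi_dx[k, i] = dxi_k/dx_i
    u_rel  : (d,) relative velocity u^h - u^ALE at the evaluation point
    rho, mu: density and dynamic viscosity
    dt     : time step
    r      : polynomial degree (C_r = 15 * 2**r)
    """
    G = dxi_dx.T @ dxi_dx        # metric tensor  G_ij = sum_k dxi_k/dx_i dxi_k/dx_j
    g = dxi_dx.sum(axis=0)       # metric vector  g_i  = sum_j dxi_j/dx_i
    C_r = 15.0 * 2.0**r
    tau_M = (rho**2 / dt**2
             + rho**2 * (u_rel @ (G @ u_rel))
             + C_r * mu**2 * np.sum(G * G)) ** -0.5
    tau_C = 1.0 / (tau_M * (g @ g))
    return tau_M, tau_C
```

For a uniform cell of size $h$, $\partial \bm \xi / \partial \bm x = h^{-1} \bm I$, so refining the mesh enlarges $\tilde{\bm G}$ and shrinks $\tau_{\text M}$, as expected of a subgrid time-scale.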
The fully discretized linearized semi-implicit VMS-LES formulation of the Navier-Stokes equations in the ALE framework, with the Backward Euler method as time integration scheme, reads: given $\bm u^h_n$, for any $n = 0, \dots, N_t -1$, find $(\bm u^h_{n+1}, p^h_{n+1}) \in \mathcal{V}_{\bm g}^h \times \mathcal{Q}^{h} $ such that: \begin{equation} \begin{aligned} & \left ( \bm v^h, \rho \frac{\bm u^h_{n+1}}{\Delta t }\right )_{\Omega_{n+1}} + \left ( \bm v^h, \rho (\bm u^h_n - \bm u^{\text{ALE}}_{n+1}) \cdot \nabla \bm u^h_{n+1}\right)_{\Omega_{n+1}} + \left (\nabla \bm v^h, \mu \nabla \bm u^h_{n+1} \right)_{\Omega_{n+1}} \\ & - \left ( \nabla \cdot \bm v^h, p^h_{n+1} \right)_{\Omega_{n+1}} + \left ( q^h, \nabla \cdot \bm u^h_{n+1} \right )_{\Omega_{n+1}} \\ & + \underbrace{\left ( \rho (\bm u^h_n - \bm u^{\text{ALE}}_{n+1}) \cdot \nabla \bm v^h + \nabla q^h , \, \tau_{\text M}(\bm u^h_{n+1} ) \bm r_{\text M} (\bm u^h_{n+1}, p^h_{n+1})\right )_{\Omega_{n+1}} + \left (\nabla \cdot \bm v^h, \tau_{\text C}(\bm u^h_{n+1})r_{\text C}(\bm u^h_{n+1})\right)_{\Omega_{n+1}}}_\text{SUPG} \\ & + \underbrace{\left(\rho (\bm u^h_n - \bm u^{\text{ALE}}_{n+1}) \cdot (\nabla \bm v^h)^T, \tau_{\text M}(\bm u^h_{n+1})\bm r_{\text M}(\bm u^h_{n+1}, p^h_{n+1}) \right )_{\Omega_{n+1}}}_\text{VMS} \\ & - \underbrace{\left( \rho \nabla \bm v^h, \tau_{\text M}(\bm u^h_{n+1})\bm r_{\text M}(\bm u^h_n, p^h_n) \otimes \tau_{\text M}(\bm u^h_{n+1})\bm r_{\text M}(\bm u^h_{n+1}, p^h_{n+1}) \right )_{\Omega_{n+1}}}_\text{LES} \\ & = \left (\bm v^h, \bm f_{n+1}\right )_{\Omega_{n+1}} + \left (\bm v^h, \bm h_{n+1}\right )_{\Gamma_{n+1}^N} + \left ( \bm v^h, \rho \frac{\bm u^h_{n}}{\Delta t }\right )_{\Omega_{n}} \quad \text{for all } (\bm v^h, q^h) \in \mathcal{V}_{\bm 0}^h \times \mathcal{Q}^h.
\label{discretevmsles} \end{aligned} \end{equation} The strong residuals, after time discretization, read \begin{align} \bm r_{\text M} (\bm u^h_{*}, p^h_{*}) = & \, \rho \frac{ \bm u^h_{*}-\bm u^h_n}{\Delta t} + \rho \left( \bm{u}^h_{n} - \bm{u}^{\text{ALE}}_{n+1} \right) \cdot \nabla \bm{u}^h_{*} - \mu \Delta \bm u^h_{*}+ \nabla p^h_{*} - \bm f_{n+1}, \label{rm_space_time} \\ r_{\text C} (\bm u^h_{n+1}) = & \, \nabla \cdot \bm u^h_{n+1}, \label{rc_space_time} \end{align} where the subscript $*$ denotes either the time step $n$ or $n+1$, according to how the residuals appear in Eq. \eqref{discretevmsles}. As for Eq. \eqref{coarse_withfinesolution}, the first, second and last rows in Eq. \eqref{discretevmsles} contain the integrals of the standard Navier-Stokes equations in the ALE framework (see Eq. \eqref{eq_ns-weak}), while the remaining rows contain the additional stabilization and turbulence modeling terms, namely the standard SUPG term, the VMS term and, finally, the LES term. From this point of view, the standard SUPG stabilization method can be considered as a step towards the fully stabilized formulation \cite{BCC_vmsles}. In this paper, we adopt either the VMS-LES method, i.e. the whole formulation in Eq. \eqref{discretevmsles}, or the SUPG method, i.e. the formulation in Eq. \eqref{discretevmsles} without the additional VMS and LES terms. We recall that, on the one hand, both the SUPG and VMS-LES methods allow us to control instabilities in the velocity field arising in convection-dominated (i.e. high Reynolds number) regimes, as well as instabilities due to the fact that equal-order FE spaces $\mathbb{P}_k - \mathbb{P}_k$ do not satisfy the \textit{inf-sup} (or LBB) condition, which would yield numerical oscillations of the pressure field \cite{LBB, BCC_vmsles, FD_vmsles}.
On the other hand, the VMS-LES method - as the name itself emphasizes and differently from SUPG - also yields an LES-type model \cite{BCC_vmsles, Hughes_1995, HCS_2005, HMJ_2000, HOM_2001, HSF_2004, FD_vmsles} to account for the transitional, nearly turbulent flow regime that typically occurs in cardiac haemodynamics. \subsection{Left atrium model} The LA is a chamber located in the left part of the heart, anchored on top of the LV, connected to the pulmonary circulation system through the pulmonary veins (PVs) and to the LV through the mitral valve (MV). The position, size and even the number of PVs are specific to the individual, but there are usually four veins situated in the upper part of the LA, in a direction perpendicular to the MV axis. The left atrial appendage (left auricle) is a small secondary cavity located on one side of the LA and connected to the main cavity through an orifice. In Figure \ref{fig_lageom} we report the geometry of the idealized LA that is used for the numerical simulations, while in Figure \ref{torso} we highlight the position of the chamber inside a human torso. The LA boundary $\Gamma_t$ is split into six portions: four PVs sections $\Gamma_{\text{PV}_i}, \, i= 1, \dots, 4$, the MV section $\Gamma_{\text{MV}}$ and the LA endocardium $\Gamma_\text{w}$. The PVs are considered to be of equal size and the left atrial appendage is labelled as LAA. The MV section has an area of 6.74 cm$^2$, while each PV section has an area of 0.78 cm$^2$; if these sections were circular, the corresponding diameters would be 2.93 cm and 1 cm, respectively. In physiological conditions, during diastole, blood is ejected from the LA into the LV through the open MV with a first strong ejection and a second weaker one, strengthened by the LA contraction, also known as the atrial kick. This process is characterized by a volume reduction of about $25\%$ of the initial volume.
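The equivalent circular diameters quoted above follow from $d = 2\sqrt{A/\pi}$; a quick numerical check:

```python
import math

def equivalent_diameter(area_cm2):
    """Diameter (cm) of a circle having the given area (cm^2)."""
    return 2.0 * math.sqrt(area_cm2 / math.pi)

d_mv = equivalent_diameter(6.74)  # MV section  -> about 2.93 cm
d_pv = equivalent_diameter(0.78)  # PV sections -> about 1.0 cm
```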
The first blood ejection from the LA is called the Early wave (E-wave), while the ejection driven by the atrial kick is known as the A-wave (atrial wave). During systole the MV closes and the LA is filled with blood coming from the PVs, expanding to recover its original volume. \begin{figure}[t!] \centering% \includegraphics[trim={2 2 2 2},clip,width=0.9\textwidth]{figures2020/torso.png} \caption{\label{torso} Position of the LA inside the torso. The idealized LA geometry is in green and the remaining heart chambers in red. The 3D torso model is taken, for visualization purposes, from the repository CoMMLab \cite{commlab,Ferrer_2015}. } \end{figure} \begin{figure}[t!] \centering% \includegraphics[trim={1cm 1cm 1cm 1cm},clip,width=0.49\textwidth]{figures2020/la_geometry_front.png} \includegraphics[trim={1cm 1cm 1cm 1cm},clip, width=0.49\textwidth]{figures2020/la_geometry_back.png} \caption{\label{fig_lageom} The idealized LA geometry from two different angles. The domain boundary is $\Gamma_t = \Gamma_{\text w} \cup \Gamma_{\text{MV}} \cup \left (\bigcup_{i=1}^4 \Gamma_{\text{PV}_i}\right )$. } \end{figure} \begin{figure}[t!] \centering \includegraphics[trim={0cm 0cm 0.5cm 0.5cm},clip, width=0.49\textwidth]{figures2020/fluxes_time.png} \includegraphics[trim={0cm 0cm 0.5cm 0.5cm},clip, width=0.49\textwidth]{figures2020/volume_time.png} \caption{\label{fig_flow} Blood flow through the MV section ($\Gamma_{\text{MV}}$) and in each PV ($\Gamma_{\text{PV}_i}, \, i = 1, \dots, 4$) vs. time (left). Idealized LA volume vs. time (right).} \end{figure} In the literature, the MV flow has been studied and measured in both physiological and pathological conditions \cite{KWPP_mri,CMN_leftheart,QMC_review,TDQ_3d, mittal_mitralvalve, VGY_leftatrium}. In Figure \ref{fig_flow} (left) we report the inlet (PV sections) and outlet (MV section) flow rates against time. The first peak during diastole is the E-wave, while the second one is the A-wave.
During systole the flow through the MV is zero because the valve is closed. The heart cycle considered in this work corresponds to a rest condition at 60 bpm, i.e. the period is equal to $T_\text{HB}=1$ s. The diastole lasts for $T_{\text{dias}}=0.68$ s and the systole for the remaining $T_{\text{syst}}=0.32$ s; a whole heartbeat lasts $T_\text{HB} = T_{\text{dias}} + T_{\text{syst}}$. We simulate diastole first and then systole, so that the initial time corresponds to the end-systolic phase. The volume variation of the LA is based on the ejection phases, so the volume decrease is modeled in two phases corresponding to the E- and A-waves. The LA filling phase is shorter and is accomplished with a continuous rise of the volume. The LA volume as a function of time, $V(t)$, is reported in Figure \ref{fig_flow} (right) \cite{KFH_leftatrium}. As explained in Section \ref{section_NS_ALE}, we prescribe a velocity $\bm g^{\text{ALE}}$ on the boundary $\Gamma_t$ and we extend it harmonically to the whole domain to get the ALE velocity $\bm u^{\text{ALE}}$ (see Eq. \eqref{ALE_laplacian}). In particular, we compute the ALE velocity on the LA boundary by assuming separation of variables: \begin{equation} \bm g^{\text{ALE}} (\bm x, t) = \bm f^{\text{ALE}}(\bm x)\, g^{\text{ALE}}(t) \quad \text{on } \Gamma_t, \label{gALEdef} \end{equation} where $\bm f^{\text{ALE}}(\bm x)$ contains the directions of $\bm g^{\text{ALE}}$ and $g^{\text{ALE}}(t)$ is a time-dependent function. We design $\bm f^{\text{ALE}} $ so that the wall velocity decreases near the PVs and vanishes on them ($\bm g^{\text{ALE}}= \bm 0$ on ${\Gamma_{\text{PV}_i}}, \, i = 1, \dots, 4$).
Let $x_G, \, y_G, \, z_G$ be the coordinates of the LA center of mass (units in cm); we define the function $\bm f^{\text{ALE}}$ as: \begin{equation} \bm f^{\text{ALE}} (\bm x) = F(z) ((x-x_G) \hat{\bm x} + (y-y_G) \hat{\bm y} + 0.6(z - z_G) \hat{\bm z} ), \label{fALE} \end{equation} with \begin{equation} F(z)= \left\{ \begin{array}{l l l} 0.5 & \text{ if } |z-z_G| \in \left[0,2.5\right]\text{ cm}, \\ \\ 0.5 \left(\dfrac{2.5 - |z-z_G|}{0.72} + 1 \right) & \text{ if }|z-z_G| \in \left[2.5, 3.22\right]\text{ cm}, \\ \\ 0 & \text{ if } |z-z_G| \in \left[3.22, 10\right]\text{ cm}. \end{array} \right. \label{Fz} \end{equation} The function $F$ is represented in Figure \ref{coefficientC_displacement} (left). In order to obtain the time variation of the prescribed ALE velocity, $g^{\text{ALE}}(t)$, we consider the volume variation and we exploit the Reynolds transport theorem (RTT) together with Eq. \eqref{gALEdef}: \begin{equation} \frac{dV(t)}{dt} = \frac{d}{dt} \int_{\Omega_t} d \Omega \overset{\text{RTT}}{=} \int_{\Gamma_t} \bm g^{\text{ALE}} \cdot \hat{\bm n} d \Gamma \overset{\text{Eq. \eqref{gALEdef}}}{=} g^{\text{ALE}}(t) \int_{\Gamma_t} \bm f^{\text{ALE}} \cdot \hat{\bm n} d \Gamma, \end{equation} which gives the following definition of ${g}^{\text{ALE}}$: \begin{equation} \everymath{\displaystyle} {g}^{\text{ALE}}(t) = \dfrac{1}{\int_{\Gamma_t} \bm f^{\text{ALE}} \cdot \hat{\bm n} d \Gamma} \dfrac{dV(t)}{dt}. \end{equation} \begin{figure}[!t] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures2020/la_coefficientC.png} \end{subfigure} \begin{subfigure}{.5\textwidth} \includegraphics[width=0.9\textwidth]{figures2020/la_displacement.png} \end{subfigure} \caption{Function $F(z)$ on the LA surface (left). LA geometry at its maximum contraction at end diastole (right): the colors in the deformed geometry highlight the magnitude of the displacement field $|\bm d|$.
} \label{coefficientC_displacement} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{figures2020/fluxes_PV2_PV4.png} \caption{Inlet flow rates on $\Gamma_{\text{PV}_2}$ and $\Gamma_{\text{PV}_4}$.} \label{flux_PV2_PV4} \end{figure} To better appreciate the LA deformation, in Figure \ref{coefficientC_displacement} (right) we overlap the geometries of the LA in its relaxed and contracted configurations, i.e. at the beginning and at the end of diastole, when the maximum LA contraction is reached, respectively. In terms of boundary conditions, during diastole (MV open), we set a homogeneous Neumann boundary condition on the MV section and we prescribe Poiseuille profiles on the PVs. We do this using, for each vein, a parabolic velocity profile and imposing the inlet flow rate $\Phi_{\text{PV}_i}(t), \, i = 1, \dots, 4$, that fulfils the mass balance: \begin{equation} \sum_{i=1}^4|\Phi_{\text{PV}_i}(t)| = |\Phi_{\text{MV}}(t)| + \left|\frac{d V(t)}{dt}\right |, \label{eq_incompr} \end{equation} where the flow rates are defined as: \begin{equation} \Phi_{\text{PV}_i}(t) = \int_{\Gamma_{\text{PV}_i}} \left ( \bm u - \bm u ^ {\text{ALE}}\right )\cdot \hat{\bm n} d\Gamma, \quad i = 1, \dots, 4, \end{equation} \begin{equation} \Phi_{\text{MV}} (t)= \int_{\Gamma_{\text{MV}}} \left ( \bm u - \bm u ^ {\text{ALE}}\right )\cdot \hat{\bm n} d\Gamma. \end{equation} During systole, the MV is closed ($\Phi_{\text{MV}}(t)=0$), so we switch the boundary condition on $\Gamma_{\text{MV}}$ to a Dirichlet one, $\bm u = \bm g^{\text{ALE}}$, to model the closed valve. The sudden switch of boundary conditions from natural to essential and vice versa -- aimed at replicating the rapid closing and opening stages of the MV -- may introduce some artifacts in the numerical solution, although in our experience these are negligible.
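The construction of the prescribed wall velocity can be summarized in a short sketch. The Python fragment below is illustrative only: the surface integral $\int_{\Gamma_t} \bm f^{\text{ALE}} \cdot \hat{\bm n}\, d\Gamma$ and the volume rate $dV/dt$ are replaced by hypothetical scalar stand-ins, and the LA center of mass is placed at the origin.

```python
def F(z, z_G=0.0):
    """Piecewise attenuation profile F(z) of Eq. (Fz); distances in cm."""
    s = abs(z - z_G)
    if s <= 2.5:
        return 0.5
    elif s <= 3.22:
        return 0.5 * ((2.5 - s) / 0.72 + 1.0)
    else:
        return 0.0

def g_ale(dV_dt, surface_integral):
    """Amplitude g^ALE(t) from the Reynolds transport theorem:
    dV/dt = g^ALE(t) * integral of f^ALE . n over Gamma_t."""
    return dV_dt / surface_integral
```

Note that $F$ is continuous: the middle branch equals $0.5$ at $|z - z_G| = 2.5$ cm and vanishes at $|z - z_G| = 3.22$ cm.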
However, we observed that, during systole, numerical oscillations would arise by keeping Dirichlet boundary conditions on all the inlet sections (as done during diastole) and on the MV section. For this reason, unlike in diastole, we use a homogeneous Neumann boundary condition on one of the PVs (specifically, we choose $\Gamma_{\text{PV}_4}$) while keeping a Dirichlet boundary condition, with assigned flow rate given by (\ref{eq_incompr}), on the other three. Indeed, the simulation results show that numerical oscillations are strongly reduced with this boundary setting. In addition, we have found that during systole the flow is always entering the domain (as for the remaining inlet sections), with a flow rate on $\Gamma_{\text{PV}_4}$ almost equal to that of the remaining sections, where Dirichlet boundary conditions are prescribed. We show this in Figure \ref{flux_PV2_PV4}, where we compare the flow rate prescribed on the inlet portion $\Gamma_{\text{PV}_2}$, where a Dirichlet boundary condition is set for the whole heartbeat, and the flow rate computed on $\Gamma_{\text{PV}_4}$, where a homogeneous Neumann boundary condition is prescribed in systole and a Dirichlet one in diastole. The mass balance condition in Eq. \eqref{eq_incompr} is hence always satisfied. By setting the boundary conditions as explained, we finally obtain the fluxes through the MV section and through each PV as reported in Figure \ref{fig_flow}.
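Under the equal-size assumption for the PV sections, the inlet flow-rate splitting implied by Eq. \eqref{eq_incompr} can be sketched as follows; this is a minimal illustration (the equal split among the veins and the instantaneous values are hypothetical, not data from the simulations):

```python
def pv_flow_rates(phi_mv, dV_dt, n_pv=4):
    """Split the total inflow |Phi_MV| + |dV/dt| equally among n_pv veins,
    so that the mass balance of Eq. (eq_incompr) holds by construction."""
    total_inflow = abs(phi_mv) + abs(dV_dt)
    return [total_inflow / n_pv] * n_pv

# Hypothetical instantaneous values (cm^3/s): MV outflow and LA volume rate.
phis = pv_flow_rates(phi_mv=-200.0, dV_dt=25.0)
```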
Moreover, a backflow stabilization is introduced in all the homogeneous Neumann-type boundary conditions in order to weakly penalize the reverse flow \cite{BGHMZ_backflow}: \begin{equation} \bm{\sigma}(\bm u, p) \bm{\hat n}= \rho (\{ \left (\bm u -\bm u^\text{ALE} \right )\cdot \hat{\bm n }\}_{-})\left (\bm u -\bm u^\text{ALE} \right )\quad \text{ on } \Gamma^N_t, \end{equation} where $\{\left (\bm u -\bm u^\text{ALE} \right ) \cdot \hat{\bm n }\}_{-}$ denotes the negative part of $\left (\bm u -\bm u^\text{ALE} \right )\cdot \hat{\bm n}$: \begin{equation} \{\left (\bm u -\bm u^\text{ALE} \right ) \cdot \hat{\bm n }\}_{-} = \begin{cases} \left (\bm u -\bm u^\text{ALE} \right ) \cdot \hat{\bm n} & \text{ if } \left (\bm u -\bm u^\text{ALE} \right ) \cdot \hat{\bm n} < 0, \\ 0 & \text{ if } \left (\bm u -\bm u^\text{ALE} \right )\cdot \hat{\bm n} \geq 0. \end{cases} \end{equation} Finally, we summarize in Eq. \eqref{wholemodel} the whole set of boundary and initial conditions for the modelling of blood flow in the LA.
\begin{equation} \begin{aligned} &\bm u = -\frac{|\Phi_{\text{PV}_i}(t)|}{4|\Gamma_{\text{PV}_i}|}\left (1-\frac{r(\bm{x})^2}{R_i^2} \right ) {\bm{ \hat n}}_i & \quad \quad & \text{ on } \Gamma_{\text{PV}_i} \times (0, T_{\text{dias}}] , \, i=1, \dots, 4, \\ &\bm{\sigma}(\bm u, p) \bm{\hat n}= \rho (\{ \left (\bm u -\bm u^\text{ALE} \right )\cdot \hat{\bm n }\}_{-})\left (\bm u -\bm u^\text{ALE} \right ) && \text{ on } \Gamma_{\text{MV}} \times (0, T_{\text{dias}}], \\ &\bm u = \bm g^{\text{ALE}} && \text{ on } \Gamma_{\text w} \times (0, T_{\text{dias}}], \\ &\bm u = -\frac{|\Phi_{\text{PV}_i}(t)|}{4|\Gamma_{\text{PV}_i}|}\left (1-\frac{r(\bm{x})^2}{R_i^2} \right ) {\bm{ \hat n}}_i && \text{ on } \Gamma_{\text{PV}_i} \times (T_{\text{dias}}, T_\text{HB}] , \, i=1, \dots, 3, \\ &\bm{\sigma}(\bm u, p) \bm{\hat n}= \rho (\{ \left (\bm u -\bm u^\text{ALE} \right )\cdot \hat{\bm n }\}_{-})\left (\bm u -\bm u^\text{ALE} \right ) && \text{ on } \Gamma_{\text{PV}_4} \times (T_{\text{dias}}, T_\text{HB}],\\ &\bm u = \bm g^{\text{ALE}} && \text{ on } (\Gamma_{\text w} \cup \Gamma_{\text{MV}}) \times (T_{\text{dias}}, T_\text{HB}],\\ &\bm u = \bm 0 && \text{ in } \Omega_0 \times \{0\}. \\ \end{aligned} \label{wholemodel} \end{equation} In the Dirichlet inflow boundary conditions, $r(\bm x)=|\bm x|$ denotes the radial coordinate on the PV section, $R_i$ is the radius of the $i$--th PV section and $\hat{\bm n}_i$ is its outward-directed unit normal vector.
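The negative-part operator $\{\cdot\}_-$ used in the Neumann conditions above acts pointwise on the normal relative velocity; a minimal numerical sketch (illustrative only, with hypothetical boundary values):

```python
import numpy as np

def negative_part(un):
    """{x}_- : x where x < 0, zero elsewhere (applied pointwise)."""
    return np.minimum(un, 0.0)

def backflow_traction(u_rel_n, u_rel, rho=1.06):
    """Backflow-stabilization traction rho * {(u - u_ALE) . n}_- * (u - u_ALE).

    u_rel_n : (npoints,) normal components (u - u_ALE) . n at boundary points
    u_rel   : (npoints, d) relative velocities at the same points
    """
    return rho * negative_part(u_rel_n)[:, None] * u_rel
```

The traction is nonzero only where the relative flow is entering the domain through the outflow boundary, which is exactly the reverse flow being penalized.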
\section{Mesh generation} \label{GRIDGENERATION} \begin{table}[!t] \centering \begin{tabular}{|c|c|c|c|c|} \cline{2-5} \multicolumn{1}{c|}{} & Mesh & $\mathcal{T}_{h_1}$ & $\mathcal{T}_{h_2}$ & $\mathcal{T}_{h_3}$ \\ \cline{2-5} \multicolumn{1}{c|}{}& \# elements & 575'220 & 1'711'622 & 8'344'030 \\ \hline &$\bm u^h$ & 291'561 & 830'517 & 4'030'227 \\ \multirow{1}{*}{\# DOFs ($\mathbb P_1-\mathbb P_1$)} & $p^h$ & 97'187 & 276'839 & 1'343'409 \\ \cline{2-5} & total & 388'748 & 1'107'356 & 5'373'636 \\ \hline \multirow{2}{*}{Inner elements} & $h_\text{min}$ [cm] & 0.05 & 0.05 & 0.05 \\ & $h_\text{max}$ [cm] & 0.2 & 0.1 & 0.05 \\ \hline & $\delta_{\text{BL}}$ [cm] & 0.05 & 0.05 & 0.05 \\ \multirow{1}{*}{Boundary layer} & $n_{\text{layers}}$ & 3 & 4 & 5 \\ & $\chi_{\text{BL}}$ & 0.8 & 0.8 & 0.8 \\ \hline \end{tabular} \caption{Details on the three meshes $\mathcal{T}_{h_i}, \, i = 1, \dots, 3$: number of elements; number of degrees of freedom (DOFs) using Lagrangian linear elements (for velocity, pressure and total number); minimum and maximum cell size for the inner elements of the mesh; boundary layer data: thickness $\delta_{\text{BL}}$, number of layers $n_{\text{layers}}$ and ratio among successive layers' thicknesses $\chi_{\text{BL}}$.}
\label{table_grids} \end{table} \begin{figure}[!t] \centering \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={4.5cm 0 1.5cm 0},clip, width=0.9\textwidth]{figures2020/la_mesh_coarse.png} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[trim={4.5cm 0 1.5cm 0},clip,width=0.9\textwidth]{figures2020/la_mesh_medium.png} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[trim={4.5cm 0 1.5cm 0},clip,width=0.9\textwidth]{figures2020/la_mesh_fine.png} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures2020/la_mesh_coarse_inlet.png} \caption{mesh $\mathcal{T}_{h_1}$} \label{la_mesh_coarse} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[width=0.9\textwidth]{figures2020/la_mesh_medium_inlet.png} \caption{mesh $\mathcal{T}_{h_2}$} \label{la_mesh_medium} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[width=0.9\textwidth]{figures2020/la_mesh_fine_inlet.png} \caption{mesh $\mathcal{T}_{h_3}$} \label{la_mesh_fine} \end{subfigure} \caption{The three meshes $\mathcal{T}_{h_i}, \, i = 1, \dots, 3$ adopted for the CFD simulations of the idealized LA geometry, with a focus on an inlet section. } \label{la_mesh} \end{figure} The LA endocardium was originally built by means of NURBS with the purpose of modeling the electric potential wavefront \cite{PDL_lageom}. In particular, the LA endocardium is built as a single NURBS patch starting from B-spline basis functions of degree 2; the LA fluid mesh is then obtained by filling the resulting surface. For further details on the idealized LA geometrical representation we refer the interested reader to \cite{PDL_lageom} and references therein. We generate three meshes, namely a coarse, a medium and a fine one, denoted respectively as $\mathcal{T}_{h_1}$, $\mathcal{T}_{h_2}$ and $\mathcal{T}_{h_3}$.
As shown in Figure~\ref{la_mesh} and reported in Table~\ref{table_grids}, for $\mathcal{T}_{h_1}$ and $\mathcal{T}_{h_2}$ a non-uniform distribution of the mesh element size is considered in order to have a well-resolved LAA. In particular, we adopt for all mesh levels the same minimum cell size $h_\text{min}=0.05$ cm in the lower corner of the LAA, and we increase it linearly through an appropriate distance function (for $\mathcal{T}_{h_1}$ and $\mathcal{T}_{h_2}$ only). $\mathcal{T}_{h_3}$ instead keeps a uniform cell size $h_\text{min} = h_\text{max} = 0.05$ cm. Furthermore, in order to accurately capture viscous effects near the wall, we introduce a boundary layer made of $n_{\text{layers}}$ layers with linearly variable element thicknesses. In particular, we adopt for all the meshes the same boundary layer thickness $\delta_{\text{BL}}=0.05$ cm, while we increase the number of layers from one mesh level to the next, keeping the same ratio $\chi_\text{BL}$ between successive layers' thicknesses. Table~\ref{table_grids} lists quantitative information about the three meshes. Mesh generation is performed by exploiting the VMTK library \cite{vmtk, vmtk_fedele}. The meshes are uploaded to a GitLab repository and are publicly accessible \cite{repository_cfdmesh}. \section{Numerical results and discussion} \label{RESU} We report the numerical results obtained by performing numerical simulations\footnote{Numerical simulations were run on the cluster iHEART (Lenovo SR950 8 x 24-Core Intel Xeon Platinum 8160, 2100 MHz and 1.7TB RAM) available at MOX, Dipartimento di Matematica, Politecnico di Milano. Furthermore, simulations on the mesh $\mathcal{T}_{h_3}$ were run on the GALILEO supercomputer (IBM NeXtScale cluster, 1022 nodes (Intel Broadwell), 2 x 18-Cores Intel Xeon E5-2697 v4 at 2.30 GHz, 36 cores/node, 26.572 cores in total with 128 GB/node) by CINECA.
} with the FE library LifeV \cite{LifeV, LifeV_paper} for the solution of the fluid dynamics in the idealized LA as modeled in Sections \ref{MATH} and~\ref{GRIDGENERATION}. Blood is modeled as a Newtonian, incompressible, viscous fluid with density $\rho=1.06$~g/cm$^3$ and dynamic viscosity $\mu=0.035$~g/(cm~s). For each $\mathcal{T}_{h_i}$, $i=1, \dots, 3$, we simulate six heartbeats, starting from the initial condition $\bm u_0 = \bm 0$. Due to the periodicity in time of the boundary conditions of the problem, we analyse the output of the numerical simulations with a phase-averaging filter in order to obtain average quantities over one representative cycle. Furthermore, in order to remove the influence of the unphysical initial condition $\bm u_0 = \bm 0$, we discard the first two heartbeats. Hence, referring to $N_{\text{HB}} = 4$ heartbeats, with period $T_\text{HB} = 1$ s, we introduce the phase-averaging filter for the velocity as: \begin{equation} \langle{\bm u }(\bm x, t) \rangle = \frac{1}{N_\text{HB}} \sum_{n = 1} ^ {N_\text{HB}} {\bm u} (\bm x, t + (n-1)T_\text{HB}). \label{phase_averaged} \end{equation} First, we present the results achieved with the mesh $\mathcal{T}_{h_3}$ using the SUPG stabilization method, which will represent our reference solution. Then, we perform a mesh convergence study using both the SUPG and VMS-LES methods, and we compare the two methods, in terms of fluid dynamics indicators, against the reference solution.
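In practice, Eq.~\eqref{phase_averaged} amounts to averaging stored velocity snapshots taken one period apart. A minimal NumPy sketch of this filter follows (the snapshot layout and array names are our own, not those of LifeV; the data here is synthetic):

```python
import numpy as np

# Synthetic stand-in for the simulation output: n_hb retained heartbeats,
# each sampled at n_t instants, for n_dof velocity degrees of freedom.
n_hb, n_t, n_dof = 4, 100, 8
rng = np.random.default_rng(0)
base = rng.normal(size=(n_t, n_dof))                 # cycle-periodic part
noise = 0.01 * rng.normal(size=(n_hb, n_t, n_dof))   # cycle-to-cycle variation
u = (base[None, :, :] + noise).reshape(n_hb * n_t, n_dof)

# Phase average: <u>(x, t) = (1/N_HB) * sum_n u(x, t + (n-1) T_HB),
# i.e. average the samples that share the same phase within the cycle.
u_phase = u.reshape(n_hb, n_t, n_dof).mean(axis=0)   # shape (n_t, n_dof)
```

In the paper's setting the first two heartbeats are discarded before applying the filter; here the array is assumed to already contain only the $N_{\text{HB}} = 4$ retained cycles.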
\begin{figure}[!t] \centering \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={50 1 50 80 },clip,width=\textwidth]{figures2020/results/slice_u_20_fine.png} \caption{$t = 0.20$ s} \label{slice_u_20_fine} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={50 1 50 80 },clip,width=\textwidth]{figures2020/results/slice_u_40_fine.png} \caption{$t = 0.40$ s} \label{slice_u_40_fine} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={50 1 50 80 },clip,width=\textwidth]{figures2020/results/slice_u_60_fine.png} \caption{$t = 0.60$ s} \label{slice_u_60_fine} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={50 1 50 80 },clip,width=\textwidth]{figures2020/results/slice_u_68_fine.png} \caption{$t = 0.68$ s} \label{slice_u_68_fine} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[trim={50 1 50 80 },clip,width=\textwidth]{figures2020/results/slice_u_80_fine.png} \caption{$t = 0.80$ s} \label{slice_u_80_fine} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[trim={50 1 50 80 },clip,width=\textwidth]{figures2020/results/slice_u_100_fine.png} \caption{$t = 1.00$ s} \label{slice_u_100_fine} \end{subfigure} \caption{Reference solution: phase-averaged velocity magnitude $|\langle \bm u \rangle |$ on a slice cutting two PVs (top-left) at different time instants.} \label{fig_slice} \end{figure} \subsection{The reference solution} \label{RESU_reference} We report the results obtained with the mesh $\mathcal{T}_{h_3}$ by adopting the SUPG stabilization method, with a time step $\Delta t = 6.25 \cdot 10^{-5}$~s. The numerical solution thus obtained is denoted as our reference solution. We remark that the results we present refer to the phase-averaged velocity $\langle \bm u \rangle$, which is representative of a heartbeat defined on the time interval $[0, T_\text{HB}]$.
In Figure \ref{fig_slice}, we report the phase-averaged velocity magnitude of the blood on a slice cutting two PVs at six time instants, corresponding to the diastolic peak of the E-wave ($t = 0.20$ s), the plateau between the E- and A-waves ($t = 0.40$ s), the A-wave ($t = 0.60$ s), the beginning of systole ($t = 0.68$ s), the filling phase during systole ($t = 0.80$ s) and the end of systole ($t = 1.00$ s). The peak velocity attained in our simulations is around 90 cm/s during the E-wave. The jets coming from the PVs impact one another, as can be seen at $t = 0.20$ s. In Figure \ref{fig_volume_rendering}, we report a volume rendering of the phase-averaged velocity magnitude at different time instants. The flow exhibits quite complex features; in particular, we observe that the jets impact at three particular instants of the heartbeat: the E-wave (Figure \ref{volume_rendering_20}), the A-wave (Figure \ref{volume_rendering_60}) and the filling phase of systole (Figure \ref{volume_rendering_80}).
\begin{figure}[!t] \centering \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={50 1 100 30 },clip,width=\textwidth]{figures2020/results/volume_rendering_20.png} \caption{$t = 0.20$ s} \label{volume_rendering_20} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={50 1 100 30 },clip,width=\textwidth]{figures2020/results/volume_rendering_40.png} \caption{$t = 0.40$ s} \label{volume_rendering_40} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={50 1 100 30 },clip,width=\textwidth]{figures2020/results/volume_rendering_60.png} \caption{$t = 0.60$ s} \label{volume_rendering_60} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={50 1 100 30 },clip,width=\textwidth]{figures2020/results/volume_rendering_68.png} \caption{$t = 0.68$ s} \label{volume_rendering_68} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[trim={50 1 100 30 },clip,width=\textwidth]{figures2020/results/volume_rendering_80.png} \caption{$t = 0.80$ s} \label{volume_rendering_80} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[trim={50 1 100 30 },clip,width=\textwidth]{figures2020/results/volume_rendering_100.png} \caption{$t = 1.00$ s} \label{volume_rendering_100} \end{subfigure} \caption{Reference solution: volume rendering of phase-averaged velocity magnitude $|\langle \bm u \rangle |$ at different time instants.} \label{fig_volume_rendering} \end{figure} \begin{figure}[!t] \centering \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={0 1 20 40 },clip,width=\textwidth]{figures2020/results/Qcriterion_vorticity_20.png} \caption{$t = 0.20$ s} \label{Qcriterion_vorticity_20} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={0 1 20 40 },clip,width=\textwidth]{figures2020/results/Qcriterion_vorticity_40.png} \caption{$t = 0.40$ s} \label{Qcriterion_vorticity_40} \end{subfigure} 
\begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={0 1 20 40 },clip,width=\textwidth]{figures2020/results/Qcriterion_vorticity_60.png} \caption{$t = 0.60$ s} \label{Qcriterion_vorticity_60} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={0 1 20 40 },clip,width=\textwidth]{figures2020/results/Qcriterion_vorticity_68.png} \caption{$t = 0.68$ s} \label{Qcriterion_vorticity_68} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[trim={0 1 20 40 },clip,width=\textwidth]{figures2020/results/Qcriterion_vorticity_80.png} \caption{$t = 0.80$ s} \label{Qcriterion_vorticity_80} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[trim={0 1 20 40 },clip,width=\textwidth]{figures2020/results/Qcriterion_vorticity_100.png} \caption{$t = 1.00$ s} \label{Qcriterion_vorticity_100} \end{subfigure} \caption{Reference solution: isosurfaces of Q-criterion $ Q = 2000$ Hz$^2$ coloured by phase-averaged vorticity magnitude $|\nabla \times \langle \bm u \rangle |$ at different time instants.} \label{fig_Qcriterion} \end{figure} We split the velocity gradient $\nabla \langle \bm u \rangle$ into its symmetric part $\bm \varepsilon (\langle \bm u \rangle)$, the strain-rate tensor, and its skew-symmetric part $\bm \omega (\langle \bm u \rangle)$, the rotation tensor: \begin{equation} \nabla \langle \bm u \rangle = \frac{1}{2} \left ( \nabla \langle \bm u \rangle + \left ( \nabla \langle \bm u \rangle \right )^T\right ) + \frac{1}{2} \left ( \nabla\langle \bm u \rangle - \left ( \nabla \langle \bm u \rangle\right )^T\right ) = \bm \varepsilon (\langle \bm u \rangle) + \bm \omega (\langle \bm u \rangle). \end{equation}
In order to identify coherent vortex structures, we introduce the scalar function \cite{Hunt_QCriterion}: \begin{equation} Q (\langle \bm u \rangle) = \frac{1}{2}\left( | \bm \omega(\langle \bm u \rangle)|_{\text{F}}^2 - |\bm \varepsilon(\langle \bm u \rangle)|_{\text{F}}^2 \right), \end{equation} where $|\cdot|_{\text{F}}$ is the Frobenius norm of a tensor. If $ Q (\langle \bm u \rangle) > 0$, the rotation of a fluid element dominates over its stretching: the Q-criterion consists in analysing the isosurfaces of the positive part of $ Q (\langle \bm u \rangle)$ \cite{Hunt_QCriterion}. In Figure \ref{fig_Qcriterion} we plot the isosurfaces corresponding to $ Q=2000$ Hz$^2$, coloured with the phase-averaged vorticity magnitude $|\nabla \times \langle \bm u \rangle |$. \color{black} The main feature of this flow is the formation of vortex rings out of the PVs when the blood enters the LA. These rings mutually interact when the corresponding jets impact, and then form structures that become smaller and smaller until they disappear by dissipating their energy. In Figure \ref{Qcriterion_vorticity_20}, we highlight the impact between the strong jets during the E-wave. Then, at time $t=0.40$ s (Figure \ref{Qcriterion_vorticity_40}), the structures become smaller, and they have nearly disappeared when new jets enter at time $t=0.60$ s (Figure \ref{Qcriterion_vorticity_60}), forming four clearly visible vortex rings around the PV sections (A-wave). In the refilling phase of systole ($t=0.80$ s), the vortex rings are again visible, with some residual structures still present at the center of the chamber. Focusing on the impact during the E-wave, in Figure \ref{fig_slice_vorticity} we show the projection of the phase-averaged vorticity $\nabla \times \langle \bm u \rangle$ on the normal direction of a slice cutting two PVs.
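The decomposition and the Q-criterion above translate directly into a few lines of code; a minimal sketch operating on pointwise velocity-gradient tensors (the function and variable names are our own, not the paper's implementation):

```python
import numpy as np

def q_criterion(grad_u):
    """Q = 0.5 * (|omega|_F^2 - |eps|_F^2) from a velocity gradient
    grad_u of shape (..., 3, 3), with eps and omega its symmetric
    (strain-rate) and skew-symmetric (rotation) parts."""
    eps = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))
    omega = 0.5 * (grad_u - np.swapaxes(grad_u, -1, -2))
    frob2 = lambda a: np.sum(a * a, axis=(-1, -2))   # squared Frobenius norm
    return 0.5 * (frob2(omega) - frob2(eps))

# Rigid rotation about z: grad_u is skew-symmetric, so Q > 0 (vortex core).
rotation = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
# Pure strain: grad_u is symmetric, so Q < 0 (no coherent rotation).
strain = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
```

Evaluating the two limit cases confirms the sign convention: a purely rotational gradient gives a positive Q, a purely straining one a negative Q.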
We observe the formation of shear layers from the PVs (Figure \ref{slice_vorticity_16}) and an early-stage interaction in Figure \ref{slice_vorticity_17}, along with some recirculation regions. Then, from $t=0.18$ s, we observe perturbed shear layers, with a coalescence of vortices and a dispersion of the organized flow pattern previously seen (Figures \ref{slice_vorticity_18}, \ref{slice_vorticity_20}). In particular, the vortex breakdown propagates into the rest of the chamber, towards the MV section (Figures \ref{slice_vorticity_21}, \ref{slice_vorticity_22}). \begin{figure}[!t] \centering \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={1 1 1 1 },clip,width=\textwidth]{figures2020/results/slice_vorticity_16.png} \caption{$t = 0.16$ s} \label{slice_vorticity_16} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={1 1 1 1 },clip,width=\textwidth]{figures2020/results/slice_vorticity_17.png} \caption{$t = 0.17$ s} \label{slice_vorticity_17} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={1 1 1 1 },clip,width=\textwidth]{figures2020/results/slice_vorticity_18.png} \caption{$t = 0.18$ s} \label{slice_vorticity_18} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={1 1 1 1 },clip,width=\textwidth]{figures2020/results/slice_vorticity_20.png} \caption{$t = 0.20$ s} \label{slice_vorticity_20} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[trim={1 1 1 1 },clip,width=\textwidth]{figures2020/results/slice_vorticity_21.png} \caption{$t = 0.21$ s} \label{slice_vorticity_21} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[trim={1 1 1 1 },clip,width=\textwidth]{figures2020/results/slice_vorticity_22.png} \caption{$t = 0.22$ s} \label{slice_vorticity_22} \end{subfigure} \caption{Reference solution: projection of the phase-averaged vorticity on the normal direction of a slice cutting two PVs (top-left) $(\nabla \times \langle \bm
u \rangle ) \cdot \hat{\bm n}$. Results at different time instants in the proximity of the E-wave ($t=0.20$ s).} \label{fig_slice_vorticity} \end{figure} The velocity profile at the MV is an interesting output of this computation, since it can be used as input for the simulation of the LV hemodynamics \cite{TDQ_2d,TDQ_3d}. In Figure \ref{fig_flux_MV}, we report glyphs of the velocity vector at the MV section during diastole (i.e. when the MV is open) on a slice coloured with $\langle \bm{u} \rangle \cdot \hat{\bm n}_{\text{MV}}$, i.e. the scalar product between the phase-averaged velocity and the outward pointing unit vector normal to the MV section. We notice that the velocity profile we obtain is highly variable in time and, more importantly, that it is flat only at some specific times, such as at $t=0.10$ s (Figure \ref{flux_MV_10}). Even when the flow is intense, such as at $t=0.20$ s or $t=0.60$ s, the velocity profile is never flat; on the contrary, the presence of vortices located above the MV section produces low-velocity regions, as shown in Figures \ref{flux_MV_20} and \ref{flux_MV_60}. During the time between the two waves, the flow rate is positive, as can be seen in Figures \ref{flux_MV_30}, \ref{flux_MV_40} and \ref{flux_MV_50}, but some recirculating velocities are visible in some spots, reaching values as low as $\langle \bm{u} \rangle \cdot \hat{\bm n}_{\text{MV}} = - 15$ cm/s. \color{black} In Figure \ref{velocity_MV_warp}, we also report the MV velocity profile at different instants during diastole, which, we remark, is an output of our numerical simulations. The velocity profiles obtained differ significantly from a flat profile, a Poiseuille profile, or, more generally, from the analytical profiles generally prescribed as inlet boundary conditions during diastole for haemodynamic simulations of the LV (i.e. on the MV section) \cite{Domenichini_Pedrizzetti_2011, Domenichini_Pedrizzetti_2005, MKKDL_ventr}.
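A downstream LV solver needs such a profile at arbitrary times, which in practice requires interpolating the saved snapshots in time (and, in general, in space). A minimal sketch of the temporal part follows; the snapshot times and nodal values below are made-up numbers for illustration only, not data from the repository:

```python
import numpy as np

# Hypothetical MV normal-velocity snapshots (cm/s) at three surface nodes,
# saved at a few instants during diastole.
t_snap = np.array([0.10, 0.20, 0.30, 0.40, 0.50, 0.60])   # s
u_snap = np.array([[10.0, 12.0, 11.0],
                   [60.0, 85.0, 40.0],
                   [20.0, 25.0, 18.0],
                   [15.0, 22.0, 12.0],
                   [18.0, 24.0, 14.0],
                   [35.0, 55.0, 30.0]])                    # one row per instant

def mv_profile(t):
    """Nodal MV profile at time t, linear interpolation between snapshots."""
    return np.array([np.interp(t, t_snap, u_snap[:, j])
                     for j in range(u_snap.shape[1])])
```

Evaluating halfway between the first two snapshots, `mv_profile(0.15)`, returns the nodewise average of the first two rows; at a snapshot time the stored profile is recovered exactly.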
We made these profiles publicly available at the repository \cite{repository_cfdmesh}: they can be used, after a suitable fitting in space and time, to prescribe an inflow boundary condition at the MV section during diastole for the LV simulation. This boundary treatment better accounts for the effect of the flow coming from the LA, which may considerably affect the haemodynamics of the LV. \color{black} \begin{figure}[t!] \centering \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={40 1 40 10 },clip,width=\textwidth]{figures2020/results/fluxMV_10_white.png} \caption{$t = 0.10$ s} \label{flux_MV_10} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={40 1 40 10 },clip,width=\textwidth]{figures2020/results/fluxMV_20_white.png} \caption{$t = 0.20$ s} \label{flux_MV_20} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={40 1 40 10 },clip,width=\textwidth]{figures2020/results/fluxMV_30_white.png} \caption{$t = 0.30$ s} \label{flux_MV_30} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={40 1 40 10 },clip,width=\textwidth]{figures2020/results/fluxMV_40_white.png} \caption{$t = 0.40$ s} \label{flux_MV_40} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[trim={40 1 40 10 },clip,width=\textwidth]{figures2020/results/fluxMV_50_white.png} \caption{$t = 0.50$ s} \label{flux_MV_50} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[trim={40 1 40 10 },clip,width=\textwidth]{figures2020/results/fluxMV_60_white.png} \caption{$t = 0.60$ s} \label{flux_MV_60} \end{subfigure} \caption{Reference solution: glyphs of the velocity vector at the MV section during diastole on a slice coloured with $\langle \bm{u} \rangle \cdot \hat{\bm n}_{\text{MV}}$, i.e. the scalar product between the phase-averaged velocity and the outward pointing unit vector normal to the MV section.} \label{fig_flux_MV} \end{figure} \begin{figure}[t!] 
\centering \includegraphics[trim={1 1 1 1 },clip,width=\textwidth]{figures2020/results/velocity_MV_warp.png} \caption{\color{black}Reference solution: velocity profile at the MV section at different times during diastole. \color{black}} \label{velocity_MV_warp} \end{figure} \newpage \begin{figure}[!t] \centering \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={100 1 20 30 },clip,width=\textwidth]{figures2020/results/WSS_ref_20.png} \caption{$t = 0.20$ s} \label{WSS_ref_20} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={100 1 20 30 },clip,width=\textwidth]{figures2020/results/WSS_ref_40.png} \caption{$t = 0.40$ s} \label{WSS_ref_40} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={100 1 20 30 },clip,width=\textwidth]{figures2020/results/WSS_ref_60.png} \caption{$t = 0.60$ s} \label{WSS_ref_60} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={100 1 20 30 },clip,width=\textwidth]{figures2020/results/WSS_ref_68.png} \caption{$t = 0.68$ s} \label{WSS_ref_68} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[trim={100 1 20 30 },clip,width=\textwidth]{figures2020/results/WSS_ref_80.png} \caption{$t = 0.80$ s} \label{WSS_ref_80} \end{subfigure} \begin{subfigure}{.325\textwidth} \includegraphics[trim={100 1 20 30 },clip,width=\textwidth]{figures2020/results/WSS_ref_100.png} \caption{$t = 1.00$ s} \label{WSS_ref_100} \end{subfigure} \caption{Reference solution: wall shear stress (WSS) magnitude at different time instants.} \label{WSS_ref} \end{figure} In view of calculating hemodynamic indicators, we define the viscous stress tensor related to the phase-averaged velocity field as \begin{equation} \bm \tau (\langle \bm u \rangle) = 2 \mu \bm \varepsilon (\langle \bm u \rangle). \end{equation} We compute the vector wall shear stress (\textbf{WSS}) on the boundary of the reference configuration $\Omega_0$ (i.e. 
the LA at the beginning of diastole) as \begin{equation} \textbf{WSS} (\langle \bm u \rangle) = \bm \tau (\langle \bm u \rangle) \hat{\bm n} - \left ( \bm \tau (\langle \bm u \rangle) \hat{\bm n} \cdot \hat{\bm n}\right )\hat {\bm n } \quad \text{on } \partial \Omega_0, \end{equation} and the scalar fields time averaged wall shear stress (TAWSS), oscillatory shear index (OSI) and relative residence time (RRT) (see \cite{KFH_leftatrium,KU_osi,HG_rrt}). \color{black}These indicators can help shed light on the long-term response of endothelial cells, since such cells are affected by both the magnitude of the WSS and its evolution in time. For this reason, they can be used to identify the formation of new tissue and plaques and the promotion of neointimal hyperplasia \cite{KU_osi}. \color{black} From the WSS, we compute the TAWSS as the time average over the heartbeat of the magnitude of the \textbf{WSS}, \begin{equation} \text{TAWSS}(\langle \bm u \rangle) = \frac{1}{T_\text{HB}} \int_{0}^{T_\text{HB}} | \mathbf{WSS} (\langle \bm u \rangle) |_2 dt \quad \text{on } \partial \Omega_0, \end{equation} where $| \cdot |_2$ denotes the Euclidean norm of a vector. The OSI is defined as \cite{KU_osi}: \begin{equation} \text{OSI}(\langle \bm u \rangle) = \frac{1}{2} \left( 1 - \dfrac{\left | \int_{0}^{T_\text{HB}} \mathbf{WSS} (\langle \bm u \rangle) dt \right|_2}{ \int_{0}^{T_\text{HB}} \left| \mathbf{WSS} (\langle \bm u \rangle) \right|_2 dt } \right) \quad \text{on } \partial \Omega_0, \end{equation} and it is higher in regions where the WSS changes direction significantly during a heart cycle. Finally, we compute the RRT as in \cite{HG_rrt} \begin{equation} \text{RRT}(\langle \bm u \rangle) = \left( \left(1 - 2\,\text{OSI} (\langle \bm u \rangle) \right) \dfrac{1}{T_\text{HB}} \int_{0}^{T_\text{HB}} \left| \mathbf{WSS} (\langle \bm u \rangle) \right|_2 dt \right)^{-1} \quad \text{on } \partial \Omega_0. 
\end{equation} The RRT is proportional to the residence time of blood particles in the proximity of the wall, and it can be regarded as a convenient fluid dynamics indicator to identify regions where the WSS is both low and oscillatory \cite{Domanin_2017}. In Figure \ref{WSS_ref}, we report the WSS magnitude computed on the surface of the LA at different time instants by using the phase-averaged velocity. The largest values are attained during the E-wave in the middle of the surface of the LA, towards the MV. This region corresponds to areas where vortices interact and are pushed towards the LA wall. During the rest of the cycle, the WSS values remain quite small; large values are attained only in the PVs and in the lower part of the LA. Figure~\ref{TAWSS_ref} shows the TAWSS on the reference configuration from two different perspectives: low values of the TAWSS are achieved in the LAA, while some peaks can be appreciated on the opposite side of the chamber, in accordance with the large values of $|\textbf{WSS}|$ previously observed due to the interaction between the vortices and the endocardium. In Figure \ref{OSI_ref} we report the OSI, computed in the same settings as Figure \ref{TAWSS_ref}. The OSI is large on the top of the LA, where a large recirculation is present, and on the bottom of the LAA, hence revealing a significant variation of the wall shear stress. As a qualitative indication of the time that a fluid particle spends in the vicinity of the wall, we report in Figure~\ref{RRT_ref} the RRT: as expected, the largest values are attained in the bottom of the LAA. We suggest this could be related to the shape and position of the LAA, where the blood reaches very low velocities and recirculation effects are observed. Interestingly, analogous findings for all the analysed hemodynamic indicators, both in magnitude and in their distribution on the LA surface, are reported for healthy patient-specific subjects in \cite{KFH_leftatrium}.
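Given a sampled WSS history at a wall point, the three indicators follow directly from their definitions; a minimal sketch with a rectangle-rule quadrature (the function and variable names are our own, not the paper's implementation):

```python
import numpy as np

def tawss_osi_rrt(wss, dt):
    """TAWSS, OSI and RRT from vector WSS samples over one heartbeat.
    wss: array of shape (n_t, 3); dt: sampling step, with T_HB = n_t * dt.
    Time integrals are approximated with a rectangle rule."""
    T_hb = wss.shape[0] * dt
    mag = np.linalg.norm(wss, axis=1)
    int_mag = mag.sum() * dt                          # integral of |WSS|
    mag_int = np.linalg.norm(wss.sum(axis=0) * dt)    # magnitude of integral
    tawss = int_mag / T_hb
    osi = 0.5 * (1.0 - mag_int / int_mag)
    rrt = 1.0 / ((1.0 - 2.0 * osi) * tawss)
    return tawss, osi, rrt

# Unidirectional WSS of constant magnitude 2: OSI = 0 and RRT = 1 / TAWSS.
wss_uni = np.tile([2.0, 0.0, 0.0], (100, 1))
tawss_u, osi_u, rrt_u = tawss_osi_rrt(wss_uni, 0.01)
```

With a perfectly oscillating WSS the OSI instead tends to its maximum value 1/2 and the RRT blows up, consistent with its interpretation as a near-wall residence time.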
\begin{comment} \begin{figure}[!t] \centering \begin{subfigure}{.495\textwidth} \centering \includegraphics[trim={60 0 60 0 },clip,width=\textwidth]{figures2020/results/TAWSS_ref_a.png} \end{subfigure} \begin{subfigure}{.495\textwidth} \centering \includegraphics[trim={60 0 60 0 },clip,width=\textwidth]{figures2020/results/TAWSS_ref_b.png} \end{subfigure} \caption{Reference solution: different views of time averaged wall shear stress (TAWSS). } \label{TAWSS_ref} \end{figure} \begin{figure}[!t] \centering \begin{subfigure}{.495\textwidth} \centering \includegraphics[trim={60 0 60 0 },clip,width=\textwidth]{figures2020/results/OSI_ref_a.png} \end{subfigure} \begin{subfigure}{.495\textwidth} \centering \includegraphics[trim={60 0 60 0 },clip,width=\textwidth]{figures2020/results/OSI_ref_b.png} \end{subfigure} \caption{Reference solution: different views of oscillatory shear index (OSI)} \label{OSI_ref} \end{figure} \begin{figure}[!t] \centering \begin{subfigure}{.495\textwidth} \centering \includegraphics[trim={60 0 60 0 },clip,width=\textwidth]{figures2020/results/RRT_ref_a.png} \end{subfigure} \begin{subfigure}{.495\textwidth} \centering \includegraphics[trim={60 0 60 0 },clip,width=\textwidth]{figures2020/results/RRT_ref_b.png} \end{subfigure} \caption{Reference solution: different views of relative residence time (RRT). 
} \label{RRT_ref} \end{figure} \end{comment} \begin{figure}[!t] \centering \begin{subfigure}{0.325\textwidth} \centering \includegraphics[trim={5 2 4 2 },clip,width=1.1\textwidth]{figures2020/results/TAWSS_both.png} \caption{TAWSS} \label{TAWSS_ref} \end{subfigure} \begin{subfigure}{0.325\textwidth} \centering \includegraphics[trim={5 2 2 2 },clip,width=1.1\textwidth]{figures2020/results/OSI_both.png} \caption{OSI} \label{OSI_ref} \end{subfigure} \begin{subfigure}{0.325\textwidth} \centering \includegraphics[trim={5 2 4 2 },clip,width=1.1\textwidth]{figures2020/results/RRT_both.png} \caption{RRT} \label{RRT_ref} \end{subfigure} \caption{Reference solution, haemodynamic indicators from two different perspectives: (a) TAWSS, (b) OSI, (c) RRT.} \label{TAWSS_OSI_RRT_ref} \end{figure} \begin{table}[!t] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline {time} [s] & $t = 1.00$ & $t = 2.00$ & $t = 3.00$ & $t = 4.00$ & $t = 5.00 $ & $t = 6.00 $ \\ \hline {particles} & 37'971 & 7'336 & 1'583 & 344 & 91 & 42 \\ \hline {\% of total injected} & 75.23 & 14.53 & 3.14 & 0.68 & 0.18 & 0.08 \\ \hline \end{tabular} \caption{Reference solution: particles remaining in the LA at the end of each cardiac cycle and percentage of the total injected particles. } \label{table_particles} \end{table} \begin{figure}[!t] \centering \begin{subfigure}{.48\textwidth} \centering \includegraphics[trim={1 1 1 1},clip,width=\textwidth]{figures2020/results/flux_injectedparticles_plot.png} \caption{} \label{injected_particles} \end{subfigure} \begin{subfigure}{.48\textwidth} \centering \includegraphics[trim={1 1 1 1},clip,width=\textwidth]{figures2020/results/result_particles_LA.png} \caption{} \label{plot_particles} \end{subfigure} \caption{{Reference solution. Particles are injected every 0.05 s (in red), in a number proportional to the inlet flow rate (in blue) (left). Number of particles inside the LA during six cardiac cycles, injecting particles in the first cycle only. 
With different colours: the number of particles in the chamber coming from different PVs (right).}} \label{injected_particles_and_plot_particles} \end{figure} \begin{figure}[!t] \centering \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={200 250 200 250 },clip,width=\textwidth]{figures2020/results/particles/videoparticles0006-min.png} \caption{$t = 0.05 $ s} \label{frame_0p05} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={200 250 200 250 },clip,width=\textwidth]{figures2020/results/particles/videoparticles0011-min.png} \caption{$t = 0.10 $ s} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={200 250 200 250 },clip,width=\textwidth]{figures2020/results/particles/videoparticles0016-min.png} \caption{$t = 0.15 $ s} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={200 250 200 250 },clip,width=\textwidth]{figures2020/results/particles/videoparticles0021-min.png} \caption{$t = 0.20 $ s} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={200 250 200 250 },clip,width=\textwidth]{figures2020/results/particles/videoparticles0031-min.png} \caption{$t = 0.30 $ s} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={200 250 200 250 },clip,width=\textwidth]{figures2020/results/particles/videoparticles0041-min.png} \caption{$t = 0.40 $ s} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={200 250 200 250 },clip,width=\textwidth]{figures2020/results/particles/videoparticles0051-min.png} \caption{$t = 0.50 $ s} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={200 250 200 250 },clip,width=\textwidth]{figures2020/results/particles/videoparticles0071-min.png} \caption{$t = 0.70 $ s} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={200 250 200 250 
},clip,width=\textwidth]{figures2020/results/particles/videoparticles0081-min.png} \caption{$t = 0.80 $ s} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={200 250 200 250 },clip,width=\textwidth]{figures2020/results/particles/videoparticles0101-min.png} \caption{$t = 1.00 $ s} \label{frame_1p00} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={200 250 200 250 },clip,width=\textwidth]{figures2020/results/particles/videoparticles0201-min.png} \caption{$t = 2.00 $ s} \label{frame_2p00} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={200 250 200 250 },clip,width=\textwidth]{figures2020/results/particles/videoparticles0301-min.png} \caption{$t = 3.00 $ s} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={200 250 200 250 },clip,width=\textwidth]{figures2020/results/particles/videoparticles0401-min.png} \caption{$t = 4.00 $ s} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={200 250 200 250 },clip,width=\textwidth]{figures2020/results/particles/videoparticles0501-min.png} \caption{$t = 5.00 $ s} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={200 250 200 250 },clip,width=\textwidth]{figures2020/results/particles/videoparticles0600-min.png} \caption{$t = 6.00 $ s} \label{frame_6p00} \end{subfigure} \caption{Reference solution: blood particles in the LA during six heartbeats, injecting particles for the first heartbeat only in a number proportional to the inlet flow rate. From (a) to (j), injection during the first cycle; from (k) to (o), particles remaining inside the chamber at the end of each heart-cycle.} \label{particles_frames} \end{figure} Large values of the RRT in the LAA suggest stasis of blood particles, i.e. blood stagnating in low-velocity regions, which may result in the formation of blood clots \cite{Gupta_2014, Nordsletten_2018}. 
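Such stagnation can be quantified with a particle wash-out count: passive tracers are injected and the fraction still inside the chamber is recorded at the end of each cycle. As a minimal illustration of this bookkeeping, using the counts reported in Table~\ref{table_particles} (the variable names are our own):

```python
# Particles injected during the first heartbeat and counts remaining
# in the LA at the end of each cycle (values from Table of particle counts).
n_injected = 50471
remaining = [37971, 7336, 1583, 344, 91, 42]   # at t = 1, 2, ..., 6 s

# Percentage of the injected particles still inside the chamber.
pct_remaining = [100.0 * n / n_injected for n in remaining]
```

The rapid decay of `pct_remaining` over the cycles is the wash-out curve visualized in the corresponding figure.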
For this reason, we count the number of blood particles remaining in the LA at the end of each heart-cycle. \color{black} Details on the methodology adopted to study the particles are provided in \ref{appendix_particles}. \color{black} We inject particles in the four PVs every 0.05 s, in a number proportional to the inlet flow rate. In Figure \ref{injected_particles} we plot the number of particles injected. In Figure \ref{particles_frames} we report snapshots of the blood particles during six heartbeats, injecting in the first heart-cycle only and leaving the particles in the chamber for the following five cycles. We studied the contribution of particles coming from the different veins, representing particles from different inlets with different colours. We can observe the formation of four vortex rings coming from the PVs, with four jets impacting in the middle of the chamber and hence producing a mixing of particles. Particles remain inside the LAA, as also confirmed by the large values of the RRT previously found. In order to quantify wash-out effects, we stop injecting particles at $t=1.00$ s and count the number of particles at the end of each cycle. This result is visualized in Figure \ref{plot_particles} and quantified in Table \ref{table_particles}. The overall number of particles introduced in the chamber during the first heartbeat is 50'471; at the end of each cardiac cycle, we report the percentage of particles still inside, showing that, after five further cycles, only 0.08\% of the total injected particles remain in the LA. \color{black} \subsection{Mesh convergence and comparison of SUPG and VMS-LES} \label{comparison_SUPG_VMSLES} We present a comparison between the VMS-LES and SUPG stabilization methods using the meshes $\mathcal{T}_{h_1}$ and $\mathcal{T}_{h_2}$. The results are compared with the reference solution of Section~\ref{RESU_reference}, which, we remark, was obtained by performing a numerical simulation on the mesh $\mathcal{T}_{h_3}$ with the SUPG method.
In Table \ref{table_setup_simulations}, we summarize details of the five numerical simulations performed \color{black} along with the Courant numbers computed as in \cite{AQ_book} using the average mesh element size and the maximum (in space and time) velocity magnitude obtained in the numerical simulations. When the same mesh and the same time step are adopted, different Courant numbers are achieved because the two methods yield different velocities. \color{black} Further details on the meshes adopted are given in Table \ref{table_grids}. \begin{table}[!t] \centering \begin{tabular}{|c|c|c|c|} \hline Mesh level & $\Delta t$ [s] & Method & \color{black}Courant number\color{black} \\ \hline $\mathcal{T}_{h_1}$ & $1.00 \cdot 10^{-3}$ & SUPG & 0.8281 \\ $\mathcal{T}_{h_1}$ & $1.00 \cdot 10^{-3}$ & VMS-LES & 0.7666\\ $\mathcal{T}_{h_2}$ & $2.50 \cdot 10^{-4}$ & SUPG & 0.3425\\ $\mathcal{T}_{h_2}$ & $2.50 \cdot 10^{-4}$ & VMS-LES & 0.3871 \\ $\mathcal{T}_{h_3}$ (reference) & $6.25 \cdot 10^{-5}$ & SUPG & 0.1560\\ \hline \end{tabular} \caption{Details on the numerical simulations used to compare the SUPG and VMS-LES stabilization methods in the transitional regime. In all the simulations, we adopt $\mathbb{P}1-\mathbb{P}1$ FE spaces, the backward Euler method as time discretization scheme, and a semi-implicit treatment of the nonlinear terms.} \label{table_setup_simulations} \end{table} \begin{figure}[!t] \centering \includegraphics[clip,width=0.9\textwidth]{figures2020/results/TKE_with_zoom.png} \caption{Total kinetic energy $E_{k}(\langle \bm u \rangle)$ using the SUPG and VMS-LES methods on meshes $\mathcal{T}_{h_1}$ and $\mathcal{T}_{h_2}$ compared to the reference solution; zoom on the first peak.} \label{TKE} \end{figure} We define some turbulence indicators, obtained by integrating suitable variables over the whole domain, that are then compared with the reference data in order to validate the results of the numerical simulations.
Specifically, we compute the total kinetic energy of the flow, by using the phase-averaged velocity (defined in Eq. \eqref{phase_averaged}), as \begin{equation} E_{k}\left ( \langle \bm u \rangle \right ) = \frac{1}{2} \rho \int_{\Omega_t} |{\langle \bm u \rangle }|_2^2 d\Omega. \end{equation} Moreover, we define the enstrophy of the flow as \cite{UME_enst,LMDK_enst} \begin{equation} S\left ( \langle \bm u \rangle \right )=\frac{1}{2} \rho \int_{\Omega_t} |\nabla \times \langle \bm u \rangle |_2^2 d\Omega. \end{equation} The latter is a fluid dynamics indicator that can be used to identify a transitional flow \cite{UME_enst,LMDK_enst}. In Figure \ref{TKE}, we report $E_{k}$ computed on the reference solution and for the meshes $\mathcal{T}_{h_1}$ and $\mathcal{T}_{h_2}$ with the SUPG and VMS-LES methods. The total kinetic energy presents three peaks, corresponding to the E-wave, the A-wave and the systolic filling phase. Energy production is observed when high-speed blood flows arrive from the PVs; as the jets impact in the middle of the cardiac chamber, dissipation of the kinetic energy can be observed. All the methods and meshes share the same overall behaviour, coherent with the reference solution. On the mesh $\mathcal{T}_{h_2}$, the results are almost always comparable, whereas small differences between VMS-LES and SUPG appear at the first peak on the mesh $\mathcal{T}_{h_1}$: at this coarse level, the VMS-LES method reproduces our reference solution more accurately, whereas SUPG overestimates it.
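As a minimal illustration of how such integral indicators can be assembled from discrete data, the sketch below approximates $E_k$ and $S$ by quadrature over cells, given velocity (and precomputed vorticity) samples with the corresponding cell volumes. It is a toy reconstruction under these assumptions, not the finite element implementation used in the paper.

```python
def total_kinetic_energy(rho, velocities, cell_volumes):
    # E_k = (1/2) * rho * sum_i |u_i|^2 * dV_i  (one velocity sample per cell)
    return 0.5 * rho * sum(
        (ux * ux + uy * uy + uz * uz) * dv
        for (ux, uy, uz), dv in zip(velocities, cell_volumes)
    )

def enstrophy(rho, vorticities, cell_volumes):
    # S = (1/2) * rho * sum_i |curl u|_i^2 * dV_i, with the vorticity
    # curl u assumed to be precomputed at the same sample points.
    return 0.5 * rho * sum(
        (wx * wx + wy * wy + wz * wz) * dv
        for (wx, wy, wz), dv in zip(vorticities, cell_volumes)
    )
```

Replacing the point samples and volumes with proper quadrature nodes and weights recovers the element-wise assembly of a finite element code.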
\begin{figure}[!t] \centering \includegraphics[clip,width=0.7\textwidth]{figures2020/results/Enstrophy.png} \caption{{Enstrophy $S(\langle \bm u \rangle)$ using the SUPG and VMS-LES methods on meshes $\mathcal{T}_{h_1}$ and $\mathcal{T}_{h_2}$ compared to the reference solution.}} \label{Enstrophy} \end{figure} In Figure \ref{Enstrophy}, we show the enstrophy $S$ computed on the reference solution and for the meshes $\mathcal{T}_{h_1}$ and $\mathcal{T}_{h_2}$ with both the SUPG and VMS-LES methods. As for the total kinetic energy $E_k$, we observe three main peaks during the heartbeat, corresponding to the production and subsequent dissipation of vorticity. The solution largely depends on the underlying mesh: as the mesh is refined, the solution becomes more accurate and no remarkable differences between the methods can be observed. \begin{figure}[!t] \centering \includegraphics[trim={80 1 10 1}, clip,width=\textwidth]{figures2020/results/TKE_Enstrophy_4cycles.png} \caption{\color{black} Total kinetic energy $E_{k}(\bm u )$ and enstrophy $S( \bm u )$ of the reference solution during four heartbeats.} \label{TKE_Enstrophy} \end{figure} \begin{figure}[!t] \centering \includegraphics[clip,width=0.9\textwidth]{figures2020/results/FKE_with_zoom.png} \caption{{Fluctuating kinetic energy $E_{kf}(\bm \sigma_{\bm u})$ using the SUPG and VMS-LES methods on meshes $\mathcal{T}_{h_1}$ and $\mathcal{T}_{h_2}$ compared to the reference solution; zooms on the second and third peak.}} \label{FKE} \end{figure} \color{black} In Figure \ref{TKE_Enstrophy}, we report $E_{k}(\bm u )$ and $S( \bm u )$ computed on the reference solution during four heartbeats with the instantaneous velocity (not phase-averaged). The plot shows a high variability of the solution among heartbeats in terms of integral quantities: for instance, with respect to the last heartbeat, the E-wave peak shows a relative variation of about 6\% for $E_{k}$ and 12\% for $S$.
Thus, to quantify the large variation of the solution during different heart cycles, \color{black}we introduce the fluctuating kinetic energy of the flow as \cite{CMN_leftheart,TDQ_2d} \begin{equation} E_{kf}\left ( \bm \sigma_{\bm u} \right ) = \frac{1}{2} \rho \int_{\Omega_t} |\bm \sigma_{\bm u} |_2^2 d\Omega, \end{equation} where $\bm \sigma_{\bm u} = \left ( \sigma_{u_1}, \sigma_{u_2}, \sigma_{u_3}\right )^T$ is the vector containing the standard deviation (i.e. the fluctuations) of each component of the velocity field with respect to the phase-averaged velocity. Its $k$--th component is defined as \begin{equation} \sigma_{u_k}(\bm x, t) = \sqrt{\text{var}(u_k(\bm x, t))} = \sqrt{\langle {u_k} ^2(\bm x, t) \rangle - \langle {u_k}(\bm x, t) \rangle ^2 }, \quad k = 1, 2, 3. \label{standard_deviation} \end{equation} The fluctuating kinetic energy is an important indicator of transition to turbulence and also provides information on cycle-to-cycle variations. For this reason, it can be seen as one of the most characteristic indicators of transitional flow for hemodynamic applications \cite{CMN_leftheart,TDQ_2d}. In Figure \ref{FKE}, we observe that $E_{kf}$ shows a peak with a large amplitude immediately after the E-wave. This result suggests that the velocity fluctuations $\bm \sigma_{\bm u}$ are higher during the first peak mainly due to small differences in the location of the shear layer and of the vortical structures (where velocity gradients are high), as observed also in \cite{CMN_leftheart}. We show this by reporting in Figure \ref{ekinfl_slice} the specific fluctuating kinetic energy ($\frac{1}{2} \rho |\bm \sigma_{\bm u} |^2 $) on a slice passing through the four PVs at time $t=0.25$ s. Indeed, the largest values are obtained in the area where the jets and vortical structures impact. \color{black} Large values of $E_{kf}$ also confirm the large variability among heartbeats previously observed in Figure \ref{TKE_Enstrophy}.
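The computation of $\sigma_{u_k}$ and of $E_{kf}$ reduces to elementary statistics over the recorded cycles. A toy version, assuming velocity samples taken at the same point and phase over several cycles (names are illustrative):

```python
def phase_std(samples):
    # sigma_u_k = sqrt(<u_k^2> - <u_k>^2): standard deviation of one
    # velocity component at a fixed point and phase, over the recorded
    # cycles (Eq. for sigma_u_k above, discrete form).
    n = len(samples)
    mean = sum(samples) / n
    mean_sq = sum(s * s for s in samples) / n
    return max(mean_sq - mean * mean, 0.0) ** 0.5  # clamp round-off

def fluctuating_kinetic_energy(rho, sigma_vectors, cell_volumes):
    # E_kf = (1/2) * rho * sum_i |sigma_u|_i^2 * dV_i (quadrature sketch)
    return 0.5 * rho * sum(
        (sx * sx + sy * sy + sz * sz) * dv
        for (sx, sy, sz), dv in zip(sigma_vectors, cell_volumes)
    )
```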
In terms of meshes and methods, differently from the total kinetic energy $E_k$ and the enstrophy $S$, we found more noticeable differences in the fluctuating kinetic energy $E_{kf}$, as can be seen in Figure \ref{FKE}. On the one hand, analysing the results related to the mesh $\mathcal{T}_{h_2}$, we observe that the solutions obtained with the SUPG and VMS-LES methods are very similar and both are close to the reference solution, except for the third peak (zoom B), where the VMS-LES method better predicts the reference solution. Moreover, to better quantify these differences, we report in Table \ref{table_e_kin_fl} the minimum, maximum and average discrepancy $e(t)$ achieved in terms of the fluctuating kinetic energy with respect to the reference solution $E_{kf}^\text{REF}(t)$, which shows that, for the mesh $\mathcal{T}_{h_2}$, the VMS-LES method has a lower average and maximum discrepancy than SUPG. On the other hand, we found more remarkable differences for the mesh $\mathcal{T}_{h_1}$: as shown in Figure \ref{FKE}, the amplitude of the first peak is highly dependent on the stabilization method adopted; in particular, the VMS-LES solution produces a lower maximum error (which in this case coincides with the E-wave peak) than the one achieved with SUPG, as also confirmed in Table \ref{table_e_kin_fl} in terms of maximum discrepancy. Moreover, the VMS-LES solution on $\mathcal{T}_{h_1}$ better predicts the reference solution than SUPG on the same mesh (as confirmed by zooms A and B and quantified in Table \ref{table_e_kin_fl} in terms of mean discrepancy).
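The discrepancy statistics just discussed amount to the following elementary computation over the sampled time instants (a sketch with illustrative names):

```python
def discrepancy_stats(e_kf, e_kf_ref):
    # e(t) = |E_kf(t) - E_kf_ref(t)|, sampled at matching instants;
    # returns (min, max, mean) of e over the samples.
    e = [abs(a - b) for a, b in zip(e_kf, e_kf_ref)]
    return min(e), max(e), sum(e) / len(e)
```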
We investigated how the differences in fluctuating kinetic energy observed on the coarse mesh ($\mathcal{T}_{h_1}$) may affect the actual flow field: in Figure \ref{EKIN_coarse_4HBs} we report the total kinetic energy computed with the instantaneous velocity $\bm u$ over four heartbeats, obtained with the reference solution and with the SUPG and VMS-LES methods on $\mathcal{T}_{h_1}$. We observe that both methods correctly represent $E_k(\bm u)$, but both lose accuracy during the energy dissipation stages, without a clear trend between the two methods during these phases. On the contrary, the main difference between the two methods (consistent with the previous outcomes) is observed during the E-wave energy peak, where VMS-LES consistently gives more accurate results, while the SUPG method underestimates the peaks; VMS-LES also better predicts the variation of the E-wave peaks from one cycle to another. This result shows that the role of the VMS-LES method is more evident in the solution when larger Reynolds numbers are achieved, as during the E-wave, where at the MV section we measured a Reynolds number $Re_\text{MV} \approx 3800 $ and turbulence phenomena are more evident, as thoroughly detailed in Section \ref{RESU_reference}. We believe this justifies the use of additional stabilization terms in Eq. \eqref{discretevmsles} modelling also the Reynolds stresses \cite{BCC_vmsles, FD_vmsles} in a LES fashion. In the literature, we found few works that compare the SUPG and VMS-LES methods, and their conclusions point in different directions. In \cite{Behr_2019}, a stabilized formulation for the fully-implicit log-morphology equation is adopted and applied to a centrifugal ventricular assist device: it is shown that the VMS stabilized formulation has better convergence behaviour and superior stabilization properties compared to the SUPG one.
On the other hand, in \cite{Ahmed_2019} the numerical tests carried out revealed that the SUPG and VMS-LES methods exhibit comparable accuracy, and the authors conclude that, for their case, the SUPG stabilization method is accurate enough. However, in our experience, we found that, as the mesh is refined, comparable results are achieved with the SUPG and VMS-LES methods: the role of the turbulence model hence vanishes as the mesh becomes finer, which is coherent with the standard definition of a LES model. Thus, if sufficiently fine meshes are adopted, the SUPG method is accurate enough to predict transitional flows, and the use of the additional terms modelled by the VMS-LES does not yield additional benefits in terms of accuracy. On the contrary, the two methods show significant differences with coarser meshes in terms of fluctuating kinetic energy: VMS-LES produces a lower discrepancy with respect to the reference solution than the SUPG stabilization method; we also found that VMS-LES better predicts the E-wave kinetic energy peaks and their variations from one heartbeat to another. Thus, the VMS-LES method plays a significant role, allowing one to better capture the transitional effects usually occurring in cardiac haemodynamics and the cycle-to-cycle flow variations, properties that are well described by the fluctuating kinetic energy. For this reason, when relatively coarse meshes are adopted, the use of a standard SUPG stabilization method might not be sufficient to correctly model cardiac haemodynamics.
\color{black} \begin{figure}[!t] \centering \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={1 1 1 1 },clip,width=\textwidth]{figures2020/results/ekinfl_slice_view.png} \caption{} \label{ekinfl_slice_view} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={1 1 1 1 },clip,width=\textwidth]{figures2020/results/ekinfl_coarseSUPG.png} \caption{$\mathcal{T}_{h_1}$ SUPG} \label{ekinfl_coarseSUPG} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={0 0 0 0 },clip,width=\textwidth]{figures2020/results/ekinfl_coarseVMSLES.png} \caption{$\mathcal{T}_{h_1}$ VMS-LES} \label{ekinfl_coarseVMSLES} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={0 0 0 0 },clip,width=\textwidth]{figures2020/results/ekinfl_mediumSUPG.png} \caption{$\mathcal{T}_{h_2}$ SUPG} \label{ekinfl_mediumSUPG} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={0 0 0 0 },clip,width=\textwidth]{figures2020/results/ekinfl_mediumVMSLES.png} \caption{$\mathcal{T}_{h_2}$ VMS-LES} \label{ekinfl_mediumVMSLES} \end{subfigure} \begin{subfigure}{.325\textwidth} \centering \includegraphics[trim={0 0 0 0 },clip,width=\textwidth]{figures2020/results/ekinfl_REFERENCE.png} \caption{Reference solution} \label{ekinfl_REFERENCE} \end{subfigure} \caption{Specific fluctuating kinetic energy $\frac{1}{2}\rho |\bm \sigma _{\bm u}|^2$ on a slice passing through the four PVs (see (a)) at time $t=0.25$ s using different meshes and methods. Large values of the fluctuating velocity are observed in the region where the jets impact.
} \label{ekinfl_slice} \end{figure} \begin{table}[!t] \centering \begin{tabular}{|c|c|c|c|c|} \hline Mesh level & Method & $\min_t(e(t))$ [mJ] & $\max_t(e(t))$ [mJ] & $\overline{e(t)}$ [mJ] \\ \hline $\mathcal{T}_{h_1}$ & SUPG & 0.0001 & 0.3408 & 0.0492 \\ $\mathcal{T}_{h_1}$ & VMS-LES & 0.0009 & 0.2130 & 0.0438\\ $\mathcal{T}_{h_2}$ & SUPG & 0.0003 & 0.1826 & 0.0266 \\ $\mathcal{T}_{h_2}$ & VMS-LES & 0.0002 & 0.1490 & 0.0256\\ \hline \end{tabular} \caption{\color{black}Minimum, maximum and average discrepancy of the fluctuating kinetic energy with respect to the reference solution, $e(t)=|E_{kf}(t) - E_{kf}^\text{REF}(t)|$.\color{black}} \label{table_e_kin_fl} \end{table} \begin{figure}[!t] \centering \includegraphics[trim={4.5cm 0 4.5cm 0 },clip,width=\textwidth]{figures2020/results/EKIN_coarse_4HBs_new.png} \caption{\color{black}Total kinetic energy $E_{k}(\bm u )$ during four heart-cycles with SUPG and VMS-LES on $\mathcal T_{h_1}$ compared with the reference solution.} \label{EKIN_coarse_4HBs} \end{figure} \section{Conclusions} \label{CONCLU} In this paper, we simulated the hemodynamics of an idealized human LA with the goal of better characterizing and understanding the blood flow behavior in this little explored chamber. We used the standard SUPG and the VMS-LES stabilization methods to yield stable, discrete formulations of the Navier-Stokes equations, approximated by means of the Finite Element method, and to account for turbulence modelling (in the case of VMS-LES). The ALE formulation with prescribed deformation of the computational domain has been considered in combination with the Navier-Stokes solver. We ran simulations on a fine mesh for six heartbeats, discarding the first two in order to remove the influence of the initial conditions. The result obtained serves as our reference solution and shows some characteristic blood flow features of the LA. The formation of vortex rings from the PVs is the main process occurring in this chamber.
The impact of the flow jets from the PVs and the breakup of vortices induce blood mixing and large values of the WSS on the wall near the impact regions. Large variability among the cardiac cycles is observed too. This, in combination with other fluid dynamics indicators, highlights that the blood flow in the LA (in these idealized physiological conditions) is definitely neither laminar nor fully turbulent, but rather transitional. The transitional nature of the blood flow has also been highlighted in the LV cavity, as shown e.g. in \cite{CMN_leftheart,TDQ_3d}. A further indication that we deduce from our study is that the blood velocity profile at the MV section considerably departs from a flat or a Poiseuille profile, an assumption that is often, but incorrectly, made when simulating LV hemodynamics; this result is coherent with the findings of \cite{TDQ_2d,TDQ_3d}. As a matter of fact, we found that the formation of vortices above the MV section produces low velocities and recirculation regions. We computed hemodynamic indicators and deduced that a significant variation of the WSS is observed at the bottom of the LAA and at the top of the LA. In particular, in the LAA low velocities and recirculation effects are observed, with consequent high values of RRT, which suggest blood stasis. To quantify the latter, we proposed a method to count the number of particles inside the chamber. Finally, we presented a mesh refinement study combined with an analysis of the numerical results obtained by means of the SUPG and VMS-LES methods. We computed the total kinetic energy and the enstrophy based on the velocity field phase-averaged over four heartbeats, and we compared these results with our reference solution. In terms of these turbulence indicators, we found that, as the mesh is refined, the solution becomes more accurate with both stabilization methods. In particular, the discrepancies between the methods become less evident as the mesh becomes finer.
Furthermore, we compared our results in terms of the fluctuating kinetic energy, based on the standard deviation of the velocity field. This is an important measure in hemodynamic applications since it is an indicator of cycle-to-cycle variations and also of transitional flow regimes. We found that the position where jets and vortices impact is highly variable from cycle to cycle, hence producing high values of fluctuating kinetic energy. \color{black} In terms of the latter, we found that, when relatively coarse meshes are adopted, the SUPG method shows a larger discrepancy with respect to the reference solution than VMS-LES. Moreover, in terms of the turbulent kinetic energy computed with the instantaneous velocity, VMS-LES better captures the E-wave peaks and their cycle-to-cycle variations. To conclude, we found that, if sufficiently fine meshes are adopted, the two methods provide almost comparable numerical solutions, hence the SUPG method is accurate enough to correctly capture transitional effects in the LA; on the contrary, differences between the two methods are more evident if coarser meshes are adopted and when large Reynolds numbers are achieved during the heartbeat, as during the E-wave. For coarser meshes, the use of VMS-LES becomes significant in these applications, even if a fully turbulent regime is never reached during the heartbeat, allowing one to correctly predict the transitional effects that typically occur in cardiac flows, as in the haemodynamics of the LA in normal conditions. \color{black} \cleardoublepage
\section{Introduction} In computational complexity theory, many hard, still open questions concern the relationships between complexity classes that are expected to be quite small in comparison to the mainstream complexity class $\PTIME$ of tractable languages. One of the smallest such classes is $\NC[1]$, the class of languages decided by Boolean circuits of polynomial size, logarithmic depth and bounded fan-in, a relevant and meaningful class that has many characterisations but whose internal structure still mostly remains a mystery. Indeed, among its most important subclasses, we count $\AC[0]$, $\CC[0]$ and $\ACC[0]$: all of them are conjectured to be different from each other and strictly within $\NC[1]$, but despite many efforts over several decades, this could only be proved for the first of those classes. In the late eighties, Barrington and Thérien~\cite{Barrington-Therien-1988}, building on Barrington's celebrated theorem~\cite{Barrington-1989}, gave an interesting viewpoint on those conjectures, relying on algebraic automata theory. They defined the notion of a program over a monoid $M$: a sequence of instructions $(i, f)$, each associating through the function $f$ some element of $M$ to the letter at position $i$ of the fixed-length input. In that way, the program outputs an element of $M$ for every input word, by multiplying out the elements given by the instructions for that word; acceptance or rejection then depends on that output element. A language of words of arbitrary length is consequently recognised in a non-uniform fashion, by a sequence of programs over some fixed monoid, one for each possible input length; when that sequence is of polynomial length, it is said that the monoid \emph{p}-recognises that language.
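To make the evaluation of such a program concrete, here is a small illustrative sketch, in which the monoid is represented (an arbitrary encoding of ours, chosen for concreteness) by a product function together with its identity element:

```python
def run_program(program, word, mul, one):
    # A program is a sequence of instructions (i, f), with 1-based
    # positions i; instruction (i, f) contributes f(word[i]), and the
    # output is the product of the contributions in program order.
    result = one
    for i, f in program:
        result = mul(result, f[word[i - 1]])
    return result

# Example: the parity language (odd number of 1s) over {0,1}, via the
# group Z/2Z = ({0, 1}, + mod 2), on inputs of length n = 3.
mod2 = lambda a, b: (a + b) % 2
n = 3
program = [(i, {"0": 0, "1": 1}) for i in range(1, n + 1)]
accepting = {1}  # a word is accepted iff its program output lies here
```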
Barrington and Thérien's discovery is that $\NC[1]$ and almost all of its significant subclasses can each be exactly characterised by \emph{p}-recognition over monoids taken from some suitably chosen variety of finite monoids (a class of finite monoids closed under basic operations on monoids). For instance, $\NC[1]$, $\AC[0]$, $\CC[0]$ and $\ACC[0]$ correspond exactly to \emph{p}-recognition by, respectively, finite monoids, finite aperiodic monoids, finite solvable groups and finite solvable monoids. Understanding the internal structure of $\NC[1]$ thus becomes a matter of understanding what finite monoids from some particular variety are able to \emph{p}-recognise. \vspace{0pt plus 1pt} It soon became clear that regular languages play a central role in understanding \emph{p}-recognition: McKenzie, Péladeau and Thérien indeed observed~\cite{McKenzie-Peladeau-Therien-1991} that finite monoids from a variety $\FMVariety{V}$ and a variety $\FMVariety{W}$ \emph{p}-recognise the same languages if and only if they \emph{p}-recognise the same regular languages. Otherwise stated, most conjectures about the internal structure of $\NC[1]$ can be reformulated as statements about where one or several regular languages lie within that structure. This is why a line of previous works investigated various notions of tameness, capturing the fact that, for a given variety of finite monoids, \emph{p}-recognition does not offer much more power than classical morphism-recognition when it comes to regular languages (see~\cite{PhD_thesis/Peladeau,Peladeau-Straubing-Therien-1997,Maciel-Peladeau-Therien-2000,Straubing-2000,Straubing-2001,PhD_thesis/Tesson,Lautemann-Tesson-Therien-2006,Grosshans-McKenzie-Segoufin-2017}). \vspace{0pt plus 1pt} This paper is a contribution to an ongoing study of what regular languages can be \emph{p}-recognised by monoids taken from ``small'' varieties, started with the author's Ph.D. thesis~\cite{PhD_thesis/Grosshans}.
In a previous paper by the author with McKenzie and Segoufin~\cite{Grosshans-McKenzie-Segoufin-2017}, a novel notion of tameness was introduced and shown for the ``small'' variety $\FMVDA$ of finite aperiodic monoids. This allowed them to characterise the class of regular languages \emph{p}-recognised by monoids from $\FMVDA$ as those recognised by so-called quasi-$\FMVDA$ morphisms, and represented a first small step towards a new proof that the variety $\FMVA$ of finite aperiodic monoids is tame, a statement equivalent to the well-known lower bound result about $\AC[0]$ by Furst, Saxe and Sipser~\cite{Furst-Saxe-Sipser-1984} and Ajtai~\cite{Ajtai-1983}. In~\cite{Grosshans-McKenzie-Segoufin-2017}, the authors also observed that, while $\FMVDA$ ``behaves well'' with respect to \emph{p}-recognition of regular languages, the variety $\FMVJ$, a subclass of $\FMVDA$, does, in contrast, ``behave badly'', in the sense that monoids from $\FMVJ$ do \emph{p}-recognise regular languages that are not recognised by quasi-$\FMVJ$ morphisms. \vspace{0pt plus 1pt} Now, $\FMVJ$ is a well-studied and fundamental variety in algebraic automata theory (see, e.g.,~\cite{Books/Pin-1986,Pin-2017}), corresponding through classical morphism-recognition to the class of regular languages in which membership depends on the presence or absence of a finite set of words as subwords. This paper is a contribution to the understanding of the power of programs over monoids in $\FMVJ$, a knowledge that certainly does not bring us closer to a new proof of the tameness of $\FMVA$ (as we are dealing with a strict subvariety of $\FMVDA$), but that is motivated by the importance of $\FMVJ$ in algebraic automata theory and by the unexpected power of programs over monoids in $\FMVJ$.
The results we present in this article are threefold: first, we exhibit a fine hierarchy within the class of languages \emph{p}-recognised by monoids from $\FMVJ$, depending on the length of those programs and on a parametrisation of $\FMVJ$; second, we show that a whole class of regular languages, the threshold dot-depth one languages, are \emph{p}-recognised by monoids from $\FMVJ$ while, in general, they are not recognised by any quasi-$\FMVJ$ morphism; third, we give an algebraic characterisation of the new class of threshold dot-depth one languages. This class forms a subclass of the dot-depth one languages~\cite{Pin-2017} in which, roughly speaking, detection of a given factor works only when it does not appear too often as a subword. We actually even conjecture that this class of languages with additional positional modular counting (that is, letters can be differentiated according to their position modulo some fixed number) corresponds exactly to all regular languages \emph{p}-recognised by monoids in $\FMVJ$. The characterisation of threshold dot-depth one languages as being exactly the dot-depth one languages that are also recognised by monoids in $\FMVDA$ is a statement that is interesting in itself for automata theory and constitutes an essential step towards the proof of the aforementioned conjecture. \paragraph{Organisation of the paper.} Following the present introduction, Section~\ref{sec:Preliminaries} is dedicated to the necessary preliminaries. In Section~\ref{sec:Fine_hierarchy}, we present the results about the fine hierarchy, and in Section~\ref{sec:Regular_languages} we present the results concerning the regular languages \emph{p}-recognised by monoids from $\FMVJ$. Section~\ref{sec:Algebraic_characterisation_TDDO_languages} is dedicated to the algebraic characterisation of threshold dot-depth one languages. Finally, Section~\ref{sec:Conclusion} gives a short conclusion.
\paragraph{Note.} \hspace{0pt plus 1pt} This article is partly based on unpublished parts of the author's Ph.D. thesis~\cite{PhD_thesis/Grosshans}. \section{Preliminaries} \label{sec:Preliminaries} \subsection{Various mathematical materials} We assume the reader is familiar with the basics of formal language theory, semigroup theory and recognition by morphisms, that we might designate by classical recognition; for those, we only specify some things and refer the reader to the two classical references of the domain by Eilenberg~\cite{Books/Eilenberg-1974,Books/Eilenberg-1976} and Pin~\cite{Books/Pin-1986}. \paragraph{General notations and conventions.} Let $i, j \in \N$. We shall denote by $\intinterval{i}{j}$ the set of all $n \in \N$ verifying $i \leq n \leq j$. We shall also denote by $[i]$ the set $\intinterval{1}{i}$. Given some set $E$, we shall denote by $\powerset{E}$ the powerset of $E$. All our alphabets and words will always be finite; the empty word will be denoted by $\emptyword$. Given some alphabet $\Sigma$ and some $n \in \N$, we denote by $\Sigma^{\geq n}$, $\Sigma^{=n}$ and $\Sigma^{< n}$ the set of words over $\Sigma$ of length, respectively, at least $n$, exactly $n$ and less than $n$. For any word $u$ over an arbitrary alphabet, we will denote by $\alphabet(u)$ the set of letters that appear in it. \paragraph{Varieties and languages.} A \emph{variety of monoids} is a class of finite monoids closed under submonoids, Cartesian product and morphic images. A \emph{variety of semigroups} is defined similarly. When dealing with varieties, we consider only finite monoids and semigroups, each having an \emph{idempotent power}, a smallest $\omega \in \N_{>0}$ such that $x^\omega = x^{2 \omega}$ for any element $x$. To give an example, the variety of finite aperiodic monoids, denoted by $\FMVA$, contains all finite monoids $M$ such that, given $\omega$ its idempotent power, $x^\omega = x^{\omega + 1}$ for all $x \in M$. 
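For concreteness, the idempotent power and the aperiodicity condition can be checked mechanically on a finite monoid. The sketch below represents a monoid (a representation of ours, chosen for illustration) by its set of elements, a product function and the identity element:

```python
def mpow(x, n, mul, one):
    # x^n in the monoid; `one` is the neutral element.
    acc = one
    for _ in range(n):
        acc = mul(acc, x)
    return acc

def idempotent_power(elements, mul, one):
    # Smallest omega >= 1 with x^omega = x^(2*omega) for every element x;
    # such an omega exists because the monoid is finite.
    omega = 1
    while any(mpow(x, omega, mul, one) != mpow(x, 2 * omega, mul, one)
              for x in elements):
        omega += 1
    return omega

def is_aperiodic(elements, mul, one):
    # Membership in A: x^omega = x^(omega + 1) for all x.
    omega = idempotent_power(elements, mul, one)
    return all(mpow(x, omega, mul, one) == mpow(x, omega + 1, mul, one)
               for x in elements)

# ({0, 1}, AND) is aperiodic; the group Z/2Z = ({0, 1}, + mod 2) is not.
band = lambda a, b: a * b
z2 = lambda a, b: (a + b) % 2
```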
Formally, we see a \emph{class of languages $\mathcal{C}$} as a correspondence that associates a set of languages $\mathcal{C}(\Sigma^*)$ over $\Sigma$ to each alphabet $\Sigma$. To each variety $\FMVariety{V}$ of monoids or semigroups we associate the class $\DLang{\FMVariety{V}}$ of languages such that, respectively, their syntactic monoid or semigroup belongs to $\FMVariety{V}$. For instance, $\DLang{\FMVA}$ is well-known to be the class of star-free languages. \paragraph{Quasi $\FMVariety{V}$ languages.} If $S$ is a semigroup we denote by $S^1$ the monoid $S$ if $S$ is already a monoid and $S \cup \set{1}$ otherwise. The following definitions are taken from~\cite{Pin-Straubing-2005}. Let $\varphi$ be a surjective morphism from $\Sigma^*$ to a finite monoid $M$. For all $k$ consider the subset $\varphi(\Sigma^k)$ of $M$ (where $\Sigma^k$ is the set of words over $\Sigma$ of length $k$). As $M$ is finite there is a $k$ such that $\varphi(\Sigma^{2k}) = \varphi(\Sigma^k)$. This implies that $\varphi(\Sigma^k)$ is a semigroup. The semigroup given by the smallest such $k$ is called the \emph{stable semigroup of $\varphi$}. If $S$ is the stable semigroup of $\varphi$, $S^1$ is called \emph{the stable monoid of $\varphi$}. If $\FMVariety{V}$ is a variety of monoids or semigroups, then we shall denote by $\StVQuasi{\FMVariety{V}}$ the class of such surjective morphisms whose stable monoid or semigroup, respectively, is in $\FMVariety{V}$ and by $\DLang{\StVQuasi{\FMVariety{V}}}$ the class of languages whose syntactic morphism is in $\StVQuasi{\FMVariety{V}}$. \paragraph{Programs over monoids.} Programs over monoids form a non-uniform model of computation, first defined by Barrington and Thérien~\cite{Barrington-Therien-1988}, extending Barrington's permutation branching program model~\cite{Barrington-1989}. Let $M$ be a finite monoid and $\Sigma$ an alphabet. 
A \emph{program $P$ over $M$ on $\Sigma^n$} is a finite sequence of instructions of the form $(i, f)$ where $i \in [n]$ and $f \in M^\Sigma$; said otherwise, it is a word over $([n] \times M^\Sigma)$. The \emph{length} of $P$, denoted by $\length{P}$, is the number of its instructions. The program $P$ defines a function from $\Sigma^n$ to $M$ as follows. On input $w \in \Sigma^n$, each instruction $(i, f)$ outputs the monoid element $f(w_i)$. A sequence of instructions then yields a sequence of elements of $M$ and their product is the output $P(w)$ of the program. A language $L \subseteq \Sigma^n$ is consequently recognised by $P$ whenever there exists $F \subseteq M$ such that $L = P^{-1}(F)$. A language $L$ over $\Sigma$ is \emph{recognised} by a sequence of programs $(P_n)_{n \in \N}$ over some finite monoid $M$ if for each $n$, the program $P_n$ is on $\Sigma^n$ and recognises $L^{=n}$. We say $(P_n)_{n \in \N}$ is of length $s(n)$ for $s\colon \N \to \N$ whenever $\length{P_n} = s(n)$ for all $n \in \N$ and that it is of length at most $s(n)$ whenever there exists $\alpha \in \R_{>0}$ verifying $\length{P_n} \leq \alpha \cdot s(n)$ for all $n \in \N$. For $s\colon \N \to \N$ and $\FMVariety{V}$ a variety of monoids, we denote by $\Prog{\FMVariety{V}, s(n)}$ the class of languages recognised by sequences of programs over monoids in $\FMVariety{V}$ of length at most $s(n)$. The class $\Prog{\FMVariety{V}} = \bigcup_{k \in \N} \Prog{\FMVariety{V}, n^k}$ is then the class of languages \emph{p}-recognised by a monoid in $\FMVariety{V}$, i.e. recognised by sequences of programs over monoids in $\FMVariety{V}$ of polynomial length. The following is an important property of $\Prog{\FMVariety{V}}$. \begin{proposition}[{\cite[Corollary 3.5]{McKenzie-Peladeau-Therien-1991}}] \label{lemma-simple-closure-P} Let $\FMVariety{V}$ be a variety of monoids, then $\Prog{\FMVariety{V}}$ is closed under Boolean operations. 
\end{proposition} Given two alphabets $\Sigma$ and $\Gamma$, a $\Gamma$-program on $\Sigma^n$ for $n \in \N$ is defined just like a program over some finite monoid $M$ on $\Sigma^n$, except that instructions output letters from $\Gamma$ and thus that the program outputs words over $\Gamma$. Let now $L \subseteq \Sigma^*$ and $K \subseteq \Gamma^*$. We say that \emph{$L$ program-reduces to $K$} if and only if there exists a sequence $(\Psi_n)_{n \in \N}$ of $\Gamma$-programs (the program-reduction) such that $\Psi_n$ is on $\Sigma^n$ and $L^{=n} = \Psi_n^{-1}(K^{=\length{\Psi_n}})$ for each $n \in \N$. The following proposition shows that $\Prog{\FMVariety{V}}$ is also closed under program-reductions. \begin{proposition}[{\cite[Proposition~3.3.12 and Corollary~3.4.3] {PhD_thesis/Grosshans}}] \label{ptn:P(V)_program-reduction_closure} \label{ptn:Program-reduction_to_regular_language} Let $\Sigma$ and $\Gamma$ be two alphabets. Let $\FMVariety{V}$ be a variety of monoids. Given $K \subseteq \Gamma^*$ in $\Prog{\FMVariety{V}, s(n)}$ for $s\colon \N \to \N$ and $L \subseteq \Sigma^*$ from which there exists a program-reduction to $K$ of length $t(n)$, for $t\colon \N \to \N$, we have that $L \in \Prog{\FMVariety{V}, s(t(n))}$. In particular, when $K$ is recognised (classically) by a monoid in $\FMVariety{V}$, we have that $L \in \Prog{\FMVariety{V}, t(n)}$. \end{proposition} \subsection{Tameness and the variety \texorpdfstring{$\FMVJ$}{J}} We will not introduce any of the proposed notions of tameness here; we only state that the main consequence, for a variety of monoids $\FMVariety{V}$, of being tame in the sense of~\cite{Grosshans-McKenzie-Segoufin-2017} is that $\Prog{\FMVariety{V}} \cap \Reg \subseteq \DLang{\StVQuasi\FMVariety{V}}$. This consequence has far-reaching implications from a computational-complexity-theoretic standpoint when $\Prog{\FMVariety{V}}$ happens to be equal to a circuit complexity class.
For instance, tameness for $\FMVA$ implies that $\Prog{\FMVA} \cap \Reg \subseteq \DLang{\StVQA}$, which is equivalent to the fact that $\AC[0]$ does not contain the language $\FLang{MOD_m}$ of words over $\set{0, 1}$ containing a number of $1$s not divisible by $m$ for any $m\!\in\!\N, m\!\geq\!2$ (a central result in complexity theory~\cite{Furst-Saxe-Sipser-1984,Ajtai-1983}). Let us now define the variety of monoids $\FMVJ$. A finite monoid $M$ of idempotent power $\omega$ belongs to $\FMVJ$ if and only if $(xy)^\omega = (xy)^\omega x = y (xy)^\omega$ for all $x, y \in M$. It is a strict subvariety of the variety $\FMVDA$, containing all finite monoids $M$ of idempotent power $\omega$ such that $(xy)^\omega = (xy)^\omega x (xy)^\omega$ for all $x, y \in M$, itself a strict subvariety of $\FMVA$. The variety $\FMVJ$ is a ``small'' one, well within $\FMVA$. We now give some specific definitions and results about $\FMVJ$ that we will use, based essentially on~\cite{Klima-Polak-2010}, but also on~\cite[Chapter 4, Section 1]{Books/Pin-1986}. For some alphabet $\Sigma$ and each $k \in \N$, let us define the equivalence relation $\sim_k$ on $\Sigma^*$ by $u \sim_k v$ if and only if $u$ and $v$ have the same set of $k$-subwords (subwords of length at most $k$), for all $u, v \in \Sigma^*$. The relation $\sim_k$ is a congruence of finite index on $\Sigma^*$. For an alphabet $\Sigma$ and a word $u \in \Sigma^*$, we shall write $u \shuffle \Sigma^*$ for the language of all words over $\Sigma$ having $u$ as a subword. In the following, we consider that $\shuffle$ has precedence over $\cup$ and $\cap$ (but of course not over concatenation). We define the \emph{class of piecewise testable languages $\LVPT$} as the class of regular languages such that for every alphabet $\Sigma$, the set $\LVPT(\Sigma^*)$ contains all languages over $\Sigma$ that are Boolean combinations of languages of the form $u \shuffle \Sigma^*$ where $u \in \Sigma^*$. 
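Purely as an illustration of $\sim_k$ and of $k$-subwords (a sketch of ours, not part of the development; the function names are hypothetical), the set of subwords of length at most $k$ can be enumerated by brute force for short words:

```python
from itertools import combinations

def subwords_up_to(w, k):
    """Set of all subwords (scattered subsequences) of w of length <= k."""
    return {"".join(w[i] for i in idx)
            for l in range(k + 1)
            for idx in combinations(range(len(w)), l)}

def sim_k(u, v, k):
    """u ~_k v iff u and v have the same set of k-subwords."""
    return subwords_up_to(u, k) == subwords_up_to(v, k)

# "abab" and "abba" share all subwords of length <= 2 ...
print(sim_k("abab", "abba", 2))  # True
# ... but "aab" is a 3-subword of "abab" only, so they are not ~_3-equivalent.
print(sim_k("abab", "abba", 3))  # False
```

The enumeration is exponential in $\length{w}$, so this is only meant to fix intuitions on small examples, not to be an efficient piecewise-testability test.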
In fact, $\LVPT(\Sigma^*)$ is the set of languages over $\Sigma$ equal to a union of $\sim_k$-classes for some $k \in \N$ (see~\cite{Simon-1975}). Simon showed~\cite{Simon-1975} that a language is piecewise testable if and only if its syntactic monoid is in $\FMVJ$, i.e. $\LVPT = \DLang{\FMVJ}$. We can define a hierarchy of piecewise testable languages in a natural way. For $k \in \N$, let the \emph{class of $k$-piecewise testable languages $\LVPT[k]$} be the class of regular languages such that for every alphabet $\Sigma$, the set $\LVPT[k](\Sigma^*)$ contains all languages over $\Sigma$ that are Boolean combinations of languages of the form $u \shuffle \Sigma^*$ where $u \in \Sigma^*$ with $\length{u} \leq k$. We then have that $\LVPT[k](\Sigma^*)$ is the set of languages over $\Sigma$ equal to a union of $\sim_k$-classes. Let us define $\FMVJ[k]$ as the inclusion-wise smallest variety of monoids containing the quotients of $\Sigma^*$ by $\sim_k$ for any alphabet $\Sigma$: a language is then $k$-piecewise testable if and only if its syntactic monoid belongs to $\FMVJ[k]$, i.e. $\LVPT[k] = \DLang{\FMVJ[k]}$. (See~\cite[Section 3]{Klima-Polak-2010}.) \section{Fine Hierarchy} \label{sec:Fine_hierarchy} The first part of our investigation of the computational power of programs over monoids in $\FMVJ$ concerns the influence of the length of programs on their computational capabilities. We say two programs over the same monoid on the same set of input words are \emph{equivalent} if and only if they recognise the same languages. Tesson and Thérien proved in~\cite{Tesson-Therien-2001} that for any monoid $M$ in $\FMVDA$, there exists some $k \in \N$ such that for any alphabet $\Sigma$ there is a constant $c \in \N_{>0}$ verifying that any program over $M$ on $\Sigma^n$ for $n \in \N$ is equivalent to a program over $M$ on $\Sigma^n$ of length at most $c \cdot n^k$. Since $\FMVJ \subset \FMVDA$, any monoid in $\FMVJ$ also has this property.
However, this does not imply that there exists some $k \in \N$ working for all monoids in $\FMVJ$, i.e. that $\Prog{\FMVJ}$ collapses to $\Prog{\FMVJ, n^k}$. In this section, we show on the one hand that, as for $\FMVDA$, while $\Prog{\FMVJ, s(n)}$ collapses to $\Prog{\FMVJ}$ for any super-polynomial function $s\colon \N \to \N$, there does not exist any $k \in \N$ such that $\Prog{\FMVJ}$ collapses to $\Prog{\FMVJ, n^k}$; and on the other hand that $\Prog{\FMVJ[k]}$ does optimally collapse to $\Prog{\FMVJ[k], n^{\ceiling{k / 2}}}$ for each $k \in \N$. \subsection{Strict hierarchy} Given $k, n \in \N$, we say that $\sigma$ is a \emph{$k$-selector over $n$} if $\sigma$ is a function of $\powerset{[n]}^{[n]^k}$ that associates a subset of $[n]$ to each vector in $[n]^k$. For any sequence $\Delta = (\sigma_n)_{n \in \N}$ such that $\sigma_n$ is a $k$-selector over $n$ for each $n \in \N$ --- a sequence we will call a \emph{sequence of $k$-selectors} ---, we set $L_\Delta = \bigcup_{n \in \N} K_{n, \sigma_n}$, where for each $n \in \N$, the language $K_{n, \sigma_n}$ is the set of words over $\set{0, 1}$ of length $(k + 1) \cdot n$ that can be decomposed into $k + 1$ consecutive blocks $u^{(1)}, u^{(2)}, \ldots, u^{(k)}, v$ of $n$ letters where the first $k$ blocks each contain $1$ exactly once and uniquely define a vector $\rho$ in $[n]^k$, where for all $i \in [k]$, $\rho_i$ is given by the position of the only $1$ in $u^{(i)}$ (i.e. $u^{(i)}_{\rho_i} = 1$) and $v$ is such that there exists $j \in \sigma_n(\rho)$ verifying that $v_j$ is $1$. Observe that for any $k$-selector $\sigma_0$ over $0$, we have $K_{0, \sigma_0} = \emptyset$. 
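To make the definition of $K_{n, \sigma_n}$ concrete, here is a minimal membership test (a sketch of ours; the block decoding and the selector interface, a Python function from tuples to sets, are our own choices):

```python
def in_K(w, k, n, sigma):
    """Membership test for K_{n, sigma}: w is a 0/1 word of length (k+1)*n
    whose first k blocks of n letters each contain exactly one 1 (together
    encoding a vector rho in [n]^k) and whose last block v has a 1 at some
    position of sigma(rho).  Positions are 1-indexed, as in the text."""
    if n == 0 or len(w) != (k + 1) * n:
        return False
    blocks = [w[i * n:(i + 1) * n] for i in range(k + 1)]
    rho = []
    for u in blocks[:k]:
        if u.count("1") != 1:
            return False
        rho.append(u.index("1") + 1)  # position of the unique 1, 1-indexed
    v = blocks[k]
    return any(v[j - 1] == "1" for j in sigma(tuple(rho)))

# A toy 1-selector over n = 3 (a hypothetical choice): sigma(rho) = {rho_1}.
sigma = lambda rho: {rho[0]}
print(in_K("010" "010", 1, 3, sigma))  # True: rho = (2) and v_2 = 1
print(in_K("010" "100", 1, 3, sigma))  # False: rho = (2) but v_2 = 0
```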
We now proceed similarly to what has been done in Subsection~5.1 in~\cite{Grosshans-McKenzie-Segoufin-2017} to show, on the one hand, that for all $k \in \N$, there is a monoid $M_k$ in $\FMVJ[2 k + 1]$ such that for any sequence of $k$-selectors $\Delta$, the language $L_\Delta$ is recognised by a sequence of programs over $M_k$ of length at most $n^{k + 1}$; and, on the other hand, that for all $k \in \N$ there is a sequence of $k$-selectors $\Delta$ such that for any finite monoid $M$ and any sequence of programs $(P_n)_{n \in \N}$ over $M$ of length at most $n^k$, the language $L_\Delta$ is not recognised by $(P_n)_{n \in \N}$. \paragraph{Upper bound.} We start with the upper bound. Given $k \in \N$, we define the alphabet $Y_k = \set{e, \#} \cup \set{\bot_l, \top_l \mid l \in [k]}$; we are going to prove that for all $k \in \N$ there exists a language $Z_k \in \LVPT[2 k + 1](Y_k^*)$ such that for every sequence of $k$-selectors $\Delta = (\sigma_n)_{n \in \N}$, there exists a program-reduction from $L_\Delta$ to $Z_k$ of length at most $2 \cdot (k + 1)^{-k} \cdot n^{k + 1}$. To this end, we use the following proposition and the fact that the set of words of length $n \in \N$ in $L_\Delta$ is exactly $K_{n', \sigma_{n'}}$ when there exists $n' \in \N$ verifying $n = (k + 1) \cdot n'$ and $\emptyset$ otherwise. \begin{proposition} \label{ptn:k-selectors_languages_in_P(J)} For all $k \in \N$ there is a language $Z_k \in \LVPT[2 k + 1](Y_k^*)$ such that $\emptyword \notin Z_k$ and for all $n \in \N$ and all $k$-selectors $\sigma_n$ over $n$, we have $K_{n, \sigma_n} = \Psi_{(k + 1) \cdot n, \sigma_n}^{-1} (Z_k^{=\length{\Psi_{(k + 1) \cdot n, \sigma_n}}})$ where $\Psi_{(k + 1) \cdot n, \sigma_n}$ is a $Y_k$-program on $\set{0, 1}^{(k + 1) \cdot n}$ of length at most $2 \cdot (k + 1) \cdot n^{k + 1}$. \end{proposition} \begin{proof} We first define by induction on $k$ a family of languages $Z_k$ over the alphabet $Y_k$. For $k = 0$, set $Z_0 = Y_0^* \# Y_0^*$.
For $k \in \N_{>0}$, the language $Z_k$ is the set of words containing each of $\top_k$ and $\bot_k$ exactly once, the former before the latter, and verifying that the factor between the occurrence of $\top_k$ and the occurrence of $\bot_k$ belongs to $Z_{k - 1}$, i.e. $Z_k = Y_{k - 1}^* \top_k Z_{k - 1} \bot_k Y_{k - 1}^*$. A simple induction on $k$ shows that $Z_k$ for $k \in \N$ is defined by the expression \[ Y_{k - 1}^* \top_k Y_{k - 2}^* \top_{k - 1} \cdots Y_1^* \top_2 Y_0^* \top_1 Y_0^* \# Y_0^* \bot_1 Y_0^* \bot_2 Y_1^* \cdots \bot_{k - 1} Y_{k - 2}^* \bot_k Y_{k - 1}^* \displaypunct{,} \] hence it belongs to $\LVPT[2 k + 1](Y_k^*)$ and in particular does not contain the empty word $\emptyword$. Fix $n \in \N$. If $n = 0$, the proposition follows trivially since for any $k$-selector $\sigma_0$ over $0$, we have $K_{0, \sigma_0} = \emptyset$ and $\emptyword \notin Z_k$; otherwise, we define by induction on $k$ a $Y_k$-program $P_k(d, \sigma)$ on $\set{0, 1}^{(d + k + 1) \cdot n}$ for every $k$-selector $\sigma$ over $n$ and every $d \in \N$. For any $j \in [n]$ and $\sigma$ a $0$-selector over $n$, which is just a function in $\powerset{[n]}^{\set{\emptyword}}$, let $h_{j, \sigma}\colon \set{0, 1} \to Y_0$ be the function defined by $h_{j, \sigma}(0) = e$ and $h_{j, \sigma}(1) = \begin{cases} \# & \text{if $j \in \sigma(\emptyword)$}\\ e & \text{otherwise} \end{cases}$. For all $k \in \N_{>0}$, we also let $f_k$ and $g_k$ be the functions in ${Y_k}^{\set{0, 1}}$ defined by $f_k(0) = g_k(0) = e$, $f_k(1) = \top_k$ and $g_k(1) = \bot_k$. Moreover, for any $k$-selector $\sigma$ over $n$, the symbol $\sigma|j$ for $j \in [n]$ denotes the $(k - 1)$-selector over $n$ such that for all $\rho' \in [n]^{k - 1}$, we have $i \in \sigma|j(\rho')$ if and only if $i \in \sigma((j, \rho'))$.
For $k \in \N_{>0}$, for $d \in \N$ and $\sigma$ a $k$-selector over $n$, the $Y_k$-program $P_k(d, \sigma)$ on $\set{0, 1}^{(d + k + 1) \cdot n}$ is the following sequence of instructions: \begin{align*} & (d \cdot n + 1, f_k) P_{k - 1}(d + 1, \sigma|1) (d \cdot n + 1, g_k)\\ \cdots & (d \cdot n + n, f_k) P_{k - 1}(d + 1, \sigma|n) (d \cdot n + n, g_k) \displaypunct{.} \end{align*} In words, for each position $i \in \intinterval{d \cdot n + 1}{d \cdot n + n}$ with a $1$ in the $(d + 1)$-th block of $n$ letters in the input, the program runs, between the symbols $\top_k$ and $\bot_k$, the program $P_{k - 1}(d + 1, \sigma|i)$ obtained by induction for $\sigma|i$ the $(k - 1)$-selector over $n$ obtained by restricting $\sigma$ to all vectors in $[n]^k$ whose first coordinate is $i$. For $k = 0$, for $d \in \N$ and $\sigma$ a $0$-selector over $n$, the $Y_0$-program $P_0(d, \sigma)$ on $\set{0, 1}^{(d + 1) \cdot n}$ is the following sequence of instructions: \[ (d \cdot n + 1, h_{1, \sigma}) (d \cdot n + 2, h_{2, \sigma}) \cdots (d \cdot n + n, h_{n, \sigma}) \displaypunct{.} \] In words, for each position $i \in \intinterval{d \cdot n + 1}{d \cdot n + n}$ with a $1$ in the $(d + 1)$-th block of $n$ letters in the input, the program outputs $\#$ if and only if $(i - d \cdot n)$ does belong to the set $\sigma(\emptyword)$. In short, $P_k(d, \sigma)$ is designed so that for any $w \in \set{0, 1}^{(d + k + 1) \cdot n}$, the word $P_k(d, \sigma)(w)$ belongs to $Z_k$ if and only if the last $(k + 1) \cdot n$ letters of $w$ form a word of $K_{n, \sigma}$. A simple computation shows that for any $k \in \N$, any $d \in \N$ and $\sigma$ a $k$-selector over $n$, the number of instructions in $P_k(d, \sigma)$ is at most $2 \cdot (k + 1) \cdot n^{k + 1}$. 
A simple induction on $k$ shows that for any $k \in \N$ and $d \in \N$, when running on a word $w \in \set{0, 1}^{(d + k + 1) \cdot n}$, for any $\sigma$ a $k$-selector over $n$, the program $P_k(d, \sigma)$ returns a word in $Z_k$ if and only if when $u^{(1)}, u^{(2)}, \ldots, u^{(k)}, v$ are the last $k + 1$ consecutive blocks of $n$ letters of $w$, then $u^{(1)}, u^{(2)}, \ldots, u^{(k)}$ each contain $1$ exactly once and define the vector $\rho$ in $[n]^k$ where for all $i \in [k]$, the value $\rho_i$ is given by the position of the only $1$ in $u^{(i)}$, verifying that there exists $j \in \sigma(\rho)$ such that $v_j$ is $1$. Therefore, for any $k \in \N$ and $\sigma_n$ a $k$-selector over $n$, if we set $\Psi_{(k + 1) \cdot n, \sigma_n} = P_k(0, \sigma_n)$, we have $K_{n, \sigma_n} = \Psi_{(k + 1) \cdot n, \sigma_n}^{-1} (Z_k^{=\length{\Psi_{(k + 1) \cdot n, \sigma_n}}})$ where $\Psi_{(k + 1) \cdot n, \sigma_n}$ is a $Y_k$-program on $\set{0, 1}^{(k + 1) \cdot n}$ of length at most $2 \cdot (k + 1) \cdot n^{k + 1}$. \end{proof} Consequently, for all $k \in \N$ and any sequence of $k$-selectors $\Delta$, since the language $Z_k$ is in $\LVPT[2 k + 1](Y_k^*)$ and thus recognised by a monoid from $\FMVJ[2 k + 1]$, we have, by Proposition~\ref{ptn:Program-reduction_to_regular_language}, that $L_\Delta \in \Prog{\FMVJ[2 k + 1], n^{k + 1}}$. \paragraph{Lower bound.} For the lower bound, we use the following claim, whose proof can be found in~\cite[Claim 10]{Grosshans-McKenzie-Segoufin-2017}. \begin{claim}\label{claim-monoid-fixed} For all $i \in \N_{>0}$ and $n \in \N$, the number of languages in $\set{0, 1}^n$ recognised by programs over a monoid of order $i$ on $\set{0, 1}^n$, with at most $l \in \N$ instructions, is upper-bounded by $i^{i^2} 2^i \cdot (n \cdot i^2)^l$.
\end{claim} If for some $k \in \N$ and $i \in [\alpha]$ with $\alpha \in \N_{>0}$, we apply this claim for all $n \in \N$ and $l = \alpha \cdot ((k + 1) \cdot n)^k$, we get a number $\mu_i(n)$ of languages in $\set{0, 1}^{(k + 1) \cdot n}$ recognised by programs over a monoid of order $i$ on $\set{0, 1}^{(k + 1) \cdot n}$ with at most $l$ instructions that is in $2^{\Omicron(n^k \log_2(n))}$, which is asymptotically strictly smaller than the number of distinct $K_{n, \sigma_n}$ when the $k$-selector $\sigma_n$ over $n$ varies, namely $2^{n^{k + 1}}$; i.e. $\mu_i(n)$ is in $\omicron(2^{n^{k + 1}})$. Hence, for all $j \in \N_{>0}$, there exist $n_j \in \N$ and a $k$-selector $\tau_j$ over $n_j$ such that no program over a monoid of order $i \in [j]$ on $\set{0, 1}^{(k + 1) \cdot n_j}$ and of length at most $j \cdot ((k + 1) \cdot n_j)^k$ recognises $K_{n_j, \tau_j}$. Moreover, we can assume without loss of generality that the sequence $(n_j)_{j \in \N_{>0}}$ is increasing. Let $\Delta = (\sigma_n)_{n \in \N}$ be such that $\sigma_{n_j} = \tau_j$ for all $j \in \N_{>0}$ and $\sigma_n\colon [n]^k \to \powerset{[n]}, \rho \mapsto \emptyset$ for any $n \in \N$ not equal to any $n_j$ for $j \in \N_{>0}$. We show that no sequence of programs over a finite monoid of length $\Omicron(n^k)$ can recognise $L_\Delta$. Suppose for contradiction that such a sequence exists and let $i$ be the order of the monoid. Let $j \in \N, j \geq i$ be such that for any $n \in \N$, the $n$-th program has length at most $j \cdot n^k$. But, by construction, we know that there does not exist any such program on $\set{0, 1}^{(k + 1) \cdot n_j}$ recognising $K_{n_j, \tau_j}$, a contradiction. This implies the following hierarchy, using the fact that for all $k \in \N$ and all $d \in \N, d \leq \ceiling{\frac{k}{2}} - 1$, any monoid from $\FMVJ[2 d + 1]$ is also a monoid from $\FMVJ[k]$ (since $d \leq \ceiling{\frac{k}{2}} - 1$ implies $2 d + 1 \leq k$).
\begin{proposition} \label{ptn:P(J)_strict_hierarchy} For all $k \in \N$, we have $\Prog{\FMVJ, n^k} \subset \Prog{\FMVJ, n^{k + 1}}$. More precisely, for all $k \in \N$ and $d \in \N, d \leq \ceiling{\frac{k}{2}} - 1$, we have $\Prog{\FMVJ[k], n^d} \subset \Prog{\FMVJ[k], n^{d + 1}}$. \end{proposition} \subsection{Collapse} Looking at Proposition~\ref{ptn:P(J)_strict_hierarchy}, it may seem rather strange at first glance that, for each $k \in \N$, we can only prove strictness of the hierarchy inside $\Prog{\FMVJ[k]}$ up to exponent $\ceiling{\frac{k}{2}}$. We now show, in a way similar to Subsection~5.2 in~\cite{Grosshans-McKenzie-Segoufin-2017}, that in fact $\Prog{\FMVJ[k]}$ does collapse to $\Prog{\FMVJ[k], n^{\ceiling{k / 2}}}$ for all $k \in \N$, showing Proposition~\ref{ptn:P(J)_strict_hierarchy} to be optimal in some sense. \begin{proposition} \label{ptn:P(J)_collapse} Let $k \in \N$. Let $M \in \FMVJ[k]$ and $\Sigma$ be an alphabet. Then there exists a constant $c \in \N_{>0}$ such that any program over $M$ on $\Sigma^n$ for $n \in \N$ is equivalent to a program over $M$ on $\Sigma^n$ of length at most $c \cdot n^{\ceiling{k / 2}}$. In particular, $\Prog{\FMVJ[k]} = \Prog{\FMVJ[k], n^{\ceiling{k / 2}}}$ for all $k \in \N$. \end{proposition} Actually, the equivalent shorter program we give is even a \emph{subprogram} of the original one, i.e. a subsequence of the latter. For $P$ some program over a finite monoid $M$, we may denote by $\evalP{P}$ the function that associates to each possible input word $w$ the word in $M^{\length{P}}$ obtained by successively evaluating the instructions of $P$ for $w$.
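The notions of program, of the word $\evalP{P}(w)$ and of subprogram $P[I]$ can be modelled directly; the following sketch (ours, only meant to fix intuitions) takes monoid elements in the two-element monoid $(\set{0, 1}, \min)$ with identity $1$, which is idempotent and commutative and hence lies in $\FMVJ$:

```python
from functools import reduce

def eval_word(P, w):
    """The word eval_P(w) in M^|P|: evaluate each instruction (i, f) on w.
    Positions i are 1-indexed, as in the text."""
    return [f(w[i - 1]) for (i, f) in P]

def output(P, w, identity=1):
    """P(w): the product (here: min) of the evaluated instructions."""
    return reduce(min, eval_word(P, w), identity)

def subprogram(P, I):
    """P[I]: keep only the instructions whose (1-indexed) index is in I."""
    return [ins for j, ins in enumerate(P, start=1) if j in I]

# Program over ({0,1}, min) on {a,b}^3 recognising {aaa}: each instruction
# outputs 1 on 'a' and 0 otherwise, so P(w) = 1 iff w = "aaa".
f = lambda c: 1 if c == "a" else 0
P = [(1, f), (2, f), (3, f)]
print(output(P, "aaa"))                      # 1
print(output(P, "aba"))                      # 0
print(output(subprogram(P, {1, 3}), "aba"))  # 1: position 2 is skipped
```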
Observe that given $P$ a program over some finite monoid $M$ on $\Sigma^n$ for $n \in \N$ and $\Sigma$ an alphabet, a subprogram $P'$ of $P$ is equivalent to $P$ if and only if for every language $K \subseteq M^*$ recognised by the evaluation morphism $\eta_M$ of $M$, the unique morphism from $M^*$ to $M$ extending the identity on $M$, we have $\evalP{P}(w) \in K \Leftrightarrow \evalP{P'}(w) \in K$ for all $w \in \Sigma^n$. Moreover, every language recognised by $\eta_M$ is precisely a language of $\LVPT[k](M^*)$ when $M \in \FMVJ[k]$ for some $k \in \N$. The result is hence a consequence of the following lemma and the fact that every language in $\LVPT[k](M^*)$ is a union of $\sim_k$-classes, each of those classes corresponding to all words over $M$ having the same set of $k$-subwords, that is finite. \begin{lemma} \label{lem:Program_compression_subword_presence} Let $\Sigma$ be an alphabet and $M$ a finite monoid. For all $k \in \N$, there exists a constant $c \in \N_{>0}$ verifying that for any program $P$ over $M$ on $\Sigma^n$ for $n \in \N$ and any word $t \in M^k$, there exists a subprogram $Q$ of $P$ of length at most $c \cdot n^{\ceiling{k / 2}}$ such that for any subprogram $Q'$ of $P$ that has $Q$ as a subprogram, we have that $t$ is a subword of $\evalP{P}(w)$ if and only if $t$ is a subword of $\evalP{Q'}(w)$ for all $w \in \Sigma^n$. \end{lemma} \begin{proof} A program $P$ over $M$ on $\Sigma^n$ for $n \in \N$ is a finite sequence $(p_i, f_i)$ of instructions where each $p_i$ is a positive natural number which is at most $n$ and each $f_i$ is a function from $\Sigma$ to $M$. We denote by $l$ the number of instructions of $P$. For each set $I \subseteq [l]$ we denote by $P[I]$ the subprogram of $P$ consisting of the subsequence of instructions of $P$ obtained after removing all instructions whose index is not in $I$. When $I = \intinterval{i}{j}$ for some $i, j \in [l]$, we may write $P[i, j]$ instead of $P[I]$. 
We prove the lemma by induction on $k$, fixing the constant to be $c_k = k! \cdot \card{\Sigma}^{\ceiling{k / 2}}$ for a given $k \in \N$. The intuition behind the proof for a program $P$ on inputs of length $n$ and some $t$ of length at least $3$ is as follows. Given $l$ the length of $P$, we will select a subset $I$ of the indices of instructions numbered from $1$ to $l$ to obtain $P[I]$ verifying the conditions of the lemma. Consider all the indices $1 \leq i_1 < i_2 < \cdots < i_s \leq l$ that each correspond, for some letter $a$ and some position $p$ in the input, to the first instruction of $P$ that would output the element $t_1$ when reading $a$ at position $p$ or to the last instruction of $P$ that would output the element $t_k$ when reading $a$ at position $p$. We then have that, given some $w$ as input, $t$ is a subword of $\evalP{P}(w)$ if and only if there exist $1 \leq \gamma < \delta \leq s$ verifying that the element at position $i_\gamma$ of $\evalP{P}(w)$ is $t_1$, the element at position $i_\delta$ of $\evalP{P}(w)$ is $t_k$ and $t_2 \cdots t_{k - 1}$ is a subword of $\evalP{P[i_\gamma + 1, i_\delta - 1]}(w)$. The idea is then that if we set $I$ to contain $i_1, i_2, \ldots, i_s$ as well as all indices obtained by induction for $P[i_j + 1, i_{j + 1} - 1]$ and $t_\alpha \cdots t_\beta$ for all $1 \leq j \leq s - 1$ and $1 < \alpha \leq \beta < k$, we would have that for all $w$, the word $t$ is a subword of $\evalP{P}(w)$ if and only if it is a subword of $\evalP{P[I]}(w)$, that is $\evalP{P}(w)$ where only the elements at indices in $I$ have been kept. The length upper bound of the order of $n^{\ceiling{k / 2}}$ would be met because the number of possible values for $j$ is $s - 1$, hence at most linear in $n$, and the number of possible values for $(\alpha, \beta)$ is quadratic in $k$, a constant. The intuition behind the proof when $t$ is of length less than $3$ is essentially the same, but without induction. 
\paragraph{Inductive step.} Let $k \in \N, k \geq 3$ and assume the lemma proven for all $k' \in \N, k' < k$. Let $P$ be a program over $M$ on $\Sigma^n$ for $n \in \N$ of length $l \in \N$ and some word $t \in M^k$. Observe that when $n = 0$, we necessarily have $P = \emptyword$, so that the lemma is trivially proven in that case. So we now assume $n > 0$. For each $p \in [n]$ and each $a \in \Sigma$ consider within the sequence of instructions of $P$ the first instruction of the form $(p, f)$ with $f(a) = t_1$ and the last instruction of that form with $f(a) = t_k$, if they exist. We let $I_{(1, k)}$ be the set of indices of these instructions for all $a$ and $p$. Notice that the size of $I_{(1, k)}$ is at most $2 \cdot \card{\Sigma} \cdot n$. Let $s = \card{I_{(1, k)}}$ and let us denote $I_{(1, k)} = \set{i_1, i_2, \ldots, i_s}$ where $i_1 < i_2 < \cdots < i_s$. Given $\alpha, \beta \in [k]$, we also set $t^{(\alpha, \beta)} = t_\alpha t_{\alpha + 1} \cdots t_\beta$. For all $\alpha, \beta \in [k]$ such that $1 < \alpha \leq \beta < k$ and $j \in [s - 1]$, we let $J_{j, (\alpha, \beta)}$ be the set of indices of the instructions within $P[i_j + 1, i_{j + 1} - 1]$ appearing in its subprogram obtained by induction for $P[i_j + 1, i_{j + 1} - 1]$ and $t^{(\alpha, \beta)}$. We now let $I$ be the union of $I_{(1, k)}$ and $J_{j, (\alpha, \beta)}' = \set{e + i_j \mid e \in J_{j, (\alpha, \beta)}}$ for all $\alpha, \beta \in [k]$ such that $1 < \alpha \leq \beta < k$ and $j \in [s - 1]$ (the translation being required because the first instruction in $P[i_j + 1, i_{j + 1} - 1]$ is the $(i_j + 1)$-th instruction in $P$). We claim that $Q = P[I]$, a subprogram of $P$, has the desired properties. First notice that by induction the size of $J_{j, (\alpha, \beta)}'$ for all $\alpha, \beta \in [k]$ such that $1 < \alpha \leq \beta < k$ and $j \in [s - 1]$ is upper bounded by \[ (\beta - \alpha + 1)! 
\cdot \card{\Sigma}^{\ceiling{(\beta - \alpha + 1) / 2}} \cdot n^{\ceiling{(\beta - \alpha + 1) / 2}} \leq (k - 2)! \cdot \card{\Sigma}^{\ceiling{(k - 2) / 2}} \cdot n^{\ceiling{(k - 2) / 2}} \displaypunct{.} \] Hence, the size of $I$ is at most \begin{align*} & \card{I_{(1, k)}} + \sum_{j = 1}^{s - 1} \sum_{1 < \alpha \leq \beta < k} \card{J_{j, (\alpha, \beta)}'}\\ \leq & 2 \cdot \card{\Sigma} \cdot n + (2 \cdot \card{\Sigma} \cdot n - 1) \cdot \frac{(k - 1) \cdot (k - 2)}{2} \cdot (k - 2)! \cdot \card{\Sigma}^{\ceiling{(k - 2) / 2}} \cdot n^{\ceiling{(k - 2) / 2}}\!\!\\ \leq & 2 \cdot \card{\Sigma} \cdot n + (2 \cdot \card{\Sigma} \cdot n - 1) \cdot \frac{k \cdot (k - 1)}{2} \cdot (k - 2)! \cdot \card{\Sigma}^{\ceiling{(k - 2) / 2}} \cdot n^{\ceiling{(k - 2) / 2}}\\ \leq & k! \cdot \card{\Sigma}^{\ceiling{k / 2}} \cdot n^{\ceiling{k / 2}} = c_k \cdot n^{\ceiling{k / 2}} \end{align*} as $\card{\set{(\alpha, \beta) \in \N^2 \mid 1 < \alpha \leq \beta < k}} = \sum_{j = 2}^{k - 1} (k - j) = \sum_{j = 1}^{k - 2} j = \frac{(k - 1) \cdot (k - 2)}{2}$ and $2 \cdot \card{\Sigma} \cdot n \leq \frac{k!}{2} \cdot \card{\Sigma}^{\ceiling{(k - 2) / 2}} \cdot n^{\ceiling{(k - 2) / 2}}$ since $k \geq 3$, so that $P[I]$ has at most the required length. Let $Q'$ be a subprogram of $P$ that has $Q$ as a subprogram: it means that there exists some set $I' \subseteq [l]$ containing $I$ such that $Q' = P[I']$. Take $w \in \Sigma^n$. Assume now that $t$ is a subword of $\evalP{P}(w)$. It means that there exist $r_1, r_2, \ldots, \allowbreak r_k \in [l]$, $r_1 < r_2 < \cdots < r_k$, such that for all $j \in [k]$, we have $f_{r_j}(w_{p_{r_j}}) = t_j$. By definition of $I_{(1, k)}$, there exist $\gamma, \delta \in [s], \gamma < \delta$, such that $i_\gamma \leq r_1 < r_k \leq i_\delta$ and $f_{i_\gamma}(w_{p_{i_\gamma}}) = t_1$ and $f_{i_\delta}(w_{p_{i_\delta}}) = t_k$. 
For each $j \in \intinterval{\gamma}{\delta - 1}$, let $m_j \in \intinterval{2}{k}$ be the smallest integer in $\intinterval{2}{k - 1}$ such that $i_j \leq r_{m_j} < i_{j + 1}$, or $k$ if no such integer exists, and let $M_j \in \intinterval{1}{k - 1}$ be the largest integer in $\intinterval{2}{k - 1}$ such that $i_j \leq r_{M_j} < i_{j + 1}$, or $1$ if no such integer exists. Observe that, since for each $j \in \intinterval{\gamma}{\delta - 1}$, we have $t^{(m_j, M_j)} = t^{(k, 1)} = \emptyword$ if there does not exist any $o \in \intinterval{2}{k - 1}$ verifying $i_j \leq r_o < i_{j + 1}$, it holds that $t^{(2, k - 1)} = \prod_{j = \gamma}^{\delta - 1} t^{(m_j, M_j)}$. For all $j \in \intinterval{\gamma}{\delta - 1}$, we have that for any set $J \subseteq [i_{j + 1} - i_j - 1]$ containing $\bigcup_{1 < \alpha \leq \beta < k} J_{j, (\alpha, \beta)}$, the word $t^{(m_j, M_j)}$ is a subword of $f_{i_j}(w_{p_{i_j}}) \evalP{P[i_j + 1, i_{j + 1} - 1][J]}(w)$ when $m_j < k$ and $r_{m_j} = i_j$, and of $\evalP{P[i_j + 1, i_{j + 1} - 1][J]}(w)$ otherwise. Indeed, let $j \in \intinterval{\gamma}{\delta - 1}$. \begin{itemize} \item If $m_j < k$ and $r_{m_j} = i_j$, then $f_{i_j}(w_{p_{i_j}}) = f_{r_{m_j}}(w_{p_{r_{m_j}}}) = t_{m_j}$ and $i_j = r_{m_j} < r_{m_j + 1} < \cdots < r_{M_j} < i_{j + 1}$, so $t^{(m_j + 1, M_j)}$ is a subword of $\evalP{P[i_j + 1, i_{j + 1} - 1]}(w)$. This implies, directly when $m_j = M_j$ or by induction otherwise, that for any set $J \subseteq [i_{j + 1} - i_j - 1]$ containing $\bigcup_{1 < \alpha \leq \beta < k} J_{j, (\alpha, \beta)}$, the word $t^{(m_j + 1, M_j)}$ is a subword of $\evalP{P[i_j + 1, i_{j + 1} - 1][J]}(w)$. This implies in turn that $t^{(m_j, M_j)}$ is a subword of $f_{i_j}(w_{p_{i_j}}) \evalP{P[i_j + 1, i_{j + 1} - 1][J]}(w)$.
\item Otherwise, when $m_j = k$, there does not exist any $o \in \intinterval{2}{k - 1}$ verifying $i_j \leq r_o < i_{j + 1}$, so $t^{(m_j, M_j)} = \emptyword$ is trivially a subword of $\evalP{P[i_j + 1, i_{j + 1} - 1][J]}(w)$ for any set $J \subseteq [i_{j + 1} - i_j - 1]$ containing $\bigcup_{1 < \alpha \leq \beta < k} J_{j, (\alpha, \beta)}$. And when $m_j < k$ but $r_{m_j} \neq i_j$, it means that $r_{m_j} > i_j$, hence $i_j < r_{m_j} < r_{m_j + 1} < \cdots < r_{M_j} < i_{j + 1}$, so $t^{(m_j, M_j)}$ is a subword of $\evalP{P[i_j + 1, i_{j + 1} - 1]}(w)$. This implies, by induction, that $t^{(m_j, M_j)}$ is a subword of $\evalP{P[i_j + 1, i_{j + 1} - 1][J]}(w)$ for any set $J \subseteq [i_{j + 1} - i_j - 1]$ containing $\bigcup_{1 < \alpha \leq \beta < k} J_{j, (\alpha, \beta)}$. \end{itemize} Therefore, using the convention that $i_0 = 0$ and $i_{s + 1} = l + 1$, if we define, for each $j \in \intinterval{0}{s}$, the set $I_j' = \set{e - i_j \mid e \in I', i_j < e < i_{j + 1}}$ as the subset of $I'$ of elements strictly between $i_j$ and $i_{j + 1}$ translated by $-i_j$, we have that $t^{(2, k - 1)}$ is a subword of \begin{align*} \evalP{P[i_\gamma + 1, i_{\gamma + 1} - 1][I_\gamma']}(w) & f_{i_{\gamma + 1}}(w_{p_{i_{\gamma + 1}}}) \evalP{P[i_{\gamma + 1} + 1, i_{\gamma + 2} - 1][I_{\gamma + 1}']}(w) \cdots\\ & f_{i_{\delta - 1}}(w_{p_{i_{\delta - 1}}}) \evalP{P[i_{\delta - 1} + 1, i_\delta - 1][I_{\delta - 1}']}(w) \end{align*} (since we have $r_{m_\gamma} \geq r_2 > r_1 \geq i_\gamma$), so that, as $f_{i_\gamma}(w_{p_{i_\gamma}}) = t_1$ and $f_{i_\delta}(w_{p_{i_\delta}}) = t_k$, we have that $t = t_1 t^{(2, k - 1)} t_k$ is a subword of \begin{align*} & \evalP{P[1, i_1 - 1][I_0']}(w) f_{i_1}(w_{p_{i_1}}) \evalP{P[i_1 + 1, i_2 - 1][I_1']}(w) \cdots f_{i_s}(w_{p_{i_s}}) \evalP{P[i_s + 1, l][I_s']}(w)\\ = & \evalP{P[I']}(w) \displaypunct{.} \end{align*} Assume finally that $t$ is a subword of $\evalP{P[I']}(w)$. 
Then it is obviously a subword of $\evalP{P}(w)$, as $\evalP{P[I']}(w)$ is a subword of $\evalP{P}(w)$. Therefore, $t$ is a subword of $\evalP{P}(w)$ if and only if $t$ is a subword of $\evalP{Q'}(w) = \evalP{P[I']}(w)$, as desired. \paragraph{Base case.} There are three subcases to consider. \emph{Subcase $k = 2$.} Let $P$ be a program over $M$ on $\Sigma^n$ for $n \in \N$ of length $l \in \N$ and some word $t \in M^2$. We use the same idea as in the inductive step. Observe that when $n = 0$, we necessarily have $P = \emptyword$, so that the lemma is trivially proven in that case. So we now assume $n > 0$. For each $p \in [n]$ and each $a \in \Sigma$ consider within the sequence of instructions of $P$ the first instruction of the form $(p, f)$ with $f(a) = t_1$ and the last instruction of that form with $f(a) = t_2$, if they exist. We let $I$ be the set of indices of these instructions for all $a$ and $p$. Notice that the size of $I$ is at most $2 \cdot \card{\Sigma} \cdot n = 2! \cdot \card{\Sigma}^{\ceiling{2 / 2}} \cdot n^{\ceiling{2 / 2}} = c_2 \cdot n^{\ceiling{2 / 2}}$. We claim that $Q = P[I]$, a subprogram of $P$, has the desired properties. We just showed it has at most the required length. Let $Q'$ be a subprogram of $P$ that has $Q$ as a subprogram: it means that there exists some set $I' \subseteq [l]$ containing $I$ such that $Q' = P[I']$. Take $w \in \Sigma^n$. Assume now that $t$ is a subword of $\evalP{P}(w)$. It means there exist $i_1, i_2 \in [l], i_1 < i_2$ such that $f_{i_1}(w_{p_{i_1}}) = t_1$ and $f_{i_2}(w_{p_{i_2}}) = t_2$. By definition of $I$, there exist ${i_1}', {i_2}' \in I$, such that ${i_1}' \leq i_1 < i_2 \leq {i_2}'$ and $f_{{i_1}'}(w_{p_{{i_1}'}})= t_1$ and $f_{{i_2}'}(w_{p_{{i_2}'}}) = t_2$. Hence, as $f_{{i_1}'}(w_{p_{{i_1}'}}) f_{{i_2}'}(w_{p_{{i_2}'}})$ is a subword of $\evalP{P[I']}(w)$ (because $I \subseteq I'$), we get that $t = t_1 t_2$ is a subword of $\evalP{P[I']}(w)$. 
Assume finally that $t$ is a subword of $\evalP{P[I']}(w)$. Then it is obviously a subword of $\evalP{P}(w)$, as $\evalP{P[I']}(w)$ is a subword of $\evalP{P}(w)$. Therefore, $t$ is a subword of $\evalP{P}(w)$ if and only if $t$ is a subword of $\evalP{Q'}(w) = \evalP{P[I']}(w)$, as desired. \emph{Subcase $k = 1$.} Let $P$ be a program over $M$ on $\Sigma^n$ for $n \in \N$ of length $l \in \N$ and some word $t \in M^1$. We again use the same idea as before. Observe that when $n = 0$, we necessarily have $P = \emptyword$, so that the lemma is trivially proven in that case. So we now assume $n > 0$. For each $p \in [n]$ and each $a \in \Sigma$ consider within the sequence of instructions of $P$ the first instruction of the form $(p, f)$ with $f(a) = t_1$, if it exists. We let $I$ be the set of indices of these instructions for all $a$ and $p$. Notice that the size of $I$ is at most $\card{\Sigma} \cdot n = 1! \cdot \card{\Sigma}^{\ceiling{1 / 2}} \cdot n^{\ceiling{1 / 2}} = c_1 \cdot n^{\ceiling{1 / 2}}$. We claim that $Q = P[I]$, a subprogram of $P$, has the desired properties. We just showed it has at most the required length. Let $Q'$ be a subprogram of $P$ that has $Q$ as a subprogram: it means that there exists some set $I' \subseteq [l]$ containing $I$ such that $Q' = P[I']$. Take $w \in \Sigma^n$. Assume now that $t$ is a subword of $\evalP{P}(w)$. It means there exists $i \in [l]$ such that $f_i(w_{p_i}) = t_1$. By definition of $I$, there exists $i' \in I$ such that $i' \leq i$ and $f_{i'}(w_{p_{i'}}) = t_1$. Hence, as $f_{i'}(w_{p_{i'}})$ is a subword of $\evalP{P[I']}(w)$ (because $I \subseteq I'$), we get that $t = t_1$ is a subword of $\evalP{P[I']}(w)$. Assume finally that $t$ is a subword of $\evalP{P[I']}(w)$. Then it is obviously a subword of $\evalP{P}(w)$, as $\evalP{P[I']}(w)$ is a subword of $\evalP{P}(w)$. Therefore, $t$ is a subword of $\evalP{P}(w)$ if and only if $t$ is a subword of $\evalP{Q'}(w) = \evalP{P[I']}(w)$, as desired.
\emph{Subcase $k = 0$.} Let $P$ be a program over $M$ on $\Sigma^n$ for $n \in \N$ of length $l \in \N$ and some word $t \in M^0$. We claim that $Q = \emptyword$, a subprogram of $P$, has the desired properties. First notice that the length of $Q$ is $0 \leq 0! \cdot \card{\Sigma}^{\ceiling{0 / 2}} \cdot n^{\ceiling{0 / 2}} = c_0 \cdot n^{\ceiling{0 / 2}}$, at most the required length. Let $Q'$ be a subprogram of $P$ that has $Q$ as a subprogram. As $t \in M^0$, we necessarily have that $t = \emptyword$, which is a subword of any word in $M^*$. Therefore, we immediately get that for all $w \in \Sigma^n$, the word $t$ is a subword of $\evalP{P}(w)$ if and only if $t$ is a subword of $\evalP{Q'}(w)$, as desired. \end{proof} \section{Regular Languages in \texorpdfstring{$\Prog{\FMVJ}$}{P(J)}} \label{sec:Regular_languages} The second part of our investigation of the computational power of programs over monoids in $\FMVJ$ is dedicated to understanding exactly what regular languages can be \emph{p}-recognised by monoids in $\FMVJ$. \subsection{Non-tameness of \texorpdfstring{$\FMVJ$}{J}} \label{sse:Non-tameness_of_J} It is shown in~\cite{Grosshans-McKenzie-Segoufin-2017} that $\Prog{\FMVJ} \cap \Reg \nsubseteq \DLang{\StVQJ}$, thus giving an example of a well-known subvariety of $\FMVA$ for which \emph{p}-recognition allows one to do unexpected things when recognising a regular language. How far does this unexpected power go? The first thing to notice is that, though none of them is in $\DLang{\StVQJ}$, all languages of the form $\Sigma^* u$ and $u \Sigma^*$ for $\Sigma$ an alphabet and $u \in \Sigma^+$ are in $\Prog{\FMVJ}$. Indeed, each of them can be recognised by a sequence of constant-length programs over the syntactic monoid of $u \shuffle \Sigma^*$: for every input length, just output the image, through the syntactic morphism of $u \shuffle \Sigma^*$, of the word formed by the $\length{u}$ first or last letters. 
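As a concrete illustration of this construction, here is a minimal Python sketch for the case $u \Sigma^*$ (encoding and names are ours: elements of the syntactic monoid of $u \shuffle \Sigma^*$ are represented as state transformations of its minimal automaton, whose state records the length of the longest prefix of $u$ seen as a subword).

```python
def image(letter, u):
    """Image of a letter under the syntactic morphism of the shuffle ideal
    of u: its transformation of the states 0..len(u) of the minimal DFA,
    where state s is the length of the longest prefix of u seen as a subword."""
    return tuple(s + 1 if s < len(u) and letter == u[s] else s
                 for s in range(len(u) + 1))

def compose(e, f):
    """Monoid multiplication: apply transformation e, then f."""
    return tuple(f[e[s]] for s in range(len(e)))

def starts_with(w, u):
    """Constant-length program idea for u·Sigma*: query only the first
    len(u) input positions, multiply their images, and accept when the
    product sends state 0 to state len(u)."""
    if len(w) < len(u):
        return False
    m = tuple(range(len(u) + 1))      # identity element of the monoid
    for a in w[:len(u)]:
        m = compose(m, image(a, u))
    return m[0] == len(u)
```

Only the $\length{u}$ first positions are ever queried, matching the constant-length programs just described: the product maps state $0$ to $\length{u}$ exactly when $u$ is a subword of the first $\length{u}$ letters, i.e.\ when $w$ starts with $u$. The case $\Sigma^* u$ is symmetric, querying the $\length{u}$ last positions instead.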
So, informally stated, programs over monoids in $\FMVJ$ can check for some constant-length beginning or ending of their input words. But they can do much more. Indeed, the language $(a + b)^* a c^+$ does not belong to $\DLang{\StVQJ}$ (compute the stable monoid), yet it is in $\Prog{\FMVJ}$. The crucial insight is that it can be program-reduced in linear length to the piecewise testable language of all words over $\set{a, b, c}$ having $ca$ as a subword but not the subwords $cca$, $caa$ and $cb$ by using the following trick (that we shall call ``feedback-sweeping'') for input length $n \in \N$: read the input letters in the order $2, 1, 3, 2, 4, 3, 5, 4, \ldots, n, n - 1$, output the letters read. This has already been observed in~\cite[Proposition 5]{Grosshans-McKenzie-Segoufin-2017}; here we give a formal proof of the following lemma. \begin{lemma} \label{lem:Unexpected_language_in_P(J)} $(a + b)^* a c^+ \in \Prog{\FMVJ, n}$. \end{lemma} \begin{proof} Let $\Sigma = \set{a, b, c}$. Let \[ L = ca \shuffle \Sigma^* \cap (cca \shuffle \Sigma^*)^\complement \cap (caa \shuffle \Sigma^*)^\complement \cap (cb \shuffle \Sigma^*)^\complement \] be the language of all words over $\Sigma$ having $ca$ as a subword but not the subwords $cca$, $caa$ and $cb$, that by construction is piecewise testable, i.e. belongs to $\DLang{\FMVJ}$. We are now going to build a program-reduction from $(a + b)^* a c^+$ to $L$. Let $n \in \N$. If $n \leq 1$, we set $\Psi_n$ to be $\emptyword$, the empty $\Sigma$-program on $\Sigma^n$. Otherwise, if $n \geq 2$, we set \[ \Psi_n = (2, \id_\Sigma) (1, \id_\Sigma) (3, \id_\Sigma) (2, \id_\Sigma) (4, \id_\Sigma) (3, \id_\Sigma) \cdots (n, \id_\Sigma) (n - 1, \id_\Sigma) \displaypunct{.} \] Let us define $s\colon \N \to \N$ by $s(n) = \length{\Psi_n}$ for all $n \in \N$, which is such that \[ s(n) = \begin{cases} 0 & \text{if $n \leq 1$}\\ 2 n - 2 & \text{otherwise ($n \geq 2$)} \end{cases} \] for all $n \in \N$. Fix $n \in \N$. 
Let $w \in ((a + b)^* a c^+)^{=n}$: it means $n \geq 2$ and there exist $u \in (a + b)^{n_1}$ with $n_1 \in \intinterval{0}{n - 2}$ and $n_2 \in \intinterval{0}{n - 2}$ such that $w = u a c c^{n_2}$ and $n_1 + n_2 = n - 2$. We therefore have \[ \Psi_n(w) = \begin{cases} c a c^{2 n_2} & \text{when $n_1 = 0$}\\ u_2 u_1 \cdots u_{n_1} u_{n_1 - 1} a u_{n_1} c a c^{2 n_2} & \text{otherwise ($n_1 > 0$)} \displaypunct{,} \end{cases} \] a word easily seen to belong to $L^{= 2 n - 2}$. Since this is true for all $w \in ((a + b)^* a c^+)^{=n}$, it follows that $((a + b)^* a c^+)^{=n} \subseteq \Psi_n^{-1}(L^{=s(n)})$. Let conversely $w \in \Psi_n^{-1}(L^{=s(n)})$. Since this means that $\Psi_n(w) \in L^{=s(n)}$, we necessarily have $n \geq 2$ as it must contain $ca$ as a subword, so that \[ \Psi_n(w) = w_2 w_1 w_3 w_2 w_4 w_3 \cdots w_n w_{n - 1} \displaypunct{.} \] Let $i, j \in [n]$ be such that $w_i = c$, $w_j = a$ and $w_i w_j$ is a subword of $\Psi_n(w)$. This means that $j \geq i - 1$, and we will now show that, actually, $j = i - 1$. Assume that $j \geq i + 2$. If $j \leq n - 1$, then, by construction, $w_j$ appears twice in $\Psi_n(w)$ after an occurrence of $w_i$, so that $w_i w_j w_j = c a a$ would be a subword of $\Psi_n(w)$; if $j = n$, a similar inspection of $\Psi_n(w)$ (using the two occurrences of $w_i$ when $i \geq 2$, or the occurrences of $w_2$ when $i = 1$) shows that one of $c c a$, $c a a$ and $c b$ would be a subword of $\Psi_n(w)$. In both cases, this contradicts the fact that $\Psi_n(w)$ belongs to $L$. Assume otherwise that $j = i + 1$; by construction, this would either mean that $w_i w_{i - 1} w_{i + 1} w_i$ is a subword of $\Psi_n(w)$, which would imply one of $c a a$, $c b a$ and $c c a$ is a subword of $\Psi_n(w)$, or that $w_{i + 1} w_i w_{i + 2} w_{i + 1}$ is a subword of $\Psi_n(w)$, which would imply one of $c a a$, $c b a$ and $c c a$ is a subword of $\Psi_n(w)$, in both cases contradicting the fact $\Psi_n(w)$ belongs to $L$. Hence, we indeed have $j = i - 1$, and in particular that $i \geq 2$. Now, by construction, for each $t \in [i - 2]$, we have that $w_t w_i w_{i - 1} = w_t c a$ is a subword of $\Psi_n(w) \in L$, so that $w_t$ cannot be equal to $c$. 
Similarly, for each $t \in \intinterval{i + 1}{n}$, we have that $w_i w_{i - 1} w_t = c a w_t$ is a subword of $\Psi_n(w) \in L$, so that $w_t$ must be equal to $c$. This means that $w_1 \cdots w_{i - 2} \in (a + b)^*$ and $w_{i + 1} \cdots w_n \in c^*$, so that $w \in (a + b)^* a c c^* = (a + b)^* a c^+$. Since this is true for all $w \in \Psi_n^{-1}(L^{=s(n)})$, it follows that $((a + b)^* a c^+)^{=n} \supseteq \Psi_n^{-1}(L^{=s(n)})$. Therefore, we have that $((a + b)^* a c^+)^{=n} = \Psi_n^{-1}(L^{=s(n)})$ for all $n \in \N$, so $(\Psi_n)_{n \in \N}$ is a program reduction from $(a + b)^* a c^+$ to $L$ of length $s(n)$. So since $L \in \DLang{\FMVJ}$, we can conclude that $(a + b)^* a c^+ \in \Prog{\FMVJ, s(n)} = \Prog{\FMVJ, n}$ by Proposition~\ref{ptn:Program-reduction_to_regular_language}. \end{proof} Using variants of the ``feedback-sweeping'' reading technique, we can prove that the phenomenon just described is not an isolated case. \begin{lemma} \label{lem:Examples_regular_languages_in_P(J)} The languages $(a + b)^* a c^+$, $(a + b)^* a c^+ a (a + b)^*$, $c^+ a (a + b)^* a c^+$, $(a + b)^* b a c^+$ and $(a + b)^* a c^+ (a + b)^* a c^+$ do all belong to $\Prog{\FMVJ} \setminus \DLang{\StVQJ}$. \end{lemma} Hence, we are tempted to say that there are ``much more'' regular languages in $\Prog{\FMVJ}$ than just those in $\DLang{\StVQJ}$, even though it is not clear to us whether $\DLang{\StVQJ} \subseteq \Prog{\FMVJ}$ or not. But can we show any upper bound on $\Prog{\FMVJ} \cap \Reg$? It turns out that we can, relying on two known results. First, since $\FMVJ \subseteq \FMVDA$, we have $\Prog{\FMVJ} \subseteq \Prog{\FMVDA}$, so Theorem~6 in~\cite{Grosshans-McKenzie-Segoufin-2017}, that states $\Prog{\FMVDA} \cap \Reg = \DLang{\StVQDA}$, implies that $\Prog{\FMVJ} \cap \Reg \subseteq \DLang{\StVQDA}$. Second, let us define an important superclass of the class of piecewise testable languages. 
Let $\Sigma$ be an alphabet and $u_1, \ldots, u_k \in \Sigma^+$ ($k \in \N_{>0}$); we define $\ccddo{u_1, \ldots, u_k} = \Sigma^* u_1 \Sigma^* \cdots \Sigma^* u_k \Sigma^*$. The \emph{class of dot-depth one languages} is the class of Boolean combinations of languages of the form $\Sigma^* u$, $u \Sigma^*$ and $\ccddo{u_1, \ldots, u_k}$ for $\Sigma$ an alphabet, $k \in \N_{>0}$ and $u, u_1, \ldots, u_k \in \Sigma^+$. The inclusion-wise smallest variety of semigroups containing all syntactic semigroups of dot-depth one languages is denoted by $\FSVJsdD$, and $\DLang{\FSVJsdD}$ is exactly the class of dot-depth one languages. (See~\cite{Straubing-1985,Maciel-Peladeau-Therien-2000,Pin-2017}.) It has been shown in~\cite[Corollary 8]{Maciel-Peladeau-Therien-2000} that $\Prog{\FSVJsdD} \cap \Reg = \DLang{\StVQuasi(\FSVJsdD)}$ (if we extend the program-over-monoid formalism in the obvious way to finite semigroups). Now, we have $\FMVJ \subseteq \FSVJsdD$, so that $\Prog{\FMVJ} \subseteq \Prog{\FSVJsdD}$ and hence $\Prog{\FMVJ} \cap \Reg \subseteq \DLang{\StVQuasi(\FSVJsdD)}$. To summarise, we have the following. \begin{proposition} $\Prog{\FMVJ} \cap \Reg \subseteq \DLang{\StVQDA} \cap \DLang{\StVQuasi(\FSVJsdD)}$. \end{proposition} In fact, we conjecture that the converse inclusion also holds. \begin{conjecture} \label{cjt:Regular_languages_in_P(J)} $\Prog{\FMVJ} \cap \Reg = \DLang{\StVQDA} \cap \DLang{\StVQuasi(\FSVJsdD)}$. \end{conjecture} Why do we think this should be true? 
For a given alphabet $\Sigma$, we cannot decide with programs over monoids in $\FMVJ$ whether some word $u \in \Sigma^+$ of length at least $2$ appears as a factor of any given word $w$ in $\Sigma^*$ (because $\Sigma^* u \Sigma^* \notin \DLang{\StVQDA}$). However, Lemma~\ref{lem:Examples_regular_languages_in_P(J)} and the possibilities offered by the ``feedback-sweeping'' technique give the impression that we can do it when we are guaranteed that $u$ appears at most a fixed number of times in $w$, and this restriction seems to be precisely what dot-depth one languages undergo when required to belong to $\DLang{\StVQDA}$. This intuition motivates the definition of \emph{threshold dot-depth one languages}. \subsection{Threshold dot-depth one languages} The idea behind the definition of threshold dot-depth one languages is that we take the basic building blocks of dot-depth one languages, of the form $\ccddo{u_1, \ldots, u_k}$ for an alphabet $\Sigma$, for $k \in \N_{>0}$ and $u_1, \ldots, u_k \in \Sigma^+$, and restrict them so that, given $l \in \N_{>0}$, membership of a word really depends on the presence of a given word $u_i$ as a factor if and only if it appears fewer than $l$ times as a subword. \begin{definition} Let $\Sigma$ be an alphabet. For all $u \in \Sigma^+$ and $l \in \N_{>0}$, we define $\ccddo{u}_l$ to be the language of words over $\Sigma$ containing $u^l$ as a subword or $u$ as a factor, i.e. $\ccddo{u}_l = \Sigma^* u \Sigma^* \cup u^l \shuffle \Sigma^*$. Then, for all $u_1, \ldots, u_k \in \Sigma^+$ ($k \in \N, k \geq 2$) and $l \in \N_{>0}$, we define $\ccddo{u_1, \ldots, u_k}_l = \ccddo{u_1}_l \cdots \ccddo{u_k}_l$. \end{definition} Obviously, for each $\Sigma$ an alphabet, $k, l \in \N_{>0}$ and $u_1, \ldots, u_k \in \Sigma^+$, the language $\ccddo{u_1, \ldots, u_k}_l$ equals $u_1 \cdots u_k \shuffle \Sigma^*$ when $l = 1$ or $u_1, \ldots, u_k$ are all restricted to one letter. 
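Operationally, membership in these languages can be decided by brute force; the following Python sketch (function names ours, a naive reference implementation unrelated to the program constructions studied in this paper) makes the definition concrete.

```python
def is_subword(t, s):
    """Does t occur as a (scattered) subword of s?"""
    it = iter(s)
    return all(x in it for x in t)

def in_block(w, u, l):
    """Membership in the building block <u>_l: u occurs as a factor of w,
    or u^l occurs as a subword of w."""
    return u in w or is_subword(u * l, w)

def in_threshold(w, us, l):
    """Membership in <u_1, ..., u_k>_l = <u_1>_l ... <u_k>_l,
    checked naively by trying every split of w."""
    if not us:
        return w == ""
    return any(in_block(w[:i], us[0], l) and in_threshold(w[i:], us[1:], l)
               for i in range(len(w) + 1))
```

For instance, \texttt{in\_block("axb", "ab", 1)} holds while \texttt{in\_block("axb", "ab", 2)} does not, illustrating the remark just made that $\ccddo{u}_1$ is simply the shuffle ideal of $u$.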
Over $\set{a, b, c}$, the language $\ccddo{ab, c}_3$ contains exactly the words in which some letter $c$ is preceded by a prefix in which $ababab$ appears as a subword or $ab$ appears as a factor. Finally, the language $(a + b)^* a c^+$ over $\set{a, b, c}$ of Lemma~\ref{lem:Unexpected_language_in_P(J)} is equal to ${\ccddo{c, a}_2}^\complement \cap {\ccddo{c, b}_2}^\complement \cap \ccddo{ac}_2$. We then define a \emph{threshold dot-depth one language} as any Boolean combination of languages of the form $\Sigma^* u$, $u \Sigma^*$ and $\ccddo{u_1, \ldots, u_k}_l$ for $\Sigma$ an alphabet, for $k, l \in \N_{>0}$ and $u, u_1, \ldots, u_k \in \Sigma^+$. Confirming the intuition briefly given above, the technique of ``feedback-sweeping'' can indeed be pushed further to prove that the whole class of threshold dot-depth one languages is contained in $\Prog{\FMVJ}$, and we dedicate the remainder of this section to proving it. Concerning Conjecture~\ref{cjt:Regular_languages_in_P(J)}, our intuition leads us to believe that, in fact, the class of threshold dot-depth one languages with additional positional modular counting is exactly $\DLang{\StVQDA} \cap \DLang{\StVQuasi(\FSVJsdD)}$. In support of this belief, in the next section (Section~\ref{sec:Algebraic_characterisation_TDDO_languages}) we prove that the class of threshold dot-depth one languages is exactly $\DLang{\FMVDA} \cap \DLang{\FSVJsdD}$. Let us now move on to the proof of the following theorem. \begin{theorem} \label{thm:TDDO_languages_in_P(J)} Every threshold dot-depth one language belongs to $\Prog{\FMVJ}$. 
\end{theorem} As $\Prog{\FMVJ}$ is closed under Boolean operations (Proposition~\ref{lemma-simple-closure-P}), our goal is to prove, given an alphabet $\Sigma$, given $l \in \N_{>0}$ and $u_1, \ldots, u_k \in \Sigma^+$ ($k \in \N_{>0}$), that $\ccddo{u_1, \ldots, u_k}_l$ is in $\Prog{\FMVJ}$; the case of $\Sigma^* u$ and $u \Sigma^*$ for $u \in \Sigma^+$ is easily handled (see the discussion at the beginning of Subsection~\ref{sse:Non-tameness_of_J}). To do this, we need to put $\ccddo{u_1, \ldots, u_k}_l$ in some normal form. It is readily seen that $\ccddo{u_1, \ldots, u_k}_l = \bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}$ where the $L^{(l)}_{(u_i, q_i)}$'s are defined below. \begin{definition} \label{def:TDDO_alternative} Let $\Sigma$ be an alphabet. For all $u \in \Sigma^+$, $l \in \N_{>0}$ and $\alpha \in [l]$, set $L^{(l)}_{(u, \alpha)} = \begin{cases} \Sigma^* u \Sigma^* & \text{if $\alpha < l$}\\ u^l \shuffle \Sigma^* & \text{otherwise} \end{cases}$. \end{definition} However, directly building a sequence of programs over a monoid in $\FMVJ$ that decides $L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}$ for some alphabet $\Sigma$ and $q_1, \ldots, q_k \in \set{1, l}$ seems tricky. We need to split things further by controlling precisely how many times each $u_i$ for $i \in [k]$ appears in the right place when it does so fewer than $l$ times. To do this, we consider, for each $\alpha \in [l]^k$, the language $R_l^\alpha(u_1, \ldots, u_k)$ defined below. \begin{definition} Let $\Sigma$ be an alphabet. 
For all $u_1, \ldots, u_k \in \Sigma^+$ ($k \in \N_{>0}$), $l \in \N_{>0}$, $\alpha \in [l]^k$, we set \begin{align*} R_l^\alpha(u_1, \ldots, u_k) = & ({u_1}^{\alpha_1} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^* \cap\\ & \bigcap_{i \in [k], \alpha_i < l} \bigl(({u_1}^{\alpha_1} \cdots {u_i}^{\alpha_i + 1} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*\bigr)^\complement \displaypunct{.} \end{align*} \end{definition} Now, for a given $\alpha \in [l]^k$, we are interested in the words of $R_l^\alpha(u_1, \ldots, u_k)$ such that for each $i \in [k]$ verifying $\alpha_i < l$, the word $u_i$ indeed appears as a factor in the right place. We thus introduce a last language $S_l^\alpha(u_1, \ldots, u_k)$ defined as follows. \begin{definition} Let $\Sigma$ be an alphabet. For all $u_1, \ldots, u_k \in \Sigma^+$ ($k \in \N_{>0}$), $l \in \N_{>0}$, $\alpha \in [l]^k$, we set \[ S_l^\alpha(u_1, \ldots, u_k) = \bigcap_{i \in [k], \alpha_i < l} \bigl(({u_1}^{\alpha_1} \cdots {u_{i - 1}}^{\alpha_{i - 1}}) \shuffle \Sigma^*\bigr) u_i \bigl(({u_{i + 1}}^{\alpha_{i + 1}} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*\bigr) . \] \end{definition} We now have the normal form we were looking for to prove Theorem~\ref{thm:TDDO_languages_in_P(J)}: $\ccddo{u_1, \ldots, u_k}_l$ is equal to the union, over all $\alpha \in [l]^k$, of the intersection of $R_l^\alpha(u_1, \ldots, u_k)$ and $S_l^\alpha(u_1, \ldots, u_k)$. Though rather intuitive, the correctness of this decomposition is not so straightforward to prove and, actually, we can only prove it when for each $i \in [k]$, the letters in $u_i$ are all distinct. \begin{lemma} \label{lem:TDDO-Equality} Let $\Sigma$ be an alphabet, $l \in \N_{>0}$ and $u_1, \ldots, u_k \in \Sigma^+$ ($k \in \N_{>0}$) such that for each $i \in [k]$, the letters in $u_i$ are all distinct. 
Then, \[ \bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)} = \bigcup_{\alpha \in [l]^k} \bigl(R_l^\alpha(u_1, \ldots, u_k) \cap S_l^\alpha(u_1, \ldots, u_k)\bigr) \displaypunct{.} \] \end{lemma} \begin{proof} Let $\Sigma$ be an alphabet and $l \in \N_{>0}$. We prove it by induction on $k \in \N_{>0}$. \paragraph{Base case $k = 1$.} Let $u_1 \in \Sigma^+$ such that the letters in $u_1$ are all distinct. It is clear that \begin{align*} & \bigcup_{q_1 \in \set{1, l}} L^{(l)}_{(u_1, q_1)}\\ = & (\Sigma^* u_1 \Sigma^* \cup {u_1}^l \shuffle \Sigma^*)\\ = & \Bigl(\bigcup_{\alpha_1 = 1}^{l - 1} \bigl({u_1}^{\alpha_1} \shuffle \Sigma^* \cap ({u_1}^{\alpha_1 + 1} \shuffle \Sigma^*)^\complement \cap \Sigma^* u_1 \Sigma^*\bigr) \cup ({u_1}^l \shuffle \Sigma^*)\Bigr)\\ = & \bigcup_{\alpha_1 \in [l]} \bigl(R_l^{\alpha_1}(u_1) \cap S_l^{\alpha_1}(u_1)\bigr) \displaypunct{.} \end{align*} \paragraph{Induction.} Let $k \in \N_{>0}$ and assume that for all $u_1, \ldots, u_k \in \Sigma^+$ such that for each $i \in [k]$, the letters in $u_i$ are all distinct, we have \[ \bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)} = \bigcup_{\alpha \in [l]^k} \bigl(R_l^\alpha(u_1, \ldots, u_k) \cap S_l^\alpha(u_1, \ldots, u_k)\bigr) \displaypunct{.} \] Let now $u_1, \ldots, u_{k + 1} \in \Sigma^+$ such that for each $i \in [k + 1]$, the letters in $u_i$ are all distinct. \emph{Right-to-left inclusion.} Let \[ w \in \bigcup_{\alpha \in [l]^{k + 1}} \bigl(R_l^\alpha(u_1, \ldots, u_{k + 1}) \cap S_l^\alpha(u_1, \ldots, u_{k + 1})\bigr) \displaypunct{.} \] Let $\alpha \in [l]^{k + 1}$ witnessing this fact. As $w \in R_l^\alpha(u_1, \ldots, u_{k + 1})$, we can decompose it as $w = x y$ where $x \in ({u_1}^{\alpha_1} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*$ and $y \in {u_{k + 1}}^{\alpha_{k + 1}} \shuffle \Sigma^*$ with $\length{y}$ being minimal. 
What we are going to do is, on the one hand, to prove that $x \in R_l^{\alpha'}(u_1, \ldots, u_k) \cap S_l^{\alpha'}(u_1, \ldots, u_k)$ where $\alpha' = (\alpha_1, \ldots, \alpha_k)$, so that we can apply the inductive hypothesis on $x$ and get that there exist $q_1, \ldots, q_k \in \set{1, l}$ such that $x \in L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}$; and, on the other hand, we are going to prove that there exists $q_{k + 1} \in \set{1, l}$ verifying $y \in L^{(l)}_{(u_{k + 1}, q_{k + 1})}$. We now spell out the details. For each $i \in [k], \alpha_i < l$, we have $x \notin ({u_1}^{\alpha_1} \cdots {u_i}^{\alpha_i + 1} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*$, otherwise we would have $w = x y \in ({u_1}^{\alpha_1} \cdots {u_i}^{\alpha_i + 1} \cdots {u_{k + 1}}^{\alpha_{k + 1}}) \shuffle \Sigma^*$. Also, for all $i \in [k], \alpha_i < l$, we have that $x \in \bigl(({u_1}^{\alpha_1} \cdots {u_{i - 1}}^{\alpha_{i - 1}}) \shuffle \Sigma^*\bigr) u_i \bigl(({u_{i + 1}}^{\alpha_{i + 1}} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*\bigr)$, otherwise it would mean that $y = y_1 y_2$ with $\length{y_1} > 0$, that $x y_1 \in \bigl(({u_1}^{\alpha_1} \cdots \allowbreak {u_{i - 1}}^{\alpha_{i - 1}}) \shuffle \Sigma^*\bigr) u_i \bigl(({u_{i + 1}}^{\alpha_{i + 1}} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*\bigr)$ and $y_2 \in {u_{k + 1}}^{\alpha_{k + 1}} \shuffle \Sigma^*$, contradicting the minimality of $\length{y}$. So $x \in R_l^{\alpha'}(u_1, \ldots, u_k) \cap S_l^{\alpha'}(u_1, \ldots, u_k)$, which means by inductive hypothesis that there exist $q_1, \ldots, q_k \in \set{1, l}$ such that $x \in L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}$. Remember now that the letters in $u_{k + 1}$ are all distinct. If $\alpha_{k + 1} < l$, since $w \in \bigl(({u_1}^{\alpha_1} \ldots {u_k}^{\alpha_k}) \shuffle \Sigma^*\bigr) u_{k + 1} \Sigma^*$, we must have $y \in \Sigma^* u_{k + 1} \Sigma^*$. 
Indeed, by minimality of $\length{y}$, the word $y$ starts with the first letter of $u_{k + 1}$, which has pairwise distinct letters, so that $u_{k + 1}$ cannot appear as a factor of $xy$ partly in $x$ and partly in $y$; so if it were the case that $y$ does not contain $u_{k + 1}$ as a factor, we would have $x \in \bigl(({u_1}^{\alpha_1} \ldots {u_k}^{\alpha_k}) \shuffle \Sigma^*\bigr) u_{k + 1} \Sigma^*$, so that $x y = w \in ({u_1}^{\alpha_1} \ldots {u_k}^{\alpha_k} {u_{k + 1}}^{\alpha_{k + 1} + 1}) \shuffle \Sigma^*$, a contradiction with the hypothesis on $w$. Hence, $y \in L^{(l)}_{(u_{k + 1}, \alpha_{k + 1})}$. If $\alpha_{k + 1} = l$, then $y \in {u_{k + 1}}^{\alpha_{k + 1}} \shuffle \Sigma^* = L^{(l)}_{(u_{k + 1}, \alpha_{k + 1})}$. So, if we set $q_{k + 1} = \begin{cases} 1 & \text{if $\alpha_{k + 1} < l$}\\ l & \text{otherwise} \end{cases}$, then we get that $y \in L^{(l)}_{(u_{k + 1}, q_{k + 1})}$. We can conclude that $w = x y \in L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)} L^{(l)}_{(u_{k + 1}, q_{k + 1})}$. \emph{Left-to-right inclusion.} Let $w \in \bigcup_{q_1, \ldots, q_{k + 1} \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_{k + 1}, q_{k + 1})}$. The rough idea of our proof here is to take $\alpha_{k + 1} \in [l]$ the biggest integer in $[l]$ such that $w \in \bigl(\bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)})\bigr) ({u_{k + 1}}^{\alpha_{k + 1}} \shuffle \Sigma^*)$ and decompose $w$ as $w = x y$ where $x \in \bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}$ and $y \in {u_{k + 1}}^{\alpha_{k + 1}} \shuffle \Sigma^*$ with $\length{y}$ being minimal. 
By inductive hypothesis, we know there exists $\alpha \in [l]^k$ such that $x \in R_l^\alpha (u_1, \ldots, u_k) \cap S_l^\alpha (u_1, \ldots, u_k)$ and we then prove that $x y \in R_l^{(\alpha_1, \ldots, \alpha_{k + 1})}(u_1, \ldots, u_{k + 1}) \cap S_l^{(\alpha_1, \ldots, \alpha_{k + 1})}(u_1, \ldots, u_{k + 1})$ by distinguishing between the case in which $\alpha_{k + 1} = l$ and the case in which $\alpha_{k + 1} < l$. The first one is easy to handle, the second one is much trickier. We now spell out the details. \begin{itemize} \item Suppose we have \begin{align*} w \in & \bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)} L^{(l)}_{(u_{k + 1}, l)}\\ = & \Bigl(\bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}\Bigr) ({u_{k + 1}}^l \shuffle \Sigma^*) \displaypunct{.} \end{align*} Then $w$ can be decomposed as $w = x y$ where $x \in \bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots \allowbreak L^{(l)}_{(u_k, q_k)}$ and $y \in {u_{k + 1}}^l \shuffle \Sigma^*$ with $\length{y}$ being minimal. So by inductive hypothesis, there exists $\alpha \in [l]^k$ such that $x \in R_l^\alpha(u_1, \ldots, u_k) \cap S_l^\alpha(u_1, \ldots, u_k)$. Observe that this means we have $w \in ({u_1}^{\alpha_1} \cdots {u_k}^{\alpha_k} {u_{k + 1}}^l) \shuffle \Sigma^*$ and for each $i \in [k], \alpha_i < l$, that $w \notin ({u_1}^{\alpha_1} \cdots {u_i}^{\alpha_i + 1} \cdots \allowbreak {u_k}^{\alpha_k} {u_{k + 1}}^l) \shuffle \Sigma^*$, otherwise it would mean that $x \in ({u_1}^{\alpha_1} \cdots {u_i}^{\alpha_i + 1} \cdots \allowbreak {u_k}^{\alpha_k}) \shuffle \Sigma^*$ by minimality of $\length{y}$. 
Similarly, for all $i \in [k], \alpha_i < l$, it is obvious that we have \[ w = x y \in \bigl(({u_1}^{\alpha_1} \cdots {u_{i - 1}}^{\alpha_{i - 1}}) \shuffle \Sigma^*\bigr) u_i \bigl(({u_{i + 1}}^{\alpha_{i + 1}} \cdots {u_k}^{\alpha_k} {u_{k + 1}}^l) \shuffle \Sigma^*\bigr) \] as $x \in \bigl(({u_1}^{\alpha_1} \cdots {u_{i - 1}}^{\alpha_{i - 1}}) \shuffle \Sigma^*\bigr) u_i \bigl(({u_{i + 1}}^{\alpha_{i + 1}} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*\bigr)$ and $y \in {u_{k + 1}}^l \allowbreak \shuffle \Sigma^*$. Hence, $w \in R_l^{(\alpha_1, \ldots, \alpha_{k + 1})}(u_1, \ldots, u_{k + 1}) \cap S_l^{(\alpha_1, \ldots, \alpha_{k + 1})}(u_1, \ldots, u_{k + 1})$. \item Or we have \begin{align*} w \notin & \bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)} L^{(l)}_{(u_{k + 1}, l)}\\ = & \Bigl(\bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}\Bigr) ({u_{k + 1}}^l \shuffle \Sigma^*) \end{align*} but \[ w \in \bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)} L^{(l)}_{(u_{k + 1}, 1)} \displaypunct{.} \] Let $\alpha_{k + 1} \in [l - 1]$ be the biggest integer in $[l - 1]$ such that \[ w \in \Bigl(\bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}\Bigr) ({u_{k + 1}}^{\alpha_{k + 1}} \shuffle \Sigma^*) \] which does exist by hypothesis. We can decompose $w$ as $w = x y$ where $x \in \bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}$ and $y \in {u_{k + 1}}^{\alpha_{k + 1}} \shuffle \Sigma^*$ with $\length{y}$ being minimal. So by inductive hypothesis, there exists $\alpha \in [l]^k$ such that $x \in R_l^\alpha(u_1, \ldots, u_k) \cap S_l^\alpha(u_1, \ldots, u_k)$. 
We are now going to prove that \[ w = x y \in R_l^{(\alpha_1, \ldots, \alpha_{k + 1})}(u_1, \ldots, u_{k + 1}) \cap S_l^{(\alpha_1, \ldots, \alpha_{k + 1})}(u_1, \ldots, u_{k + 1}) \displaypunct{.} \] Among the obvious things to observe is that we have $w \in ({u_1}^{\alpha_1} \cdots {u_k}^{\alpha_k} \allowbreak {u_{k + 1}}^{\alpha_{k + 1}}) \shuffle \Sigma^*$ and for each $i \in [k], \alpha_i < l$, that \[ w \notin ({u_1}^{\alpha_1} \cdots {u_i}^{\alpha_i + 1} \cdots {u_k}^{\alpha_k} {u_{k + 1}}^{\alpha_{k + 1}}) \shuffle \Sigma^* \displaypunct{,} \] otherwise it would mean that $x \in ({u_1}^{\alpha_1} \cdots {u_i}^{\alpha_i + 1} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*$ by minimality of $\length{y}$. Similarly, for all $i \in [k], \alpha_i < l$, it is obvious that we have \[ w = x y \in \bigl(({u_1}^{\alpha_1} \cdots {u_{i - 1}}^{\alpha_{i - 1}}) \shuffle \Sigma^*\bigr) u_i \bigl(({u_{i + 1}}^{\alpha_{i + 1}} \cdots {u_k}^{\alpha_k} {u_{k + 1}}^{\alpha_{k + 1}}) \shuffle \Sigma^*\bigr) \] because $x \in \bigl(({u_1}^{\alpha_1} \cdots {u_{i - 1}}^{\alpha_{i - 1}}) \shuffle \Sigma^*\bigr) u_i \bigl(({u_{i + 1}}^{\alpha_{i + 1}} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*\bigr)$ and $y \in {u_{k + 1}}^{\alpha_{k + 1}} \shuffle \Sigma^*$. Now let us show that we have $y \in \Sigma^* u_{k + 1} \Sigma^*$. 
Assume it weren't the case: the letters in $u_{k + 1}$ are pairwise distinct and moreover $y$ starts with the first letter of $u_{k + 1}$ by minimality of $\length{y}$, so $u_{k + 1}$ cannot appear as a factor of $x y$ partly in $x$ and partly in $y$ and, additionally, \begin{align*} w & \in \bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)} L^{(l)}_{(u_{k + 1}, 1)}\\ & = \Bigl(\bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}\Bigr) \Sigma^* u_{k + 1} \Sigma^* \displaypunct{,} \end{align*} so we would have $x \in (\bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}) \Sigma^* u_{k + 1} \Sigma^*$. But this either contradicts the maximality of $\alpha_{k + 1}$ or the fact that \[ w \notin \Bigl(\bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}\Bigr) ({u_{k + 1}}^l \shuffle \Sigma^*) \displaypunct{.} \] Thus, we have $w = x y \in \bigl(({u_1}^{\alpha_1} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*\bigr) u_{k + 1} \Sigma^*$ as $x \in ({u_1}^{\alpha_1} \cdots \allowbreak {u_k}^{\alpha_k}) \allowbreak \shuffle \Sigma^*$. Let us finish with the trickiest part, namely showing that $w \notin ({u_1}^{\alpha_1} \cdots \allowbreak {u_k}^{\alpha_k} \allowbreak {u_{k + 1}}^{\alpha_{k + 1} + 1}) \shuffle \Sigma^*$. Assume that $w \in ({u_1}^{\alpha_1} \cdots {u_k}^{\alpha_k} {u_{k + 1}}^{\alpha_{k + 1} + 1}) \shuffle \Sigma^*$. We then have that $x \in ({u_1}^{\alpha_1} \cdots \allowbreak {u_k}^{\alpha_k} u_{k + 1}) \shuffle \Sigma^*$, otherwise it would mean that $y = y_1 y_2$ with $\length{y_1} > 0$, with $x y_1 \in ({u_1}^{\alpha_1} \cdots {u_k}^{\alpha_k} \allowbreak u_{k + 1}) \shuffle \Sigma^*$ and $y_2 \in {u_{k + 1}}^{\alpha_{k + 1}} \shuffle \Sigma^*$, contradicting the minimality of $\length{y}$. 
We can decompose $x$ as $x = x_1 x_2$ where $x_1 \in ({u_1}^{\alpha_1} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*$ and $x_2 \in u_{k + 1} \shuffle \Sigma^*$ with $\length{x_2}$ being minimal. We claim that, actually, $x_1 \in R_l^\alpha(u_1, \ldots, u_k) \cap S_l^\alpha(u_1, \ldots, u_k)$, so that by inductive hypothesis, $x_1 \in \bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}$. But since $x_2 y \in {u_{k + 1}}^{\alpha_{k + 1} + 1} \shuffle \Sigma^*$, this means that \[ w = x_1 x_2 y \in \Bigl(\bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}\Bigr) ({u_{k + 1}}^{\alpha_{k + 1} + 1} \shuffle \Sigma^*) \displaypunct{,} \] contradicting the maximality of $\alpha_{k + 1}$ or the fact that \[ w \notin \Bigl(\bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}\Bigr) ({u_{k + 1}}^l \shuffle \Sigma^*) \displaypunct{.} \] So we can conclude that $w \notin ({u_1}^{\alpha_1} \cdots {u_k}^{\alpha_k} {u_{k + 1}}^{\alpha_{k + 1} + 1}) \shuffle \Sigma^*$. The claim that $x_1 \in R_l^\alpha(u_1, \ldots, u_k) \cap S_l^\alpha(u_1, \ldots, u_k)$ remains to be shown. We directly see that $x_1 \notin ({u_1}^{\alpha_1} \cdots {u_i}^{\alpha_i + 1} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*$ for all $i \in [k], \alpha_i < l$, otherwise it would mean that $x \in ({u_1}^{\alpha_1} \cdots {u_i}^{\alpha_i + 1} \cdots \allowbreak {u_k}^{\alpha_k}) \shuffle \Sigma^*$. Let now $i \in [k], \alpha_i < l$, and assume that $x_1 \notin \bigl(({u_1}^{\alpha_1} \cdots {u_{i - 1}}^{\alpha_{i - 1}}) \shuffle \Sigma^*\bigr) u_i \allowbreak \bigl(({u_{i + 1}}^{\alpha_{i + 1}} \cdots \allowbreak {u_k}^{\alpha_k}) \shuffle \Sigma^*\bigr)$. 
We can decompose $x_1$ as $x_1 = x_{1, 1} x_{1, 2}$ where $x_{1, 1} \in ({u_1}^{\alpha_1} \cdots \allowbreak {u_i}^{\alpha_i}) \shuffle \Sigma^*$ and $x_{1, 2} \in ({u_{i + 1}}^{\alpha_{i + 1}} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*$ with $\length{x_{1, 1}}$ being minimal. By hypothesis, we have $x_{1, 1} \notin \bigl(({u_1}^{\alpha_1} \cdots {u_{i - 1}}^{\alpha_{i - 1}}) \shuffle \Sigma^*\bigr) u_i \Sigma^*$, otherwise we would have \[ x_1 = x_{1, 1} x_{1, 2} \in \bigl(({u_1}^{\alpha_1} \cdots {u_{i - 1}}^{\alpha_{i - 1}}) \shuffle \Sigma^*\bigr) u_i \bigl(({u_{i + 1}}^{\alpha_{i + 1}} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*\bigr) \displaypunct{.} \] As previously, the letters in $u_i$ are pairwise distinct, and $x_{1, 1}$ ends with the last letter of $u_i$ by minimality of $\length{x_{1, 1}}$, so $u_i$ cannot appear as a factor of $x$ partly in $x_{1, 1}$ and partly in $x_{1, 2} x_2$. Thus, we have that \[ x_{1, 2} x_2 \in \Sigma^* u_i \bigl(({u_{i + 1}}^{\alpha_{i + 1}} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^* \bigr) \] because we know that $x \in \bigl(({u_1}^{\alpha_1} \cdots {u_{i - 1}}^{\alpha_{i - 1}}) \shuffle \Sigma^*\bigr) u_i \bigl(({u_{i + 1}}^{\alpha_{i + 1}} \cdots {u_k}^{\alpha_k}) \allowbreak \shuffle \Sigma^*\bigr)$. But this means that $x = x_{1, 1} x_{1, 2} x_2 \in ({u_1}^{\alpha_1} \cdots {u_i}^{\alpha_i + 1} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*$, a contradiction. Hence, we can deduce that for all $i \in [k], \alpha_i < l$, we have $x_1 \in \bigl(({u_1}^{\alpha_1} \cdots {u_{i - 1}}^{\alpha_{i - 1}}) \shuffle \Sigma^* \bigr) u_i \bigl(({u_{i + 1}}^{\alpha_{i + 1}} \cdots {u_k}^{\alpha_k}) \allowbreak \shuffle \Sigma^* \bigr)$. 
This completes the proof that \[ x_1 \in R_l^\alpha(u_1, \ldots, u_k) \cap S_l^\alpha(u_1, \ldots, u_k) \displaypunct{.} \] \medskip Putting everything together, we indeed also have that \[ w \in R_l^{(\alpha_1, \ldots, \alpha_{k + 1})}(u_1, \ldots, u_{k + 1}) \cap S_l^{(\alpha_1, \ldots, \alpha_{k + 1})}(u_1, \ldots, u_{k + 1}) \] in the present case. \end{itemize} In conclusion, in both cases, \[ w \in \bigcup_{\alpha \in [l]^{k + 1}} \bigl(R_l^\alpha(u_1, \ldots, u_{k + 1}) \cap S_l^\alpha(u_1, \ldots, u_{k + 1})\bigr) \displaypunct{.} \] So we can finally conclude that \begin{align*} & \bigcup_{q_1, \ldots, q_{k + 1} \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_{k + 1}, q_{k + 1})}\\ = & \bigcup_{\alpha \in [l]^{k + 1}} \bigl(R_l^\alpha(u_1, \ldots, u_{k + 1}) \cap S_l^\alpha(u_1, \ldots, u_{k + 1})\bigr) \displaypunct{.} \end{align*} This concludes the proof of the lemma. \end{proof} Our goal now is to prove, given an alphabet $\Sigma$, given $l \in \N_{>0}$ and $u_1, \ldots, u_k \in \Sigma^+$ ($k \in \N_{>0}$) such that for each $i \in [k]$, the letters in $u_i$ are all distinct, that for any $\alpha \in [l]^k$, the language $R_l^\alpha(u_1, \ldots, u_k) \cap S_l^\alpha(u_1, \ldots, u_k)$ is in $\Prog{\FMVJ}$; closure of $\Prog{\FMVJ}$ under union (Proposition~\ref{lemma-simple-closure-P}) consequently entails that $\ccddo{u_1, \ldots, u_k}_l \in \Prog{\FMVJ}$. The way $R_l^\alpha(u_1, \ldots, u_k)$ and $S_l^\alpha(u_1, \ldots, u_k)$ are defined allows us to reason as follows. For each $i \in [k]$ verifying $\alpha_i < l$, let $L_i$ be the language of words $w$ over $\Sigma$ containing $x_{i, 1} {u_i}^{\alpha_i} x_{i, 2}$ as a subword but not $x_{i, 1} {u_i}^{\alpha_i + 1} x_{i, 2}$ and such that $w = y_1 u_i y_2$ with $y_1 \in x_{i, 1} \shuffle \Sigma^*$ and $y_2 \in x_{i, 2} \shuffle \Sigma^*$, where $x_{i, 1} = {u_1}^{\alpha_1} \cdots {u_{i - 1}}^{\alpha_{i - 1}}$ and $x_{i, 2} = {u_{i + 1}}^{\alpha_{i + 1}} \cdots {u_k}^{\alpha_k}$. 
If we manage to prove that for each $i \in [k]$ verifying $\alpha_i < l$ we have $L_i \in \Prog{\FMVJ}$, we can conclude that $R_l^\alpha(u_1, \ldots, u_k) \cap S_l^\alpha(u_1, \ldots, u_k) = ({u_1}^{\alpha_1} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^* \cap \bigcap_{i \in [k], \alpha_i < l} L_i$ does belong to $\Prog{\FMVJ}$ by closure of $\Prog{\FMVJ}$ under intersection, Proposition~\ref{lemma-simple-closure-P}. The lemma that follows, the main lemma in the proof of Theorem~\ref{thm:TDDO_languages_in_P(J)}, shows exactly this. The proof crucially uses the ``feedback sweeping'' technique, but note that we do not know how to prove it without the requirement that, for each $i \in [k]$, the letters in $u_i$ are all distinct. \begin{lemma} \label{lem:Building-block_program} Let $\Sigma$ be an alphabet and $u \in \Sigma^+$ such that its letters are all distinct. For all $\alpha \in \N_{>0}$ and $x_1, x_2 \in \Sigma^*$, we have \[ (x_1 u^\alpha x_2) \shuffle \Sigma^* \cap \bigl((x_1 u^{\alpha + 1} x_2) \shuffle \Sigma^*\bigr)^\complement \cap (x_1 \shuffle \Sigma^*) u (x_2 \shuffle \Sigma^*) \in \Prog{\FMVJ} \displaypunct{.} \] \end{lemma} \begin{proof} Before proving this lemma, we need a useful decomposition sublemma, which is straightforward to prove. \begin{lemma} \label{lem:Decomposition} Let $\Sigma$ be an alphabet and $u \in \Sigma^+$. Then, for all $\alpha \in \N_{>0}$, each $w \in u^\alpha \shuffle \Sigma^* \cap (u^{\alpha + 1} \shuffle \Sigma^*)^\complement$ verifies \[ w = \bigl(\prod_{i = 1}^\alpha \prod_{j = 1}^{\length{u}} (v_{i, j} u_j)\bigr) y \] where $v_{i, j} \in (\Sigma \setminus \set{u_j})^*$ for all $i \in [\alpha]$ and $j \in [\length{u}]$, and $y \in \bigcup_{i = 1}^{\length{u}} \Bigl(\prod_{j = 1}^{i - 1} \bigl((\Sigma \setminus \set{u_j})^* u_j\bigr) (\Sigma \setminus \set{u_i})^*\Bigr)$. \end{lemma} \begin{proof}[Proof of sublemma] Let $\Sigma$ be an alphabet and $u \in \Sigma^+$. 
Take $\alpha \in \N_{>0}$ and $w \in u^\alpha \shuffle \Sigma^* \cap (u^{\alpha + 1} \shuffle \Sigma^*)^\complement$. As $w \in u^\alpha \shuffle \Sigma^*$, the word $w$ can be decomposed as $w = x y$ where $x \in u^\alpha \shuffle \Sigma^*$ and $\length{x}$ is minimal. Then, it is clearly necessarily the case that $x = \prod_{i = 1}^\alpha \prod_{j = 1}^{\length{u}} (v_{i, j} u_j)$ with $v_{i, j} \in (\Sigma \setminus \set{u_j})^*$ for all $i \in [\alpha]$ and $j \in [\length{u}]$. Moreover, as $x y \notin u^{\alpha + 1} \shuffle \Sigma^*$, we necessarily have that $y \notin u \shuffle \Sigma^*$, so that there exists some $i \in [\length{u}]$ verifying that $u_1 \cdots u_{i - 1}$ is a subword of $y$ but not $u_1 \cdots u_i$. Thus, we have that $y \in \prod_{j = 1}^{i - 1} \bigl((\Sigma \setminus \set{u_j})^* u_j\bigr) (\Sigma \setminus \set{u_i})^*$. This concludes the proof of the sublemma. \end{proof} We can now prove Lemma~\ref{lem:Building-block_program}. Let $\Sigma$ be an alphabet and $u \in \Sigma^+$ such that its letters are all distinct. Let $\alpha \in \N_{>0}$ and $x_1, x_2 \in \Sigma^*$. We let \[ L = (x_1 u^\alpha x_2) \shuffle \Sigma^* \cap \bigl((x_1 u^{\alpha + 1} x_2) \shuffle \Sigma^*\bigr)^\complement \cap (x_1 \shuffle \Sigma^*) u (x_2 \shuffle \Sigma^*) \displaypunct{.} \] If $\length{u} = 1$, the lemma follows trivially because $L$ is piecewise testable and hence belongs to $\DLang{\FMVJ}$, so we assume $\length{u} > 1$. For each letter $a \in \Sigma$, we shall use $2 \length{u} - 1$ distinct decorated letters of the form $a^{(i)}$ for some $i \in \intinterval{0}{2 \length{u} - 2}$, using the convention that $a^{(0)} = a$; of course, for two distinct letters $a, b \in \Sigma$, we have that $a^{(i)}$ and $b^{(j)}$ are distinct for all $i, j \in \intinterval{0}{2 \length{u} - 2}$. We denote by $A$ the alphabet of these decorated letters. 
The main idea of the proof is, for a given input length $n \in \N$, to build an $A$-program $\Psi_n$ over $\Sigma^n$ such that, given an input word $w \in \Sigma^n$, it first outputs the first $\length{u} - 1$ letters of $w$ and then, for each $i$ going from $\length{u}$ to $n$, outputs $w_i$, followed by $w_{i - 1}^{(1)} \cdots w_{i - \length{u} + 1}^{(\length{u} - 1)}$ (a ``sweep'' of $\length{u} - 1$ letters backwards down to position $i - \length{u} + 1$, decorating the letters incrementally) and finally by $w_{i - \length{u} + 2}^{(\length{u})} \cdots w_i^{(2 \length{u} - 2)}$ (a ``sweep'' forwards up to position $i$, continuing the incremental decoration of the letters). The idea behind this way of rearranging and decorating letters is that, given an input word $w \in \Sigma^n$, as long as we make sure that $w$ and thus $\Psi_n(w)$ do contain $x_1 u^\alpha x_2$ as a subword but not $x_1 u^{\alpha + 1} x_2$, then $\Psi_n(w)$ can be decomposed as $\Psi_n(w) = y_1 z y_2$ where $y_1 \in x_1 \shuffle \Sigma^*$, $y_2 \in x_2 \shuffle \Sigma^*$, and $\length{y_1}, \length{y_2}$ are minimal, with $z$ containing $u^\beta u_{\length{u} - 1}^{(1)} \cdots u_1^{(\length{u} - 1)} u_2^{(\length{u})} \cdots u_{\length{u}}^{(2 \length{u} - 2)} u^{\alpha - \beta}$ as a subword for some $\beta \in [\alpha]$ if and only if $w \in (x_1 \shuffle \Sigma^*) u (x_2 \shuffle \Sigma^*)$. This means we can check whether $w \in L$ by testing whether $\Psi_n(w)$ belongs to some fixed piecewise testable language over $A$. Let us now write the proof formally. For each $i \in \intinterval{0}{2 \length{u} - 2}$, let \[ \function{f^{(i)}}{\Sigma}{A}{a}{a^{(i)}} \displaypunct{.} \] For all $i \in \N, i \geq \length{u}$, we define \[ \Phi_i = (i, f^{(0)}) \prod_{j = 1}^{\length{u} - 1} (i - j, f^{(j)}) \prod_{j = 2}^{\length{u}} (i - \length{u} + j, f^{(\length{u} + j - 2)}) \displaypunct{.} \] For all $n \in \N, n < \length{u}$, we define $\Psi_n = \emptyword$. 
For all $n \in \N, n \geq \length{u}$, we define \[ \Psi_n = \prod_{i = 1}^{\length{u} - 1} (i, f^{(0)}) \prod_{i = \length{u}}^n \Phi_i \displaypunct{.} \] Finally, let $K$ be the language of words over $A$ having \[ \zeta_\beta = x_1 u^{\beta - 1} u \prod_{j = 1}^{\length{u} - 1} u_{\length{u} - j}^{(j)} \prod_{j = 2}^{\length{u}} u_j^{(\length{u} + j - 2)} u^{\alpha - \beta} x_2 \] for some $\beta \in [\alpha]$ as a subword but not $x_1 u^{\alpha + 1} x_2$. \begin{claim} The sequence $(\Psi_n)_{n \in \N}$ of $A$-programs is a program-reduction from $L$ to $K$. \end{claim} Let \[ \function{s}{\N}{\N}{n} {\begin{cases} 0 & \text{if $n < \length{u}$}\\ \length{u} - 1 + (n - \length{u} + 1) \cdot (2 \length{u} - 1) & \text{otherwise} \displaypunct{.} \end{cases}} \] It is direct to see that $s(n) = \length{\Psi_n} \leq (2 \length{u} - 1) \cdot n$ for all $n \in \N$. Therefore, using this claim, $(\Psi_n)_{n \in \N}$ is a program-reduction from $L$ to $K$ of length $s(n)$, so since $K$ is piecewise testable and hence is recognised (classically) by some monoid from $\FMVJ$, Proposition~\ref{ptn:Program-reduction_to_regular_language} tells us that $L \in \Prog{\FMVJ, s(n)} = \Prog{\FMVJ, n}$. \begin{proof}[Proof of claim] Let $n \in \N$. If $n < \length{u}$, then it is obvious that for all $w \in \Sigma^n$, we have $w \notin (x_1 \shuffle \Sigma^*) u (x_2 \shuffle \Sigma^*)$ so $w \notin L^{=n}$ and also $\Psi_n(w) = \emptyword \notin K^{=s(n)}$, hence $L^{=n} = \emptyset = \Psi_n^{-1}(K^{=s(n)})$. Otherwise, $n \geq \length{u}$. We are going to show that $L^{=n} = \Psi_n^{-1}(K^{=s(n)})$. \paragraph{Left-to-right inclusion.} Let $w \in L^{=n}$. We want to show that $\Psi_n(w) \in K^{=s(n)}$. We are first going to show that there exists some $\beta \in [\alpha]$ such that $\zeta_\beta$ is a subword of $\Psi_n(w)$. 
The fact that $w \in L^{=n}$ means in particular that $w \in (x_1 \shuffle \Sigma^*) u (x_2 \shuffle \Sigma^*)$ and we can hence decompose $w$ as $w = y_1 z y_2$ where $y_1 \in (x_1 \shuffle \Sigma^*)$ and $y_2 \in (x_2 \shuffle \Sigma^*)$ with $\length{y_1}$ and $\length{y_2}$ being minimal. It follows necessarily that $z \in u^\alpha \shuffle \Sigma^* \cap (u^{\alpha + 1} \shuffle \Sigma^*)^\complement \cap \Sigma^* u \Sigma^*$ by minimality of $\length{y_1}$ and $\length{y_2}$. By Lemma~\ref{lem:Decomposition}, we have $z = \bigl(\prod_{i = 1}^\alpha \prod_{j = 1}^{\length{u}} (v_{i, j} u_j)\bigr) y$ where $v_{i, j} \in (\Sigma \setminus \set{u_j})^*$ for all $i \in [\alpha]$ and $j \in [\length{u}]$, and $y \in \bigcup_{i = 1}^{\length{u}} \Bigl(\prod_{j = 1}^{i - 1} \bigl((\Sigma \setminus \set{u_j})^* u_j\bigr) (\Sigma \setminus \set{u_i})^*\Bigr)$. We know the letters in $u$ are all distinct, so this means that there is no $\beta \in [\alpha - 1]$ such that $u$ is a factor of $z$ partly in $\prod_{j = 1}^{\length{u}} (v_{\beta, j} u_j)$ and partly in $\prod_{j = 1}^{\length{u}} (v_{\beta + 1, j} u_j)$, and that $u$ cannot appear as a factor of $z$ partly in $\prod_{j = 1}^{\length{u}} (v_{\alpha, j} u_j)$ and partly in $y$ either. Hence, since $z \in \Sigma^* u \Sigma^*$, by the way we decomposed $z$, there necessarily exists $\beta \in [\alpha]$ such that $\prod_{j = 1}^{\length{u}} (v_{\beta, j} u_j) \in \Sigma^* u \Sigma^*$. Let $\gamma, \delta \in [n]$ such that $w_\gamma \cdots w_\delta = \prod_{j = 1}^{\length{u}} (v_{\beta, j} u_j)$, $w_1 \cdots w_{\gamma - 1} = y_1 \bigl(\prod_{i = 1}^{\beta - 1} \prod_{j = 1}^{\length{u}} (v_{i, j} u_j)\bigr)$ and $w_{\delta + 1} \cdots w_n = \bigl(\prod_{i = \beta + 1}^\alpha \prod_{j = 1}^{\length{u}} (v_{i, j} u_j)\bigr) y y_2$. 
By the way $\beta$ is defined, we have $w_{\delta - \length{u} + 1} \cdots w_\delta = u$, because $\delta$ is the first and only position in $w$ with the letter $u_{\length{u}}$ within the interval $\intinterval{\gamma}{\delta}$ verifying that $w_\gamma \cdots w_{\delta - 1}$ contains $u_1 \cdots u_{\length{u} - 1}$ as a subword, and we observe additionally that $\delta \geq \gamma + \length{u} - 1 \geq \length{u}$. This means that \begin{align*} & \Phi_\delta(w)\\ = & f^{(0)}(w_\delta) f^{(1)}(w_{\delta - 1}) \cdots f^{(\length{u} - 1)}(w_{\delta - \length{u} + 1}) f^{(\length{u})}(w_{\delta - \length{u} + 2}) \cdots f^{(2 \length{u} - 2)}(w_\delta)\\ = & u_{\length{u}} \prod_{j = 1}^{\length{u} - 1} u_{\length{u} - j}^{(j)} \prod_{j = 2}^{\length{u}} u_j^{(\length{u} + j - 2)} \displaypunct{.} \end{align*} Moreover, \[ \prod_{i = 1}^{\gamma - 1} f^{(0)}(w_i) = w_1 \cdots w_{\gamma - 1} = y_1 \bigl(\prod_{i = 1}^{\beta - 1} \prod_{j = 1}^{\length{u}} (v_{i, j} u_j)\bigr) \displaypunct{,} \] \[ \prod_{i = \delta - \length{u} + 1}^{\delta - 1}f^{(0)}(w_i) = w_{\delta - \length{u} + 1} \cdots w_{\delta - 1} = u_1 \cdots u_{\length{u} - 1} \] and \[ \prod_{i = \delta + 1}^n f^{(0)}(w_i) = w_{\delta + 1} \cdots w_n = \bigl(\prod_{i = \beta + 1}^\alpha \prod_{j = 1}^{\length{u}} (v_{i, j} u_j)\bigr) y y_2 \displaypunct{.} \] So as $\prod_{i = 1}^{\gamma - 1} (i, f^{(0)}) \prod_{i = \delta - \length{u} + 1}^{\delta - 1} (i, f^{(0)}) \Phi_\delta \prod_{i = \delta + 1}^n (i, f^{(0)})$ is a subword of $\Psi_n$, we have that \[ \zeta_\beta = x_1 u^{\beta - 1} u \prod_{j = 1}^{\length{u} - 1} u_{\length{u} - j}^{(j)} \prod_{j = 2}^{\length{u}} u_j^{(\length{u} + j - 2)} u^{\alpha - \beta} x_2 \] is a subword of $\Psi_n(w)$. We secondly show that $x_1 u^{\alpha + 1} x_2$ cannot be a subword of $\Psi_n(w)$. But this is direct by construction of $\Psi_n$, otherwise we would have that $x_1 u^{\alpha + 1} x_2$ is a subword of $w$, contradicting the fact that $w \in L^{=n}$. 
Hence, $\Psi_n(w) \in K^{=s(n)}$, and since this is true for all $w \in L^{=n}$, we have $L^{=n} \subseteq \Psi_n^{-1}(K^{=s(n)})$. \paragraph{Right-to-left inclusion.} We are going to prove the ``contrapositive inclusion''. Let $w \in \Sigma^n \setminus L^{=n}$. We want to show that $\Psi_n(w) \notin K^{=s(n)}$. Let us start with the easy cases. If we have $w \notin (x_1 u^\alpha x_2) \shuffle \Sigma^*$, then it means that $x_1 u^\alpha x_2$ is not a subword of $w$ and hence, by construction of $\Psi_n$, not a subword of $\Psi_n(w)$ either, so that there does not exist any $\beta \in [\alpha]$ such that $\zeta_\beta$ is a subword of $\Psi_n(w)$. Similarly, if we have $w \in (x_1 u^{\alpha + 1} x_2) \shuffle \Sigma^*$, then it means that $x_1 u^{\alpha + 1} x_2$ is a subword of $w$ and hence, by construction of $\Psi_n$, a subword of $\Psi_n(w)$. We now assume that $w \in (x_1 u^\alpha x_2) \shuffle \Sigma^* \cap \bigl((x_1 u^{\alpha + 1} x_2) \shuffle \Sigma^*\bigr)^\complement$ while $w \notin (x_1 \shuffle \Sigma^*) u (x_2 \shuffle \Sigma^*)$. We want to show that in this case, there does not exist any $\beta \in [\alpha]$ such that $\zeta_\beta$ is a subword of $\Psi_n(w)$. Suppose for a contradiction that such a $\beta$ exists; our goal is to show, through a careful observation of what this implies about the letters in $w$ by examining how $\Psi_n$ decorates them, that this entails that $x_1 u^{\alpha + 1} x_2$ is a subword of $w$, a contradiction. 
Since $\zeta_\beta$ is a subword of $\Psi_n(w)$, it is not too difficult to see there exist \[ p_1, \ldots, p_{\length{x_1} + (\beta - 1) \cdot \length{u}}, q_1, \ldots, \allowbreak q_{3 \length{u} - 2}, r_1, \ldots, r_{(\alpha - \beta) \cdot \length{u} + \length{x_2}} \in [n] \] verifying that \[ w_{p_1} \cdots w_{p_{\length{x_1} + (\beta - 1) \cdot \length{u}}} = x_1 u^{\beta - 1} \displaypunct{,} \] \[ w_{q_1} \cdots w_{q_{3 \length{u} - 2}} = u \prod_{j = 1}^{\length{u} - 1} u_{\length{u} - j} \prod_{j = 2}^{\length{u}} u_j \displaypunct{,} \] \[ w_{r_1} \cdots w_{r_{(\alpha - \beta) \cdot \length{u} + \length{x_2}}} = u^{\alpha - \beta} x_2 \] and \begin{align*} (p_1, f^{(0)}) \cdots (p_{\length{x_1} + (\beta - 1) \cdot \length{u}}, f^{(0)}) (q_1, f^{(0)}) \cdots (q_{\length{u}}, f^{(0)})\\ (q_{\length{u} + 1}, f^{(1)}) \cdots (q_{2 \length{u} - 1}, f^{(\length{u} - 1)}) (q_{2 \length{u}}, f^{(\length{u})}) \cdots (q_{3 \length{u} - 2}, f^{(2 \length{u} - 2)})\\ (r_1, f^{(0)}) \cdots (r_{(\alpha - \beta) \cdot \length{u} + \length{x_2}}, f^{(0)}) \end{align*} is a subword of $\Psi_n$. By construction of $\Psi_n$, we have \[ p_1 < \cdots < p_{\length{x_1} + (\beta - 1) \cdot \length{u}} < q_1 < \cdots < q_{\length{u}} < r_1 < \cdots < r_{(\alpha - \beta) \cdot \length{u} + \length{x_2}} \displaypunct{,} \] so this implies that $w$ can be decomposed as $w = y_1 z y_2$ where $y_1 \in x_1 \shuffle \Sigma^*$, where $z \in u^\alpha \shuffle \Sigma^*$ and $y_2 \in x_2 \shuffle \Sigma^*$, the positions $p_1, \ldots, p_{\length{x_1}}$ corresponding to letters in $y_1$, the positions $p_{\length{x_1} + 1}, \ldots, p_{\length{x_1} + (\beta - 1) \cdot \length{u}}, \allowbreak q_1, \ldots, q_{\length{u}}, \allowbreak r_1, \ldots, r_{(\alpha - \beta) \cdot \length{u}}$ corresponding to letters in $z$ and the positions $r_{(\alpha - \beta) \cdot \length{u} + 1}, \ldots, \allowbreak r_{(\alpha - \beta) \cdot \length{u} + \length{x_2}}$ corresponding to letters in $y_2$. 
We are now going to show that, in fact, $q_{\length{u}} < q_{2 \length{u} - 1} < q_{2 \length{u}} < \cdots < q_{3 \length{u} - 2} < r_1$, which implies $z \in u^{\alpha + 1} \shuffle \Sigma^*$ and thus the contradiction we are aiming for. Since $w \notin (x_1 \shuffle \Sigma^*) u (x_2 \shuffle \Sigma^*)$, we have $z \notin \Sigma^* u \Sigma^*$, hence as $w_{q_{\length{u}}} = u_{\length{u}}$ and $\length{u} > 1$, there must exist $j \in [\length{u} - 1]$ such that $w_{q_{\length{u}} - j} \neq u_{\length{u} - j}$ and $w_{q_{\length{u}} - \iota} = u_{\length{u} - \iota}$ for all $\iota \in \intinterval{0}{j - 1}$. By construction of $\Psi_n$, we know that $q_{\length{u} + j} \geq q_{\length{u}} - j$ (because the instructions with $f^{(j)}$ after an instruction with $f^{(0)}$ querying position $p \in [n]$ all query a position at least equal to $p - j$), but since $u_{\length{u} - j} \neq w_{q_{\length{u}} - j}$ and $u_{\length{u} - j} \neq u_{\length{u} - \iota} = w_{q_{\length{u}} - \iota}$ for all $\iota \in \intinterval{0}{j - 1}$ as the letters in $u$ are all distinct, we get that $q_{\length{u} + j} > q_{\length{u}}$. By (backward) induction, we can show that for all $\iota \in \intinterval{j + 1}{\length{u} - 1}$, we have $q_{\length{u} + \iota} > q_{\length{u}}$. Indeed, given $\iota \in \intinterval{j + 1}{\length{u} - 1}$, we have $q_{\length{u} + \iota - 1} > q_{\length{u}}$, either by inductive hypothesis or directly in the base case $\iota = j + 1$ by what we have just seen. So by construction of $\Psi_n$, we know that $q_{\length{u} + \iota} \geq q_{\length{u}}$ (because the instructions with $f^{(\iota)}$ after an instruction with $f^{(\iota - 1)}$ querying position $p \in [n]$ all query a position at least equal to $p - 1$), but since $u_{\length{u} - \iota} \neq u_{\length{u}} = w_{q_{\length{u}}}$ as the letters in $u$ are all distinct, it follows that $q_{\length{u} + \iota} > q_{\length{u}}$. 
Therefore, we have that $q_{2 \length{u} - 1} > q_{\length{u}}$. Moreover, by construction of $\Psi_n$, we also have $q_{2 \length{u} - 1} < q_{2 \length{u}} < \cdots < q_{3 \length{u} - 2} < r_1$ (because for each $\iota \in \intinterval{0}{\length{u} - 2}$, the instructions with $f^{(\length{u} + \iota)}$ after an instruction with $f^{(\length{u} + \iota - 1)}$ querying position $p \in [n]$ all query a position at least equal to $p + 1$ and similarly for the instructions with $f^{(0)}$ after an instruction with $f^{(2 \length{u} - 2)}$). So, to conclude, we have $p_1 < \cdots < p_{\length{x_1} + (\beta - 1) \cdot \length{u}} < q_1 < \cdots < q_{\length{u}} < q_{2 \length{u} - 1} < q_{2 \length{u}} < \cdots < q_{3 \length{u} - 2} < r_1 < \cdots < r_{(\alpha - \beta) \cdot \length{u} + \length{x_2}}$ and \begin{align*} & w_{p_1} \cdots w_{p_{\length{x_1} + (\beta - 1) \cdot \length{u}}} w_{q_1} \cdots w_{q_{\length{u}}} w_{q_{2 \length{u} - 1}} w_{q_{2 \length{u}}} \cdots w_{q_{3 \length{u} - 2}} w_{r_1} \cdots w_{r_{(\alpha - \beta) \cdot \length{u} + \length{x_2}}}\\ = & x_1 u^{\beta - 1} u u_1 u_2 \cdots u_{\length{u}} u^{\alpha - \beta} x_2 = x_1 u^{\alpha + 1} x_2 \displaypunct{.} \end{align*} This implies that $w \in (x_1 u^{\alpha + 1} x_2) \shuffle \Sigma^*$, a contradiction. So there does not exist $\beta \in [\alpha]$ such that $\zeta_\beta$ is a subword of $\Psi_n(w)$. Therefore, in every case $\Psi_n(w) \notin K^{=s(n)}$, and since this is true for all $w \in \Sigma^n \setminus L^{=n}$, we have $\Sigma^n \setminus L^{=n} \subseteq \Psi_n^{-1}(A^{s(n)} \setminus K^{=s(n)})$, which is equivalent to $L^{=n} \supseteq \Psi_n^{-1}(K^{=s(n)})$. \bigskip This concludes the proof of the claim. \end{proof} This also concludes the proof of the lemma. \end{proof} As explained before stating the previous lemma, we can now use it to prove the result we were aiming for. 
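To make the ``feedback sweeping'' rearrangement performed by $\Psi_n$ more tangible, here is an illustrative simulation of its decorated output. This is a sketch under our own conventions, not part of the formal development: the function name is ours, positions are $0$-based (the proof uses $1$-based ones), and decoration level $0$ stands for an undecorated letter.

```python
def sweep_program(w, m):
    # Simulate the output Psi_n(w) of the feedback sweeping program for
    # a pattern of length m = |u|; letters are modelled as pairs
    # (letter, decoration level).
    n = len(w)
    if n < m:                                  # Psi_n is the empty program
        return []
    out = [(w[i], 0) for i in range(m - 1)]    # first m-1 letters, plain
    for i in range(m - 1, n):                  # current position i (0-based)
        out.append((w[i], 0))
        for j in range(1, m):                  # backward sweep, levels 1..m-1
            out.append((w[i - j], j))
        for j in range(2, m + 1):              # forward sweep, levels m..2m-2
            out.append((w[i - m + j], m + j - 2))
    return out
```

On input `"xabcy"` with $u = abc$, the block emitted at the position where the factor $abc$ ends is exactly the fully decorated pattern $u_{\length{u}} \, u_{\length{u} - 1}^{(1)} \cdots u_1^{(\length{u} - 1)} u_2^{(\length{u})} \cdots u_{\length{u}}^{(2 \length{u} - 2)}$ used in the definition of $\zeta_\beta$, and the output length matches $s(n)$.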
\begin{proposition} \label{ptn:RegLangPJ-All_distinct_simple_TCCDDO} Let $\Sigma$ be an alphabet, $l \in \N_{>0}$ and $u_1, \ldots, u_k \in \Sigma^+$ ($k \in \N_{>0}$) such that for each $i \in [k]$, the letters in $u_i$ are all distinct. For all $\alpha \in [l]^k$, we have $R_l^\alpha(u_1, \ldots, u_k) \cap S_l^\alpha(u_1, \ldots, u_k) \in \Prog{\FMVJ}$. \end{proposition} \begin{proof} Let $\Sigma$ be an alphabet, $l \in \N_{>0}$ and $u_1, \ldots, u_k \in \Sigma^+$ ($k \in \N_{>0}$) such that for each $i \in [k]$, the letters in $u_i$ are all distinct. Let $\alpha \in [l]^k$. For each $i \in [k]$ verifying $\alpha_i < l$, we define \begin{align*} L_i = & ({u_1}^{\alpha_1} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^* \cap \bigl(({u_1}^{\alpha_1} \cdots {u_i}^{\alpha_i + 1} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*\bigr)^\complement \cap\\ & \bigl(({u_1}^{\alpha_1} \cdots {u_{i - 1}}^{\alpha_{i - 1}}) \shuffle \Sigma^*\bigr) u_i \bigl(({u_{i + 1}}^{\alpha_{i + 1}} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*\bigr) \displaypunct{.} \end{align*} It is immediate to show that \[ R_l^\alpha(u_1, \ldots, u_k) \cap S_l^\alpha(u_1, \ldots, u_k) = ({u_1}^{\alpha_1} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^* \cap \bigcap_{i \in [k], \alpha_i < l} L_i \displaypunct{.} \] By Lemma~\ref{lem:Building-block_program}, $L_i \in \Prog{\FMVJ}$ for each $i \in [k]$ verifying $\alpha_i < l$. Moreover, since $({u_1}^{\alpha_1} \cdots {u_k}^{\alpha_k}) \shuffle \Sigma^*$ obviously is a piecewise testable language, it belongs to $\Prog{\FMVJ}$. Thus, we can conclude that $R_l^\alpha(u_1, \ldots, u_k) \cap S_l^\alpha(u_1, \ldots, u_k)$ belongs to $\Prog{\FMVJ}$ by closure of $\Prog{\FMVJ} \cap \Reg$ under intersection, Proposition~\ref{lemma-simple-closure-P}. \end{proof} We thus derive the awaited corollary. 
\begin{corollary} \label{cor:RegLangPJ-All_distinct_TCCDDO} Let $\Sigma$ be an alphabet, $l \in \N_{>0}$ and $u_1, \ldots, u_k\!\in\!\Sigma^+$ ($k \in \N_{>0}$) such that for each $i \in [k]$, the letters in $u_i$ are all distinct. Then, $\ccddo{u_1, \ldots, u_k}_l \in \Prog{\FMVJ}$. \end{corollary} However, what we really want to obtain is that $\ccddo{u_1, \ldots, u_k}_l \in \Prog{\FMVJ}$ without putting any restriction on the $u_i$'s. But, in fact, to remove the constraint that the letters must be all distinct in each of the $u_i$'s, we simply have to decorate each of the input letters with its position minus $1$ modulo a big enough $d \in \N_{>0}$. This finally leads to the following proposition. \begin{proposition} \label{ptn:RegLangPJ-TCCDDO} Let $\Sigma$ be an alphabet, $l \in \N_{>0}$ and $u_1, \ldots, u_k \in \Sigma^+$ ($k \in \N_{>0}$). Then $\ccddo{u_1, \ldots, u_k}_l \in \Prog{\FMVJ}$. \end{proposition} \begin{proof} Let $\Sigma$ be an alphabet, $l \in \N_{>0}$ and $u_1, \ldots, u_k \in \Sigma^+$ ($k \in \N_{>0}$). Let $d = \max_{i \in [k]} \length{u_i}$. If $d = 1$, then the result is straightforward because the language $\ccddo{u_1, \ldots, u_k}_l$ then belongs to $\DLang{\FMVJ}$, so now we assume $d \geq 2$. We let $\Sigma_d = \Sigma \times \Z \quotient d \Z$ and for all $w \in \Sigma^*$, for all $i \in \Z \quotient d \Z$, we define $\widetilde{w}^i = \prod_{j = 1}^{\length{w}} (w_j, (j + i - 1) \mod d)$. We also let $\widetilde{w} = \widetilde{w}^0$ for all $w \in \Sigma^*$. 
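This decoration admits a direct sketch (again with a function name and $0$-based positions of our own choosing, so that $(j + i - 1) \bmod d$ for $1$-based $j$ becomes $(j + i) \bmod d$). The point to observe is that any factor of length at most $d$, in particular every $\widetilde{v}^{\,i}$ with $\length{v} \leq d$, gets pairwise distinct decorated letters:

```python
def decorate(w, d, i=0):
    # Sketch of the map w -> w~^i: pair each letter of w with its
    # position (0-based here), shifted by i and reduced modulo d.
    return [(a, (j + i) % d) for j, a in enumerate(w)]
```

Even a constant word such as $aaa$ becomes a word with pairwise distinct letters after decoration, which is what makes the all-distinct-letters corollary applicable.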
For all $v \in \Sigma^+, \length{v} \leq d$, we define $\mu(v, 1) = v$ and \[ \mu(v, l) = \underbrace{v_1, \ldots, v_{\length{v}}, \ldots \ldots \ldots, v_1, \ldots, v_{\length{v}}}_{\text{$l$ times}} \displaypunct{.} \] For all $v_1, \ldots, v_{k'} \in \Sigma^+$ ($k' \in \N_{>0}$) such that $\length{v_i} \leq d$ for each $i \in [k']$, we let \[ \ccddo{v_1, \ldots, v_{k'}}_{l, d} = \bigcup_{i_1, \ldots, i_{k'} \in \Z \quotient d \Z} \ccddo{\widetilde{v_1}^{i_1}, \ldots, \widetilde{v_{k'}}^{i_{k'}}}_l \displaypunct{,} \] a language over $\Sigma_d$, that does belong to $\Prog{\FMVJ}$ by Corollary~\ref{cor:RegLangPJ-All_distinct_TCCDDO} and closure of $\Prog{\FMVJ} \cap \Reg$ under finite union (Proposition~\ref{lemma-simple-closure-P}), because since $\length{v_i} \leq d$ for each $i \in [k']$, each $\widetilde{v_i}^j$ for $j \in \Z \quotient d \Z$ has all distinct letters. This implies that for all $q_1, \ldots, q_k \in \set{1, l}$, we have that $\ccddo{\mu(u_1, q_1), \ldots, \allowbreak \mu(u_k, q_k)}_{l, d}$ does belong to $\Prog{\FMVJ}$, so that \[ \bigcup_{q_1, \ldots, q_k \in \set{1, l}} \ccddo{\mu(u_1, q_1), \ldots, \mu(u_k, q_k)}_{l, d} \] is a language over $\Sigma_d$ belonging to $\Prog{\FMVJ}$. Now, it is not so difficult to see that \begin{align*} \ccddo{u_1, \ldots, u_k}_l & = \bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}\\ & = \Bigset{w \in \Sigma^* \bigmid \widetilde{w} \in \bigcup_{q_1, \ldots, q_k \in \set{1, l}} \ccddo{\mu(u_1, q_1), \ldots, \mu(u_k, q_k)}_{l, d}} \displaypunct{,} \end{align*} which allows us to conclude that the sequence $(\Psi_n)_{n \in \N}$ of $\Sigma_d$-programs such that $\Psi_n(w) = \widetilde{w}$ for all $n \in \N$ and $w \in \Sigma^n$ is a program-reduction from $\ccddo{u_1, \ldots, u_k}_l$ to $\bigcup_{q_1, \ldots, q_k \in \set{1, l}} \ccddo{\mu(u_1, q_1), \ldots, \mu(u_k, q_k)}_{l, d}$ of length $n$. 
Hence, $\ccddo{u_1, \ldots, u_k}_l$ does also belong to $\Prog{\FMVJ}$ by Proposition~\ref{ptn:P(V)_program-reduction_closure}. \end{proof} This finishes the proof of Theorem~\ref{thm:TDDO_languages_in_P(J)} by closure of $\Prog{\FMVJ}$ under Boolean combinations (Proposition~\ref{lemma-simple-closure-P}) and by the discussion at the beginning of Subsection~\ref{sse:Non-tameness_of_J}. \section{Algebraic characterisation of threshold dot-depth one languages} \label{sec:Algebraic_characterisation_TDDO_languages} In his Ph.D.\ thesis~\cite{PhD_thesis/Grosshans}, the author conjectured that the class of threshold dot-depth one languages is exactly $\DLang{\FMVDA} \cap \DLang{\FSVJsdD}$ and proved that every strongly unambiguous monomial (the basic building blocks of $\DLang{\FMVDA}$) that is additionally required to belong to $\DLang{\FSVJsdD}$ is in fact a threshold dot-depth one language. The problem with the proof of this partial result supporting that conjecture is that it is very complex and technical, without leaving much hope for an extension to all languages in $\DLang{\FMVDA} \cap \DLang{\FSVJsdD}$. Here we show that this conjecture is indeed true, using a result by Costa~\cite{Costa-2000}. We first prove the easy direction; the proof can already be found in~\cite{PhD_thesis/Grosshans}. The result is quite straightforward but a bit cumbersome to prove. \begin{proposition} \label{ptn:TDDO_in_DAintJsdD} Any threshold dot-depth one language belongs to $\DLang{\FMVDA} \cap \DLang{\FSVJsdD}$. \end{proposition} \begin{proof} To prove the proposition, by closure under Boolean operations of both $\DLang{\FMVDA}$ and $\DLang{\FSVJsdD}$, it suffices to prove that for any alphabet $\Sigma$, $l \in \N_{>0}$ and $u, u_1, \ldots, u_k \in \Sigma^+$ ($k \in \N_{>0}$), the languages $\Sigma^* u$, $u \Sigma^*$ and $\ccddo{u_1, \ldots, u_k}_l$ do all belong to $\DLang{\FMVDA} \cap \DLang{\FSVJsdD}$. This is what we show in the following. 
Let $\Sigma$ be an alphabet, $l \in \N_{>0}$ and $u, u_1, \ldots, u_k \in \Sigma^+$ ($k \in \N_{>0}$). First, it is obvious that $\Sigma^* u$ and $u \Sigma^*$ do belong to $\DLang{\FMVDA} \cap \DLang{\FSVJsdD}$. We now show that $\ccddo{u_1, \ldots, u_k}_l \in \DLang{\FMVDA} \cap \DLang{\FSVJsdD}$. \paragraph{Membership in $\DLang{\FSVJsdD}$.} As given by Definition~\ref{def:TDDO_alternative}, we have that \[ \ccddo{u_1, \ldots, u_k}_l = \bigcup_{q_1, \ldots, q_k \in \set{1, l}} L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)} \displaypunct{,} \] where for all $q_1, \ldots, q_k \in \set{1, l}$, the language $L^{(l)}_{(u_1, q_1)} \cdots L^{(l)}_{(u_k, q_k)}$ is easily seen to be dot-depth one. Hence, by closure of $\DLang{\FSVJsdD}$ under finite union, we have that $\ccddo{u_1, \ldots, u_k}_l \in \DLang{\FSVJsdD}$. \paragraph{Membership in $\DLang{\FMVDA}$.} Let now $L = \ccddo{u_1, \ldots, u_k}_l$, let $\sim$ be its syntactic congruence and let $\omega$ be the idempotent power of its syntactic monoid $M$. Using the equational characterisation of $\FMVDA$, we are now going to prove that $M \in \FMVDA$: that is, we are going to prove that $(m n)^\omega = (m n)^\omega m (m n)^\omega$ for all $m, n \in M$, so that $M$ does belong to $\FMVDA$ and thus $\ccddo{u_1, \ldots, u_k}_l$ to $\DLang{\FMVDA}$. To show that each pair of elements of $M$ verifies the previous equation, by definition of the syntactic monoid of $L$, it suffices to show that $(uv)^\omega \sim (uv)^\omega u (uv)^\omega$ for all $u, v \in \Sigma^*$. \medskip Let $u, v \in \Sigma^*$. Our aim is to show that $(uv)^\omega \sim (uv)^\omega u (uv)^\omega$. 
By definition of the syntactic monoid of $L$ and of $\omega$, it is not too difficult to see that this is equivalent to showing that $(uv)^{\omega'} \sim (uv)^{\omega'} u (uv)^{\omega'}$ where $\omega' \in \N_{>0}$ is the smallest multiple of $\omega$ not smaller than $\sum_{i = 1}^k l \cdot \length{u_i}$ (why we need $\omega'$ to be as big will become clear later on). When both $u$ and $v$ are equal to the empty word, we trivially have that $(uv)^{\omega'} \sim (uv)^{\omega'} u (uv)^{\omega'}$. So we now assume that at least one of $u$ and $v$ is not equal to the empty word. \smallskip Let $x, y \in \Sigma^*$ be such that $w = x (uv)^{\omega'} y \in L$ and consider the word $w' = x (uv)^{\omega'} u (uv)^{\omega'} y$. Let's now prove that $w'$ does also belong to $L$. When $x$ or $y$ belongs to $L$, then it is obvious that $w'$ does also belong to it. We now assume that it is not the case. Let $i_1 \in [k]$ be the smallest integer in $[k]$ such that $x$ does not belong to $\ccddo{u_1, \ldots, u_{i_1}}_l$ and $i_2 \in [k]$ the biggest integer in $[k]$ such that $y$ does not belong to $\ccddo{u_{i_2}, \ldots, u_k}_l$, that do exist by the hypothesis we just made. Let $\kappa_1 \in \intinterval{0}{\length{x}}$ be the smallest integer in $[\length{x}]$ such that $x_1 \cdots x_{\kappa_1} \in \ccddo{u_1, \ldots, u_{i_1 - 1}}_l$ when $i_1 > 1$ and $0$ otherwise; let symmetrically $\kappa_2 \in \intinterval{1}{\length{y} + 1}$ be the biggest integer in $[\length{y}]$ such that $y_{\kappa_2} \cdots y_{\length{y}} \in \ccddo{u_{i_2 + 1}, \ldots, u_k}_l$ when $i_2 < k$ and $\length{y} + 1$ otherwise. The idea to prove $w' \in L$ is to distinguish between three cases when $i_1 \leq i_2$, otherwise it is direct. 
When the prefix $x (uv)^{l \cdot \length{u_{i_1}}}$ of $w$ and $w'$ belongs to $\ccddo{u_1, \ldots, u_{i_1}}_l$ and the suffix $(uv)^{l \cdot \length{u_{i_2}}} y$ of $w$ and $w'$ belongs to $\ccddo{u_{i_2}, \ldots, u_k}_l$, we can conclude by using the fact that all the letters of the words $u_{i_1 + 1}$ to $u_{i_2 - 1}$ are to be found in the remaining factor in the middle of $w$, made solely of powers of $uv$. Otherwise, the prefix $x (uv)^{l \cdot \length{u_{i_1}}}$ of $w$ and $w'$ does not belong to $\ccddo{u_1, \ldots, u_{i_1}}_l$ or the suffix $(uv)^{l \cdot \length{u_{i_2}}} y$ of $w$ and $w'$ does not belong to $\ccddo{u_{i_2}, \ldots, u_k}_l$. When the first possibility is true, we can show that we necessarily have that the prefix $x (uv)^{\omega'}$ of $w$ and $w'$ as a whole does not belong to $\ccddo{u_1, \ldots, u_{i_1}}_l$ and then conclude after analysing how $w$ consequently decomposes into one prefix in $\ccddo{u_1, \ldots, u_{i_1 - 1}}_l$, one middle factor in $\ccddo{u_{i_1}}_l$ and one suffix in $\ccddo{u_{i_1 + 1}, \ldots, u_k}_l$, using $\kappa_1$ and $\kappa_2$. We proceed by symmetry when the second possibility is true. We now move on to the details. If $i_1 > i_2$, then we have that $x$ belongs to $\ccddo{u_1, \ldots, u_{i_1 - 1}}_l$ (which is well defined as $i_1 > i_2 \geq 1$) and that $y$ belongs to $\ccddo{u_{i_1}, \ldots, u_k}_l$ (which is also well defined as $k \geq i_1$), so that $w'$ obviously belongs to $L$. Otherwise, $i_1 \leq i_2$. We first observe that if $x (uv)^{\omega'}$ belongs to $\ccddo{u_1, \ldots, u_{i_1}}_l$, then $x (uv)^{l \cdot \length{u_{i_1}}}$ does also belong to it. Indeed, assume the hypothesis of the implication is true; there are two possible cases. Either all letters of $u_{i_1}$ appear in $uv$: in that case we have $(uv)^{l \cdot \length{u_{i_1}}} \in {u_{i_1}}^l \shuffle \Sigma^* \subseteq \ccddo{u_{i_1}}_l$ and hence $x (uv)^{l \cdot \length{u_{i_1}}} \in \ccddo{u_1, \ldots, u_{i_1}}_l$. 
Or there is at least one letter in $u_{i_1}$ not appearing in $uv$: since $x \notin \ccddo{u_1, \ldots, u_{i_1}}_l$, either \begin{itemize} \item $u_{i_1}$ is a factor of $x (uv)^{\omega'}$ whose first letter is in $x_{\kappa_1 + 1} \cdots x_{\length{x}}$ and whose last letter is in $(uv)^{\omega'}$, so that because $\length{uv} \geq 1$, we have \[ x_{\kappa_1 + 1} \cdots x_{\length{x}} (uv)^{l \cdot \length{u_{i_1}}} \in \Sigma^* u_{i_1} \Sigma^* \subseteq \ccddo{u_{i_1}}_l \] and hence $x (uv)^{l \cdot \length{u_{i_1}}} \in \ccddo{u_1, \ldots, u_{i_1}}_l$; \item or ${u_{i_1}}^l$ is a subword of $x_{\kappa_1 + 1} \cdots x_{\length{x}} (uv)^{\omega'}$ such that only its at most $\length{u_{i_1}} - 1$ last letters appear in the factor $(uv)^{\omega'}$, so that we have \[ x_{\kappa_1 + 1} \cdots x_{\length{x}} (uv)^{l \cdot \length{u_{i_1}}} \in {u_{i_1}}^l \shuffle \Sigma^* \subseteq \ccddo{u_{i_1}}_l \] and hence $x (uv)^{l \cdot \length{u_{i_1}}} \in \ccddo{u_1, \ldots, u_{i_1}}_l$. \end{itemize} Symmetrically, we can prove that if $(uv)^{\omega'} y$ belongs to $\ccddo{u_{i_2}, \ldots, u_k}_l$, then the word $(uv)^{l \cdot \length{u_{i_2}}} y$ does also belong to it. We now distinguish between three different cases. \begin{itemize} \item $x (uv)^{l \cdot \length{u_{i_1}}}$ does not belong to $\ccddo{u_1, \ldots, u_{i_1}}_l$. By what we have shown just above, this means that $x (uv)^{\omega'}$ does not belong to $\ccddo{u_1, \ldots, u_{i_1}}_l$ either. 
Since $w \in L$, this necessarily means that $\kappa_2 > 1$ and that there exists $\kappa_2' \in \intinterval{2}{\kappa_2}$ verifying $x_1 \cdots x_{\kappa_1} \in \ccddo{u_1, \ldots, u_{i_1 - 1}}_l$, \[ x_{\kappa_1 + 1} \cdots x_{\length{x}} (uv)^{\omega'} y_1 \cdots y_{\kappa_2' - 1} \in \ccddo{u_{i_1}}_l \] and $y_{\kappa_2'} \cdots y_{\length{y}} \in \ccddo{u_{i_1 + 1}, \ldots, u_k}_l$, implying $i_1 = i_2$ and that $\kappa_2'$ can be taken equal to $\kappa_2$ by the fact that $y \notin \ccddo{u_{i_2}, \ldots, u_k}_l$ and $y_{\kappa_2} \cdots y_{\length{y}} \in \ccddo{u_{i_2 + 1}, \ldots, u_k}_l$. If $(uv)^{\omega'} y$ belongs to $\ccddo{u_{i_1}, \ldots, u_k}_l$, then as it holds that $x \in \ccddo{u_1, \ldots, u_{i_1 - 1}}_l$, we have that $w' = x (uv)^{\omega'} u (uv)^{\omega'} y \in L$. Otherwise, since $u_{i_1}$ contains at least one letter not appearing in $uv$ and $\length{uv} \geq 1$, ${u_{i_1}}^l$ must be a subword of $x_{\kappa_1 + 1} \cdots x_{\length{x}} (uv)^{\omega'} y_1 \cdots y_{\kappa_2' - 1} \in \ccddo{u_{i_1}}_l$ with at most $\length{u_{i_1}} - 1$ of its letters appearing in the factor $(uv)^{\omega'}$, so that \[ x_{\kappa_1 + 1} \cdots x_{\length{x}} (uv)^{\omega'} u (uv)^{\omega'} y_1 \cdots y_{\kappa_2' - 1} \in {u_{i_1}}^l \shuffle \Sigma^* \subseteq \ccddo{u_{i_1}}_l \displaypunct{,} \] also showing $w' = x (uv)^{\omega'} u (uv)^{\omega'} y \in L$. \item $(uv)^{l \cdot \length{u_{i_2}}} y$ does not belong to $\ccddo{u_{i_2}, \ldots, u_k}_l$. Symmetrically to the previous case, we can show that $w' \in L$. \item $x (uv)^{l \cdot \length{u_{i_1}}}$ belongs to $\ccddo{u_1, \ldots, u_{i_1}}_l$ on one side and $(uv)^{l \cdot \length{u_{i_2}}} y$ belongs to $\ccddo{u_{i_2}, \ldots, u_k}_l$ on the other side. 
In this case, for all $i \in \intinterval{i_1 + 1}{i_2 - 1}$, we have that $\alphabet(u_i) \subseteq \alphabet(uv)$, because as $x \notin \ccddo{u_1, \ldots, u_{i_1}}_l$ and $y \notin \ccddo{u_{i_2}, \ldots, u_k}_l$, we must have $(uv)^{\omega'} \in \ccddo{u_{i_1 + 1}, \ldots, u_{i_2 - 1}}_l$. Hence, we have that $(uv)^{\omega' - l \cdot \length{u_{i_1}}} u (uv)^{\omega' - l \cdot \length{u_{i_2}}}$, containing $(uv)^{\sum_{i = i_1 + 1}^{i_2 - 1} l \cdot \length{u_i}}$ as a subword, belongs to ${u_{i_1 + 1}}^l \cdots {u_{i_2 - 1}}^l \shuffle \Sigma^* \subseteq \ccddo{u_{i_1 + 1}, \ldots, u_{i_2 - 1}}_l$. Thus, putting everything together, we get that $w' = x (uv)^{\omega'} u (uv)^{\omega'} y \in L$. \end{itemize} Therefore, in any case we have $x (uv)^{\omega'} u (uv)^{\omega'} y \in L$. \smallskip Let $x, y \in \Sigma^*$ be such that $x (uv)^{\omega'} u (uv)^{\omega'} y \in L$. In a way similar to above, we can show that then $x (uv)^{\omega'} y \in L$. \medskip This shows that $(uv)^\omega \sim (uv)^\omega u (uv)^\omega$ and, as this is true for all $u, v \in \Sigma^*$, we eventually get that $\ccddo{u_1, \ldots, u_k}_l \in \DLang{\FMVDA}$. \bigskip This concludes the proof of the proposition. \end{proof} Let us denote by $\FSVGen{\FMVDA}$ the variety of semigroups generated by the variety of monoids $\FMVDA$, that is, the smallest variety of semigroups containing $\FMVDA$. By~\cite[Chapter~V, Exercise~1.3 and Proposition~1.1]{Books/Eilenberg-1976}, we have that for any language over some alphabet, its syntactic semigroup belongs to $\FSVGen{\FMVDA}$ if and only if its syntactic monoid belongs to $\FMVDA$. Thus, the languages in $\DLang{\FMVDA} \cap \DLang{\FSVJsdD}$ are exactly those whose syntactic semigroup belongs to both $\FSVGen{\FMVDA}$ and $\FSVJsdD$; in other words, $\DLang{\FMVDA} \cap \DLang{\FSVJsdD} = \DLang{\FSVGen{\FMVDA} \cap \FSVJsdD}$. 
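The case analysis in the proof above repeatedly decides, for words $u$ and $w$, whether $u$ occurs as a factor of $w$ (that is, whether $w \in \Sigma^* u \Sigma^*$) or whether $u^l$ occurs as a subword of $w$ (that is, whether $w \in u^l \shuffle \Sigma^*$). As a purely illustrative aside, these two elementary tests can be sketched in a few lines of Python (the function names are ours, not notation from the paper):

```python
def is_factor(u, w):
    # w is in Sigma* u Sigma* iff u occurs as a contiguous factor of w.
    return u in w

def is_subword(u, w):
    # w is in u shuffle Sigma* iff u occurs as a (scattered) subword of w:
    # greedily match the letters of u from left to right inside w.
    letters = iter(w)
    return all(c in letters for c in u)

def in_power_shuffle(u, l, w):
    # w is in u^l shuffle Sigma* iff the l-th power of u is a subword of w.
    return is_subword(u * l, w)
```

For instance, `is_subword("ab", "ba")` is false, and `in_power_shuffle("ab", 2, "aabb")` is false because $abab$ is not a subword of $aabb$, even though $ab$ is.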
In his Ph.D.\ thesis~\cite{PhD_thesis/Costa} and later in~\cite{Costa-2000}, Costa gave a language-theoretic characterisation of $\DLang{\FSVGen{\FMVDA} \cap \FSVLJ}$, where $\FSVLJ$ is the variety of locally-$\FMVJ$ semigroups, the class of all finite semigroups $S$ such that for any idempotent $e$ in $S$, the monoid $e S e$ belongs to $\FMVJ$. It is well known that $\FSVJsdD$ is a strict subclass of $\FSVLJ$~\cite[Theorem~17.3, Example~15.8]{Tilson-1987}, so if we manage to prove that any language in $\DLang{\FSVGen{\FMVDA} \cap \FSVLJ}$ is threshold dot-depth one, we will in particular have proven that any language in $\DLang{\FMVDA} \cap \DLang{\FSVJsdD}$ is threshold dot-depth one. Let us present Costa's characterisation. \begin{definition}[See~{\cite[p.35]{Costa-2000}}] Let $\Sigma$ be an alphabet. We let $\mathcal{K}(\FSVGen{\FMVDA} \cap \FSVLJ)(\Sigma^*)$ be the set of languages over $\Sigma$ of the form \[ u_0 A_1^* X_1 A_2^* \cdots X_{n - 1} A_n^* u_n \] where $n, r \in \N$, $u_0, u_1, \ldots, u_n \in \Sigma^*$, $\emptyset \neq A_1, A_2, \ldots, A_n \subseteq \Sigma$ and, for all $i \in \intinterval{1}{n - 1}$, we have \[ X_i = \begin{cases} \set{u_i} & \begin{array}{@{}l@{}} \text{if $u_i \neq \emptyword$, with}\\ \alphabet(u_i) \nsubseteq A_i, A_{i + 1} \end{array}\\ (A_i \setminus A_{i + 1}) (A_i \cap A_{i + 1})^{\geq r} (A_{i + 1} \setminus A_i) & \begin{array}{@{}l@{}} \text{otherwise, with}\\ \text{$A_i \nsubseteq A_{i + 1}$ and $A_{i + 1} \nsubseteq A_i$} \end{array} \end{cases} \displaypunct{.} \] \end{definition} \begin{remark} There is actually a slight mistake in the definition given in~\cite[p.35]{Costa-2000}, as the languages defined should be exactly those recognised by the automata $\mathcal{A}(r; u_0, A_1, u_1, \ldots, \allowbreak A_n, u_n)$ of page 10: the $(A_{i + 1} \setminus A_i)$ factor is missing in the definition of $X_i$ for every $u_i$ equal to the empty word. 
\end{remark} \begin{theorem}[{\cite[Theorem 9.1]{Costa-2000}}] \label{thm:Costa} For $\Sigma$ an alphabet, $\DLang{\FSVGen{\FMVDA} \cap \FSVLJ}(\Sigma^*)$ is the set of all languages over $\Sigma$ that are Boolean combinations of languages of $\mathcal{K}(\FSVGen{\FMVDA} \cap \FSVLJ)(\Sigma^*)$. \end{theorem} The main contribution of this section is that we can indeed prove that all languages in $\mathcal{K}(\FSVGen{\FMVDA} \cap \FSVLJ)(\Sigma^*)$ for any alphabet $\Sigma$ are threshold dot-depth one. We defer the proof, which is technical, to the end of the section. \begin{proposition} \label{ptn:DAintLJ_in_TDDO} For any alphabet $\Sigma$, any language in $\mathcal{K}(\FSVGen{\FMVDA} \cap \FSVLJ)(\Sigma^*)$ is threshold dot-depth one. \end{proposition} By closure of the class of threshold dot-depth one languages under Boolean operations and since $\FSVGen{\FMVDA} \cap \FSVJsdD \subseteq \FSVGen{\FMVDA} \cap \FSVLJ$, this allows us to fill the gap in the author's Ph.D.\ thesis using Costa's theorem, Theorem~\ref{thm:Costa}, obtaining the following result by combination with Proposition~\ref{ptn:TDDO_in_DAintJsdD}. \begin{theorem} A language is threshold dot-depth one if and only if it belongs to $\DLang{\FMVDA} \cap \DLang{\FSVJsdD} = \DLang{\FSVGen{\FMVDA} \cap \FSVJsdD}$. \end{theorem} Note that this gives an algebraic characterisation of threshold dot-depth one languages. Also note that, as another corollary of Proposition~\ref{ptn:DAintLJ_in_TDDO}, since $\DLang{\FSVGen{\FMVDA} \cap \FSVLJ}$ and $\DLang{\FSVGen{\FMVDA} \cap \FSVJsdD}$ are so-called \nem-varieties of languages, we have that $\FSVGen{\FMVDA} \cap \FSVJsdD = \FSVGen{\FMVDA} \cap \FSVLJ$ (see~\cite{Straubing-2002}). We now prove Proposition~\ref{ptn:DAintLJ_in_TDDO}. \begin{proof}[Proof of Proposition~\ref{ptn:DAintLJ_in_TDDO}] Let $\Sigma$ be an alphabet. 
Let \[ L = u_0 A_1^* X_1 A_2^* \cdots X_{n - 1} A_n^* u_n \] where $n, r \in \N$, $u_0, u_1, \ldots, u_n \in \Sigma^*$, $\emptyset \neq A_1, A_2, \ldots, A_n \subseteq \Sigma$ and, for all $i \in \intinterval{1}{n - 1}$, we have \[ X_i = \begin{cases} \set{u_i} & \begin{array}{@{}l@{}} \text{if $u_i \neq \emptyword$, with}\\ \alphabet(u_i) \nsubseteq A_i, A_{i + 1} \end{array}\\ (A_i \setminus A_{i + 1}) (A_i \cap A_{i + 1})^{\geq r} (A_{i + 1} \setminus A_i) & \begin{array}{@{}l@{}} \text{otherwise, with}\\ \text{$A_i \nsubseteq A_{i + 1}$ and $A_{i + 1} \nsubseteq A_i$} \end{array} \end{cases} \displaypunct{.} \] If $n = 0$, then \[ L = u_0 = u_0 \Sigma^* \cap \bigcap_{c \in \Sigma} {\ccddo{u_0, c}_2}^\complement \displaypunct{,} \] so $L$ is indeed threshold dot-depth one. If $n = 1$, then \[ L = u_0 A_1^* u_n = u_0 \Sigma^* \cap \Sigma^* u_n \cap \bigcap_{c \in \Sigma \setminus A_1} {\ccddo{u_0, c, u_n}_2}^\complement \displaypunct{,} \] so, again, $L$ is threshold dot-depth one. We now assume $n \geq 2$. Given some $u \in \Sigma^+$, we set $\overline{u} = u_1, \ldots, u_{\length{u}}$. Moreover, for all $i \in \intinterval{1}{n - 1}$, we set \[ Y_i = \begin{cases} \set{u_i} & \text{if $u_i \neq \emptyword$}\\ (A_i \setminus A_{i + 1}) (A_{i + 1} \setminus A_i) & \text{otherwise} \end{cases} \] and \[ \widetilde{v_i} = \begin{cases} u_i & \text{if $u_i \neq \emptyword$}\\ \overline{v_{i, 1} v_{i, 2}} & \text{otherwise} \end{cases} \] for any $v_i \in Y_i$. 
We now define \[ K = \begin{array}[t]{@{}l@{}} \displaystyle u_0 \Sigma^* \cap \Sigma^* u_n \cap \bigcup_{v_1 \in Y_1, \ldots, v_{n - 1} \in Y_{n - 1}} \ccddo{u_0, \widetilde{v_1}, \ldots, \widetilde{v_{n - 1}}, u_n}_2 \cap\\ \displaystyle \bigcap_{\substack{v_1 \in Y_1, \ldots, v_{n - 1} \in Y_{n - 1}\\ i \in \intinterval{1}{n}, c \in \Sigma \setminus A_i}} {\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, c, \overline{v_i}, \ldots, \overline{v_{n - 1}}, u_n}_2}^\complement \cap\\ \displaystyle \bigcap_{\substack{i \in \set{j \in \intinterval{1}{n - 1} \mid u_j = \emptyword}\\ b_i \in A_i \setminus A_{i + 1}, b_{i + 1} \in A_{i + 1} \setminus A_i\\ v_1 \in Y_1, \ldots, v_{i - 1} \in Y_{i - 1},\\ v_{i + 1} \in Y_{i + 1}, \ldots, v_{n - 1} \in Y_{n - 1}}}\\ \displaystyle \Bigl( \begin{array}[t]{@{}l@{}} \displaystyle \bigcap_{c \in \Sigma \setminus (A_i \cup A_{i + 1})} {\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, b_i, c, b_{i + 1}, \overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, u_n}_2}^\complement \cap\\ \displaystyle \bigcap_{v \in (A_i \cap A_{i + 1})^{< r}} {\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, b_i v b_{i + 1}, \overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, u_n}_2}^\complement\Bigr) \displaypunct{.} \end{array} \end{array} \] Our goal is now to show that $L = K$, which implies that $L$ is actually threshold dot-depth one. \paragraph{Direction $L \subseteq K$.} Let $w \in L$. Then there exist $\alpha_1 \in A_1^*, \ldots, \alpha_n \in A_n^*$ and $x_1 \in X_1, \ldots, x_{n - 1} \in X_{n - 1}$ such that $w = u_0 \alpha_1 x_1 \alpha_2 \cdots x_{n - 1} \alpha_n u_n$. We can prove the following instrumental claim. 
\begin{claim} \label{clm:Alignment} For each $i \in \intinterval{0}{n - 1}$ and $v_1 \in Y_1, \ldots, v_{n - 1} \in Y_{n - 1}$ such that $w \in \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{n - 1}}, u_n}_2$, we have that any prefix of $w$ belonging to $\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_i}}_2$ starts with $u_0 \alpha_1 x_1 \cdots \alpha_i x_i$. Similarly, for each $i \in \intinterval{1}{n}$ and $v_1 \in Y_1, \ldots, v_{n - 1} \in Y_{n - 1}$ such that $w \in \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{n - 1}}, u_n}_2$, we have that any suffix of $w$ belonging to $\ccddo{\overline{v_i}, \ldots, \overline{v_{n - 1}}, u_n}_2$ ends with $x_i \alpha_{i + 1} \cdots x_{n - 1} \alpha_n u_n$. \end{claim} Now, by the structure of $w$, it is obvious that \[ w \in u_0 \Sigma^* \cap \Sigma^* u_n \cap \bigcup_{v_1 \in Y_1, \ldots, v_{n - 1} \in Y_{n - 1}} \ccddo{u_0, \widetilde{v_1}, \ldots, \widetilde{v_{n - 1}}, u_n}_2 \displaypunct{.} \] It remains to be proved that $w$ does not belong to any of the sets complemented in the formula for $K$. Assume there exist $v_1 \in Y_1, \ldots, v_{n - 1} \in Y_{n - 1}$, some $i \in \intinterval{1}{n}$ and $c \in \Sigma \setminus A_i$ such that $w$ belongs to $\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, c, \overline{v_i}, \ldots, \overline{v_{n - 1}}, u_n}_2$. 
Then, since by Claim~\ref{clm:Alignment} we have that any prefix of $w$ belonging to $\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}}_2$ starts with $u_0 \alpha_1 x_1 \cdots \alpha_{i - 1} x_{i - 1}$ and that any suffix of $w$ belonging to $\ccddo{\overline{v_i}, \ldots, \overline{v_{n - 1}}, u_n}_2$ ends with $x_i \alpha_{i + 1} \cdots x_{n - 1} \alpha_n u_n$, and since \[ \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, c, \overline{v_i}, \ldots, \overline{v_{n - 1}}, u_n}_2 = \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}}_2 \ccddo{c}_2 \ccddo{\overline{v_i}, \ldots, \overline{v_{n - 1}}, u_n}_2 \displaypunct{,} \] this would imply that a factor of $\alpha_i$ belongs to $\ccddo{c}_2$. This would in turn mean that $c \in \alphabet(\alpha_i) \cap (\Sigma \setminus A_i)$ while $\alpha_i \in A_i^*$: contradiction. So, \[ w \in \bigcap_{\substack{v_1 \in Y_1, \ldots, v_{n - 1} \in Y_{n - 1}\\ i \in \intinterval{1}{n}, c \in \Sigma \setminus A_i}} {\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, c, \overline{v_i}, \ldots, \overline{v_{n - 1}}, u_n}_2}^\complement \displaypunct{.} \] Similarly, let $i \in \intinterval{1}{n - 1}$ verifying $u_i = \emptyword$, let $b_i \in A_i \setminus A_{i + 1}, b_{i + 1} \in A_{i + 1} \setminus A_i$ (i.e.\ $b_i b_{i + 1} \in Y_i$) and $v_1 \in Y_1, \ldots, v_{i - 1} \in Y_{i - 1}, v_{i + 1} \in Y_{i + 1}, \ldots, v_{n - 1} \in Y_{n - 1}$. Firstly, assume there exists $c \in \Sigma \setminus (A_i \cup A_{i + 1})$ such that $w$ belongs to $\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, b_i, c, b_{i + 1}, \allowbreak \overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, u_n}_2$. 
Then, since by Claim~\ref{clm:Alignment} we have that any prefix of $w$ belonging to $\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}}_2$ starts with $u_0 \alpha_1 x_1 \cdots \allowbreak \alpha_{i - 1} x_{i - 1}$ and that any suffix of $w$ belonging to $\ccddo{\overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, u_n}_2$ ends with $x_{i + 1} \alpha_{i + 2} \cdots x_{n - 1} \alpha_n u_n$, and since \begin{align*} & \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, b_i, c, b_{i + 1}, \overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, u_n}_2\\ = & \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}}_2 \ccddo{b_i, c, b_{i + 1}}_2 \ccddo{\overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, u_n}_2 \displaypunct{,} \end{align*} this would imply that a factor of $\alpha_i x_i \alpha_{i + 1}$ belongs to $\ccddo{b_i, c, b_{i + 1}}_2$. This would in turn mean that $c \in \alphabet(\alpha_i x_i \alpha_{i + 1}) \cap \bigl(\Sigma \setminus (A_i \cup A_{i + 1})\bigr)$ while $\alpha_i x_i \alpha_{i + 1} \in A_i^* (A_i \setminus A_{i + 1}) (A_i \cap A_{i + 1})^{\geq r} (A_{i + 1} \setminus A_i) A_{i + 1}^*$: contradiction. Secondly, assume there exists $v \in (A_i \cap A_{i + 1})^{< r}$ such that $w$ belongs to $\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, b_i v b_{i + 1}, \overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, \allowbreak u_n}_2$. 
Then, since by Claim~\ref{clm:Alignment} we have that any prefix of $w$ belonging to $\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}}_2$ starts with $u_0 \alpha_1 x_1 \cdots \alpha_{i - 1} x_{i - 1}$ and that any suffix of $w$ belonging to $\ccddo{\overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, u_n}_2$ ends with $x_{i + 1} \alpha_{i + 2} \cdots x_{n - 1} \alpha_n u_n$, and since \begin{align*} & \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, b_i v b_{i + 1}, \overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, u_n}_2\\ = & \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}}_2 \ccddo{b_i v b_{i + 1}}_2 \ccddo{\overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, u_n}_2 \displaypunct{,} \end{align*} this would imply that a factor of $\alpha_i x_i \alpha_{i + 1}$ belongs to $\ccddo{b_i v b_{i + 1}}_2$. As $\alpha_i x_i \alpha_{i + 1} \in A_i^* (A_i \setminus A_{i + 1}) (A_i \cap A_{i + 1})^{\geq r} (A_{i + 1} \setminus A_i) A_{i + 1}^* \subseteq A_i^* A_{i + 1}^*$, we could not have that $(b_i v b_{i + 1})^2$ is a subword of $\alpha_i x_i \alpha_{i + 1}$, as $b_{i + 1} \notin A_i$ and $b_i \notin A_{i + 1}$, so it would necessarily be the case that $b_i v b_{i + 1}$ is a factor of $\alpha_i x_i \alpha_{i + 1}$. However, as $v$ is of length less than $r$, the word $b_i v b_{i + 1}$ does not fit as a factor anywhere in $\alpha_i x_i \alpha_{i + 1}$, hence we again reach a contradiction. 
Therefore, \begin{align*} w \in & \bigcap_{\substack{i \in \set{j \in \intinterval{1}{n - 1} \mid u_j = \emptyword}\\ b_i \in A_i \setminus A_{i + 1}, b_{i + 1} \in A_{i + 1} \setminus A_i\\ v_1 \in Y_1, \ldots, v_{i - 1} \in Y_{i - 1},\\ v_{i + 1} \in Y_{i + 1}, \ldots, v_{n - 1} \in Y_{n - 1}}}\\ & \Bigl( \begin{array}[t]{@{}l@{}} \displaystyle \bigcap_{c \in \Sigma \setminus (A_i \cup A_{i + 1})} {\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, b_i, c, b_{i + 1}, \overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, u_n}_2}^\complement \cap\\ \displaystyle \bigcap_{v \in (A_i \cap A_{i + 1})^{< r}} {\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, b_i v b_{i + 1}, \overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, u_n}_2}^\complement\Bigr) \displaypunct{.} \end{array} \end{align*} Proving Claim~\ref{clm:Alignment} thus completes the proof that $L \subseteq K$. \begin{proof}[Proof of Claim~\ref{clm:Alignment}] We only prove the first part of the claim; the second part can be proven symmetrically. We want to prove that for each $i \in \intinterval{0}{n - 1}$ and $v_1 \in Y_1, \ldots, v_{n - 1} \in Y_{n - 1}$ such that $w \in \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{n - 1}}, u_n}_2$, any prefix of $w$ belonging to $\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_i}}_2$ starts with $u_0 \alpha_1 x_1 \cdots \alpha_i x_i$. We are going to prove it by induction on $i$. \subparagraph{Base case $i = 0$.} It is straightforward to see that for all $v_1 \in Y_1, \ldots, v_{n - 1} \in Y_{n - 1}$ such that $w \in \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{n - 1}}, u_n}_2$, any prefix of $w$ belonging to $\ccddo{u_0}_2$ starts with $u_0$. 
\subparagraph{Induction.} Let $i \in \intinterval{0}{n - 2}$ verifying that for all $v_1 \in Y_1, \ldots, v_{n - 1} \in Y_{n - 1}$ such that $w \in \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{n - 1}}, u_n}_2$, any prefix of $w$ belonging to $\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_i}}_2$ starts with $u_0 \alpha_1 x_1 \cdots \alpha_i x_i$. We are now going to prove that this also holds for $i + 1$. Let $v_1 \in Y_1, \ldots, v_{n - 1} \in Y_{n - 1}$ such that $w \in \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{n - 1}}, u_n}_2$. By the inductive hypothesis, we have that any prefix of $w$ belonging to $\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_i}}_2$ starts with $u_0 \alpha_1 x_1 \cdots \alpha_i x_i$. We claim that any prefix of $\alpha_{i + 1} x_{i + 1} \alpha_{i + 2} \cdots x_{n - 1} \allowbreak \alpha_n u_n$ containing $v_{i + 1}$ as a subword starts with $\alpha_{i + 1} x_{i + 1}$: this implies that any prefix of $w$ belonging to $\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i + 1}}}_2$ starts with $u_0 \alpha_1 x_1 \cdots \alpha_i x_i \alpha_{i + 1} x_{i + 1}$, otherwise we would have that some proper prefix of $u_0 \alpha_1 x_1 \cdots \alpha_i x_i$ belongs to $\ccddo{u_0, \overline{v_1}, \ldots, \overline{v_i}}_2$ or that some proper prefix of $\alpha_{i + 1} x_{i + 1}$ belongs to $\ccddo{\overline{v_{i + 1}}}_2$. We will show that the second situation cannot occur. We have two cases. \begin{itemize} \item Either $u_{i + 1} \neq \emptyword$. Then, we have $x_{i + 1} = v_{i + 1} = u_{i + 1}$ and it is obvious that $\alpha_{i + 1} x_{i + 1}$ contains $u_{i + 1}$ as a subword. But as $x_{i + 1} = u_{i + 1} = z c z'$ with $z \in A_{i + 1}^*$, $c \in \Sigma \setminus A_{i + 1}$ and $z' \in \Sigma^*$, it cannot be that a proper prefix of $\alpha_{i + 1} x_{i + 1}$ contains $v_{i + 1} = u_{i + 1}$ as a subword, otherwise we would have that $\alpha_{i + 1} z \in A_{i + 1}^*$ contains $c$. \item Or $u_{i + 1} = \emptyword$. 
Then, we have $x_{i + 1} = a z b$ and $v_{i + 1} = a' b'$ with $a, a' \in A_{i + 1} \setminus A_{i + 2}$, $z \in (A_{i + 1} \cap A_{i + 2})^{\geq r}$ and $b, b' \in A_{i + 2} \setminus A_{i + 1}$. But since $\alpha_{i + 1} a z \in A_{i + 1}^*$, it cannot contain $b'$, so no proper prefix of $\alpha_{i + 1} x_{i + 1}$ can contain $v_{i + 1}$ as a subword. \end{itemize} This concludes the proof of Claim~\ref{clm:Alignment}. \end{proof} \paragraph{Direction $K \subseteq L$.} Let $w \in K$. Then, since \[ w \in u_0 \Sigma^* \cap \Sigma^* u_n \cap \bigcup_{v_1 \in Y_1, \ldots, v_{n - 1} \in Y_{n - 1}} \ccddo{u_0, \widetilde{v_1}, \ldots, \widetilde{v_{n - 1}}, u_n}_2 \displaypunct{,} \] we have that there exist $v_1 \in Y_1, \ldots, v_{n - 1} \in Y_{n - 1}$ and $y_1 \in \ccddo{\widetilde{v_1}}_2, \ldots, y_{n - 1} \in \ccddo{\widetilde{v_{n - 1}}}_2$ such that $w = u_0 y_1 \cdots y_{n - 1} u_n$. Let $i \in \intinterval{1}{n - 1}$. There are two cases to consider. \begin{itemize} \item $u_i \neq \emptyword$. In that case, we have $\widetilde{v_i} = u_i$, so that $y_i \in \ccddo{u_i}_2$. Since $u_0 y_1 \cdots y_{i - 1} \in \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}}_2$ and $y_{i + 1} \cdots y_{n - 1} u_n \in \ccddo{\overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, \allowbreak u_n}_2$ and as there exists some $c \in \alphabet(u_i) \setminus A_i$, it cannot be that ${u_i}^2$ is a subword of $y_i$, otherwise we would have that $w \in \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, c, \overline{v_i}, \ldots, \allowbreak \overline{v_{n - 1}}, u_n}_2$. Hence, there exist $\alpha_i, \beta_i \in \Sigma^*$ such that $y_i = \alpha_i u_i \beta_i$, which actually verify $\alpha_i \in A_i^*$ and $\beta_i \in A_{i + 1}^*$ for the same reasons as just above. \item $u_i = \emptyword$. 
In that case, we have $\widetilde{v_i} = \overline{v_{i, 1} v_{i, 2}}$, so that $y_i \in \ccddo{v_{i, 1}, v_{i, 2}}_2$ with $v_{i, 1} v_{i, 2} \in (A_i \setminus A_{i + 1}) (A_{i + 1} \setminus A_i)$, which means that there exist $a_i \in A_i \setminus A_{i + 1}, b_i \in A_{i + 1} \setminus A_i$ verifying that $a_i b_i$ is a subword of $y_i$. We can take these $a_i, b_i$ to be such that $y_i$ can be decomposed as $\alpha_i a_i z_i b_i \beta_i$ where $\alpha_i, \beta_i \in \Sigma^*$ and $z_i \in \Bigl((A_i \cap A_{i + 1}) \cup \bigl(\Sigma \setminus (A_i \cup A_{i + 1})\bigr) \Bigr)^*$. Since $u_0 y_1 \cdots y_{i - 1} \in \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}}_2$ and $y_{i + 1} \cdots y_{n - 1} u_n \in \ccddo{\overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, u_n}_2$ and as $a_i b_i \in Y_i$, it must actually be that $\alpha_i \in A_i^*$ and $\beta_i \in A_{i + 1}^*$, otherwise we would have that $w \in \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, c, \overline{a_i b_i}, \overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, u_n}_2$ for some $c \in \Sigma \setminus A_i$ or that $w \in \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, \overline{a_i b_i}, c, \overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, u_n}_2$ for some $c \in \Sigma \setminus A_{i + 1}$. Moreover, it must be that $z_i \in (A_i \cap A_{i + 1})^*$, otherwise we would have that $w \in \ccddo{u_0, \overline{v_1}, \ldots, \overline{v_{i - 1}}, a_i, c, b_i, \overline{v_{i + 1}}, \ldots, \overline{v_{n - 1}}, u_n}_2$ for some $c \in \Sigma \setminus (A_i \cup A_{i + 1})$. Finally, using the last complemented set in the formula for $K$, we also necessarily have that $\length{z_i} \geq r$, so that $z_i \in (A_i \cap A_{i + 1})^{\geq r}$. \end{itemize} Therefore, in any case we have that $y_i = \alpha_i x_i \beta_i$ with $x_i \in X_i$ as well as $\alpha_i \in A_i^*$ and $\beta_i \in A_{i + 1}^*$. This allows us to conclude that $w \in L$. 
\end{proof} \section{Conclusion} \label{sec:Conclusion} Although $\Prog{\FMVJ}$ is very small compared to $\AC[0]$, we have shown that programs over monoids in $\FMVJ$ are an interesting subject of study in that they make it possible to do quite unexpected things. The ``feedback-sweeping'' technique allows one to detect the presence of a factor with such programs as long as this factor does not appear too often as a subword: this is the basic principle behind threshold dot-depth one languages, which our article shows to belong wholly to $\Prog{\FMVJ}$. The result that the class of threshold dot-depth one languages corresponds exactly to $\DLang{\FMVDA} \cap \DLang{\FSVJsdD}$ is of independent interest for automata theory: it means that this class is exactly the intersection of the class of dot-depth one languages with the class of languages recognised by monoids in $\FMVDA$, two well-known classes at the bottom of the dot-depth hierarchy (see~\cite{Pin-2017}). Obtaining similar results for higher levels of the dot-depth hierarchy could be a nice research goal in automata theory. Concerning the question of whether threshold dot-depth one languages with additional positional modular counting correspond exactly to the languages in $\DLang{\StVQDA} \cap \DLang{\StVQuasi(\FSVJsdD)}$, we believe that, with the algebraic characterisation of threshold dot-depth one languages we now have, it should readily be answered affirmatively using the finite category machinery of~\cite{Dartois-Paperman-2014}. \bibliographystyle{abbrv}
\section{Introduction}\label{sec:intro} In the pursuit of the ubiquitous brain-inspired intelligence envisioned for future 6G networks \cite{KLetaif19_6G}, recent years have witnessed the spreading of {\it artificial intelligence} (AI) algorithms from the cloud to the network edge, resulting in an active area called edge intelligence \cite{Survey_FEEl,Debbah19}. The core research issue therein is to allow low-latency and privacy-aware access to rich mobile data for intelligence distillation. To this end, the {\it federated edge learning} (FEEL) framework has been proposed recently, which distributes the AI-model training task over edge devices by using their data locally, thus preserving privacy \cite{Tran19_Infocom,Konecny2016aa_FL,Chen20FL,JParK21Proc,Mo2020aa,JRen20}. Essentially, the FEEL framework corresponds to the implementation of {\it distributed gradient descent} over wireless networks. Such a training process finds optimized AI-model parameters by minimizing a properly designed loss function in an iterative manner. Specifically, at each iteration or communication round, the edge server first broadcasts the global AI-model parameters to the edge devices, such that all edge devices can synchronize their local models; next, the edge devices compute their respective local gradient updates using their local data and then upload them to the edge server for aggregation to update the global model. Although the uploading of high-volume raw data is avoided, the gradient aggregation process in FEEL may still suffer from a communication bottleneck due to the high dimensionality of each gradient update, especially when the number of edge devices sharing the same wireless medium becomes large. 
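As a concrete (toy) illustration of the round structure just described, the following Python sketch runs distributed gradient descent on a simple quadratic loss; the loss function, the data and all names here are our own illustrative choices, not the system model considered in this paper:

```python
def local_gradient(theta, samples):
    # Local gradient of the quadratic loss (1/2) * mean((theta - x)^2)
    # computed on this device's private samples.
    return sum(theta - x for x in samples) / len(samples)

def federated_round(theta, devices, lr=0.1):
    # (1) The server broadcasts the current global model theta.
    # (2) Each device computes its local gradient on its own data.
    grads = [local_gradient(theta, samples) for samples in devices]
    # (3) The server aggregates (here: averages) the uploads and updates theta.
    return theta - lr * sum(grads) / len(grads)

devices = [[1.0, 2.0], [3.0], [4.0, 5.0]]  # private datasets of three devices
theta = 0.0
for _ in range(200):
    theta = federated_round(theta, devices)
# theta converges to the average of the per-device means, i.e. 3.0
```

Only gradients travel between devices and server; the raw samples never leave the devices, which is precisely the privacy argument made above.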
To tackle this issue, one promising solution called {\it over-the-air} FEEL (Air-FEEL) has been proposed (see, e.g., \cite{Zhu2021ComMag}), which exploits the {\it over-the-air computation} (AirComp) technique for ``one-shot" aggregation by allowing multiple devices' concurrent update transmission. In such a way, the communication and computation are integrated in a joint design by exploiting the superposition property of a {\it multiple access channel} (MAC) \cite{nomo_function_Nazer,Gastpar08}. The idea of AirComp was first proposed in \cite{nomo_function_Nazer} for data aggregation in sensor networks, which harnessed the ``interference" via structured codes to help functional computation over a MAC. The subsequent work \cite{Gastpar08} showed that for Gaussian {\it independent and identically distributed} (i.i.d.) data sources, the uncoded (analog) transmission is optimal to minimize the distortion in AirComp. Building on the information-theoretic studies, the analog AirComp implementation has attracted growing research interests (see, e.g., \cite{Abari15,Cao_PowerTWC,WLiu20TWC,Cao_2020aa,GX18IoT,LChen18IoT,XZhai2020Ar,ZD18TSP}). For instance, the synchronization issue in AirComp was addressed in \cite{Abari15} via a shared clock broadcasting from edge server to devices; the optimal power control policies for AirComp over fading channels were derived in \cite{Cao_PowerTWC,WLiu20TWC} to minimize the average computation distortion; and the cooperative interference management for multi-cell AirComp networks was developed in \cite{Cao_2020aa}. Furthermore, {\it multiple-input-multiple-output} (MIMO) spatial multiplexing was exploited in AirComp to enable vector-valued functional computation targeting multi-modal sensing \cite{GX18IoT,LChen18IoT} and to enhance the computational accuracy \cite{XZhai2020Ar,ZD18TSP}. 
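The superposition principle underlying AirComp can be illustrated with a small, highly simplified Python snippet: each device pre-scales its value by the inverse of its real-valued channel coefficient, all devices transmit simultaneously, and the multiple access channel output is the desired sum plus receiver noise. The channel model and all names here are our own illustrative assumptions:

```python
import random

def aircomp_sum(values, channels, noise_std, rng):
    # Device k pre-scales its signal: x_k = s_k / h_k (analog, uncoded).
    tx = [s / h for s, h in zip(values, channels)]
    # The MAC superposes the concurrent transmissions:
    # y = sum_k h_k * x_k + n = sum_k s_k + n.
    noise = rng.gauss(0.0, noise_std) if noise_std > 0 else 0.0
    return sum(h * x for h, x in zip(channels, tx)) + noise

values = [0.5, -1.0, 2.0]
channels = [0.8, 1.3, 0.6]
y = aircomp_sum(values, channels, noise_std=0.0, rng=random.Random(0))
# with zero noise, y recovers the exact sum 1.5 in "one shot"
```

Note that the $1/h_k$ pre-scaling is exactly the channel-inversion alignment whose transmit-power cost explodes when $|h_k|$ is small, an issue that the power control designs discussed below must address.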
Recently, AirComp found its merits in the new context of FEEL, namely the Air-FEEL, to enable the communication-efficient gradient aggregation at each communication round \cite{TSery2020TSP}. Prior works studied the Air-FEEL system from different perspectives such as devices scheduling \cite{GZhu2020TWC,YSun2021Ar,SXia2020Ar,XFan2021Ar04}, beamforming design \cite{ KYang2020TWC,SWang2021Ar}, update compression \cite{Amiri2020TSP,MAmiri2020TWC,XFan2021Ar}, and hyper-parameters (such as learning rates) optimization \cite{HG21IoT,Xu2021Ar,JZhang2021Ar}. For instance, a broadband Air-FEEL solution was proposed in \cite{GZhu2020TWC}, where a set of communication-learning tradeoffs were derived to guide the device scheduling. Along this vein, the authors in \cite{YSun2021Ar} proposed an energy-aware device scheduling strategy to minimize the expected improvement of loss function value at each communication round, while the authors in \cite{SXia2020Ar} proposed a threshold-based device selection scheme to achieve reliable model aggregation. Then, for multi-antenna Air-FEEL systems, a joint design of device scheduling and receive beamforming was presented in \cite{KYang2020TWC}, and a unit-modulus analog receive beamforming design was proposed in \cite{SWang2021Ar}. As for update compression, a source-coding algorithm exploiting gradient sparsification was proposed in \cite{Amiri2020TSP,MAmiri2020TWC}, and a compressive-sensing based gradient aggregation approach was developed in \cite{XFan2021Ar} to further improve the communication efficiency. Lately, Air-FEEL based on digital modulation was proposed in \cite{GZhu2020Ar} and further extended in \cite{RJiang2020Ar}, which features one-bit quantization and modulation at the edge devices and majority-vote based decoding at the edge server. 
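The one-bit digital variant mentioned last can be caricatured in a few lines: each device uploads only the sign of each gradient entry, and the server decodes each entry by majority vote. This is a bare-bones, channel-free rendering of the idea, not the actual scheme of \cite{GZhu2020Ar}:

```python
def sign(x):
    return 1 if x >= 0 else -1

def majority_vote(local_grads):
    # Each device sends sign(g_k[i]) for every coordinate i; the server
    # outputs the sign of the sum of votes, i.e. the majority decision.
    dim = len(local_grads[0])
    return [sign(sum(sign(g[i]) for g in local_grads)) for i in range(dim)]

grads = [[0.3, -0.2], [0.1, -0.5], [-0.4, 0.9]]
decoded = majority_vote(grads)  # [1, -1]: per-coordinate majority signs
```

Each round thus costs one bit per gradient coordinate per device, while the decoding remains a simple aggregate of the concurrent uploads.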
Besides improving the communication efficiency, Air-FEEL has also been exploited for enhancing data privacy, as individual updates are not accessible to the centralized edge server, thus eliminating the risk of potential model inversion attacks \cite{DLiu2020Ar,Koda2020Ar,ASonee2020Ar,MSeif2020Ar}. Generally speaking, the employment of AirComp introduces an essential design tradeoff in Air-FEEL between the enhanced communication efficiency (via the over-the-air aggregation) and the degraded learning performance (due to the aggregation error caused by the channel fading and noise perturbation). Due to such a tradeoff, how to analytically characterize the training performance (e.g., in terms of accuracy and latency) is a challenging task that has not yet been investigated in the literature. Also notice that the aggregation distortion in different communication rounds may have distinct impacts on the learning performance \cite{CShen2021Ar}, thus making the above tradeoff even more complicated. To deal with these issues, it is crucial to properly control the transmission power at different edge devices over different communication rounds. In the literature, there have been prior works on Air-FEEL \cite{GZhu2020TWC,YSun2021Ar,SXia2020Ar,Amiri2020TSP,KYang2020TWC,DLiu2020Ar,Xu2021Ar,XFan2021Ar04} that considered simplified channel inversion (or its variants) to align the channel gains among different devices, which, however, may lead to amplified noise at the receiver and is thus highly suboptimal, especially in deep fading scenarios. Some other works \cite{NZhang2020Ar,JZhang2021Ar} designed the power control with the objective of minimizing the individual aggregation distortion (e.g., the {\it mean squared error} (MSE)) at each communication round, which, however, may not perform well as the aggregation distortion in different communication rounds may have distinct impacts on the learning performance \cite{CShen2021Ar}. 
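To see numerically why the simplified channel inversion criticized above is problematic in deep fades, consider this toy Python computation of the per-device transmit power it requests; the unit scaling factor and the truncation at a maximum power budget are our own illustrative choices, not the policies derived in this paper:

```python
def inversion_power(h_gains, p_max):
    # Plain channel inversion asks device k for power p_k proportional to
    # 1 / |h_k|^2, so a deep-faded device (small |h_k|) needs huge power;
    # a maximum power constraint forces truncation, breaking the alignment.
    requested = [1.0 / (h * h) for h in h_gains]
    return [min(p, p_max) for p in requested]

powers = inversion_power([1.0, 0.5, 0.1], p_max=25.0)
# the deep-faded device (|h| = 0.1) requests power 100 but is capped at 25
```

Once a device's requested power is capped, its channel gain is no longer aligned with the others, which distorts the over-the-air sum; this is one motivation for optimizing power control against the learning objective rather than round-by-round distortion.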
Therefore, how to analytically characterize the learning performance of Air-FEEL in terms of the power control variables, and accordingly optimize the power control decisions to improve the learning performance, still remains an open problem. This motivates the current work. This paper studies an Air-FEEL system consisting of multiple edge devices and one edge server. By considering smooth learning models satisfying the Polyak-{\L}ojasiewicz inequality, we establish a direct learning performance metric, namely the optimality gap, that links the learning performance with the aggregation errors over communication rounds. Accordingly, we propose optimized power control policies for directly minimizing the optimality gap. The main contributions are summarized as follows. \begin{itemize} \item { \bf Convergence analysis:} First, we analyze the optimality gap of the loss function over different communication rounds, which characterizes the impact of gradient aggregation errors (i.e., the bias and MSE of the gradient aggregation estimates) on the convergence performance of the Air-FEEL algorithm. It is revealed that if the aggregation errors are unbiased, the optimality gap diminishes to zero with sufficiently many communication rounds and properly chosen step sizes; while if the aggregation errors are biased, the optimality gap converges to an error floor whose height is determined by the accumulated estimation bias over communication rounds. It is also shown that within a finite number of communication rounds, the aggregation errors at later rounds (with higher weights) contribute more to the optimality gap than those at earlier rounds. \item { \bf Power control optimization:} Next, building on the convergence results, we formulate new power control optimization problems for Air-FEEL under the cases without and with unbiased aggregation constraints, respectively, with the objective of minimizing the optimality gap, subject to a set of average and maximum power constraints at individual edge devices. 
Fortunately, both power control problems can be transformed into convex forms, which can thus be optimally solved by the Lagrangian duality method. The optimized power control solutions exhibit a regularized channel-inversion structure, where the regularization term at each edge device is related to all other devices' average power budgets for the case without unbiased aggregation constraints, and is only related to the device's own average power budget for the case with unbiased aggregation constraints. \item {\bf Performance evaluation:} Finally, we conduct extensive simulations to evaluate the performance of the optimized power control for Air-FEEL by considering ridge regression on a synthetic dataset, and handwritten digit recognition on the MNIST dataset with a {\it convolutional neural network} (CNN). It is shown that the proposed power control policies in the cases without and with unbiased aggregation constraints achieve significantly faster convergence rates (or lower optimality gaps) than the benchmarking fixed power transmission and conventional MSE minimization schemes, as the proposed policies can better handle the aggregation errors over rounds based on their contributions to the optimality gap. It is also shown that the case without unbiased aggregation constraints can achieve a lower optimality gap than that with unbiased aggregation constraints when some mild conditions are met. This validates our analysis that Air-FEEL can always converge to the optimal point with unbiased gradient aggregation. \end{itemize} {\it Notations:} Bold lowercase letters refer to column vectors. $\mathbb{E}(\cdot)$ denotes the expectation operation; the superscript $T$ represents the transpose operation; $\nabla$ is the gradient operator, and $(x)^+\triangleq\max\{0,x\}$. For a set $\mathcal{A}$, $|\mathcal{A}|$ denotes its cardinality. $\|\bm a\|$ denotes the Euclidean norm of vector $\bm a$. $\bf I$ denotes the identity matrix. 
For ease of reference, the main notations used in this paper are listed in Table \ref{t}. \begin{table}[h] \caption{List of Main Notations}\label{t} \begin{center} \begin{tabular}{|m{1.3cm}<{\centering}| p{6.6cm}|} \hline Symbol & Description\\ \hline $\mathcal{K}$ & Set of edge devices with $\mathcal{K}\triangleq \{1,..., K\}$ \\ \hline $\mathcal{N}$ & Set of communication rounds with $ \mathcal{N}\triangleq\{ 1,\cdots,N\}$\\ \hline ${\bf w}\in\mathbb{R}^q$ & Parameter vector of learning model with size $q$ \\ \hline ${\bf w}^{\star}$ & Optimal parameter vector \\ \hline ${\mathcal D}_k$ & Local dataset at edge device $k\in\mathcal{K}$ with cardinality $D_k$\\ \hline $({\bf x}_i,\tau_i)$& The $i$-th sample ${\bf x}_i$ in the dataset with ground-truth label $\tau_i$\\ \hline $f({\bf w},{\bf x}_i,\tau_i)$ & Sample-wise loss function quantifying the prediction error of the learning model $\bf w$ on ${\bf x}_i$ in terms of $\tau_i$\\ \hline $F_k({\bf w})$ & Local loss function of the learning model vector $\bf w$ on ${\mathcal D}_k$ at edge device $k\in\mathcal{K}$ \\ \hline $F({\bf w})$ & Global loss function at the parameter model $\bf w$ \\ \hline ${\bf g}_{k}^{(n)}$ & Local gradient estimate at edge device $k$ at communication round $n$\\ \hline $\bar{\bf g}^{(n)} $ & Global gradient estimate at communication round $n$\\ \hline $\hat{\bf g}^{(n)}$ & Global gradient received at the edge server through over-the-air aggregation at communication round $n$ \\ \hline $\eta^{(n)}$ & Learning rate at communication round $n$ \\ \hline ${\bm \varepsilon}^{(n)}$& Aggregation error at communication round $n$ \\ \hline $\nabla F({\bf w})$ & Ground-truth gradient of the loss function evaluated at point ${\bf w} \in\mathbb{R}^q$\\ \hline $F^{\star}$ & Optimal loss function value that is equal to $F({\bf w}^{\star})$\\ \hline $\hat h_k^{(n)}$ & Complex channel coefficient from edge device $ k\in{\mathcal K}$ to the edge server at communication round $n$ \\ \hline ${\bf z}^{(n)}\in\mathbb{R}^q$ & AWGN with ${\bf z}^{(n)}\sim\mathcal{CN}(0,\sigma_z^2\bf I)$\\ \hline $p^{(n)}_{k}$ & Power scaling factor at edge device $k\in\mathcal{K}$ at communication round $n$ \\ \hline $ {\bf y}^{(n)}$ & Received signal at the edge server at communication round $n$ \\ \hline \end{tabular} \end{center} \end{table} \section{System Model}\label{sec:system} \begin{figure} \centering \setlength{\abovecaptionskip}{-4mm} \setlength{\belowcaptionskip}{-4mm} \includegraphics[width=3.7in]{Sys_model.eps} \caption{Illustration of over-the-air federated edge learning (Air-FEEL). } \label{fig:model} \end{figure} We consider an Air-FEEL system consisting of an edge server and $K$ edge devices, as shown in Fig.~\ref{fig:model}. With the coordination of the edge server, the edge devices cooperatively train a shared machine learning model via over-the-air gradient aggregation, as elaborated in the sequel. \subsection{Learning Model} We assume that the learning model is represented by the parameter vector ${\bf w}\in\mathbb{R}^q$ with ${\bf w}=[w_1,\cdots,w_q]^T$ and $q$ denoting the learning model size. Let ${\mathcal D}_k$ denote the local dataset at edge device $k$, in which the $i$-th sample and its ground-truth label are denoted by ${\bf x}_i$ and $\tau_i$, respectively. Define $f({\bf w},{\bf x}_i,\tau_i)$ as the sample-wise loss function quantifying the prediction error of the learning model $\bf w$ on sample ${\bf x}_i$ {\it with respect to} (w.r.t.) its ground-truth label $\tau_i$. Then the local loss function of the learning model vector $\bf w$ on ${\mathcal D}_k$ is \begin{align}\label{LocalLossFunction} F_k({\bf w})=\frac{1}{|{\mathcal D}_k|} \sum \limits_{({\bf x}_i,\tau_i)\in{\mathcal D}_k} f({\bf w},{\bf x}_i,\tau_i). 
\end{align} For notational convenience, we denote $f({\bf w},{\bf x}_i,\tau_i)$ as $f_i({\bf w})$ and assume that the sizes of the local datasets at different edge devices are uniform, i.e., $D \triangleq D_k=|{\mathcal D}_k|,\forall k\in\mathcal K$. Then, the global loss function on all the distributed datasets ${\mathcal D}_{\rm tot}=\cup_{k\in\mathcal K} {\mathcal D}_k$ evaluated at parameter vector $\bf w$ is given by \begin{align}\label{GlobalLossFunction} F({\bf w})=\frac{1}{ D_{\rm tot}}\sum\limits_{k\in\mathcal K} D_k F_k({\bf w})=\frac{1}{ K}\sum\limits_{k\in\mathcal K} F_k({\bf w}), \end{align} where $D_{\rm tot}=|{\mathcal D}_{\rm tot} |=KD$. The objective of the training process is to find a desired parameter vector $\bf w$ minimizing the global loss function $F({\bf w})$ in \eqref{GlobalLossFunction}, i.e., \begin{align}\label{OptimalParameter} {\bf w}^{\star}=\arg \min_ {\bf w} F({\bf w}). \end{align} Instead of uploading all the local data to the edge server for centralized training, we consider Air-FEEL, in which the learning process in \eqref{OptimalParameter} is implemented iteratively in a distributed manner using the {\it federated stochastic gradient descent} (FedSGD) algorithm\footnote{Besides FedSGD, {\it federated averaging} (FedAvg) is an alternative method for Air-FEEL, which can be implemented with multiple local updates at the edge devices together with model averaging at the edge server. This paper considers FedSGD (instead of FedAvg), mainly for the purpose of facilitating the convergence analysis and gaining insightful results. Furthermore, FedSGD enjoys the advantage of being more robust to non-i.i.d. data \cite{Hou2021_Ar}. Nevertheless, our design and analysis principles in this paper can be extended to the case with FedAvg, which is left for future work.} \cite{FedAvg} as detailed in the following. 
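Since all local datasets share the same size $D$, the plain average of the local losses in \eqref{GlobalLossFunction} coincides with the loss evaluated on the pooled dataset ${\mathcal D}_{\rm tot}$. A minimal numerical sanity check of this identity, using a synthetic squared loss (the data, loss function, and dimensions below are illustrative assumptions, not taken from this paper):

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, q = 4, 50, 3                     # devices, samples per device, model size

# Synthetic local datasets and a linear model with an (assumed) squared loss
# f(w, x_i, tau_i) = (w^T x_i - tau_i)^2.
X = [rng.normal(size=(D, q)) for _ in range(K)]
tau = [rng.normal(size=D) for _ in range(K)]
w = rng.normal(size=q)

def local_loss(Xk, tk, w):             # F_k(w): average loss over D_k
    return np.mean((Xk @ w - tk) ** 2)

# Global loss as the plain average of the K local losses
F_avg = np.mean([local_loss(Xk, tk, w) for Xk, tk in zip(X, tau)])

# Global loss computed directly on the pooled dataset D_tot
F_pool = local_loss(np.vstack(X), np.concatenate(tau), w)

print(abs(F_avg - F_pool))             # identical up to floating-point error
```

With unequal $D_k$'s, the plain average must be replaced by the $D_k/D_{\rm tot}$-weighted average, as noted in the footnote above.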
Consider a particular iteration or communication round $n$, with the learning model before updating being denoted by ${\bf w} ^{(n)}$. In this round, each edge device $k\in \mathcal K$ computes the local gradient estimate of the loss function, denoted by ${\bf g}_{k}^{(n)}$, based on a randomly sampled mini-batch from the local dataset. We denote the set of mini-batch data used by edge device $k$ at round $n$ as $ \tilde{\mathcal D}_k^{(n)}$ and its size as $m_b = | \tilde{\mathcal D}_k^{(n)}|, \forall k\in \mathcal K$. Then we have \begin{align}\label{sys_LocalGradient} {\bf g}_{k}^{(n)}= \frac{1}{m_b} \sum \limits_{({\bf x}_i,\tau_i)\in\tilde{\mathcal D}_k^{(n)}} \nabla f_i\left({\bf w}^{(n)}\right). \end{align} Next, the edge devices upload their local gradients to the edge server for aggregation. If the aggregation is error-free, then the global gradient estimate can be obtained as an average of the local gradient estimates from all the edge devices, i.e.,\footnote{ Although we consider the same data size $D$ at different edge devices, our proposed Air-FEEL can be easily extended to the case when they have different data sizes, i.e., $D_k$'s are different. In this case, we only need to revise the global gradient estimate in \eqref{sys_GlobalGradient} as a weighted average of the local ones, i.e., $\bar{\bf g}^{(n)}=\sum\limits_{k\in\mathcal K}\frac{D_k}{D_{\rm tot}}{\bf g}_{k}^{(n)}$. Via AirComp, the desired weighted aggregation of the local gradient estimates can be easily attained by adding an additional pre-processing step $\psi_k(\cdot)$ on the transmitted signal $s_k$ at each edge device $k$, with $\psi_k(s_k)=\frac{KD_k}{D_{\rm tot}}s_k$, such that the over-the-air superposition (after the $1/K$ scaling at the edge server) directly yields the desired weighted aggregation.} \begin{align}\label{sys_GlobalGradient} \bar{\bf g}^{(n)}= \frac{1}{K}\sum\limits_{k\in\mathcal K} {\bf g}_{k}^{(n)}. 
\end{align} Then, the edge server broadcasts the obtained global gradient estimate $\bar{\bf g}^{(n)} $ to the edge devices, based on which the edge devices synchronously update their own learning models via \begin{align}\label{sys_ModelUpdate} {\bf w}^{(n+1)}={\bf w}^{(n)}-\eta^{(n)}\cdot \bar{\bf g}^{(n)}, \end{align} where $\eta^{(n)}$ is the learning rate at communication round $n$. The above procedure continues until the convergence criterion is met or the maximum number of communication rounds is reached. Notice that this paper considers the over-the-air aggregation approach to achieve fast gradient aggregation, based on which the received aggregated gradient at the edge server in \eqref{sys_GlobalGradient} may be erroneous due to perturbations caused by channel fading and noise. This issue will be elaborated in Section~\ref{CommunicationModel}. \subsection{Basic Assumptions on Learning Model} To facilitate the convergence analysis, we make several assumptions on the loss functions and gradient estimates, which are commonly made in the literature \cite{DLiu2020Ar,JZhang2021Ar,SXia2020Ar,SWang2021Ar,Michael2012,Li2019_Noniid}. \begin{assumption}[Smoothness]\label{Assump_Smooth}\emph{ Let $\nabla F({\bf w})$ denote the gradient of the loss function evaluated at point ${\bf w} \in\mathbb{R}^q$. Then there exists a non-negative constant vector ${\bf L}\in\mathbb{R}^q$ with ${\bf L}=[L_1,\cdots,L_q]^T$, such that \begin{align*} & F({\bf w})\!-\!\left[ F({\bf w}^{\prime})\! +\! \nabla F({\bf w})^T (\!{\bf w}\!\!-\! {\bf w}^{\prime})\right]\\ &~~~~~~~~~~~~~\le \frac{1}{2}\sum_{i=1}^{q} L_i({{ w}_i-{w}^{\prime}_i})^2, \forall {\bf w}, {\bf w}^{\prime} \in\mathbb{R}^q. \end{align*}} \end{assumption} Assumption \ref{Assump_Smooth} guarantees that the gradient of the loss function does not change arbitrarily quickly w.r.t. the parameter vector. 
Note that such an assumption is essential for the convergence analysis of gradient descent methods, as it provides a guaranteed amount of decrease of the loss function at each update step. \begin{assumption}[Polyak-{\L}ojasiewicz inequality]\label{Assump_PL}\emph{ Let $F^{\star}$ denote the optimal loss function value to problem \eqref{OptimalParameter}. There exists a constant $\delta\ge 0$ such that the global loss function $F({\bf w})$ satisfies the following Polyak-{\L}ojasiewicz condition: \begin{align}\label{Ineq_PL} \| \nabla F({\bf w}) \|^2 \ge 2\delta(F({\bf w})-F^{\star}). \end{align} } \end{assumption} Notice that Assumption \ref{Assump_PL} is more general than the standard assumption of strong convexity \cite{Karimi2016}. The inequality in \eqref{Ineq_PL} simply requires that the gradient grows faster than a quadratic function as one moves away from the optimal function value, and it implies that every stationary point is a global minimum. Typical loss functions satisfying Assumptions \ref{Assump_Smooth} and \ref{Assump_PL} include those of logistic regression, linear regression, and least squares. 
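For instance, for the least-squares loss $F({\bf w})=\frac{1}{2}\|{\bf A}{\bf w}-{\bf b}\|^2$ with full-column-rank ${\bf A}$, the condition in \eqref{Ineq_PL} holds with $\delta=\lambda_{\min}({\bf A}^T{\bf A})$. A brief numerical check with synthetic data (the dimensions and constants below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
m, q = 30, 4
A = rng.normal(size=(m, q))            # full column rank with probability 1
b = rng.normal(size=m)

F = lambda w: 0.5 * np.sum((A @ w - b) ** 2)
grad = lambda w: A.T @ (A @ w - b)

w_star = np.linalg.lstsq(A, b, rcond=None)[0]      # minimizer of F
F_star = F(w_star)
delta = np.linalg.eigvalsh(A.T @ A).min()          # PL constant lambda_min(A^T A)

# Verify ||grad F(w)||^2 >= 2 * delta * (F(w) - F_star) at random points
worst = min(
    np.sum(grad(w) ** 2) - 2 * delta * (F(w) - F_star)
    for w in rng.normal(size=(1000, q), scale=5.0)
)
print(worst >= -1e-8)                  # the PL inequality holds everywhere tested
```

The same check with a rank-deficient ${\bf A}$ (a convex but not strongly convex loss) still satisfies PL with $\delta$ equal to the smallest nonzero eigenvalue, which is what makes Assumption \ref{Assump_PL} weaker than strong convexity.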
\begin{assumption}[Variance bound]\label{Assum_VarianceBound}\emph{ The local gradient estimates $\{{\bf g}_k\}$, defined in \eqref{sys_LocalGradient}, where the index $n$ is omitted for simplicity, are assumed to be independent and unbiased estimates of the batch gradient $\nabla F({\bf w})$ with coordinate-wise bounded variance, i.e., \begin{align} &\mathbb{E}[{\bf g}_k]=\nabla F({\bf w}), \forall k\in\mathcal K,\\ &\mathbb{E}[ ({g}_{k,i}-[\nabla F({\bf w})]_i )^2]\le \frac{\sigma_i^2}{m_b}, \forall k\in\mathcal K, \forall i, \end{align} where ${g}_{k,i}$ and $[\nabla F({\bf w})]_i$ denote the $i$-th elements of ${\bf g}_k$ and $\nabla F({\bf w}) $, respectively, ${\bm\sigma}=[\sigma_1,\cdots,\sigma_q]$ is a vector of non-negative constants, and the denominator $m_b$ accounts for the fact that the local gradient estimate is computed over a mini-batch of data with size $m_b$.} \end{assumption} Notice that the following convergence analysis and power control optimization in Sections \ref{Sec_Con} and \ref{Sec_Power} are based on Assumptions \ref{Assump_Smooth}-\ref{Assum_VarianceBound}, similarly as in prior works \cite{JRen20,DLiu2020Ar,JZhang2021Ar,SXia2020Ar,SWang2021Ar}. Nevertheless, as shown in the simulations in Section~\ref{Sec_CNN}, the proposed power control designs still work well for CNNs when such assumptions are relaxed. \subsection{Over-the-Air Aggregation for FEEL}\label{CommunicationModel} The distributed training latency for FEEL is dominated by the update aggregation process, especially when the number of devices becomes large. Therefore, we focus on the aggregation process over a {\it multiple-access channel} (MAC). To accelerate the learning, we employ the AirComp technique for fast gradient aggregation by exploiting the superposition property of the MAC. To implement AirComp, during the gradient-uploading phase, all devices transmit simultaneously over the same time-frequency block with proper phase compensation. 
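The superposition principle underlying AirComp can be sketched as follows: when all devices transmit their pre-scaled gradient signals simultaneously, the MAC itself computes the sum, so one channel use per gradient dimension suffices regardless of $K$. A minimal noiseless illustration assuming ideal gain alignment $h_k\sqrt{p_k}=1$ (the channel values below are illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(3)
K, q = 8, 5

g = rng.normal(size=(K, q))            # local gradient estimates g_k
h = np.abs(rng.normal(size=K)) + 0.2   # post-compensation channel gains
p = 1.0 / h**2                         # ideal inversion: h_k * sqrt(p_k) = 1

# All devices transmit at once; the MAC adds the analog waveforms,
# so the server observes the superposition directly (noise omitted here).
y = np.sum(h[:, None] * np.sqrt(p)[:, None] * g, axis=0)

# Scaling by 1/K recovers the desired average gradient exactly.
print(np.allclose(y / K, g.mean(axis=0)))   # -> True
```

With receiver noise and imperfect gain alignment, the recovered average is perturbed, which is precisely the aggregation error modeled in the remainder of this section.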
For ease of exposition, it is assumed that the channel coefficients remain unchanged within each communication round, but may change over different rounds. It is also assumed that the edge devices perfectly know their own {\it channel state information} (CSI), so that they can compensate for the channel phase differences. Let $\hat h_k^{(n)}$ denote the complex channel coefficient from edge device $k$ to the edge server at communication round $n$, and $h_k^{(n)}$ denote its post-compensation real-valued channel coefficient, i.e., $h_k^{(n)}=|\hat h_k^{(n)}|$. Then, the received aggregated signal via AirComp (after phase compensation) is given by \begin{align}\label{sys_ReceivedSignal} {\bf y}^{(n)}=\sum\limits_{k\in\mathcal K}h_k^{(n)}\sqrt{p_k^{(n)}}{\bf g}_{k}^{(n)}+{\bf z}^{(n)}, \end{align} in which $p_k^{(n)}$ denotes the power scaling factor at edge device $k$, and ${\bf z}^{(n)}\in\mathbb{R}^q$ denotes the {\it additive white Gaussian noise} (AWGN) with ${\bf z}^{(n)}\sim\mathcal{CN}(0,\sigma_z^2\bf I)$ and $\sigma_z^2$ being the noise power. Based on \eqref{sys_ReceivedSignal}, the global gradient estimate at the edge server is given by\footnote{Unlike conventional AirComp, which uses an additional scaling factor at the receiver, in \eqref{sys_ComGlobalGradient} we directly use ${\bf y}^{(n)}/K$ as the estimate of the global gradient for Air-FEEL. This is due to the fact that the learning rate $\eta^{(n)} $ in \eqref{sys_ModelUpdate} can play the equivalent role of the scaling factor, and thus dedicated scaling factors are not needed as in conventional AirComp. } \begin{align}\label{sys_ComGlobalGradient} \hat{\bf g}^{(n)}=\frac{{\bf y}^{(n)}}{K}. 
\end{align} It thus follows from \eqref{sys_ReceivedSignal} and \eqref{sys_ComGlobalGradient} that the aggregation error caused by the over-the-air aggregation in the global gradient estimation is given by \begin{align}\label{sys_Err} {\bm \varepsilon}^{(n)}&=\hat{\bf g}^{(n)}-\bar{\bf g}^{(n)} \notag\\ &=\underbrace{\frac{ 1}{K}\sum\limits_{k\in\mathcal K}\left(h_k^{(n)}\sqrt{p_k^{(n)}}-1\right){\bf g}_{k}^{(n)} }_{\rm Signal~misalignment ~error}+\underbrace{\frac{ {\bf z}^{(n)}}{K}}_{\rm Noise}, \end{align} which consists of two components representing the signal misalignment error and the noise-induced error, respectively. The devices can adaptively adjust their transmit powers by controlling $\{p_k^{(n)}\}$ to reduce the aggregation errors for enhancing the learning performance. We consider that each edge device $k\in\mathcal K$ is subject to a maximum power budget $\hat{P}^{\rm max}_k$ for each communication round, and an average power budget denoted by $\hat{P}^{\rm ave}_k$ over the whole training period. Therefore, we have \begin{align} \frac{1}{q}\mathbb{E}\left(\left\|\sqrt{p_{k}^{(n)}}{\bf g}_{k}^{(n)}\right\|^2\right) \leq \hat{P}^{\rm max}_k,~\forall k\in{\mathcal K}, ~\forall n\in\mathcal{N},\label{sys_bar_P_max1} \end{align} where $q$ is the size of the gradient vector ${\bf g}_k^{(n)}$, as well as \begin{align} \frac{1}{Nq}\sum \limits_{n\in\mathcal{N}} \mathbb{E}\left(\left\|\sqrt{p_{k}^{(n)}}{\bf g}_{k}^{(n)}\right\|^2\right) \leq \hat{P}^{\rm ave}_k,~\forall k\in{\mathcal K},\label{sys_bar_P_ave1} \end{align} where $\mathcal{N}\triangleq\{1,\cdots,N\}$ with $N$ denoting the total number of communication rounds for model training. In the following Section \ref{Sec_Con}, we will establish a direct learning performance metric, namely the optimality gap, that links the learning performance with the aggregation errors over communication rounds. 
Based on the analysis, in Section \ref{Sec_Power} we will propose to minimize the optimality gap via optimizing the power control subject to a set of individual maximum and average power constraints. \section{Convergence Analysis}\label{Sec_Con} In this section, we present a convergence analysis framework for FEEL in the presence of aggregation errors by using the optimality gap as the performance metric, which sheds light on how imperfect gradient updates affect the convergence of FEEL in general. As will be shown shortly, depending on whether the aggregated gradient estimate is unbiased or not, the FEEL algorithm exhibits different convergence behaviors. \subsection{Optimality Gap versus Aggregation Errors}\label{Sec_Error} Suppose that at each communication round $n$, $F\left({\bf w}^{(n)}\right)$ is the value of the loss function at the parameter vector ${\bf w}^{(n)}$. Thus, with the lossy gradient aggregation in \eqref{sys_Err}, the update of the learning model at communication round $n$ in \eqref{sys_ModelUpdate} is represented as \begin{align}\label{Error_ModelUpdate} {\bf w}^{(n+1)}={\bf w}^{(n)}-\eta^{(n)}\cdot \left(\bar{\bf g}^{(n)}+{\bm \varepsilon}^{(n)}\right), \end{align} where ${\bm \varepsilon}^{(n)}$ represents the induced random aggregation error (including the signal misalignment error and the noise-induced error) at each communication round $n$. Let $\mathbb{E}[{\bm \varepsilon}^{(n)}]$ and $\mathbb{E}[\|{\bm \varepsilon}^{(n)}\|^2]$ denote the bias and MSE of the global gradient estimate at each communication round $n$, respectively, where the expectation is taken over the stochastic sample selection in the local gradient estimation over a mini-batch dataset, as well as the receiver noise due to AirComp. Depending on the value of $\mathbb{E}[{\bm \varepsilon}^{(n)}]$, we define two cases for the gradient aggregation. 
\begin{itemize} \item Case I without unbiased aggregation constraints: The aggregation can either be biased (i.e., $\mathbb{E}[{\bm \varepsilon}^{(n)}]\neq 0$) or unbiased ($\mathbb{E}[{\bm \varepsilon}^{(n)}]= 0$). In this case, no additional constraints on the aggregation bias are imposed in the power control design. \item Case II with unbiased aggregation constraints: The aggregation is unbiased, i.e., the constraints $\mathbb{E}[{\bm \varepsilon}^{(n)}]= 0, \forall n\in\mathcal{N}$, are introduced in the aggregation designs (e.g., transmission power control). \end{itemize} Define the optimality gap after $N$ communication rounds as $F\left({\bf w}^{(N+1)}\right)-F^{\star}$ and $L\triangleq \|\bf L\|_{\infty}$. Then, by considering a properly chosen fixed learning rate, we establish the following theorem. \begin{theorem}[Impact of aggregation error on convergence with a fixed learning rate]\label{Theo_OG_FixedRate} \emph{ Under Assumption \ref{Assump_Smooth}, suppose that the FEEL algorithm is implemented with a fixed learning rate $\eta\triangleq \eta^{(n)}, \forall n\in\cal N$, with $ 0\leq\eta\leq\frac{2}{2+L}\leq \frac{1}{\delta}$ and fixed mini-batch size $m_b=N$ \cite{signSGD}. Then, the expected optimality gap satisfies the inequality \eqref{Gap_Iterate_Fix}, where $C=1-\delta\eta$ with $0<C<1$. \begin{figure*} \begin{align}\label{Gap_Iterate_Fix} &\mathbb{E}\left[F\left({\bf w}^{(N+1)}\right)\right]-F^{\star}\leq \!\underbrace{\sum \limits_{n\in\mathcal{N}}\frac{C^{N-n}}{2} \| \underbrace{\mathbb{E}\left[{\bm \varepsilon}^{(n)}\right]}_{\text{Bias}}\|^2}_{\text{Error floor}} + \notag\\ &~~~~~~~~~~~~~~~~\underbrace{C^N\!\!\underbrace{\left(\mathbb{E}\left[F\left({\bf w}^{(1)}\right)\right]-F^{\star}\right)}_{\text{Initial optimality gap}}\!+\! 
\!\sum \limits_{n\in\mathcal{N}}\!\frac{C^{N-n}}{2}\left(\underbrace{\frac{ \eta^2 L\left\|{\bm\sigma}\right\|_2^2}{2\delta N K^2}}_{\text{Gradient variance}}\!\!+ \eta^2L^2 \| \underbrace{\mathbb{E}\left[{\bm \varepsilon}^{(n)}\right]}_{\text{Bias}}\|^2\!+\!\eta^2L \underbrace{\mathbb{E}\left[\left\|{\bm \varepsilon}^{(n)}\right\|^2\right]}_{\text{MSE}}\!\right)}_{\text{The gap to the error floor}~ \Delta(N)}. \end{align} \end{figure*} } \end{theorem} \begin{IEEEproof} See Appendix~\ref{Theo_OG_FixedRate_Proof}. \end{IEEEproof} \begin{remark}\label{Remark_ContractionRegion}\emph{ From Theorem \ref{Theo_OG_FixedRate}, we have the following observations. \begin{itemize} \item {\bf The FEEL algorithm converges eventually as $N \rightarrow \infty$, with the optimality gap possibly landing on an error floor instead of diminishing to zero. } It is observed from \eqref{Gap_Iterate_Fix} that the upper bound of the optimality gap can be decomposed into two components, i.e., the error floor $\sum \limits_{n\in\mathcal{N}}\frac{C^{N-n}}{2}\left\|\mathbb{E}\left[{\bm \varepsilon}^{(n)}\right]\right\|^2$ that cannot vanish as $N$ grows, and the gap to the error floor, denoted by $\Delta(N)$, which can approach zero as $N$ increases. To see this, $\Delta(N)$ is observed to contain four terms related to the initial optimality gap ($\mathbb{E}\left[F\left({\bf w}^{(1)}\right)\right]-F^{\star}$), the gradient variance $\frac{ \eta^2 L\left\|{\bm\sigma}\right\|_2^2}{2\delta N K^2} $, as well as the bias $\mathbb{E}\left[{\bm \varepsilon}^{(n)}\right]$ and MSE $\mathbb{E}\left[\left\|{\bm \varepsilon}^{(n)}\right\|^2\right]$ of the aggregation errors, respectively. All four terms diminish as $N$ goes to infinity, or become negligible under a sufficiently small learning rate. On the other hand, the error floor is determined by the accumulated bias $\left\|\mathbb{E}\left[{\bm \varepsilon}^{(n)}\right]\right\|^2$ over rounds. 
Hence, as $N$ increases, the error floor would approach a constant while the gap to it, $\Delta(N)$, would vanish. Such effects are illustrated in Fig.~\ref{fig:Contraction}. \item {\bf The FEEL algorithm shows different convergence behaviors depending on whether the gradient aggregation is biased or not. } For the case with unbiased aggregation constraints, i.e., $\mathbb{E}\left[{\bm \varepsilon}^{(n)}\right]=0, \forall n\in\mathcal{N}$, as $N$ becomes sufficiently large, the model under training can converge exactly to the optimal point with minimum training loss, with zero error floor in the training process. By contrast, for the case without unbiased aggregation constraints, the model under training may only converge to a neighborhood of the optimal point (if the aggregation is biased). However, the case with unbiased aggregation constraints may converge more slowly than its counterpart without unbiased aggregation constraints, as the enforcement of unbiasedness generally comes at a cost of elevated MSE, which translates to a larger gap to the error floor $\Delta(N)$. This observation is also validated via the experiments in Section \ref{sec_simu}. \item {\bf Later rounds are more sensitive to aggregation errors.} The bias $\mathbb{E}\left[{\bm \varepsilon}^{(n)}\right]$ and MSE $\mathbb{E}\left[\left\|{\bm \varepsilon}^{(n)}\right\|^2\right]$ at the later communication rounds (with large $n$) contribute more to the optimality gap than those at the initial rounds (with small $n$), as the effect of the aggregation error introduced at early stages (small $n$) is discounted by $C^{N-n}$ on the right-hand side of \eqref{Gap_Iterate_Fix}. \end{itemize}} \end{remark} \begin{figure} \centering \includegraphics[width=3.5in]{Conrtraction.eps} \caption{Illustration of the contraction region on the learning performance. 
} \label{fig:Contraction} \end{figure} Theorem \ref{Theo_OG_FixedRate} can be extended to the case with diminishing learning rates, as shown in the following corollary. \begin{corollary}[Impact of aggregation error on convergence with diminishing learning rates]\label{Theo_OG_DRate} \emph{ Under Assumption \ref{Assump_Smooth}, suppose that the FEEL algorithm is implemented with fixed mini-batch size $m_b=N$, and diminishing learning rates $\eta^{(n)}=\frac{u}{n+v}, \forall n\in\mathcal{N}$ \cite{Bottou2018}, with $v>0$ and $u>1/\delta$, such that $\eta^{(1)} \leq\frac{2}{2+L}$. Then, the expected optimality gap satisfies the following inequality: \begin{align}\label{Gap_Iterate} &\mathbb{E}\left[F\left({\bf w}^{(N+1)}\right)\right]\!-\!F^{\star}\leq \left(\!\prod\limits_{n\in\mathcal{N} }C^{(n)}\!\right)\!\left(\mathbb{E}\left[F\left({\bf w}^{(1)}\right)\right]\!-\!F^{\star}\right)\notag\\ &+\sum_{n=1}^{N}J^{(n)}\left(\frac{(\eta^{(n)})^2L\left\|{\bm\sigma}\right\|_2^2}{2m_bK^2}+(\eta^{(n)})^2L^2\left\|\mathbb{E}\left[{\bm \varepsilon}^{(n)}\right]\right\|^2\right)\notag\\ &\!+\!\sum_{n=1}^{N}J^{(n)}\!\left\|\mathbb{E}\left[{\bm \varepsilon}^{(n)}\right]\right\|^2\!\!+\!\sum_{n=1}^{N}\!J^{(n)}(\eta^{(n)})^2L \mathbb{E}\!\left[\left\|{\bm \varepsilon}^{(n)}\right\|^2\right], \end{align} where $C^{(n)}=1-\delta\eta^{(n)}$ and $ J^{(n)}\triangleq\frac{\prod_{i=n}^{N}C^{(i)}}{2C^{(n)}}, \forall n\in\mathcal{N}$.} \end{corollary} \begin{IEEEproof} The proof is similar to that for Theorem \ref{Theo_OG_FixedRate}, and thus is omitted here for brevity. \end{IEEEproof} Note that similar observations can be made from Corollary \ref{Theo_OG_DRate} as those in Remark 1. An additional tradeoff lies in designing the learning rate $\eta^{(n)}$. It is observed that when the learning rate is small, the error floor would be large, since $C^{(n)}$ (and hence $J^{(n)}$) becomes larger, whereas the gap to the error floor becomes smaller. 
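To make the contraction behavior concrete, the following toy iteration of the fixed-learning-rate bound in Theorem \ref{Theo_OG_FixedRate} (with illustrative constants for $L$, $\delta$, the per-round bias, and the MSE, none taken from this paper's experiments) shows that shrinking the step size drives the residual gap toward zero only when the aggregation is unbiased:

```python
import numpy as np

# Iterate the upper-bound recursion behind Theorem 1:
#   gap <- C * gap + 0.5 * ((1 + eta^2 L^2) * bias2 + eta^2 * L * mse),
# with contraction factor C = 1 - delta * eta (all constants illustrative).
def final_gap(eta, bias2, mse, L=1.0, delta=0.5, N=5000, gap0=10.0):
    C = 1.0 - delta * eta
    gap = gap0
    for _ in range(N):
        gap = C * gap + 0.5 * ((1 + eta**2 * L**2) * bias2 + eta**2 * L * mse)
    return gap

mse = 4.0
# Unbiased aggregation (bias2 = 0): the steady-state gap scales with eta,
# so it vanishes as the step size is reduced.
print(final_gap(0.1, 0.0, mse), final_gap(0.01, 0.0, mse))
# Biased aggregation (bias2 = 0.5): reducing eta does NOT remove the floor;
# the accumulated bias term behaves like bias2 / (2 * delta * eta).
print(final_gap(0.1, 0.5, mse), final_gap(0.01, 0.5, mse))
```

The biased case illustrates the learning-rate tradeoff noted above: a smaller $\eta$ shrinks the MSE-driven part of the gap but enlarges the bias-driven error floor, since the contraction factor $C$ moves closer to one.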
\subsection{Optimality Gap versus Transmission Power Control} In this subsection, we obtain the optimality gap w.r.t. the transmission power control variables based on the results in Section~\ref{Sec_Error}, in order to facilitate the power control design in the sequel. In particular, we consider the Air-FEEL in the cases without and with unbiased aggregation constraints $\mathbb{E}\left[{\bm \varepsilon}^{(n)}\right]=0, \forall n\in\mathcal{N}$. Before proceeding, we introduce the following assumption on the sample-wise gradient bound. \begin{assumption}[Bounded sample-wise gradient]\label{Assump_SamplewiseBound}\emph{ At any communication round $n$, the sample-wise gradient $\nabla f\left({\bf w}^{(n)},{\bf x},y\right) $ for any training sample $( {\bf x},y)$ is upper bounded by a given constant $G^{(n)}$, i.e., \begin{align} \left\| \nabla f\left({\bf w}^{(n)},{\bf x},y\right) \right\| \leq G^{(n)}, ~\forall n\in\mathcal{N}. \end{align}} \end{assumption} Based on Assumption \ref{Assump_SamplewiseBound}, we have $\left\|\nabla F({\bf w}^{(n)}) \right\|\leq \max_{ ( {\bf x},y)\in\mathcal{D}}\left\| \nabla f\left({\bf w}^{(n)},{\bf x},y\right) \right\| \leq G^{(n)}$. Together with Assumption~\ref{Assum_VarianceBound}, it thus holds that \begin{align}\label{Gradient_square} \mathbb{E}\left[\left\|{\bf g}_{k}^{(n)}\right\|^2\right] &\leq \left\|\nabla F({\bf w}^{(n)}) \right\|^2+\frac{\left\|{\bm\sigma}\right\|_2^2}{m_b} \notag\\ &\leq \hat{G}^{(n)}\triangleq \left(G^{(n)}\right)^2+\frac{\left\|{\bm\sigma}\right\|_2^2}{m_b} . \end{align} \subsubsection{Convergence Analysis for Air-FEEL in Case I} In this part, we formally characterize the convergence behavior of Air-FEEL w.r.t. the transmission power in the case without unbiased aggregation constraints. 
According to the definition of $ {\bm \varepsilon}^{(n)}$ in \eqref{sys_Err}, at each communication round $n$, the bias and MSE of the gradient estimate obtained through the over-the-air gradient aggregation for Air-FEEL are derived as \begin{align} \!\!\!\!\!\left\|\mathbb{E}\left[{\bm \varepsilon}^{(n)}\right]\right\|&=\frac{ \left\|\nabla F({\bf w}^{(n)}) \right\|}{K}\left|\sum\limits_{k\in\mathcal K}h_k^{(n)}\sqrt{p_k^{(n)}}-K\right|\notag\\ &\leq \frac{G^{(n)} }{K}\left|\sum\limits_{k\in\mathcal K}h_k^{(n)}\sqrt{p_k^{(n)}}-K\right|,\label{equ_bias_p} \end{align} \begin{align} &\!\!\!\!\mathbb{E}\!\left[\!\left\|{\bm \varepsilon}^{(n)}\right\|^2\right]\leq\!\frac{ \!\left\|\nabla F({\bf w}^{(n)}) \right\|^2\!\!+\!\!\frac{\left\|{\bm\sigma}\right\|_2^2}{m_b}}{K}\sum\limits_{k\in\mathcal K}\!\left(\!h_k^{(n)}\sqrt{p_k^{(n)}}\!-\!1\!\right)^2\!\!+\!\frac{\sigma_z^2q}{K^2}\notag\\ &~~~~~~~~~~~~\leq \frac{ \hat{G}^{(n)}}{K}\sum\limits_{k\in\mathcal K}\left(h_k^{(n)}\sqrt{p_k^{(n)}}-1\right)^2+\frac{\sigma_z^2q}{K^2},\label{equ_unbias_p} \end{align} where both inequalities follow from Assumptions~\ref{Assum_VarianceBound} and \ref{Assump_SamplewiseBound}. By substituting \eqref{equ_bias_p} and \eqref{equ_unbias_p} into \eqref{Gap_Iterate} and \eqref{Gap_Iterate_Fix}, we have the following proposition. \begin{proposition}[Optimality gap for Air-FEEL without unbiased aggregation constraints]\label{Theo_OG_DRate_Air} \emph{ The expected optimality gap for Air-FEEL in the case without unbiased aggregation constraints is upper bounded by \begin{align}\label{Gap_Iterate_Air} &\mathbb{E}\left[F\left({\bf w}^{(N+1)}\right)\right]\!-\!F^{\star}\leq \prod\limits_{n\in\mathcal{N} }C^{(n)}\left(\mathbb{E}\left[F\left({\bf w}^{(1)}\right)\right]-F^{\star}\right)\notag\\ &\!+\!\sum_{n=1}^{N}J^{(n)}\!\left(\! 
A^{(n)}\!\left(\!\sum\limits_{k\in\mathcal K}h_k^{(n)}\sqrt{p_k^{(n)}}-K\right)^2\!+\!\frac{(\eta^{(n)})^2L\left\|{\bm\sigma}\right\|_2^2}{2m_bK^2}\!\right)\notag\\ &\!+\!\sum_{n=1}^{N}\!J^{(n)}\!\left(\!B^{(n)}\!\sum\limits_{k\in\mathcal K}\!\left(\!h_k^{(n)}\sqrt{p_k^{(n)}}\!-\!1\!\right)^2\!\!\!+\!\frac{(\eta^{(n)})^2 \sigma_z^2Lq}{K^2}\!\right)\!,\!\! \end{align} where $A^{(n)}= \frac{\left(1+(\eta^{(n)})^2L^2\right)(G^{(n)})^2 }{K^2}$, $B^{(n)}=\frac{ (\eta^{(n)})^2L \hat{G}^{(n)}}{K}$, and $ J^{(n)}=\frac{\prod_{i=n}^{N}C^{(i)}}{2C^{(n)}}$ for the diminishing learning rates $\eta^{(n)}=\frac{u}{n+v}, \forall n\in\mathcal{N}$, with $v>0,u>1/\delta$, and $\eta^{(1)} \leq\frac{2}{2+L}$; while $A^{(n)}= \frac{\left(1+\eta^2L^2\right)(G^{(n)})^2 }{K^2}$, $B^{(n)}=\frac{ \eta^2L \hat{G}^{(n)}}{K}$, and $J^{(n)}=\frac{C^{N-n}}{2}$ for the fixed learning rate with $\eta=\eta^{(n)}, \forall n\in\cal N$, with $ 0\leq\eta\leq\frac{2}{2+L}\leq \frac{1}{\delta}$. } \end{proposition} \subsubsection{Convergence Analysis for Air-FEEL in Case II} Next, we consider the case with unbiased aggregation constraints, where we have $\mathbb{E}\left[{\bm \varepsilon}^{(n)}\right]=0, \forall n\in\cal N$. Similar to Proposition \ref{Theo_OG_DRate_Air}, we have the following proposition. 
\begin{proposition}[Optimality gap for Air-FEEL with unbiased aggregation constraints]\label{Theo_OG_DRate_Air_ub} \emph{ The expected optimality gap for Air-FEEL in the case with unbiased aggregation constraints is upper bounded by \begin{align}\label{Gap_Iterate_Air_ub} &\mathbb{E}\left[F\left({\bf w}^{(N+1)}\right)\right]-F^{\star}\leq \prod\limits_{n\in\mathcal{N} }C^{(n)}\left(\mathbb{E}\left[F\left({\bf w}^{(1)}\right)\right]-F^{\star}\right) \notag\\ &+\sum_{n=1}^{N}J^{(n)}\left(B^{(n)}\sum\limits_{k\in\mathcal K}\left(h_k^{(n)}\sqrt{p_k^{(n)}}-1\right)^2+\frac{(\eta^{(n)})^2 \sigma_z^2Lq}{K^2}\right)\notag\\ &+\sum_{n=1}^{N}J^{(n)}\frac{(\eta^{(n)})^2L\left\|{\bm\sigma}\right\|_2^2}{2m_bK^2}, \end{align} where $B^{(n)}=\frac{ (\eta^{(n)})^2L \hat{G}^{(n)}}{K^2}$ and $ J^{(n)}=\frac{\prod_{i=n}^{N}C^{(i)}}{2C^{(n)}}$ for the diminishing learning rates $\eta^{(n)}=\frac{u}{n+v}, \forall n\in\mathcal{N}$, with $v>0,u>1/\delta$, and $\eta^{(1)} \leq\frac{2}{2+L}$; while $B^{(n)}=\frac{ \eta^2L \hat{G}^{(n)}}{K^2}$ and $J^{(n)}=\frac{C^{N-n}}{2}$ for the fixed learning rate with $\eta=\eta^{(n)}, \forall n\in\cal N$, with $ 0\leq\eta\leq\frac{2}{2+L}\leq \frac{1}{\delta}$. } \end{proposition} Since the derived convergence results for the cases of diminishing and fixed learning rates share a similar form, the subsequent power control optimization will be presented only for the case with diminishing learning rates for brevity, while the resulting insights hold for both cases. \section{Power Control Optimization}\label{Sec_Power} Given the convergence results of Air-FEEL in the preceding section, we are now ready to present the power control optimization policies for speeding up the convergence rate in this section.
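Before formulating the optimization, it is instructive to see how strongly each round's aggregation error is weighted in the bounds of Propositions~\ref{Theo_OG_DRate_Air} and~\ref{Theo_OG_DRate_Air_ub}. The short Python sketch below tabulates $J^{(n)}$ for the diminishing learning rates $\eta^{(n)}=u/(n+v)$ under an \emph{assumed} illustrative contraction factor $C^{(n)}=1-\delta\eta^{(n)}$ (the exact $C^{(n)}$ is defined in Section~\ref{Sec_Error}); the qualitative conclusion, that later-round errors carry larger weights, holds for any $0<C^{(n)}<1$:

```python
# Tabulating the weights J^(n) that scale the per-round aggregation-error
# terms in the optimality-gap bounds. The contraction factor below,
# C^(n) = 1 - delta * eta^(n), is an ASSUMED illustrative form; the
# diminishing learning rate eta^(n) = u/(n+v) follows the propositions.
N, delta = 50, 0.8
u, v = 2.0, 8.0  # chosen so that u > 1/delta

eta = [u / (n + v) for n in range(1, N + 1)]
C = [1.0 - delta * e for e in eta]  # assumed per-round contraction, 0 < C < 1

def J(n):
    """J^(n) = (prod_{i=n}^{N} C^(i)) / (2 C^(n)), with 1-indexed rounds."""
    prod = 1.0
    for i in range(n, N + 1):
        prod *= C[i - 1]
    return prod / (2.0 * C[n - 1])

weights = [J(n) for n in range(1, N + 1)]
# Errors injected at later rounds are damped by fewer contraction factors,
# so they are weighted more heavily than early-round errors.
print(weights[0], weights[-1])  # the final-round weight is exactly 0.5
```

This round dependence is precisely what the conventional per-round MSE minimization cannot capture, which motivates optimizing the power control jointly over all rounds.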
To start with, we first reformulate the power constraints in \eqref{sys_bar_P_max1} and \eqref{sys_bar_P_ave1} by leveraging Assumption~\ref{Assump_SamplewiseBound} and inequality \eqref{Gradient_square} to avoid the requirement of non-causal gradient information ${\bf g}_k^{(n)}$. Hence, the individual power constraints at each communication round and over the entire training process are respectively reformulated as \begin{align} &p_{k}^{(n)} \hat{G}^{(n)}\leq P^{\rm max}_k,~\forall k\in{\mathcal K}, ~ n\in\mathcal{N},\label{sys_bar_P_max}\\ &\frac{1}{N}\sum \limits_{n\in\mathcal{N}}p_{k}^{(n)}\hat{G}^{(n)}\leq P^{\rm ave}_k,~\forall k\in{\mathcal K},\label{sys_bar_P_ave} \end{align} where $P^{\rm max}_k\triangleq q\hat{P}^{\rm max}_k$ and $P^{\rm ave}_k\triangleq q\hat{P}^{\rm ave}_k, \forall k\in\mathcal{K},$ are defined for notational convenience. \subsection{Power Control Optimization for Case I} We start with Case I, i.e., the case without unbiased aggregation constraints. Discarding the irrelevant terms in \eqref{Gap_Iterate_Air} in Proposition \ref{Theo_OG_DRate_Air} (i.e., the terms related to the initial optimality gap $\mathbb{E}\left[F\left({\bf w}^{(1)}\right)\right]-\!F^{\star}$, the gradient variance bound $\frac{(\eta^{(n)})^2L\left\|{\bm\sigma}\right\|_2^2}{2m_bK^2}$, and the noise power $\frac{(\eta^{(n)})^2 \sigma_z^2Lq}{K^2}$), we denote $\tilde{\Phi}(\{p_k^{(n)}\})$ in the following as the effective optimality gap to be optimized. \begin{align}\label{Eff_OG_Biased} \tilde{\Phi}(\{p_k^{(n)}\}) \triangleq &\sum_{n=1}^{N}J^{(n)}A^{(n)}\left(\sum\limits_{k\in\mathcal K}h_k^{(n)}\sqrt{p_k^{(n)}}-K\right)^2\notag\\ &+\sum_{n=1}^{N}J^{(n)}B^{(n)}\sum\limits_{k\in\mathcal K}\left(h_k^{(n)}\sqrt{p_k^{(n)}}-1\right)^2.
\end{align} The optimization problem is thus formulated as \begin{align} \mathbf{P1:} \min_{\{p_k^{(n)}\ge 0\}} ~~& \tilde{\Phi}(\{p_k^{(n)}\})\notag\\ {\rm s.t.}~~~~&\eqref{sys_bar_P_max}~\text{and}~\eqref{sys_bar_P_ave}.\notag \end{align} By introducing a set of auxiliary variables, $\hat{p}_k^{(n)}=\sqrt{p_k^{(n)}}, \forall k\in\mathcal{K}, n\in\mathcal{N}$, the objective is re-expressed as \begin{align}\label{Eff_OG_Biased_Phat} {\Phi}(\{\hat{p}_k^{(n)}\})\triangleq \sum_{n=1}^{N}J^{(n)}A^{(n)}\left(\sum\limits_{k\in\mathcal K}h_k^{(n)}\hat{p}_k^{(n)}-K\right)^2\notag\\ +\sum_{n=1}^{N}J^{(n)}B^{(n)}\sum\limits_{k\in\mathcal K}\left(h_k^{(n)}\hat{p}_k^{(n)}-1\right)^2\!\!,\! \end{align} and problem (P1) is re-expressed as \begin{align} \!\!\!\mathbf{P1.1:} \min_{\{\hat{p}_k^{(n)}\ge 0\}} ~&{\Phi}(\{\hat{p}_k^{(n)}\})\notag\\ {\rm s.t.}~~~~ &\hat{p}_{k}^{(n)} \leq P^{\rm max}_{k,n},~\forall k\in{\mathcal K}, ~ n\in\mathcal{N} \label{Biased_MaxPower}\\ &\frac{1}{N}\sum \limits_{n\in\mathcal{N}}\left(\hat{p}_{k}^{(n)}\right)^2\hat{G}^{(n)} \leq P^{\rm ave}_k,~\forall k\in{\mathcal K}\label{Biased_AvePower}, \end{align} where constraints \eqref{Biased_MaxPower} and \eqref{Biased_AvePower} follow from \eqref{sys_bar_P_max} and \eqref{sys_bar_P_ave}, respectively, and $P^{\rm max}_{k,n}\triangleq \sqrt{\frac{ P^{\rm max}_k}{\hat{G}^{(n)}}}, \forall k\in\mathcal{K},~n\in\mathcal{N}$. Problem (P1.1) is convex and can thus be optimally solved by standard convex optimization techniques such as the interior point method \cite{cvx}. Alternatively, to gain engineering insights, we resort to the Lagrange duality method to derive the structured optimal solution for problem (P1.1). Let $\{\hat{p}_k^{(n)\rm opt}\}$ denote the optimal solution to problem (P1.1), and $\varphi_k^{\rm opt}, \forall k\in\mathcal{K}$ the optimal dual variable associated with the $k$-th constraint in \eqref{Biased_AvePower}. Then we have the following proposition.
\begin{proposition}\label{lemma_Biased_Power}\emph{ The optimal solution $\hat{p}_k^{(n)\rm opt}, \forall k\in\mathcal{K},~n\in\mathcal{N}$ to problem (P1.1) is \begin{align}\label{Biased_hatP_Opt} \hat{p}_k^{(n)\rm opt}=\min\left[\frac{B^{(n)}+A^{(n)}K }{M_k^{(n)}+ A^{(n)}M_k^{(n)}\sum\limits_{i\in\mathcal{K}} \frac{h_i^{(n)}}{M_i^{(n)}} } ,P^{\rm max}_{k,n}\right], \end{align} where $ M_k^{(n)}\triangleq B^{(n)} h_k^{(n)}+\frac{ \varphi_k^{\rm opt}\hat{G}^{(n)}}{NJ^{(n)}h_k^{(n)}} , \forall k\in\mathcal{K},~n\in\mathcal{N} $. } \end{proposition} \begin{IEEEproof} See Appendix~\ref{Proof_Lemma_Biased}. \end{IEEEproof} \noindent According to Proposition~\ref{lemma_Biased_Power}, the optimal power scaling factors $p_k^{(n)\rm opt}, \forall k\in\mathcal{K},~n\in\mathcal{N}$ to problem (P1) are \begin{align}\label{Biased_power_Opt} \!\!\!p_k^{(n)\rm opt}\!=\!\min\!\!\left[\!\left(\frac{B^{(n)}+A^{(n)}K }{M_k^{(n)}\!+\! A^{(n)}M_k^{(n)}\!\!\sum\limits_{i\in\mathcal{K}}\!\! \frac{h_i^{(n)}}{M_i^{(n)}} } \right)^2\!\!\!\!,\!\left(P^{\rm max}_{k,n}\right)^2\right]\!\!.\! \end{align} \begin{remark}\emph{According to Proposition \ref{lemma_Biased_Power}, the optimal $\{\hat{p}_k^{(n)\rm opt}\}$ (equivalently the optimal power scaling factor $p_k^{(n)\rm opt}=(\hat{p}_k^{(n)\rm opt})^2, \forall k\in\mathcal{K},~n\in\mathcal{N}$) exhibits a {\it regularized channel inversion} structure with the regularized term $\sum\limits_{i\in\mathcal{K}} \frac{A^{(n)}h_i^{(n)}M_k^{(n)}}{M_i^{(n)}}$ related to all dual variables $ \varphi_k^{\rm opt}$ associated with the average power budgets at all edge devices in \eqref{Biased_AvePower}. Consider the special case when the average power budgets $\{P^{\rm ave}_k\}$ at all devices are sufficiently large, such that all the dual variables become zero (i.e., $\varphi_k^{\rm opt}=0, \forall k\in\mathcal{K}$).
In this case, the optimal power scaling strategy reduces to the channel inversion policy, i.e., $p_k^{(n)\rm opt}=\min\left[\frac{1}{\left(h_k^{(n)}\right)^2}, \left(P^{\rm max}_{k,n}\right)^2\right]$, $\forall k\in\mathcal{K},~n\in\mathcal{N}$. Interestingly, this result is equivalent to minimizing the MSE in isolation at each communication round. In other words, in the special case when all devices have sufficiently large average power budgets, the conventional MSE minimization can be sufficient to minimize the optimality gap. } \end{remark} \subsection{Power Control Optimization for Case II} Next, we consider the power control optimization for the case with unbiased aggregation constraints, where the power control policy needs to enforce the additional constraint $\mathbb{E}\left[{\bm \varepsilon}^{(n)}\right]= 0, \forall n\in\cal N$. According to \eqref{equ_bias_p}, it follows that $\sum\limits_{k\in\mathcal K}h_k^{(n)}\sqrt{p_k^{(n)}}=K, \forall n\in\cal N$. In this case, the effective optimality gap in Proposition~\ref{Theo_OG_DRate_Air_ub} is given by \begin{align} \tilde\Theta\left(\{p_k^{(n)}\}\right)\triangleq \sum_{n=1}^{N}J^{(n)}B^{(n)}\sum\limits_{k\in\mathcal K}\left(h_k^{(n)}\sqrt{p_k^{(n)}}-1\right)^2\!. \end{align} Accordingly, we formulate the power control optimization problem as \begin{align} \mathbf{P2:} \min_{\{p_k^{(n)}\ge 0\}} ~~&\tilde\Theta\left(\{p_k^{(n)}\}\right)\notag\\ {\rm s.t.}~~~~&\sum\limits_{k\in\mathcal K}h_k^{(n)}\sqrt{p_k^{(n)}}=K, \forall n\in\cal N\\ &\eqref{sys_bar_P_max}~\text{and}~\eqref{sys_bar_P_ave}.\notag \end{align} Note that problem (P2) is non-convex.
However, via a change of variables $ q_k^{(n)}\triangleq\sqrt{p_k^{(n)}}, \forall k\in\mathcal{K}, n\in\mathcal{N}$, the objective can be re-expressed as \begin{align} \Theta\left(\{q_k^{(n)}\}\right)\triangleq \sum_{n=1}^{N}J^{(n)}B^{(n)}\sum\limits_{k\in\mathcal K}\left(h_k^{(n)}q_k^{(n)}-1\right)^2, \end{align} and problem (P2) can be transformed into the following equivalent convex form: \begin{align} \!\!\!\mathbf{P2.1:} \min_{\{q_k^{(n)}\ge 0\}} ~&\Theta\left(\{q_k^{(n)}\}\right)\notag\\ {\rm s.t.}~~~&\sum\limits_{k\in\mathcal K}h_k^{(n)}q_{k}^{(n)}=K,~\forall n\in\mathcal{N} \label{Unbiased_alignment}\\ &q_{k}^{(n)} \leq P^{\rm max}_{k,n},~\forall k\in{\mathcal K}, ~\forall n\in\mathcal{N} \label{Unbiased_MaxPower}\\ &\frac{1}{N}\sum \limits_{n\in\mathcal{N}}\left(q_{k}^{(n)}\right)^2\hat{G}^{(n)} \leq P^{\rm ave}_k,~\forall k\in{\mathcal K}\label{Unbiased_AvePower}, \end{align} where constraints \eqref{Unbiased_MaxPower} and \eqref{Unbiased_AvePower} follow from \eqref{sys_bar_P_max} and \eqref{sys_bar_P_ave}, respectively. \subsubsection{Feasibility of Problem (P2.1)} Before solving problem (P2.1), we first check its feasibility, i.e., whether the power budget can support the required unbiased estimation level denoted by $\ell$ or not. Let $\ell^{\star} $ denote the maximum unbiased estimation level, which can be expressed as \begin{align} \ell^{\star}= \max_{\{q_k^{(n)}\ge 0\}} ~~&\ell \label{Unbiased_Fea}\\ {\rm s.t.}~~~~&\sum\limits_{k\in\mathcal K}h_k^{(n)}q_{k}^{(n)}\geq \ell,~\forall n\in\mathcal{N}\notag\\ &\eqref{Unbiased_MaxPower}~\text{and }~\eqref{Unbiased_AvePower}.\notag \end{align} If $\ell^{\star}\geq K$, then problem (P2.1) is feasible; otherwise, problem (P2.1) is not feasible. Hence, the feasibility checking procedure corresponds to finding $\ell^{\star} $ by solving problem \eqref{Unbiased_Fea}. 
Notice that problem \eqref{Unbiased_Fea} is convex, which can thus be efficiently solved via standard convex optimization techniques, such as the interior point method \cite{cvx}. By comparing $\ell^{\star}$ versus $K$, the feasibility of problem (P2.1) is checked. In the following, we solve problem (P2.1) when it is feasible. \subsubsection{Optimal Solution to Problem (P2.1)} Let $\{q_k^{(n)\rm opt}\}$ denote the optimal solution to problem (P2.1). We have the following proposition by leveraging the Lagrange duality method, where $\mu_n^{\rm opt} $ and $\lambda_k^{\rm opt}$ are the optimal dual variables associated with constraints \eqref{Unbiased_alignment} and \eqref{Unbiased_AvePower}, respectively. \begin{proposition}\label{theorem_Unbiased_gamma}\emph{The optimal solution $q_k^{(n)\rm opt}, \forall k\in\mathcal{K},~n\in\mathcal{N} $ to problem (P2.1) is given as \begin{align} q_k^{(n)\rm opt}=\min\left[\frac{h_k^{(n)}\alpha_k^{(n)}}{ (h_k^{(n)})^2+\frac{ 2\lambda_k^{\rm opt}\hat{G}^{(n)}}{NJ^{(n)}B^{(n)}}},P^{\rm max}_{k,n}\right],\label{Unbiased_q_Opt} \end{align} where $\alpha_k^{(n)}\triangleq \left(1-\frac{\mu_n^{\rm opt}}{2J^{(n)}B^{(n)}} \right)^+, \forall k\in\mathcal{K},~n\in\mathcal{N}$. } \end{proposition} \begin{IEEEproof} See Appendix~\ref{Proof_theorem_Unbiased_gamma}. \end{IEEEproof} From \eqref{Unbiased_q_Opt} in Proposition~\ref{theorem_Unbiased_gamma}, we can accordingly obtain the optimal power scaling factors $p_k^{(n)\rm opt}, \forall k\in\mathcal{K},~n\in\mathcal{N}$ to problem (P2) as \begin{align}\label{Unbiased_power_Opt} p_k^{(n)\rm opt}&=\left(q_k^{(n)\rm opt}\right)^2\notag\\ &=\!\min\!\left[\!\left(\!\frac{h_k^{(n)}\alpha_k^{(n)}}{ (h_k^{(n)})^2\!+\!\frac{ 2\lambda_k^{\rm opt}\hat{G}^{(n)}}{NJ^{(n)}B^{(n)}}}\right)^2\!,\left(P^{\rm max}_{k,n}\right)^2\!\right]\!. 
\end{align} \begin{remark}\emph{According to \eqref{Unbiased_q_Opt}, the optimal solution of $\{q_k^{(n)\rm opt}\}$ to problem (P2.1) (equivalently the optimal power scaling factor $p_k^{(n)\rm opt}=(q_k^{(n)\rm opt})^2, \forall k\in\mathcal{K},~n\in\mathcal{N}$) has a similar \emph{regularized channel inversion} structure to that in \eqref{Biased_hatP_Opt}, but the regularized term therein (i.e., $\frac{ 2\lambda_k^{\rm opt}\hat{G}^{(n)}}{NJ^{(n)}B^{(n)}}$) is {\it only} related to its own device $k$'s average power budget in \eqref{Unbiased_AvePower} through the dual variable $\lambda_k^{\rm opt}$, as opposed to all devices' budgets in \eqref{Biased_AvePower} for the case without unbiased aggregation constraints. Furthermore, it is observed that for any edge device $k\in\mathcal{K}$, if $\lambda_k^{\rm opt}>0$ holds, then the average power constraint of edge device $k$ must be tight at optimality (i.e., $\frac{1}{N}\sum \limits_{n\in\mathcal{N}}\left(q_{k}^{(n)\rm opt}\right)^2\hat{G}^{(n)} - P^{\rm ave}_k=0$) due to the complementary slackness condition, and thus this edge device should use up its average power budget based on the regularized channel inversion power control over communication rounds; otherwise, if $\lambda_k^{\rm opt}=0$, then edge device $k$ should transmit with channel-inversion power control without using up its average power budget. } \end{remark} \section{Simulation Results}\label{sec_simu} In this section, we provide simulation results to validate the performance of the proposed power control policies for Air-FEEL. The proposed algorithms are implemented using Matlab and PyTorch for two different tasks, i.e., ridge regression and handwritten digit recognition, respectively. \subsection{Simulation Setup and Benchmark Schemes} In the simulation, the wireless channels from the edge devices to the edge server over different communication rounds follow i.i.d. Rayleigh fading, i.e., $h_k^{(n)}$'s are modeled as i.i.d.
{\it circularly symmetric complex Gaussian} (CSCG) random variables with zero mean and unit variance. We set the number of devices as $K=10$, the noise variance $\sigma_z^2=0.1$, and the average power budgets at different devices $\hat{P}^{\rm ave}_k$ to be heterogeneous\footnote{The average power budgets at different devices are set as $\hat{P}^{\rm ave}_{2i-1}=5$~W and $\hat{P}^{\rm ave}_{2i}=15$~W for $i\in\{1,\cdots,K/2\}$.}. We set the maximum power budget at each device as $\hat{P}^{\rm max}_k=5\hat{P}^{\rm ave}_k, \forall k\in\mathcal{K}$. We consider both the fixed and diminishing learning rates with $\eta=0.05$ and $\eta^{(n)}=\frac{u}{n+v}$ under $u=2$ and $v=8$, respectively. As for the performance metrics, the optimality gap and prediction error are considered for ridge regression on a synthetic dataset, while the loss function value and test (recognition) accuracy are considered for handwritten digit recognition on the MNIST dataset. For performance comparison, we consider the following two benchmark schemes. \begin{itemize} \item {\bf Fixed power transmission}: The edge devices transmit with fixed power over different communication rounds by setting $p_k^{(n)}=P^{\rm ave}_k, \forall k\in\cal K$. \item {\bf Conventional MSE minimization}: The edge devices optimize their power control to minimize the aggregation MSE in isolation at each communication round. For each round, the MSE minimization problem has been solved in \cite{Cao_PowerTWC}\footnote{Although the conventional channel inversion power control can achieve unbiased aggregation, it is not the only way to do so, but merely a sufficient condition for unbiased aggregation. Moreover, as validated in \cite{Cao_PowerTWC}, the conventional MSE minimization scheme can achieve the minimum communication distortion in AirComp. Therefore, in this paper we only consider the conventional MSE minimization scheme as a benchmark, which always outperforms the generally sub-optimal channel inversion scheme. }.
\end{itemize} \subsection{Air-FEEL for Ridge Regression} First, we consider ridge regression with the sample-wise loss function $f({\bf w},{\bf x},\tau)=\frac{1}{2}\| {\bf x}^T{\bf w}-\tau\|^2+\rho R({\bf w})$ \footnote{ The loss function here consists of both the model loss and the regularization term, where the former captures the prediction error of the trained model over the samples, and the latter is added to avoid overfitting and enhance the robustness \cite{LChen20TWC}.} and the regularization function $R({\bf w})=\|{\bf w}\|^2$ with the hyperparameter $\rho=5\times 10^{-5}$. A randomly generated synthetic dataset is used for model training and testing. The generated data sample vectors ${\bf x}\in\mathbb{R}^q$ with $q=10$ follow an i.i.d. Gaussian distribution (i.e., ${\bf x}\sim\mathcal{N}({\bf 0},{\bf I})$), and the label $\tau$ is obtained as $\tau=x(2)+3x(5)+0.2z$, where $x(t)$ represents the $t$-th entry of vector ${\bf x}$ and $z$ denotes the observation noise with i.i.d. Gaussian distribution, i.e., $z\sim\mathcal{N}(0,1)$. Unless stated otherwise, the data samples are evenly distributed among devices with identical size $ D = D_k=1000, \forall k\in \mathcal K$ and $D_{\rm tot} = \sum\limits_{k\in \mathcal{K} } D_k = 10000. $ Based on the above models, we can obtain the smoothness parameter $L$ and Polyak-{\L}ojasiewicz parameter $\delta$ as the largest and smallest eigenvalues of the data Gramian matrix ${\bf X}^T{\bf X}/D_{\rm tot}+10^{-4}{\bf I}$, in which ${\bf X}=[{\bf x}_1,\cdots,{\bf x}_{D_{\rm tot}}]^T$ is the data matrix. We use the simple upper bound $G^{(n)} =2WL$ \cite{DLiu2020Ar}, where $W\geq \|{\bf w}\|$ is a bound on the norm of the parameter vector. The optimal loss function $F^{\star}$ is computed using the optimal parameter vector ${\bf w}^{\star}$ of the learning problem \eqref{OptimalParameter}, where ${\bf w}^{\star}=({\bf X}^T{\bf X}+\rho{\bf I})^{-1}{\bf X}^T\boldsymbol{\tau}$ with $\boldsymbol{\tau}=[\tau_1,\cdots,\tau_{D_{\rm tot}}]^T$.
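For concreteness, the closed-form ridge solution ${\bf w}^{\star}=({\bf X}^T{\bf X}+\rho{\bf I})^{-1}{\bf X}^T\boldsymbol{\tau}$ can be reproduced on a scaled-down analogue of this setup. The sketch below uses $q=2$ features instead of $10$, labels that are a noisy linear function of the two entries, and the regularizer normalization $(\rho/2)\|{\bf w}\|^2$ in the summed objective; these simplifications are assumptions of the illustration, not the paper's exact configuration:

```python
import random

random.seed(1)

# Scaled-down analogue of the ridge-regression setup: q = 2 features,
# D = 200 samples, labels tau = x1 + 3*x2 + 0.2*noise (assumed values).
rho, D = 0.1, 200
X = [[random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)] for _ in range(D)]
tau = [x[0] + 3.0 * x[1] + 0.2 * random.gauss(0.0, 1.0) for x in X]

# Normal equations A w = b with A = X^T X + rho I and b = X^T tau.
A = [[rho, 0.0], [0.0, rho]]
b = [0.0, 0.0]
for x, t in zip(X, tau):
    for i in range(2):
        b[i] += x[i] * t
        for j in range(2):
            A[i][j] += x[i] * x[j]

# Solve the 2x2 system by Cramer's rule to get w*.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
w_star = [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
          (A[0][0] * b[1] - A[1][0] * b[0]) / det]

# Stationarity check: the gradient of (1/2)||X w - tau||^2 + (rho/2)||w||^2
# vanishes at w*, confirming the closed form.
grad = [rho * w_star[i]
        + sum(x[i] * (x[0] * w_star[0] + x[1] * w_star[1] - t)
              for x, t in zip(X, tau))
        for i in range(2)]
print(w_star, grad)
```

The recovered weights land close to the generating coefficients $(1, 3)$, and the vanishing gradient confirms that the closed form is indeed the minimizer.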
We set the initial parameter vector as an all-zero vector. \begin{figure*}[htbp] \centering \subfigure[Optimality gap versus $N$ under diminishing learning rate.] {\label{DL_fig:OG_v_N}\includegraphics[width=8.6cm]{DL_N_OG.eps}} \subfigure[Prediction error versus $N$ under diminishing learning rate.] {\label{DL_fig:PE_v_N} \includegraphics[width=8.6cm]{DL_N_PE.eps}} \subfigure[Optimality gap versus $N$ under fixed learning rate.] {\label{fig:OG_v_N}\includegraphics[width=8.6cm]{N_OG.eps}} \subfigure[Prediction error versus $N$ under fixed learning rate.] {\label{fig:PE_v_N} \includegraphics[width=8.6cm]{N_PE.eps}} \caption{Learning performance of Air-FEEL over number of communication rounds.} \label{Fig:Learning_v_N} \end{figure*} Fig.~\ref{Fig:Learning_v_N} shows the learning performance (i.e., the optimality gap in Figs. \ref{DL_fig:OG_v_N} and \ref{fig:OG_v_N} and the prediction error in Figs. \ref{DL_fig:PE_v_N} and \ref{fig:PE_v_N}) versus the number of communication rounds $N$, where the learning rates are set to be diminishing in Figs.~\ref{DL_fig:OG_v_N} and~\ref{DL_fig:PE_v_N} and those are set to be fixed in Figs.~\ref{fig:OG_v_N} and~\ref{fig:PE_v_N}. First, it is observed that the proposed power control policies (with both biased and unbiased aggregation constraints) and the conventional MSE minimization design achieve faster convergence and lower optimality gap than the fixed power transmission. This shows the benefit of power control optimization in accelerating the learning convergence rate, via either directly minimizing the optimality gap or indirectly minimizing the MSE. Secondly, the proposed power control policies are observed to significantly outperform the conventional MSE minimization design in reducing the optimality gap. 
This is due to the fact that the contributions of aggregation errors to the optimality gap are distinct at different communication rounds (see Remark~\ref{Remark_ContractionRegion}), which cannot be captured by the conventional MSE minimization design. Furthermore, the proposed power control policy under Case II (with unbiased aggregation constraints) is observed to achieve a lower optimality gap than the proposed power control policy under Case I (without unbiased aggregation constraints) when $N>150$ under the fixed learning rate and $N>80$ under the diminishing learning rates. This coincides with Remark~\ref{Remark_ContractionRegion} that the Air-FEEL algorithm converges to the optimal point with unbiased gradient aggregation. \begin{figure*}[htbp] \centering \subfigure[Optimality gap versus $K$ under diminishing learning rate.] {\label{fig:FL_v_DL_K1}\includegraphics[width=8.6cm]{DL_K_OG.eps}} \subfigure[Prediction error versus $K$ under diminishing learning rate.] {\label{fig:FL_v_DL_K2} \includegraphics[width=8.6cm]{DL_K_PE.eps}} \subfigure[Optimality gap versus $K$ under fixed learning rate.] {\label{fig:FL_v_K1}\includegraphics[width=8.6cm]{K_OG.eps}} \subfigure[Prediction error versus $K$ under fixed learning rate.] {\label{fig:FL_v_K2} \includegraphics[width=8.6cm]{K_PE.eps}} \caption{Effect of number of devices on the learning performance of Air-FEEL.} \label{Fig:FL_v_K} \end{figure*} Fig.~\ref{Fig:FL_v_K} shows the learning performance (i.e., the optimality gap in Figs. \ref{fig:FL_v_DL_K1} and \ref{fig:FL_v_K1} and the prediction error in Figs. \ref{fig:FL_v_DL_K2} and \ref{fig:FL_v_K2}) versus the number of devices $K$, where the learning rates are set to be diminishing in Figs.~\ref{fig:FL_v_DL_K1} and~\ref{fig:FL_v_DL_K2} and those are set to be fixed in Figs.~\ref{fig:FL_v_K1} and~\ref{fig:FL_v_K2}. Firstly, it is observed that the optimality gap achieved by all schemes decreases as $K$ increases. 
This is because the edge server can aggregate more data for averaging to improve the learning performance. Secondly, the performance gaps between the proposed power control policies (under Cases I and II) and the benchmark schemes are observed to decrease as $K$ increases, saturating in the large-$K$ regime. The performance gap validates the effectiveness of the proposed power control optimization in reducing the optimality gap. \subsection{Air-FEEL for Handwritten Digit Recognition}\label{Sec_CNN} \begin{figure*}[htbp] \vspace{-0.05cm} \centering \subfigure[Loss value versus $N$ under diminishing learning rate.] {\label{CNN_DL_fig:OG_v_N}\includegraphics[width=8.6cm]{DL_CNN_LV.eps}} \subfigure[Test accuracy versus $N$ under diminishing learning rate.] {\label{CNN_DL_fig:PE_v_N} \includegraphics[width=8.6cm]{DL_CNN_TA.eps}} \subfigure[Loss value versus $N$ under fixed learning rate.] {\label{CNN_fig:OG_v_N}\includegraphics[width=8.6cm]{CNN_LV.eps}} \subfigure[Test accuracy versus $N$ under fixed learning rate.] {\label{CNN_fig:PE_v_N} \includegraphics[width=8.6cm]{CNN_TA.eps}} \caption{Learning performance of Air-FEEL on MNIST dataset over number of communication rounds.} \label{CNN_Fig:N} \end{figure*} Next, we consider the learning task of handwritten digit recognition using the well-known MNIST dataset, which consists of 10 classes of black-and-white digits ranging from ``0" to ``9". We implement a 6-layer CNN as the classifier model, which consists of two $5\times 5$ convolution layers with ReLU activation (the first with 32 channels, the second with 64), each followed by a $2 \times 2$ max pooling; a fully connected layer with 512 units and ReLU activation; and a final softmax output layer ($582,026$ parameters in total). The local batch size at each edge device is set to be $m_b=512$.
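The quoted parameter total can be reproduced from the layer dimensions. The sketch below assumes $28\times 28$ single-channel MNIST inputs and ``valid'' (no-padding) $5\times 5$ convolutions, so the spatial sizes evolve as $28 \to 24 \to 12 \to 8 \to 4$ through the conv/pool stages and the flattened feature map entering the fully connected layer has $4\times 4\times 64 = 1024$ entries:

```python
# Reproducing the stated CNN parameter total under the assumptions above
# (28x28x1 inputs, no-padding 5x5 convolutions, 2x2 max pooling).
def conv_params(k, c_in, c_out):
    """k x k convolution kernels plus one bias per output channel."""
    return k * k * c_in * c_out + c_out

def fc_params(n_in, n_out):
    """Dense weight matrix plus biases."""
    return n_in * n_out + n_out

conv1 = conv_params(5, 1, 32)     # 28x28x1 -> 24x24x32, pool -> 12x12x32
conv2 = conv_params(5, 32, 64)    # 12x12x32 -> 8x8x64,  pool -> 4x4x64
fc = fc_params(4 * 4 * 64, 512)   # flatten 1024 -> 512, ReLU
out = fc_params(512, 10)          # softmax over the 10 digit classes

total = conv1 + conv2 + fc + out
print(total)  # 582026, matching the stated model size
```

The fully connected layer dominates the count, which is typical for small LeNet-style classifiers.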
Notice that Assumptions \ref{Assump_Smooth} and \ref{Assump_PL} may not hold in this case, but our proposed power control policies still work well as will be shown shortly. Fig.~\ref{CNN_Fig:N} shows the learning performance versus the number of communication rounds $N$, where the learning rates are set to be diminishing in Figs.~\ref{CNN_DL_fig:OG_v_N} and~\ref{CNN_DL_fig:PE_v_N} and fixed with $\eta=0.01$ in Figs.~\ref{CNN_fig:OG_v_N} and~\ref{CNN_fig:PE_v_N}. First, it is observed that the proposed power control policies achieve lower loss function values and higher test accuracy than both the fixed-power-transmission and conventional-MSE-minimization schemes. Furthermore, the power control policy under Case II is observed to outperform that under Case I when $N>200$ with the fixed learning rate and $N>150$ with the diminishing learning rates. These observations are generally consistent with those in Fig. \ref{Fig:Learning_v_N} with the ridge regression model. \section{Conclusion} In this paper, we exploited the transmission power control as a new design degree of freedom to optimize the learning performance of Air-FEEL. To this end, we first analyzed the convergence behavior of the FEEL algorithm (in terms of the optimality gap) and characterized the impact of aggregation errors, w.r.t. their bias and MSE, at different communication rounds. It was observed that in the case with unbiased aggregation estimates, the FEEL algorithm would converge exactly to the optimal point under mild conditions; otherwise, it would converge with an error floor. Next, we proposed to directly minimize the derived optimality gaps by optimizing the power control, for which the obtained optimal solutions follow regularized channel inversion structures.
Finally, experimental results demonstrated that the proposed power control policies achieve a significantly lower optimality gap in Air-FEEL, as compared with benchmark schemes with fixed power transmission and conventional MSE minimization. We expect that this initial work can provide useful insights on exploiting power control for enhancing the Air-FEEL performance. Nevertheless, due to the space limitation, there are still many interesting issues that are not addressed in this paper but are worth investigating. In the following, we introduce several of them to motivate future work. \begin{itemize} \item One interesting direction is to explore the large-scale Air-FEEL over multi-cell networks to accommodate massive edge devices with more data. In this case, a hierarchical framework would be established to explore the distributed computation and communication capacity for performance improvement, and therein the cooperative interference management will be a new issue to be addressed. \item Another interesting direction is to consider the FedAvg scenario, where the local updates at each edge device are implemented multiple times, and the local models are aggregated over the air instead of gradients. The analysis and design principles in this paper are generally extendable to this scenario by taking into account the following two new technical challenges. First, as multiple local updates are implemented at edge devices, the gradient/model error analysis for aggregation in this paper is not directly applicable to FedAvg. Therefore, new approaches for analyzing the gradient/model errors in FedAvg should be considered for its convergence analysis. Next, while the learning rate plays a dual role of denoising factor for over-the-air gradient aggregation in FedSGD, in FedAvg we need a dedicated denoising factor to suppress the AirComp signal misalignment error for model aggregation.
As a result, the denoising factors would become new design variables for joint optimization, thus making the problem more complicated. \item Furthermore, it is also an interesting problem to fairly and quantitatively compare the performance of Air-FEEL versus the traditional digital FEEL (e.g., \cite{Chen20FL,Mo2020aa,Tran19_Infocom}), in terms of the training latency and training accuracy. Generally speaking, the proposed Air-FEEL can achieve significant per-round communication latency reduction over the digital FEEL, but at the cost of the newly introduced aggregation errors and the increased number of communication rounds needed for convergence. To deal with such a tradeoff, we need to explore the power control policy for training latency minimization while ensuring a given maximum optimality gap requirement for both Air-FEEL and digital FEEL. How to determine proper approximate optimality gaps for both Air-FEEL and digital FEEL to enable their fair training performance comparison is challenging in practice. \item Moreover, the investigation of Air-FEEL with non-i.i.d. data is also an interesting future research direction. The non-i.i.d. nature of data may degrade the training performance of Air-FEEL, and also make the convergence analysis more challenging. In this case, how to capture the effect of the devices' degrees of data heterogeneity on the learning performance and accordingly optimize the transmission power control is a difficult task. \end{itemize}
\section{Introduction} Numerical simulations have shown that the very first stars invariably formed in isolation and were much more massive than the sun, due mainly to the inability of primordial gas to efficiently cool at low temperatures \citep{2002Sci...295...93A,2002ApJ...564...23B,2006ApJ...652....6Y}. \citet{2004ApJ...612..602T} have suggested that the Pop III IMF was not dominated by very massive stars (M $>$ 140 M$\mbox{$_{\odot}$}$), but instead by stars with M = 8--40 M$\mbox{$_{\odot}$}$. Even this IMF, though, is still remarkably distinct from that observed for the local universe, which peaks at less than one solar mass \citep{1979ApJS...41..513M,2002Sci...295...82K,2003PASP..115..763C}. The deaths of the first stars produced and distributed copious amounts of metals into their surroundings, through either core-collapse (M $\gtrsim$ 10 M$\mbox{$_{\odot}$}$) or pair-instability (M $\gtrsim$ 140 M$\mbox{$_{\odot}$}$) supernovae \citep{2002ApJ...567..532H}. These metals provide additional avenues for radiative cooling of the ambient gas, through fine-structure and molecular transitions, as well as continuum emission from dust formed from the supernova ejecta, permitting the gas that will form the next generation of stars to reach temperatures lower than what is possible for metal-free gas. Fragmentation of collapsing gas will continue so long as the gas can keep decreasing in temperature as the density increases \citep{2005MNRAS.359..211L}, or until the gas becomes optically thick to its own emission \citep{1976MNRAS.176..367L}. The minimum fragment mass is determined by the local Jeans mass, \begin{equation} \label{eqn:mj} M_{J} \simeq 700 \textrm{ M}\mbox{$_{\odot}$} \, (T/200~\textrm{K})^{3/2} \, (n/10^{4}~\textrm{cm}^{-3})^{-1/2} \, (\mu/2)^{-2}, \end{equation} where $T$, $n$, and $\mu$ are the temperature, number density, and mean molecular weight at the halt of fragmentation \citep{2005MNRAS.359..211L}.
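Equation~\eqref{eqn:mj} is straightforward to evaluate numerically; the sketch below checks the fiducial metal-free scaling, and then an illustrative metal-enriched case (the second set of input values is assumed for illustration, not taken from the text):

```python
# Direct evaluation of the Jeans-mass scaling in Equation (1).
def jeans_mass(T, n, mu):
    """Minimum fragment mass in solar masses; T in K, n in cm^-3."""
    return 700.0 * (T / 200.0) ** 1.5 * (n / 1.0e4) ** -0.5 * (mu / 2.0) ** -2

# Metal-free case discussed in the text: T ~ 200 K, n ~ 1e4 cm^-3, mu ~ 2.
print(jeans_mass(200.0, 1.0e4, 2.0))  # 700.0

# Illustrative (not from the text) metal-enriched case: cooler, denser gas
# yields a much smaller minimum fragment mass.
print(jeans_mass(50.0, 1.0e6, 2.0))
```

The steep dependence on temperature and density is the crux of the argument: gas that keeps cooling as it compresses fragments into progressively smaller clumps.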
For metal-free gas, a minimum temperature of $\sim$ 200 K is reached at n $\simeq$ 10$^{4}$ cm$^{-3}$ when H$_{2}$ cooling becomes inefficient, yielding a Jeans mass of M$_{J}$ $\simeq$ 10$^{3}$ M$\mbox{$_{\odot}$}$ \citep{2002Sci...295...93A,2002ApJ...564...23B}. Above a certain chemical abundance, it is conjectured that metals provide sufficient cooling for the temperature of the gas to continue decreasing as the density increases past the stalling point for metal-free gas, allowing the collapsing gas cloud to undergo fragmentation and form smaller and smaller clumps. The enrichment of gas to some critical metallicity, Z$_{cr}$, will trigger the formation of the first low-mass (Pop II) stars in the universe, as the gas can, in general, cool to lower temperatures at higher metallicity. The value of Z$_{cr}$ can be estimated by calculating the metallicity required to produce a cooling rate equal to the rate of adiabatic compression heating at a given temperature and density. This has been carried out for individual alpha elements, such as C and O, by \citet{2003Natur.425..812B}, and for C, O, Si, and Fe, as well as solar abundance patterns, by \citet{2006ApJ...643...26S}, yielding roughly 10$^{-3.5}$ Z$\mbox{$_{\odot}$}$ $\lesssim$ Z$_{cr}$ $\lesssim$ 10$^{-3}$ Z$\mbox{$_{\odot}$}$. Aside from the minimum clump mass, however, not much more can be said about the spectrum of clump masses produced during fragmentation. \citet{2005ApJ...626..627O} use one-zone models with very sophisticated chemical networks to follow the evolution of temperature and density in the center of a collapsing gas cloud, for a range of metallicities. The predictions of fragmentation from this work, though, are based solely on statistical arguments of elongation in prestellar cores and do not capture the complex processes of interaction and accretion associated with the formation of multiple stars \citep{2003MNRAS.339..577B}.
\citet{2006ApJ...642L..61T} simulate the high density (n $\ge$ 10$^{10}$ cm$^{-3}$) evolution of extremely low-metallicity gas (Z $<$ 10$^{-4}$ Z$\mbox{$_{\odot}$}$), but the conclusions of this work are limited by the fact that the simulations are initialized at an extremely late phase in the evolution of the prestellar core. The numerical simulations by \citet{2001MNRAS.328..969B}, which use cosmological initial conditions, show fragmentation in gas with Z = 10$^{-3}$ Z$\mbox{$_{\odot}$}$, but a mass resolution of 100 M$\mbox{$_{\odot}$}$ prevents this study from saying anything conclusive about the formation of sub-stellar mass objects. In this paper, we present the results of three-dimensional hydrodynamic simulations of metal-enriched star-formation. These simulations are similar in nature to those of \citet{2001MNRAS.328..969B}, but with vastly improved numerical methods and updated physics. We describe the setup of our simulations in \S\ref{sec:setup}, with the results in \S\ref{sec:results} and a discussion of the consequences of this work in \S\ref{sec:discussion}. \section{Simulation Setup} \label{sec:setup} We perform a series of four simulations, with constant metallicities, Z = 0 (metal-free), 10$^{-4}$ Z$\mbox{$_{\odot}$}$, 10$^{-3}$ Z$\mbox{$_{\odot}$}$, and 10$^{-2}$ Z$\mbox{$_{\odot}$}$, using the Eulerian adaptive mesh refinement hydrodynamics/N-body code, Enzo \citep{1997WSAMRGMBryan,2004CWAMROShea}. The metallicity is held constant throughout each simulation in order to isolate the role of heavy element concentration in altering the dynamics of collapse compared to the identical metal-free case. In reality, metals will be injected over time into star forming gas by Pop III supernova blast waves, and the mixing of those metals with the gas will not be completely uniform. Here we focus on an idealized approximation in order to capture the essential physics of collapse and fragmentation. 
Each simulation begins at z = 99, in a cube, 300 h$^{-1}$ kpc comoving per side, in a $\Lambda$CDM universe, with the following cosmological parameters: $\Omega_{M}$ = 0.3, $\Omega_{\Lambda}$ = 0.7, $\Omega_{B}$ = 0.04, and Hubble constant, h = 0.7, in units of 100 km s$^{-1}$ Mpc$^{-1}$. We initialize all the simulations identically, with a power spectrum of density fluctuations given by \citet{1999ApJ...511....5E}, with $\sigma_{8}$ = 0.9 and n = 1. The computation box consists of a top grid, with 128$^{3}$ cells, and three static subgrids, refining by a factor of 2 each. This gives the central refined region, which is 1/64 the total computational volume, an effective top grid resolution of 1024$^{3}$ cells. The grid is centered on the location of a $\sim$ 5 $\times$ 10$^{5}$ M$\mbox{$_{\odot}$}$ dark matter halo that is observed to form at z $\sim$ 18 in a prior dark-matter-only simulation, as is done similarly in \citet{2002Sci...295...93A, 2005ApJ...628L...5O}. Refinement occurs during the simulations whenever the gas, or dark matter, density is greater than the mean density by a factor of 4, or 8, respectively. We also require that the local Jeans length be resolved by at least 16 grid cells at all times in order to avoid artificial fragmentation as prescribed by \citet{1997ApJ...489L.179T}. To include the radiative cooling processes from the heavy elements, we use the method described in Smith, Sigurdsson, \& Abel (2007), in preparation. The nonequilibrium abundances and cooling rates of H, H$^{+}$, H$^{-}$, He, He$^{+}$, He$^{++}$, H$_{2}$, H$_{2}^{+}$, and e$^{-}$ are calculated internally, as in \citet{2002Sci...295...93A,1997NewA....2..209A}. Meanwhile, the metal cooling rates are interpolated from large grids of values, precomputed with the photoionization software, CLOUDY \citep{1998PASP..110..761F}. We ignore the cooling from dust and focus only on the contribution of gas-phase metals in the optically-thin limit. 
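The effective-resolution bookkeeping of the nested-grid setup above reduces to simple arithmetic; the following Python lines (our illustration, not Enzo code) make it explicit:

```python
# A 128^3 top grid plus three static subgrids, each refining by a factor of 2.
top_grid = 128
refinement_factor = 2
static_subgrids = 3

# Effective top-grid resolution of the central refined region:
effective = top_grid * refinement_factor ** static_subgrids
print(effective)   # -> 1024

# A region covering 1/64 of the computational volume spans 1/4 of the box per side:
side_fraction = (1.0 / 64.0) ** (1.0 / 3.0)
```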
Unlike other studies of the formation of the first metal-enriched structures, we do not assume the presence of an ionizing UV background. In our model, the single Pop III star associated with the dark matter halo in which our stars form has already died in a supernova. We also assume any other Pop III stars are too distant to affect the local star-forming region and that QSOs have yet to form. We use the \texttt{coronal equilibrium} command when constructing the cooling data in CLOUDY to simulate a gas where all ionization is collisional. The metal cooling data was created using the Linux cluster, Lion-xo, run by the High Performance Computing Group at The Pennsylvania State University. As a consequence of our choice to ignore any external radiation, we do not observe the fine-structure emission of [C \textsc{ii}] (157.74 $\mu$m) that was reported by \citet{2006ApJ...643...26S} to be important. Instead, cooling from C comes in the form of the fine-structure lines of [C \textsc{i}] (369.7 $\mu$m, 609.2 $\mu$m). The cooling from [C \textsc{i}] in our study dominates in the same range of densities and temperatures as the cooling from [C \textsc{ii}] in \citet{2006ApJ...643...26S}. We observe the contributions of the other coolants studied by \citet{2006ApJ...643...26S}, [O \textsc{i}], [Si \textsc{ii}], and [Fe \textsc{ii}], to be in agreement with their work. In addition, we find that emission from [S \textsc{i}] (25.19 $\mu$m) dominates the cooling from metals at n $\sim$ 10$^{7}$ cm$^{-3}$ and T $\sim$ 1--3 $\times$ 10$^{3}$ K. The absence of UV radiation in our simulations also allows H$_{2}$ to form, differentiating this study from \citet{2001MNRAS.328..969B}. This allows for a more direct comparison between the metal-free and metal-enriched cases.
The simulations are run until one or more dense cores form at the center of the dark matter halo and a maximum refinement level of 28 is reached for the first time, giving us a dynamic range of greater than 10$^{10}$. Only the simulation with Z = 10$^{-2}$ Z$\mbox{$_{\odot}$}$ reached 28 levels of refinement. The three other simulations were stopped after reaching 27 refinement levels, since their central densities were already higher than the simulation with Z = 10$^{-2}$ Z$\mbox{$_{\odot}$}$. Table 1 summarizes the final state of each simulation, where z$_{col}$ is the collapse redshift, l$_{max}$ is the highest level of refinement, $n_{max}$ is the maximum gas density within the box, and $\Delta$t$_{col}$ is the difference in collapse time relative to the metal-free simulation. \section{Results} \label{sec:results} As can be seen in Table 1, the runs with higher metallicities reach the runaway collapse phase faster. The relationship between metallicity relative to solar and $\Delta$t$_{col}$ is well fit by a power law with index n $\simeq$ 0.22. Gas clouds with more metals are able to radiate away their thermal energy more quickly, and thus collapse faster. An inverse relation between metallicity and the number of grids and grid-cells exists because the low-density, background gas evolves at roughly the same rate in all simulations, yet has more time, in the runs with lower metallicities, with which to collapse to higher density, requiring additional refinement. Our simulations, shown in Figure 1, display a qualitative transition in behavior between metallicities of 10$^{-4}$ Z$\mbox{$_{\odot}$}$ and 10$^{-3}$ Z$\mbox{$_{\odot}$}$. In the runs with the highest metallicities (Figure 1C and 1D), the central core is extremely asymmetric, and multiple density maxima are clearly visible. All four runs display similar large-scale density profiles (Figure 2A).
Radiative cooling from H$_{2}$ becomes extremely inefficient below T $\sim$ 200 K, creating the effective temperature floor, visible in Figure 2B for the metal-free case \citep{2002Sci...295...93A,2002ApJ...564...23B}. At n $\simeq$ 10$^{4}$ cm$^{-3}$, the rotational levels of H$_{2}$ are populated according to LTE, reducing the cooling efficiency and causing the temperature to increase \citep{2002Sci...295...93A,2002ApJ...564...23B}. In the isothermal collapse model of \citet{1977ApJ...214..488S}, the accretion rate is proportional to the cube of the sound speed. The increase in temperature leads to an increase in the accretion rate, causing the density, and thus, the enclosed mass (Figure 2C), to be slightly higher inside the central $\sim$ 0.1 pc in the metal-free case. A similar situation occurs at smaller radii for the Z = 10$^{-4}$ Z$\mbox{$_{\odot}$}$ and, later, the 10$^{-3}$ Z$\mbox{$_{\odot}$}$ cases, as the metal cooling is overwhelmed by adiabatic compression heating and the temperature begins to rise with density. The presence of metals at the level of 10$^{-4}$ Z$\mbox{$_{\odot}$}$ enhances the cooling enough to lower the gas temperature to $\sim$ 75 K. Metallicities greater than 10$^{-3}$ Z$\mbox{$_{\odot}$}$ provide sufficient cooling to bring the gas down to the temperature of the cosmic microwave background, where T$_{CMB}$ $\simeq$ 2.7 K (1 + z). The gas temperatures are in general agreement with the calculations of \citet{2005ApJ...626..627O} that include a CMB spectrum at z = 20. Fragmentation requires that the cooling time be less than the dynamical time. Figure 2D shows that this criterion is essentially never met in the zero metallicity case, and only marginally in the Z = 10$^{-4}$ Z$\mbox{$_{\odot}$}$ case. However, the fragmentation criterion is more than satisfied in the Z = 10$^{-3}$ Z$\mbox{$_{\odot}$}$ and 10$^{-2}$ Z$\mbox{$_{\odot}$}$ cases over a wide mass-range.
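Since T$_{CMB}$ $\simeq$ 2.7 K (1 + z) sets a hard floor on radiative cooling, the floor at the collapse epoch is easy to tabulate. In the sketch below (ours, not values from Table 1), the redshifts are illustrative of the z $\sim$ 16--20 epoch discussed here:

```python
def t_cmb(z):
    """CMB temperature [K] at redshift z: T_CMB ~ 2.7 K * (1 + z)."""
    return 2.7 * (1.0 + z)

# Temperature floors at illustrative collapse redshifts:
for z in (20, 18, 16):
    print(z, round(t_cmb(z), 1))   # floors of 56.7, 51.3, and 45.9 K
```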
In order to locate fragments within our simulations, we employ an algorithm, based on \citet{1994ApJ...428..693W}, that works by identifying isolated density contours. Before we search for clumps, we smooth the density field by assigning each grid-cell the mass-weighted mean density of the group of cells including itself and its neighbors within one cell-width. This serves to eliminate small density perturbations that would be misidentified as clumps by the code. In order to directly compare the fragmentation from each simulation, we limit the search for clumps to the 1 M$\mbox{$_{\odot}$}$ of gas surrounding the cell with the highest density. On larger scales, all of the runs display a filamentary structure that is qualitatively similar. No other region in any of the simulation boxes has collapsed to densities comparable to those found within the region where the clump search is performed. The results are shown in Figure 3. A single clump exists in the metal-free and 10$^{-4}$ Z$\mbox{$_{\odot}$}$ simulations, containing 99.7\% of the total mass within the region of interest. In the simulation with Z = 10$^{-3}$ Z$\mbox{$_{\odot}$}$, 91\% of the mass is shared between two clumps of 0.52 M$\mbox{$_{\odot}$}$ and 0.39 M$\mbox{$_{\odot}$}$. In the same simulation, we also find two smaller clumps of 0.06 M$\mbox{$_{\odot}$}$ and 0.02 M$\mbox{$_{\odot}$}$. Finally, in the Z = 10$^{-2}$ Z$\mbox{$_{\odot}$}$ simulation, we see two clumps of 0.79 M$\mbox{$_{\odot}$}$ and 0.21 M$\mbox{$_{\odot}$}$. \section{Discussion} \label{sec:discussion} We have shown, through three-dimensional hydrodynamic simulations, that fragmentation occurs in collapsing gas with metallicities, Z $\ge$ 10$^{-3}$ Z$\mbox{$_{\odot}$}$. Our results indicate that star-formation occurs in exactly the same manner at metallicity, Z = 10$^{-4}$ Z$\mbox{$_{\odot}$}$, as it does at zero metallicity.
The similarities between the simulations with metallicities, Z = 10$^{-3}$ Z$\mbox{$_{\odot}$}$ and 10$^{-2}$ Z$\mbox{$_{\odot}$}$, suggest that the transition to low-mass star-formation is complete by 10$^{-3}$ Z$\mbox{$_{\odot}$}$, implying that the entire transition occurs over only one order of magnitude in metal abundance. More simulations, bracketing the metallicity range, 10$^{-4}$ to 10$^{-3}$ Z$\mbox{$_{\odot}$}$, will test how abrupt the transition truly is. We will also explore the effect of non-solar abundances on the low metallicity IMF. It has recently been argued that dust cooling at high densities (n $\ge$ 10$^{13}$ cm$^{-3}$) can induce fragmentation for metallicities as low as 10$^{-6}$ Z$\mbox{$_{\odot}$}$ \citep{2006MNRAS.369.1437S}. In light of the work by \citet{2007astro.ph..1395F}, who note the absence of stars with D$_{trans}$ $<$ -3.5, where D$_{trans}$ is a measure of the combined logarithmic abundance of C and O, it seems unlikely that Z$_{cr}$ is this low. While the fragmentation mode discussed in \citet{2006MNRAS.369.1437S}, and also \citet{2005ApJ...626..627O}, may truly exist, it is possible that metal yields from Pop III supernovae overshoot this metallicity, for realistic mixing scenarios, leaving almost no star-forming regions with such a low concentration of heavy elements. Similar to our results, \citet{2005ApJ...626..627O} note that only high-mass fragments are produced when Z = 10$^{-4}$ Z$\mbox{$_{\odot}$}$. If Pop III supernovae are able to immediately enrich the local universe to Z = 10$^{-4}$ Z$\mbox{$_{\odot}$}$, the high-density dust cooling fragmentation mode would be skipped altogether, and the high-mass stars that formed via the mode observed at 10$^{-4}$ Z$\mbox{$_{\odot}$}$ would leave no record in the search for low-metallicity stars in the local universe. We have limited the search for fragments to the dense 1 M$\mbox{$_{\odot}$}$ core at the center of each simulation.
Within this region, it is unlikely that any more fragments will form in any of the simulations. In all of the cases presented, the cooling has begun to be overwhelmed by compression heating such that the central temperature is now increasing with increasing density, which was indicated by \citet{2005MNRAS.359..211L} to be the end of hierarchical fragmentation. Fragmentation may continue in the surrounding lower density gas in the cases of Z = 10$^{-3}$ Z$\mbox{$_{\odot}$}$ and 10$^{-2}$ Z$\mbox{$_{\odot}$}$. The final stellar masses of these objects will also be affected by the interaction and accretion that will occur in later stages of evolution. In the two lowest metallicity cases, the gas immediately surrounding the central core evolves slowly enough that it will not have sufficient time to reach high densities before the UV radiation from the central, massive star dissociates all of the H$_{2}$. As was shown by \citet{2001MNRAS.328..969B}, clouds with metallicities, Z $\le$ 10$^{-4}$ Z$\mbox{$_{\odot}$}$, are unable to collapse without the aid of H$_{2}$ cooling. In the two simulations in which significant fragmentation is observed, Z = 10$^{-3}$ and 10$^{-2}$ Z$\mbox{$_{\odot}$}$, the gas is able to cool rapidly to the temperature of the CMB. \citet{2005ApJ...629..615W} predict that the rate of Pop III supernovae peaks at a redshift, z $\sim$ 20, and then drops off sharply, implying that metal production from Pop III stars is effectively finished at this point. In this epoch, the characteristic mass-scale for metal-enriched star formation will be regulated by the CMB, as is predicted in \citet{2003Natur.425..812B}. Thus, the first Pop II stars will be considerably more massive, on average, than stars observed today, as was suggested by \citet{1998MNRAS.301..569L}.
Observations of low-mass prestellar cores in the local universe reveal them to have temperatures of about 8.5 K \citep{1999ARA&A..37..311E}, implying that the IMF may not have become completely ``normal'' until z $<$ 3, when the CMB temperature fell below this value. \acknowledgments We thank Tom Abel, Greg Bryan, Mike Norman, Brian O'Shea, and Matt Turk for useful discussions. BDS also thanks Michael Kuhlen for providing an update to some useful analysis tools. We are also very grateful for insightful comments from an anonymous referee. This work was made possible by Hubble Space Telescope Theory Grant HST-AR-10978.01, and an allocation from the San Diego Supercomputing Center.
\section{Introduction} Estimating 3D geometry from data~\cite{HZ-2003,schoenberger2016sfm,snavely2008modeling} in one or more images is a recurring problem in subjects like computer vision, robotics, and photogrammetry. The exact formulations of these problems vary considerably, depending on factors like the model of image formation that is being considered or what data are available. A common element in many of these geometric pose estimation problems is the presence of polynomial constraints, depending on both the data and the unknown quantities to be estimated. Considerable emphasis has been placed on \deff{minimal problems} which are usually well-posed in the sense that exact solutions exist for generic data. A thorough understanding of these problems is not only theoretically appealing, but has practical consequences. Minimal problems and their solvers~\cite{AgarwalLST17,Barath-CVPR-2018,Barath-TIP-2018,Barath-CVPR-2017,Byrod-ECCV-2008,DBLP:conf/eccv/CamposecoSP16,PLMP,PL1P,Elqursh-CVPR-2011,Ricardo,Hartley-PAMI-2012,Kileel-MPCTV-2016,DBLP:conf/eccv/KneipSP12,Kuang-ICCV-2013,kuang-astrom-2espc2-13,kukelova2008automatic,larsson2017efficient,Larsson-Saturated-ICCV-2017,larsson2017making,Miraldo-ECCV-2018,mirzaei2011optimal,Nister,DBLP:conf/cvpr/RamalingamS08,SalaunMM-ECCV-2016,saurer2015minimal,Stewenius-ISPRS-2006,ventura2015efficient} play an outsized role in RANSAC-based estimation, introduced to the computer vision community by Fischler and Bolles in 1981~\cite{RANSAC} and later developed to a very efficient and robust estimation method~\cite{Raguram-USAC-PAMI-2013}. Using a robust minimal solver as a subroutine within the RANSAC loop allows for fewer iterations when compared to other techniques (e.g.,~linear algorithms~\cite{8pt}) that require more measurements and often ignore the underlying polynomial constraints.
However, the algebraic complexity of a minimal problem as measured by its \deff{degree} has been observed as a limiting factor for minimal solvers, despite their many successes in practice. Towards controlling this algebraic complexity, the question of how to detect and exploit symmetries when solving polynomial systems of equations has attracted interest in recent literature in computer vision~\cite{Ask12,LarssonSymmetries}. The techniques developed there can be understood in the context of linear representation theory, and can be traced back to pioneering work in computer algebra~\cite{Corless,gatermann90}. As noted in~\cite[Sec 4]{LarssonSymmetries}, these techniques may be difficult to apply without some problem-specific knowledge. Moreover, many problems have \emph{nonlinear} symmetries. A famous example is the problem of 5-point relative pose estimation, or simply the 5-point problem. The set of solutions to this problem is invariant under a nonlinear symmetry known as the twisted pair. \begin{example} \label{ex:5pp} Two cameras view some number of points in the world (i.e.,~3-dimensional space). Each camera is modeled via perspective projection. When the cameras are calibrated~\cite[Ch.~6]{HZ-2003}, we may assume that the two camera frames differ by a rotation $\mathbf{R}$ and a translation $\mathbf{t}.$ \begin{figure}[H] \begin{center} \def\columnwidth{0.8\columnwidth} \import{./Figs/5pt/}{5pt.pdf_tex} \label{fig:5pt} \end{center} \end{figure} In the five-point problem, we are given the $2$D data of $5$ correspondences between image points $\mathbf{x}_1 \leftrightarrow \mathbf{y}_1, \ldots , \mathbf{x}_5 \leftrightarrow \mathbf{y}_5$ which are assumed to be images of $5$ world points.
The image points are given in \deff{normalized image coordinates}, meaning that each is a $3\times 1$ vector whose last coordinate equals $1.$ The task is to reconstruct the relative orientation $\cam{\mathbf{R}}{\mathbf{t}} \in \SE_\RR (3) $ between the two views and each of the five world points as measured by their depths with respect to the first and second camera frames. Writing $\alpha_1 , \ldots , \alpha_5$ for the depths with respect to the first camera and $\beta_1,\ldots , \beta_5$ for the depths with respect to the second camera, the five-point problem becomes a system of polynomial equations and inequations: \begin{equation} \label{eq:5p_intro} \begin{split} \mathbf{R}^\top\mathbf{R} = \mathtt{I}, \quad \det \mathbf{R} = 1, \\ \beta_i\mathbf{y}_i = \mathbf{R}\alpha_i\mathbf{x}_i + \mathbf{t}, \;\; \alpha_i, \beta_i \neq 0, \quad \forall \, i = 1,\dots,5. \end{split} \end{equation} An inherent ambiguity of the five-point problem is that the unknowns $\mathbf{t}, \alpha_1, \ldots , \alpha_5, \beta_1, \ldots , \beta_5$ can only be recovered up to a common scale factor. If we treat these unknowns as homogeneous coordinates on a $12$-dimensional projective space, then for generic data $\mathbf{x}_1,\ldots , \mathbf{x}_5, \mathbf{y}_1, \ldots , \mathbf{y}_5,$ there are at most finitely many solutions in $\mathbf{R}, \mathbf{t}, \alpha_1, \ldots, \alpha_5, \beta_1,\ldots , \beta_5$ to the system~\eqref{eq:5p_intro}. 
Moreover, if we count solutions over the complex numbers, there are exactly $20$ solutions for generic data in $Z = \left( \mathbb{C}^2 \times \{1 \}\right)^5 \times \left( \mathbb{C}^2 \times \{ 1 \} \right)^5.$ The solutions to~\eqref{eq:5p_intro} are naturally identified with the fibers of a branched cover $f:X \to Z$ (see Definition~\ref{def:branchedcover}), where $X$ is the \deff{incidence correspondence} \[ X = \{ \left(\mathbf{R}, (\mathbf{t}, \alpha_1, \ldots , \alpha_5, \beta_1, \ldots , \beta_5), (\mathbf{x}_1,\ldots , \mathbf{x}_5, \mathbf{y}_1 , \ldots , \mathbf{y}_5) \right) \in \SO_\CC (3) \times \mathbb{P}_\mathbb{C}^{12} \times Z \mid \eqref{eq:5p_intro} \text{ holds } \} . \] For most solutions to~\eqref{eq:5p_intro}, the associated \deff{twisted pair} solution is obtained by rotation of the second camera frame $180^\circ$ about the baseline connecting the first and second camera centers. \begin{figure}[H] \begin{center} \def\columnwidth{\columnwidth} \import{./Figs/5pt/}{twisted_pair.pdf_tex} \label{fig:5pt_twisted_pair} \end{center} \end{figure} The twisted pair may be viewed as a rational map $\rat{\Psi}{X}{X},$ given coordinate-wise by \begin{equation} \label{eq:twisted-pair} \begin{split} \Psi (\mathbf{R}) &= \left( 2 \displaystyle\frac{\mathbf{t} \, \mathbf{t}^\top}{\mathbf{t}^\top \mathbf{t}} \, - \mathbf{I} \right) \, \mathbf{R} \\ \Psi (\mathbf{t}) &= \mathbf{t} \\ \Psi (\alpha_i ) &= \displaystyle\frac{-\alpha_i \norm{\mathbf{t}}^2 }{\norm{\mathbf{t}}^2 + 2 \, \langle \mathbf{R}^\top \mathbf{t} , \alpha_i \mathbf{x}_i \rangle } = \displaystyle\frac{-\alpha_i\norm{\mathbf{t}}^2}{\norm{\beta_i\mathbf{y}_i}^2 - \norm{\alpha_i\mathbf{x}_i}^2} \\ \Psi (\beta_i ) &= \displaystyle\frac{\beta_i \norm{\mathbf{t}}^2 }{\norm{\mathbf{t}}^2 + 2 \, \langle \mathbf{R}^\top \mathbf{t} , \alpha_i \mathbf{x}_i \rangle } = \displaystyle\frac{\beta_i\norm{\mathbf{t}}^2}{\norm{\beta_i\mathbf{y}_i}^2 - \norm{\alpha_i\mathbf{x}_i}^2}\\ \Psi
(\mathbf{x}_1, \ldots, \mathbf{x}_5, \mathbf{y}_1, \ldots , \mathbf{y}_5) &= (\mathbf{x}_1, \ldots, \mathbf{x}_5, \mathbf{y}_1, \ldots , \mathbf{y}_5) . \end{split} \end{equation} Here, we use the notation $\langle , \rangle$ and $\norm{\cdot }^2$ for the complex quadratic forms $\langle \boldsymbol{a}, \boldsymbol{b} \rangle= a_1b_1 + a_2 b_2 + a_3 b_3,$ $\norm{\boldsymbol{a}}^2 = \langle \boldsymbol{a}, \boldsymbol{a} \rangle $ which restrict to the usual norm and inner product on $\mathbb{R}^3.$ We note that $\Psi$ is undefined whenever $\boldsymbol{t}\in \mathbb{P}^2$ is an \deff{isotropic vector} satisfying $\norm{\boldsymbol{t} }^2 =0,$ and whenever $\norm{\alpha_i\mathbf{x}_i}^2 = \norm{\beta_i\mathbf{y}_i}^2$ for some $i = 1,\dots,5$. The second condition can be understood geometrically: if the camera centers and the world point $\mathbf{X} = \alpha \mathbf{x}$ form an isosceles triangle with base $\norm{\mathbf{t}}$, then, after rotating the second camera, the rays which join camera centers to the respective image points will become parallel. One can check (e.g.~\cite[p.~20]{Maybank}) that the set of solutions to equations~\eqref{eq:5p_intro} is left invariant under application of $\Psi.$ In other words, we have an equality of mappings $f \circ \Psi = f$ wherever $\Psi$ is defined. The map $\Psi $ is a deck transformation (see Definition~\ref{def:deck}) of the branched cover $f.$ \end{example} The twisted pair of Example~\ref{ex:5pp} is a well-known construction. It is not easy to determine that such a symmetry exists just from staring at Equations~\eqref{eq:5p_intro}. However, the existence of such a symmetry can be easily decided after computing the Galois/monodromy group of $f,$ as we make explicit in Proposition~\ref{prop:deck-centralizer}.
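The invariance $f \circ \Psi = f$ can also be verified numerically. The NumPy sketch below (our illustration on generic synthetic data, not code from the paper) builds a random solved instance of the five-point equations, applies the twisted-pair formulas above, and checks that the transformed solution still satisfies the same equations with the same image data and the same $\mathbf{t}$:

```python
import numpy as np

rng = np.random.default_rng(1)

def rotation_about(axis, angle):
    """Rodrigues formula: rotation by `angle` about `axis`."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# A generic solved instance: beta_i y_i = R (alpha_i x_i) + t.
R = rotation_about(rng.normal(size=3), 0.7)
t = rng.normal(size=3)
X = rng.uniform(1.0, 3.0, size=(5, 3))         # world points, camera-1 frame
alpha, x = X[:, 2], X / X[:, [2]]              # depths and normalized coords
Y = (R @ X.T).T + t
beta, y = Y[:, 2], Y / Y[:, [2]]

# Twisted pair: rotate the second camera 180 degrees about the baseline t.
nt2 = t @ t
R_tw = (2.0 * np.outer(t, t) / nt2 - np.eye(3)) @ R
D = (Y * Y).sum(axis=1) - (X * X).sum(axis=1)  # ||beta y||^2 - ||alpha x||^2
alpha_tw = -alpha * nt2 / D
beta_tw = beta * nt2 / D

# The twisted solution satisfies the same system with the same data and t:
residual = np.abs(beta_tw[:, None] * y
                  - (alpha_tw[:, None] * x) @ R_tw.T - t).max()
```

Note that $\Psi(\mathbf{R})$ remains a rotation, since the baseline reflection $2\mathbf{t}\mathbf{t}^\top/\norm{\mathbf{t}}^2 - \mathbf{I}$ is orthogonal with determinant $+1$.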
The state-of-the-art method~\cite{Nister} for solving the five-point problem is based on \deff{decomposing} the branched cover $f$ in terms of the \deff{essential matrix} (see Examples~\ref{ex:resolve-twisted}, \ref{ex:deck-resolve-twisted} and Section~\ref{subsec:5pp}). In general, decomposability can be detected after computing the Galois/monodromy group, by Proposition~\ref{prop:decomposable_iff_imprimitive}. Working within the framework of branched covers and Galois/monodromy groups allows us to probe the structure of minimal problems in ways that the previously mentioned works cannot. As is well-known (see Proposition~\ref{prop:galoismonodromy}), this group can be understood both algebraically (``Galois'') and topologically (``monodromy''). The algebraic point of view was pursued by Hartley, Nist\'{e}r, and Stew\'{e}nius~\cite{NisterHartley}, who computed Galois groups symbolically to show that certain formulations of minimal problems were degree-optimal. In our paper, the topological point of view is somewhat more relevant, since we can compute the monodromy action for problems of potentially large degree using numerical homotopy continuation methods, as implemented in several software packages~\cite{Bertini,HCJL,MonodromySolver}. Computing the Galois/monodromy group allows us to decide decomposability or the existence of deck transformations by reduction to standard algorithms in computational group theory~\cite{holt}. These algorithms are implemented in computer algebra systems such as GAP~\cite{GAP}, which was an essential ingredient in discovering our main results. We find frequently that many minimal problems do not decompose---in this case, the Galois/monodromy group tells us that our formulation is optimal in a precise sense. However, we find in several cases that minimal problems previously considered in the literature do decompose. 
We describe this in the context of problems where this phenomenon is already well-understood (such as the five-point problem), and also in several cases where some decomposition or symmetry was not previously noticed. Many of our results are computational, and thus may fall short of the conventional standard for proof. We use the term ``Result'' instead of ``Theorem'' in these cases. \begin{remark} \label{remark:symbolicGalois} Our Galois groups are geometric, over the field $\mathbb{C},$ rather than arithmetic, over the field $\mathbb{Q} .$ In general, the geometric Galois group is a normal subgroup of the arithmetic Galois group. The two can be different---for instance, the arithmetic Galois group for Example~\ref{ex:x^3} is $S_3.$ However, we expect these two groups to be the same for most problems considered here. Several problems (such as the absolute pose problems appearing in Section~\ref{sec:abspose}) are well-within the range of symbolic Galois group computation. For problems of higher degree appearing in Section~\ref{sec:rel-pose}, an alternative route for recognizing the frequently occurring symmetric groups would use heuristics based on the Chebotarev density theorem as in~\cite{del2019classification} after computing certain lexicographic Gr\"{o}bner bases. We do not pursue this route. \end{remark} For readers with a background in computer vision, we suggest starting with the familiar examples in Sections~\ref{sec:abspose} and~\ref{sec:rel-pose}, referring back to the theory and examples in Section~\ref{sec:background} as needed. The mathematics needed to fully understand our paper is presented in Section~\ref{subsec:branch}. We include proofs of several standard facts in hope that our paper is accessible to a broad audience. We give a brief overview of numerical methods for computing Galois/monodromy groups in Section~\ref{subsec:num_methods}.
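The topological computation can be illustrated on a toy branched cover. The sketch below (ours, not from the paper) takes $f(x) = x^3$ (cf.~Example~\ref{ex:x^3}), whose geometric monodromy group is the cyclic group $C_3$ even though the arithmetic Galois group of $x^3 - z$ over $\mathbb{Q}(z)$ is $S_3$; it tracks the fiber over $z = 1$ around a loop encircling the branch point $z = 0$ by nearest-point continuation:

```python
import numpy as np

omega = np.exp(2j * np.pi / 3)
fiber0 = np.array([1.0, omega, omega**2])      # cube roots of z0 = 1

x = fiber0.copy()
for s in np.linspace(0.0, 1.0, 2001)[1:]:
    z = np.exp(2j * np.pi * s)                 # loop around the branch point 0
    # all cube roots of the current z (the set is branch-independent) ...
    roots = z ** (1.0 / 3.0) * np.array([1.0, omega, omega**2])
    # ... and continue each tracked point to the nearest one
    x = np.array([roots[np.argmin(np.abs(roots - xi))] for xi in x])

perm = [int(np.argmin(np.abs(fiber0 - xi))) for xi in x]
print(perm)   # a 3-cycle: the monodromy of x^3 = z around 0 generates C_3
```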
In Section~\ref{sec:abspose}, we investigate the Galois/monodromy groups of absolute pose problems involving points and lines, where the task is to compute cameras from correspondence data between the world and an image. In particular, we describe newly discovered symmetries for problems involving a mixture of points and lines. Section~\ref{sec:rel-pose} considers relative pose problems, where the task is to compute the transformation between camera frames given correspondence data between images. We describe decompositions for the five-point problem in Section~\ref{subsec:5pp}, the four-point calibrated homography problem for two views in Section~\ref{subsec:2viewhomography}, and a novel minimal problem involving two calibrated homographies between three images in Section~\ref{subsec:3viewhomography}. We give a conclusion and outlook in Section~\ref{sec:outlook}. \section{Background} \label{sec:background} \subsection{Branched covers and monodromy groups} \label{subsec:branch} \begin{defn} \label{def:branchedcover} A \deff{branched cover} is a dominant, rational map $\ratnoname{X}{Z},$ where $X$ and $Z$ are irreducible algebraic varieties over $\mathbb{C}$ of the same dimension. \end{defn} We emphasize several consequences of this definition. Most importantly, it follows that for generic (and hence \emph{almost all}) data $z\in Z,$ the fiber over $z,$ denoted $X_z ,$ is a nonempty, finite set. Second is the assumption of irreducibility, which implies that the monodromy group acts transitively. In principle, we can always reduce to the case of an irreducible variety by considering the irreducible components of an arbitrary variety. The many examples we consider show that irreducibility is a natural assumption.
Finally, although many of the branched covers we consider are actually regular maps, the maps appearing as deck transformations or in a factorization need not be defined on the same domain as the branched cover, making it more natural to work with rational maps. We say that $X$ and $Z$ are the \deff{total space} and \deff{base} of the branched cover, respectively. The reader may safely assume that all varieties are quasiprojective. Pulling back rational functions from $Z$ to $X$ lets us identify $\mathbb{C} (Z)$ with a subfield of $\mathbb{C} (X)$. Since $\mathbb{C} (X)$ and $\mathbb{C} (Z)$ have the same transcendence degree over $\mathbb{C} ,$ the field extension $\mathbb{C}(X) / \mathbb{C}(Z)$ is finite. The \deff{degree} of the map $\ratnoname{X}{Z}$ may be defined as the degree of this field extension. We write $\deg (X/Z)$ for this quantity, since the map $\ratnoname{X}{Z}$ is usually clear from context. We say that a nonempty Zariski-open $U\subset Z$ is a \deff{regular locus} for $\ratnoname{X}{Z}$ if $U \cap Z_{\sing } = \emptyset$ and if for all $z\in U$ the cardinality of the fiber $X_z := f^{-1}(z)$ is equal to the degree of the map. The existence of such a $U$ follows from basic results in algebraic geometry~\cite[cf.~pp.~142]{Shaf}. We now recall the monodromy action on the fibers of a degree-$d$ branched cover $\rat{f}{X}{Z}.$ We omit many details which may be found in several excellent introductory references~\cite{Hatcher,Mir95,Zol06}. Fix a regular locus $U$ and a basepoint $z\in U,$ and write $X_z = \{x_1,\dots,x_d\}.$ A \deff{loop} based at $z$ is a continuous map $\gamma : [0,1] \to U$ which satisfies $\gamma (0) = \gamma (1) = z.$ For each $x_i,$ there exists a unique \deff{lift} $\widetilde{\gamma}_i : [0, 1] \to f^{-1} (U)$ satisfying $\gamma = f\circ \widetilde{\gamma}_i $ and $\widetilde{\gamma}_i (0) = x_i.$ This fact from topology is known as the \deff{unique path-lifting property} (see e.g.~\cite[Proposition 1.30]{Hatcher}).
The lifts based at each of the points $x_1,\ldots , x_d$ determine a permutation of the fiber, $\sigma_{\gamma}: X_z \to X_z$, which may be written in two-line notation as \begin{equation} \label{eq:permutation-lift} \sigma_{\gamma } = \left(\begin{array}{ccccc} x_1 & x_2 & x_3 & \cdots & x_d \\ \widetilde{\gamma}_1(1) & \widetilde{\gamma}_2(1) & \widetilde{\gamma}_3(1) & \cdots & \widetilde{\gamma}_d(1) \end{array}\right). \end{equation} \begin{remark} \label{remark:groupTheory} We use standard notation from group theory: $\sym (X)$ is the symmetric group of all permutations from a finite set $X$ to itself, $S_n, \, A_n$ denote the symmetric and alternating groups acting on letters $[n] = \{ 1, \ldots , n \}$ (hence $\sym ([n]) = S_n$), and $C_n$ denotes a cyclic group of order $n.$ At several points, we must distinguish between abstract groups and the way they act on sets. For instance, the usual action of $S_4$ on $[4]$ is not equivalent to the action of $S_4 \hookrightarrow S_6$ on the $6 = \binom{4}{2}$ unordered pairs in $[4]$. The latter group may also be described as $S_2 \wr S_3 \cap A_6.$ Here, $\wr$ denotes the \deff{wreath product}; the group $S_2 \wr S_3 $ may be realized as the subgroup of $S_6$ preserving the partition $[6] = [2] \cup \{3,4\} \cup \{5,6\}.$ In general, the wreath product $S_m \wr S_n$ is a semidirect product $\left(S_m\right)^n \rtimes S_n,$ and is usually equipped with an action on the Cartesian product $[n] \times [m].$ We refer to~\cite[Chapter 7]{Rotman} for several useful facts about the wreath product. The group $S_2 \wr S_3 \cap A_6$ will reappear in Section~\ref{subsec:2viewhomography} on homography estimation.
\end{remark} The permutation $\sigma_\gamma $ is independent of the homotopy class of $\gamma $ in $U,$ from which one obtains a homomorphism from the fundamental group $\pi_1 (U , z) $ into the symmetric group $\sym ( X_z)$: \begin{equation} \label{eq:monodromyRep} \begin{split} \rho : \pi_1 (U , z) &\to \sym (X_z)\\ [\gamma ] &\mapsto \sigma_{\gamma }. \end{split} \end{equation} The map $\rho $ in Equation~\eqref{eq:monodromyRep} is known as the \deff{monodromy representation.} The image of this homomorphism is a permutation group which acts transitively on the fiber $X_z.$ It is called the \deff{monodromy group,} and will be denoted by $\mathrm{Mon} (X/Z; U, z),$ or $\mathrm{Mon} (X/Z;z),$ or simply $\mathrm{Mon} (X/Z).$ The latter notation, and our preferred terminology of \deff{Galois/monodromy group}, is justified by Proposition~\ref{prop:galoismonodromy}. We write $\galclo{\mathbb{C} (X)} / \mathbb{C} (Z)$ for the Galois closure of $\mathbb{C}(X) / \mathbb{C}(Z)$ and $\Gal (X / Z)$ for its Galois group. \begin{prop} \label{prop:galoismonodromy} Let $\ratnoname{X}{Z}$ be a branched cover with regular locus $U,$ and fix a basepoint $z\in U.$ Then $\Gal (X/Z)$ and $\mathrm{Mon} (X/Z ; U, z)$ are isomorphic as permutation groups. In particular, $\mathrm{Mon} (X/Z; U, z)$ is independent of the choice of $(U,z).$ \end{prop} A proof of Proposition~\ref{prop:galoismonodromy} is given in~\cite{Harris}. In this proof, the Galois closure $\galclo{\mathbb{C} (X)} / \mathbb{C} (Z)$ is identified as an extension of $\mathbb{C} (X)$ obtained by adjoining certain germs of functions around points in $X_z.$ With this identification, it is then argued (using the Galois correspondence and analytic continuation) that the Galois and monodromy actions on $X_z$ coincide.
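The permutation actions discussed in Remark~\ref{remark:groupTheory} are small enough to check by brute force. The following sketch (in Python, with helper names of our own choosing, purely illustrative) verifies that the action of $S_4$ on the six unordered pairs in $[4]$ is faithful, that its image lies in $A_6,$ and that the image preserves the partition of the pairs into three complementary couples, so that it is contained in $S_2 \wr S_3 \cap A_6.$

```python
from itertools import combinations, permutations

# The six unordered pairs of {0,1,2,3}, indexed 0..5.
pairs = list(combinations(range(4), 2))

def induced(g):
    # One-line notation for the permutation of pair indices induced by g in S_4.
    return tuple(pairs.index(tuple(sorted((g[a], g[b])))) for a, b in pairs)

def sign(p):
    # Parity of a permutation via its inversion count.
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

image = {induced(g) for g in permutations(range(4))}
assert len(image) == 24                    # the action on pairs is faithful
assert all(sign(p) == 1 for p in image)    # the image lies in A_6

# Each pair and its complement form a block; the three blocks of size 2
# are preserved, so the image also lies in the wreath product S_2 wr S_3.
blocks = {frozenset({pairs.index(p),
                     pairs.index(tuple(sorted(set(range(4)) - set(p))))})
          for p in pairs}
assert all(frozenset(q[i] for i in b) in blocks for q in image for b in blocks)
print("S_4 acts on 6 pairs through a subgroup of S_2 wr S_3 intersected with A_6")
```

Since $|S_2 \wr S_3 \cap A_6| = 48/2 = 24$ matches the order of the image, these containments pin the group down exactly.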
\begin{example} \label{ex:x^3} Consider \[ X = \{ (x,\, z) \in \mathbb{C} \times \mathbb{C} \mid x^3 = z \}\] as a degree-$3$ branched cover over $Z=\mathbb{C} $ given by $(x,\, z) \mapsto z.$ A regular locus is the punctured complex line $U=\{ z \mid z\ne 0 \}.$ The monodromy group $\mathrm{Mon} (X/Z) \cong A_3 = C_3$ acts by cyclic permutation of $X_z=\{ \sqrt[3]{z}, \, \omega \sqrt[3]{z} , \, \omega^2 \sqrt[3]{z} \},$ where $\sqrt[3]{z}$ denotes a fixed cube root of $z$ and $\omega = \exp (2 \pi i / 3).$ Indeed, $\pi_1 (U , z)$ is generated by the loop $\gamma (t) = z \, e^{2 \pi i t},$ which encircles the branch point $z=0$ and induces the permutation $\sigma_\gamma$ defined by \[ \sigma_{\gamma } = \left(\begin{array}{ccc} \sqrt[3]{z} & \omega \sqrt[3]{z} & \omega^2 \sqrt[3]{z} \\ \omega \sqrt[3]{z} & \omega^2 \sqrt[3]{z} & \sqrt[3]{z} \end{array}\right). \] \end{example} \begin{defn} \label{def:bir} Two branched covers, $\ratnoname{X_1}{Z_1}$ and $\ratnoname{X_2}{Z_2}$, are \deff{birationally equivalent} if there exist birational maps $ \ratnoname{X_1}{X_2}$ and $\ratnoname{Z_1}{Z_2}$ such that the following diagram commutes: \begin{center} \begin{tikzcd} X_1 \arrow[d, dashed] \arrow[r, dashed] & X_2\arrow[d,dashed] \\ Z_1 \arrow[r, dashed ] & Z_2 . \end{tikzcd} \end{center} \end{defn} \begin{prop} \label{prop:bir} The Galois/monodromy group of a branched cover is a birational invariant. \end{prop} \begin{proof} This follows easily from Proposition~\ref{prop:galoismonodromy}, since a birational equivalence of branched covers induces an isomorphism of field extensions. A more topological proof is also possible. Let us write $\rat{\Psi}{X_1}{X_2},$ $\rat{\psi}{Z_1}{Z_2}$ for the maps appearing in Definition~\ref{def:bir}.
Then, for suitable regular loci $U_1 \subset Z_1, U_2 \subset Z_2,$ there is an isomorphism $\mathrm{Mon} (X_1 / Z_1 ; U_1, z) \cong \mathrm{Mon} (X_2 / Z_2 ; U_2 , \psi (z))$ which may be defined by identifying, for each $\gamma : [0,1] \to U_1$ based at $z,$ the lifts $\widetilde{\gamma}_1, \ldots , \widetilde{\gamma }_d$ of $\gamma $ in $X_1$ with the lifts $\Psi \circ \widetilde{\gamma}_1, \ldots , \Psi \circ \widetilde{\gamma }_d$ of $\psi \circ \gamma $ in $X_2.$ \end{proof} \begin{example} \label{ex:resolve-twisted} Consider the regular branched cover $\SO_\CC (3) \times \mathbb{P}^2 \to \mathcal{E}$ given by \begin{equation} \label{eq:twistedPairMap} (\mathbf{R}, \mathbf{t} ) \mapsto [\mathbf{t}]_\times \mathbf{R} , \end{equation} where for $\mathbf{t}=[t_1:t_2:t_3]$ we let \[ [\mathbf{t}]_\times = \left( \begin{smallmatrix} 0 & -t_3 & t_2\\ t_3 & 0 & -t_1\\ -t_2 & t_1 & 0 \end{smallmatrix} \right) \] denote a $3\times 3$ matrix that represents, up to scale, taking the cross product with $\mathbf{t}$, and $\mathcal{E}$ denotes the variety of \deff{essential matrices}: \begin{equation} \label{eq:Vess} \mathcal{E} = \{ \mathbf{E} \in \mathbb{P} (\mathbb{C}^{3\times 3}) \mid \displaystyle\det \mathbf{E} =0, \, \, \mathbf{E} \mathbf{E}^\top \mathbf{E} - \displaystyle\frac{1}{2} \, \tr (\mathbf{E} \mathbf{E}^\top) \mathbf{E} = 0 \}. \end{equation} A regular locus is given by $U\subset \mathcal{E}$ such that all $\mathbf{E} \in U$ have rank $2$ and the kernel of $\mathbf{E}$ is not spanned by an isotropic vector. A birationally equivalent branched cover was constructed in~\cite{2Hilbert,van2019functorial}, where the authors construct moduli spaces obtained by letting the \emph{absolute conic}~\cite{HZ-2003} degenerate to a double line.
Explicitly, the branched cover $X \to \mathcal{E}$ is given by \[ X = \{ \left( [a_0:a_1:a_2:a_3], [b_0:b_1:b_2:b_3] \right) \in \mathbb{P}^3 \times \mathbb{P}^3 \mid a_0 b_0 + a_1 b_1 + a_2 b_2 + a_3 b_3 = 0 \} \] \[ \left( [\mathbf{a}], [\mathbf{b}] \right) \mapsto \left(\begin{smallmatrix} {a}_{0}{b}_{0}-{a}_{1}{b}_{1}-{a}_{2}{b}_{2}+{a}_{3}{b}_{3}&{a}_{1}{b}_{0}+{a}_{0}{b}_{1}+{a}_{3}{b}_{2}+{a}_{2}{b}_{3}&{a}_{2}{b}_{0}-{a}_{3}{b}_{1}+{a}_{0}{b}_{2}-{a}_{1}{b}_{3}\\ {a}_{1}{b}_{0}+{a}_{0}{b}_{1}-{a}_{3}{b}_{2}-{a}_{2}{b}_{3}&-{a}_{0}{b}_{0}+{a}_{1}{b}_{1}-{a}_{2}{b}_{2}+{a}_{3}{b}_{3}&{a}_{3}{b}_{0}+{a}_{2}{b}_{1}+{a}_{1}{b}_{2}+{a}_{0}{b}_{3}\\ {a}_{2}{b}_{0}+{a}_{3}{b}_{1}+{a}_{0}{b}_{2}+{a}_{1}{b}_{3}&-{a}_{3}{b}_{0}+{a}_{2}{b}_{1}+{a}_{1}{b}_{2}-{a}_{0}{b}_{3}&-{a}_{0}{b}_{0}-{a}_{1}{b}_{1}+{a}_{2}{b}_{2}+{a}_{3}{b}_{3}\\ \end{smallmatrix}\right), \] and there exists a birational equivalence \begin{center} \begin{tikzcd} X \arrow[d] \arrow[r, dashed] & \SO_\CC (3) \times \mathbb{P}^2 \arrow[d] \\ \mathcal{E} \arrow[r] & \mathcal{E} . \end{tikzcd} \end{center} where the bottom map is the identity. The top map may be given by \[ \left( [\boldsymbol{a}], [\boldsymbol{b}] \right) \mapsto \left( (a_3 \, I + [\mathbf{a}]_\times) ([\mathbf{a}]_\times - a_3 \, I)^{-1} , \, \left(\begin{smallmatrix} 2\,{a}_{1}{b}_{0}-2\,{a}_{0}{b}_{1}+2\,{a}_{3}{b}_{2}-2\,{a}_{2}{b}_{3}\\ 2\,{a}_{2}{b}_{0}-2\,{a}_{3}{b}_{1}-2\,{a}_{0}{b}_{2}+2\,{a}_{1}{b}_{3}\\ 2\,{a}_{3}{b}_{0}+2\,{a}_{2}{b}_{1}-2\,{a}_{1}{b}_{2}-2\,{a}_{0}{b}_{3}\\ \end{smallmatrix}\right) \right), \] where now \[ [\mathbf{a}]_\times = \left(\begin{smallmatrix} 0&{a}_{0}&{a}_{1}\\ -{a}_{0}&0&{a}_{2}\\ -{a}_{1}&-{a}_{2}&0\\ \end{smallmatrix}\right). 
\] The map $\ratnoname{X}{\SO_\CC (3)}$ is undefined for $([\boldsymbol{a}], [\boldsymbol{b}])$ such that $\boldsymbol{a}$ lies on the isotropic quadric $\norm{\boldsymbol{a}}^2 = 0$ in $\mathbb{P}^3.$ In~\cite{van2019functorial}, it was observed that a regular locus of $X \to \mathcal{E}$ is simply given by any $U\subset \mathcal{E}$ such that all $\mathbf{E} \in U$ have rank $2.$ Both branched covers have degree $2,$ and $\mathrm{Mon} (X/ \mathcal{E})$ is the symmetric group $S_2$ acting on two letters. We note that the map $\ratnoname{X}{\SO_\CC (3) \times \mathbb{P}^2}$ is closely related to dual quaternions and Study coordinates for $\SE (3)$~\cite{DBLP:journals/ijrr/Daniilidis99}. \end{example} A \deff{factorization} of a branched cover $\ratnoname{X}{Z}$ is a commutative diagram \begin{equation} \label{eq:factorization-diagram} \begin{tikzcd} X \arrow[rd, dashed] \arrow[r, dashed] & Y\arrow[d,dashed] \\& Z \end{tikzcd} \end{equation} such that $\ratnoname{X}{Y}$ and $\ratnoname{Y}{Z}$ are branched covers. If $\deg (X/Y)$ and $\deg (Y/Z)$ are both strictly less than $\deg (X/Z),$ we say that the factorization is \deff{proper} and that the branched cover $\ratnoname{X}{Z}$ is \deff{decomposable}. Otherwise, $\ratnoname{X}{Z}$ is \deff{indecomposable.} Proposition~\ref{prop:decomposable_iff_imprimitive} implies that the decomposability of a branched cover can be determined from the Galois/monodromy group alone. We recall that a \deff{block system} for the monodromy action $\mathrm{Mon} (X/Z) \curvearrowright X_z = \{ x_1, \ldots , x_d \}$ is a partition $X_z = B_1 \cup \cdots \cup B_k,$ consisting of equally sized blocks $B_1 , \ldots , B_k,$ which is preserved in the sense that blocks are always mapped to blocks under the group action.
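For small degree, block systems can be found by exhaustive search over subsets of the fiber. The following sketch (in Python; the helpers \texttt{generate} and \texttt{is\_block} are our own illustrative names) does this for the power map $x \mapsto x^4 = z$: its monodromy group is the cyclic group $C_4$ generated by a $4$-cycle on the roots, and the two blocks of size $2$ pair each root $x$ with $-x,$ reflecting the factorization $x \mapsto x^2 \mapsto x^4.$

```python
from itertools import combinations

def generate(gens, d):
    # Close a set of permutations (one-line tuples) of {0,...,d-1} under composition.
    G, frontier = set(gens), list(gens)
    while frontier:
        g = frontier.pop()
        for h in gens:
            gh = tuple(g[h[i]] for i in range(d))
            if gh not in G:
                G.add(gh)
                frontier.append(gh)
    return G

def is_block(G, B):
    # B is a block iff every group element maps it to itself or to a disjoint set.
    B = set(B)
    for g in G:
        gB = {g[i] for i in B}
        if gB != B and gB & B:
            return False
    return True

# Monodromy of x -> x^4 = z: order the fiber as x, ix, -x, -ix; one loop
# around the branch point z = 0 cyclically shifts these four roots.
sigma = (1, 2, 3, 0)
G = generate([sigma], 4)
assert len(G) == 4                     # the cyclic group C_4

nontrivial = [B for B in combinations(range(4), 2) if is_block(G, B)]
assert nontrivial == [(0, 2), (1, 3)]  # the blocks {x, -x} and {ix, -ix}
```

The block system found here is exactly the partition of the fiber by the intermediate cover $y = x^2.$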
The block systems associated to the action form a lattice under refinement, whose respective maximum and minimum elements are $\{ X_z \}$ and $\{ \{ x_1\} , \ldots , \{ x_d \} \}.$ If any other block systems exist, then $\mathrm{Mon} (X/Z)$ is said to be \deff{imprimitive}, and otherwise it is \deff{primitive}. Given a factorization~\eqref{eq:factorization-diagram}, we have $\deg (X/Z) = \deg (X/Y) \, \deg (Y/Z),$ and a partition \begin{equation} \label{eq:partition} X_z = X_{y_1} \cup \cdots \cup X_{y_k} \end{equation} with $k=\deg (Y/Z).$ The proof of Proposition~\ref{prop:factor} below shows that this is a block system for the monodromy action with blocks of size $\deg (X/Y).$ Conversely, imprimitivity implies decomposability. \begin{proposition} \label{prop:decomposable_iff_imprimitive} A branched cover is decomposable if and only if $\mathrm{Mon}(X/Z)$ is imprimitive. \end{proposition} Proposition~\ref{prop:decomposable_iff_imprimitive} dates back to the work of Ritt~\cite{Ritt1}, who characterized the possible decompositions of branched covers $\mathbb{C} \ni x \mapsto p(x) \in \mathbb{C}$ given by a univariate polynomial $p.$ A Galois-theoretic proof of Proposition~\ref{prop:decomposable_iff_imprimitive} may be found, for instance, in~\cite{Brysiewicz}. If we know $\mathrm{Mon} (X/Z),$ it is also possible to identify the groups $\mathrm{Mon} (X/Y)$ and $\mathrm{Mon} (Y/Z)$ occurring in the factorization~\eqref{eq:factorization-diagram}, as the next proposition shows. \begin{prop} \label{prop:factor} Consider a factorization of a branched cover as in Equation~\eqref{eq:factorization-diagram}. For fixed generic $z\in Z,$ partition $X_z$ as in Equation~\eqref{eq:partition}.
The action $\mathrm{Mon} (X/Z) \curvearrowright X_z$ induces two other group actions which are equivalent to the monodromy groups of the individual factors: \begin{itemize} \item[1)] action on blocks: $\mathrm{Mon} (X/Z) \curvearrowright \{ X_{y_1}, \ldots , X_{y_k} \},$ which is equivalent to $\mathrm{Mon} (Y/Z).$ \item[2)] action on a single block: $\mathrm{Mon} (X/Z)_{X_{y}} \curvearrowright X_{y},$ where $\mathrm{Mon} (X/Z)_{X_{y}}$ denotes the stabilizer of the set $X_{y}$ under the action by $\mathrm{Mon} (X/Z)\curvearrowright X_z.$ This is equivalent to $\mathrm{Mon} (X/Y),$ and thus independent of the choice $y\in Y_z.$ \end{itemize} \end{prop} \begin{proof} 1) For each $\sigma_\gamma \in \mathrm{Mon} (X/Z),$ there is an induced permutation of the blocks: \[ \widetilde{\sigma_\gamma } = \left(\begin{array}{ccccc} X_{y_1} & \cdots & X_{y_k} \\ \sigma_\gamma (X_{y_1}) & \cdots & \sigma_{\gamma} (X_{y_k}) \end{array}\right). \] Indeed, suppose that $x, x ' \in X_{y_i}$ are such that $\sigma_\gamma (x) \in X_{y_j}, \sigma_\gamma (x') \in X_{y_l},$ and consider the lift $\widetilde{\gamma}: [0,1] \rightarrow Y$ starting at $y_i.$ We must have both $\widetilde{\gamma}(1) = y_j$ and $\widetilde{\gamma}(1) = y_l.$ Hence $l=j$ by the unique path-lifting property applied to $\ratnoname{Y}{Z}$, showing that $\sigma_\gamma $ preserves the partition into blocks. In this way we get a group homomorphism \begin{equation} \label{eq:hom1} \begin{split} \mathrm{Mon} (X/Z) &\to \sym ( \{ X_{y_1} , \ldots , X_{y_k} \} )\\ \sigma_\gamma &\mapsto \widetilde{\sigma_\gamma }, \end{split} \end{equation} which represents the action of $\mathrm{Mon} (X/Z)$ on the blocks.
Now, there is also an injective group homomorphism \begin{equation} \label{eq:hom2} \begin{split} \mathrm{Mon} (Y/Z ) &\to \sym ( \{ X_{y_1} , \ldots , X_{y_k} \} )\\ \tau_\gamma &\mapsto \left(\begin{array}{ccccc} X_{y_1} & \cdots & X_{y_k} \\ X_{\tau_\gamma (y_1)} & \cdots & X_{\tau_\gamma (y_k)} \end{array}\right) \end{split} \end{equation} obtained by restricting the natural isomorphism $\sym (Y_z) \cong \sym (\{ X_{y_1}, \ldots , X_{y_k} \} )$ that identifies a point $y_i \in Y_z$ with its corresponding block $X_{y_i}.$ We wish to show that maps~\eqref{eq:hom1} and~\eqref{eq:hom2} have the same image. This follows easily if we restrict $\gamma $ in both maps to be loops contained in a regular locus for $\ratnoname{X}{Z}$: the lifts of $\gamma $ to $Y$ (which, by our restriction, also lift to $X$) determine the corresponding permutation of blocks in $X$, and vice versa. Indeed, we have $\sigma_\gamma (X_y) = X_{\tau_\gamma (y)}$ for any $y\in Y_z$. To see this, it is enough to show one set is contained in the other. A point $x\in \sigma_\gamma (X_y)$ is the endpoint of some lift of $\gamma $ to $X.$ The image of this lift in $Y$ is itself a lift $\wt{\gamma } : [0,1] \to Y$ with $\wt{\gamma} (0) = y$---hence $\wt{\gamma } (1) = \tau_\gamma (y),$ and the endpoint of our original lift $x$ is in $X_{\tau_\gamma (y)}.$\\\\ 2) The proof amounts to showing that a loop $\gamma$ in $Z$ lifts to a loop in $Y$ if and only if $\sigma_\gamma \in \mathrm{Mon} (X/Z)$ stabilizes each of the blocks.
As in the previous part, this is only true if we consider loops in a suitably small regular locus $U\subset Z.$ It suffices to take $U$ contained in a regular locus for $\ratnoname{X}{Z}$ and whose preimage in $Y$ is a regular locus for $\ratnoname{X}{Y}.$ \end{proof} Proposition~\ref{prop:decomposable_iff_imprimitive} shows that an arbitrary branched cover $\ratnoname{X}{Z}$ factors as a composition of indecomposable branched covers: \begin{equation} \label{eq:decomposition} X = Y_0 \dashrightarrow Y_1 \dashrightarrow \cdots \dashrightarrow Y_{k-1} \dashrightarrow Y_k = Z. \end{equation} Such a factorization corresponds to a maximal chain in the lattice of block systems, and the associated degrees can be read off from the block sizes. Equivalently, for any $x\in X_z,$ a maximal chain in the lattice of block systems of $\mathrm{Mon} (X/Z)$ corresponds to a chain of subgroups that contain the stabilizer $\mathrm{Mon} (X/Z)_x$ (cf.~\cite[proof of Theorem 9.15]{Rotman}) \[ \mathrm{Mon} (X/Z)_x = G_0 \subset G_{1} \subset \cdots \subset G_{k-1} \subset G_k = \mathrm{Mon} (X/Z), \] and we have $\deg (Y_i / Y_{i+1}) = [G_{i+1} : G_i ]$ for $i=0, \, \ldots , \, k-1.$ The decomposition~\eqref{eq:decomposition} is not unique. In fact, as Ritt already understood~\cite[p.~53]{Ritt1}, there are many examples where even the multi-set of degrees $\deg (Y_i / Y_{i+1})$ is not unique. See~\cite[Example 25]{Gutierrez} for one such explicit example. Finally, we define and carefully study the deck transformations of a branched cover, of which the twisted pair symmetry from the introduction is a special case. \begin{defn} \label{def:deck} A birational equivalence from a branched cover to itself which fixes the base is called a \deff{deck transformation.} Explicitly, for $\rat{f}{X}{Z}$ a deck transformation $\rat{\Psi}{X}{X}$ must satisfy $f \circ \Psi = f$ whenever both maps are defined.
The deck transformations form a group under composition which acts on a generic fiber $X_z.$ The deck transformation group can be naturally identified with the automorphisms of $\mathbb{C} (X)$ which fix $\mathbb{C} (Z),$ denoted $\Aut (X/Z).$ \end{defn} Analogously to decomposability, Proposition~\ref{prop:deck-centralizer} shows that the existence of a nontrivial deck transformation can be decided from the Galois/monodromy group alone. This turns out to be stronger than decomposability in general. We learned of Proposition~\ref{prop:deck-centralizer} from the sources~\cite{awtrey,Cukierman}. Since it seems less well-known outside of the literature on Galois/monodromy groups, we give a self-contained proof. In topology, a deck transformation of a covering map $f$ can be any continuous function $\Psi$ satisfying $f\circ \Psi = f.$ Our proof of Proposition~\ref{prop:deck-centralizer} reveals that, for a rational branched cover $f $ with regular locus $U,$ the deck transformations of $f_{|f^{-1}(U) } $ in the topological sense are always rational maps in the sense of Definition~\ref{def:deck}. Before giving the proof, we first consider three illustrative examples. \begin{example} \label{ex:deck-quad} Let $X = \mathbf{V}(x^2+ax+b) \subset \mathbb{C}^3$, $Z = \mathbb{C}^2$ and $f : X \to Z$ be the degree-$2$ branched cover defined by coordinate projection $f(x,a,b) = (a,b).$ The deck transformation defined by $\Psi (x,a,b) = (-x - a, a, b)$ acts on a generic fiber $X_{(a,b)}$ by permuting the two roots of the quadratic equation $x^2 + a x + b=0.$ \end{example} \begin{example} \label{ex:deck-pnp} Ask et al.~\cite{Ask12} define a polynomial system $F(\boldsymbol{x})$ with $p$-fold symmetry to be such that $F(\boldsymbol{x}) =0$ implies $F( \omega \, \boldsymbol{x})=0 $ whenever $\omega $ is a $p$-th root of unity. 
For example, the equations \begin{figure} \centering \includegraphics[scale = 0.4]{Figs/P3P.png} \caption{Frontal view of the P3P problem: \textcolor{red}{$x_1,x_2,x_3$} are unknown. } \label{fig:p3p} \end{figure} \begin{equation} \label{eq:p3p} \begin{split} f_{1,2} = x_1^2 + x_2^2 - c_{1,2} x_1 x_2 - d_{1,2}^2\\ f_{1,3} = x_1^2 + x_3^2 - c_{1,3} x_1 x_3 - d_{1,3}^2\\ f_{2,3} = x_2^2 + x_3^2 - c_{2,3} x_2 x_3 - d_{2,3}^2 \end{split} \end{equation} have a $2$-fold sign symmetry: $(x_1,x_2,x_3)\mapsto (-x_1, - x_2, -x_3).$ These equations define the famous Perspective-3-Point problem or \deff{P3P problem}. Here, each $c_{i,j}$ is equal to $2 \cos \theta_{i,j}$ as in Figure~\ref{fig:p3p}. Letting $X$ denote the vanishing locus of~\eqref{eq:p3p} in $\mathbb{C}^{9},$ the coordinate projection onto the space of knowns $\mathbb{C}^6$ is a branched cover with a deck transformation given by the sign symmetry. We will return to the P3P problem in Section~\ref{sec:abspose}. \\\\ Ask et al.~\cite{Ask12} develop algorithms for detecting and exploiting \emph{partial} $p$-fold symmetries (occurring in only some subset of the variables) in the automatic generation of polynomial solvers. These methods were generalized by Larsson and {\AA}str{\"o}m~\cite{LarssonSymmetries} to the case of \emph{weighted} partial $p$-fold symmetries. In general, a branched cover of degree $e \, p,$ for some integer $e,$ with a weighted partial $p$-fold symmetry will have a deck transformation of order $p,$ and its Galois/monodromy group will be a subgroup of $C_p \wr S_e.$ \end{example} \begin{example} \label{ex:deck-resolve-twisted} Example~\ref{ex:resolve-twisted} contains two interesting \deff{Galois covers}---branched covers whose Galois/monodromy group and deck transformation group are isomorphic. For $\ratnoname{\SO_\CC (3) \times \mathbb{P}^2}{\mathcal{E}},$ the action on the fiber applies the twisted pair map as in~\eqref{eq:twisted-pair}.
For $\ratnoname{X}{\mathcal{E}},$ the action swaps coordinates $\left([\boldsymbol{a}], [\boldsymbol{b}]\right) \mapsto \left([\boldsymbol{b}], [\boldsymbol{a}]\right).$ \end{example} \begin{prop} \label{prop:deck-centralizer} Let $\ratnoname{X}{Z}$ be a branched cover and fix generic $z\in Z$. We may identify the deck transformation group with a subgroup of $\sym (X_z)$ by restricting functions to $X_z.$ This permutation group is \emph{equal} to the centralizer of $\mathrm{Mon} (X/Z)$ in $\sym (X_z).$ \end{prop} \begin{proof} We abbreviate the deck transformation group and centralizer subgroup by $D$ and $C,$ respectively. We define a map between these groups as follows: \begin{align*} \varphi: D &\rightarrow \sym (X_z) \\ \Psi &\mapsto \permshort{x_1}{x_d}{\Psi (x_1)}{\Psi (x_d)} \end{align*} To prove Proposition~\ref{prop:deck-centralizer}, we verify the following properties of $\varphi $: \begin{itemize} \item[1)] $\varphi$ is a group homomorphism. \item[2)] $\varphi $ is injective. \item[3)] The image of $\varphi $ is contained in $C.$ \item[4)] $C$ is contained in the image of $\varphi $---more explicitly, for all $\sigma \in C$ there exists a deck transformation $\Psi_\sigma \in D$ whose restriction to the fiber $X_z$ equals the permutation $\sigma .$ \end{itemize} Property 1) is straightforward. Properties 2) and 3) both follow from the unique path-lifting property. For instance, if $\Psi (x_i)=x_i$ for $i=1,\ldots , d,$ then a generic $x\in X$ will be the endpoint of the lift $\widetilde{\gamma}$ of some path $\gamma $ in $Z$ based at $z$---if $x_i=\Psi (x_i)$ is the initial point of this lift, then we must have $\Psi \circ \widetilde{\gamma } = \widetilde{\gamma },$ so that in particular $\Psi (x) = \Psi \circ \widetilde{\gamma } (1) = \widetilde{\gamma } (1) = x.$ This gives Property 2). The proof of Property 3) is very similar, and may also be found, for instance, in~\cite[Proposition 1.3]{Cukierman}.\\\\ It remains to show Property 4).
We do so by first constructing a map $\rat{\Psi_\sigma}{X}{X}$ pointwise via lifting paths. The argument is analogous to the proof in~\cite[Proposition 1.39]{Hatcher}. Fix $x_0\in X_z.$ For generic $x\in X,$ there exists a path $\alpha_x : [0,1] \to X$ from $x_0$ to $x$ whose image in $Z$ is contained in a regular locus for $\ratnoname{X}{Z}.$ Define $\overline{\alpha_x}$ to be the lift based at $\sigma (x_0)$ whose image in $Z$ coincides with the image of $\alpha_x .$ We define \begin{center} $\rat{\Psi_\sigma }{X}{X}$\\ $x \mapsto \overline{\alpha_x} (1).$ \end{center} \begin{figure} \begin{center} \includegraphics[width=18em]{Figs/covering.png} \end{center} \caption{Construction and well-definedness of $\Psi_\sigma .$}\label{fig:deck} \end{figure} First of all, we must show that $\Psi_\sigma$ is well-defined. This means that for any other path $\beta_x$ from $x_0$ to $x$ we must have $\overline{\beta_x}(1) = \overline{\alpha_x}(1).$ We refer the reader to Figure~\ref{fig:deck} to more easily follow the argument. Consider the loop $\gamma $ based at $f(x_0)$ in $Z$ obtained by concatenating $\overleftarrow{f\circ \beta_x}$ (the reverse of the path $f\circ \beta_x$) with $f\circ \alpha_x .$ Since $\sigma $ and $\sigma_\gamma $ commute, we have that \begin{align*} \sigma_\gamma (\sigma (x_0)) &= \sigma (\sigma_\gamma (x_0))\\ &= \sigma (x_0). \end{align*} Thus $\sigma_\gamma $ fixes $\sigma (x_0),$ and it follows that the lift $\widetilde{\gamma }$ based at $\sigma (x_0)$ is a loop. This implies that $\overline{\alpha_x}$ and $\overline{\beta_x}$ have the same terminal point $\Psi_\sigma (x),$ proving well-definedness.\\\\ Consider now an arbitrary $x\in X_z.$ Note that in this case $f\circ \alpha_x $ is a loop. We calculate \begin{align*} \sigma (x) &= \sigma ( \sigma_{f\circ \alpha_x} (x_0))\\ &= \sigma_{f\circ \alpha_x} (\sigma (x_0))\\ &= \Psi_\sigma (x).
\end{align*} Thus, restricting $\Psi_\sigma $ to $X_z$ yields the permutation $\sigma .$ Moreover, by definition of $\Psi_\sigma$ we have that $f \circ \Psi_\sigma = f$ on the locus of points where both maps are defined. It remains to show that $\Psi_\sigma $ is a rational map, since then it will also follow that $\Psi_{\sigma^{-1}}$ is a rational inverse. First we note that, in a suitably small neighborhood of any generic point $x\in X$, we can write $\Psi_\sigma = g_x \circ f,$ where $g_x$ is a holomorphic local inverse of $f.$ Such an expression for $\Psi_\sigma$ exists for any $x$ in some Zariski open subset of $X.$ It follows that $\Psi_\sigma $ is a meromorphic map from $X$ to itself---in other words, it is holomorphic after restricting to a Zariski-open $U\subset X.$ To finish the proof, we may use the well-known fact that all meromorphic maps between projective varieties are rational.\footnote{We recall the words of Mumford~\cite[Ch.~4]{Mumford}: ``This should be viewed as a generalization of the old result that the only everywhere meromorphic functions on $\mathbb{C} \cup \{ \infty \} $ are rational functions."} Indeed, if $X$ and $Z$ are quasiprojective varieties, we may replace them with their projective closures $\overline{X}, \overline{Z},$ to get a birationally equivalent $\ratnoname{\overline{X}}{\overline{Z}}.$ \end{proof} \begin{prop} \label{prop:deckImpliesDecomposable} A branched cover $\ratnoname{X}{Z}$ of degree $d$ with a nontrivial deck transformation $\Psi $ is either decomposable or its Galois/monodromy group is cyclic of order $d.$ In the latter case, $\mathrm{Mon} (X/Z)$ is imprimitive precisely when $d$ is composite. \end{prop} \begin{proof} Partition $X_z$ into the orbits under repeated application of $\Psi .$ This partition is preserved under the monodromy action. Thus, the Galois group is imprimitive if this partition is nontrivial. 
Otherwise, the action of $\Psi $ on $X_z$ generates a cyclic group $C_d \subset S_d.$ Letting $\cent (\cdot )$ denote the centralizer in $S_d$, we have $C_d \subset \cent (\mathrm{Mon} (X/Z)),$ which holds if and only if $\mathrm{Mon} (X/Z) \subset \cent (C_d) = C_d .$ Since $\mathrm{Mon} (X/Z)$ is transitive, we must have $\mathrm{Mon} (X/Z) = C_d.$ \end{proof} In general, a decomposable branched cover need not have any deck transformations. However, a converse to Proposition~\ref{prop:deckImpliesDecomposable} does hold in a special case frequently encountered in practice. \begin{proposition} \label{prop:bir_order_2} $\ratnoname{X}{Z}$ has a deck transformation of order $2$ if and only if $\mathrm{Mon} (X/Z)$ has a block of size $2$. \end{proposition} \begin{proof} $\Rightarrow $ As in Proposition~\ref{prop:deckImpliesDecomposable}. $\Leftarrow$ Proposition~\ref{prop:decomposable_iff_imprimitive} gives a factorization such that $\mathbb{C} (X) / \mathbb{C} (Y)$ is a degree-$2$ extension, which is automatically Galois. \end{proof} \subsection{Numerically computing Galois/monodromy groups}\label{subsec:num_methods} In this paper, our main interest is in minimal problems. These typically give rise to branched covers of the form $\ratnoname{X}{\mathbb{C}^m},$ at least up to birational equivalence. We again emphasize that being a branched cover implies $\dim X = m$ and, for \emph{generic measurements} $z\in \mathbb{C}^m,$ that the fiber $X_z$ is finite. This is essentially the definition of a minimal problem used in~\cite{PLMP,PL1P}, where a problem is minimal if and only if its joint camera map is a branched cover. In practice it is usually enough to consider the case where $X $ is a subvariety of $\mathbb{C}^n \times \mathbb{C}^m,$ and the map $X \to \mathbb{C}^m$ is coordinate projection.
In this case, the ideal $\mathcal{I}_X$ of all polynomials in $\mathbb{C}[x_1, \ldots , x_n, z_1, \ldots , z_m]$ that vanish on $X$ is prime.\\\\ However, one advantage of computing Galois/monodromy groups numerically is that we do not need to know all generators of $\mathcal{I}_X.$ All that is really needed are \begin{itemize} \item[1)] the ability to sample a generic point $(x^*, z^*)\in X,$ and \item[2)] a set of $n$ equations vanishing on $X,$ \begin{equation}\label{eq:witnessSystem} F(x;z) = F(x_1,\dots,x_n;z_1,\dots,z_m) = \left[\begin{array}{c} f_1(x_1,\dots,x_n;z_1,\dots,z_m) \\ \vdots \\ f_n(x_1,\dots,x_n;z_1,\dots,z_m) \end{array}\right], \end{equation} such that the Jacobian $d_x \, F(x^*;z^*)$ is an invertible $n\times n$ matrix for generic $(x^*, z^*)\in X.$ We say that equations~\eqref{eq:witnessSystem} form a \deff{well-constrained system} for the branched cover $X \to \mathbb{C}^m.$ \end{itemize} In equation~\eqref{eq:witnessSystem}, $x_1,\ldots , x_n$ are usually called the \deff{variables} and $z_1, \ldots , z_m$ are usually called the \deff{parameters}. For generic $(x^*, z^*) \in X$ and generic $z\in \mathbb{C}^m,$ the path $\alpha : [0,1] \to \mathbb{C}^m$ defined by the straight-line segment $\alpha (t) = (1-t)\cdot z^* + t\cdot z$ will have a lift $\widetilde{\alpha } (t)$ to $X$ based at $\widetilde{\alpha} (0) = (x^*, z^*).$ By genericity, the lifted path $\widetilde{\alpha}:[0,1] \to X$ will not intersect the subvariety of $X$ where $d_x \, F$ is singular for any $t\in [0,1]$---see~\cite[Lemma 7.12]{SW}. 
This implies that $\widetilde{\alpha}$ can be numerically approximated by applying numerical continuation methods to the \deff{parameter homotopy} \begin{equation}\label{eq:ParameterHomotopy} H(x,t) = F \left(x; \alpha (t) \right) = 0, \end{equation} which connects known solutions of the \deff{start system} $H(x,0) = F(x;z^*) = 0$ to solutions of the \deff{target system}\footnote{This convention is chosen to agree with the notation of the previous section. We note that this is the opposite of the common convention of placing start systems at $t=1$ and target systems at $t=0$ which, although mathematically equivalent, is more natural from the numerical point of view.} $H(x,1) = F(x;z) = 0.$ The solution curves $x(t)$ satisfying $H(x(t), t)=0$ and the initial condition $x(0) = x^*$ will stay on our irreducible variety $X$ with probability one, and hence $\widetilde{\alpha } (t) = (x(t), \alpha (t)).$ Thus, although the variety $X$ need not be a complete intersection, appropriate use of a well-constrained system enables us to compute solutions on $X$ using the same number of equations as unknowns. This fits into an established paradigm of numerical algebraic geometry where overdetermined parameterized polynomial systems may be solved by reduction to a well-constrained system (see also~\cite[Sec.~6.4]{BHSW13},~\cite{HauensteinRegan}).
In practice, our numerical approximations to $\widetilde{\alpha}(t)$ remain ``close'' to $X$ with some probability that depends on the conditioning and the implementation of the numerical methods.\\\\ By numerically continuing solutions along some path $\beta (t)$ from $z$ to $z^*$, and then along some other path $\alpha (t)$ from $z^*$ to $z$, the concatenated path $\gamma = \beta * \alpha $ is a loop based at $z$ which induces a monodromy permutation $\sigma_{\gamma } \in \mathrm{Mon} (X/\mathbb{C}^m ; z).$ This simple observation motivates numerous applications of monodromy in numerical algebraic geometry: computing the fibers $X_z$ (addressed in~\cite{MSpaper,MARTINDELCAMPO2017559}), computing the Galois/monodromy group (addressed in~\cite{NumGalois,GaloisSchubert}), and additional applications ranging from numerical irreducible decomposition~\cite{NIDpaper} to kinematics~\cite{RealMonodromy}. To compute $\mathrm{Mon} (X/\mathbb{C}^m)$ in this paper, we simply generate some number of loops $\gamma_1, \ldots , \gamma_k$ in $\mathbb{C}^m,$ with $k$ ranging from $4$ (usually sufficient when $\mathrm{Mon} (X/\mathbb{C}^m)$ is the full symmetric group) to as large as $50.$ This adds additional uncertainty to our numerical computations, since \emph{a priori} we only know that $\langle \sigma_{\gamma_1} , \ldots , \sigma_{\gamma_k} \rangle $ is a subgroup of $\mathrm{Mon} (X/\mathbb{C}^m; z).$ In principle, this additional uncertainty could be avoided by a more computation-heavy approach like the branch-point method in~\cite{NumGalois}. Nevertheless, we feel reasonably confident in the Galois/monodromy group computations which we report, which have been validated through repeated runs and by analyzing decompositions in several of the imprimitive cases.
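To make the loop-tracking step concrete, the following toy sketch (ours; not taken from any of the implementations cited above) tracks a solution of the well-constrained system $F(x;z) = x^2 - z$ around the loop $\alpha(t) = e^{2\pi i t}$, which encircles the branch point $z = 0$, using an Euler predictor followed by a Newton corrector at each step:

```python
import cmath

def track_loop(x_start=1.0 + 0j, n_steps=200):
    """Track a solution of H(x, t) = x^2 - z(t) = 0 along the loop
    z(t) = exp(2*pi*i*t), t in [0, 1], encircling the branch point z = 0."""
    x = x_start
    for k in range(n_steps):
        z0 = cmath.exp(2j * cmath.pi * k / n_steps)
        z1 = cmath.exp(2j * cmath.pi * (k + 1) / n_steps)
        x += (z1 - z0) / (2 * x)        # Euler predictor: dx = dz / (dH/dx)
        for _ in range(3):              # Newton corrector back onto H(., t) = 0
            x -= (x * x - z1) / (2 * x)
    return x

# The loop interchanges the two sheets over z = 1, realizing the
# nontrivial permutation in Mon(V(x^2 - z)/C) = S_2.
assert abs(track_loop(1.0 + 0j) - (-1.0)) < 1e-8
assert abs(track_loop(-1.0 + 0j) - 1.0) < 1e-8
```

Production path trackers use adaptive step sizes and more careful predictors, but the predictor--corrector structure is the same.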
\section{Absolute pose problems}\label{sec:abspose} In this section, we apply the mathematical framework of the previous section to absolute pose problems involving combinations of point/line features appearing in the work of Ramalingam et al.~\cite{ramalingam}. Absolute camera pose estimation is one of the main problems of computer vision~\cite{Ameller02camerapose,RANSAC,DBLP:journals/ijcv/HaralickLON94,DBLP:journals/ijcv/LepetitMF09,DBLP:journals/pami/QuanL99,DBLP:conf/issac/ReidTZ03,DBLP:conf/iccv/Triggs99,DBLP:journals/jmiv/WuH06}. Although the problems considered here are of low degree, computing the Galois/monodromy groups yields new insights which might be applied to building better solvers for these problems. We begin by formulating these problems in the language of branched covers. Our general task is to determine a calibrated camera matrix $\cam{\mathbf{R}}{\mathbf{t}}$ from correspondence data between the scene and images. We let $p$ and $l$ be the numbers of point-point and line-line correspondences, respectively, between 3D and 2D.
The total space of our branched cover is \[ X_{p,l} = \left( \mathbb{P}^3 \right)^p \times \left( \mathbb{G}_{1,3} \right)^l \times \SE_\CC (3) \] where $\mathbb{G}_{1,3}$ denotes the Grassmannian of lines in $\mathbb{P}^3.$ The base space equals \[ Z_{p, l} = \left( \mathbb{P}^3 \right)^p \times \left( \mathbb{G}_{1,3} \right)^l \times \left( \mathbb{P}^2 \right)^p \times \left( \mathbb{G}_{1,2} \right)^l, \] where $\mathbb{G}_{1,2}$ denotes the Grassmannian of lines in $\mathbb{P}^2$ and $\rat{f_{p,l}}{X_{p,l}}{Z_{p,l}}$ is the map that ``takes pictures'': \begin{align*} \left( X_1, \, \ldots , \, X_p, \, \overline{L_1 \, L_1'}, \, \ldots , \, \overline{L_l \, L_l '}, \, \cam{\mathbf{R}}{\mathbf{t}} \right) \mapsto \\ \bigg( X_1, \, \ldots , \, X_p, \, \overline{L_1 \, L_1'}, \, \ldots , \, \overline{L_l \, L_l '}, \, \cam{\mathbf{R}}{\mathbf{t}}\, X_1, \, \ldots , \, \cam{\mathbf{R}}{\mathbf{t}}\, X_p, \\ \overline{\cam{\mathbf{R}}{\mathbf{t}}\, L_1 \, \, \cam{\mathbf{R}}{\mathbf{t}}\, L_1'}, \, \ldots , \, \overline{\cam{\mathbf{R}}{\mathbf{t}}\, L_l \, \, \cam{\mathbf{R}}{\mathbf{t}}\, L_l '} \bigg) \end{align*} (here $\overline{L \, L'}$ denotes the line spanned by $L$ and $L'$). Counting dimensions gives $\dim X_{p,l} = 3p + 4l + 6$ and $\dim Z_{p,l} = 5p + 6l.$ Equating the two, we see that the only possibilities are $(p,l)=(3,0),$ $(2,1),$ $(1,2), $ $(0,3).$ The first case corresponds to the P3P problem. \begin{result} \label{res:abs-pose} The full list of Galois/monodromy groups of the branched covers $f_{p,l}$ is as follows: \begin{align*} \mathrm{Mon} (X_{3,0}/Z_{3,0}) &\cong S_2 \wr S_4 \cap A_8 \hookrightarrow S_8\\ \mathrm{Mon} (X_{2,1}/Z_{2,1}) &\cong S_2 \wr S_2 \cap A_4 \cong C_2 \times C_2 \hookrightarrow S_4\\ \mathrm{Mon} (X_{1,2}/Z_{1,2}) &\cong S_2 \wr S_4 \cap A_8 \hookrightarrow S_8\\ \mathrm{Mon} (X_{0,3}/Z_{0,3}) &\cong S_8.
\end{align*} \end{result} We interpret Result~\ref{res:abs-pose} in two separate subsections, corresponding to the ``unmixed cases'' $(p,l) \in \{ (3,0), \, (0,3) \}$ and the more interesting ``mixed cases'' $(p,l) \in \{ (2,1), \, (1,2) \}.$ We note that the respective degrees $8, \, 4, \, 8, \, 8$ agree with those reported in~\cite{ramalingam}, in which these problems were formulated using different systems of equations. The systems of equations defining the parameter homotopies used for Result~\ref{res:abs-pose} were constructed as follows: \begin{itemize} \item Points in the world are represented by $4\times 1$ matrices $X_1, \ldots , X_p.$ \item Points in the image are represented by $3\times 1$ matrices $x_1, \ldots , x_p.$ \item Lines in the world are represented as kernels of $2\times 4 $ matrices $[\mathbf{N}_1 \mid \mathbf{N}_1 ']^\top, \ldots , [\mathbf{N}_l \mid \mathbf{N}_l']^\top.$ \item Lines in the image are represented as kernels of $1\times 3$ matrices $\mathbf{n}_1^\top , \ldots , \mathbf{n}_l^\top.$ \item We enforce rank constraints by the vanishing of maximal minors of certain matrices: \begin{itemize} \item point-to-point: $\rank \Big( \cam{\mathbf{R}}{\mathbf{t}} \, X_i \mid x_i \Big) \le 1$ for $i=1,\ldots , p$ \item line-to-line: $\rank \Big( \mathbf{N}_i \mid \mathbf{N}_i ' \mid \cam{\mathbf{R}}{\mathbf{t}}^\top \mathbf{n}_i \Big) \le 2$ for $i=1,\ldots , l$ \end{itemize} \item Any square subsystem of these maximal minors whose Jacobian has full rank gives a well-constrained system in the sense of Section~\ref{subsec:num_methods} (the variables being $\cam{\mathbf{R}}{\mathbf{t}},$ and the parameters being all points and lines). Although the square subsystem may have excess solutions, they are not in the same orbit as the geometrically relevant solutions under the monodromy action. Alternatively, we can get a well-constrained system by randomization as in~\cite[Sec 6.4]{BHSW13}.
The monodromy group does not depend on the choice of well-constrained system. \end{itemize} \subsection{The unmixed cases: \texorpdfstring{$(p,l)=(3,0), \, (0, 3)$}{(p,l)=(3,0), (0, 3)}} The case $(p,l) = (3,0)$ reduces to solving the P3P problem as formulated in Equations~\eqref{eq:p3p}. The literature on this problem is vast, and the earliest work~\cite{Grunert-1841} pre-dates the field of computer vision by more than a century. The degree of this problem is $8$ and the Galois/monodromy group is a subgroup of $S_2 \wr S_4$ due to the sign symmetry. In the terminology of Brysiewicz et al.~\cite{Brysiewicz}, Equations~\eqref{eq:p3p} are a lacunary polynomial system whose monomial supports span a proper sublattice of $\mathbb{Z}^3$ with finite index. In the setting of that paper, we would consider the family of all systems with the same monomial supports as in~\eqref{eq:p3p} \begin{equation} \label{eq:fullsupport} \begin{split} h_{1,2} = A x_1^2 + B x_2^2 + C x_1 x_2 + D\\ h_{1,3} = E x_1^2 + F x_3^2 + G x_1 x_3 + H\\ h_{2,3} = I x_2^2 + J x_3^2 + K x_2 x_3 + L \end{split} \end{equation} This gives a branched cover $X_h \to \mathbb{C}^{12}$ where $X_h = V(h_{1,2}, h_{1,3}, h_{2,3}) \subset \mathbb{C}^3 \times \mathbb{C}^{12}.$ On the other hand, for P3P the natural branched cover is $X_f \to \mathbb{C}^6,$ where $X_f\subset \mathbb{C}^{3} \times \mathbb{C}^{6}.$ We find numerically that $\mathrm{Mon} (X_h / \mathbb{C}^{12})$ is the full wreath product $S_2 \wr S_4,$ whereas our numerical experiments suggest that the Galois/monodromy group for P3P is the \emph{proper subgroup} $S_2 \wr S_4 \cap A_8.$ To certify the result of our numerical monodromy computation, we can compute the Galois group for P3P using symbolic computation. 
Consider $I = \langle f_{1,2}, f_{1,3}, f_{2,3} \rangle $ as an ideal in a polynomial ring $\mathbb{F} [x_1,x_2,x_3]$ whose coefficient field is $\mathbb{F} = \mathbb{C} (Z) = \mathbb{C} (\vec{c}, \vec{d}).$ The dimension and degree of $I$ are $0$ and $8.$ We can compute a lexicographic Gr\"{o}bner basis for $I$ with $x_1 > x_2 > x_3$ in a matter of seconds using the FGLM algorithm~\cite{faugere1993efficient}, implemented for Macaulay2~\cite{M2} in the package \texttt{FGLM}~\cite{FGLMSource}. The Gr\"{o}bner basis $G = \{ g_1, g_2, g_3 \}$ has the form predicted by the Shape lemma \begin{align*} g_1 (x_1,x_2,x_3) &= x_1 + r_1 (\vec{c}, \vec{d} ) \, x_3 \\ g_2 (x_2,x_3) &= x_2 + r_2(\vec{c}, \vec{d} ) \, x_3\\ g_3 (x_3) &= x_3^8 + A (\vec{c}, \vec{d} ) \, x_3^6 + B (\vec{c}, \vec{d} ) \, x_3^4 + C (\vec{c}, \vec{d} ) \, x_3^2 + D (\vec{c}, \vec{d} ), \end{align*} for particular rational functions $r_1,r_2,A,B,C,D \in \mathbb{F}.$ We see that $x_3$ is a primitive element for the extension $\mathbb{C} (X) / \mathbb{C} (Z).$ To verify that $\mathrm{Mon} (X/Z) \cong \mathrm{Gal} (X/Z)$ is contained in $A_8,$ it suffices to show that the discriminant of $g_3$ is square. For arbitrary coefficients $(A,B,C,D),$ the discriminant of $x_3^8 + A \, x_3^6 + B \, x_3^4 + C \, x_3^2 + D$ is the product of $D $ and a square. For P3P, $D(\vec{c}, \vec{d} )$ is also a square. Factorizations of P3P are also classical: from Equations~\eqref{eq:p3p}, we have \begin{equation} \label{eq:p3p-factor} \begin{split} y_1 (1+ y_2^2 - c_{1,2} y_2 ) - d_{1,2}^2=0\\ y_1 (1+ y_3^2 - c_{1,3} y_3 ) - d_{1,3}^2=0\\ y_1 (y_2^2 + y_3^2 - c_{2,3} y_2 y_3) - d_{2,3}^2 = 0 \end{split} \end{equation} where $y_1 = x_1^2, \, y_2 = x_2 / x_1, \, y_3 = x_3/x_1$ are separating invariants~\cite{Kemper} for the action of the deck transformation group. 
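The discriminant claim above reflects the exact identity $\mathrm{disc}(x^8 + Ax^6 + Bx^4 + Cx^2 + D) = D \cdot \left(16\,\mathrm{disc}(y^4 + Ay^3 + By^2 + Cy + D)\right)^2,$ obtained by substituting $y = x^2.$ A quick spot-check of this identity on random integer coefficients, using sympy (our sketch, not part of the computations reported above):

```python
import random
from sympy import symbols, discriminant

x, y = symbols('x y')
random.seed(0)
for _ in range(5):
    A, B, C, D = (random.randint(-9, 9) for _ in range(4))
    p = x**8 + A*x**6 + B*x**4 + C*x**2 + D   # even polynomial in x
    q = y**4 + A*y**3 + B*y**2 + C*y + D      # its image under y = x^2
    # disc(p) is D times a perfect square, so Mon <= A_8 whenever D is a square
    assert discriminant(p, x) == D * (16 * discriminant(q, y))**2
```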
We see that even when $X \to Z$ is regular in the definition of a factorization~\eqref{eq:factorization-diagram}, the maps $\ratnoname{X}{Y}$ and $\ratnoname{Y}{Z}$ need not be. On the other hand, the branched cover $\ratnoname{X_{0,3}}{Z_{0,3}}$ for absolute pose from $3$ lines is indecomposable, since its Galois/monodromy group $S_8$ acts primitively on the set of $8$ solutions. Thus, any algebraic algorithm for solving this problem must be capable of computing the roots of a polynomial of degree $8$ or higher. \subsection{The mixed cases: \texorpdfstring{$(p,l)=(2,1), \, (1, 2)$}{(p,l)=(2,1), (1,2)}} Proposition~\ref{prop:deck-centralizer} shows that each of the mixed cases has a nontrivial deck transformation group: we have that $\Aut (X_{2,1} / Z_{2,1} ) \cong C_2 \times C_2$ and $\Aut (X_{1,2} / Z_{1,2} ) \cong C_2.$ Using the rank constraints described above, we were able to observe numerically that solutions in the same block for both of these mixed cases differ by a reflection. These deck transformations take on a particularly simple form after changing coordinates as in~\cite{ramalingam}. For the case $(p,l) = (2,1),$ the formulation~\cite[Equations 4,5]{ramalingam} makes use of a clever choice of reference frames to get equations \begin{equation} \label{eq:ramalingam-alt} \begin{split} A \, \boldsymbol{X} - b = 0 \\ R_{1,1}^2 + R_{2,1}^2 + R_{3,1}^2 - 1 = 0\\ R_{2,1}^2 + R_{2,2}^2 + R_{2,3}^2 - 1 = 0 \end{split} \end{equation} where $A$ and $b$ are $6\times 8$ and $6\times 1$ matrices depending on the given data, and \[ \boldsymbol{X} = [R_{1,1}, R_{2,1}, R_{3,1}, R_{2,2}, R_{2,3}, t_1, t_2, t_3]^\top \] is a vector of indeterminates. Using \texttt{FGLM} as in the previous subsection, we discover new constraints \begin{align*} R_{3,1}^2 + \psi_1 (A,b) = 0 \\ t_3^2 + 2 t_3 + \psi_2 (A,b) = 0, \end{align*} for particular rational functions $\psi_1, \psi_2$ in the data, which did not appear in~\eqref{eq:ramalingam-alt} originally.
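These two quadratic constraints already pin down deck transformations on the coordinates $R_{3,1}$ and $t_3$: the roots of $R_{3,1}^2 + \psi_1 = 0$ are swapped by $R_{3,1} \mapsto -R_{3,1},$ while the roots of $t_3^2 + 2t_3 + \psi_2 = 0$ sum to $-2$ and so are swapped by $t_3 \mapsto -t_3 - 2.$ A two-line sympy check of the latter (our sketch):

```python
from sympy import symbols, solve, simplify

t3, psi = symbols('t3 psi')
r = solve(t3**2 + 2*t3 + psi, t3)            # the two roots, as functions of psi
assert simplify(r[0] + r[1]) == -2           # sum of roots is -2 ...
assert simplify((-r[0] - 2) - r[1]) == 0     # ... so t3 -> -t3 - 2 swaps them
```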
Formulas for the deck transformations of this Galois cover follow by way of the basic Example~\ref{ex:deck-quad}. The remaining constraints output by \texttt{FGLM} are, as expected, of the form \begin{align*} R_{i,j} + \ell_{i,j} (R_{3,1})=0\\ t_j + \ell_j (t_3) =0 \end{align*} for linear forms $\ell_j, \ell_{i,j}$ over the coefficient field $\mathbb{Q} (A,b).$ For this very special problem, the Gr\"{o}bner basis elements are surprisingly compact. This suggests, as an alternative to the solution proposed in~\cite{ramalingam}, that we may solve for the rotation and translation independently. Likewise, for the $(p,l) = (1,2)$ case, using the similar formulation of~\cite[Equations 7,8]{ramalingam}, we discover the following symmetry in the solutions ($\mathbf{e}_3\in \mathbb{R}^3$ is the third standard basis vector): \[ (\vec{R}, \, \mathbf{t}) \mapsto (-\vec{R}, -\mathbf{t} - 2 \mathbf{e}_3). \] We note that in this formulation, $\vec{R}$ contains only the first two rows of the unknown rotation matrix. In hindsight, this symmetry is quite easy to verify. However, we stress that computing the Galois/monodromy group was what led us to discover it. \section{Relative pose problems} \label{sec:rel-pose} Our interest in Galois/monodromy groups of minimal problems began when we computed, using Bertini~\cite{Bertini}, that the Galois/monodromy group of the five-point problem was $S_2 \wr S_{10} \cap A_{20}.$ Much like P3P, we had initially expected the full wreath product. Since then, we have computed Galois/monodromy groups of many minimal problems using Bertini and the Macaulay2 package \texttt{MonodromySolver}~\cite{MonodromySolver}. Overall, we agree with the assessment of Esterov and Lang that Galois/monodromy groups of structured polynomial systems are \emph{``unexpectedly rich}''~\cite{esterov2018sparse}. Many of the problems we considered appeared previously in~\cite[Table 1]{PLMP}. In that work, branched covers were represented by pictograms. 
For instance, the five-point problem was denoted by $\vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/500.pdf}}} .$ The problems $\vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/310.pdf}}} $ and $\vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/302a.pdf}}} $ were introduced in~\cite{Joe,Ricardo}, respectively. Our computations that $\mathrm{Mon} \left(\vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/310.pdf}}} \right) \cong S_{216}$ and $\mathrm{Mon} \left(\vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/302a.pdf}}} \right)\cong S_{312}$ show that the homotopy solvers developed in~\cite{Ricardo} are \emph{optimal} in the sense of tracking the fewest paths possible. The problem $\vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/partial.png}}} $ appearing in~\cite{PL1P} can be thought of as P3P fibered over the five-point problem. Unlike the majority of problems studied here, this composite minimal problem has an intermediate field $\mathbb{C} (Z) \subsetneq \mathbb{C} (Y) \subsetneq \mathbb{C} (X)$ which is not the fixed field of some subgroup of $\Aut (X/Z).$ \begin{result}\label{res:plmp} All minimal problems of degree $< 1{,}000$ appearing in~\cite[Table 1]{PLMP} have either an imprimitive or a full symmetric Galois/monodromy group.
The imprimitive cases are: \begin{align*} \mathrm{Mon} \left(\vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/c3200a.pdf}}} \right) &\cong (C_2)^2 \rtimes (S_2 \wr S_3 \cap A_6) \hookrightarrow S_{12} \\ \mathrm{Mon} \left( \vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/c4100.pdf}}} \right) &\cong S_2 \wr S_{8} \cap A_{16} \hookrightarrow S_{16}\\ \mathrm{Mon} \left(\vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/500.pdf}}} \right) &\cong S_2 \wr S_{10} \cap A_{20} \hookrightarrow S_{20}\\ \mathrm{Mon} \left( \vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/c3100.pdf}}} \right) &\cong S_2 \wr \left( S_2 \wr S_{16} \cap A_{32} \right) \cap A_{64} \hookrightarrow S_{64}\\ \mathrm{Mon} \left( \vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/205c.pdf}}} \right) &\cong \left(C_2\right)^4 \rtimes \left( \left(C_2\right)^4 \rtimes \left( S_2 \wr (S_2 \wr S_4) \right) \right) \hookrightarrow S_{64}\\ \mathrm{Mon} \left( \vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/c2110.pdf}}} \right) &\cong \left(C_2\right)^2 \rtimes \left( C_2^2 \rtimes \left( S_2 \wr \left( S_2 \wr S_2 \cap A_4 \right) \cap A_8 \right) \right) \hookrightarrow S_{32}. \end{align*} \end{result} For the sake of uniformity, we have used the semidirect product $\rtimes $ to indicate subgroups of an appropriate wreath product. Thus, for instance, for $\mathrm{Mon} \left( \vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/c2110.pdf}}} \right),$ the outermost $(C_2)^2 $ should be regarded as a subgroup of $(S_2)^{16},$ and the innermost as a subgroup of $(S_2)^{8}.$ Much to our surprise, the group $\mathrm{Mon} \left( \vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/c2110.pdf}}} \right)$ turns out to be solvable. Clearly there are many minimal problems waiting to be decomposed. 
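The orders of the groups above can be checked mechanically. For instance, the factor $S_2 \wr S_3 \cap A_6$ in the first line has order $24,$ so the first group has order $4 \cdot 24 = 96.$ A small sympy sketch (ours) that builds $S_2 \wr S_3 \le S_6$ on the blocks $\{1,2\},\{3,4\},\{5,6\}$ and extracts its even part:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# S_2 wr S_3 acting on {0,...,5} with blocks {0,1}, {2,3}, {4,5}:
swap_in_block = Permutation([[0, 1]], size=6)            # flip inside first block
swap_blocks = Permutation([[0, 2], [1, 3]], size=6)      # exchange first two blocks
cycle_blocks = Permutation([[0, 2, 4], [1, 3, 5]], size=6)
W = PermutationGroup([swap_in_block, swap_blocks, cycle_blocks])
assert W.order() == 2**3 * 6                             # |S_2 wr S_3| = 48

# Intersect with A_6 by keeping the even elements (an index-2 subgroup here,
# since W contains odd elements such as swap_in_block).
E = PermutationGroup([g for g in W.elements if g.is_even])
assert E.order() == 24
assert E.is_transitive() and not E.is_primitive()
```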
With increasingly efficient and user-friendly homotopy continuation software like Bertini, \texttt{HomotopyContinuation.jl}~\cite{HCJL}, and various numerical packages for Macaulay2 (summarized in~\cite{leykin2018homotopy}), we see no obstacles to computing even more Galois/monodromy groups of interest to computer vision in the future. In the remainder of this section, we give particular attention to three problems appearing in Result~\ref{res:plmp}. The first is the five-point problem $\vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/500.pdf}}} .$ The second is the five-point problem for data which lie in a ``V''-shape $\vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/c3200a.pdf}}} .$ Decomposing the associated branched cover reveals the classical planar calibrated homography problem. The third, denoted $\vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/c3100.pdf}}} ,$ is a special case of the notorious four-points-in-three-views problem~\cite{NisterSchaf} in which three of the corresponding points are collinear. \subsection{Five-point relative pose} \label{subsec:5pp} We now return to the five-point problem, retaining the notation from Examples~\ref{ex:5pp} and~\ref{ex:resolve-twisted}. It is a well-known fact that the branched cover $X\to Z$ is decomposable. The intermediate variety is \[ Y = \{ \left( \mathbf{E} , \, \left (\mathbf{x}_1, \, \ldots , \, \mathbf{y}_5 \right) \right) \in \mathcal{E} \times Z \mid \mathbf{y}_i^\top \mathbf{E} \, \mathbf{x}_i = 0 ,\, i=1,\, \ldots , \, 5 \} . \] The equations defining $Y$ give the formulation of the five-point problem studied in several seminal works~\cite{Demazure,Kruppa,FaugerasMaybank}, and used in state-of-the-art five-point solvers such as Nist\'{e}r's~\cite{Nister}.
The fact that $Y\to Z$ is a generically $10$-$1$ map is closely related to the fact that $\mathcal{E}$ is a projective variety of dimension $5$ and degree $10.$ Although the individual linear equations $\mathbf{y}_i^\top \mathbf{E} \, \mathbf{x}_i =0$ are special, considered together they determine a dominant rational map $\ratnoname{Z}{\mathbb{G}_{3,8}}.$ This fact follows from the trisecant lemma of classical algebraic geometry (see~\cite[p.~134]{Mumford} for one statement of this result, or~\cite[Sec.~5.2.3]{Maybank} for an alternative explanation of this fact.) In formulating the five-point problem, it is possible to normalize the translations and depths in various ways. For instance, we might use the normalization $\norm{\mathbf{t}}^2 = 1,$ giving rise to a branched cover $W \to Z$ with $W \subset \SO_\CC (3) \times \mathbb{C}^{13} \times Z .$ This branched cover decomposes as \[ W \dashrightarrow X \rightarrow Y \rightarrow Z. \] The four elements in the generic fiber of $W\to Y$ are illustrated in~\cite[Figure 9.12]{HZ-2003}. The various Galois/monodromy groups for the five-point problem are summarized in Result~\ref{res:5pp}. The result $\mathrm{Mon} (Y/Z) \cong S_{10}$ gives numerical confirmation of the results of~\cite{NisterHartley}, which imply that $\Gal \left( \galclo{\mathbb{Q} (Y)} / \mathbb{Q} (Z) \right)\cong S_{10}$ by appeal to Hilbert's irreducibility theorem~\cite[Proposition 3.3.5]{Serre}. 
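For context, membership of $\mathbf{E} = [\mathbf{t}]_\times \mathbf{R}$ in the essential variety $\mathcal{E}$ can be checked exactly via $\det \mathbf{E} = 0$ together with the classical trace constraint $2\,\mathbf{E}\mathbf{E}^\top\mathbf{E} - \operatorname{tr}(\mathbf{E}\mathbf{E}^\top)\,\mathbf{E} = 0$ going back to Demazure. A sympy sketch (ours, with an arbitrarily chosen integer quaternion and translation) verifying both identities in exact rational arithmetic:

```python
from sympy import Matrix, eye, zeros

# Rational rotation matrix from the integer quaternion (w, x, y, z) = (1, 2, 3, 4).
w, x, y, z = 1, 2, 3, 4
s = w*w + x*x + y*y + z*z
R = Matrix([
    [w*w + x*x - y*y - z*z, 2*(x*y - w*z),         2*(x*z + w*y)],
    [2*(x*y + w*z),         w*w - x*x + y*y - z*z, 2*(y*z - w*x)],
    [2*(x*z - w*y),         2*(y*z + w*x),         w*w - x*x - y*y + z*z],
]) / s
assert R.T * R == eye(3) and R.det() == 1   # R is a rotation

t = Matrix([1, 2, 5])
T = Matrix([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])  # [t]_x
E = T * R                                   # an essential matrix

assert E.det() == 0
assert 2 * E * E.T * E - (E * E.T).trace() * E == zeros(3, 3)
```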
\begin{result} \label{res:5pp} The Galois/monodromy groups of the five-point problem are as follows: \begin{align*} \mathrm{Mon} (W/Z) &\cong \left(C_2\right)^9 \rtimes (S_2 \wr S_{10}) \hookrightarrow S_{40} \\ \mathrm{Mon} (X/Z) &\cong S_2 \wr S_{10} \cap A_{20} \hookrightarrow S_{20}\\ \mathrm{Mon} (Y/Z) &\cong S_{10}, \end{align*} where $C_2^9$ is the subgroup of $(\tau_1, \ldots , \tau_{20}) \in (S_2)^{20}$ generated by $\tau_{2i-1} \tau_{2i}$ for $i=1,\ldots , 20.$ \end{result} \subsection{Two-view homography} \label{subsec:2viewhomography} We now move on to a minimal problem for planar scenes. We use notation similar to that of the five-point problem, with constraints \begin{equation} \label{eq:4p2v} \begin{split} \mathbf{R}^\top\mathbf{R} = \mathbf{I}, \quad \det \mathbf{R} = 1, \\ \beta_i\mathbf{y}_i = \mathbf{R}\alpha_i\mathbf{x}_i + \mathbf{t}, \;\; \alpha_i, \beta_i \neq 0, \quad i = 1,\dots,4, \\ \det\left(\begin{bmatrix} \alpha_1\mathbf{x}_1 & \alpha_2\mathbf{x}_2 & \alpha_3\mathbf{x}_3 & \alpha_4\mathbf{x}_4 \\ 1 & 1 & 1 & 1 \end{bmatrix}\right) = 0. \end{split} \end{equation} Our incidence variety $X$ is given by \[ X \subseteq \SO_\CC (3) \times \mathbb{P}_\mathbb{C}^{10} \times \underbrace{\left(\mathbb{C}^2 \times \{1 \}\right)^4 \times \left(\mathbb{C}^2 \times \{1 \}\right)^4}_{Z} \] such that Equations~\eqref{eq:4p2v} hold for all $\left(\mathbf{R}, (\mathbf{t}, \alpha_1, \ldots , \alpha_4, \beta_1, \ldots , \beta_4), (\mathbf{x}_1,\ldots , \mathbf{x}_4, \mathbf{y}_1 , \ldots , \mathbf{y}_4) \right) \in X.$ Projection of $X$ onto $Z$ defines a branched cover of degree $12.$ This branched cover is birationally equivalent to the joint camera map of the problem $\vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/c3200a.pdf}}} $ from~\cite{PLMP}, since the fifth point on both lines in each image is generically determined from the other four points.
Result~\ref{res:plmp} tells us the Galois/monodromy group is $(C_2 \times C_2) \rtimes (S_2 \wr S_3 \cap A_6).$ The GAP command \texttt{MinimalGeneratingSet} shows that this group is minimally generated by two permutations: in cycle notation, \begin{equation} \label{eq:homography-permutations} \mathrm{Mon} (X/Z) \cong \Big\langle (1 \,\,\, 2) (3 \,\,\, 4) (5 \,\,\, 12 \,\,\, 8 \,\,\, 9) (6 \,\,\, 11 \,\,\, 7 \,\,\, 10),\, (1 \,\,\, 11 \,\,\, 5) (2 \,\,\, 10 \,\,\, 8) (3 \,\,\, 9 \,\,\, 7) (4 \,\,\, 12 \,\,\, 6) \Big\rangle. \end{equation} \begin{figure}[H] \centering \begin{tikzpicture} \node (1) at (0,0) {$[12]$}; \node (2) at (0,-2) {$[4]$}; \node (3) at (-2,-4) {$[2]$}; \node (4) at (0,-4) {$\{ 1, 3 \}$}; \node (5) at (2,-4) {$\{ 1, 4 \}$}; \node (6) at (0,-6) {$[1]$}; \path[-] (2) edge node[left] {$3$} (1) (3) edge node[above left] {$2$} (2) (4) edge node[left] {$2$} (2) (5) edge node[above right] {$2$} (2) (6) edge node[below left] {$2$} (3) (6) edge node[left] {$2$} (4) (6) edge node[below right] {$2$} (5); \end{tikzpicture} \qquad \begin{tikzpicture} \node (1) at (0,0) {$\mathbb{C}(Z)$}; \node (2) at (0,-2) {$\mathbb{C}(X)_{\langle\Psi_1,\Psi_2\rangle}$}; \node (3) at
(-2,-4) {$\mathbb{C}(X)_{\langle\Psi_1\circ\Psi_2\rangle}$}; \node (4) at (0,-4) {$\mathbb{C}(X)_{\langle\Psi_1\rangle}$}; \node (5) at (2,-4) {$\mathbb{C}(X)_{\langle\Psi_2\rangle}$}; \node (6) at (0,-6) {$\mathbb{C}(X)$}; \path[-] (1) edge node[left] {$3$} (2) (2) edge node[above left] {$2$} (3) (2) edge node[left] {$2$} (4) (2) edge node[above right] {$2$} (5) (3) edge node[below left] {$2$} (6) (4) edge node[left] {$2$} (6) (5) edge node[below right] {$2$} (6); \end{tikzpicture} \caption{Correspondence between block systems (left) and intermediate fields (right) for the calibrated homography problem. The notation $K_H$ means the intermediate field of an extension $K/F$ fixed elementwise by a subgroup $H \le \Aut(K/F).$ } \label{fig:blocks_fields} \end{figure} The lattice of block systems is depicted on the left in Figure~\ref{fig:blocks_fields}. The vertex labels correspond to stabilizer subgroups of $\mathrm{Mon} (X/Z)$, and the edges are labeled by the degrees of maps appearing in some decomposition of the form in Equation~\eqref{eq:decomposition}. To the right is the inverted lattice of intermediate fields. Like the majority of examples in this paper, $\mathbb{C} (X) / \mathbb{C} (Z)$ is not a Galois extension. Before we determine a decomposition, we first describe the group of deck transformations. The centralizer in $S_{12}$ is \[ \Big\langle (1 \,\,\, 3) (2 \,\,\, 4) (5 \,\,\, 7) (6 \,\,\, 8) (9 \,\,\, 11) (10 \,\,\, 12), \, (1 \,\,\, 4) (2 \,\,\, 3) (5 \,\,\, 8) (6 \,\,\, 7) (9 \,\,\, 12) (10 \,\,\, 11) \Big\rangle \cong C_2 \times C_2. \] The deck transformation corresponding to the first generator is the twisted pair map $\Psi_1,$ defined just as in Equation~\eqref{eq:twistedPairMap}. The second is a reflection-rotation symmetry $\Psi_2$ depicted in Figure~\ref{fig:4p2v_reflection}. 
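Some of this permutation bookkeeping can be double-checked mechanically. The sketch below (ours; cycles converted to sympy's $0$-indexed convention) confirms that the displayed generators of $\mathrm{Mon}(X/Z)$ generate a transitive but imprimitive group of degree $12,$ that the two centralizer generators above commute and generate a Klein four-group, and that their product is the remaining nontrivial involution:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Generators of Mon(X/Z) from the minimal generating set (0-indexed cycles).
g1 = Permutation([[0, 1], [2, 3], [4, 11, 7, 8], [5, 10, 6, 9]], size=12)
g2 = Permutation([[0, 10, 4], [1, 9, 7], [2, 8, 6], [3, 11, 5]], size=12)
G = PermutationGroup([g1, g2])
assert G.is_transitive() and not G.is_primitive()

# The two displayed centralizer generators and their product.
psi1 = Permutation([[0, 2], [1, 3], [4, 6], [5, 7], [8, 10], [9, 11]], size=12)
psi2 = Permutation([[0, 3], [1, 2], [4, 7], [5, 6], [8, 11], [9, 10]], size=12)
assert psi1 * psi2 == Permutation(
    [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9], [10, 11]], size=12)
assert PermutationGroup([psi1, psi2]).order() == 4        # Klein four-group
assert g1 * psi1 == psi1 * g1 and g2 * psi1 == psi1 * g2  # psi1 centralizes G
```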
To get a formula for $\Psi_2,$ it is convenient to work with the equation of the unknown plane: \begin{equation} \label{eq:plane} \langle \mathbf{n} , \boldsymbol{X} \rangle = d. \end{equation} Note that $\mathbf{n}$ and $d$ depend rationally on the data. The formula for $\Psi_2$ is given by \begin{equation} \begin{split} \Psi_2 (\mathbf{R}) &= \mathbf{R} \, \left( 2\, \displaystyle\frac{\mathbf{n} \mathbf{n}^\top}{\mathbf{n}^\top \mathbf{n}} - \mathbf{I} \right) \\ \Psi_2 (\mathbf{t}) &= - \mathbf{t} - \frac{2 d}{\mathbf{n}^\top\mathbf{n}} \mathbf{R} \mathbf{n} \\ \Psi_2 (\alpha_i ) &= \alpha_i \\ \Psi_2 (\beta_i ) &= -\beta_i \\ \Psi_2 (\mathbf{x}_1, \ldots, \mathbf{x}_4, \mathbf{y}_1, \ldots , \mathbf{y}_4) &= (\mathbf{x}_1, \ldots, \mathbf{x}_4, \mathbf{y}_1, \ldots , \mathbf{y}_4). \end{split} \end{equation} To better understand the effect of $\Psi_2$ on $\mathbf{t},$ let $\boldsymbol{X}$ be any point on the scene plane and calculate \begin{align} -\mathbf{t} - \frac{2d}{\mathbf{n}^\top\mathbf{n}}\, \mathbf{R} \, \mathbf{n} &= -\mathbf{t} - \mathbf{R} \boldsymbol{X} - \mathbf{R} \left(2\frac{\mathbf{n}\mathbf{n}^\top}{\mathbf{n}^\top\mathbf{n}} - \mathbf{I}\right) \, \boldsymbol{X} \label{eq:deckX} \\ &= - \mathbf{R}\left( 2\frac{\mathbf{n}\mathbf{n}^\top}{\mathbf{n}^\top\mathbf{n}} - \mathbf{I} \right)\bigg(\Big( \mathbf{I} -2\frac{\mathbf{n}\mathbf{n}^\top}{\mathbf{n}^\top\mathbf{n}}\Big)\left(-\mathbf{R}^\top\mathbf{t} - \boldsymbol{X}\right) + \boldsymbol{X}\bigg) \\ &= - \Psi_2 (\mathbf{R}) \, \bigg(\Big( \mathbf{I} -2\frac{\mathbf{n}\mathbf{n}^\top}{\mathbf{n}^\top\mathbf{n}}\Big)\left(-\mathbf{R}^\top\mathbf{t} - \boldsymbol{X}\right) + \boldsymbol{X} \bigg) \end{align} This may be understood as follows: we take $- \mathbf{R}^\top \mathbf{t} ,$ which is the center of the second camera expressed in the frame of the first camera (cf.\ Eq.~\ref{eq:4p2v}), then reflect this vector through the plane and transform it back to the vector representing the center 
of the first camera expressed in the frame of the reflected second camera (by multiplying by $-\Psi_2(\mathbf{R})$). \begin{figure}[H] \def\svgwidth{\columnwidth} \import{./Figs/4p2v/}{reflection.pdf_tex} \caption{Reflection-rotation symmetry for Equations~\eqref{eq:4p2v}.} \label{fig:4p2v_reflection} \end{figure} \begin{proposition} \label{prop:4p2v_deck} $\Psi_1$ and $\Psi_2$ generate the deck transformation group for the planar calibrated homography problem $X\to Z$ defined by Equations~\eqref{eq:4p2v}. The corresponding permutations which centralize $\mathrm{Mon} (X/Z)$ are as follows: \begin{center} \begin{tabular}{ccc} $\Psi_1 \circ \Psi_2$ & $\leftrightarrow$ & $(1 \,\,\, 2) (3 \,\,\, 4) (5 \,\,\, 6) (7 \,\,\, 8) (9 \,\,\, 10) (11 \,\,\, 12) $\\ $\Psi_1$ & $\leftrightarrow $ & $(1 \,\,\, 3) (2 \,\,\, 4) (5 \,\,\, 7) (6 \,\,\, 8) (9 \,\,\, 11) (10 \,\,\, 12)$ \\ $\Psi_2$ & $\leftrightarrow $ & $(1 \,\,\, 4) (2 \,\,\, 3) (5 \,\,\, 8) (6 \,\,\, 7) (9 \,\,\, 12) (10 \,\,\, 11)$ \end{tabular} \end{center} These correspond to the maximal chains in the lattice of block systems (see Figure~\ref{fig:blocks_fields}). \end{proposition} \begin{proof} We verify that $\Psi_1$ and $\Psi_2$ really are deck transformations. The rest is elementary or follows from Proposition~\ref{prop:deck-centralizer}. Appealing to well-known properties of the twisted pair $\Psi_1,$ it suffices for us to check that planarity of the scene is preserved: \[ \det \left(\begin{bmatrix}\Psi_1(\alpha_1)\mathbf{x}_1 & \Psi_1(\alpha_2)\mathbf{x}_2 & \Psi_1(\alpha_3)\mathbf{x}_3 & \Psi_1(\alpha_4)\mathbf{x}_4 \\ 1 & 1 & 1 & 1 \end{bmatrix}\right) = 0.
\] Letting $\mathbf{m} = \frac{2}{\mathbf{t}^\top\mathbf{t}}\mathbf{R}^\top\mathbf{t} ,$ we may compute this determinant as follows: \begin{align*} \left(\displaystyle\prod_{i=1}^4 (1+\mathbf{m}^\top\alpha_i\mathbf{x}_i)\right)^{-1} \det\left(\begin{bmatrix} \alpha_1\mathbf{x}_1 & \alpha_2\mathbf{x}_2 & \alpha_3\mathbf{x}_3 & \alpha_4\mathbf{x}_4 \\ 1+\mathbf{m}^\top\alpha_1\mathbf{x}_1 & 1+\mathbf{m}^\top\alpha_2\mathbf{x}_2 & 1+\mathbf{m}^\top\alpha_3\mathbf{x}_3 & 1+\mathbf{m}^\top\alpha_4\mathbf{x}_4 \end{bmatrix}\right) &= \\ \left(\displaystyle\prod_{i=1}^4 (1+\mathbf{m}^\top\alpha_i\mathbf{x}_i)\right)^{-1}\det\left(\begin{bmatrix} \alpha_1\mathbf{x}_1 & \alpha_2\mathbf{x}_2 & \alpha_3\mathbf{x}_3 & \alpha_4\mathbf{x}_4 \\ 1 & 1 & 1 & 1 \end{bmatrix}\right) = 0. \end{align*} For $\Psi_2 ,$ it is clear that planarity of the scene is preserved and that $\Psi_2(\mathbf{R}) \in \SO_\CC (3).$ If we substitute $\Psi_2(x)$ into the point correspondence constraint in \eqref{eq:4p2v} and take $\boldsymbol{X} = \alpha_i \, \mathbf{x}_i$ in Equation~\eqref{eq:deckX}, then \[ -\beta_i\mathbf{y}_i = \mathbf{R}\left(2\frac{\mathbf{n}\mathbf{n}^\top}{\mathbf{n}^\top\mathbf{n}} - \mathbf{I}\right)\alpha_i\mathbf{x}_i + \left(-\mathbf{t} - \mathbf{R}\alpha_i\mathbf{x}_i - \mathbf{R}\left(2\frac{\mathbf{n}\mathbf{n}^\top}{\mathbf{n}^\top\mathbf{n}} - \mathbf{I}\right)\alpha_i\mathbf{x}_i\right) = -\mathbf{R}\alpha_i\mathbf{x}_i - \mathbf{t} . \] We conclude that Equations~\eqref{eq:4p2v} are invariant up to sign under application of $\Psi_2 .$ \end{proof} Finally, we describe a decomposition of $X \to Z$ \begin{equation}\label{4p2v:factor} X \dashrightarrow Y_1 \dashrightarrow Y_2 \to Z, \end{equation} corresponding to the left-most chain in Figure~\ref{fig:blocks_fields}.
This decomposition makes use of the \deff{calibrated homography matrix} associated to $(\mathbf{R} , \mathbf{t})$ and the scene plane: \begin{equation} \label{eq:homographyMatrix} \mathbf{H} = \mathbf{R} + \displaystyle\frac{1}{d} \mathbf{t} \mathbf{n}^\top . \end{equation} Up to scale, any $3\times 3$ matrix has the form~\eqref{eq:homographyMatrix}. On the other hand, any real calibrated homography matrix has a singular value equal to $1$ (see e.g.~\cite[Lemma 5.18]{ma2012invitation}), and thus lies on an irreducible hypersurface of degree $6$: \[ \mathcal{H}_1 = \{ \mathbf{H} \in \mathbb{C}^{3\times 3} \mid \det (\mathbf{H}^\top \mathbf{H} - \mathbf{I} ) = 0 \}. \] In our decomposition, we may take \[ Y_1 = \{ \left(\mathbf{H} , \left( \left(\mathbf{x}_1 ,\ldots , \mathbf{x}_4 \right) , \left(\mathbf{y}_1, \ldots, \mathbf{y}_4 \right) \right) \right) \in \mathcal{H}_1 \times \left(\mathbb{P}^2 \right)^4 \times \left( \mathbb{P}^2 \right)^4 \mid \mathbf{x}_i \sim \mathbf{H} \mathbf{y}_i , \, i =1 , \ldots , 4 \}. \] Here we use the standard notation $\sim$ to indicate that two vectors are equal up to scale. We note that each of these correspondence constraints is equivalent to the vanishing of three homogeneous linear equations, only two of which are independent: \[ \matrix{\mathbf{x}_i}_\times\mathbf{H}\mathbf{y}_i = 0 .
\] A short calculation reveals that $x\in X_z$ and $\Psi_1\circ\Psi_2(x) \in X_z$ map to the same point in $Y_1.$ We also note that $Y_1$ is irreducible, since it is Zariski-open in the graph of $\ratnoname{\mathcal{H}_1 \times \left (\mathbb{P}^2\right)^4}{\left(\mathbb{P}^2\right)^4}.$ The projection $Y_1 \to Z$ has a deck transformation given by the sign-symmetry $\mathbf{H} \mapsto - \mathbf{H} .$ To remove this last symmetry, we define \begin{align} \label{eq:Smatrix} s &= \displaystyle\frac{1}{\mathbf{H}_{1,1}^2} \nonumber \\ \SS &= \displaystyle\frac{1}{\mathbf{H}_{1,1}} \, \mathbf{H} \end{align} and take $\ratnoname{Y_1}{Y_2} \subset \mathbb{C}^9 \times \left(\mathbb{P}^2\right)^4 \times \left(\mathbb{P}^2\right)^4$ by mapping $\mathbf{H}$ to $s$ and the 8 non-constant entries of $\SS.$ Algebraically, the ideal \begin{equation} \label{eq:Sideal} \langle \det (\SS^\top \SS - s \mathbf{I} ), \, \matrix{\mathbf{x}_1}_\times\SS \mathbf{y}_1, \, \matrix{\mathbf{x}_2}_\times\SS \mathbf{y}_2,\, \matrix{\mathbf{x}_3}_\times\SS \mathbf{y}_3, \, \matrix{\mathbf{x}_4}_\times\SS \mathbf{y}_4 \rangle \end{equation} has dimension $0$ and degree $3 = \deg (Y_2 /Z)$ for generic data $(\mathbf{x}_1, \ldots , \mathbf{y}_4 ) \in Z.$ The algebraic complexity as captured by the Galois group matches that of a well-known algorithm for computing $\mathbf{H},$ in which one must compute the singular values of a $3\times 3$ matrix $\lambda \mathbf{H}$ recovered up to scale from the four point correspondences (see, e.g.,~\cite[Algorithm 4.1]{HZ-2003} or~\cite[Algorithm 5.2]{ma2012invitation}). 
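Both the hypersurface condition defining $\mathcal{H}_1$ and the sign-symmetry-free coordinates~\eqref{eq:Smatrix} are easy to sanity-check numerically. The NumPy sketch below is our own illustration (random data and ad hoc tolerances; it is not part of any solver): it samples a calibrated homography of the form~\eqref{eq:homographyMatrix}, verifies $\det(\mathbf{H}^\top\mathbf{H} - \mathbf{I}) = 0$, and confirms that $(s, \SS)$ is invariant under $\mathbf{H} \mapsto -\mathbf{H}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# sample a rotation R in SO(3) via QR decomposition
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1
R, t, n, d = Q, rng.standard_normal(3), rng.standard_normal(3), 2.0

H = R + np.outer(t, n) / d  # calibrated homography H = R + (1/d) t n^T
# membership in H_1: det(H^T H - I) = 0, i.e. 1 is a singular value of H
assert abs(np.linalg.det(H.T @ H - np.eye(3))) < 1e-8
# an explicit eigenvector: v = n x (R^T t) is orthogonal to both n and
# R^T t, so H^T H v = v
v = np.cross(n, R.T @ t)
assert np.allclose(H.T @ H @ v, v)

def normalize(H):
    # the coordinates (s, S) from the text: both invariant under H -> -H
    return 1.0 / H[0, 0] ** 2, H / H[0, 0]

s_pos, S_pos = normalize(H)
s_neg, S_neg = normalize(-H)
assert np.isclose(s_pos, s_neg) and np.allclose(S_pos, S_neg)
```

The same membership test, applied to a matrix $\lambda\mathbf{H}$ known only up to scale, is what underlies the singular-value-based recovery of $\mathbf{H}$ mentioned above.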
\subsection{Three-view homography} \label{subsec:3viewhomography} Finally, we consider the minimal problem $\vcenter{\hbox{\includegraphics[scale = 0.06]{Figs/c3100.pdf}}} $, where the task is to recover the relative orientation of three cameras from the input data of four point correspondences which lie on the incidence variety \[ Z = \{ (\mathbf{x}_1, \, \ldots , \, \mathbf{x}_4, \, \mathbf{y}_1 , \, \ldots , \, \mathbf{y}_4, \, \mathbf{z}_1 , \, \ldots , \, \mathbf{z}_4) \in \left(\mathbb{P}^2 \right)^{12} \mid \mathbf{x}_3 \in \agline{\mathbf{x}_1}{\mathbf{x}_2}, \, \mathbf{y}_3 \in \agline{\mathbf{y}_1}{\mathbf{y}_2}, \, \mathbf{z}_3 \in \agline{\mathbf{z}_1}{\mathbf{z}_2} \}. \] Unlike in the previous section, there is no longer a twisted pair symmetry. However, there are two symmetries analogous to the deck transformation $\Psi _2 .$ For this problem, the joint camera map defined in~\cite{PLMP} is birationally equivalent to a branched cover whose fibers are pairs of homography matrices, which are compatible in the sense that they share the same normal vector. Thus, the solutions of interest lie on the subvariety $\mathcal{H}_2 \subset \left(\mathbb{C}^{3\times 3}\right)^2$ defined to be the closed image of the map \begin{align*} \left(\SO_\CC (3) \right)^2 \times \left(\mathbb{C}^3 \right)^3 &\rightarrow \left(\mathbb{C}^{3\times 3}\right)^2\\ (\mathbf{R}_1, \mathbf{R}_2, \mathbf{t}_1, \mathbf{t}_2, \mathbf{n} ) &\mapsto (\mathbf{R}_1 + \mathbf{t}_1 \mathbf{n}^\top , \mathbf{R}_2 + \mathbf{t}_2 \mathbf{n}^\top). \end{align*} Notice that, unlike in \eqref{eq:homographyMatrix}, we have absorbed the constant $\frac{1}{d}$ into $\mathbf{t}$ for each homography matrix. 
We wish to compute the fibers of the branched cover $X \to Z,$ where \begin{equation} \label{eq:3way-homography-equations} X = \{ \left( (\mathbf{H}_1, \, \mathbf{H}_2), \, (\mathbf{x}_1, \, \ldots , \, \mathbf{z}_4)\right) \in \mathcal{H}_2 \times Z \mid \mathbf{x}_i \sim \mathbf{H}_1 \, \mathbf{y}_i \sim \mathbf{H}_2 \, \mathbf{z}_i , \, i=1,\ldots 4 \}. \end{equation} For this problem, we have $\deg (X/Z) = 64,$ and Result~\ref{res:plmp} tells us $\mathrm{Mon} (X/Z) \cong S_2 \wr (S_2 \wr S_{16} \cap A_{32}) \cap A_{64}.$ It follows that there exists a decomposition \[ X \dashrightarrow Y_1 \dashrightarrow Y_2 \dashrightarrow Z \] with $\deg (X/Y_1) = \deg (Y_1 /Y_2) = 2$ and $\deg (Y_2/ Z) = 16.$ The deck transformations of $X \to Z$ are easily seen to be $(\mathbf{H}_1 , \mathbf{H}_2)\mapsto (\pm \mathbf{H}_1 , \pm \mathbf{H}_2).$ Thus, we may use separating invariants for this linear group action as in the previous section to write down the maps $\ratnoname{X}{Y_1}$ and $\ratnoname{Y_1}{Y_2}.$ However, our description of $X$ is unsatisfying from the point of view of constructing polynomial solvers, since we have only described $\mathcal{H}_2$ parametrically. We leave determining the ideal $\mathcal{I}_{\mathcal{H}_2}$ as a challenging open problem in algebraic vision, analogous to previous works~\cite{MULTIVIEWIDEAL,TRIFOCALIDEAL}. 
Our final Result~\ref{res:degree64} is a partial solution to this implicitization problem, which describes an ideal contained in $\mathcal{I}_{\mathcal{H}_2}.$ The generators of this ideal and the linear correspondence constraints in~\eqref{eq:3way-homography-equations} generate a $0$-dimensional ideal of degree $64$ for generic data $z=(\mathbf{x}_i, \mathbf{y}_i, \mathbf{z}_i).$ Drawing on the description of $\mathcal{H}_1$ from the previous section, consider the map \begin{align*} \mathcal{H}_2 &\rightarrow \left(\mathbb{C}^{3\times 3}\right)^2 \\ (\mathbf{H}_1,\mathbf{H}_2) &\mapsto (\mathbf{H}_1^\top \mathbf{H}_1 - \mathbf{I} , \mathbf{H}_2^\top \mathbf{H}_2 - \mathbf{I}). \end{align*} The image of this map has the alternate parametrization \begin{align*} \left(\mathbb{C}^{3\times 1}\right)^3 &\rightarrow \left(\mathbb{C}^{3\times 3}\right)^2\\ (\mathbf{d}_1, \mathbf{d}_2, \mathbf{n}) &\mapsto (\mathbf{n} \mathbf{d}_1^\top + \mathbf{d}_1 \mathbf{n}^\top, \mathbf{n} \mathbf{d}_2^\top + \mathbf{d}_2 \mathbf{n}^\top ). \end{align*} Using Macaulay2, we compute implicit equations in new matrix variables $\mathbf{W}_i = \mathbf{n} \mathbf{d}_i^\top + \mathbf{d}_i \mathbf{n}^\top,$ $i=1,2.$ The resulting elimination ideal in $\mathbb{C} [\mathbf{W}_1, \mathbf{W}_2]$ is generated by four cubics and 15 quartics. The cubic constraints obtained are \begin{equation}\label{eq:cubicW} \det (\mathbf{W}_1) = \det (\mathbf{W}_2) = \det (\mathbf{W}_1 + \mathbf{W}_2) = \det (\mathbf{W}_1 - \mathbf{W}_2) = 0. \end{equation} These cubics can be understood in terms of the alternate parametrization, which shows that generic $(\mathbf{W}_1, \mathbf{W}_2)$ in the image will span a pencil of rank-$2$ symmetric matrices. 
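The four cubics in~\eqref{eq:cubicW} can be checked directly against the alternate parametrization. The following NumPy snippet (our own illustration, with random data) confirms that every matrix in the pencil spanned by $\mathbf{W}_1$ and $\mathbf{W}_2$ is singular:

```python
import numpy as np

rng = np.random.default_rng(1)
n = rng.standard_normal(3)
d1, d2 = rng.standard_normal(3), rng.standard_normal(3)

# alternate parametrization: W_i = n d_i^T + d_i n^T is symmetric of rank <= 2
W1 = np.outer(n, d1) + np.outer(d1, n)
W2 = np.outer(n, d2) + np.outer(d2, n)

# a*W1 + b*W2 = n (a d1 + b d2)^T + (a d1 + b d2) n^T has the same rank-2
# form for any scalars a, b, so in particular all four cubics vanish
for M in (W1, W2, W1 + W2, W1 - W2):
    assert abs(np.linalg.det(M)) < 1e-10
```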
In what follows, it is enough for us to consider two of the 15 quartics, which have alternate expressions in terms of resultants: \begin{equation} \label{eq:resultants} \begin{split} \mathrm{Res}_{n_1} \left(\mathbf{W}_1^{3,3}n_1^2 -2\mathbf{W}_1^{1,3}n_1 + \mathbf{W}_1^{1,1}, \mathbf{W}_2^{3,3}n_1^2 -2\mathbf{W}_2^{1,3}n_1 + \mathbf{W}_2^{1,1}\right) = 0\\ \mathrm{Res}_{n_2}\left(\mathbf{W}_1^{3,3}n_2^2 -2\mathbf{W}_1^{2,3}n_2 + \mathbf{W}_1^{2,2}, \mathbf{W}_2^{3,3}n_2^2 -2\mathbf{W}_2^{2,3}n_2 + \mathbf{W}_2^{2,2}\right) = 0 \end{split} \end{equation} where $\mathbf{n} = (n_1, n_2, n_3).$ Substituting $\mathbf{W}_i = \mathbf{H}_i^\top \mathbf{H}_i - \mathbf{I} $ into~\eqref{eq:cubicW} yields four polynomials of degree $6$ vanishing on $\mathcal{H}_2.$ Using Bertini~\cite{Bertini}, we computed points where these equations vanish by tracking $6^4 = 1296$ homotopy paths. Out of these points, $336$ lie on $\mathcal{H}_2.$ The remaining $960$ paths resulted in $224$ points not on $\mathcal{H}_2,$ each occurring with multiplicity $4.$ We confirmed that the degree of the variety $\mathcal{H}_2$ is indeed $336$ using monodromy. Unlike in the five-point problem, the linear equations implied by $\mathbf{x}_i \sim \mathbf{H}_1 \, \mathbf{y}_i \sim \mathbf{H}_2 \, \mathbf{z}_i $ are non-generic. The number of solutions to these linear equations and the four degree-$6$ equations obtained from~\eqref{eq:cubicW} is $320.$ Out of these solutions, only $84$ satisfy the degree-$8$ equations obtained from~\eqref{eq:resultants}. To obtain the degree $64$ reported in~\cite{PLMP}, it is sufficient to impose the additional constraint $\det \mathbf{H}_1 \ne 0.$ In summary, we have the following result. 
\begin{result} \label{res:degree64} For each $z=\left( \mathbf{x}_1, \, \ldots , \, \mathbf{z}_4\right) \in Z,$ let $I_z \subset \mathbb{C} [\mathbf{H}_1, \mathbf{H}_2, D] $ be the ideal in $19$ variables generated by the linear relations $\mathbf{x}_i \sim \mathbf{H}_1 \, \mathbf{y}_i \sim \mathbf{H}_2 \, \mathbf{z}_i $ for $i=1, \ldots , 4,$ the saturation constraint $D \det \mathbf{H}_1 -1=0,$ and six equations obtained by setting $(\mathbf{W}_1, \mathbf{W}_2) = (\mathbf{H}_1^\top \mathbf{H}_1 - \mathbf{I} , \mathbf{H}_2^\top \mathbf{H}_2 - \mathbf{I} ) $ in equations~\eqref{eq:cubicW} and~\eqref{eq:resultants}. For generic $z\in Z,$ we have $\deg (I_z) = 64.$ Moreover, after substituting $s_1, s_2, \SS_1, \SS_2$ as in~\eqref{eq:Smatrix}, \eqref{eq:Sideal}, we obtain an ideal of degree $16$ for generic data. \end{result} \section{Conclusion and outlook} \label{sec:outlook} Galois/monodromy groups reveal the intrinsic algebraic structure of problems in enumerative geometry. We have shown how this structure can be revealed on both new and old examples from computer vision. We believe that numerical algebraic geometry and computational group theory are valuable additions to the arsenal of techniques used by researchers interested in building minimal solvers. Although these methods allow us to identify decomposable branched covers, the problem of automatically computing a decomposition as in Equation~\eqref{eq:decomposition} seems hard in general. We have shown how understanding the underlying geometry allows us to solve it in cases of interest. Finally, we note that the techniques in this paper might also be applied to non-minimal problems, where the branched covers $\ratnoname{X}{Z}$ of interest are such that $Z$ is a low-dimensional subvariety of some ambient space of data. In principle, the Galois/monodromy group may be computed by projecting $Z$ onto an affine space of the same dimension and applying Part 2) of Proposition~\ref{prop:factor}. 
This, however, lies beyond the scope of our work. \section*{Acknowledgements} We are grateful to ICERM (NSF DMS-1439786 and the Simons Foundation grant 507536) for their support and for hosting three events where significant progress on this project was made: the Fall 2018 semester program on Nonlinear Algebra, the Winter 2019 Algebraic Vision research cluster, and the Fall 2020 virtual workshop on Monodromy and Galois groups in enumerative geometry and applications. V.~Korotynskiy and T.~Pajdla were supported by the EU Structural and Investment Funds, Operational Programme Research, Development and Education under the project IMPACT (reg.\ no.\ CZ$.02.1.01/0.0/0.0/15\_003/0000468$), the EU H2020 ARtwin No.~856994, and EU H2020 SPRING No.~871245 Projects. T.~Duff received partial support from NSF DMS \#1719968 and \#2001267. M.~Regan was supported in part by Schmitt Leadership Fellowship in Science and Engineering and NSF grant CCF-1812746. We are particularly grateful to Jon Hauenstein, Anton Leykin, and Frank Sottile for helpful suggestions. \bibliographystyle{amsplain}
\section{Conclusion} \label{sec:conclusion} We have shown that it is possible for an attacker, with modest resources, to determine the current IP address of an identified and targeted Skype user (if the user is currently active). It may be possible to do this for other real-time communication applications that also send datagrams directly between caller and callee (such as MSN Live, QQ, and Google Talk). In the case of Skype, even if the targeted user is behind a NAT, the attacker can determine the user's public IP address. Such an attack could be used for many malicious purposes, including observing a person's mobility or linking the identity of a person to his Internet usage. We have further shown that by deploying modest resources, it is possible for an attacker to scale this scheme to not just one user but tens of thousands of users simultaneously. A prankster could use this scalable calling scheme to, for example, create a public web site which provides the mobility and file-sharing history of all active Skype users in a city or a country. Parents, employers, and spouses could then search such a web site to determine the mobility and file-sharing history of arbitrary Skype users. \bigskip {\bf Acknowledgements.} We would like to thank the volunteers for the considerable amount of time they have spent to help us test our scheme. We also would like to thank Justin Cappos, David Choffnes, Paul Francis, Krishna Gummadi, Engin Kirda, and the anonymous reviewers for their constructive comments. This research was partially supported by the NSF grant CNS-0917767. \section{Defenses} \label{sec:defenses} In the previous sections we have seen that it is possible for an attacker to develop and deploy (possibly from a home) a tool that periodically determines the current IP address of a targeted VoIP user. Even if the VoIP user is behind a NAT, the attacker can determine the user's public IP address. 
Observing the mobility of a targeted individual could be used for many malicious purposes. In this section we briefly discuss defenses for this attack, both at the application level and at the user level. One measure that can go a long way is for the designers of the VoIP signaling protocol to simply {\em ensure that the callee's IP address is not revealed to the caller until the callee accepts the call}. That is, before the callee accepts the call, the callee's signaling packets are sent to supernodes or infrastructure nodes, and not to the caller; furthermore, the caller is not provided the callee's IP address during call set-up. If the callee's IP address is only revealed after the callee accepts the call, then $(i)$ it is no longer possible to make an inconspicuous call to the target; and $(ii)$ if Alice chooses to block all calls from strangers (i.e., people not on her contact list), then a stranger will no longer be able to determine her IP address and observe her mobility. This solution has a very low overhead as only a few signaling messages are relayed. Thus, we strongly recommend that all VoIP applications adopt this simple mechanism. However, even with this simple mechanism in place, a friend of Alice (that is, anyone on her contact list, including friends, old boyfriends, family members, employers, and employees) would still be able to determine her IP address (and location) when they call her and she accepts the call. We now outline some measures that defend against this attack. One blanket defense for these attacks is to have all calls pass through relays. When a datagram passes through a relay, the relay regenerates the datagram with the source IP address of the relay. If the relay can be trusted, then neither party in the call sees the other's IP. In fact, in Skype, if both caller and callee are behind a NAT, then the call is typically relayed through a third Skype user who is not behind a NAT. 
The relays must be selected so as not to give away the location of the callee. (For example, the system shouldn't strive to find a relay in the same city as the callee.) The main problem with this solution is that it detracts from the efficiencies of P2P communication because $(i)$ relays must now be made available to support the huge bandwidth demands of large-scale real-time voice and video communication systems; and $(ii)$ access ISPs will see an increase in upstream and downstream relay traffic. To avoid routing traffic through relays excessively, the system can be designed so that Alice can specify for which contacts in her address book the calls are to be routed through relays. For example, if Alice is only concerned about her boss observing her mobility, she can configure her client to have calls between her and her boss pass through relays. The client could also be designed to make this decision on a call-by-call basis: whenever her boss attempts to call her, she is asked whether this should be a P2P or relayed call. We briefly mention that another approach for providing location privacy is to run the P2P communication application through a third-party anonymizing service such as Tor \cite{Tor}. However, the delay and throughput performance of Tor and similar services is clearly insufficient for supporting real-time voice and video \cite{oneswarm, Waiting-for-anonymity}. In addition to being inefficient, Tor also introduces privacy issues for certain applications (e.g., P2P file sharing) \cite{Bad-Apple}. We conclude this brief discussion on defenses by mentioning that these location attacks actually have their roots in the current Internet architecture, for which all datagrams carry source and destination IP addresses. We are not advocating a total re-design of the Internet, but we mention that this and other Internet privacy problems could be resolved by using alternative underlying network architectures. 
For example, if the Internet were to use virtual circuits (as with X.25 and ATM), then it would be much more difficult for a stranger or a friend to observe a user's mobility. \section{File-Sharing Usage of Skype Users} \label{sec:filesharing} In the previous sections, we established that it is possible to map a person to his IP address in a scalable manner. We are now interested in validating that this scheme introduces a linkability threat where the identity of a person can be associated with his Internet usage. In particular, we focus in this section on finding the identity of file-sharing users. We focus on the BitTorrent application; however, other P2P applications -- such as eMule \cite{emule} or Xunlei \cite{xunlei} -- could instead be used. One of the challenges here is that many file-sharing users are NATed, that is, they may share their IP address with several users. We present in the following a scheme exploiting the identification field in the IP datagrams to check whether two different applications actually run on the same machine. To the best of our knowledge, we are the first to run and validate such a scheme in the wild. In this section, we anonymize (as described in Section~\ref{sec:single-geo}) all localization information, do not store IP addresses after the verification procedure, and never store any information (including the infohash and the content name) related to the contents downloaded by a given user. \subsection{Methodology} Our measurement system comprises a \textit{Skype tracker}, an \textit{Infohash crawler}, a \textit{BitTorrent crawler}, and a \textit{Verifier}, which communicate through shared storage. We begin by randomly selecting a set of 100,000 identified Skype users. The Skype tracker employs ten tracking clients to collect, on a daily basis, the IP addresses of the 100,000 users. The Infohash crawler determines the infohashes (file identifiers) of the 50,000 most popular BitTorrent swarms. 
Operating in parallel with the Skype tracker, the BitTorrent crawler collects the IP addresses participating in the 50,000 most popular swarms, and determines the IP addresses found in both Skype and BitTorrent. Finally, the Verifier attempts to initiate P2P communications with the two applications in order to verify that the same user is indeed running both of them. In the following, we describe in more detail the operation of each component. The operation of the Verifier will be described in Section~\ref{sec:filesharing-verifier}. \paragraph{The Skype Tracker} We use the methodology developed in Section~\ref{sec:mapping} and Section~\ref{sec:location} to find 100,000 active Skype users. To call 100,000 Skype users daily, the Skype tracker uses ten tracking clients. Because we are now not interested in fine-grained mobility measurements but instead in file-sharing usage, we only call each user once per day. We then analyze packet patterns to determine the latest IP address of these users and temporarily save them to shared storage. (Keep in mind we collect the IP addresses not only of users that are online but also of all users that have logged into the system in the last 72 hours.) These IP addresses are then loaded from the shared storage by the BitTorrent crawler to determine which files are distributed from these IP addresses. \paragraph{The Infohash Crawler} We collect file identifiers (infohashes) from the PublicBitTorrent tracker \cite{public-BT}, which is the largest BitTorrent tracker at the time of this writing. PublicBitTorrent publishes a file with all the infohashes it tracks on its website. This file is the dump of a request, \textit{scrape-all}, supported by trackers running the OpenTracker software \cite{BT-Spying}. This request returns all infohashes of files it is tracking and the number of downloaders (leechers) and uploaders (seeds). 
We download this file every day from the PublicBitTorrent website and extract the infohashes for the 50,000 most popular files. \paragraph{The BitTorrent Crawler} In this step, we seek an efficient mechanism to obtain the IP addresses participating in the 50,000 most popular torrents. BitTorrent trackers such as PublicBitTorrent support a request, \textit{announce started}, that returns a list of peers participating in a torrent identified by an infohash. As tracker developers became aware that such requests can be abused, they started to limit the number of requests a given peer can send before being blacklisted. Therefore, instead of using the PublicBitTorrent tracker to collect IP addresses, we use a decentralized tracker (DHT). We collect the IP addresses participating in the top 50,000 torrents from the Mainline DHT every hour for two weeks. This DHT is a decentralized tracker that is primarily used by $\mu$Torrent \cite{uTorrent} and Mainline BitTorrent \cite{bittorrent}, the most popular BitTorrent clients. However, we note that other popular P2P file-sharing clients, such as Xunlei, also support it. When a peer wants to download a new file, it contacts the Mainline DHT to obtain a list of peers distributing that file. This peer first finds the DHT node maintaining the list of peers for that file using the \textit{find\_node} request. That request takes an infohash as a parameter, and essentially returns the ID and (IP, port) pair of the DHT node responsible for that infohash. Then, the peer sends a \textit{get\_peers} request to that node, which returns a list of (IP, port) pairs belonging to peers distributing the file. We observed that, unlike centralized trackers, DHT nodes do not implement blacklisting strategies. So we located the nodes responsible for the 50,000 files that we wanted to crawl and then repeatedly sent get\_peers requests to collect the peers distributing these files. The whole procedure, distributed over 10 machines, takes about one hour. 
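The \textit{find\_node} and \textit{get\_peers} messages above are bencoded KRPC queries sent over UDP, as specified in the public Mainline DHT protocol (BEP~5). As a sketch (our own minimal reconstruction from that specification; the crawler's actual implementation is not shown in this paper), a \textit{get\_peers} query can be assembled as follows:

```python
def bencode(obj):
    # minimal bencoder: integers, byte strings, lists, and dictionaries
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):
        # dictionary keys must be byte strings, serialized in sorted order
        return b"d" + b"".join(bencode(k) + bencode(v)
                               for k, v in sorted(obj.items())) + b"e"
    raise TypeError("cannot bencode %r" % type(obj))

def get_peers_query(node_id, info_hash, txn=b"aa"):
    # KRPC get_peers query: asks a DHT node for peers distributing info_hash
    return bencode({b"t": txn, b"y": b"q", b"q": b"get_peers",
                    b"a": {b"id": node_id, b"info_hash": info_hash}})

msg = get_peers_query(b"\x01" * 20, b"\x02" * 20)
assert msg.startswith(b"d1:ad2:id20:") and msg.endswith(b"1:t2:aa1:y1:qe")
```

Repeatedly sending such datagrams to the nodes responsible for each of the 50,000 infohashes, and parsing the (IP, port) pairs out of the bencoded responses, yields the peer lists described above.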
Each of our crawling bots periodically loads the (Skype\_ID, IP) pairs of active Skype users into memory. If the IP address of an active Skype user is also found in a BitTorrent swarm, the user is possibly downloading the corresponding file (this correlation is performed on-the-fly and we never store the (IP address, infohash) mapping). {\em However, we must verify this hypothesis as an IP address may correspond to a NAT shared by several users.} We refer to this problem as the NAT problem. We note that several types of middleboxes, including NATs and IPv6 routers, can use a single public IP address for different users. For the sake of simplicity, in the following we use the term NAT to refer generically to such middleboxes. (We note that dynamic IP addresses can also be shared by several users, resulting in the same problem.) \subsection{The NAT Problem} \begin{figure}[!t] \centering \includegraphics[width=0.962\columnwidth]{eps/usage_NAT.eps} \caption{CDF of the users using BitTorrent as a function of BitTorrent ports. \textit{50\% of collected BitTorrent users share their IP address with other BitTorrent users.}} \label{fig:usage-NAT} \end{figure} Depending on the Internet connectivity of a user, an IP address may correspond to a computer or to a NAT shared by a household, a company, or even an ISP. Because several users can share the same IP address, we may wrongly associate an identified Skype user with the BitTorrent downloads of another user behind the same NAT. To the best of our knowledge, all BitTorrent clients multiplex torrents on a single port. This port is picked at random at the installation of the client, and remains the same across subsequent sessions. Therefore, we can associate each IP/port pair to a single BitTorrent user \cite{BT-Spying}. However, this observation alone does not allow us to match a Skype user to a BitTorrent user when the user is behind a NAT, as described below. 
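The per-port disambiguation just described amounts to simple bookkeeping over the crawled (IP, port) pairs: an IP address exposing more than one BitTorrent port is shared by more than one BitTorrent user. A toy Python sketch (the observations and addresses below are made up for illustration):

```python
from collections import defaultdict

# hypothetical crawl observations: (public IP, BitTorrent port) pairs
observations = [
    ("198.51.100.7", 51413),
    ("198.51.100.7", 51413),   # same user seen in two swarms
    ("203.0.113.9", 6881),
    ("203.0.113.9", 28641),    # two ports: two users behind one NAT
    ("192.0.2.44", 40123),
]

ports_per_ip = defaultdict(set)
for ip, port in observations:
    ports_per_ip[ip].add(port)

# IPs with more than one port are shared by several BitTorrent users
shared = {ip for ip, ports in ports_per_ip.items() if len(ports) > 1}
assert shared == {"203.0.113.9"}
```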
We found 15,000 users (out of 100,000) who have IP addresses that were simultaneously found in Skype and BitTorrent during a period of two weeks. Of these 15,000 Skype users using BitTorrent, approximately 7,500 (50\%) share their IP address with another BitTorrent user (as indicated by users with more than one port in Fig.~\ref{fig:usage-NAT}). In other words, a significant fraction of the 15,000 Skype users are behind a NAT and may therefore not be the ones using BitTorrent (false positives). \subsection{The Verifier} \label{sec:filesharing-verifier} We now describe the operation of our Verifier tool, which is responsible for definitively establishing whether Skype and BitTorrent are run on the same machine. Although more than one person may \textit{simultaneously} share the same machine, the granularity of a machine is sufficient for our purpose. For the sake of simplicity, we assume in the following that each machine is used by a single person. Given an IP address that participates in both Skype and BitTorrent (matching IP), we now describe how the Verifier makes sure the person identified in Skype is indeed the one using BitTorrent. Consider a scenario where two users, Alice and Bob, are behind the same NAT. Suppose that, by calling Alice on Skype, we have determined that her IP address is in a swarm in BitTorrent, but the IP address is a NATed one. Two scenarios are possible. In the first scenario, Alice is using both Skype and BitTorrent on the same host. In the second scenario, Alice is using Skype on one host and Bob is using BitTorrent on another host. The second scenario corresponds to a false positive because Alice is not the one using BitTorrent. To detect false positives, we leverage the predictability of the identification field in the IP datagrams (IP-ID) originating from the same machine \cite{Counting}. 
As soon as the BitTorrent crawler detects a matching IP address, it signals the Verifier, which immediately calls the corresponding Skype user and, at the same time, initiates a handshake with the BitTorrent client. If the distance between the IP-IDs generated by Skype and those generated by BitTorrent is small, Alice is very likely to be the identified BitTorrent downloader. Otherwise, Alice is likely to be a false positive. At the end of the verification procedure, IP addresses are anonymized using a salted hash. All subsequent analysis is performed on this anonymized data. \paragraph{Limitations} Our verification procedure has two limitations. The first limitation is that we can only initiate communication with public peers or NATed peers that accept incoming communications (e.g., when UPnP is used). This limitation significantly restricts the number of BitTorrent users we can verify. However, for this proof of concept, it is not necessary to verify all the Skype users who are downloading with BitTorrent. An aggressive attacker could easily verify more users by registering the IP address of the Verifier with the Mainline DHT. In this manner he would also receive incoming communication from peers whose NATs refuse incoming communications. Therefore, an attacker could in principle verify NATed peers also. The second limitation is that we assume that the IP-IDs originating from the same machine are predictable, which depends on two conditions. The first condition is that the Operating System (OS) assigns IP-IDs predictably (e.g., sequentially). Because IP-IDs are assigned by the OS's network stack, this condition depends on the fraction of OSes observed in the wild whose assignment is indeed predictable. By testing Windows XP, Vista, and 7, we verified that they all use sequential IP-IDs. 
As these three versions of Windows alone account for 90\% of all OSes found in the wild \cite{OS}, we conclude that this first condition is largely met. The second condition is that NATs do not modify the IP-IDs as assigned by the machine's network stack. This condition is supported by $(i)$ related work in which this behavior was not observed in practice \cite{Counting} and by $(ii)$ the specification of the IPv4 ID field, which states that NATs should ignore this field \cite{IP-ID}. In conclusion, we expect our verification procedure, which is based on the predictability of the IP-ID field, to be highly accurate, that is, with no, or few, false positives (due to similar IP-IDs originating from different machines) and relatively few false negatives (due to OSes with unpredictable IP-ID assignment or IPv6 routers that re-assign IP-IDs unpredictably). \subsection{Experimental Results} By running our verification procedure for two weeks, we successfully triggered communication between the Verifier and 765 unique users on both Skype and BitTorrent. We refer to these users as {\em verifiable}. \begin{figure}[!t] \centering \includegraphics[width=0.962\columnwidth]{eps/usage_verification.eps} \caption{After two weeks, we plot the 90th percentile of the shortest distance between the IP-IDs on a ring of $2^{16}$ elements of the first Skype and BitTorrent packets received from a verifiable user, sorted by increasing 90th percentile (curve). There is one dot per verification experiment. \textit{We verify 400 users out of 765 users.}} \label{fig:usage-verification} \end{figure} We investigate the fraction of verifiable users that we actually fully verified. For the 765 verifiable users, we compute the shortest distance on a ring of $2^{16}$ elements between the IP-IDs of the first packet received from Skype and from BitTorrent. The smaller the distance, the more likely the identified Skype user is indeed using BitTorrent. 
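The distance used here is the shortest path between two IP-IDs on the ring $\mathbb{Z}/2^{16}$, which correctly handles wrap-around of the sequential counter. A sketch (our own, with illustrative values; the threshold of 1,000 matches the one applied to Fig.~\ref{fig:usage-verification}):

```python
RING = 2 ** 16  # the IPv4 identification field is 16 bits wide

def ipid_distance(a, b):
    # shortest distance between two IP-IDs on a ring of 2^16 elements
    d = (a - b) % RING
    return min(d, RING - d)

# interleaved Skype and BitTorrent packets from one sequential-IP-ID host
assert ipid_distance(100, 180) == 80        # same machine: close
assert ipid_distance(65530, 12) == 18       # still close across wrap-around
# packets from two unrelated machines are far apart with high probability
assert ipid_distance(5, 40000) == 25541
threshold = 1000
assert ipid_distance(65530, 12) < threshold < ipid_distance(5, 40000)
```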
In Fig.~\ref{fig:usage-verification}, we see that running this procedure finds 400 unique users for whom the 90th percentile of the distance is less than 1,000. We conclude that approximately 400 users (52\% of the 765 verifiable users) are indeed using BitTorrent. We cannot conclude for sure that the remaining 48\% of the verifiable users are not BitTorrent users (they might be false negatives). However, as we have seen that at least 90\% of the OSes use sequential IP-IDs, we strongly believe that most of them are not using BitTorrent. \begin{table} \small \centering \begin{tabular}{|c|c|c|c|c|c|} \hline Rank & \# Files & First name & Last name & City & Country \\ \hline 1 & 23 & \ding{51} & \ding{51} & \ding{51} & \ding{51}\\ 2 & 18 & \ding{51} & \ding{51} & \ding{51} & \ding{51}\\ 3 & 12 & \ding{51} & \ding{51} & \ding{55} & \ding{51}\\ 4 & 11 & \ding{51} & \ding{51} & \ding{51} & \ding{51}\\ 5 & 11 & \ding{51} & \ding{51} & \ding{51} & \ding{51}\\ 6 & 11 & \ding{51} & \ding{51} & \ding{51} & \ding{51}\\ 7 & 9 & \ding{55} & \ding{51} & \ding{51} & \ding{51}\\ 8 & 8 & \ding{55} & \ding{51} & \ding{51} & \ding{51}\\ 9 & 7 & \ding{51} & \ding{51} & \ding{51} & \ding{51}\\ 10 & 6 & \ding{51} & \ding{51} & \ding{51} & \ding{51}\\ \hline \end{tabular} \caption{For each of the top-10 verified users, we show the number of files shared by that user, and whether the user provides a first name, last name, city, and country in his Skype profile.} \label{tab:usage-top10} \end{table} In summary, we have determined 400 identified Skype users (from a random set of 100,000) who are definitely using BitTorrent. Table~\ref{tab:usage-top10} shows the information that is readily available about the top-10 BitTorrent users. When registering with Skype, all of these users provided their last names and all but two users also provided their first names. In addition, all but one of these users provided their cities of residence. 
We emphasize, however, that we do not store any personal information (e.g., name and city) for the purpose of this measurement; instead, we only store a binary flag indicating whether each piece of personal information is available.
\section{Introduction} \label{sec:intro}
Cellular service providers are capable of tracking and logging our whereabouts as long as our cell phones are powered on. Because the web sites we visit see our source IP addresses and cookies, those we visit frequently -- such as Google \cite{google} and Facebook \cite{facebookweb} -- can also track our whereabouts to some extent. Although tracking our whereabouts can be considered a major infringement on our privacy, most people are not terribly concerned, largely because they trust that the cellular and major Internet application providers will not disclose this information. Moreover, these large companies have privacy policies, in which they assure their users that they will not make location history, and other personal information, publicly available. In this paper, we are not concerned about whether large brand-name companies can track our mobility, but instead about whether smaller, less-trustworthy entities can leverage the Internet to {\em periodically track our whereabouts}. Is it possible, for example, for an ordinary user with modest financial resources, operating from his or her home, to periodically determine the IP address of a targeted and identified Internet user and to link it to this user's Internet activities (e.g., file sharing)? We will show that the answer to this question is yes! Real-time communication (e.g., VoIP and Video-over-IP) is enormously popular in the Internet today. As shown in Table \ref{tab:intro-apps}, the applications Skype, QQ, MSN Live, and Google Talk together have more than 1.6 billion registered users. Real-time communication in the Internet is naturally done peer-to-peer (P2P), i.e., datagrams flow directly between the two conversing users.
The P2P nature of such a service, however, exposes the IP addresses of all the participants in a conversation to each other. Specifically, if Alice knows Bob's VoIP ID, she can establish a call with Bob and obtain his current IP address by simply sniffing the datagrams arriving at her computer. She can also use geo-localization services to map Bob's IP address to a location and ISP. If Bob is mobile, she can call him periodically to observe his mobility over, say, a week or month. Furthermore, once she knows Bob's IP address, she can crawl P2P file-sharing systems to see if that IP address is uploading/downloading any files. Thus, VoIP can {\em potentially} be used to track a targeted user's location. And VoIP can {\em potentially} be combined with P2P file sharing to determine what a user is uploading/downloading. This would clearly be a serious infringement on privacy.
\begin{table} \centering \begin{tabular}{|c|c|c|c|} \hline App & \# Users & Dir & P2P\\ \hline Skype & 560M & \ding{51} & \ding{51}\\ MSN Live & 550M & \ding{55} & \ding{51}\\ QQ & 500M & \ding{51} & \ding{51}\\ Google Talk & 150M & \ding{55} & \ding{51}\\ \hline \end{tabular} \caption{Number of users claimed by Skype \cite{skypenumbers}, MSN Live \cite{MSNnumbers}, QQ \cite{QQnumbers}, and Google Talk \cite{gtalknumbers}, and, for each of these systems, whether it has a directory service and employs P2P communications.} \label{tab:intro-apps} \end{table}
However, for such a scheme to be effective, there are several technical challenges: \begin{itemize} \item For a specific targeted individual -- such as Bob Smith, 28 years old, living in Kaiserslautern, Germany -- can Alice determine with certainty his VoIP ID? \item Can Alice determine which packets come from Bob (and thereby obtain his IP address)? Indeed, during call setup, Alice may receive packets from many other peers.
In addition, can Alice call Bob inconspicuously, so that she can periodically obtain his IP address without his knowledge? Finally, can Alice obtain Bob's IP address, even when Bob configures his VoIP client to block calls from Alice? \item If Bob's IP address, found with VoIP, is the same as an IP address found in a P2P file-sharing system, then we cannot conclude with certainty that Bob is downloading the corresponding file, since Bob may be behind a NAT (with the matching IP address being the public IP address of the NAT). Thus, is it possible to verify that Bob is indeed uploading/downloading the files, given that NATs are widely deployed in the Internet? \end{itemize} In this paper, using Skype, we develop a measurement scheme to meet all the above challenges. (This may be possible with other VoIP systems as well, which we leave for future work.) Our main contributions are the following: \begin{itemize} \item \textit{We develop a scheme to find a targeted person's Skype ID and to inconspicuously call this person to find his IP address, even if he is behind a NAT.} By carefully studying Skype packet patterns for a Skype caller, we are able to distinguish packets received from the Skype callee from packets received from many other peers. Having identified these packets, we extract the callee's IP address from the headers of the packets. Furthermore, through experimentation, we determine how to obtain the IP address of the callee fully inconspicuously, that is, without ringing or notifying the user. Finally, we show that Skype privacy settings fail to protect against our scheme. \item \textit{We show that our scheme can be used periodically to observe the mobility of Skype users.} By scaling our scheme, we demonstrate that Skype does not implement countermeasures to hinder such schemes. Although there are several challenges in measuring the mobility of a large number of users, we show that it can be done efficiently and effectively.
\item \textit{We show that the scheme introduces a linkability threat where the identity of a person can be associated with his Internet usage.} We illustrate this threat by combining Skype and BitTorrent to show that it is possible to determine the file-sharing usage of identified users. One of the challenges here is that a BitTorrent user is often NATed, so that he may share his IP address with many other users. When a common IP address is discovered in both Skype and BitTorrent, we immediately launch a verification procedure in which we simultaneously call the corresponding user and perform a BitTorrent handshake to the IP address, port, and infohash (which identifies the file being shared). We then use the identification field of the IP datagrams to verify with high accuracy whether an identified user is participating in specific torrents. To the best of our knowledge, we are the first to show that such a scheme can be used in the wild. \end{itemize} In addition to the technical contributions of this paper, another contribution is that we are alerting Internet users (and the Skype company as discussed in the next section) to a major privacy vulnerability, whereby targeted users can have their mobility and Internet usage tracked. As of May 2011 (more than six months after having notified the Skype company), all the schemes presented in this paper are still valid. We provide some relatively simple solutions so that future real-time communication systems can be made less vulnerable to these attacks. One solution that would go a long way is to design the VoIP system so that the callee's IP address is not revealed until the callee accepts the call. With this property, Alice would not be able to inconspicuously call Bob. Moreover, if Alice is a stranger (that is, not on Bob's contact list), and Bob configures his client to not accept calls from strangers, then this design would prevent any stranger from tracking him, conspicuously or otherwise.
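For reference, the BitTorrent handshake used in this verification step is a fixed 68-byte message in the standard BitTorrent wire protocol: a length prefix of 19, the protocol string, 8 reserved bytes, the 20-byte infohash, and a 20-byte peer ID. A sketch of the message construction (ours, not the authors' tool):

```python
# Build a standard BitTorrent handshake for a given infohash.
# Sending this to a suspected IP:port and receiving a well-formed
# reply confirms that some host behind that address serves the torrent.
def bt_handshake(infohash: bytes, peer_id: bytes) -> bytes:
    assert len(infohash) == 20 and len(peer_id) == 20
    pstr = b"BitTorrent protocol"
    return bytes([len(pstr)]) + pstr + b"\x00" * 8 + infohash + peer_id
```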
However, even with this solution in place, any friend of Bob, say Susan, can still call him conspicuously and obtain his IP address. Susan could be Bob's spouse, parent, employer, or employee, for example. It would be hard for Susan to periodically track Bob this way, but Susan could still (i) get Bob's current location, and (ii) check to see if Bob is downloading content from a P2P file-sharing system. Preventing these attacks would require more fundamental changes in the VoIP system (specifically, using relays by default) or more fundamental changes in the underlying Internet protocols. This paper is organized as follows. We discuss the legal and ethical considerations of this paper in Section~\ref{sec:legal}. In Section~\ref{sec:mapping}, we describe our scheme to determine the current IP address of a person using Skype. We then show that this scheme can be used periodically to observe the mobility and file-sharing usage of identified users in Sections~\ref{sec:location} and~\ref{sec:filesharing}. Finally, we discuss some simple defenses in Section~\ref{sec:defenses}, the related work in Section~\ref{sec:related}, and we conclude in Section~\ref{sec:conclusion}. \section{Legal and Ethical Considerations} \label{sec:legal} In this measurement study, all testing involving identified users has been performed on a small sample of volunteers who gave us their informed consent to make measurements and publish results. Unfortunately, the informed consent process for privacy, as for fraud \cite{JakobSP08}, may significantly bias user behavior. For example, informed users may stop using Skype or BitTorrent. For this reason, we also needed to consider a larger sample of (anonymized) users in order to accurately assess the amount of personal information that is revealed by normal usage, e.g., the mobility and file-sharing usage of Skype users. For the sake of privacy, we only stored and processed anonymized mobility and file-sharing information.
Based on these arguments, the INRIA IRB approved this study. In the following, we describe our motivation for running privacy measurements, the tests that we ran with volunteers, and the remaining measurements. \paragraph{Motivation for Running Privacy Measurements} Internet users publish a lot of personal information that can be exploited in non-trivial ways to invade their privacy. Indeed, recent research demonstrates that personal information can be correlated in ways that would have been hard to anticipate \cite{kirdaSP10}. One goal of this study is to show that any Internet user can leverage popular real-time communication applications to observe the mobility patterns and file-sharing usage of tens of millions of Internet users. It is important to give public visibility to these privacy issues, as they constitute serious invasions into users' privacy, and can potentially be used for blackmail and phishing attacks. \paragraph{Volunteers} In this study, we have relied on two sets of volunteers from whom we obtained informed consent. The first set comprises 14 research faculty in the CSE department at NYU-Poly for whom we attempted to find Skype IDs. The second set comprises 20 people spread throughout the world (4 in Asia, 2 in Australia, 7 in Europe, and 7 in the USA) in cable and DSL ISPs, with 10 users directly connectable and 10 users behind NAT. We deliberately chose users located in different continents and with different Internet connectivity to observe a large diversity of user and client behaviors. We have relied on the second set of volunteers to $(i)$ determine Skype packet patterns between caller and callee, $(ii)$ develop and test inconspicuous calling, and $(iii)$ evaluate the accuracy of mobility measurements. After manual testing, we called each volunteer 100 times and systematically observed one of the three packet patterns described in Section~\ref{sec:mapping} between caller and callee.
We also observed that our inconspicuous calling procedure never notified them about the calls in any way. \paragraph{Anonymized users} We relied on two samples of users for whom we did not store personal information in this study. We first used a sample of 10,000 random users to quantify their mobility. We then used a second sample of 100,000 random users to illustrate a linkability threat, where the identity of a person can be associated with his Internet usage (e.g., file sharing). We always collected the IP addresses of the anonymized users using inconspicuous calls, which we validated on the volunteers. Therefore, \textit{no human contact was ever made with any of the anonymized users.} Moreover, we processed and stored only anonymized information, e.g., we anonymized all localization information and downloaded content, and we did not store the IP addresses. Details of all anonymized information are given in Sections~\ref{sec:location} and \ref{sec:filesharing}. \paragraph{Other considerations} In order to conform to the responsible disclosure process, we informed the Skype company of our conclusions in November 2010. In addition, we did not perform any reverse engineering on Skype binaries. Finally, our measurements generated at most $2.7$ calls per second and a few kilobytes of bandwidth per second, so the load that we created on the Skype infrastructure was marginal. \section{Mapping a Person to an IP Address} \label{sec:mapping} In the following, we first describe how to find a targeted person's Skype ID, that is, the unique user ID of a person in Skype. Then, we present our scheme to find, based on a Skype ID, the IP address used by this person. We explain how to make this scheme inconspicuous for the user, and we show that the privacy settings in Skype fail to protect against our scheme. \subsection{Finding a Person's ID} \label{sec:mapping-ID} When creating a Skype account, a user needs to provide an e-mail address and Skype ID.
The user is also invited to provide personal information, such as birth name, location, gender, age, and/or website. This information is recorded in the Skype directory. Therefore, in attempting to determine a person's Skype ID, the obvious first step is to input into the directory's search service the person's e-mail address or birth name. When searching for a birth name, Skype will often return many results. Along with these results, there is often side information, such as city and country of residence. As we will discuss below, if there is still ambiguity about which Skype ID corresponds to the targeted person, we can, using the methodology described in the following section, inconspicuously call each of the candidate Skype IDs, obtain a current or recent IP address for each of those IDs, and from the IP addresses determine current city and ISP (which might be a university or an employer ISP). Such a procedure often determines a person's Skype ID without ambiguity. We briefly remark that if this search were instead based on a service that does not provide a directory (such as MSN Live or Google Talk), one may still be able to determine the ID by scraping homepages, scraping pages from various social networks, or simply by guessing. To illustrate that one can easily find the Skype IDs for a set of identified individuals, we attempt to find the IDs of the 14 research faculty in the CSE department at NYU-Poly, all of whom gave us their informed consent. By searching the corresponding 14 professional e-mail addresses, we found 2 Skype IDs, and by searching the corresponding 14 birth names, we found 7 additional IDs with a single match. Among the 5 people for whom we did not find a conclusive Skype ID, there were multiple matching IDs for 4 and no matching ID for 1.
For the professors with multiple candidate IDs, it would have been possible to inconspicuously call each of the candidate IDs (as described below), geo-localize each candidate, and most likely pinpoint the correct ID. In summary, among 14 NYU-Poly faculty members, we found the Skype IDs for nine of them, and we could have very possibly determined the IDs for four more. \subsection{Finding a Person's IP Address} We have seen how to find the Skype ID of a targeted person. We now discuss how, given the person's Skype ID, we can find the IP address of the machine on which that person is currently active. (If the machine is behind a NAT, then we instead obtain the public IP address of the NAT.) The basic idea is to call the Skype ID, receive IP datagrams from the machine on which that ID is currently logged in, and sniff the packets to get the machine's IP address from the IP header. We describe in the following when this IP address is available. When the caller calls a Skype user who is currently off-line, the Skype application will still provide the caller with the user's most recent IP address, as long as the user was running Skype in the past 72 hours. For this reason, we are able to retrieve the IP address of a Skype user who used Skype within the past 72 hours. By examining traffic patterns to and from a Skype client when our client makes a call to a Skype ID that has been active in the past 72 hours, we have observed that Skype behaves as follows. At the time of the call, the user may be in one of three possible states: $(i)$ the user is online and not behind a NAT; $(ii)$ the user is online and behind a NAT; $(iii)$ the user is offline, but was online (with or without a NAT) within the past 72 hours. (There is also the possibility that the user is logged in at more than one address simultaneously. We will discuss that case subsequently.)
For case $(i)$, when the user (callee) is online and not behind a NAT, the caller will initiate communication with the callee, sending packets directly to the callee (with the callee's IP address in the destination address field of the datagrams). For case $(ii)$, when the callee is online but behind a NAT, the callee will be instructed (via the callee's supernode) to initiate communication with the caller. In this case, the callee's public IP address will be in the source address field of the incoming datagrams. For case $(iii)$, when the targeted user is offline (but was online in the past 72 hours), the caller's Skype client will still attempt to call the targeted user, using the IP address that was most recently observed by Skype in the past 72 hours. (If the targeted user is behind a NAT, the caller will try to initiate a call, using the public IP address of the NATed user.) In this last case, the callee's most recent (public) IP address can be determined from the IP datagrams. Thus, the callee's IP address (current or most recent) can be extracted from the source and destination fields of IP datagrams. However, there is a major complication here. In the process of establishing a call, the call triggers communication with tens of IP addresses (supernodes and relays). As supernodes and relays are hosted by Skype users, their IP addresses belong to a multitude of address ranges that we cannot just filter out. It is therefore difficult to determine which Skype datagrams correspond to direct communication with the callee. As Skype uses a proprietary protocol and encrypts the payloads of its messages, we cannot perform direct packet inspection to find packets originating from the callee. To solve this problem, we designed a scheme that relies solely on the packet patterns between the caller and the various Skype nodes it is communicating with.
\begin{figure}[!t] \centering \includegraphics[width=1\columnwidth]{eps/packet_pattern.eps} \caption{Communication pattern: (i) callee is online and public; (ii) callee is online and behind a NAT; (iii) callee is offline. Crosses correspond to SYN packets that we dropped in order to call inconspicuously.} \label{fig:packet_pattern} \end{figure}
To understand Skype's traffic, we placed calls to the second set of volunteers, whose IP addresses we knew (see Section~\ref{sec:legal}). We observed three identifiable patterns of communication that take place between the caller and callee during the call establishment phase. By exploiting these patterns, we were able to filter out the noise, such as communication with supernodes. Fig.~\ref{fig:packet_pattern} shows these three patterns. We observe the first pattern when the callee is online and public (case $(i)$). In that case, the caller will try to initiate the TCP connection by sending a SYN packet. We will see in Section~\ref{sec:single-incons} that we need to drop SYN packets to make inconspicuous calls. When the TCP timeout occurs, the caller retransmits the SYN, making two tries after the initial attempt before giving up. The first timeout interval is 3 seconds and the second is 1 second. In addition to the TCP packets, there are UDP packets between the caller and the callee. We always observe three 59-byte or 58-byte packets from caller to callee, and the intervals between them are 2 seconds and 4 seconds. Thus, between caller and callee there is a specific traffic pattern, which is shown in Fig.~\ref{fig:packet_pattern} $(i)$. There is also communication between caller and supernodes; however, the communication with non-callees does not exhibit the pattern in Fig.~\ref{fig:packet_pattern} $(i)$. In summary, by identifying the IP address that has packets with the pattern in Fig.~\ref{fig:packet_pattern} $(i)$, we identify the IP address of the callee.
We remark that the TCP packets and UDP packets do not always appear sequentially; most of the time, they are interleaved. The second pattern is observed when the callee is online but behind a NAT (case $(ii)$), that is, the caller cannot initiate communication with the callee. In that case, we have observed that the callee will send a 28-byte UDP packet to the caller. The caller replies with a UDP packet of the same size. Next, the caller and callee will exchange UDP packets of varying sizes. After about 10 seconds, the callee sends 3-byte UDP packets to the caller. We do not observe these 3-byte UDP packets from any other source besides the callee. The pattern is shown in Fig.~\ref{fig:packet_pattern} $(ii)$. The last pattern occurs when the callee is offline but has been online in the past 72 hours. In that case, the caller still attempts to call the user at the last-seen IP address. The pattern is shown in Fig.~\ref{fig:packet_pattern} $(iii)$. Note that this pattern has the same structure as that of case $(i)$ except now there is no response from the callee, since it is offline. To make things even more complicated, a Skype ID can be simultaneously online at more than one machine. In this case, the pattern in Fig.~\ref{fig:packet_pattern} $(i)$ or $(ii)$ will occur once for each online machine. We developed a script that searches for the various patterns and identifies the callee's IP address(es). \subsection{Inconspicuous calling} \label{sec:single-incons} In the following, we define the tracking client as the Skype client we use to exchange packets with a callee. The tracking client is an actual Skype client controlled by a script via the Skype API. Importantly, each tracking client is not behind a NAT and therefore has a public IP address, so communication between a tracking client and any user (NATed or not) will always be P2P rather than relayed.
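As an illustration of what such a script looks for, the UDP half of pattern $(i)$ (three 58- or 59-byte probes spaced by roughly 2 and 4 seconds) can be matched as follows. The names and tolerance are our assumptions; a real detector would also check the TCP SYN retransmission timing:

```python
# Toy matcher for the UDP part of pattern (i): scan the outgoing UDP
# packets sent to one candidate peer for three 58- or 59-byte probes
# whose gaps are approximately 2 and 4 seconds.
def matches_pattern_i(udp_out, tol=0.5):
    """udp_out: list of (timestamp, size) pairs for UDP packets to one peer."""
    probes = [(t, s) for t, s in udp_out if s in (58, 59)]
    for i in range(len(probes) - 2):
        g1 = probes[i + 1][0] - probes[i][0]
        g2 = probes[i + 2][0] - probes[i + 1][0]
        if abs(g1 - 2.0) <= tol and abs(g2 - 4.0) <= tol:
            return True
    return False
```

Running this test per source/destination IP address in the capture singles out the callee among the supernodes and relays contacted during call setup.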
Whenever a Skype call comes in, it is accompanied by a ring and a pop-up window for notification. The callee then chooses to accept, reject, or ignore the call. (We use the terminology ``user'' and ``callee'' interchangeably, depending on context.) Since the tracking client actually makes calls to callees, if not designed carefully, it will cause ringing and pop-ups on the callees' machines. Not only would this disturb the callee, but it would expose the attacker. We therefore need to design our scheme so the tracking client exchanges packets directly with the callee -- {\em without notifying the callee of the call}. In our testing, we have observed that during call establishment, both TCP and UDP packets are sent between the tracking client and the callee. We have found that if we prevent TCP connections from being established with the callee, the callee will not be notified about the call. Thus, a possible simple solution is to have the tracking client drop all TCP SYN packets sent to and from the callee. However, at the time when we make the call, we do not yet know the callee's IP address, and we cannot tell whether an observed TCP SYN is going to (or coming from) a Skype infrastructure node, a supernode, a relay node, or the targeted callee. To solve this problem, during each call, we prevent the establishment of any new TCP connection by dropping all outgoing and incoming SYN packets (to all IP addresses). Note that this procedure does not terminate the tracking client's TCP connections that were in progress before making the call (for example, an ongoing connection to a supernode). With this simple mechanism, the callee is never notified, even if the callee is behind a NAT. To check that no pop-ups appear, we tested this scheme on the volunteers as described in Section~\ref{sec:legal}. \subsection{Skype Privacy Settings} \label{sec:skype-priv-settings} Skype has two privacy settings to block calls from specific people.
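One way to realize this blanket SYN drop is with two host firewall rules installed for the duration of the call. The following sketch only builds the iptables command lines; which packet filter the study actually used is not stated, so this is an assumed implementation:

```python
# Drop every new TCP connection attempt (inbound and outbound) while a
# tracking call is in progress; existing connections are unaffected
# because only SYN packets are matched. Pass action="-D" to remove the
# rules after the call. Each list can be run with subprocess.run(...).
def syn_drop_rules(action: str = "-A"):
    return [
        ["iptables", action, "OUTPUT", "-p", "tcp", "--syn", "-j", "DROP"],
        ["iptables", action, "INPUT", "-p", "tcp", "--syn", "-j", "DROP"],
    ]
```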
The first setting, \textit{allow calls from people in my Contact list only}, is a white list. The second setting, \emph{blocked people}, is a black list that blocks all people whose Skype IDs are on the list. We tested the impact of both settings on our scheme to inconspicuously get the IP address of a callee. For the first setting, the caller was not in the contact list of the callee. For the second setting, the callee explicitly blocked the Skype ID of the caller. In both cases, we were able to inconspicuously retrieve the IP address of the callee. {\em In summary, we observed that Skype privacy settings fail to protect against our scheme.} \section{Mobility of Skype Users} \label{sec:location} In the previous section, we presented a scheme to map a person's name to an IP address. We now investigate whether our scheme can be used to periodically observe the mobility patterns of large sets of Internet users. \subsection{Mobility of a Volunteer} \subsubsection{Geo-Localize Skype Users} \label{sec:single-geo} In the following, we use MaxMind \cite{maxmind} to geo-localize the IP addresses that are obtained from the tracking client, hence providing us with the location of users. MaxMind is a service that, given an IP address, provides a city, country, and AS. To determine city and country, it first aggregates known IP locations from websites that ask their users to provide their geographic location. Then, it uses various heuristics to interpolate the location of other IP addresses. MaxMind claims that it achieves $99.8$\% accuracy at the country level and $83$\% at the city level for the US within a radius of 25 miles. Apart from our set of volunteers, for the sake of user privacy, we anonymized (using a salted hash) all location information. Therefore, we can tell when users change locations at the city, AS, or country scale, but not where they actually are.
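The anonymization step can be as simple as a salted cryptographic hash over each location string, which preserves equality (so location changes remain countable) while hiding the locations themselves. The salt handling and hash choice below are our assumptions; the paper does not specify them:

```python
import hashlib

# Replace a location string (city, AS, or country) by a salted hash.
# The same (salt, location) pair always maps to the same token, so we
# can detect moves between locations without storing where they are.
def anonymize(location: str, salt: bytes) -> str:
    return hashlib.sha256(salt + location.encode("utf-8")).hexdigest()
```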
\subsubsection{Example} \label{sec:locatio-track-stevens}
\begin{figure}[!t] \centering \includegraphics[width=0.962\columnwidth]{eps/location_example.eps} \caption{Example of mobility of a volunteer.} \label{fig:location-examples} \end{figure}
To give a concrete idea of the kind of mobility that can be observed, we plot in Fig.~\ref{fig:location-examples} the mobility of a user in our second set of volunteers. (This volunteer has seen the paper and has given us his consent to disclose all the information about him.) This person makes publicly available his birth name, gender, date of birth, language, and city of residence in Skype. By searching his birth name and city on Facebook and LinkedIn, we are able to determine his profession and employer. We now briefly describe the mobility of this user. He confirmed to us that during our measurement period he was first visiting a university in New York; he then took a vacation in Chicago; then returned to the university and lodged in Brooklyn; and finally returned to his home in France. Fig.~\ref{fig:location-examples} gives an accurate description of the real mobility of this user during the measurement period. The Manhattan location corresponds to an Internet cafe (confirmed by the user). We remark that if we had followed the mobility of the Facebook friends of this user as well, we likely would have determined who he was visiting and when. In conclusion, mobility combined with information from social networks can provide a vivid picture of the daily activities of a targeted user. It is, in our opinion, a major privacy concern for users of real-time communication systems. While this volunteer has an active mobility pattern well suited to our illustrative purpose, a legitimate question is whether it is possible to observe mobility for any Skype user. We answer this question in the following.
\subsection{Mobility of the Anonymized Users} We now describe how to scale our scheme to measure the availability and mobility of a representative sample of anonymized Skype users. For this mobility to be observable, users need to run Skype often and from several locations. In addition, we are also interested in evaluating the cost of scaling our scheme and in examining whether Skype employs countermeasures to hinder it. For the sake of privacy, we anonymized (as described in Section~\ref{sec:single-geo}) all location information, and we do not store IP addresses. Therefore, we can only report aggregated statistics, and not detailed user location information. \subsubsection{Obtaining Millions of Skype IDs} \label{sec:location-ID} In the following, we show that one can easily retrieve a large number of Skype IDs along with the personal information associated with these IDs. To this end, we use the Skype API to collect the IDs. For each ID, we check whether the birth name and other personal information is available. We do not store this information, but instead just note whether it is available in the Skype user's profile. The Skype public API provides a mechanism for third-party applications to control a Skype client. This API operates as follows. After registering with the Skype client, the application can send the client plain-text commands such as {\it search} and {\it call}. The Skype client then returns plain-text messages to the application. In particular, the Skype API has a {\it search users} command that takes a search string as a parameter and returns a list of users whose ID, birth name, or e-mail address matches the string. If the search string contains {\it @}, the search is performed by e-mail address and has to be an exact match. If the search string is a valid Skype ID, the search is performed on the birth name and ID. Otherwise, the search is performed on the birth name only.
In addition to the Skype ID of a user, this command will return any other personal information that the user provided at registration, such as birth name, age, gender, homepage, country, language, and other identifying information. To build our search strings, we use a set of $580$K birth names that we collected on Facebook using a technique similar to the one described by Tang et al. \cite{facebook}. This set is made up of $66$K first names and $156$K last names. We then combine the birth names, first names, and last names, to obtain $802$K unique search strings. For each of these search strings, we sent the {\it search users} command, which typically returns a long list of users, some of whom did not specify birth names. We then aggregated these lists and obtained $13$M Skype IDs, together with an indication of which identifying information was available in each profile. For these $13$M Skype IDs, $88$\% provide their birth names and $82$\% provide either age, gender, homepage, country, or language identifying information (we store only a binary flag indicating whether a user has provided a given piece of personal information). We note that even though we used Facebook to build our search strings, we could use any database of first and last names. \subsubsection{Parallel Calling} \label{sec:location-sequential} From the Skype IDs obtained in the previous section, we select 100,000 Skype IDs at random. From these 100,000 IDs, we then determine (using the techniques discussed in Section~\ref{sec:mapping}) that 10,000 Skype IDs (10\%) have been active in the past 72 hours. Finally, we call these 10,000 Skype IDs on an hourly basis. From this result based on a random sample of 100,000 Skype users, we can extrapolate that we can retrieve the IP address of approximately 10\% of all Skype users, which represents 56 million users at any moment in time. We now describe the methodology to call the 10,000 Skype IDs.
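The extrapolation above is simple arithmetic and can be checked directly (the 560M figure is Skype's claimed user count from Table~\ref{tab:intro-apps}):

```python
# 10,000 of the 100,000 sampled IDs were active in the past 72 hours,
# i.e., their IP addresses were retrievable at the time of the calls.
active_fraction = 10_000 / 100_000

# Applied to Skype's claimed 560M registered users; integer arithmetic
# keeps the result exact.
reachable_users = 560_000_000 * 10_000 // 100_000
```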
We deploy several tracking clients in parallel, each of which calls a subset of the 10,000 Skype IDs. The tracking client calls sequentially all the Skype IDs in its subset, and then repeats the procedure every hour. We determine the IP address of each called Skype ID using the inconspicuous call methodology described in Section~\ref{sec:single-incons}. Based on this IP address, we compute the anonymized location of the user as described in Section~\ref{sec:single-geo}. Scaling our scheme is challenging. To be able to call 10,000 users on an hourly basis, we need to deploy many tracking clients in parallel, with each one sequentially making one call after another. In order to keep the number of parallel tracking clients to a reasonable level, the time $s$ between two successive calls for a given client should be short. Indeed, there is an important tradeoff in choosing an appropriate value for $s$. Consider that the tracking client calls one user, waits $s$ seconds, terminates the call, and then repeats the process with another user. If $s$ is large, our tracking client will call users at a relatively low rate. If $s$ is too small, we may terminate the call before the packet pattern is initiated, in which case we may incorrectly assign the IP address of the subsequent Skype ID to the current Skype ID. Thus, special care must be taken to associate the IP addresses with the correct Skype IDs. The simplest approach is, before making the subsequent call, to wait long enough so that the complete packet pattern elapses. Normally, this takes about 15 seconds from when the first packet is observed until the whole packet pattern has elapsed. But if we wait 15 seconds between each call, only 4 Skype IDs per minute can be probed. To increase the calling rate, we performed further tests and observed that $(a)$ once a packet pattern starts, it completes even if the call is terminated before completion; $(b)$ all packet patterns begin within three seconds after making the call.
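The tradeoff on $s$ can be quantified with a back-of-the-envelope calculation. This is an upper bound that ignores per-call overhead, which is why the deployment described later uses more machines than this bound suggests:

```python
import math

def calls_per_minute(s):
    """Upper bound on the probing rate of one tracking client that
    waits s seconds between successive calls."""
    return 60.0 / s

def clients_needed(n_users, s, period_minutes=60):
    """Minimum number of parallel tracking clients required to call
    n_users once per period, ignoring per-call overhead."""
    return math.ceil(n_users / (calls_per_minute(s) * period_minutes))
```

With $s=15$ a client probes only 4 IDs per minute; with $s=3$ the bound rises to 20 IDs per minute.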
Based on these observations, by waiting three seconds before calling a new Skype ID, we always see the pattern begin before the end of the three-second interval, and also see the pattern complete (extending beyond the 3 seconds). To verify claim (a), we randomly pick 500 users from our Skype ID pool, and call them using two different values of $s$: 3 seconds and 20 seconds. After comparing the mappings generated from the two approaches, we observe that they are identical for all 500 random Skype users. This implies that the interval of 3 seconds is sufficiently large; we therefore use $s=3$ seconds in our measurements. To validate the accuracy of our scalable calling scheme, every 100 calls, we call a random Skype ID among our second set of volunteers (see Section~\ref{sec:legal}). We stress that these volunteers were not in the contact list of the tracking clients, so the patterns generated when calling them are identical to those of the other 10,000 users we are calling. Of the 1,368 calls that we made when volunteers were online, we observed only 4 false positives ($0.3$\%) due to patterns that had been reordered during parallel calling. By assigning each IP address to the Skype user most often designated by the packet patterns, we were able to remove all false positives. \subsubsection{Cost of the Scaling} \label{sec:cost-attack} To call 10,000 users on an hourly basis, we run our tracking clients on 30 physical machines, each one with a different IP address. Each physical machine runs one Skype client and can call 340 IDs per hour. We estimate the cost of running this measurement on a cloud computing platform such as EC2 \cite{EC2} to be approximately \$500 per week. Preliminary tests suggest that it would have been possible to increase the number of called users by one order of magnitude with virtualization.
Indeed, the main issue we faced is that running several tracking clients on a machine makes it harder to isolate packets from each client. One solution we tested but did not use in our scheme is to run several tracking clients per physical machine, each client in a different virtual machine. Because the goal in this paper is to demonstrate the feasibility of our scheme and not to fully optimize it, running a single tracking client per machine is sufficient. \subsubsection{Measurement Results} \begin{figure*}[!ht] \centering \includegraphics[width=0.62\columnwidth]{eps/location_online.eps} \includegraphics[width=0.65\columnwidth]{eps/location_availability.eps} \includegraphics[width=0.60\columnwidth]{eps/location_aggregated.eps} \caption{(left) Number of simultaneous and cumulative unique online Skype users (of 10,000) called in two weeks. (middle) CDF of availability of Skype users. (right) Number of locations visited in two weeks by each Skype user, sorted by decreasing number of locations. \textit{Skype users are mobile.}} \label{fig:location-online} \end{figure*} Whether our scalable calling scheme actually captures the {\it mobility} of a significant fraction of Skype users depends on three questions that we address in the following. {\it 1) Is it possible to periodically call a large number of Skype users?} In Fig.~\ref{fig:location-online} (left), we see that at any given time, we are calling between $2,000$ and $3,000$ online users among the 10,000 users. The diurnal behavior is due to the heterogeneous distribution of Skype users worldwide. A large fraction of Skype users are from the US and Western Europe, so during the daytime in the US and Western Europe, there are more Skype users online than during the night in these geographical areas. We also see in Fig.~\ref{fig:location-online} (left) that after two weeks, we have found at least one current IP address for $9,500$ users, which represents $95$\% of the users we were periodically calling.
In summary, it is possible to periodically call a large number of Skype users. {\it 2) How often are Skype users online?} We define availability as the fraction of the time a given user is online. In Fig.~\ref{fig:location-online} (middle), we plot the CDF of availability for the 9,500 Skype users that we have seen online at least once. Skype users are surprisingly available, with 20\% of all users available more than 50\% of the time. One explanation for this behavior is that the Skype client starts automatically when the system boots. In summary, Skype users are highly available, so one can call them to collect their location most of the time. {\it 3) Can Skype users be found in several locations?} Mobility results in a change of IP address geo-localized in a different city, AS, and/or country. For each user that is online at least once, we determine the different locations visited over the two-week period. This location information is anonymized (see Section~\ref{sec:single-geo}). In Fig.~\ref{fig:location-online} (right), we see that 40\% of the 9,500 Skype users change city, 19\% change AS, and 4\% change country at least once in two weeks. In summary, Skype users often run Skype from different locations, and this mobility can be tracked by our methodology. Our methodology to measure the number of locations of a user has two limitations. First, in some cases (e.g., dynamic IP addresses), MaxMind might erroneously associate the same user with different locations. We believe that such errors are very unlikely at the scale of a country or an AS, and occur only rarely at the scale of cities, so they do not significantly impact our conclusions (see Section~\ref{sec:single-geo}). Second, the IP address may not capture the location of users running Skype on their mobile phones \cite{3G-phone}.
Although this may impact our ability to track Skype users in the future, we believe that relatively few users fall into this category today. We observe that significantly more users are mobile among cities than among ASes, for two reasons. First, some ISPs have broad geographical coverage, so users located in those ISPs are likely to move within the same ISP, even though they change city. Second, some ISPs provide country-wide free Wifi hotspots to their users. When users of such ISPs change city, they are likely to use these hotspots, thus connecting from the same AS but a different city. We note that, as the accuracy of IP geo-localization improves, it will be possible to determine the locations of users with much finer granularity. For instance, a recent paper shows that it is possible to geo-localize IP addresses with a median error distance of 690 meters \cite{Geolocalization}. \section{Related Work} \label{sec:related} \subsection{Mobility} We now describe the related work on observing the mobility of users by using IP addresses and cell phones. \paragraph{IP Address Mobility} The work of Guha et al. \cite{dns} on IP address mobility is the closest to ours. The authors show that by periodically retrieving the IP address of dynamic DNS users, an attacker can observe the mobility of these users. Whereas the goal of their attack is similar to ours, there are two major differences between exploiting dynamic DNS and Skype to measure mobility. First, dynamic DNS allows the identity of the user to be inferred only in ``some cases,'' whereas we have shown that 88\% of Skype users provide their birth name, and that 82\% also provide either age, gender, homepage, country, or language. Second, targeting dynamic DNS users limits the scope of the attack. Whereas there are a few million users of dynamic DNS in the world, we showed that far more Skype users are susceptible to having their mobility tracked.
\paragraph{Cell Phone Mobility} The Carmen Sandiego Project \cite{Carmen-Sandiego} recently showed how to use cell phones to observe the mobility of a user. The authors first use the caller ID service to collect person-to-cell-phone-number mappings. Then, by accessing the Home Location Register (HLR), they show that an attacker can collect the current Mobile Switching Center (MSC) identifier for a given phone number. As MSC identifiers often give an indication of the location of a user, an attacker can periodically collect that information to observe the mobility of an identified cell phone user. One important weakness of this attack is that there is no convention on how an operator attributes MSC identifiers. So the naming convention for MSCs varies from one operator to the other, and it is hard to determine to which location a given identifier corresponds. Even though it is not our primary purpose, we believe our scheme, and in particular the description of Skype packet patterns between caller and callee, also has the potential to significantly simplify the tracing of Skype calls. To the best of our knowledge, we are the first to show that it is possible to use real-time applications to map a person to an IP address and to scale that scheme to observe the mobility of a large number of persons. As we have shown, it might be possible to observe the mobility of 56 million identified Skype users worldwide at any moment in time; our attack is thus both broad in scope and severe. \subsection{File-sharing Usage} We now describe the related work on observing file-sharing usage and verifying users. Because BitTorrent is one of the most popular file-sharing systems and is the one we used in this paper, we focus on it in the following. However, we note that all file-sharing systems are in principle vulnerable to our attack.
In recent works, the scale of BitTorrent measurements has significantly increased \cite{BT-Public-Ecosystem, BT-Monitors, BT-Spying}. For example, Zhang et al. collected 5 million IP addresses in 12 hours \cite{BT-Public-Ecosystem}, Siganos et al. collected 37 million IPs in 45 days \cite{BT-Monitors}, and Le Blond et al. collected 148 million IPs in 103 days \cite{BT-Spying}. As noted by Le Blond et al. and more recently by Wolchok et al. \cite{Crawling-BitTorrent}, being able to continuously collect the IP addresses is a serious privacy threat in itself. In this paper though, we have not only collected the IP addresses of a large number of BitTorrent users but have also identified a significant fraction of these users. A security threat noted by Piatek et al. consists of injecting the IP addresses of random Internet users into BitTorrent trackers to falsely implicate them in copyright infringement \cite{DMCA}. We note that the ability to map a targeted person to an IP address significantly worsens this threat because an attacker could also implicate that particular person in copyright infringement. As far as we know, we are the first to show that it is possible to find the identity of BitTorrent users without requesting that information from an ISP. We believe that this attack introduces a serious potential for blackmail and phishing attacks. \paragraph{Verification} We relied on IP-IDs to verify the identity of BitTorrent downloaders. This technique has been used in the context of passively counting the number of machines behind a NAT \cite{Counting} (on a LAN). As far as we know, it has never been used on the Internet to actively verify that several applications were running on the same machine. Alternatively, we could have used remote physical device fingerprinting \cite{Device-Fingerprinting}, but using IP-IDs was simpler and sufficient for our purpose.
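The IP-ID technique exploits hosts that draw the IP identification field of every outgoing packet from a single global counter, so the packets of two flows from one machine interleave into one slowly increasing sequence. A sketch of the check (modeling the counter as strictly increasing and ignoring the 16-bit wrap-around, so this is a simplification of the real test):

```python
def same_machine(flow_a, flow_b, slack=50):
    """flow_a and flow_b are lists of (timestamp, ip_id) pairs observed on
    two connections (e.g. one to Skype, one to BitTorrent). If both come
    from a single host with a global IP-ID counter, merging them by time
    must yield IP-IDs that grow by a small amount at every step."""
    merged = sorted(flow_a + flow_b)
    return all(0 < y[1] - x[1] <= slack for x, y in zip(merged, merged[1:]))

skype_flow = [(0.0, 100), (2.0, 140), (4.0, 181)]
bt_flow = [(1.0, 120), (3.0, 160)]       # interleaves with skype_flow
other_flow = [(1.0, 5000), (3.0, 5002)]  # a different machine's counter
```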
\section{\protect\bigskip Introduction} In the recent paper \cite{holcc} complementarity between output and environment of a quantum channel (or, more generally, CP map) was explored in detail. It was observed that the output purity characteristics for mutually complementary CP maps coincide, making the validity of the multiplicativity/additivity conjecture for a class of CP maps equivalent to its validity for complementary maps. A similar observation was independently made in \cite{rus} in the context of channels. In \cite{holcc} a regular method for computation of complementary maps was proposed, thus providing an efficient construction of new cases for the solution of the multiplicativity/additivity problem. In this paper we use this method to compute complementary channels for certain important cases, such as the depolarizing and transpose-depolarizing channels. This method easily yields minimal Kraus representations from non-minimal ones. We also study the properties of the output purity of the tensor product of a channel and its complement. Let us fix some notation. $\mathfrak{M}\left( \mathcal{H}\right) $ will denote the algebra of all operators, and $\mathfrak{S}(\mathcal{H})$ -- the convex set of all density operators (quantum states) in a finite-dimensional Hilbert space $\mathcal{H}$. The output purity of a CP map $\Phi :\mathfrak{M}\left( \mathcal{H}\right) \rightarrow \mathfrak{M}\left( \mathcal{H}^{\prime }\right) $ is measured by the quantity \begin{equation} \nu _{p}(\Phi ):=\max_{\rho \in \mathfrak{S}(\mathcal{H})}\left\{ ||\Phi (\rho )||_{p}\right\} ,\quad 1\leq p<\infty , \label{nup} \end{equation} where ${\displaystyle{||\Phi (\rho )||_{p}=\Bigl[\tr\left( \Phi (\rho )\right) ^{p}\Bigr]^{1/p}}}$ is the $p$-norm of $\Phi (\rho )$, or equivalently, by the minimal output R\'{e}nyi $p$-entropy \begin{equation*} \check{R}_{p}(\Phi )=-\frac{p}{p-1}\,\log \nu _{p}(\Phi ).
\end{equation*} Recall that the R\'{e}nyi $p$-entropy of a density matrix $\sigma $, $p>1$, is defined as \begin{equation} R_{p}(\sigma ):=-\frac{1}{p-1}\log \left( \tr\,\sigma ^{p}\right) =-\frac{p}{p-1}\log ||\sigma ||_{p}. \label{rendef} \end{equation} The R\'enyi entropies have the monotonicity property \cite{cup} \begin{equation*} R_{q}(\sigma )\leq R_{p}(\sigma ),\quad 1<p\leq q. \end{equation*} In the limit $p\rightarrow 1$, they converge monotonically and hence uniformly to the von Neumann entropy $H(\sigma )=-\tr\,\sigma \log \sigma $. Therefore we can extend the definition of the R\'{e}nyi entropy by letting $R_{1}(\sigma ):=H(\sigma )$. The minimal output R\'{e}nyi $p$-entropy of a channel $\Phi $ is then \begin{equation*} \check{R}_{p}(\Phi )=\min_{\rho \in \mathfrak{S}(\mathcal{H})}R_{p}(\Phi (\rho )), \end{equation*} and for $p=1$ it is equal to the minimum output entropy \begin{equation} \check{H}(\Phi ):=\min_{\rho }\,H(\Phi (\rho )). \label{minent1} \end{equation} \section{\protect\bigskip Representations of CP maps} Given three Hilbert spaces $\mathcal{H}_{A},\mathcal{H}_{B},\mathcal{H}_{C}$ and a linear operator $V:\mathcal{H}_{A}\rightarrow \mathcal{H}_{B}\otimes \mathcal{H}_{C}$, the relations \begin{equation} \Phi (\rho )=\mathrm{Tr}_{\mathcal{H}_{C}}V\rho V^{\ast },\quad \tilde{\Phi} (\rho )=\mathrm{Tr}_{\mathcal{H}_{B}}V\rho V^{\ast };\quad \rho \in \mathfrak{M}\left( \mathcal{H}_{A}\right) , \label{compl} \end{equation} define two CP maps $\Phi :\mathfrak{M}\left( \mathcal{H}_{A}\right) \rightarrow \mathfrak{M}\left( \mathcal{H}_{B}\right) ,$ $\tilde{\Phi}: \mathfrak{M}\left( \mathcal{H}_{A}\right) \rightarrow \mathfrak{M}\left( \mathcal{H}_{C}\right) ,$ which will be called mutually \textit{complementary}. If $V$ is an isometry, both maps are trace preserving (TP), i.e., channels.
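The pair of maps in (\ref{compl}) is easy to check numerically. The following numpy sketch (the helper names and the random-isometry construction are ours, not from the text) builds an isometry $V:\mathcal{H}_{A}\rightarrow \mathcal{H}_{B}\otimes \mathcal{H}_{C}$ and verifies that both partial traces give trace-preserving maps:

```python
import numpy as np

def partial_trace(M, dB, dC, keep):
    """Partial trace of an operator M on H_B (x) H_C; keep is 'B' or 'C'."""
    M = M.reshape(dB, dC, dB, dC)
    # Tracing out C contracts axes 1 and 3; tracing out B contracts 0 and 2.
    return np.trace(M, axis1=1, axis2=3) if keep == "B" else np.trace(M, axis1=0, axis2=2)

rng = np.random.default_rng(0)
dA, dB, dC = 2, 3, 4
# A random isometry V : H_A -> H_B (x) H_C from a QR decomposition.
V, _ = np.linalg.qr(rng.normal(size=(dB * dC, dA)) + 1j * rng.normal(size=(dB * dC, dA)))

def channel(rho):        # Phi(rho)  = Tr_C V rho V*
    return partial_trace(V @ rho @ V.conj().T, dB, dC, "B")

def complementary(rho):  # Phi~(rho) = Tr_B V rho V*
    return partial_trace(V @ rho @ V.conj().T, dB, dC, "C")

rho = np.eye(dA) / dA    # any density operator works here
```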
For any linear map $\Phi :\mathfrak{M}\left( \mathcal{H}\right) \rightarrow \mathfrak{M}\left( \mathcal{H}^{\prime }\right) $ the dual map $\Phi ^{\ast }:\mathfrak{M}\left( \mathcal{H}^{\prime }\right) \rightarrow \mathfrak{M} \left( \mathcal{H}\right) $ is defined by the formula \begin{equation*} \mathrm{Tr}\Phi (\rho )X=\mathrm{Tr}\rho \Phi ^{\ast }(X);\quad \rho \in \mathfrak{M}\left( \mathcal{H}\right) ,X\in \mathfrak{M}\left( \mathcal{H}^{\prime }\right) . \end{equation*} If $\Phi $ is CP, then $\Phi ^{\ast }$ is also CP. Relations (\ref{compl}) are equivalent to \begin{equation} \Phi ^{\ast }(X)=V^{\ast }(X\otimes I_{C})V;\quad X\in \mathfrak{M}\left( \mathcal{H}_{B}\right) , \end{equation} \begin{equation} \label{df} \tilde{\Phi}^{\ast }(X)=V^{\ast }(I_{B}\otimes X)V;\quad X\in \mathfrak{M} \left( \mathcal{H}_{C}\right), \end{equation} where $I$ is the identity operator in the corresponding Hilbert space. Considering $\tilde{\Phi}$ as dual to the CP map $\tilde{\Phi}^{\ast },$ we conclude that there should also be a representation of the form \begin{equation} \tilde{\Phi}(\rho )=S_{C}(\rho \otimes I_{B})S_{C}^{\ast }, \label{durep} \end{equation} where $S_{C}:{\mathcal{H}}_{A}\otimes {\mathcal{H}}_{B}\rightarrow {\mathcal{H}}_{C}$ (in the case of a channel, $\mathrm{Tr}_{{\mathcal{H}}_{B}}S_{C}^{\ast }S_{C}=I_{A}$). There is a simple general relation between this representation and the second formula in (\ref{compl}) for an arbitrary CP map; namely, given $V:{\mathcal{H}}_{A}\rightarrow {\mathcal{H}}_{B}\otimes {\mathcal{H}}_{C}$ choose an orthonormal basis $\left\{ e_{j}^{B}\right\} $ in ${\mathcal{H}}_{B}$ and define $S_C:{\mathcal{H}}_{A}\otimes {\mathcal{H}}_{B}\rightarrow {\mathcal{H}}_{C}$ by the relation $\langle e_{j}^{B}|V=S_{C}|e_{j}^{B}\rangle ,$ or, more precisely, \begin{equation*} \langle \bar{\psi}_{B}\otimes \psi _{C}|V|\psi _{A}\rangle =\langle \psi _{C}|S_{C}|\psi _{A}\otimes \psi _{B}\rangle , \end{equation*} where $\bar{\psi}_{B}$ denotes
the complex conjugate of ${\psi}_B$ in the basis $\left\{ e_{j}^{B}\right\} .$ Alternatively, introducing the maximally entangled vector \begin{equation*} |\Omega ^{BB}\rangle =\frac{1}{\sqrt{d_{B}}}\sum_{j=1}^{d_{B}}|e_{j}^{B} \rangle \otimes |e_{j}^{B}\rangle \end{equation*} in ${\mathcal{H}}_{B}\otimes {\mathcal{H}}_{B},$ we have the reciprocity relations \begin{equation*} S_{C}=\sqrt{d_{B}}\langle \Omega ^{BB}|(V\otimes I_{B});\quad V=\sqrt{d_{B}} (I_{B}\otimes S_{C})|\Omega ^{BB}\rangle . \end{equation*} The representation (\ref{durep}) is in fact nothing but the dual form (\ref{df}) of the Stinespring representation for the map $\tilde{\Phi}$, if it is considered (somewhat \textquotedblleft unnaturally\textquotedblright ) as a map in the Heisenberg-- rather than in the Schr\"{o}dinger picture. To give a kind of physical interpretation to the representation (\ref{durep}), consider the polar decomposition $S_{C}=|S_{C}^{\ast }|W,$ where $|S_{C}^{\ast }|=\sqrt{S_{C}S_{C}^{\ast }}$ is a Hermitian operator in ${\mathcal{H}}_{C}$ and $W:{\mathcal{H}}_{A}\otimes {\mathcal{H}}_{B}\rightarrow {\mathcal{H}}_{C}$ is a partial isometry. Denote $D_{C}=\sqrt{d_{B}}|S_{C}^{\ast }|$ and choose the basis in which this operator is diagonal. Then (\ref{durep}) takes the form \begin{equation*} \tilde{\Phi}(\rho )=D_{C}W\left( \rho \otimes \frac{I_{B}}{d_{B}}\right) W^{\ast }D_{C}, \end{equation*} and (with some strain) can be interpreted as an interaction of the system $A$ with the environment $B$ in the chaotic state followed by partial dephasing, cf. \cite{DS}. Note, however, that the \textquotedblleft interaction\textquotedblright\ is only partially unitary and the dephasing CP map is in general not TP (i.e. channel).
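The first reciprocity relation is also easy to confirm numerically. The following numpy sketch (names ours) builds $S_{C}=\sqrt{d_{B}}\langle \Omega ^{BB}|(V\otimes I_{B})$ from a random isometry $V$ and checks the representation (\ref{durep}) against the partial-trace definition of $\tilde{\Phi}$:

```python
import numpy as np

rng = np.random.default_rng(3)
dA, dB, dC = 2, 3, 2
# A random isometry V : H_A -> H_B (x) H_C.
V, _ = np.linalg.qr(rng.normal(size=(dB * dC, dA)) + 1j * rng.normal(size=(dB * dC, dA)))

# S_C = sqrt(d_B) <Omega^BB| (V (x) I_B): the bra pairs the B factor of V's
# output with the extra B factor, giving S_C[c, (a, b)] = <e_b (x) e_c|V|e_a>.
W = np.kron(V, np.eye(dB)).reshape(dB, dC, dB, dA * dB)
S_C = np.einsum('jcjk->ck', W)  # sqrt(d_B) <Omega| cancels the 1/sqrt(d_B)

def complementary(rho):
    """Phi~(rho) = Tr_B V rho V*, the defining partial-trace expression."""
    M = (V @ rho @ V.conj().T).reshape(dB, dC, dB, dC)
    return np.trace(M, axis1=0, axis2=2)

rho = np.diag([0.7, 0.3]).astype(complex)  # a sample density matrix on H_A
```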
By interchanging the roles of ${\mathcal{H}}_{B},{\mathcal{H}}_{C}$ we of course obtain a similar representation for the initial map $\Phi $ \begin{equation*} \Phi (\rho )=S_{B}(\rho \otimes I_{C})S_{B}^{\ast }, \end{equation*} where $S_{B}:{\mathcal{H}}_{A}\otimes {\mathcal{H}}_{C}\rightarrow {\mathcal{H}}_{B}$ (in the case of a channel, $\mathrm{Tr}_{{\mathcal{H}}_{C}}S_{B}^{\ast }S_{B}=I_{A}$). This representation is especially nice in the case where $A=B$ and $\Phi $ is unital: then $S_{B}$ is a co-isometry, $S_{B}S_{B}^{\ast }=I_{A}.$ \bigskip The Kraus representation for the map $\Phi $ \begin{equation} \Phi (\rho )=\sum_{j=1}^{d_{C}}V_{j}\rho V_{j}^{\ast };\qquad \rho \in \mathfrak{M}(\mathcal{H}_{A}), \label{kraus3} \end{equation} follows from (\ref{compl}) by letting $V_{j}=\langle e_{j}^{C}|V$ where $\left\{ e_{j}^{C}\right\} $ is an orthonormal basis in $\mathcal{H}_{C}.$ Conversely, \begin{equation*} V=\sum_{j=1}^{d_{C}}V_{j}\otimes |e_{j}^{C}\rangle , \end{equation*} whence, applying the second relation in (\ref{compl}), we have an explicit formula for the complementary map \begin{equation} \tilde{\Phi}(\rho )=\sum_{j,k=1}^{d_{C}}|e_{j}^{C}\rangle \langle e_{k}^{C}| \mathrm{Tr}V_{j}\rho V_{k}^{\ast };\qquad \rho \in \mathfrak{M}(\mathcal{H}_{A}). \label{cmp} \end{equation} It follows that the Kraus representation for $\tilde{\Phi}$ is \begin{equation*} \tilde{\Phi}(\rho )=\sum_{k=1}^{d_{B}}\tilde{V}_{k}\rho \tilde{V}_{k}^{\ast }, \end{equation*} where $\tilde{V}_{k}:\mathcal{H}_{A}\rightarrow \mathcal{H}_{C}$ are given by \begin{equation*} \tilde{V}_{k}=\sum_{j=1}^{d_{C}}\langle e_{k}^{B}|V_{j}\otimes |e_{j}^{C}\rangle , \end{equation*} and hence satisfy \begin{equation} \langle e_{j}^{C}|\tilde{V}_{k}=\langle e_{k}^{B}|V_{j}. \label{rec} \end{equation} The representation (\ref{durep}) takes place with \begin{equation*} S_{C}=\sum_{k=1}^{d_{B}}\tilde{V}_{k}\otimes \langle e_{k}^{B}|.
\end{equation*} Finally, consider the case where $A=B$ and $\Phi $ is unital, which is equivalent to \begin{equation*} \sum_{j=1}^{d_{C}}V_{j}V_{j}^{\ast }=I_{A}. \end{equation*} By using (\ref{rec}) we obtain that this is the same as \begin{equation*} \mathrm{Tr}\tilde{V}_{j}^{\ast }\tilde{V}_{k}=\delta _{jk}. \end{equation*} Since $S_{C}^{\ast }S_{C}=\sum_{j,k=1}^{d_{B}}\tilde{V}_{j}^{\ast }\tilde{V} _{k}\otimes |e_{j}^{B}\rangle \langle e_{k}^{B}|,$ this is equivalent to $ \mathrm{Tr}_{{\mathcal{H}}_{B}}S_{C}^{\ast }S_{C}=I_{A}.$ \section{Depolarizing channel} Consider the depolarizing channel \begin{equation} \Phi (\rho )=(1-p)\rho +\frac{p}{d}I\mathrm{Tr}\rho ,\quad 0\leq p\leq \frac{ d^{2}}{d^{2}-1}, \end{equation} where $\rho \in \mathfrak{M}\left( \mathcal{H}\right) $, with $\mathcal{H} \simeq \mathbf{C}^{d}$. If $\{|j\rangle \,:j=1,\ldots ,d\}$ is a complete set of orthonormal basis vectors in $\mathcal{H}$, then writing the channel as \begin{equation*} \Phi (\rho )=(1-p)\rho +\frac{p}{d}\sum_{i,j=1}^{d}|i\rangle \langle j|\rho | {j}\rangle \langle {i}|, \end{equation*} yields a Kraus representation with the operators \begin{equation*} V_{0}=\sqrt{1-p}I,\quad V_{ij}=\sqrt{\frac{p}{d}}|i\rangle \langle j|. \end{equation*} Let us relabel these Kraus operators as follows. Define a variable \begin{equation*} c(i,j):=i + (j-1)d,\quad 1\leq i,j\leq d \end{equation*} which takes integer values from $1$ to $d^{2}$. Then the Kraus operators can be denoted as $A_{k}$, $0\leq k\leq d^{2}$, where \begin{eqnarray} A_{0}:= &&V_{0}\quad {\hbox{ and}} \notag \\ A_{c(i,j)}:= &&V_{ji}\quad 1\leq i,j\leq d. \label{krausdep} \end{eqnarray} Note that $A_{c(i,j)}^{\ast }=V_{ji}^{\ast }=V_{ij}$. The channel complementary to the depolarizing channel is given by \cite{holcc} \begin{equation*} {\tilde{\Phi}}(\rho )=\Bigl[\tr \,A_{\alpha }\rho A_{\beta }^{\ast }\Bigr] _{\alpha ,\beta =0,1,\ldots ,d^{2}}. 
\end{equation*} It is easy to see that \begin{eqnarray} \tr A_{0}\rho A_{0}^{\ast } &=&(1-p)\tr\,(I\rho I)=(1-p)\tr\rho ; \notag \\ \tr A_{0}\rho A_{c(i,j)}^{\ast } &=&\sqrt{(1-p)}\tr(\rho V_{ij})=\sqrt{\frac{p(1-p)}{d}}\langle j|\rho |i\rangle ; \notag \\ \tr A_{c(i,j)}\rho A_{0}^{\ast } &=&\sqrt{\frac{p(1-p)}{d}}\langle i|\rho |j\rangle ; \notag \\ \tr A_{c(i,j)}\rho A_{c(i^{\prime },j^{\prime })}^{\ast } &=&\tr V_{ji}\rho V_{i^{\prime }j^{\prime }}=\frac{p}{d}\langle i|\rho |i^{\prime }\rangle \delta _{jj^{\prime }}. \label{relns1} \end{eqnarray} To express the complementary channel in a compact form, let us define a $d^{2}$-dimensional row vector\footnote{ Here and henceforth, we use the notation $|ij\rangle $ to denote the vector $|i\rangle \otimes |j\rangle $. Consequently, $\langle ij|=\langle i|\otimes \langle j|$.} \begin{equation} \vec{\rho}:=\sum_{i,j=1}^{d}\rho _{ji}\langle ij|,\quad {\hbox{where }} \quad \rho _{ji}=\langle j|\rho |i\rangle . \end{equation} In terms of this vector and its transpose $\vec{\rho}^{T}$, the complementary channel $\tilde{\Phi}(\rho )$ can be represented by a $(d^{2}+1)\times (d^{2}+1)$ matrix \begin{equation} \tilde{\Phi}(\rho )=\left[ \begin{array}{cc} (1-p)\mathrm{Tr}\rho & \sqrt{\frac{p(1-p)}{d}}\vec{\rho} \\ \sqrt{\frac{p(1-p)}{d}}{\vec{\rho}}^{{\small {T}}} & \frac{p}{d}(\rho \otimes I) \end{array} \right] . \label{matdep} \end{equation} This representation is not minimal since the number of Kraus operators $A_{k}$ (defined by (\ref{krausdep})) is $d^{2}+1$. However, a minimal representation for $\tilde{\Phi}$ can be obtained from (\ref{matdep}) as follows. Note that (\ref{matdep}) can be equivalently written as \begin{equation*} \tilde{\Phi}(\rho )=T\rho T^{\ast }, \end{equation*} where \begin{equation*} T^{\ast }=\left[ \begin{array}{cc} \sqrt{d(1-p)}|\Omega _{12}\rangle & \sqrt{\frac{p}{d}}I_{12} \end{array} \right]
\end{equation*} with $|\Omega _{12}\rangle =d^{-1/2}\sum_{j=1}^{d}|jj\rangle $ the maximally entangled vector in ${\mathcal{H}}\otimes {\mathcal{H}}$ and $I_{12}$ the identity operator in ${\mathcal{H}}\otimes {\mathcal{H}}$. Let $T=US$ be its polar decomposition, where $S=|T|=\sqrt{T^{\ast }T}$ is a positive Hermitian operator in ${\mathcal{H}}\otimes {\mathcal{H}}\simeq {\mathcal{H}}_{d^{2}}$ and $U$ is an isometry from ${\mathcal{H}}_{d^{2}}$ to ${\mathcal{H}}_{d^{2}+1}$, which is irrelevant for the minimal representation we are looking for. Since \begin{equation*} T^{\ast }T=\frac{p}{d}I_{12}+d(1-p)|\Omega _{12}\rangle \langle \Omega _{12}| \end{equation*} is easily diagonalizable, we find \begin{equation*} S=\sqrt{T^{\ast }T}=\sqrt{\frac{p}{d}}I_{12}+\sqrt{d} \Bigl(-\frac{\sqrt{p}}{d}+\sqrt{1-p\bigl(\frac{d^{2}-1}{d^{2}}\bigr)} \Bigr)|\Omega _{12}\rangle \langle \Omega _{12}| , \end{equation*} and the minimal representation of the complementary channel is \begin{equation} \label{dc} \tilde{\Phi}(\rho )=S(\rho \otimes I )S^{\ast }. \end{equation} While the depolarizing channel is globally unitarily covariant, the complementary channel has the covariance property \begin{equation*} \tilde{\Phi}[U\rho U^{\ast }]=(U\otimes \bar{U})\tilde{\Phi}[\rho ](U\otimes \bar{U})^{\ast } \end{equation*} for an arbitrary unitary operator $U$ in $\mathcal{H}$. By the results in \cite{holcc}, \cite{rus}, the complementary channel (\ref{dc}) has the same multiplicativity/additivity properties as the depolarizing channel established in \cite{king}. \section{Transpose-depolarizing channel} Consider the one-parameter family of channels in ${\mathcal{H}} \simeq \mathbf{C}^d$ \begin{equation} \Phi(\rho) = t \rho^T + (1-t) \tr \rho \frac{{I}}{d}, \label{channel} \end{equation} where \begin{equation} - \frac{1}{d - 1} \le t \le \frac{1}{d+1}. \label{range2} \end{equation} Here $\rho^{T}$ denotes the transpose of the matrix $\rho$ in a fixed basis.
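The parameter range (\ref{range2}) is precisely the set of $t$ for which the map is completely positive. This can be checked through the Choi matrix, which for this channel works out to $tF+(1-t)I/d$ with $F$ the flip operator; a numpy sketch (the helper names are ours):

```python
import numpy as np

def flip(d):
    """Flip operator F|ij> = |ji> on H (x) H, with |ij> indexed as i*d + j."""
    F = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            F[i * d + j, j * d + i] = 1.0
    return F

def choi(t, d):
    """Choi matrix sum_{ij} Phi(|i><j|) (x) |i><j| of the channel
    Phi(rho) = t rho^T + (1-t) Tr(rho) I/d, i.e. t F + (1-t)/d I."""
    return t * flip(d) + (1 - t) / d * np.eye(d * d)

def is_cp(t, d, tol=1e-12):
    """Complete positivity <=> the Choi matrix is positive semidefinite."""
    return bool(np.linalg.eigvalsh(choi(t, d)).min() >= -tol)
```

For $d=3$ the boundaries $t=1/4$ and $t=-1/2$ are exactly where the smallest Choi eigenvalue reaches zero.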
The channel $\Phi $ is irreducibly covariant since, for an arbitrary unitary transformation $U$, \begin{equation} \Phi (U\rho U^{\ast })=\bar{U}\Phi (\rho )\bar{U}^{\ast }, \label{cov} \end{equation} where $\bar{U}$ is the complex conjugate of $U$ in the fixed basis. For this class of channels, additivity of the minimum output entropy and the multiplicativity of its maximal $p$-norm for $1 \le p \le 2$ have been proved in \cite{fannes, dhstd, ndmult}. As was shown in \cite{dhstd}, this channel can also be written as \begin{equation} \Phi(\rho) = c^+\Phi^+(\rho) + c^-\Phi^-(\rho), \label{td} \end{equation} where \begin{equation*} c^{\pm}= \left(\frac{d^2 - 1}{2d}\right)\left( \frac{1}{d \mp 1} \pm t\right), \end{equation*} and \begin{equation} \Phi^\pm (\rho):= \frac{1}{d\pm 1}\left({I} \tr \rho\pm \rho^T\right). \label{whdef} \end{equation} Note that the extreme channel $\Phi^-(\rho)$ is the well-known Werner-Holevo (WH) channel \cite{WH}. The channels $\Phi^\pm (\rho)$ have Kraus operators \begin{equation} V^{\pm}_{ij} := \frac{1}{\sqrt{2(d \pm 1)}}\left(|i\rangle\langle j| \pm |j\rangle\langle i|\right), \end{equation} where $|i\rangle, |j\rangle$ denote orthonormal basis vectors in ${\mathcal{H}}$. Let us relabel these operators using the variable \begin{equation*} c(i,j) = i + (j-1)d, \quad 1 \le i \le d, \, 1 \le j \le 2d, \end{equation*} and the relations \begin{equation*} A^+_{c(i,j)} = \sqrt{c^+} V^+_{ji} \quad {\hbox{for }}\,\, 1 \le i,j \le d; \end{equation*} \begin{equation*} A^-_{c(i,j)} =\sqrt{c^-} V^-_{(j-d)\,i} \quad {\hbox{for }}\,\, 1 \le i \le d, \, \, (d+1) \le j \le 2d. \end{equation*} Note that $c(i,j)$ takes integer values from $1$ to $2d^2$.
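The decomposition (\ref{td}) can be verified numerically on a random state; in this sketch the function and variable names are ours:

```python
import numpy as np

d, t = 3, 0.2  # any t in the allowed parameter range will do
cplus = ((d**2 - 1) / (2 * d)) * (1 / (d - 1) + t)   # c^+
cminus = ((d**2 - 1) / (2 * d)) * (1 / (d + 1) - t)  # c^-

def phi(rho):
    """The transpose-depolarizing channel t rho^T + (1-t) Tr(rho) I/d."""
    return t * rho.T + (1 - t) * np.trace(rho) * np.eye(d) / d

def phi_pm(rho, sign):
    """The extreme channels Phi^{+/-}(rho) = (I Tr(rho) +/- rho^T)/(d +/- 1)."""
    return (np.trace(rho) * np.eye(d) + sign * rho.T) / (d + sign)

rng = np.random.default_rng(1)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho)  # a random density matrix
```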
In terms of the above operators, the Kraus operators of the transpose-depolarizing channel $ \Phi$, (\ref{td}), can be expressed as \begin{equation} A_{c(i,j)} := A^+_{c(i,j)} {\mathcal{I}}(1 \le j \le d) + A^-_{c(i,j)}{\mathcal{I}}(d+1 \le j \le 2d), \label{kraustd} \end{equation} where ${\mathcal{I}}(\cdot )$ denotes an indicator function. Its complementary channel is given by \begin{equation*} \tilde{\Phi}(\rho ) := \Bigl[\tr A_\alpha \rho A_\beta^* \Bigr]_{\alpha, \beta = 1, \ldots, 2d^2}. \end{equation*} Let us first consider the case $1\leq \alpha ,\beta \leq d^{2}$, for which $\alpha =c(i,j)$ and $\beta =c(i^{\prime },j^{\prime })$ for some $1\leq i,i^{\prime }\leq d$ and $1\leq j,j^{\prime }\leq d$. From (\ref{kraustd}) it follows that \begin{eqnarray*} \tr A_{c(i,j)}\rho A_{c(i^{\prime },j^{\prime })}^{\ast } &=& \tr \,{c^{+}} V_{ji}^{+}\rho V_{i^{\prime }j^{\prime }}^{+} \\ &=& \frac{c^{+}}{2(d+1)} \left[ \delta _{jj^{\prime }}\rho _{ii^{\prime }}+\delta _{ji^{\prime }}\rho _{ij^{\prime }}+\delta _{ij^{\prime }}\rho _{ji^{\prime }}+\delta _{ii^{\prime }}\rho _{jj^{\prime }}\right] \\ &=& \frac{c^{+}}{2(d+1)} \left[ \rho\otimes I +(\rho\otimes I )F+F(\rho\otimes I )+F(\rho \otimes I )F\right] _{ij,i^{\prime }j^{\prime }} \\ &=& \frac{c^{+}}{2(d+1)} \left[ (I_{12}+F)(\rho \otimes I )(I_{12}+F)\right] _{ij,i^{\prime }j^{\prime }}. \end{eqnarray*} Here the flip operator $F$ is defined by its action \begin{equation*} F|ij\rangle =|ji\rangle , \end{equation*} on basis vectors $|ij\rangle $ in ${\mathcal{H}}\otimes {\mathcal{H}}$ and $I_{12}$ is the identity operator in ${\mathcal{H}}\otimes {\mathcal{H}}$. Moreover, $\rho _{ij}:=\langle i|\rho |j\rangle $. Similarly, for $(d^{2}+1)\leq \alpha ,\beta \leq 2d^{2}$ we have $\alpha =c( i, \tilde{\jmath})$ and $\beta =c(i^{\prime },{\tilde{\jmath}}^{\prime })$ for some $1\leq i,i^{\prime }\leq d$ and $d+1\leq \tilde{\jmath},{\tilde{\jmath}}^{\prime }\leq 2d$.
Defining $j=\tilde{\jmath}-d$ and $j^{\prime }={\tilde{\jmath}}^{\prime }-d$, we get \begin{eqnarray*} \tr A_{c(i, {\tilde{\jmath}})}\rho A_{c(i^{\prime}, {\tilde{\jmath}}^{\prime })} ^{\ast } &=& \tr\,{c^{-}}V_{ji}^{-}\rho V_{i^{\prime }j^{\prime }}^{-} \\ &=& \frac{c^{-}}{2(d-1)} \left[ \delta _{jj^{\prime }}\rho _{ii^{\prime }}-\delta _{ji^{\prime }}\rho _{ij^{\prime }}-\delta _{ij^{\prime }}\rho _{ji^{\prime }}+\delta _{ii^{\prime }}\rho _{jj^{\prime }}\right] \\ &=&\frac{c^{-}}{2(d-1)}\left[ (I_{12}-F)(\rho \otimes I )(I_{12}-F)\right] _{ij,i^{\prime }j^{\prime }}. \end{eqnarray*} For $\alpha =c({i},j)$, $\beta =c(i^{\prime }, {\tilde{\jmath}}^{\prime })$ for some $1\leq i,i^{\prime },j\leq d$ and $d+1\leq {\tilde{\jmath}}^{\prime }\leq 2d$, we have \begin{eqnarray*} \tr A_{c({i},j)}\rho A_{c(i^{\prime }, {\tilde{\jmath}}^{\prime })}^{\ast } &=&\tr \,\sqrt{c^{+}c^{-}}V_{ji}^{+}\rho V_{i^{\prime }j^{\prime }}^{-} \\ &=&\frac{1}{2}\sqrt{\frac{c^{+}c^{-}}{(d+1)(d-1)}}\left[ (I_{12}+F)(\rho \otimes I )(I_{12}-F)\right] _{ij,i^{\prime }j^{\prime }}. \end{eqnarray*} By symmetry, for $\alpha =c(i, \tilde{\jmath})$, $\beta =c({i}^{\prime },j^{\prime })$ for some $1\leq i, i^{\prime },j^{\prime }\leq d$ and $d+1\leq \tilde{\jmath}\leq 2d$, we have \begin{equation*} \tr A_{c(i, {\tilde{\jmath}})}\rho A_{c({i}^{\prime },j^{\prime })}^{\ast }= \frac{1}{2}\sqrt{\frac{c^{+}c^{-}}{(d+1)(d-1)}}\left[ (I_{12}-F)(\rho \otimes I )(I_{12}+F)\right] _{ij,i^{\prime }j^{\prime }}. \end{equation*} From the above relations one concludes that the complementary channel of the transpose-depolarizing channel has the (non--minimal) representation \begin{equation*} {\tilde{\Phi}}(\rho )=T(\rho \otimes I )T^{\ast }, \end{equation*} where \begin{equation*} T^{\ast }=\left[ \begin{array}{cc} a^{+}(I_{12}+F) & a^{-}(I_{12}-F) \end{array} \right], \end{equation*} with \begin{equation*} a^{\pm }:=\sqrt{\frac{c^{\pm }}{2(d\pm 1)}}. 
\end{equation*} Let $T=US$ denote the polar decomposition of the matrix $T$, where $S=|T|=\sqrt{T^{\ast }T}$ is a positive Hermitian operator in ${\mathcal{H}}\otimes {\mathcal{H}}\simeq {\mathcal{H}}_{d^{2}}$ and $U$ is an isometry from ${\mathcal{H}}_{d^{2}}$ to ${\mathcal{H}}_{2d^{2}}$. By using the fact that $(I_{12}\pm F)/2$ are projection operators, we obtain the minimal representation \begin{equation} \label{ctd} {\tilde{\Phi}}(\rho )=S(\rho \otimes I )S^{\ast }, \end{equation} where \begin{equation*} S=\sqrt{T^{\ast }T}=(a^{+}+a^{-})I_{12}+(a^{+}-a^{-})F. \end{equation*} The covariance property of the channel (\ref{ctd}) is \begin{equation*} \tilde{\Phi}(U\rho U^{\ast })=(U\otimes U)\tilde{\Phi}(\rho )(U^{\ast }\otimes U^{\ast }), \end{equation*} as follows from the fact that $F(U\otimes U)=(U\otimes U)F.$ \section{Coupling a channel with its complementary} Let us now study the properties of the channel given by the tensor product of the WH channel \begin{equation} \Phi (\rho ):=\frac{1}{d-1}\left( {I}\tr\rho -\rho ^{T}\right), \end{equation} and its complementary channel \begin{equation} {\tilde{\Phi}}(\rho )=\frac{1}{2(d-1)}(I_{12}-F)(\rho \otimes I )(I_{12}-F). \label{whcomp} \end{equation} The particular significance of the WH channel lies in the fact that it provides a counterexample to the multiplicativity of the maximal output $p$-norm for $p>4.79$ and $d=3$ \cite{WH}. It is interesting to investigate whether a similar violation of multiplicativity is exhibited by the product channel $\Phi \otimes {\tilde{\Phi}}$. The multiplicativity of the maximal output $p$-norm, and hence the additivity of the minimum output R\'{e}nyi $p$-entropies, of the WH channel for $p\in \lbrack 1,2]$ was established in \cite{my, af, dhs}. It is also interesting to study whether these additivity properties hold for the channel $\Phi \otimes \tilde{\Phi}$. 
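Both defining properties of the minimal representation (\ref{ctd}) — trace preservation and $U\otimes U$ covariance — are easy to confirm numerically. The following sketch (our own illustration; the values of $d$ and $t$, the input state, and the unitary are arbitrary) does so:

```python
import numpy as np

d, t = 3, 0.1                                 # arbitrary choices
cp = (d**2 - 1)/(2*d) * (1/(d - 1) + t)       # c^+
cm = (d**2 - 1)/(2*d) * (1/(d + 1) - t)       # c^-
ap, am = np.sqrt(cp/(2*(d + 1))), np.sqrt(cm/(2*(d - 1)))   # a^±

# flip operator F|ij> = |ji> on C^d ⊗ C^d
F = np.zeros((d*d, d*d))
for i in range(d):
    for j in range(d):
        F[j*d + i, i*d + j] = 1.0

# minimal representation S = (a^+ + a^-) I_12 + (a^+ - a^-) F
S = (ap + am) * np.eye(d*d) + (ap - am) * F

def comp_channel(rho):
    """Complementary channel rho -> S (rho ⊗ I) S^*."""
    return S @ np.kron(rho, np.eye(d)) @ S.conj().T

rng = np.random.default_rng(1)
M = rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d))
rho = M @ M.conj().T
rho /= np.trace(rho)
U, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d)))

assert np.isclose(np.trace(comp_channel(rho)), 1.0)       # trace preserving
lhs = comp_channel(U @ rho @ U.conj().T)
rhs = np.kron(U, U) @ comp_channel(rho) @ np.kron(U, U).conj().T
assert np.allclose(lhs, rhs)                              # U ⊗ U covariance
```

The covariance follows exactly as in the text: $S$ commutes with $U\otimes U$, and $(U\otimes U)(\rho\otimes I)(U\otimes U)^{\ast} = U\rho U^{\ast}\otimes I$.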
For the WH channel, $\check{R}_{p}(\Phi )=\check{R}_{p}(\tilde{\Phi})=\log (d-1)$ for all $p\geq 1$, since $\nu_p(\Phi)=(d-1)^{(1-p)/p}$ as shown in \cite{WH}. Further, it was observed in \cite{WE} that if for some channel $\Phi$ \begin{equation*} \check{R}_{p}(\Phi )=\check{R}_{q}(\Phi )\quad {\hbox{ for }}1\leq q\leq p, \end{equation*} then the additivity of the minimal output R\'{e}nyi $p$-entropy implies the additivity of the minimal output R\'{e}nyi $q$-entropy. By using these facts, the proof of the additivity relation \begin{equation} \check{R}_{p}(\Phi \otimes {\tilde{\Phi}})=\check{R}_{p}(\Phi )+\check{R}_{p}(\tilde{\Phi}) \label{addprod} \end{equation} reduces to proving \begin{equation} \check{R}_{2}(\Phi \otimes {\tilde{\Phi}})=2\check{R}_{2}(\Phi ). \label{addren} \end{equation} We can restate the additivity conjecture (\ref{addren}) as a multiplicativity of maximal $2$--norms \begin{equation} \nu _{2}(\Phi \otimes {\tilde{\Phi}})=\nu _{2}(\Phi )\nu _{2}({\tilde{\Phi}})=\nu _{2}(\Phi )^{2}, \label{final} \end{equation} where \begin{equation} \nu_2 (\Phi \otimes {\tilde{\Phi}}) := \max_{\atop{|\psi_{12}\rangle \in {\mathcal{H}}_1 \otimes {\mathcal{H}}_2 }{{||\psi_{12} ||=1}}}\, \left\{ ||(\Phi\otimes {\tilde{\Phi}})(|\psi_{12}\rangle\langle \psi_{12}| )||_2\right\}, \label{multten} \end{equation} and we have made use of the relation $\nu _{2}({\tilde{\Phi}})=\nu _{2}(\Phi )$ \cite{holcc}. To prove (\ref{final}), it is sufficient to show that the maximum on the right hand side of (\ref{multten}) is achieved for unentangled vectors $|\psi _{12}\rangle $, which in turn corresponds to the reduced states $\rho _{1}:=\tr_{{\mathcal{H}}_{2}}|\psi _{12}\rangle \langle \psi _{12}|$ and $\rho _{2}:=\tr_{{\mathcal{H}}_{1}}|\psi _{12}\rangle \langle \psi _{12}|$ being pure. 
Let the output of the product channel for an arbitrary pure input state $|\psi _{12}\rangle \langle \psi _{12}|\in \mathfrak{S}({\mathcal{H}}_{1}\otimes {\mathcal{H}}_{2})$ be denoted by \begin{equation} \Omega :=(\Phi \otimes {\tilde{\Phi}})(|\psi _{12}\rangle \langle \psi _{12}|)=(\mathrm{Id}\otimes {\tilde{\Phi}})(\Phi \otimes \mathrm{Id})(|\psi _{12}\rangle \langle \psi _{12}|), \end{equation} where $\mathrm{Id}$ is the identity channel. Due to the unitary covariance of the channel $\Phi \otimes {\tilde{\Phi}}$, the state vector $|\psi _{12}\rangle $ can be chosen as \begin{equation} |\psi _{12}\rangle =\sum_{j=1}^{d}\sqrt{\lambda _{j}}|j\rangle \otimes |j\rangle , \label{schmidt} \end{equation} where $\{|j\rangle \}$ is the fixed orthonormal basis in $\mathbf{C}^{d}$ (the one which defines the transposition), $\lambda _{j}\geq 0$ and $\sum_{j=1}^{d}\lambda _{j}=1$. The reduced density matrices $\rho _{i},\,i=1,2$ are therefore given by \begin{equation} \rho :=\rho _{1}=\sum_{j=1}^{d}\lambda _{j}|j\rangle \langle j|=\rho _{2}. \label{red} \end{equation} Using the decomposition (\ref{schmidt}) we find that \begin{equation*} (\Phi \otimes \mathrm{Id})(|\psi _{12}\rangle \langle \psi _{12}|)= \sum_{j,k} \sqrt{\lambda_j \, \lambda_k} \Phi(|j\rangle \langle k|) \otimes |j\rangle \langle k|. \end{equation*} From the definition (\ref{whdef}) of the WH channel it follows that \begin{equation*} \Phi(|j\rangle \langle k|) = \frac{1}{d-1}\left(I \delta_{jk} - |k\rangle \langle j| \right), \end{equation*} which in turn implies that \begin{eqnarray*} (\Phi \otimes \mathrm{Id})(|\psi _{12}\rangle \langle \psi _{12}|)&=& \frac{1}{d-1}\left[\sum_j \lambda_j I \otimes |j\rangle \langle j| - \sum_{j,k} \sqrt{\lambda_j \lambda_k} |kj\rangle \langle jk| \right] \notag \\ &=& \frac{1}{d-1}\left[I\otimes \rho - F(\sqrt{\rho} \otimes \sqrt{\rho})\right], \end{eqnarray*} where $\rho$ is given by (\ref{red}) and hence $\sqrt{\rho} =\sum_j \sqrt{\lambda_j} |j\rangle \langle j|$. 
Due to the relation $F(I\otimes \rho )=(\rho \otimes I)F$, the complementary channel (\ref{whcomp}) can be alternatively expressed in the following forms: \begin{eqnarray*} {\tilde{\Phi}}(\rho )&=&\frac{1}{(d-1)}\left( \frac{I_{12}-F}{2}\right) \left( \rho \otimes I+I \otimes \rho\right) \notag \\ &=& \frac{1}{(d-1)}\left( \frac{I_{12}-F}{2}\right) \left( \rho \otimes I+F(\rho \otimes I)F\right). \label{wh2} \end{eqnarray*} Using the above relations we get \begin{eqnarray} \Omega &=&(\mathrm{Id}\otimes {\tilde{\Phi}})\left[ \frac{1}{d-1}\Bigl( I\otimes \rho -F( \sqrt{\rho }\otimes \sqrt{\rho })\Bigr) \right] \notag \\ &=&\frac{1}{(d-1)^{2}}\left( \frac{I_{123}-F_{23}}{2}\right) \Bigl[I\otimes \rho \otimes I+I\otimes I\otimes \rho \notag \\ &-&F_{12}(\sqrt{\rho }\otimes \sqrt{\rho }\otimes I)-F_{23}(F_{12}(\sqrt{\rho }\otimes \sqrt{\rho }\otimes I))F_{23}\Bigr], \label{step2} \end{eqnarray} where we have defined \begin{equation*} I_{123}=I_{12}\otimes I,\quad F_{23}:=I\otimes F,\quad F_{12}:=F\otimes I. \end{equation*} We can now evaluate $\tr \Omega^2$ by employing the spectral decompositions of $\rho$ (and hence of $\sqrt{\rho}$), the resolution of the identity $I = \sum_k |k\rangle \langle k|$, and the explicit actions of the operators $F_{12}$ and $F_{23}$ on basis vectors, namely, \begin{equation*} F_{12}|ijk\rangle = |jik\rangle; \quad F_{23}|ijk\rangle = |ikj\rangle, \end{equation*} where $|ijk\rangle :=|i\rangle \otimes |j\rangle \otimes |k\rangle $. This calculation yields \begin{equation*} \tr \Omega^2 = \frac{1}{(d-1)^4} \left[ (d^2 - 4d + 5)\tr \rho^2 + 2(d-2) \right], \end{equation*} which is indeed maximised when $\tr \rho^2 = 1$, i.e., when $\rho$ is a pure state. Thus we see that, for the product channel $\Phi \otimes {\tilde{\Phi}}$, the multiplicativity (\ref{final}) of the $2$--norms, and hence the additivity (\ref{addprod}) of the minimum output entropy, holds. 
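The closed-form expression for $\tr\Omega^2$ can be cross-checked numerically by assembling $\Omega$ from the two maps above. The sketch below (our own verification, not from the paper) does this for $d=3$ and a random Schmidt vector:

```python
import numpy as np

d = 3
I = np.eye(d)
# flip operator F|ij> = |ji> on C^d ⊗ C^d
F = np.zeros((d*d, d*d))
for i in range(d):
    for j in range(d):
        F[j*d + i, i*d + j] = 1.0

rng = np.random.default_rng(0)
lam = rng.random(d)
lam /= lam.sum()                        # random Schmidt coefficients
psi = np.zeros(d*d)
for j in range(d):
    psi[j*d + j] = np.sqrt(lam[j])
X = np.outer(psi, psi)                  # |psi_12><psi_12|

# (Phi ⊗ Id)(X) = (I ⊗ tr_1 X - X^{T_1}) / (d - 1) for the WH channel
Xr = X.reshape(d, d, d, d)              # indices (i1, i2; j1, j2)
tr1 = np.einsum('iaib->ab', Xr)         # partial trace over system 1
T1 = np.einsum('iajb->jaib', Xr).reshape(d*d, d*d)   # partial transpose on 1
Y = (np.kron(I, tr1) - T1) / (d - 1)

# (Id ⊗ Phi~): embed (I_12 - F) on systems 2 and 3
P = np.kron(I, np.eye(d*d) - F)
Omega = P @ np.kron(Y, I) @ P / (2*(d - 1))

lhs = np.trace(Omega @ Omega)
rhs = ((d*d - 4*d + 5) * np.sum(lam**2) + 2*(d - 2)) / (d - 1)**4
assert np.isclose(lhs, rhs)             # closed form for tr Omega^2
assert np.isclose(np.trace(Omega), 1.0) # the output is a state
```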
To investigate a violation of multiplicativity for the product channel $\Phi \otimes {\tilde{\Phi}}$, let us consider the output $\Omega ^{me}$ of this channel when the input is the maximally entangled state $|\psi _{me}\rangle \langle \psi _{me}|$, \begin{equation*} |\psi _{me}\rangle =\frac{1}{\sqrt{d}}\sum_{j=1}^{d}|jj\rangle . \end{equation*} In this case the reduced density matrix $\rho $, defined by (\ref{red}), is the completely mixed state: $\rho =I/d$. Hence, $\Omega ^{me}$ is simply obtained from (\ref{step2}) by replacing $\rho $ by $I/d$ on its right hand side. This yields the relation \begin{eqnarray*} \Omega ^{me} &=&\frac{1}{d(d-1)^{2}}\Bigl[P_{1} \\ &+&\frac{1}{2}\bigl( I_{123}-F_{23}-F_{12}-F_{23}F_{12}F_{23}+F_{23}F_{12}+F_{12}F_{23}\bigr) \Bigr], \end{eqnarray*} where $P_{1}:=(I_{123}-F_{23})/{2}$ is a projection operator. Let us express $\Omega ^{me}$ in a more transparent form, in order to evaluate its eigenvalues. For this purpose, define a vector \begin{equation*} |\phi _{\{ijk\}}\rangle :=\frac{1}{\sqrt{6}}\Bigl[|ijk\rangle +|jki\rangle +|kij\rangle -|jik\rangle -|kji\rangle -|ikj\rangle \Bigr]. \end{equation*} It is of unit norm and satisfies the relations \begin{eqnarray*} F_{12}|\phi _{\{ijk\}}\rangle &=&-|\phi _{\{ijk\}}\rangle , \\ F_{23}|\phi _{\{ijk\}}\rangle &=&-|\phi _{\{ijk\}}\rangle . \end{eqnarray*} Moreover, \begin{equation*} \langle \phi _{\{ijk\}}|\phi _{\{i^{\prime }j^{\prime }k^{\prime }\}}\rangle =0\quad {\hbox{unless}}\quad \{ijk\}=\{i^{\prime }j^{\prime }k^{\prime }\}, \end{equation*} and hence the vectors \begin{equation*} \Bigl\{|\phi _{\{ijk\}}\rangle \,:\,i,j,k\in \{1,2,\ldots ,d\},\,\,i,j,k\,\,{\hbox{all different}}\Bigr\} \end{equation*} form an orthonormal set. Therefore \begin{equation*} P_{2}:= \sum_{\atop{\{ijk\}}{{\atop{i,j,k \in \{1,2, \ldots, d\}}{{i,j,k \,\, {\hbox{all different}}}}}}} |\phi_{\{ijk\}}\rangle\langle \phi_{\{ijk\}}|, \end{equation*} is a projection operator. 
Moreover, \begin{equation*} {\hbox{ran }}P_{2}\subset {\hbox{ran }} P_{1}. \end{equation*} It is easy to see that \begin{equation*} I_{123}-F_{23}-F_{12}-F_{23}F_{12}F_{23}+F_{23}F_{12}+F_{12}F_{23}= 6P_{2}. \end{equation*} Hence, \begin{equation*} \Omega ^{me}=\frac{1}{d(d-1)^{2}}\left[ P_{1}+3P_{2}\right] , \end{equation*} and its eigenvalues are \begin{enumerate} \item {${4}/{d(d-1)^2}$ with multiplicity \begin{equation*} {\binom{d }{3}} \equiv {\hbox{number of distinct subsets }}\, \{ijk\}\, {\hbox{of the set }} \{1,2, \ldots, d\}; \end{equation*} } \item {${1}/{d(d-1)^{2}}$ with multiplicity \begin{equation*} {\hbox{dim }}({\hbox{ran }}P_{1}\ominus {\hbox{ran }}P_{2})=\frac{d^{2}(d-1)}{2}-{\binom{d}{3}}=\frac{d(d^{2}-1)}{3}; \end{equation*} } \item {$0$ with multiplicity $d(d+1)/2$. } \end{enumerate} For $d=3$, therefore, there is a non-degenerate eigenvalue of $1/3$, the eigenvalue $1/12$ with multiplicity $8$, and the eigenvalue $0$ with multiplicity $6$. The non--zero eigenvalues are found to be exactly identical to those of the channel $\Phi \otimes \Phi $ for $d=3$ (see \cite{WH}), for which a violation of the multiplicativity of the maximal output $p$--norm was obtained for $p>4.79$. Hence, we deduce that a similar violation of multiplicativity is exhibited by the channel $\Phi \otimes {\tilde{\Phi}}$ for $p>4.79$ and $d=3$. For $d\geq 4$ we get \begin{equation*} \nu _{p}(\Phi \otimes {\tilde{\Phi}})^{p}/\nu _{p}(\Phi )^{p}\nu _{p}({\tilde{\Phi}})^{p}\ge \frac{1}{(d-1)^{2}}\left[ {\binom{d}{3}}\left( {{4}/{d}} \right) ^{p}+\frac{d(d^{2}-1)}{3}\left( {{1}/{d}}\right)^{p}\right], \end{equation*} but the right hand side is always less than or equal to $1$ for $p\ge 1$, so, contrary to the case of $\Phi \otimes \Phi$, considering the output for the maximally entangled state does not allow us to conclude a violation of multiplicativity. 
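For $d=3$, the crossing point of the ratio above can be located numerically from the non-zero eigenvalues of $\Omega^{me}$. The short sketch below (our own illustration) recovers a threshold close to the quoted $p\approx 4.79$:

```python
d = 3

def ratio(p):
    """Lower bound on nu_p(Phi ⊗ Phi~)^p divided by nu_p(Phi)^p nu_p(Phi~)^p
    at d = 3, using the eigenvalues of Omega^me derived above:
    1/3 (multiplicity 1) and 1/12 (multiplicity 8)."""
    tr_p = (1/3)**p + 8 * (1/12)**p        # tr (Omega^me)^p
    return tr_p / (d - 1)**(2*(1 - p))     # nu_p(Phi)^p = (d-1)^(1-p)

# at p = 1 the ratio is exactly 1 (trace normalization); it exceeds 1 for
# large p. Bisection locates the crossing point ratio(p) = 1 in (4, 5):
lo, hi = 4.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if ratio(mid) < 1 else (lo, mid)
p_star = 0.5 * (lo + hi)   # crossing point, close to the p > 4.79 threshold
```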
However, this might be due to the fact that the channel $\Phi \otimes {\tilde{\Phi}}$ does not have the flip symmetry of the channel $\Phi \otimes \Phi$, and the maximizing input state could be different from the maximally entangled one. \textbf{Acknowledgments.} This work was accomplished when A. H. was the Leverhulme Visiting Professor at DAMTP, CMS, University of Cambridge. The authors are grateful to Yu. M. Suhov for useful discussions.
\section{Introduction} Space exploration has triggered major progress in our understanding of comets, beginning in March 1986 with the exploration of comet 1P/Halley by an armada of missions including the ESA Giotto mission (Reinhard and Battrick, 1986). With the arrival of the ESA Rosetta mission at comet 67P/Churyumov-Gerasimenko (67P/C-G) in July 2014, comets appear more complex and fascinating than ever. All the visited comets show a low visible albedo and a heterogeneous surface (Barucci et al. 2011). However, 67P/C-G and some other periodic comets reveal intriguing bright spots on their surfaces (Sunshine et al. 2006; Sunshine et al. 2012; Li et al. 2013; Pommerol et al. 2015). Understanding the properties and composition of comet 67P/C-G is one of the major objectives of the ESA Rosetta mission, and all on-board instruments have so far contributed a large quantity of high-quality data. Since the orbital insertion of the Rosetta spacecraft, the comet nucleus has been mapped by both OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) and VIRTIS (Visible InfraRed Thermal Imaging Spectrometer), which have acquired a huge number of surface images in different wavelength bands, together with spectra, producing the most detailed maps of a cometary nucleus surface at the highest spatial resolution to date. The OSIRIS imaging system (Keller et al. 2007) is composed of the Narrow Angle Camera (NAC), designed to study the nucleus with 11 broad-band filters at different wavelengths from the ultraviolet (269 nm) to the near-infrared (989 nm), while the Wide Angle Camera (WAC) is devoted to the study of gaseous species in the coma with a set of 14 narrow-band filters ranging from ultraviolet to visible wavelengths. 
The OSIRIS imaging system was the first instrument capable of mapping a comet surface at high resolution, reaching a maximum resolution of 11cm/px during the closest fly-by that occurred on February 14, 2015, at a distance of $\sim$ 6 km from the nucleus surface. VIRTIS (Coradini et al. 2007) is composed of two channels: VIRTIS-M, a spectro-imager operating both in the visible (0.25$-$1.0 $\mu$m) and infrared (1.0$-$5.0 $\mu$m) ranges at low spectral resolution ($\lambda/\delta\lambda$=70--380), devoted to surface composition, and VIRTIS-H, a single-aperture infrared spectrometer (1.9$-$5.0 $\mu$m) with higher spectral resolution capabilities ($\lambda/\delta\lambda$=1300-3000) devoted to the investigation of activity. \begin{figure*}[t] \includegraphics[width=1\textwidth]{Barucci_Fig1_new.pdf} \caption{Map of comet 67P/Churyumov-Gerasimenko, resulting from merging a more detailed shape model SHAP4S (Preusker et al. 2015) for the northern hemisphere and shape model SHAP5 (Jorda et al. 2016) for the southern hemisphere. In red the selected bright spots are reported, based on OSIRIS images and a spectro-photometric analysis, considered as good targets to be investigated by an analysis of VIRTIS data, plus the two bright spots analysed by Filacchione et al. (2016a). The numbers (1-8) represent the spots with positive detection of H$_2$O ice by VIRTIS analysis discussed in this paper. } \label{spots} \end{figure*} The OSIRIS images of 67P/C-G show a highly shaped, irregular bilobed comet, with a dark, dehydrated, and morphologically complex surface characterized by several terrain types, including numerous diverse geomorphologic features (Sierks et al. 2015). The comet's surface is highly heterogeneous with different geological terrains showing smooth, dust-covered areas, large scale depressions, brittle materials with many pits and circular structures, and exposed consolidated areas (Thomas et al. 2015a; El-Maarry et al. 2015; El-Maarry et al. 2016). 
Pits have also been connected to activity, possibly accompanied by outbursts (Vincent et al. 2015). The complex surface of comet 67P/C-G shows regions covered by different layers of dust on both lobes, including areas with evidence of transport and redistribution of dust materials (Thomas et al. 2015b). Temporal variations of morphological structures have also been observed on the smooth terrains of the Imhotep region (Groussin et al. 2015), as well as in other regions (Fornasier et al. 2016), in particular when the comet was close to perihelion. The comet shows albedo variation of up to about 25\% and spectrophotometric analysis (Fornasier et al. 2015) identified at least three groups of terrains with different spectral slopes (computed in the 535-882 nm range). These differences have been associated with the local composition variation, but since many different surface characteristics overlap, this makes the interpretation difficult. Oklay et al. (2016a) also studied surface variegation on the comet, detecting local color inhomogeneities connected to active and inactive surface regions. \begin{table*} \begin{center} \caption{Observing conditions for the OSIRIS images as reported in Fig. 2, where the ice spots have been identified. The time (UT) refers to the start time of the first image of each sequence, followed by the number of filters available. The diameter size (d) of the spots along with the location region, phase angle ($\alpha$), distance between Rosetta spacecraft, and comet surface ($\Delta$), spatial resolution (R), latitude (Lat), and longitude (Long) are reported.} \small{ \label{observing} \begin{tabular}{l l l c c c c c c c} \hline N. 
& Time reference & Filters & d & Region & $\alpha$ & $\Delta$ & R & Lat & Long \\ & & & (m) & &($^{\circ}$) & (Km) & (m/px) & ($^{\circ}$) & ($^{\circ}$) \\ \hline 1 & 2015-06-27T13h26 & F22, F23, F41, F24, F71, F27, F51, & 36 & Imhotep & 89.50 & 191.94 & 3.6 &-5.8 & 189.4 \\ & & F61, F28, F15& & & & & & & \\ 2 & 2015-06-27T17h48 & F22, F23, F41, F24, F71, F27, F51, &45 & Anhur & 89.39 & 188.43 & 3.5 & -41.7 & 63.7 \\ &&F61, F28, F15 & & & & & & & \\ 3 & 2015-04-12T21h42 & F22, F23, F41, F24, F71, F27, F51 &11 & Khonsu & 80.46 & 147.98 & 2.7 & -23.8 & 198.3 \\ &&F61, F28,F16, F15 & & & & & & & \\ 4 & 2014-11-22T04h57& F22, F23, F24, F27, F28, F51, F61 &10 & Atum & 92.70 & 29.50 & 0.54 & -20.7 & 227.4 \\ 5 & 2014-11-22T06h32& F22, F23, F24, F27, F28, F51, F61 &6.5 & Imhotep & 92.78 & 29.50 & 0.54 & -22.0 & 182.8 \\ 6 & 2014-09-19T09h19 & F22, F16, F23, F24, F41 & 2-5 (each) & Khepry & 70.48 & 26.50 & 0.49 & 4.2 & 71.7 \\ 7 & 2014-09-05T05h21 & F22, F23, F27, F16, F28, F41, F71 & 3-5 (each) & Imhotep & 57.23 & 41.44 & 0.77 & -8.1 &188.3 \\ 8 & 2014-09-05T08h00& F22, F23, F27, F16, F28, F41, F71& 6 & Imhotep & 58.43 & 40.76 & 0.75 & -2.4 &174.8\\ \hline \hline \end{tabular} } \end{center} \end{table*} The first results by VIRTIS (Capaccioni et al. 2015) about the spectral analysis showed the presence of a broad absorption feature around 2.9--3.6 $\mu$m present across the entire observed region and compatible with carbon-bearing compounds (opaque minerals associated with organic macromolecular materials) with no evidence of ice-rich patches. Later on, De Sanctis et al. (2015) detected the first evidence for the presence of H$_2$O ice as part of a diurnal cycle on the neck of the comet, while Filacchione et al. (2016a) identified H$_2$O ice on two gravitational debris falls in the Imhotep region exposed on the walls of elevated structures. 
The latter was interpreted as possibly being extended layering in which the outer dehydrated crust is superimposed over water ice-enriched layers. During the first mapping phase of the 67P/C-G nucleus, completed in August-November 2014 (heliocentric distances between 3.6 and 2.7 AU), VIRTIS-M achieved a complete mapping of the illuminated regions in the equatorial and northern hemisphere, which enabled us to retrieve the first compositional maps by using VIS and IR spectral parameters (Filacchione et al. 2016b). During the same period, coma observations performed by the VIRTIS-M (Migliorini et al. 2016) and VIRTIS-H (Bockel\'ee-Morvan et al. 2015) channels traced the H$_2$O vapor emission, which occurs preferentially above the illuminated regions of the northern hemisphere. As limited evidence of exposed H$_2$O ice regions has so far been collected, the aim of this work is to investigate in depth the composition of the 67P/C-G surface, combining the high spatial resolution images of OSIRIS and the high spectral resolution of VIRTIS for detecting and emphasizing interesting ice spectral signatures. Over 100 meter-sized spots were identified by Pommerol et al. (2015), possibly associated with the presence of H$_2$O on the basis of laboratory experiments, but without confirmation of the actual presence of ice. Deshapriya et al. (in preparation) are collecting a catalogue of the large bright spots present on the surface of the comet by analyzing OSIRIS images and spectrophotometry data. For this work we selected the largest spots as good candidates for H$_2$O ice that could be detected at the lower angular resolution of VIRTIS. We identify large features with high albedo and low spectro-photometric slope with OSIRIS, compute accurate coordinates, and analyze them on the basis of the VIRTIS spectra. Of the large number of spots identified by OSIRIS, 13 were checked with VIRTIS; eight of them show clear evidence of H$_2$O ice in their spectra. 
In this paper we report on the analysis of the eight bright spots for which we obtained a positive detection of H$_2$O ice in VIRTIS data. In Section 2 the OSIRIS data and the performed analysis are presented, and in Section 3 the VIRTIS data, while in Section 4 the spectral modeling of the selected spots is described. In Section 5 a detailed analysis of the spots and surrounding area is reported, while in Section 6 a possible evolution of the area is discussed. The main aim of this work is to confirm the unambiguous presence of H$_2$O ice by spectral analysis. \begin{figure*} \centering \includegraphics[width=1\textwidth]{Barucci_Fig2a.pdf} \clearpage \end{figure*} \begin{figure*} \centering \includegraphics[width=1\textwidth]{Barucci_Fig2_bis.pdf} \caption{NAC OSIRIS images (first column) for the eight spots reported in Table 1, with a zoom on each spot (second column). The images have been taken with the F22 filter (at 649.2 nm). The arrows indicate the spots, which have been analysed using boxes of 3$\times$3 pixels. The measured $I/F(\alpha)$ of each bright spot is reported in red, and that of the surrounding area in black (third column). The relative reflectance (normalized to F23 at 535 nm) of the indicated bright spot (red) and of the surrounding area (black) is represented in the fourth column. } \label{images} \end{figure*} \section{OSIRIS data} Bright spots have been observed on 67P/C-G at various locations on the nucleus throughout the Rosetta mission. The first detections of these spots date back to August 2014, and they continued to appear in numerous forms, from ensembles of many small bright spots to much larger individual or twin bright patches. The abundance of these spots reached a peak during the perihelion passage of comet 67P/C-G in August 2015, resulting in the largest white patches ever detected on the cometary nucleus. 
The first objective of this study is to explore the nature of these spots in terms of spectrophotometry. We started by selecting potential bright spots found in OSIRIS NAC data with good spectro-photometric and spatial coverage. In an observational sequence with several filters, because both the spacecraft and the cometary nucleus are constantly in motion, the recorded images do not necessarily show exactly the same field of view. Since the time difference between a pair of consecutive images is around ten seconds, there is a small shift between their fields of view. Hence we have to take this shift into account when stacking images and creating a data cube for spectral analysis. To achieve this, we adopted an algorithm that automatically identifies identical features in consecutive images and estimates the affine transformation matrix between each pair of consecutive images (ORB and RANSAC tools implemented by van der Walt et al., 2014). Then we analyzed the data cubes to identify bright spots. The basic criterion to identify these spots was their high reflectance compared to the typical nucleus in the observed filter wavelengths. We note that the data used are from level 3B of the standard OSIRIS data reduction pipeline, which accounts for the correction for bias, flat field, geometric distortion, and solar flux, and for the calibration in absolute flux (Tubiana et al. 2015). Thus the data available in the reduced files are in the form of the radiance factor, which corresponds to the ratio between the observed scattered radiance ($I$) from the comet and the incoming solar irradiance ($F_{\lambda}$) at the heliocentric distance of the comet, referred to as the $I/F$ value \begin{equation} I/F(\lambda) = \frac{\pi I(i,e,\alpha,\lambda)}{F_{\lambda}}, \end{equation} where $i$ is the incidence angle, $e$ is the emission angle, and $\alpha$ is the phase angle. 
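The radiance factor defined above, and the Lommel-Seeliger disk correction applied to it in the next step, can be sketched as follows (a minimal illustration; the function names and the numerical inputs are our own, not part of the OSIRIS pipeline):

```python
import numpy as np

def radiance_factor(radiance, solar_irradiance):
    """Radiance factor I/F = pi * I / F_lambda."""
    return np.pi * radiance / solar_irradiance

def lommel_seeliger(i_deg, e_deg):
    """Lommel-Seeliger disk function D(i, e) = 2 cos(i) / (cos(i) + cos(e)),
    with incidence and emission angles given in degrees."""
    ci = np.cos(np.radians(i_deg))
    ce = np.cos(np.radians(e_deg))
    return 2.0 * ci / (ci + ce)

def iof_corrected(iof, i_deg, e_deg):
    """Disk-corrected radiance factor (I/F)_corr = (I/F) / D(i, e)."""
    return iof / lommel_seeliger(i_deg, e_deg)
```

For example, at normal incidence and emission ($i = e = 0$) the disk function is 1 and the correction leaves $I/F$ unchanged, while for oblique geometries it compensates for the reduced illumination of the facet.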
Next we proceeded to generate synthetic images to correct for the illumination conditions and observational geometry using the OASIS tool (Jorda et al. 2010) and the shape model SHAP5 (Jorda et al. 2016). This yields incidence and emission angles for each pixel, which enables us to apply the Lommel-Seeliger disk law \begin{equation} D(i,e) = \frac{2 \cos(i)}{\cos(i) + \cos(e)} \end{equation} \begin{equation} (I/F)_{corr}(\alpha, \lambda) = \frac{\pi I(i,e,\alpha,\lambda)/(F_{\lambda})}{D(i,e)}. \end{equation} From the I/F plots we also produced relative reflectance plots with radiance factors normalized to a given filter (i.e. F23, the green filter at 535 nm), giving us insight into the composition of the areas sampled for this analysis. As ice displays a flatter spectrum than the red, organic-rich material typical of the comet's nucleus, we set the following criteria for the bright spots to qualify as final candidates: i) higher albedo than the typical nucleus I/F, and ii) flat spectral behavior in the relative reflectance plots, compared to the typical nucleus. This method enabled us to filter potential candidates and discard certain previously catalogued bright spots that showed higher albedo but not flat spectral behavior when normalized, thus failing to meet the second requirement. In these cases, the high albedo was probably due to the illumination conditions during the observation. We selected 13 spots (or clusters of spots), reported in the map (Fig. 1), as the best sample to be analysed by spectroscopy with VIRTIS. In this paper we present only the eight spots (Table 1) for which the VIRTIS spectral analysis gave a positive detection of H$_2$O ice signatures. The OSIRIS images of the area and the zoom of the selected eight bright spots are reported in Fig. 2, together with the measured I/F and the relative spectro-photometric reflectance. As shown in Fig. 
2, the spots 4 (depending on the shadow), 6, and 7 belong to a cluster. \begin{table*} \centering \caption{Coverage of OSIRIS bright spot locations in VIRTIS dataset and [number] of 2 $\mu m$ absorption band positive identifications. MTPs are Rosetta mission medium term plan time intervals, each one having a duration of about one month (MTP7 corresponds to September 2014, MTP15 to April 2015). In the last column, the total positive detections are reported.} \begin{tiny} \begin{tabular}{lccccccccccc} \hline OSIRIS Spot &Ice spot N. & MTP007 & MTP008 & MTP009 & MTP010 & MTP011 & MTP012 & MTP013 & MTP014 & MTP015 & Total\\ \hline nl1-5b&1 & 20 [0] & 0 & 0 & 0 & 0 & 0 & 15 [2] & 139 [0] & 89 [0] & \textbf{2} \\ nl2-7b&-& 0 & 0 & 0 & 40 [0] & 0 & 16 [0] & 15 [0] & 8 [0] & 8 [0] & \textbf{0}\\ nl3-12b&-& 149 [0] & 0 & 36 [0] & 144 [0] & 64 [0] & 0 & 32 [0] & 77 [0] & 87 [0] & \textbf{0} \\ nl4-13&2 & 0 & 0 & 0 & 0 & 0 & 0 & 4 [0] & 106 [9] & 49 [0] & \textbf{9}\\ nl5-16&-& 142 [0] & 0 & 0 & 32 [0] & 26 [0] & 0 & 45 [0] & 121 [0] & 70 [0] & \textbf{0} \\ nl6-17a&3 & 0 & 0 & 0 & 0 & 0 & 0 & 15 [0] & 116 [9] & 64 [3] & \textbf{12}\\ nl7-19 & 4& 0 & 0 & 195 [32] & 0 & 0 & 0 & 29 [0] & 52 [0] & 39 [0] & \textbf{32}\\ nl8-20 & 5&40 [0] & 0 & 0 & 300 [86] & 0 & 0 & 38 [0] & 190 [0] & 87 [0] & \textbf{86}\\ nl9-22 & - &225 [0] & 0 & 140 [0] & 86 [0] & 0 & 0 & 21 [0] & 77 [0] & 59 [0] & \textbf{0}\\ nl10-23b &-& 0 & 0 & 0 & 0 & 0 & 27 [0] & 184 [0] & 58 [0] & 27 [0] & \textbf{0}\\ nl11-14b & 6&102 [3] & 711 [0] & 72 [0] & 140 [0] & 70 [0] & 0 & 46 [0] & 76 [0] & 65 [0] & \textbf{3}\\ nl12-24 & 7&94 [10] & 0 & 0 & 0 & 0 & 0 & 14 [0] & 122 [4] & 66 [1] & \textbf{15}\\ nl13-25 & 8&326 [0] & 0 & 0 & 345 [3] & 41 [0] & 0 & 43 [0] & 93 [0] & 85 [0] & \textbf{3}\\ \hline \end{tabular} \end{tiny} \label{tbl:table_virtis_listing} \end{table*} \begin{table*} \begin{tiny} \begin{center} \caption{ Summary of VIRTIS-M dataset processed in this work. 
For each spot, we report the observations offering the best signal-to-noise conditions with the pixel position (sample and line) reported in Table 4. For each pixel, basic information about observation time, geometry conditions, distance between the Rosetta spacecraft and comet surface ($\Delta$), Local Solar Time (LST), and retrieved temperature (T) are given. The integration time is 3s for all data reported. The cube parameters indicate the size of the acquisition in bands, sample, line dimensions.} \begin{tabular}{lllcccccccc} \hline N. & Observation & Cube & Start Time & End Time & Phase & Incidence & Emission & $\Delta$ & LST & T \\ & name & Parameters & (UT) & & (deg) & (deg) & (deg) & (km) & (hr) & (K) \\ \hline 1 & I1$\_$00383518966 & 432, 256, 158 & 2015-02-25T21:04:00 & 2015-02-25T21:30:19 & 53.02 & 66.94 & 41.92 & 81.51 & 12.23 & 203 \\ 2 & I1$\_$00385906923 & 432, 256, 70 & 2015-03-25T12:23:18 & 2015-03-25T12:46:34 & 73.49 & 67.08 & 45.83 & 88.23 & 15.20 & 197 \\ 3 & I1$\_$00385885107 & 432, 256, 70 & 2015-03-25T06:19:42 & 2015-03-25T06:42:58 & 74.47 & 53.03 & 31.27 & 94.06 & 12.43 & 218 \\ 4 & I1$\_$00373462192 & 432, 256, 86 & 2014-11-01T11:31:03 & 2014-11-01T11:45:22 & 103.07 & 60.11 & 43.28 & 32.39 & 11.04 & 168 \\ 5 & I1$\_$00377182711 & 432, 256, 80 & 2014-12-14T12:59:43 & 2014-12-14T13:13:02 & 91.77 & 43.41 & 52.15 & 19.44 & 15.44 & 188 \\ 6 & I1$\_$00376302211 & 432, 256, 80 & 2014-12-04T08:24:43 & 2014-12-04T08:38:01 & 91.07 & 57.34 & 70.64 & 23.49 & 14.81 & 179 \\ 7 & I1$\_$00369356914 & 432, 256, 109 & 2014-09-14T23:09:43 & 2014-09-14T23:45:57 & 66.89 & 78.22 & 30.26 & 28.16 & 11.03 & 163 \\ 8 & I1$\_$00377184571 & 432, 256, 74 & 2014-12-14T13:30:43 & 2014-12-14T13:43:02 & 92.66 & 48.30 & 54.87 & 19.92 & 15.59 & 158 \\ \hline \hline \label{table:virtis_obs} \end{tabular} \end{center} \end{tiny} \end{table*} \begin{figure} \centering \includegraphics[width=9.0cm,angle=0]{Barucci_Fig3.pdf} \caption{Spectrum of Dark Terrain unit that corresponds 
to the average spectrum of the comet's surface. } \label{xx} \end{figure} \section{VIRTIS data} The search for VIRTIS-M spectra corresponding to the bright albedo features identified in OSIRIS images required deep mining of the dataset. Since the entire nucleus was imaged with very high redundancy from a wide range of distances, local times, and illumination/viewing geometries, the search was performed starting from georeferenced data. For each individual pixel in each VIRTIS-M observation, many geometry parameters, including longitude, latitude, incidence, emission, and phase angles, distance, and local solar time for the pixel center and four corners, were computed by means of SPICE (Acton, 1996) routines, which reconstruct these quantities from the spacecraft and comet attitude and trajectory kernels. Once computed and validated by the VIRTIS team, these geometry (.GEO) cubes are released through ESA's PSA archive and made publicly available. The nucleus shape model used for the computation is SHAP5, derived from OSIRIS images by using a stereophotoclinometry method (Jorda et al., 2016). The coordinate grid is based on the Cheops frame (Preusker et al., 2015). Some examples of geometry parameters computed for VIRTIS-M nucleus observations are given in Filacchione et al. (2016b). The geometry information for each pixel is then ingested into a database. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{Barucci_Fig4.pdf} \caption{VIRTIS-M infrared data of the eight spots. Spots 1-4 are shown in the left column from top to bottom, and spots 5-8 in the right column from top to bottom. Infrared images in the inset of each plot are built from a combination of spectral bands taken at 1.32 $\mu m$ (B channel), 2.0 $\mu m$ (G), and 4.0 $\mu m$ (R).
For each spot, we report the VIRTIS-M observed reflectance (black curve), not corrected for phase angle, and the best fit (red curve), as derived from the pixels reported in Table \ref{table:virtis_obs} and indicated by red circles on the images. The gaps in the spectral ranges correspond to order-sorting filter-junction wavelengths, which can produce unreliable features and, for this reason, are not taken into account in the analysis.} \label{Spot} \end{figure*} %
\subsection{Search for the 2 $\mu m$ absorption feature} Using the method described above, we searched for VIRTIS-M pixels located at the bright-spot coordinates identified in OSIRIS images. As a general rule, we selected all VIRTIS-M pixels within a radius of $2^\circ$ in longitude and latitude around the position estimated from the OSIRIS images. The corresponding reflectance spectra are grouped together to form a spectrogram (Filacchione et al., 2016a), one for each MTP, and then further processed by calculating the 2 $\mu m$ band depth, which is used as a proxy for the presence of H$_2$O ice on the surface when its value exceeds a 5$\%$ threshold. Water ice reflectance shows diagnostic absorptions at 1.5, 2.0, and 3.0 $\mu m$. The decision to use only the 2 $\mu m$ band as a proxy to identify the presence of water ice on the surface of 67P/C-G was driven by two different requirements: i) the 1.5 $\mu m$ band is partially corrupted by the presence of an instrumental order-sorting filter, which makes it difficult to retrieve a correct band shape, particularly for pixels close to sharp illumination transitions and shadows; ii) the intense 3 $\mu m$ band has a complex shape owing to the overlap of the water and organic material absorptions, which causes changes in shape, center, and depth, depending on the relative abundances of the two end-members. Conversely, the 2 $\mu m$ spectral range is not affected by similar effects.
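As a concrete illustration of this criterion, the sketch below computes a 2 $\mu m$ band depth as $1 - R_{\rm band}/R_{\rm continuum}$, with a linear continuum interpolated between two shoulder windows. The wavelength windows and the synthetic spectrum are assumptions for illustration only, not the values used by the VIRTIS team.

```python
import numpy as np

def band_depth_2um(wavelength, reflectance,
                   band=(1.93, 2.07), shoulders=((1.75, 1.85), (2.20, 2.30))):
    """Band depth at 2 um: 1 - R_band / R_continuum, with a linear
    continuum interpolated between two shoulder windows.
    (Illustrative implementation; the windows are assumed, not from the paper.)"""
    wl = np.asarray(wavelength)
    r = np.asarray(reflectance)

    def mean_in(lo, hi):
        m = (wl >= lo) & (wl <= hi)
        return wl[m].mean(), r[m].mean()

    (w1, r1), (w2, r2) = mean_in(*shoulders[0]), mean_in(*shoulders[1])
    wb, rb = mean_in(*band)
    r_cont = r1 + (r2 - r1) * (wb - w1) / (w2 - w1)   # continuum at band centre
    return 1.0 - rb / r_cont

# Synthetic red-sloped spectrum with a Gaussian 2 um absorption
wl = np.linspace(1.0, 2.6, 200)
spec = (0.05 + 0.01 * wl) * (1.0 - 0.10 * np.exp(-((wl - 2.0) / 0.06) ** 2))
print(band_depth_2um(wl, spec) > 0.05)  # deeper than the 5% detection threshold
```

A spectrum without the absorption (or one shallower than the threshold) would return a band depth below 0.05 and be classified as ice-free under this criterion.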
Moreover, the 2 $\mu m$ band is well-defined for a wide range of grain sizes, making it a good spectral marker for identifying the presence of water ice. As a general rule, the detection of H$_2$O ice at a given location can be limited by unfavorable instrumental signal-to-noise conditions, the spatial resolution on the ground, and oblique illumination/viewing geometries. A summary of the observations showing a positive identification of the 2 $\mu m$ H$_2$O ice-band feature is given in Table 2. Infrared color images of the eight spots are shown in Fig. 4, together with the observed reflectance spectra and best spectral fits, as discussed later in Section 4. In the absence of water ice, VIRTIS spectra correspond to the Dark Terrain unit, as reported in Fig. 3, which shows a featureless red slope in the 1-2.6 $\mu m$ range and an intense organic material absorption band at 3.0 $\mu m$. \subsection{Temperature of the icy spots} For each spot, we identified the VIRTIS-M observations in which the spot area was imaged with the best illumination and viewing geometry, so as to maximize the S/N ratio. The VIRTIS-M dataset considered in this work is summarized in Table 3. The surface temperature is derived from VIRTIS-M infrared data by modeling the 4.5-5.1 $\mu m$ spectral radiance (at the pixel where the spectrum shows the deepest H$_2$O ice absorption) with a Bayesian approach (Tosi et al., 2014). On the surface of the nucleus, the temperature of each point is generally a function of the local thermophysical properties (albedo, composition, grain size, roughness, thermal conductivity, volatile sublimation) and of the instantaneous illumination conditions (solar incidence angle, or true local solar time). All measurements considered in this work were acquired by VIRTIS-M at local solar times between late morning and early afternoon, with pixel resolutions ranging between 5.0 and 23.5 m/pixel. In these conditions, H$_2$O ice-rich spots show temperatures ranging between about 158 and 218 K.
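The retrieval used in the paper is a full Bayesian fit of the 4.5-5.1 $\mu m$ radiance (Tosi et al., 2014); the sketch below only illustrates the underlying physics, inverting the Planck function at a single wavelength to obtain a brightness temperature. The constants are standard SI values, and the 5 $\mu m$ evaluation wavelength is chosen for illustration.

```python
import math

# Physical constants (SI)
H = 6.62607015e-34    # Planck constant [J s]
C = 2.99792458e8      # speed of light [m s^-1]
K_B = 1.380649e-23    # Boltzmann constant [J K^-1]

def planck(lam, T):
    """Black-body spectral radiance B(lambda, T) [W m^-2 sr^-1 m^-1]."""
    return (2.0 * H * C**2 / lam**5) / math.expm1(H * C / (lam * K_B * T))

def brightness_temperature(lam, radiance):
    """Invert the Planck function at a single wavelength."""
    return (H * C / (lam * K_B)) / math.log1p(2.0 * H * C**2 / (lam**5 * radiance))

# Round-trip check at 5.0 um, near the long-wavelength edge of VIRTIS-M,
# for the extreme temperatures quoted in the text (158 and 218 K)
lam = 5.0e-6
for T in (158.0, 218.0):
    print(T, brightness_temperature(lam, planck(lam, T)))
```

The `expm1`/`log1p` pair keeps the inversion numerically stable at the large exponents involved at these low temperatures.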
Since the comet's surface is not isothermal, because of local roughness, the temperature values retrieved by VIRTIS should be considered representative only of the warmest fraction of the pixel, corresponding to the more illuminated areas. Moreover, the instrumental noise-equivalent temperature is about 150 K, corresponding to the minimum temperature detectable by the instrument. The errors associated with the temperatures reported in Table 3 range between $\pm$30 K for the measurement at the minimum temperature, T=158 K, and $\pm$10 K for the one at the maximum, T=218 K. \begin{table*} \begin{center} \caption{Parameters retrieved by modeling for each bright area. The relative error on abundance and grain size is 40\%, as estimated by Raponi (2014). } \small{ \label{tabspots} \begin{tabular}{c c c c c c c c } \hline Spot & VIRTIS file & \multicolumn{2}{c}{Mixing modalities} & H$_2$O ice & H$_2$O ice grain & Additional slope & Goodness \\ & sample, line & \multicolumn{2}{c}{Alternative/Simultaneous} & abundance(\%) & size ($\mu$m) & (\%$\mu$m$^{-1}$) & $\chi^{2}$ \\ \hline 1 & I1$\_$00383518966 & areal & \multirow{2}{*}{A} & 1.1 & 350 & \multirow{2}{*}{5.2} & 4.19 \\ & s: 184, l: 24 & intimate & & 1.2 & 450 & & 4.16 \\ \hline 2 & I1$\_$00385906923 & areal & \multirow{2}{*}{A} & 0.1 & 40 & \multirow{2}{*}{no} & 1.18 \\ & s: 173, l: 42 & intimate & & 0.5 & 200 & & 1.04 \\ \hline 3 & I1$\_$00385885107 & areal & \multirow{2}{*}{A} & 1.3 & 750 & \multirow{2}{*}{-2.8 } & 2.78 \\ & s: 120, l: 35 & intimate & & 1.8 & 1300 & & 3.30 \\ \hline 4 & I1$\_$00373462192 & areal & \multirow{2}{*}{S} & 3.2 & 4500 & \multirow{2}{*}{no } & 0.45 \\ & s: 93, l: 35 & intimate & & 4.0 & 30 & & \\ \hline 5 & I1$\_$00377182711 & areal & \multirow{2}{*}{A} & 1.7 & 400 & \multirow{2}{*}{-4.0} & 1.47 \\ & s: 61, l: 69 & intimate & & 2.0 & 800 & & 2.15 \\ \hline 6 & I1$\_$00376302211 & areal & \multirow{2}{*}{S} & 1.9 & 6500 & \multirow{2}{*}{-1.8} & 0.28 \\ & s: 16, l: 38 & intimate & & 1.0 & 250 & & \\
\hline 7 & I1$\_$00369356914 & areal & \multirow{2}{*}{A} & 1.5 & 10 & \multirow{2}{*}{-4.0} & 0.90 \\ & s: 239, l: 66 & intimate & & 1.8 & 15 & & 0.86 \\ \hline 8 & I1$\_$00377184571 & areal & \multirow{2}{*}{A} & 0.3 & 900 & \multirow{2}{*}{no } & 0.56 \\ & s: 6, l: 7 & intimate & & 0.7 & 2200 & & 0.55 \\ \hline \hline \end{tabular} } \end{center} \label{tbl:virtis_parameters} \end{table*} \section{Spectral modeling} To derive the properties of the H$_2$O ice detected in the bright spots on the surface of the comet, a spectral analysis was performed using Hapke's radiative transfer model (Hapke, 2012), as described in Ciarniello et al. (2011). Unfortunately, we are still unable to infer the real composition of the organic-rich dark terrain present on the comet's surface. The broad absorption band centered at 3.2 $\mu$m, and the difficulty of its interpretation, have been discussed at length by Quirico et al. (2016) on the basis of the present knowledge of the composition of cometary grains and of all the components available in laboratory data. The mixture presented in this paper was consequently modeled by means of two spectral end members: crystalline water ice, simulated by using optical constants measured at T=160 K between 1 and 4 $\mu$m (Warren et al., 1984; Mastrapa et al., 2008; Mastrapa et al., 2009; Clark et al., 2012), and a Dark Terrain unit corresponding to the average spectrum of the comet's surface after the application of a photometric correction (Ciarniello et al., 2015), as shown in Fig. 3. The quantitative analysis is based on the spectral shape of the diagnostic absorption bands of H$_2$O ice. The absolute level of reflectance of the model is multiplied by a free parameter to fit the data, to account for uncertainties in the radiometric and photometric accuracy, as well as for errors in the local geometry information owing to unresolved shadows and roughness.
In some cases, the measured spectra present a fictitious slope where a high signal contrast occurs between adjacent pixels, such as in regions near shadows. This is due to the increasing FWHM of the point spread function toward longer wavelengths. To account for this effect, a slope is added to the model to fit the measured spectrum where required. Because of its instrumental origin, it should not be the subject of physical interpretation. Before fitting, the observed spectra are corrected for spikes and instrumental artifacts. Thermal emission is modeled and removed simultaneously with the spectral fit, as in Protopapa et al. (2014). The best-fitting result is obtained by applying the Levenberg-Marquardt method for nonlinear least-squares multiple regression. During the fitting procedure, the spectral bands are weighted by the Poissonian noise, as calculated in Raponi (2014). We have modeled areal and intimate mixing modalities: in the areal mixing, the surface is modeled as patches of pure H$_2$O ice and dark terrain; in the intimate mixing model, the particles of the two end-member materials are in contact with each other. The model's free parameters are the percentage and grain size of the water ice. Further details about the spectral modeling are given in Filacchione et al. (2016a) and in Raponi et al. (2016). Most of the icy regions can be described by both the areal and the intimate mixture, as alternative solutions. In the intimate mixing case, the model always requires a larger abundance and grain size than in the areal mixing case, to compensate for the lack of the multiple-scattering contribution, which is low because of the interaction of light with the dark terrain. The two alternative solutions indicated with A in Table 4 are the extremes of a possible set of solutions in which both kinds of mixture can be present simultaneously.
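The retrieval logic of the areal case can be sketched with synthetic end-members. The spectra below are illustrative stand-ins for the Hapke-modeled water-ice and Dark Terrain reflectances, not the paper's actual curves; and because the areal mixture is linear in its two amplitudes, ordinary least squares suffices for this sketch (the paper instead fits the full nonlinear Hapke model with the Levenberg-Marquardt method).

```python
import numpy as np

# Assumed synthetic end-member spectra (illustrative only)
wl = np.linspace(1.0, 2.6, 160)
r_dark = 0.04 + 0.012 * (wl - 1.0)                              # featureless red slope
r_ice = 0.5 * (1.0 - 0.6 * np.exp(-((wl - 2.0) / 0.08) ** 2))   # bright, 2 um band

# Synthetic "observation": areal mixture with 1.5% ice patches and a free
# scale factor of 0.95 (mimicking radiometric/geometric uncertainties)
f_true, scale_true = 0.015, 0.95
rng = np.random.default_rng(1)
obs = scale_true * (f_true * r_ice + (1.0 - f_true) * r_dark) \
      + rng.normal(0.0, 2e-4, wl.size)

# scale*(f*r_ice + (1-f)*r_dark) is linear in A_ice = scale*f and
# A_dark = scale*(1-f), so the amplitudes follow from least squares
A = np.column_stack([r_ice, r_dark])
(a_ice, a_dark), *_ = np.linalg.lstsq(A, obs, rcond=None)
scale_fit = a_ice + a_dark
f_fit = a_ice / scale_fit
print(f_fit, scale_fit)  # recovered ice fraction and scale factor
```

Even at the percent level, the ice fraction is well constrained here because the 2 $\mu$m band gives the ice end-member a shape the dark terrain cannot mimic.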
For the spectra indicated with S in Table 4 (spots 4 and 6), we have only one possible solution, in which the two kinds of mixture both contribute to the model and two grain-size populations are modeled at the same time, confirming the result of Filacchione et al. (2016a). Figure 4 shows spectra and images for each spot with a positive identification of ice in the VIRTIS-M data. The best-fitting model (in red) for each spot is shown for the areal mixing case, or for the areal-intimate case, according to the parameters indicated in Table 4. To highlight the effect of variations in the abundance and grain size of the water ice, in Fig. 5 we show, as an example, the best fit obtained for spot 3 in the areal mixture case, varying the H$_2$O ice abundance and the grain size. An additional slope has been set as a free parameter to obtain the best fit. The parameters used and the resulting goodness-of-fit of the models are shown in Table 5. \begin{figure} \includegraphics[width=9.0cm,angle=0]{Barucci_Fig5.png} \begin{center} \caption {Simulated reflectance spectra are shown to highlight the effect of variations in the abundance and grain size of the water ice. In both plots, the black curve is the spectrum of spot 3, and the red curve is the best fit for the areal mixture case already reported in Fig. 4. The green and blue curves are simulated by varying one of these parameters, as indicated: the top panel shows the effect of the variation in abundance of H$_2$O ice (0\%, 1.3\%, 2.6\%), fixing the grain size at 750 $\mu$m, and the bottom panel shows the variation in grain size (75 $\mu$m, 750 $\mu$m, 7500 $\mu$m), fixing the H$_2$O ice abundance at 1.3\%. The gaps in the spectral ranges are not taken into account, as in Fig. 3.
} \label{xx} \end{center} \end{figure} \section{Analysis of the exposed H$_2$O ice spots } We analyze the OSIRIS NAC images of the areas where the spots have been identified and, in addition, we investigate all available observations to evaluate the lifetime of the H$_2$O ice spots. All spots are located at equatorial or near-equatorial latitudes. {\bf Spot 1.} This bright feature is located in the southern hemisphere, close to the Khonsu-Imhotep boundary and the cometary equator. The bright spot appears as a freshly exposed cliff on a rough region of rocky appearance bounding an alcove. It is likely a remnant of a collapsed sector of a former pit. The first detection of this feature occurred on 5 June 2015, when it measured 57 m, whereas later on, on 27 June 2015, it measured about 36 m. {\bf Spot 2.} Located in the Anhur region in the southern hemisphere of the comet, this bright feature measures about 45 m. Owing to the oblique observing geometry and the many shadows cast on the neighboring terrain, it is difficult to give a clear depiction of the surroundings. Nevertheless, the feature seems to correspond to a flat terrace at the centre of a roundish area (possibly a collapsed pit) in a region of consolidated materials. It shows a very low albedo for a bright feature, although it still stands out against that of the typical nucleus. The OSIRIS NAC observations of this bright spot span from 4 June 2015 up to 11 July 2015. As for spot 1, this feature is also surrounded by many shadows caused by the rugged terrain of Anhur. {\bf Spot 3.} This spot appears as a bright boulder close to a pancake-like feature (apparently composed of three broad layers), which is morphologically unique, right in the center of the Khonsu region (El-Maarry et al. 2016). The bright spot has good temporal coverage in terms of OSIRIS NAC observations. It was observed on several occasions from the end of March to the beginning of May 2015.
The observations suggest a diminution of its size from about 18 m in late March to about 8 m in April, up to its complete disappearance in May. By applying an illumination correction using the Lommel-Seeliger disk law, the reflectance of the bright spot increased by a factor of 2 for the sequence of 25 March, unlike other cases of bright features, with the exception of spot 8. {\bf Spot 4.} The OSIRIS NAC observations of 22 November 2014 reveal a cluster of bright features located at the margin of the Atum region, close to its boundaries with the Khonsu and Anubis regions. The bright features seem to be freshly exposed brittle materials (El-Maarry et al. 2015) in a rocky-like area. The bright patches of varying individual sizes span an area of about 25 meters in diameter. The largest patch appears to be about 10 m in size in this observation. Further analysis, including earlier images, reveals that this feature was observed as early as 2 September 2014. The latest appearance of this feature in OSIRIS NAC observations was on 23 November 2014. {\bf Spot 5.} This bright spot faces the rounded feature with a diameter of 500 m in Imhotep, which is interpreted by Auger et al. (2015) as a fractured accumulation basin. Similar to the bright patches of spot 4, this patch also seems to be formed by a freshly exposed area of a highly fractured material. This bright spot was observed on 22 November 2014 and measures about 10 m. It has also been noted (Pommerol et al. 2015) that this bright feature was spotted as early as 5 September 2014 by OSIRIS NAC. It has also been included in the study of Oklay et al. (2016a). On further analysis of the images, it is possible to find the same feature in some images of 23 August 2014, but with a smaller size. {\bf Spot 6.} This is a cluster of small bright spots located in the Khepry region, at the base of a scarp bordering a roundish flat terrace covered by dust deposits at the Babi region margin.
Again, the material where the bright patches are located appears consolidated, brittle (El-Maarry et al. 2015), and dissected by pervasive fractures. The bright patches themselves seem to be either on freshly exposed outcropping material or on boulders. These spots were observed many times by OSIRIS NAC from late August 2014 through to the end of November 2014. The first observation, on 26 August 2014, indicates the presence of some 20 small bright spots with sizes ranging from 1.5 m to 3 m, along with a few spots of about 6 m in size. The following observations, on 5 September, reveal no significant change in the bright patches in terms of their sizes and population. There were further observations on 16 September and three days later (Pommerol et al. 2015), suggesting the stability of the bright patches over time. Later on, an observation sequence on 29 October 2014 suggests that the cluster had reduced to only four bright spots, each measuring about 1 m, but it is not possible to rule out that some small bright spots may have been in shadow and hence not observable. Another sequence, on 22 November, reveals the cluster of bright spots present in the observations of early September, leading to the conclusion that the observation of 29 October was indeed affected by shadows. This cluster of bright spots therefore seems to have been stable for at least three months. Apart from this cluster of bright patches in late 2014, a few observations dated 25 March 2015 indicate the re-emergence of small bright patches at the same location. Despite the shadowy terrain, it is possible to discern up to three small bright patches, each about 5 m in diameter. It could be inferred that this locality near the cliff at the Khepry/Babi border is recurrently active in terms of bright patches.
Some mechanism may have exposed icy material from beneath the surface of this region, allowing the cluster of bright patches to become apparent; a possible cause is gravitational falls of boulders (Pajola et al. 2015), with the consequent exposure of patches at the scarp foot. {\bf Spot 7.} This is a cluster of small bright patches that might correspond to a series of boulders at the base of a small terrace bounded by scarps. This feature appears in the OSIRIS NAC observations of 30 September 2014. It is located on a slope adjacent to the smooth plain in Imhotep, pointing toward the neighboring Apis region. The location itself is somewhat shadowy and is camouflaged by the surrounding terrain, making it challenging to observe the full extent of this feature. VIRTIS-M data support the presence of H$_2$O ice for this bright feature, located in the Imhotep region with a size of about 5 m. The corresponding positive VIRTIS-M observations date back to early September 2014, suggesting that this bright feature was present on the cometary surface as early as the beginning of September. Therefore, this feature could have a lifespan of at least one month. {\bf Spot 8.} This Imhotep-based bright feature is recorded at several epochs. The bright spot is close to the isolated accumulation basin and to the small roundish features interpreted by Auger et al. (2015) as ancient degassing conduits. It is located at the base of what appears to be an open trench surrounded by steep small scarps. The full view of this feature is therefore somewhat hampered by the constant casting of shadows and by the viewing geometry. Nevertheless, the multiple observations offer partial views of the feature, which seems to lie on a consolidated flat area with fractures and tiny staircase borders. For example, in the image recorded on 30 September 2014, it appears to be composed of two segments, one measuring approximately 3 m and the other 6 m.
We note that the absolute reflectance of this bright spot increased by a factor of 3 upon applying the illumination correction using the Lommel-Seeliger disk law for the observation sequence of 5 September 2014. A similar effect has only been noticed in the case of spot 3, where the factor was 2. The earliest detection of this feature dates to 25 August 2014 and the corresponding positive VIRTIS-M recordings date to mid-December 2014, suggesting a stable existence of almost four months on the surface for this bright feature. %
\begin{table} \begin{center} \caption{Parameters used to perform the models shown in Fig. 5. The first line represents the best fit (reported in Table 4), shown in red in both panels of Fig. 5. In the other models, the water-ice abundance in the areal mixture is fixed at 0\% or at twice the best-fit value, and the grain size at one tenth or at ten times the best-fit value. The additional slope (a free parameter of the model) and the resulting goodness of fit are indicated in the table.} \small{ \label{tabspots} \begin{tabular}{c c c c } \hline H$_2$O ice & H$_2$O ice grain & Additional slope & Goodness \\ abundance(\%) & size ($\mu$m) & (\%$\mu$m$^{-1}$) & $\chi^{2}$ \\ \hline \hline 1.3 & 750 & -2.8 & 2.78 \\ \hline 0 & 750 & -4 & 31.2 \\ \hline 2.6 & 750 & 0 & 15.0 \\ \hline 1.3 & 75 & -2.5 & 9.4 \\ \hline 1.3 & 7500 & -3.9 & 6.8 \\ \hline \hline \end{tabular} } \end{center} \label{tbl:virtis_parameters} \end{table} \section{Temporal evolution of the bright spots} Although only a subset of the bright spots detected by OSIRIS can be analyzed with VIRTIS, the unambiguous detection of the spectral signatures of H$_2$O ice in eight of these bright spots is a clear confirmation of their icy nature. In Section 5, we detail evidence of spots with long lifetimes. Here (Fig.
6) we add a comparison for the cluster of bright spots of spot 6 in Khepry, with observations separated by more than two months, which confirms the stability of this cluster at that time of the mission. From the VIRTIS and OSIRIS data, we can estimate several parameters: the amount of water ice in each spot, its local temperature, and the timescale and extent of erosion. These were measured at different times and should consequently be considered only as first-order indications. They are theoretically linked together, so that we can use them to estimate whether the ice behaves as expected at each spot. In Table 6 we report the mass release rate of H$_2$O from the surface of 67P/C-G (Prialnik et al. 2004), estimated at the location of each spot from its temperature as follows: \begin{equation} \label{mass_release} \mathcal{Q}_{H_2O} = \mathcal{P}_{H_2O} (T) \sqrt{\frac{m_{H_2O}}{2 \pi k_B T}} ,\end{equation} with $m_{H_2O}$ the mass of one molecule of H$_2$O [kg], $k_B$ the Boltzmann constant, $T$ the temperature of each spot [K], and $\mathcal{P}_{H_2O}$ the saturation vapor pressure, which can be written as \begin{equation} \label{P} \mathcal{P}_{H_2O} = A e^{-B/T} ,\end{equation} with $A = 356 \times 10^{10}$~N~m$^{-2}$ and $B = 6141.667$~K for water (Fanale and Salvail, 1984). \begin{table} \begin{tiny} \caption{\label{QH2O}Mass release rate of H$_2$O from the surface of 67P/C-G at the locations of the H$_2$O ice-rich spots.
The last column summarises the observed lifetime of each spot.} \begin{tabular}{cccl} \hline\hline Spot & T [K] & $\mathcal{Q}_{H_2O}$ [kg m$^{-2}$s$^{-1}$] & OSIRIS observations \\ \hline 1 & 203 & 3.357 $\times$ 10$^{-4}$ & size receded from 57m \\ & & & to 36m in 3 weeks\\ 2 & 197 & 1.336 $\times$ 10$^{-4}$ & observed for 5 weeks\\ 3 & 218 & 2.485 $\times$ 10$^{-3}$ & size receded from 18m \\ & & & to 8m in 3 weeks, disappeared \\ & & & in the following 3 weeks \\ 4 & 168 & 0.662 $\times$ 10$^{-6}$ & cluster stable for 11 weeks\\ 5 & 188 & 3.003 $\times$ 10$^{-5}$ & stable for 13 weeks\\ 6 & 179 & 0.625 $\times$ 10$^{-5}$ & cluster stable for 13 weeks\\ 7 & 163 & 2.156 $\times$ 10$^{-7}$ & cluster stable for 3 weeks\\ 8 & 158 & 0.701 $\times$ 10$^{-7}$ & stable for 15 weeks\\ \hline \end{tabular}\\ \end{tiny} \end{table} ~ For each spot, the temperature changes because of local diurnal variations of the solar input, seasonal effects, shadowing, or self-heating. These effects are difficult to estimate, since they depend on the local illumination geometry and thermo-physical properties. A low thermal inertia, as derived by VIRTIS (Capaccioni et al. 2015) and MIRO (Schloerb et al. 2015), results in quite large day-night and seasonal variations of the temperature. These may influence the survival of ice-rich spots in ways that are difficult to predict. However, we find that the behavior of the eight spots found by the OSIRIS-VIRTIS study is in good agreement with the expected thermal behavior of H$_2$O ice. \par Although the sizes and timescales measured for each spot should be considered with caution, owing to errors such as those induced by shadows, they allow for another estimate of the mass release rate (Prialnik et al. 2004) through $\mathcal{Q}_{H_2O} = \varrho ~ \Delta l / \Delta t$, with $\varrho$ the local density of water ice, $\Delta l$ the typical extent of erosion of each spot, and $\Delta t$ its erosion timescale.
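The temperature-based rates of Table 6 follow directly from the two expressions above. The sketch below evaluates them numerically (the physical constants are standard SI values; $A$ and $B$ are the Fanale and Salvail (1984) coefficients quoted above) and reproduces the tabulated rates to within a few percent:

```python
import math

K_B = 1.380649e-23            # Boltzmann constant [J K^-1]
M_H2O = 18.015 * 1.66054e-27  # mass of one H2O molecule [kg]
A = 356e10                    # Fanale & Salvail (1984) coefficient [N m^-2]
B = 6141.667                  # Fanale & Salvail (1984) coefficient [K]

def saturation_pressure(T):
    """P_H2O = A * exp(-B / T)  [N m^-2]."""
    return A * math.exp(-B / T)

def mass_release_rate(T):
    """Q_H2O = P_H2O(T) * sqrt(m_H2O / (2 pi k_B T))  [kg m^-2 s^-1]."""
    return saturation_pressure(T) * math.sqrt(M_H2O / (2.0 * math.pi * K_B * T))

# Spot 1 (203 K) and spot 8 (158 K); Table 6 lists 3.357e-4 and 0.701e-7
for T in (203.0, 158.0):
    print(T, mass_release_rate(T))
```

The steep exponential in the vapor pressure is why a 45 K spread in spot temperature translates into almost four orders of magnitude in sublimation rate.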
If we take the example of spot 1, the expected timescale to erode this feature over the observed extent, for a temperature of 203~K, would be 50-100~hr. We emphasize that, given the numerous uncertainties, this should be considered as a first-order approximation. However, this timescale is such that the feature should be stable against day-night variations of the temperature, since it would take more than one cycle to erode it. In addition, the lower temperatures of the night would prevent such rapid erosion. The total mass of water released by the erosion of spot 1 is $M=\mathcal{Q}_{H_2O} \Delta t f \Delta S$, with $\Delta S$ the eroded surface and $f$ the water ice fraction inferred from the VIRTIS data modeling. The feature, containing 1\% of water ice, seems to have decreased from 57~m to 36~m in about 22~days, i.e. $\sim$1500~m$^2$ were eroded in $\sim$2$\times$10$^6$~s. At 203~K, this translates into $\sim$10$^4$~kg of water ice being sublimated, which cannot be reached with surface ice alone. We thus have to assume that 1) icy grains may be scattered onto the nucleus surface and recondense, creating a cycle that maintains water ice at the surface (Crifo, 1987; Davidsson and Skorov, 2004), 2) the average temperature is much lower than the temperature measured by VIRTIS, or 3) ice-rich subsurface layers contribute to maintaining the surface ice. Case 1 is beyond the scope of the simple calculations we perform here. For case 2, we estimate that surface ice (contained in a 1~mm layer) would be able to reproduce the observed behavior if the average temperature is $\sim$175~K: large day-night variations of the temperature would thus be necessary. For case 3, if we assume that the subsurface layers have the same properties as the bulk of the comet, a layer of the order of $\sim$10~cm would be required to explain the observed mass release.
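The $\sim$10$^4$~kg figure for spot 1 can be reproduced as a first-order sketch from the numbers quoted above (the tabulated rate at 203 K, a 1\% ice fraction, $\sim$1500 m$^2$ eroded in $\sim$2$\times$10$^6$ s):

```python
# First-order mass budget for spot 1: M = Q_H2O * dt * f * dS
q_203 = 3.357e-4   # sublimation rate at 203 K [kg m^-2 s^-1] (Table 6)
f_ice = 0.01       # water-ice fraction from the VIRTIS spectral modeling (~1%)
dS = 1500.0        # eroded surface [m^2]
dt = 2.0e6         # erosion timescale [s] (~22 days)

M = q_203 * dt * f_ice * dS
print(M)  # ~1e4 kg, as quoted in the text
```

With a 1 mm surface layer over 1500 m$^2$ holding only of order a kilogram of ice, this budget makes explicit why surface ice alone cannot supply the released mass.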
We note that in 67P/C-G, the diurnal and seasonal skin depths ($ \left( \frac{P_{spin} \kappa}{\pi \rho c} \right)^{1/2} $ and $ \left( \frac{2 \kappa a^{3/2}}{\sqrt{GM_{\odot}} \rho c} \right)^{1/2} $, with $a$ the semi-major axis, $c$ the specific heat, $G$ the gravitational constant, $\kappa$ the thermal conductivity, $M_{\odot}$ the solar mass, $P_{spin}$ the spin period, and $\rho$ the bulk density; Prialnik et al. 2004) vary from 1~mm to 9~cm and from 80~cm to 7~m, respectively, depending on the local thermo-physical properties, as computed by Leyrat et al. (2015). Spot 2 is larger than spot 1, but contains an order of magnitude less H$_2$O ice, as inferred from the spectral modeling. This feature's evolution is found to be in good agreement with the expected thermal behavior of water ice, as well as with the inference from its low albedo that this spot is close to the end of its sublimation phase. Spot 3 is the hottest feature measured in this study. If we assume that the ice-rich boulder has the same properties as the bulk of 67P/C-G, the 18~m feature being eroded in $\sim$3$\times$10$^6$~s results in a mass release rate of 2.82$\times$10$^{-5}$~kg~m$^{-2}$~s$^{-1}$, i.e. two orders of magnitude less than the rate computed from the measured temperature. We should thus assume either that the temperature is much lower most of the time at this spot, or that the boulder properties differ from those of the bulk of the comet (a local density of 800~kg~m$^{-3}$, for example, would be required). Alternatively, the very high temperature observed for spot 3 is likely the result of an areal mixture of icy and ice-free terrain within the VIRTIS pixel. Indeed, for a given observed albedo and temperature, the mixing of ice and dust at the grain level plays an important role: ice intimately mixed with dust will be hotter and shorter-lived than a patch of pure ice surrounded by dust.
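The two skin-depth expressions quoted above can be evaluated numerically. The thermo-physical values below ($\kappa$, $\rho$, $c$) are illustrative assumptions consistent with a low thermal inertia, not the actual values of Leyrat et al. (2015); the spin period and semi-major axis are the well-known ones for 67P/C-G.

```python
import math

# Illustrative thermo-physical values (assumed;
# thermal inertia sqrt(kappa*rho*c) ~ 50 J m^-2 K^-1 s^-1/2)
kappa = 0.005     # thermal conductivity [W m^-1 K^-1]
rho = 530.0       # bulk density [kg m^-3] (~67P bulk value)
c = 1000.0        # specific heat [J kg^-1 K^-1]

P_spin = 12.4 * 3600.0          # 67P spin period [s]
a = 3.46 * 1.496e11             # semi-major axis [m]
GM_sun = 6.674e-11 * 1.989e30   # G * M_sun [m^3 s^-2]

# Diurnal skin depth: (P_spin * kappa / (pi * rho * c))^(1/2)
d_diurnal = math.sqrt(P_spin * kappa / (math.pi * rho * c))

# Seasonal skin depth: (2 * kappa * a^(3/2) / (sqrt(G M_sun) * rho * c))^(1/2)
d_seasonal = math.sqrt(2.0 * kappa * a**1.5 / (math.sqrt(GM_sun) * rho * c))

print(d_diurnal, d_seasonal)  # of order a centimetre and a metre, respectively
```

For these assumed values the depths fall within the ranges quoted above; both scale as $\sqrt{\kappa}$, so the spread in local conductivity maps directly onto the quoted ranges.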
Given the low temperatures encountered for spots 4 to 8, it is expected that these features should be long-lived, as observed by OSIRIS and VIRTIS. At the lower temperature of 179~K (spot 6), the sublimation rate is only 0.625$\times$10$^{-5}$~kg~m$^{-2}$~s$^{-1}$, and it decreases exponentially at lower temperatures (spots 4, 7, and 8). The appearance of the meter-sized spots, which are mostly illuminated for only a short fraction of the day, remains constant over time. These features would be more affected by seasonal than by diurnal variations of the temperature, since water ice is mostly stable at the measured temperatures. Cometary activity, triggered below the surface by other volatile species, may locally influence the surface properties, such as the distribution of dust, and expose fresh H$_2$O ice at the surface. All the bright spots are on consolidated, dust-free materials, either on boulders or on freshly exposed outcropping regions that often display penetrative fractures. This suggests that H$_2$O ice can mainly be found on the consolidated substratum exposed along scarps or detached in the form of boulders. Some of the bright spots have been in place for weeks to months, while others seem related to diurnal variations. \begin{figure} \centering \includegraphics[width=9.0cm,angle=0]{Barucci_Fig6.pdf} \caption{Comparison of OSIRIS NAC images of the cluster of bright spots of spot 6, observed two months apart, showing the stability of the bright features with time. } \label{xx} \end{figure} \section{Conclusions} Comet 67P/Churyumov-Gerasimenko shows a surface rich in heterogeneous geological structures and morphological variations, with color and albedo variations across the surface. The high-resolution images obtained by OSIRIS enable us to identify a large number of bright spots of different sizes, located in areas with different properties and high albedo.
In this paper, we present for the first time a complementary study of data acquired by the OSIRIS and VIRTIS instruments. A major objective of this paper is to firmly establish the presence of H$_2$O ice on the comet's surface. We confirm the presence of H$_2$O ice in eight new spots, and we model their spectra with H$_2$O ice and dark material. Comparing the coordinates of the eight detected H$_2$O ice spots with those of the 67P/C-G dust jets, five spots (4, 5, 6, 7, and 8) are found to lie in approximately the same positions as the jets identified by Vincent et al. (2016a), and one (spot 3) lies among the outbursts observed in the cometary summer (Vincent et al., 2016b). Observational evidence showed that the majority of dust jets also arose from rough terrains and fractured walls rather than from smooth areas (Vincent et al., 2016a). Some of these detected H$_2$O ice spots have also been compared by Oklay et al. (2016b) to those of comets 9P and 103P. The detection of H$_2$O ice signatures by VIRTIS at eight of the 13 locations given by the OSIRIS data does not mean that the other spots do not contain ice on their surface: the non-detections can be explained by non-simultaneous observations, unfavorable instrumental signal-to-noise conditions, the spatial resolution on the surface, different illumination/viewing geometries, and by the fact that the VIRTIS-M channel was unavailable after 4 May 2015, owing to the failure of its active cooler. ~ The main results of this work can be summarized as follows: \begin{itemize} \item We presented for the first time a complementary analysis of H$_2$O ice-rich areas using data acquired by the OSIRIS and VIRTIS instruments. Comparing high spatial resolution VIS images with extended IR range spectra enables us to study the morphological, thermal, and compositional properties of these areas at the same time.
\item The analysis of the spectral properties observed by VIRTIS-M indicates that, in these areas, the H$_2$O ice abundance is between 0.1 and 7.2$\%$, mixed areally and/or intimately with the dark terrain. \item The ice is distributed on the two lobes of 67P/C-G in locations that remain in shadow for longer. \item The detected bright spots lie mostly on consolidated, dust-free material surfaces, mostly concentrated at equatorial latitudes. \item {The mass release of H$_2$O at the location of the eight ice-rich spots has been estimated.} \item Some spots are stable for several months, while others show temporal changes connected to diurnal and seasonal variations. The stability of the spots is corroborated by the temperature retrieved at the surface. The behavior of ice at these locations is in very good agreement with theoretical expectations. \item Six of the detected H$_2$O ice spots are located at approximately the same positions as previously detected cometary jets. \end{itemize} H$_2$O ice is present on the surface substratum, where solar illumination plays an important role through seasonal and diurnal variations. During the perihelion passage of the comet, the Rosetta spacecraft was at a greater distance, and the available OSIRIS surface images were at lower resolution. Starting in March 2016, the comet has again been observed from close distances.
With the analysis of other available data (in particular from OSIRIS), we will study the surface changes after the perihelion passage to better understand the surface evolution of the comet.\\ \begin{acknowledgements} OSIRIS was built by a consortium of the Max-Planck-Institut f\"ur Sonnensystemforschung, G\"ottingen, Germany, CISAS--University of Padova, Italy, the Laboratoire d'Astrophysique de Marseille, France, the Instituto de Astrof\'isica de Andalucia, CSIC, Granada, Spain, the Research and Scientific Support Department of the European Space Agency, Noordwijk, The Netherlands, the Instituto Nacional de T\'ecnica Aeroespacial, Madrid, Spain, the Universidad Polit\'echnica de Madrid, Spain, the Department of Physics and Astronomy of Uppsala University, Sweden, and the Institut f\"ur Datentechnik und Kommunikationsnetze der Technischen Universit\"at Braunschweig, Germany. VIRTIS was built by a consortium from Italy, France, and Germany, under the scientific responsibility of IAPS, Istituto di Astrofisica e Planetologia Spaziali of INAF, Rome, which also led the scientific operations. The VIRTIS instrument development for ESA has been funded and managed by ASI (Italy), with contributions from Observatoire de Meudon (France) financed by CNES and from DLR (Germany). The VIRTIS instrument industrial prime contractor was the former Officine Galileo, now Finmeccanica in Campi Bisenzio, Florence, Italy. The support of the national funding agencies of Germany (DLR), France (CNES), Italy (ASI), Spain (MEC), Sweden (SNSB), and the ESA Technical Directorate is gratefully acknowledged. \end{acknowledgements}
\section{Introduction} \label{sec:introduction} Let $S^{n-1} := \{(x_{1},\dots, x_{n}) \in \mathbb R^{n} : x^{2}_{1} + \cdots + x^{2}_{n} = 1\}$ denote the unit sphere in $\mathbb R^{n}$, $n\geq 2$. This submanifold inherits a surface measure, denoted by $\sigma_{n-1}$. We define the \emph{Fourier transform} of a measure $\mu$ on $\mathbb R^{n}$ by the formula \begin{align} \label{eq:fourier-transform} \widehat\mu(\xi) & := \int e^{-2\pi i x\cdot\xi}d\mu(x), & \xi\in\mathbb R^{n}. \end{align} A simple computation gives \begin{align} \label{eq:sphere-bessel} \widehat{\sigma}_{n-1}(\xi) = 2\pi|\xi|^{-\frac{n-2}2}J_{\frac{n-2}2}(2\pi|\xi|), \end{align} where $J_{\nu}(t)$ is the \emph{Bessel function of the first kind} and $\nu$-th order, see~\cite[Appendix B]{cit:grafakos-cfa}. From~\eqref{eq:sphere-bessel} one obtains the asymptotic expansion \begin{align} \label{eq:asymptotic-expansion} \widehat{\sigma}_{n-1}(\xi) & = 2\cos(2\pi(|\xi| - \frac{n-1}8))|\xi|^{-\frac{n-1}2} + O(|\xi|^{-\frac{n+1}2}), & \xi\to\infty. \end{align} Note that~\eqref{eq:asymptotic-expansion} does not make any claims about values at specific points. A full asymptotic formula is available for $J_{\nu}(t)$ (see~\cite[Chapter VIII]{cit:stein}) and consequently for~\eqref{eq:sphere-bessel} as well, and the derivatives may similarly be estimated using the identity $\frac d{dt} J_{\nu}(t) = \frac{\nu}t J_{\nu}(t) - J_{\nu+1}(t)$. The above formulas may be useful in any operator or integral that involves the unit sphere. For some applications, see~\cite{cit:mattila}. It is of interest to replace the unit sphere with other submanifolds satisfying a certain nondegeneracy property, and the \emph{method of stationary phase} allows us to again claim full asymptotic expansion formulas, albeit with terms that may be difficult to compute explicitly past the first one.
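A quick numerical sanity check of~\eqref{eq:sphere-bessel} is possible for $n=3$, where $J_{1/2}(t)=\sqrt{2/(\pi t)}\,\sin t$ is elementary. The sketch below (illustrative Python/NumPy, using the convention~\eqref{eq:fourier-transform}) reduces the surface integral over $S^{2}$ to a one-dimensional integral in $t = x\cdot\xi/|\xi|$ and compares it with the Bessel form:

```python
import numpy as np

def j_half(x):
    # Bessel function of the first kind of order 1/2, in closed form
    return np.sqrt(2.0 / (np.pi * x)) * np.sin(x)

def sigma_hat_direct(r, m=200000):
    # widehat{sigma}_2 at |xi| = r: by rotation invariance the surface
    # integral over S^2 reduces to 2*pi * int_{-1}^{1} cos(2*pi*r*t) dt,
    # evaluated here by the midpoint rule
    t = -1.0 + (np.arange(m) + 0.5) * (2.0 / m)
    return 2.0 * np.pi * 2.0 * np.mean(np.cos(2.0 * np.pi * r * t))

def sigma_hat_bessel(r):
    # eq. (sphere-bessel) with n = 3: 2*pi*|xi|^{-1/2} J_{1/2}(2*pi*|xi|)
    return 2.0 * np.pi * r ** -0.5 * j_half(2.0 * np.pi * r)

for r in (0.5, 1.7, 9.3):
    assert abs(sigma_hat_direct(r) - sigma_hat_bessel(r)) < 1e-6
```

Both expressions also agree with the elementary closed form $2\sin(2\pi|\xi|)/|\xi|$ of the sphere transform in $\mathbb R^{3}$.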
Inspired by problems and results involving configurations (see for instance~\cite{cit:iosevich-liu},~\cite{cit:palsson-sovine}), we seek to estimate, for $n\geq 3$ and $2\leq k \leq n$, the following integral, \begin{align} \label{eq:stiefel-fourier} \widehat{\mu}_{n,k}(\Xi) & = \int_{\St^{n}_{k}} e^{-2\pi i\Tr (X^{t}\Xi)} d\mu_{n,k}, & \Xi \in \mathbb{R}^{n\times k}, \end{align} where $\Tr$ denotes trace, $\St^{n}_{k} := \{X\in\mathbb{R}^{n\times k} : X^{t}X = I_{k}\}$ and $I_{k}$ is the $k\times k$ identity matrix. This set is a generalization of the unit sphere above, as $\St^{n}_{1} = S^{n-1}$. The set $\St^{n}_{k}$ is called the \emph{Stiefel manifold} and is the moduli space of unit $k$-frames in $\mathbb{R}^{n}$. By applying the method of stationary phase, we obtain as $\|\Xi\|\to\infty$ the formula, \begin{align} \label{eq:stiefel-asymptotics} \widehat{\mu}_{n,k}(\Xi) =&~2^{\frac{k(k-1)}4}\sum_{s\in\{-1, 1\}^{k}}\cos(2\pi\sum_{j=1}^{k}s_{j}(\lambda_{j} - \frac{n-j}8))~\times \nonumber \\ & |\lambda_{1}\cdots\lambda_{k}|^{-\frac{n-k}2}\prod_{1\leq i < j\leq k}|s_{i}\lambda_{i} - s_{j}\lambda_{j}|^{-1/2} + O(\|\Xi\|^{-\frac{n-k+2}2}), \end{align} where $\lambda_{1},\dots,\lambda_{k}$ are the singular values of $\Xi$. The decay given by~\eqref{eq:stiefel-asymptotics} has the defect that it is not useful in two cases: when $\lambda_{i} \approx \lambda_{j}$ for some $i,j$, or when $\lambda_{i} \approx 0$ for some $i$. In those cases we are not able to obtain any results, although we expect that the result will be similar to~\eqref{eq:stiefel-asymptotics} with the division-by-zero terms deleted from the formula.
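For reference, the leading term of~\eqref{eq:stiefel-asymptotics} is straightforward to transcribe into code. The Python sketch below is a direct transcription (assuming pairwise distinct positive singular values, i.e.\ exactly avoiding the problematic cases above); for $k=1$ it collapses, as it should, to the leading term of~\eqref{eq:asymptotic-expansion}:

```python
import numpy as np
from itertools import product

def leading_term(n, lams):
    # leading term of eq. (stiefel-asymptotics); lams are the singular
    # values, assumed positive and pairwise distinct (the degenerate
    # cases lambda_i ~ lambda_j or lambda_i ~ 0 are exactly the ones
    # the formula cannot handle)
    k = len(lams)
    total = 0.0
    for s in product((-1.0, 1.0), repeat=k):
        amp = 1.0
        for i in range(k):
            for j in range(i + 1, k):
                amp *= abs(s[i] * lams[i] - s[j] * lams[j]) ** -0.5
        phase = sum(s[j] * (lams[j] - (n - (j + 1)) / 8.0) for j in range(k))
        total += np.cos(2.0 * np.pi * phase) * amp
    scale = 2.0 ** (k * (k - 1) / 4.0) * np.prod(np.abs(lams)) ** (-(n - k) / 2.0)
    return scale * total

# consistency check against the sphere (k = 1): the sum over s = +/-1
# collapses to 2 cos(2 pi (lambda - (n-1)/8)) lambda^{-(n-1)/2}
n, lam = 5, 3.3
expected = 2.0 * np.cos(2.0 * np.pi * (lam - (n - 1) / 8.0)) * lam ** (-(n - 1) / 2.0)
assert abs(leading_term(n, [lam]) - expected) < 1e-12
```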
Our expectation is based on two facts: a precise computation for $n=4,k=2$ (see~\eqref{eq:explicit-four}) and that when $\lambda_{k}$ is exactly equal to zero, we have a reduction \begin{align} \label{eq:reduction} \widehat{\mu}_{n,k}(\Xi) = \Vol(S^{n-k})~\widehat{\mu}_{n,k-1}( \begin{pmatrix} \lambda_{1}&&\\&\ddots&\\&&\lambda_{k-1}\\&\vdots& \end{pmatrix}), \end{align} where the other entries of the $n\times(k-1)$ matrix are zeros. This reduction may be iterated, and so it is reasonable to assume $k$ positive singular values. Also of interest is the ``most degenerate'' case $\lambda_{1} = \cdots = \lambda_{k}$, whose asymptotic expansion will say something about the growth of the moments of the trace function in $\mathbb{O}(k)$ (see for instance \cite{cit:pastur-vasilchuk}), since \begin{align} \label{eq:moments} \frac 1{\Vol{\mathbb{O}(k)}}\int_{\mathbb{O}(k)} e^{i\lambda\Tr X}dX & = \sum_{m=0}^{\infty} \frac{(i\lambda)^{m}}{m!}\mathbb E[(\Tr X)^{m}]. \end{align} As a step toward attacking the above, we present in Section~\ref{sec:closer-look-at} some details on making explicit calculations, as the authors of~\cite{cit:palsson-sovine} did for $k=2$. \section{Proof of Main Result} \label{sec:proof} \subsection{Overview} \label{sec:overview} We wish to prove~\eqref{eq:stiefel-asymptotics}. Symmetry reduces the form of $\Xi$ to a rectangular-diagonal form with nonincreasing entries $\lambda_{1}\geq \cdots \geq \lambda_{k}\geq 0$. Fubini's theorem allows us to integrate out the columns of $X$ for which $\lambda_{j} = 0$, and so we may assume $\lambda_{k} > 0$. We calculate the tangent and normal planes of $\St^{n}_{k}$ and obtain an expression in matrix calculus for their projectors and for the second fundamental form. We calculate the points on $\St^{n}_{k}$ for which $\Xi$ belongs to the normal plane, and there we calculate the eigenvalues of the second fundamental form. The calculation is simple because it turns out to be diagonal.
Finally, using these eigenvalues, we apply Theorem~\ref{thm:stationary-phase} to obtain~\eqref{eq:stiefel-asymptotics}. A reference for the geometric concepts of this section is~\cite{cit:docarmo}. \subsection{Notation} \label{sec:notation} For a matrix $X\in\mathbb{R}^{n\times k}$, denote by $X_{1}\in\mathbb{R}^{k\times k}$ and $X_{2}\in\mathbb{R}^{(n-k)\times k}$ the matrices such that $X = \begin{pmatrix}X_{1}\\X_{2}\end{pmatrix}$. Let $Y\in\St^{n}_{k}$ denote the rectangular-diagonal $n\times k$ matrix with diagonal entries equal to $1$. Let $\mathfrak{so}(k)$ denote the skew-symmetric $k\times k$ matrices with real entries, and $\Sym^{2}(k)$ denote the symmetric $k\times k$ matrices. Let $\nabla$ be the standard connection of Euclidean space. Somewhat redundantly, the inner product is available to us both in the matrix calculus and by the $\langle\cdot,\cdot\rangle$ symbol. The Kronecker delta is denoted by $\delta_{ij}$. \subsection{Reduction to Singular Values} \label{sec:reduct-diag-matr} The map $X\mapsto OXP$ for any $O\in\mathbb{O}(n)$, $P\in\mathbb{O}(k)$ preserves the measure $\mu_{n,k}$ and so by a change of variables we obtain \begin{align} \label{eq:symmetries} \widehat{\mu}_{n,k}(\Xi) & = \widehat{\mu}_{n,k}(O\Xi P). \end{align} Using the singular value decomposition of $\Xi$, we may choose $O,P$ so that $O\Xi P$ is rectangular-diagonal with nonincreasing entries $\lambda_{1}\geq\cdots\geq\lambda_{k}\geq 0$. Let $\Pi(j)$, $1 \leq j \leq k$ be the plane that orthogonally complements the plane spanned by the first $j$ columns of $X\in\St^{n}_{k}$, and $\sigma_{\Pi(j)}$ be the measure of the unit sphere of $\Pi(j)$.
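The invariance~\eqref{eq:symmetries} can also be observed numerically. In the sketch below (illustrative Python; Haar-uniform points on $\St^{n}_{k}$ are drawn by QR-orthonormalizing a Gaussian matrix and fixing the column signs, a standard construction not spelled out in the text), rotating $\Xi$ and the sample together leaves every phase $\Tr(X^{t}\Xi)$, and hence a Monte Carlo estimate of the normalized transform, unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

def stiefel_sample(n, k):
    # Haar-uniform point on St^n_k: QR of an n x k Gaussian matrix,
    # with column signs fixed so the distribution is exactly uniform
    G = rng.standard_normal((n, k))
    Q, R = np.linalg.qr(G)
    return Q * np.sign(np.diag(R))

def mu_hat_mc(Xs, Xi):
    # Monte Carlo average of e^{-2 pi i Tr(X^t Xi)} over the samples Xs
    phases = np.array([np.trace(X.T @ Xi) for X in Xs])
    return np.mean(np.exp(-2j * np.pi * phases))

n, k = 5, 2
Xs = [stiefel_sample(n, k) for _ in range(500)]
Xi = np.zeros((n, k))
Xi[0, 0], Xi[1, 1] = 2.0, 1.0

O = stiefel_sample(n, n)  # a Haar-random element of O(n)
P = stiefel_sample(k, k)  # a Haar-random element of O(k)

# rotating Xi to O Xi P while mapping each sample X to O X P leaves
# Tr(X^t Xi) unchanged, realizing the change of variables behind
# eq. (symmetries)
a = mu_hat_mc(Xs, Xi)
b = mu_hat_mc([O @ X @ P for X in Xs], O @ Xi @ P)
assert abs(a - b) < 1e-10
```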
We have the equality \begin{align} \label{eq:measure-decomposition} \mu_{n,k} = d\sigma_{\Pi(k-1)}(x_{k})\cdots d\sigma_{\Pi(1)}(x_{2})d\sigma_{\mathbb{R}^{n}}(x_{1}) \end{align} where $x_{1},\dots,x_{k}$ are the columns of $X$, and so if we have $\lambda_{k} = \lambda_{k-1} = \cdots = \lambda_{k_{0}+1} = 0$ for some $k_{0}$, then by integrating those columns out, we obtain \begin{align} \label{eq:reduction2} \widehat{\mu}_{n,k}(\Xi) & = \left(\prod_{j=k_{0}+1}^{k} \Vol(S^{n-j})\right)~\widehat{\mu}_{n,k_{0}}( \begin{pmatrix} \lambda_{1}&&\\&\ddots&\\&&\lambda_{k_{0}}\\&\vdots& \end{pmatrix} ), \end{align} where the rest of the elements of the matrix in~\eqref{eq:reduction2} are zeros. Thus we may assume that $\Xi$ takes the special form \begin{align} \label{eq:singular-values} \Xi & = \begin{pmatrix} \lambda_{1}&&\\&\ddots&\\&&\lambda_{k}\\&\vdots& \end{pmatrix}, & \lambda_{1}\geq\cdots\geq\lambda_{k}>0. \end{align} \subsection{Second Fundamental Form} \label{sec:second-order-study} The trick to computing first and second-order expansions is to express the Gauss map and second fundamental form using matrix calculus, and this approach is from~\cite{cit:absil-mahony-sepulchre}. It may help to put this in the context of the sphere. For a vector $u\in\mathbb{R}^{n}$ the spherical tangential projector at $x\in S^{n-1}$ is given by $P_{x}(u) = (I_{n} - xx^{t})u$, and by differentiating in the direction of $v\in T_{x}S^{n-1}$ we obtain, for unit basis vectors $e_{i}, e_{j}$ of the tangent space at $x$, \begin{align*} \langle \nabla_{e_{i}}P_{x}(e_{j}), x\rangle & = -\langle e_{i}x^{t}e_{j} + xe_{i}^{t}e_{j}, x\rangle \\ & = -\delta_{ij}, \end{align*} which is exactly $\langle\II_{x}(e_{i}, e_{j}),\xi\rangle$ where $\xi~(=x)$ is the outward unit normal at $x$. Indeed, one may write the sphere near $(0,\dots,0,1)$ as the graph of a function \begin{align*} x & \mapsto 1-\delta_{ij}x_{i}x_{j}/2 + O(|x|^{3}), & x\in\mathbb{R}^{n-1}. 
\end{align*} \subsubsection{Tangent and Normal Planes} \label{sec:tang-norm-plan} By differentiating the equation $X^{t}X = I_{k}$ we obtain $dX^{t}X + X^{t}dX = 0$, and taking $X = Y$ gives $dX^{t}_{1} + dX_{1} = 0$. Thus the tangent plane of $\St^{n}_{k}$ at $Y$ is given by all matrices of the form $\begin{pmatrix}A_{1}\\A_{2}\end{pmatrix}$, where $A_{1}\in\mathfrak{so}(k)$ and $A_{2}\in\mathbb{R}^{(n-k)\times k}$. Since symmetric matrices orthogonally complement skew-symmetric matrices, the normal space is given by all matrices of the form $\begin{pmatrix}N_{1}\\0\end{pmatrix}$ where $N_{1}\in\Sym^{2}(k)$. Using the orthogonal decomposition $\mathbb{R}^{k\times k}\ni L \mapsto (\frac{L-L^{t}}2, \frac{L+L^{t}}2)$ and the equations $Y^{t}A = A_{1}$ and $YA_{1} = \begin{pmatrix}A_{1}\\0\end{pmatrix}$, one may write the projector at $Y$ by \begin{align} \label{eq:tangential-projector} P_{Y}(A) = (I_{n} - YY^{t})A+\frac 12Y(Y^{t}A - A^{t}Y). \end{align} The normal projector is given by $P^{\bot} = I_{n} - P$, and so \begin{align} \label{eq:normal-projector} P^{\bot}_{Y}(A) = \frac 12 Y(Y^{t}A + A^{t}Y). \end{align} In fact, formulas~\eqref{eq:tangential-projector} and~\eqref{eq:normal-projector} hold even when the special point $Y$ is replaced by a general point $X\in\St^{n}_{k}$, which one sees by writing $Y = OX$ for some $O\in\mathbb{O}(n)$ and then for $B\in T_{X}\St^{n}_{k}$ setting $A = O^{t}B$ in~\eqref{eq:tangential-projector},~\eqref{eq:normal-projector}. It follows that the form of the tangent and normal spaces at a general point $X\in\St^{n}_{k}$ is given by \begin{align} \label{eq:tangent-space} T_{X}\St^{n}_{k} & = \{XA_1 + KA_2 : A_1\in\mathfrak{so}(k), A_2\in\mathbb{R}^{(n-k)\times k}\}, & X^{t}K = 0, \\ \label{eq:normal-space} T^{\bot}_{X}\St^{n}_{k} & = \{XN_1 : N_1\in\Sym^2(k)\}. \end{align} where $K$ is any $n\times (n-k)$ matrix of maximal rank solving $X^{t}K = 0$. 
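The projector formulas above lend themselves to a direct numerical check. The Python sketch below (illustrative; a general point $X$ is obtained by orthonormalizing a Gaussian matrix) verifies that~\eqref{eq:tangential-projector} maps into the tangent space described by~\eqref{eq:tangent-space}, annihilates the normal directions of~\eqref{eq:normal-space}, and is idempotent:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3

# a general point on St^n_k (orthonormal columns via QR)
X, _ = np.linalg.qr(rng.standard_normal((n, k)))

def proj_tan(X, A):
    # eq. (tangential-projector), valid at a general point X
    n = X.shape[0]
    return (np.eye(n) - X @ X.T) @ A + 0.5 * X @ (X.T @ A - A.T @ X)

A = rng.standard_normal((n, k))
T = proj_tan(X, A)

# X^t T is skew-symmetric, so T lies in the tangent space (eq. tangent-space)
assert np.allclose(X.T @ T + T.T @ X, 0.0)

# normal directions X N1 with N1 symmetric are annihilated (eq. normal-space)
N1 = rng.standard_normal((k, k))
assert np.allclose(proj_tan(X, X @ (N1 + N1.T)), 0.0)

# the projector is idempotent
assert np.allclose(proj_tan(X, T), T)
```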
\subsubsection{Differentiating the Projector} \label{sec:diff-proj} Let $X\in\St^{n}_{k}$, $A,B\in T_{X}\St^{n}_{k}$ and extend $B$ to the vector field $\bar B := P_{Z}B, Z\in\mathbb{R}^{n\times k}$. We define the second fundamental form of $\St^{n}_{k}$ at $X$ in the usual manner, \begin{align} \label{eq:second-fundamental-form} \II_{X}(A,B) & := P^{\bot}_{X}(\nabla_{A}\bar{B}) \end{align} and claim that \begin{align} \label{eq:differentiating-projector} \II_{X}(A,B) = \nabla_{A}P_{X}(B), \end{align} which one can see by noting that the map $\St^{n}_{k}\ni Z\mapsto \|P_{Z}(B)\|$ has a critical point at $Z = X$, where $P_{X}(B) = B$. This allows us to compute, \begin{proposition}[from~\cite{cit:absil-mahony-sepulchre}] \label{prop:second-fundamental-form} For $X\in\St^{n}_{k}$ and $A,B\in T_{X}\St^{n}_{k}$, \begin{align} \label{eq:second-fundamental-form-matrix} \II_{X}(A,B) = - \frac 12 X(A^{t}B + B^{t}A). \end{align} \end{proposition} \begin{proof} It suffices to prove the formula at $X = Y$, as it follows for other points by the homogeneity of $\St^{n}_{k}$, as in Section~\ref{sec:tang-norm-plan}. By~\eqref{eq:differentiating-projector} and~\eqref{eq:tangential-projector}, \begin{align} \label{eq:second-fundamental-form-calculation} \II_{Y}(A,B) & = -AY^{t}B - YA^{t}B + \frac 12 A(Y^{t}B - B^{t}Y) + \frac 12 Y(A^{t}B - B^{t}A) \\ & = -\frac 12(A(Y^{t}B + B^{t}Y) + Y(A^{t}B + B^{t}A)). \end{align} The first term is zero since $Y^{t}B = B_{1}\in\mathfrak{so}(k)$ according to Section~\ref{sec:tang-norm-plan}. 
\end{proof} \subsubsection{Calculating the Eigenvalues} \label{sec:calc-eigenv} The points $X\in\St^{n}_{k}$ for which $\Xi$ belongs to the normal space are given by the rectangular-diagonal matrices with diagonal elements $\pm 1$, since according to~\eqref{eq:normal-space} they must solve the equation $XN_{1} = \Xi$ for some $N_{1}\in\Sym^{2}(k)$, which means that $N_{1}^{2} = \Xi^{t}\Xi$ and so the fact follows by the diagonal reduction~\eqref{eq:singular-values}. In particular $Y$ is one of them, and they may all be parametrized by $s\in\{-1,1\}^{k}$. We calculate $\langle\II_{Y}, \Xi\rangle$, as the other cases follow by the homogeneity property of $\St^{n}_{k}$ as in Section~\ref{sec:tang-norm-plan}. We wish to calculate $\langle\II_{Y}(A,B), \Xi\rangle$ for $A,B$ elements of a basis of the tangent space at $Y$. Luckily, only the diagonal terms are nonzero for the obvious choice of basis: for the $\mathfrak{so}(k)$-part of the tangent space, the basis is given by $\mathcal A_{i,j} := (E_{i,j} - E_{j,i})/\sqrt{2}$ for $1 \leq i < j \leq k$, where $E_{i,j}\in\mathbb{R}^{n\times k}$ is the matrix with $1$ in the $i$-th row and $j$-th column and zeros elsewhere, and for the Grassmannian part, the rest of the basis is given by $E_{i,j}$ for $k < i \leq n$ and $1 \leq j \leq k$. It is then easy to verify using~\eqref{eq:second-fundamental-form-matrix} the basis relations, \begin{align} \label{eq:eigenvalues-so-part} \langle \II_{Y}(\mathcal{A}_{i,j}, \mathcal{A}_{i,j}), \Xi\rangle & = - \frac 12(\lambda_{i} + \lambda_{j}), & 1 \leq i < j \leq k, \\ \label{eq:eigenvalues-grassman-part} \langle \II_{Y}(E_{i,j}, E_{i,j}), \Xi\rangle & = - \lambda_{j}, & 1 \leq j \leq k < i \leq n, \\ \langle \II_{Y}(A, B), \Xi\rangle & = 0, &\text{other basis vectors}.
\nonumber \end{align} For the other points parametrized by $s\in\{-1,1\}^{k}$, replace $\lambda_{i}$ with $s_{i}\lambda_{i}$ in~\eqref{eq:eigenvalues-so-part} and~\eqref{eq:eigenvalues-grassman-part}. Indeed the signature of the second fundamental form for such a point is easily computed to be equal to \begin{align} \label{eq:signature} \sum_{j=1}^{k}s_{j}(j-n), \end{align} and the absolute value of the determinant of its second fundamental form to be equal to \begin{align} \label{eq:determinant} 2^{-k(k-1)/2}|\lambda_{1}\cdots\lambda_{k}|^{n-k}\prod_{1\leq i < j\leq k}|s_{i}\lambda_{i} + s_{j}\lambda_{j}|. \end{align} \subsection{Application of the Method of Stationary Phase} \label{sec:appl-meth-stat} Theorem~\ref{thm:stationary-phase}, stated below, may be found for $m=n-1$ in~\cite[Chapter VIII]{cit:stein}. It follows for general $m$ by simple geometric considerations, although we could not find a reference that contains it in this exact form. We apply Theorem~\ref{thm:stationary-phase} using the calculations of Section~\ref{sec:calc-eigenv}, in particular~\eqref{eq:signature} and~\eqref{eq:determinant}, to obtain~\eqref{eq:stiefel-asymptotics}. \begin{theorem}[Stationary Phase] \label{thm:stationary-phase} Let $M \hookrightarrow \mathbb{R}^n$ be an immersed $m$-dimensional compact submanifold of $\mathbb{R}^n$ with surface measure denoted by $\mu$. Let $\xi\in\mathbb{R}^n$ be a unit vector and let $\langle\II_x, \xi\rangle$ denote the second fundamental form of $M$ at $x\in M$ in the direction of $\xi$. Assume there are finitely many points $x_1, \dots, x_N$ where $\xi$ belongs to the normal space of $M$ and assume that $\det\langle \II, \xi\rangle \not = 0$ there. Denote by $\Sgn\langle \II_{x}, \xi\rangle$ the number of positive eigenvalues minus the number of negative eigenvalues of the second fundamental form at $x$. 
Then for $\tau\to+\infty$ and continuously in $\xi$, \begin{align*} \widehat{\mu}(\tau\xi) & = \tau^{-m/2}~\sum_{j=1}^N e^{-2\pi i(\tau x_{j}\cdot \xi + \Sgn\langle\II_{x_j}, \xi\rangle/8)}|\det\langle\II_{x_j},\xi\rangle|^{-1/2} + O(\tau^{-m/2-1}). \end{align*} \end{theorem} \section{A Closer Look at the Problematic Singular Values} \label{sec:closer-look-at} The only explicit computation we are able to carry out presently is for $k=2$, and this was first done in~\cite{cit:palsson-sovine}. We find using~\eqref{eq:measure-decomposition} that \begin{align} \label{eq:explicit-computation} \widehat{\mu}_{n,2}(\begin{pmatrix}\kappa & \\ & \lambda \\ \vdots & \vdots\end{pmatrix}) & = \int e^{-2\pi i (\kappa x\cdot e_{1} + \lambda y\cdot e_{2})} d\mu_{n,2}(x,y) \\ & = \iint e^{-2\pi i\lambda y\cdot e_{2}} d\sigma_{\bot,x}(y) e^{-2\pi i \kappa x\cdot e_{1}} d\sigma_{n-1}(x) \end{align} where $\sigma_{\bot,x}$ is the surface measure of the unit $(n-2)$-sphere of the hyperplane perpendicular to $x$. It is easy to see that this is equal to \begin{align} \label{eq:explicit-computation-2} \int e^{-2\pi i \kappa x\cdot e_{1}} \widehat{\sigma}_{n-2}(\lambda\sqrt{1-x_{2}^{2}})d\sigma_{n-1}(x). \end{align} By applying the coarea formula on the function $f(x) = x_{2}$, we further obtain \begin{align} \label{eq:explicit-computation-3} \Vol(S^{n-2})~\int_{-1}^{1} \widehat{\sigma}_{n-2}(\kappa\sqrt{1-t^{2}})\widehat{\sigma}_{n-2}(\lambda\sqrt{1-t^{2}})(1-t^{2})^{\frac{n-3}2}dt, \end{align} which according to~\eqref{eq:sphere-bessel} is also equal to \begin{align} \label{eq:explicit-computation-4} \frac{4\pi^{2}\Vol(S^{n-2})}{(\kappa\lambda)^{\frac{n-3}2}}~\int_{-1}^{1} J_{\frac{n-3}2}(2\pi\kappa\sqrt{1-t^{2}})J_{\frac{n-3}2}(2\pi\lambda\sqrt{1-t^{2}})dt. \end{align} The authors in~\cite{cit:palsson-sovine} note that they cannot extract the full asymptotic decay from~\eqref{eq:explicit-computation-4}.
On the other hand, the authors in~\cite{cit:iosevich-liu} obtain our result~\eqref{eq:stiefel-asymptotics} for $k=2$ using the method of stationary phase. \section{Future Directions} \label{sec:future-directions} For Section~\ref{sec:closer-look-at} we pose three questions: \begin{enumerate} \item Can the asymptotic decay predicted by~\eqref{eq:stiefel-asymptotics} be recovered from~\eqref{eq:explicit-computation-4}? Indeed, one could hope for the full asymptotic expansion, not just the first term. Presumably progress here would require identities involving Bessel functions, see below for one such case. \item Can one claim some (reduced) decay for $\lambda\approx 0$ or for $\kappa\approx\lambda$ using~\eqref{eq:explicit-computation-4}? Even the case $\kappa=\lambda$ would be interesting. \item Repeat the above two questions for $k\geq 3$: can $\widehat{\mu}_{n,k}$ be written out explicitly in terms of Bessel functions and can its full asymptotic expansion be computed? Can the cases $\lambda_{i}\approx 0$ and $\lambda_{i}\approx\lambda_{j}$ (for some $i\not= j$) be dealt with? The case $k=3$ seems to be within reach. \end{enumerate} We have the following remarks to make. First, using $J_{1/2}(t) = \sqrt{\frac 2{\pi t}}\sin(t)$, one obtains for $n=4$ that \begin{align} \label{eq:explicit-four} \widehat{\mu}_{4,2}(\Xi) & = \frac{16\pi}{\kappa\lambda}[J_{0}(2\pi(\kappa-\lambda))-J_{0}(2\pi(\kappa+\lambda))], \end{align} from which we may obtain the full asymptotic expansion. 
Second, by setting $\tilde{\sigma}_{r}$ to be the normalized probability measure on the sphere of radius $r>0$ centered at the origin, we may rewrite~\eqref{eq:explicit-computation-3} into a convolution, \begin{align} \label{eq:random-walk} \Vol(S^{n-2})\Vol(S^{n-1})^{2}\int_{-1}^{1} (\tilde{\sigma}_{\kappa}*\tilde{\sigma}_{\lambda})^{\vee}(\sqrt{1-t^{2}})(1-t^{2})^{\frac{n-3}2}dt, \end{align} which connects the computation with a 2-step random walk, an interesting problem in itself; see~\cite{cit:kluyver},~\cite{cit:borwein-et-al}. From this observation we may anticipate the singular behavior at $\kappa = \lambda$ for all $n\geq 3$ and $k=2$. If one can indeed achieve estimates for $\lambda_{i}\approx 0$ and $\lambda_{i}\approx\lambda_{j}$, then those may be applied (as the authors in~\cite{cit:palsson-sovine} do for $k=2$) using~\cite[Theorem~1.1]{cit:grafakos-he-honzik-park} to obtain continuity results for simplicial operators for $k\geq 3$ analogous to~\cite{cit:palsson-sovine} for $k=2$. The results so obtained may also be contrasted with those obtained by the authors in~\cite{cit:iosevich-palsson-sovine}. Finally, one may compare this result to a related formula known as the Harish-Chandra-Itzykson-Zuber integral formula, see~\cite{cit:harish-chandra} and~\cite{cit:itzykson-zuber}, which is an exact formula for the integral $\int_{\mathbb{U}(n)}e^{\lambda \Tr(AUBU^*)}dU$. This formula has a quadratic phase and is over the unitary group instead of the real orthogonal group, in contrast to our result. One could seek results of our kind with $\mathbb{O}(n)$ replaced by $\mathbb{U}(n)$, and conversely seek an exact quadratic-phase formula with $\mathbb{U}(n)$ replaced by $\mathbb{O}(n)$. \section*{Acknowledgements} The author wishes to thank Allan Greenleaf for helpful discussions on the method of stationary phase and the second-order geometry of the Stiefel manifold, and Daniel Spector for pointing out~\eqref{eq:explicit-four}, and both for their encouragement to work on this problem.
\label{sec:acknowledgements}
\section{Introduction} The discovery of novel organic molecules has wide applications in areas such as drug design and catalysis development\cite{meyers2021novo}. However, owing to sophisticated structure-property relationships, traditional rational design approaches have covered only an extremely limited portion of the chemical design space\cite{zunger2021understanding}. Recently, a large number of generative machine learning algorithms and models have been proposed for molecule design, as systematically reviewed in \cite{meyers2021novo,du2022molgensurvey,imrie2020deep}. The first category of these methods comprises deep generative models, mainly including Variational Autoencoders (VAEs)\cite{dai2018syntax}, Generative Adversarial Networks (GANs)\cite{guimaraes2017objective}, and normalizing flow-based models \cite{zang2020moflow}. Two major limitations of these models are their black-box nature and their difficulty in dealing with modularity in molecule design. The black-box nature of the deep neural network-based generator makes it difficult to interpret these models in terms of the chemical knowledge they learn and how they exploit that implicit knowledge during generation. The second category of molecule generative design methods includes several key combinatorial optimization algorithms such as genetic algorithms (GAs)\cite{kwon2021evolutionary}, reinforcement learning (RL)\cite{blaschke2020reinvent}, Bayesian optimization\cite{winter2019efficient}, Monte Carlo Tree Search (MCTS)\cite{yang2017chemts}, and Markov Chain Monte Carlo (MCMC) \cite{du2022molgensurvey}. While GAs have demonstrated superior performance in several molecule design benchmark studies \cite{huang2021therapeutics,brown2019guacamol}, their genetic operators of mutation and cross-over lack the learning capability needed for intelligent and efficient chemical space exploration.
This also applies to MCTS, which locally and randomly searches each branch of intermediates and selects the most promising ones during each generation iteration \cite{yang2020practical}. Bayesian optimization is usually applied together with VAEs and searches the chemical space in the latent space, which makes it difficult to handle chemical constraints explicitly \cite{winter2019efficient} and also cannot accommodate modularity in molecule design. Reinforcement learning has been applied to generative models with both SMILES and 2D graph representations, learning a policy network that determines the optimal actions to maximize a global reward such as a given property\cite{zhou2019optimization, blaschke2020reinvent}. However, RL is rarely used in de novo molecule generation, partly due to the difficulty of achieving long-range credit assignment and of obtaining a differentiable validity check as the reward signal. Another important consideration in the design of generative models for molecules is the representation level of the molecules, which includes atom-based, fragment-based, and reaction-based approaches. Most existing models have used atom-based representations such as SMILES \cite{weininger1988smiles}, while other more advanced representations such as SELFIES \cite{krenn2019selfies} and DeepSMILES \cite{o2018deepsmiles} have been proposed for molecule property prediction. How the choice of molecule representation affects generative design performance remains unsettled. It has also been found that basic atom representations such as SMILES are not easy to use for exploiting the modules, motifs, or skeletons of known molecules. On the other hand, while fragment- and reaction-based generative models can exploit such larger building blocks, they are limited in their expressive power.
Another major limitation of existing deep generative models for molecule design is that most of them cannot be used for tinkering design, in which a specified part of an existing molecule is masked and replaced with other modules to obtain a specific property, even though this is one of the most widely used approaches to exploring new molecules \cite{zunger2021understanding} given the many constraints imposed on the possible options. During these processes, chemists or molecular scientists usually resort to their intuition, chemical knowledge, and expertise to select substitution or doping elements and proportions to tune the properties of the molecule, considering a variety of factors such as chemical compatibility, toxicity, geometric compatibility, synthesizability, and other heuristic knowledge. Here we propose a self-supervised probabilistic language model, the Generative Molecular Transformer (GMTransformer), for molecular design and generation. The model is based on transformers and the self-supervised blank-filling language model BLM \cite{shen2020blank}. The model computes its probabilities in an interpretable way and derives different actions depending on the token frequencies in its vocabulary. We use the SMILES, SELFIES, and DeepSMILES representations to train different models, and find that each has its own advantages. The ease of interpretation, data efficiency, and tinkering design potential have been demonstrated in our recent work on inorganic materials composition design \cite{wei2022crystal}, which inspires us to explore the model's potential for molecule design in this work. We use the MOSES benchmarking metrics to evaluate the performance of our GMTransformer models. The results of our extensive experiments show strong performance compared to state-of-the-art baselines. Our GMTransformer model with the SMILES representation achieves 96.83\% novelty and 87.01\% IntDiv, which demonstrates that our model is capable of generating a wide variety of novel molecules.
We also train generative models for maximizing different properties: logP, tPSA, and QED, and find that our models can learn to generate molecules with specific properties, as demonstrated by the property distributions of the generated molecules. \section{Results} \label{sec:headings} \subsection{Generative and tinkering molecular design as a blank-filling process} SMILES (Simplified Molecular Input Line Entry System) uses a string of characters to describe a chemical structure. Atoms, bonds, and branches make up SMILES strings. Atoms are represented by their element symbols, e.g., C, N, O, S, F. Atoms in aromatic rings are represented by lowercase letters, such as the lowercase c for aromatic carbon. There are three types of bonds in SMILES: single, double, and triple bonds, denoted by -, =, and \#, respectively. Branches are specified by enclosure in parentheses. \begin{table}[th] \centering \caption{Strings of SMILES generated as a canvas rewriting process} \label{tab:canvas} \begin{tabular}{lll} \hline \multicolumn{3}{c}{ Canvas rewriting with 4 actions: (E, \_E, E\_, \_E\_)} \\ \hline \multicolumn{1}{l|}{Step t} & Action & operation \\ \hline \multicolumn{1}{l|}{0. \underline{\$1}} & \_E\_ &Replace \$1 blank with \_C\_ \\ \hline \multicolumn{1}{l|}{1. \underline{\$1} C \underline{\$2}} & E & Replace \$1 blank with C \\ \hline \multicolumn{1}{l|}{2. C C \underline{\$1}} & E\_ & Replace \$1 blank with (\_ \\ \hline \multicolumn{1}{l|}{3. C C ( \underline{\$1}} & \_E\_ & Replace \$1 blank with \_O\_ \\ \hline \multicolumn{1}{l|}{4. C C ( \underline{\$1} O \underline{\$2}} & \_E & Replace \$1 blank with = \\ \hline \multicolumn{1}{l|}{5. C C ( = O \underline{\$1}} & E\_ &Replace \$1 blank with )\_ \\ \hline \multicolumn{1}{l|}{6. C C ( = O ) \underline{\$1}} & E &Replace \$1 blank with C \\ \hline \multicolumn{1}{l|}{7.
C C ( = O ) C} & & \\ \hline \end{tabular} \end{table} Table \ref{tab:canvas} illustrates how GMTransformer generates the SMILES string $CC(=O)C$ step by step through canvas rewriting. At the beginning there is only an initial blank token \underline{\$1} on the canvas; GMTransformer then repeatedly selects a candidate token together with one of four rewriting actions: (1) action E: replace a blank with the element E; (2) action \_E: replace a blank with element E and insert a new blank on its left side, allowing further element insertion; (3) action E\_: replace a blank with element E and insert a new blank on its right side, allowing further element insertion; (4) action \_E\_: replace a blank with element E and insert new blanks on both sides \cite{wei2022crystal}. The process ends when a string without any blank symbols has been generated on the canvas. In Table \ref{tab:canvas}, the canvas holds a single initial blank at step 0, and the model selects action \_E\_ with element C to obtain \underline{\$1} C \underline{\$2}. In the first step it replaces the blank \underline{\$1} with element C using action E. In the second step it replaces the blank \underline{\$1} with the branch (\_. It then chooses action \_E\_ and fills the blank with element O. In the next two steps it inserts the bond = and the branch )\_ respectively, yielding the canvas C C ( = O ) \underline{\$1}. Finally, it replaces \underline{\$1} with element C. GMTransformer differs from BERT \cite{devlin2018bert} and XL-Net \cite{yang2019xlnet} in that it relies on the existing content of the canvas to learn and generate sequences: instead of using the context of a pre-masked word to predict the probability of the masked word, GMTransformer directly chooses an action and then inserts the token that best matches the content it has learned at the appropriate position, based on the probabilistic dependencies in the generated vocabulary.
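The canvas-rewriting steps in Table \ref{tab:canvas} can be reproduced with a short script. This is an illustrative sketch only: the (action, token) sequence is hard-coded here, whereas GMTransformer samples it from learned probability distributions at every step.

```python
# Illustrative simulation of the BLM-style canvas rewriting in Table 1.
# In GMTransformer the (action, token) choices are sampled from learned
# probabilities; here they are hard-coded to reproduce the example.

ACTIONS = {
    'E':   lambda tok: [tok],            # replace blank with token
    '_E':  lambda tok: ['_', tok],       # token plus a new blank on the left
    'E_':  lambda tok: [tok, '_'],       # token plus a new blank on the right
    '_E_': lambda tok: ['_', tok, '_'],  # new blanks on both sides
}

def rewrite(canvas, action, token, which_blank=0):
    """Fill the which_blank-th blank ('_') on the canvas."""
    blanks = [i for i, t in enumerate(canvas) if t == '_']
    i = blanks[which_blank]
    return canvas[:i] + ACTIONS[action](token) + canvas[i + 1:]

canvas = ['_']  # step 0: a single initial blank
for action, token in [('_E_', 'C'), ('E', 'C'), ('E_', '('),
                      ('_E_', 'O'), ('E', '='), ('E_', ')'), ('E', 'C')]:
    canvas = rewrite(canvas, action, token)

print(''.join(canvas))  # CC(=O)C
```

Running the loop reproduces exactly the intermediate canvases listed in the table, ending with the blank-free string CC(=O)C.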
\subsection{Generative Molecular Transformer: blank-filling language model for molecule generation} Unlike black-box models such as Variational Autoencoders (VAEs) \cite{dai2018syntax}, Generative Adversarial Networks (GANs) \cite{guimaraes2017objective}, and normalizing-flow models \cite{zang2020moflow}, GMTransformer is a process-interpretable model designed on top of the blank language model (BLM) \cite{shen2020blank}. GMTransformer directly models the probabilities of the tokens in the vocabulary: it relies on the content of the existing canvas to compute the probability distribution used to select actions and tokens that produce the next canvas. It can thus control the intermediate steps of string generation, and each step can give an explanation of why it was taken. GMTransformer uses SMILES, SELFIES and DeepSMILES representations with atom-level tokenization. For SMILES, the atom-level vocabulary used during training consists of 22 tokens from the SMILES strings plus 7 special tokens: 13 atom tokens $<C>, <c>, <O>, <o>, <N>, <n>, <F>, <S>, <s>, <Cl>, <Br>, <[nH]>$, and $<[H]>$; 3 bond tokens $<->, <=>, <\#>$; 6 ring tokens $<1>, <2>, <3>, <4>, <5>, <6>$; and the 7 special tokens $<PAD>, <UNK>, <FIRST>, <LAST>, <EOS>, <BLANK>, <BLANK\_0>$. SELFIES and DeepSMILES use the same 7 special tokens as SMILES. SPE tokenization yields a mean sequence length of approximately 6 tokens, compared with approximately 40 for atom-level tokenization. SMILES Pair Encoding adds, besides the special tokens, unique tokens for frequent SMILES substrings, e.g., $<CCC(C)(C)>$, $<CCCC(C)>$, $<NC(=O)C>$. Both SMILES and DeepSMILES can use SPE tokenization, which does not apply to SELFIES; more details can be found in \cite{li2021smiles}. Figure \ref{fig:architecture} shows the architecture of our Generative Molecular Transformer (GMTransformer). The model utilizes four networks in three iterable stages.
The first stage comprises the transformer network and linear and softmax layers; the second and third stages consist of linear and softmax layers and a multi-layer perceptron network, respectively. In the first stage, the transformer network encodes the canvas into a sequence of representations, and the location of the blank to be filled is selected by computing probabilities with the linear and softmax layers. In the second stage, an appropriate token for that blank is picked, again with linear and softmax layers. In the final stage, the action of whether or not to create blanks to the left and right is determined by feeding the concatenation of the representations of the selected blank and the chosen token into the multi-layer perceptron network. The model updates the canvas and repeats the process until no blank positions remain on the canvas. During training, the model first initializes the parameters $\theta$ and randomly samples a training example $x=\left(x_{1}, \cdots, x_{n}\right)$ of length $n$. Next, it samples a step length $t$ from $0$ to $n-1$ and a generation order given by an $n$-permutation $\sigma$ of the example. It constructs a canvas $c$ that retains the first $t$ tokens $x_{\sigma_j}$ ($j=1,\ldots,t$) and collapses the remaining $n-t$ tokens into blanks. It then takes the $n-t$ target actions $a_{j-t}$ for filling in $x_{\sigma_j}$ ($j=t+1,\ldots,n$) and calculates the loss in Eq.~\ref{eq:loss}. Finally it updates $\theta$ by gradient descent and repeats the whole process until convergence. More details can be found in \cite{shen2020blank}.
\begin{equation} -\log (n!)-\frac{n}{n-t} \sum_{j=t+1}^{n} \log p\left(a_{j-t}^{x, \sigma} \mid c_{t}^{x, \sigma} ; \theta\right) \label{eq:loss} \end{equation} where $\theta$ denotes the model parameters; $c_{t}^{x, \sigma}$ is the canvas at step $t$ for the training example $x$ under the generation order (permutation) $\sigma$; and $a_{j-t}^{x, \sigma}$ is the corresponding target action, which specifies the token to insert into the selected blank and whether or not to create blanks to its left and right. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{architecture-MOL.pdf} \caption{Neural network architecture of the blank-filling language model for molecule tinkering, using the SMILES string $O=C1CC(c2ccccc2)Oc2cc(O)cc(O)c21$ as an example. } \label{fig:architecture} \end{figure} \subsection{De novo generative design of molecules} \paragraph{Training of GMTransformer for hypothetical molecule generation} We use the MOSES dataset as our benchmark, as it is widely used in the generative molecular design community, and our performance evaluation criteria are derived from the MOSES package, which is likewise a standard for evaluating generators. MOSES is a benchmarking platform created to standardize the training and evaluation of molecule generation models. Its initial dataset, ZINC Clean Leads, contains about 4.6 million molecules. The final dataset was obtained by filtering out molecules containing charged atoms or atoms other than C, N, S, O, F, Cl, Br and H; macrocyclic molecules with more than 8 atoms in the ring; and molecules rejected by medicinal chemistry filters (MCFs) and PAINS filters. MOSES provides training and test sets as well as a set of metrics for assessing the quality and diversity of the generated molecules.
We also evaluate the generated samples on three additional properties, which are used for training the conditional GMTransformer generators: the octanol-water partition coefficient (logP), the topological polar surface area (tPSA), and the quantitative estimate of drug-likeness (QED) \cite{bickerton2012quantifying}, all computed with RDKit \cite{landrum2019rdkit}. \FloatBarrier \subsection*{Evaluation of GMT's molecular generation performance} We evaluate the performance of our GMTransformer generators and compare it with that of the benchmark molecular models using ten MOSES evaluation criteria: validity, uniqueness (unique@$1k$ and unique@$10k$), internal diversity (IntDiv), filters, novelty, similarity to a nearest neighbor (SNN), Fr\'echet ChemNet distance (FCD), fragment similarity (Frag), and scaffold similarity (Scaf). As shown in Table \ref{table:performance}, GMT-SMILES, GMT-PE-SMILES and GMT-SELFIES generate 85.87\%, 82.88\% and 100\% valid samples, respectively. The uniqueness of all models is almost 100\%. Notably, the novelty of GMT-SMILES, GMT-PE-SMILES and GMT-SELFIES is as high as 95.31\%, 88.29\% and 96.83\%, respectively. At the same time, GMT-SMILES, GMT-PE-SMILES and GMT-SELFIES reach IntDiv values of 85.69\%, 85.58\% and 87.01\%, the highest among all benchmarked models. These high values mean that they can generate samples with higher diversity, which may accelerate the discovery of new chemical structures. For FCD/Test, GMT-PE-SMILES performs best among our models with 0.1986, while GMT-SMILES and GMT-SELFIES obtain 0.7294 and 3.7750, respectively. GMT-SMILES, GMT-PE-SMILES and GMT-SELFIES also achieve high Scaf/TestSF values of 0.1650, 0.1087 and 0.1096, respectively.
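Several of the string-level metrics above reduce to simple set operations once the generated strings are canonicalized. A minimal sketch (assuming the SMILES strings below are already canonical; in practice canonicalization would be done with RDKit):

```python
# Minimal sketch of the uniqueness and novelty metrics, assuming the
# generated and training strings are already canonical SMILES
# (in practice canonicalization is performed with RDKit).

def unique_at_k(generated, k):
    """Fraction of distinct molecules among the first k generated ones."""
    head = generated[:k]
    return len(set(head)) / len(head)

def novelty(generated, training_set):
    """Fraction of distinct generated molecules absent from the training set."""
    gen = set(generated)
    return len(gen - set(training_set)) / len(gen)

generated = ['CCO', 'CCO', 'CCN', 'c1ccccc1']
training = {'CCO', 'CCC'}
print(unique_at_k(generated, 4))     # 0.75
print(novelty(generated, training))  # 2 of the 3 distinct molecules are novel
```

Validity, by contrast, requires an actual parser (e.g. RDKit's), which is why SELFIES-based models score 100\% on it by construction.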
\begin{table}[] \caption{Comparison of the MOSES Benchmarking Results} \centering \label{table:performance} \begin{tabular}{|lll|lll|llll|} \hline \multicolumn{3}{|l|}{} & \multicolumn{3}{c|}{GMT} & \multicolumn{4}{c|}{MOSES reference models} \\ \cline{4-10} \multicolumn{3}{|l|}{\multirow{-2}{*}{}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}GMT-\\ SMILES\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}GMT-PE-\\ SMILES\end{tabular}} & \begin{tabular}[c]{@{}l@{}}GMT-\\ SELFIES\end{tabular} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}GCT\\ -SGDR\end{tabular}} & \multicolumn{1}{l|}{VAE} & \multicolumn{1}{l|}{AAE} & \begin{tabular}[c]{@{}l@{}}char\\ RNN\end{tabular} \\ \hline \multicolumn{1}{|l|}{validity} & \multicolumn{1}{l|}{$\uparrow$} & & \multicolumn{1}{l|}{0.8587} & \multicolumn{1}{l|}{0.8288} & \textbf{1.0000} & \multicolumn{1}{l|}{0.9916} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.9767$\pm$\\ 0.0012\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.9368$\pm$\\ 0.0341\end{tabular}} & \begin{tabular}[c]{@{}l@{}}0.9748±\\ 0.0264\end{tabular} \\ \hline \multicolumn{1}{|l|}{unique@1k} & \multicolumn{1}{l|}{$\uparrow$} & & \multicolumn{1}{l|}{\textbf{1.0000}} & \multicolumn{1}{l|}{\textbf{1.0000}} & \textbf{1.0000} & \multicolumn{1}{l|}{0.998} & \multicolumn{1}{l|}{\textbf{1.0±0.0}} & \multicolumn{1}{l|}{\textbf{1.0±0.0}} & \textbf{1.0±0.0} \\ \hline \multicolumn{1}{|l|}{unique@10k} & \multicolumn{1}{l|}{$\uparrow$} & & \multicolumn{1}{l|}{0.9998} & \multicolumn{1}{l|}{0.9995} & \textbf{1.0000} & \multicolumn{1}{l|}{0.9797} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.9984±\\ 0.0005\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.9973±\\ 0.002\end{tabular}} & \begin{tabular}[c]{@{}l@{}}0.9994 ±\\ 0.0003\end{tabular} \\ \hline \multicolumn{1}{|l|}{IntDiv} & \multicolumn{1}{l|}{$\uparrow$} & & \multicolumn{1}{l|}{0.8569} & \multicolumn{1}{l|}{0.8558} & \textbf{0.8701} & \multicolumn{1}{l|}{0.8458}
& \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.8558±\\ 0.0004\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.8557±\\ 0.0031\end{tabular}} & \begin{tabular}[c]{@{}l@{}}0.8562 ±\\ 0.0005\end{tabular} \\ \hline \multicolumn{1}{|l|}{filters} & \multicolumn{1}{l|}{$\uparrow$} & & \multicolumn{1}{l|}{0.9766} & \multicolumn{1}{l|}{0.9797} & 0.7961 & \multicolumn{1}{l|}{\textbf{0.9982}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.6949±\\ 0.0069\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.9960±\\ 0.0006\end{tabular}} & \begin{tabular}[c]{@{}l@{}}0.9943 ±\\ 0.0034\end{tabular} \\ \hline \multicolumn{1}{|l|}{novelty} & \multicolumn{1}{l|}{$\uparrow$} & & \multicolumn{1}{l|}{0.9531} & \multicolumn{1}{l|}{0.8829} & \textbf{0.9683} & \multicolumn{1}{l|}{0.6756} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.6949±\\ 0.0069\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.7931±\\ 0.0285\end{tabular}} & \begin{tabular}[c]{@{}l@{}}0.8419 ±\\ 0.0509\end{tabular} \\ \hline \multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{} & Test & \multicolumn{1}{l|}{0.5381} & \multicolumn{1}{l|}{0.5778} & 0.4673 & \multicolumn{1}{l|}{\textbf{0.6513}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.6257±\\ 0.0005\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.6081±\\ 0.0043\end{tabular}} & \begin{tabular}[c]{@{}l@{}}0.6015 ±\\ 0.0206\end{tabular} \\ \cline{3-10} \multicolumn{1}{|l|}{\multirow{-2}{*}{SNN}} & \multicolumn{1}{l|}{\multirow{-2}{*}{↑}} & TestSF & \multicolumn{1}{l|}{0.5143} & \multicolumn{1}{l|}{0.5460} & 0.4485 & \multicolumn{1}{l|}{\textbf{0.5990}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.5783±\\ 0.0008\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.5677±\\ 0.0045\end{tabular}} & \begin{tabular}[c]{@{}l@{}}0.5649 ±\\ 0.0142\end{tabular} \\ \hline \multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{} & Test & \multicolumn{1}{l|}{0.7294} & 
\multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}\textbf{0.1986}} & 3.7750 & \multicolumn{1}{l|}{0.7980} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.0990±\\ 0.0125\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.5555±\\ 0.2033\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}0.0732 ±\\ 0.0247\end{tabular}} \\ \cline{3-10} \multicolumn{1}{|l|}{\multirow{-2}{*}{FCD}} & \multicolumn{1}{l|}{\multirow{-2}{*}{↓}} & TestSF & \multicolumn{1}{l|}{1.2607} & \multicolumn{1}{l|}{0.7595} & 4.5698 & \multicolumn{1}{l|}{0.9949} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.5670±\\ 0.0338\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}1.0572±\\ 0.2375\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}0.5204 ±\\ 0.0379\end{tabular}} \\ \hline \multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{} & Test & \multicolumn{1}{l|}{0.9879} & \multicolumn{1}{l|}{0.9982} & 0.9869 & \multicolumn{1}{l|}{0.9922} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.9994±\\ 0.0001\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.9910±\\ 0.0051\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}0.9998 ±\\ 0.0002\end{tabular}} \\ \cline{3-10} \multicolumn{1}{|l|}{\multirow{-2}{*}{Frag}} & \multicolumn{1}{l|}{\multirow{-2}{*}{↑}} & TestSF & \multicolumn{1}{l|}{0.9850} & \multicolumn{1}{l|}{0.9958} & 0.9831 & \multicolumn{1}{l|}{0.8562} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}0.9984±\\ 0.0003\end{tabular}}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.9905±\\ 0.0039\end{tabular}} & \begin{tabular}[c]{@{}l@{}}0.9983 ±\\ 0.0003\end{tabular} \\ \hline \multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{} & Test & \multicolumn{1}{l|}{0.8661} & \multicolumn{1}{l|}{0.9125} & 0.8431 & \multicolumn{1}{l|}{0.8562} & \multicolumn{1}{l|}{\textbf{\begin{tabular}[c]{@{}l@{}}0.9386±\\ 0.0021\end{tabular}}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.9022±\\ 0.0375\end{tabular}} & \begin{tabular}[c]{@{}l@{}}0.9242 ±\\ 0.0058\end{tabular} \\ 
\cline{3-10} \multicolumn{1}{|l|}{\multirow{-2}{*}{Scaf}} & \multicolumn{1}{l|}{\multirow{-2}{*}{↑}} & TestSF & \multicolumn{1}{l|}{\textbf{0.1650}} & \multicolumn{1}{l|}{0.1087} & 0.1096 & \multicolumn{1}{l|}{0.0551} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.0588±\\ 0.0095\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}0.0789±\\ 0.009\end{tabular}} & \begin{tabular}[c]{@{}l@{}}0.1101 ±\\ 0.0081\end{tabular} \\ \hline \end{tabular} \end{table} We also train five GMT models using different representations and tokenizations, generate 30,000 hypothetical molecules with each, and evaluate them using the MOSES benchmarking metrics; Table \ref{table:GMTmodels} compares the results. All models perform very well in terms of uniqueness, in the range of 99.95\%-100\%. In terms of the novelty of the hypothetical molecules, GMT-PE-SMILES achieves 88.29\%, while all other models exceed 90\%. GMT-PE-SMILES outperforms the other models by a wide margin on FCD/Test at 0.1986.
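The FCD values above are Fréchet distances between Gaussian fits to ChemNet activations (the full definition is given in Materials and Methods). As a hedged illustration, the special case of diagonal covariance matrices, where the matrix square root becomes elementwise, can be computed as:

```python
import math

# Frechet distance between two Gaussians with DIAGONAL covariances --
# a simplified sketch of FCD.  The real metric uses full covariance
# matrices of ChemNet activations and a proper matrix square root.

def frechet_diagonal(mu_g, var_g, mu_r, var_r):
    """||mu_G - mu_R||^2 + sum(var_G + var_R - 2*sqrt(var_G*var_R))."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu_g, mu_r))
    # For diagonal Sigmas, (Sigma_G Sigma_R)^{1/2} is elementwise sqrt.
    cov_term = sum(vg + vr - 2.0 * math.sqrt(vg * vr)
                   for vg, vr in zip(var_g, var_r))
    return mean_term + cov_term

print(frechet_diagonal([0, 0], [1, 1], [0, 0], [1, 1]))  # 0.0 (identical Gaussians)
print(frechet_diagonal([1, 0], [1, 1], [0, 0], [4, 1]))  # 2.0
```

Identical activation statistics give a distance of zero; the score grows as either the means or the spreads of the two activation distributions diverge, which is why lower FCD indicates generated sets closer to the reference chemistry.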
\begin{table}[] \caption{Comparison of the Results of the GMT Models} \centering \label{table:GMTmodels} \begin{tabular}{|lll|lllll|} \hline \multicolumn{3}{|l|}{} & \multicolumn{5}{c|}{GMT models} \\ \cline{4-8} \multicolumn{3}{|l|}{\multirow{-2}{*}{}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}GMT-\\ SMILES\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}GMT-PE-\\ SMILES\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}GMT-\\ SELFIES\end{tabular}} & \multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}GMT-\\ DEEP\end{tabular}} & \begin{tabular}[c]{@{}l@{}}GMT-PE-\\ DEEP\end{tabular} \\ \hline \multicolumn{1}{|l|}{validity} & \multicolumn{1}{l|}{↑} & & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.8586} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.8288} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}\textbf{1.0000}} & \multicolumn{1}{l|}{0.8168} & 0.7954 \\ \hline \multicolumn{1}{|l|}{unique@1k} & \multicolumn{1}{l|}{↑} & & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}\textbf{1.0000}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}\textbf{1.0000}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}\textbf{1.0000}} & \multicolumn{1}{l|}{\textbf{1.0000}} & \textbf{1.0000} \\ \hline \multicolumn{1}{|l|}{unique@10k} & \multicolumn{1}{l|}{↑} & & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.9998} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.9995} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}\textbf{1.0000}} & \multicolumn{1}{l|}{\textbf{1.0000}} & 0.9997 \\ \hline \multicolumn{1}{|l|}{IntDiv} & \multicolumn{1}{l|}{↑} & & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.8569} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.8558} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}\textbf{0.8701}} & \multicolumn{1}{l|}{0.8570} & 0.8519 \\ \hline \multicolumn{1}{|l|}{filters} & \multicolumn{1}{l|}{↑} & & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.9765} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.9797} &
\multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.7961} & \multicolumn{1}{l|}{0.9844} & \textbf{0.9847} \\ \hline \multicolumn{1}{|l|}{novelty} & \multicolumn{1}{l|}{↑} & & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.9532} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.8829} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}\textbf{0.9683}} & \multicolumn{1}{l|}{0.9367} & 0.9149 \\ \hline \multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{} & Test & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.5381} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}\textbf{0.5778}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.4673} & \multicolumn{1}{l|}{0.5509} & 0.5722 \\ \cline{3-8} \multicolumn{1}{|l|}{\multirow{-2}{*}{SNN}} & \multicolumn{1}{l|}{\multirow{-2}{*}{↑}} & TestSF & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.5143} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}\textbf{0.5460}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.4485} & \multicolumn{1}{l|}{0.5246} & 0.5405 \\ \hline \multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{} & Test & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.7294} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}\textbf{0.1986}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}3.7750} & \multicolumn{1}{l|}{0.3604} & 0.4366 \\ \cline{3-8} \multicolumn{1}{|l|}{\multirow{-2}{*}{FCD}} & \multicolumn{1}{l|}{\multirow{-2}{*}{↓}} & TestSF & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}1.2607} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}\textbf{0.7595}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}4.5698} & \multicolumn{1}{l|}{0.9563} & 1.0736 \\ \hline \multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{} & Test & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.9879} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}\textbf{0.9982}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.9869} & \multicolumn{1}{l|}{0.9981} & 0.9967 \\ \cline{3-8} \multicolumn{1}{|l|}{\multirow{-2}{*}{Frag}} & \multicolumn{1}{l|}{\multirow{-2}{*}{↑}} & TestSF & 
\multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.9850} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.9958} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.9831} & \multicolumn{1}{l|}{\textbf{0.9964}} & 0.9934 \\ \hline \multicolumn{1}{|l|}{} & \multicolumn{1}{l|}{} & Test & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.8661} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}\textbf{0.9125}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.8431} & \multicolumn{1}{l|}{0.8880} & 0.8903 \\ \cline{3-8} \multicolumn{1}{|l|}{\multirow{-2}{*}{Scaf}} & \multicolumn{1}{l|}{\multirow{-2}{*}{↑}} & TestSF & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}\textbf{0.1649}} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.1087} & \multicolumn{1}{l|}{\cellcolor[HTML]{FFFFFF}0.1096} & \multicolumn{1}{l|}{0.1511} & 0.1170 \\ \hline \end{tabular} \end{table} \FloatBarrier \paragraph{Process of GMT's learning of chemical rules:} To illustrate how chemical rules emerge during the training of our GMT models, we save the intermediate models after 1/5/10/15/20/25/30/50/100/150/200 epochs of training on the SMILES and SELFIES datasets, respectively. We then generate 30,000 samples from each saved model and evaluate validity, unique@10k, IntDiv, Scaf/TestSF and novelty with the MOSES benchmarking metrics. As shown in Figure \ref{fig:SMILESprocess}, the validity of the model using the SMILES representation is only about 50\% of its maximum value when the model has been trained for fewer than 30 epochs, and exceeds 80\% by epoch 100. This growth shows that the model is learning the valence rules and the syntax of the SMILES language. For the model using the SELFIES representation, the results are shown in Figure \ref{fig:SELFIESprocess}: because every SELFIES string is guaranteed to correspond to a valid molecule \cite{krenn2019selfies}, validity remains 100\% throughout training epochs 1 to 200.
The increase in the Scaf/TestSF value also indicates that the model has learned Bemis–Murcko scaffolds \cite{bemis1996properties}, which contain all of a molecule's ring structures and the linker fragments connecting rings. \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{epochs_chart.pdf} \caption{Percentages of valid, unique@10000, IntDiv and Scaf/TestSF samples generated by the models with the SMILES atom-level tokenizer saved over the training process. The models generate few valid SMILES strings in the beginning; as training goes on, they gradually gain the capability to generate chemically valid molecules.} \label{fig:SMILESprocess} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth]{selfies_epochs_chart.pdf} \caption{Percentages of valid, unique@10000, IntDiv and Scaf/TestSF samples generated by the models with the SELFIES atom-level tokenizer saved over the training process. The models generate almost one hundred percent valid strings from beginning to end, and the Scaf/TestSF value grows steadily from epoch 1 to 200.} \label{fig:SELFIESprocess} \end{figure} \FloatBarrier \subsection{Comparison of different molecule representations: SMILES, SELFIES, and DeepSMILES} The choice of representation affects how readily the model generates new, valid molecules. We use three types of string-based molecular representations, the simplified molecular input line entry system (SMILES) \cite{weininger1988smiles}, SELF-referencIng Embedded Strings (SELFIES) \cite{krenn2020self} and DeepSMILES \cite{o2018deepsmiles}, and two kinds of tokenizers: atom-level and SmilesPE \cite{li2021smiles}. Table \ref{table:representations} shows examples of the different molecule representations with the two tokenizers; SELFIES only has atom-level tokenization. We first use SMILES, which is the most widely used representation in computational chemistry.
SMILES has some weaknesses: multiple different SMILES strings can represent the same molecule, and it is not robust because generative models can produce strings that do not represent valid molecular graphs. DeepSMILES is a modification of SMILES that obviates most syntactic errors, although semantic mistakes are still possible \cite{o2018deepsmiles}. Therefore, we also use the SELFIES representation, which yields 100\% valid molecular graphs and thus avoids the robustness problem entirely. SELFIES works like an automaton or derivation grammar designed to eliminate syntactically and semantically invalid strings. Atom-level tokenization is a method commonly used in deep learning that simply breaks the SMILES string character by character, with each character serving as a token. Besides the atom-level tokenizer, we also use the SmilesPE tokenizer, which produces shorter input sequences and thereby saves computational cost during model training and inference. SmilesPE identifies and retains frequent SMILES substrings as unique tokens, each representing a chemically meaningful substructure.
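Atom-level tokenization of the kind described above is commonly implemented with a single regular expression. The sketch below covers the token set used in this work (bracket atoms, the two-letter halogens Cl and Br, ring digits, bonds and branches), not the full SMILES grammar:

```python
import re

# Atom-level SMILES tokenizer sketch: bracket atoms, the two-letter
# halogens (Cl, Br), ring digits, and bond/branch symbols each become
# one token.  The pattern covers the vocabulary used in this work,
# not every SMILES feature.
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|[BCNOSPFI]|[bcnops]|[0-9]|%[0-9]{2}|[=#\-+()/\\.])"
)

def atom_tokenize(smiles):
    tokens = SMILES_TOKEN.findall(smiles)
    # Round-trip check: every character must belong to some token.
    assert ''.join(tokens) == smiles, 'untokenizable characters in input'
    return tokens

print(atom_tokenize('CCl'))      # ['C', 'Cl'] -- Cl is one token, not C + l
print(atom_tokenize('CC(=O)C'))  # ['C', 'C', '(', '=', 'O', ')', 'C']
print(atom_tokenize('c1cc([nH])cc1'))
```

Note the alternation order: bracket atoms and two-letter symbols are tried before single letters, so `Cl` never splits into `C` plus a stray `l`. An SPE tokenizer would instead merge frequent multi-character substrings (e.g. `c1ccccc1`) into single tokens on top of this.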
\begin{table}[ht] \centering \caption{Comparison of the different molecule representations: SMILES, SELFIES, and DeepSMILES} \label{table:representations} \begin{tabular}{|l|l|} \hline \rowcolor[HTML]{FFFFC7} Tokenizer & Atom-level \\ \hline SMILES & C O c 1 c c c c c 1 O C ( = O ) O c 1 c c c c c 1 O C \\ \hline DeepSMILES & C O c c c c c c 6 O C = O ) O c c c c c c 6 O C \\ \hline SELFIES & {[}C{]} {[}N{]} {[}C{]} {[}Branch1{]} {[}C{]} {[}P{]} {[}C{]} {[}C{]} {[}Ring1{]} {[}=Branch1{]} \\ \hline \rowcolor[HTML]{FFFFC7} Tokenizer & SmilesPE \\ \hline SMILES & COc1ccccc1 O C(=O)O c1ccccc1 OC \\ \hline DeepSMILES & CO cccc cc 6 OC =O) O cccc cc 6 OC \\ \hline \end{tabular} \end{table} \subsection{Conditional generative design of molecules} \begin{table}[ht] \centering \caption{Datasets for conditional generation} \begin{tabular}{| >{\columncolor[HTML]{FFFFC7}}c |c|c|c|} \hline \multicolumn{1}{|l|}{\cellcolor[HTML]{FFFFC7}{\color[HTML]{000000} }} & \cellcolor[HTML]{FFFFC7}{\color[HTML]{000000} Whole Set} & \cellcolor[HTML]{FFFFC7}{\color[HTML]{000000} Training Set (Top 50\%)} & \cellcolor[HTML]{FFFFC7}{\color[HTML]{000000} Generated Samples} \\ \hline logP & 1,584,662 & 792,331 & 16,748 \\ \hline tPSA & 1,584,662 & 792,331 & 16,643 \\ \hline QED & 1,584,662 & 792,331 & 17,082 \\ \hline \end{tabular} \end{table} One desirable capability of molecular generators is to design molecules that optimize one or more specific properties. Here we test this capability by building three conditional generators. Specifically, we prepare three different training sets from the MOSES training dataset by picking the samples whose property values lie within the top 50\% of the whole MOSES training dataset, where the three properties are the octanol-water partition coefficient (logP), the topological polar surface area (tPSA), and the quantitative estimate of drug-likeness (QED), all computed using RDKit.
We then train the generators on these high-property training molecules and use them to generate 20,000 candidate samples each, which are then fed to RDKit for property calculation. RDKit cannot compute the properties for some of the generated samples; after filtering these out, we finally obtain 16,748, 16,643, and 17,082 samples for logP, tPSA, and QED, respectively. The distributions of these property values for the whole dataset, the biased (top 50\%) training set, and the generated candidate sets are shown in Figure \ref{fig:distributionmol}. For all three properties, the distributions of our generated molecules are much closer to those of the top 50\% training sets than to the property distributions of the whole MOSES training dataset, which indicates that the GMTransformer models have learned the implicit rules needed to generate high-property molecules. \begin{figure}[ht] \begin{subfigure}[t]{0.5\textwidth} \includegraphics[width=\textwidth]{logP.pdf} \caption{logP} \vspace{-3pt} \label{fig:logP} \end{subfigure} \begin{subfigure}[t]{0.5\textwidth} \includegraphics[width=\textwidth]{TPSA.pdf} \caption{tPSA} \vspace{-3pt} \label{fig:TPSA} \end{subfigure} \begin{subfigure}[t]{0.50\textwidth} \includegraphics[width=\textwidth]{QED.pdf} \caption{QED} \vspace{-3pt} \label{fig:QED} \end{subfigure} \caption{Comparison of the property distributions of three different datasets for logP, tPSA, and QED: the whole MOSES training set, the top-50\%-property set used for training the conditional generator models, and the set of generated samples.} \label{fig:distributionmol} \end{figure} \FloatBarrier \section{Discussion} We propose the Generative Molecular Transformer (GMTransformer), a probabilistic generative language model based on neural networks and transformers for the generation and design of molecules.
The advantages of the GMT model lie in its interpretability and data efficiency, as shown in our previous work on generating hypothetical inorganic materials \cite{wei2022crystal}. Here we show that it can rapidly learn the grammar rules of molecular graphs and generate high-quality hypothetical molecules. Our model is based on the blank-filling language model, which has a unique advantage and potential for tinkering molecule design, as we showed in \cite{wei2022crystal} for tinkering design of materials compositions. To evaluate the performance of our models in generating valid and potentially useful molecules, we compare them with a baseline language-based model and three other deep neural network models using a selected set of criteria from the MOSES metrics. The hypothetical molecules generated by our models and the high-property distribution results show that our GMT models are able to learn the grammar rules of molecules and generate valid molecules. As shown in Table \ref{table:performance}, our GMT-SMILES, GMT-PE-SMILES and GMT-SELFIES generate 85.87\%, 82.88\% and 100\% valid samples, respectively. Notably, the novelty of GMT-SMILES, GMT-PE-SMILES and GMT-SELFIES reaches 95.31\%, 88.29\% and 96.83\%, respectively, compared to 67.56\% for the baseline GCT-SGDR. At the same time, GMT-SMILES, GMT-PE-SMILES and GMT-SELFIES have the highest IntDiv scores among all benchmarked models, 85.69\%, 85.58\% and 87.01\%, which means that they can generate more diverse samples with the potential to accelerate the discovery of new chemical structures. While uniqueness, validity, and novelty are evaluated mainly on molecular structure, the relevance of the generated samples to druggability and biological processes is not captured by these metrics. To address this, we also evaluate our models using the FCD criterion \cite{preuer2018frechet}, which is computed from the activations of the penultimate layer of ChemNet.
This criterion captures both chemical and biological properties of the generated molecules. Among the six language models in Tables \ref{table:performance} and \ref{table:GMTmodels}, our GMT-PE-SMILES achieves the best FCD/Test at 0.1986, while GMT-SMILES obtains 0.7294 and the baseline GCT-SGDR 0.7980. The FCD/Test performance of the GMT-SELFIES model, however, is relatively poor for reasons that are not yet clear; we note that FCD performance is also relatively low in other models that use the SELFIES representation \cite{yang2022exploring,gnaneshwar2022score}. Another advantage of our GMTransformer for molecule generation is that it allows functional groups of molecules to be used as tokens to train models that generate molecules with specific functions. While fragment-based models have been proposed before, the blank-filling model used here can also discover such functional groups as highly dependent subsequences. The discovery and use of these special functional groups may hold great potential for designing molecules with specific functions suitable for real-life scenarios \cite{mowbray2008influence}. We also find that the sequence rewriting probabilities and the interpretability of the GMT model provide more control over the molecular generation process, which brings more potential for generating molecules with specific properties; this has been demonstrated in our materials composition design using the BLM model \cite{wei2022crystal}. \section{Materials and Methods} \label{sec:others} \subsection{Datasets} We use the dataset from the benchmarking platform Molecular Sets (MOSES) at \url{https://github.com/molecularsets/moses} \cite{polykovskiy2020molecular}. It contains 1,936,962 molecular structures in total, which are split into three subsets for the experiments.
The split consists of training samples (around 1.6 M), test samples (176 k), and scaffold test samples (176 k); we use the training and test sets in our experiments. We use the SMILES and SELFIES sets with the basic atom-level and SmilesPE tokenizers. \subsection{Evaluation criteria} We use the MOSES benchmarking score metrics to evaluate the overall quality of the generated samples. Several models with different tokens are used for GMTransformer training, and each model generates 30,000 samples that are evaluated by the MOSES benchmarking metrics in Table \ref{table:performance}. The valid and unique ratios (unique@$1k$ and unique@$10k$) report the validity and uniqueness of the generated SMILES strings, respectively. Novelty is the proportion of molecules in the generated samples that are not in the training set. Filters refers to the proportion of generated molecules that pass the filters applied during dataset construction. The MOSES metrics also measure the internal diversity (IntDiv) \cite{benhenda2018can}, the similarity to the nearest neighbor (SNN) \cite{polykovskiy2020molecular}, the Fr\'echet ChemNet distance (FCD) \cite{preuer2018frechet}, the fragment similarity (Frag) \cite{polykovskiy2020molecular}, and the scaffold similarity (Scaf) \cite{polykovskiy2020molecular}. The internal diversity (IntDiv) is calculated via eq (\ref{eq:intdiv}); it evaluates the chemical diversity in the generated set $G$ of molecules and detects whether the generative model has collapsed. \begin{equation} \operatorname{IntDiv}_{p}(G) = 1-\sqrt[p]{\frac{1}{|G|^{2}} \sum_{{m_{1}, m_{2}} \in G} \operatorname{T} {\left(m_{1}, m_{2}\right)}^{p}} \label{eq:intdiv} \end{equation} where $G$ is the generated set, $m_{1}$ and $m_{2}$ are the Morgan fingerprints \cite{rogers2010extended} of two molecules in $G$, and $T$ is the Tanimoto similarity \cite{tanimoto1958elementary} between them. The similarity to a nearest neighbor (SNN) is calculated via eq (\ref{eq:SNN}).
\begin{equation} \operatorname{SNN}(G, R)=\frac{1}{|G|} \sum_{m_{G} \in G} \max _{m_{R} \in R} T\left(m_{G}, m_{R}\right) \label{eq:SNN} \end{equation} where $m$ is the Morgan fingerprint of a molecule and $T(m_G,m_R)$ is the Tanimoto similarity between a molecule $m_G$ in the generated set $G$ and its nearest neighbor molecule $m_R$ in the reference dataset $R$; SNN averages this similarity over $G$. The Fr\'echet ChemNet distance (FCD) is computed from the activations of the penultimate layer of the deep neural network ChemNet, which was trained to predict the biological activity of drugs. These activations can capture chemical and biological properties of compounds for the two sets $G$ and $R$. It is defined in eq (\ref{eq:FCD}): \begin{equation} \operatorname{FCD}(G, R)=\lvert\lvert\mu_{G}-\mu_{R}\rvert\rvert^{2}+\operatorname{Tr}\left(\Sigma_{G}+\Sigma_{R}-2\left(\Sigma_{G} \Sigma_{R}\right)^{1 / 2}\right) \label{eq:FCD} \end{equation} where $\mu_{G}$, $\mu_{R}$ are the mean vectors for sets $G$ and $R$ respectively, $\Sigma_{G}$, $\Sigma_{R}$ are the full covariance matrices of the activations, and $\operatorname{Tr}$ stands for the trace operator. The fragment similarity (Frag) is calculated via eq (\ref{eq:Frag}), which compares distributions of BRICS fragments \cite{degen2008art} in the generated set $G$ and reference set $R$. \begin{equation} \operatorname{Frag}(G, R)=\frac{\sum_{f \in F}\left(c_{f}(G) \cdot c_{f}(R)\right)}{\sqrt{\sum_{f \in F} c_{f}^{2}(G)} \sqrt{\sum_{f \in F} c_{f}^{2}(R)}} \label{eq:Frag} \end{equation} where $F$ is the set of BRICS fragments and $c_{f}(X)$ stands for the frequency of occurrences of a substructure fragment $f$ in the molecules of set $X$. The scaffold similarity (Scaf) is similar to Frag, but it computes the frequencies of Bemis--Murcko scaffolds \cite{bemis1996properties}.
It is calculated as eq (\ref{eq:Scaf}): \begin{equation} \operatorname{Scaf}(G, R)=\frac{\sum_{s \in S}\left(c_{s}(G) \cdot c_{s}(R)\right)}{\sqrt{\sum_{s \in S} c_{s}^{2}(G)} \sqrt{\sum_{s \in S} c_{s}^{2}(R)}} \label{eq:Scaf} \end{equation} where $S$ is the set of Bemis--Murcko scaffolds and $c_{s}(X)$ stands for the frequency of occurrences of a substructure scaffold $s$ in the molecules of set $X$. In our hyper-parameter study (Table \ref{table:hyperparameter}), the best overall performance is obtained when the number of layers is 15. The values of Validity, Unique@10000, Filters, and Novelty at this point are 86.46\%, 99.99\%, 98.06\%, and 94.13\%, respectively, while the values of FCD/Test and Scaf/TestSF are 31.08\% and 15.41\%, respectively. We nevertheless use the default number of layers (6) instead of 15, because the hyper-parameter study shows that the number of layers has little effect on the overall performance of the model, and the model with the default number of layers is more efficient. \begin{table}[] \caption{Hyper-parameter tuning of the GMTransformer molecule generator} \label{table:hyperparameter} \centering \begin{tabular}{|l|l|l|l|l|l|} \hline Number of layers & & 5 & 10 & 15 & 20 \\ \hline valid & & 0.8582 & 0.8488 & 0.8646 & 0.8549 \\ \hline unique@1000 & & 1.0000 & 1.0000 & 1.0000 & 1.0000 \\ \hline unique@10000 & & 1.0000 & 0.9997 & 0.9999 & 0.9998 \\ \hline IntDiv & & 0.8529 & 0.8536 & 0.8541 & 0.8540 \\ \hline Filters & & 0.9802 & 0.9838 & 0.9806 & 0.9812 \\ \hline Novelty & & 0.9351 & 0.9389 & 0.9413 & 0.9362 \\ \hline \multirow{2}{*}{SNN} & Test & 0.5559 & 0.5556 & 0.5509 & 0.5554 \\ \cline{2-6} & TestSF & 0.5277 & 0.5279 & 0.5252 & 0.5304 \\ \hline \multirow{2}{*}{FCD} & Test & 0.5404 & 0.3243 & 0.3108 & 0.3903 \\ \cline{2-6} & TestSF & 1.1415 & 0.8461 & 0.7978 & 0.8609 \\ \hline \multirow{2}{*}{Frag} & Test & 0.9939 & 0.9950 & 0.9965 & 0.9966 \\ \cline{2-6} & TestSF & 0.9904 & 0.9913 & 0.9933 & 0.9950 \\ \hline \multirow{2}{*}{Scaf} & Test & 0.8954 & 0.8902 & 0.8955 & 0.8868 \\ \cline{2-6} & TestSF & 0.1425 & 0.1482 & 0.1541 & 0.1285 \\ \hline
\end{tabular} \end{table} \FloatBarrier \section*{Acknowledgement} \paragraph{Funding:} The research reported in this work was supported in part by the National Science Foundation under grants 1940099 and 1905775. The views, perspectives, and content do not necessarily represent the official views of the NSF. QW would like to acknowledge the seed funding support from the Big Data Health Science Center (BDHSC) of the University of South Carolina. \paragraph{Author contributions:} Conceptualization, J.H.; methodology, J.H., L.W., N.F., Y.S., Q.W.; software, J.H., L.W., N.F.; resources, J.H.; writing--original draft preparation, J.H., L.W., N.F.; writing--review and editing, J.H., L.W.; visualization, L.W., Y.S., J.H.; supervision, J.H.; funding acquisition, J.H., Q.W. \paragraph{Competing interests:} The authors declare that they have no competing interests. \paragraph{Data and code availability:} The raw molecules QM9 dataset is downloaded from \url{http://quantum-machine.org/datasets/}. The preprocessed datasets and code download link can be found at \url{http://github.com/usccolumbia/GMTransformer} \bibliographystyle{unsrt}
\section{Introduction} \label{sec:introduction} Processes with jets remain one of the most important tools used to study Quantum Chromodynamics (QCD) at hadron colliders, in particular at the LHC~\cite{Salam:2009jx, Sapeta:2015gee} and the future Electron-Ion Collider~(EIC)~\cite{Accardi:2012qut,Li:2020rqj,Arratia:2019vju,Zheng:2014vka}. Amongst them, the production of dijets proves particularly useful to address various questions concerning QCD dynamics. When both jets are produced in the central rapidity region, the energy fractions of the incoming partons are comparable and sizable. Theoretical predictions for such a configuration can be safely calculated in the framework of collinear factorization. However, when one of the jets moves in the forward direction, $y_\text{jet} \gg 0$, one of the incoming hadrons is probed at a relatively low momentum fraction $x$, and that leads to the appearance of large logarithms $\ln x$, which have to be resummed. The optimal description of this process is achieved within the hybrid factorization~\cite{Catani:1990eg,Dumitru:2005gt,Marquet:2007vb,Deak:2009xt}, where the matrix elements are evaluated with one of the incoming partons being off-shell. The momentum distribution of that parton obeys the BFKL equation \cite{Balitsky:1978ic,Kuraev:1976ge,Fadin:1975cb,Kovchegov:2012mbw}, which depends not only on the longitudinal part of the momentum, but also on its transverse component. We will from now on refer to these as \emph{transverse momentum dependent} parton distributions (TMDs). In addition, when both jets move forward, the value of~$x$ is even smaller and one starts being sensitive to saturation effects~\cite{Gribov:1984tu, Mueller:1985wy}. The corresponding evolution equation becomes nonlinear~\cite{Balitsky:1995ub, Kovchegov:1999yj,JalilianMarian:1997jx,JalilianMarian:1997gr,JalilianMarian:1997dw}, as the density of gluons at low~$x$ is very high.
While the small-$x$ effects can be taken into account by using one of the phenomenologically successful TMDs, there is another class of effects relevant for forward jet production which should also be accounted for, namely the resummation of Sudakov logarithms. They are important as the hard scale provided by the jet transverse momentum opens the phase space for logarithmically enhanced soft and collinear emissions \cite{Dokshitzer1980,Lipatov:2019nbx,Collins1985,Bury:2017jxo,VanHaevermaet:2020rro}. See also recent Monte Carlo developments where one constructs TMD distributions that account for $k_T$ and Sudakov effects \cite{Hautmann:2017fcj}. As demonstrated in Refs.~\cite{Mueller:2013wwa,Mueller:2012uf, Sun:2015doa, Sun:2014gfa, Mueller:2016xoc}, small-$x$ and Sudakov resummations can be performed simultaneously in $b_\perp$ space and can then be cast into transverse momentum dependent gluon distributions. Such TMDs\ have already been used in phenomenological calculations of di-hadron correlations at the EIC~\cite{Zheng:2014vka} and in proton-nucleus collisions at RHIC \cite{Stasto:2018rci,Marquet:2019ltn}. While in \cite{Zheng:2014vka,Stasto:2018rci} the Golec-Biernat--W\"usthoff model~\cite{GolecBiernat:1998js} was employed to account for small-$x$ effects, in \cite{Marquet:2019ltn} the rcBK equation was used. In the present work, we focus on Sudakov effects in the process of central-forward dijet production in proton-proton collisions. Similarly to previous studies \cite{Deak:2010gk,Deak:2011ga,Kutak:2012rf}, we perform our calculations in the framework of \emph{high energy factorization} (HEF) \cite{Gribov:1984tu, Catani:1990eg, Catani:1994sq, Collins:1991ty}, where the cross section is calculated as a convolution of a hard sub-process~\cite{Lipatov:1995pn,Antonov:2004hh} and nonperturbative parton densities, which take into account longitudinal and transverse degrees of freedom. At low $x$, gluons dominate over quarks, hence we consider only gluon TMDs.
In our earlier study of the central-forward dijet production~\cite{vanHameren:2014ala}, the Sudakov effects were introduced by means of a simplified procedure~\cite{vanHameren:2014ala, Kutak:2014wga}, which nevertheless turned out to be phenomenologically successful. We then used the same approach to study forward-forward dijet production in proton-proton and proton-lead collisions, focusing on the broadening of the dijet azimuthal correlation spectrum \cite{vanHameren:2019ysa}. We found that a delicate interplay between the Sudakov effects and the saturation effects is needed to describe the LHC data. Although our phenomenological Sudakov model works well in that regard, a more systematic calculation of the Sudakov effects is necessary in order to solidify the predictions. One of the difficulties here comes from the proliferation of the small-$x$ TMD gluon distributions needed in the saturation regime \cite{Dominguez:2011wm,Kotko:2015ura}. The Sudakov resummation affects all these distributions in a rather complicated way. The following work is a first step towards a fully general approach and focuses on a single small-$x$ TMD gluon distribution, which appears in inclusive processes and in situations where saturation effects are mild. In this work, we use the proper Sudakov factors derived within perturbative QCD~\cite{Mueller:2013wwa, Mueller:2012uf, Sun:2014gfa, Sun:2015doa, Mueller:2016xoc} and profit from the recent progress on consistent merging of Sudakov resummation with small-$x$ effects~\cite{Stasto:2018rci}. These new elements allow us to significantly elevate the theoretical status of our predictions for the process of interest. We shall then compare the upgraded results to the ones obtained with simplistic models of including the Sudakov effects in the small-$x$ gluon, as well as to available experimental data.
As in Ref.~\cite{vanHameren:2014ala}, the modeling of the small-$x$ effects in this work comes from the BK equation supplemented with a kinematic constraint and subleading corrections~\cite{Kutak:2012rf}. For the central-forward configuration of the final-state jets, one of the longitudinal fractions of the hadron momenta is much smaller than the other, $x_{B}\ll x_{A}$. This follows from simple kinematic relations \begin{equation} x_A = \frac{1}{\sqrt{s}}\left(|p_{1\perp}| e^{y_1} + |p_{2\perp}| e^{y_2}\right)\,, \qquad x_B = \frac{1}{\sqrt{s}}\left(|p_{1\perp}| e^{-y_1} + |p_{2\perp}| e^{-y_2}\right)\,, \label{eq:xAxB} \end{equation} where $\sqrt{s}$ is the center-of-mass energy of the proton-proton collision, while $p_{i\perp}$ and $y_{i}$ are the transverse momenta (Euclidean two-vectors) and rapidities of the produced jets. The formula for the \emph{hybrid} high energy factorization reads~\cite{Dumitru:2005gt,Deak:2009xt} \begin{align} d\sigma_{A+B\rightarrow j_1 + j_2 + X} & = \int dx_{A}\, \int\frac{dx_{B}}{x_{B}}\, \int \frac{d^{2}k_{B\perp}}{\pi} \nonumber \\[0.5em] & \hspace{10pt} \times \sum_{a, c, d} f_{a/A}\left(x_{A},\mu \right)\, \mathcal{F}_{g^{*}/B}\left(x_{B},k_{B\perp},\mu \right)\, d\hat{\sigma}_{a+g^{*}\rightarrow c+d} \left(x_{A},x_{B},k_{B\perp},\mu \right), \label{eq:hef} \end{align} where $\mathcal{F}_{g^{*}/B}$ is the so-called \emph{unintegrated gluon density} or \emph{transverse momentum dependent gluon distribution} (see \cite{Kharzeev:2003wz,Dominguez:2011wm,Kotko:2015ura,Bury:2018kvg} for more details on different gluon distributions), $f_{a/A}$ are the collinear PDFs and $d\hat{\sigma}_{a+g^{*}\rightarrow c+d}$ is built out of the off-shell gauge-invariant matrix elements. The indices $a, c, d$ run over the gluon and all the quarks that can contribute to the inclusive dijet production.
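For orientation, the hierarchy $x_B \ll x_A$ implied by Eq.~(\ref{eq:xAxB}) can be checked with a short numerical sketch; the jet momenta, rapidities, and $\sqrt{s}$ below are illustrative values chosen by us, not taken from the analysis in this paper:

```python
import math

def momentum_fractions(pt1, y1, pt2, y2, sqrt_s):
    """Longitudinal momentum fractions x_A, x_B probed by a dijet system:
    x_{A,B} = (|p1T| e^{+-y1} + |p2T| e^{+-y2}) / sqrt(s)."""
    x_a = (pt1 * math.exp(y1) + pt2 * math.exp(y2)) / sqrt_s
    x_b = (pt1 * math.exp(-y1) + pt2 * math.exp(-y2)) / sqrt_s
    return x_a, x_b

# Central-forward configuration: one jet at y = 0, one at y = 4,
# both with pT = 35 GeV, at sqrt(s) = 7 TeV (illustrative values).
x_a, x_b = momentum_fractions(35.0, 0.0, 35.0, 4.0, 7000.0)
print(f"x_A = {x_a:.4f}, x_B = {x_b:.5f}")  # x_B << x_A
```

With these numbers one finds $x_A \approx 0.28$ while $x_B \approx 5\times 10^{-3}$, which is the regime where the hybrid formula~(\ref{eq:hef}) with a small-$x$ gluon on the $B$ side is appropriate.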
Notice that both $f_{a/A}$ and $\mathcal{F}_{g^{*}/B}$ depend on the hard scale~$\mu$, and the latter depends also on the transverse momentum of the incoming gluon, whose value is linked to the final-state kinematics by the relation \begin{equation} |k_\perp|^2 = |p_{1\perp}+p_{2\perp}|^2 = |p_{1\perp}|^2+|p_{2\perp}|^2+ 2 |p_{1\perp}| |p_{2\perp}| \cos\Delta\phi\,, \end{equation} where $\Delta\phi$ is the azimuthal distance between the jets. The hard scale dependence in the TMD\ is necessary to properly account for large Sudakov logarithms that appear predominantly in the back-to-back region, where $k_{\perp}$ is small, but $\mu$ remains large for relatively hard jets. As shown in Ref.~\cite{vanHameren:2014ala}, incorporating the hard scale dependence in the TMD\ is essential to successfully describe the shapes of dijet spectra. It is important to mention that, as discussed in Ref.~\cite{Kotko:2015ura}, the high energy factorization formula~(\ref{eq:hef}) is valid only when $Q_s \ll |k_\perp| \ll |p_{1\perp}|, |p_{2\perp}|$, which corresponds to collisions of relatively dilute hadrons. The process of central-forward dijet production in $p-p$ collisions, which is the focus of our study, corresponds exactly to that situation. For processes which involve dense targets, like for example forward-forward dijet production in $p-A$ collisions, Eq.~(\ref{eq:hef}) has to be replaced by a more general factorization formula with multiple transverse momentum dependent gluon distributions~\cite{Dominguez:2011wm, Kotko:2015ura,vanHameren:2016ftb,Altinoluk:2019fui}. \section{Dipole gluon with Sudakov form factor} \label{sec:sudakov} The Sudakov effects are most conveniently included in position space.
The resulting gluon\ TMD, which incorporates both small-$x$ and soft-collinear resummation, can then be transformed to momentum space as follows~\cite{Stasto:2018rci} \begin{equation} \mathcal{F}_{g^*/B}^{ag\to cd}(x, q_\perp, \mu) = \frac{-N_c S_\perp}{2\pi \alpha_s} \int_0^{\infty} \frac{b_\perp db_\perp}{2\pi} J_0(q_\perp b_\perp)\, e^{-S_{\textrm{Sud}}^{ag\to cd}(\mu, b_\perp)}\, \nabla_{b_\perp}^2 S(x,b_\perp)\,, \label{eq:Fqgpos} \end{equation} where $S_{\perp}$ is the transverse area of the target and $S(x,b_{\perp})$ is the so-called dipole scattering amplitude, which in the Color Glass Condensate (CGC) theory (see {\it e.g.} \cite{Gelis:2010nm}) is related to the color average of the dipole operator, {\it i.e.} two infinite Wilson lines displaced in the transverse plane. (Notice the difference in the prefactor w.r.t.\ Ref.~\cite{Stasto:2018rci}, which comes from the fact that $\mathcal{F}_{^{g^*/B}} = \pi {\cal F}_{^{qg}}^{_{(a)}}$.) The Sudakov factors come from the resummation of soft-collinear gluon radiation and they depend on the partonic channel. Hence, the gluon with the Sudakov acquires this dependence and, consequently, a single dipole gluon is replaced with a set of gluons $\{\mathcal{F}_{^{g^*/B}}^{ab\to cd}\}$. In practice, the two channels that dominate in the central-forward production are: $qg \to qg$ and $gg \to gg$. Hence, we will need to determine two gluon\ TMDs:\ $\mathcal{F}_{^{g^*/B}}^{qg\to qg}$ and $\mathcal{F}_{^{g^*/B}}^{gg\to gg}$. It is appropriate to mention that in our study we resort to the so-called mean-field approximation (known to work very well, see {\it e.g.} \cite{Dumitru:2011vk}), which allows one to calculate quadrupole operators in terms of the dipole operators alone and thus to use the BK equation for evolution of the gluon density.
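To make the structure of Eq.~(\ref{eq:Fqgpos}) concrete, the sketch below evaluates the $b_\perp$ integral numerically for a toy Gaussian (GBW-like) dipole amplitude $S(x,b_\perp)=\exp(-b_\perp^2 Q_s^2/4)$, with the Sudakov factor switched off and the overall prefactor $N_c S_\perp/(2\pi\alpha_s)$ dropped. The dipole model and the simple quadratures are our illustrative assumptions, not the distributions used in this paper; for this toy input the transform is known in closed form, which provides a cross-check:

```python
import math

def j0(x, n=200):
    """Bessel function J0 from its integral representation,
    J0(x) = (1/pi) * int_0^pi cos(x sin t) dt (composite Simpson rule)."""
    h = math.pi / n
    s = math.cos(x * math.sin(0.0)) + math.cos(x * math.sin(math.pi))
    for i in range(1, n):
        s += (4 if i % 2 else 2) * math.cos(x * math.sin(i * h))
    return s * h / (3.0 * math.pi)

def gluon_tmd_toy(q, qs, nb=2000):
    """F(q) ~ -int_0^inf (b db / 2pi) J0(q b) Laplacian_b S(b), with
    S(b) = exp(-b^2 qs^2/4); Sudakov and overall prefactors dropped.
    The 2D Laplacian of S is (qs^4 b^2/4 - qs^2) exp(-b^2 qs^2/4)."""
    bmax = 20.0 / qs                    # Gaussian has decayed to ~e^-100 here
    h = bmax / nb
    total = 0.0
    for i in range(1, nb + 1):          # trapezoid; integrand vanishes at b=0
        b = i * h
        lap = (qs**4 * b**2 / 4.0 - qs**2) * math.exp(-b**2 * qs**2 / 4.0)
        f = -b * j0(q * b) * lap / (2.0 * math.pi)
        total += f * (0.5 if i == nb else 1.0)
    return total * h

# For this toy dipole the closed form is F(q) = q^2/(pi qs^2) * exp(-q^2/qs^2),
# i.e. the familiar GBW-like shape vanishing at q = 0.
qs = 1.0
for q in (0.5, 1.0, 2.0):
    num = gluon_tmd_toy(q, qs)
    exact = q**2 / (math.pi * qs**2) * math.exp(-q**2 / qs**2)
    print(f"q = {q}: numeric = {num:.5f}, analytic = {exact:.5f}")
```

The realistic case differs in that $S(x,b_\perp)$ comes from the BK evolution and the exponential Sudakov suppression $e^{-S_{\rm Sud}}$ reweights the large-$b_\perp$ region, but the numerical Hankel-transform structure is the same.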
By taking the Fourier transform of Eq.~(\ref{eq:Fqgpos}), we can express the gluon with Sudakov resummation in terms of the gluon without the Sudakov, all in momentum space \begin{equation} {\cal F}_{g^*/B}^{ab\to cd}(x,k_\perp,\mu) = \int d b_\perp \int dk^{\prime}_\perp \, b_\perp\, k^\prime_\perp\, J_0(b_\perp\,k^\prime_\perp)\, J_0(b_\perp \,k_\perp)\, {\cal F}_{g^*/B}(x,k^\prime_\perp)\, e^{-S_\text{Sud}^{ab\to cd}(\mu,b_\perp)}\,. \label{eq:gluon-dff} \end{equation} For each channel, the Sudakov factors can be written as \begin{equation} S_{\text{Sud}}^{ab\to cd} (b_\perp) =\sum_{i=a,b,c,d} S_p^{i} (b_\perp) + \sum_{i=a, c, d}S^{i}_{np} (b_\perp), \end{equation} where $S_p^{i} (b_\perp) $ and $S_{np}^{i} (b_\perp)$ are the perturbative and non-perturbative contributions. As argued in~Ref.~\cite{Stasto:2018rci}, since the small-$x$ gluon\ TMD\ for parton $b$ may already contain some non-perturbative information at low~$x$, the non-perturbative Sudakov factor associated with that incoming gluon~$b$ should not be included. In addition, according to the derivation in Ref.~\cite{Mueller:2013wwa}, the single logarithmic term in the perturbative part of the Sudakov factor -- the so-called $B$-term -- should also be absent for the incoming small-$x$ gluon. The perturbative Sudakov factors are given by~\cite{Stasto:2018rci} \begin{eqnarray} S_p^{qg\to qg} (Q, b_\perp) & = & \int_{\mu_b^2}^{Q^2} \frac{d\mu^2}{\mu^2} \left[ 2 (C_F + C_A) \frac{\alpha_s}{2\pi} \ln \left( \frac{Q^2}{\mu^2} \right) - \left(\frac{3}{2}C_F + C_A \beta_0 \right) \frac{\alpha_s}{\pi} \right], \label{eq:sudpertqg} \\[1em] S_p^{gg\to gg} (Q, b_\perp) & = & \int_{\mu_b^2}^{Q^2} \frac{d\mu^2}{\mu^2} \left[ 4 C_A \frac{\alpha_s}{2\pi} \ln \left( \frac{Q^2}{\mu^2} \right) - 3 C_A \beta_0 \frac{\alpha_s}{\pi} \right]\,, \label{eq:sudpertgg} \end{eqnarray} where $\beta_0 = (11-2n_f/3)/12$, $\mu_b=2e^{-\gamma_E}/b_*$, and $b_* = b_\perp/\sqrt{1+b_\perp^2/b_{\rm max}^2}$.
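As an illustration of Eqs.~(\ref{eq:sudpertqg}) and (\ref{eq:sudpertgg}), the sketch below evaluates the perturbative Sudakov factors for a fixed coupling $\alpha_s$. Freezing the coupling is our simplifying assumption (the factors above involve the running $\alpha_s(\mu)$); it makes the $\mu^2$ integral available in closed form, $S_p = \frac{\alpha_s}{2\pi} A L^2/2 - \frac{\alpha_s}{\pi} B L$ with $L=\ln(Q^2/\mu_b^2)$, which the code uses as a cross-check. The numerical values of $Q$, $b_\perp$, $b_{\rm max}$ and $\alpha_s$ are illustrative:

```python
import math

GAMMA_E = 0.5772156649015329          # Euler-Mascheroni constant
CF, CA = 4.0 / 3.0, 3.0

def beta0(nf):
    return (11.0 - 2.0 * nf / 3.0) / 12.0

def mu_b(b, b_max=1.5):
    """b* prescription: mu_b = 2 e^{-gamma_E}/b*, b* = b/sqrt(1 + b^2/b_max^2)."""
    b_star = b / math.sqrt(1.0 + b**2 / b_max**2)
    return 2.0 * math.exp(-GAMMA_E) / b_star

def sudakov_pert(Q, b, channel, alpha_s=0.2, nf=5, n=2000):
    """Fixed-coupling perturbative Sudakov factor,
    S_p = int_{mu_b^2}^{Q^2} dmu^2/mu^2 [A alpha_s/(2pi) ln(Q^2/mu^2) - B alpha_s/pi],
    with (A, B) = (2(CF+CA), 3CF/2 + CA beta0) for qg->qg
    and  (A, B) = (4CA, 3 CA beta0) for gg->gg."""
    if channel == "qg":
        A, B = 2.0 * (CF + CA), 1.5 * CF + CA * beta0(nf)
    elif channel == "gg":
        A, B = 4.0 * CA, 3.0 * CA * beta0(nf)
    else:
        raise ValueError(channel)
    t_lo, t_hi = math.log(mu_b(b)**2), math.log(Q**2)
    if t_lo >= t_hi:                   # no phase space for soft radiation
        return 0.0
    h = (t_hi - t_lo) / n              # trapezoid in t = ln(mu^2)
    total = 0.0
    for i in range(n + 1):
        t = t_lo + i * h
        f = A * alpha_s / (2.0 * math.pi) * (t_hi - t) - B * alpha_s / math.pi
        total += f * (0.5 if i in (0, n) else 1.0)
    return total * h

# Cross-check against the closed form with L = ln(Q^2/mu_b^2):
Q, b = 35.0, 1.0
L = math.log(Q**2 / mu_b(b)**2)
exact_qg = 0.2 / math.pi * ((CF + CA) * L**2 / 2.0 - (1.5 * CF + CA * beta0(5)) * L)
print(f"S_p(qg->qg): numeric = {sudakov_pert(Q, b, 'qg'):.4f}, analytic = {exact_qg:.4f}")
```

The larger color factor in the $gg\to gg$ channel makes $S_p^{gg\to gg} > S_p^{qg\to qg}$ at the same $(Q, b_\perp)$, consistent with the stronger Sudakov suppression of the $gg$ gluon discussed later in the text.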
The $g g \to q \bar q$ channel is negligible for the kinematics of this study. Following Ref.~\cite{Stasto:2018rci}, for the non-perturbative Sudakov factor, we employ the parameterization~\cite{Su:2014wpa,Prokudin:2015ysa} \begin{eqnarray} \label{eq:sudnpqg} S_{np}^{qg\to qg} (Q, b_\perp) & = & \left( 2 + \frac{C_A}{C_F} \right) \frac{g_1}{2} b_\perp^2 + \left(2 + \frac{C_A}{C_F} \right) \frac{g_2}{2} \ln \frac{Q}{Q_0} \ln \frac{b_\perp}{b_*}\,, \\ \label{eq:sudnpgg} S_{np}^{gg\to gg} (Q, b_\perp) & = & \frac{3C_A}{C_F} \frac{g_1}{2} b_\perp^2 + \frac{3C_A}{C_F} \frac{g_2}{2} \ln \frac{Q}{Q_0} \ln \frac{b_\perp}{b_*}\,, \end{eqnarray} with $g_1 = 0.212$, $g_2 = 0.84$, and $Q_0^2 = 2.4\,{\rm GeV}^2$. \begin{figure}[p] \begin{center} \includegraphics[width=0.99\textwidth]{plots/paper-gluon-map.png} \end{center} \caption{ The KS gluon distribution, Eq.~(\ref{eq:gluon-dff}), without and with the Sudakov form factors. The second row corresponds to the simple Sudakov model given in Eq.~(\ref{eq:survival}), while the third and the fourth rows show results obtained with the Sudakov factors derived from QCD and given in Eqs.~(\ref{eq:sudpertqg}), (\ref{eq:sudnpqg}), and (\ref{eq:sudpertgg}), (\ref{eq:sudnpgg}), respectively.
} \label{fig:gluon-map} \end{figure} As a basis for all calculations presented in this study, we use the nonlinear KS (Kutak-Sapeta) gluon\ TMD~\cite{Kutak:2012rf}, which, for $k_\perp^2>1\, \text{GeV}^2$, comes from evolving the input distribution \begin{equation} \mathcal{F}_{{g^*/B}}^{(0)}(x,k^2_\perp)= \frac{\alpha_S(k^2_\perp)}{2\pi k^2_\perp}\int_x^1 dzP_{gg}(z)\frac{x}{z}g\left(\frac{x}{z}\right)\,, \quad \text{where} \quad xg(x)=N(1-x)^{\beta} (1-D x)\,, \label{eq:initial-cond} \end{equation} with the extension of the BK (Balitsky-Kovchegov) equation \cite{Kutak:2003bd}, following the prescription of Ref.~\cite{Kwiecinski:1997ee} to include the kinematic constraint on the gluons in the chain, the non-singular pieces of the splitting functions, as well as contributions from sea quarks. For $k_\perp^2 \leq 1\, \text{GeV}^2$, the gluon distribution is taken as $\mathcal{F}_{^{g^*/B}} (x,k^2_\perp) = k^2_\perp \mathcal{F}_{^{g^*/B}}(x,1)$, which is motivated by the shape obtained from the solution of the LO BK equation in the saturation regime \cite{Sergey:2008wk}. The parameters of the gluon were set by a fit to the $F_2$ data from HERA~\cite{Aaron:2009aa}, which returned the values: $N=0.994$, $\beta= 18.6$, $D=-82.1$ and $R=2.40\, \text{GeV}^{-1}$. The first three parameters correspond to the initial condition given in Eq.~(\ref{eq:initial-cond}), while the last parameter is responsible for the strength of nonlinear effects in the evolution equation. The overall quality of the fit was good, with $\chi^2/\text{ndof} = 1.73$. We emphasise that the gluon constrained by the above fit can be used in our study without any modifications. This comes from the fact that it corresponds to the small-$x$ kinematic regime and it is universal amongst DIS and central-forward jet production processes~\cite{Kotko:2015ura}, where saturation effects are moderate. The same is true for the Sudakov factors used in our study.
The perturbative part is parameter-free, while the non-perturbative terms are universal in the kinematic domain of our study~\cite{Su:2014wpa}. Moreover, due to the high transverse momenta of the final-state jets, non-perturbative effects in the Sudakov are less important than in the case of hadron production. We introduce the Sudakov effects into the KS gluon distribution following the formalism described above. In addition, for reference, we use two methods employed in our earlier studies~\cite{vanHameren:2014ala, Kutak:2014wga}. Those calculations used the Sudakov form factor, understood as the DGLAP evolution kernel, which was applied on top of the gluon\ TMD, together with constraints such as unitarity. Those methods should therefore be considered as models, in contrast to the proper resummation of Sudakov logarithms considered in this work. Nevertheless, the approaches used in Refs.~\cite{vanHameren:2014ala, Kutak:2014wga} were phenomenologically successful (see also~\cite{vanHameren:2019ysa}), and one of the objectives of this study is to check how the predictions of those simplistic models compare with the proper way of including the Sudakov effects into the small-$x$ gluon. The reference models are: \begin{itemize} \item Model 1: The survival probability model~\cite{vanHameren:2014ala}, where the Sudakov factor of the form \cite{Watt:2003mx} \begin{equation} T_s(\mu_{F}^2,k_{\perp}^2)= \exp\left(-\int_{k_{\perp}^2}^{\mu_F^2}\frac{dk_{\perp}^{\prime 2}}{k_{\perp}^{\prime 2}}\frac{\alpha_s(k^{\prime 2}_{\perp})} {2\pi}\sum_{a^\prime}\int_0^{1-\Delta}dz^{\prime}P_{a^\prime a}(z^\prime)\right)\,, \label{eq:survival} \end{equation} is imposed at the level of the cross section. This procedure corresponds to performing a DGLAP-type evolution from the scale $\mu_0\sim |\vec{k}_{\perp}|$ to $\mu$, decoupled from the small-$x$ evolution. \item Model 2: The model with a hard scale introduced in Ref.~\cite{Kutak:2014wga}.
The Sudakov form factor of the same form as in Eq.~(\ref{eq:survival}) is imposed on top of the KS gluon distribution in such a way that, after integration of the resulting hard scale dependent gluon\ TMD, one obtains the same result as by integrating the KS gluon distribution. \end{itemize} In Fig.~\ref{fig:gluon-map} we show the KS gluon distributions, with and without Sudakov form factors, as functions of the transverse momentum $k_\perp$ and the hard scale $\mu$. The three columns correspond to three different $x$ values. The first row shows the original KS gluon distribution, which, as expected, does not depend on the value of $\mu$. In the second row, we show the KS hard-scale gluon distribution of Ref.~\cite{Kutak:2014wga} (the other model~\cite{vanHameren:2014ala} does not allow one to plot a gluon distribution, as it applies Sudakov effects at the cross section level via a reweighting procedure). Here, the dependence on $\mu$ is non-trivial and we see that the gluon develops a maximum in that variable. As shown in the figure, this maximum is rather broad. In the third and the fourth row of Fig.~\ref{fig:gluon-map}, we present our new KS gluon distribution with the Sudakov form factor described in this section. As explained earlier, this gluon exists in two versions, one for the $qg$ and the other for the $gg$ channel. The dependence on $k_\perp$ and $\mu$ is qualitatively similar between the new gluons and the naive KS hard-scale gluon distribution. In the former case, however, the peak is significantly narrower in $\mu$ as compared to the naive model of Ref.~\cite{Kutak:2014wga}. It is interesting to note that the $qg$ gluon is broader than the $gg$ gluon. This can be understood by comparing the colour factors in the Sudakov functions~(\ref{eq:sudpertqg}) and (\ref{eq:sudpertgg}). The colour factor is bigger for the $gg$ channel, hence, in that case, the Sudakov suppression is stronger along the $\mu$ direction.
We have also computed linear versions of the KS gluon distributions with the Sudakov, using the KS linear gluon distribution of Ref.~\cite{Kutak:2012rf}, and used them to calculate the differential distributions discussed in the following section. We observed that both sets of gluons (linear and nonlinear) give comparable results for the phenomenological observables. This is consistent with the expectation that saturation plays a limited role in central-forward dijet production in $p-p$ collisions. Therefore, given that the nonlinear KS gluon distribution comes from a better fit to $F_2$ than its linearized version~\cite{Kutak:2012rf}, in the following, we present only the results obtained with the nonlinear gluon density. The new gluons presented in this section are available publicly from the recent version of the {\tt KS package} and can be downloaded from \url{http://nz42.ifj.edu.pl/~sapeta/KSgluon-2.0.tar.gz}. \section{Differential distributions} \begin{figure}[t] \begin{center} \includegraphics[width=0.49\textwidth]{plots/fc2j_x.pdf} \end{center} \caption{ Distributions of the longitudinal momentum fractions, $x_A$, $x_B$, defined in Eq.~(\ref{eq:xAxB}), from calculations with various versions of the KS gluon distribution discussed in the article. } \label{fig:xdist} \end{figure} We now turn to the discussion of differential distributions in jets' transverse momenta calculated in the framework described in the preceding sections. We calculated the cross sections using the selection criteria of CMS~\cite{Chatrchyan:2012gwa}. The two leading jets were required to satisfy the cuts $p_{1\perp}, p_{2\perp} > 35\,$ GeV and $|y_1| < 2.8$, $3.2<|y_2|<4.7$. We used the CT18 NLO PDF set~\cite{Hou:2019efy} and LHAPDF \cite{Buckley:2014ana} for the collinear PDFs and the KS gluon distributions with and without Sudakov for the gluon\ TMDs.
Our calculations have been performed and cross checked using two independent Monte Carlo programs \cite{vanHameren:2016kkz,Kotko_LxJet} implementing the high energy factorization together with the off-shell matrix element calculated following the methods of Refs.~\cite{vanHameren:2012uj,vanHameren:2012if,Kotko:2014aba}. We used the average transverse momentum of jets as both the renormalization and factorization hard scales. \begin{figure}[t] \begin{center} \includegraphics[width=0.49\textwidth]{plots/fc2j_pTc.pdf} \includegraphics[width=0.49\textwidth]{plots/fc2j_pTf.pdf} \end{center} \caption{ The transverse momentum spectra of the central (left) and the forward (right) jets obtained with the KS gluon distribution, with and without Sudakov effects, computed for the central value of the factorization and renormalization scale, compared to CMS data~\cite{Chatrchyan:2012gwa}. } \label{fig:ptdist1} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.49\textwidth]{plots/fc2j_data_pTc.pdf} \includegraphics[width=0.49\textwidth]{plots/fc2j_data_pTf.pdf} \end{center} \caption{ As Fig.~\ref{fig:ptdist1} but we only show predictions obtained with the original KS gluon distribution and the predictions with the KS gluon distribution with Sudakov from this work. The bands correspond to varying the renormalization and factorization scale by factors $2^{\pm 1}$. } \label{fig:ptdist2} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.49\textwidth]{plots/fc2j_dphi.pdf} \includegraphics[width=0.49\textwidth]{plots/fc2j_rap.pdf} \end{center} \caption{ Differential cross sections as functions of the azimuthal distance between the jets $\Delta\phi$~(left) and jet rapidities (right) obtained with the KS gluon distribution with and without Sudakov effects. } \label{fig:dphi} \end{figure} We start by showing in Fig.~\ref{fig:xdist} distributions of the longitudinal momentum fractions probed by the central-forward dijet configurations. 
These results are consistent with the discussion of Section~\ref{sec:introduction}, in particular Eq.~(\ref{eq:xAxB}), and provide justification for the use of the hybrid factorization formula~(\ref{eq:hef}). In Fig.~\ref{fig:ptdist1} we show differential cross sections as functions of the transverse momenta of the forward and central jets. We compare central values of various predictions, which differ by the gluon\ TMDs\ used in the HEF formula~(\ref{eq:hef}). The black dotted histograms correspond to the gluon without Sudakov, while the other three histograms use gluons with some form of Sudakov resummation. The main result of this paper is shown as a blue solid line, while the green and the red dashed curves correspond to the naive Sudakov modelling of Refs.~\cite{vanHameren:2014ala, Kutak:2014wga}. We see that the predictions from this work, with the Sudakov effects included, tend to describe the data better than the predictions without the Sudakov, especially in the region of small~$p_{\perp}$. However, the overall effect of the Sudakov form factor is not very strong for this particular observable. In Fig.~\ref{fig:ptdist2}, we show the same distributions of transverse momenta, but, here, we plot only two models (without Sudakov and with the Sudakov from Section~\ref{sec:sudakov}). This time, we show also the theoretical errors, estimated by the usual renormalization and factorization scale variation by the factors $2^{\pm 1}$. We observe good agreement of our predictions with the CMS data~\cite{Chatrchyan:2012gwa}, except for the tail of the central-jet transverse momentum distribution. One has to remember, however, that, following Eq.~(\ref{eq:xAxB}), the tails of $p_{\perp}$ distributions are sensitive to the region of large $x$, where, in principle, the gluon\ TMDs\ are not valid. Indeed, we have seen in our calculation that the KS gluon distribution with the Sudakov can sometimes become negative for larger $x$ values.
We interpret that as a sign of going outside the validity region of the gluon distribution and, hence, in such situations, we set it to zero in the cross section calculation. In Fig.~\ref{fig:dphi} (left) we compare predictions for the distribution of the azimuthal angle between the two leading jets (also known as azimuthal decorrelations). Again, we show results corresponding to calculations with and without the Sudakov. We observe that the inclusion of Sudakov effects leads to qualitatively the same modification of the~$\Delta\phi$ distribution in all cases. Namely, the region of large~$\Delta\phi$ is depopulated w.r.t.\ the result without Sudakov, while the opposite happens in the region of smaller~$\Delta\phi$. While the predictions from the KS gluon distribution with the Sudakov of this work look qualitatively similar to the earlier Sudakov models, quantitatively those cross sections differ to a certain degree, as seen in Fig.~\ref{fig:dphi}. In particular, models 1 and 2 lead to convex curves for the azimuthal decorrelations, while the Sudakov of this study produces a concave curve. We would also like to mention that the predictions using model 1 were shown to successfully reproduce the shapes of preliminary CMS data for the azimuthal decorrelations~\cite{vanHameren:2014ala}. Since, as of today, these data are not published, we refrain from comparing them with the predictions of this work. We only note that, based on the comparison shown in Fig.~\ref{fig:dphi}, we expect the predictions from this study to be largely compatible with the earlier naive models, within theoretical errors. Finally, in Fig.~\ref{fig:dphi} (right) we show rapidity distributions resulting from the various versions of the KS gluon distribution, for the central and the forward jet. We see marked differences between the predictions with and without the Sudakov.
Interestingly, inclusion of the Sudakov from this work suppresses both the central and the forward jet distributions, which is largely consistent with the naive model 1. However, model 2 shows an enhancement (central jet) or almost no effect (forward jet) in the rapidity differential cross sections. The results presented in this section took advantage of the recent developments in the merging of the small-$x$ dynamics and the resummation of the Sudakov logarithms. Such calculations were not available at the time of our previous study~\cite{vanHameren:2014ala}, so we had to resort to simple models of the Sudakov resummation. The calculations presented in this work are much better founded theoretically. The new results show similar (or better) quality in the description of the transverse momentum spectra as compared to the methods of Refs.~\cite{vanHameren:2014ala,Kutak:2014wga}. We also obtain predictions for azimuthal decorrelations, which are much more sensitive to the Sudakov resummation procedure. In particular, we see that the present approach gives a somewhat stronger suppression of the correlation peak. Even though the predictions from this study are close to those from our earlier calculation, their theoretical status is much higher since, in this work, we used the proper Sudakov factor derived from first principles in QCD; this was the main motivation behind the study presented in this paper. We believe that our upgraded theoretical setup is useful in particular for observables like the $\Delta\phi$ distribution, where the Sudakov effects are strong, as shown in Fig.~\ref{fig:dphi}. The central-forward dijet production process provided an excellent testing ground for the validation of our framework. The latter can now be used to study small-$x$ dynamics, in particular the saturation effects, which are more pronounced in the production of the forward-forward dijet system, especially in proton-lead collisions.
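The $2^{\pm 1}$ scale-variation procedure used for the uncertainty bands can be sketched in a few lines. This is a minimal illustration only: \texttt{toy\_xsec} is a hypothetical stand-in with an ad hoc logarithmic scale dependence, not the actual HEF cross section.

```python
import math

def toy_xsec(pt, mu_r, mu_f):
    # Hypothetical stand-in for d(sigma)/d(pt) at renormalization scale
    # mu_r and factorization scale mu_f (NOT the actual HEF result).
    return pt ** -4 * (1.0 + 0.10 * math.log(mu_r / pt)
                           + 0.05 * math.log(mu_f / pt))

def scale_band(pt, mu_central):
    # Envelope over the 2^{+-1} variations of both scales around the
    # central choice (here the full 3x3 grid of variations).
    values = [toy_xsec(pt, f_r * mu_central, f_f * mu_central)
              for f_r in (0.5, 1.0, 2.0) for f_f in (0.5, 1.0, 2.0)]
    return min(values), toy_xsec(pt, mu_central, mu_central), max(values)

lo, central, hi = scale_band(pt=50.0, mu_central=50.0)
```

In practice, analyses often drop the extreme combinations $(1/2,\,2)$ and $(2,\,1/2)$ (the so-called 7-point prescription); the envelope logic is the same.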
\section{Summary} We discussed Sudakov effects in central-forward dijet production at LHC energies within the framework of high energy factorization. Our study was triggered by recent progress on the consistent merging of Sudakov resummation with the small-$x$ effects, which allowed us to compute hard-scale dependent gluon\ TMDs. As explained in Section~\ref{sec:sudakov}, we were able to combine the phenomenologically successful KS gluon distribution~\cite{Kutak:2012rf} with the Sudakov factors directly in momentum space. In our study, we used the Sudakov factors derived within perturbative QCD in Refs.~\cite{Mueller:2013wwa, Mueller:2012uf, Sun:2014gfa, Sun:2015doa, Mueller:2016xoc}. For comparison, we also used simpler Sudakov models employed in our earlier studies~\cite{vanHameren:2014ala, Kutak:2014wga}. We have calculated theoretical predictions for the differential cross sections as functions of~$p_{\perp}$ of the central and the forward jet, as well as the azimuthal distance between the jets. The results are largely consistent with our earlier predictions based on simple phenomenological Sudakov models. We also achieved a good description of the CMS data for the $p_{\perp}$ distributions. Finally, we presented predictions for dijet azimuthal decorrelations. It is worth emphasising that our framework is relatively simple and all the parametrizations of non-perturbative physics were taken from external analyses, as explained in Section~\ref{sec:sudakov}. Hence, no additional parameters were introduced in the calculation of the results presented in this work. Overall, we conclude that the Sudakov resummation has a moderate effect on the $p_{\perp}$ spectra and a fairly sizable effect on the shapes of the decorrelations. This is consistent with earlier phenomenological studies~\cite{vanHameren:2014ala, vanHameren:2019ysa}, which showed a preference for gluons with Sudakov effects included.
Our future work will concern developing a full set of TMD\ gluon distributions exhibiting saturation effects and the Sudakov resummation, following the same perturbative calculations we used in the present paper. Such TMDs\ are necessary to confirm our previous calculations for forward-forward dijets~\cite{vanHameren:2019ysa}, which showed an interplay of saturation and Sudakov effects consistent with the ATLAS data, but relied on the more naive Sudakov model. Furthermore, we plan to address dijet production in DIS, where a good understanding of the interplay of Sudakov effects and saturation is needed in order to provide robust predictions for the EIC~\cite{Accardi:2012qut} jet observables. We expect that, by starting with central rapidities and going to more forward rapidities, one will be able to see the increasing importance of saturation effects and disentangle them from Sudakov effects. \section*{Acknowledgements} We would like to thank Bo-Wen Xiao for useful explanations of some details of Ref.~\cite{Stasto:2018rci}. Piotr Kotko is partially supported by the Polish National Science Centre grant no.\ 2018/31/D/ST2/02731. Andreas van Hameren is partially supported by the Polish National Science Centre grant no.\ 2019/35/B/ST2/03531. Sebastian Sapeta is partially supported by the Polish National Science Centre grant no.\ 2017/27/B/ST2/02004. This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No.\ 824093. \bibliographystyle{unsrt}
\section{\@startsection {section}{1}{\z@} {-3.5ex \@plus -1ex \@minus -.2ex} {2.3ex \@plus.2ex} {\boldmath\normalfont\Large\bfseries}} \renewcommand\subsection{\@startsection{subsection}{2}{\z@} {-3.25ex\@plus -1ex \@minus -.2ex} {1.5ex \@plus .2ex} {\boldmath\normalfont\large\bfseries}} \renewcommand\subsubsection{\@startsection{subsubsection}{3}{\z@} {-3.25ex\@plus -1ex \@minus -.2ex} {1.5ex \@plus .2ex} {\boldmath\normalfont\normalsize\bfseries}} \renewcommand\paragraph{\@startsection{paragraph}{4}{\z@} {3.25ex \@plus1ex \@minus.2ex} {-1em} {\boldmath\normalfont\normalsize\bfseries}} \renewcommand\subparagraph{\@startsection{subparagraph}{5}{\parindent} {3.25ex \@plus1ex \@minus .2ex} {-1em} {\boldmath\normalfont\normalsize\bfseries}} \makeatother \title{\vspace{-2.5cm}The geometrically nonlinear Cosserat micropolar shear-stretch energy. \mbox{Part II:} Non-classical energy-minimizing microrotations in 3D and their computational validation} \author{Andreas Fischle \!\!\thanks{Corresponding author: Andreas Fischle, Institut f\"ur Numerische Mathematik, TU Dresden, Zellescher Weg 12-14, 01069 Dresden, Germany, email: andreas.fischle@tu-dresden.de} \quad and \quad Patrizio Neff \!\!\thanks{Patrizio Neff, Head of Lehrstuhl f\"{u}r Nichtlineare Analysis und Modellierung, Fakult\"{a}t f\"{u}r Mathematik, Universit\"{a}t Duisburg-Essen, Thea-Leymann Str. 9, 45127 Essen, Germany, email: patrizio.neff@uni-due.de}} \begin{document} \selectfont \maketitle \pagenumbering{arabic} \vspace{-0.75cm} \begin{center} \textit{Dedicated to \textsc{Gianfranco Capriz} on the occasion of his 90th birthday:\\ -- we bow in great admiration to his lifetime achievement! 
--} \end{center} \begin{center}\textbf{Abstract}\end{center} \begin{center} \begin{minipage}{0.95\textwidth} In any geometrically nonlinear, isotropic and quadratic Cosserat micropolar extended continuum model formulated in the deformation gradient field $F \coloneqq \nabla\varphi: \Omega \to \GL^+(n)$ and the microrotation field $R: \Omega \to \SO(n)$, the shear--stretch energy is necessarily of the form \begin{align*} \wmm(R\,;F)& \coloneqq \mu \hsnorm{\sym(R^TF - {\boldsymbol{\mathbbm{1}}})}^2 + \mu_c\hsnorm{{\rm skew}(R^TF - {\boldsymbol{\mathbbm{1}}})}^2\;. \end{align*} We aim at the derivation of closed form expressions for the minimizers of $\wmm(R\,;F)$ in $\SO(3)$, i.e., for the set of optimal Cosserat microrotations in dimension $n\!=\!3$, as a function of $F \in \GL^+(n)$. In a previous contribution (Part I), we have first shown that, for all $n \geq 2$, the full range of weights $\mu > 0$ and $\mu_c \geq 0$ can be reduced to either a classical or a non-classical limit case. We have then derived the associated closed form expressions for the optimal planar rotations in $\SO(2)$ and proved their global optimality. In the present contribution (Part II), we characterize the non-classical optimal rotations in dimension \mbox{$n\!=\!3$}. After a lift of the minimization problem to the unit quaternions, the Euler--Lagrange equations can be symbolically solved by the computer algebra system \texttt{Mathematica}. Among the symbolic expressions for the critical points, we single out two candidates $\rpolar^\pm_{\mu,\mu_c}(F) \in \SO(3)$ which we analyze and for which we can computationally validate their global optimality by Monte Carlo statistical sampling of $\SO(3)$.
Geometrically, our proposed optimal Cosserat rotations $\rpolar^\pm_{\mu,\mu_c}(F)$ act in the ``plane of maximal strain'' and our previously obtained explicit formulae for planar optimal Cosserat rotations in $\SO(2)$ reveal themselves as a simple special case. Further, we derive the associated reduced energy levels of the Cosserat shear--stretch energy and criteria for the existence of non-classical optimal rotations. \end{minipage} \end{center} \vspace*{0.125cm} {\small {\bf{Key words:}} Cosserat, Grioli's theorem, micropolar, polar media, rotations, quaternions, Lagrange multiplier, equality constraints, non-symmetric stretch, Cosserat couple modulus, polar decomposition. {\bf{AMS 2010 subject classification:}} 15A24, 22E30, 74A30, 74A35, 74B20, 74G05, 74G65, 74N15. } \countres \setcounter{tocdepth}{1} \renewcommand{\baselinestretch}{-1.0}\normalsize { \small \tableofcontents } \renewcommand{\baselinestretch}{1.0}\normalsize \section{Introduction}\label{sec:intro} In this second part (Part II) of a series, we consider the weighted optimality problem for the Cosserat shear--stretch energy $\wmm:\; \SO(n) \,\times\, \GL^+(n) \to \RPosZ$,\\ \begin{equation} \wmm(R\,;F) \;\coloneqq\; \mu\, \hsnorm{\sym(R^TF - {\boldsymbol{\mathbbm{1}}})}^2 \,+\, \mu_c\,\hsnorm{{\rm skew}(R^TF - {\boldsymbol{\mathbbm{1}}})}^2\;. \label{eq:intro:wmm} \end{equation} The arguments are the deformation gradient field $F \coloneqq \nabla\varphi: \Omega \to \GL^+(n)$ and the microrotation field $R: \Omega \to \SO(n)$ evaluated at a given point of the domain $\Omega$. This energy arises in any geometrically nonlinear, isotropic and quadratic Cosserat micropolar continuum model.
Note that it is always possible to express the local energy contribution in a Cosserat model as $W = W(\overline{U})$, where $\overline{U} \coloneqq R^TF$ is the first Cosserat deformation tensor. This reduction follows from objectivity requirements and has already been observed by the Cosserat brothers~\cite[p.~123, eq.~(43)]{Cosserat09}, see also~\cite{Eringen99} and~\cite{Maugin:1998:STPE}. Since $\overline{U}$ is in general non-symmetric, the most general isotropic and quadratic local energy contribution which is zero at the reference state is given by \begin{equation} \label{intro:wgeneral} \underbrace{\mu\, \hsnorm{\sym(\overline{U} - {\boldsymbol{\mathbbm{1}}})}^2 \,+\, \mu_c\,\hsnorm{{\rm skew}(\overline{U} - {\boldsymbol{\mathbbm{1}}})}^2}_{\text{``shear--stretch energy''}} \quad+\quad \underbrace{\frac{\lambda}{2}\,\tr{\overline{U} - {\boldsymbol{\mathbbm{1}}}}^2}_{\text{``volumetric energy''}}\;. \end{equation} The last term will be discarded in the following, since it couples the rotational and volumetric response, a feature not present in the well-known isotropic linear Cosserat models.\footnote{The Cosserat brothers never proposed any specific expression for the local energy $W = W(\overline{U})$. The chosen quadratic ansatz for $W = W(\overline{U})$ is motivated by a direct extension of the quadratic energy in the linear theory of Cosserat models, see, e.g.~\cite{Jeong:2009:NLIC,Neff_Jeong_bounded_stiffness09,Neff_Jeong_Conformal_ZAMM08}. We consider a true volumetric-isochoric split in~\secref{sec:discussion:application}.} Let us now proceed to the primary objective of our present contribution: \begin{prob}[Weighted optimality in dimension $n = 3$] Let $\mu > 0$ and $\mu_c \geq 0$.
Compute the set of optimal rotations \label{intro:prob_wmm} \begin{equation} \argmin{R\,\in\,\SO(3)}{\wmm(R\,;F)} \;\coloneqq\; \argmin{R\,\in\,\SO(3)}{ \left\{ \mu\, \hsnorm{\sym(R^TF - {\boldsymbol{\mathbbm{1}}})}^2 \,+\, \mu_c\,\hsnorm{{\rm skew}(R^TF - {\boldsymbol{\mathbbm{1}}})}^2\right\} } \label{eq:intro:weighted_wmm} \end{equation} for given parameter $F \in \GL^+(3)$ with distinct singular values $\sigma_1 > \sigma_2 > \sigma_3 > 0$. \end{prob} We use the notation $\sym(X) \coloneqq \frac{1}{2}(X + X^T)$, ${\rm skew}(X) \coloneqq \frac{1}{2}(X - X^T)$, $\scalprod{X}{Y} \coloneqq \tr{X^TY}$ and we denote the induced Frobenius matrix norm by $\hsnorm{X}^2 \coloneqq \scalprod{X}{X} = \sum_{1 \leq i,j \leq n} X_{ij}^2$. In mechanics applications, the weights $\mu > 0$ and $\mu_c \geq 0$ can be identified with the Lam\'e shear modulus $\mu > 0$ from linear elasticity and the so-called Cosserat couple modulus $\mu_c \geq 0$. The parameter $\lambda$ in the most general form of the energy~\eqref{intro:wgeneral} can further be identified with the second Lam\'e parameter. Note that the interpretation of the Cosserat couple modulus $\mu_c$ is somewhat delicate, see, e.g.,~\cite{Neff_ZAMM05}, which is one of the fundamental motivations for this second contribution in a series. In Part I of this paper~\cite{Fischle:2015:OC2D}, we have proved a still surprising reduction lemma~\cite[Lem.~2.2, p.~4]{Fischle:2015:OC2D} for the material parameters (weights) $\mu$ and $\mu_c$ which is valid for \emph{all} space dimensions $n \geq 2$.
This lemma singles out a \emph{classical parameter range} $\mu_c \geq \mu > 0$ and a \emph{non-classical parameter range} $\mu > \mu_c \geq 0$ for $\mu$ and $\mu_c$ and reduces both ranges to an associated limit case. The \emph{classical limit case} is given by $(\mu,\mu_c) = (1,1)$ and the \emph{non-classical limit case} is given by $(\mu,\mu_c) = (1,0)$. We then applied the parameter reduction~\cite[Lem.~2.2, p.~4]{Fischle:2015:OC2D} to~\probref{intro:prob_wmm} and solved it in dimension $n = 2$. This allowed us to discuss the optimal planar Cosserat rotations, and we observed that the classical and the non-classical parameter ranges for $\mu$ and $\mu_c$ characterize two substantially different types of optimal Cosserat rotations. To explain this difference, we first need to introduce the polar factor $\polar(F) \in \SO(n)$ which is obtained from the right polar decomposition $F = \polar(F)\,U(F)$ of the deformation gradient $F \in \GL^+(n)$. Here, $U(F) \coloneqq \sqrt{F^TF} \in {\rm{PSym}} (n)$ denotes the positive definite symmetric right Biot-stretch tensor. We recall that the eigenvalues of $U \in {\rm{PSym}} (n)$ are by definition the singular values $\sigma_1 > \sigma_2 > \sigma_3 > 0$ of the deformation gradient $F \in \GL^+(n)$. In the classical parameter range $\mu_c \geq \mu > 0$, the polar factor $\polar$ admits a variational characterization which is noteworthy in its own right: namely, for all $n \geq 2$, it is the \emph{unique} minimizer for~\eqref{eq:intro:wmm} as a generalized version of Grioli's theorem shows, see~\cite{Grioli40,Guidugli:1980:EPP,Neff_Grioli14}, or~\cite[Cor.~2.4, p.~5]{Fischle:2015:OC2D}. This variational characterization of the polar factor inspired us to introduce the following \begin{defi}[Relaxed polar factor(s)] Let $\mu > 0$ and $\mu_c \geq 0$.
We denote the set-valued mapping that assigns to a given parameter $F \in \GL^+(n)$ its associated set of energy-minimizing rotations by $$\rpolar_{\mu,\mu_c}(F) \quad \coloneqq\quad \argmin{R\,\in\,\SO(n)}{\wmm(R\,;F)}\;.$$ \end{defi} In dimensions $k = 2,3$, we denote the associated optimal Cosserat rotation angles by $\alpha_{\mu,\mu_c}(F) \subset (-\pi,\pi]$. More generally, in what follows, we shall denote the rotation angle of the (absolute) rotation field $R \in \SO(k)$ by $\alpha \in (-\pi,\pi]$ and the rotation axis by $r \in \S^{k - 1}$. By $\S^{n -1} \subset \mathbb{R}^{n}$, we denote the unit $n - 1$-sphere. In dimension $k = 3$, we use the well-known axis-angle parametrization of rotations which we write as $[\alpha,\, r^T]$. Since the classical parameter domain $\mu_c \geq \mu > 0$ is very well understood by now, we can focus on the non-classical parameter range $\mu > \mu_c \geq 0$ in our efforts to solve~\probref{intro:prob_wmm}. Here, the parameter reduction lemma~\cite[Lem.~2.2, p.~4]{Fischle:2015:OC2D} allows us to restrict our attention to the non-classical limit case $(\mu,\mu_c) = (1,0)$, because it shows the equivalence \begin{equation} \argmin{R\,\in\,\SO(n)}{W_{\mu,\mu_c}(R\,;F)} \quad=\quad \argmin{R\,\in\,\SO(n)}{W_{1,0}(R\,; \widetilde{F}_{\mu,\mu_c})} \end{equation} for all $n \geq 2$. On the right hand side appears the \emph{rescaled deformation gradient} $\widetilde{F}_{\mu,\mu_c} \coloneqq \lambda^{-1}_{\mu,\mu_c} \cdot F \in \GL^+(n)$ which is obtained from $F \in \GL^+(n)$ by multiplication with the inverse of the \emph{induced scaling parameter} $\lambda_{\mu,\mu_c} \coloneqq \frac{\mu}{\mu - \mu_c} > 0$. We note that we use the previous notation throughout the text and further introduce the \emph{singular radius} $\rho_{\mu,\mu_c} \coloneqq \frac{2\mu}{\mu - \mu_c}$. 
It follows that the set of optimal Cosserat rotations can be described by \begin{equation} \rpolar_{\mu,\mu_c}(F) \;=\; \rpolar_{1,0}(\widetilde{F}_{\mu,\mu_c}) \end{equation} for the entire non-classical parameter range $\mu > \mu_c \geq 0$. This simplifies our main objective~\probref{intro:prob_wmm} considerably, since it now suffices to solve \begin{prob}[Weighted optimality in the non-classical limit case $(\mu,\mu_c) = (1,0)$] Let $(\mu,\mu_c) = (1,0)$. Compute the set of optimal rotations \label{intro:prob_wsym} \begin{equation} \argmin{R\,\in\,\SO(3)}{\wsym(R\,;F)} \;\coloneqq\; \argmin{R\,\in\,\SO(3)}{\hsnorm{\sym(R^TF - {\boldsymbol{\mathbbm{1}}})}^2} \label{eq:intro:weighted_wsym} \end{equation} for given parameter $F \in \GL^+(3)$ with distinct singular values $\sigma_1 > \sigma_2 > \sigma_3 > 0$. \end{prob} Regarding~\probref{intro:prob_wsym}, we will see in~\secref{sec:minimization} that there are in general two energy-minimizing solutions with a certain symmetry. They both have the same rotation axis but differ by the sign of their respective rotation angles, which allows us to select the corresponding branches by that sign. Accordingly, we introduce the notations $\rpolar^{\pm}_{\mu,\mu_c}(F)$ and $\alpha^\pm_{\mu,\mu_c}(F)$. Loosely speaking, we will see that the optimal Cosserat rotations coincide with the polar factor $\polar$ in a certain compressive regime for $F \in \GL^+(3)$, but deviate in a certain expansive regime. We shall precisely characterize this in terms of the singular radius $\sradmm$.
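The parameter reduction can be checked numerically. The sketch below (our illustration, not the authors' code; the matrix $F$ and the sample size are arbitrary choices) samples random rotations and verifies that $W_{\mu,\mu_c}(\,\cdot\,;F)$ and $W_{1,0}(\,\cdot\,;\widetilde{F}_{\mu,\mu_c})$ select the same minimizer; indeed, a short calculation shows the two energies differ only by a positive factor and an $R$-independent constant.

```python
import math, random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot(axis, angle):
    # Rodrigues' rotation formula for a unit axis.
    x, y, z = axis
    c, s, C = math.cos(angle), math.sin(angle), 1.0 - math.cos(angle)
    return [[c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
            [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
            [z*x*C - y*s, z*y*C + x*s, c + z*z*C]]

def random_rotation(rng):
    v = [rng.gauss(0.0, 1.0) for _ in range(3)]
    n = math.sqrt(sum(t * t for t in v))
    return rot([t / n for t in v], rng.uniform(-math.pi, math.pi))

def w(mu, mu_c, R, F):
    # W_{mu,mu_c}(R;F) = mu ||sym(R^T F - 1)||^2 + mu_c ||skew(R^T F - 1)||^2
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    X = matmul(Rt, F)
    for i in range(3):
        X[i][i] -= 1.0
    sym2 = sum(((X[i][j] + X[j][i]) / 2) ** 2 for i in range(3) for j in range(3))
    skw2 = sum(((X[i][j] - X[j][i]) / 2) ** 2 for i in range(3) for j in range(3))
    return mu * sym2 + mu_c * skw2

rng = random.Random(1)
mu, mu_c = 1.0, 0.5                       # non-classical range: mu > mu_c >= 0
lam = mu / (mu - mu_c)                    # induced scaling parameter
F = [[2.5, 0.3, 0.0], [0.0, 1.2, 0.1], [0.0, 0.0, 0.6]]
Ft = [[F[i][j] / lam for j in range(3)] for i in range(3)]

sample = [random_rotation(rng) for _ in range(2000)]
best_full = min(sample, key=lambda R: w(mu, mu_c, R, F))
best_limit = min(sample, key=lambda R: w(1.0, 0.0, R, Ft))
```

On any sample the two rankings agree, since $W_{\mu,\mu_c}(R;F) = (\mu-\mu_c)\,\lambda^2_{\mu,\mu_c}\, W_{1,0}(R;\widetilde{F}_{\mu,\mu_c}) + \mathrm{const}(F)$.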
Such a material behavior is commonly referred to as a \emph{tension-compression asymmetry}, an interesting natural phenomenon studied in the material sciences; see, e.g.,~\cite{Gall:1999:TCA,Gall:1999:RTTCA} and~\cite{Sehitoglu:2000:CRN} for experimental studies of nickel titanium (NiTi) shape memory single crystals, which give a glimpse of this broad subject.\footnote{We do not claim that such materials can actually be realistically modelled as a Cosserat continuum, although it is not impossible.} \probref{intro:prob_wsym} is a minimization problem on the matrix Lie group $\SO(3)$ of special orthogonal matrices parameterized by the deformation gradient $F \in \GL^+(3)$ in the identity component of the general linear group. Although a detailed review is beyond our scope, we want to point to some valuable introductory references on this subject. An excellent general reference for minimization problems on manifolds is the text by Absil, Mahony and Sepulchre~\cite{Absil:2009:OAMM}; numerical solution approaches are also presented there. For an introduction to Lie groups and matrix groups, we refer to, e.g.,~\cite{Baker:2012:MG,JMLee02} and~\cite{KHNeeb91}. For compact Lie groups and their representation theory, see, e.g.,~\cite{Hofmann:2006:SCG} and~\cite{Broecker:2003:RCLG}. There is also a growing body of closely related work treating minimization problems on matrix groups and Grioli's theorem in a similar context~\cite{Neff:2014:LMP,Neff_Grioli14,Lankeit:2014:MML,Neff:2009:SSNC}. Instead of turning towards the solution of~\probref{intro:prob_wsym} right away, we take a step back and notice that there is yet another opportunity for simplification which reduces the space of parameters $F \in \GL^+(3)$ to the space of ordered singular values $\sigma_1 \geq \sigma_2 \geq \sigma_3 > 0$ of $F$.
This can be achieved by a principal axis transformation which introduces a relative rotation $\widehat{R}$ and allows us to introduce \begin{defi}[Cosserat shear--stretch energy for the relative rotation $\widehat{R}$] \label{defi:wrel} Let $\mu > 0$, $\mu_c \geq 0$ and let $D \coloneqq \diag (\sigma_1, \sigma_2, \sigma_3)$ with $\sigma_1 > \sigma_2 > \sigma_3 > 0$ the singular values of $F \in \GL^+(3)$. The \emph{energy of the relative rotation} $\widehat{R} \in \SO(3)$ is given by \begin{equation} \widehat{W}_{\mu,\mu_c}(\widehat{R}\,;D) \;\coloneqq\; W_{\mu,\mu_c}(\widehat{R}^T\,;D) \;\coloneqq\; \mu\;\hsnorm{\sym(\widehat{R}D - {\boldsymbol{\mathbbm{1}}})}^2 + \mu_c\;\hsnorm{{\rm skew}(\widehat{R}D - {\boldsymbol{\mathbbm{1}}})}^2 \;. \end{equation} \end{defi} This transformation is described in~\secref{sec:minimization} and leads us to the reduced \begin{prob}[Optimality of relative rotations in dimension $n = 3$] Let $\mu = 1$ and $\mu_c = 0$. Compute the set of optimal relative rotations \label{prob:relative_rhat} \begin{equation} \argmin{\widehat{R}\,\in\,\SO(3)}{\widehat{W}_{1,0}(\widehat{R}\,;D)} \quad = \quad \argmin{\widehat{R}\,\in\,\SO(3)}{\hsnorm{\sym(\widehat{R}\diag(\sigma_1,\sigma_2,\sigma_3) - {\boldsymbol{\mathbbm{1}}})}^2} \label{prob:min} \end{equation} for a given diagonal matrix $D \coloneqq \diag(\sigma_1,\sigma_2,\sigma_3)$ with $\sigma_1 > \sigma_2 > \sigma_3 > 0$ the ordered singular values of the deformation gradient $F \in \GL^+(3)$. \end{prob} In this text, we strive to mark quantities related to \emph{relative} rotations with a ``hat''-symbol, e.g., we write $\widehat{R} \in \SO(3)$. Further, we note that although, for now, we explicitly exclude the case of multiple singular values of $F$ from our analysis, there is no major obstruction.
The technical treatment would, however, clutter our exposition of the basic mechanisms which we want to distill here. At present, an explicit formal solution for the three-dimensional problem (let alone the $n$-dimensional problem) seems out of reach for us. We have, however, successfully computed explicit formulae for the critical points of the Cosserat shear--stretch energy by the use of computer algebra, from which we have determined optimal solutions. For this approach to succeed, we first lift the Cosserat shear--stretch energy expressed in principal axis coordinates to the sphere of unit quaternions $\S^3 \subset \H$ and subsequently apply the Lagrange multiplier technique for minimization with equality constraints, see, e.g.,~\cite{Hestenes:1975:OTF}. The unit quaternions form a two-sheeted cover of $\SO(3)$ and allow for a convenient representation of rotations in three-space. For a preceding successful application of quaternions to represent the rotational degrees of freedom in Cosserat theory, see, e.g.,~\cite{Muench07_diss}. A highly interesting recent approach to Cosserat shell theory which also uses quaternions is based on geodesic finite elements, see~\cite{Sander:2014:NGNCS} and~\cite{Grohs:2013:ODEGDF,Sander:2012:GFESG}. This paper is structured as follows: in~\secref{sec:minimization}, we introduce the lift of the Cosserat shear--stretch energy from $\SO(3)$ to the sphere of unit quaternions $\S^3 \subset \H \cong \mathbb{R}^4$. We then state the corresponding Euler--Lagrange equations and present the energy-minimizing solutions. The complete set of critical points computed by \texttt{Mathematica}~\cite{Mathematica10} is provided in Appendix~\ref{sec:appendix}. In~\secref{sec:discussion}, we present a geometric interpretation of the optimal Cosserat rotations $\rpolar^\pm_{\mu,\mu_c}(F)$ in terms of the maximal mean planar stretch $u^{\rm mmp}$ for the entire non-classical parameter range $\mu > \mu_c \geq 0$.
This leads us to introduce a classical and a non-classical domain for the parameter $F \in \GL^+(3)$ for which we also derive some interesting alternative criteria. This illuminates the bifurcation behavior of $\rpolar_{\mu,\mu_c}(F)$. Further, we compute the associated reduced energy levels $W^{\rm red}_{\mu,\mu_c}(F)$ for the Cosserat shear--stretch energy. Then in~\secref{sec:validation}, we shed light on our methodology for the analysis of the critical points and the experimental computational validation of the energy-minimizing Cosserat rotations using statistical (Monte Carlo) methods. Finally, we summarize our findings in a short conclusion presented in~\secref{sec:conclusion}. \countres \section{Solvable Euler-Lagrange equations: transformation, lift and Lagrange multipliers} \label{sec:minimization} In this section, we use a classical result from the representation theory of compact Lie groups to cast the reduced minimization problem stated as~\probref{prob:relative_rhat} into a form which allows us to symbolically compute explicit expressions for the critical points using~\texttt{Mathematica}. It is well-known that the Lie group of unit quaternions $\S^3 \subset \H \cong \mathbb{R}^4$ is closely related to the matrix group of rotations $\SO(3)$, see, e.g.,~\cite{Marsden:2013:IMS} or~\cite[Chap.~9]{Gallier:2015:NDGL}. More precisely, the unit quaternions $\S^3$ form a double cover of the matrix group $\SO(3)$. For a general introduction to analysis on smooth manifolds which includes smooth coverings, see, e.g.,~\cite{JMLee02}. For a dynamical systems approach to quaternions, see, e.g.,~\cite{Novelia:2015:GSO} which nicely demonstrates the usefulness of quaternions for mechanics applications with constraints. A more algebraic approach to quaternions with historical remarks is given in~\cite{Ebbinghaus:1990:Numbers}, and, finally, for those who enjoy the classics, see~\cite{Hamilton:1843:ANS} and~\cite{Hamilton:1866:EOQ}. 
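The double-cover relation can be made concrete with the standard quaternion-to-matrix map. The sketch below (ours, not taken from the paper's \texttt{Mathematica} code) builds a unit quaternion from an axis-angle pair and checks that $q$ and $-q$ yield the same rotation matrix.

```python
import math

def quat_to_matrix(q):
    # Standard homomorphism S^3 -> SO(3); q and -q map to the same matrix.
    w, x, y, z = q
    return [[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]]

def axis_angle_to_quat(axis, angle):
    # Unit quaternion for a rotation by `angle` about the unit axis `axis`.
    x, y, z = axis
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), x * s, y * s, z * s)

q = axis_angle_to_quat((0.0, 0.0, 1.0), math.pi / 3.0)
R_plus = quat_to_matrix(q)
R_minus = quat_to_matrix(tuple(-c for c in q))
```

For the rotation about $e_3$ by $\pi/3$, both $q$ and $-q$ reproduce the familiar planar rotation matrix with $\cos(\pi/3)$ and $\sin(\pi/3)$ entries.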
\subsection{Transformation into principal directions} In order to reduce the parameter space $\GL^+(3)$, we use the (unique) polar decomposition\footnote{For an introduction to the polar and singular value decomposition, see, e.g.,~\cite{DSerre02} and for recent related results on variational characterizations of the polar factor $\polar(F)$, see~\cite{Neff_Grioli14,Lankeit:2014:MML,Neff:2014:LMP} and references therein.} \mbox{$F = \polar U$} and the (non-unique) spectral decomposition of $U =\sqrt{F^TF} \in {\rm{PSym}} (3)$ given by $U = QDQ^T$, $Q \in \SO(3)$, and expand \begin{equation} R^TF = R^T\polar U = R^T\polar QDQ^T\;. \end{equation} Here, the diagonal matrix $D = \diag(\sigma_1,\sigma_2,\sigma_3)$ contains the eigenvalues of $U$ on its diagonal. These are precisely the singular values of $F \in \GL^+(3)$. In fact, this is a particular form of the singular value decomposition (SVD). If $F$ has only simple singular values, then it is always possible to choose the rotation $Q$ such that an ordering $\sigma_1 > \sigma_2 > \sigma_3 > 0$ is achieved. Exploiting that $Q \in \SO(3)$, it is now possible to carry out a transformation of the Cosserat shear--stretch energy into principal axis coordinates -- essentially due to the isotropy of the energy. For the actual computation, note first that \begin{align} &Q^T(\sym(R^TF) - {\boldsymbol{\mathbbm{1}}})Q = Q^T\left(\sym(R^T\polar QDQ^T) - {\boldsymbol{\mathbbm{1}}}\right)Q\notag\\ &\quad=\sym(Q^TR^T\polar QDQ^TQ - Q^TQ) = \sym(\underbrace{Q^TR^T\polar Q}_{\eqqcolon\;\widehat{R}}D - {\boldsymbol{\mathbbm{1}}}) = \sym(\widehat{R}D - {\boldsymbol{\mathbbm{1}}})\;.\label{eq:symtransform} \end{align} In the process, it is natural to introduce the rotation \begin{equation} \label{eq:Rhat} \widehat{R} \coloneqq Q^TR^T\polar Q \end{equation} which acts \emph{relative} to the polar factor $\polar $ in the coordinate system induced by the columns of $Q$, i.e., in a positively oriented frame of principal directions of $U$.
This interpretation is also nicely illustrated by the inverse formula \begin{equation} \label{eq:R} R = \left(Q\widehat{R}Q^T\polar ^T\right)^T = \polar Q\widehat{R}^TQ^T \end{equation} which allows us to recover the original absolute rotation $R$ from the relative rotation $\widehat{R}$. Our next step is to insert the transformed symmetric part \eqref{eq:symtransform} into the definition of \begin{equation} W_{1,0}(R\,;F) = \hsnorm{\sym(R^TF - {\boldsymbol{\mathbbm{1}}})}^2 = \hsnorm{Q^T\sym(R^TF - {\boldsymbol{\mathbbm{1}}})Q}^2 = \hsnorm{\sym(\widehat{R}D - {\boldsymbol{\mathbbm{1}}})}^2\label{eq:wsymtransformed}\;, \end{equation} where we have used that the conjugation by $Q^T$ preserves the Frobenius matrix norm. This is a promising simplification of the Cosserat shear--stretch energy, because it reduces the dimension of the parameter space from $\dim\GL^+(3) = 9$ to only $3$ parameters. However, we still have to account for the non-uniqueness of $Q$. To this end, we introduce the following symmetric rotation matrices \begin{equation*} Q_1 \coloneqq {\boldsymbol{\mathbbm{1}}}, \quad Q_2 \coloneqq \diag( 1, -1, -1), \quad Q_3 \coloneqq \diag(-1, 1, -1), \quad Q_4 \coloneqq \diag(-1, -1, 1)\;, \end{equation*} and collect them in a set $\mathcal{S} \coloneqq \left\{Q_1, Q_2, Q_3, Q_4\right\} \subset \SO(3)$. This set forms a discrete subgroup of $\SO(3)$ which is isomorphic to the Klein four-group $K_4 \cong \mathbb{Z}_2 \times \mathbb{Z}_2$, as is easily inferred by a comparison of the multiplication tables. \begin{rem}[Uniqueness of the factor $Q$] Let $\sigma_1 > \sigma_2 > \sigma_3 > 0$ be the ordered eigenvalues of $U \coloneqq \sqrt{F^TF}$ and let $D \coloneqq \diag(\sigma_1,\sigma_2,\sigma_3)$. It is well-known that the factor $Q \in \SO(3)$ in the spectral decomposition $U = QDQ^T$ is only determined up to the choice of a right handed orientation of the uniquely determined orthogonal eigenspaces of $U$.
This corresponds to the products $QS$, $S \in \mathcal{S}$, which represent all of these possibilities. \end{rem} It is easy to see that for any possible choice of right-handed orientation encoded by $S \in \mathcal{S}$, we obtain the same energy level \begin{align} \hsnorm{\sym(\widehat{R}D - {\boldsymbol{\mathbbm{1}}})}^2 \;=\; \hsnorm{\sym(\widehat{R}SDS^T - {\boldsymbol{\mathbbm{1}}})}^2 \;=\; \hsnorm{\sym(S^T\widehat{R}SD - {\boldsymbol{\mathbbm{1}}})}^2 \end{align} which implies \begin{equation} \argmin{\widehat{R}\,\in\,\SO(3)}{W_{1,0}(\widehat{R}^T\,;D)} \;=\; \setdef{\,S^T \widehat{R} S\,}{\widehat{R} \in \argmin{\widehat{R}\,\in\,\SO(3)}{W_{1,0}(\widehat{R}^T\,;D)} \;\text{and}\; S \in \mathcal{S}}\;. \end{equation} Thus, $\mathcal{S}$ is a symmetry group of the set of energy-minimizing rotations which acts by conjugation. The previous analysis reveals that the non-uniqueness of $Q \in \SO(3)$ is not an issue for the minimization problem, since all possible choices $QS$, $S \in \mathcal{S}$, lead to the same energy level.\footnote{A consistent choice of $Q(F) \in \SO(3)$ for \emph{different} values of $F \in \GL^+(3)$ is certainly to be advised for the numerical computation of a \emph{field} of minimizers $\rpolar^\pm_{\mu,\mu_c}(F(x))$ depending on $x \in \Omega$. The inversion formula~\eqref{eq:R} explicitly depends on the choice of $Q$ and is sensitive to flips of the subspace orientation $Q \mapsto QS$, $S \in \mathcal{S}$.} Without any loss of generality, we may henceforward focus on the solution of \begin{equation} \argmin{\widehat{R} \in \SO(3)}{\hsnorm{\sym(\widehat{R}D - {\boldsymbol{\mathbbm{1}}})}^2} \quad=\quad \argmin{\widehat{R} \in \SO(3)}{W_{1,0}(\widehat{R}^T\,;D)}\;. \end{equation} This proves the reduction of~\probref{intro:prob_wmm} to the minimization problem described in~\probref{prob:relative_rhat} in~\secref{sec:intro} for the non-classical limit case $(\mu,\mu_c) = (1,0)$.
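The invariance of the energy level under all four orientation choices is easy to confirm numerically. The following sketch is our own illustration (assuming NumPy; the rotation and the matrix $D$ are arbitrary test data):

```python
import numpy as np

def W10(Rhat, D):
    """Cosserat shear-stretch energy ||sym(Rhat D - 1)||^2 for (mu, mu_c) = (1, 0)."""
    M = Rhat @ D - np.eye(3)
    S = 0.5 * (M + M.T)
    return float(np.sum(S * S))

# the Klein four-group of orientation flips
S_group = [np.diag(s) for s in ([1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1])]

rng = np.random.default_rng(1)
A, _ = np.linalg.qr(rng.standard_normal((3, 3)))
Rhat = A if np.linalg.det(A) > 0 else -A      # a random relative rotation
D = np.diag([3.0, 2.0, 0.5])                  # sigma_1 > sigma_2 > sigma_3 > 0

E = [W10(S.T @ Rhat @ S, D) for S in S_group] # conjugation by each S
assert np.allclose(E, E[0])                   # identical energy levels
```

The check relies on $S D S^T = D$ for the diagonal sign matrices, so conjugation by $S$ leaves the energy untouched.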
The same principal axes transformation can also be carried out for arbitrary values of $\mu$ and $\mu_c$, which gives rise to~\deref{defi:wrel}. In what follows, we denote the rotation angle of the (absolute) microrotation field $R \in \SO(3)$ by $\alpha \in (-\pi,\pi]$ and the axis of the rotation by $r \in \S^2$, where $\S^n \subset \mathbb{R}^{n + 1}$ denotes the unit $n$-sphere. This leads us to the axis-angle representation of a rotation which we write as $[\alpha,\, r^T]$. Throughout, we work with different parametrizations of the group of rotations $\SO(3)$ simultaneously. Thus, we introduce the symbol $\equiv$ in order to identify rotations in $\SO(3)$ which are described with respect to different parametrizations of $\SO(3)$. For example, we might write for the relative rotation $\widehat{R} \equiv [\hat{\beta},\, (\hat{r}_1, \hat{r}_2, \hat{r}_3)]$ and for a unit quaternion $q \in \S^3$ describing $R \in \SO(3)$, we have $q \equiv R \equiv -q$. We see that, in general, this binary relation is \emph{not unique} since the parametrizations need not be one-to-one. The symmetry group $\mathcal{S} \coloneqq \left\{Q_1, Q_2, Q_3, Q_4\right\}$ hints at the structure of the set of optimal relative Cosserat rotations. In our previously introduced notation, we find: \begin{equation} \begin{aligned} Q_1^T \widehat{R} Q_1 &\,\equiv\, \left[\hat{\beta},\, ( \hat{r}_1, \hat{r}_2, \hat{r}_3)\right]\,, \quad&\quad&\quad Q_2^T \widehat{R} Q_2 \,\equiv\, \left[\hat{\beta},\, ( \hat{r}_1, -\hat{r}_2, -\hat{r}_3)\right]\,,\\ Q_3^T \widehat{R} Q_3 &\,\equiv\, \left[\hat{\beta},\, (-\hat{r}_1, \hat{r}_2, -\hat{r}_3)\right]\,, \quad&\quad&\quad Q_4^T \widehat{R} Q_4 \,\equiv\, \left[\hat{\beta},\, (-\hat{r}_1, -\hat{r}_2, \hat{r}_3)\right]\;. \end{aligned} \end{equation} We observe that for rotations about the coordinate axes, i.e., with $\hat{r} = e_n$, $n = 1,2,3$, the rotation axis $\hat{r}$ is either left invariant or negated.
The latter is equivalent to the negation of the rotation angle $\hat{\beta}$. \subsection{Lifting the minimization problem to \texorpdfstring{$\S^3$}{the three-sphere}} The unit quaternions can be identified with the three-sphere $\S^3 \coloneqq \setdef{q \in \H}{\abs{q} = 1}$ which we shall consider as a submanifold of the ambient coefficient space $\mathbb{R}^4$ of the quaternion division ring $\H$. Let us choose the coordinates $(w,x,y,z) \in \mathbb{R}^4$, i.e., we write quaternions as $q = w + i x + j y + k z \in \H$. In order to cast the minimization problem into a form which lends itself to the derivation of a closed form solution, it is helpful to simplify the domain of minimization, i.e., to choose a well-adapted system of coordinates. We achieve this by lifting the Cosserat shear--stretch energy from $\SO(3)$ to the covering space given by the sphere of unit quaternions $\S^3 \subset \mathbb{R}^4$. The principal idea is then to extend the covering map from $\S^3$ to the ambient space $\mathbb{R}^4$ and to apply the Lagrange multiplier rule with the constraint function $g(q) \coloneqq \abs{q}^2 - 1 = 0$. This approach leads to minimizers in the submanifold of unit quaternions $q \in \S^3$ which project to energy-minimizing rotations under the well-known covering homomorphism \begin{equation} \label{eq:pi} \pi: \S^3 \to \SO(3), \quad\quad \pi(q) = \begin{pmatrix} 1 - 2(y^2 + z^2) & 2(xy - wz) & 2(xz + wy) \\ 2(xy + wz) & 1 - 2(x^2 + z^2) & 2(yz - wx) \\ 2(xz - wy) & 2(yz + wx) & 1 - 2(x^2 + y^2) \\ \end{pmatrix}\;. \end{equation} In order to make our procedure explicit, let us first consider the case of arbitrary smooth energies $W:\SO(3) \to \mathbb{R}$. 
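For concreteness, the covering homomorphism $\pi$ can be implemented directly from the matrix above. The sketch below is our own illustration (assuming NumPy); it checks that $\pi(q) \in \SO(3)$ for unit quaternions, that antipodal quaternions project to the same rotation, and that $q = (\cos\frac{\alpha}{2}, 0, 0, \sin\frac{\alpha}{2})$ yields the rotation about $e_3$ by the angle $\alpha$:

```python
import numpy as np

def pi(q):
    """Covering homomorphism S^3 -> SO(3) for q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

rng = np.random.default_rng(2)
q = rng.standard_normal(4)
q /= np.linalg.norm(q)                      # a random unit quaternion

R = pi(q)
assert np.allclose(R.T @ R, np.eye(3))      # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)    # proper rotation
assert np.allclose(pi(-q), R)               # two-fold cover: pi(q) = pi(-q)

a = 0.7                                     # rotation about e_3 by the angle a
Rz = pi(np.array([np.cos(a/2), 0.0, 0.0, np.sin(a/2)]))
expected = np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])
assert np.allclose(Rz, expected)
```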
\begin{lem} Any smooth energy $W:\SO(3) \to \mathbb{R}$ admits a lift to a smooth energy \mbox{$W^\sharp: \S^3 \to \mathbb{R}$} \begin{equation*} \begin{xy} \xymatrix @!R { \S^3 \ar[rd]^{W^\sharp} \ar[d]^{\pi} & \\ \SO(3) \ar[r]_W & \mathbb{R} \\ } \end{xy} \end{equation*} such that minimizers of $W^\sharp$ are projected to minimizers of $W$, i.e., \begin{equation} \pi(\,\argmin{q\,\in\,\S^3}{W^\sharp(q)}\,) \quad=\quad \argmin{R\,\in\,\SO(3)}{W(R)}\;. \end{equation} \begin{proof} The covering map $\pi: \S^3 \to \SO(3)$ defines a surjective Lie group homomorphism with $\ker \pi = \{1, -1\}$, see, e.g.,~\cite{Gallier:2015:NDGL}. This implies that the unit quaternions form a two-fold cover of $\SO(3)$. In particular, the Lie group homomorphism $\pi$ is a local diffeomorphism when restricted to a sheet of the covering and maps critical unit quaternions in $\S^3$ to critical rotations in $\SO(3)$. By definition, $W^\sharp \coloneqq W \circ \pi: \S^3 \to \mathbb{R}$ is a lift of the energy $W$ to the covering space $\S^3$. Smoothness of $W^\sharp$ is obvious since the composition of smooth maps is smooth. \end{proof} \end{lem} For any $R \in \SO(3)$ there exists a $q \in \S^3$ which represents this rotation as $R = \pi(q) \in \SO(3)$. However, this representation is only unique up to antipodal identification, i.e., $q$ and $-q$ represent the same rotation: $\pi(q) = R = \pi(-q)$. We further note that the matrix expression for $\pi$ can be evaluated symbolically for all $q \in \H$, which induces an extension: as defined, the covering map $\pi$ acts only on unit quaternions $q \in \S^3$, but in order to apply the Lagrange multiplier theorem, we have to extend it to a suitable neighborhood in the ambient space. To this end, we introduce the punctured space of non-zero quaternions by $\mathring{\H} \coloneqq \H\setminus{\{0\}} \cong \mathbb{R}^4\setminus{\{0\}}$.
The identification $\mathbb{R}^{3\times 3} \cong \mathbb{R}^9$ by concatenation of rows then allows us to consider $\pi: \mathring{\H} \to \mathbb{R}^{3\times 3}$ as a map $\pi: \mathring{\H} \to \mathbb{R}^9$ which leads us to the following matrix representation of the derivative \begin{equation} \mathrm{D}_{(w,x,y,z)}\,\pi\left(q(w,x,y,z)\right)\; =\; \begin{pmatrix*}[r] 0 & -2 z & 2 y & 2 z & 0 & -2 x & -2 y & 2 x & 0\\ 0 & 2 y & 2 z & 2 y & -4 x & -2 w & 2 z & 2 w & -4 x\\ -4 y & 2 x & 2 w & 2 x & 0 & 2 z & -2 w & 2 z & -4 y\\ -4 z & -2 w & 2 x & 2 w & -4 z & 2 y & 2 x & 2 y & 0 \end{pmatrix*}^T\!. \end{equation} It is not hard to infer that $\mathrm{D}_q \pi(q)$ is of rank $4$ for all $q \in \mathring{\H}$. Hence, the implicit function theorem ensures that $\pi:\mathring{\H} \to \mathbb{R}^{3\times 3}$ is a local diffeomorphism from the punctured ambient space $\mathring{\H}$ of the unit sphere $\S^3$ to its image $\pi(\mathring{\H}) \subset \mathbb{R}^{3\times 3}$. \begin{defi}[Extension of the lifted energy] The extension of the Lie group homomorphism $\pi: \S^3 \to \SO(3)$ to $\mathring{\H}$ given by $\pi: \mathring{\H} \to \mathbb{R}^{3\times 3}$ induces an \textbf{extension of the lifted energy} to the ambient space $\mathring{\H}$ \begin{equation} \widehat{W}_{1,0}^\sharp:\mathring{\H} \to \mathbb{R}, \quad\quad \widehat{W}_{1,0}^\sharp(q\,;D) \coloneqq \hsnorm{\sym\left(\pi(q)D -{\boldsymbol{\mathbbm{1}}}\right)}^2\;. \end{equation} \end{defi} Let us abbreviate $\widehat{R}(\hat{q}) \coloneqq \restrict{\pi}{\S^3}(\hat{q})$. It is precisely the restriction of the lifted energy to the unit quaternions for which the Cosserat shear--stretch energy of the relative rotation is well-defined $$ \restrict{\widehat{W}_{1,0}^\sharp}{\S^3}(\hat{q}\,;D) \quad=\quad \widehat{W}_{1,0}(\widehat{R}(\hat{q})\,;D)\;.
$$ This extension is simply a mathematical construction, i.e., for $\hat{q} \in \mathring{\H}\setminus\S^3$ the lifted energy $\widehat{W}_{\mu,\mu_c}^\sharp$ \emph{loses its original interpretation as a shear--stretch energy}. Further, we note that the choice of extension is not unique, but the solutions to the Euler--Lagrange equations do not depend on the particular extension.\footnote{Alternatively, one may use, e.g., the following extension which yields pairwise orthogonal columns \begin{equation*} \pi': \mathring{\H} \to \mathbb{R}^{3\times 3}\;,\quad \pi'(q)\;\coloneqq\; \begin{pmatrix} w^2+x^2-y^2-z^2 & 2 (x y - w z) & 2 (x z + w y) \\ 2 (x y + w z) & w^2-x^2+y^2-z^2 & 2 (y z - w x) \\ 2 (x z - w y) & 2 (y z + w x) & w^2-x^2-y^2+z^2 \\ \end{pmatrix}\;. \end{equation*} The restrictions $\restrict{\pi}{\S^3} = \restrict{\pi'}{\S^3}$ to the sphere of unit quaternions $\S^3$ are identical.} \begin{defi}[Lagrange function] Consider the constraint function $g: \mathring{\H} \to \mathbb{R}$, $g(\hat{q}) \coloneqq \abs{\hat{q}}^2 - 1$. The \textbf{Lagrange function} for $\widehat{W}_{1,0}^\sharp:\mathring{\H} \to \mathbb{R}$ is given by \begin{equation*} \widehat{L}_{1,0}:\mathring{\H} \times \mathbb{R} \to \mathbb{R}, \quad\quad \widehat{L}_{1,0}(\hat{q},\lambda\,;D) \coloneqq \widehat{W}^\sharp_{1,0}(\hat{q}\,;D) - \lambda\, g(\hat{q})\;. \end{equation*} \end{defi} Clearly, $g(\hat{q}) = 0$ if and only if $\hat{q} \in \S^3 \subset \mathring{\H}$ which leads us to our final reformulation of the original~\probref{intro:prob_wmm} in terms of quaternions describing relative rotations, namely \begin{prob}[Lagrange multiplier formulation] \label{prob:lagrange} Compute the critical points of the Lagrange function \begin{equation} \widehat{L}_{1,0}(\hat{q},\lambda\,;D) \;=\; \hsnorm{\sym\left(\pi(\hat{q})D -{\boldsymbol{\mathbbm{1}}}\right)}^2 - \lambda\, \left(\abs{\hat{q}}^2 - 1\right) \end{equation} and determine the energy-minimizing solutions. 
\end{prob} The Lagrange function is polynomial. Thus, the application of the Lagrange multiplier technique leads to an algebraic problem for the Euler--Lagrange equations which we investigate next. \subsection{Euler--Lagrange equations, critical points and optimal solutions} In what follows, a shorthand notation is helpful, so let us introduce \begin{equation*} s_{ij} \coloneqq \sigma_i + \sigma_j\quad\text{and}\quad d_{ij} \coloneqq \sigma_i - \sigma_j\;,\quad i,j = 1,2,3\;. \end{equation*} Towards a derivation of the Euler--Lagrange equations in quaternion representation, we first compute the product \begin{align*} \pi(\hat{q}(w,x,y,z))\,D &= \begin{pmatrix} 1-2 \left(y^2+z^2\right) & 2 (x y-w z) & 2 (xz + wy) \\ 2 (xy +wz) & 1-2 \left(x^2+z^2\right) & 2 (yz - wx) \\ 2 (xz -wy) & 2 (yz + wx) & 1-2 \left(x^2+y^2\right) \end{pmatrix} \begin{pmatrix} \sigma_1 & 0 & 0\\ 0 & \sigma_2 & 0\\ 0 & 0 & \sigma_3\\ \end{pmatrix}\\ &= \begin{pmatrix} \sigma_1 \left(1-2 \left(y^2+z^2\right)\right) & 2 \sigma_2 (xy - wz) & 2 \sigma_3 (xz + wy) \\ 2 \sigma_1 (xy + wz) & \sigma_2 \left(1-2 \left(x^2+z^2\right)\right) & 2 \sigma_3 (yz -wx) \\ 2 \sigma_1 (xz -wy) & 2 \sigma_2 (yz + wx) & \sigma_3 \left(1-2 \left(x^2+y^2\right)\right) \\ \end{pmatrix}\;. \intertext{From this, we infer the symmetric part} \sym\left(\pi(\hat{q})D\right) &= \begin{pmatrix} \sigma_1\left(1 - 2 ({y}^2 + {z}^2)\right)& s_{12}\, {x} {y} + d_{12}\, {w} {z} & s_{31}\, {x} {z} + d_{31}\, {w} {y}\\ s_{12}\, {x} {y}+ d_{12}\, {w} {z} & \sigma_2\left(1 - 2 ({x}^2 + {z}^2)\right) & s_{23}\, {y} {z} + d_{23}\, {w} {x}\\ s_{31}\, {x} {z} + d_{31}\, {w} {y} & s_{23}\, {y} {z} + d_{23}\, {w} {x} & \sigma_3\left(1 - 2 ({x}^2 + {y}^2)\right)\\ \end{pmatrix}\;. \end{align*} Observing that $\sym\left(\pi(\hat{q})D - {\boldsymbol{\mathbbm{1}}}\right) = \sym\left(\pi(\hat{q})D\right) - {\boldsymbol{\mathbbm{1}}}$, we can compute the square of the Frobenius norm. 
This yields the following explicit expression for the Lagrange function $\widehat{L}_{1,0}: \mathring{\H}\times \mathbb{R} \to \mathbb{R}$: \begin{align*} \widehat{L}_{1,0}(\hat{q},\lambda\,;D) &= \left(\sigma_1 (1 \!-\! 2 {y}^2 \!-\! 2 {z}^2) - 1\right)^2 + \left(\sigma_2 (1 \!-\! 2 {x}^2 \!-\! 2 {z}^2) - 1\right)^2 + \left(\sigma_3 (1 \!-\! 2 {x}^2 \!-\! 2 {y}^2) - 1\right)^2\\ & \qquad + 2\;\left( \left(s_{12} {x} {y} + d_{12} {w} {z}\right)^2 + \left(s_{31} {x} {z} + d_{31} {w} {y}\right)^2 + \left(s_{23} {y} {z} + d_{23} {w} {x}\right)^2 \right)\\ & \qquad -{\lambda}\; ({w}^2 + {x}^2 + {y}^2 + {z}^2 - 1)\;. \end{align*} Let $D = \diag(\sigma_1,\sigma_2,\sigma_3)$ be given. Then a critical tuple of coefficients $(w,x,y,z,\lambda)$ for the Lagrange function $\widehat{L}_{1,0}$ satisfies the Euler--Lagrange equations in quaternion representation, i.e., \begin{equation} \mathrm{D}_{(w,x,y,z,\lambda)}\, \widehat{L}_{1,0}\left(\hat{q}(w,x,y,z),\lambda\,;D\right) \;=\; 0\;. \end{equation} After a lengthy computation in components (for which we have used \texttt{Mathematica}), one obtains an explicit form of the Euler--Lagrange equations for $\widehat{L}_{1,0}$ which is equivalent to the following parameter-dependent system of polynomials {\small \begin{equation} \label{eq:EL_quat} \begin{aligned} 0 &= \mathbf{w} \cdot\left(d_{23}^2 \,\mathbf{x}^2 + d_{31}^2 \,\mathbf{y}^2 + d_{12}^2 \,\mathbf{z}^2 -\frac{\,\mathbf{\lambda}}{2}\right)\\ 0 &= \mathbf{x} \cdot\left( d_{23}^2 \,\mathbf{w}^2 + 4 (\sigma_2^2 + \sigma_3^2) \,\mathbf{x}^2 + (4\sigma_3^2 + s_{12}^2) \,\mathbf{y}^2 + (4\sigma_2^2 + s_{31}^2) \,\mathbf{z}^2 -\left(d_{23}^2 + (s_{23} - 2)s_{23}\right) -\frac{\,\mathbf{\lambda}}{2}\right)\\ 0 &= \mathbf{y} \cdot\left( d_{31}^2 \,\mathbf{w}^2 + 4 (\sigma_3^2 + \sigma_1^2 ) \,\mathbf{y}^2 + (4 \sigma_1^2 + s_{23}^2) \,\mathbf{z}^2 + (4 \sigma_3^2 + s_{12}^2) \,\mathbf{x}^2 -\left(d_{31}^2 + (s_{31} - 2)s_{31}\right) -\frac{\,\mathbf{\lambda}}{2}\right)\\ 0 &=
\mathbf{z} \cdot\left( d_{12}^2 \,\mathbf{w}^2 \,+ 4 (\sigma_1^2 + \sigma_2^2) \,\mathbf{z}^2 + (4 \sigma_2^2 + s_{31}^2) \,\mathbf{x}^2 + (4 \sigma_1^2 + s_{23}^2) \,\mathbf{y}^2 -\left(d_{12}^2 + (s_{12} - 2)s_{12}\right) -\frac{\,\mathbf{\lambda}}{2}\right)\\ 0 &= \mathbf{w}^2 + \,\mathbf{x}^2 + \,\mathbf{y}^2 + \,\mathbf{z}^2 - 1\;. \end{aligned} \end{equation}} In general, solution sets of polynomial systems over the field of complex numbers $\mathbb{C}$ define complex varieties which intuitively can be regarded as almost-everywhere submanifolds of $\mathbb{C}^n$ with certain singularities. Real algebraic geometry studies the set of solutions to systems over real closed fields and the solution sets define so-called semialgebraic sets~\cite{Bochnak:2013:RAG}. For an exposition of solution methods for polynomial systems, we refer the interested reader to~\cite{Sturmfels:2002:SSPE} and~\cite{Cox:2006:UAG}. Note that in our case both the problem and its solution set are parametrized by the singular values $\sigma_1 > \sigma_2 > \sigma_3 > 0$ of the deformation gradient $F \in \GL^+(3)$ encoded by the diagonal matrix $D = \diag(\sigma_1,\sigma_2,\sigma_3)$. The study of parametrized polynomial systems is an active research area in computational algebraic geometry, see, e.g.,~\cite{Montes:2010:GBPP} and references therein.\footnote{The present authors are not specialists in (computational) algebraic geometry. Our goal here is to point out some interesting references and developments that might be useful for the solution of polynomial systems arising also in other applications.} We briefly introduce the Euler--Lagrange equations obtained by taking variations on the matrix group $\SO(3)$; cf.~\cite[p.~28]{Neff_Biot07} for details. Let $\xi = RA \in T_R\SO(3) \cong R\cdot\so(3)$ be a direction in the tangent space at $R \in \SO(3)$.
The corresponding directional derivative of the Cosserat shear--stretch energy $W_{\mu,\mu_c}(R\,; F)$ is then \begin{align*} \label{eq:D1W} D_{R} W_{\mu,\mu_c}(R\,; F).\xi &= 2\mu\,\scalprod{\sym(R^TF - {\boldsymbol{\mathbbm{1}}})}{\sym(\xi^TF)} + 2\mu_c\,\scalprod{{\rm skew}(R^TF)}{{\rm skew}(\xi^TF)}\\ &= \scalprod{2\mu\,\sym(\overline{U} - {\boldsymbol{\mathbbm{1}}}) + 2\mu_c\,{\rm skew}(\overline{U})}{A^T\overline{U}}\;. \end{align*} Equating this derivative with zero and noting, as usual, that this equality must hold for \emph{all} infinitesimal rotations $A \in \so(3)$, we obtain the Euler--Lagrange equations in matrix representation. In particular, any critical $\overline{U} \coloneqq R^TF$ must satisfy \begin{equation} \label{eq:EL_SO} {\rm skew}\left((\mu - \mu_c)\,\overline{U}^2 - 2\mu\,\overline{U}\right) = 0\;. \end{equation} Clearly, the polar factor $\polar$ solves the Euler--Lagrange equations as it symmetrizes $\overline{U}$. Thus, $\polar$ is \emph{always} a critical point, see, e.g.,~\cite{Bufler85}, or~\cite{Sansour99}. Under certain conditions on $F$, however, there may be non-classical critical points and even minimizers for which $\overline{U}$ is no longer symmetric! This observation lies at the heart of the first collaboration of the present authors~\cite{Neff_Biot07,Neff_Fischle_GAMM08} and we shall meet this phenomenon again in the following; cf. also~\cite{Sansour:2008:NCC}.
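The matrix form of the Euler--Lagrange equations is easy to probe numerically. The following sketch is our own illustration (assuming NumPy; the chosen $F$ and the weights are test data). It verifies that the polar factor is always critical, and that for a stretched diagonal $F = D$ with $\sigma_1 + \sigma_2 > 2$ a rotation about $e_3$ by $\beta = \arccos(2/(\sigma_1+\sigma_2))$ is a non-classical critical point with non-symmetric $\overline{U}$:

```python
import numpy as np

def skew(M):
    return 0.5 * (M - M.T)

def el_residual(R, F, mu=1.0, mu_c=0.0):
    """skew((mu - mu_c) Ubar^2 - 2 mu Ubar) for Ubar = R^T F."""
    Ubar = R.T @ F
    return skew((mu - mu_c) * (Ubar @ Ubar) - 2.0 * mu * Ubar)

rng = np.random.default_rng(3)
F = rng.standard_normal((3, 3))
if np.linalg.det(F) < 0:
    F[0] = -F[0]
W, _, Vt = np.linalg.svd(F)
Rp = W @ Vt                                   # polar factor of F
assert np.allclose(el_residual(Rp, F), 0.0)   # polar factor is always critical

# non-classical critical point for F = D with sigma_1 + sigma_2 > 2
D = np.diag([3.0, 2.0, 0.5])
beta = np.arccos(2.0 / (3.0 + 2.0))
c, s = np.cos(beta), np.sin(beta)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
assert np.allclose(el_residual(R, D), 0.0)
assert not np.allclose(R.T @ D, (R.T @ D).T)  # Ubar is not symmetric here
```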
In Appendix~\ref{sec:appendix}, we have compiled the solution set for the Euler--Lagrange equations in quaternion representation~\eqref{eq:EL_quat} which we have obtained using \texttt{Mathematica}. This permits us to present the energy-minimizing relative rotations which solve~\probref{prob:lagrange} without further ado. \begin{compres}[Energy-minimizing quaternions for $(\mu,\mu_c) = (1,0)$] Let $D = \diag(\sigma_1, \sigma_2, \sigma_3)$ with $\sigma_1 > \sigma_2 > \sigma_3 > 0$. Then the quaternion representation of the energy-minimizing relative rotations for $\widehat{W}_{1,0}(\hat{q}\,;D)$ are given by the following critical points (listed in Appendix~\ref{sec:appendix}): \begin{equation} \label{eq:optimal_q} \begin{cases} \;\hat{q}_{\mathrm{I},1}(D) \hspace{0.1cm} \quad\equiv\quad {\boldsymbol{\mathbbm{1}}}_3 &,\quad\text{if}\quad s_{12} \coloneqq \sigma_1 + \sigma_2 \leq 2\;,\\ \;\hat{q}^\pm_{\mathrm{II},1}(D) \quad\equiv\quad \left[\pm\arccos(\frac{2}{\sigma_1 + \sigma_2}),\; (0,\,0,\,1)\right] &,\quad\text{if}\quad s_{12} \coloneqq \sigma_1 + \sigma_2 \geq 2\;. \end{cases} \end{equation} \end{compres} \textit{Validation.} At present, we cannot give a full proof for this result. However, we consider our numerical validation to be quite thorough. For an exposition of our analysis of the critical points compiled in Appendix~\ref{sec:appendix} and the numerical validation of the presented energy-minimizing solutions based on extensive random sampling of $\SO(3)$, we refer our reader to~\secref{sec:validation}. One of the main gaps towards a full proof is the question whether the set of critical points computed by \texttt{Mathematica}~is complete. Note that our extensive validation based on random rotations, which far exceeds what we can present in a paper, does not hint at the existence of additional critical points. Solving algebraic problems is a domain in which CAS tools such as \texttt{Mathematica}~shine.
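The claimed minimizing quaternions can also be checked against the polynomial system~\eqref{eq:EL_quat} directly. The sketch below is our own illustration (assuming NumPy; the singular values are test data). It plugs $\hat{q}^\pm_{\mathrm{II},1}$, together with the multiplier value $\lambda = 2 d_{12}^2 z^2$ forced by the first equation, into all five equations:

```python
import numpy as np

def el_system(q, lam, sig):
    """Residuals of the five Euler-Lagrange equations in quaternion form."""
    w, x, y, z = q
    s1, s2, s3 = sig
    s12, s23, s31 = s1 + s2, s2 + s3, s3 + s1
    d12, d23, d31 = s1 - s2, s2 - s3, s3 - s1
    return np.array([
        w * (d23**2 * x**2 + d31**2 * y**2 + d12**2 * z**2 - lam / 2),
        x * (d23**2 * w**2 + 4*(s2**2 + s3**2) * x**2 + (4*s3**2 + s12**2) * y**2
             + (4*s2**2 + s31**2) * z**2 - (d23**2 + (s23 - 2) * s23) - lam / 2),
        y * (d31**2 * w**2 + 4*(s3**2 + s1**2) * y**2 + (4*s1**2 + s23**2) * z**2
             + (4*s3**2 + s12**2) * x**2 - (d31**2 + (s31 - 2) * s31) - lam / 2),
        z * (d12**2 * w**2 + 4*(s1**2 + s2**2) * z**2 + (4*s2**2 + s31**2) * x**2
             + (4*s1**2 + s23**2) * y**2 - (d12**2 + (s12 - 2) * s12) - lam / 2),
        w**2 + x**2 + y**2 + z**2 - 1.0,
    ])

sig = (3.0, 2.0, 0.5)                       # sigma_1 > sigma_2 > sigma_3, s12 > 2
beta = np.arccos(2.0 / (sig[0] + sig[1]))   # optimal relative rotation angle
q = np.array([np.cos(beta/2), 0.0, 0.0, np.sin(beta/2)])
lam = 2.0 * (sig[0] - sig[1])**2 * q[3]**2  # multiplier from the first equation
assert np.allclose(el_system(q, lam, sig), 0.0)

# classical branch: s12 <= 2, identity quaternion, lambda = 0
q_id = np.array([1.0, 0.0, 0.0, 0.0])
assert np.allclose(el_system(q_id, 0.0, (0.9, 0.8, 0.5)), 0.0)
```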
\begin{cor}[Energy-minimizing relative rotations for $(\mu,\mu_c) = (1,0)$] \label{cor:rhat10} The solutions to~\probref{prob:relative_rhat} are given by the energy-minimizing relative rotations {\small \begin{equation} \widehat{R}_{1,0}^{\pm}(F) \coloneqq \begin{pmatrix} \cos \hat{\beta}^\pm_{1,0} & -\sin \hat{\beta}^\pm_{1,0} & 0\\ \sin \hat{\beta}^\pm_{1,0} & \cos \hat{\beta}^\pm_{1,0} & 0\\ 0 & 0 & 1\\ \end{pmatrix} \stackrel{(\text{if}\; s_{12} \geq 2)}{\vphantom{\Sigma}=} \begin{pmatrix} \frac{2}{\sigma_1 + \sigma_2} & \mp \sqrt{1-\left(\frac{2}{\sigma_1 + \sigma_2}\right)^2} & 0\\ \pm \sqrt{1-\left(\frac{2}{\sigma_1 + \sigma_2}\right)^2} & \frac{2}{\sigma_1 + \sigma_2} & 0 \\ 0 & 0 & 1 \end{pmatrix}\;. \end{equation} } Here, the optimal relative rotation angles are given by \begin{equation} \hat{\beta}^\pm_{1,0}(F) \quad\coloneqq\quad \begin{cases} \; 0 &,\;\text{if}\quad s_{12} \coloneqq \sigma_1 + \sigma_2 \leq 2\;,\\ \; \pm\arccos(\frac{2}{\sigma_1 + \sigma_2}) &,\;\text{if}\quad s_{12} \coloneqq \sigma_1 + \sigma_2 \geq 2\;. \end{cases} \end{equation} In particular, for $\sigma_1 + \sigma_2 \leq 2$, we obtain $\widehat{R}_{1,0}^{\pm}(F) = {\boldsymbol{\mathbbm{1}}}$. \end{cor} The interpretation of the optimal relative Cosserat rotations is the main subject of the next section, but in anticipation of this subsequent discussion we remark that the condition $\sigma_1 + \sigma_2 \leq 2$ characterizes a generalized compressive regime. \countres \section{Optimal Cosserat rotations, maximal mean planar strain and the reduced energy} \label{sec:discussion} All proper rotations of euclidean three-space act in a plane perpendicular to the axis of rotation. From this, a continuum model with rotational degrees of freedom inherits a certain planar character. 
In our context, it seems natural to introduce \begin{defi}[Maximal mean planar stretch and strain] \label{defi:mmpss} Let $F \in \GL^+(n)$, $n \geq 2$, with singular values $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_n > 0$. We introduce the \textbf{maximal mean planar stretch} $\mathbf{u^{\rm mmp}}$ and the \textbf{maximal mean planar strain} $\mathbf{s^{\rm mmp}}$ as follows: \begin{equation} \begin{aligned} u^{\rm mmp}(F) &\;\coloneqq\; \max_{i \neq j}{\frac{\sigma_i + \sigma_j}{2}} = \frac{\sigma_1 + \sigma_2}{2}\;,\quad\text{and}\\ s^{\rm mmp}(F) &\;\coloneqq\; \max_{i \neq j}\frac{(\sigma_i - 1) + (\sigma_j - 1)}{2} = u^{\rm mmp}(F) - 1\;. \end{aligned} \end{equation} \end{defi} \begin{defi}[Classical and non-classical domain] To any pair of material parameters $(\mu,\mu_c)$ in the non-classical range $\mu > \mu_c \geq 0$, we associate the following \textbf{classical domain} and \textbf{non-classical domain} for the parameter $F \in \GL^+(n)$ \begin{equation} \begin{aligned} \mathrm{Dom}^\mathrm{C}_{\mu,\mu_c} &\coloneqq \setdef{F \in \GL^+(n)}{s^{\rm mmp}(\widetilde{F}_{\mu,\mu_c}) \leq 0}\;, \quad\text{and}\quad\\ \mathrm{Dom}^\mathrm{NC}_{\mu,\mu_c} &\coloneqq \setdef{F \in \GL^+(n)}{s^{\rm mmp}(\widetilde{F}_{\mu,\mu_c}) \geq 0}\;, \end{aligned} \end{equation} respectively. \end{defi} It is straightforward to derive the following alternative characterizations {\small \begin{equation} \begin{aligned} \mathrm{Dom}^\mathrm{C}_{\mu,\mu_c} = \setdef{F \in \GL^+(n)}{u^{\rm mmp}(F) \leq \lambda_{\mu,\mu_c}} = \setdef{F \in \GL^+(n)}{\sigma_1 + \sigma_2 \leq \sradmm \coloneqq \frac{2\mu}{\mu - \mu_c}}\;,\\ \mathrm{Dom}^\mathrm{NC}_{\mu,\mu_c} = \setdef{F \in \GL^+(n)}{u^{\rm mmp}(F) \geq \lambda_{\mu,\mu_c}} = \setdef{F \in \GL^+(n)}{\sigma_1 + \sigma_2 \geq \sradmm \coloneqq \frac{2\mu}{\mu - \mu_c}}\;.
\end{aligned} \end{equation} }Note that the intersection $\mathrm{Dom}^\mathrm{C}_{\mu,\mu_c} \cap \mathrm{Dom}^\mathrm{NC}_{\mu,\mu_c} = \setdef{F \in \GL^+(n)}{s^{\rm mmp}(\widetilde{F}_{\mu,\mu_c}) = 0}$ is not empty. However, the minimizers $\rpolar_{\mu,\mu_c}^\pm(F)$ coincide with the polar factor $\polar(F)$ on this intersection. This can be seen from the form of the optimal relative rotations in~\coref{cor:rhat10}. In particular, for dimension $n = 3$, we rediscover the following important characterizations of these domains for the non-classical limit case $(\mu,\mu_c) = (1,0)$; cf.~\eqref{eq:optimal_q}: \begin{equation} \begin{aligned} \mathrm{Dom}^\mathrm{C}_{1,0} \coloneqq \setdef{F \in \GL^+(3)}{s_{12} &\coloneqq \sigma_1 + \sigma_2 \leq 2}\;, \quad\text{and}\\ \mathrm{Dom}^\mathrm{NC}_{1,0} \coloneqq \setdef{F \in \GL^+(3)}{s_{12} &\coloneqq \sigma_1 + \sigma_2 \geq 2}\;. \end{aligned} \end{equation} Previously, in~\coref{cor:rhat10}, we determined the energy-minimizing relative rotations \begin{equation} \widehat{R}_{1,0}^\pm(D) \;\coloneqq\; \argmin{\widehat{R}\,\in\,\SO(3)}{\widehat{W}_{1,0}(\widehat{R}\,;D)} \;=\; \argmin{\widehat{R}\,\in\,\SO(3)}{\hsnorm{\sym(\widehat{R}D - {\boldsymbol{\mathbbm{1}}})}^2}\;. \end{equation} Let us briefly summarize: for $u^{\rm mmp}(F) \leq 1$, i.e., when $F \in \mathrm{Dom}^\mathrm{C}_{1,0}$, we have $\widehat{R}_{1,0}^\pm(D) = {\boldsymbol{\mathbbm{1}}}$ which corresponds uniquely to the polar factor $\polar{}$. The minimizers $\rpolar^\pm_{1,0}(F)$ deviate strictly from $\polar(F)$ for $F \in \mathrm{Dom}^\mathrm{NC}_{1,0} \setminus \mathrm{Dom}^\mathrm{C}_{1,0}$ and are hence non-classical.
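As a sanity check mirroring the random-sampling validation mentioned earlier, one can compare the closed-form minimizer against a brute-force search over random rotations. The following sketch is our own illustration (assuming NumPy; the singular values and sample size are test data):

```python
import numpy as np

def W10(Rhat, D):
    """||sym(Rhat D - 1)||^2, the shear-stretch energy in principal coordinates."""
    M = Rhat @ D - np.eye(3)
    S = 0.5 * (M + M.T)
    return float(np.sum(S * S))

def Rhat_opt(sig):
    """Closed-form energy-minimizing relative rotation (one of the pair)."""
    s12 = sig[0] + sig[1]
    if s12 <= 2.0:
        return np.eye(3)                     # classical domain: polar factor
    c = 2.0 / s12
    s = np.sqrt(1.0 - c*c)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

rng = np.random.default_rng(4)
for sig in [(3.0, 2.0, 0.5), (0.9, 0.8, 0.5)]:   # non-classical and classical case
    D = np.diag(sig)
    E_opt = W10(Rhat_opt(sig), D)
    for _ in range(2000):                        # random sampling of SO(3)
        A, _ = np.linalg.qr(rng.standard_normal((3, 3)))
        R = A if np.linalg.det(A) > 0 else -A
        assert W10(R, D) >= E_opt - 1e-9

# in the non-classical case the polar factor is strictly beaten
D = np.diag((3.0, 2.0, 0.5))
assert W10(Rhat_opt((3.0, 2.0, 0.5)), D) < W10(np.eye(3), D)
```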
Further, expressed in terms of the maximal mean planar stretch $u^{\rm mmp}(F)$, we obtain the alternative representation \begin{equation} \widehat{R}_{1,0}^{\pm}(F) \;=\; \begin{pmatrix} \frac{1}{u^{\rm mmp}(F)} & \mp\sqrt{1-\frac{1}{u^{\rm mmp}(F)^2}} & 0\\ \pm\sqrt{1-\frac{1}{u^{\rm mmp}(F)^2}} & \frac{1}{u^{\rm mmp}(F)} & 0 \\ 0 & 0 & 1 \end{pmatrix}\;. \end{equation} \begin{minipage}[t]{0.6\linewidth} Towards a geometric interpretation of the energy-minimizing Cosserat rotations $\rpolar^\pm_{1,0}(F)$ in the non-classical limit case $(\mu,\mu_c) = (1,0)$, we reconsider the spectral decomposition of $U = QDQ^T$ from the principal axis transformation in~\secref{sec:intro}. Let us denote the columns of $Q \in \SO(3)$ by $q_i \in \S^2$, $i = 1,2,3$. Then $q_1$ and $q_2$ are orthonormal eigenvectors of $U$ which correspond to the largest two singular values $\sigma_1$ and $\sigma_2$ of $F \in \GL^+(3)$. More generally, we introduce the following \begin{defi}[Plane of maximal strain] \label{defi:pms} The \textbf{plane of maximal strain} is the linear subspace $$\mathrm{P}^{\rm mp}(F) \quad\coloneqq\quad \vspan{q_1,q_2} \subset \mathbb{R}^3$$ spanned by the two maximal eigenvectors $q_1,q_2$ of $U$, i.e., the eigenvectors associated with the two largest singular values $\sigma_1 > \sigma_2$ of the deformation gradient $F \in \GL^+(n)$, $n \geq 2$. \end{defi} We recall that, due to the parameter reduction~\cite[Lem.~2.2]{Fischle:2015:OC2D}, it is always possible to recover the optimal rotations for general non-classical parameters $\mu > \mu_c \geq 0$ \begin{equation} \rpolar_{\mu,\mu_c}(F) \coloneqq \argmin{R\,\in\,\SO(3)}{W_{\mu,\mu_c}(R\,;F)} \end{equation} from the non-classical limit case $(\mu,\mu_c) = (1,0)$. However, we defer the explicit procedure for a bit since it is quite instructive to interpret this distinguished non-classical limit case first.
\end{minipage} \quad \begin{minipage}[t]{0.39\linewidth} \begin{center} { \setlength{\fboxsep}{0pt} \setlength{\fboxrule}{1pt} \mbox{ \includegraphics[width=\linewidth,clip=true,trim=0.75cm 3cm 0.75cm 2.5cm]{plane_of_max_strain-cropped} } } \end{center} \captionof{figure}{A stretch ellipsoid corresponding \label{fig:pms} to $(\sigma_1,\sigma_2,\sigma_3) = (4,2,1/2)$. The plane of maximal strain $\mathrm{P}^{\rm mp}(F)$ is depicted in blue. The cylinder perpendicular to this plane marks the axis of rotation $q_3 \perp \mathrm{P}^{\rm mp}(F)$ of $\rpolar^\pm(F)$ which corresponds to the eigenvector associated with the smallest singular value $\sigma_3 = 1/2$. The thin blue cylinder which bisects the angle enclosed by the opening of the ellipsoid corresponds to the polar factor $\polar{}$. Each of the two outer red cylinders corresponds to a non-classical minimizer $\rpolar_{1,0}^\pm(F)$. The angle enclosed is the optimal relative rotation angle $\hat{\beta}_{1,0}^\pm = \pm\arccos(\frac{2}{\sigma_1 + \sigma_2})$. This is the major symmetry of the non-classical minimizers.} \end{minipage} \begin{rem}[$\rpolar^\pm_{1,0}(F)$ in the classical domain] \label{rem:rpolar_class} For $s^{\rm mmp}(F) \leq 0$ the maximal mean planar stretch is non-expansive. By definition, we have $F \in \mathrm{Dom}^\mathrm{C}_{1,0}$ in the classical domain, for which the energy-minimizing relative rotation is given by $\widehat{R}_{1,0}(F) = {\boldsymbol{\mathbbm{1}}}$ and there is no deviation from the polar factor. In short $\rpolar_{1,0}^\pm(F) = \polar(F)$. \end{rem} Let us now turn to the more interesting non-classical case $F \in \mathrm{Dom}^\mathrm{NC}_{1,0}$. \begin{rem}[$\rpolar^\pm_{1,0}(F)$ in the non-classical domain] \label{rem:rpolar_nonclass} If $F \in \mathrm{Dom}^\mathrm{NC}_{1,0}$, then by definition $s^{\rm mmp}(F) > 0$ and the maximal mean planar strain is expansive. 
The deviation of the non-classical energy-minimizing rotations $\rpolar^\pm_{1,0}(F)$ from the polar factor $\polar$ is measured by a rotation in the plane of maximal strain $\mathrm{P}^{\rm mp}(F)$ given by $\polar(F)^T\rpolar^{\pm}_{1,0}(F) = Q(F)\widehat{R}_{1,0}^\mp(F)Q(F)^T$. The rotation axis is the eigenvector $q_3$ associated with the smallest singular value $\sigma_3 > 0$ of $F$ and the relative rotation angle is given by $\hat{\beta}_{1,0}^\mp(F) = \mp\arccos\left(1/u^{\rm mmp}(F)\right)$. The rotation angles increase monotonically towards the asymptotic limits $$\lim_{u^{\rm mmp}(F) \,\to\, \infty} \hat{\beta}_{1,0}^\pm(F) \quad=\quad \pm \frac{\pi}{2}\;.$$ In axis-angle representation, we obtain \begin{align} \widehat{R}_{1,0}^\pm(F) &\quad\equiv\quad \left[\pm \arccos(1/u^{\rm mmp}(F)),\, (0,\,0,\,1)\right]\,,\quad\text{and}\\ \polar^T\rpolar^{\pm}_{1,0}(F) &\quad\equiv\quad \left[\mp \arccos(1/u^{\rm mmp}(F)),\, q_3 \right]\;. \end{align} \end{rem} \begin{cor}[An explicit formula for $\rpolar_{\mu,\mu_c}^\pm(F)$] \label{cor:rpolar_formula} For the non-classical limit case $(\mu,\mu_c) = (1,0)$ we have the following formula for the energy-minimizing Cosserat rotations: \begin{equation} \rpolar^{\pm}_{1,0}(F) \quad\coloneqq\quad \begin{cases} \;\polar(F) &, \text{if}\quad F \in \mathrm{Dom}^\mathrm{C}_{1,0}\;,\\ \;\polar(F)Q(F)\widehat{R}_{1,0}^\mp(F)Q(F)^T &, \text{if}\quad F \in \mathrm{Dom}^\mathrm{NC}_{1,0}\;. \end{cases} \end{equation} For general values of the weights in the non-classical range $\mu > \mu_c \geq 0$, we obtain \begin{equation} \rpolar^{\pm}_{\mu,\mu_c}(F) \coloneqq \rpolar^{\pm}_{1,0}(\widetilde{F}_{\mu,\mu_c})\;, \end{equation} where $\widetilde{F}_{\mu,\mu_c} \coloneqq \lambda^{-1}_{\mu,\mu_c}\,F$ is obtained by rescaling the deformation gradient with the inverse of the \emph{induced scaling parameter} $\lambda_{\mu,\mu_c} \coloneqq \frac{\mu}{\mu - \mu_c} > 0$.
\end{cor} \begin{proof} The first part is a straightforward application of equation~\eqref{eq:R} derived in~\secref{sec:intro}, which translates the optimal relative rotations described in~\coref{cor:rhat10} into absolute rotations. The second part is non-trivial and follows from~\cite[Lem.~2.2]{Fischle:2015:OC2D}.\qedhere \end{proof} Note that the previous definition is relative to a fixed choice of the orthonormal factor $Q(F) \in \SO(3)$ in the spectral decomposition of $U = QDQ^T$. Further, right from their variational characterization, one easily deduces that the energy-minimizing rotations satisfy $\rpolar^\pm_{\mu, \mu_c}(Q\,F) = Q\,\rpolar^\pm_{\mu,\mu_c}(F)$, for any $Q \in \SO(3)$, i.e., they are objective functions; cf.~\remref{rem:polar_vs_rpolar}. \begin{figure} \begin{center} \includegraphics[width=12cm]{branchDiag} \end{center} \caption[Pitchfork bifurcation diagram for $\rpolar_{\mu,\mu_c}^\pm(F)$]{ \label{fig:branchDiag} Pitchfork bifurcation diagram for $\rpolar_{\mu,\mu_c}^\pm(F)$ for $\mu > \mu_c \geq 0$. Let us express the energy-minimizers $\rpolar_{\mu,\mu_c}^\pm(F)$ in terms of the maximal mean planar stretch $u^{\rm mmp}(F)$. For values $F \in \mathrm{Dom}^\mathrm{C}_{\mu,\mu_c}$, we have $0 < u^{\rm mmp}(F) \leq \lambda_{\mu,\mu_c}$ and the polar factor $\polar(F)$ is uniquely energy-minimizing. In contrast, for $F \in \mathrm{Dom}^\mathrm{NC}_{\mu,\mu_c}$, $\lambda_{\mu,\mu_c} \leq u^{\rm mmp}(F) < \infty$, there are two non-classical minimizers $\rpolar^\pm_{\mu,\mu_c}(F)$. In this regime, the polar factor is no longer optimal but it is still a critical point. At the branching point $u^{\rm mmp}(F) = \lambda_{\mu,\mu_c}$ the minimizers all coincide: $\rpolar^{-}_{\mu,\mu_c}(F) = \polar(F) = \rpolar^{+}_{\mu,\mu_c}(F)$.
For $\mu_c \to \mu$, the branching point escapes to infinity which asymptotically recovers the behavior in the classical parameter range $\mu_c \geq \mu > 0$.} \end{figure} The domains of the piecewise definition of $\rpolar_{1,0}^\pm(F)$ in~\coref{cor:rpolar_formula} indicate a certain tension-compression asymmetry in the material model characterized by the Cosserat shear--stretch energy $W_{1,0}(R\,;F)$; cf.~\remref{rem:zero_tca}. We can also make a second important observation. To this end, consider a smooth curve $F(t):(-\epsilon,\epsilon) \to \GL^+(3)$. If the eigenvector $q_3(t) \in \S^2$ associated with the smallest singular value $\sigma_3(t)$ changes its orientation along this curve, then the rotation axis of $\rpolar_{1,0}^\pm(F)$ flips as well. Effectively, the sign of the relative rotation angle $\hat{\beta}_{1,0}^\pm(F)$ is negated, which may lead to jumps. This can happen, e.g., if $F(t)$ passes through a deformation gradient with a non-simple singular value, but it may also depend on details of the specific algorithm used for the computation of the eigenbasis. For the classical range $\mu_c \geq \mu > 0$, the polar factor and the relaxed polar factor(s) coincide and trivially share all properties. This is no longer true for the non-classical parameter range $\mu > \mu_c \geq 0$ and we compare the properties for that range in our next remark. More precisely, we present a detailed comparison with the well-known features of the polar factor $\polar$ which are of fundamental importance in the context of mechanics. \begin{minipage}{\linewidth} \begin{rem}[$\polar(F)$ vs. $\rpolar(F)$ for the non-classical \label{rem:polar_vs_rpolar} range $\mu > \mu_c \geq 0$] Let $n \geq 2$ and $F \in \GL^+(n)$.
The polar factor $\polar(F) \in \SO(n)$ obtained from the polar decomposition $F = \polar(F)\,U$ is \emph{always unique} and satisfies: \begin{equation} \begin{matrix*}[l] &\text{(Objectivity)} &\hspace{.125\linewidth} &\polar(\,Q\cdot F\,) &= &Q\cdot\polar(F) &\quad\quad(\forall Q \in \SO(n))\;,\\ &\text{(Isotropy)} &\hspace{.125\linewidth} &\polar(\,F\cdot Q\,) &= &\polar(F)\cdot Q &\quad\quad(\forall Q \in \SO(n))\;,\\ &\text{(Scaling invariance)} &\hspace{.125\linewidth} &\polar(\,\lambda\cdot F\,) &= &\polar(F) &\quad\quad(\forall \lambda > 0)\;,\\ &\text{(Inversion symmetry)} &\hspace{.125\linewidth} &\polar(F^{-1}) &= &\polar(F)^{-1}\;. & \end{matrix*} \end{equation} The relaxed polar factor(s) $\rpolar_{\mu,\mu_c}(F) \subset \SO(n)$ is \emph{in general multi-valued} and, due to its variational characterization, satisfies: \begin{equation} \begin{matrix*}[l] &\text{(Objectivity)} &\hspace{.125\linewidth} & \rpolar_{\mu,\mu_c}(\,Q \cdot F\,) &= &Q \cdot \rpolar_{\mu,\mu_c}(F) &\quad\quad(\forall Q \in \SO(n))\;,\\ &\text{(Isotropy)} &\hspace{.125\linewidth} &\rpolar_{\mu,\mu_c}(\,F\cdot Q\,) &= &\rpolar_{\mu,\mu_c}(F)\cdot Q &\quad\quad(\forall Q \in \SO(n))\;. \end{matrix*} \end{equation} For the particular dimensions $k = 2,3$, our explicit formulae imply (cf.~also Part I~\cite{Fischle:2015:OC2D}) that there exist particular instances $\lambda^* > 0$ and $F^* \in \GL^+(k)$ for which we have \begin{equation} \begin{matrix*}[l] & \text{(\underline{Broken} scaling invariance)} &\hspace{.125\linewidth} &\rpolar^\pm_{\mu,\mu_c}(\lambda^* \cdot F^*) &\neq &\rpolar^\pm_{\mu,\mu_c}(F^*) &, \quad\text{and}\\ &\text{(\underline{Broken} inversion symmetry)} &\hspace{.125\linewidth} &\rpolar^\pm_{\mu,\mu_c}({F^*}^{-1}) &\neq &\left(\rpolar^\pm_{\mu,\mu_c}(F^*)\right)^{-1} &.
\end{matrix*} \end{equation} This can be directly inferred from the partitioning of $\GL^+(k) = \mathrm{Dom}^\mathrm{C}_{\mu,\mu_c} \,\cup\, \mathrm{Dom}^\mathrm{NC}_{\mu,\mu_c}$ and the respective piecewise definition of the relaxed polar factor(s), see~\coref{cor:rpolar_formula}. \end{rem} We interpret these broken symmetries as a (generalized) tension-compression asymmetry. \end{minipage} \subsection{The reduced Cosserat shear--stretch energy} We now introduce the notion of a reduced energy which is realized by the energy-minimizing rotations $\rpolar_{\mu,\mu_c}(F)$; see also~\remref{rem:capriz}. \begin{defi}[Reduced Cosserat shear--stretch energy] The \textbf{reduced Cosserat shear--stretch energy} is defined as \begin{equation} W_{\mu,\mu_c}^{\rm red}: \GL^+(n) \to \RPosZ, \quad\quad W_{\mu, \mu_c}^{\rm red}(F) \;\coloneqq\; \min_{R\,\in\,\SO(n)} \widehat{W}_{\mu,\mu_c}(R\,;F)\;. \end{equation} \end{defi} Besides the previous definition, we also have the following equivalent means for the explicit computation of the reduced energy \begin{equation} \begin{aligned} W_{\mu,\mu_c}^{\rm red}(F) &\;=\; \widehat{W}_{\mu,\mu_c}(\rpolar^\pm_{\mu,\mu_c}(F)\,;F)\;,\quad\text{and}\\ W_{\mu,\mu_c}^{\rm red}(F) &\;=\; \widehat{W}_{\mu, \mu_c}^{\rm red}(D) \;\coloneqq \min_{\widehat{R} \in \SO(n)} \widehat{W}_{\mu,\mu_c}(\widehat{R}\,; D) \;=\; \widehat{W}_{\mu,\mu_c}(\widehat{R}_{\mu,\mu_c}^\pm\,;D)\;. \end{aligned} \end{equation} We now approach the computation of the explicit representation of $W_{\mu,\mu_c}^{\rm red}(F)$ by means of the equivalent expression $\widehat{W}^{\rm red}_{\mu,\mu_c}(D)$. For the sake of brevity, we set $c = \frac{2}{\sigma_1 + \sigma_2} = 1/u^{\rm mmp}(F)$ and $s = \sqrt{1 - c^2}$. 
This allows us to write the optimal relative Cosserat rotations in a simple form in the computation of \begin{equation} \widehat{R}_{1,0}^\pm D = \begin{pmatrix} c & \mp s & 0 \\ \pm s &c & 0\\ 0 & 0 & 1\\ \end{pmatrix} \begin{pmatrix} \sigma_1 & 0 & 0\\ 0 & \sigma_2 & 0\\ 0 & 0 & \sigma_3 \end{pmatrix} = \begin{pmatrix} \sigma_1\cdot c & \mp\sigma_2\cdot s & 0\\ \pm\sigma_1\cdot s & \sigma_2\cdot c & 0\\ 0 & 0 & \sigma_3\\ \end{pmatrix}\;. \end{equation} From this, we compute the following symmetric and skew-symmetric parts: \begin{equation} \label{eq:symrd_skewrd} \begin{aligned} &\sym\left(\widehat{R}_{1,0}^\pm D - {\boldsymbol{\mathbbm{1}}}\right) \,\;=\; \begin{pmatrix} \sigma_1\cdot c - 1& \pm\frac{d_{12}}{2}\cdot s & 0\\ \pm\frac{d_{12}}{2}\cdot s& \sigma_2\cdot c - 1 & 0\\ 0 & 0 & \sigma_3 - 1\\ \end{pmatrix}\;,\quad\text{and}\\ &{\rm skew}\left(\widehat{R}_{1,0}^\pm D - {\boldsymbol{\mathbbm{1}}}\right) \;=\; \begin{pmatrix} 0 & \mp\frac{s_{12}}{2} \cdot s &\hspace{0.2cm} 0\\ \pm\frac{s_{12}}{2}\cdot s & 0 & \hspace{0.2cm}0\\ 0 & 0 & \hspace{0.2cm}0\\ \end{pmatrix}\;. \end{aligned} \end{equation} Here, $d_{12} \coloneqq \sigma_1 - \sigma_2$ and $s_{12} \coloneqq \sigma_1 + \sigma_2$. \begin{lem}[The reduced Cosserat shear--stretch energy $\wsymred(F)$ in terms of singular values] \label{lem:wred10_singular} Let $F \in \GL^+(3)$ and let $\sigma_1 > \sigma_2 > \sigma_3 > 0$ be the ordered singular values of $F$.
Then the reduced Cosserat shear--stretch energy $\wsymred(F)$ admits the following piecewise representation \begin{equation*} \wsymred(F) \,=\, \begin{cases} \,(\sigma_1 - 1)^2 + (\sigma_2 - 1)^2 + (\sigma_3 - 1)^2 = \hsnorm{U - {\boldsymbol{\mathbbm{1}}}}^2 &,\,\text{if}\;\,\sigma_1 + \sigma_2 \leq 2,\,\text{i.e.},\, F \in \mathrm{Dom}^\mathrm{C}_{1,0}\,,\\ \,\frac{1}{2}\,(\sigma_1 - \sigma_2)^2 + (\sigma_3 - 1)^2 &,\,\text{if}\;\, \sigma_1 + \sigma_2 \geq 2,\,\text{i.e.},\,F \in \mathrm{Dom}^\mathrm{NC}_{1,0}\,.\\ \end{cases} \end{equation*} \end{lem} \textit{Proof.} The classical piece of the energy is easily obtained by inserting the polar factor $\polar(F)$ into the energy. To compute the non-classical piece, we first recall that \begin{equation*} \hsnorm{\sym(\rpolar^{\pm}(F)^TF - {\boldsymbol{\mathbbm{1}}})}^2 \;=\; \wsymred(F) \;=\; \widehat{W}^{\rm red}_{1,0}(D) \;=\; \hsnorm{\sym(\widehat{R}^{\pm}D - {\boldsymbol{\mathbbm{1}}})}^2\;. \end{equation*} We compute the expression on the right hand side. To this end, we set $c = \frac{2}{\sigma_1 + \sigma_2}$ and $s = \sqrt{1 - c^2}$ again and compute the Frobenius matrix norm of $\sym(\widehat{R}^{\pm}D - 1)$ which we have derived in~\eqref{eq:symrd_skewrd}. This gives us \begin{align*} \hsnorm{\sym\left(\widehat{R}^{\pm}D - {\boldsymbol{\mathbbm{1}}}\right)}^2 &= (\sigma_1 c - 1)^2 + (\sigma_2 c - 1)^2 + \frac{1}{2}\,(\sigma_1 - \sigma_2)^2 (1 - c^2) + (\sigma_3 - 1)^2\\ &= \frac{1}{2} \left(4 + (\sigma_1 - \sigma_2)^2 - 4 c (\sigma_1 + \sigma_2) + c^2 (\sigma_1 + \sigma_2)^2\right) + (\sigma_3 - 1)^2\\ &= \frac{1}{2}(\sigma_1 - \sigma_2)^2 + (\sigma_3 - 1)^2\;.\tag*{$\blacksquare$} \end{align*} Our next step is to reveal the form of the reduced energy for the entire non-classical parameter range $\mu > \mu_c \geq 0$ which involves the parameter reduction lemma, but we have to be a bit careful. 
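Before doing so, let us cross-check the singular value representation of~\leref{lem:wred10_singular} numerically. The following Python sketch is purely illustrative; the helper names (w10, wred10, random_rotation) and the sampling scheme are ours and not part of the derivation. It verifies that the in-plane rotation with $\cos\hat{\beta} = 2/(\sigma_1 + \sigma_2)$ attains the closed-form value and that no sampled competitor rotation undercuts it.

```python
import numpy as np

def w10(R, F):
    """Cosserat shear-stretch energy W_{1,0}(R; F) = ||sym(R^T F - 1)||_F^2."""
    E = R.T @ F - np.eye(3)
    S = 0.5 * (E + E.T)
    return float(np.sum(S * S))

def wred10(F):
    """Closed-form reduced energy from the lemma (singular value form)."""
    s = np.linalg.svd(F, compute_uv=False)   # ordered: s[0] >= s[1] >= s[2]
    if s[0] + s[1] <= 2.0:                   # classical domain
        return float(np.sum((s - 1.0) ** 2))
    return 0.5 * (s[0] - s[1]) ** 2 + (s[2] - 1.0) ** 2

def random_rotation(rng):
    """Random rotation from the QR decomposition of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
    Q = Q * np.sign(np.diag(R))              # fix the column signs
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1.0                      # enforce det Q = +1
    return Q

rng = np.random.default_rng(0)
F = np.diag([3.0, 2.0, 0.5])                 # sigma_1 + sigma_2 = 5 > 2: non-classical
assert abs(wred10(F) - 0.75) < 1e-12         # 0.5*(3 - 2)^2 + (0.5 - 1)^2

# The optimal relative rotation acts in the sigma_1-sigma_2 plane with
# cos(beta) = 2/(sigma_1 + sigma_2) and attains the closed-form value.
c = 2.0 / 5.0
sb = np.sqrt(1.0 - c * c)
Rhat = np.array([[c, -sb, 0.0], [sb, c, 0.0], [0.0, 0.0, 1.0]])
assert abs(w10(Rhat, F) - wred10(F)) < 1e-12

# No sampled competitor rotation does better than the closed form.
samples = [w10(random_rotation(rng), F) for _ in range(20000)]
assert min(samples) >= wred10(F) - 1e-9
```

Since the competitors are sampled at random, this is of course a plausibility check, not a proof.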
\begin{rem}[Reduced energies and the parameter reduction lemma] The parameter reduction procedure described in~\cite[Lem.~2.2]{Fischle:2015:OC2D} is the key to the minimizers for general non-classical material parameters $\mu > \mu_c \geq 0$. It might be tempting, but we have to stress that the general form of the reduced energy cannot be obtained by rescaling the singular values $\sigma_i \mapsto \lambda_{\mu,\mu_c}^{-1}\sigma_i$ in the singular value representation of $W^{\rm red}_{1,0}$. \end{rem} \begin{theo}[$W^{\rm red}_{\mu,\mu_c}$ as a function of the singular values] \label{theo:wmm_explicit} Let $F \in \GL^+(3)$ with ordered singular values $\sigma_1 > \sigma_2 > \sigma_3 > 0$ and let $\mu > \mu_c \geq 0$ be a non-classical parameter set. Then the reduced Cosserat shear--stretch energy $W^{\rm red}_{\mu,\mu_c}: \GL^+(3) \to \RPosZ$ admits the following explicit representation \begin{align*} W^{\rm red}_{\mu,\mu_c}(F) \,=\, \begin{cases} \, \mu \left((\sigma_1 - 1)^2 + (\sigma_2 - 1)^2 + (\sigma_3 - 1)^2\right) = \mu\,\hsnorm{U - {\boldsymbol{\mathbbm{1}}}}^2 &,\; F \in \mathrm{Dom}^\mathrm{C}_{\mu,\mu_c}\;,\\ \, \frac{\mu}{2}(\sigma_1 - \sigma_2)^2 + \mu\, (\sigma_3 - 1)^2 + \frac{\mu_c}{2}\left(\left(\sigma_1 + \sigma_2\right)^2 - 2\sradmm\right) &,\; F \in \mathrm{Dom}^\mathrm{NC}_{\mu,\mu_c}\;. \end{cases} \end{align*} \end{theo} \textit{Proof.} In order to obtain the classical part of the energy it suffices to insert $\polar{}$ into the energy. For the non-classical piece, we insert the optimal relative rotations $\widehat{R}_{\mu,\mu_c}^\pm$ into $\widehat{W}_{\mu,\mu_c}(\widehat{R}\,;D)$. This amounts to replacing $c \mapsto \tilde{c} = \frac{\sradmm}{\sigma_1 + \sigma_2}$ and $s \mapsto \tilde{s} = \sqrt{1 - \tilde{c}^2}$ in our preparatory calculation~\eqref{eq:symrd_skewrd}. This yields the following contributions: \begin{align} \mu\;\hsnorm{\sym(\widehat{R}_{\mu,\mu_c}^\pm D - {\boldsymbol{\mathbbm{1}}})}^2 &\;=\; \frac{\mu}{2}\; d_{12}^2 + \mu\, (\sigma_3 - 1)^2 + \frac{\mu}{2} (\sradmm - 2)^2\;,\\ \mu_c\;\hsnorm{{\rm skew}(\widehat{R}_{\mu,\mu_c}^\pm D - {\boldsymbol{\mathbbm{1}}})}^2 &\;=\; \frac{\mu_c}{2}\;s_{12}^2\;\tilde{s}^2 \;=\; \frac{\mu_c}{2}\,s_{12}^2 - \frac{\mu_c}{2}\,\sradmm^2 \;. \end{align} Finally, adding the constant part $\frac{\mu}{2}(\sradmm - 2)^2$ of the symmetric contribution to the complete contribution due to the skew-symmetric part and using the identity $\frac{\mu}{2}(\sradmm - 2)^2 - \frac{\mu_c}{2}\sradmm^2 = -\mu_c\,\sradmm$, we obtain \begin{equation} \frac{\mu}{2} (\sradmm - 2)^2 + \frac{\mu_c}{2} s_{12}^2 - \frac{\mu_c}{2} \sradmm^2 \;=\; \frac{\mu_c}{2}\,s_{12}^2 - \mu_c\,\sradmm \;=\; \frac{\mu_c}{2}\left(s^2_{12} - 2\sradmm\right)\;. \tag*{\raisebox{-0.1cm}{$\blacksquare$}} \end{equation} The last step of the preceding proof is interesting in its own right. \begin{rem}[On $\mu_c$ as a penalty weight] Let us consider the contribution of the skew-term to $W^{\rm red}_{\mu,\mu_c}$ given by $$ \frac{\mu_c}{2}\left(\left(\sigma_1 + \sigma_2\right)^2 - \sradmm^2\right) $$ as a penalty term for $F \in \GL^+(3)$ arising for material parameters in the non-classical parameter range $\mu > \mu_c \geq 0$. This leads to a simple but interesting observation for strictly positive $\mu_c > 0$. On the non-classical domain, the minimizers $F \in \GL^+(3)$ \emph{for the penalty term} satisfy the bifurcation criterion $$\sigma_1 + \sigma_2 = \sradmm$$ for $\rpolar^\pm_{\mu,\mu_c}(F)$. In this case $\widehat{R}_{\mu,\mu_c}^\pm = {\boldsymbol{\mathbbm{1}}}$ which implies that $\widehat{R}_{\mu,\mu_c}^\pm D - {\boldsymbol{\mathbbm{1}}} \in {\rm{Sym}}(3)$, i.e., it is symmetric. Hence, the skew-part vanishes entirely which minimizes the penalty.
In numerical applications, a rotation field $R$ approximating $\rpolar^\pm(F)$ can be expected to be unstable in the vicinity of the branching point $\sigma_1 + \sigma_2 \approx \sradmm$. Hence, a penalty which explicitly rewards an approximation to the bifurcation point seems to be a delicate property. In strong contrast, for the case when the Cosserat couple modulus is zero, i.e., $\mu_c = 0$, the penalty term vanishes entirely. This hints at a possibly more favorable qualitative behavior of the model in that case; cf.~\cite{Neff_ZAMM05}. \end{rem} \subsection{Geometric aspects of the reduced Cosserat shear--stretch energy} We recall that the tangent bundle $T\SO(n)$ is isomorphic to the product $\SO(n)\times\so(n)$ as a vector bundle. This is commonly referred to as the left trivialization, see, e.g.,~\cite{Duistermaat:2012:LG}. With this we can comfortably minimize over the tangent bundle in the following lemma which sets the course for our next result. \begin{lem} \label{lem:dist_ctb_technical} Let $F \in \mathbb{R}^{n\times n}$. Then \begin{align*} \inf_{\substack{R\,\in\,\SO(n)\\ A\,\in\,\so(n)}} \norm{R^TF - {\boldsymbol{\mathbbm{1}}} - A}^2 \quad=\quad \min_{R\,\in\,\SO(n)} \norm{\sym(R^TF - {\boldsymbol{\mathbbm{1}}})}^2 \quad \eqqcolon \quad \min_{R\,\in\,\SO(n)} W_{1,0}(R\,;F) \;. \end{align*} \end{lem} \begin{proof} For all $R\,\in\,\SO(n)$, the infimum of $\hsnorm{{\rm skew}(R^TF - {\boldsymbol{\mathbbm{1}}}) - A}^2$ over all skew-symmetric $A$ is obviously attained at $A = {\rm skew}(R^TF - {\boldsymbol{\mathbbm{1}}})$.
Therefore \begin{align*} \smash{\inf_{{\substack{R\,\in\,\SO(n)\\A\,\in\,\so(n)}}}} \, \hsnorm{R^TF - {\boldsymbol{\mathbbm{1}}} - A}^2 &= \inf_{R\,\in\,\SO(n)} \, \inf_{A\,\in\,\so(n)} \Big\lbrace\hsnorm{\sym(R^TF - {\boldsymbol{\mathbbm{1}}} - A)}^2 + \hsnorm{{\rm skew}(R^TF - {\boldsymbol{\mathbbm{1}}} - A)}^2\Big\rbrace \notag\\ &= \inf_{R\,\in\,\SO(n)} \, \Big\lbrace \hsnorm{\sym(R^TF - {\boldsymbol{\mathbbm{1}}})}^2 + \inf_{A\,\in\,\so(n)} \hsnorm{{\rm skew}(R^TF - {\boldsymbol{\mathbbm{1}}}) - A}^2 \Big\rbrace \notag\\ &= \inf_{R\,\in\,\SO(n)} \, \hsnorm{\sym(R^TF - {\boldsymbol{\mathbbm{1}}})}^2\;. \end{align*} Since $\SO(n)$ is compact and $W_{1,0}(R\,;F)$ is continuous, the infimum is attained by a minimizer.\qedhere \end{proof} The preceding lemma leads us to a nice geometric characterization of the reduced Cosserat shear--stretch energy which we find quite remarkable. It might even be useful for the case $n \geq 4$, although this is somewhat far-fetched. \begin{cor}[Characterization of $W_{1,0}^{\rm red}$ as a distance] \label{cor:wred10_distance} Let $n \geq 2$ and consider $F \in \GL^+(n)$ with singular values $\sigma_1 \geq \sigma_2 \geq \ldots \geq \sigma_n > 0$, not necessarily distinct. Then the reduced Cosserat shear--stretch energy $W_{1,0}^{\rm red}: \GL^+(n) \to \RPosZ$ admits the following characterization as a distance: \begin{equation} W^{\rm red}_{1,0}(F) \quad=\quad \dist_{\rm euclid}^2\big(F,\, \SO(n)\left({\boldsymbol{\mathbbm{1}}} + \so(n)\right)\big)\;. \end{equation} Here, $\dist_{\rm euclid}$ denotes the euclidean distance function.
\end{cor} \textit{Proof.} First note that \begin{align*} W^{\rm red}_{1,0}(F) &\coloneqq \min_{R\,\in\,\SO(n)}\norm{\sym(R^TF - {\boldsymbol{\mathbbm{1}}})}^2 \stackrel{(\text{Lem. \ref{lem:dist_ctb_technical}})}{=} \smash{\inf_{\substack{R\,\in\,\SO(n)\\ A\,\in\,\so(n)}}} \norm{R^TF - {\boldsymbol{\mathbbm{1}}} - A}^2\\ &= \inf_{\substack{R\,\in\,\SO(n)\\ A\,\in\,\so(n)}} \norm{R(R^TF - {\boldsymbol{\mathbbm{1}}} - A)}^2\;. \end{align*} The last step is justified by the orthogonal invariance of the Frobenius norm $\norm{\cdot}$. Carrying out the multiplications on the right hand side, we are led to the conclusion \begin{align*} \inf_{\substack{R\,\in\,\SO(n)\\ A\,\in\,\so(n)}} \norm{F - R\,{\boldsymbol{\mathbbm{1}}} - RA}^2 = \inf_{\substack{R\,\in\,\SO(n)\\ A\,\in\,\so(n)}} \norm{F - R({\boldsymbol{\mathbbm{1}}} + A)}^2 \eqqcolon \dist_{\rm euclid}^2(F, \SO(n)\left({\boldsymbol{\mathbbm{1}}} + \so(n)\right))\;.\tag*{\raisebox{-0.4cm}{$\blacksquare$}} \end{align*} \begin{figure}[t] \begin{center} \includegraphics[width=7.2cm,clip=false,trim=3.2cm 2cm 3.2cm 3.2cm]{wred_contour} \includegraphics[width=7.2cm,clip=false,trim=3.2cm 2cm 3.2cm 3.2cm]{wred_contour_cut} \end{center} \caption[Energy isosurfaces of {$W^{\rm red}_{1,0}$} in the space of singular values of $F$]{\label{fig:wred_isosurf} Energy isosurfaces of $W^{\rm red}_{1,0}$ considered as a function of the \emph{unordered} singular values $\sigma_1,\sigma_2,\sigma_3 > 0$ of $F \in \GL^+(3)$. The displayed contour levels are $0.1$, $0.4$ and $0.8$. On the right, we have removed a piece from the non-classical cylindrical parts (red) of the energy level $0.8$ which reveals the spherical shell of the classical part (green).
Note that a computation of these level surfaces via Monte Carlo minimization yields the same result (but at a much lower resolution).} \end{figure} \begin{rem}[Zero reduced energy and tension-compression asymmetry] \label{rem:zero_tca} A sharp look at~\leref{lem:wred10_singular} is sufficient to see that the $0$-energy level of $W^{\rm red}_{1,0}$ precisely corresponds to singular value tuples of the form $(s,s,1)$, $s \in [1,\infty)$.\footnote{Technically, our derivation of~\leref{lem:wred10_singular} does not extend to the case of multiple singular values, but the characterization as a distance function in~\coref{cor:wred10_distance} does not have this limitation.} In our~\figref{fig:wred_isosurf}, tuples of this type (and permutations thereof) correspond to the axes of the cylindrical sheets of the isosurfaces. Let us now consider $X = R({\boldsymbol{\mathbbm{1}}} + A)$, $R \in \SO(3)$, $A \in \so(3)$, which has the squared singular values $(\sigma_1^2,\sigma_2^2,\sigma_3^2) = \left(1 + \tfrac{1}{2}\hsnorm{A}^2,\, 1 + \tfrac{1}{2}\hsnorm{A}^2,\, 1\right)$. Clearly, such a matrix $X$ does not generate any reduced Cosserat shear--stretch energy at all -- in perfect accord with~\coref{cor:wred10_distance}. Geometrically, $U(X) \coloneqq \sqrt{X^TX}$ induces a homogeneous blow-up (i.e., a rescaling of arbitrary magnitude $\geq 1$) of the plane of maximal strain $\mathrm{P}^{\rm mp}(X)$ while preserving the distance of any given point to this plane. Furthermore, there is no possibility of similar energy savings in the compressive range for $F \in \GL^+(3)$ where the classical piece of $W^{\rm red}_{1,0}$ is active. It seems to us that this makes a good case for a quite remarkable type of tension-compression asymmetry. \end{rem} \subsection{Alternative criteria for the existence of non-classical solutions} For $\mu > \mu_c > 0$, i.e., for strictly positive $\mu_c > 0$, the singular radius satisfies $\sradmm \coloneqq \frac{2\mu}{\mu - \mu_c} > 2$.
We now define a quite similar constant, namely \begin{align} \zeta_{\mu,\mu_c} \coloneqq \sradmm - \rho_{1,0} = \frac{2\mu_c}{\mu - \mu_c} > 0\;. \end{align} Furthermore, we define the $\epsilon$-neighborhood of a set $\mathcal{X} \subseteq \mathbb{R}^{n\times n}$ relative to the euclidean distance function as $$ N_{\epsilon}(\mathcal{X}) \;\coloneqq\; \setdef{Y \in \mathbb{R}^{n \times n}}{\dist_{\rm euclid}(Y, \mathcal{X}) < \epsilon}\;. $$ \begin{figure}[h!] \begin{center} \scalebox{0.75}{ \begin{tikzpicture} \node (Pic) at (0,0) {\includegraphics[width=8cm]{SO3nbh}}; \node[scale=1.2] (Id) at (2.5,-0.55) {${\boldsymbol{\mathbbm{1}}}$}; \node[rotate=-33] (UEpsSO3) at (-2.1, -3.05) {$N_\epsilon(\SO(3))$}; \node[rotate=-25] (HalfEps) at (2.35, -1.85) {$\epsilon$}; \node[rotate=45] (Eps) at (2.6, 3.1) {$\delta$}; \node (SO3) at (-3.4,2.6) {$\SO(3)$}; \node (F) at (2.0, 2.6) {$F$}; \node (UEpsF) at (3.25,2) {$N_{\delta}(F)$}; \node (R) at (0.5,1.7) {$R$}; \node[rotate=12] (HR) at (1.2,2.2) {$F-R$}; \end{tikzpicture}} \end{center} \caption{Illustration of a euclidean $\epsilon$-neighborhood of $\SO(3) \subset \mathbb{R}^{3\times 3}$.} \end{figure} \begin{lem}[Classical $\SO(3)$-neighborhood for $\mu_c > 0$] Let $\mu > \mu_c > 0$, $F \in \GL^+(3)$ and $\zeta_{\mu,\mu_c} \coloneqq \frac{2\mu_c}{\mu - \mu_c} > 0$. Then we have the following inclusion \begin{align} N_{\frac{1}{\sqrt{2}}\zeta_{\mu,\mu_c}}(\SO(3)) \quad\subset\quad \mathrm{Dom}^\mathrm{C}_{\mu,\mu_c}\;. \end{align} In other words, for all $F \in \GL^+(3)$ satisfying $\dist^2_{\rm euclid}(F, \SO(3)) = \hsnorm{U - {\boldsymbol{\mathbbm{1}}}}^2 < \frac{1}{2}\zeta_{\mu,\mu_c}^2$, the polar factor $\polar{}$ is the unique minimizer of $W_{\mu,\mu_c}(R\,;F)$.
\begin{proof} Since $\dist_{\rm euclid}^2(F,\SO(3)) = \norm{U - {\boldsymbol{\mathbbm{1}}}}^2 =\sum_{i=1}^3 (\sigma_i - 1)^2$ by Grioli's theorem~\cite{Neff_Grioli14}, we find \begin{align*} \dist_{\rm euclid}^2(F,\SO(3)) < \frac{1}{2}\;\zeta_{\mu,\mu_c}^2 \quad\Longrightarrow&\quad 2\left((\sigma_1 - 1)^2 + (\sigma_2 - 1)^2 + (\sigma_3 - 1)^2\right) < \zeta_{\mu,\mu_c}^2\\ \quad\Longrightarrow&\quad 2\left((\sigma_1 - 1)^2 + (\sigma_2 - 1)^2\right) < \zeta_{\mu,\mu_c}^2\;. \end{align*} Further, $0 \leq (a - b)^2 = a^2 + b^2 - 2ab$ implies $2\,(a^2 + b^2) \geq a^2 + b^2 + 2ab$ and it follows that \begin{align} (\sigma_1 - 1)^2 + (\sigma_2 - 1)^2 + 2(\sigma_1 - 1)(\sigma_2 - 1) < \zeta_{\mu,\mu_c}^2\;. \end{align} Recognizing the left hand side as a square and taking square roots on both sides, we find \begin{align} \left((\sigma_1 - 1) + (\sigma_2 - 1)\right)^2 < \zeta^2_{\mu,\mu_c} \quad\Longrightarrow\quad \abs{(\sigma_1 - 1) + (\sigma_2 - 1)} < \zeta_{\mu,\mu_c}\;. \end{align} Inserting $\zeta_{\mu,\mu_c} \coloneqq \sradmm - 2$, we obtain $(\sigma_1 - 1) + (\sigma_2 - 1) \;<\; \rho_{\mu,\mu_c} - 2$. This implies $\sigma_1 + \sigma_2 < \rho_{\mu,\mu_c}$ and hence $F \in \mathrm{Dom}^\mathrm{C}_{\mu,\mu_c}$.\qedhere \end{proof} \end{lem} Note that the preceding proof can be quite easily adapted to the planar case $n = 2$ presented in~\cite{Fischle:2015:OC2D}. \begin{lem} \label{lem:disc:SL3_dom_nc} Let $F \in \SL(3)$, i.e., $\det{F} = \sigma_1\sigma_2\sigma_3 = 1$, where $\sigma_1 \geq \sigma_2 \geq \sigma_3 > 0$ are the ordered singular values of $F$, not necessarily distinct. Then \begin{align} \SL(3) \quad\subset\quad \mathrm{Dom}^\mathrm{NC}_{1,0}\;, \end{align} i.e., every $F \in \SL(3)$ satisfies the non-classical criterion. Equivalently, $\det{F} = 1$ implies the estimate $\sigma_1 + \sigma_2 \geq 2$.
\begin{proof} The arithmetic-geometric mean inequality shows that \begin{equation} \frac{\sigma_1 + \sigma_2 + \sigma_3}{3} \quad\geq\quad (\sigma_1\,\sigma_2\,\sigma_3)^\frac{1}{3} \quad=\quad 1 \;. \end{equation} It follows that $\sigma_1 + \sigma_2 \geq 3 - \sigma_3$ which implies the claim for $\sigma_3 \leq 1$. Due to the ordering $\sigma_1 \geq \sigma_2 \geq \sigma_3 > 0$, the case $\sigma_3 > 1$ would force $\det{F} = \sigma_1\sigma_2\sigma_3 > 1$, contradicting our assumption $\det{F} = 1$.\qedhere \end{proof} \end{lem} \begin{rem} \label{rem:disc:SL3_dom_nc_strict} If we make the stronger assumption $\sigma_1 > \sigma_2 > \sigma_3 > 0$, we obtain a strict inequality $\sigma_1 + \sigma_2 > 2$. In that case, $F \in \mathrm{Dom}^\mathrm{NC}_{1,0} \setminus \mathrm{Dom}^\mathrm{C}_{1,0}$ is strictly non-classical. \end{rem} \begin{cor} \label{cor:SL3_dom_nc_strict} Let $\mu > 0$, $F \in \SL(3)$ and assume that $\sigma_1 > \sigma_2 > \sigma_3 > 0$. Then \begin{equation} F \quad\in\quad \mathrm{Dom}^\mathrm{NC}_{\mu,0}\setminus \mathrm{Dom}^\mathrm{C}_{\mu,0}\;, \end{equation} i.e., the minimizers $\rpolar_{\mu,0}^\pm(F) \neq \polar{}$ are \emph{strictly} non-classical. \begin{proof} Since $\lambda_{\mu,0} = 1$, it follows that $\widetilde{F}_{\mu,0} = F$. Further, $\rho_{\mu,0} = \rho_{1,0}$. Thus, we are in the setting of the preceding~\leref{lem:disc:SL3_dom_nc} for the case where the inequality is strict, see~\remref{rem:disc:SL3_dom_nc_strict}. \end{proof} \end{cor} \subsection{Application} \label{sec:discussion:application} Let us now give a short application of our previous findings. We consider a so-called volumetric-isochoric split for the geometrically nonlinear Cosserat shear--stretch energy.
Note that this material model appears in a variety of contexts, see, e.g.,~\cite{Eremeyev:2012:FMM,Boehmer:2015:SS,Neff:2015:EGNC,Lankeit:2015:IC,Neff_Muench_transverse_cosserat08,Neff_Muench_magnetic08,Neff_Cosserat_plasticity05,Forest:1997:CSC,Sansour:1998:TEVC,Sansour:2008:NCC}, and, recently~\cite{Skatulla:2013:CCMS,Matteo:2015:MMD,Blesgen:2013:DPF,Blesgen:2014:DPC}. Further, similar expressions for the strain energy have been considered in the context of plate and shell theories, see, e.g.,~\cite{Pietraszkiewicz:2009:VPN,Pietraszkiewicz04,Pietraszkiewicz08,Neff_plate04_cmt,Birsan_Neff_MMS_2013,Birsan_Neff_JElast2013,Sander:2014:NGNCS}. Let us introduce the isochoric projection $F \mapsto F_{\rm iso} \coloneqq \frac{F}{\det{F}^{1/3}} \in \SL(3)$ of the deformation gradient $F \in \GL^+(3)$ which can also be applied to $\overline{U} \coloneqq R^TF$. With this notation, we obtain \begin{align*} W(\overline{U}) &= \underbrace{\mu\, \hsnorm{\sym\left(\frac{\overline{U}}{\det{\overline{U}}^{1/3}} - {\boldsymbol{\mathbbm{1}}}\right)}^2 \;+\; \mu_c\,\hsnorm{{\rm skew}\left(\frac{\overline{U}}{\det{\overline{U}}^{1/3}} - {\boldsymbol{\mathbbm{1}}}\right)}^2}_{\text{``Cosserat shear--stretch energy''}} \;+\; \underbrace{\vphantom{\hsnorm{\sym\left(\frac{\overline{U}}{\det{\overline{U}}^{1/3}} - {\boldsymbol{\mathbbm{1}}}\right)}^2} h(\det{\overline{U}})}_{\text{``volumetric contribution''}}\\[0.2cm] &= \mu\, \hsnorm{\sym\left(R^T\frac{F}{\det{F}^{1/3}} - {\boldsymbol{\mathbbm{1}}}\right)}^2 \;+\; \mu_c\,\hsnorm{{\rm skew}\left(R^T\frac{F}{\det{F}^{1/3}} - {\boldsymbol{\mathbbm{1}}}\right)}^2 \;+\; h(\det{R^TF})\\[0.2cm] &= \mu\, \hsnorm{\sym\left(R^TF_{\rm iso} - {\boldsymbol{\mathbbm{1}}}\right)}^2 \;+\;
\mu_c\,\hsnorm{{\rm skew}\left(R^TF_{\rm iso} - {\boldsymbol{\mathbbm{1}}}\right)}^2 \;+\; h(\det{F})\;. \end{align*} The results of the previous subsections allow us to determine the optimal Cosserat rotations for the split energy $W(\overline{U}) \;=\; W(R^TF) \;=\; \wmm(R\,;F_{\rm iso}) \;+\; h(\det{F})$. Note first that the additional volumetric contribution $h(\det{F})$ penalizes volume change by a scalar function $h: \RPos \to \RPosZ$ which is constant with respect to $R \in \SO(3)$. Therefore, this formulation still gives rise to the same optimal Cosserat rotations $$\rpolar_{\mu,\mu_c}(F_{\rm iso}) \coloneqq \argmin{R\,\in\,\SO(3)}{\wmm(R\,;F_{\rm iso})} = \argmin{R\,\in\,\SO(3)}{\Big\{\wmm(R\,;F_{\rm iso})\;+\; h(\det{F})\Big\}}\;.$$ We can now make an interesting observation. To this end, let $\epsilon > 0$ and consider diagonal matrices of the type \begin{equation*} D_\epsilon \;\coloneqq\; \begin{pmatrix} \sradmm -\, 1 + \epsilon & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & (\sradmm -\, 1 + \epsilon)^{-1} \end{pmatrix} \;\in\; \SL(3)\;. \end{equation*} The required ordering $\sigma_1 > \sigma_2 > \sigma_3 > 0$ follows from $\sradmm \coloneqq \frac{2\mu}{\mu - \mu_c} \geq \frac{2\mu}{\mu} = 2$ and holds for the entire non-classical parameter range $\mu > \mu_c \geq 0$. Obviously, we have \begin{equation*} \sigma_1 + \sigma_2 = \sradmm + \epsilon > \sradmm \quad\Longrightarrow\quad D_\epsilon \in \mathrm{Dom}^\mathrm{NC}_{\mu,\mu_c} \setminus \mathrm{Dom}^\mathrm{C}_{\mu,\mu_c}\;. \end{equation*} Hence, the intersection $\SL(3) \cap \left(\mathrm{Dom}^\mathrm{NC}_{\mu,\mu_c} \setminus \mathrm{Dom}^\mathrm{C}_{\mu,\mu_c}\right) \neq \emptyset$ is never empty since it contains $D_\epsilon$ for all $\epsilon > 0$.
Furthermore, the associated optimal Cosserat rotations are \emph{strictly} non-classical, i.e., $\rpolar^\pm_{\mu,\mu_c}(D_\epsilon) \neq \polar(D_\epsilon) = {\boldsymbol{\mathbbm{1}}}$. Hence, in order to assure that there can be no strictly non-classical optimal Cosserat rotations (for whatever reason), one has to consider material parameters from the classical parameter range $\mu_c \geq \mu > 0$. In this case the Cosserat couple modulus $\mu_c$ dominates the Lam\'e shear modulus $\mu$ and Grioli's theorem assures that the polar factor $\polar(F)$ is always uniquely optimal~\cite{Neff_Grioli14}. In the distinguished limit case $\mu_c = 0$, the volumetric-isochoric split precludes the previously observed tension-compression asymmetry. In this particular scenario, \coref{cor:SL3_dom_nc_strict} shows that the optimal rotations are \emph{always} non-classical. Since no bifurcation of the optimal rotations occurs, there can be no qualitatively different energetic response under tension and compression for $\mu_c = 0$; cf. also~\cite{Neff_ZAMM05} for a discussion of other implications of a zero Cosserat couple modulus. Last but not least, we want to mention that our proposed explicit formulae for optimal Cosserat rotations may also lead to improved stability and performance in full scale 3D nonlinear finite element computations for media with rotational microstructure. We expect them to be especially useful for the highly interesting and numerically challenging case of a material with small internal length scale $L_{\rm c} > 0$. If, in addition, the volumetric contribution is independent of the rotation (see above), then the optimal Cosserat rotations $\rpolar^\pm_{\mu,\mu_c}(F)$ proposed in~\coref{cor:rpolar_formula} can be expected to be ideal candidates for the initialization of the Newton iterations for the field of microrotations $R: \Omega \subset \mathbb{R}^n \to \SO(n)$, $n = 2,3$.
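To make the last point concrete, the piecewise formula of~\coref{cor:rpolar_formula} can be implemented in a few lines. The following Python sketch is illustrative only; the helper names and the sampling-based optimality check are ours. It selects the classical or non-classical branch from the singular values of the rescaled deformation gradient and checks against sampled competitor rotations that the returned rotation is energetically optimal.

```python
import numpy as np

def wmm(R, F, mu, mu_c):
    """Cosserat shear-stretch energy W_{mu,mu_c}(R; F)."""
    E = R.T @ F - np.eye(3)
    S, A = 0.5 * (E + E.T), 0.5 * (E - E.T)
    return mu * np.sum(S * S) + mu_c * np.sum(A * A)

def rpolar(F, mu, mu_c, sign=1.0):
    """One of the energy-minimizing rotations rpolar^{+/-}_{mu,mu_c}(F),
    assuming mu > mu_c >= 0 and distinct singular values of F."""
    lam = mu / (mu - mu_c)              # induced scaling parameter
    W, s, Vt = np.linalg.svd(F)         # F = W diag(s) Vt, s descending
    if np.linalg.det(W) < 0:            # make both orthogonal factors proper
        W[:, -1] *= -1.0
        Vt[-1, :] *= -1.0
    Rp = W @ Vt                         # polar factor polar(F)
    st = s / lam                        # singular values of rescaled gradient
    if st[0] + st[1] <= 2.0:            # classical domain: polar factor optimal
        return Rp
    c = 2.0 / (st[0] + st[1])           # cos of the relative rotation angle
    sb = sign * np.sqrt(1.0 - c * c)
    Rhat = np.array([[c, -sb, 0.0], [sb, c, 0.0], [0.0, 0.0, 1.0]])
    Q = Vt.T                            # eigenbasis of U = Q D Q^T
    return Rp @ Q @ Rhat @ Q.T

rng = np.random.default_rng(0)
mu, mu_c = 2.0, 1.0                     # rho = 4, lambda = 2
F = np.diag([6.0, 2.0, 1.0])            # sigma_1 + sigma_2 = 8 > rho: non-classical
e_opt = wmm(rpolar(F, mu, mu_c), F, mu, mu_c)
for _ in range(20000):                  # sampled competitor rotations
    Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
    Q = Q * np.sign(np.diag(R))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1.0
    assert wmm(Q, F, mu, mu_c) >= e_opt - 1e-9
```

Such a routine costs a single SVD per evaluation and could serve as the suggested initial guess for the microrotation field in an iterative solver.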
\countres \section{Dissection of critical point structure and computational validation of optimality} \label{sec:validation} \begin{figure}[tb] \begin{center} \includegraphics[width=3cm]{01-rhombic} \includegraphics[width=3cm]{02-rhombic} \includegraphics[width=3cm]{03-rhombic} \includegraphics[width=3cm]{04-rhombic}\\ \end{center} \caption[A rhombic dodecahedron in the space of singular values]{\label{fig:dodecahedron} A rhombic dodecahedron placed in the space of unordered singular values $(\sigma_1,\sigma_2,\sigma_3) \in \mathbb{R}^3$ of $F \in \GL^+(3)$ gives rise to a beautiful geometric characterization of the classical and non-classical domains $\mathrm{Dom}^\mathrm{C}_{1,0}$ and $\mathrm{Dom}^\mathrm{NC}_{1,0}$. Pick a face and displace it in the normal direction while scaling it by its distance to the origin. This creates a convex cone with the scaled faces as cross-sections, which intersects the polytope. The part of the cone inside the polytope corresponds to singular values $(\sigma_1,\sigma_2,\sigma_3) \in \mathbb{R}^3$ in $\mathrm{Dom}^\mathrm{C}_{1,0}$. The exterior part corresponds to $\mathrm{Dom}^\mathrm{NC}_{1,0}$. On the picked face itself, i.e., on $\mathrm{Dom}^\mathrm{C}_{1,0} \cap \mathrm{Dom}^\mathrm{NC}_{1,0}$, the two branches coincide. (This is how we discovered the non-classical bifurcation behavior of $\rpolar^\pm_{1,0}(F)$ in dimension $n\!=\!3$.) } \end{figure} We recall that our primary objective for the present work is to derive a formula (or algorithm) which allows us to compute the set of optimal Cosserat rotations $\rpolar_{\mu,\mu_c}(F) \subset \SO(3)$, i.e., the rotations which minimize the Cosserat shear--stretch energy $\wmm(R\,;F)$ for given $F \in \GL^+(3)$ and weights $(\mu,\mu_c)$ in the non-classical parameter range $\mu > \mu_c \geq 0$. 
In the first two sections of this contribution, we have hopefully convinced our avid reader that it suffices to solve~\probref{prob:relative_rhat} in order to determine the optimal Cosserat rotations which then solve our original~\probref{intro:prob_wmm}. However, in order to simultaneously cross-validate our theoretical derivation (this includes the parameter reduction presented in Part I~\cite[Lem.~2.2, p.~4]{Fischle:2015:OC2D}), we have based our final validation on~\probref{intro:prob_wmm}. This bypasses all simplification steps which we have used in order to derive the formula for $\rpolar_{\mu,\mu_c}^\pm(F)$ proposed in~\coref{cor:rpolar_formula}, but is costly due to the large parameter space. \subsection{Interactive analysis of the critical point structure} The solution of the Euler--Lagrange equations~\eqref{eq:EL_quat} with the computer algebra package~\texttt{Mathematica}~returns the $32$ critical points compiled in Appendix~\ref{sec:appendix}. Note that \texttt{Mathematica}~automatically verifies that the obtained symbolic expressions are indeed solutions for~\probref{prob:lagrange}. These symbolic solutions give rise to $32$ critical branches $\hat{q}^{(i)}: \mathrm{Dom}(\hat{q}^{(i)}) \to \S^3$, $1 \leq i \leq 32$. Note that we can discard $16$ of the branches right away since they are redundant. This is due to the antipodal identification of quaternions under the covering map $\pi: \S^3 \to \SO(3)$. The critical branches are associated with the \emph{lifted} Cosserat shear--stretch energy formulation $\widehat{W}^\sharp_{\mu,\mu_c}(\hat{q}\,;D)$. In particular, they project to \emph{relative} rotations parametrized by a diagonal matrix $D = \diag(\sigma_1,\sigma_2,\sigma_3)$. In what follows, we identify the space of \emph{unordered} singular values $(\sigma_1, \sigma_2, \sigma_3)$ of $F \in \GL^+(3)$ with $\mathbb{R}^3$. 
Further, we take the liberty to identify diagonal matrices $D = \diag(\sigma_1,\sigma_2,\sigma_3)$ with points in $\mathbb{R}^3$ and shall even write $D \in \mathbb{R}^3$. This allows us to write $\mathrm{Dom}(\hat{q}^{(i)}) \subset \mathbb{R}^3$, $i = 1,\ldots,16$, for the maximal domain of definition of the $i$-th critical branch.\footnote{Technically, these maximal domains are implicitly defined by the requirement that the critical coefficient tuple computed by \texttt{Mathematica}~is real-valued at the point $D \in \mathbb{R}^3$, i.e., $(w^{(i)},x^{(i)},y^{(i)},z^{(i)}) \in \mathbb{R}^4$.} If the solution set is complete (cf. Appendix~\ref{sec:appendix} for a discussion), then, up to a set of measure zero, we must have\footnote{Equivalently, for a given $F \in \GL^+(3)$ with distinct singular values, at least one of the critical branches $\hat{q}^{(i)}: \mathrm{Dom}(\hat{q}^{(i)}) \to \S^3$ must be energy-minimizing since the Cosserat shear--stretch energy attains its minimum on $\SO(3)$.} $$\bigcup_{1\leq i \leq 16} \mathrm{Dom}(\hat{q}^{(i)}) = \mathbb{R}^3\;.$$ Initially, still stumbling in the dark, we compared the critical branches by evaluating the realized energy levels $\widehat{W}^\sharp_{1,0}(\hat{q}^{(i)}\,; D)$, $i = 1,\ldots,16$, for random tuples $(\sigma_1, \sigma_2, \sigma_3) \in \mathbb{R}^3$. This allowed us to construct a three-dimensional map for the space of unordered singular values by associating the index set of the energy-minimizing critical branches at $D = (\sigma_1, \sigma_2, \sigma_3)$ with the point $D \in \mathbb{R}^3$ in the parameter space. We then mapped each of these index sets to a unique color and subsequently explored the parameter space visually. This three-dimensional ``optimal branch map'' allowed us to isolate the energy-minimizing critical branches corresponding to $\rpolar^\pm_{1,0}(F)$; cf.~\figref{fig:dodecahedron} which describes the structure that appeared. 
Furthermore, we compared the following minimal energy levels \begin{equation} \min_{1 \leq i \leq 16} \widehat{W}^\sharp_{1,0}(\hat{q}^{(i)}(D)\,; D)\;, \quad\text{and}\quad \min_{\hat{q}\,\in\S^3} \widehat{W}^\sharp_{1,0}(\hat{q}\,; D)\;, \end{equation} using a Monte Carlo approximation of the right-hand side (which we describe in detail in the next subsection). This allowed us to detect discrepancies which can only arise due to an incomplete set of critical branches. Note that during our whole investigation, we never encountered any such discrepancy. This is a strong indication that the set of critical points computed by \texttt{Mathematica}~is in fact complete, as one would expect. Our next step is to turn our previously described approach into a more systematic computational validation of the optimality of the proposed candidates $\rpolar^\pm_{\mu,\mu_c}(F)$. \subsection{Validation of optimality by Monte Carlo statistical sampling} We now describe a thorough computational approach for the validation of the optimality of our proposed candidate formula $\rpolar^\pm_{\mu,\mu_c}(F)$. This approach relies on a well-known, rather simplistic, but highly useful (in low dimensions) method for the generation of uniformly distributed random rotations due to~\cite{Shoemake:1992:URR}. In what follows, we let $K \coloneqq [-1,1]^4 \subset \mathbb{R}^4$ denote a hypercube of sidelength $2$ centered about the origin and define $\mathbb{B}^4 \coloneqq \setdef{x \in K}{\norm{x} \leq 1}$, i.e., the closed unit ball in $\mathbb{R}^4$. Further, we let $X_K$ denote a uniformly distributed random variable with values in $K$ and introduce $X_{\mathbb{B}^4}$ as the restriction of $X_K$ to the unit ball. Then, $X_{\mathbb{B}^4} \coloneqq \restrict{X_K}{\mathbb{B}^4}$ is uniformly distributed. 
The restriction can be defined by rejection sampling, i.e., we reject all realizations in $K \setminus \mathbb{B}^4$ which lie outside of the ball, but accept the first realization inside of $\mathbb{B}^4 \subset K$; see~\figref{fig:rejection_sampling} for an example in the plane. \begin{theo}[Rejection sampling for $\S^3$] The random variable $X_{\S^3} \coloneqq \frac{X_{\mathbb{B}^4}}{\norm{X_{\mathbb{B}^4}}}$ obtained by normalization is uniformly distributed on $\S^3$ with respect to the Lebesgue measure on the sphere, which we denote by $\,{\rm dV} _{\S^3}$. \end{theo} \begin{proof} This is a standard method which performs quite well in low space dimensions, see, e.g.,~\cite{Muller:1959:GPS,Hicks:1959:GPS} or~\cite{Marsaglia:1972:CPS} and references therein. \end{proof} \begin{figure}[t] \begin{center} \includegraphics[width=3.9cm]{rej_sampling_01} \qquad \includegraphics[width=3.9cm]{rej_sampling_02} \qquad \includegraphics[width=3.9cm]{rej_sampling_03} \end{center} \caption[Uniform sampling of $\S^1$ via rejection sampling]{ \label{fig:rejection_sampling} Uniform sampling of the circle $\S^1$ by rejection sampling. First, the unit square $[-1,1]^2$ is uniformly sampled (here, with $2000$ samples). Then all samples $p$ with $\norm{p} > 1$ in the red domain are rejected. Finally, the remaining samples in the unit disk are normalized, i.e., $p \mapsto \frac{p}{\norm{p}} \in \S^1$. This yields a uniform distribution on the boundary circle $\S^1$. Although this approach can be generalized to higher-dimensional spheres, its performance does not scale to high dimensions.} \end{figure} We recall that $\S^3$ is a Lie group double cover of $\SO(3)$. A uniform distribution on a compact Lie group is defined in terms of the (normalized) Haar measure of the group, see, e.g.,~\cite[p.~9]{Applebaum:2014:PCLG}. Such a measure is invariant with respect to the left (or right) group multiplication and is unique up to a constant multiple. 
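The rejection sampling scheme described above translates directly into code. The following standard-library Python sketch (our illustration, not the authors' implementation) samples the hypercube $[-1,1]^4$, rejects realizations outside the unit ball, and normalizes the accepted points onto $\S^3$:

```python
import math
import random

random.seed(42)  # fixed seed for reproducibility of this illustration

def sample_unit_quaternion():
    """One uniform sample on S^3: rejection sampling in the cube [-1,1]^4,
    followed by normalization onto the sphere."""
    while True:
        x = [random.uniform(-1.0, 1.0) for _ in range(4)]   # uniform in K
        r = math.sqrt(sum(c * c for c in x))
        if 0.0 < r <= 1.0:                 # accept only points in the ball B^4
            return [c / r for c in x]      # normalize: uniform on S^3

samples = [sample_unit_quaternion() for _ in range(1000)]
norms = [math.sqrt(sum(c * c for c in q)) for q in samples]
assert all(abs(n - 1.0) < 1e-12 for n in norms)   # all samples lie on S^3
```

The acceptance rate is the volume ratio $\mathrm{vol}(\mathbb{B}^4)/\mathrm{vol}(K) = \pi^2/32 \approx 0.31$, which is why the method is practical in low dimensions but degrades rapidly as the dimension grows.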
For an introduction to the Haar measure on Lie groups, see, e.g.,~\cite[p.~179-194]{Duistermaat:2012:LG}. It is well-known that the Lebesgue measure $\,{\rm dV} _{\S^3}$ is a bi-invariant (non-normalized) Haar measure for the Lie group of unit quaternions. \begin{theo}[Uniformly distributed random variables on $\SO(3)$] \label{theo:rejection_sampling_S3} Let $X_{\S^3}$ be a uniformly distributed random variable on $\S^3$ and $\pi: \S^3 \to \SO(3)$ the covering homomorphism defined in~\eqref{eq:pi}. Then the random variable $X_{\SO(3)} \coloneqq \pi \circ X_{\S^3}$ is uniformly distributed with respect to the (normalized) Haar measure on $\SO(3)$. \end{theo} \begin{proof} This is well-known, see, e.g.,~\cite{Shoemake:1992:URR}. \end{proof} Essentially, the covering homomorphism $\pi:\S^3 \to \SO(3)$ induces a bi-invariant Riemannian metric on $\SO(3)$ via the pullback $\scalprod{\cdot}{\cdot}_{\SO(3)} \coloneqq (\pi^{-1})^*\scalprod{\cdot}{\cdot}_{\S^3}$. Note that the bi-invariant metric on $\SO(3)$ is unique up to scalar multiples since the Lie algebra $\so(3)$ is simple.\footnote{Note that the Lie algebra $\so(n)$ is not simple for the exceptional dimensions $n = 2,4$.} With respect to the pullback metric, $\pi$ is a local isometry. Further, since $\pi$ is a covering map, the pullback of the invariant surface volume measure on $\S^3$ given by $(\pi^{-1})^*\,{\rm dV} _{\S^3}$ induces an invariant measure, i.e., a Haar measure, on $\SO(3)$. On a side note, the use of Riemannian metrics and geodesics on (matrix) Lie groups in applications is currently an active research area, since the computational costs of geometric methods are no longer prohibitive. For some interesting recent applications to strain measures in mechanics, see, e.g.,~\cite{Neff:2015:GLS}. Another interesting recent use case for geodesics on the group of unit quaternions is the simulation of eye movements, see~\cite{Novelia:2015:GSO}. 
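The covering homomorphism $\pi$ can be evaluated with the standard quaternion-to-rotation-matrix formula. The sketch below is our own illustration (sign conventions may differ from the paper's definition of $\pi$ in~\eqref{eq:pi}); it checks that the image is orthogonal with determinant $+1$ and that the antipodal quaternions $\pm q$ project to the same rotation, reflecting the two-to-one covering:

```python
import math

def pi(q):
    """Covering map S^3 -> SO(3): unit quaternion (w, x, y, z) to a rotation
    matrix (standard formula; conventions may differ from the paper's)."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

n = math.sqrt(1 + 4 + 9 + 16)
q = (1/n, 2/n, 3/n, 4/n)                   # an arbitrary unit quaternion
R = pi(q)
Rm = pi(tuple(-c for c in q))              # image of the antipode -q

# R^T R = 1 (orthogonality) and pi(q) = pi(-q) (two-to-one covering)
for i in range(3):
    for j in range(3):
        rtr = sum(R[k][i] * R[k][j] for k in range(3))
        assert abs(rtr - (1.0 if i == j else 0.0)) < 1e-12
        assert abs(R[i][j] - Rm[i][j]) < 1e-12
```

Composing this map with the rejection sampler above yields the Haar-uniform random rotations on $\SO(3)$ used in the validation.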
We now briefly describe the sampling strategies for the computational validation. \begin{rem}[Sampling the group of rotations $\SO(3)$] Based on the method described in~\theref{theo:rejection_sampling_S3}, we have computed a set of samples $\mathcal{Q} \subset \S^3$ consisting of $4{,}629{,}171$ uniformly distributed unit quaternions. \end{rem} \begin{rem}[Sampling strategy for $F \in \GL^+(3)$] We have generated a stream of matrices with uniformly distributed coefficients $F_{ij} \in [-\frac{\sradmm}{2},\frac{\sradmm}{2}]$, $1 \leq i,j \leq 3$ and discarded all samples with $\det{F} < 0$.\footnote{We have also discarded matrices with non-simple singular values, but since these form a set of measure zero this case never arose, as expected.} From the remaining samples, we have selected the first $1{,}000$ samples in $\mathrm{Dom}^\mathrm{C}_{\mu,\mu_c}$ and $\mathrm{Dom}^\mathrm{NC}_{\mu,\mu_c}$, respectively, and collected them in two sets $\mathcal{F}^{\mathrm{C}}_{\mu,\mu_c}$ and $\mathcal{F}^{\mathrm{NC}}_{\mu,\mu_c}$. \end{rem} \begin{rem}[Limitations of the sampling strategy for $F \in \GL^+(3)$] For performance reasons our sampling strategy takes our expectations into account right from the outset. This can be seen as a limitation. Further, our validation is inherently limited to compact subsets of $\GL^+(3)$. However, this particular strategy heuristically produces a reasonable resolution for parameters $F \in \GL^+(3)$ in the vicinity of the branching condition $\sigma_1 + \sigma_2 = \sradmm$ (cf. our~\figref{fig:hat{beta}MC_NC}). Based on the predictions of the analysis of our proposed optimal Cosserat rotations presented in~\secref{sec:discussion}, this is without doubt the most interesting parameter sector. \end{rem} We are now finally in a position to present our computational validation strategy for the global optimality of the formula $\rpolar^\pm_{\mu,\mu_c}(F)$ stated in~\coref{cor:rpolar_formula}; cf. 
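The sampling strategy for $F \in \GL^+(3)$ and the subsequent split into the classical and non-classical domains can be sketched as follows. This is a hedged illustration using NumPy (not the authors' code); the sample count and seed are arbitrary, and we use the condition $\sigma_1 + \sigma_2 \lessgtr \sradmm$ on the two largest singular values to separate the domains, as in the text:

```python
import numpy as np

rng = np.random.default_rng(0)          # arbitrary seed for reproducibility
mu, mu_c = 1.0, 0.0                     # non-classical parameter choice
srad = 2 * mu / (mu - mu_c)             # branching value for sigma_1 + sigma_2

F_classical, F_nonclassical = [], []
for _ in range(20000):
    # coefficients uniformly distributed in [-srad/2, srad/2]
    F = rng.uniform(-srad / 2, srad / 2, size=(3, 3))
    if np.linalg.det(F) <= 0:           # keep only F in GL+(3)
        continue
    s = np.linalg.svd(F, compute_uv=False)   # singular values, descending
    if s[0] + s[1] <= srad:
        F_classical.append(F)           # classical domain Dom^C
    else:
        F_nonclassical.append(F)        # non-classical domain Dom^NC

# both domains are populated, including samples near the branching surface
assert len(F_classical) > 0 and len(F_nonclassical) > 0
```

Since the coefficient range scales with $\sradmm$, a sizeable fraction of the accepted samples lands near the branching surface $\sigma_1 + \sigma_2 = \sradmm$, which is the most interesting parameter sector.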
also~\remref{rem:rpolar_class} and~\remref{rem:rpolar_nonclass} for a short review of the geometric interpretation of the optimal Cosserat rotations. It is important to note that the presented validation scheme is based on the lift $$W^\sharp_{\mu,\mu_c}(q\,;F)\;\coloneqq \wmm(\pi(q)\,;F)\;.$$ This formulation is based on the \emph{original} Cosserat shear--stretch energy $\wmm(R\,;F)$, precisely as it appears in the statement of~\probref{intro:prob_wmm}. Clearly, this allows us to validate the consistency of the simplifications leading us to~\probref{prob:relative_rhat} in~\secref{sec:intro}.\footnote{Note that this also extends to our use of the parameter reduction~\cite[Lem.~2.2, p.~4]{Fischle:2015:OC2D}.} This approach also implies that the image of the covering homomorphism $\pi: \S^3 \to \SO(3)$ corresponds to an \emph{absolute} rotation $\pi(q) = R$. Let us now present our \begin{compval} Let the sample sets $\mathcal{Q} \subset \S^3$ and $\mathcal{F}_{\mu,\mu_c} \coloneqq \mathcal{F}_{\mu,\mu_c}^{\mathrm{C}} \cup \mathcal{F}_{\mu,\mu_c}^{\mathrm{NC}} \subset \GL^+(3)$ be as previously defined and set the numerical tolerance $\mathrm{tol} = 10^{-4}$. Then for all $\mu > 0$ and $\mu_c \geq 0$ (which we have tested) the following relation holds: \begin{equation} \forall F \in \mathcal{F}_{\mu,\mu_c}:\quad \pi\left(\argmin{q \in \mathcal{Q}}{W^\sharp_{\mu,\mu_c}(q\,;F)}\right) \quad=_{\mathrm{tol}}\quad \rpolar^\pm_{\mu,\mu_c}(F)\;, \end{equation} where $R_1 =_\mathrm{tol} R^\pm_2 \Longleftrightarrow \min_{\pm}\hsnorm{R_1 - R^\pm_2} < \mathrm{tol}$, $R_1,R^\pm_2 \in \SO(3)$. \end{compval} The following procedure is equivalent, but more explicit. 
It also corresponds more closely to our actual implementation: \begin{equation} \begin{aligned} \forall F \in \mathcal{F}_{\mu,\mu_c}^{\mathrm{C}}: \quad \polar(F)^T \cdot \pi\left(\argmin{q \in \mathcal{Q}}{W^\sharp_{\mu,\mu_c}(q\,;F)}\right) &\quad=_{\mathrm{tol}}\quad \{{\boldsymbol{\mathbbm{1}}}\} \;,\\ \forall F \in \mathcal{F}_{\mu,\mu_c}^{\mathrm{NC}}:\quad \polar(F)^T \cdot \pi\left(\argmin{q \in \mathcal{Q}}{W^\sharp_{\mu,\mu_c}(q\,;F)}\right) &\quad\equiv_\mathrm{tol}\quad [\hat{\beta}^\pm_{\mu,\mu_c}(F), q_3(F)]\;. \end{aligned} \end{equation} In order to clarify the meaning of the notation $\equiv_\mathrm{tol}$, let $[\hat{\beta}^\pm_{\mu,\mu_c}(F), q_3(F)] \equiv R^\pm_2 \in \SO(3)$, then $R_1 \equiv_\mathrm{tol} [\hat{\beta}^\pm_{\mu,\mu_c}(F), q_3(F)] \Longleftrightarrow R_1 =_\mathrm{tol} R_2^\pm$; cf.~also~\remref{rem:rpolar_nonclass}. \begin{rem}[Additional verification by a Riemannian Newton--scheme (W. M\"uller)] For some selected values of $F \in \GL^+(3)$, W. M\"uller (then at Karlsruhe Institute of Technology~\cite{Mueller09_diss}) has verified that the proposed formula $\rpolar^\pm_{\mu,\mu_c}(F)$ is a critical point for the Cosserat shear--stretch energy. He successfully approximated our proposed optimal Cosserat rotations up to machine accuracy by using a Riemannian Newton--scheme for the solution of the Euler--Lagrange equations on $\SO(3)$. Perturbations of the starting values of the Newton--iteration did not indicate the existence of alternative solutions realizing lower energy levels. \end{rem} In~\figref{fig:hat{beta}MC_NC}, we present multiple plots of the energy-minimizing relative rotation angles $\hat{\beta}_{\mu, \mu_c}$ obtained by stochastic (Monte Carlo) minimization. We show plots for different values of ${\mu, \mu_c}$. A corresponding, in itself rather uninteresting, plot for the classical limit case $(\mu,\mu_c) = (1,1)$ is depicted in~\figref{fig:hat{beta}MC_C} for direct comparison. 
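The classical half of the validation scheme can be reproduced in a few lines. The following self-contained Python/NumPy sketch is our own illustration, not the authors' implementation; it assumes the shear--stretch energy in the form $W_{\mu,\mu_c}(R\,;F) = \mu\,\lVert\mathrm{sym}(R^TF - {\boldsymbol{\mathbbm{1}}})\rVert^2 + \mu_c\,\lVert\mathrm{skew}(R^TF - {\boldsymbol{\mathbbm{1}}})\rVert^2$ and brute-force minimizes it over uniformly sampled rotations for the classical choice $\mu_c = \mu$, checking that no sampled rotation beats the polar factor, in accordance with Grioli's theorem:

```python
import numpy as np

rng = np.random.default_rng(3)  # arbitrary seed for this illustration

def energy(R, F, mu, mu_c):
    """Assumed Cosserat shear-stretch energy:
    mu*||sym(R^T F - 1)||^2 + mu_c*||skew(R^T F - 1)||^2 (Frobenius norms)."""
    X = R.T @ F - np.eye(3)
    sym, skew = 0.5 * (X + X.T), 0.5 * (X - X.T)
    return mu * np.sum(sym * sym) + mu_c * np.sum(skew * skew)

def random_rotation():
    """Uniform rotation from a uniform unit quaternion (rejection sampling)."""
    while True:
        q = rng.uniform(-1.0, 1.0, size=4)
        n = np.linalg.norm(q)
        if 0.0 < n <= 1.0:
            w, x, y, z = q / n
            return np.array([
                [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
            ])

# classical range mu_c >= mu: Grioli's theorem predicts polar(F) is optimal
mu, mu_c = 1.0, 1.0
F = rng.uniform(-1.0, 1.0, size=(3, 3))
if np.linalg.det(F) < 0:
    F[:, 0] = -F[:, 0]                      # flip one column to land in GL+(3)

Uf, s, Vt = np.linalg.svd(F)
polar = Uf @ Vt                             # polar factor of F (det = +1 here)
E_polar = energy(polar, F, mu, mu_c)
E_best = min(energy(random_rotation(), F, mu, mu_c) for _ in range(20000))

assert E_polar <= E_best + 1e-9             # no sampled rotation beats polar(F)
```

In the non-classical range $\mu > \mu_c \geq 0$, the same brute-force minimization yields rotations with a nonzero relative rotation angle with respect to $\polar(F)$ whenever $\sigma_1 + \sigma_2 > \sradmm$, which is exactly the behaviour displayed in~\figref{fig:hat{beta}MC_NC}.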
Both figures match our expectations raised by~\figref{fig:branchDiag} very well, and the resolution improves with higher sample counts. It is instructive to compare these figures with the optimal relative rotation angles for \emph{planar} Cosserat rotations presented in Part I of the present contribution, see~\cite{Fischle:2015:OC2D}. \begin{figure} \parbox{0.80\textwidth}{ \begin{tikzpicture} \node (Pic) at (0,0) {\includegraphics[width=11cm]{scatter_1_1}}; \end{tikzpicture}} \parbox{0.19\textwidth}{ Parameters: $(\mu,\mu_c) = (1, 1)$\\ $\rho_{1,1} = \infty$ } \caption[Optimal classical relative rotation angles]{\label{fig:hat{beta}MC_C} Optimal relative rotation angle $\hat{\beta}_{1,1}^{\rm MC}$ obtained from stochastic (Monte Carlo) minimization for the classical limit case $\mu = \mu_c = 1$. We observe that the relative rotation angle vanishes up to numerical accuracy, since the polar factor $\polar(F)$ is always optimal in perfect accordance with Grioli's theorem, see~\cite{Neff_Grioli14} and~\cite[Cor.~2.4,~p.~5]{Fischle:2015:OC2D}. More precisely, this corresponds to the prediction $\hat{\beta}^\pm_{1,1}(\sigma_1 + \sigma_2) = 0$. 
For multiple examples from the non-classical parameter range $\mu > \mu_c \geq 0$, see~\figref{fig:hat{beta}MC_NC} on page~\pageref{fig:hat{beta}MC_NC}.} \end{figure} \begin{figure} \parbox{0.8\textwidth}{ \begin{tikzpicture} \node (Pic) at (0,0) {\includegraphics[width=11.0cm]{scatter_1_0}}; \node (rho) at (-0.75, -2.5) {$\rho_{1,0} = 2$}; \end{tikzpicture}} \parbox{0.19\textwidth}{ Parameters: $(\mu,\mu_c) = (1,0)$\\ $\rho_{1,0} = 2$ } \parbox{0.8\textwidth}{ \begin{tikzpicture} \node (Pic) at (0,0) {\includegraphics[width=11.0cm]{scatter_1_0dot25}}; \node (rho) at (0.50, -2.5) {$\rho_{1,\frac{1}{4}} = \frac{8}{3}$}; \end{tikzpicture}} \parbox{0.19\textwidth}{ Parameters: $(\mu,\mu_c) = (1,\frac{1}{4})$\\ $\rho_{1,\frac{1}{4}} = \frac{8}{3}$ } \parbox{0.8\textwidth}{ \begin{tikzpicture} \node (Pic) at (0,0) {\includegraphics[width=11.0cm]{scatter_1_0dot5}}; \node (rho) at (2.75, -2.5) {$\rho_{1,\frac{1}{2}} = 4$}; \end{tikzpicture}} \parbox{0.19\textwidth}{ Parameters: $(\mu,\mu_c) = (1, \frac{1}{2})$\\ $\rho_{1,\frac{1}{2}} = 4$ } \caption[Optimal non-classical relative rotation angles]{\label{fig:hat{beta}MC_NC} Optimal relative rotation angles $\hat{\beta}^{\rm MC}_{\mu,\mu_c}$ for multiple non-classical values $\mu > \mu_c \geq 0$. The angles are obtained by stochastic (Monte Carlo) minimization of $W_{\mu,\mu_c}(R\,;F)$. The dashed blue curve shows the predicted value for $\hat{\beta}_{1,0}^\pm(\sigma_1 + \sigma_2)$ and the dashed red line marks the expected bifurcation point at $\rho_{\mu,\mu_c}$. 
For a direct comparison, we provide~\figref{fig:hat{beta}MC_C} on page~\pageref{fig:hat{beta}MC_C} which shows the classical limit case $(\mu,\mu_c) = (1,1)$; see also~\figref{fig:branchDiag} on page~\pageref{fig:branchDiag} for an illustration and a more precise description of the bifurcation behavior predicted by our proposed formula~$\rpolar^\pm_{\mu,\mu_c}(F)$.} \end{figure} \countres \section{Conclusion} \label{sec:conclusion} The reduced Cosserat shear--stretch energy $\wmmred$ for which we have finally obtained an explicit form in~\theref{theo:wmm_explicit} admits an interesting abstract interpretation in mechanics. In order to reveal this, let us first assume that the microrotations $R$ are spatially decoupled. This is the case when the length scale parameter $L_{\rm c}$ in the full Cosserat model, i.e., including a curvature energy contribution, is extremely small or zero. Let us furthermore assume that $\det{F} = 1$, i.e., that the amount of volume distortion is negligible and that a specimen $\Omega$ of this material is subjected to a given deformation $\varphi:\Omega \to \varphi(\Omega)$ with deformation gradient $F \coloneqq \nabla\! \mathbf{\varphi} \in \GL^+(3)$. Then the total reduced Cosserat shear--stretch energy obtained by integration of the local density given by \begin{equation} \int\nolimits_\Omega \wmmred(F)\,{\rm dV} \;\coloneqq\; \int\nolimits_\Omega \min_{R\,\in\,\SO(3)} \wmm(R\,;F)\,{\rm dV} \end{equation} corresponds precisely to the total energetic response which is generated if the field of microrotations $R$ in the specimen \emph{instantaneously} aligns itself with the field of locally optimal Cosserat rotations $\rpolar^\pm_{\mu,\mu_c}(\nabla\! \mathbf{\varphi})$. It is important to observe that the field of optimal Cosserat rotations is \emph{purely} induced by the deformation mapping $\varphi$ on which it depends through \emph{local} energy minimization and does not otherwise depend on boundary conditions, exterior forces, etc. 
A Cosserat material which conforms to the previous description can be nicely embedded into a classical framework due to G. Capriz, the description of which is one of many shimmering pearls to be found in the impressive body of his work on micropolar materials, see, e.g.,~\cite{Capriz:1977:FSC,Capriz:1989:CWM}, and it is with delight that we summarize it in a brief \begin{rem}[Continua with latent microstructure in the sense of Capriz] \label{rem:capriz} In his paper~\cite[p.~49]{Capriz85}, G. Capriz introduces the notion of a continuum with {\bf latent microstructure} as follows: \begin{quote} ``I say that the microstructure is latent when, though its effects are felt in the balance equations, all relevant quantities can be expressed in terms of geometric and kinematic quantities pertaining to apparent placements.'' \end{quote} Capriz then gives a more precise definition of the properties a latent microstructure needs to satisfy. We shall only repeat the first two: ``There is no inertia connected with the microstructure.'' and ``There are no exterior body actions on the microstructure.'' In other words, a latent microstructure is coupled with a deformation $\varphi$ in an instantaneous way. \end{rem} The reduced Cosserat shear--stretch energy $\wmmred(F)$ can be considered as the energetic answer of a medium with a rotational microstructure that instantaneously reorganizes its field of microrotations $R: \Omega \to \SO(3)$ as an energy-minimizing $\rpolar^\pm_{\mu,\mu_c}(F)$-field. This is an example of a latent microstructure in the sense of Capriz. From a more general perspective, a Cosserat continuum can also be considered as a special case of a so-called micromorphic model, see, e.g.,~\cite{Neff_Forest_jel05,Neff_micromorphic_rse_05} and~\cite{Matteo:2015:MMD}. Let us, as before, set the length scale parameter $L_{\rm c}$ governing the curvature contribution to zero. 
We then observe that such an approach \emph{always} leads to an algebraic side condition, in our case given by equation~\eqref{eq:EL_SO}, which replaces the partial differential equation for the micro-distortion field. This is another, more general, example of a continuum with latent microstructure in the sense of Capriz, compare, e.g.,~\cite{Capriz:2000:PM}, due to G. Capriz himself and also~\cite{Matteo:2015:MMD}. Note that in~\cite{Demirkoparan:2014:HIB} and~\cite{Demirkoparan:2015:SSIB}, the authors -- who are apparently unaware of this established and relatively straightforward interpretation -- have, in our opinion, recently reintroduced the framework of materials with latent microstructure due to G. Capriz for such micromorphic continuum models under the new name of a hyperelastic material with ``internal balance'' and an ``internally balanced solid'', respectively. We now continue our conclusion with some thoughts on possible generalizations of our present results. \begin{rem}[On generalizations to higher dimensions $n \geq 4$] Our solution approach is quite specifically tailored to dimension $n = 3$ since it relies on the covering of $\SO(3)$ by the unit quaternions $\S^3 \subset \H$. It seems reasonable to assume that the particularly simple geometry of $\S^3$ lies at the root of the explicit solvability of the Euler--Lagrange equations. The so-called Sphere Theorem states that the only spheres that admit a connected compact Lie group structure are $\S^1$ and $\S^3$, see, e.g.,~\cite[p.~289]{Hofmann:2006:SCG}. Thus, for $n > 3$, there is no hope at all of recovering the particularly simple constellation we have quite successfully exploited here. Still, there is a generalization of the unit quaternions, namely the so-called spin groups $\mathrm{Spin}(n)$. These groups are two-fold covers of $\SO(n)$ and closely related to Clifford algebras, see, e.g.,~\cite{Lawson:1989:SG} and~\cite{Doran:2003:GAP}. 
In principle, such techniques might be appropriate for a generalization of our present results to higher dimensions, but they are out of reach for us. \end{rem} Although our exact solution approach does not generalize to higher dimensions, it seems obvious that the reduced~\probref{prob:relative_rhat} is a very good starting point for the solution of~\probref{intro:prob_wmm} in dimensions $n \geq 4$. Given this particular form, it seems very likely that the minimizers in higher dimensions can also be characterized in terms of the eigenvectors of $U = QDQ^T$ and the singular values $\sigma_i$, $1 \leq i \leq n$, of $F \in \GL^+(n)$. Similar to the rather simplistic random sampling strategy we have employed here, it might certainly be worthwhile to carry out an initial investigation based on a Monte Carlo random sampling approach which is suitable for higher dimensions, see, e.g.,~\cite{Leon:2006:SMR}. On a related note, we have to dampen expectations regarding extensions to anisotropic formulations. These seem to be completely out of reach, since a reduction to a formulation in singular values is then impossible, see~\cite{Neff_Muench_transverse_cosserat08} and~\cite{Pau:2012:BMMP}. Another interesting question which is raised by our findings is whether the maximal mean planar stretch and strain ``measures'', i.e., $u^{\rm mmp}(F)$ and $s^{\rm mmp}(F)$, as defined in~\deref{defi:mmpss} -- which appear to be such natural concepts in our particular context -- are just artifacts of our derivation. The same holds for the plane of maximal strain $\mathrm{P}^{\rm mp}(F)$ introduced in~\deref{defi:pms}. Are there real-world materials or material models which can be precisely or at least approximately characterized by, e.g., slip in the plane of maximal strain $\mathrm{P}^{\rm mp}(F)$? Currently, we are not aware of any such materials or models. 
In good hope that the presented mechanisms and computational strategies will be at least helpful for the derivation of closed-form solutions for~\probref{intro:prob_wmm} in dimensions $n \geq 3$ and that these will match our proposed formula $\rpolar^\pm_{\mu,\mu_c}(F)$ presented in~\coref{cor:rpolar_formula} for $n = 3$, we conclude our present contribution with a last \begin{rem}[Final remark] As regards suitable values of the Cosserat couple modulus $\mu_c \geq 0$, our development shows clearly that there are ultimately only 3 values of particular interest, namely \begin{equation*} \mu_c \;=\; 0\;,\quad\quad \mu_c \;=\; \mu\;,\quad\quad \text{and} \quad\quad \mu_c \;=\; +\infty\;.\tag*{$\blacksquare$} \end{equation*} \end{rem} \countres \addcontentsline{toc}{section}{References} \bibliographystyle{plain} {\footnotesize \setlength{\bibsep}{1.25pt}
\section{Introduction} \label{Sect1} Classical Be stars, commonly known simply as Be stars, are very rapidly rotating main-sequence B-type stars which, through a still poorly constrained process, form an outwardly diffusing gaseous, dust-free Keplerian disk \citep{rivinius2013}. This disk is formed from material ejected from the fast-spinning central star. Be stars exhibit emission lines, mostly Balmer lines, superimposed on the photospheric spectrum \citep{porter2003}. The emission lines originate from the geometrically thin circumstellar disk rotating with near-Keplerian velocity around the central star \citep{carciofi2006}. Be stars are well-known variable stars, and the period of spectroscopic variation, which includes short-term and long-term variations in emission lines, ranges from a few minutes to a few decades \citep{porter2003}. The profile structure of the H$\alpha$ emission line varies from star to star and depends mainly on the inclination angle of the system, as explained by \cite{struve1931}. \cite{catanzaro2013} used the classification scheme proposed by \cite{hanuschik1988} to classify the observed H$\alpha$ profiles as single-peak, double-peak, shell-structure emission, or pure absorption. Spectroscopic monitoring of Be stars reveals H$\alpha$ emission line variations in equivalent width (EW), profile shape and V/R value \citep{dachs1987,hanuschik1988,hubert1994,hanuschik1996}. In this paper we discuss the H$\alpha$ line variability of two classical Be stars, 59 Cyg and OT Gem. The large variations observed in the H$\alpha$ emission lines of these two stars are rapid, occurring within a span of a few months. Studying the variation of emission lines is expected to provide insights into the changes in the circumstellar disk; it can be used to derive the distribution of material and the kinematics of the disk \citep{shruthi2016}. 
In this paper, we present the emission line variability of 59 Cyg and OT Gem (see Table~\ref{Tab1}) based on spectroscopic data, 7 spectra of 59 Cyg and 8 spectra of OT Gem, obtained over a period of about three months in 2009. We discuss the observed features and changes in the spectra of these stars. We have used methods similar to those adopted by \cite{shruthi2016} to estimate the radius of the circumstellar disk using the H$\alpha$ line and to determine the rotational velocity of the central star from prominent He{\sc i} absorption lines. The paper is arranged as follows. The following section gives a brief overview of these two stars, focusing mainly on their spectroscopic variability in previous studies. Section 3 addresses the details of the spectral observations and data reduction techniques. In section 4, we present the spectra and discuss the major results from the spectral line analysis of both stars. The conclusions drawn from this study are listed in section 5. \begin{table}[!htb] \caption{Program stars} \centering \begin{tabular}{ccccccc} \hline \hline $\bf HD$ & $\bf HR$ & $\bf Name$ & $\bf Spectral$ & $\bf RA$ & $\bf \delta$ & $\bf V$\\ & & & $\bf Type$ & \\ \hline 200120 & 8047 & 59 Cyg & B1 Ve & 20 59 49.55716 & +47 31 15.4216 & 4.75\\ 58050 & 2817 & OT Gem & B2 Ve & 07 24 27.64809 & +15 31 01.9061 & 6.41\\ \hline \end{tabular} \label{Tab1} \end{table} \section{Previous studies} \subsection{59 Cyg} \label{Sect2.1} 59 Cyg is a well-known Be star because of its pronounced spectral variations in emission line profiles and intensities, which are summarized by \cite{barker1982} and \cite{harmanec2002}. Its emission lines were first discovered by Cannon in 1904, and the star has shown emission components with variable intensities throughout, except in 1912 and 1916 \citep{hubert1981,barker1982}. Strong V/R variations were reported in 1926-1929, 1941-1942 and 1946-1948; a slow increase of emission was also observed from 1945-1950 by \cite{merrill1949}. 
Afterwards, a quiescent phase set in from 1953-1970, with the emission features remaining relatively stable \citep{hubert1981}. Emission was at maximum in 1956 and 1961 and at minimum in 1967 \citep{moujtahid1998}. \cite{kogure1982} mentioned that the star showed the first gradual strengthening of emission lines in 1971-1972 and developed rich shell lines in 1973 June. This constituted the first shell phase, in which strong shell absorption lines were detected in the Balmer lines (up to H30), He{\sc i}, Mg{\sc ii} and some singly ionized metals \citep{doazan1975,hubert1981}. In the period 1973-1974, the shell lines gave way to a second strengthening of emission, with an asymmetric profile (V/R $>$ 1) and H$\beta$ as a single emission line with a very deep core. With the decline of the emission components and the appearance of double emission peaks, 59 Cyg proceeded to a second shell phase in 1974 October, which lasted till 1975 March and was stronger than the first \citep{hubert1981}. \cite{barker1982} describes the 1974-1975 shell phase as a 160-day episode, giving a general outline of the spectral changes as well as the detailed line profile variations. Soon after, in 1978, a new Be phase began to develop, which was observed in the far-UV and optical regions by \cite{doazan1989}. They observed the star again in the interval 1978-1987 and saw V/R variability and intensity changes in the H$\alpha$ emission. \cite{doazan1985} gives the long-term V/R variability of H$\alpha$ as about 2 years. \cite{barker1983} describes the transient emission events of the star after the 1977 minimum, with V/R variations on a short-lived quasi-period no longer than 28 days. 59 Cyg is part of a multiple system (ADS 14526) of the Trapezium type. The optical components are separated from 59 Cyg A by \(20^{\prime\prime}\), \(26^{\prime\prime}\) and \(38^{\prime\prime}\), respectively. 
It was confirmed as a single-lined spectroscopic binary, with a period of 28.1702 $\pm$ 0.0014 days, through the study of variations of photospheric lines such as He{\sc i} 4471 by \cite{rivinius2000}, who compared it to $\phi$ Per. 59 Cyg was later confirmed to be a Be + sdO binary, the companion having earlier been suspected to be a compact object, by \cite{maintz2005}. \subsection{OT Gem} \label{Sect2.2} OT Gem has exhibited strong spectral variations over the past years. The first evidence of emission changes was presented by \cite{merrill1943}. The spectroscopic behaviour between 1954-1975 was described by \cite{hubert1979}, where strong emission in the Balmer and Fe{\sc i} lines was clearly seen. The strength of the emission steadily decreased after the maximum in 1961-1962, with slight variations, finally reaching a minimum at the end of 1980 \citep{hubert1982}. \cite{dachs1986} described the H$\alpha$ emission as a single, sharp emission peak with slightly decreasing strength from 1981 to 1983, in good agreement with the H$\alpha$ measurement obtained in 1981 by \cite{andrillat1983}. This slow decrease of the Balmer emission line intensity, described by \cite{hubert1982} as starting in 1961, thus still continued through 1981-1983. \cite{hanuschik1996} classified OT Gem as a non-shell star and concluded that a significant part of the disk is projected against the sky. \cite{bozic1999} compared the variations of OT Gem to those of $\omega$ CMa and suggested that the physical process might be the same for the two stars. H$\alpha$ peak intensities of 4.3 \citep{andrillat1983} and 3.2 \citep{doazan1991} have been reported. \cite{poretti1982} classified OT Gem as a $\gamma$ Cas type, but \cite{ferro1998} considered it a mild $\gamma$ Cas because the peak intensity of the H$\alpha$ emission reached only 3 in OT Gem, unlike 5 in $\gamma$ Cas. They also mention the scarcity of available spectroscopic data for this star, which prevents any correlation with the photometric measurements. 
\cite{catanzaro2013} reports a triple-peaked structure for H$\alpha$ on one of the nights between 2008 and 2009. \section{Observations and data reduction} \label{Sect3} \begin{table}[!htb] \centering \caption{Journal of observations for 59 Cyg and OT Gem} \begin{tabular}{cccc} \hline \hline $\bf Star$ & $\bf Date\ of$ & $\bf Spectral$ & $\bf No.\ of\ spectra$ \\ $\bf Name$ & $\bf Observation$ & $\bf Range$ & $\bf (Integration\ time$ \\ & $\bf in\ 2009$ & (\AA) & $\bf in\ seconds)$ \\ \hline 59 Cyg & 1 June & 3800 -- 4300 & 1 (1800)\\ & 2 June & 3800 -- 4300 & 1 (1800)\\ & 29 June & 3800 -- 4300 & 2 (1800)\\ & 11 July & 6200 -- 6800 & 1 (2700)\\ & 23 July & 6200 -- 6800 & 2 (2400)\\ \hline OT Gem & 4 February & 6200 -- 6800 & 1 (2700) \\ & 27 April & 6200 -- 6800 & 2 (1800, 2700)\\ & 28 April & 6200 -- 6800 & 2 (1800)\\ & 29 April & 6200 -- 6800 & 1 (2700)\\ & 30 April & 6200 -- 6800 & 1 (2400)\\ & 1 May & 6200 -- 6800 & 1 (2400)\\ \hline \label{Tab2} \end{tabular} \end{table} The spectra of 59 Cyg and OT Gem were acquired during several observing runs from February 2009 to July 2009; the journal of observations is given in Table~\ref{Tab2}. The spectra were obtained using the Universal Astronomical Grating Spectrograph (UAGS) at the Cassegrain focus of the 1.0m Carl Zeiss reflector at the Vainu Bappu Observatory, Kavalur, India, which is operated by the Indian Institute of Astrophysics (IIA). The CCD consists of \(1024 \times 1024\) pixels of 24 $\mu$m size, of which the central \(1024 \times 300\) pixels were used for spectroscopy. The typical readout noise is about \(4.8\,e^-\) and the gain is \(1.22\,e^-/\)ADU. A Bausch and Lomb grating with 1800 lines per millimetre was used, which in combination with the slit provided a resolution of 1~\AA ~at H$\alpha$. 
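As an aside, the quoted resolution of 1~\AA ~at H$\alpha$ can be converted into a resolving power and a velocity resolution. The short sketch below is purely illustrative and uses only the numbers quoted above; it is not part of the reduction pipeline.

```python
C_KMS = 2.99792458e5  # speed of light [km/s]

# Resolution of ~1 A at H-alpha (6562.8 A), as quoted for the
# UAGS + 1800 l/mm grating setup, expressed two other ways.
lam_halpha = 6562.8   # [Angstrom]
dlam = 1.0            # [Angstrom]

resolving_power = lam_halpha / dlam          # R = lambda / delta-lambda
velocity_resolution = C_KMS * dlam / lam_halpha  # [km/s]

print(f"R ~ {resolving_power:.0f}, dv ~ {velocity_resolution:.0f} km/s")
```

A velocity resolution of roughly 46 ${\rm km\,s^{-1}}$ is comfortably smaller than the H$\alpha$ peak separations of $\sim$100--260 ${\rm km\,s^{-1}}$ measured in this work, so the double-peaked profiles are well resolved.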
The medium-resolution data taken in the wavelength region 3800 -- 4600~\AA ~included absorption lines from H$\gamma$ to H$\theta$ as well as He{\sc i} lines, and the data taken in the range 6200 -- 6800~\AA ~covered H$\alpha$ in emission. The reduction of all the spectra was performed using standard routines in the NOAO/IRAF~\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc.} (Image Reduction and Analysis Facility) package. The wavelength calibration was performed using Fe-Ar arc lamp spectra. The typical S/N near H$\alpha$ is $\sim$200 for the 3 spectra of 59 Cyg and $\sim$100 for the 8 spectra of OT Gem. The spectra were first normalized to the continuum. IRAF tasks were then used to measure parameters of the emission line profiles, such as equivalent widths (EW), full width at half maximum (FWHM), V/R ratios, peak separations ($\Delta$V) and \( I_p/I_c\). All the measurements are reported in the next section. \section{Analysis and discussion} \label{Sect4} 59 Cyg was observed in June 2009 in the shorter wavelength region to investigate the He{\sc i} lines, and again in July 2009 in the H$\alpha$ region. Representative sample spectra of 59 Cyg in the different wavelength regions are shown in Figure \ref{Fig1}. OT Gem was observed from February to May 2009 in the H$\alpha$ region only. In the following subsection, we discuss the rotational velocity of the two stars. Sect. \ref{Sect4.2} deals with the variability of the H$\alpha$ emission line of both stars. 
\begin{figure}[!htb] \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=6.5cm, height=6.5cm]{59Cyg_1june.eps} \end{subfigure}% \hspace{0.5cm} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=6.5cm, height=6.5cm]{59Cyg_23july.eps} \end{subfigure}% \caption{Representative sample spectra of 59 Cyg (Left: 3800 -- 4300~\AA ~showing H$\delta$, H$\epsilon$, H$\zeta$, H$\eta$, H$\theta$ along with He{\sc i} 4026~\AA ~and Fe{\sc ii} absorption lines, Right: 6200 -- 6800~\AA ~showing H$\alpha$ emission along with He{\sc i} 6678~\AA ~and Fe{\sc ii} absorption lines)} \label{Fig1} \end{figure} \subsection{Rotational velocity estimation} \label{Sect4.1} Be stars are among the most rapidly rotating non-degenerate stars, and a few are observed to rotate very close to the critical velocity \citep{rivinius2013}. In this study, to estimate the rotational velocity ($\textit{v}$ sin $\textit{i}$) of 59 Cyg, we have considered the He{\sc i} absorption lines in the blue spectral region: He{\sc i} $\lambda$4009, $\lambda$4026 and $\lambda$4143 (see the left panel of Figure \ref{Fig1}). These lines are assumed to be unaffected by emission from the disk. \cite{steele1999} derived rotational velocities for a sample of 58 Be stars and fitted the FWHM - $\textit{v}$ sin $\textit{i}$ correlation of \cite{slettebak1975}, obtaining the relations given as Eqs. 1 - 4 in their paper for four different He{\sc i} lines. We estimate $\textit{v}$ sin $\textit{i}$ from He{\sc i} $\lambda$4026 and $\lambda$4143 using the respective relations between FWHM and $\textit{v}$ sin $\textit{i}$ given in their paper. Since \cite{steele1999} give no such relation for He{\sc i} $\lambda$4009, for that line we use the basic relation between the FWHM of the He{\sc i} line and the $\textit{v}$ sin $\textit{i}$ of the star given below. 
\begin{equation} v \sin i = \frac{c\,\mathrm{FWHM}}{2\lambda\sqrt{\ln 2}} \label{eq1} \end{equation} The FWHM of the selected He{\sc i} lines and the estimated $\textit{v}$ sin $\textit{i}$ values for 59 Cyg are shown in Table~\ref{Tab4}. It can be seen that the values derived using He{\sc i} $\lambda$4009 are similar to those from the other two He{\sc i} lines. The average $\textit{v}$ sin $\textit{i}$ estimated for this star from 4 spectra is shown in Table~\ref{Tab5}. The errors tabulated for all the values correspond to the standard deviation. The average values obtained were compared with the values from \cite{rivinius2006}, \cite{harmanec2002} and \cite{slettebak1982}. He{\sc i} $\lambda$4026 was well resolved and consistently detected throughout the sample, and the estimated $\textit{v}$ sin $\textit{i}$ is found to be within the errors. Both the individual averages for the He{\sc i} lines and the overall average for 59 Cyg match the literature values, except that of \cite{slettebak1982}. 
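Equation~\ref{eq1} can be checked numerically. The sketch below is purely illustrative (the actual measurements were made with IRAF tasks) and reproduces the first entry of Table~\ref{Tab4}; the small residual difference from the tabulated value reflects the rounding of the tabulated FWHM.

```python
import math

C_KMS = 2.99792458e5  # speed of light [km/s]

def vsini_from_fwhm(fwhm_aa, lam_aa):
    """Eq. (1): v sin i = c * FWHM / (2 * lambda * sqrt(ln 2))."""
    return C_KMS * fwhm_aa / (2.0 * lam_aa * math.sqrt(math.log(2.0)))

# First row of Table 4: He I 4009 on 01/06/09 with FWHM = 7.9 A.
v = vsini_from_fwhm(7.9, 4009.0)
print(f"v sin i = {v:.0f} km/s")  # ~355 km/s, close to the tabulated 355.5
```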
\begin{table}[!htb] \centering \caption{FWHM and \emph{v}\ sin\ \emph{i} measurements for 59 Cyg from the spectra using He{\sc i} 4009, 4026 and 4143~\AA} \begin{tabular}{ccccccc} \hline \hline $\bf Date\ of$ & \multicolumn{2}{c}{$\bf HeI\ 4009$~\AA} & \multicolumn{2}{c}{$\bf HeI\ 4026$~\AA} & \multicolumn{2}{c}{$\bf HeI\ 4143$~\AA}\\ $\bf Observation$ & $\bf FWHM$~(\AA) & $\textit{v}$ sin $\textit{i}$ $(\rm kms^{-1})$ & $\bf FWHM$~(\AA) & $\textit{v}$ sin $\textit{i}$ $(\rm kms^{-1})$ & $\bf FWHM$~(\AA) & $\textit{v}$ sin $\textit{i}$ $(\rm kms^{-1})$\\ \hline 01/06/09 & 7.9 & 355.5 & 8.5 & 389.5 & 8.7 & 387.2\\ 02/06/09 & 11.3 & 506.9 & 7.9 & 362.0 & 8.3 & 369.4\\ 29/06/09 & 9.3 & 415.6 & 6.9 & 316.2 & 10.7 & 476.3\\ 29/06/09 & 10.1 & 453.0 & 8.5 & 389.5 & 10.1 & 449.6\\ \hline \end{tabular} \label{Tab4} \end{table} \begin{table}[!htb] \centering \caption{Rotational velocity parameters. \emph{v}\ sin\ \emph{i} averaged from 4 spectra for the He{\sc i} lines for 59 Cyg is compared with two other estimations; $\omega$ was calculated using \(v_c\) given in Table 2 of \cite{yudin2001}, who interpolated values given by \cite{moujtahid1999}.} \begin{tabular}{ccc} \hline \hline $\bf Parameters$ & $\bf Reference$ & $\bf Value$\\ \hline $\textit{v}$ sin $\textit{i}$ & 4009~\AA & 433 $\pm$ 32\\ $(\rm kms^{-1})$ & 4026~\AA & 364 $\pm$ 17\\ & 4143~\AA & 421 $\pm$ 25\\ & Average & 406 $\pm$ 25\\ & \cite{rivinius2006} & $\geq$379\\ & \cite{harmanec2002} & 450\\ & \cite{slettebak1982} & 260\\ \hline $ v_c\ (\rm kms^{-1}) $ & Yudin & 520\\ $\omega$ & 4009~\AA & 0.83\\ & 4026~\AA & 0.7\\ & 4143~\AA & 0.81\\ & Average & 0.78\\ \hline \end{tabular} \label{Tab5} \end{table} \cite{harmanec2002} studied the long-term as well as the rapid variability of 59 Cyg using photometry and spectroscopy, and estimated its basic physical properties. 
\cite{rivinius2006} categorized 59 Cyg into a distinct class of stars that show emission $\Leftrightarrow$ emission \& shell transitions. Such stars show very high $\textit{v}$ sin $\textit{i}$ values, in agreement with our estimation. The critical velocity \( v_c \) was taken from \cite{yudin2001}, who estimated it for each spectral and luminosity class. Thus, for the spectral type indicated in Table~\ref{Tab1}, \( v_c \) was obtained and the critical fractional rotation, $\omega$, given by \( v\ \sin\ \emph{i}/{v_c} \), was estimated. The uncertainty in $\omega$ comes not only from $\textit{v}$ sin $\textit{i}$ but also from the spectral and luminosity classes \citep{rivinius2006}. The critical fractional rotation obtained in our study using the He{\sc i} lines was compared with the values obtained by \cite{rivinius2006}. The $\omega$ value estimated using He{\sc i} 4026~\AA ~is very close to the literature value, but those estimated using the 4009 and 4143~\AA ~lines show about 10\% deviation. This is because the $\textit{v}$ sin $\textit{i}$ itself differs between the He{\sc i} lines. \begin{table}[!htb] \centering \caption{FWHM and \emph{v}\ sin\ \emph{i} measurements for OT Gem from the spectra using He{\sc i} 6678\AA} \begin{tabular}{ccc} \hline \hline $\bf Date\ of$ & $\bf FWHM$ & $\textit{v}$ sin $\textit{i}$\\ $\bf Observation$ & $\bf (\AA) $ & $(\rm kms^{-1})$\\ \hline 04/02/09 & 3.9 & 105.5\\ 27/04/09 & 4.6 & 125.2\\ 27/04/09 & 5.0 & 135.3\\ 28/04/09 & 4.5 & 120.7\\ 28/04/09 & 4.5 & 122.4\\ 29/04/09 & 5.0 & 134.0\\ 30/04/09 & 5.0 & 133.5\\ 01/05/09 & 4.8 & 129.3\\ \hline \end{tabular} \label{Tab6} \end{table} For OT Gem, though no spectra were available in the blue region, He{\sc i} 6678~\AA ~was available in the red-region spectra. This line is usually affected by emission from the disk. 
We can consider the line to be only minimally affected during our observations, since the star is in a disk-loss phase and the emission is declining in strength. Measurements for OT Gem, like those for 59 Cyg, were obtained, and $\textit{v}$ sin $\textit{i}$ was estimated using Eq. \ref{eq1}; the results are shown in Table~\ref{Tab6}. The average $\textit{v}$ sin $\textit{i}$ from the 8 He{\sc i} profiles of OT Gem is found to be 126 $\pm$ 4 $\rm kms^{-1}$. \cite{bozic1999} quoted a value of 130 $\rm kms^{-1}$, so our value matches the literature very well. The critical velocity, \( v_c \) = 483 $\rm kms^{-1}$, was taken from \cite{yudin2001} based on the spectral type of OT Gem. Thus the critical fractional rotation for OT Gem is found to be only $\sim$ 0.26. This indicates that OT Gem is a very slow rotator, or that the low value is partly an effect of a low inclination angle. \subsection{Variability of the H$\alpha$ line} \label{Sect4.2} The circumstellar decretion disks of Be stars are formed from material ejected by the central star and are equatorially flattened. The emission lines seen in the spectra arise from the disk, and the most common emission line seen in Be stars is H$\alpha$. The radius of the H$\alpha$ emission disk is estimated according to the rotational velocity law shown in Eq. \ref{eq2} for both stars in the following subsections. The extent of the H$\alpha$ emission region, \( R_d\), is estimated in terms of the stellar radius \( R_*\) by assuming the region to be in Keplerian orbit around the star \citep{huang1972}. \( R_d\) is also estimated for a non-Keplerian orbit by changing the rotational parameter \textit{j} from 1/2 to 1. \begin{equation} \frac{R_d}{R_*} = \left(\frac{2\ v\ \sin\ \emph{i}}{\Delta V}\right)^\frac{1}{j} \label{eq2} \end{equation} The variability of the H$\alpha$ emission line observed in the two stars is discussed separately in the following subsections. 
The time series of the H$\alpha$ profiles of 59 Cyg and OT Gem are shown in Figures \ref{Fig2} and \ref{Fig3}, respectively. \subsubsection{59 Cyg - triple peak in H$\alpha$ emission} \begin{figure}[!htb] \centering \includegraphics[width=10cm, height=8.5cm]{59Cyg_halpha.eps} \caption{Time Series of 59 Cyg H$\alpha$ line observed in July 2009; (Spectra are offset and labelled with the observation date, the oldest appears at the bottom and most recent at the top. Note that although the spectra are displayed evenly spaced, they are not evenly distributed in time.)} \label{Fig2} \end{figure} 59 Cyg was observed once on 11 July and twice on 23 July 2009 in the H$\alpha$ region. A triple-peak feature was observed in the H$\alpha$ emission on all the nights, the profile showing three emission peaks. The third, central peak became more prominent on 23 July than on 11 July and appeared to be more connected to the V peak of the profile. The V/R ratio was measured using the two extreme peaks; its value was initially $\sim$1 and changed to $\sim$1.3 on 23 July. The \(I_p/I_c\), EW, $\Delta$V and V/R of the H$\alpha$ line for 59 Cyg are shown in Table~\ref{Tab7} for the two observed dates. The average values and the estimate of the disk radius are shown in Table~\ref{Tab8}. The errors in EW and $\Delta$V are the standard deviations of the available observations. 
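The profile parameters tabulated here (\(I_p/I_c\), EW, $\Delta$V, V/R) were measured with IRAF tasks. For readers who wish to reproduce such measurements, the sketch below applies the same definitions to a synthetic double-peaked profile; the Gaussian peaks and their parameters are invented for illustration and are not our data. Here V/R is taken as the ratio of the peak heights above the continuum.

```python
import numpy as np

C_KMS = 2.99792458e5   # speed of light [km/s]
LAM0 = 6562.8          # H-alpha rest wavelength [Angstrom]

# Synthetic continuum-normalized double-peaked profile (illustration only).
wave = np.linspace(6540.0, 6586.0, 2301)
flux = (1.0
        + 0.9 * np.exp(-0.5 * ((wave - 6560.0) / 1.2) ** 2)   # V peak
        + 0.7 * np.exp(-0.5 * ((wave - 6565.6) / 1.2) ** 2))  # R peak

# Peak intensity relative to the continuum.
ip_ic = flux.max()

# Equivalent width: EW = integral of (1 - F/F_c) dlambda (negative for emission).
ew = np.sum(1.0 - flux) * (wave[1] - wave[0])

# V and R peaks on either side of the line centre; V/R and peak separation.
blue = wave < LAM0
i_v = int(np.argmax(flux[blue]))
i_r = int(np.argmax(flux[~blue])) + int(blue.sum())
v_over_r = (flux[i_v] - 1.0) / (flux[i_r] - 1.0)
delta_v = C_KMS * (wave[i_r] - wave[i_v]) / LAM0

print(f"Ip/Ic = {ip_ic:.2f}, EW = {ew:.2f} A, "
      f"V/R = {v_over_r:.2f}, Delta V = {delta_v:.0f} km/s")
```

On real spectra the continuum placement dominates the uncertainty in EW, which is why the spectra are normalized before these quantities are measured.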
\begin{table}[!htb] \centering \caption{Measurements of parameters using H$\alpha$ emission for 59 Cyg} \begin{tabular}{ccccc} \hline \hline $\bf Date\ of$ & \(I_p/I_c\) & $\bf EW$ & $\Delta$ $\bf V$ & $\bf V/R$\\ $\bf Observation$ & & $\bf (\AA) $ & $(\rm kms^{-1})$ & \\ \hline 11/07/2009 & 1.69 & -11.0 & 259.8 & 1.04\\ 23/07/2009 & 1.96 & -13.2 & 243.3 & 1.3\\ 23/07/2009 & 1.96 & -12.7 & 241.4 & 1.33\\ \hline \end{tabular} \label{Tab7} \end{table} \begin{table}[!htb] \centering \caption{H$\alpha$ emission line parameters and estimation of the radius of the disk for 59 Cyg} \begin{tabular}{cccccc} \hline \hline \(I_p/I_c\) & $\bf EW$ & $\Delta$ $\bf V$ & $\bf \textit{v}\ sin\ \textit{i}$ \textsuperscript{\dag} & \multicolumn{2}{c}{ $\bf R$\begin{scriptsize}d\end{scriptsize}/$\bf R$\begin{scriptsize}* \end{scriptsize}}\\ & $\bf (\AA) $ & $(\rm kms^{-1})$ & $(\rm kms^{-1})$ & $\textit{j} = 1/2$ & $\textit{j} = 1$\\ \hline & & & 364 & 8.63 & 2.94\\[0ex] & & & 421 & 11.55 & 3.40\\[1ex] \raisebox{2ex}{1.87} & \raisebox{2ex}{-12.26 $\pm$ 0.66} & \raisebox{2ex}{248.2 $\pm$ 5.9} & 379 & 9.33 & 3.05\\[0ex] & & & 260 & 4.39 & 2.10\\[1ex] \hline \multicolumn{6}{l}{\textsuperscript{\dag}\footnotesize{Refer Table~\ref{Tab5}}}\\ \end{tabular} \label{Tab8} \end{table} \cite{slettebak1992} assumed Keplerian geometry for the circumstellar disk and estimated the radius of the H$\alpha$ emitting region in classical Be stars to lie in the range 7 -- 19 \( R_*\). Comparing our estimate, $\sim$ 10 \( R_*\), to this range, we conclude that 59 Cyg has a circumstellar disk whose extent is similar to those seen for many other Be stars. We do not quote errors on the individual radius values; instead we give their typical range, 8.6 -- 11.6 \( R_*\). 
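As a numerical cross-check, Eq.~\ref{eq2} and the critical fraction $\omega = v\sin i / v_c$ can be evaluated directly from the tabulated quantities. The sketch below is illustrative only; it uses the mean peak separation from Table~\ref{Tab8} and the \( v_c \) values of \cite{yudin2001} quoted earlier, and the small differences from the tabulated radii reflect rounding of the tabulated $\Delta$V.

```python
def disk_radius_ratio(vsini, delta_v, j):
    """Eq. (2): R_d/R_* = (2 v sin i / Delta V)**(1/j)."""
    return (2.0 * vsini / delta_v) ** (1.0 / j)

DELTA_V = 248.2  # mean H-alpha peak separation for 59 Cyg [km/s], Table 8

# v sin i values considered in Table 8 [km/s].
for vsini in (364.0, 421.0, 379.0, 260.0):
    r_kep = disk_radius_ratio(vsini, DELTA_V, 0.5)  # Keplerian, j = 1/2
    r_j1 = disk_radius_ratio(vsini, DELTA_V, 1.0)   # non-Keplerian, j = 1
    print(f"v sin i = {vsini:.0f}: R_d/R_* = {r_kep:.2f} (j=1/2), {r_j1:.2f} (j=1)")

# Critical fraction omega = v sin i / v_c, with v_c from Yudin (2001):
# 520 km/s for 59 Cyg (average v sin i from Table 5) and 483 km/s for OT Gem.
omega_59cyg = 406.0 / 520.0
omega_otgem = 126.0 / 483.0
print(f"omega(59 Cyg) = {omega_59cyg:.2f}, omega(OT Gem) = {omega_otgem:.2f}")
```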
An inspection of the H$\alpha$ profile suggests that the emission is going through a V/R variation, shifting from \(V \sim R\) to \(V > R\) (see Table~\ref{Tab7}). The EW increased during this period, mainly due to the increased emission in the violet part of the profile. All three spectra show a peak in between the V and R peaks, giving rise to three peaks. We checked the BeSS database for spectra taken immediately after our observations and found spectra obtained on 17 August 2009 and on 6, 7 and 10 September 2009. An inspection of the spectrum obtained on 17 August 2009 suggests that the profile is dominated by emission in the V peak, with low emission on the R side, i.e., a V-dominated profile. The spectra obtained in September 2009 show that the R side of the profile had strengthened almost to the level of V, but still with V/R slightly greater than 1.0. \subsubsection{OT Gem - rapid disk loss phase} \begin{figure}[!htb] \centering \includegraphics[width=10cm, height=8.5cm]{OTGem_halpha.eps} \caption{Time Series of OT Gem H$\alpha$ line from February to May 2009; (Spectra are offset and labelled with the observation date, the oldest appears at the top and most recent at the bottom. Note that although the spectra are displayed evenly spaced, they are not evenly distributed in time.)} \label{Fig3} \end{figure} OT Gem was observed only in the H$\alpha$ wavelength region during our observations, a total of 8 times over 6 nights. The star showed strong H$\alpha$ emission in February, but the emission later decreased in strength and fell below the continuum level. Even though the emission had reduced significantly, the two peaks were still seen clearly in the spectra obtained in April and May. The \(I_p/I_c\), EW, V/R, $\Delta$V and \( R_d/R_*\) of the H$\alpha$ line for OT Gem are tabulated in Table~\ref{Tab9} for all the observation dates. \( R_d/R_*\) was estimated using Eq. 
\ref{eq2}, by considering $\textit{v}$ sin $\textit{i}$ to be 126 $\rm kms^{-1}$ for OT Gem, as discussed in Sect. \ref{Sect4.1}. Table~\ref{Tab9} clearly shows the change in the strength of the H$\alpha$ emission line from February to April: the measurements of \(I_p/I_c\), EW and $\Delta$V all show a significant change. \begin{table}[!htb] \centering \caption{H$\alpha$ emission line parameters and estimation of the radius of the disk for OT Gem} \begin{tabular}{ccccccc} \hline \hline $\bf Date\ of$ & \(I_p/I_c\) & $\bf EW$ & $\bf V/R$ & $\Delta$ $\bf V$ & \multicolumn{2}{c}{ $\bf R$\begin{scriptsize}d\end{scriptsize}/$\bf R$\begin{scriptsize}* \end{scriptsize}}\\ $\bf Observation$ & & $\bf (\AA) $ & & $(\rm kms^{-1})$ & $\textit{j} = 1/2$ & $\textit{j} = 1$\\ \hline 04/02/09 & 1.34 & -4.1 & 1.09 & 96.0 & 6.89 & 2.63\\ 27/04/09 & 0.95 & -0.7 & 0.83 & 181.8 & 1.92 & 1.39\\ 27/04/09 & 0.96 & -0.7 & 0.92 & 165.7 & 2.31 & 1.52\\ 28/04/09 & 0.96 & -0.8 & 1.00 & 194.8 & 1.67 & 1.29\\ 28/04/09 & 0.97 & -0.7 & 1.12 & 191.2 & 1.74 & 1.32\\ 29/04/09 & 0.95 & -0.8 & 0.84 & 174.6 & 2.08 & 1.44\\ 30/04/09 & 0.96 & -0.6 & 2.11 & 190.3 & 1.75 & 1.32\\ 01/05/09 & 0.97 & -0.8 & 1.98 & 167.7 & 2.26 & 1.50\\ \hline \end{tabular} \label{Tab9} \end{table} The disk of OT Gem is seen to be dissipating, and the observed variability is a result of the long-term variation associated with the disk, i.e., the disk-loss phase of a Be star. The period of such long-term variability in Be stars generally ranges from several years to several decades \citep{rivinius2013}. OT Gem has been observed by a few amateur astronomers, and the spectra are archived in the BeSS database \citep{neiner2011}. From BeSS, we find that in 2009, during the time of our observations, there is only one spectrum in March and, as expected, it shows a decrease in strength compared to our February observations. 
Later, the star continued to show decreased strength in 2010 but suddenly had an increase in emission strength during 2011. It decreased in strength again during 2012, eventually going into absorption in 2013. This ``B phase'' continued till March 2016, with emission reappearing from April 2016. Our observations may thus have caught a short, rapid phase within a period of slow dissipation. The recent rebuilding of the disk is an interesting development, and continuous monitoring of this star is being carried out. As previously mentioned, the radius of the H$\alpha$ emitting region given by \cite{slettebak1992} is 7 -- 19 \( R_*\). The range of radii during our observations of OT Gem is 1.7 -- 6.9 \( R_*\). In February, OT Gem already had a radius below the generally observed range, and later, in April, it fell further below the limit. In this episode of disk loss, the outer disk is lost and the feeble remaining emission comes only from the inner disk. OT Gem lost a little more than half of its disk within three months. We have captured this short-term phenomenon of sudden disk loss and detected a change in the disk radius as well. This clearly indicates the need for continuous monitoring of such systems. There have been very few spectroscopic studies of OT Gem in the past; our study should therefore add significantly to any future variability study of this star. \subsection{Discussion} \label{Sect4.3} In this study, we have captured two different types of short-term variation in Classical Be stars, which help provide insights into the properties of their circumstellar disks. We also estimated various parameters of the two stars, 59 Cyg and OT Gem. We estimated the rotational velocity, $\textit{v}$ sin $\textit{i}$, to be 406 $\pm$ 25 $\rm km\ s^{-1}$ for 59 Cyg and 126 $\pm$ 4 $\rm km\ s^{-1}$ for OT Gem. 59 Cyg has been classified as an emission $\Leftrightarrow$ emission \& shell transition type of star by \cite{rivinius2006}. 
Thus, the very high rotational velocity generally seen for shell stars is observed for 59 Cyg as well. OT Gem, on the other hand, has been classified as a non-shell star by \cite{hanuschik1996}, and the low rotational velocity estimated here is consistent with the picture of shell stars being viewed edge-on and non-shell stars at lower inclination angles. Assuming the critical velocity appropriate to the spectral class, we estimated the fractional critical rotation to be about 0.78 for 59 Cyg, suggesting that the star is rotating very close to the break-up velocity. For OT Gem, the fractional critical rotation is only about 0.26, indicating that it is a very slow rotator or is viewed at low inclination. We also estimated the radius of the H$\alpha$ emission region, found to be in the range 8.6 -- 11.6 \( R_*\) for 59 Cyg and 1.7 -- 6.9 \( R_*\) for OT Gem. In summary, we find that 59 Cyg is a fast rotator whose H$\alpha$ emission region extends far from the star, while OT Gem is a slow rotator, or is viewed at low inclination, with feeble emission coming from the inner disk very close to the star. 59 Cyg is known to be a Be binary with an sdO companion \citep{maintz2005}. It is often compared to $\phi$ Per, as both are binaries with sdO companions and similar profile variations have been reported for both. The emission sometimes shows rapid variability, which is likely associated with changes in the distribution of material within the disk. \cite{maintz2005} reported phase-locked emission variability of 59 Cyg with a period of 28.192 days. The appearance of a third peak has been observed for a few other well-known classical Be stars, such as $\phi$ Per, $\zeta$ Tau, $\nu$ Gem, $\chi$ Gem and Pleione. \cite{stefl2007} reported V/R variations of $\zeta$ Tau, $\nu$ Gem and $\phi$ Per, along with others, and noted that a triple-peak profile was observed in $\zeta$ Tau and $\nu$ Gem, appearing only in a particular part of the V/R phase. 
In the case of $\zeta$ Tau, the triple peak appeared during the \(V < R\) to \(V > R\) transition, whereas for $\nu$ Gem it appeared during the \(V > R\) to \(V < R\) transition. The triple peak and the V/R variation detected in 59 Cyg are similar to those of $\zeta$ Tau. Thus, stars which show a triple-peak profile can be classified into two types:\\ \begin{enumerate} \item \(V > R\) to \(V < R\) transition: Case-I (e.g. $\nu$ Gem) \item \(V \leq R\) to \(V > R\) transition: Case-II (e.g. $\zeta$ Tau, 59 Cyg) \end{enumerate} Case-I can be seen as a perturbation moving in the disk with the same sense of rotation as the disk, and hence a prograde case. \cite{stefl2007} stated that the triple-peak case can be explained neither by Okazaki's model \citep{okazaki1991} nor by the rapidly expanding circumstellar ring model of \cite{arias2007}. They conclude that the phase of the m = 1 oscillation and the assumed Keplerian rotation are inconsistent with large radial velocity fields in the disks. \cite{stefl2007} mentioned that disks with large eccentricity can precess in a prograde direction, and $\nu$ Gem is found to have a large eccentricity. In Case-II, the perturbation has to move with the opposite sense of rotation, and hence can be termed the retrograde case. Thus, in 59 Cyg and $\zeta$ Tau, the appearance of the triple peak and the V/R variation are in the opposite sense, making these retrograde cases. This retrograde kind of motion is difficult to explain, since the density sub-structure has to move against the sense of rotation of the disk to be seen as the third peak. \cite{stefl2007} also reiterated that the appearance of the triple peak is not related to the binary period. 59 Cyg has a very large radial velocity field owing to the large H$\alpha$ emission region reported in our study. We have been able to capture this short-lived triple-peak emission in H$\alpha$, which is generally seen in binary systems. 
We still do not fully understand the triple-peak profile. The triple-peak feature is seen in only a few stars, and our study shows that these stars differ from one another. Further observations and continuous monitoring of such stars during different V/R phases can provide valuable input towards understanding the processes in the circumstellar disk. OT Gem underwent a short-term disk loss during our observations. We captured this disk-loss phase and detected a change in the disk radius of the star within three months of observations. During the dissipation of the disk, the H$\alpha$ emitting region shrank from about 6.9 to 1.7 \( R_*\). This suggests that the disk dissipated from the outside in: the outer disk was lost first, while feeble emission from the inner disk persisted. This star showed disk loss on a very short timescale, unlike the general duration of years to decades. Moreover, at present the BeSS database \citep{neiner2011} shows this star in emission again, meaning that it is rebuilding its disk; it would be an interesting case for determining the period of such variability. This is the second time the emission has reappeared since our 2009 observations. We demonstrate that monitoring of these systems is important for understanding how a Be star disk is dissipated as well as rejuvenated. Our study shows that short-term monitoring is important for understanding the different types of variations seen in classical Be stars. The short-term variations detected in our observations give valuable insights into the circumstellar disk and the physical processes governing the material in it. This study is also important because such observations can be performed with a moderate-sized telescope equipped with a spectrograph. 
\section{Conclusions} \label{Sect5} \begin{enumerate} \item We have presented a spectroscopic analysis of the two Classical Be stars 59 Cyg and OT Gem, observed in 2009. \item The rotational velocity, $\textit{v}$ sin $\textit{i}$, was calculated using He{\sc i} lines and is found to be $\sim$ 400 $\rm km\ s^{-1}$ for 59 Cyg and $\sim$ 130 $\rm km\ s^{-1}$ for OT Gem. The fraction of critical rotation is found to be $\sim$ 0.8 for 59 Cyg, suggesting it is a rapid rotator, and $\sim$ 0.3 for OT Gem, indicating it is a slow rotator or is viewed at low inclination. \item The radius of the circumstellar disk, \( R_d/R_*\), derived from the double-peaked H$\alpha$ emission profile assuming a Keplerian orbit, is found to be $\sim$ 10.0 for 59 Cyg. This implies that the H$\alpha$ emission disk of 59 Cyg is very large. \item A triple-peaked profile is observed for 59 Cyg, and H$\alpha$ profile variation is also detected. We make an attempt to classify stars observed to show a triple peak into two groups. The mechanism for the formation of the triple peak in both cases remains a mystery. \item The EW of the H$\alpha$ emission line profile of OT Gem varied from -4.1 to -0.7~\AA ~in a span of four months. \item OT Gem underwent a disk-loss phenomenon during our observations, losing the outer disk very rapidly while retaining feeble emission from the inner disk. This suggests that the disk loss proceeded from the outside in during this phase. \item We conclude and confirm that these two stars show rapid short-term variations in H$\alpha$ emission, and that their close monitoring is important for understanding the physical processes in their circumstellar disks. \end{enumerate} \normalem \begin{acknowledgements} This work was funded by the Centre for Research, Christ University, Bangalore as a part of a Major Research Project. 
We thank the support staff at the 1.0m telescope, Vainu Bappu Observatory, especially Jayakumar K., for their assistance in obtaining the data used in this paper. \end{acknowledgements} \bibliographystyle{raa}
\section{Introduction} Recently there has been a lot of interest in inverse problems for Maxwell's equations in Euclidean domains in $\mathbb{R}^3$ and on compact Riemannian manifolds, see \cite{CarOlaSalo, KenSalUhl, KurLass06, KurLasSom06, OlaPaiSom1993, OlaPaiSom2003, OlaSom1996}. In a smooth bounded domain $M\subset \mathbb{R}^3$, Maxwell's equations are given by \begin{equation} \label{eq_Max_normal} \begin{aligned}\textrm{curl}\, E(x,t)=&-B_t(x,t),\\ \textrm{curl}\, H(x,t)=&D_t(x,t), \end{aligned} \end{equation} where $E$ and $H$ are the electric and magnetic fields, and $B$ and $D$ are the magnetic flux density and the electric displacement. The fields $E$ and $D$, and similarly, the fields $H$ and $B$ are related by the constitutive relations, \begin{equation} \label{eq_constitutive} D(x,t)=\epsilon(x)E(x,t),\quad B(x,t)=\mu(x) H(x,t), \end{equation} where the electric permittivity $\epsilon(x)$ and the magnetic permeability $\mu(x)$ are $C^\infty$-smooth positive-definite $3\times 3$-matrix valued functions on $M$. The initial boundary value problem for the time dependent Maxwell's equations consists of \eqref{eq_Max_normal}, \eqref{eq_constitutive} together with the conditions \begin{equation} \label{eq_bound-cond} \begin{aligned} E(x,t)|_{t=-\tau_f}=0,\quad H(x,t)|_{t=-\tau_f}=0,\\ n\times E|_{\partial M\times \mathbb{R}_-}=f, \end{aligned} \end{equation} where $n$ is the unit exterior normal to $\partial M$, and $\tau_f>0$ is such that $f(x, t)=0$ for $t< -\tau_f$. The inverse problem associated with \eqref{eq_Max_normal}, \eqref{eq_constitutive}, and \eqref{eq_bound-cond}, is the problem of reconstruction of electromagnetic parameters $\epsilon(x)$ and $\mu(x)$ from the knowledge of the response operator \begin{equation} \label{eq_response_int} R:n\times E|_{\partial M\times \mathbb{R}_-}\mapsto n\times H|_{\partial M \times \mathbb{R}_-}. 
\end{equation} From the point of view of modern electrodynamics and classical field theories, it is natural to adopt an invariant approach to Maxwell's equations, where the domain $M$ is replaced by a general $3$-dimensional smooth compact oriented connected Riemannian manifold, and the vector fields $E$, $H$, $D$, and $B$ are viewed as differential forms, see \cite{Thi79}. The geometric inverse problem is then to determine the unknown manifold $M$, together with the electromagnetic parameters, from the response operator \eqref{eq_response_int}, which is now defined in terms of boundary traces of the corresponding differential forms. See also \cite{LasUhl01, LasTayUhl03}, where the problem of the reconstruction of a Riemannian manifold from the Dirichlet-to-Neumann operator for harmonic functions has been studied. In the context of time-harmonic Maxwell's equations in an isotropic setting, i.e. when the parameters $\epsilon(x)$ and $\mu(x)$ are scalar, the inverse problem for bounded domains in $\mathbb{R}^3$ was solved in \cite{OlaPaiSom1993}, see also \cite{ColPai92, McD97, OlaSom1996}. Much less is known in the anisotropic case. To the best of our knowledge, positive results in this direction have so far been established only in the case of an anisotropic medium of a special type, characterized by the polarization-independent velocity of the wave propagation. In terms of the electromagnetic parameters, this amounts to the existence of $\alpha(x)>0$ such that $\epsilon(x)=\alpha(x)\mu(x)$. In this case, under a certain geometric condition, it is shown in \cite{KenSalUhl} that, if the conformal class of $\epsilon(x)$ and $\mu(x)$ is known, the stationary boundary measurements identify uniquely the conformal factors. There are also counterexamples for uniqueness of time-harmonic inverse problems involving very anisotropic and degenerate material parameters \cite{GreKurLasUhl2009, GreKurLasUhl2007}.
In \cite{KurLasSom06}, the inverse problem for Maxwell's equations in the time domain for an anisotropic medium was studied, still assuming that the wave propagation is independent of the polarization. It was shown that the Riemannian manifold and the electromagnetic parameters can be recovered from a dynamical response operator similar to \eqref{eq_response_int}, given on a finite time interval. See also \cite{BelIsaPesShar200} for the reconstruction of the wave speed. In this paper we shall be concerned with the case of a general anisotropic medium. Specifically, working in the geometric setting of Maxwell's equations on a manifold $M$, we are able to recover the Betti numbers of the manifold from the dynamical response operator, given on an open subset of the boundary. This can be viewed as the first step in attempting to reconstruct the geometry and topology of the underlying manifold in the full generality of the anisotropic case. Let us remark that in the isotropic case, as well as in the case when $\epsilon(x)=\alpha(x)\mu(x)$, $\alpha(x)>0$, the reconstruction of the manifold and the electromagnetic parameters is based on controllability results, which in turn rely crucially on generalizations of the Tataru unique continuation theorem \cite{EllNakTat02, KurLasSom06}. In our opinion, the main obstacle in the study of the inverse problem for the general anisotropic Maxwell system is that such unique continuation results do not seem to be available in this case. We would also like to mention the paper \cite{BelSha2008}, where the reconstruction of the Betti numbers of a manifold from the Dirichlet-to-Neumann operator for the Hodge Laplacian on differential forms is studied. The plan of the paper is as follows. Section 2 is devoted to the description of our geometric setup, including the completion of the Maxwell system to a Dirac type elliptic system, and contains the statement of the main results.
We also discuss examples that illustrate the significance of our results for the determination of the topological structure of an unknown object from the boundary measurements. In Section 3, we prove the identifiability of the Betti numbers in the complete Maxwell case, while in Section 4 we establish our results for the physical Maxwell system. \section{Preliminaries and statement of the main results} \subsection{Invariant definition of Maxwell's equations} Let $(M,g_0)$ be a smooth compact oriented connected Riemannian 3-manifold with $\partial M\not=\emptyset$. We shall first rewrite equations \eqref{eq_Max_normal}, \eqref{eq_constitutive}, in the anisotropic case, using the language of differential forms. In doing so, we shall follow closely \cite{KurLasSom06}, where the case $\epsilon(x)=\alpha(x)\mu(x)$, $\alpha(x)>0$, is considered. Let $\Lambda^k T^*M$, $k=0,1,\dots,3$, be the bundle of the $k$-th exterior differential forms and $\Lambda T^* M$ be the full bundle of differential forms. Denote by $C^\infty(M,\Lambda^k T^*M)$ the space of smooth real exterior differential forms of degree $k$. Define the fiberwise duality between $1$-forms and vector fields, \[ {}^\flat:\ C^\infty (M, T M)\to C^\infty(M, \Lambda^1 T^*M),\quad X^\flat(Y)=g_0(X,Y), \] or, in a coordinate system, for $X=a^i\frac{\partial}{\partial x^i}$, $X^\flat=g_{0,ij}a^jdx^i$. This map is bijective and has the following properties \cite{Sch95}: \[ (\textrm{curl}\,X)^\flat=*_0d X^\flat,\quad (\textrm{div} X)^\flat=*_0d*_0X^\flat, \] where \[ d: C^\infty(M, \Lambda^k T^* M)\to C^\infty(M, \Lambda^{k+1} T^*M) \] is the exterior differential and $*_0$ is the Hodge operator with respect to the metric $g_0$, acting fiberwise, \[ *_0:C^\infty(M, \Lambda^k T^*M) \to C^\infty(M, \Lambda^{3-k} T^*M). \] We define the $1$-forms $\mathcal{E}=E^\flat$ and $\mathcal{H}=H^\flat$ and the $2$-forms $\mathcal{B}=*_0B^\flat$ and $\mathcal{D}=*_0D^\flat$.
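As a quick sanity check of the curl identity above (an illustration added here for convenience, in the flat case $g_0=\delta$ on a domain in $\mathbb{R}^3$), take $X=a^i\frac{\partial}{\partial x^i}$, so that $X^\flat=a^1dx^1+a^2dx^2+a^3dx^3$. Then \[ dX^\flat=\Big(\frac{\partial a^3}{\partial x^2}-\frac{\partial a^2}{\partial x^3}\Big)dx^2\wedge dx^3+\Big(\frac{\partial a^1}{\partial x^3}-\frac{\partial a^3}{\partial x^1}\Big)dx^3\wedge dx^1+\Big(\frac{\partial a^2}{\partial x^1}-\frac{\partial a^1}{\partial x^2}\Big)dx^1\wedge dx^2, \] and since $*_0$ maps $dx^2\wedge dx^3\mapsto dx^1$, $dx^3\wedge dx^1\mapsto dx^2$, $dx^1\wedge dx^2\mapsto dx^3$, the $1$-form $*_0dX^\flat$ has precisely the components of $\textrm{curl}\,X$.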
Using the identity $*_0*_0=\text{id}$, valid in the $3$-dimensional case, we can write Maxwell's equations \eqref{eq_Max_normal} in terms of differential forms as \begin{equation} \label{eq_maxwell_diff_forms} d\mathcal{E}=-\partial_t\mathcal{B},\quad d\mathcal{H}=\partial_t\mathcal{D}. \end{equation} Consider now the constitutive relations \eqref{eq_constitutive}. We shall determine a metric $g_\epsilon$ such that the Hodge operator with respect to this metric, denoted by $*_\epsilon$, satisfies \begin{equation} \label{eq_cons_forms_1} \mathcal{D}=*_0(\epsilon E)^\flat=*_\epsilon\mathcal{E}. \end{equation} In local coordinates $(x^1,x^2,x^3)$, we have $\epsilon E=\epsilon^i_k E^k\frac{\partial}{\partial x^i}$, $(\epsilon E)^\flat=g_{0,ij}\epsilon^j_k E^kdx^i$ and thus, the middle term of \eqref{eq_cons_forms_1} yields \begin{align*} *_0(\epsilon E)^\flat&=*_0(g_{0,ij}\epsilon^j_kE^kdx^i)=\frac{1}{2} \sqrt{\hbox{det}(g_0)}g_0^{il}g_{0,ij}\epsilon_k^jE^ks_{lpq}dx^p\wedge dx^q\\ &=\frac{1}{2} \sqrt{\hbox{det}(g_0)}\epsilon_k^jE^ks_{jpq}dx^p\wedge dx^q, \end{align*} where $s_{lpq}$ is the Levi-Civita permutation symbol. The right hand side of \eqref{eq_cons_forms_1} implies that \[ *_\epsilon\mathcal{E}=*_\epsilon(g_{0,ik}E^kdx^i)=\frac{1}{2} \sqrt{\hbox{det}(g_\epsilon)}g_\epsilon^{ij}g_{0,ik}E^ks_{jpq}dx^p\wedge dx^q. \] Hence, \eqref{eq_cons_forms_1} is valid if we set \[ \sqrt{\hbox{det}(g_\epsilon)}g_\epsilon^{ij}g_{0,ik}=\sqrt{\hbox{det}(g_0)}\epsilon^j_k. \] By taking the determinants of both sides, we get \[ \sqrt{\hbox{det}(g_\epsilon)}=\hbox{det}(\epsilon)\sqrt{\hbox{det}(g_0)}. \] Defining \[ g_\epsilon^{ij}=\frac{1}{\hbox{det}(\epsilon)} \epsilon^j_kg_0^{ki}, \] we see that \eqref{eq_cons_forms_1} is valid. Similarly, we see that for the metric tensor $g_\mu^{ij}=\frac{1}{\hbox{det}(\mu)} \mu^j_kg_0^{ki}$, we have \[ \mathcal{B}=*_0(\mu H)^\flat=*_\mu\mathcal{H}. 
\] Hence, the constitutive relations take the form \[ \mathcal{D}(x,t)=*_\epsilon\mathcal{E}(x,t),\quad \mathcal{B}(x,t)=*_\mu\mathcal{H}(x,t). \] We consider the waves that satisfy the initial conditions \[ B(x,t)|_{t=-\tau}=0,\quad D(x,t)|_{t=-\tau}=0, \quad \tau>0. \] Applying the divergence operator to \eqref{eq_Max_normal} and using these initial conditions, we have \[ \textrm{div}\, B(x,t)=0,\quad \textrm{div}\, D(x,t)=0,\quad t\in \mathbb{R},\quad x\in M. \] In terms of differential forms these equations imply that \begin{equation} \label{eq_compat} d\mathcal{B}=0,\quad d\mathcal{D}=0. \end{equation} In the further considerations, we will use only the pair $(\mathcal{E},\mathcal{B})$ and denote it by $(\omega^1,\omega^2)$, where $\omega^1=\mathcal{E}$ and $\omega^2=\mathcal{B}$. The compatibility conditions \eqref{eq_compat} imply that \begin{equation} \label{eq_max1} d\omega^2=0,\quad d*_\epsilon \omega^1=0. \end{equation} It follows from \eqref{eq_maxwell_diff_forms} that \begin{equation} \label{eq_max2} \omega^2_t=-d\omega^1,\quad \omega^1_t=*_\epsilon d*_\mu\omega^2. \end{equation} Let us consider the following codifferentials, \begin{equation} \label{eq_codiff} \delta_{\epsilon,\mu}\omega^2=*_\epsilon d*_\mu \omega^2, \quad \delta_{\mu,\epsilon}\omega^k=-*_\mu d*_\epsilon \omega^k,\quad k=1,3. \end{equation} Then \eqref{eq_max1} and \eqref{eq_max2} yield \begin{equation} \label{eq_max} \begin{aligned} &\omega^1_t=\delta_{\epsilon,\mu}\omega^2,\quad \delta_{\mu,\epsilon}\omega^1=0,\\ & \omega^2_t=-d\omega^1,\quad d\omega^2=0. \end{aligned} \end{equation} These equations are called \emph{Maxwell's equations for forms in the divergence free case} on a Riemannian manifold $M$. We shall now extend the above equations to the full bundle of exterior differential forms $\Lambda T^*M$. To this end, we introduce auxiliary forms, $\omega^0\in C^\infty(M)$ and $\omega^3\in C^\infty(M, \Lambda^3 T^*M)$, which vanish in the electromagnetic theory, by \[ \omega^0_t=\delta_{\mu,\epsilon}\omega^1,\quad \omega^3_t=-d\omega^2.
\] Since $\omega^0=0$ and $\omega^3=0$ in the electromagnetic theory, we can modify equations \eqref{eq_max} to have \begin{align*} &\omega^1_t=-d\omega^0+\delta_{\epsilon,\mu}\omega^2,\quad \omega^3_t=-d\omega^2,\\ & \omega^2_t=-d\omega^1+\delta_{\mu,\epsilon}\omega^3,\quad \omega^0_t=\delta_{\mu,\epsilon}\omega^1, \end{align*} or, in the matrix form, \begin{equation} \label{eq_complete_maxwell} \omega_t+D\omega=0, \end{equation} where $\omega=(\omega^0,\omega^1,\omega^2,\omega^3)$ and the operator $D$ is given by \begin{equation} \label{eq_matrix} D=\begin{pmatrix} 0&-\delta_{\mu,\epsilon} & 0 & 0\\ d & 0 & -\delta_{\epsilon,\mu} & 0\\ 0& d & 0 & -\delta_{\mu,\epsilon}\\ 0& 0 & d &0 \end{pmatrix}. \end{equation} Equations \eqref{eq_complete_maxwell}, \eqref{eq_matrix} are called \emph{the complete Maxwell system}. Notice that the operator $D$ is of the Dirac type. \subsection{Function spaces} Define the $L^2$-inner product in the space $C^\infty(M, \Lambda^kT^* M)$ as follows, \begin{align*} (\omega^k,\eta^k)_{L^2_\mu}=&\int_M \omega^k\wedge*_{\mu}\eta^k,\quad k=0,2,\\ (\omega^k,\eta^k)_{L^2_\epsilon}=&\int_M \omega^k\wedge*_{\epsilon}\eta^k,\quad k=1,3, \end{align*} and denote by $L^2(M, \Lambda^k T^* M)$ the completion of $C^\infty(M, \Lambda^kT^* M)$ in the corresponding norm. In the complexified case, we take the corresponding sesquilinear extension of the inner product. We denote by $H^s(M, \Lambda^k T^* M)$ the standard Sobolev space of $k$-forms. The natural domain of the exterior differential $d$ in $L^2(M, \Lambda^k T^* M)$ is \[ H(d,\Lambda^k T^* M)=\{\omega^k\in L^2(M, \Lambda^k T^* M):d\omega^k\in L^2(M, \Lambda^{k+1} T^* M)\}, \] and we define \[ H(\delta_{\epsilon,\mu},\Lambda^k T^* M)=\{\omega^k\in L^2(M, \Lambda^k T^* M):\delta_{\epsilon,\mu}\omega^k\in L^2(M, \Lambda^{k-1} T^* M)\}, \] and similarly for $\delta_{\mu,\epsilon}$. 
Let $i^*:C^\infty(M, \Lambda^k T^* M)\to C^\infty(\partial M, \Lambda^k T^* M)$ be the pull-back of the imbedding $i:\partial M\to M$. Then we define the \emph{tangential trace} of $k$-forms as \[ \mathbf{t}:C^\infty(M, \Lambda^k T^* M)\to C^\infty(\partial M, \Lambda^k T^* M), \quad \mathbf{t}\omega^k=i^*\omega^k,\quad k=0,1,2, \] and the \emph{normal trace} as \begin{align*} \mathbf{n}:\ C^\infty(M, \Lambda^k T^* M)& \to C^\infty(\partial M, \Lambda^{3-k} T^* M),\\ \mathbf{n}\omega^k=i^*(*_\epsilon \omega^k),&\ k=1,3,\quad \mathbf{n}\omega^2=i^*(*_\mu \omega^2). \end{align*} Set \[ \langle\mathbf{t}\omega^k,\mathbf{n}\eta^{k+1}\rangle=\int_{\partial M}\mathbf{t}\omega^k\wedge\mathbf{n}\eta^{k+1},\quad k=0,1,2. \] With this notation, Stokes' formulae for differential forms can be written as \begin{equation} \label{eq_stokes_formulae} \begin{aligned} (d\omega^0,\eta^1)_{L^2_\epsilon}-(\omega^0,\delta_{\mu,\epsilon}\eta^1)_{L^2_\mu}= \langle\mathbf{t}\omega^0,\mathbf{n}\eta^1\rangle,\\ (d\omega^1,\eta^2)_{L^2_\mu}-(\omega^1,\delta_{\epsilon,\mu}\eta^2)_{L^2_\epsilon}= \langle\mathbf{t}\omega^1,\mathbf{n}\eta^2\rangle,\\ (d\omega^2,\eta^3)_{L^2_\epsilon}-(\omega^2,\delta_{\mu,\epsilon}\eta^3)_{L^2_\mu}= \langle\mathbf{t}\omega^2,\mathbf{n}\eta^3\rangle. \end{aligned} \end{equation} Using \eqref{eq_matrix} and \eqref{eq_stokes_formulae}, we get \begin{equation} \label{eq_stokes_D} (D\omega,\eta)_{L^2}+(\omega,D\eta)_{L^2}=\langle\mathbf{t}\omega,\mathbf{n}\eta\rangle+ \langle\mathbf{t}\eta,\mathbf{n}\omega\rangle, \end{equation} where $\mathbf{t}\omega=(\mathbf{t}\omega^0,\mathbf{t}\omega^1,\mathbf{t}\omega^2)$, $\mathbf{n}\omega=(\mathbf{n}\omega^1,\mathbf{n}\omega^2,\mathbf{n}\omega^3)$, and \[ \langle\mathbf{t}\omega,\mathbf{n}\eta\rangle=\langle\mathbf{t}\omega^0,\mathbf{n}\eta^1\rangle+ \langle\mathbf{t}\omega^1,\mathbf{n}\eta^2\rangle+\langle\mathbf{t}\omega^2,\mathbf{n}\eta^3\rangle.
\] Here we take $\omega,\eta\in \mathcal{H}$, where \begin{align*} \mathcal{H}&=H(d, \Lambda^0 T^* M)\times [H(d, \Lambda^1 T^* M)\cap H(\delta_{\mu,\epsilon}, \Lambda^1 T^* M)]\\ &\times [H(d, \Lambda^2 T^* M)\cap H(\delta_{\epsilon,\mu}, \Lambda^2 T^* M)] \times H(\delta_{\mu,\epsilon}, \Lambda^3 T^* M). \end{align*} It will be convenient to write $\delta$ to stand for both $\delta_{\mu,\epsilon}$ and $\delta_{\epsilon,\mu}$ when there is no risk of ambiguity. There are well defined extensions of the boundary trace operators $\mathbf{t}$ and $\mathbf{n}$ to the spaces $H(d,\Lambda^k T^*M)$ and $H(\delta,\Lambda^k T^*M)$, see \cite{Paquet}. \begin{lem} The operators $\mathbf{t}$ and $\mathbf{n}$ can be extended to continuous surjective maps \begin{equation} \label{eq_t_1} \mathbf{t}:H(d,\Lambda^k T^*M)\to H^{-1/2}(d,\partial M, \Lambda^k T^* M), \end{equation} \begin{equation} \label{eq_n_1} \mathbf{n}:H(\delta,\Lambda^{k+1} T^*M)\to H^{-1/2}(d,\partial M, \Lambda^{2-k} T^* M), \end{equation} where $H^{-1/2}(d,\partial M, \Lambda^kT^* M)$ is given by \[ \{\omega^k\in H^{-1/2}(\partial M,\Lambda^k T^* M):d\omega^k\in H^{-1/2}(\partial M, \Lambda^{k+1} T^* M)\}. \] \end{lem} Let $H_t(d,\Lambda^k T^* M)$ stand for the kernel of \eqref{eq_t_1}, and let $H_n(\delta,\Lambda^{k+1} T^* M)$ denote the kernel of the operator \eqref{eq_n_1}. Using \eqref{eq_stokes_formulae}, we can verify the following result in a standard way, see also \cite[Lemma 1.3]{KurLasSom06}. \begin{lem} \label{lem_adjoint} The Hilbert space adjoint of \[ d:L^2(M,\Lambda^0 T^* M)\to L^2(M, \Lambda^1 T^*M), \] equipped with the domain $H_t(d,\Lambda^0 T^* M)$, is the operator $\delta_{\mu,\epsilon}$ with the domain $H(\delta_{\mu,\epsilon}, \Lambda^1 T^* M)$. The Hilbert space adjoint of \[ \delta_{\mu,\epsilon}:L^2(M, \Lambda^1 T^*M)\to L^2(M,\Lambda^0 T^* M), \] equipped with the domain $H(\delta_{\mu,\epsilon},\Lambda^1 T^*M)$, is the operator $d$ with the domain $H_t(d,\Lambda^0 T^* M)$.
\end{lem} It is clear that analogous statements hold for the operators $d$ and $\delta$, acting on forms of higher degree. We shall need the following result. \begin{prop} \begin{itemize} \item [(i)] The operator $D$, given by \eqref{eq_matrix} and equipped with the domain \begin{align*} \mathcal{D}(D)=&H_t(d,\Lambda^0 T^* M)\times [H_t(d, \Lambda^1 T^* M)\cap H(\delta_{\mu,\epsilon},\Lambda^1 T^* M)]\\ &\times [H_t(d, \Lambda^2 T^* M)\cap H(\delta_{\epsilon,\mu},\Lambda^2 T^* M)] \times H(\delta_{\mu,\epsilon},\Lambda^3 T^* M), \end{align*} is skew-adjoint on $L^2$. \item[(ii)] The spectrum of the operator $D$ with the domain $\mathcal{D}(D)$ is discrete. \item[(iii)] The operator $D$ is an elliptic differential operator in the interior of $M$. \end{itemize} \end{prop} \begin{proof} (i). Using the definition of the domain of the adjoint and Lemma \ref{lem_adjoint}, we obtain that $\mathcal{D}(D^*)=\mathcal{D}(D)$. The skew-adjointness of $D$ then follows from \eqref{eq_stokes_D}, which holds for $\omega,\eta\in \mathcal{D}(D)$. (ii). In view of Gaffney's inequality \cite[Corollary 2.1.6]{Sch95}, \[ H_t(d, \Lambda^k T^* M)\cap H(\delta,\Lambda^k T^* M)=\{ \omega^k\in H^1(M,\Lambda^k T^* M):\mathbf{t}\omega^k=0 \},\quad k=1,2, \] together with the Sobolev embedding, we conclude that the imbedding $\mathcal{D}(D) \hookrightarrow L^2$ is compact. Hence, the spectrum of $D$ is discrete. (iii). It suffices to show the ellipticity of $D^2$. Since $\delta_{\mu,\epsilon}\delta_{\epsilon,\mu}=0$ and $\delta_{\epsilon,\mu}\delta_{\mu,\epsilon}=0$, we get \[ D^2=\begin{pmatrix} -\delta_{\mu,\epsilon}d &0 & 0& 0\\ 0& -d\delta_{\mu,\epsilon}-\delta_{\epsilon,\mu}d& 0 & 0\\ 0& 0& -d\delta_{\epsilon,\mu}-\delta_{\mu,\epsilon}d & 0\\ 0& 0 & 0& -d\delta_{\mu,\epsilon} \end{pmatrix}.
\] The operator $D^2$ enjoys the following coercive estimate, \begin{equation} \label{eq_coer} (D^2\omega,\omega)_{L^2(\Omega M)}\ge C_1\|\omega\|_{H^1(\Omega M)}^2-C_2\|\omega\|_{L^2(\Omega M)}^2,\quad C_1>0, \end{equation} where $\omega=(\omega^0,\omega^1,\omega^2,\omega^3)$ and $\omega^k\in C^\infty_0(M,\Lambda^k T^* M)$, $k=0,1,2,3$. When proving \eqref{eq_coer}, notice that, for $\omega^1\in C^\infty_0(M,\Lambda^1 T^* M)$, \[ ((d\delta_{\mu,\epsilon}+\delta_{\epsilon,\mu}d)\omega^1,\omega^1)_{L^2}=\|\delta_{\mu,\epsilon}\omega^1\|_{L^2}^2+\|d \omega^1\|_{L^2}^2. \] An application of Gaffney's inequality gives that \[ \|\omega^1\|_{H^1}\le C(M)(\|\omega^1\|_{L^2}+\|d\omega^1\|_{L^2}+\|\delta_{\mu,\epsilon}\omega^1\|_{L^2}), \] where $C(M)>0$ is a constant. The estimate \eqref{eq_coer} follows, since the treatment of forms of degrees different from $1$ is analogous. See also \cite{Cos1991} for a different proof of coercivity. The ellipticity of $D^2$ now follows from the coercivity estimate \eqref{eq_coer}, see e.g. \cite{Melin1971}. \end{proof} \subsection{Betti numbers and the Euler characteristic of a manifold with boundary} Let $(M,g_0)$ be an orientable compact Riemannian manifold of dimension $3$ with boundary. The space \[ \mathcal{H}^k(M)=\{\omega\in L^2(M, \Lambda^k T^* M):d\omega=0, d*_{g_0}\omega=0\} \] is called the \emph{space of harmonic fields}. Notice that this space is infinite dimensional for $1\le k\le 2$, see \cite[Theorem 3.4.2]{Sch95}. Moreover, it is well-known that harmonic fields are $C^\infty$-smooth in the interior of $M$. The following two finite dimensional subspaces are distinguished in $\mathcal{H}^k(M)$: \begin{align*} \mathcal{H}^k_D(M)=\{\omega\in\mathcal{H}^k(M):\mathbf{t}\omega=0\}\quad\text{and}\\ \mathcal{H}_N^k(M)=\{\omega\in\mathcal{H}^k(M):i^*(\ast_{g_0}\omega)=0\}, \end{align*} which are called the \emph{Dirichlet} and \emph{Neumann harmonic fields}, respectively.
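To make the last definitions concrete, consider the following simple example, added here for illustration and assuming the flat product metric. On the solid torus $ST=S^1\times D^2$, the angular form $\omega=d\theta$ satisfies $d\omega=0$ and $*_{g_0}\omega=dx\wedge dy$, the area form of $D^2$, so that $d*_{g_0}\omega=0$ and $\omega\in\mathcal{H}^1(ST)$. Moreover, $i^*(*_{g_0}\omega)=0$, since $dx\wedge dy$ vanishes when evaluated on pairs of vectors tangent to the boundary torus, and hence $\omega\in\mathcal{H}^1_N(ST)$. As \[ \oint_{S^1}\omega=2\pi\neq 0, \] the field $\omega$ is not exact and represents a nontrivial class, confirming that $\beta_1(ST)\ge 1$.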
It follows from the Hodge theory that the dimensions of the spaces $\mathcal{H}^k_D(M)$ and $\mathcal{H}^k_N(M)$ are independent of the choice of the metric $g_0$. For our purposes, we shall have to specify the choice of the Hodge star operator in the definition of $\mathcal{H}^k(M)$, according to the definition of the codifferential given in \eqref{eq_codiff}, \begin{align*} \mathcal{H}^2(M)&=\{\omega\in L^2(M, \Lambda^2 T^* M):d\omega=0, d*_{\mu}\omega=0\},\\ \mathcal{H}^k(M)&=\{\omega\in L^2(M, \Lambda^k T^* M):d\omega=0, d*_{\epsilon}\omega=0\},\quad k=1,3. \end{align*} Recall \cite{Fran_book} that the space $\mathcal{H}_N^k( M)$ is isomorphic to the $k$th homology group $H_k(M;\mathbb{R})$ of the manifold, and $\mathcal{H}_D^k(M)$ is isomorphic to the $k$th relative homology group $H_k(M,\partial M;\mathbb{R})$. The Poincar\'e-Lefschetz duality states the existence of the following isomorphism, \[ H_k(M;\mathbb{R})\simeq H_{3-k}(M,\partial M;\mathbb{R}), \quad k=0,1,2,3. \] The $k$th absolute Betti number of the manifold $M$ is given by \[ \beta_k(M)=\dim \mathcal{H}_N^k(M), \quad k=0,1,2,3, \] and the $k$th relative Betti number of $M$ is defined by \[ \beta_k(M,\partial M)=\dim\mathcal{H}_D^k(M), \quad k=0,1,2,3. \] Being among the simplest topological invariants, the Betti numbers carry a basic amount of information about the topology of the manifold in question. The Betti numbers $\beta_0(M)$ and $\beta_3(M)$ admit a particularly straightforward geometric interpretation. Namely, $\beta_0(M)$ counts the number of connected components of $M$, while $\beta_3(M)$ gives the number of connected components of $M$ without boundary. Assuming that the manifold $M$ is connected, we have $\beta_0(M)=1$ and $\beta_3(M)=0$. As for the first Betti number $\beta_1(M)$, it is at least as large as the total number of handles of $\partial M$, see \cite[Theorem 5.1.9]{ColGrigKur}. The Euler characteristic is defined by \[ \chi(M)=\beta_{0}(M)-\beta_1(M)+\beta_2(M)-\beta_3(M).
\] It is known \cite[Corollary 8.8]{Dold_book} that the Euler characteristics of a compact 3-manifold and its boundary are related by \begin{equation} \label{eq_euler_bound} \chi({\partial M})=2\chi(M). \end{equation} Notice finally that if $M$ is a connected compact orientable 3-manifold with vanishing Euler characteristic, then either the manifold $M$ is closed or its boundary is a disjoint union of tori. \subsection{Boundary data for inverse problems} Let $\Gamma\subset\partial M$ be an open subset of the boundary $\partial M$. Consider the following initial boundary value problem, \begin{equation} \label{eq_hyperbolic} \begin{aligned} &(\partial_t+D)\omega(x,t)=0\quad\text{in}\quad M\times\mathbb{R},\\ &\mathbf{t}\omega|_{\partial M\times\mathbb{R}}=f\in C_0^\infty( \mathbb{R}_-,C^\infty_0(\Gamma, \Lambda T^*M)),\\ &\omega|_{t=-\tau_f}=0, \end{aligned} \end{equation} where $\tau_f>0$ is such that $\inf \hbox{supp } (f)>-\tau_f$. Following \cite{KurLasSom06}, we shall define a solution of \eqref{eq_hyperbolic} in the following way. Let $E$ be a right inverse to the trace mapping $\mathbf{t}$ such that $Ef(-\tau_f)=0$. We set \begin{equation} \label{eq_semi_1} \omega^f(t)=Ef(t)-\int_{-\tau_f}^t e^{-(t-s)D}(\partial_s +D)Ef(s)ds. \end{equation} Here $e^{-t D}$ is the unitary group, generated by the self-adjoint operator $D/i$. Associated to the problem \eqref{eq_hyperbolic} is the response operator, \[ R_{\Gamma}:f\mapsto \mathbf{n}\omega^f|_{\Gamma\times\mathbb{R}_-}. \] The first main result of this work is the following theorem. \begin{thm} \label{thm_main} Assume that we are given an open subset $\Gamma\subset \partial M$ and the response operator $R_{\Gamma}$ for any $f\in C_0^\infty( \mathbb{R}_-,C^\infty_0(\Gamma, \Lambda T^*M))$. These data determine the Betti numbers of the manifold $M$. 
\end{thm} Let us now return to the physical Maxwell's equations \begin{equation} \label{eq_Maxwell_phys} \begin{aligned} &\omega^1_t=\delta_{\epsilon,\mu}\omega^2,\quad \delta_{\mu,\epsilon}\omega^1=0,\\ &\omega^2_t=-d\omega^1,\quad d\omega^2=0,\\ &\mathbf{t}\omega^1=h\in C_0^\infty( \mathbb{R}_-,C^\infty_0(\Gamma, \Lambda^1 T^*M)),\\ &\omega|_{t<-\tau_h}=0. \end{aligned} \end{equation} As explained in \cite{KurLasSom06}, the solution to \eqref{eq_Maxwell_phys} is obtained from \eqref{eq_semi_1} by choosing the boundary source $f$ in \eqref{eq_hyperbolic} as \[ f=(0,h,-\int_{-\tau_h}^t dh(t')dt'). \] The response operator for \eqref{eq_Maxwell_phys} is defined by \[ \widetilde R_\Gamma:h\mapsto \mathbf{n}\omega^{h,2}|_{\Gamma\times \mathbb{R}_-}, \] where $\omega^h$ is the solution to \eqref{eq_Maxwell_phys}. Notice that in the classical terminology of electric and magnetic fields, the response operator $\widetilde R_\Gamma$ maps the tangential component of the electric field $n\times E|_{\Gamma\times\mathbb{R}_-}$ to the tangential component of the magnetic field $n\times H|_{\Gamma\times \mathbb{R}_-}$. \begin{thm} \label{thm_main_max} Given an open subset $\Gamma\subset\partial M$ and the response operator $\widetilde R_\Gamma$ for any $h\in C_0^\infty( \mathbb{R}_-,C^\infty_0(\Gamma, \Lambda^1 T^*M))$, the first absolute Betti number $\beta_1(M)$ of the manifold $M$ can be determined. \end{thm} \begin{cor} \label{cor_main} The knowledge of the boundary $\partial M$ and the response operator $\widetilde R_\Gamma$, $\Gamma\subset \partial M$, for any $h\in C_0^\infty( \mathbb{R}_-,C^\infty_0(\Gamma, \Lambda^1 T^*M))$, determines the first and the second absolute Betti numbers $\beta_1(M)$ and $\beta_2(M)$ of $M$. \end{cor} Corollary \ref{cor_main} follows from Theorem \ref{thm_main_max} together with \eqref{eq_euler_bound}. 
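For the reader's convenience, the arithmetic behind Corollary \ref{cor_main} can be spelled out explicitly (this short computation is not part of the statements above; it assumes, as throughout, that $M$ is connected with $\partial M\neq\emptyset$, so that $\beta_0(M)=1$ and $\beta_3(M)=0$). In this case \[ \chi(M)=1-\beta_1(M)+\beta_2(M), \] and, since the boundary $\partial M$ is known, so is $\chi(\partial M)$; relation \eqref{eq_euler_bound} then yields \[ \beta_2(M)=\beta_1(M)-1+\frac{1}{2}\chi(\partial M). \] Thus, once the response operator $\widetilde R_\Gamma$ determines $\beta_1(M)$ via Theorem \ref{thm_main_max}, the second absolute Betti number $\beta_2(M)$ follows from the formula above.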
\subsection{Examples} The following two examples illustrate the significance of our results for the determination of the topological structure of an unknown object from the boundary measurements. This may have applications in practical situations where the structure of complicated voids in an unknown object is to be recovered. \begin{exmp} Let $M\subset \mathbb{R}^3$ be obtained from a large ball by removing a finite number of pairwise disjoint solid tori. Then the first absolute Betti number of $M$ is equal to the number of the removed solid tori. Thus, measuring the response operator on a portion of the boundary sphere, we can recover the total number of the removed tori. \end{exmp} \begin{exmp} Consider a solid torus $ST=S^1\times D^2\subset \mathbb{R}^3$, where $S^1$ is a unit circle and $D^2$ is a closed two-dimensional disc. The boundary of $ST$ is a two-dimensional torus, and since $D^2$ is contractible, $ST$ is homotopy equivalent to $S^1$, so that the first absolute Betti number of $ST$ is equal to $1$. Let $M$ be the connected sum of $k$ copies of the solid torus $ST$. Here we may recall that a connected sum of two manifolds, possibly with boundary, is a manifold formed by deleting a ball in the interior of each of the manifolds and gluing together the resulting boundary spheres. The boundary of $M$ is a disjoint union of $k$ copies of the two-dimensional torus. It is known that for manifolds of dimension three and higher, the first absolute Betti number of the connected sum is the sum of the first absolute Betti numbers of the summands. Therefore, the first absolute Betti number of $M$ is equal to $k$. It follows from our results that, performing measurements on a portion of the boundary of the manifold $M$, we are able to recover the total number of the solid tori. \end{exmp} \section{Proof of Theorem \ref{thm_main}} \subsection{Inner products} Let $\omega^f(t)=\omega^f(x,t)$ be the solution to \eqref{eq_hyperbolic}.
We shall need the following Blagovestchenskii type result, see \cite{Blag69} for such results for one-dimensional inverse problems. \begin{thm} \label{thm_blagov} For any $f,h\in C_0^\infty( \mathbb{R}_-,C^\infty_0(\Gamma, \Lambda T^*M))$, the knowledge of $\Gamma\subset \partial M$ and the response operator $R_\Gamma$ allows us to evaluate the inner products \begin{equation} \label{eq_blagov} (\omega^{f,k}(t),\omega^{h,k}(s))_{L^2}, \quad k=0,1,2,3,\quad \text{for} \quad s,t\ge 0. \end{equation} \end{thm} \begin{proof} From \eqref{eq_semi_1}, we obtain that \[ \omega^f(t)=\omega^{f_t}(-1), \quad t\ge 0, \] where $f_t=f(\cdot+t+1)$, $f_t\in C_0^\infty( \mathbb{R}_-,C^\infty_0(\Gamma, \Lambda T^*M))$. Therefore, the knowledge of the operator $R_\Gamma$ is equivalent to the knowledge of the operator $f\to \mathbf{n}\omega^f|_{\Gamma\times\mathbb{R}}$. To prove this theorem we also need the following fact, \[ \mathbf{t}(d\omega^k)=d(\mathbf{t}\omega^k),\quad k=0,1,2,3, \] see \cite[Proposition 1.2.6]{Sch95}. This implies that \begin{align*} \mathbf{n}\delta_{\epsilon,\mu}\omega^2=&\mathbf{t}(\ast_{\epsilon}\ast_{\epsilon}d\ast_{\mu}\omega^2)=d\mathbf{t}(\ast_{\mu}\omega^2) =d\mathbf{n}\omega^2,\\ \mathbf{n}\delta_{\mu,\epsilon}\omega^3=&\mathbf{t}(\ast_{\mu}(-1)\ast_{\mu}d\ast_\epsilon\omega^3)=-d\mathbf{t}(\ast_\epsilon\omega^3)=-d\mathbf{n}\omega^3. \end{align*} Set $I^k(s,t)=(\omega^{f,k}(t),\omega^{h,k}(s))_{L^2}, \quad k=0,\dots,3$. 
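Before carrying out the computation, let us record the elementary one-dimensional fact behind the last step of the proof (spelled out here for convenience; it is standard and not part of the original argument). In the characteristic variables $\xi=s+t$, $\eta=s-t$, the operator $\partial_s^2-\partial_t^2$ becomes $4\partial_\xi\partial_\eta$, so a problem of the form \[ (\partial_s^2-\partial_t^2)I=F,\qquad I|_{s=-\tau_h}=I|_{t=-\tau_f}=0,\qquad \partial_sI|_{s=-\tau_h}=\partial_tI|_{t=-\tau_f}=0, \] is solved by integrating $F$ twice along the characteristic directions, starting from the lines $s=-\tau_h$ and $t=-\tau_f$, where all the data vanish. In particular, $I$ is uniquely determined by $F$.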
Then using Stokes' formulae, we get \begin{align*} (\partial_s^2-\partial_t^2)I^0(s,t)&=(\omega^{f,0}(t),\partial_s^2\omega^{h,0}(s))-(\partial_t^2\omega^{f,0}(t),\omega^{h,0}(s))\\ &=-(\omega^{f,0}(t),\delta_{\mu,\epsilon}d\omega^{h,0}(s))+(\delta_{\mu,\epsilon}d\omega^{f,0}(t),\omega^{h,0}(s))\\ &=\langle \mathbf{t}\omega^{f,0}(t),\mathbf{n}d\omega^{h,0}(s)\rangle-\langle \mathbf{t}\omega^{h,0}(s),\mathbf{n}d\omega^{f,0}(t)\rangle\\ &=-\langle \mathbf{t}\omega^{f,0}(t),\partial_s\mathbf{n}\omega^{h,1}(s)\rangle+\langle \mathbf{t}\omega^{f,0}(t),\mathbf{n}\delta_{\epsilon,\mu}\omega^{h,2}(s)\rangle\\ &+ \langle \mathbf{t}\omega^{h,0}(s),\partial_t\mathbf{n}\omega^{f,1}(t)\rangle-\langle \mathbf{t}\omega^{h,0}(s),\mathbf{n}\delta_{\epsilon,\mu}\omega^{f,2}(t)\rangle\\ &=-\langle \mathbf{t}\omega^{f,0}(t),\partial_s\mathbf{n}\omega^{h,1}(s)\rangle+\langle \mathbf{t}\omega^{f,0}(t),d\mathbf{n}\omega^{h,2}(s)\rangle\\ &+ \langle \mathbf{t}\omega^{h,0}(s),\partial_t\mathbf{n}\omega^{f,1}(t)\rangle-\langle \mathbf{t}\omega^{h,0}(s),d\mathbf{n}\omega^{f,2}(t)\rangle. 
\end{align*} Similarly, \begin{align*} (\partial_s^2-\partial_t^2)I^1(s,t)&=(\omega^{f,1}(t),\partial_s^2\omega^{h,1}(s))-(\partial_t^2\omega^{f,1}(t),\omega^{h,1}(s))\\ &=-\langle\partial_s\mathbf{t}\omega^{h,0}(s),\mathbf{n}\omega^{f,1}(t)\rangle- \langle\mathbf{t}\omega^{f,1}(t),\partial_s\mathbf{n}\omega^{h,2}(s)+d\mathbf{n}\omega^{h,3}(s)\rangle\\ &+\langle\partial_t\mathbf{t}\omega^{f,0}(t),\mathbf{n}\omega^{h,1}(s)\rangle+ \langle\mathbf{t}\omega^{h,1}(s),\partial_t\mathbf{n}\omega^{f,2}(t)+d\mathbf{n}\omega^{f,3}(t)\rangle, \end{align*} \begin{align*} (\partial_s^2-\partial_t^2)I^2(s,t)&=(\omega^{f,2}(t),\partial_s^2\omega^{h,2}(s))-(\partial_t^2\omega^{f,2}(t),\omega^{h,2}(s))\\ &=-\langle\partial_s\mathbf{t}\omega^{h,1}(s)+d\mathbf{t}\omega^{h,0}(s),\mathbf{n}\omega^{f,2}(t)\rangle- \langle\mathbf{t}\omega^{f,2}(t),\partial_s\mathbf{n}\omega^{h,3}(s) \rangle\\ &+\langle \partial_t\mathbf{t}\omega^{f,1}(t)+d\mathbf{t}\omega^{f,0}(t),\mathbf{n}\omega^{h,2}(s)\rangle +\langle \mathbf{t}\omega^{h,2}(s),\partial_t\mathbf{n}\omega^{f,3}(t) \rangle, \end{align*} \begin{align*} (\partial_s^2-\partial_t^2)I^3(s,t)&=(\omega^{f,3}(t),\partial_s^2\omega^{h,3}(s))-(\partial_t^2\omega^{f,3}(t),\omega^{h,3}(s))\\ &=-\langle\partial_s\mathbf{t}\omega^{h,2}(s)+d\mathbf{t}\omega^{h,1}(s),\mathbf{n}\omega^{f,3}(t)\rangle\\ &+ \langle\partial_t\mathbf{t}\omega^{f,2}(t)+d\mathbf{t}\omega^{f,1}(t), \mathbf{n}\omega^{h,3}(s)\rangle. \end{align*} Hence $I^k(s,t)$, $k=0,1,2,3$, satisfies an inhomogeneous one-dimensional wave equation in the unbounded region $\{(s,t)\in \mathbb{R}^2:s\ge -\tau_h,t\ge -\tau_f\}$, whose right-hand side is determined from the knowledge of $\Gamma$ and $R_\Gamma$. Since \[ I^k(-\tau_h,t)= I^k(s,-\tau_f)=0,\quad \partial_s I^k(-\tau_h,t)=\partial_tI^k(s,-\tau_f)=0, \] we can determine $I^k(s,t)$ in the entire region $s\ge -\tau_h$, $t\ge -\tau_f$. The result follows.
\end{proof} \subsection{Controllability result} In the isotropic setting and the case when $\epsilon(x)=\alpha(x)\mu(x)$, $\alpha(x)>0$, one can use a generalization of Tataru's unique continuation theorem \cite{EllNakTat02, KurLasSom06} to obtain controllability results with sources supported in a finite time interval. As already mentioned in the introduction, such unique continuation results do not seem to be available in the general anisotropic setting. Nevertheless, we shall next show that partial controllability results in the general anisotropic setting on an infinite time interval can be obtained using a unique continuation principle for elliptic systems. As shown below, this turns out to be sufficient for the reconstruction of the Betti numbers. Let $\mathcal{H}_D(M):=\oplus_{k=0}^3\mathcal{H}_D^k(M)$ be the space of all Dirichlet harmonic fields, and let $\Pi:L^2(M, \Lambda T^* M)\to \mathcal{H}_D(M)$ be the orthogonal projection. \begin{thm} \label{thm_controllability_new} We have \[ \{\Pi (\omega^{f}(0)):f\in C_0^\infty(\mathbb{R}_-,C^\infty_0(\Gamma,\Lambda T^* M))\}= \mathcal{H}_D( M). \] \end{thm} \begin{proof} Let $\eta\in \mathcal{H}_D( M)$. If we prove that the orthogonality condition \[ (\omega^{f}(0),\eta)_{L^2}=0 \quad \text{for all } f\in C_0^\infty(\mathbb{R}_-,C^\infty_0(\Gamma,\Lambda T^* M)) \] implies that $\eta=0$, then the space $\{\Pi(\omega^{f}(0)):f\in C_0^\infty(\mathbb{R}_-,C^\infty_0(\Gamma,\Lambda T^* M))\}$ is dense in $\mathcal{H}_D(M)$. Since the latter space is finite dimensional, the claim follows. As $D\eta =0$, we shall view $\eta(x)$ as the solution to the following problem, dual to \eqref{eq_hyperbolic}, \begin{equation} \label{eq_dual_new} \begin{aligned} &(-\partial_t-D)u=0, \quad\text{in}\quad M\times\mathbb{R},\\ & \mathbf{t}u|_{\partial M\times\mathbb{R}}=0,\\ & u|_{t=0}=\eta.
\end{aligned} \end{equation} Using \eqref{eq_stokes_D}, we have \begin{align*} \partial_t(\omega^f,u)_{L^2}=-(D\omega^f,u)_{L^2}-(\omega^f, Du)_{L^2}=-\langle f,\mathbf{n}u\rangle. \end{align*} Thus, \[ \int_{-\tau_f}^0 \langle f,\mathbf{n}u\rangle dt=-(\omega^f(0),\eta)_{L^2}+(\omega^f(-\tau_f),u(-\tau_f))_{L^2}=0. \] The choice of $-\tau_f$ implies that \[ \int_{\mathbb{R}_-}\langle f,\mathbf{n} u\rangle dt=0 \] for all $f\in C_0^\infty(\mathbb{R}_-,C^\infty_0(\Gamma,\Lambda T^* M))$. Thus, $\mathbf{n}u=0$ on $\Gamma\times\mathbb{R}_-$. Now if $\Gamma$ coincides with the whole boundary of the manifold $M$, then we are done, since $\mathcal{H}_D^k(M)\cap\mathcal{H}_{N}^k(M)=\{0\}$, see \cite[p. 130]{Sch95}. In the case when $\Gamma$ is a proper open subset of $\partial M$, we proceed as follows. Notice that $\eta(x)$ solves the second order elliptic system $D^2\eta=0$ on $M$ with zero Cauchy data on $\Gamma$, \[ (\mathbf{t}\eta,\mathbf{n}\eta,\mathbf{t}\delta \eta,\mathbf{n}d\eta), \] where $\delta\eta^k=\delta_{\mu,\epsilon}\eta^k$, $k=1,3$, and $\delta\eta^k=\delta_{\epsilon, \mu}\eta^k$, $k=2$. Thus, by the unique continuation principle for second order elliptic systems with diagonal principal part, see \cite[Theorem 4.3]{Isakov04}, we get $\eta=0$ in $M$. \end{proof} \begin{cor} \label{cor_1} Let $\Pi^k:L^2(M, \Lambda^k T^* M)\to \mathcal{H}_D^k(M) $ be the orthogonal projection onto the space of the Dirichlet harmonic $k$-fields. Then \[ \{\Pi^k(\omega^{f,k}(0)): f\in C_0^\infty(\mathbb{R}_-,C^\infty_0(\Gamma,\Lambda T^* M))\}= \mathcal{H}_D^k(M),\quad k=0,1,2,3. \] \end{cor} \subsection{Determination of the Betti numbers of the manifold} \label{subsection_betti} \begin{lem} \label{lem_inner_2} Let $f,h\in C_0^\infty(\mathbb{R}_-,C^\infty_0(\Gamma,\Lambda T^* M))$. Then given the response operator $R_\Gamma$, it is possible to find the inner products \[ (\Pi ^k \omega^{f,k}(0), \omega^{h,k}(0))_{L^2}, \quad k=0,1,2,3. 
\] \end{lem} \begin{proof} Using \eqref{eq_stokes_formulae}, we check by a direct computation that $\ker D=\mathcal{H}_D(M)$. We can therefore view $\Pi$ as the spectral projection onto the zero eigenspace of $D$. Consider the unitary group $e^{-tD}$, $t\in \mathbb{R}$, on $L^2$. We shall make use of the following essentially well-known formula, \begin{equation} \label{eq_pi} \Pi=\lim_{T\to +\infty}\frac{1}{T}\int_0^T e^{-tD}dt, \end{equation} valid in the sense of strong convergence of operators. When checking \eqref{eq_pi}, let \[ \Pi_T=\frac{1}{T}\int_0^T e^{-tD}dt\in \mathcal{L}(L^2,L^2). \] Since $\|\Pi_T\|_{\mathcal{L}(L^2,L^2)}\le 1$, it suffices to check that $\Pi_Tx\to \Pi x$ when $x$ varies in a dense subset of $L^2$. We can take this subset to be the set of all finite linear combinations of the eigenfunctions of $D$. To get \eqref{eq_pi}, we only need to observe that when $\lambda\in \mathbb{R}$, \[ \lim_{T\to+\infty}\frac{1}{T}\int^T_0e^{it\lambda}dt=\begin{cases} 1 & \text{if }\lambda=0,\\ 0 & \text{if }\lambda\ne 0. \end{cases} \] Now notice that since $\hbox{supp }(f)\subset \mathbb{R}_-$, we have \[ e^{-tD}\omega^f(0)=\omega^f(t),\quad t\ge 0, \] and therefore, \[ \Pi\omega^f(0)=\lim_{T\to +\infty}\frac{1}{T}\int_0^T\omega^f(t)dt. \] We get \begin{align*} (\Pi ^k \omega^{f,k}(0), \omega^{h,k}(0))_{L^2}&=\lim_{T\to+\infty}\frac{1}{T}\int_0^T (\omega^{f,k}(t),\omega^{h,k}(0))_{L^2}\,dt\\ &=\lim_{T\to+\infty}\frac{1}{T}\int_0^T (\omega^{f_t,k}(0),\omega^{h,k}(0))_{L^2}\,dt, \end{align*} where $f_t=f(\cdot+t)$, $f_t\in C_0^\infty(\mathbb{R}_-,C^\infty_0(\Gamma,\Lambda T^* M))$. Here we have used that $\omega^f(t)=\omega^{f_t}(0)$, as follows from \eqref{eq_semi_1}. An application of Theorem \ref{thm_blagov} concludes the proof. \end{proof} We have the following result, which implies Theorem \ref{thm_main}.
\begin{lem} Given $\Gamma\subset \partial M$ and the response operator $R_\Gamma$, it is possible to construct a finite number of boundary sources $f_j\in C_0^\infty(\mathbb{R}_-,C^\infty_0(\Gamma,\Lambda T^* M))$ such that $\Pi^k(\omega^{f_j,k}(0))$ form a basis of $\mathcal{H}_{D}^k(M)$, $0\le k\le 3$. \end{lem} \begin{proof} Let $\{h_j\}_{j=1}^\infty$ be a dense countable set in $C_0^\infty(\mathbb{R}_-,C^\infty_0(\Gamma,\Lambda T^* M))$. We can use the Gram-Schmidt orthogonalization procedure to construct the sources $f_j$. More precisely, we define $f_j$ recursively by \begin{align*} f_1& =\frac{h_1}{(\Pi^k \omega^{h_1,k}(0),\omega^{h_1,k}(0))_{L^2}^{1/2}},\\ g_j& =h_j-\sum_{i=1}^{j-1}(\Pi^k \omega^{h_j,k}(0),\omega^{h_i,k}(0))_{L^2}f_i,\quad j=2,3,\dots,\\ f_j & =\frac{g_j}{(\Pi^k \omega^{g_j,k}(0),\omega^{g_j,k}(0))_{L^2}^{1/2}}. \end{align*} When $g_j = 0$, we remove the corresponding $h_j$ from the original sequence and continue the procedure. The number of sources $f_j$ produced by the Gram-Schmidt orthogonalization procedure will then be the dimension of $\mathcal{H}_{D}^k(M)$, according to Corollary \ref{cor_1}. \end{proof} \section{Proof of Theorem \ref{thm_main_max}} First notice that as in Theorem \ref{thm_blagov}, for any $f,h\in C_0^\infty(\mathbb{R}_-,C^\infty_0(\Gamma,\Lambda^1 T^* M))$, the knowledge of the response operator $\widetilde R_\Gamma$ allows us to evaluate the inner products, \[ (\omega^{f,k}(t),\omega^{h,k}(s))_{L^2}, \quad k=1,2,\quad t,s\ge 0, \] where $\omega^f, \omega^h$ are solutions of physical Maxwell's equations \eqref{eq_Maxwell_phys}. We have the following controllability result. \begin{lem} Let $\omega^f$ be a solution to physical Maxwell's equations \eqref{eq_Maxwell_phys}. Then \[ \{\Pi^2(\omega^{f,2}(0)):f\in C_0^\infty(\mathbb{R}_-,C^\infty_0(\Gamma,\Lambda^1 T^* M))\}=\mathcal{H}_{D}^2(M), \] where $\Pi^2$ is the orthogonal projection onto the space of the Dirichlet harmonic $2$-fields. 
\end{lem} \begin{proof} Let $\eta^2\in \mathcal{H}_{D}^2(M)$. Assume that \[ (\omega^{f,2}(0),\eta^2)_{L^2}=0\quad\text{for all} \ f\in C_0^\infty(\mathbb{R}_-,C^\infty_0(\Gamma,\Lambda^1 T^* M)). \] Now \eqref{eq_Maxwell_phys} and Stokes' formula imply that \begin{align*} \partial_t(\omega^{f,2}(t),\eta^2)_{L^2}&=(-d\omega^{f,1}(t),\eta^2)_{L^2}=-(\omega^{f,1}(t),\delta_{\epsilon,\mu}\eta^2)_{L^2}-\langle \mathbf{t}\omega^{f,1}(t),\mathbf{n}\eta^2 \rangle\\ &=-\langle f(t),\mathbf{n}\eta^2 \rangle. \end{align*} Thus, \[ \int_{\mathbb{R}_-}\langle f(t),\mathbf{n}\eta^2 \rangle dt=-(\omega^{f,2}(0),\eta^2)_{L^2}+(\omega^{f,2}(-\tau_f),\eta^2)_{L^2}=0, \] for all $f\in C_0^\infty(\mathbb{R}_-,C^\infty_0(\Gamma,\Lambda^1 T^* M))$. Hence, $\mathbf{n}\eta^2=0$ on $\Gamma$. Moreover, $\Delta\eta^2=0$ on $M$ and $\mathbf{t}\eta^2=0$ on $\partial M$. By the unique continuation principle, we get $\eta^2=0$. \end{proof} Proceeding further as in Subsection \ref{subsection_betti}, we can recover the first absolute Betti number $\beta_1(M)$ from the knowledge of $\Gamma$ and $\widetilde R_\Gamma$. This completes the proof of Theorem \ref{thm_main_max}. \section{Acknowledgements} We would like to thank Semen Podkorytov for a helpful discussion and for providing useful references on the topology of manifolds. The research of K.K. was financially supported by the Academy of Finland (project 125599). The research of Y.K. is partially supported by EPSRC Grant EP/F034016/1. The research of M.L. was financially supported by the Academy of Finland Center of Excellence programme 213476.
\section{Introduction} Information theoretic studies of phase transitions are, by now, well established. The underlying idea here is geometric in nature, and rests on the definition of a Riemannian metric tensor on the space of parameters (called the parameter manifold) of a system. Depending on whether the interactions of the system are classical or quantum in nature, this metric might be induced from the equilibrium thermodynamic state space \cite{rupp1} (for a review, see \cite{rupp}), or from the natural Hilbert space structure of quantum states \cite{pv}. For the former, the parameter manifold consists of thermodynamic control parameters such as the pressure, volume and temperature, while for the latter, this might be thought of as the manifold of coupling constants appearing in the Hamiltonian. Given such a metric tensor, the parameter manifold can have very different properties depending on whether the system undergoes a second order classical or a quantum phase transition (CPT or QPT). Whereas the hallmark of a CPT is that the scalar curvature arising out of the metric diverges at a second order phase transition (and everywhere on the spinodal curve), this is not the case for second order QPTs where the curvature can remain regular \cite{zan1}. It is also known that whereas some components of the metric tensor vanish at a second order CPT, as these are related to inverses of thermodynamic response coefficients \cite{rupp}, for QPTs, the situation is reversed, and some of the components of the metric tensor diverge at such a transition, as follows from first order perturbation theory \cite{zan1} (although this may not be true in some special cases, see \cite{zan1a}). Although a lot of attention has been paid to the behavior of the metric tensor and its associated scalar curvature in the context of phase transitions, much less is known about geodesics, i.e., paths that minimize the distance between two points on the parameter manifold.
In any geometric setup, the behavior of geodesics is an important object to study. Some studies on geodesics have appeared in the context of CPTs \cite{diosi}, and QPTs (specifically, for adiabatic quantum computation) \cite{zan2},\cite{zan3}, in special cases. The purpose of this paper is to complement and generalize these results, and to obtain and analyze general solutions to the geodesic equations for some model systems that exhibit second order phase transitions. Here, we study four models in the thermodynamic limit: the Van der Waals (VdW) model for fluids, the Curie Weiss (CW) mean field model of ferromagnetism, the infinite Ising ferromagnetic chain - all of which exhibit CPTs at finite temperature - and the transverse field XY model that exhibits a QPT at zero temperature. For all these models, the full set of coupled non-linear geodesic equations in the information geometric context are set up and solved numerically, with appropriate initial conditions. To the best of our knowledge, such an analysis has not been performed before. Our treatment is completely general in nature, and differs significantly from the methods used in \cite{diosi},\cite{zan3} where the focus was on obtaining specific geodesics between two given points in the parameter manifold. Interestingly, we find that in all the examples that we consider, geodesics exhibit a ``turning point'' close to criticality, and are ``confined'' to a single phase, thus indicating a geometric universality in apparently unrelated physical phenomena. This paper is organized as follows. In the next section, we first briefly recall some basic facts about information geometry and geodesics. We then proceed to analyze the VdW, the CW and the infinite Ising ferromagnet, as illustrations of CPTs. For the Ising ferromagnet, our analysis of information geometry is novel, and has not appeared in the literature before. In section 3, we analyze the geodesic structure of QPTs via the transverse field XY spin chain.
We end in section 4 with our discussions and directions for future study. \section{Information Geometry, Geodesics, and Classical Phase Transitions} In the context of equilibrium thermodynamics of classical systems, the formulation of information geometry is mainly due to the work of Ruppeiner \cite{rupp}. The main idea here is to consider the positive definite Riemannian metric arising out of the Hessian of the entropy density $s$, and given by a line element \begin{equation} d\tau^2 = g_{\mu\nu}dx^{\mu}dx^{\nu} ~~~g_{\mu\nu} = -\frac{1}{k_B}\left(\frac{\partial^2 s}{\partial x^{\mu} \partial x^{\nu}}\right) \label{entropyrep} \end{equation} Here, $x^{\mu}, \mu=1,2$, denote the internal energy and the particle number per unit volume, and are co-ordinates on the parameter manifold in the ``entropy representation.'' $k_B$ is Boltzmann's constant, which we will set to unity in what follows. The line element of eq.(\ref{entropyrep}) introduces the concept of a distance in the space of equilibrium thermodynamic states via fluctuation theory, i.e., the larger is this distance between two given states, the smaller is the probability that these are related by a thermal fluctuation. Various representations (related to each other by Legendre transforms) can be used for this geometric construction (a full list can be found in \cite{rupp}), and a particularly useful diagonal form of the metric for single component fluids and magnetic systems is \begin{equation} d\tau^2 = \frac{1}{T}\left(\frac{\partial s}{\partial T}\right)_{\rho} dT^2+ \frac{1}{T}\left(\frac{\partial \mu}{\partial \rho}\right)_Td\rho^2 \label{line} \end{equation} where $T$ is the temperature, $\rho$ the number density, and $\mu = \left(\frac{\partial f}{\partial \rho}\right)_T$, $f$ being the Helmholtz free energy per unit volume. For magnetic systems, we need to consider thermodynamic quantities per unit spin, with the magnetization per unit spin $m$ replacing $\rho$.
On the other hand, information geometry in quantum mechanical systems, first studied by Provost and Vallee \cite{pv}, is defined by taking two infinitesimally separated quantum states and constructing the quantity \begin{equation} |\psi\left({\vec x} +d{\vec x}\right) - \psi\left({\vec x}\right)|^2 = \langle \partial_{\mu} \psi|\partial_{\nu} \psi\rangle dx^{\mu}dx^{\nu} = \alpha_{\mu\nu}dx^{\mu}dx^{\nu} \label{pv1} \end{equation} where $x^{\mu}$ (collectively denoted as ${\vec x}$ in the l.h.s of eq.(\ref{pv1})) denotes the parameters on which the wave function $\psi$ depends, and $\partial_{\mu}$ is a derivative with respect to $x^{\mu}$. From the $\alpha_{\mu\nu}$ (which are not gauge invariant), a meaningful gauge-invariant metric tensor can be defined as \cite{pv} \begin{equation} g_{\mu\nu} = \alpha_{\mu\nu} - \beta_{\mu}\beta_{\nu};~~~~\beta_{\mu} = -i\langle\psi\left({\vec x}\right)|\partial_{\mu}\psi\left({\vec x}\right)\rangle \label{metric} \end{equation} Here, $g_{\mu\nu}$ is the metric induced from the natural structure of the Hilbert space of quantum states. The metrics in eqs.(\ref{line}) and (\ref{metric}) can be used to predict second order phase transitions in both CPTs \cite{rupp} and QPTs \cite{zan1}. We also record here the expression for the scalar curvature arising out of the metric in the special case when the metric is diagonal (with $g \equiv {\rm det}~g_{\mu\nu}$) : \begin{equation} R = \frac{1}{\sqrt{g}}\left[\frac{\partial}{\partial x^1}\left(\frac{1}{\sqrt{g}} \frac{\partial g_{22}}{\partial x^1}\right) + \frac{\partial}{\partial x^2}\left(\frac{1}{\sqrt{g}} \frac{\partial g_{11}}{\partial x^2}\right)\right] \label{scalarcurvature} \end{equation} Given the information geometry of classical or quantum systems, we wish to study geodesics in the same. Let us briefly recall a few elementary facts about geodesics.
For a manifold endowed with a metric with components $g_{\mu\nu}$, a geodesic is a path that extremizes the proper distance (or line element, whose infinitesimal form is given by $d\tau^2 = g_{\mu\nu}dx^{\mu}dx^{\nu}$). This can be cast as a variational problem, to determine the extrema of the integral $\int_1^2 \sqrt{g_{\mu\nu}{\dot x^{\mu}}{\dot x^{\nu}}}d\lambda$ where the dot denotes a derivative with respect to $\lambda$, which is an affine parameter, parametrizing the curve joining two points denoted $1$ and $2$. Calculus of variations can then be applied with the result that geodesic curves are solutions to the differential equations \begin{equation} {\ddot x^{\mu}} + \Gamma^{\mu}_{\nu\rho}{\dot x^{\nu}}{\dot x^{\rho}} = 0,~~~ {\rm with}~~~\Gamma^{\mu}_{\nu\rho} = \frac{1}{2}g^{\mu\zeta}\left(\frac{\partial g_{\zeta\nu}}{\partial x^{\rho}} + \frac{\partial g_{\zeta\rho}}{\partial x^{\nu}} - \frac{\partial g_{\nu\rho}}{\partial x^{\zeta}}\right) \label{geodesic} \end{equation} The above equation can also be obtained by writing a ``Lagrangian'' \begin{equation} {\mathcal L} = \frac{1}{2}\left(g_{\mu\nu}{\dot x^{\mu}}{\dot x^{\nu}}\right) \label{lagrangian} \end{equation} and using the (derivatives of the) Euler-Lagrange equations that follow. This method often provides valuable insights into the symmetries of the system. We will be interested in studying the solutions of eq.(\ref{geodesic}) in the context of CPTs and QPTs. It is well known that a natural affine parameter for geodesic curves is $\lambda = \tau$, and thus it is useful to consider the normalized vector ${\dot x^{\mu}} = dx^{\mu}/d\tau$ such that ${\dot x^{\mu}}{\dot x_{\mu}} = g_{\mu\nu}{\dot x^{\mu}}{\dot x^{\nu}} = 1$. Eq.(\ref{geodesic}), in general, gives rise to a set of coupled non-linear differential equations, which might be difficult to solve analytically. We will mostly concentrate on numerical solutions with appropriate boundary conditions. 
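Schematically, eq.(\ref{geodesic}) reduces to four coupled first-order equations that can be advanced with any standard integrator. The following stand-alone sketch (our own illustrative code, not taken from any of the cited works) does this for an arbitrary diagonal two-dimensional metric, obtaining the Christoffel symbols by central finite differences, and checks itself on the unit 2-sphere, where geodesics are great circles.

```python
import math

def christoffels(g, x, eps=1e-6):
    # Christoffel symbols Gamma^mu_{nu rho} of a *diagonal* 2D metric.
    # g(x) returns the pair (g_11, g_22); derivatives are central finite
    # differences, so only the metric itself needs to be supplied.
    def dg(i):
        xp = list(x); xm = list(x)
        xp[i] += eps; xm[i] -= eps
        gp, gm = g(xp), g(xm)
        return [(gp[k] - gm[k]) / (2.0 * eps) for k in (0, 1)]
    d = [dg(0), dg(1)]                  # d[i][k] = d g_kk / d x^i
    gv = g(list(x))
    G = [[[0.0, 0.0], [0.0, 0.0]] for _ in (0, 1)]
    for mu in (0, 1):
        for nu in (0, 1):
            for rho in (0, 1):
                s = 0.0
                if mu == nu:  s += d[rho][mu]   # d_rho g_{mu nu}
                if mu == rho: s += d[nu][mu]    # d_nu  g_{mu rho}
                if nu == rho: s -= d[mu][nu]    # d_mu  g_{nu rho}
                G[mu][nu][rho] = s / (2.0 * gv[mu])
    return G

def rhs(g, s):
    # first-order form of eq.(geodesic): s = (x^1, x^2, xdot^1, xdot^2)
    x, v = s[:2], s[2:]
    G = christoffels(g, x)
    a = [-sum(G[m][n][r] * v[n] * v[r] for n in (0, 1) for r in (0, 1))
         for m in (0, 1)]
    return [v[0], v[1], a[0], a[1]]

def integrate(g, s, dt, nsteps):
    # classical fourth-order Runge-Kutta stepping in the affine parameter
    for _ in range(nsteps):
        k1 = rhs(g, s)
        k2 = rhs(g, [y + 0.5 * dt * k for y, k in zip(s, k1)])
        k3 = rhs(g, [y + 0.5 * dt * k for y, k in zip(s, k2)])
        k4 = rhs(g, [y + dt * k for y, k in zip(s, k3)])
        s = [y + dt / 6.0 * (a + 2 * b + 2 * c + d)
             for y, a, b, c, d in zip(s, k1, k2, k3, k4)]
    return s

# sanity check on the unit 2-sphere, ds^2 = d(theta)^2 + sin^2(theta) d(phi)^2:
# starting on the equator with purely azimuthal unit velocity must trace the
# equatorial great circle, theta = pi/2, phi = tau
sphere = lambda x: (1.0, math.sin(x[0]) ** 2)
final = integrate(sphere, [math.pi / 2, 0.0, 0.0, 1.0], 1e-3, 1000)
```

The same driver, with the metrics of the following sections substituted for \texttt{sphere}, can be reused for all the near-critical analyses below; in the examples that follow we instead hard-code the explicit geodesic equations, which is faster and avoids finite-difference noise.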
Note that in terms of the normalized vector ${\dot x^{\mu}}$, we need to specify three boundary conditions in order to solve eq.(\ref{geodesic}), with the fourth one being fixed by the normalization condition. Namely, we choose a ``starting point,'' i.e, an initial value of $x^{\mu}$, and any one component of ${\dot x}^{\mu}$. The second component of the derivative is then determined from the fact that ${\dot x^{\mu}}$ is normalized. With these boundary conditions, we determine the most general solutions to eq.(\ref{geodesic}), and study geodesics near criticality. This is done by solving for $x^{\mu}$ in terms of the affine parameter $\tau$, and tracing out the geodesic near the critical point, by parametrically plotting the resulting solution, under variation of $\tau$. \footnote{We will also keep in mind that geodesic paths are not unique: an elementary example is that of a 2-sphere, where there are an infinite number of geodesics, i.e., great circles, between two antipodal points.} Let us now illustrate the above discussion with the examples of the Van der Waals fluid and the Curie-Weiss ferromagnet. \subsection{The Van der Waals and the Curie-Weiss Models} Information geometry of the Van der Waals fluid is well established, see e.g \cite{brodyhook}. We start from the Helmholtz free energy per unit volume \begin{equation} f_{{\rm VdW}} = -\rho T{\rm ln}\left(\frac{e}{\rho}\right) + \rho c_v T{\rm ln}\left(\frac{e}{T}\right) - \rho T{\rm ln}\left(1 - b\rho\right) - a\rho^2 \end{equation} where $e$ denotes the base of the natural logarithm, $c_v$ is the specific heat at constant volume, $\rho$ and $T$ are the number density per molecule of fluid and the temperature, and $a$, $b$ are the coefficients arising in the VdW equation of state. It is convenient to work with the reduced VdW equation of state, and we can substitute $a = 9T_c/8\rho_c$, $b = 1/3\rho_c$, where $\rho_c$ and $T_c$ denote the critical values of the density and the temperature, respectively.
Further, the reduced density and temperature are defined by $\rho_r = \rho/\rho_c$, $T_r = T/T_c$. We will set $\rho_c = T_c = 1$ to simplify the algebraic details, and also choose $c_v = 3/2$, the ideal gas value. The information metric (in terms of the co-ordinates $T_r$ and $\rho_r$) is then given, from eq.(\ref{line}), by \begin{equation} g_{TT} = \frac{3}{2} \frac{\rho_r}{T_r^2},~~~~ g_{\rho\rho} = \frac{9\left[4T_r - \rho_r\left(\rho_r - 3\right)^2\right]}{4\rho_rT_r\left(\rho_r - 3\right)^2} \label{vdwfull} \end{equation} \begin{figure}[t!] \centering \includegraphics[width=2.8in,height=2.3in]{vdwcriticalfirstorder.eps} \caption{Numerical solutions for geodesics of the VdW equation of state close to criticality, in the $\left(\rho_r,T_r\right)$ plane. The dashed blue, dot-dashed red and solid green curves correspond to the boundary conditions $(T_r, \rho_r, {\dot \rho_r}) = \left(1.001, 1.009, -1.2\right)$, $\left(1.001, 1.007, -0.92\right)$, and $\left(1.0007, 1.011, -2.2\right)$ respectively. The geodesics turn back from the critical point, $\left(\rho_r,T_r\right) = \left(1,1\right)$.} \label{vdwcriticalexpanded} \end{figure} Since we are interested in geodesics close to criticality (for a recent related discussion, see \cite{quevedo}), we now expand the metric up to first order about the critical point, $\left(T_r,\rho_r\right) = \left(1,1\right)$ (recall that we have set $\left(T_c,\rho_c\right) = \left(1,1\right)$). The metric components are then given by the simple expressions \begin{equation} g_{TT}^c = 3\rho_r\left(\frac{3}{2} - T_r\right),~~~~g_{\rho\rho}^c = \frac{9}{4}\left(T_r - 1\right) \label{vcrit} \end{equation} where the superscript $c$ in eq.(\ref{vcrit}) signifies that these expressions are valid close to criticality.
The geodesic equations of eq.(\ref{geodesic}) turn out to be \begin{eqnarray} &~&{\ddot T_r} + \frac{{\dot T_r}^2}{2T_r - 3} + \frac{{\dot T_r}{\dot \rho_r}}{\rho_r} + \frac{3{\dot \rho_r}^2}{4\rho_r\left(2T_r - 3\right)} = 0\nonumber\\ &~&{\ddot \rho_r} + \frac{{\dot \rho_r}{\dot T_r}}{T_r - 1} + \frac{{\dot T_r}^2\left(2T_r - 3\right)}{3\left(T_r - 1\right)} = 0 \label{vdwcritical} \end{eqnarray} We now numerically solve eq.(\ref{vdwcritical}) with three boundary conditions: $(T_r, \rho_r, {\dot \rho_r})$ = $\left(1.001, 1.009, -1.2\right)$, $\left(1.001, 1.007, -0.92\right)$, and $\left(1.0007, 1.011, -2.2\right)$. \footnote{The value of ${\dot T_r}$ is fixed from the normalization condition as alluded to before.} For all the three cases, we solve eq.(\ref{vdwcritical}) for values of the affine parameter between zero and $0.0025$. The solutions for $T_r$ and $\rho_r$ are then parametrically plotted by varying the affine parameter. The result is shown in fig.(\ref{vdwcriticalexpanded}) in the $\left(\rho_r,T_r\right)$ plane, where the dashed blue, dot-dashed red and solid green curves correspond to the three boundary conditions described above, respectively. We see that the geodesic curves ``turn back'' from the critical point. As we will see, this is a generic feature for all second order phase transitions studied in this paper. \begin{figure}[t!] \centering \includegraphics[width=2.8in,height=2.3in]{VDWfull.eps} \caption{Various numerical solutions for geodesics of the VdW equation of state in the $\left(\rho_r,T_r\right)$ plane. The dashed blue, dot-dashed black and solid red lines are geodesics that begin from the gas, liquid and supercritical phases, respectively. The spinodal curve is shown in dotted green.} \label{VDWfull} \end{figure} For the sake of completeness, we mention here that the analysis of geodesics using the full VdW metric of eq.(\ref{vdwfull}) is similar, although the geodesic equations are more complicated and we omit them for brevity.
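Returning to the near-critical system eq.(\ref{vdwcritical}), its integration requires only a few lines of code. The sketch below (our own, using only the standard library; the sign of ${\dot T_r}$ is our choice of root of the normalization condition, made so that the geodesic initially heads toward the critical point) follows the dashed-blue boundary conditions over a short stretch of affine parameter, and monitors the conserved normalization $g_{TT}^c{\dot T_r}^2 + g_{\rho\rho}^c{\dot \rho_r}^2 = 1$.

```python
import math

def rhs(s):
    # near-critical VdW geodesic equations, eq.(vdwcritical):
    # s = (T_r, rho_r, Tdot, rhodot)
    T, r, vT, vr = s
    aT = -(vT**2 / (2*T - 3) + vT * vr / r + 3 * vr**2 / (4 * r * (2*T - 3)))
    ar = -(vr * vT / (T - 1) + vT**2 * (2*T - 3) / (3 * (T - 1)))
    return (vT, vr, aT, ar)

def norm(s):
    # g_TT Tdot^2 + g_rhorho rhodot^2 with the near-critical metric of
    # eq.(vcrit); conserved (= 1) along an affinely parametrized geodesic
    T, r, vT, vr = s
    return 3 * r * (1.5 - T) * vT**2 + 2.25 * (T - 1) * vr**2

# the dashed-blue boundary conditions quoted in the text; the negative
# root for Tdot (our choice) makes the geodesic head toward (1, 1)
T0, r0, vr0 = 1.001, 1.009, -1.2
vT0 = -math.sqrt((1 - 2.25 * (T0 - 1) * vr0**2) / (3 * r0 * (1.5 - T0)))
s = (T0, r0, vT0, vr0)

dt, nsteps = 1e-6, 300       # a short affine-parameter stretch
Tmin = T0
for _ in range(nsteps):      # classical RK4
    k1 = rhs(s)
    k2 = rhs(tuple(y + 0.5 * dt * k for y, k in zip(s, k1)))
    k3 = rhs(tuple(y + 0.5 * dt * k for y, k in zip(s, k2)))
    k4 = rhs(tuple(y + dt * k for y, k in zip(s, k3)))
    s = tuple(y + dt / 6 * (a + 2*b + 2*c + d)
              for y, a, b, c, d in zip(s, k1, k2, k3, k4))
    Tmin = min(Tmin, s[0])
```

Pushing the integration further (with a step size shrinking as $T_r \to 1$, where the $1/(T_r-1)$ coefficients become stiff) traces out the turning behavior of fig.(\ref{vdwcriticalexpanded}).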
After extensive numerical analysis, our conclusion here is that a geodesic starting in the liquid ($\rho_r >1,T_r<1$) or gas ($\rho_r <1,T_r<1$) phase does not reach the other phase: it either terminates at the spinodal line or continues to the supercritical region. Also, close to the critical point, geodesics show the turn-around behavior as depicted in fig.(\ref{vdwcriticalexpanded}). We also find that geodesics do not show any special behavior at the binodal lines, i.e., at the location of the first order phase transitions, which is expected, because the metric and the scalar curvature are both regular here. These results are summarized in fig.(\ref{VDWfull}), where we have shown several numerical solutions to the geodesic equations for the VdW equation of state. The dotted green curve is the spinodal curve. The dashed blue curves on the left and the dot-dashed black curves on the right are geodesics that start from the gas and liquid phases respectively, and continue into the supercritical region. The solid red curves are geodesics in the supercritical region ($T_r > 1$), and show turning behavior similar to that depicted in fig.(\ref{vdwcriticalexpanded}). We now move on to study geodesics in the classical mean-field Curie-Weiss ferromagnetic model in the thermodynamic limit. Information geometry of this model has been studied extensively in \cite{drs}, and we simply state the result that in the $\left(T,m\right)$ representation, the line element of eq.(\ref{line}) is given by \begin{equation} dl^2 = \frac{C_L}{T^2}dT^2 + \frac{1}{T}\frac{\left(T_c\left(1-m^2\right) -T\right)}{m^2-1} dm^2 \label{linecw} \end{equation} Here, $T$ is the temperature, $T_c$ its critical value, $m$ is the magnetization per unit spin, and $C_L(T)$ is a ``lattice specific heat'' introduced in \cite{jm}, which corresponds to the mechanical energy of the lattice.
As was shown in \cite{jm}, information geometry in the CW model cannot be defined without introducing this term ad hoc in the theory. In \cite{drs}, it was shown that the line element in eq.(\ref{linecw}) correctly reproduces all the known features of the CW model, including the first order phase transitions. We will study the model close to criticality, and approximate the metric close to $m=0$ as \begin{equation} g_{TT}^c = \frac{C_L(T)}{T^2},~~~~g_{mm}^c = 1 - \frac{T_c}{T} \label{cwcric} \end{equation} where again the superscript $c$ denotes that we are close to criticality. To analyze the geodesic equations here, we note that a crucial simplification is possible, since none of the metric components in eq.(\ref{cwcric}) depend on the magnetization. This implies that the Lagrangian of eq.(\ref{lagrangian}) is independent of $m$, and hence the Euler-Lagrange equation that follows from it implies that ${\dot m} = K/g_{mm}^c$ where $K$ is a constant. Then from the normalization condition $g_{TT}^c{\dot T}^2 + g_{mm}^c{\dot m}^2 = 1$, it follows that \begin{equation} {\dot T}^2 = \frac{1}{g_{TT}^c}\left(1 - \frac{K^2}{g_{mm}^c}\right) = \frac{T^2\left[T\left(1-K^2\right)-T_c\right]}{C_L(T)\left(T - T_c\right)} \label{conditioncw} \end{equation} It is enough for us to consider the region $T>T_c$, for which eq.(\ref{conditioncw}) implies that positivity of the right hand side imposes the restriction $T > T_c/(1-K^2)$, with $K^2 <1$. This means that a geodesic in the region $T > T_c$ always remains in that region and cannot cross over into the region $T< T_c$. A pathology arises for the case $K=0$, i.e., $m = {\rm constant}$, for which eq.(\ref{conditioncw}) implies no such restriction, since ${\dot T}^2$ is then always positive.
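The restriction just derived is easy to verify numerically. In the sketch below (our own illustration; a constant lattice specific heat $C_L$ is assumed purely for definiteness, whereas the text only requires a regular $C_L(T)$), the right hand side of eq.(\ref{conditioncw}) is sampled on both sides of the turning temperature $T_c/(1-K^2)$.

```python
# right-hand side of eq.(conditioncw) for Tdot^2; C_L is taken to be a
# constant purely for illustration
def tdot_squared(T, K, Tc=1.0, CL=1.0):
    return T**2 * (T * (1 - K**2) - Tc) / (CL * (T - Tc))

K, Tc = 0.5, 1.0
T_turn = Tc / (1 - K**2)          # = 4/3 for K = 1/2
# strictly between Tc and T_turn the right-hand side is negative, so no
# geodesic with this K can penetrate that strip; beyond T_turn it can
forbidden = [tdot_squared(T, K) < 0 for T in (1.05, 1.20, 1.30)]
allowed = [tdot_squared(T, K) >= 0 for T in (1.40, 1.50, 2.00)]
```

A geodesic shot inward from $T > T_c/(1-K^2)$ therefore decelerates in $T$ and turns around at $T_c/(1-K^2)$, never reaching the critical temperature itself unless $K=0$.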
We have checked this by explicitly solving the geodesic equations, which in this case are given by \begin{equation} {\ddot T} + \frac{{\dot T}^2\left(T{\dot C_L} - 2C_L\right)}{2TC_L} - \frac{T_c{\dot m}^2}{2C_L} = 0,~~~~{\ddot m} + \frac{T_c{\dot m}{\dot T}}{T\left(T - T_c\right)}=0 \label{geodesiccw} \end{equation} where ${\dot C_L}$ denotes the derivative $dC_L/dT$. Numerical analyses (after choosing an appropriate regular functional form for $C_L(T)$, such as a power series) reveal that the $m = {\rm constant}$ lines (these are indeed geodesics, as they satisfy the second equation in eq.(\ref{geodesiccw})) cross over inside the spinodal region. This is probably a mathematical artifact and we do not have a physical explanation for this. Apart from these constant $m$ lines, the behavior of geodesics close to the critical point is, as expected, qualitatively similar to that of the VdW fluid, and graphically, they resemble the ones shown in fig.(\ref{vdwcriticalexpanded}). We also find that the behavior of geodesics with the full CW metric (away from criticality) is qualitatively similar to those of the VdW model. Specifically, geodesics in the phase $m > 0$ do not reach the phase $m < 0$, and vice versa. \subsection{The Infinite Ising Ferromagnet} We now study geodesics in the infinite-range ferromagnetic Ising model with a transverse magnetic field. This model was originally studied in \cite{diptiman}, where it was shown that in the thermodynamic limit, it can be described by the classical dynamics of a single large spin. The information geometric aspects of this model have not been studied so far, and we begin with a discussion of this.
The Hamiltonian for this model is given by \cite{diptiman},\cite{amit} \begin{equation} H_{\rm IIF} = -\frac{J}{N}\sum_{i<j} S_i^zS_j^z - h\sum_iS_i^x = -\frac{J}{2N}\left(S_{\rm tot}^z\right)^2 - hS_{\rm tot}^x \end{equation} where the second equality follows from defining the total spin, $S_{\rm tot}^z = \sum_iS_i^z$, $S_{\rm tot}^x = \sum_iS_i^x$ (and neglecting a constant term). We will set $J = 1$ in what follows. In a mean-field approach, with the average magnetization $m = \sum_i\langle S_i^z\rangle/N$, the Hamiltonian for a single spin reduces to $H_{\rm IIF}^1 = -mS_{\rm tot}^z - hS_{\rm tot}^x$. This is an effective two-state model whose partition function can be shown to be given by \begin{equation} Z = 2{\rm Cosh}\left(\frac{\sqrt{h^2 + m^2}}{2T}\right) \end{equation} To understand the geometric aspects of this model, we write the Gibbs free energy for the single spin, $G = -T{\rm ln} Z$, and effect a Legendre transform to obtain the Helmholtz free energy $F = G + m^2/2$, where $m$ should be thought of as the applied magnetic field, i.e., an intensive thermodynamic variable. The factor of $1/2$ in the Legendre transform might look strange, but note that this enforces the condition $\partial F/\partial m = 0$ on the magnetization (via the relation $m = -\partial G/\partial m$), i.e., defines the boundary between the ferromagnetic and paramagnetic regions. In $(T,m)$ coordinates, using the expression for the Helmholtz free energy, the metric components are given from eq.(\ref{line}) by \begin{eqnarray} g_{TT} &=& \frac{1}{4T^4}\left(h^2 + m^2\right){\rm Sech}^2\alpha \nonumber\\ g_{mm} &=& \frac{1}{T} - \frac{1}{4T^2}{\rm Sech}^2\alpha \frac{\left(m^2\sqrt{h^2 + m^2} + h^2T{\rm Sinh}(2\alpha)\right)}{\left(h^2 + m^2\right)^{3/2}} \label{iffmetric} \end{eqnarray} where $\alpha = \sqrt{h^2 + m^2}/2T$. \begin{figure}[t!]
\centering \includegraphics[width=2.8in,height=2.3in]{iif.eps} \caption{Numerical solutions for geodesics of the infinite range Ising ferromagnet at $h=0.2$, near the critical point $(T = 0.236, m=0)$. All geodesics are chosen to pass through the point $\left(T,m\right) = \left(0.3,0.004\right)$. The dashed blue, dot-dashed green, dotted red and solid pink curves correspond to the boundary conditions ${\dot m} =$ $-0.03$, $0.03$, $-0.25$, and $0.12$ respectively. The dashed black line is the spinodal curve, on which the scalar curvature diverges.} \label{iif} \end{figure} The scalar curvature of eq.(\ref{scalarcurvature}) for the metric of eq.(\ref{iffmetric}), in the limit $m \to 0$ (which is our region of interest), is given by $R = {\mathcal A}/{\mathcal B}$, where \begin{eqnarray} {\mathcal A} &=& h \left[-2 T \left(4 h^2+4 T+1\right) {\rm Sinh} \left(\frac{h}{T}\right)+ 4 \left(h^2+2 T\right) {\rm Tanh}\left(\frac{h}{2 T}\right) +3h {\rm Sech}^2\left(\frac{h}{2 T}\right)\right] \nonumber\\ &-&2 T^2+2 h^2 (4T(T-2) -1)+2 T \left(4 h^2 (T+1)+T\right) {\rm Cosh} \left(\frac{h}{T}\right)\nonumber\\ {\mathcal B} &=& 2 h^2 \left({\rm Tanh} \left(\frac{h}{2 T}\right)-2 h\right)^2 \end{eqnarray} The scalar curvature diverges at ${\rm Tanh}\frac{h}{2T} = 2h$, defining the phase boundary, a result that matches with that obtained in \cite{diptiman}. To understand the behavior of geodesics in this model, we set $h=0.2$, which implies the critical temperature $T = 0.236$. Numerical solutions of the geodesic equations close to the critical point are plotted in fig.(\ref{iif}). Here, we have taken all the geodesics to start from $\left(T,m\right) = \left(0.3,0.004\right)$. The dashed blue, dot-dashed green, dotted red and solid pink curves correspond to ${\dot m} =$ $-0.03$, $0.03$, $-0.25$, and $0.12$ respectively. Also shown in dashed black is the spinodal curve, i.e., the locus of divergence of the scalar curvature arising out of the metric of eq.(\ref{iffmetric}).
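The critical temperature used above follows directly from the divergence condition ${\rm Tanh}\left(h/2T\right) = 2h$, which can be inverted in closed form; a quick stdlib-only check:

```python
import math

# at m -> 0 the curvature denominator B vanishes when tanh(h/(2T)) = 2h,
# which inverts in closed form to T_c = h / (2 artanh(2h))
h = 0.2
Tc = h / (2 * math.atanh(2 * h))   # ~ 0.236 for h = 0.2
```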
We find that the geodesics show the same turning behavior as in the other mean field models discussed in the previous subsection. We also note that in the limit of $T \to 0$, $g_{mm}$ diverges and $g_{TT} \to 0$. Numerical solutions seem to become somewhat unreliable in this limit, and we will not discuss them. Having elucidated the nature of geodesics in classical systems exhibiting phase transitions at non-zero temperatures, we finally move to quantum phase transitions at zero temperature. \section{Geodesics in QPTs: The Transverse XY Spin Chain} Information geometry of QPTs has been well studied of late, starting from the work of \cite{zan1}. There are, however, very few systems to which this can be meaningfully applied, since the definition of the geometry (from eq.(\ref{metric})) requires complete knowledge of the many body ground state, which may be difficult to obtain except for a few exactly solvable systems, like the transverse field XY spin chain. Even when such ground states are obtainable, as in the Dicke model of quantum optics, explicit calculations might be prohibitively difficult due to algebraic complications. We will base our calculations on the transverse XY model, for which the information metric was obtained in \cite{zan1}. To recall, for the transverse XY spin chain, the Hamiltonian with $(2N + 1)$ spins is \begin{equation} H_{\rm XY} = -\left[\sum_{j = -N}^{N}\frac{1 + \gamma}{4}\sigma_j^x\sigma_{j+1}^x + \frac{1-\gamma}{4}\sigma_j^y\sigma_{j+1}^y - \frac{h}{2}\sigma_j^z\right] \end{equation} where the $\sigma^i$, $i=x,y,z$ are Pauli matrices, $\gamma$ is an anisotropy parameter, $h$ is the magnetic field, and Planck's constant has been set to unity.
The information metric for this model has been calculated in \cite{zan1} and in the thermodynamic limit, the line element, in the region $|h| <1$, $\gamma>0$ (the ferromagnetic phase) is given by \begin{equation} ds^2 = \frac{dh^2}{16\gamma\left(1-h^2\right)} + \frac{d\gamma^2}{16\gamma\left(1 + \gamma\right)^2} \label{linexy} \end{equation} QPTs occur on the lines $\gamma=0, ~|h| \leq 1$ (the anisotropic transition line), and $|h| = 1$ (the Ising transition lines), where the spectrum of the theory becomes gapless. The information geometry is, however, very different for these two transitions. Whereas the scalar curvature (calculated from eqs.(\ref{scalarcurvature}) and (\ref{linexy})) diverges on the line $\gamma = 0$, it is regular on the lines $h = \pm 1$. For this model, the geodesic equations are \begin{equation} {\ddot h} + \frac{h{\dot h}^2}{1-h^2} - \frac{{\dot h}{\dot \gamma}}{\gamma} = 0,~~{\ddot \gamma} - \frac{{\dot \gamma}^2\left(1+3\gamma\right)}{2\gamma\left(1 + \gamma\right)} + \frac{{\dot h}^2\left(1 + \gamma\right)^2}{2\gamma\left(1 - h^2\right)}=0 \label{geoxy} \end{equation} where, as before, the dot represents a derivative with respect to the affine parameter $\tau$. Also, the normalization condition implies that \begin{equation} \frac{{\dot h}^2}{16\gamma\left(1 - h^2\right)} + \frac{{\dot \gamma}^2}{16\gamma\left(1 + \gamma\right)^2} = 1 \label{spacexy} \end{equation} \begin{figure}[t!] \centering \includegraphics[width=2.8in,height=2.3in]{xygeodesics.eps} \caption{Numerical solution for a geodesic curve with $(h,\gamma) = (0.96, 0.1)$ and $({\dot h},{\dot \gamma}) = (-0.0857,1.35)$ on the $h-\gamma$ plane. The geodesic is confined to a single phase region.} \label{xy1} \end{figure} Before attempting to solve the coupled non-linear equations of eq.(\ref{geoxy}), let us look at a special case. The first of eq.(\ref{geoxy}) is satisfied by $h = {\rm constant}$ and hence constant $h$ lines are geodesics.
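The coupled system in eq.(\ref{geoxy}) is straightforward to integrate numerically. A useful consistency check (an illustration of ours, with arbitrarily chosen initial data satisfying eq.(\ref{spacexy})) is that the left-hand side of the normalization condition, the conserved speed of an affinely parametrized geodesic, stays equal to unity along the numerical solution:

```python
import numpy as np
from scipy.integrate import solve_ivp

def geodesic(tau, y):
    # y = (h, gamma, h_dot, gamma_dot); second derivatives from eq. (geoxy)
    h, g, hd, gd = y
    hdd = -h * hd**2 / (1.0 - h**2) + hd * gd / g
    gdd = gd**2 * (1.0 + 3.0 * g) / (2.0 * g * (1.0 + g)) \
        - hd**2 * (1.0 + g)**2 / (2.0 * g * (1.0 - h**2))
    return [hd, gd, hdd, gdd]

def speed(y):
    # left-hand side of the normalization condition, eq. (spacexy)
    h, g, hd, gd = y
    return hd**2 / (16.0 * g * (1.0 - h**2)) + gd**2 / (16.0 * g * (1.0 + g)**2)

# start at (h, gamma) = (0.5, 0.5), with velocities chosen so that speed(y0) = 1
y0 = [0.5, 0.5, np.sqrt(3.0), 3.0]
sol = solve_ivp(geodesic, [0.0, 0.05], y0, rtol=1e-10, atol=1e-12)
```

The short integration interval keeps the trajectory well inside the region $0<\gamma$, $|h|<1$ where the metric components are finite.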
To find $\gamma$ as a function of the affine parameter in this case, we substitute ${\dot h} = 0$ in the second of eq.(\ref{geoxy}) and in eq.(\ref{spacexy}). Then it is seen that $\gamma = {\rm Tan}^2\left(2\left(\tau - \tau_0\right)\right)$, where $\tau_0$ is a reference value for the affine parameter. Thus, for the constant $h$ geodesics, $\gamma$ is always positive, i.e. these geodesics do not cross the phase boundary at $\gamma = 0$. Rather, they turn back on touching that line. This should be contrasted with the $m = {\rm constant}$ geodesics of the CW model, which, as we have said, are not fully understood. To solve the equations in eq.(\ref{geoxy}) in general, we adopt a numerical procedure analogous to what we have done before. As an illustration, we solve these equations with the initial conditions $(h,\gamma,{\dot h}) = (0.96, 0.1,-0.0857)$. The solution, plotted on the $h-\gamma$ plane parametrically, with the affine parameter $\tau$, is shown in fig.(\ref{xy1}). Clearly, the geodesic is confined to a single phase, and does not cross the phase boundaries, just as in the CPTs discussed earlier. It is not difficult to check this analytically by expanding the metric near the lines $\gamma = 0$ and $h = \pm 1$. \section{Conclusions and Discussions} In this paper, we have studied four model systems that exhibit phase transitions in the thermodynamic limit. The Van der Waals model, the Curie-Weiss mean field model of ferromagnetism and the infinite range Ising ferromagnet exhibit CPTs at finite temperature. The transverse XY spin chain shows a QPT at zero temperature. For all these models, we performed the most general analysis of geodesics in the parameter manifold. Such an analysis has not appeared in the literature before. In the process, we have established the information geometry of the infinite range Ising ferromagnet.
We have solved the geodesic equations for all these models in full generality, by choosing a starting point (i.e. coordinates) in the manifold, and imposing initial conditions on its derivatives with respect to the affine parameter. In this way, we are able to trace out the geodesics, and study their behavior near second order critical points. This complements and extends the results of \cite{diosi},\cite{zan2} in a non-trivial way. Our main conclusion here is that purely from a geometric perspective, geodesics near critical points show universal behavior, although the physical nature of the phase transitions is widely different. We have also established that geodesics are confined to a single phase. We believe that these results are model independent, and should be true for any model of CPTs or QPTs. It might be interesting to study geodesics in the context of scaled equations of state for classical fluid systems, and also for some other models that exhibit QPTs. In particular, in the context of CPTs, it is an interesting question to ask if geodesics show any special behavior at or near the Widom line, which is a continuation of the co-existence curve, along which the correlation length is maximized. We leave such a study for the future. \begin{center} {\bf Acknowledgements} \end{center} It is a pleasure to thank Diptiman Sen for very useful correspondence. The work of SM is supported by grant no. 09/092(0792)-2011-EMR-1, from CSIR, India.\\
\section{Introduction} Erd\H os and Hajnal observed, in \cite{EH1}, that if a graph $G$ establishes ${\omega}_1{\not\makebox[-7pt]{}\rightarrow} \bigl(({\omega},{{\omega}_1})\bigr)^2_2$ then $G$ is universal for countable graphs, i.e., every countable graph is isomorphic to a spanned subgraph of $G$. This result cannot be generalized to higher cardinals because of the following result of Shelah {\cite[Theorem 4.1]{Sh}}: {\em (a) Assume that ${\kappa}$, ${\lambda}$ and ${\tau}$ are cardinals of cofinality greater than ${\omega}$ and $G$ is a graph on ${\kappa}$. Then the property \begin{itemize} \item[($*$)] {\em $G$ establishes ${\kappa}{\not\makebox[-7pt]{}\rightarrow} (({\lambda},{\tau}))^2_2$} \end{itemize} cannot be destroyed by adding a single Cohen real, i.e. if $V\models (*)$ then $V^{Fn({\omega},2)}\models (*)$.}\\ (b) {\em If you add a Cohen real to some model $V$ then in the generic extension there is a graph $C$ on ${{\omega}_1}$ which is not isomorphic to any spanned subgraph of any graph $G$ from $V$.} Learning this result, Erd\H os and Hajnal raised the following question in {\cite[Problem 6.b]{EH2}}: {\em Assume that a graph $G$ establishes ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} ({{\omega}_1}\stackrel{.}+{\omega})^2_2$. Do all graphs of cardinality $\aleph_1$ embed into $G$?} We answer their question in the negative in theorem \ref{tm:ellenpelda}. The proof is based on theorem \ref{tm:cind}, which says that the property ``{\em $G$ establishes ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} (({{\omega}_1};{\omega}))^2_2$}'' is indestructible by adding an arbitrary number of Cohen reals to the ground model. Given a colouring $f:\br X;n;\longrightarrow C$, a subset $P\subset X$ is called {\em rainbow for $f$} (or {\em $f$-rainbow}) iff $f\restriction \br P;n;$ is one-to-one.
We also answer another question of Hajnal, \cite[Problem 4.1]{H2}, in the negative in theorem \ref{tm:con}: it is consistent with GCH that there is a function $f$ which establishes ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1};{\omega})]^2_{{{\omega}_1}}$ such that there is no uncountable $f$-rainbow set. In theorem \ref{tm:bau} we show that it is also consistent that $2^{{{\omega}_1}}$ is arbitrarily large, and a function $g$ establishes $2^{{{\omega}_1}}{\not\makebox[-7pt]{}\rightarrow}[({{\omega}_1},{\omega}_2)]^2_{{{\omega}_1}}$ such that there is no uncountable $g$-rainbow set. In the second part of the paper we deal with rainbow Ramsey theorems concerning ``bounded'' functions. A function $f:\br X;n;\rightarrow C$ is {\em ${\mu}$-bounded} iff $|f^{-1}\{c\}|\le {\mu}$ for each $c\in C$. Let us recall some ``arrow'' notations:\\ {\em ${\lambda}\rightarrow^* ({\alpha})^n_{{\kappa}-{\rm bdd}}$} holds iff for every ${\kappa}$-bounded colouring of $\br {\lambda};n;$ there is a rainbow set of order type ${\alpha}$,\\ {\em ${\lambda}\rightarrow^* [({\alpha};{\beta})]_{{\kappa}-{\rm bdd}}$} holds iff for every ${\kappa}$-bounded colouring $c$ of $\br {\lambda};2;$ there is a set $A\subset {\lambda}$ of order type ${\alpha}$ and there is a set $B\subset {\lambda}$ of order type ${\beta}$ such that $\sup A\le \sup B$ and $|[A; B]\cap c^{-1}\{{\xi}\}|<{\kappa}$ for each ${\xi}\in \operatorname{ran} c$, where $[A;B]=\{\{{\alpha},{\beta}\}:{\alpha}\in A, {\beta}\in B, {\alpha}<{\beta}\}$. Clearly ${\lambda}\rightarrow^* ({\alpha})^2_{{\kappa}-{\rm bdd}}$ implies ${\lambda}\rightarrow^* [({\alpha};{\alpha})]_{{\kappa}-{\rm bdd}}$. We say that a function $f$ {\em c.c.c-indestructibly establishes the negative partition relation $\Phi{\not\makebox[-7pt]{}\rightarrow}^*\Psi$} iff \begin{displaymath} \text{ $V^P\models$ ``$f$ establishes $\Phi{\not\makebox[-7pt]{}\rightarrow}^* \Psi$ '' } \end{displaymath} for each c.c.c poset $P$.
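On a finite scale the notions above are easy to make concrete. The following sketch (a toy illustration only; the colouring below is hypothetical and not one constructed in the paper) tests whether a colouring of pairs is $k$-bounded and whether a given set is rainbow for it:

```python
from itertools import combinations
from collections import Counter

def is_k_bounded(col, k):
    # col maps 2-element frozensets to colours; k-bounded means
    # every colour class contains at most k pairs
    return max(Counter(col.values()).values()) <= k

def is_rainbow(points, col):
    # P is rainbow for col iff col restricted to the pairs of P is one-to-one
    colours = [col[frozenset(p)] for p in combinations(points, 2)]
    return len(colours) == len(set(colours))

# a 2-bounded colouring of the six pairs of {0,1,2,3}: each colour used twice
col = {frozenset({0, 1}): 'a', frozenset({2, 3}): 'a',
       frozenset({0, 2}): 'b', frozenset({1, 3}): 'b',
       frozenset({0, 3}): 'c', frozenset({1, 2}): 'c'}
```

Here $\{0,1,2\}$ is rainbow (its three pairs receive the distinct colours a, b, c), while the whole set $\{0,1,2,3\}$ is not, since every colour repeats.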
Since ${{\omega}_1}\rightarrow({\alpha})^2_{2}$ holds for ${\alpha}<{{\omega}_1}$ by \cite{BH}, and it was proved by Galvin, \cite{G}, that ${\lambda}\rightarrow ({\alpha})^n_k$ implies ${\lambda}\rightarrow^* ({\alpha})^n_{k-{\rm bdd}}$, we have ${{\omega}_1}\rightarrow^*({\alpha})^2_{2-bdd}$ for ${\alpha}<{{\omega}_1}$. Moreover, Galvin, \cite{G}, showed that \begin{qtheorem} CH implies that ${{\omega}_1}{\not\makebox[-7pt]{}\rightarrow}^* ({{\omega}_1})^2_{2-bdd}$. \end{qtheorem} On the other hand, Todorcevic, \cite{T}, proved that \begin{qtheorem} PFA implies that ${{\omega}_1}\rightarrow^* ({{\omega}_1})^2_{2-bdd}$. \end{qtheorem} Abraham, Cummings and Smyth showed that $MA_{\aleph_{1}}$ is not enough to get ${{\omega}_1}\rightarrow^* ({{\omega}_1})^2_{2-bdd}$. More precisely, they proved the following theorem: \begin{qtheorem}[{\cite[Theorem 3]{ACS}}] It is consistent that there is a function $c:\br {{\omega}_1};2;\longrightarrow {{\omega}_1}$ which c.c.c-indestructibly establishes ${{\omega}_1}{\not\makebox[-7pt]{}\rightarrow}^* ({{\omega}_1})^2_{2-bdd}$. \end{qtheorem} They also showed that the property ``{\em $c$ establishes ${{\omega}_1}{\not\makebox[-7pt]{}\rightarrow}^* ({{\omega}_1})^2_{2-bdd}$}'' is not automatically c.c.c-indestructible: \begin{qtheorem}[{\cite[Theorem 4]{ACS}}] If CH holds and there is a Suslin-tree then there is a function $c':\br {{\omega}_1};2;\longrightarrow 2$ and there is a c.c.c poset $P$ such that \begin{enumerate}[(a)] \item $c'$ establishes ${{\omega}_1}{\not\makebox[-7pt]{}\rightarrow}^* ({{\omega}_1})^2_{2-bdd}$, \item $V^P\models$ there is an uncountable $c'$-rainbow set. \end{enumerate} \end{qtheorem} We show that even the negative partition relation ${{\omega}_1}{\not\makebox[-7pt]{}\rightarrow}^* [({{\omega}_1};{{\omega}_1})]_{k-bdd}$ is consistent with $MA_{\aleph_{1}}$ for each $k\in {\omega}$. Moreover, Abraham, Cummings and Smyth used two different functions in their theorems above.
We show that a single function can play a double role. \begin{theorem}\label{tm:acgen} If GCH holds then for each $k\in {\omega}$ there is a $k$-bounded colouring $f:\br {{\omega}_1};2;\rightarrow {{\omega}_1}$ and there are two c.c.c posets ${\mathcal P}$ and ${\mathcal Q}$ such that \begin{displaymath} V^{{\mathcal P}}\models \text{``$f$ c.c.c-indestructibly establishes ${{\omega}_1}{\not\makebox[-7pt]{}\rightarrow}^* [({{\omega}_1};{{\omega}_1})]_{k-bdd}$'',} \end{displaymath} but \begin{displaymath} V^{{\mathcal Q}}\models \text{`` ${{\omega}_1}$ is the union of countably many $f$-rainbow sets ''.} \end{displaymath} \end{theorem} \section{On a problem of Erd\H os and Hajnal.} To formulate our results we need to introduce some notation. Given two functions $f:\br X;2;\longrightarrow C$ and $d:\br Y;2;\longrightarrow C$ we say that {\em $d$ can be embedded into $f$} ($d\Rightarrow f$, in short) iff there is a one-to-one map $\Phi:Y\longrightarrow X$ such that $d(\{y,y'\})=f(\{\Phi(y),\Phi(y')\})$ for each $\{y,y'\}\in \br Y;2;$. Hajnal, \cite{H}, proved that it is consistent with GCH that there is a colouring establishing ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} ({{\omega}_1}\stackrel{.}+ {\omega})^2_2$. As it turns out, his argument gives the following stronger result: \begin{proposition}\label{pr:con} It is consistent that GCH holds and there is a function $f:\br {{\omega}_2};2;\longrightarrow {{\omega}_1}$ establishing ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1};{\omega})]^2_{{{\omega}_1}}$. \end{proposition} Since Hajnal's proof was never published, we sketch his argument. \begin{proof}[Proof of Proposition \ref{pr:con}] Define a poset ${\mathcal P}=\<P,\le\>$ as follows.
The underlying set $P$ consists of triples $\<c,{\mathcal A},{\xi}\>$ where $c:\br \operatorname{supp} (c);2;\longrightarrow {\omega}$ for some $\operatorname{supp} (c)\in \br {\omega}_2;{\omega};$, ${\mathcal A}\subset \br \operatorname{supp}(c);{\omega};$ is a countable family and ${\xi}\in {{\omega}_1}$. Put $\<d,{\mathcal B},{\zeta}\>\le \<c,{\mathcal A},{\xi}\>$ iff \begin{enumerate}[(P1)] \item $c\subset d$, ${\mathcal A}\subset {\mathcal B}$, ${\xi}\le {\zeta}$, \item for each $A\in{\mathcal A}$ and for each ${\beta}\in (\operatorname{supp} (d)\setminus \operatorname{supp} (c))\cap \min A$ \begin{displaymath} {\xi}\subset d''[\{{\beta}\},A]. \end{displaymath} \end{enumerate} Then ${\mathcal P}$ is a ${\sigma}$-complete, ${\omega}_2$-c.c. poset and if ${\mathcal G}$ is the generic filter for ${\mathcal P}$ then $g=\cup\{c:\<c,{\mathcal A},{\xi}\>\in{\mathcal G}\}$ establishes ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1};{\omega})]^2_{{{\omega}_1}}$ in $V[{\mathcal G}]$. \end{proof} Proposition \ref{pr:con} motivates the following question of Erd\H os and Hajnal, {\cite[Problem 6.b]{EH2}}: {\em Assume that a graph $G$ establishes ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} ({{\omega}_1}\stackrel{.}+{\omega})^2_2$. Do all graphs of cardinality $\aleph_1$ embed into $G$?} To answer this question in the negative we prove a preservation theorem which makes it possible to apply Shelah's method from \cite[theorem 4.1]{Sh}. \begin{theorem}\label{tm:cind} If ${\mu}\le {{\omega}_1}$ and $c$ establishes ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1};{\omega})]^2_{\mu}$ then $V^{Fn({\kappa},2)}\models$ ``$c$ establishes ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1};{\omega})]^2_{\mu}$''. \end{theorem} \begin{proof} The following lemma is straightforward. \begin{lemma}\label{lm:equiv} Let ${\mu}\le {{\omega}_1}$ and $c:\br {\omega}_2;2;\longrightarrow {\mu}$.
The following are equivalent: \begin{enumerate}[(1)] \item $c$ establishes ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1};{\omega})]^2_{\mu}$, \item $\forall B\in \br {\omega}_2;{\omega};$ $\forall {\nu}\in {\mu}$ \begin{displaymath} |\bigl\{{\alpha}<\min B: {\nu}\notin c''[\{{\alpha}\}, B]\bigr\}|\le {\omega}, \end{displaymath} \item $\forall {\mathcal B}\in \br {\brsmall {\omega}_2;{\omega};};{\omega};$ $\forall {\nu}\in {\mu}$ \begin{displaymath} |\bigl\{{\alpha}<\min \cup{\mathcal B}: \exists B\in {\mathcal B}\ {\nu}\notin c''[\{{\alpha}\}, B] \bigr\}|\le {\omega}. \end{displaymath} \end{enumerate} \end{lemma} Assume on the contrary that the theorem fails. We can assume that we add just ${{\omega}_1}$ many Cohen reals to $V$, i.e. ${\kappa}={{\omega}_1}$. We can choose ${\xi}\in {\omega}_2$, ${\nu}\in {\mu}$, $p\in Fn({{\omega}_1},2)$ and names $\dot{A}$ and $\dot{B}$ such that \begin{displaymath} p\Vdash \dot{A}\in \br {\xi};{{\omega}_1};\land \dot{B}\in \br {\omega}_2\setminus {\xi};{\omega}; \land {\nu}\notin c''[\dot{A},\dot{B}]. \end{displaymath} We can assume that $\dot{B}\in V^{Fn({\omega},2)}$ and $\operatorname{dom} p\subset {\omega}$. For each $q\in Fn({\omega},2)$ with $q\le p$ put \begin{displaymath} B(q)=\{{\zeta}:\exists r\in Fn({\omega},2)\ r\le q \land r\Vdash {\zeta}\in \dot{B}\}. \end{displaymath} Let ${\mathcal B}=\{B(q): q\in Fn({\omega},2), q\le p\}$ and $A'=\{{\alpha}\in {\omega}_2:\exists r\le p\ r\Vdash {\alpha}\in \dot{A}\}$. Then $A'\in \br {\xi};{{\omega}_1};$ and ${\mathcal B}\in \br {\brsmall {\omega}_2\setminus {\xi};{\omega};};{\omega};$. Hence, by lemma \ref{lm:equiv}, there is ${\alpha}\in A'$ such that ${\nu}\in c''[\{{\alpha}\}, B(q)]$ for each $q\in Fn({\omega},2)$ with $q\le p$. Pick $s\in Fn({{\omega}_1},2)$ with $s\Vdash {\alpha}\in \dot{A}$. Then ${\nu}\in c''[\{{\alpha}\}, B(s\restriction {\omega})]$, i.e.
there is ${\beta}\in {\omega}_2 \setminus {\xi}$ and $r\in Fn({\omega},2)$ such that $r\le s\restriction {\omega}$ and $r\Vdash {\beta}\in \dot{B}$. Then \begin{displaymath} s\cup r\Vdash {\alpha}\in \dot{A}\land {\beta}\in \dot{B} \land {\nu}\notin c''[\dot{A},\dot{B}], \end{displaymath} but $c({\alpha},{\beta})={\nu}$. Contradiction. \end{proof} \begin{theorem}\label{tm:ellenpelda} For $2\le {\mu}\le {{\omega}_1}$ it is consistent that GCH holds and there is a colouring $f:\br {{\omega}_2};2;\longrightarrow {\mu}$ establishing ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1};{\omega})]^2_{\mu}$ such that $g\not\Rightarrow f$ for some colouring $g:\br {{\omega}_1};2;\longrightarrow 2$. \end{theorem} \begin{proof} By proposition \ref{pr:con} we can assume that in the ground model GCH holds and there is a function $f:\br {{\omega}_2};2;\longrightarrow {{\omega}_1}$ establishing ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1};{\omega})]^2_{{{\omega}_1}}$. If ${\mu}\le {{\omega}_1}$ and ${\pi}_{\mu}:{{\omega}_1}\rightarrow {\mu}$ is onto then $f_{\mu}={\pi}_{\mu}\circ f$ establishes ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1};{\omega})]^2_{\mu}$. Then, by \cite[Theorem 4.1]{Sh}, in $V^{Fn({\omega},2)}$ there is a function $d:\br {{\omega}_1};2;\rightarrow 2$ such that $d\not\Rightarrow f_{\mu}$. Since \begin{displaymath} \text{$V^{Fn({\omega},2)}\models$ {\em $f_{\mu}$ establishes ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1};{\omega})]^2_{\mu}$}} \end{displaymath} by theorem \ref{tm:cind}, we are done. \end{proof} As it was observed by Hajnal, the construction of theorem \ref{tm:ellenpelda} above left open the following question which he raised in \cite[Problem 4.1]{H2}: \begin{qproblem} Assume GCH holds and a colouring $c:\br {{\omega}_2};2;\longrightarrow {{\omega}_1}$ establishes ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1};{\omega})]^2_{{{\omega}_1}}$. 
Does there exist a $c$-rainbow set of size ${{\omega}_1}$? \end{qproblem} Before answering this question let us recall some positive results of Hajnal. In \cite{H2}, he proved that \begin{qtheorem} (1) If $f: \br {{\omega}_1};2; \longrightarrow {{\omega}_1}$ establishes ${{\omega}_1}{\not\makebox[-7pt]{}\rightarrow} [({\omega},{{\omega}_1})]^2_{{{\omega}_1}} $ then $d \Rightarrow f$ for each $d:\br {\omega};2;\longrightarrow {{\omega}_1}$.\\ (2) If $f :\br {{\omega}_1};2;\longrightarrow{\omega}$ establishes ${{\omega}_1}{\not\makebox[-7pt]{}\rightarrow}[({{\omega}_1},{{\omega}_1})]^2_{\omega}$ then there exists an infinite $f$-rainbow set. \end{qtheorem} When we colour the pairs of ${{\omega}_1}$ we can not expect uncountable rainbow sets because of the following fact. \begin{proposition}\label{pr:ch} If CH holds then there is a function $f:\br {{{\omega}_1}};2;\longrightarrow {{\omega}_1}$ such that \begin{enumerate}[(1)] \item $f$ establishes ${{\omega}_1}{\not\makebox[-7pt]{}\rightarrow} [({\omega};{{\omega}_1})]^2_{{{\omega}_1}}$, \item there is no uncountable $f$-rainbow. \end{enumerate} \end{proposition} \begin{proof}[Proof of proposition \ref{pr:ch}] Enumerate $\br {{\omega}_1};{\omega};$ as $\{A_{\alpha}:{\omega}\le {\alpha}<{{\omega}_1}\}$ such that $A_{\alpha}\subset {\alpha}$. By induction on ${\alpha}$, ${\omega}\le {\alpha}<{{\omega}_1}$, define $f({\xi},{\alpha})$ for ${\xi}<{\alpha}$ such that \begin{enumerate} \item ${\alpha}\subset \{f({\xi},{\alpha}):{\xi}\in A_{\beta}\}$ for ${\beta}<{\alpha}$, \item $A_{\beta}\cup \{{\alpha}\}$ is not an $f$-rainbow for ${\beta}<{\alpha}$. \end{enumerate} Then $f$ satisfies (1) and (2). \end{proof} Next we answer \cite[Problem 4.1]{H2} in the negative. 
\begin{theorem}\label{tm:con} It is consistent that GCH holds and there is a function $g:\br {{\omega}_2};2;\longrightarrow {{\omega}_1}$ such that \begin{enumerate}[(1)] \item $g$ establishes ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1};{\omega})]^2_{{{\omega}_1}}$, \item there is no uncountable $g$-rainbow. \end{enumerate} \end{theorem} \begin{proof}[Proof of theorem \ref{tm:con}] The naive approach is to try to modify the order of the poset $P$ from the proof of proposition \ref{pr:con} by adding a condition (P3) to the definition of the order: \begin{enumerate}[(P1)] \addtocounter{enumi}{2} \item for each $A\in{\mathcal A}$ and for each ${\beta}\in (\operatorname{supp} (d)\setminus \operatorname{supp} (c))$ the set $A\cup\{{\beta}\}$ is not a $d$-rainbow. \end{enumerate} Unfortunately, this approach does not work because the modified poset does not satisfy the ${\omega}_2$-c.c. So we will argue in a different way. Define the poset $P$ as follows. The underlying set $P$ consists of quadruples $\<c,{\mathcal A},{\xi},{\mathcal D}\>$ where \begin{enumerate}[(i)] \item $c:\br \operatorname{supp} (c);2;\longrightarrow {\omega}$ for some $\operatorname{supp} (c)\in \br {\omega}_2;{\omega};$, \item ${\mathcal A}\subset \br \operatorname{supp}(c);{\omega};$ is a countable family, \item ${\omega}\le {\xi}< {{\omega}_1}$, \item ${\mathcal D}\subset \br \operatorname{supp}(c);{\omega};\times {{\omega}_1}$ is a countable family, \item $\forall \<D,{\sigma}\>\in {\mathcal D}$ $(\forall {\gamma}\in \operatorname{supp}(c))$ $|\{{\delta}\in D:c({\gamma},{\delta})< {\sigma}\}|={\omega}$.
\end{enumerate} Put $\<d,{\mathcal B},{\zeta},{\mathcal E}\>\le \<c,{\mathcal A},{\xi},{\mathcal D}\>$ iff \begin{enumerate}[(a)] \item $c\subset d$, ${\mathcal A}\subset{\mathcal B}$, ${\xi}\le {\zeta}$, ${\mathcal D}\subset {\mathcal E}$, \item for each $A\in{\mathcal A}$ and for each ${\beta}\in (\operatorname{supp} (d)\setminus \operatorname{supp} (c))\cap \min A$ \begin{displaymath} {\xi}\subset d''[\{{\beta}\}, A]. \end{displaymath} \end{enumerate} Clearly $\le $ is a partial order on $P$ and ${\mathcal P}=\<P,\le\>$ is ${\sigma}$-complete. \begin{lemma} ${\mathcal P}$ is ${\omega}_2$-c.c. \end{lemma} \begin{proof}[Proof of the lemma] We say that two conditions, $p=\<c,{\mathcal A},{\xi},{\mathcal D}\>$ and $p'=\<c',{\mathcal A}',{\xi}',{\mathcal D}'\>$, are {\em twins} iff there is an order preserving bijection ${\varphi}:\operatorname{supp} (c)\longrightarrow \operatorname{supp} (c')$ such that \begin{enumerate}[(1)] \item $K=\operatorname{supp} (c)\cap \operatorname{supp} (c')$ is an initial segment of both $\operatorname{supp} (c)$ and $\operatorname{supp} (c')$, \item $K<\operatorname{supp} (c)\setminus K<\operatorname{supp} (c')\setminus K$, \item $c({\xi},{\eta})=c'({\varphi}({\xi}),{\varphi}({\eta}))$ for each $\{{\xi},{\eta}\}\in \br \operatorname{supp}(c);2;$, \item ${\mathcal A}'=\{{\varphi}''A:A\in {\mathcal A}\}$, \item ${\xi}={\xi}'$, \item ${\mathcal D}'=\{\<{\varphi}''D,{\sigma}\>:\<D,{\sigma}\>\in{\mathcal D}\}$. \end{enumerate} It is enough to show that if $p$ and $p'$ are twins then they have a common extension $q=\<d,{\mathcal B},{\rho},{\mathcal E}\>$. Let ${\mathcal B}={\mathcal A}\cup{\mathcal A}'$, ${\rho}={\xi}={\xi}'$ and ${\mathcal E}={\mathcal D}\cup{\mathcal D}'$. We should define $d({\nu},{\mu})$ for ${\nu}\in \operatorname{supp} (c)\setminus K$ and ${\mu}\in\operatorname{supp} (c')\setminus K$. 
We enumerate all ``tasks'' as follows: Let \begin{displaymath} {\mathcal T}_0=\{\<{\beta},A',{\zeta}\>:{\beta}\in \operatorname{supp} (c)\setminus K,A'\in{\mathcal A}', A'\subset \operatorname{supp} (c')\setminus K,{\zeta}<{\xi}'\}, \end{displaymath} \begin{multline}\notag {\mathcal T}_1=\{\<{\gamma},\<D',{\sigma}'\>,n\>: {\gamma}\in \operatorname{supp} (c)\setminus K,\\ \<D',{\sigma}'\>\in{\mathcal D}'\setminus {\mathcal D}, |D'\setminus K|={\omega}, n<{\omega}\} \end{multline} and \begin{multline}\notag {\mathcal T}_2=\{\<{\gamma}',\<D,{\sigma}\>,n\>: {\gamma}'\in \operatorname{supp} (c')\setminus K,\\ \<D,{\sigma}\>\in{\mathcal D}\setminus {\mathcal D}', |D\setminus K|={\omega}, n<{\omega}\}. \end{multline} Since ${\mathcal T}={\mathcal T}_0\cup{\mathcal T}_1\cup {\mathcal T}_2$ is countable we can pick pairwise distinct ordinals $\{{\eta}_x:x\in {\mathcal T}\}$ such that \begin{enumerate}[(a)] \item if $x=\<{\beta},A',{\zeta}\>\in {\mathcal T}_0$ then ${\eta}_x\in A'$, \item if $x=\<{\gamma},\<D',{\sigma}'\>,n\>\in {\mathcal T}_1$ then ${\eta}_x\in D'\setminus K$, \item if $x=\<{\gamma}',\<D,{\sigma}\>,n\>\in{\mathcal T}_2$ then ${\eta}_x\in D\setminus K$. \end{enumerate} Choose a function $d:\br \operatorname{supp} (c)\cup\operatorname{supp} (c');2;\longrightarrow {{\omega}_1}$ such that \begin{enumerate}[(1)] \item $d\supset c\cup c'$, \item $d({\beta},{\eta}_x)={\zeta}$ for $x=\<{\beta},A',{\zeta}\>\in {\mathcal T}_0$, \item $d({\gamma},{\eta}_x)=0$ for $x=\<{\gamma},\<D',{\sigma}'\>,n\>\in {\mathcal T}_1$, \item $d({\gamma}',{\eta}_x)=0$ for $x=\<{\gamma}',\<D,{\sigma}\>,n\>\in{\mathcal T}_2$. \end{enumerate} Let $q=\<d,{\mathcal B},{\rho},{\mathcal E}\>$. To show $q\in P$ we should check only condition (v). So let $\<D,{\sigma}\>\in{\mathcal E}$ and ${\gamma}\in \operatorname{supp}(d)$. Assume that $\<D,{\sigma}\>\in {\mathcal D}$. (The case $\<D,{\sigma}\>\in {\mathcal D}'$ is similar.)
If ${\gamma}\in \operatorname{supp} (c)$ then $d\restriction[\{{\gamma}\},D]=c\restriction[\{{\gamma}\},D]$ so we are done. So we can assume that ${\gamma}\in\operatorname{supp} (c')\setminus K$. If $D\setminus K$ is finite then the set \begin{displaymath} E=\{{\delta}\in D\cap K: c({\delta},{\varphi}^{-1}({\gamma}))<{\sigma}\} \end{displaymath} is infinite because $p\in P$ satisfies (v) and for each ${\delta}\in E$ we have $d({\delta},{\gamma})= c'({\delta},{\gamma})=c'({\varphi}({\delta}),{\gamma})= c({\delta},{\varphi}^{-1}({\gamma}))<{\sigma}$. So we can assume that $D\setminus K$ is infinite. In this case $x_n=\<{\gamma},\<D,{\sigma}\>,n\>\in {\mathcal T}_2$ for $n\in {\omega}$, so $d({\gamma},{\eta}_{x_n})=0<{\sigma}$ and $\{{\eta}_{x_n}:n\in {\omega}\}\in \br D;{\omega};$. So $q\in P$. It is straightforward that $q\le p$ because no instances of (b) need to be checked. Finally we verify $q\le p'$. Since condition (a) is clear, assume that $A'\in {\mathcal A}'$ and ${\beta}\in \operatorname{supp} (c)\setminus K$ with ${\beta}<\min A'$. Since $\sup K<{\beta}$ we have $A'\subset \operatorname{supp}(c')\setminus K$. Hence for each ${\zeta}<{\xi}$ we have $x=\<{\beta},A',{\zeta}\>\in{\mathcal T}_0$ so $d({\beta},{\eta}_x)={\zeta}$. Thus ${\xi}\subset d''[\{{\beta}\},A']$. This completes the proof of the lemma. \end{proof} Let ${\mathcal G}$ be the generic filter for ${\mathcal P}$ and put $g=\cup\{c:\<c,{\mathcal A},{\xi},{\mathcal D}\>\in{\mathcal G}\}$. {\noindent \bf Claim:} {\em $g$ establishes ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1};{\omega})]^2_{{{\omega}_1}}$ in $V[{\mathcal G}]$. } Indeed, let $p=\<c,{\mathcal A},{\xi},{\mathcal D}\>\in P$. If $A\in \br \operatorname{supp} (c);{\omega}; $ and ${\eta}\in {{\omega}_1}$ then $p'=\<c,{\mathcal A}\cup\{A\},\max ({\xi},{\eta}),{\mathcal D}\>\le p$ and for each $ {\beta}\in \min A \setminus \operatorname{supp} (c)$ \begin{displaymath} p'\Vdash {\eta}\subset g''[\{{\beta}\},A].
\end{displaymath} {\noindent \bf Claim:} {\em There is no uncountable $g$-rainbow set in $V[{\mathcal G}]$.} Indeed, assume that $p_0 \Vdash \dot X\in \br {\omega}_2;{{\omega}_1};$. Since ${\mathcal P}$ is ${\sigma}$-complete there are $p\le p_0$, $p=\<c,{\mathcal A},{\xi},{\mathcal D}\>$, and $D\in \br \operatorname{supp}(c);{\omega};$ such that $p\Vdash D\subset \dot X$. Let $p'=\<c,{\mathcal A},{\xi}, {\mathcal D}\cup\{\<D,(\sup \operatorname{ran} (c))+1\>\}\>$. Then $p'\in P$ and $p'\le p$. Moreover \begin{displaymath} p'\Vdash \text{$\dot X$ is not a $g$-rainbow}. \end{displaymath} Indeed, work in $V[{\mathcal G}]$, where $p'\in{\mathcal G}$. Write $X=\{{\xi}_{\nu}:{\nu}\in {{\omega}_1}\}$. Then for each ${\nu}<{{\omega}_1}$ there is ${\gamma}_{\nu}<\sup \operatorname{ran} (c)+1$ and ${\delta}_{\nu}\in D$ with $g({\delta}_{\nu}, {\xi}_{\nu})={\gamma}_{\nu}$. Then there are ${\nu}<{\mu}<{{\omega}_1}$ with ${\gamma}_{\nu}={\gamma}_{\mu}$. Then $g({\delta}_{\nu}, {\xi}_{\nu})={\gamma}_{\nu}={\gamma}_{\mu}= g({\delta}_{\mu}, {\xi}_{\mu})$ and ${\xi}_{\nu}\ne {\xi}_{\mu}$, i.e. $X$ is not a $g$-rainbow. \medskip So, by the claims above, $g$ satisfies the requirements of the theorem. \end{proof} Baumgartner proved that if CH holds, $P=Fn(\br {\kappa};2;,{{\omega}_1};{{\omega}_1})$ for some cardinal ${\kappa}\ge {\omega}_2$, and ${\mathcal G}$ is the generic filter for $P$, then the function $g=\cup{\mathcal G}$ establishes ${\omega}_2{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1},{\omega}_2)]^2_{{{\omega}_1}}$. We prove a related result here. \begin{theorem}\label{tm:bau} If CH holds and ${\kappa}\ge {\omega}_2$ is a cardinal then there is a ${\sigma}$-complete, ${\omega}_2$-c.c. poset $P$ such that in $V^P$ there is a function $g:\br {\kappa};2;\longrightarrow {{\omega}_1}$ such that \begin{enumerate}[(1)] \item $g$ establishes ${\kappa}{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1},{\omega}_2)]^2_{{{\omega}_1}}$.
\item there is no uncountable $g$-rainbow subset of ${\kappa}$. \end{enumerate} \end{theorem} \begin{proof} Define the poset $P$ as follows. The underlying set $P$ consists of pairs $\<c,{\mathcal D}\>$ where \begin{enumerate}[(i)] \item $c:\br \operatorname{supp} (c);2;\longrightarrow {\omega}$ for some $\operatorname{supp} (c)\in \br {\kappa};{\omega};$, \item ${\mathcal D}\subset \br \operatorname{supp}(c);{\omega};\times {{\omega}_1}$ is a countable family, \item $\forall \<D,{\sigma}\>\in {\mathcal D}$ $(\forall {\gamma}\in \operatorname{supp}(c))$ $|\{{\delta}\in D:c({\gamma},{\delta})< {\sigma}\}|={\omega}$. \end{enumerate} Put $\<d,{\mathcal E}\>\le \<c,{\mathcal D}\>$ iff $c\subset d$ and ${\mathcal D}\subset {\mathcal E}$. Then $\le $ is a partial order, and ${\mathcal P}$ is ${\sigma}$-complete. We say that two conditions, $p=\<c,{\mathcal D}\>$ and $p'=\<c',{\mathcal D}'\>$, are {\em twins} iff there is an order preserving bijection ${\varphi}:\operatorname{supp} (c)\longrightarrow \operatorname{supp} (c')$ such that \begin{enumerate}[(1)] \item ${\varphi}({\xi})={\xi}$ for ${\xi}\in \operatorname{supp} (c)\cap \operatorname{supp} (c')$, \item $c({\xi},{\eta})=c'({\varphi}({\xi}),{\varphi}({\eta}))$ for each $\{{\xi},{\eta}\}\in \br \operatorname{supp}(c);2;$, \item ${\mathcal D}'=\{\<{\varphi}''D,{\sigma}\>:\<D,{\sigma}\>\in{\mathcal D}\}$. \end{enumerate} \begin{lemma}\label{lm:twins} Assume that $p=\<c,{\mathcal D}\>$, $p'=\<c',{\mathcal D}'\>$ are twins. Let $q\le p$, $q=\<d,{\mathcal E}\>$, such that $\operatorname{supp} (d)\cap \operatorname{supp} (c')=\operatorname{supp} (c)\cap \operatorname{supp} (c')$. Let $A\in \br \operatorname{supp} (d)\setminus \operatorname{supp} (c');{\omega};$, ${\xi}\in \operatorname{supp} (c') \setminus \operatorname{supp} (c)$ and ${\rho}<{{\omega}_1}$. Then there is a common extension $r=\<c_r,{\mathcal D}_r\>$ of $q$ and $p'$ such that ${\rho}\subset c_r''[\{{\xi}\},A]$. 
\end{lemma} \begin{proof}[Proof of the lemma] Write $K=\operatorname{supp} (c)\cap \operatorname{supp} (c')$ and fix the function ${\varphi}$ witnessing that $p$ and $p'$ are twins. Let \begin{displaymath} {\mathcal T}_0={\rho}, \end{displaymath} \begin{displaymath} {\mathcal T}_1=\{\<{\gamma},\<D',{\sigma}'\>,n\>:{\gamma}\in \operatorname{supp}(d)\setminus K, \<D',{\sigma}'\>\in{\mathcal D}', |D'\setminus K|={\omega}, n\in {\omega}\}, \end{displaymath} \begin{displaymath} {\mathcal T}_2=\{\<{\gamma}',\<E,{\sigma}\>,n\>:{\gamma}'\in \operatorname{supp} c'\setminus K, \<E,{\sigma}\>\in{\mathcal E}, |E\setminus K|={\omega}, n\in {\omega}\}. \end{displaymath} Since ${\mathcal T}={\mathcal T}_0\cup {\mathcal T}_1\cup {\mathcal T}_2$ is countable we can pick pairwise distinct ordinals $\{{\eta}_x:x\in {\mathcal T}\}$ such that \begin{enumerate}[(a)] \item if $x={\chi}\in {\rho}$ then ${\eta}_x\in A$, \item if $x=\<{\gamma},\<D,{\sigma}\>,n\>\in {\mathcal T}_1\cup {\mathcal T}_2$ then ${\eta}_x\in D\setminus K$. \end{enumerate} Let $c_r\supset d\cup c'$ such that \begin{enumerate}[(i)] \item $c_r({\eta}_x,{\xi})={\chi}$ if $x={\chi}\in {\mathcal T}_0$, \item $c_r({\eta}_x,{\gamma})=0$ if $x=\<{\gamma},\<D,{\sigma}\>,n\>\in{\mathcal T}_1\cup{\mathcal T}_2$. \end{enumerate} To prove $r=\<c_r,{\mathcal D}'\cup{\mathcal E}\>\in P$ it is enough to check condition (iii). Assume first that $\<D,{\sigma}\>\in {\mathcal D}'$. If ${\gamma}\in \operatorname{supp} (c')$ then $c_r\restriction[\{{\gamma}\},D]=c'\restriction[\{{\gamma}\},D]$ so we are done. So we can assume that ${\gamma}\in\operatorname{supp}(d)\setminus K$. If $D\setminus K$ is finite then $\<({\varphi}^{-1})''D,{\sigma}\>\in{\mathcal D}\subset {\mathcal E}$, and $D\cap K= ({\varphi}^{-1})''D\cap K$, so the set \begin{displaymath} H=\{{\delta}\in D\cap K: d({\delta},{\gamma})<{\sigma}\} \end{displaymath} is infinite because $q\in P$ satisfies (iii), and $H\subset\{{\delta}\in D: c_r({\delta},{\gamma})<{\sigma}\}$.
So we can assume that $D\setminus K$ is infinite. In this case $x_n=\<{\gamma},\<D,{\sigma}\>,n\>\in {\mathcal T}_1$ for $n\in {\omega}$, so $c_r({\gamma},{\eta}_{x_n})=0<{\sigma}$ and $\{{\eta}_{x_n}:n\in {\omega}\}\in \br D;{\omega};$. Assume now that $\<D,{\sigma}\>\in {\mathcal E}$. If ${\gamma}\in \operatorname{supp}(d)$ then $c_r\restriction[\{{\gamma}\},D]=d\restriction[\{{\gamma}\},D]$ so we are done. So we can assume that ${\gamma}\in\operatorname{supp} (c')\setminus K$. If $D\setminus K$ is finite then ${\gamma}'={\varphi}^{-1}({\gamma})\in \operatorname{supp} (c)\subset \operatorname{supp}(d)$ and $q\in P$ imply that the set \begin{displaymath} H=\{{\varepsilon}\in D\cap K: d({\varepsilon},{\gamma}')<{\sigma}\} \end{displaymath} is infinite. But for each ${\varepsilon}\in H$ we have $c_r({\varepsilon},{\gamma})=c'({\varepsilon},{\gamma})= c({\varepsilon},{\gamma}')=d({\varepsilon},{\gamma}')$. So we can assume that $D\setminus K$ is infinite. In this case $x_n=\<{\gamma},\<D,{\sigma}\>,n\>\in {\mathcal T}_2$ for $n\in {\omega}$, so $c_r({\gamma},{\eta}_{x_n})=0<{\sigma}$ and $\{{\eta}_{x_n}:n\in {\omega}\}\in \br D;{\omega};$. So $r\in P$ and clearly $r\le q,p'$. Finally for each ${\zeta}<{\rho}$ we have ${\eta}_{\zeta}\in A$ and $c_r({\xi},{\eta}_{\zeta})={\zeta}$. So ${\rho}\subset c_r''[\{{\xi}\},A]$. \end{proof} \begin{lemma} ${\mathcal P}$ is ${\omega}_2$-c.c. \end{lemma} \begin{proof}[Proof of the lemma] Since any family of conditions of size ${\omega}_2$ contains two conditions $p$ and $p'$ which are twins we can apply the previous lemma to yield that $p$ and $p'$ are compatible in $P$. \end{proof} Let ${\mathcal G}$ be the generic filter for ${\mathcal P}$ and put $g=\cup\{c:\<c,{\mathcal D}\>\in{\mathcal G}\}$. \begin{lemma}\label{lm:gok} $g$ establishes ${\kappa}{\not\makebox[-7pt]{}\rightarrow} [({{\omega}_1},{\omega}_2)]^2_{{{\omega}_1}}$ in $V[{\mathcal G}]$.
\end{lemma} \begin{proof} Assume that $p\Vdash \dot X=\{\dot{\xi}_{\nu}:{\nu}<{\omega}_2\} \in \br{\kappa};{\omega}_2;, \dot Y\in \br{\kappa};{{\omega}_1};$. For each ${\rho}<{{\omega}_1}$ we will construct a condition $r\le p$ such that $r\Vdash {\rho}\subset g''[\dot X,\dot Y]$. Write $p=\<c,{\mathcal D}\>$. For each ${\nu}<{\omega}_2$ pick $p_{\nu}=\<c_{\nu},{\mathcal D}_{\nu}\>\le p$ such that $p_{\nu}\Vdash \dot{\xi}_{\nu}={\xi}_{\nu}$ for some ${\xi}_{\nu}\in \operatorname{supp} (c_{\nu})$. Since CH holds there is $I\in \br {\omega}_2;{\omega}_2;$ such that \begin{enumerate} \item $\{\operatorname{supp} (c_{\nu}):{\nu}\in I\}$ forms a $\Delta$-system with kernel $K$, \item for each $\{{\nu},{\mu}\}\in \br I;2;$ the conditions $p_{\nu}$ and $p_{\mu}$ are twins. \end{enumerate} Since $P$ satisfies ${\omega}_2$-c.c.\ we can assume that ${\xi}_{\nu}\in \operatorname{supp} (c_{\nu})\setminus K$ for ${\nu}\in I$. Fix ${\mu}\in I$. Pick a condition $q\le p_{\mu}$, $q=\<d,{\mathcal E}\>$, such that $q\Vdash Z\subset \dot Y$ for some $Z\in \br \operatorname{supp}(d)\cap({\kappa}\setminus K);{\omega};$. Choose ${\nu}\in I$ such that $\operatorname{supp} (c_{\nu})\cap \operatorname{supp}(d)=K$. By lemma \ref{lm:twins} there is a condition $r=\<c_r,{\mathcal D}_{\nu}\cup {\mathcal E}\>\in P$ such that $r\le q,p_{\nu}$ and ${\rho}\subset c_r''[\{{\xi}_{\nu}\},Z]$. Then $r\Vdash {\rho}\subset c_r''[\{{\xi}_{\nu}\},Z]\subset g''[\dot X,\dot Y]$. \end{proof} \begin{lemma} There is no uncountable $g$-rainbow set in $V[{\mathcal G}]$. \end{lemma} \begin{proof} Indeed, assume that $p_0 \Vdash \dot X\in \br {\kappa};{{\omega}_1};$. Since ${\mathcal P}$ is ${\sigma}$-complete there are $p\le p_0$, $p=\<c,{\mathcal D}\>$, and $D\in \br \operatorname{supp}(c);{\omega};$ such that $p\Vdash D\subset \dot X$. Let $p'=\<c, {\mathcal D}\cup\{\<D,(\sup \operatorname{ran} (c))+1\>\}\>$. Then $p'\in P$ and $p'\le p$.
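That $p'$ is indeed a condition can be checked directly against clause (iii) of the definition of $P$ (a routine verification we spell out): the bound in the new pair was chosen above every value of $c$, so every element of $D$ is a witness.

```latex
% Clause (iii) for the new pair <D,(sup ran(c))+1>: for each gamma in supp(c)
% we have c(gamma,delta) <= sup ran(c) < (sup ran(c))+1 for every delta in D,
% so the witness set is all of the infinite set D:
\begin{displaymath}
\{{\delta}\in D : c({\gamma},{\delta})<(\sup \operatorname{ran} (c))+1\}=D
\in \br \operatorname{supp}(c);{\omega}; .
\end{displaymath}
```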
Moreover \begin{displaymath} p'\Vdash \text{$\dot X$ is not a $g$-rainbow}. \end{displaymath} Indeed, work in $V[{\mathcal G}]$, where $p'\in{\mathcal G}$. Write $X=\{{\xi}_{\nu}:{\nu}\in {{\omega}_1}\}$. Then for each ${\nu}<{{\omega}_1}$ there is ${\gamma}_{\nu}<\sup \operatorname{ran} (c)+1$ and ${\delta}_{\nu}\in D$ with $g({\delta}_{\nu}, {\xi}_{\nu})={\gamma}_{\nu}$. Since $\sup \operatorname{ran} (c)+1$ is a countable ordinal, there are ${\nu}<{\mu}<{{\omega}_1}$ with ${\gamma}_{\nu}={\gamma}_{\mu}$. Thus $g({\delta}_{\nu}, {\xi}_{\nu})={\gamma}_{\nu}={\gamma}_{\mu}= g({\delta}_{\mu}, {\xi}_{\mu})$ and ${\xi}_{\nu}\ne {\xi}_{\mu}$, i.e. $X$ is not a $g$-rainbow. \end{proof} So, by the lemmas above, $g$ satisfies the requirements of the theorem. \end{proof} \section{$k$-bounded colourings} \begin{definition} Let $X\in \br{{\omega}_1};{{\omega}_1};$, $f:\br X;2;\rightarrow {{\omega}_1}$, $k\in {\omega}$. \begin{enumerate}[(a)] \item $f$ is {\em $k$-bounded } iff $|f^{-1}\{{\gamma}\}|\le k$ for each ${\gamma}\in \operatorname{ran} (f)$. \item Put \begin{displaymath} \fink X=\{D\in \br{\br X;k;};<{\omega};:d\cap d'=\emptyset \text{ for each } \{d,d'\}\in \br D;2;\}. \end{displaymath} \item For $D\in\fink X$ let \begin{displaymath} \Homo Df=\{{\alpha}:\forall d\in D\ (\forall {\delta},{\delta}'\in d)\ f({\delta},{\alpha})=f({\delta}',{\alpha}) \}. \end{displaymath} \item Given any cardinal ${\mu}$ let \begin{displaymath} \seqk {\mu}X=\{\<D_i:i<{\mu}\>\subset\fink X: (\cup D_i)\cap (\cup D_j)=\emptyset \text{ for } i<j<{\mu} \}. \end{displaymath} \item $f$ is an {\em AR$^{(k)}$-function} iff \begin{enumerate}[(i)] \item $f$ is $k$-bounded, \item for each $\<D_i:i<{\omega}\>\in \seqk {\omega}X$ there is ${\gamma}<{{\omega}_1}$ such that \begin{displaymath} X\setminus {\gamma}\subset\cup\{\Homo {D_i}f:i<{\omega}\}.
\end{displaymath} \end{enumerate} \end{enumerate} \end{definition} \begin{obs} An AR$^{(k)}$-function $f:\br {{\omega}_1};2;\longrightarrow {{\omega}_1}$ establishes the negative partition relation ${{\omega}_1} {\not\makebox[-7pt]{}\rightarrow}^* [({\omega};{{\omega}_1})]_{{k}-{\rm bdd}}$. \end{obs} \begin{proof} Assume that $A\in \br {{\omega}_1};{\omega};$ and $B\in \br {{\omega}_1};{{\omega}_1};$. Pick pairwise disjoint sets $\{d_i:i<{\omega}\}\subset \br A;k;$. Write $D_i=\{d_i\}$ and $\vec D=\<D_i:i<{\omega}\>$. Since $\vec D\in \seqk {\omega}{{\omega}_1}$ and $f$ is an AR$^{(k)}$-function there is ${\gamma}<{{\omega}_1}$ with ${{\omega}_1}\setminus{\gamma}\subset \cup\{\Homo {D_i}f:i<{\omega}\}$, so picking ${\beta}\in B\setminus {\gamma}$ we have ${\beta}\in \Homo {D_i}f$ for some $i<{\omega}$, which means that $|f''[d_i,\{{\beta}\}]|=1$. Since $d_i\in \br A;k;$ we are done. \end{proof} \begin{lemma}\label{lm:const} If CH holds then for each $k\in {\omega}$ there is an AR$^{(k)}$-function $f:\br{{\omega}_1};2;\rightarrow {{\omega}_1}$. \end{lemma} \begin{proof} The construction is standard. Let $\{C_{\alpha}:{\omega}\le {\alpha}<{{\omega}_1}\}\subset \br {{\omega}_1};{\omega};$ be disjoint sets. Fix an enumeration $\<\vec D_{\alpha}:{\omega}\le {\alpha}<{{\omega}_1}\>$ of $\seqk {\omega}{{{\omega}_1}}$ such that $\cup \cup \vec D_{\alpha}\subset {\alpha}$. Let ${\omega}\le {\alpha}<{{\omega}_1}$ be fixed. For each ${\omega}\le {\xi}<{\alpha}$ pick $i_{\xi}\in {\omega}$ such that the sets $\{\cup (\vec D_{\xi}(i_{\xi})):{\omega}\le {\xi}<{\alpha}\}$ are pairwise disjoint. Choose a function $g_{\alpha}:{\alpha}\rightarrow C_{\alpha}$ such that \begin{itemize} \item[($*$)] $g_{\alpha}({\delta})=g_{\alpha}({\delta}')$ iff $\{{\delta},{\delta}'\}\in \br d;2;$ for some ${\omega}\le {\xi}<{\alpha}$ and $d\in \vec D_{\xi}(i_{\xi})$. \end{itemize} For ${\delta}<{\alpha}$ let $f({\delta},{\alpha})=g_{\alpha}({\delta})$ (and let $f$ take pairwise distinct values on $\br {\omega};2;$). \end{proof} \begin{theorem}\label{tm:pos} If GCH holds and $f:\br {{\omega}_1};2;\rightarrow{{\omega}_1}$ is an AR$^{(k)}$-function then there is a c.c.c.
poset $P$ such that \begin{displaymath} V^P\models\text{ $f$ c.c.c-indestructibly establishes ${{\omega}_1} {\not\makebox[-7pt]{}\rightarrow}^* [({{\omega}_1};{{\omega}_1})]_{{k}-{\rm bdd}}$}. \end{displaymath} \end{theorem} Although an AR$^{(k)}$-function establishes ${{\omega}_1} {\not\makebox[-7pt]{}\rightarrow}^* [({\omega};{{\omega}_1})]_{{k}-{\rm bdd}}$, there is no function which c.c.c-indestructibly establishes ${{\omega}_1} {\not\makebox[-7pt]{}\rightarrow}^* [({\omega};{{\omega}_1})]_{{k}-{\rm bdd}}$, because Martin's Axiom implies ${{\omega}_1} \rightarrow^* (({\omega},{{\omega}_1}))^2_{{k}-{\rm bdd}}$. \begin{theorem}\label{tm:neg} If GCH holds and $f:\br {{\omega}_1};2;\rightarrow{{\omega}_1}$ is an AR$^{(k)}$-function then there is a set $X\in \br {{\omega}_1};{{\omega}_1};$ and a c.c.c. poset $Q$ such that \begin{displaymath} V^Q\models\text{ $X$ has a partition into countably many $f$-rainbow sets}. \end{displaymath} \end{theorem} Before proving the theorems above we need to introduce some notions. Given a set $x$, denote by $\operatorname{TC}(x)$ the transitive closure of $x$. Let ${\kappa}$ be a large enough regular cardinal (${\kappa}=(2^{{{\omega}_1}})^{+}$ works). Put $H_{{\kappa}}= \{x:|\operatorname{TC}(x)|<{\kappa}\}$ and ${\mathcal H}_{{\kappa}}=\<H_{{\kappa}},\in,\prec\>$, where $\prec$ is a well-ordering of $H_{\kappa}$. \begin{definition} (a) A sequence $\vec N=\<N_{{\alpha}}:{\alpha}\in A\>$ of countable, elementary submodels of ${\mathcal H}_{{\kappa}}$ is called an {\em A-chain\/} iff $A\subset{{\omega}_1}$ and whenever ${\alpha},{\beta}\in A$ with ${\alpha}<{\beta}$ we have $N_{\alpha}\in N_{\beta}$.\\ (b) Suppose that $\vec N=\<N_{\alpha}:{\alpha}\in A\>$ is an $A$-chain and $Y\subset{{\omega}_1}$. We say that $Y$ is {\em separated by $\vec N$} iff for each $C\in\br Y;2;$ there is an ${\alpha}\in A$ with $|N_{\alpha}\cap C|=1$. \end{definition} \begin{lemma}\label{lm:main} Assume that $f$ is an AR$^{(k)}$-function.
If $\<N_m:m\le n\>$ is an elementary $n+1$-chain, $f\in N_0$, $\vec D_0,\dots, \vec D_{n-1}\in \seqk {\omega}{{\omega}_1}\cap N_0$, and ${\alpha}_m\in N_{m+1}\setminus N_m$ for $m<n$ then the set \begin{displaymath} \{i<{\omega}:\forall m<n\ {\alpha}_m\in \Homo {\vec D_m(i)}f\} \end{displaymath} is infinite. \end{lemma} \begin{proof} We prove the lemma by induction on $n$. So assume that the set \begin{displaymath} I=\{i<{\omega}:\forall m<n-1\ {\alpha}_m\in \Homo {\vec D_m(i)}f\} \end{displaymath} is infinite. (If $n=1$ then $I={\omega}$). Write $I=\{i_j:j\in {\omega}\}$ and for each $\ell<{\omega}$ put \begin{displaymath} \vec E^\ell=\<\vec D_{n-1}(i_j): \ell\le j<{\omega}\>. \end{displaymath} Since $f$ is AR$^{(k)}$\ and $\vec E^\ell\in \seqk {\omega}{{\omega}_1}$, there is ${\gamma}_\ell<{{\omega}_1}$ such that \begin{displaymath} {{\omega}_1}\setminus {\gamma}_\ell\subset \cup\{\Homo{\vec D_{n-1}(i_j)}{f}:j\in {\omega}\setminus \ell\}. \end{displaymath} So if we take ${\gamma}=\sup\{{\gamma}_\ell:\ell<{\omega}\}$ then for each ${\alpha}\in {{\omega}_1}\setminus {\gamma}$ the set \begin{displaymath} J_{\alpha}=\{i\in I: {\alpha}\in \Homo{\vec D_{n-1}(i)}f\} \end{displaymath} is infinite. Since $f,\vec D_0,\dots, \vec D_{n-1}, {\alpha}_0,\dots, {\alpha}_{n-2} \in N_{n-1}$ we have $I\in N_{n-1}$ and so $\vec E^\ell\in N_{n-1}$ as well. Thus $\<{\gamma}_\ell:\ell<{\omega}\>\in N_{n-1}$ and so ${\gamma}=\sup \<{\gamma}_\ell:\ell<{\omega}\>\in N_{n-1}$ as well. Hence ${\alpha}_{n-1}\in N_n\setminus N_{n-1}\subset {{\omega}_1}\setminus {\gamma}$ and so $J_{\alpha_{n-1}}$ is infinite. But \begin{displaymath} J_{\alpha_{n-1}}= \{i<{\omega}:\forall m<n\ {\alpha}_m\in \Homo {\vec D_m(i)}f\}, \end{displaymath} so we are done. \end{proof} \begin{proof}[Proof of Theorem \ref{tm:neg}] Let $\vec N=\<N_{\xi}:{\xi}<{{\omega}_1}\>$ be an ${{\omega}_1}$-chain with $f\in N_0$ and let $X\in \br {{\omega}_1};{{\omega}_1};$ be separated by $\vec N$.
Define the poset ${\mathcal Q}=\<Q,\le\>$ as follows: \begin{displaymath} Q=\{q\in Fn(X,{\omega}): \text{$q^{-1}\{n\}$ is $f$-rainbow for each $n\in \operatorname{ran} q$}\}, \end{displaymath} and let $q\le q'$ iff $q\supset q'$. \begin{lemma} ${\mathcal Q}$ satisfies c.c.c. \end{lemma} \begin{proof}[Proof of the lemma] Assume that $\{q_{\nu}:{\nu}<{{\omega}_1}\}\subset Q$. Let $x_{\nu}=\operatorname{dom} q_{\nu}$, $L_{\nu}=\operatorname{ran} q_{\nu}$, and $x_{{\nu},\ell}=q_{\nu}^{-1}\{\ell\}$ for $\ell\in L_{\nu}$. We can assume that \begin{enumerate}[(1)] \item $\{x_{\nu}:{\nu}<{{\omega}_1}\}$ forms a $\Delta$ system with kernel $x$, \item $x< x_{\zeta}\setminus x<x_{\xi}\setminus x$ for ${\zeta}<{\xi}<{{\omega}_1}$, \item $L_{\nu}=L$ for each ${\nu}<{{\omega}_1}$, \item $q_{\nu}\restriction \br x;2;=q$ for each ${\nu}<{{\omega}_1}$. \end{enumerate} For ${\zeta}\in {{\omega}_1}$ let \begin{displaymath} F({\zeta})=\{{\xi}<{{\omega}_1}: f''[x_{\zeta},x_{\zeta}\setminus x]\cap f''[x_{\xi},x_{\xi}\setminus x]\ne\emptyset\}. \end{displaymath} Since $f$ is $k$-bounded, $F({\zeta})$ is finite, and so there is an $F$-free set $Z=\{{\zeta}_i:i<{{\omega}_1}\}\in \br {{\omega}_1};{{\omega}_1};$. For ${\beta}\in {{\omega}_1}$ let ${\rho}({\beta})=\min \{{\nu}:{\beta}\in N_{\nu}\}$. For each ${\xi}\in X$ pick $d_{\xi}\in \br {{\omega}_1};k;$ such that ${\xi}\in d_{\xi}$ and ${\rho}({\eta})={\rho}({\xi})$ for each ${\eta}\in d_{\xi}$. For ${\zeta}\in Z$ let $D_{{\zeta}}=\{d_{\xi}:{\xi}\in x_{{\zeta}}\setminus x\}$. Let $\vec D=\<D_{{\zeta}_i}:i<{\omega}\>$. Clearly $\vec D\in \seqk {\omega}{{{\omega}_1}}$. Since CH holds there is ${\gamma}<{{\omega}_1}$ such that $\vec D\in N_{\gamma}$. Let ${\zeta}\in Z$ be such that $N_{\gamma}\cap (x_{\zeta}\setminus x)=\emptyset$. Apply lemma \ref{lm:main} for $n=|x_{\zeta}\setminus x|$, $\vec D_m=\vec D$ for $m<n$ and $\{{\alpha}_m:m<n\}=x_{\zeta}\setminus x$.
Then, there is $i<{\omega}$ such that \begin{displaymath} (\forall m<n) \ {\alpha}_m\in \Homo {\vec D(i)}f. \end{displaymath} By the construction this means that \begin{displaymath} (\forall {\eta}\in x_{{\zeta}}\setminus x)\ (\forall {\xi}\in x_{{\zeta}_i}\setminus x)\ (\forall {\delta}\in d_{\xi})\ f({\delta},{\eta})=f({\xi},{\eta}). \end{displaymath} \smallskip \noindent {\bf Claim:} $q_{{\zeta}_i}\cup q_{\zeta}\in Q$, i.e. $f$ is 1--1 on $\br x_{{\zeta}_i,\ell}\cup x_{{\zeta},\ell};2;$ for all $\ell\in L$. \smallskip Let ${\xi},{\eta},{\xi}',{\eta}'\in x_{{\zeta}_i,\ell}\cup x_{{\zeta},\ell}$ with ${\xi}<{\eta}$ and ${\xi}'<{\eta}'$ such that $f({\xi},{\eta})=f({\xi}',{\eta}')$. Assume first that $\{{\xi},{\eta}\},\{{\xi}',{\eta}'\}\in \br x_{{\zeta}_i,\ell};2;\cup \br x_{\zeta,\ell};2;$. Since $q_{{\zeta}_i}, q_{\zeta}\in Q$ we can assume that $\{{\xi},{\eta}\}\in\br x_{{\zeta}_i,\ell};2;\setminus \br x_{\zeta,\ell};2;$ and $\{{\xi}',{\eta}'\}\in \br x_{\zeta,\ell};2;\setminus \br x_{{\zeta}_i,\ell};2;$ (otherwise $f({\xi},{\eta})=f({\xi}',{\eta}')$ implies $\{{\xi},{\eta}\}=\{{\xi}',{\eta}'\}$ and we are done). Then $f({\xi},{\eta})\in f''[x_{{\zeta}_i}, x_{{\zeta}_i}\setminus x]$ and $f({\xi}',{\eta}')\in f''[x_{\zeta}, x_{\zeta}\setminus x]$, so ${\zeta}_i\notin F({\zeta})$ implies $f({\xi},{\eta})\ne f({\xi}',{\eta}')$. So we can assume that e.g. $\{{\xi},{\eta}\}\notin \br x_{{\zeta}_i,\ell};2;\cup \br x_{\zeta,\ell};2;$, i.e. ${\xi}\in x_{{\zeta}_i,\ell}\setminus x$ and ${\eta}\in x_{\zeta,\ell}\setminus x$. But we know that \begin{displaymath} (\forall {\delta}\in d_{\xi})\ f({\delta},{\eta})=f({\xi},{\eta}). \end{displaymath} Since $f$ is $k$-bounded and $|d_{\xi}|=k$ we have \begin{displaymath} \bigl\{\{{\xi}',{\eta}'\}:f({\xi}',{\eta}')=f({\xi},{\eta}) \bigr\}= \bigl\{\{{\delta},{\eta}\}:{\delta}\in d_{\xi}\bigr\}. \end{displaymath} But $d_{\xi}\cap (x_{{\zeta}_i,\ell}\cup x_{\zeta,\ell})=\{{\xi}\}$ because ${\rho}({\delta})={\rho}({\xi})$ for each ${\delta}\in d_{\xi}$.
Hence $f({\xi}',{\eta}')=f({\xi},{\eta})$ implies ${\xi}={\xi}'$ and ${\eta}={\eta}'$. \end{proof} Since $\{q\in Q:{\xi}\in \operatorname{dom} q\}$ is dense in ${\mathcal Q}$ for each ${\xi}\in X$ we have that if ${\mathcal G}$ is the generic filter in $Q$ and $g=\cup{\mathcal G}$, then $\{g^{-1}\{n\}:n\in {\omega}\}$ is a partition of $X$ into countably many $f$-rainbow sets, which completes the proof of Theorem \ref{tm:neg}. \end{proof} To prove theorem \ref{tm:pos} we need some more preparation. We will use a black box theorem from \cite{So}. Given a set $K$ and a natural number $m$ let \begin{displaymath} \text{{$\finmok$}= $\{s:\text{$s$ is a function,}\ \operatorname{dom} (s)\in \br {{\omega}_1};m;,\operatorname{ran} (s)\subset K\}$}. \end{displaymath} A sequence $\<s_{\alpha}:{\alpha}<{{\omega}_1}\>\subset\finmok$ is {\em dom-disjoint } iff {$\domm[ s_{\alpha}]\cap \domm[ s_{\beta}]=\emptyset$} for all ${\alpha}<{\beta}<{{\omega}_1}$. Let $H$ be a graph on ${{\omega}_1}\times K$, $m\in {\omega}$. {We say that $H$ is {$m$-solid} if given any {dom-disjoint\ sequence $\<s_{\alpha}:{\alpha}<{{\omega}_1}\>\subset \finmok $}} {there are ${\alpha}<{\beta}< {{\omega}_1}$ such that {\begin{displaymath} [s_{\alpha},s_{\beta}]\subset H. \end{displaymath}}} {$H$ is called {\em strongly solid} iff it is $m$-solid for each $m\in{\omega}$.} \begin{btheorem}[{\cite[Theorem 2.2]{So}}] Assume $2^{{{\omega}_1}}={\omega}_2$. If $H$ is a {strongly solid} graph on ${{\omega}_1}\times K$, where $|K|\le 2^{{{\omega}_1}}$, then for each $m\in {\omega}$ there is a c.c.c. poset $P$ of size ${\omega}_2$ such that \begin{displaymath} V^P\models \text{\em ``$H$ is {c.c.c-indestructibly $m$-solid.}''} \end{displaymath} \end{btheorem} The theorem above is built on a method of Abraham and Todor\v cevi\'c from \cite{AT}. We need one more lemma before we can apply the Black Box Theorem above.
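To fix ideas, it may help to unpack solidity in the simplest case $m=1$ (a direct rewording of the definition above; the vertex notation $\<{\zeta}_{\alpha},k_{\alpha}\>$ is ours): a dom-disjoint sequence of elements of the $m=1$ version of $\finmok$ is just an ${{\omega}_1}$-sequence of vertices with pairwise distinct first coordinates, and $1$-solidity asserts an edge between two of them.

```latex
% m = 1: each s_alpha is the singleton function {<zeta_alpha, k_alpha>},
% with the ordinals zeta_alpha pairwise distinct; 1-solidity of H says
% that two of these vertices are adjacent:
\begin{displaymath}
\exists\, {\alpha}<{\beta}<{{\omega}_1} \quad
\bigl\{ \<{\zeta}_{\alpha},k_{\alpha}\>, \<{\zeta}_{\beta},k_{\beta}\> \bigr\}
\in H .
\end{displaymath}
```

This is exactly the form in which $1$-solidity is used in lemma \ref{lm:1} below.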
\begin{lemma}\label{lm:disjoint} There is a function $r:\br {{\omega}_1};<{\omega};\rightarrow {\omega}$ such that for each $A,B\in \br {{\omega}_1};<{\omega};$ if $r(A)=r(B)$ then $A\cap B$ is an initial segment of $A$ and $B$. \end{lemma} \begin{proof} Let ${\mathcal D}$ be a countable dense subset of the product space ${\omega}^{{{\omega}_1}}$. Moreover, for each ${\alpha}<{{\omega}_1}$ fix a function $f_{\alpha}:{\alpha}\stackrel{1-1}{\rightarrow}{\omega}$. Let $A=\{{\alpha}_0,\dots, {\alpha}_{n-1}\}\in \br {{\omega}_1};<{\omega};$, ${\alpha}_0<\dots< {\alpha}_{n-1}$. Pick $d_A\in {\mathcal D}$ such that $d_A({\alpha}_i)=i$ for each $i<|A|$. Let \begin{displaymath} r(A)=\< d_A, \<f_{{\alpha}_i}''(A\cap {{\alpha}_i}):i<|A|\>\>. \end{displaymath} Since the range of $r$ is countable it is enough to prove that if $r(A)=r(B)$ then $A\cap B$ is an initial segment of $A$ and $B$. Write $A=\{{\alpha}_i:i<n\}$, ${\alpha}_0<\dots <{\alpha}_{n-1}$, and $B=\{{\beta}_j:j<m\}$, ${\beta}_0<\dots< {\beta}_{m-1}$. Assume that ${\alpha}_i={\beta}_j$. Then $d_A({\alpha}_i)=i$ and $d_B({\beta}_j)=j$. Since $d_A=d_B$ it follows that $i=j$. So $r(A)=r(B)$ yields $f_{{\alpha}_i}''(A\cap {\alpha}_i)=f_{{\alpha}_i}''(B\cap {\alpha}_i)$. Since $f_{{\alpha}_i}$ is 1--1 on ${\alpha}_i$ it follows that $A\cap {\alpha}_i=B\cap {\alpha}_i$. Thus $A\cap B$ is an initial segment of both $A$ and $B$. \end{proof} We will use the following corollary of this lemma. \begin{corollary}\label{cor:r} There is a function $r:\br {{\omega}_1};<{\omega};\longrightarrow {\omega}$ such that for each $A,B\in \br {{\omega}_1};<{\omega};$ if $\min(A)\ne \min(B)$ and $r(A)=r(B)$ then $A\cap B=\emptyset$. \end{corollary} \begin{proof}[Proof of Theorem \ref{tm:pos}] Let $\vec N=\<N_{\xi}:{\xi}<{{\omega}_1}\>$ be an ${{\omega}_1}$-chain with $f\in N_0$. Fix the function $r$ from corollary \ref{cor:r} above. For ${\xi}\in {{\omega}_1}$ let ${\rho}({\xi})=\min \{{\nu}:{\xi}\in N_{\nu}\}$. Let $K=\br {{\omega}_1};k;\times {{\omega}_1} \times {\omega}$.
For any function $c:\br {{\omega}_1};2;\longrightarrow {{\omega}_1}$ define a graph $H_c$ on ${{\omega}_1}\times K$ as follows. If $x,x'\in {{\omega}_1}\times K$, $x=\<{\zeta},\<d,{\xi},m\>\>$, $x'=\<{\zeta}',\<d',{\xi}',m'\>\>$, ${\zeta}<{\zeta}'$, let $\{x,x'\}$ be an edge in $H_c$ provided \\ IF \begin{enumerate}[(1)] \item $m=m'$, \item ${\rho}({\xi}')={\zeta}'$, \item ${\zeta}<\min d$, \item $r(\{{\zeta}\}\cup d)=m$, \end{enumerate} THEN \begin{enumerate}[(1)] \addtocounter{enumi}{4} \item $c({\delta},{\xi}')=c({\varepsilon},{\xi}')$ for each ${\delta},{\varepsilon}\in d$. \end{enumerate} \begin{lemma}\label{lm:1} If $H_c$ is $1$-solid for some colouring $c$ then $c$ establishes ${{\omega}_1}{\not\makebox[-7pt]{}\rightarrow}^*[({{\omega}_1};{{\omega}_1})]_{{k}-{\rm bdd}}$. \end{lemma} \begin{proof} We will show that for all $X=\{{\xi}_{\beta}:{\beta}<{{\omega}_1}\}\in \br {{\omega}_1};{{\omega}_1};$ and for every disjoint family $\{d_{\alpha}:{\alpha}<{{\omega}_1}\}\subset \br{{\omega}_1};k;$ there are ${\alpha},{\beta}<{{\omega}_1}$ such that $\max d_{\alpha}<{\xi}_{\beta}$ and $|c''[d_{\alpha}, \{{\xi}_{\beta}\}]|=1$. By thinning out and renumbering the sequences we can assume that \begin{enumerate}[(1)] \item ${\rho}({\xi}_{\alpha})<\min {\rho}''d_{\alpha}<\max {\rho}''d_{\alpha}<{\rho}({\xi}_{{\alpha}+1})$ for ${\alpha}<{{\omega}_1}$, \item $r(\{{\rho}({\xi}_{\alpha})\}\cup d_{\alpha})=m$ for some $m\in {\omega}$ and each ${\alpha}\in {{\omega}_1}$. \end{enumerate} Let $x_{\alpha}=\<{\rho}({\xi}_{\alpha}),\<d_{\alpha},{\xi}_{\alpha}, m\>\>$ for ${\alpha}<{{\omega}_1}$. Since the sequence $\<\{x_{\alpha}\}:{\alpha}<{{\omega}_1}\>$ is dom-disjoint, and (1)-(4) hold for each ${\alpha}<{\beta}<{{\omega}_1}$, there are ${\alpha}<{\beta}<{{\omega}_1}$ such that (5) holds for $x_{\alpha}$ and $x_{\beta}$ because $H_c$ is $1$-solid, i.e. $|c''[d_{\alpha},\{{\xi}_{\beta}\}]|=1$, which was to be proved.
\end{proof} \begin{lemma}\label{lm:strongly} If $f$ is an AR$^{(k)}$-function and $CH$ holds then $H_{f}$ is strongly solid. \end{lemma} \begin{proof} Let $m\in {\omega}$ and $\<E_{\alpha}:{\alpha}<{{\omega}_1}\>\subset \finmok$ be a dom-disjoint\ sequence. Write $E_{\alpha}=\{x_{{\alpha},i}:i<m\}$, $x_{{\alpha},i}= \<{\zeta}_{{\alpha},i}, \<d_{{\alpha},i},{\xi}_{{\alpha},i}, n_{{\alpha},i}\>\>$. We can assume that \begin{enumerate}[(i)] \item $n_{{\alpha},i}=n_i$, \item ${\rho}({\xi}_{{\alpha},i})={\zeta}_{{\alpha},i}$, \item ${\zeta}_{{\alpha},i}<\min d_{{\alpha},i}$, \item $r(\{{\zeta}_{{\alpha},i}\}\cup(d_{{\alpha},i}))=n_i$, \item $\max {\rho}''d_{{\alpha},i}<{\zeta}_{{\beta},j}$ for ${\alpha}<{\beta}<{{\omega}_1}$ and $i,j<m$. \end{enumerate} Let $N=\{n_i:i<m\}$. For ${\alpha}<{{\omega}_1}$ and $n\in N$ put $D_{{\alpha},n}=\{d_{{\alpha},i}:n_i=n\}$. \noindent{\bf Claim:} $D_{{\alpha},n}\in \fink {{\omega}_1}$. Indeed, if $i\ne j<m$ and $n_i=n_j$ then $r(\{{\zeta}_{{\alpha},i}\}\cup d_{{\alpha},i})=n_i=n_j= r(\{{\zeta}_{{\alpha},j}\}\cup d_{{\alpha},j})$ but $\min (\{{\zeta}_{{\alpha},i}\}\cup d_{{\alpha},i})= {\zeta}_{{\alpha},i}\ne {\zeta}_{{\alpha},j}= \min (\{{\zeta}_{{\alpha},j}\}\cup d_{{\alpha},j})$ so $d_{{\alpha},i}\cap d_{{\alpha},j}=\emptyset$ by the choice of the function $r$. \medskip (iii) and (v) together give $\max (\cup D_{{\alpha},n})<\min (\cup D_{{\beta},n})$ for ${\alpha}<{\beta}<{{\omega}_1}$ and $n\in N$. Thus $\vec D'_n=\<D_{\ell,n}:\ell<{\omega}\>\in \seqk {\omega}{{\omega}_1}$. Since CH holds there is ${\gamma}<{{\omega}_1}$ such that $\{\vec D'_n:n\in N\}\subset N_{\gamma}$. Pick ${\alpha}<{{\omega}_1}$ such that $N_{\gamma}\cap \{{\zeta}_{{\alpha},j}:j<m\}=\emptyset$. Let $\vec D_j=\vec D'_{n_j}$ for $j<m$.
We are going to apply lemma \ref{lm:main} as follows: $\vec M=\<N_{\gamma},N_{{\zeta}_{{\alpha},j}}:j<m\>$ is an elementary $m+1$-chain, $f,\vec D_0,\dots, \vec D_{m-1} \in N_{\gamma}$ and ${\xi}_{{\alpha},j}\in N_{{\zeta}_{{\alpha},j}}\setminus N_{{\zeta}_{{\alpha},j-1}}$ for $j<m$, where ${\zeta}_{{\alpha},-1}={\gamma}$. Hence, by lemma \ref{lm:main} there is $\ell<{\omega}$ such that for each $j<m$ \begin{displaymath}\tag{$\circ$}\label{circ} {\xi}_{{\alpha},j}\in \Homo {\vec D_j(\ell)}f. \end{displaymath} \noindent{\bf Claim:} $[E_\ell,E_{\alpha}]\subset H_f$. Let $i,j<m$. We show $\{x_{\ell,i}, x_{{\alpha},j}\}\in H_f$. (2)-(4) hold by the construction. If $n_i\ne n_j$ then (1) fails so we are done. Assume that $n_i=n_j=n\in N$. Then $d_{\ell,i}\in \vec D'_n(\ell)=\vec D_j(\ell)$. Thus \begin{displaymath} (\forall {\delta},{\delta}'\in d_{\ell,i})\ f({\delta},{\xi}_{{\alpha},j}) =f({\delta}',{\xi}_{{\alpha},j}) \end{displaymath} by (\ref{circ}). Hence (5) holds and so $\{x_{\ell,i}, x_{{\alpha},j}\}\in H_f$. \end{proof} Now we can easily conclude the proof of theorem \ref{tm:pos}. Let $f:\br {{\omega}_1};2;\rightarrow{{\omega}_1}$ be an AR$^{(k)}$-function. By lemma \ref{lm:strongly}, the graph $H_f$ is strongly solid. Since $GCH$ holds, we can apply our Black Box Theorem to find a c.c.c. poset $P$ such that \begin{displaymath} V^P\models\text{$H_f$ is c.c.c-indestructibly 1-solid.} \end{displaymath} But then, by lemma \ref{lm:1}, \begin{displaymath} V^P\models\text{$f$ c.c.c-indestructibly establishes ${{\omega}_1}{\not\makebox[-7pt]{}\rightarrow}^*[({{\omega}_1};{{\omega}_1})]_{{k}-{\rm bdd}}$.} \end{displaymath} \end{proof} \begin{proof}[Proof of theorem \ref{tm:acgen}] Since GCH holds, by lemma \ref{lm:const} there is an AR$^{(k)}$-function $g:\br{{\omega}_1};2;\longrightarrow {{\omega}_1}$. By theorem \ref{tm:neg} there is a set $X\in \br {{\omega}_1};{{\omega}_1};$ and a c.c.c. poset $Q$ such that \begin{displaymath} V^Q\models\text{ $X$ has a partition into countably many $g$-rainbow sets}.
\end{displaymath} Let $h:{{\omega}_1}\longrightarrow X$ be a bijection and put $f=g \circ h$. Then \begin{displaymath} V^Q\models\text{ ${{\omega}_1}$ has a partition into countably many $f$-rainbow sets}. \end{displaymath} Since $f$ is an AR$^{(k)}$-function as well, we can apply theorem \ref{tm:pos} to obtain that \begin{displaymath} V^P\models\text{ $f$ c.c.c-indestructibly establishes ${{\omega}_1} {\not\makebox[-7pt]{}\rightarrow}^* [({{\omega}_1};{{\omega}_1})]_{{k}-{\rm bdd}}$}, \end{displaymath} for some c.c.c. poset $P$, which proves the theorem. \end{proof} Lemma \ref{lm:const} and theorem \ref{tm:pos} immediately give \begin{corollary}\label{tm:main} ${{\omega}_1} {\not\makebox[-7pt]{}\rightarrow}^* [({{\omega}_1};{{\omega}_1})]_{{k}-{\rm bdd}}$ is consistent with Martin's Axiom. \end{corollary}
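The route behind the corollary can be sketched as a two-step forcing (this is our reading of how the pieces combine; the name $\dot R$ for the poset forcing Martin's Axiom is ours).

```latex
% Step 1: under GCH, lemma lm:const yields an AR^(k)-function f on omega_1,
%         and theorem tm:pos yields a c.c.c. poset P with
%         V^P |= "f c.c.c-indestructibly establishes the relation".
% Step 2: over V^P force Martin's Axiom with a c.c.c. poset R-dot;
%         c.c.c-indestructibility means the relation survives this step:
\begin{displaymath}
V^{P\ast \dot R}\models \text{MA} \;\wedge\;
{{\omega}_1} {\not\makebox[-7pt]{}\rightarrow}^* [({{\omega}_1};{{\omega}_1})]_{{k}-{\rm bdd}} .
\end{displaymath}
```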
\section{Introduction} Since the pioneering paper of Zamolodchikov \cite{W3}, a lot of extended nonlinear conformal algebras (the $W$-type algebras) have been constructed and studied (see, e.g., \cite{BS} and references therein). The growing interest in this subject is motivated by many interesting applications of nonlinear algebras to string theory, integrable systems, etc. However, the intrinsic nonlinearity of $W$-algebras makes it rather difficult to apply to them the standard arsenal of techniques used in the case of linear algebras (while constructing their field representations, etc.). A way to circumvent this difficulty has been proposed by us in \cite{KS}. We found that in many cases a given nonlinear $W$-algebra can be embedded into some linear conformal algebra which is generated by a finite number of currents and contains the considered $W$-algebra as a subalgebra in some nonlinear basis. Up to now the explicit construction has been carried out for some of the simplest examples of nonlinear (super)algebras ($W_3$ and $W_3^{(2)}$ \cite{KS}, $WB_2$ and $W_{2,4}$ \cite{BKS}). Besides being a useful tool to construct new field realizations of nonlinear algebras [3-4], these linear algebras provide a suitable framework for considering the embeddings of the Virasoro string in the $W$-type ones \cite{BO}. In the present letter\footnote{The preliminary version of this Letter has been presented as a talk at the International Workshop ``Finite Dimensional Integrable Systems'', July 18-21, JINR, Dubna, 1994.} we show that linearization is a general property inherent to many nonlinear $W$-type algebras. We demonstrate that a wide class of $W$-(super)algebras, including the $U(N)$-superconformal \cite{KB}, $W_N^{(N-1)}$ [7-9], as well as $W_N$ \cite{FL} algebras, admits a linearization. The explicit formulas relating the linear and nonlinear algebras for all these cases are given. The example of the $W_4$ algebra is elaborated in detail.
\setcounter{equation}0 \section{Linearizing $U(N)$ (quasi)superconformal algebras.} In this Section we construct linear conformal algebras which contain the algebra $W_{N+2}^{(N+1)}$ or the $U(N)$ superconformal algebras as subalgebras in some nonlinear basis. By this we mean that the currents of the nonlinear algebras can be related by an {\it invertible} transformation to those of the linear algebras. In what follows these linear algebras will be called the linearizing algebras for the nonlinear ones. Let us start by recalling the operator product expansions (OPE's) for the $W_{N+2}^{(N+1)}$ algebras and the $U(N)$ superconformal algebras (SCA). The OPE's for these algebras can be written in a general uniform way keeping in mind that the $W_N^{(N-1)}$ algebra is none other than the $U(N-2)$ quasi-superconformal algebra (QSCA) [7-9] \footnote{Strictly speaking, the $W_N^{(N-1)}$ algebra coincides with the $GL(N-2)$ QSCA. In what follows, we will not specify the real forms of the algebras and use the common term $U(N)$ QSCA.}. Both the $U(N)$ SCA and the $U(N)$ QSCA have the same number of generating currents: the stress tensor $T(z)$, the $U(1)$ current $U(z)$, the $SU(N)$ Kac-Moody currents $J_{a}^{b}(z)$ $(1\leq a,b \leq N , \mbox{Tr}(J)=0)$ and two sets of currents in the fundamental, $G_a(z)$, and conjugated, ${\bar G}^b(z)$, representations of $SU(N)$. The currents $G_a(z),{\bar G}^b(z)$ are bosonic for the $U(N)$ QSCA and fermionic for the $U(N)$ SCA. To distinguish between these two cases we, following refs.
\cite{Rom}, introduce the parameter $\epsilon$ equal to $1 (-1)$ for the QSCAs (SCAs) and write the OPE's for these algebras in the following universal form: \begin{eqnarray} T(z_1)T(z_2) & = & \frac{c/2}{z_{12}^4}+\frac{2T}{z_{12}^2}+ \frac{T'}{z_{12}} \quad , \quad U(z_1)U(z_2) = \frac{c_1}{z_{12}^2} \; , \nonumber \\ T(z_1)J_{a}^{b}(z_2) & = & \frac{J_{a}^{b}}{z_{12}^2}+ \frac{{J_{a}^{b}}'}{z_{12}} \quad , \quad T(z_1)U(z_2) = \frac{U}{z_{12}^2}+\frac{U'}{z_{12}} \;, \nonumber \\ T(z_1)G_a(z_2)& = & \frac{3/2 G_a}{z_{12}^2}+\frac{G_a'}{z_{12}}\quad , \quad T(z_1){\bar G}^a(z_2) = \frac{3/2 {\bar G}^a}{z_{12}^2}+ \frac{{{\bar G}^a}'}{z_{12}}\; , \nonumber \\ J_a^b(z_1)J_c^d(z_2) & = & (K-\epsilon -N)\frac{\delta_a^d\delta_c^b- \frac{1}{N}\delta_a^b\delta_c^d}{z_{12}^2}+ \frac{\delta_c^b J_a^d-\delta_a^d J_c^b}{z_{12}} \; , \nonumber \\ U(z_1)G_a(z_2) & = & \frac{G_a}{z_{12}} \quad , \quad U(z_1){\bar G}^a(z_2) = -\frac{{\bar G}^a}{z_{12}} \; , \nonumber \\ J_a^b(z_1)G_c(z_2) & = & \frac{\delta_c^b G_a -\frac{1}{N}\delta_a^b G_c}{z_{12}} \quad , \quad J_a^b(z_1){\bar G}^c(z_2) = \frac{-\delta_a^c {\bar G}^b + \frac{1}{N}\delta_a^b {\bar G}^c}{z_{12}} \; , \nonumber \\ G_a(z_1) {\bar G}^b(z_2) & = & \frac{2\delta_a^b c_2}{z_{12}^3}+ \frac{2x_2\delta_a^b U + 2x_3 J_a^b}{z_{12}^2}+ \frac{x_2\delta_a^b U' + x_3 {J_a^b}'+2x_5 (J_a^dJ_d^b)}{z_{12}}+ \nonumber \\ & & \frac{2x_4 (UJ_a^b)+\delta_a^b \left( x_1 (UU)- 2\epsilon T + 2x_6 (J_d^eJ_e^d)\right) }{z_{12}} \; , \label{ope} \end{eqnarray} where the central charges $c$ and parameters $x$ are defined by \begin{eqnarray} c & = & \frac{-6\epsilon K^2 +(N^2+11\epsilon N +13)K-(\epsilon+N) (N^2+5\epsilon N+6)}{K} \; , \nonumber \\ c_1 & = & \frac{N(2K-N-2\epsilon)}{2+\epsilon N} \quad , \quad c_2=\frac{(K-N-\epsilon )(2K-N-2\epsilon )}{K} \; , \nonumber \\ x_1 & = & \frac{(\epsilon +N)(2\epsilon+N)}{N^2K} \quad , \quad x_2 = \frac{(2\epsilon +N)(K-\epsilon-N)}{\epsilon NK}
\quad , \quad x_3 = \frac{2K-N-2\epsilon }{K} \; , \nonumber \\ x_4 & = & \frac{2+\epsilon N}{NK} \quad , \quad x_5 = \frac{1}{K} \quad , \quad x_6 = \frac{1}{2\epsilon K} \; . \label{opecoeff} \eea The currents in the r.h.s. of the OPE's \p{ope} are evaluated at the point $z_2$, $z_{12}=z_1-z_2$, and normal ordering in the nonlinear terms is understood. The main question we need to answer in order to linearize the algebras \p{ope} is which minimal set of additional currents must be added to \p{ope} to obtain extended linear conformal algebras containing \p{ope} as subalgebras. The idea of our construction comes from the observation that the classical $(K \rightarrow \infty )$ $U(N)$ (Q)SCA \p{ope} can be realized as left shifts in the following coset space \begin{equation}}\newcommand{\ee}{\end{equation} g = e^{\int \! dz {\bar Q}^a (z) G_a (z) } \;,\label{coset} \ee which is parametrized by $N$ parameter-currents ${\bar Q}^a(z)$ with the unusual conformal weight $-1/2$. In this case, all the currents of the $U(N)$ (Q)SCA \p{ope} can be constructed from ${\bar Q}^a(z)$, their conjugate momenta $G_a(z) = \delta /\delta {\bar Q}^a$ and the currents of the maximal {\it linear} subalgebra ${\cal H}_N$ \begin{equation}}\newcommand{\ee}{\end{equation} {\cal H}_N = \left\{ T , U, J_a^b , {\bar G}^a \right\} \; . \label{glin} \ee Though the situation in the quantum case is more involved, it still seems reasonable to try to extend the $U(N)$ (Q)SCA \p{ope} by $N$ additional currents ${\bar Q}^a(z)$ with conformal weights $-1/2$.\footnote{Let us recall that a current with just this conformal weight appears in the linearization of the $W_3^{(2)}$ algebra \cite{KS}.} Fortunately, this extension is sufficient to construct the linearizing algebras for the $U(N)$ (Q)SCAs.
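Let us note, as a simple consistency check of \p{opecoeff}, that in the classical limit $K \rightarrow \infty$ the coefficients multiplying the nonlinear terms in the OPE $G_a(z_1){\bar G}^b(z_2)$ all vanish,
\begin{equation}
x_1 \; , \; x_4 \; , \; x_5 \; , \; x_6 = O\left( \frac{1}{K}\right) \; , \quad \mbox{while} \quad x_2 \rightarrow \frac{2\epsilon +N}{\epsilon N} \quad , \quad x_3 \rightarrow 2 \; ,
\end{equation}
so the algebras \p{ope} indeed become linear in this limit, in agreement with the coset realization \p{coset}.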
Without going into details, let us write down the set of OPE's for these linear algebras, which we will denote as $(Q)SCA_N^{lin}$ \begin{eqnarray}}\newcommand{\eea}{\end{eqnarray} T(z_1)T(z_2) & = & \frac{c/2}{z_{12}^4}+\frac{2T}{z_{12}^2}+ \frac{T'}{z_{12}} \quad , \quad U(z_1)U(z_2) = \frac{c_1}{z_{12}^2} \; , \nonumber \\ T(z_1)J_{a}^{b}(z_2) & = & \frac{J_{a}^{b}}{z_{12}^2}+ \frac{{J_{a}^{b}}'}{z_{12}} \quad , \quad T(z_1)U(z_2) = \frac{U}{z_{12}^2}+\frac{U'}{z_{12}} \;, \nonumber \\ T(z_1)G_a(z_2)& = & \frac{3/2 G_a}{z_{12}^2}+\frac{G_a{}'}{z_{12}}\quad , \quad T(z_1)\widetilde{\overline G}{}^a(z_2) = \frac{3/2 \widetilde{\overline G}{}^a}{z_{12}^2}+ \frac{\widetilde{\overline G}{}^a{}'}{z_{12}}\; , \nonumber \\ T(z_1){\bar Q}^a(z_2) & = & \frac{-1/2 {\bar Q}^a}{z_{12}^2}+ \frac{{\bar Q}^a{}'}{z_{12}}\; , \nonumber \\ J_a^b(z_1)J_c^d(z_2) & = & (K-\epsilon -N)\frac{\delta_a^d\delta_c^b- \frac{1}{N}\delta_a^b\delta_c^d}{z_{12}^2}+ \frac{\delta_c^b J_a^d-\delta_a^d J_c^b}{z_{12}} \; , \nonumber \\ U(z_1)G_a(z_2) & = & \frac{G_a}{z_{12}} \quad , \quad U(z_1)\widetilde{\overline G}{}^a(z_2) = -\frac{\widetilde{\overline G}{}^a}{z_{12}} \quad , \quad U(z_1){\bar Q}^a(z_2) = -\frac{{\bar Q}^a}{z_{12}} \; , \nonumber \\ J_a^b(z_1)G_c(z_2) & = & \frac{\delta_c^b G_a -\frac{1}{N}\delta_a^b G_c}{z_{12}} \quad , \quad J_a^b(z_1)\widetilde{\overline G}{}^c(z_2) = \frac{-\delta_a^c \widetilde{\overline G}{}^b + \frac{1}{N}\delta_a^b \widetilde{\overline G}{}^c}{z_{12}} \; , \nonumber \\ J_a^b(z_1){\bar Q}^c(z_2) & = & \frac{-\delta_a^c {\bar Q}^b + \frac{1}{N}\delta_a^b {\bar Q}^c}{z_{12}} \; , \nonumber \\ G_a(z_1) {\bar Q}^b(z_2) & = & \frac{\delta_a^b}{z_{12}} \quad , \quad G_a(z_1) \widetilde{\overline G}{}^b(z_2) = \mbox{regular} \;. \label{linal1} \eea Here the central charges $c$ and $c_1$ are the same as in \p{opecoeff}, and the currents $G_a(z),\widetilde{\overline G}{}^a(z)$ and ${\bar Q}^a(z)$ are bosonic (fermionic) for $\epsilon = 1(-1)$.
In order to prove that the linear algebra $(Q)SCA_N^{lin}$ \p{linal1} contains $U(N)$ (Q)SCA \p{ope} as a subalgebra, let us perform the following {\it invertible} nonlinear transformation to the new basis $\left\{ T(z),U(z),J_a^b(z),G_a(z),{\bar G}^a(z), {\bar Q}^a(z)\right\}$, where the "new" current ${\bar G}^a(z)$ is defined as \begin{eqnarray}}\newcommand{\eea}{\end{eqnarray} {\bar G}^a & = & \widetilde{\overline G}{}^a + y_1 {\bar Q}^a{}''+ y_2 (J_b^a{\bar Q}^b{}') + y_3 (U{\bar Q}^a{}')+y_4 ({J_b^a}'{\bar Q}^b)+ y_5 (U'{\bar Q}^a)+y_6 (T{\bar Q}^a)+\nonumber \\ & & y_7(J_b^cJ_c^a{\bar Q}^b)+ y_8(J_b^cJ_c^b{\bar Q}^a)+ y_9 (UJ_b^a{\bar Q}^b)+ y_{10} (UU{\bar Q}^a) + y_{11}(J_b^cG_c{\bar Q}^b{\bar Q}^a)+ \nonumber \\ & & y_{12}(J_b^aG_c{\bar Q}^c{\bar Q}^b)+y_{13}(G_b'{\bar Q}^b{\bar Q}^a)+ y_{14}(G_b{\bar Q}^b{}'{\bar Q}^a)+ y_{15}(G_b{\bar Q}^b{\bar Q}^a{}')+ \nonumber \\ & & y_{16}(G_bG_c{\bar Q}^b{\bar Q}^c{\bar Q}^a) + y_{17}(UG_b{\bar Q}^b{\bar Q}^a) \; , \label{tr1} \eea and the coefficients $y_1-y_{17}$ are defined as \begin{eqnarray}}\newcommand{\eea}{\end{eqnarray} y_1 & = & 2K \quad , \quad y_2 = 4 \quad , \quad y_3=\frac{2(2+\epsilon N)}{N} \quad , \quad y_4=\frac{2(K-\epsilon -N)}{K} \;, \nonumber \\ y_5 & = & \frac{(K-\epsilon-N)(2+\epsilon N)}{NK} \quad , \quad y_6=-2\epsilon \quad , \quad y_7=\frac{2}{K} \quad , \quad y_8=\frac{2}{\epsilon K} \quad , \quad y_9=\frac{2(2+\epsilon N)}{NK} \; , \nonumber \\ y_{10} & = & \frac{(\epsilon+N)(2\epsilon+N)}{N^2K} \quad , \quad y_{11} = y_{12}=\frac{2}{K} \quad , \quad y_{13}= \frac{2(K-N-2\epsilon )}{K}\quad , \quad y_{14}=4 \; \nonumber \\ y_{15} & = & 2 \quad , \quad y_{16}=\frac{2}{\epsilon K} \quad , \quad y_{17}=\frac{2(2+\epsilon N)}{NK} \; . 
\label{sing} \eea Now it is a matter of straightforward (though tedious) calculation to check that OPE's for the set of currents $\left\{ T(z),U(z),J_a^b(z),G_a(z)\right\}$ and ${\bar G}^a(z)$ \p{tr1} coincide with the basic OPE's of the $U(N)$ (Q)SCA \p{ope}. Thus, we have shown that the linear algebra $(Q)SCA_N^{lin}$ \p{linal1} contains $U(N)$ (Q)SCA as a subalgebra in the nonlinear basis. We close this Section with a few comments. First of all, we would like to stress that the pairs of currents $G_a(z)$ and ${\bar Q}^a(z)$ (with conformal weights equal to $3/2$ and $-1/2$, respectively) in \p{linal1} look like ``ghost--anti-ghost'' fields and so $(Q)SCA_N^{lin}$ algebra \p{linal1} can be simplified by means of the standard ghost decoupling transformations \begin{eqnarray}}\newcommand{\eea}{\end{eqnarray} U & = & {\widetilde U}-\epsilon (G_a{\bar Q}^a) \; , \nonumber \\ J_a^b & = & {\widetilde J}{}_a^b - \epsilon (G_a{\bar Q}^b) + \delta_a^b\frac{\epsilon}{N}(G_c{\bar Q}^c) \; , \nonumber \\ T & = & {\widetilde T} +\frac{1}{2}\epsilon (G_a'{\bar Q}^a) +\frac{3}{2}\epsilon (G_a{\bar Q}^a{}') -\frac{\epsilon (2+ \epsilon N)}{2K} {\widetilde U}{}' \; . \label{ghosts} \eea In the new basis the algebra $(Q)SCA_N^{lin}$ splits into the direct product of the ghost--anti-ghost algebra $\Gamma_N=\left\{ {\bar Q}^a, G_b \right\}$ with the OPE's $$ G_a(z_1) {\bar Q}^b(z_2) = \frac{\delta_a^b}{z_{12}} \label{gam} $$ and the algebra of the currents $\left\{ {\widetilde T},{\widetilde U}, {\widetilde J}{}_a^b,\widetilde{\overline G}{}^a\right\}$. We denote the latter as $\widetilde{(Q)SCA}{}_N^{lin}$. 
It is defined by the following set of OPE's \begin{eqnarray}}\newcommand{\eea}{\end{eqnarray} {\widetilde T}(z_1){\widetilde T}(z_2) & = & \frac{-6\epsilon K^2 +(N^2+13)K -(N^3-N+6\epsilon )}{2K\; z_{12}^4}+ \frac{2{\widetilde T}}{z_{12}^2}+ \frac{{\widetilde T}'}{z_{12}} \quad , \nonumber \\ {\widetilde U}(z_1){\widetilde U}(z_2) & = & \left(\frac{2NK}{2+\epsilon N}\right)\frac{1}{z_{12}^2} \; , \; {\widetilde T}(z_1){\widetilde J}{}_{a}^{b}(z_2) = \frac{{\widetilde J}{}_{a}^{b}}{z_{12}^2}+ \frac{{\widetilde J}{}_{a}^{b}{}'}{z_{12}} \; , \nonumber \\ {\widetilde T}(z_1){\widetilde U}(z_2) & = & \frac{{\widetilde U}}{z_{12}^2}+\frac{{\widetilde U}'}{z_{12}} \;, \nonumber \\ {\widetilde T}(z_1)\widetilde{\overline G}{}^a(z_2) & = & \left( \frac{3}{2}+\frac{\epsilon (2+\epsilon N)}{2K}\right) \frac{\widetilde{\overline G}{}^a}{z_{12}^2}+ \frac{\widetilde{\overline G}{}^a{}'}{z_{12}}\; , \nonumber \\ {\widetilde J}{}_a^b(z_1){\widetilde J}{}_c^d(z_2) & = & (K-N)\frac{\delta_a^d\delta_c^b-\frac{1}{N}\delta_a^b\delta_c^d}{z_{12}^2}+ \frac{\delta_c^b {\widetilde J}{}_a^d- \delta_a^d {\widetilde J}{}_c^b}{z_{12}} \; , \nonumber \\ {\widetilde U}(z_1)\widetilde{\overline G}{}^a(z_2) & = & -\frac{\widetilde{\overline G}{}^a}{z_{12}} \; , \; {\widetilde J}{}_a^b(z_1)\widetilde{\overline G}{}^c(z_2) = \frac{-\delta_a^c \widetilde{\overline G}{}^b + \frac{1}{N}\delta_a^b \widetilde{\overline G}{}^c}{z_{12}} \; , \nonumber \\ \widetilde{\overline G}{}^a(z_1) \widetilde{\overline G}{}^b(z_2) & = & \mbox{regular} \;, \label{linal2} \eea \begin{equation}}\newcommand{\ee}{\end{equation} (Q)SCA_N^{lin}=\Gamma_N \otimes \widetilde{(Q)SCA}{}_N^{lin} \quad . \ee Secondly, note that the linear algebra $\widetilde{(Q)SCA}{}_N^{lin}$ \p{linal2} has the same number of currents and the same structure relations as the maximal linear subalgebra ${\cal H}_N$ \p{glin} of the $U(N)$ (Q)SCA \p{ope}, but with "shifted" central charges and conformal weights.
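The shift of the central charge in the Virasoro sector can be traced explicitly. Comparing \p{opecoeff} with \p{linal2}, one finds
\begin{equation}
c - {\widetilde c} = 11\epsilon N - \frac{6N(2+\epsilon N)}{K} \; ,
\end{equation}
where ${\widetilde c}$ denotes the central charge in the ${\widetilde T}{\widetilde T}$ OPE \p{linal2}. This is just the contribution $11\epsilon N$ of the $N$ ghost--anti-ghost pairs with conformal weights $(3/2,-1/2)$, together with the shift $-12\alpha^2 {\widetilde c}_1$ produced by the ${\widetilde U}{}'$ term in \p{ghosts}, with $\alpha = -\frac{\epsilon (2+\epsilon N)}{2K}$ and ${\widetilde c}_1 = \frac{2NK}{2+\epsilon N}$.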
It is important that the central charges and conformal weights are strictly related as in \p{linal2}.\footnote{Let us remark that the Jacobi identities for the set of currents $\left\{ {\widetilde T},{\widetilde U}, {\widetilde J}{}_a^b,\widetilde{\overline G}{}^a\right\}$ fix neither the central charges nor the conformal weight of $\widetilde{\overline G}{}^a$.} Otherwise, with another relation between these parameters, we would never find the $U(N)$ (Q)SCA \p{ope} inside $(Q)SCA{}_N^{lin}$. Thus, our starting assumption about the structure of the linear algebra for the $U(N)$ (Q)SCA, coming from the classical coset realization approach, proved to be correct, modulo shifts of the central charges and conformal weights. Thirdly, let us remark that among the $U(N)$ (Q)SCAs there are many (super)algebras which are well known under other names. For example:\footnote{To avoid the singularity in \p{opecoeff} at $\epsilon=-1,N=2$ one should first rescale the current $U\rightarrow \frac{1}{\sqrt{2+\epsilon N}}U$ and then put $\epsilon=-1,N=2$ \cite{KB}.} \begin{eqnarray}}\newcommand{\eea}{\end{eqnarray} (Q)SCA ( \epsilon=1,N=1) & \equiv & W_3^{(2)} \quad \cite{PB}, \nonumber \\ (Q)SCA ( \epsilon=-1,N=1) & \equiv & N=2\; SCA \quad \ \cite{A}, \nonumber \\ (Q)SCA ( \epsilon=-1,N=2) & \equiv & N=4 \; SU(2)\; SCA \quad \cite{A}. \nonumber \eea Finally, let us recall that in the simplest case of the $W_3^{(2)}$ algebra \cite{KS}, the linear algebra $\widetilde{QSCA}{}_1^{lin}$ \p{linal2} coincides with the linear algebra $W_3^{lin}$ for $W_3$. For general $N$ the situation is more complicated. This will be discussed in the next Section. \setcounter{equation}0 \section{Linearizing $W$ algebras.} The problem of constructing linear algebras for nonlinear ones can be naturally divided into two steps. As the first step, we need to find the appropriate sets of additional currents which linearize the given nonlinear algebra.
In other words, we must construct a linear algebra (like $(Q)SCA_N^{lin}$) with the correct relations between all central charges and conformal weights, which contains the nonlinear algebra as a subalgebra in some nonlinear basis. As the second step, we need to explicitly construct the transformation from the linear basis to a nonlinear one (like \p{tr1}). While the first step is highly non-trivial, the second one is purely technical: in principle, we could write down the most general expression with arbitrary coefficients and appropriate conformal weights, and then fix all the coefficients from the OPE's of the nonlinear algebra. In this Section we will demonstrate that the linear algebra $QSCA_N^{lin}$ \p{linal1} constructed in the previous Section gives us hints on how to find the linear algebras for many other $W$-type algebras which can be obtained from the $GL(N)$ QSCAs via the secondary Hamiltonian reduction \cite{DFRS}. \subsection{Secondary linearization.} The bosonic $GL(N)$ QSCAs (or, in another notation, $W_{N+2}^{(N+1)}$), which have been linearized in the previous Section, can be obtained through the Hamiltonian reduction from the affine $sl(N+2)$ algebras [7-9]. The constraints on the currents of the $sl(N+2)$ algebra which yield $W_{N+2}^{(N+1)}$ read \begin{equation}}\newcommand{\ee}{\end{equation} \left( \begin{array}{cc|cccc} U & T & {\overline G}{}^1 & {\overline G}{}^2 & \ldots & {\overline G}{}^N \\ 1 & 0 & 0 & 0 & \ldots & 0 \\ \hline 0 & G_1 & & & & \\ 0 & G_2 & & & & \\ \vdots & \vdots & \multicolumn{4}{c}{ sl(N) - \frac{\delta_a^b}{N}U} \\ 0 & G_N & & & & \end{array} \right) \label{ss1} \ee The $W_{N+2}^{(N+1)}$ algebras, forming a particular class of $W$-algebras with quadratic nonlinearity, are at the same time universal in the sense that a lot of other $W$-algebras can be obtained from them via the secondary Hamiltonian reduction (e.g., the $W_N$ algebras, etc.) \cite{DFRS}.
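For instance, in the simplest case $N=1$ the constraint matrix \p{ss1} reduces to
\begin{equation}
\left( \begin{array}{cc|c} U & T & {\overline G}{}^1 \\ 1 & 0 & 0 \\ \hline 0 & G_1 & -U \end{array} \right) \; ,
\end{equation}
where the lower-right entry is just the trace part $-\frac{\delta_a^b}{N}U$ of \p{ss1} (the $sl(1)$ block being empty), so the whole matrix remains traceless, in accordance with the identification $(Q)SCA(\epsilon=1,N=1) \equiv W_3^{(2)}$ given in the previous Section.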
Let us consider the set of possible secondary reductions of the $W_{N+2}^{(N+1)}$ algebra \p{ss1}. These are introduced by imposing the constraints \begin{eqnarray}}\newcommand{\eea}{\end{eqnarray} G_1 = 1 \quad , \quad G_2=\ldots =G_N=0 \quad ,& & \label{ss2}\\ \left. sl(N)\right|_{sl(2)} \quad , & & \label{ss3} \eea where we denote by $\left. sl(N)\right|_{sl(2)}$ the set of constraints on the $sl(N)$ currents associated with an arbitrary embedding of the $sl(2)$ algebra into the $sl(N)$ subalgebra of $W_{N+2}^{(N+1)}$. The main conjecture we will keep to in this Section is the following: \begin{quote}\it To find the linearizing algebra for a given nonlinear $W$-algebra related to $W_{N+2}^{(N+1)}$ through the Hamiltonian reduction \p{ss2},\p{ss3}, one should apply the reduction \p{ss3} to the linear algebra $\widetilde{QSCA}{}_N^{lin}$ \p{linal2} and then linearize the resulting algebra. The algebra $\widetilde{QSCA}{}_N^{lin}$ itself is the linearizing algebra for the reduction \p{ss2}. \end{quote} Roughly speaking, we propose to replace the linearization of the algebra $W$, obtained from the nonlinear algebra $W_{N+2}^{(N+1)}$ through the full set of Hamiltonian reduction constraints \p{ss2}-\p{ss3}, by the linearization of the algebra $\widetilde{W}$ obtained from the {\it linear} algebra $\widetilde{QSCA}{}_N^{lin}$ by imposing the relaxed set \p{ss3}. At present, we are not aware of a rigorous proof of this statement, but it works well both in the classical case (on the level of Poisson brackets) and in many particular quantum examples. Of course, the secondary Hamiltonian reduction \p{ss3}, applied to $\widetilde{QSCA}{}_N^{lin}$, again gives rise to a nonlinear algebra. However, the problem of its linearization can be reduced to the linearization of the reduction \p{ss3} applied to the affine algebra $sl(N)\subset \widetilde{QSCA}{}_N^{lin}$, which was constructed in \cite{TB}. The resulting algebra will be just the linearizing algebra for the nonlinear algebra we started with.
Let us briefly discuss the explicit construction of the linear algebra $W^{lin}$ which contains the nonlinear algebra $\widetilde{W}$ obtained from $W_{N+2}^{(N+1)}$ via the Hamiltonian reduction constraints \p{ss2}-\p{ss3}. Let ${\cal J}$ be the current corresponding to the Cartan element $t_0$ of the $sl(2)$ subalgebra. With respect to the adjoint action of $t_0$, the $sl(N)$ algebra can be decomposed into eigenspaces of $t_0$ with positive, zero and negative eigenvalues $h_a$ \begin{equation}}\newcommand{\ee}{\end{equation} sl(N) = \left( sl(N) \right)_{-} \oplus \left( sl(N) \right)_{0} \oplus \left( sl(N) \right)_{+} \equiv \begin{array}[t]{c} \oplus \\ h_a \end{array} \left( sl(N) \right)_{h_a} \quad . \label{f2} \ee (In this subsection, the Latin indices $(a,b)$ run over the whole $sl(N)$, the Greek indices $(\alpha,\beta )$ over $\left( sl(N) \right)_{-}$, and the barred Greek ones $(\bar\alpha,\bar\beta )$ over $\left( sl(N) \right)_{0} \oplus \left( sl(N) \right)_{+} $.) The Hamiltonian reduction associated with the embedding \p{f2} can be performed by imposing the appropriate constraints \begin{equation}}\newcommand{\ee}{\end{equation} J_{\alpha}-\chi_{\alpha}=0 \quad , \quad \chi_{\alpha}\equiv \chi (J_{\alpha}) \label{sor1} \ee on the currents $J_{\alpha}$ from $\left( sl(N) \right)_{-}$ [2,7]. These constraints are first class for integer gradings\footnote{Let us recall that half-integer gradings can be replaced by integer ones, leading to the same reduction \cite{BS}.}, which means that the BRST formalism can be used.
In order to impose the constraints \p{sor1} in the framework of the BRST approach, one introduces the fermionic ghost--anti-ghost pairs $( b_{\alpha},c^{\alpha} )$ with ghost numbers $-1$ and $1$, respectively, for each current with negative eigenvalue $h_{\alpha}$: \begin{equation}}\newcommand{\ee}{\end{equation} c^{\alpha}(z_1)b_{\beta}(z_2) = \frac{\delta^{\alpha}_{\beta}}{z_{12}} \quad , \ee and the BRST charge \begin{equation}}\newcommand{\ee}{\end{equation} Q_{BRST} = \int dz J_{BRST}(z) = \int dz \left( (J_{\alpha}-\chi (J_{\alpha}))c^{\alpha} -\frac{1}{2} f_{\alpha,\beta }^{\gamma}b_{\gamma}c^{\alpha}c^{\beta}\right) \;, \label{brst} \ee which coincides with that given in the paper \cite{TB}. The currents of the algebra $\widetilde{QSCA}_N^{lin}$ and the ghost fields $b_{\alpha},c^{\alpha}$ form a BRST complex, graded by the ghost number. In this approach the $W$ algebra is defined as the algebra of operators generating the null cohomology of the BRST charge of this complex. Following \cite{TB}, let us introduce the "hatted" currents ${\widehat J}{}_a$: \begin{equation}}\newcommand{\ee}{\end{equation} {\widehat J}{}_a = {\widetilde J}{}_a+ \sum_{\beta,\gamma} f_{a,\beta}^{\gamma}b_{\gamma}c^{\beta} \; , \label{hat1} \ee where $f_{a,\beta}^{\gamma}$ are the structure constants of $sl(N)$ in the basis \p{f2}. As shown in \cite{TB}, the $W$-algebras associated with the reductions of the affine $sl(N)$ can be embedded into linear algebras formed by the currents ${\widehat J}{}_{\overline \alpha}$. In contrast to the $sl(N)$ algebra, our algebra $\widetilde{QSCA}{}_N^{lin}$ contains, besides the $sl(N)$ currents, the additional ones ${\widetilde T},{\widetilde U},{\widetilde{\overline G}}{}^a$. Fortunately, the presence of these currents creates no new problems when constructing a linearizing algebra for the reduction of $\widetilde{QSCA}{}_N^{lin}$ by the BRST charge \p{brst}.
Namely, the improved stress tensor ${\widehat T}$, with respect to which $J_{BRST}$ in eq. \p{brst} is a spin 1 primary current, can be easily constructed \begin{equation}}\newcommand{\ee}{\end{equation} {\widehat T} = {\widetilde T} +{\cal J}' + \sum_{\alpha} \left\{ -(1+h_{\alpha}) b_{\alpha}c^{\alpha}{}' - h_{\alpha}b_{\alpha}'c^{\alpha} \right\} \; , \label{TT} \ee and so it belongs, together with ${\widetilde U}$, which commutes with $Q_{BRST}$, to the linear algebra we are seeking. As regards the current ${\widetilde{\overline G}}{}^i$, one can check that it extends the complex generated by the currents ${\widehat J}_a,b_{\alpha},c^{\beta}$ while preserving the structure of the BRST subcomplexes of the paper \cite{TB}, and forms, together with the non-constrained currents ${\widehat J}_{\overline \alpha}$ and $c^{\alpha}$, a reduced BRST subcomplex and subalgebra which do not contain currents with negative ghost number. Hence, as in ref. \cite{TB}, the $W$ algebra closes not only modulo BRST exact operators, but also in its own right. So it is evident that the currents ${\widehat J}_{\overline \alpha}$, as well as the currents ${\widetilde{\overline G}}{}^i$, will be present among the currents of the linearizing algebra in our case. Thus, the set of currents ${\widehat T},{\widehat J}_{\overline \alpha}$ \p{hat1},\p{TT} and the currents \begin{equation}}\newcommand{\ee}{\end{equation} {\widehat U} \equiv {\widetilde U} \quad ,\quad {\widehat{\overline G}}{}^i \equiv {\widetilde{\overline G}}{}^i \label{sor2} \ee form the linear algebra $W^{lin}$ for the nonlinear algebra $W$ obtained from $W_{N+2}^{(N+1)}$ through the secondary Hamiltonian reduction associated with the constraints \p{ss2}-\p{ss3}.
\subsection{Linearizing $W_N$ algebras.} In this subsection we apply the general procedure described in the previous subsection to the case of the principal embedding of $sl(2)$ into the $sl(N)$ algebra, in order to construct the linear algebras $W_N^{lin}$ which contain the nonlinear $W_N$ algebras as subalgebras. For the principal embedding of $sl(2)$ into $sl(N)$ with the currents $J_a^b, (1\leq a,b \leq N, Tr(J) = 0)$, the current ${\cal J}$ is defined to be \begin{equation}}\newcommand{\ee}{\end{equation} {\cal J}= - \sum_{m=1}^{N-1} m J_{N-m}^{N-m} \quad , \label{cartan} \ee and the decomposition of the affine algebra $sl(N)$ reads as follows \begin{eqnarray}}\newcommand{\eea}{\end{eqnarray} \left( sl(N) \right)_{-} \propto \left\{ J_a^b , ( 2\leq a \leq N , 1\leq b <a ) \right\} & & \nonumber \\ \left( sl(N) \right)_{0} \oplus \left( sl(N) \right)_{+} \propto \left\{ J_a^b , ( 1\leq a \leq N-1 , a\leq b \leq N ) \right\} \quad , \label{deco} \eea i.e. $\left( sl(N) \right)_{-}$ consists of those entries of the $N\times N$ current matrix which stand below the main diagonal, and the remainder just constitutes the subalgebra $\left( sl(N) \right)_{0} \oplus \left( sl(N) \right)_{+}$.
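Counting the currents in \p{deco}, we have $N(N-1)/2$ constrained currents in $\left( sl(N) \right)_{-}$, each accompanied by a ghost pair $(b_{\alpha},c^{\alpha})$, and
\begin{equation}
{\rm dim}\left[ \left( sl(N) \right)_{0} \oplus \left( sl(N) \right)_{+} \right] = \sum_{a=1}^{N-1}(N-a+1) = \frac{N(N+1)}{2}-1
\end{equation}
unconstrained ones. Together with ${\widehat T}$, ${\widehat U}$ and the $N$ currents $\widehat{\overline G}{}^i$, the linearizing algebra is thus generated by $\frac{(N+1)(N+2)}{2}$ currents in total.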
Now, using \p{linal2},\p{hat1} -- \p{deco}, we are able to explicitly write the linear algebra $W_{N+2}^{lin}$ which contains the $W_{N+2}$ algebra as a subalgebra: \begin{eqnarray}}\newcommand{\eea}{\end{eqnarray} {\widehat T}(z_1){\widehat T}(z_2) & = & \frac{(N+1)\left( 1-(N+2)(N+3)\frac{(K-1)^2}{K}\right)}{2z_{12}^4}+ \frac{2{\widehat T}}{z_{12}^2}+ \frac{{\widehat T}'}{z_{12}} \quad , \nonumber \\ {\widehat U}(z_1){\widehat U}(z_2) & = & \left(\frac{2NK}{2+N}\right)\frac{1}{z_{12}^2} \; , \nonumber \\ {\widehat T}(z_1){\widehat J}_a^b(z_2) & = & \frac{(N+1-2a)(K-1)\delta_a^b}{z_{12}^3}+ \frac{(b-a+1){\widehat J}_a^b}{z_{12}^2}+ \frac{{\widehat J}_a^b{}'}{z_{12}} \; , \nonumber \\ {\widehat T}(z_1){\widehat U}(z_2) & = & -\frac{2N(K-1)}{z_{12}^3}+ \frac{{\widehat U}}{z_{12}^2}+\frac{{\widehat U}'}{z_{12}} \;, \nonumber \\ {\widehat T}(z_1)\widehat{\overline G}{}^i(z_2) & = & \frac{(i+2)\widehat{\overline G}{}^i}{z_{12}^2}+ \frac{\widehat{\overline G}{}^i{}'}{z_{12}}\; , \nonumber \\ {\widehat J}_a^b(z_1){\widehat J}_c^d(z_2) & = & K\frac{\delta_a^d\delta_c^b- \frac{1}{N}\delta_a^b\delta_c^d}{z_{12}^2}+ \frac{\delta_c^b {\widehat J}_a^d- \delta_a^d {\widehat J}_c^b}{z_{12}} \; , \nonumber \\ {\widehat U}(z_1)\widehat{\overline G}{}^i(z_2) & = & -\frac{\widehat{\overline G}{}^i}{z_{12}} \; , \; {\widehat J}_a^b(z_1)\widehat{\overline G}{}^i(z_2) = \frac{-\delta_a^i \widehat{\overline G}{}^b + \frac{1}{N}\delta_a^b \widehat{\overline G}{}^i}{z_{12}} \; , \nonumber \\ \widehat{\overline G}{}^i(z_1) \widehat{\overline G}{}^j(z_2) & = & \mbox{regular} \;, \label{linal3} \eea where the indices run over the following ranges: $$ {\widehat J}_a^b : ( 1\leq a \leq N-1, a\leq b \leq N) \quad , \quad \widehat{\overline G}{}^i : (1\leq i \leq N) \quad . $$ In this non-primary basis the currents $\widehat{\overline G}{}^i $ have the conformal weights $3,4,...,N+2$, and the stress-tensor ${\widehat T}$ coincides with the stress-tensor of $W_{N+2}$ algebra. 
It is also instructive to rewrite the $W_{N+2}^{lin}$ algebra \p{linal3} in the primary basis $\left\{ T,{\widehat U}, {\widehat J}{}_a^b,\widehat{\overline G}{}^i\right\}$, where a new stress-tensor $T$ is defined as \begin{equation}}\newcommand{\ee}{\end{equation} T= {\widehat T}-\frac{(N+2)(K-1)}{2K} {\widehat U}{}'+ \frac{K-1}{K}\sum_{m=1}^{N-1} m \left( {\widehat J}{}_{N-m}^{N-m} \right)' \ee and the OPE's have the following form \begin{eqnarray}}\newcommand{\eea}{\end{eqnarray} T(z_1) T(z_2) & = & \frac{N+1-6\frac{(K-1)^2}{K}}{2z_{12}^4}+\frac{2T}{z_{12}^2}+ \frac{ T'}{z_{12}} \quad , \quad {\widehat U}(z_1){\widehat U}(z_2) = \left(\frac{2NK}{2+N}\right)\frac{1}{z_{12}^2} \; , \nonumber \\ T(z_1){\widehat J}_a^b(z_2) & = & \frac{\left( 1-\frac{a-b}{K}\right) {\widehat J}_a^b}{z_{12}^2}+ \frac{{\widehat J}_a^b{}'}{z_{12}} \; , \nonumber \\ T(z_1){\widehat U}(z_2) & = & \frac{{\widehat U}}{z_{12}^2}+\frac{{\widehat U}'}{z_{12}} \;, \nonumber \\ T(z_1)\widehat{\overline G}{}^i(z_2) & = & \frac{\left(\frac{3}{2}+\frac{1+2i}{2K}\right) \widehat{\overline G}{}^i}{z_{12}^2}+ \frac{\widehat{\overline G}{}^i{}'}{z_{12}}\; , \nonumber \\ {\widehat J}_a^b(z_1){\widehat J}_c^d(z_2) & = & K\frac{\delta_a^d\delta_c^b- \frac{1}{N}\delta_a^b\delta_c^d}{z_{12}^2}+ \frac{\delta_c^b {\widehat J}_a^d- \delta_a^d {\widehat J}_c^b}{z_{12}} \; , \nonumber \\ {\widehat U}(z_1)\widehat{\overline G}{}^i(z_2) & = & -\frac{\widehat{\overline G}{}^i}{z_{12}} \; , \; {\widehat J}_a^b(z_1)\widehat{\overline G}{}^i(z_2) = \frac{-\delta_a^i \widehat{\overline G}{}^b + \frac{1}{N}\delta_a^b \widehat{\overline G}{}^i}{z_{12}} \; , \nonumber \\ \widehat{\overline G}{}^i(z_1) \widehat{\overline G}{}^j(z_2) & = & \mbox{regular} \;. \label{linal4} \eea In this basis the "chain" structure of the algebras $W_N^{lin}$ becomes most transparent. 
Namely, if we redefine the currents of $W_{N+2}^{lin}$ as \begin{eqnarray}}\newcommand{\eea}{\end{eqnarray} {\cal U}_1 & = & {\widehat U}-N\sum_{m=1}^{N-1}{\widehat J}_m^m \;, \nonumber \\ {\cal U} & = & \frac{(N+2)(N-1)}{N(N+1)}{\widehat U}+ \frac{2}{N+1}\sum_{m=1}^{N-1}{\widehat J}_m^m \;, \nonumber \\ {\cal T} & = & T+\sqrt{\frac{N+2}{12KN^2(N+1)}}{\cal U}_1' \; , \quad \left( \mbox{or} \quad {\cal T} = T-\frac{N+2}{2KN^2(N+1)}({\cal U}_1{\cal U}_1)\right) \; , \nonumber \\ {\cal J}_a^b & = & {\widehat J}_a^b- \frac{\delta_a^b}{N-1} \sum_{m=1}^{N-1}{\widehat J}_m^m \;, (1\leq a \leq N-2, a\leq b \leq N-1 ) \; ,\nonumber \\ {\cal S}_a & = & {\widehat J}_a^N \;, (1\leq a \leq N-1) \; ,\nonumber \\ {\overline{\cal G}}{}^i & = & {\widehat{\overline G}}{}^i \;, (1\leq i \leq N-1)\; , \nonumber \\ {\overline{\cal Q}} & = & {\widehat{\overline G}}{}^N \; , \label{linal5} \eea then the subset ${\cal T},{\cal U},{\cal J}_a^b,{\overline{\cal G}}{}^i $ generates the algebra $W_{N+1}^{lin}$ in the form \p{linal4}. Thus, the $W_{N+2}^{lin}$ algebras constructed have the following structure \begin{equation}}\newcommand{\ee}{\end{equation} W_{N+2}^{lin}=\left\{ W_{N+1}^{lin}, {\cal U}_1,{\cal S}_a, {\overline{\cal Q}} \right\} \ee and therefore there exists the following chain of embeddings \begin{equation}}\newcommand{\ee}{\end{equation} \ldots W_{N}^{lin} \subset W_{N+1}^{lin} \subset W_{N+2}^{lin} \ldots \quad . \label{chain} \ee Let us stress that the nonlinear $W_{N+2}$ algebras do not possess a chain structure like \p{chain}; this property is inherent only in their linearizing algebras $W_{N+2}^{lin}$. This completes the construction of the linear algebras $W_{N+2}^{lin}$ which contain $W_{N+2}$ as subalgebras in a nonlinear basis.
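As a simple check of the chain structure, one can compute the level of the $u(1)$ current ${\cal U}$ defined in \p{linal5}. Using the OPE's \p{linal4} one finds
\begin{equation}
{\cal U}(z_1){\cal U}(z_2) = \left( \frac{2(N-1)K}{N+1}\right) \frac{1}{z_{12}^2} \; ,
\end{equation}
which is precisely the ${\widehat U}{\widehat U}$ OPE of \p{linal4} with $N$ replaced by $N-1$, as it should be for the subalgebra $W_{N+1}^{lin}$.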
Let us repeat once more that the explicit expression for the transformation from the currents of the $W_{N+2}^{lin}$ algebra to those forming the $W_{N+2}$ algebra is a matter of straightforward calculation once we know the exact structure of the linear algebra. Finally, let us stress that knowing the structure of the linearized algebras $W_{N+2}^{lin}$ helps us to reveal some interesting properties of the $W_{N+2}$ algebras and their representations. First of all, each realization of the $W_{N+2}^{lin}$ algebra gives rise to a realization of $W_{N+2}$. Hence, the relation between the linear and nonlinear algebras opens a way to find new non-standard realizations of the $W_{N+2}$ algebras. As was shown in \cite{BO} for the particular case of $W_3$, these new realizations \cite{KS} can be useful for solving the problem of embedding the Virasoro string into the $W_3$ one. Among the many interesting realizations of $W_{N+2}^{lin}$ there is one very simple particular realization which can be described as follows. A careful inspection of the OPE's \p{linal4} shows that the currents \begin{equation}}\newcommand{\ee}{\end{equation} {\widehat{\overline G}}{}^i \; , \; {\widehat J}_a^b : ( 1\leq a \leq N-1, a < b \leq N) \ee are null fields, and so they can be consistently put equal to zero. In this case the algebra $W_{N+2}^{lin}$ contains only the Virasoro stress tensor $T$ and the $N$ $U(1)$ currents $\left\{ {\widehat U}, {\widehat J}_1^1 , \ldots , {\widehat J}_{N-1}^{N-1}\right\}$. Of course, there exists a basis where all these currents commute with each other. In this basis the currents of the $W_{N+2}$ algebra are realized in terms of an arbitrary stress tensor $T_{Vir}$ with the central charge $c_{Vir}$ \begin{equation}}\newcommand{\ee}{\end{equation} c_{Vir} =1-6\frac{(K-1)^2}{K} \ee and $N$ decoupled commuting $U(1)$ currents.
Surprisingly, the values of $c_{Vir}$ corresponding to the minimal models of the Virasoro algebra \cite{min} at \begin{equation}}\newcommand{\ee}{\end{equation} K=\frac{p}{q} \Rightarrow c_{Vir}=1-6\frac{(p-q)^2}{pq} \ee induce the central charge $c_{W_{N+2}}$ of the minimal models of the $W_{N+2}$ algebra \cite{FL} \begin{equation}}\newcommand{\ee}{\end{equation} c_{W_{N+2}}=(N+1)\left(1-(N+2)(N+3)\frac{(p-q)^2}{pq}\right) \ee (recall that the stress tensor of $W_{N+2}$ coincides with the stress tensor $\widehat T$ in the non-primary basis \p{linal3}). For the $W_3$ algebra this property has been discussed in \cite{KS}. \subsection{Linearizing $W_4$ algebra.} In this subsection, as an example of our construction, we present the explicit formulas for the linearization of the $W_4$ algebra. The structure of the linear algebra $W_{4}^{lin}$ in the primary basis can be immediately read off from the OPE's \p{linal4} by putting $N=2$. Thus, the algebra $W_{4}^{lin}$ contains the currents $\left\{ T,{\widehat U},{\widehat J}{}_1^1,{\widehat J}{}_1^2, \widehat{\overline G}{}^1,\widehat{\overline G}{}^2\right\}$, with the conformal weights $\left\{ 2,1,1,\frac{K+1}{K},\frac{3(K+1)}{2K}, \frac{3K+5}{2K}\right\}$, respectively. Passing to the currents of $W_4$ goes in two steps. Firstly, we must write down the most general {\it invertible} expressions, nonlinear in the currents of $W_4^{lin}$, for the currents ${\cal T}_W,{\cal W}, {\cal V}$ with the desired conformal weights ($2$, $3$ and $4$). This can easily be done in the non-primary basis \p{linal3}, where the stress tensor $\widehat T$ coincides with the stress tensor of the $W_4$ algebra. Secondly, we should calculate the OPE's between the constructed expressions and demand that they form a closed set. This procedure completely fixes all the coefficients in the expressions for the currents of the $W_4$ algebra in the primary basis in terms of the currents of $W_4^{lin}$ (up to inessential rescalings).
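Before presenting the result, let us perform a simple consistency check of the central charges. An improvement of the stress tensor of \p{linal4}, $T \rightarrow T + \alpha {\widehat U}{}' + \beta {\widehat J}{}_1^1{}'$, shifts the central charge by $-12\left( \alpha^2 K + \beta^2 \frac{K}{2}\right)$, since for $N=2$ the levels of ${\widehat U}$ and ${\widehat J}{}_1^1$ in \p{linal4} are equal to $K$ and $K/2$, respectively. For $\alpha = \frac{2(K-1)}{K}$ and $\beta = -\frac{K-1}{K}$, precisely the coefficients entering ${\cal T}_W$ below, this gives
\begin{equation}
3 - 6\frac{(K-1)^2}{K} - 12\left( \frac{4(K-1)^2}{K} + \frac{(K-1)^2}{2K}\right) = 3\left( 1 - 20 \frac{(K-1)^2}{K}\right) \; ,
\end{equation}
which is just the central charge of the $W_4$ algebra (the $N=2$ case of \p{linal3}).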
Let us stress that we do not need to know the explicit structure of the $W_4$ algebra: by performing the second step, we reconstruct it automatically. Here we present the results of our calculations for the $W_4$ algebra. \begin{eqnarray} {\cal T}_W & = & T +\frac{2(K-1)}{K}{\widehat U}'-\frac{K-1}{K} {\widehat J}{}_1^1{}' \; , \nonumber \\ {\cal W} & = & {\widehat{\overline G}}{}^1 +\frac{K-1}{K}(T_1-T_2)'+ \frac{1}{K}\left((T_1-T_2){\widehat U}\right)- \frac{K-1}{K}{\widehat J}{}_1^2{}'- \frac{1}{K}({\widehat J}{}_1^2{\widehat U}) \; , \nonumber \\ {\cal V} & = & -{\widehat{\overline G}}{}^2 + \frac{K-1}{K}{\widehat{\overline G}}{}^1{}'+\frac{1}{2K}\left( ({\widehat J}{}_1^2{\widehat J}{}_1^2)+{\widehat J}{}_1^2{}'\right)+ \frac{1}{K}\left( ({\widehat U}-2{\widehat J}{}_1^1) {\widehat{\overline G}}{}^1\right) - \frac{1}{K}\left( (T_1-T_2){\widehat J}{}_1^2\right)+ \nonumber \\ & & \frac{1}{2K}\left((T_1-T_2)(T_1-T_2)\right)-\frac{2}{K^2} \left( {\widehat J}{}_1^1{\widehat J}{}_1^2\right)'+ \frac{1}{K^2}\left( (T_1+T_2)(2(K-1){\widehat U}'+ ({\widehat U}{\widehat U}))\right) + \nonumber \\ & & \frac{K-1}{K^2}\left( (T_1+T_2)'{\widehat U}\right)+ \frac{(K-1)^2}{2K^2}(T_1+T_2)''- \frac{(K-1)(2K-3)(3K-2)}{3K^3}{\widehat U}'''+ \nonumber \\ & & \frac{(3-2K)(3K-2)}{4K^3}({\widehat U}''{\widehat U}) - \frac{16(6-13K+6K^2)}{K(300-637K+300K^2)}({\cal T}_W{\cal T}_W)- \nonumber \\ & & \frac{3(60-121K+60K^2)(-6+13K-6K^2)}{4(300-637K+300K^2)} {\cal T}_W'' \; , \label{realiz} \end{eqnarray} where the auxiliary currents $T_1$ and $T_2$ are defined as \begin{eqnarray} T_1 & = & T-\frac{1}{K}({\widehat J}{}_1^1{\widehat J}{}_1^1)- \frac{1}{2K}({\widehat U}{\widehat U}) \; , \nonumber \\ T_2 & = & \frac{1}{K}({\widehat J}{}_1^1{\widehat J}{}_1^1)- \frac{K-1}{K}{\widehat J}{}_1^1{}' \;. 
\end{eqnarray} For the $W_4^{lin}$ algebra \p{linal4} the currents ${\widehat{\overline G}}{}^1,{\widehat{\overline G}}{}^2 $ and ${\widehat J}{}_1^2$ are null-fields, so we can consistently set them equal to zero. In this case the expressions \p{realiz} provide us with the Miura realization of the $W_4$ algebra in terms of two spin-2 currents ($T_1, T_2$) with the same central charges and one spin-1 current (${\widehat U}$), all of which commute with each other. \section{Conclusion.} In this letter we have constructed linear (super)conformal algebras with a finite number of generating currents which contain, in some nonlinear basis, a wide class of $W$-(super)algebras, including the $W_N^{(N-1)}$, $U(N)$-superconformal as well as $W_N$ nonlinear algebras. For the $W_N$ algebras we do not have a rigorous proof of our conjecture about the general structure of the linearizing algebras, but we have shown that it works both for classical algebras (on the level of Poisson brackets) and for the simplest examples of quantum algebras (e.g., for $W_3,W_4$). The explicit construction of the linearizing algebras $W_{N+2}^{lin}$ for $W_{N+2}$ reveals many of their interesting properties: they have a ``chain'' structure (i.e. the linear algebras with a given $N$ are subalgebras of those with a higher $N$); in the parametrization corresponding to the Virasoro minimal models, and with the null-fields set equal to zero, the central charge of the Virasoro subsector of these linear algebras induces the central charge of the minimal models of $W_N$; etc. These are the reasons why we believe that our conjecture is true. It is interesting to note that, as we have explicitly demonstrated in the case of the $W_4$ algebra, we do not need to know beforehand the structure relations of the nonlinear algebras, which rapidly become very complicated as the spins of the currents involved grow. 
Once we have constructed the linearizing algebra, we can algorithmically reproduce the structure of the corresponding nonlinear one. So one of the main open questions now is how much information about the properties of a given nonlinear algebra we can extract from its linearizing algebra. The answer to this question could be important for applications of linearizing algebras to $W$-strings, integrable systems with $W$-type symmetry, etc. A detailed discussion of this issue will be given elsewhere. \vspace{0.5cm} \section*{Note Added.} After this paper was completed, we learned of a paper by J.O. Madsen and E. Ragoucy \cite{mr}, which has some overlap with our work. They showed that a wide class of $W$-algebras (including the $W_n$ ones) can be linearized in the framework of the secondary Hamiltonian reduction. However, they did not obtain explicit expressions for the linearizing algebras (except in the $W_4$ case). The linearization of the (quasi)superconformal algebras was not considered, because their method does not allow fields with negative conformal weights. \section*{Acknowledgments.} It is a pleasure for us to thank S. Bellucci, L. Bonora, K. Hornfeck, E. Ivanov, V. Ogievetsky, S. Sciuto, A. Semikhatov, F. Toppan and D. Volkov for many interesting and clarifying discussions. One of us (A.S.) is also indebted to G. Zinovjev for his interest in this work and useful discussions. We are grateful to E. Ivanov for a careful reading of the manuscript. This investigation has been supported in part by the Russian Foundation of Fundamental Research, grant 93-02-03821, and the International Science Foundation, grant M9T000.
\newcommand{\be}{\begin{equation}} \newcommand{\ee}{\end{equation}} \newcommand{\h}{\frac{1}{2}} \newcommand{\ra}{\rightarrow} \newcommand{\la}{\leftarrow} \newcommand{\ep}{$e^+e^-\ra\pi^+\pi^-\;$} \newcommand{\epp}{$e^+e^-\ra\pi^+\pi^0\pi^-\;$} \def\mathunderaccent#1{\let\theaccent#1\mathpalette\putaccentunder} \def\putaccentunder#1#2{\oalign{$#1#2$\crcr\hidewidth \vbox to.2ex{\hbox{$#1\theaccent{}$}\vss}\hidewidth}} \def\ttilde#1{\tilde{\tilde{#1}}} \newcommand{\ti}{\mathunderaccent\tilde} \newcommand{\M}{{\cal M}} \newcommand{\rw}{$\rho\!-\!\omega\;$} \begin{document} \thispagestyle{empty} \begin{flushright} ADP-96-13/T216 \\ hep-ph/9604375 \end{flushright} \begin{center} {\large{\bf Recent developments in rho-omega mixing}}\\ {\large{\bf [Aust. J. Phys. 50 (1997) 255] }} \vspace{2.2 cm} Heath B. O'Connell\\ \vspace{1.2 cm} {\it Department of Physics and Mathematical Physics \\ University of Adelaide 5005, Australia } \\ \vspace{1.2 cm} \today \vspace{1.2 cm} \begin{abstract} The topic of \rw mixing has received renewed interest in recent years and has been studied using a variety of modern techniques. A brief history of the subject is presented before summarising recent developments in the field. The present status of our understanding is discussed. \end{abstract} \end{center} \vfill \begin{flushleft} E-mail: hoconnel@physics.adelaide.edu.au \\ {\it Refereed version of a talk given at ``Quarks, Hadrons and Nuclei'', the Joint Australian-Japanese meeting at the Institute for Theoretical Physics, Adelaide, November 1995. } \end{flushleft} \newpage No discussion of \rw mixing can be self-contained without a brief mention of the Vector Meson Dominance (VMD) model, within which it is studied \cite{Sak,review}. A simple example to introduce VMD is the electromagnetic (EM) pion form-factor, $F_\pi(q^2)$. This quantity represents the multiplicative deviation of the amplitude for the reaction \ep from that for the coupling of photons to purely point-like pions. 
A resonance peak is observed in the cross-section (see Fig.~\ref{cross} showing data from \cite{data}). While this arises from low-energy, non-perturbative QCD processes \cite{roberts}, it can be modelled by assuming the photon couples to pions via vector mesons, the dominant one here being the rho-meson. VMD assumes the photon interacts with hadrons through vector mesons and, mathematically, the enhancement seen in the cross-section is provided by the pole in the meson propagator at the $\rho$ mass-point. The broadness of the peak seen in the reaction is explained by attributing a ``width,'' $\Gamma_\rho$, to the rho-meson corresponding to shifting the pole off the real axis, so that it becomes complex and we have \be m^2_\rho=\hat{m}^2_\rho+i\hat{m}_\rho\Gamma_\rho, \label{pole} \ee where $\hat{m}_\rho$ (the ``mass'' of the rho) and $\Gamma_\rho$ are real. \begin{figure}[htb] \centering{\ \epsfig{angle=0,figure=cross.ps,height=8.5cm} } \parbox{130mm}{\caption {The cross-section for the reaction \protect{\ep} from the data of Ref. \protect{\cite{data}} in the \protect{\rw} resonance region. } \label{cross}} \end{figure} The $\pi^+\pi^-$ final state has isospin 1, so in order that G parity be conserved the $\rho$ meson must have isospin 1. The photon can also couple to the three-pion system (as the electromagnetic current has isospin 1 and isospin 0 components). Once again, a similar enhancement is seen in the reaction \epp attributed to the isospin-zero $\omega$, though with a much narrower peak \cite{Barkov2} (and a mass $\hat{m}_\omega> \hat{m}_\rho$). As more data was collected on \ep and the resolution of the plot of the cross-section improved (see Fig.~\ref{cross}), the interference of the $\omega$ meson in the reaction \ep was observed \cite{orsay}. One was now faced with a G-parity violating interaction, $\omega\ra\pi^+\pi^-$. 
This could not be explained by the EM process $\omega\ra \gamma\ra\rho \ra2\pi$ as this is far too small to account for what is seen experimentally. Thus, this interference needs to be incorporated into the VMD picture. This can be done by writing the $\rho$ and $\omega$ meson propagators in matrix form \cite{OPTW,SW} and generating the mixing by dressing the bare (isospin pure) matrix elements. We thus have the bare matrix, $D^0_{\mu\nu}= -g_{\mu\nu}D^0$ where \be D^0=\left(\begin{array}{cc} (q^2-m_\rho^2)^{-1} & 0\\ 0 & (q^2-m_\omega^2)^{-1} \end{array} \right). \ee We then dress this to form the full (or physical) propagator given by $D_{\mu\nu}=D^0_{\mu\nu}+D^0_{\mu\alpha}\Pi^{\alpha\beta}D_{\beta\nu}$ where the polarisation function, $\Pi_{\mu\nu}=(g_{\mu\nu} -{q_\mu q_\nu}/{q^2})\Pi$, has off-diagonal elements $\Pi_{\rho\omega}$, which generate the mixing between the isospin pure $\rho_I$ and $\omega_I$. Thus we introduce the decay mode $\gamma\ra\omega_I\ra\rho_I\ra 2\pi$ to model what is seen in experiment. However, we should also consider the intrinsic decay $\omega_I\ra2\pi$. An argument by Renard, though, claimed this effect would be suppressed and hence could be ignored \cite{review,Renard} (this will be discussed further below). The dressed propagator, $D(q^2)$, will then have off-diagonal elements, but the propagator could be diagonalised by transforming to the physical basis, $ \rho=\rho_I-\epsilon\omega_I$ and $\omega=\omega_I+\epsilon\rho_I$. The isospin violating mixing angle, $\epsilon$, is given by \cite{review} \be \epsilon=\frac{\Pi_{\rho\omega}}{m_\omega^2-m^2_\rho} \label{eps} \ee where we use the complex masses of Eq.~(\ref{pole}). A model for the pion form-factor could now be written down \be F_\pi(q^2)=\frac{\hat{m}_\rho^2}{g_\rho(q^2-m_\rho^2)} +\epsilon\frac{\hat{m}_\omega^2}{g_\omega(q^2-m_\omega^2)} \label{form} \ee which produced a remarkably good fit to the experimental data with only a few parameters. 
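The matrix dressing described above can be checked symbolically. The sketch below (ours; it keeps only the off-diagonal self-energy and suppresses the tensor structure) inverts the Dyson equation $D=D^0+D^0\Pi D$ for the $2\times2$ system and confirms that the mixed element is $\Pi_{\rho\omega}/[(q^2-m_\rho^2)(q^2-m_\omega^2)-\Pi_{\rho\omega}^2]$, which reduces at first order in $\Pi_{\rho\omega}$ to the single-pole-pair form:

```python
import sympy as sp

q2, mr2, mw2, P = sp.symbols('q2 m_rho2 m_omega2 Pi')

# Bare (isospin-pure) propagator and purely off-diagonal polarisation
D0 = sp.Matrix([[1/(q2 - mr2), 0], [0, 1/(q2 - mw2)]])
Pi = sp.Matrix([[0, P], [P, 0]])

# Dyson equation D = D0 + D0*Pi*D  =>  D = (D0^{-1} - Pi)^{-1}
D = (D0.inv() - Pi).inv()

offdiag = sp.simplify(D[0, 1])
expected = P / ((q2 - mr2)*(q2 - mw2) - P**2)
assert sp.simplify(offdiag - expected) == 0

# To first order in Pi this is Pi / ((q2 - m_rho^2)(q2 - m_omega^2))
first_order = sp.series(offdiag, P, 0, 2).removeO()
assert sp.simplify(first_order - P/((q2 - mr2)*(q2 - mw2))) == 0
```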
In the absence, though, of any theoretical model, $\Pi_{\rho\omega}$ has to be fitted to the data \cite{OPTW2}. The mixing of elementary particles due to symmetry breaking had been considered by Coleman and Schnitzer for the vector case \cite{CS}. They discussed two kinds of mixing, mass (or particle) mixing, which was constant and current (or vector) mixing which was momentum dependent. The conclusion reached was that although mass mixing is perfectly adequate for spinless particles, current mixing is better for spin one particles, because it does not violate the conservation of electric charge (and hence $\Pi_{\rho\omega}$ should be proportional to $q^2$). Although the specific example they addressed was $\omega\!-\!\phi$ mixing, they mentioned that off-diagonal (see above) current mixing was suitable for the study of \rw mixing, which, at the time, had been examined by Glashow \cite{G}. It was a number of years before this suggestion was followed up \cite{SW}, prompted by the direct experimental evidence for \rw mixing in \ep \cite{orsay}. Coon {\it et al.} \cite{CSM} studied \rw mixing in the one boson exchange model of the short-distance nuclear force as a contributing mechanism for the generation of the charge symmetry violation (CSV) seen experimentally \cite{TS}. The resulting potential is proportional to $\Pi_{\rho\omega}$, and as it turns out, the value of $\Pi_{\rho\omega}$ extracted in the measurement of the pion form-factor (Eq.~(\ref{form})) has the right sign and magnitude to produce a reasonable fit to the data. Although Coon {\it et al.} realised that the mixing function would in general be momentum dependent, it was claimed that its value at the $\rho$ or $\omega$ mass point was all that was needed. Nevertheless, its extraction in the pion form-factor is for timelike $q^2$, while the vector mesons in the boson exchange model of the $NN$ force have spacelike momentum. 
Therefore, any momentum dependence could have significant implications for the standard treatment of CSV using \rw mixing. This was first realised by Goldman {\it et al.} \cite{GHT} who constructed a simple model in which $\Pi_{\rho\omega}$ is generated by a quark loop. The amplitude for the mixing is given by the difference between the $\bar{u}u$ loop and the $\bar{d}d$ loop. In the limit of isospin invariance ($m_u=m_d$), the mixing would vanish. The prediction of a {\em significant} momentum dependence for \rw mixing forced Goldman {\it et al.} to conclude that it would strongly reduce the standard class III and IV CSV NN potential. So can \rw mixing be simply assumed to be independent of momentum, and if not, what does this say about nuclear models? Many model calculations were subsequently performed \cite{models}, all producing similar results. At a more formal level, it was shown that for any model in which the vector mesons couple to a conserved current, the mixing must vanish at $q^2=0$ \cite{OPTW}, precisely the constraint on vector mixing expected by Coleman and Schnitzer \cite{CS}. In light of this, alternative mechanisms for nuclear CSV were proposed, involving isospin violation at the meson-nucleon vertex \cite{nuc}, rather than in the propagator. As both the vertex and propagator parts of the $NN$ interaction are off-shell, they are dependent on the choice of interpolating field for the vector mesons. It was thus argued that one could find fields so that the sum of vertex and propagator contributions is equivalent to a configuration in which all CSV occurs through a momentum independent mixed propagator \cite{CM}. This argument, though, has been disputed on the grounds of unitarity and analyticity \cite{kim3}. However, the success of $NN$ models in which the CSV is generated by a fixed-valued \rw mixing provided considerable incentive to argue against momentum dependence. Miller \cite{miller} considered the mixing of the photon and the rho. 
Traditional VMD has a fixed coupling between the photon and the rho, but if this coupling were also generated by the kind of momentum dependent loop processes used for \rw mixing \cite{models}, then the photon-rho coupling would be strongly momentum dependent, hence destroying the successful VMD phenomenology. However, an equivalent {\em momentum dependent} version of VMD exists (which we shall refer to as VMD1; the traditional version we shall call VMD2 \cite{review}), which was described by Sakurai thirty years ago \cite{Sak}. VMD1 differs from VMD2 by having a linear-in-$q^2$ photon-rho coupling {\em and} a direct coupling of the photon to the hadronic field, unlike in VMD2 where photon-hadron interactions take place exclusively through a vector meson. As an example, the pion form-factor was plotted using VMD1 \cite{OPTW2} and the results are indistinguishable from the usual VMD2. Thus Miller's worry could be addressed by comparing the loop models \cite{models} not to VMD2, but to VMD1 \cite{OWBK}. This makes sense not only because the photon-$\rho$ mixings will be momentum dependent, but also because if the photon is now allowed to couple to quarks (say) to form the loop, then we would expect it to be able to couple to the quarks in hadrons, hence introducing a direct photon-hadron coupling not found in VMD2, but appearing in VMD1. We might like, though, to make some model-independent statement about \rw mixing. This is difficult because the underlying theory, QCD, is presently inaccessible at the relevant energies. In the past 20 years we have developed some model independent treatments of low energy strong interactions, and two of these have been used to look at \rw mixing. The first is the technique of QCD sum-rules (QCDSR) \cite{SVZ}. The basic idea is that one examines two-point functions of various hadronic currents, expanding them out in powers of $1/q^2$. 
At high $q^2$ QCD can be treated perturbatively due to asymptotic freedom, but cannot be handled in this manner for low $q^2$ (for example, around the $\rho$ mass). So to work with the current correlators at low $q^2$ we have to appeal to phenomenology (importantly the resonances which are related to the vacuum structure). In this sense QCD sum rules are a bit of an art, because there is no set method for using them. Interestingly, one of the first examples of the use of QCDSR by the original authors was \rw mixing. The problem is set up by considering the two-point function \be C^{\mu\nu}_{\rho\omega}(q)=i\int d^4xe^{iq\cdot x}\langle0|{\rm T}( J_\rho^\mu(x)J_\omega^\nu(0))|0\rangle, \label{corr} \ee where $J_\rho^\mu=(\bar{u}\gamma_\mu u-\bar{d}\gamma_\mu d)/2$ and $J_\omega^\mu=(\bar{u}\gamma_\mu u+\bar{d}\gamma_\mu d)/6$. This {\em current } correlator (Eq.~(\ref{corr})) was then used by Hatsuda {\it et al.} \cite{HHMK} to examine the momentum dependence of \rw mixing by equating it with the mixed propagator (after extracting the transverse tensor $(g_{\mu\nu}-q_\mu q_\nu/q^2)$) \be D_{\rho\omega}(q^2)=\frac{\Pi_{\rho\omega} (q^2)}{(q^2-\hat{m}_\rho^2)(q^2-\hat{m}_\omega^2)}. \label{diag}\ee As pointed out by Maltman \cite{kim1} though, the association of the correlator with the off-diagonal propagator is only relevant if one uses interpolating fields for the $\rho$ and $\omega$ mesons proportional to the currents $J_\rho^\mu$ and $J_\omega^\mu$, otherwise the correlator cannot be used to provide information about the off-shell behaviour of the mixing element of the vector meson propagator. Hatsuda {\it et al.} concentrated on the effect of the $\rho$ and $\omega$ in this correlator, which as the most nearby resonances might be expected to play the dominant role. However, Maltman found that the $\rho$ and $\omega$ contributions actually partially cancel. 
Because of this, the $\phi$, although quite far away and hence contributing with a much lesser strength than the individual $\rho$ and $\omega$, becomes important for the isospin-breaking correlator (an effect not considered in the previous two analyses \cite{SVZ,HHMK}). In all analyses the sum rule result is ultimately compared to the data for the G-parity violation seen in \ep. The correlator, though, is only relevant to the contribution from the mixing of the isospin pure states, $\rho_I\!-\! \omega_I$, to the isospin breaking seen in the process. The competing process, $\omega_I \ra \pi^+\pi^-$, is overlooked (as mentioned earlier), but Maltman found the $\rho_I\!-\! \omega_I$ contribution (as determined by the current correlator in QCDSR) underestimates the isospin violation seen experimentally. We shall discuss the matter of intrinsic decay further below. Leinweber {\it et al.} in two recent papers \cite{IJL} examined the effects of including the widths of the $\rho$ and $\omega$ mesons in the QCDSR calculation performed by Hatsuda {\it et al.}. They replaced the real parts of the mass in Eq.~(\ref{diag}) by the complex pole positions given in Eq.~(\ref{pole}). Following Maltman \cite{kim1}, they included the $\phi$ meson, but found that its contribution was negligible. Perhaps of most interest, following the nuclear CSV debate, was their claim that for certain values of $\lambda$, $\Pi_{\rho\omega}$ has the same sign and similar magnitude in the space-like region to the on-shell value. Because the reaction \ep is the only place \rw mixing is actually seen, it was decided that a new {\em and general} analysis should be performed \cite{MOW}. The two effects normally ignored, momentum dependence of $\Pi_{\rho\omega}$ and the intrinsic decay $\omega_I \ra \pi^+\pi^-$, were included. 
As two recent fits to data had been performed \cite{fits}, all that needed to be done was to construct a precise theoretical expression for the form-factor, which could be compared to the numbers extracted from these analyses for the expression \be F_\pi(q^2)\propto P_\rho+Ae^{i\phi}P_\omega, \label{formf} \ee where $P_{\rho,\omega}$ are the poles of the $\rho$ and $\omega$ propagators. The starting point was the mixed matrix formalism. In the data one sees two resonance peaks - a broad one associated with the physical (as opposed to isospin-pure) rho, and a narrow one associated with the physical $\omega$. Thus the mixed propagator should contain only two poles in the physical basis. We choose this basis to be $\rho=\rho_I -\epsilon_1 \omega_I$ and $\omega= \omega_I+ \epsilon_2 \rho_I$ and write the propagator in the new, physical basis. This allows us to fix $\epsilon_{1,2}$ by demanding that there are no poles in the off-diagonal pieces, i.e. that all resonant behaviour is associated with the physical mesons. This gives expressions for $\epsilon_{1,2}$ similar to Eq.~(\ref{eps}) but with arguments for $\Pi_{\rho\omega}$ at $m_\omega^2$ and $m_\rho^2$ respectively, as noted by Harte and Sachs \cite{HS}. Thus, in general the bases are not related by a simple rotation, and the pure and physical bases are not equivalent (as the transformation between them is not orthogonal). For the case that $\Pi_{\rho\omega}$ is either fixed or linear in $q^2$, the off-diagonal terms can be made to disappear completely, but in general they survive and contribute to the non-resonant (i.e. no singular piece) background (although this is only a minor effect). The second part of the analysis centres on the decay $\omega_I \ra \pi^+\pi^-$. A closer examination of the Renard argument shows that the cancellation is not exact, and a reasonable fraction of the intrinsic decay survives; this adds to the total interaction and turns out to be {\em crucial} to the analysis. 
In a world of exact experimental precision the pre-factor, $Ae^{i\phi}$, of the $\omega$ pole in the expression for the form-factor (Eq.~(\ref{formf})) would enable us to pin down the values of the two unknowns, $\Pi_{\rho\omega}$ and the strength of $\omega_I \ra \pi^+\pi^-$. Unfortunately this is not the case, and the considerable uncertainty in the Orsay phase, $\phi$, and the lesser uncertainty in $A$ allow a whole spread of values for the two unknowns; $\Pi_{\rho\omega}$ can take values in the range $(-840,-6240)$ MeV$^2$. Naturally, if there were no contribution from $\omega_I \ra \pi^+\pi^-$ we would recover the usual analysis and obtain $\Pi_{\rho \omega}=-3960$ MeV$^2$ (cf.\ the value $-4520\pm600$ MeV$^2$ \cite{CB}). In light of this Maltman's QCDSR analysis is quite interesting, as it seems to provide theoretical evidence for a non-zero contribution from intrinsic decay, leading us to re-think the present status of the traditional extraction of $\Pi_{\rho\omega}$. It also brings into question the value of $\Pi_{\rho \omega}$ used in nuclear models. Another model-independent method for treating the strong interaction at low energies is Chiral Perturbation Theory (ChPT). It is the subject of many recent reviews \cite{chiral}, and essentially it sets up an effective model involving the pseudo-scalar octet and admitting all terms allowed by the symmetry of the original QCD Lagrangian, organised as a perturbative series in $q^2$. However, the symmetries of QCD in question (chiral symmetry, isospin symmetry) are not exact; the main feature of ChPT is that it breaks these symmetries, for a meson theory, exactly as QCD breaks them. The various free parameters of the theory are then fixed by comparison to experiment. As a perturbative series in $q^2$, ChPT is only reliable in the low momentum region. The relatively heavy vector mesons, therefore, do not fit naturally into it. 
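To get a feel for the size of the mixing, one can insert the central value $\Pi_{\rho\omega}=-3960$ MeV$^2$ quoted above into Eq.~(\ref{eps}) together with the complex masses of Eq.~(\ref{pole}). The snippet below is our own illustration; the masses and widths are typical values and are not taken from the analyses under discussion:

```python
# Order-of-magnitude estimate of epsilon = Pi_{rho omega} / (m_omega^2 - m_rho^2),
# with complex masses m^2 = mhat^2 + i*mhat*Gamma (Eq. (pole) of the text).
m_rho, g_rho = 775.0, 150.0    # MeV (illustrative mass and width)
m_omega, g_omega = 782.0, 8.4  # MeV (illustrative mass and width)
pi_rw = -3960.0                # MeV^2, central value quoted in the text

m2_rho = m_rho**2 + 1j * m_rho * g_rho
m2_omega = m_omega**2 + 1j * m_omega * g_omega

eps = pi_rw / (m2_omega - m2_rho)
print(abs(eps))  # ~0.036: isospin mixing at the few-percent level
```

Note that the denominator is dominated by the width difference $\hat m_\rho\Gamma_\rho-\hat m_\omega\Gamma_\omega$ rather than by $\hat m_\omega^2-\hat m_\rho^2$, so with these inputs $\epsilon$ is largely imaginary.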
As resonances of QCD processes (which is really what they represent), they play a very important role in strong physics, but unfortunately ChPT breaks down well before the $q^2$ of the poles we associate with vector mesons. Thus, it is usually with an assumption such as VMD that the vector mesons are fitted into ChPT, to create an effective model incorporating ChPT. One such model, in which the vector mesons appear in the Lagrangian as antisymmetric tensors, has been used by Urech to study \rw mixing \cite{urech}. The hadronic currents, as appearing in the correlator, have no such difficulty and appear quite naturally in ChPT. Maltman \cite{kim1} thus used ChPT as a complement to his QCDSR calculation of the mixed correlator. He examined the correlator to one-loop order ($O(q^4)$) in ChPT, obtaining a result dependent on both $q^2$ and the mass difference of the neutral and charged kaon (vanishing when these masses are equal, which, in ChPT, occurs when $m_u=m_d$). The result was much smaller than the corresponding QCDSR result, indicating that the next order contribution (two-loops, $O(q^6)$) needed to be included as well. The two-loop calculation \cite{kim2} seems to show that the chiral series is not convergent enough to allow one to truncate even at $O(q^6)$ (which is the present-day limit of ChPT). The study of \rw mixing has pushed VMD and the available data to their limits. So far we have established, within the matrix approach to VMD, that we expect the mixing of the pure isospin states to satisfy $\Pi_{\rho\omega}(0)=0$ (for models in which the mesons couple to conserved currents), and that, due to experimental uncertainty in the pion form-factor, we cannot distinguish between $\rho_I-\omega_I$ mixing and the intrinsic decay $\omega_I \ra\pi^+\pi^-$. As we have seen, QCD sum rules and ChPT are providing new insights into this old problem, but their results are open to interpretation. 
Physical processes in total, rather than their individual contributions, are what we actually observe, and so our studies need to reflect this. It remains for a completely consistent field-theoretic description of isospin violation in both the timelike (\rw mixing) and spacelike (nuclear CSV) regimes to be constructed. We look forward to future developments in this field. \vspace{1cm} {\bf Acknowledgements} I would like to thank A.W.~Thomas, A.G.~Williams (Adelaide U.), C.D.~Roberts (Argonne) and K.R.~Maltman (York U., Canada) for helpful conversations. This work is supported in part by the Australian Research Council.
\section{Introduction} In this paper we study the positive solutions (`ground states') to some semi-linear elliptic equations in $\R^d$ of the general form \begin{equation}\label{nleq} \boxed{\Delta u +g(u)=0\qquad \text{ in $\R^d$ with $d\geq2$},} \end{equation} where $\Delta u=\sum_{i=1}^d\partial^2_{x_i}u$ is the Laplacian. We assume that the nonlinearity $g$ is gauge-invariant under the action of the group $\bS^1$, that is \begin{equation}\label{phaseinv} g(|u|e^{i\theta})=g(|u|)e^{i\theta} \end{equation} for any $u,\theta\in \R$. In other words, without loss of generality we may assume that $g:\R\to\R$ is a real odd function such that $g(0)=0$. Then~\eqref{nleq} is invariant under translations and multiplications by a phase factor. The study of the \emph{existence} and \emph{uniqueness} of positive solutions to equations of the type~\eqref{nleq} has a very long history. Of particular interest is the (focusing) \emph{nonlinear Schr\"odinger equation} (NLS) corresponding to \begin{equation} g(u)=u^q-u,\qquad 1<q<2^*-1 \label{eq:g_NLS} \end{equation} where $2^*=2d/(d-2)$ is the critical Sobolev exponent in dimensions $d\geq3$ and $2^*=\infty$ in dimensions $d=1,2$. Here and everywhere else in the paper we use the convention that $u^q:=|u|^{q-1}u$ to ensure that~\eqref{phaseinv} is satisfied. In the particular case~\eqref{eq:g_NLS}, the uniqueness of positive solutions was proved first by Coffman~\cite{Coffman-72} for $q=3$ and $d=3$, and then by Kwong~\cite{Kwong-89} in the general case. These results have been extended to a larger class of non-linearities by many authors, including for instance~\cite{PelSer-83,LeoSer-87,KwoZha-91,CheLin-91,McLeod-93,PucSer-98,SerTan-00,Jang-10}. 
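Although no closed form is available in the dimensions $d\geq2$ considered in this paper, the NLS nonlinearity~\eqref{eq:g_NLS} has an explicit positive solution in $d=1$, namely $u(x)=\big(\tfrac{q+1}{2}\big)^{1/(q-1)}\operatorname{sech}^{2/(q-1)}\big(\tfrac{q-1}{2}x\big)$, which is convenient for intuition and for testing numerics. The following computer-algebra check (our own illustration, not part of the paper) verifies the equation $u''+u^q-u=0$ numerically at sample points for a few exponents:

```python
import sympy as sp

x = sp.symbols('x', real=True)

def soliton(q):
    # Explicit d = 1 positive solution of u'' + u^q - u = 0 (illustrative)
    return sp.Rational(q + 1, 2)**sp.Rational(1, q - 1) * \
           sp.sech(sp.Rational(q - 1, 2) * x)**sp.Rational(2, q - 1)

# Verify the ODE residual at a few sample points, to high precision
for q in (2, 3, 5):
    u = soliton(q)
    residual = sp.diff(u, x, 2) + u**q - u
    for x0 in (sp.Rational(3, 10), 1, sp.Rational(5, 2)):
        assert abs(float(residual.subs(x, x0).evalf(30))) < 1e-12

# q = 3 reproduces the familiar u(x) = sqrt(2) sech(x)
print(soliton(3))
```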
Another important property for applications is the \emph{non-degeneracy} of these solutions, which means that the kernel of the linearized operators is trivial, modulo phase and space translations: $$\ker\left(\Delta+\frac{g(u)}{u}\right)={\rm span}\{u\},\qquad \ker\left(\Delta+g'(u)\right)={\rm span}\{\partial_{x_1}u,...,\partial_{x_d}u\}.$$ This property plays a central role for the stability or instability of these stationary solutions~\cite{Weinstein-85,ShaStr-85,GriShaStr-87,GriShaStr-90} in the context of the time-dependent Schr\"odinger equation $$i\partial_tu=\Delta u+g(u).$$ In the NLS case~\eqref{eq:g_NLS} non-degeneracy was shown first in~\cite{Coffman-72,Kwong-89,Weinstein-85}, but for general nonlinearities it does not necessarily follow from the method used to show uniqueness. In this paper, we are particularly interested in the \emph{double-power nonlinearity} \begin{equation} \boxed{g(u)=-u^p+u^q-\mu u,\qquad p>q>1,\quad\mu>0,\quad d\geq2} \label{eq:g_double_power_NLS} \end{equation} with a defocusing large exponent $p$ and a focusing smaller exponent $q$. Uniqueness in this case was shown in~\cite{SerTan-00}, but the non-degeneracy of the solutions does not seem to follow from the proof. The nonlinearity~\eqref{eq:g_double_power_NLS} has very strong physical motivations. The cubic-quintic nonlinearity $q=3,p=5$ appears in many applications and usually models systems with attractive two-body interactions and repulsive three-body interactions~\cite{Anderson-71,Kartavenko-84,MerIsi-88}. In this case, non-degeneracy was shown in~\cite{KilOhPocVis-17} for $d=3$ and in~\cite{CarSpa-20_ppt} for $d=2$. On the other hand, the case $p=7/3$ and $q=5/3$ in dimension $d=3$ was considered in~\cite{Ricaud-17} in the context of symmetry breaking for a model of Density Functional Theory for solids. A general result which covers~\eqref{eq:g_double_power_NLS} appeared later in~\cite{AdaShiWat-18}. 
Some time before~\cite{KilOhPocVis-17,Ricaud-17,AdaShiWat-18}, we had considered in~\cite{LewRot-15} the case $$g(u)=a\sin^{3}u\cos u-b \sin u\cos u,\qquad a>2b,$$ which naturally arises in the non-relativistic limit of a Dirac equation in nuclear physics~\cite{Rota-PhD,EstRot-12,EstRot-13,TreRot-13}. As was already mentioned in~\cite{LewRot-15}, the proof we gave of the uniqueness and non-degeneracy of radial solutions in this particular case is general and can be applied to a variety of situations. It covers the double-power nonlinearity~\eqref{eq:g_double_power_NLS} in the whole possible range of the parameters. In order to clarify the situation, in Section~\ref{sec:uniqueness_general} we start by reformulating the result of~\cite{LewRot-15} in a general setting. We also provide a self-contained proof in Section~\ref{sec:proof_thm_uniqueness} for the convenience of the reader. Next we discuss at length the properties of the unique and non-degenerate solution $u_\mu$ for the double-power non-linearity~\eqref{eq:g_double_power_NLS}. Of high interest is the $L^2$ mass $$M(\mu)=\int_{\R^d}u_\mu(x)^2\,dx$$ of this solution. In the NLS case~\eqref{eq:g_NLS} the mass is a simple explicit power of $\mu$, but for the double-power nonlinearity~\eqref{eq:g_double_power_NLS}, $M$ is an unknown function. In Section~\ref{sec:double_power} we determine its exact behavior in the two regimes $\mu\to0^+$ and $\mu\to\mu_*^-$, where $\mu_*$ is the threshold for existence of solutions. This allows us to make an important conjecture about the variations of $M$ over the whole interval $(0,\mu_*)$, partly inspired by~\cite{KilOhPocVis-17,Ricaud-17,CarSpa-20_ppt} and supported by numerical simulations. In short, our conjecture says that the branch of solutions has at most one unstable part, that is, $M'$ vanishes at most once over $(0,\mu_*)$. 
One important motivation for studying the variations of $M$ concerns the uniqueness of energy minimizers at fixed mass \begin{multline} I(\lambda)=\inf\bigg\{ \frac12\int_{\R^d}|\nabla u|^2\,dx+\frac{1}{p+1}\int_{\R^d} |u|^{p+1}\,dx- \frac{1}{q+1}\int_{\R^d} |u|^{q+1}\,dx\ :\\ u\in H^1(\R^d)\cap L^{p+1}(\R^d),\ \int_{\R^d}|u|^2\,dx=\lambda\bigg\}, \label{eq:I_lambda_intro} \end{multline} which naturally appears in physical applications. Any minimizer, when it exists, is positive and solves the double-power NLS equation for some Lagrange multiplier $\mu$, hence equals $u_\mu$ after an appropriate space translation. The difficulty here is that several $\mu$'s could in principle give the same mass $\lambda$ and the same energy $I(\lambda)$, so that the uniqueness of solutions to the equation at fixed $\mu$ does not at all imply the uniqueness of energy minimizers. Nevertheless we conjecture that minimizers of~\eqref{eq:I_lambda_intro} are always unique. This would actually follow from the previously mentioned conjecture on $M$, since the latter implies that $M$ is one-to-one on the unique branch of stable solutions. In fact, our analysis of the function $M$ allows us to prove some partial results which, we think, are an interesting first step towards a better understanding of the general case. More precisely, we show that \begin{itemize} \item minimizers of $I(\lambda)$ are always unique for $\lambda$ large enough and for $\lambda$ close enough to the critical mass $\lambda_c$ above which minimizers exist; \item the set of $\lambda$'s for which minimizers are not unique is at most finite; \item the number of minimizers at those exceptional values of $\lambda$ is also finite, modulo phases and space translations. \end{itemize} \medskip The paper is organized as follows. In the next section we state our main result on the uniqueness and non-degeneracy of solutions to~\eqref{nleq}. 
Then, in Section~\ref{sec:double_power} we describe our findings on the double-power NLS equation. The rest of the paper is devoted to the proof of our main results. \bigskip \noindent{\textbf{Acknowledgement.}} We thank R\'emi Carles and Christof Sparber for useful discussions. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement MDFT No 725528 of M.L.) and from the Agence Nationale de la Recherche (grant agreement DYRAQ, ANR-17-CE40-0016, of S.R.N.). \section{Uniqueness and non-degeneracy: an abstract result }\label{sec:uniqueness_general} Here we state our abstract result about the uniqueness and non-degeneracy of solutions of~\eqref{nleq}. \begin{theorem}[Uniqueness and non-degeneracy]\label{thmuniqnondeg} Let $0< \alpha<\beta$ and $g$ be a continuously differentiable function on $[0,\beta]$. Assume that the following conditions hold: \begin{enumerate}[label=\textnormal{(H\arabic*)}] \item \label{hyp1} We have $g(0)=g(\alpha)=g(\beta)=0$, $g$ is negative on $(0,\alpha)$ and positive on $(\alpha,\beta)$ with $g'(0)<0$, $g'(\alpha)>0$ and $g'(\beta)\leq0$. \smallskip \item \label{hyp2} For every $\lambda>1$, the function \begin{equation}\label{defI} I_{\lambda}(x):=xg'(x)-\lambda g(x) \end{equation} has exactly one root $x_*$ on the interval $(0,\beta)$, which belongs to $(\alpha,\beta)$. \end{enumerate} Then equation \eqref{nleq} admits \emph{at most one} positive radial solution with $\|u\|_{\infty}<\beta$ and such that $u(x),u'(x)\to 0$ when $|x|\to \infty$. Moreover, when it exists, this solution is non-degenerate in the sense that \begin{equation} \ker\big(\Delta+g'(u)\big)={\rm span}\left\{\partial_{x_1}u,\ldots,\partial_{x_d}u\right\},\qquad \ker\left(\Delta+\frac{g(u)}{u}\right)={\rm span}\{u\}. 
\label{eq:non_degenerate} \end{equation} \end{theorem} \begin{remark} The assumption~\ref{hyp2} can be replaced by the two stronger conditions \begin{enumerate} \item[\textnormal{(H2')}] there exists $x^*\in (0,\beta)$ such that $g''>0$ on $(0,x^*)$ and $g''<0$ on $(x^*,\beta)$; \item[\textnormal{(H2'')}] the function $x\mapsto \frac{xg'(x)}{g(x)}$ is strictly decreasing on $(\alpha,\beta)$. \end{enumerate} \end{remark} \begin{remark}\label{rmk:Ilambda} In the proof we use \textnormal{(H2)} only for one (unknown) particular $\lambda>1$. Should one be able to better localize this $\lambda$ for a concrete nonlinearity $g$, one would then only need to verify \textnormal{(H2)} in the corresponding region. \end{remark} \begin{remark}\label{rmk:beta} If $g$ is defined on the half-line $\R_+$ and negative on $(\beta,\infty)$, then all the positive solutions satisfy $u<\beta$. This follows from the maximum principle since $-\Delta u=g(u)\leq0$ on $\{u\geq\beta\}$. \end{remark} \begin{figure}[t] \centering \includegraphics[width=8cm]{g.pdf} \caption{Typical form of the admissible nonlinearity $g$ in~\eqref{nleq}.\label{fig:g}} \end{figure} As we have mentioned, Theorem~\ref{thmuniqnondeg} was indeed proved in~\cite{LewRot-15}, although perhaps only implicitly, since we were mainly concerned there with a special case of $g$ (see~\cite[Lemma~3]{LewRot-15} and the comments after the statement). The detailed proof of Theorem~\ref{thmuniqnondeg} is provided in Section~\ref{sec:proof_thm_uniqueness}, for the convenience of the reader. The operators appearing in~\eqref{eq:non_degenerate} are the two linearized operators (for the real and imaginary parts of $u$, respectively) at the solution $u$, and the non-degeneracy means that their kernels are spanned by the generators of the two symmetries of the problem (space translations and multiplication by a phase). Non-degeneracy is important in many respects, as we will recall below.
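The hypotheses (H1)--(H2) are easy to explore numerically for a concrete nonlinearity. The following sketch is our own illustration (not part of the paper's arguments): it takes the double-power example $g(u)=-u^5+u^3-\mu u$ with the sample value $\mu=0.1$ and checks on a fine grid that $I_\lambda$ has exactly one root on $(0,\beta)$, located in $(\alpha,\beta)$, for a few values of $\lambda>1$:

```python
import math

# Illustrative check of (H2) for the double-power nonlinearity studied below,
# g(u) = -u^p + u^q - mu*u, with sample values p = 5, q = 3, mu = 0.1.
p, q, mu = 5.0, 3.0, 0.1

def g(u):
    return -u**p + u**q - mu * u

def gprime(u):
    return -p * u**(p - 1) + q * u**(q - 1) - mu

# The two positive zeroes alpha < beta of g solve -t^2 + t - mu = 0 with t = u^2.
disc = math.sqrt(1.0 - 4.0 * mu)
alpha = math.sqrt((1.0 - disc) / 2.0)
beta = math.sqrt((1.0 + disc) / 2.0)

def I(lam, x):  # the function I_lambda(x) = x g'(x) - lambda g(x) of (H2)
    return x * gprime(x) - lam * g(x)

N = 50_000
xs = [beta * (k + 0.5) / N for k in range(N)]
for lam in (1.1, 2.0, 5.0, 10.0):
    roots = [x1 for x1, x2 in zip(xs, xs[1:]) if I(lam, x1) * I(lam, x2) < 0]
    assert len(roots) == 1          # exactly one sign change of I_lambda on (0, beta)
    assert alpha < roots[0] < beta  # and it occurs in (alpha, beta)
```

Of course, a grid-based sign count is no substitute for the analytical verification of (H2) carried out in the proof of Theorem~\ref{thmpowernl} below; it is only a quick sanity check.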
Note that our assumptions (H1)--(H2) require the existence of three successive zeroes for $g$ as in Figure~\ref{fig:g}. In the traditional NLS case there are only two, and this corresponds to taking $\beta=+\infty$ in our theorem, a situation where the same result is valid, as proved by McLeod in~\cite{McLeod-93} and reviewed in~\cite{Tao-06,Frank-13}. In the article~\cite{SerTan-00} by Serrin and Tang, uniqueness is proved under assumptions similar to those of Theorem~\ref{thmuniqnondeg}. More precisely, instead of (H2) the authors assume that $xg'(x)/g(x)$ is non-increasing on $(\alpha,\beta)$, in dimensions $d\geq3$. We assume less on $(\alpha,\beta)$ but add the assumption that $I_\lambda$ does not vanish on $(0,\alpha)$. The function $I_\lambda$ in~\eqref{defI} already appears in~\cite{LeoSer-87,McLeod-93}. The method of proof in~\cite{SerTan-00} does not seem to provide the non-degeneracy of solutions. In the present article we clarify this important point by providing a self-contained proof in the spirit of McLeod~\cite{McLeod-93}, later in Section~\ref{sec:proof_thm_uniqueness}. Similar arguments were independently used in~\cite{KilOhPocVis-17,AdaShiWat-18}. We conclude this section with a short discussion of the existence of solutions to~\eqref{nleq}. Let $g$ satisfy the condition (H1) of Theorem~\ref{thmuniqnondeg} on $[0,\beta]$ and extend it as a continuously differentiable function over $\R$ such that $g$ is odd and $g<0$ on $(\beta,\infty)$. Then we know from \cite{BerLio-83,BerGalKav-83} that, in dimension $d\geq2$, there exists at least one radial decreasing positive solution $u$ to~\eqref{nleq} which is $C^2$ and decays exponentially at infinity, if and only if \begin{equation} G(\eta):=\int_{0}^\eta g(s)\,ds>0\ \text{for some $\eta>0$}.
\label{condex3} \end{equation} That this condition is necessary follows from the Pohozaev identity \begin{equation}\label{pohozaev} \frac{d-2}{2d}\int_{\R^d}|\nabla u|^2\,dx=\int_{\R^d}G(u(x))\,dx \end{equation} which implies that $G(u(x))$ has to take positive values somewhere: it cannot be negative everywhere (as it is for $|x|\to\infty$, since $G(s)\sim g'(0)s^2/2<0$ as $s\to0$). Finally, we recall that if $g\in C^{1+\varepsilon}$ for some $\varepsilon>0$, then, since $g(0)=0$ and $g'(0)<0$, it follows from the moving plane method~\cite{GidNiNir-81} that all the positive solutions to \eqref{nleq} are radial decreasing about some point in $\R^d$. \section{The double-power nonlinearity}\label{sec:double_power} In this section we consider the nonlinear elliptic equation \begin{equation}\label{powernl} \boxed{-\Delta u=-u^{p}+u^{q}-\mu u } \end{equation} with $p>q>1$, $\mu>0$ and $u^p:=|u|^{p-1}u$. This equation appears in a variety of practical situations, including density functional theory in quantum chemistry and condensed matter physics~\cite{LeBris-95,Ricaud-17}, Bose-Einstein condensates with three-body repulsive interactions~\cite{MerIsi-88}, heavy-ion collision processes~\cite{Kartavenko-84} and nonclassical nucleation near the spinodal in mesoscopic models of phase transitions~\cite{MurVan-04,CahHil-59,SaaHoh-92,CrosHoh-93}. The nonlinearity \begin{equation} g_\mu(u)=-u^{p}+u^{q}-\mu u \label{eq:def_g_mu} \end{equation} satisfies the condition (H1) of Theorem~\ref{thmuniqnondeg} hence, due to the Pohozaev identity~\eqref{pohozaev}, there exists a $\mu_*>0$ such that~\eqref{powernl} admits no nontrivial solutions for $\mu\ge\mu_*$, whereas it always has at least one positive solution for $\mu\in(0,\mu_*)$. The value of $\mu_*$ is given by \begin{equation}\label{eq:mu_star} \mu_*=\frac{2(p+1)^{\frac{q-1}{p-q}}(q-1)^{\frac{q-1}{p-q}}(p-q)}{(q+1)^{\frac{p-1}{p-q}}(p-1)^{\frac{p-1}{p-q}}}.
\end{equation} At $\mu=\mu_*$, the primitive $$G_\mu(u):=-\frac{|u|^{p+1}}{p+1}+\frac{|u|^{q+1}}{q+1}-\frac{\mu}2 |u|^2$$ becomes non-positive over the whole half-line $\R_+$, with a double zero at \begin{equation} \beta_*:=\left(\frac{(q-1)(p+1)}{(q+1)(p-1)}\right)^{\frac{1}{p-q}}<1. \label{eq:def_beta_*} \end{equation} For all $0<\mu\leq\mu_*$, the function $g_\mu$ has exactly two positive zeroes $0<\alpha_\mu<\beta_\mu$, which satisfy $$\begin{cases} \displaystyle \lim_{\mu\to0^+}\alpha_\mu=0, \\[0.2cm] \displaystyle \lim_{\mu\to0^+}\beta_\mu=1, \end{cases}\qquad\begin{cases} \displaystyle \lim_{\mu\to\mu_*^-}\alpha_\mu=\alpha_*\in(0,\beta_*), \\[0.2cm] \displaystyle \lim_{\mu\to\mu_*^-}\beta_\mu=\beta_*. \end{cases}$$ In addition, $\mu\mapsto \beta_\mu$ is \emph{decreasing} over $(0,\mu_*)$. \subsection{Branch parametrized by the Lagrange multiplier $\mu$} The following result is a corollary of Theorem~\ref{thmuniqnondeg}. \begin{theorem}[Uniqueness and non-degeneracy]\label{thmpowernl} Let $d\geq2$, $p>q>1$ and $g_\mu$ as in~\eqref{eq:def_g_mu}. For all $\mu\in (0,\mu_*)$, the nonlinear equation~\eqref{powernl} has a \emph{unique positive solution} $u_\mu$ tending to $0$ at infinity, modulo space translations. It can be chosen \emph{radial-decreasing}. It is \emph{non-degenerate}: \begin{equation} \begin{cases} \ker\big(-\Delta+pu_\mu^{p-1}-qu_\mu^{q-1}+\mu\big)={\rm span}\left\{\partial_{x_1}u_\mu,\ldots,\partial_{x_d}u_\mu\right\},\\ \ker\big(-\Delta+u_\mu^{p-1}-u_\mu^{q-1}+\mu\big)={\rm span}\{u_\mu\}. \end{cases} \label{eq:non_degenerate2} \end{equation} This solution satisfies $$0<u_\mu(x)<\beta_\mu<1,\qquad\forall x\in\R^d$$ and $u_\mu(0)\to\beta_*$ when $\mu\nearrow\mu_*$. \end{theorem} Existence was proved in~\cite{BerLio-83, BerGalKav-83} and uniqueness in~\cite{SerTan-00, Jang-10}. The non-degeneracy of the solution follows from Theorem~\ref{thmuniqnondeg} as in~\cite{LewRot-15}.
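The closed formulas~\eqref{eq:mu_star} and~\eqref{eq:def_beta_*} are easy to double-check numerically. The following sketch is our own illustration, with the sample exponents $p=5$, $q=3$ chosen only for this check: it verifies that $G_{\mu_*}$ has a double zero at $\beta_*$ and is non-positive on the half-line.

```python
import math

# Illustration: numerical check of the closed formulas for mu_* and beta_*
# for the sample exponents p = 5, q = 3 (chosen only for this check).
p, q = 5.0, 3.0
r = 1.0 / (p - q)
mu_star = (2 * (p + 1)**((q - 1) * r) * (q - 1)**((q - 1) * r) * (p - q)
           / ((q + 1)**((p - 1) * r) * (p - 1)**((p - 1) * r)))
beta_star = ((q - 1) * (p + 1) / ((q + 1) * (p - 1)))**r

def g(u):   # g_{mu_*}(u)
    return -u**p + u**q - mu_star * u

def G(u):   # its primitive G_{mu_*}(u)
    return -u**(p + 1) / (p + 1) + u**(q + 1) / (q + 1) - mu_star * u**2 / 2

# double zero at beta_*: both G_{mu_*} and its derivative g_{mu_*} vanish there
assert abs(g(beta_star)) < 1e-12 and abs(G(beta_star)) < 1e-12
# and G_{mu_*} <= 0 on the whole half-line (sampled here up to u = 3)
assert all(G(k / 1000) <= 1e-15 for k in range(3001))
```

For these exponents one finds $\mu_*=3/16$ and $\beta_*=\sqrt{3}/2$, in agreement with the quintic-cubic case studied in~\cite{KilOhPocVis-17}.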
The cases $d\in\{2,3\}$, $p=5$, $q=3$ and $d=3$, $p=7/3$, $q=5/3$ were handled in~\cite{KilOhPocVis-17,CarSpa-20_ppt} and~\cite{Ricaud-17,Ricaud-PhD}, respectively. Later in Theorem~\ref{thm:limit_mu_star} we will see that $u_\mu(x)\to\beta_*$ when $\mu\nearrow\mu_*$, for every $x\in\R^d$. The behavior of $u_\mu$ when $\mu\searrow0$ depends on the parameters $p$ and $q$, however, and will be given in Theorem~\ref{thm:limit_mu_0}. \begin{proof} The existence part of the theorem is \cite[Example 2]{BerLio-83} for $d\ge 3$ and \cite{BerGalKav-83} for $d=2$. Moreover, $g_\mu(u)$ satisfies the hypotheses of \cite{GidNiNir-81}, therefore all the positive solutions to \eqref{powernl} are radial decreasing about some point in $\R^d$. The function $g_\mu$ also satisfies hypothesis~\ref{hyp1} for some $0<\alpha_\mu<\beta_\mu$ and it is negative on $(\beta_\mu,\infty)$. Since $g_\mu$ is $C^\infty$ on $(0,\infty)$ and $u>0$, we deduce from regularity theory that $u$ is real-analytic on $\R^d$. We have $-\Delta u=g_\mu(u)<0$ on the open ball $\{u>\beta_\mu\}$, hence $u$ must be constant on this set, by the maximum principle. This cannot happen for a real-analytic function tending to $0$ at infinity and therefore $u\leq\beta_\mu$ everywhere. The maximum of $u$ can also not be equal to $\beta_\mu$, since otherwise $u\equiv \beta_\mu$, which is the unique corresponding solution to~\eqref{powernl}. We have therefore proved that all the positive solutions must satisfy $u<\beta_\mu$ and we are in a position to apply Theorem~\ref{thmuniqnondeg}. It only remains to show that $g_\mu$ satisfies hypothesis~\ref{hyp2}. We first show that for all $x\in (0,\alpha_\mu)$ and for all $\lambda >1$, $I_\lambda(x)> 0$.
To this end, we observe that, as $x\to0^+$ and for $\lambda>1$, \begin{equation*} I_{\lambda}(x)\sim(1-\lambda)g_\mu'(0)x=(\lambda-1)\mu x>0. \end{equation*} Next, by computing the second derivative of $g_\mu$, we remark that $g_\mu'$ is strictly increasing on $(0,x^*)$, attains its maximum at $$x^*=\left(\frac{q(q-1)}{p(p-1)}\right)^{\frac1{p-q}}$$ and is strictly decreasing on $(x^*,+\infty)$. Moreover, as a consequence of~\ref{hyp1}, $g'_\mu$ has to vanish at least twice: once on $(0,\alpha_\mu)$ since $g'_\mu(0)=-\mu<0$ and $g'_\mu(\alpha_\mu)>0$, and once on $(\alpha_\mu,\beta_\mu)$ since $g'_\mu(\beta_\mu)<0$. By the strict monotonicity of $g'_\mu$ on both sides of $x^*$, it vanishes exactly twice, namely at $x_1\in(0,\alpha_\mu)$ and at $x_2\in (\alpha_\mu,\beta_\mu)$. Hence $g_\mu'(x)<0$ for all $x\in (0,x_1)$ and $g_\mu'(x)>0$ for all $x\in (x_1,\alpha_\mu)\subset(x_1,x_2)$. As a consequence, since $g_\mu''(x)>0$ for all $x\in(0,x^*)$ and $x_1<x^*$, $$ I'_{\lambda}(x)=(1-\lambda)g_\mu'(x)+xg_\mu''(x)>0 $$ for all $x\in (0,x_1)$. This shows that $I_\lambda$ is strictly increasing on this interval and, since $I_\lambda(0)=0$, that $I_\lambda(x)>0$ for all $x\in (0,x_1)$. Next, for all $x\in(x_1,\alpha_\mu)$, we have $g_\mu'(x)>0$ and $g_\mu(x)<0$, which implies $I_\lambda(x)>0$. It remains to show that $I_\lambda$ vanishes exactly once in $(\alpha_\mu,\beta_\mu)$. It is clear that $I_\lambda$ has to vanish at least once since $I_\lambda(\alpha_\mu)=\alpha_\mu g_\mu'(\alpha_\mu)>0$ and $I_\lambda(\beta_\mu)=\beta_\mu g_\mu'(\beta_\mu)<0$. Moreover, it is proved in \cite{SerTan-00} that the function $$ h(x)=\frac{xg_\mu'(x)}{g_\mu(x)} $$ is decreasing on $(\alpha_\mu,\beta_\mu)$. Since $I_\lambda=g_\mu\,(h-\lambda)$ with $g_\mu>0$ on $(\alpha_\mu,\beta_\mu)$, this is enough to conclude that $I_\lambda$ has exactly one zero in $(\alpha_\mu,\beta_\mu)$. Hence $g_\mu$ satisfies hypothesis~\ref{hyp2} and this concludes the proof of the uniqueness and non-degeneracy in Theorem~\ref{thmpowernl}. Since $\mu\mapsto \beta_\mu$ is decreasing and its limit at $\mu=0$ is $1$, we deduce that the family $(u_\mu)_\mu$ of solutions to \eqref{powernl} is uniformly bounded: $0<u_\mu<\beta_\mu<1$.
If we denote by $\eta_\mu$ the first positive zero of $G_\mu$, then we also have $u_\mu(0)\geq \eta_\mu$, since $G_\mu(u_\mu(0))>0$ by~\eqref{pohozaev}. Since $\eta_\mu\to\beta_*$ when $\mu\nearrow\mu_*$, we obtain $u_\mu(0)\to\beta_*$ when $\mu\nearrow\mu_*$. \end{proof} \subsection{Behavior of the mass} It is very important to understand how the mass of the solution $u_\mu$ \begin{equation}\label{defmass} \boxed{ M(\mu):=\int_{\R^d} u_\mu(x)^2\;dx} \end{equation} varies with $\mu$. In the case of the usual focusing NLS equation with one power nonlinearity $q$ (which formally corresponds to $p=+\infty$ since $u<1$, at least when $q<2^*-1$), the mass is an explicit function of $\mu$ by scaling: $$M_{\rm NLS}(\mu)=\mu^{\frac{4+d-dq}{2(q-1)}}\int_{\R^d}Q(x)^2\,dx$$ where $-\Delta Q-Q^q+Q=0$. There is no such simple relation for the double-power nonlinearity. The importance of $M(\mu)$ is for instance seen in the Grillakis-Shatah-Strauss theory~\cite{Weinstein-85,ShaStr-85,GriShaStr-87,GriShaStr-90,BieGenRot-15} of stability for these solutions within the time-dependent Schr\"odinger equation. The latter says that the solution $u_\mu$ is \emph{orbitally stable} when $M'(\mu)>0$ and that it is \emph{unstable} when $M'(\mu)<0$. Therefore the intervals where $M$ is increasing furnish stable solutions whereas those where $M$ is decreasing correspond to unstable solutions. The Grillakis-Shatah-Strauss theory relies on another conserved quantity, the energy, which is discussed in the next section and for which the variations of $M$ also play a crucial role. Note that the derivative can be expressed in terms of the linearized operator $$\boxed{\mathcal{L}_\mu:=-\Delta-g'_\mu(u_\mu)=-\Delta+pu_\mu^{p-1}-qu_\mu^{q-1}+\mu}$$ by \begin{equation} M'(\mu)=2\Re\pscal{u_\mu,\frac{\partial}{\partial\mu} u_\mu}=-2\pscal{u_\mu,(\mathcal{L}_\mu)^{-1}_{\rm rad}u_\mu}. 
\label{eq:M'(mu)} \end{equation} Here $(\mathcal{L}_\mu)^{-1}_{\rm rad}$ denotes the inverse of $\mathcal{L}_\mu$ when restricted to the subspace of radial functions, which is well defined and bounded due to the non-degeneracy~\eqref{eq:non_degenerate2} of the solution.\footnote{The functions $\partial_{x_j}u_\mu$ spanning the kernel of $\mathcal{L}_\mu$ are orthogonal to the radial sector, hence $0$ is not an eigenvalue of $(\mathcal{L}_\mu)_{\rm rad}$. But then $0$ belongs to its resolvent set, since the essential spectrum starts at $\mu>0$.} This is why the non-degeneracy is crucial for understanding the variations of $M$. From the implicit function theorem, note that $M$ is a real-analytic function on $(0,\mu_*)$. \medskip Our main goal is to understand the number of sign changes of $M'$, which tells us how many stable and unstable branches there are. Here is a soft version of a conjecture which we are going to refine later on. It states that there is at most one unstable branch. \begin{conjecture}[Number of unstable branches] Let $d\geq2$ and $p>q>1$. The function $M'$ vanishes at most once on $(0,\mu_*)$. \end{conjecture} If true, this conjecture would have a number of interesting consequences, with regard to the stability of $u_\mu$ and the uniqueness of energy minimizers. We show in Theorem~\ref{thm:limit_mu_star} below that the stable branch is always present since $M'>0$ close to $\mu_*$. In order to make a more precise conjecture concerning the number of roots of $M'$ in terms of the exponents $p$ and $q$ and the dimension $d\geq2$, it is indeed useful to analyze the two regimes $\mu\to0$ and $\mu\to\mu_*$, where one can expect some simplification. This is the purpose of the next two subsections. \subsubsection{The limit $\mu\searrow0$} The following long statement about the limit $\mu\searrow0$ is an extension of results from~\cite{MorMur-14}, where the limit of $u_\mu$ was studied, but not that of $M$ and $M'$. 
\begin{theorem}[Behavior when $\mu\searrow0$]\label{thm:limit_mu_0} Let $d\geq2$ and $p>q>1$. \medskip \noindent$\bullet$ \emph{\bf (Sub-critical case)} If $d=2$, or if $d\geq3$ and $$q<1+\frac{4}{d-2},$$ then the rescaled function \begin{equation} \frac{1}{\mu^{\frac1{q-1}}}u_\mu\left(\frac{x}{\sqrt\mu}\right) \label{eq:rescaled_subcritical} \end{equation} converges strongly in $H^1(\R^d)\cap L^\infty(\R^d)$ in the limit $\mu\to0$ to the function $Q$ which is the unique positive radial-decreasing solution to the nonlinear Schr\"odinger (NLS) equation \begin{equation}\label{limiteq0} \Delta Q +Q^q-Q=0. \end{equation} We have \begin{multline} M(\mu)=\mu^{\frac{4+d-dq}{2(q-1)}}\int_{\R^d}Q^2 +\frac{2(p-1)+4+d-dq}{(p+1)(q-1)}\mu^{\frac{2(p-q)+4+d-dq}{2(q-1)}}\int_{\R^d}Q^{p+1}\\+o\left(\mu^{\frac{2(p-q)+4+d-dq}{2(q-1)}}\right)_{\mu\searrow0} \label{eq:limit_M_mu_0} \end{multline} and \begin{multline} M'(\mu)=\frac{4+d-dq}{2(q-1)}\mu^{\frac{4+d-dq}{2(q-1)}-1}\int_{\R^d}Q^2\\ +\frac{(2(p-1)+4+d-dq)(2(p-q)+4+d-dq)}{2(p+1)(q-1)^2}\mu^{\frac{2(p-q)+4+d-dq}{2(q-1)}-1}\int_{\R^d}Q^{p+1}\\+o\left(\mu^{\frac{2(p-q)+4+d-dq}{2(q-1)}-1}\right)_{\mu\searrow0}. \label{eq:limit_M_derivative_mu_0} \end{multline} In particular, $M$ is increasing for $q\leq 1+4/d$ and decreasing for $q>1+4/d$, in a neighborhood of the origin. 
\bigskip \noindent $\bullet$ \emph{\bf (Critical case)} If $d\geq3$ and $$q=1+\frac{4}{d-2},$$ then the rescaled function \begin{equation} \frac{1}{\epsilon_\mu^{\frac{d-2}{2}}}u_\mu\left(\frac{x}{\epsilon_\mu}\right) \label{eq:rescaled_critical} \end{equation} converges strongly in $\dot{H}^1(\R^d)\cap L^\infty(\R^d)$ in the limit $\mu\to0$ to the Sobolev optimizer $$S(x)=\left(1+\frac{|x|^2}{d(d-2)}\right)^{-\frac{d-2}{2}},$$ which is also the unique positive radial-decreasing solution (up to dilations) to the Emden-Fowler equation $\Delta S +S^q=0,$ where \begin{equation} \epsilon_\mu\sim c\begin{cases} \mu^{\frac{1}{p-3}}&\text{if $d=3$,}\\ \left(\mu\log\mu^{-1}\right)^{\frac{1}{p-1}}&\text{if $d=4$,}\\ \mu^{\frac{q-1}{2(p-1)}}&\text{if $d\geq5$.}\\ \end{cases} \label{eq:def_eps_mu} \end{equation} Furthermore, we have \begin{equation} \lim_{\mu\searrow0} M(\mu) =\lim_{\mu\searrow0}-M'(\mu)=\infty. \label{eq:limit_M_mu_0_critical} \end{equation} In particular, $M$ is decreasing in a neighborhood of the origin. \bigskip \noindent $\bullet$ \emph{\bf (Super-critical case)} If $d\geq3$ and $$q>1+\frac{4}{d-2},$$ then $u_\mu$ converges strongly in $\dot{H}^1(\R^d)\cap L^\infty(\R^d)$ in the limit $\mu\to0$ to the unique positive radial-decreasing solution $u_0\in \dot{H}^1(\R^d)\cap L^{p+1}(\R^d)$ of the `zero-mass' double-power equation $$-\Delta u_0=-u_0^p+u_0^q$$ decaying like $u_0(x)=O(|x|^{2-d})$ at infinity. We have the limits \begin{equation} \lim_{\mu\searrow0} M(\mu)=\int_{\R^d}u_0(x)^2\,dx\begin{cases} =\infty&\text{if $d\in\{3,4\}$,}\\ <\infty&\text{if $d\geq5$} \end{cases} \label{eq:limit_M_mu_0_supercritical} \end{equation} and \begin{equation} \lim_{\mu\searrow0} M'(\mu)=\begin{cases} -\infty&\text{if $d\in\{3,4,5,6\}$,}\\ M'(0)\in\R&\text{if $d\geq7$.} \end{cases} \label{eq:limit_derivative_M_mu_0_supercritical} \end{equation} In particular, $M$ is decreasing in a neighborhood of the origin when $d\in\{3,...,6\}$. 
In dimensions $d\geq7$, we have $M'(0)<0$ under the additional condition \begin{equation} 1+\frac{4}{d-2}<q<p<1+\frac{4}{d-2}+\frac{32}{d(d-2)\big((d-2)q-d-2\big)}. \label{eq:ugly_condition_on_p} \end{equation} \end{theorem} The convergence properties of $u_\mu$ are taken from~\cite{MorMur-14} in all cases. Only the behavior of $M$ and $M'$ is new. The corresponding proof is given below in Section~\ref{sec:proof_mu_0}. The condition~\eqref{eq:ugly_condition_on_p} in the super-critical case is not at all expected to be optimal and it is only provided as an illustration. This condition requires that $$1+\frac{4}{d-2}<q<1+\frac{4}{d-2}+\frac{4\sqrt2}{\sqrt{d}(d-2)}$$ and that $p$ satisfies the right-hand inequality in~\eqref{eq:ugly_condition_on_p}. In particular, $p$ can be arbitrarily large when $q$ approaches the critical exponent. Although we are able to prove that $M'$ admits a finite limit when $\mu\to0$ in dimensions $d\geq7$, we cannot determine its sign in the whole range of parameters. Numerical simulations provided below in Section~\ref{sec:conjecture} seem to indicate that $M'(0)$ can be positive. The limit $\mu\to0$ for $M'(\mu)$ is quite delicate in the super-critical case, since the limiting linearized operator $$\mathcal{L}_0=-\Delta+p(u_0)^{p-1}-q(u_0)^{q-1}$$ has no gap at the origin. Its essential spectrum starts at $0$. Nevertheless, we show in Appendix~\ref{app:u_0} that $u_0$ is still \emph{non-degenerate} in the sense that $\ker\left(\mathcal{L}_0\right)={\rm span}\left\{\partial_{x_1}u_0,\ldots,\partial_{x_d}u_0\right\}$. This allows us to define $(\mathcal{L}_0)_{\rm rad}^{-1}$ by the functional calculus and to prove that, as expected, $$M'(0)=-2\pscal{u_0,(\mathcal{L}_0)_{\rm rad}^{-1}u_0},$$ where the right side is interpreted in the sense of quadratic forms. In dimensions $d\geq5$ there are no resonances and $(\mathcal{L}_0)_{\rm rad}^{-1}$ essentially behaves like $(-\Delta)^{-1}_{\rm rad}$ at low momenta~\cite{Jensen-80}.
Since $\pscal{u_0,(-\Delta)^{-1}u_0}$ is finite only in dimensions $d\geq7$ due to the slow decay of $u_0$ at infinity, $M'(0)$ is only finite in those dimensions. In dimensions $d\in\{7,8\}$ it is the second derivative $M''(\mu)$ which diverges to $+\infty$ when $\mu\to0$ (see Remark~\ref{rmk:higher_derivatives}) but this does not tell us anything about the variations of $M$. \subsubsection{The limit $\mu\nearrow\mu_*$} Next we study the behavior of the branch of solutions in the other limit $\mu\nearrow\mu_*$. \begin{theorem}[Behavior when $\mu\nearrow\mu_*$]\label{thm:limit_mu_star} Let $d\geq2$ and $p>q>1$. Let $\mu_*$ and $\beta_*$ be the two critical constants defined in~\eqref{eq:mu_star} and~\eqref{eq:def_beta_*}, respectively. Then we have \begin{equation} \boxed{\lim_{\mu\nearrow\mu_*}(\mu_*-\mu)^dM(\mu)=\lim_{\mu\nearrow\mu_*}\frac{(\mu_*-\mu)^{d+1}}{d}M'(\mu)=\Lambda} \label{eq:limit_M_mu_infty} \end{equation} where \begin{equation} \Lambda:=2^{\frac{3d}2}\frac{|\bS^{d-1}|}{d}(\beta_*)^{2(1-d)}(d-1)^{d}\left(\int_{0}^{\beta_*}|G_{\mu_*}(s)|^{\frac12}\,ds\right)^d. \label{eq:def_Lambda} \end{equation} Let $\gamma\in(0,\beta_*)$ be any constant and call $R_\mu$ the unique radius such that $u_\mu(R_\mu)=\gamma$. Then we have \begin{equation} R_\mu=\frac{\rho}{\mu_*-\mu}+o\left(\frac1{\mu_*-\mu}\right),\qquad \rho=\frac{2\sqrt2(d-1)}{\beta_*^2}\int_0^{\beta_*} \sqrt{|G_{\mu_*}(s)|}\,ds, \label{eq:formula_R_mu} \end{equation} and the uniform convergence \begin{equation} \lim_{\mu\to\mu_*}\norm{u_\mu-U_*(|x|-R_\mu)}_{L^\infty(\R^d)}=0, \label{eq:limit_mu_mu_*} \end{equation} where $U_*$ is the unique solution to the one-dimensional limiting problem \begin{equation} \begin{cases} U_*''+g_{\mu_*}(U_*)=0&\text{on $\R$}\\ U_*(-\infty)=\beta_*\\ U_*(+\infty)=0\\ U_*(0)=\gamma\in(0,\beta_*). \end{cases} \label{eq:U} \end{equation} \end{theorem} The proof will be provided later in Section~\ref{sec:proof_mu_star}.
What the result says is that $u_\mu$ resembles a \emph{radial translation} of the one-dimensional solution $U_*$, which links the two unstable stationary solutions $\beta_*$ and $0$ of the underlying Hamiltonian system. Since $U_*$ tends to $\beta_*$ at $-\infty$, we see that $u_\mu(r)$ tends to $\beta_*$ for every fixed $r$, as we claimed earlier, and this is why the mass diverges like $$M(\mu)\underset{\mu\to\mu_*}\sim(R_\mu)^d(\beta_*)^2\frac{|\bS^{d-1}|}{d}.$$ Plugging the asymptotics of $R_\mu$ from~\eqref{eq:formula_R_mu} then provides~\eqref{eq:limit_M_mu_infty}. Stronger convergence properties of $u_\mu$ will be given in the proof. For instance we have for the derivatives $$\lim_{\mu\to\mu_*}\norm{u'_\mu-U_*'(|x|-R_\mu)}_{L^\infty(\R^d)}=\lim_{\mu\to\mu_*}\int_0^\infty\left|u'_\mu(r)-U'_*(r-R_\mu)\right|^p\,dr=0$$ for all $1\leq p<\infty$. On the other hand we only have $$\|u'_\mu-U_*'(|x|-R_\mu)\|_{L^p(\R^d)}=o(R_\mu^{\frac{d-1}{2}})$$ due to the volume factor $r^{d-1}{\rm d}r$. Upper and lower bounds on $M(\mu)$ in terms of $(\mu_*-\mu)^{-d}$ were derived in~\cite{KilOhPocVis-17} in the case $d=3$, $p=5$ and $q=3$ but the exact limit~\eqref{eq:limit_M_mu_infty} is new, to our knowledge. Particular sequences $\mu_n\to\mu_*$ have been studied in~\cite{AkaKikYam-18}. That the solution $u_\mu$ tends to a constant in the limit of a large mass is a `saturation phenomenon' which plays an important role in physics, for instance for infinite nuclear matter~\cite{MerIsi-88}. Theorem~\ref{thm:limit_mu_star} implies that $M$ is always increasing close to $\mu_*$, hence in this region we obtain an orbitally stable branch for the Schr\"odinger flow, for every $p>q>1$.
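As a consistency check (ours, purely for illustration), the constant $\Lambda$ in~\eqref{eq:def_Lambda} can be recovered by inserting the asymptotics $R_\mu\sim\rho/(\mu_*-\mu)$ into $M(\mu)\sim(R_\mu)^d(\beta_*)^2|\bS^{d-1}|/d$. The following sketch verifies this numerically for the sample values $d=3$, $p=5$, $q=3$:

```python
import math

# Illustration (sample values d = 3, p = 5, q = 3): the leading coefficient of
# M(mu) ~ R_mu^d beta_*^2 |S^{d-1}|/d with R_mu ~ rho/(mu_* - mu) equals Lambda.
d, p, q = 3, 5.0, 3.0
mu_star, beta_star = 3.0 / 16.0, math.sqrt(3.0) / 2.0  # closed forms for p=5, q=3

def G(u):  # the primitive G_{mu_*}(u)
    return -u**(p + 1) / (p + 1) + u**(q + 1) / (q + 1) - mu_star * u**2 / 2

# J = int_0^{beta_*} |G_{mu_*}(s)|^(1/2) ds, by the midpoint rule
N = 100_000
J = sum(math.sqrt(abs(G(beta_star * (k + 0.5) / N))) for k in range(N)) * beta_star / N

S = 4 * math.pi  # |S^{d-1}| for d = 3
Lam = 2**(3 * d / 2) * (S / d) * beta_star**(2 * (1 - d)) * (d - 1)**d * J**d
rho = 2 * math.sqrt(2) * (d - 1) / beta_star**2 * J

# rho^d beta_*^2 |S^{d-1}|/d reproduces Lambda, for any value of J
assert abs(Lam - rho**d * beta_star**2 * S / d) < 1e-12 * Lam
```

The agreement is in fact an algebraic identity, independent of the value of the integral $J$, so this only checks the transcription of the two formulas against each other.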
\begin{remark}[A general result] Our proof of Theorem~\ref{thm:limit_mu_star} is general and works the same for a function of the form $g_\mu(u)=g_0(u)-\mu u$ with \begin{itemize} \item $g_0\in C^1([0,\infty))\cap C^2(0,\infty)$ with $g_0(0)=g_0'(0)=0$ and $g_0(s)\to-\infty$ when $s\to+\infty$; \item $g_\mu$ has exactly two roots $0<\alpha_\mu<\beta_\mu$ on $(0,\infty)$ with $g_\mu'(\alpha_\mu)>0$ and $g_\mu'(\beta_\mu)<0$ for all $\mu\in(0,\mu_*]$, where $\mu_*$ is the first $\mu$ such that $G_\mu(r)=\int_0^rg_\mu(s)\,ds\leq0$ for all $r\geq0$; \item $\Delta u+g_\mu(u)=0$ has a unique non-degenerate radial positive solution for every $\mu\in(0,\mu_*)$ (for instance $g_\mu$ satisfies \textnormal{(H2)} in Theorem~\ref{thmuniqnondeg} for all $\mu\in(0,\mu_*)$). \end{itemize} \end{remark} \subsubsection{Main conjecture and numerical illustration}\label{sec:conjecture} Theorems~\ref{thm:limit_mu_0} and~\ref{thm:limit_mu_star} and the fact that $M$ is a smooth function on $(0,\mu_*)$ imply some properties of solutions to the equation $M(\mu)=\lambda$, whenever $\lambda$ is either small or large. Those are summarized in the following \begin{corollary}[Number of solutions to $M(\mu)=\lambda$]\label{cor:equation_M_lambda} Let $d\geq2$ and $p>q>1$. The equation $$M(\mu)=\lambda$$ \medskip \noindent$\bullet$ admits a \emph{unique solution} $\mu$ for $\lambda$ small enough when $1<q<1+\frac{4}{d}$, and it is stable, $M'(\mu)>0$; \medskip \noindent$\bullet$ admits a \emph{unique solution} $\mu$ for $\lambda$ large enough when $$\begin{cases} \text{$1<q\leq1+\frac4d$,}\\ \text{$q>1+\frac{4}{d-2}$ and $d\geq5$,} \end{cases} $$ and it is stable, $M'(\mu)>0$; \medskip \noindent $\bullet$ admits \emph{exactly two} solutions $\mu_1<\mu_2$ for $\lambda$ large enough when $$\begin{cases} \text{$q>1+\frac{4}{d}$ and $d\in\{2,3,4\}$,}\\ \text{$1+\frac4d<q\leq 1+\frac{4}{d-2}$ and $d\geq5$,}\\ \end{cases} $$ which are respectively unstable and stable: $M'(\mu_1)<0$, $M'(\mu_2)>0$.
\end{corollary} Now that we have determined the exact behavior of $M$ at the two end points of its interval of definition, it seems natural to expect that the following holds true. \begin{conjecture}[Behavior of $M$]\label{conjecture:M} Let $d\geq2$ and $p>q>1$. Then $M'$ is either positive on $(0,\mu_*)$, or vanishes at a unique $\mu_c\in(0,\mu_*)$ with \begin{equation} M'\begin{cases} <0&\text{on $(0,\mu_c)$,}\\ >0&\text{on $(\mu_c,\mu_*)$.} \end{cases} \label{eq:conjecture_M'} \end{equation} More precisely: \begin{itemize} \item[$\bullet$] If $q\leq 1+4/d$, then $M'>0$ on $(0,\mu_*)$. \item[$\bullet$] If $d\in\{2,...,6\}$ and $q>1+4/d$, or if $d\geq7$ and $1+4/d<q\leq 1+4/(d-2)$, then $M'$ vanishes exactly once. \item[$\bullet$] If $d\geq7$ and $q>1+4/(d-2)$, there exists a $p_c(q)\geq q$ such that $M'$ vanishes once for $q<p< p_c(q)$ and does not vanish for $p> p_c(q)$. \end{itemize} \end{conjecture} The property~\eqref{eq:conjecture_M'} is an immediate consequence of Theorems~\ref{thm:limit_mu_0} and~\ref{thm:limit_mu_star} whenever $M'$ vanishes only once. The conjecture was put forward in~\cite{KilOhPocVis-17,CarSpa-20_ppt} for the quintic-cubic NLS equation ($p=5,q=3$) in dimensions $d\in\{2,3\}$, and in~\cite{Ricaud-17} for $d=3$, $p=7/3,q=5/3$. These cases have been confirmed by numerical simulations~\cite{Anderson-71,MerIsi-88,KilOhPocVis-17,Ricaud-17}. In Figures~\ref{fig:numerics_2D}--\ref{fig:numerics_5D_7D} we provide a selection of numerical simulations of the function $M$ in dimensions $d\in\{2,3,5,7\}$ which seem to confirm the conjecture. Although we have run many more simulations and could never disprove the conjecture, we have, however, not investigated all the possible powers and dimensions in a systematic way.
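Such simulations can be reproduced with elementary tools. The following sketch is our own illustration, not necessarily the method used for the figures (the paper does not document its numerical scheme): it computes $u_\mu$ for the sample values $d=3$, $p=5$, $q=3$, $\mu=0.1$ by a standard shooting method on the radial ODE $u''+\frac{d-1}{r}u'+g_\mu(u)=0$, bisecting on $u(0)$, and then evaluates the mass $M(\mu)$.

```python
import math

# Our illustration (not the authors' code): shooting method for the radial
# profile of -Delta u = -u^p + u^q - mu*u, sample values d=3, p=5, q=3, mu=0.1.
d, p, q, mu = 3, 5.0, 3.0, 0.1

def g(u):
    return -u**p + u**q - mu * u

def rhs(r, u, v):  # u' = v,  v' = -g(u) - (d-1) v / r
    return v, -g(u) - (d - 1) * v / r

def shoot(a, h=0.01, rmax=30.0):
    """RK4 integration from u(0) = a.  Returns (+1, mass) if u crosses zero
    (a too large) and (-1, mass) if u turns back upwards (a too small)."""
    # series expansion near r = 0: u ~ a - g(a) r^2/(2d), avoiding the 1/r singularity
    r, u, v = h, a - g(a) * h * h / (2 * d), -g(a) * h / d
    mass = 0.0
    while r < rmax:
        k1u, k1v = rhs(r, u, v)
        k2u, k2v = rhs(r + h/2, u + h/2 * k1u, v + h/2 * k1v)
        k3u, k3v = rhs(r + h/2, u + h/2 * k2u, v + h/2 * k2v)
        k4u, k4v = rhs(r + h, u + h * k3u, v + h * k3v)
        u += h * (k1u + 2*k2u + 2*k3u + k4u) / 6
        v += h * (k1v + 2*k2v + 2*k3v + k4v) / 6
        r += h
        mass += u * u * r**(d - 1) * h
        if u < 0:
            return +1, mass
        if v > 0:
            return -1, mass
    return -1, mass

disc = math.sqrt(1 - 4 * mu)
lo, hi = math.sqrt((1 - disc) / 2), math.sqrt((1 + disc) / 2)  # (alpha_mu, beta_mu)
for _ in range(50):  # bisection on the shooting parameter a = u(0)
    mid = (lo + hi) / 2
    if shoot(mid)[0] > 0:
        hi = mid
    else:
        lo = mid
a_crit = (lo + hi) / 2
M = 4 * math.pi * shoot(a_crit)[1]  # M(mu) = |S^2| int_0^infty u^2 r^2 dr
```

Consistently with Theorem~\ref{thmpowernl} and the Pohozaev identity, the computed $u_\mu(0)$ lands between the first positive zero of $G_\mu$ and $\beta_\mu$. Repeating this over a grid of $\mu$'s yields curves like those in the figures.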
\begin{figure}[h] \begin{tabular}{ccc} \includegraphics[width=4cm]{masse_d2_p5_q2.png}&\includegraphics[width=4cm]{masse_d2_p5_q3.png}&\includegraphics[width=4cm]{masse_d2_p5_q4.png}\\ $p=5$, $q=2$&$p=5$, $q=3$&$p=5$, $q=4$\\ \end{tabular} \caption{Function $\mu/\mu_*\in[0,1)\mapsto |\bS^{d-1}|^{-1}M(\mu)$ in dimension $d=2$.\label{fig:numerics_2D}} \bigskip\bigskip \begin{tabular}{ccc} \includegraphics[width=4cm]{masse_d3_p7-3_q5-3.png}&\includegraphics[width=4cm]{masse_d3_p3_q7-3.png}&\includegraphics[width=4cm]{masse_d3_p5_q3.png}\\ $p=7/3$, $q=5/3$&$p=3$, $q=7/3$&$p=5$, $q=3$\\ \end{tabular} \caption{Same function in dimension $d=3$.\label{fig:numerics_3D}} \bigskip\bigskip \begin{tabular}{ccc} \includegraphics[width=4cm]{masse_d5_p5_q3.png}&\includegraphics[width=4cm]{masse_d7_p5-2_q2.png}&\includegraphics[width=4cm]{masse_d7_p5_q3_b.png}\\ $d=5$, $p=5$, $q=3$&$d=7$, $p=5/2$, $q=2$&$d=7$, $p=5$, $q=3$\\ \end{tabular} \caption{Same function in dimensions $d=5$ and $d=7$. The second case $p=5/2,q=2$ is covered by~\eqref{eq:ugly_condition_on_p} whereas the third is not. \label{fig:numerics_5D_7D}} \end{figure} \subsection{The double-power energy functional}\label{sec:double_power_energy} In this paper the larger power $p$ is defocusing and always controls the smaller focusing nonlinearity of exponent $q$. In this situation the double-power NLS equation~\eqref{powernl} has a natural variational interpretation in the whole possible range of powers, which we discuss in this section. We introduce the energy functional $$\mathcal{E}(u)=\frac12\int_{\R^d}|\nabla u(x)|^2\,dx+\frac1{p+1}\int_{\R^d}|u(x)|^{p+1}\,dx-\frac1{q+1}\int_{\R^d}|u(x)|^{q+1}\,dx$$ and the corresponding minimization problem \begin{equation} \boxed{I(\lambda):=\inf_{\substack{u\in H^1(\R^d)\cap L^{p+1}(\R^d)\\ \int_{\R^d}|u|^2=\lambda}}\mathcal{E}(u)} \label{eq:I_lambda} \end{equation} at fixed mass $\lambda\geq0$. 
This problem is well posed for all $p>q>1$ because we can write $$\mathcal{E}(u)= \frac12\int_{\R^d}|\nabla u(x)|^2\,dx-\int_{\R^d}G_{\mu_*}\big(u(x)\big)\,dx-\frac{\mu_*\lambda}{2}\geq -\frac{\mu_*\lambda}{2}.$$ Recall that $\mu_*$ in~\eqref{eq:mu_star} is precisely the lowest $\mu$ for which $G_\mu\leq0$ on $\R_+$. The minimization problem~\eqref{eq:I_lambda} appears naturally in applications, for instance in condensed matter physics for $d=3$, $p=7/3$ and $q=5/3$, where it can be obtained from the Thomas-Fermi-von Weizs\"acker-Dirac functional of atoms, molecules and solids~\cite{ChaCohHan-01,Lieb-81b,BenBreLie-81,Lions-87,Lions-88,LeBris-93b}, in a certain limit of a large Dirac term~\cite{Ricaud-17,GonLewNaz-20_ppt}. The existence of minimizers follows from rather standard methods of nonlinear analysis, as stated in the following \begin{theorem}[Existence of minimizers for $I(\lambda)$]\label{thm:prop_I_lambda} Let $d\geq2$ and $p>q>1$. The function $\lambda\mapsto I(\lambda)$ is concave non-increasing over $[0,\infty)$. It satisfies \begin{itemize} \item[$\bullet$] $I(\lambda)=0$ for all $0\leq \lambda\leq \lambda_c$, \item[$\bullet$] $\lambda\mapsto I(\lambda)$ is negative and strictly decreasing on $(\lambda_c,\infty)$, \end{itemize} where $$\lambda_c\begin{cases} =0&\text{if $q<1+4/d$,}\\ =\int_{\R^d}Q^2&\text{if $q=1+4/d$,}\\ \in(0,\infty)&\text{if $q>1+4/d$,} \end{cases}$$ with $Q$ the same NLS function as in Theorem~\ref{thm:limit_mu_0}. The problem $I(\lambda)$ admits at least one positive radial-decreasing minimizer $u$ for every $$\lambda\begin{cases} \geq \lambda_c&\text{if $q\neq1+4/d$,}\\ > \lambda_c&\text{if $q=1+4/d$.} \end{cases}$$ Any minimizer $u$ solves the Euler-Lagrange equation~\eqref{powernl} for some $\mu\in(0,\mu_*)$, hence must be equal to $u_\mu$. The infimum is not attained for $\lambda<\lambda_c$ or for $\lambda=\lambda_c$ and $q=1+4/d$.
\end{theorem} In the proof, provided later in Section~\ref{sec:prop_I_lambda}, we give a characterization of $\lambda_c$ in terms of optimizers of the Gagliardo-Nirenberg-type inequality \begin{equation} \norm{u}_{L^{q+1}(\R^d)}^{q+1}\leq C_{p,q,d}\norm{u}_{L^2(\R^d)}^{q-1-\theta(p-1)}\norm{\nabla u}_{L^2(\R^d)}^{2(1-\theta)}\norm{u}_{L^{p+1}(\R^d)}^{\theta(p+1)} \label{eq:Gagliardo-Nirenberg-type} \end{equation} when $q\geq1+4/d$, with $$\theta=\frac{q-1-\frac{4}d}{p-1-\frac{4}d}\in[0,1).$$ A similar property was used in~\cite{KilOhPocVis-17,CarSpa-20_ppt}. At $q=1+4/d$ we have $\theta=0$ and obtain the usual Gagliardo-Nirenberg inequality, of which $Q$ is the unique optimizer. A very natural question is to ask whether minimizers of $I(\lambda)$ are \emph{unique}, up to space translations and multiplication by a phase factor. This does not follow from the uniqueness of $u_\mu$ at fixed $\mu$ because the minimizers could have different Lagrange multipliers $\mu$. The concavity of $I$ implies that it is differentiable except for countably many values of $\lambda$. When the derivative exists and $\lambda>\lambda_c$, it can be seen that the minimizer is unique and given by $u_\mu$ with $\mu=-2I'(\lambda)$. Details will be provided later in Theorem~\ref{thm:uniqueness_I_lambda} where we actually show that the derivative can only have finitely many jumps in $(\lambda_c,\infty)$. Another natural question is to ask whether one solution $u_\mu$ could be a candidate for the minimization problem $I(\lambda)$ with $\lambda=M(\mu)$. From the non-degeneracy of $u_\mu$, the answer (see, e.g.~\cite[App.~E]{Weinstein-85}) is that when $M'(\mu)>0$ the corresponding solution $u_\mu$ is a \emph{strict local minimum} of $\mathcal{E}$ at fixed mass $\lambda=M(\mu)$, whereas when $M'(\mu)<0$, the solution $u_\mu$ is a saddle point. In particular, we must always have $M'(\mu)\geq0$ for a minimizer $u_\mu$ of $I(\lambda)$.
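The exponents appearing in~\eqref{eq:Gagliardo-Nirenberg-type} are forced by scale invariance: both sides must transform in the same way under $u\mapsto cu$ and $u\mapsto u(\cdot/\lambda)$, and solving the two resulting linear equations yields the stated $\theta$. The following exact-arithmetic sketch verifies this homogeneity for a few sample exponents (a sanity check only, not used anywhere in the proofs).

```python
from fractions import Fraction as Fr

# Sanity check: the exponents of the three norms on the right side of the
# Gagliardo-Nirenberg-type inequality are forced by the two scalings
# u -> c*u and u -> u(./lam).  Under the dilation, ||u||_{L^s} picks up
# lam^{d/s} and ||grad u||_{L^2} picks up lam^{d/2-1}, while the left side
# scales as lam^d; under u -> c*u every norm is 1-homogeneous.

def theta(d, p, q):
    """The interpolation exponent of the inequality, checked for consistency."""
    th = (q - 1 - Fr(4, d)) / (p - 1 - Fr(4, d))
    assert 0 <= th < 1                   # requires q >= 1 + 4/d and p > q
    e2 = q - 1 - th * (p - 1)            # exponent of ||u||_{L^2}
    egrad = 2 * (1 - th)                 # exponent of ||grad u||_{L^2}
    ep = th * (p + 1)                    # exponent of ||u||_{L^{p+1}}
    assert e2 + egrad + ep == q + 1                                         # amplitude
    assert Fr(d, 2) * e2 + (Fr(d, 2) - 1) * egrad + Fr(d, p + 1) * ep == d  # dilation
    return th

for d, p, q in [(3, 5, 3), (3, 3, Fr(7, 3)), (7, 5, 3)]:
    print(d, p, q, theta(d, p, q))
```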
From this discussion, we see that the following would immediately follow from Conjecture~\ref{conjecture:M}. \begin{conjecture}[Uniqueness of minimizers]\label{conjecture:I} Let $d\geq2$ and $p>q>1$. Then $I(\lambda)$ admits a \emph{unique minimizer} for all $\lambda\geq\lambda_c$ (resp. $\lambda>\lambda_c$ if $q=1+4/d$). \end{conjecture} Although we are not able to prove this conjecture, our previous analysis implies the following uniqueness result. \begin{theorem}[Partial uniqueness of minimizers]\label{thm:uniqueness_I_lambda} Let $d\geq2$ and $p>q>1$. Then $I(\lambda)$ admits a \emph{unique positive radial minimizer} when \begin{itemize} \item $\lambda$ is large enough; \item $q<1+4/d$ and $\lambda\in[0,\epsilon)$, \item $q\geq 1+4/d$ and $\lambda\in(\lambda_c,\lambda_c+\epsilon)$ \end{itemize} for some $\epsilon>0$ small enough. In fact, $I(\lambda)$ has a unique positive radial minimizer for all $\lambda\in[\lambda_c,\infty)$ (resp. $\lambda\in(\lambda_c,\infty)$ when $q=1+4/d$), except possibly at finitely many points in $[\lambda_c,\infty)$. At those values, the number of positive radial minimizers is also finite. For any $\lambda\in(\lambda_c,\infty)$ we have $$I'(\lambda^-)=-\frac12\min\big\{\mu\ :\ \mathcal{E}(u_\mu)=I(\lambda),\ M(\mu)=\lambda\big\},$$ and $$I'(\lambda^+)=-\frac12\max\big\{\mu\ :\ \mathcal{E}(u_\mu)=I(\lambda),\ M(\mu)=\lambda\big\}.$$ \end{theorem} In order to explain the proof of Theorem~\ref{thm:uniqueness_I_lambda}, it is useful to introduce the energy $$E(\mu):=\mathcal{E}(u_\mu),\qquad \mu\in(0,\mu_*)$$ of our branch of solutions $u_\mu$. Note that \begin{equation} E'(\mu)=-\frac{\mu}2 M'(\mu), \label{eq:relation_E_M} \end{equation} that is, the variations of $E$ are exactly opposite to those of $M$. The following is a simple consequence of Theorems~\ref{thm:limit_mu_0} and~\ref{thm:limit_mu_star} together with~\eqref{eq:relation_E_M}. \begin{corollary}[$E(\mu)$ at $0$ and $\mu_*$]\label{cor:Energy} Let $d\geq2$ and $p>q>1$. 
\medskip \noindent$\bullet$ When $\mu\searrow0$, we have $$\lim_{\mu\to0^+}E(\mu)=\begin{cases} \mathcal{E}(u_0)=\frac{1}{d}\int_{\R^d}|\nabla u_0|^2&\text{if $d\geq3$ and $q> 1+\frac4{d-2}$,}\\ \frac{1}{d}\int_{\R^d}|\nabla S|^2 &\text{if $d\geq3$ and $q= 1+\frac4{d-2}$,}\\ 0&\text{otherwise.}\\ \end{cases}$$ Moreover $$E(\mu)\begin{cases} <0&\text{for $q\leq 1+4/d$,}\\ >0&\text{for $q> 1+4/d$,} \end{cases}$$ for $\mu$ in a neighborhood of the origin. \medskip \noindent$\bullet$ When $\mu\nearrow\mu_*$, we have $$E(\mu)\underset{\mu\to\mu_*}\sim-\frac{\mu_*\Lambda}{2(\mu_*-\mu)^d}$$ where $\Lambda$ is the same constant as in Theorem~\ref{thm:limit_mu_star}. \medskip \noindent$\bullet$ $\mu\mapsto E(\mu)$ is real-analytic on $(0,\mu_*)$ and the equation $E(\mu)=e$ always has finitely many solutions for any $e\in(-\infty,\max E]$. \end{corollary} In the case $e=E(0)>0$ when $q>1+4/(d-2)$ and $d\geq7$, one has to use Remark~\ref{rmk:higher_derivatives} which says that one derivative $M^{(k)}$ always diverges in the limit $\mu\to0$, for $k$ large enough depending on the dimension $d$. This implies that $E$ cannot take the value $E(0)$ infinitely many times in a neighborhood of the origin. We see that Conjecture~\ref{conjecture:I} would follow if we could prove that \begin{itemize} \item $E$ is decreasing for $q\leq 1+4/d$; \item $E$ has a unique positive zero and is decreasing on the right side of this point, for $q>1+4/d$. \end{itemize} Note that when $q>1+4/d$, Conjecture~\ref{conjecture:I} is really weaker than Conjecture~\ref{conjecture:M} on the mass $M(\mu)$, since the places where $E(\mu)>0$ do not matter for the minimization problem $I(\lambda)$. 
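For the reader's convenience, we record the one-line computation behind~\eqref{eq:relation_E_M}. Differentiating $E(\mu)=\mathcal{E}(u_\mu)$ along the branch and using that $u_\mu$ solves the Euler-Lagrange equation~\eqref{powernl}, that is $-\Delta u_\mu+u_\mu^p-u_\mu^q=-\mu u_\mu$, we find $$E'(\mu)=\pscal{-\Delta u_\mu+u_\mu^{p}-u_\mu^{q},\partial_\mu u_\mu}=-\mu\pscal{u_\mu,\partial_\mu u_\mu}=-\frac{\mu}2\frac{d}{d\mu}\int_{\R^d}u_\mu^2=-\frac{\mu}2 M'(\mu).$$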
We conclude the section with the proof of Theorem~\ref{thm:uniqueness_I_lambda}. \begin{proof}[Proof of Theorem~\ref{thm:uniqueness_I_lambda}] If $\lambda$ is large or small, the statement follows immediately from Corollary~\ref{cor:equation_M_lambda} (and the fact that $M'(\mu)\geq0$ at a minimizer $u_\mu$ for $I(\lambda)$ in case there are two solutions to the equation $M(\mu)=\lambda$). We now discuss the more complicated case $q=1+4/d$. We know from Theorems~\ref{thm:limit_mu_0} and~\ref{thm:prop_I_lambda} that $\lambda_c=\int_{\R^d}Q^2=M(0)$. We claim that minimizers for $\lambda$ close to $\lambda_c$ necessarily have $\mu$ small enough, so that the conclusion follows from the monotonicity of $M$ close to the origin, by Theorem~\ref{thm:limit_mu_0}. To see this, assume by contradiction that there exists a sequence $\mu_n\nrightarrow0$ such that $\mathcal{E}(u_{\mu_n})=I(\lambda_n)\nearrow0$ and $\lambda_n=M(\mu_n)\searrow \lambda_c$. Since $E$ diverges to $-\infty$ at $\mu_*$, we can assume after extracting a subsequence that $\mu_n\to\mu\in(0,\mu_*)$ and then obtain $E(\mu)=0$ with $M(\mu)=\lambda_c$. But this cannot happen because \begin{align*} 0=E(\mu)&=\frac12\int_{\R^d}|\nabla u_\mu|^2+\frac1{p+1}\int_{\R^d}u_\mu^{p+1}-\frac1{q+1}\int_{\R^d}u_\mu^{q+1} \\ &\geq \frac1{p+1}\int_{\R^d}u_\mu^{p+1}>0, \end{align*} where we used the Gagliardo-Nirenberg inequality~\eqref{eq:Gagliardo-Nirenberg-type} which, in the case $q=1+4/d$, takes the simple form $$\frac1{q+1}\int_{\R^d}u^{q+1}\leq \frac{\norm{u}^{\frac4d}_{L^2(\R^d)}}{2(\lambda_c)^{\frac2d}}\int_{\R^d}|\nabla u|^2,$$ applied here with $\int_{\R^d}u_\mu^2=M(\mu)=\lambda_c$. As a conclusion, all the minimizers for $I(\lambda)$ must have a small Lagrange multiplier $\mu$ when $\lambda$ is close to $\lambda_c$, and the result follows when $q=1+4/d$. For every $\lambda>\lambda_c$, the number of $\mu$'s such that $E(\mu)=I(\lambda)<0$ is finite by Corollary~\ref{cor:Energy}. The same holds at $\lambda_c$ when $q\geq1+4/d$. Hence $I(\lambda)$ always admits finitely many positive radial minimizers.
It remains to prove that there can be at most finitely many $\lambda$'s for which uniqueness does not hold. Let us denote by $J_k$ all the disjoint closed intervals on which $M$ is increasing. By real-analyticity and the behavior of $M$ close to $0$ and $\mu_*$ from Theorems~\ref{thm:limit_mu_0} and~\ref{thm:limit_mu_star}, there are only finitely many such intervals.\footnote{When $q>1+4/(d-2)$ and $d\geq7$ we need to use again Remark~\ref{rmk:higher_derivatives} to ensure that $M'$ cannot change sign infinitely many times close to the origin. In any case we have $E(0)>0$ by Corollary~\ref{cor:Energy} and this region does not play any role in the rest of the argument.} Note that in each interval $J_k$ the derivative $M'$ can still vanish, but if it does, this can only happen at finitely many points by real-analyticity. In the rest of the argument we label the intervals $J_k$ in increasing order, that is, such that $\mu<\mu'$ for all $\mu\in J_k$ and all $\mu'\in J_{k+1}$. If $q\leq1+4/d$ then $J_1=[0,\mu_1]$ and $M'$ is positive close to the origin. On each $J_k$ we have a well defined continuous inverse $\mu_k:=M_{|J_k}^{-1}$ and the corresponding continuous energy $I_k(\lambda)=E(\mu_k(\lambda))$, $\lambda\in J'_k:=M(J_k)$. Since $M$ and $-E$ are increasing over $J_k$, $\mu_k$ is increasing and $I_k$ is decreasing over $J'_k$. The intervals $J'_k$ cover the whole interval $[\min M,\infty)\supset [\lambda_c,\infty)$ and it is clear from the existence of minimizers in Theorem~\ref{thm:prop_I_lambda} that \begin{equation} I(\lambda)=\min\Big\{0,\min_{k\,:\,\lambda\in J'_k} I_k(\lambda)\Big\}.
\label{eq:I_min_E_k} \end{equation} The function $\lambda\mapsto I_k(\lambda)$ is real-analytic on the open subset $M(\{M'>0\}\cap J_k)$ of $J'_k$ and a calculation shows that \begin{equation} I_k'(\lambda)=-\frac{\mu_k(\lambda)}{2}=-\frac{M_{|J_k}^{-1}(\lambda)}{2} \label{eq:derivative_E_k_lambda} \end{equation} on this set, that is, the derivative of the energy with respect to the mass constraint is proportional to the Lagrange multiplier. Using~\eqref{eq:derivative_E_k_lambda} or~\eqref{eq:relation_E_M} we see that $I_k$ is actually $C^1$ over the whole interval $J'_k$, with the relation~\eqref{eq:derivative_E_k_lambda} (but it need not be smoother in general). Note that since $\mu_k$ is increasing we deduce from~\eqref{eq:derivative_E_k_lambda} that $I_k$ is concave over $J'_k$. A last important remark is that due to~\eqref{eq:derivative_E_k_lambda} and the fact that the $J_k$ are disjoint ordered intervals, we see that $$\max I_{k+1}'<\min I'_k<0.$$ The slopes of $I_{k+1}$ are always more negative than the slopes of $I_k$ at any possible point. This property implies that two functions $I_k$ and $I_{k'}$ can cross at most once, with $I_{k'}$ strictly below $I_k$ on the right of the crossing point for $k'>k$, and conversely on the left. Thus there must be at most finitely many crossing points between the functions $I_k$ on $[\lambda_c,\infty)$. The function $I$ being the minimum of all these functions $I_k$ (and the constant function 0), we deduce that $I$ is always equal to exactly one of the $I_k$, except at finitely many points. This proves the statement that there is always only one minimizer, except at finitely many possible values of $\lambda$, where the $I_k$ cross and realize the minimum in~\eqref{eq:I_min_E_k}. 
At any such crossing point $\lambda_0$, we have $I(\lambda)=I_k(\lambda)$ for $\lambda\in(\lambda_0-\epsilon,\lambda_0)$ and $I(\lambda)=I_{k'}(\lambda)$ for $\lambda\in(\lambda_0,\lambda_0+\epsilon)$, where $k$ corresponds to the lowest possible multipliers (that is, the interval $J_k$ which is the furthest to the left) and $k'$ to the largest possible multipliers. This concludes the proof of Theorem~\ref{thm:uniqueness_I_lambda}. \end{proof} Note that $\lambda_c$ is exactly known when $q\leq 1+4/d$, but it is in principle not explicit for $q>1+4/d$. In case Conjecture~\ref{conjecture:M} holds true, then $\lambda_c=M(\mu_c')$ where $\mu_c'$ is the unique positive root of $E$, which is necessarily on the right of $\mu_c$, the unique point at which $M'(\mu_c)=0$. \medskip The rest of the paper is devoted to the proof of our other main results. \section{Proof of Theorem~\ref{thm:limit_mu_0} on the limit $\mu\to0$}\label{sec:proof_mu_0} The convergence of $u_\mu$ in all cases is proved in~\cite{MorMur-14}. We only discuss here the behavior of $M(\mu)$ and its derivative, which was not studied for all cases in~\cite{MorMur-14}. Recall that $M'(\mu)$ is given by~\eqref{eq:M'(mu)}. \subsection{$M$ and $M'$ in the sub-critical case} When $q<1+4/(d-2)$ the rescaled function $\tilde u_\mu$ in~\eqref{eq:rescaled_subcritical} solves \begin{equation}\label{rescaledeq} -\Delta \tilde u_\mu=-\mu^{\frac{p-q}{q-1}}\tilde u_\mu^p +\tilde u_\mu^q-\tilde u_\mu \end{equation} and it converges to the NLS solution $Q$. More precisely, the implicit function theorem gives \begin{equation} \norm{\tilde u_\mu-Q+\mu^{\frac{p-q}{q-1}}(\mathcal{L}_Q)_{\rm rad}^{-1}Q^p}_{H^1(\R^d)}=o\left(\mu^{\frac{p-q}{q-1}}\right) \label{eq:expansion_tilde_u_mu} \end{equation} where $\mathcal{L}_Q:=-\Delta-qQ^{q-1}+1$. Recall that the limiting NLS optimizer $Q$ is non-degenerate~\cite{Kwong-89,Weinstein-85} and therefore $0$ is in the resolvent set of the restriction of $\mathcal{L}_Q$ to the sector of radial functions.
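Let us record how the scaling works. Reading~\eqref{eq:rescaled_subcritical} as $\tilde u_\mu(x)=\mu^{-\frac1{q-1}}u_\mu\big(x/\sqrt{\mu}\big)$, that is $u_\mu(x)=\mu^{\frac1{q-1}}\tilde u_\mu(\sqrt{\mu}\,x)$, each term of~\eqref{powernl} carries the same power of $\mu$: $$-\Delta u_\mu+u_\mu^p-u_\mu^q+\mu u_\mu=\mu^{\frac{q}{q-1}}\left(-\Delta \tilde u_\mu+\mu^{\frac{p-q}{q-1}}\tilde u_\mu^p-\tilde u_\mu^q+\tilde u_\mu\right)\big(\sqrt{\mu}\,\cdot\big),$$ so that~\eqref{powernl} is indeed equivalent to~\eqref{rescaledeq}.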
Using the non-degeneracy of $(\mathcal{L}_Q)_{\rm rad}$ we can also add an exponential weight in the form \begin{equation} \norm{e^{c|x|}\left(\tilde u_\mu-Q+\mu^{\frac{p-q}{q-1}}(\mathcal{L}_Q)_{\rm rad}^{-1}Q^p\right)}_{L^\infty(\R^d)}=o\left(\mu^{\frac{p-q}{q-1}}\right) \label{eq:expansion_tilde_u_mu_exp} \end{equation} for all $c<\sqrt{\mu}$. Using~\eqref{eq:expansion_tilde_u_mu} we find after scaling \begin{align} M(\mu)&=\mu^{\frac{4+d-dq}{2(q-1)}}\int_{\R^d}\tilde u_\mu(x)^2\,dx \nonumber\\ &=\mu^{\frac{4+d-dq}{2(q-1)}}\int_{\R^d}Q(x)^2\,dx\nonumber\\ &\qquad -2\mu^{\frac{2(p-q)+4+d-dq}{2(q-1)}}\pscal{Q,(\mathcal{L}_Q)^{-1}_{\rm rad}Q^p}+o\left(\mu^{\frac{2(p-q)+4+d-dq}{2(q-1)}}\right).\label{eq:expansion_M_sub-critical} \end{align} In the NLS case we can compute $$\big(-\Delta-qQ^{q-1}+1\big)\left(\frac{rQ'}{2}+\frac{Q}{q-1}\right)=-Q$$ so that \begin{equation} (\mathcal{L}_Q)^{-1}_{\rm rad}Q=-\frac{rQ'}{2}-\frac{Q}{q-1} \label{eq:derivative_mu_NLS} \end{equation} and \begin{align} \pscal{Q,(\mathcal{L}_Q)^{-1}_{\rm rad}Q^p}&=\pscal{-\frac{rQ'}{2}-\frac{Q}{q-1},Q^p}\nonumber\\ &=-\frac{2(p+1)-d(q-1)}{2(p+1)(q-1)}\int_{\R^d}Q(x)^{p+1}\,dx. \label{eq:comput_Q_power} \end{align} Here we used the integration by parts $\int_{\R^d}rQ'\,Q^p\,dx=\int_{\R^d}x\cdot\nabla\frac{Q^{p+1}}{p+1}\,dx=-\frac{d}{p+1}\int_{\R^d}Q^{p+1}\,dx$. Inserting in~\eqref{eq:expansion_M_sub-critical} we find~\eqref{eq:limit_M_mu_0}. The derivative equals $$M'(\mu)=-2\mu^{\frac{4+d-dq}{2(q-1)}-1}\pscal{\tilde u_\mu,\tilde\mathcal{L}_1(\mu)^{-1}_{\rm rad}\tilde u_\mu}$$ where $$\tilde\mathcal{L}_1(\mu):=-\Delta+p\mu^{\frac{p-q}{q-1}}\tilde u_\mu^{p-1}-q\tilde u_\mu^{q-1}+1.$$ Due to the convergence of $\tilde u_\mu$ in $L^\infty(\R^d)$, the operator $\tilde\mathcal{L}_1(\mu)$ converges to $\mathcal{L}_Q$ in the norm resolvent sense. Since $0$ is in the resolvent set, we obtain the convergence $$\tilde\mathcal{L}_1(\mu)^{-1}_{\rm rad}\to (\mathcal{L}_Q)^{-1}_{\rm rad}=\big(-\Delta-qQ^{q-1}+1\big)^{-1}_{\rm rad}$$ in norm.
More precisely, from the resolvent expansion we have \begin{multline*} \bigg\|\tilde\mathcal{L}_1(\mu)^{-1}_{\rm rad}- (\mathcal{L}_Q)^{-1}_{\rm rad}\\ +\mu^{\frac{p-q}{q-1}}(\mathcal{L}_Q)^{-1}_{\rm rad}\Big(pQ^{p-1}+q(q-1)Q^{q-2}\delta\Big)(\mathcal{L}_Q)^{-1}_{\rm rad}\bigg\|=O\left(\mu^{2\frac{p-q}{q-1}}\right) \end{multline*} where $\delta=(\mathcal{L}_Q)^{-1}_{\rm rad}Q^p$ and where the function in the parenthesis is understood as a multiplication operator. Recall from~\eqref{eq:expansion_tilde_u_mu_exp} that $\delta$ decreases exponentially at the same rate as $Q$, hence $Q^{q-2}\delta$ tends to 0 at infinity even when $q<2$. This shows that \begin{align} \mu^{1-\frac{4+d-dq}{2(q-1)}}M'(\mu)&= -2\pscal{Q,(\mathcal{L}_Q)^{-1}_{\rm rad}Q} +2\mu^{\frac{p-q}{q-1}}\bigg\{2\pscal{Q,(\mathcal{L}_Q)_{\rm rad}^{-2}Q^p}\nonumber\\ &\qquad+p\pscal{(\mathcal{L}_Q)_{\rm rad}^{-1}Q,Q^{p-1}(\mathcal{L}_Q)_{\rm rad}^{-1}Q}\nonumber\\ &\qquad +q(q-1)\pscal{(\mathcal{L}_Q)_{\rm rad}^{-1}Q,Q^{q-2}\delta(\mathcal{L}_Q)_{\rm rad}^{-1}Q}\bigg\}+o\left(\mu^{\frac{p-q}{q-1}}\right).\label{eq:horrible_expansion_derivative_M} \end{align} By integration over $\mu$ and comparing with the expansion of $M(\mu)$, the two terms have to be given by the expression in~\eqref{eq:limit_M_derivative_mu_0}. For the first this is easy to check since by~\eqref{eq:derivative_mu_NLS}, we have $$-2\pscal{Q,(\mathcal{L}_Q)^{-1}_{\rm rad}Q}=2\pscal{Q,\frac{Q}{q-1}+\frac{rQ'}{2}}=\frac{4+d-dq}{2(q-1)}\int_{\R^d}Q(x)^2\,dx.$$ For the second term this is more cumbersome but the value can be verified as follows. 
Let us introduce the scaled function $$Q_\mu(x)=\mu^{\frac1{q-1}}Q(\sqrt\mu x)$$ which solves the equation $$-\Delta Q_\mu-Q_\mu^q+\mu Q_\mu=0.$$ Then we have $\partial_\mu Q_\mu=-(\mathcal{L}_{Q_\mu})^{-1}Q_\mu$ where $\mathcal{L}_{Q_\mu}=-\Delta-q Q_\mu^{q-1}+\mu$ and $$\int_{\R^d}\frac{Q_\mu^{p+1}}{p+1}=\mu^{\frac{2(p+1)-d(q-1)}{2(q-1)}}\int_{\R^d}\frac{Q^{p+1}}{p+1},$$ $$\partial_\mu\int_{\R^d}\frac{Q_\mu^{p+1}}{p+1}=\pscal{\partial_\mu Q_\mu,Q_\mu^p}=-\pscal{(\mathcal{L}_{Q_\mu})^{-1}Q_\mu,Q_\mu^p},$$ \begin{multline*} \partial^2_{\mu}\int_{\R^d}\frac{Q_\mu^{p+1}}{p+1}=2\pscal{(\mathcal{L}_{Q_\mu})^{-2}Q_\mu,Q_\mu^p}+p\pscal{(\mathcal{L}_{Q_\mu})^{-1}Q_\mu,Q_\mu^{p-1}(\mathcal{L}_{Q_\mu})^{-1}Q_\mu}\\ +q(q-1)\pscal{Q_\mu^{q-2}\big|(\mathcal{L}_{Q_\mu})^{-1}Q_\mu\big|^2,(\mathcal{L}_{Q_\mu})^{-1}Q_\mu^p}. \end{multline*} At $\mu=1$, differentiating the explicit power of $\mu$ above, we obtain $$-\pscal{(\mathcal{L}_{Q_\mu})^{-1}Q_\mu,Q_\mu^p}=\frac{2(p+1)-d(q-1)}{2(q-1)(p+1)}\int_{\R^d}Q^{p+1}$$ which is exactly~\eqref{eq:comput_Q_power} and \begin{multline*} 2\pscal{\mathcal{L}_Q^{-2}Q,Q^p}+p\pscal{\mathcal{L}_{Q}^{-1}Q,Q^{p-1}\mathcal{L}_{Q}^{-1}Q}+q(q-1)\pscal{Q^{q-2}\big|\mathcal{L}_{Q}^{-1}Q\big|^2,\delta}\\ =\frac{(2(p-1)+4+d-dq)(2(p-q)+4+d-dq)}{4(p+1)(q-1)^2}\int_{\R^d}Q^{p+1} \end{multline*} which gives exactly the equality between the second order terms in~\eqref{eq:horrible_expansion_derivative_M} and~\eqref{eq:limit_M_derivative_mu_0}. \subsection{$M$ in the super-critical case} We have $u_\mu\to u_0$ strongly in the homogeneous Sobolev space $\dot{H}^1(\R^d)$ and in $C^2(\R^d)$ by~\cite{MorMur-14}. The limit $u_0$ is unique~\cite{KwoMcLPelTro-92} and it is not in $L^2(\R^d)$ in dimensions $d=3,4$, therefore $M(\mu_n)$ cannot remain bounded along any sequence $\mu_n\to0$ and the limit~\eqref{eq:limit_M_mu_0_supercritical} follows in this case. Let us prove the strong convergence in $L^2(\R^d)$ when $d\geq5$. We use~\cite[Lem.
A.III]{BerLio-83} which says that $$u(x)\leq C_d\frac{\norm{\nabla u}_{L^2(\R^d)}}{|x|^{\frac{d-2}{2}}},\qquad |x|\geq 1$$ for a universal constant $C_d$. Due to the strong convergence of $u_\mu$ in $\dot{H}^1(\R^d)$, the gradient term is bounded and this gives $$\left(-\Delta-\frac{C}{|x|^{\frac{(d-2)(q-1)}2}}\right)u_\mu\leq (-\Delta+\mu+u_\mu^{p-1}-u_\mu^{q-1})u_\mu=0,\qquad|x|\geq1$$ for a constant $C$. We can also use that $$\left(-\Delta-\frac{C}{|x|^{\frac{(d-2)(q-1)}2}}\right)\frac1{|x|^{d-2-\epsilon}}=\left(\epsilon(d-2-\epsilon)-\frac{C}{|x|^{\frac{(d-2)(q-1)-4}2}}\right)\frac1{|x|^{d-\epsilon}}$$ (here we used $-\Delta|x|^{-s}=s(d-2-s)|x|^{-s-2}$ with $s=d-2-\epsilon$), which is positive for $|x|$ large enough since $q-1>4/(d-2)$. By the maximum principle on $\R^d\setminus B_R$, we deduce that \begin{equation} u_\mu(x)\leq \frac{u_\mu(R)R^{d-2-\epsilon}}{|x|^{d-2-\epsilon}},\qquad \forall |x|\geq R. \label{eq:estim_max_principle} \end{equation} When $\epsilon$ is small enough, the domination is in $L^2(\R^d)$ for $d\geq5$ and this shows, by the dominated convergence theorem, that $u_\mu\to u_0$ strongly in $L^2(\R^d)$. The behavior of $M'$ is discussed below in Section~\ref{sec:proof_super_critical}. \subsection{$M$ in the critical case} This case is more complicated and was studied at length in~\cite{MorMur-14}. The function $w_\mu$ in~\eqref{eq:rescaled_critical} satisfies the equation \begin{equation} -\Delta w_\mu +(\epsilon_\mu)^{\frac{(p-1)(d-2)-4}{2}}w_\mu^{p}-w_\mu^q+\frac\mu{\epsilon_\mu^2} w_\mu=0 \label{eq:effective_critical} \end{equation} for $\sqrt\mu\ll \epsilon_\mu\ll1$ as in~\eqref{eq:def_eps_mu} and we have $$M(\mu)=\frac{1}{\epsilon_\mu^{2}}\int_{\R^d}w_\mu(x)^2\,dx.$$ In dimensions $d=3,4$, we have $\|w_\mu\|_{L^2}\to\infty$ because the limit $S$ is not in $L^2$. In dimensions $d\geq5$, it is proved in~\cite[Lem.~4.8, Cor.~1.14]{MorMur-14} that $\|w_\mu\|_{L^2}\sim1$ and this implies $M(\mu)\to\infty$ in all cases.
The same argument as in~\eqref{eq:estim_max_principle} actually gives $w_\mu\to S$ in $L^2(\R^d)$ for $d\geq5$. \subsection{An upper bound on $M'$} Next we derive an upper bound on $M'(\mu)$ following ideas from~\cite{KilOhPocVis-17}. \begin{lemma}[An estimate on $M'(\mu)$]\label{lem:M_derivative} Let $d\geq2$ and $p>q>1$. Then we have \begin{multline} \frac{M'(\mu)}{2}\left(\frac{(p-1)(p-q)(d-2)}{p+1}\beta(\mu)+\frac{2}{d}\big(d+2-(d-2)q\big)\right)\\ <\frac{dM(\mu)^2}{2T(\mu)}\left(\frac{d(p-1)(p-q)}{2(p+1)}\beta(\mu)+1+\frac4d -q\right) \label{eq:estim_M_derivative} \end{multline} for $$\begin{cases} \text{all $\mu>0$ if $d=2$ or if $d\geq3$ and $q\leq 1+\frac{4}{d-2}$,}\\ \text{small enough $\mu>0$ if $d\geq3$ and $q>1+\frac{4}{d-2}$,}\\ \end{cases}$$ where $$T(\mu)=\int_{\R^d}|\nabla u_\mu(x)|^2\,dx,\qquad \beta(\mu)=T(\mu)^{-1}\int_{\R^d}u_\mu(x)^{p+1}\,dx.$$ \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:M_derivative}] The Pohozaev identity~\eqref{pohozaev} gives \begin{align} \frac{d-2}{2d}T(\mu)&=-\frac1{p+1}\int_{\R^d} u_\mu(x)^{p+1}\,dx+\frac1{q+1}\int_{\R^d} u_\mu(x)^{q+1}\,dx\nonumber\\ &\qquad -\frac\mu2\int_{\R^d}u_\mu(x)^2\,dx\nonumber\\ &=-\frac{T(\mu)\beta(\mu)}{p+1}+\frac1{q+1}\int_{\R^d} u_\mu(x)^{q+1}\,dx-\frac{\mu M(\mu)}2 \label{eq:Pohozaev_2PNLS} \end{align} and taking the scalar product with $u$ in~\eqref{powernl} we find \begin{align} T(\mu)&=-\int_{\R^d} u_\mu(x)^{p+1}\,dx+\int_{\R^d} u_\mu(x)^{q+1}\,dx -\mu\int_{\R^d}u_\mu(x)^2\,dx\nonumber\\ &=-T(\mu)\beta(\mu)+\int_{\R^d} u_\mu(x)^{q+1}\,dx -\mu M(\mu). \end{align} This gives the relation \begin{equation} \frac{p-q}{(p+1)(q+1)}\beta(\mu)=\frac{d-2}{2d}-\frac{1}{q+1}+\frac{(q-1)}{2(q+1)}\frac{\mu\,M(\mu)}{T(\mu)}. \label{eq:formule_beta} \end{equation} When $d\geq3$ and $q>1+4/(d-2)$ we have \begin{equation} \lim_{\mu\searrow0}\beta(\mu)=\frac{(p+1)(d-2)}{2d(p-q)}\left(q-1-\frac{4}{d-2}\right):=\beta(0). 
\label{eq:beta_0} \end{equation} In all the other cases we have $\beta(\mu)\to0$ when $\mu\searrow0$. More precisely, we have $$\beta(\mu)=O\left(\mu^{\frac{p-q}{q-1}}\right)\to0\qquad\text{for $d=2$, or $d\ge3$ and $q<1+\frac{4}{d-2}$}$$ and $$\beta(\mu)=O\left(\epsilon_\mu^{\frac{d-2}{2}(p-q)}\right)\to0\qquad\text{for $d\ge3$ and $q=1+\frac4{d-2}$}.$$ Next we compute the symmetric matrix $L_{ij}$ of the restriction of the operator $\mathcal{L}_\mu$ to the finite dimensional space spanned by $u_\mu$, $\partial_\mu u_\mu$ and $$\frac{x\cdot\nabla+\nabla\cdot x}{2}u_\mu=\left(x\cdot\nabla+\frac{d}2\right)u_\mu=ru_\mu'+\frac{d}{2}u_\mu.$$ Some tedious but simple computations using the above relations give $$L_{11}:=\pscal{\partial_\mu u_\mu,\mathcal{L}_\mu \partial_\mu u_\mu}=-\frac{M'(\mu)}{2},\qquad L_{12}:=\pscal{\partial_\mu u_\mu,\mathcal{L}_\mu u_\mu}=-M(\mu),$$ $$L_{22}:=\pscal{u_\mu,\mathcal{L}_\mu u_\mu}=\left(\frac{(p-1)(p-q)}{p+1}\beta(\mu)-\frac{2(q+1)}{d}\right)T(\mu),$$ $$L_{13}:=\pscal{ru_\mu'+\frac{d}{2}u_\mu,\mathcal{L}_\mu \partial_\mu u_\mu}=0,$$ $$L_{23}:=\pscal{ru_\mu'+\frac{d}{2}u_\mu,\mathcal{L}_\mu u_\mu}=\left(\frac{d(p-1)(p-q)}{2(p+1)}\beta(\mu)-(q-1)\right)T(\mu)$$ and \begin{align*} L_{33}&:=\pscal{ru_\mu'+\frac{d}{2}u_\mu,\mathcal{L}_\mu \left(ru_\mu'+\frac{d}{2}u_\mu\right)}\\ &=\frac{d}2\left(\frac{d(p-1)(p-q)}{2(p+1)}\beta(\mu)+1+\frac4d -q\right)T(\mu). \end{align*} Note that $L_{22}<0$ for $\mu$ small enough when $d=2$ or when $d\ge 3$ and $q\leq 1+4/(d-2)$. We have \begin{multline*} \det\begin{pmatrix} L_{22}&L_{23}\\ L_{32}&L_{33}\end{pmatrix}\\ =T(\mu)^2\left(-\frac{(p-1)(p-q)(d-2)}{p+1}\beta(\mu)+\frac{2}d \left(q(d-2)-d-2\right)\right). \end{multline*} This is always negative in the sub-critical and critical case. 
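For the reader's convenience, this determinant can be expanded by hand. Writing $a:=\frac{(p-1)(p-q)}{p+1}\beta(\mu)$ and $T:=T(\mu)$, the entries are $L_{22}=\big(a-\frac{2(q+1)}{d}\big)T$, $L_{23}=\big(\frac{d}{2}a-(q-1)\big)T$ and $L_{33}=\frac{d}{2}\big(\frac{d}{2}a+1+\frac4d-q\big)T$, so that $$\frac{L_{22}L_{33}-L_{23}^2}{T^2}=\frac{d^2a^2}{4}+(2-dq)a-(q+1)\left(1+\frac4d-q\right)-\frac{d^2a^2}{4}+d(q-1)a-(q-1)^2=-(d-2)a+\frac{2}{d}\big(q(d-2)-d-2\big),$$ which is the formula displayed above.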
In the super-critical case, we obtain from~\eqref{eq:beta_0} that \begin{equation} \det\begin{pmatrix} L_{22}&L_{23}\\ L_{32}&L_{33}\end{pmatrix}\\ \underset{\mu\searrow0}\sim-T(\mu)^2\frac{\big(q(d-2)-d-2\big)\big(p(d-2)-d-2\big)}{2d}<0. \label{eq:determinant_small} \end{equation} Since $\mathcal{L}_\mu$ has a unique negative eigenvalue, we conclude that the full determinant is negative: \begin{align*} 0&>\det(L)\\ &=\frac{M'(\mu)}{2}\left(\frac{(p-1)(p-q)(d-2)}{p+1}\beta(\mu)+\frac{2}{d}\big(d+2-(d-2)q\big)\right)T(\mu)^2\\ &\qquad -\frac{d}2M(\mu)^2\left(\frac{d(p-1)(p-q)}{2(p+1)}\beta(\mu)+1+\frac4d -q\right)T(\mu). \end{align*} This gives the estimate~\eqref{eq:estim_M_derivative}. \end{proof} In the critical case $q=1+4/(d-2)$,~\eqref{eq:estim_M_derivative} gives for $\mu$ small enough $$M'(\mu)<-c\frac{M(\mu)^2}{T(\mu)\beta(\mu)}\underset{\mu\searrow0}{\sim} -c\begin{cases} \epsilon_\mu^{-\frac{(d-2)p-d-2}{2}}M(\mu)^2&\text{for $d=3,4$,}\\ \epsilon_\mu^{-\frac{(d-2)(p-1)+4}{2}}&\text{for $d\geq5$.} \end{cases} $$ We obtain $M'(\mu)\to-\infty$ as was claimed in the statement. In the super-critical case $q>1+4/(d-2)$, we similarly find $M'(\mu)\to-\infty$ in dimensions $d=3,4$. In dimensions $d\geq5$, using~\eqref{eq:beta_0} we find $M'(\mu)<0$ for $\mu$ small enough whenever $$p<1+\frac{4}{d-2}+\frac{32}{d(d-2)\big(q(d-2)-(d+2)\big)}.$$ \subsection{$M'$ in the super-critical case}\label{sec:proof_super_critical} This last section is devoted to the study of $M'$ in the super-critical case. We have seen that $u_\mu\to u_0$ in $\dot{H}^1(\R^d)\cap L^\infty(\R^d)$, the unique radial-decreasing solution to the equation $$-\Delta u_0+u_0^p-u_0^q=0.$$ In addition, the convergence holds in $L^s(\R^d)$ for all $s>d/(d-2)$ since $u_\mu(r)\leq Cr^{2-d-\epsilon}$ at infinity. This includes $L^2(\R^d)$ only in dimensions $d\geq5$. 
In Lemma~\ref{lem:non-degenerate_mu_0} in Appendix~\ref{app:u_0}, we show that the limiting linearized operator $$\mathcal{L}_0:=-\Delta +p(u_0)^{p-1}-q(u_0)^{q-1}$$ has the kernel $$\ker(\mathcal{L}_0)={\rm span}\left\{\partial_{x_1}u_0,...,\partial_{x_d}u_0\right\},$$ which contains no nontrivial radial function. This allows us to define $(\mathcal{L}_0)_{\rm rad}^{-1}$ by the functional calculus. Note that $\mathcal{L}_0$ admits exactly one negative eigenvalue, since $$\pscal{u'_0,\mathcal{L}_0u_0'}=-(d-1)\int_{\R^d}\frac{u_0'(x)^2}{|x|^2}\,dx<0$$ and since it is the norm-resolvent limit of $\mathcal{L}_\mu$, which has exactly one negative eigenvalue. In particular, we see that $(\mathcal{L}_0)_{\rm rad}^{-1}$ has one negative eigenvalue and is otherwise positive and unbounded from above. Our first step is to prove that its quadratic form domain is the same as for the free Laplacian, in sufficiently high dimensions. In the whole section we assume $d\geq5$ for simplicity, although some parts of our proof apply to $d\in\{3,4\}$. \begin{lemma}[Quadratic form domain of $(\mathcal{L}_0)_{\rm rad}^{-1}$]\label{lem:quadratic_form_domain_inverse_L} Let $d\geq5$ and $p>q>1+4/(d-2)$. There exists a constant $C>0$ such that $$\frac1C(-\Delta)_{\rm rad}^{-1}-C\leq (\mathcal{L}_0)_{\rm rad}^{-1}\leq C(-\Delta)_{\rm rad}^{-1}$$ in the sense of quadratic forms. \end{lemma} \begin{proof} In the proof we remove the index `rad' and use the convention that all the operators are restricted to the sector of radial functions. Note that the following arguments work the same on $L^2(\R^d)$ for a general potential $V$ such that $-\Delta+V$ has no zero eigenvalue. But this is of course not the case for our linearized operator $\mathcal{L}_0$ which always has $\partial_{x_j}u_0$ in its kernel. Introducing the notation $V_0:=pu_0^{p-1}-qu_0^{q-1}$ for the external potential, we start with the relation \begin{equation} \mathcal{L}_0=(-\Delta)^{\frac12}\left(1+(-\Delta)^{-\frac12}V_0(-\Delta)^{-\frac12}\right)(-\Delta)^{\frac12}.
\label{eq:Birman-Schwinger-type} \end{equation} Recalling that $u_0(r)\sim C_0r^{2-d}$ at infinity and that $q>1+4/(d-2)$, we obtain \begin{equation} V_0\in L^s(\R^d),\qquad \forall s>\frac{d}{(d-2)(q-1)}>\frac{d}{4}. \label{eq:prop_V_super_critical} \end{equation} In particular we have $V_0\in L^{d/2}(\R^d)$. From the Hardy-Littlewood-Sobolev (HLS) inequality~\cite{LieLos-01}, we then know that the operator \begin{equation} K:=(-\Delta)^{-\frac12}V_0(-\Delta)^{-\frac12} \label{eq:def_K} \end{equation} is self-adjoint and compact on $L^2(\R^d)$, with $\norm{K}\leq C\norm{V_0}_{L^{d/2}(\R^d)}.$ The fact that $\ker(\mathcal{L}_0)_{\rm rad}=\{0\}$ is equivalent to the property that $-1\notin \sigma\left(K\right)$ where we recall that $K$ is here only considered within the sector of radial functions. Let $g$ be a radial eigenfunction of $K$, corresponding to a discrete eigenvalue $\lambda\neq0$, $$Kg=(-\Delta)^{-\frac12}V_0(-\Delta)^{-\frac12}g=\lambda g.$$ Multiplying by $(-\Delta)^{-1}$ on the left we find \begin{equation} (-\Delta)^{-1}g=\lambda^{-1} (-\Delta)^{-\frac32}V_0(-\Delta)^{-\frac12}g. \label{eq:estim_eigenfn_K} \end{equation} Using this time $d>4$ and again the HLS inequality, we see that the operator $(-\Delta)^{-3/2}V_0(-\Delta)^{-1/2}$ is compact with $$\norm{(-\Delta)^{-\frac32}V_0(-\Delta)^{-\frac12}}\leq C\norm{V_0}_{L^{d/4}(\R^d)}.$$ This proves that $(-\Delta)^{-1}g\in L^2(\R^d)$. Next we go back to~\eqref{eq:Birman-Schwinger-type}. Since $\mathcal{L}_0$ has one negative eigenvalue, this implies that $K$ has at least one eigenvalue $<-1$, for otherwise $\mathcal{L}_0$ would be positive by~\eqref{eq:Birman-Schwinger-type}. On the other hand, $K$ cannot have two eigenvalues $<-1$ otherwise after testing against this subspace and using that the corresponding eigenfunctions are in the domain of $(-\Delta)^{-1/2}$, we would find that $\mathcal{L}_0$ is negative on a subspace of dimension 2. 
We conclude that $K$ has exactly one simple eigenvalue $\nu_1<-1$ within the radial sector and call the corresponding normalized eigenfunction $g_1$. Next we invert~\eqref{eq:Birman-Schwinger-type} and obtain \begin{equation} (\mathcal{L}_0)_{\rm rad}^{-1}=(-\Delta)^{-\frac12}\left(1+K\right)^{-1}(-\Delta)^{-\frac12}. \label{eq:inverting_L_0} \end{equation} We have $$\frac{1}{1+\nu_2}-\left(\frac{1}{1+\nu_2}-\frac{1}{1+\nu_1}\right)|g_1\rangle\langle g_1|\leq \frac1{1+K}\leq \frac{1}{1+\nu_2}$$ with $\nu_2=\min\sigma(K)\setminus\{\nu_1\}>-1$. This gives \begin{equation} \frac{(-\Delta)^{-1}}{1+\nu_2}-c\norm{(-\Delta)^{-\frac12}g_1}^{2}\leq (\mathcal{L}_0)_{\rm rad}^{-1}\leq \frac{(-\Delta)^{-1}}{1+\nu_2} \label{eq:compare_L_Delta_proof} \end{equation} with $$c=\frac{1}{1+\nu_2}-\frac{1}{1+\nu_1}>0.$$ The norm on the left of~\eqref{eq:compare_L_Delta_proof} is finite since we have even proved that $(-\Delta)^{-1}g_1\in L^2(\R^d)$ and this concludes the proof of the lemma. One can actually show that the domains of $(\mathcal{L}_0)^{-1}_{\rm rad}$ and $(-\Delta)^{-1}_{\rm rad}$ coincide, not just the quadratic form domains, but this is not needed in our argument. \end{proof} Next we turn to the upper bound on $M'(\mu)$. \begin{lemma}[Upper bound on the limit of $M'(\mu)$] Let $d\geq5$ and $p>q>1+{4}/{(d-2)}$. Then we have \begin{equation} \limsup_{\mu\searrow0}M'(\mu)\leq -2\pscal{u_0,(\mathcal{L}_0)^{-1}_{\rm rad}u_0}\in[-\infty,\infty), \label{eq:limsup_derivative_super_critical} \end{equation} interpreted in the sense of quadratic forms. In dimensions $d=5,6$ the right side equals $-\infty$ whereas it is finite in dimensions $d\geq7$. 
\end{lemma} From the proof of Lemma~\ref{lem:quadratic_form_domain_inverse_L}, the proper interpretation of the quadratic form on the right side of~\eqref{eq:limsup_derivative_super_critical} is $$\pscal{u_0,(\mathcal{L}_0)^{-1}_{\rm rad}u_0}:=\pscal{(-\Delta)^{-\frac12}u_0,(1+K)^{-1}(-\Delta)^{-\frac12}u_0}$$ where $K$ is given by~\eqref{eq:def_K}. \begin{proof} We start with \begin{align*} M'(\mu)= -2\pscal{u_\mu,(\mathcal{L}_\mu)^{-1}_{\rm rad}u_\mu}&=-\frac{2|\pscal{f_{1,\mu},u_\mu}|^2}{\lambda_1(\mu)} -2\pscal{u_\mu,(\mathcal{L}_\mu)_+^{-1}u_\mu}\\ &\leq-\frac{2|\pscal{f_{1,\mu},u_\mu}|^2}{\lambda_1(\mu)} -2\pscal{u_\mu,(\mathcal{L}_\mu+\epsilon)_+^{-1}u_\mu}. \end{align*} Here $f_{1,\mu}$ is the unique eigenfunction corresponding to the negative eigenvalue $\lambda_1(\mu)$ of $\mathcal{L}_\mu$ and $\epsilon$ is a small fixed number, chosen so that $\epsilon\leq |\lambda_1(\mu)|/2$. From the convergence of $\mathcal{L}_\mu$ in the norm-resolvent sense and the strong convergence of $u_\mu$ in $L^2(\R^d)$, we obtain in the limit $$\limsup_{\mu\searrow0} M'(\mu)\leq -\frac{2|\pscal{f_{1},u_0}|^2}{\lambda_1} -2\pscal{u_0,(\mathcal{L}_0+\epsilon)_+^{-1}u_0}$$ with of course $\mathcal{L}_0f_1=\lambda_1 f_1$ and $\lambda_1<0$. Letting finally $\epsilon\to0^+$, this gives the limit~\eqref{eq:limsup_derivative_super_critical}. From Lemma~\ref{lem:quadratic_form_domain_inverse_L} we know that the right side of~\eqref{eq:limsup_derivative_super_critical} is finite if and only if $\|(-\Delta)^{-1/2}u_0\|_{L^2}$ is finite. This turns out to be infinite in dimensions $d=5,6$ and finite in larger dimensions. The reason is the following. 
Since $r^{d-2}u_0(r)\to C_0>0$ at infinity, we can write $u_0=u_0\1_{B_1}+u_0\1_{\R^d\setminus B_1}:=u_1+u_2$ where $u_1\in (L^1\cap L^\infty)(\R^d)$ and $$\frac{c\1(|x|\geq1)}{|x|^{d-2}}\leq u_2(x)\leq \frac{C\1(|x|\geq1)}{|x|^{d-2}}.$$ But $$\norm{(-\Delta)^{-\frac12}u_0}^2_{L^2(\R^d)}=c \iint_{\R^d\times\R^d}\frac{u_0(x)u_0(y)}{|x-y|^{d-2}}\,dx\,dy$$ and the terms involving $u_1$ are finite by the HLS inequality, whereas the term involving $u_2$ twice is comparable to $$\iint_{\substack{|x|\geq1\\ |y|\geq1}}\frac{dx\,dy}{|x|^{d-2}|y|^{d-2}|x-y|^{d-2}}=\begin{cases} +\infty&\text{when $d=5,6$,}\\ <\infty&\text{when $d\geq7$.} \end{cases} $$ The dichotomy follows from a power count: the region where $|x|\sim|y|\sim\lambda$ contributes a term of order $\lambda^{2d-3(d-2)}=\lambda^{6-d}$, which is summable over dyadic scales $\lambda=2^k$ if and only if $d\geq7$. This concludes the proof of the lemma. \end{proof} We are left with showing the limit in dimensions $d\geq7$. \begin{lemma}[Limit of $M'(\mu)$ in dimensions $d\geq7$] Let $d\geq7$ and $p>q>1+{4}/{(d-2)}$. Then we have \begin{equation} \lim_{\mu\searrow0}M'(\mu)= -2\pscal{u_0,(\mathcal{L}_0)^{-1}_{\rm rad}u_0}. \label{eq:lim_derivative_super_critical} \end{equation} \end{lemma} \begin{proof} Similarly to~\eqref{eq:Birman-Schwinger-type} we can write \begin{equation} \mathcal{L}_\mu=(-\Delta+\mu)^{\frac12}\left(1+K_\mu\right)(-\Delta+\mu)^{\frac12} \label{eq:Birman-Schwinger-type_mu} \end{equation} where \begin{align*} K_\mu&:=(-\Delta+\mu)^{-\frac12}\left(pu_\mu^{p-1}-qu_\mu^{q-1}\right)(-\Delta+\mu)^{-\frac12}\\ &=\left(\frac{-\Delta}{-\Delta+\mu}\right)^{\frac12}(-\Delta)^{-\frac12}\left(pu_\mu^{p-1}-qu_\mu^{q-1}\right)(-\Delta)^{-\frac12}\left(\frac{-\Delta}{-\Delta+\mu}\right)^{\frac12}. \end{align*} Since $pu_\mu^{p-1}-qu_\mu^{q-1}\to pu_0^{p-1}-qu_0^{q-1}=V_0$ in $L^{d/2}(\R^d)$, we have $$(-\Delta)^{-\frac12}\left(pu_\mu^{p-1}-qu_\mu^{q-1}\right)(-\Delta)^{-\frac12}\to K=(-\Delta)^{-\frac12}V_0(-\Delta)^{-\frac12}$$ in operator norm, by the HLS inequality. On the other hand, the operator $(-\Delta)^{1/2}(-\Delta+\mu)^{-1/2}$ is bounded by 1 and converges to the identity strongly.
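Both properties of the operator $(-\Delta)^{1/2}(-\Delta+\mu)^{-1/2}$ are elementary consequences of the spectral theorem; here is a short sketch. Writing $f_\mu(t):=\sqrt{t/(t+\mu)}$ for $t\geq0$, the operator in question is $f_\mu(-\Delta)$. Since $0\leq f_\mu\leq1$, it is a contraction, whereas $f_\mu(t)\to1$ pointwise as $\mu\searrow0$ gives, for every fixed $\psi\in L^2(\R^d)$,
$$\norm{\big(f_\mu(-\Delta)-1\big)\psi}_{L^2(\R^d)}^2=\int_0^\infty\big(f_\mu(t)-1\big)^2\,{\rm d}\norm{E_t\psi}^2\underset{\mu\searrow0}{\longrightarrow}0$$
by dominated convergence, where $(E_t)_{t\geq0}$ denotes the spectral resolution of $-\Delta$.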
Since $K$ is compact, this shows that $K_\mu\to K$ in operator norm. In particular, the spectrum of $K_\mu$ converges to that of $K$ and, since $1+K$ is invertible, we conclude that $(1+K_\mu)^{-1}$ is bounded and converges in norm towards $(1+K)^{-1}$. This allows us to invert~\eqref{eq:Birman-Schwinger-type_mu} and obtain $$(\mathcal{L}_\mu)_{\rm rad}^{-1}=(-\Delta+\mu)^{-\frac12}\left(1+K_\mu\right)^{-1}(-\Delta+\mu)^{-\frac12}$$ as well as $$M'(\mu)=-2\pscal{(-\Delta+\mu)^{-\frac12}u_\mu,\big(1+K_\mu\big)^{-1}(-\Delta+\mu)^{-\frac12}u_\mu}.$$ From the HLS inequality we have \begin{align*} &\norm{(-\Delta+\mu)^{-\frac12}u_\mu- (-\Delta)^{-\frac12}u_0}_{L^2(\R^d)}\\ &\leq \norm{(-\Delta+\mu)^{-\frac12}(u_\mu-u_0)}_{L^2(\R^d)}+\norm{\big((-\Delta+\mu)^{-\frac12}- (-\Delta)^{-\frac12}\big)u_0}_{L^2(\R^d)}\\ &\leq C\norm{u_\mu-u_0}_{L^{\frac{2d}{d+2}}(\R^d)}+\norm{\big((-\Delta+\mu)^{-\frac12}- (-\Delta)^{-\frac12}\big)u_0}_{L^2(\R^d)} \end{align*} which tends to zero since $u_\mu$ converges in $L^{\frac{2d}{d+2}}(\R^d)$ and $(-\Delta)^{-1/2}u_0\in L^2(\R^d)$ for $d\geq7$. Thus we can pass to the limit and obtain \begin{equation*} \lim_{\mu\searrow0}M'(\mu)=-2\pscal{(-\Delta)^{-\frac12}u_0,\big(1+K\big)^{-1}(-\Delta)^{-\frac12}u_0}, \end{equation*} where the right side is the definition of the quadratic form of $(\mathcal{L}_0)^{-1}_{\rm rad}$. \end{proof} \begin{remark}[Higher derivatives]\label{rmk:higher_derivatives} A calculation shows that \begin{equation} M''(\mu)=6\int_{\R^d}\delta_\mu^2-2p(p-1)\int_{\R^d}\delta_\mu^3u_\mu^{p-2}+2q(q-1)\int_{\R^d}\delta_\mu^3u_\mu^{q-2} \label{eq:derivee_seconde} \end{equation} with $\delta_\mu:=(\mathcal{L}_\mu)^{-1}_{\rm rad}u_\mu$. From the ODE we have $\delta_\mu(r)\leq Cr^{4-d+\epsilon}$ for $r\geq1$ and this can be used to show that the two terms involving $\delta_\mu^3$ converge in dimensions $d\geq7$. In dimensions $d\in\{7,8\}$ the first term has to diverge because $\delta_0\notin L^2(\R^d)$. 
Thus we have $$\lim_{\mu\searrow0}M''(\mu)=\begin{cases} +\infty&\text{if $d\in\{7,8\}$,}\\ M''(0)\in\R&\text{if $d\geq9$.} \end{cases}$$ By induction, it is possible to prove that $(-1)^kM^{(k)}(\mu)\to+\infty$ for a sufficiently large $k$ in any dimension $d\geq3$. \end{remark} This concludes the proof of Theorem~\ref{thm:limit_mu_0}.\qed \section{Proof of Theorem~\ref{thm:limit_mu_star} on the limit $\mu\to\mu_*$}\label{sec:proof_mu_star} Throughout the proof we set $G_*:=G_{\mu_*}$ and $g_*:=g_{\mu_*}$. \subsection{Local convergence} Our first step is to prove that $u_\mu$ almost satisfies the one-dimensional equation of $U_*$ and to establish the local convergence $u_\mu(r)\to\beta_*$ for any fixed $r$, which was claimed after Theorem~\ref{thmpowernl}. \begin{lemma}[Local convergence] We have \begin{equation} \norm{(u_\mu')^2+2G_*(u_\mu)}_{L^\infty(\R_+)}\leq \mu_*-\mu, \label{eq:limit_eq} \end{equation} \begin{equation} \norm{u_\mu'+\sqrt{-2G_*(u_\mu)}}_{L^\infty(\R_+)}\leq \sqrt{\mu_*-\mu}, \label{eq:estim_square_root} \end{equation} and \begin{equation} \lim_{\mu\to\mu_*}u_\mu(r)=\beta_*,\qquad \lim_{\mu\to\mu_*}u'_\mu(r)=0, \end{equation} uniformly on any compact interval $[0,R]$. \end{lemma} We recall that $G_*$ is non-positive on $\R_+$ by definition of $\mu_*$. \begin{proof} We start with the ODE \begin{equation} u_\mu''+(d-1)\frac{u_\mu'}{r}+g_\mu(u_\mu)=0. \label{eq:ODE} \end{equation} Multiplying by $u'_\mu$, we find $$\frac{(u_\mu')^2}{2}+(d-1)\int_0^r\frac{u_\mu'(s)^2}{s}\,ds+G_\mu(u_\mu)=G_\mu(u_\mu(0)).$$ Letting $r\to\infty$ and using that $u_\mu$ and $u'_\mu$ vanish at infinity, this gives \begin{equation} (d-1)\int_0^\infty\frac{u_\mu'(s)^2}{s}\,ds=G_\mu(u_\mu(0))=G_*(u_\mu(0))+\frac{\mu_*-\mu}{2}u_\mu(0)^2\leq\frac{\mu_*-\mu}{2} \label{eq:limit_d_term} \end{equation} since $G_*:=G_{\mu_*}\leq0$ and $0\leq u_\mu\leq1$. We can also rewrite the equation in the form \begin{equation} \frac{(u_\mu')^2}{2}+G_*(u_\mu)=(d-1)\int_r^\infty\frac{u_\mu'(s)^2}{s}\,ds-\frac12(\mu_*-\mu)u_\mu^2.
\label{eq:integrated_ODE} \end{equation} Due to~\eqref{eq:limit_d_term} we obtain~\eqref{eq:limit_eq}. Noticing that $|a^2-b^2|\leq \epsilon^2$ implies $|a-b|\leq \epsilon$ whenever $a,b\geq0$, we obtain~\eqref{eq:estim_square_root}. We also have \begin{equation} |u_\mu(r)-u_\mu(0)|\leq \int_0^r|u'_\mu(s)|\,ds\leq \frac{r}{\sqrt2}\left(\int_0^r\frac{u'_\mu(s)^2}{s}\,ds\right)^{\frac12}\leq \frac{r\sqrt{\mu_*-\mu}}{2\sqrt{d-1}} \label{eq:estim_uniform_local} \end{equation} and therefore we obtain the local convergence of $u_\mu$ to $\beta_*$, uniformly on any compact interval $[0,R]$. For the derivative we use~\eqref{eq:estim_square_root}. \end{proof} \subsection{Convergence of $u_\mu(\cdot+R_\mu)$} Next we look at $u_\mu$ much further away. We fix $\gamma\in(0,\beta_*)$ and define $R_\mu$ as in the statement by the condition that $u_\mu(R_\mu)=\gamma$, for $\mu$ close enough to $\mu_*$. We then introduce $$v_\mu(r):=u_\mu(R_\mu+r),\qquad r\in[-R_\mu,\infty).$$ From~\eqref{eq:estim_uniform_local} we know that \begin{equation} R_\mu\geq \frac{2\sqrt{d-1}(u_\mu(0)-\gamma)}{\sqrt{\mu_*-\mu}}\to\infty. \label{eq:lower_bd_R_mu} \end{equation} The function $v_\mu$ satisfies a relation similar to~\eqref{eq:integrated_ODE}. It is uniformly bounded together with its derivative and we can pass to the uniform local limit $\mu\to\mu_*$, possibly after extraction of a subsequence. We obtain in the limit that $U_*=\lim_{\mu\to\mu_*}v_\mu$ solves~\eqref{eq:U}. That is, $U_*$ is the unique unstable solution of the $d=1$ Hamiltonian system, linking the two stationary points $\beta_*$ and $0$ and passing through $\gamma$ at $r=0$. More precisely, since $U_*'<0$ we have $$-\frac{U_*'}{\sqrt{2|G_*(U_*)|}}=1.$$ Therefore $$U_*(r)=\Psi^{-1}(r),\qquad \Psi(v)=-\int_{\gamma}^{v}\frac{ds}{\sqrt{2|G_*(s)|}}.$$ Note that $\Psi$ diverges logarithmically at 0 and $\beta_*$, so that $U_*$ converges exponentially fast towards $\beta_*$ at $-\infty$ and $0$ at $+\infty$.
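To quantify the last remark, one can use the expansions $G_*(s)=-\frac{\mu_*}{2}s^2+o(s^2)$ at $s=0$ and $G_*(s)=\frac{g_*'(\beta_*)}{2}(s-\beta_*)^2+o\big((s-\beta_*)^2\big)$ at $s=\beta_*$, which follow from $G_*'=g_*$ together with the vanishing of $g_*$ and $G_*$ at these two points. Near $v=0$ this gives $\sqrt{2|G_*(s)|}\simeq\sqrt{\mu_*}\,s$ and therefore
$$\Psi(v)=-\int_{\gamma}^{v}\frac{ds}{\sqrt{2|G_*(s)|}}\simeq\frac{1}{\sqrt{\mu_*}}\log\frac{\gamma}{v}\underset{v\to0^+}{\longrightarrow}+\infty,$$
so that $U_*(r)=\Psi^{-1}(r)$ decays like $e^{-\sqrt{\mu_*}\,r}$ as $r\to+\infty$. The same computation at the other zero shows that $\beta_*-U_*(r)$ decays like $e^{-\sqrt{|g_*'(\beta_*)|}\,|r|}$ as $r\to-\infty$.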
To summarize the situation, we have for every $h>0$ \begin{multline} \lim_{\mu\to\mu_*}\norm{u_\mu-U_*(\cdot-R_\mu)}_{L^\infty(R_\mu-h,R_\mu+h)}\\ =\lim_{\mu\to\mu_*}\norm{u'_\mu-U_*'(\cdot-R_\mu)}_{L^\infty(R_\mu-h,R_\mu+h)}=0 \label{eq:CV_Lii_local} \end{multline} where the convergence of the derivatives follows from~\eqref{eq:estim_square_root}. In the next lemma we derive pointwise bounds on $v_\mu=u_\mu(\cdot+R_\mu)$ and its derivatives which will later allow us to improve the limit~\eqref{eq:CV_Lii_local}. \begin{lemma}[Pointwise exponential bounds]\label{lem:exp_bounds} We have the bounds \begin{equation} v_\mu(r)\begin{cases} \displaystyle\leq Ce^{-c|r|}&\text{on $[0,\infty)$,}\\[0.3cm] \displaystyle\geq \beta_\mu-Ce^{-c|r|}&\text{on $[-R_\mu,0]$,}\\ \end{cases} \label{eq:exp_decay} \end{equation} and \begin{equation} |v'_\mu(r)|+|v''_\mu(r)|\leq Ce^{-c|r|}\qquad\text{on $[-R_\mu,\infty)$.} \label{eq:exp_decay_derivative} \end{equation} for some $c,C>0$ independent of $\mu\in[\mu_*/2,\mu_*)$. \end{lemma} \begin{proof} Due to the local uniform convergence of $u_\mu$ around $R_\mu$ and the fact that $u_\mu$ is decreasing, we deduce that for all $\epsilon>0$ we can find an $h>0$ such that $$u_\mu\begin{cases} \geq\beta_*-\epsilon &\text{on $[0,R_\mu-h]$}\\ \leq\epsilon &\text{on $[R_\mu+h,\infty)$} \end{cases}.$$ We have $g_*'(0)=-\mu_*<0$ and $g_*'(\beta_*)<0$. Therefore, choosing $\epsilon$ small enough, we obtain $$g_\mu(u_\mu)\begin{cases} \geq \frac{g_*'(\beta_*)}2 (u_\mu-\beta_\mu)&\text{on $[0,R_\mu-h]$,}\\ \leq-\frac{\mu_*}2 u_\mu&\text{on $[R_\mu+h,\infty)$.} \end{cases}$$ In other words, $u_\mu$ satisfies $$\begin{cases} -\Delta (\beta_\mu-u_\mu)+c^2(\beta_\mu-u_\mu)\leq 0 &\text{on the ball $B_{R_\mu-h}$}\\ -\Delta u_\mu+c^2u_\mu\leq 0 &\text{on $\R^d\setminus B_{R_\mu+h}$.} \end{cases}$$ for $c^2=\min(\mu_*,|g_*'(\beta_*)|)/2$. 
Next we recall that $$(-\Delta+c^2)e^{-\alpha |x|}=\left(c^2-\alpha^2+\frac{d-1}{|x|}\alpha\right)e^{-\alpha|x|}$$ in the sense of distributions on $\R^d$. On $\R^d\setminus B_{R_\mu+h}$ we choose $\alpha=c$ and obtain that $$\left(-\Delta +c^2\right)\left(u_\mu-u_\mu(R_\mu+h)e^{-c(|x|-R_\mu-h)}\right)\leq 0 \qquad\text{on $\R^d\setminus B_{R_\mu+h}$.}$$ By the maximum principle we obtain $$u_\mu\leq u_\mu(R_\mu+h)e^{-c(|x|-R_\mu-h)}\leq Ce^{-c(|x|-R_\mu)}$$ with $C=e^{ch}$. This is the first bound in~\eqref{eq:exp_decay} on $\R^d\setminus B_{R_\mu+h}$. To prove the estimate on $B_{R_\mu-h}$, we recall that $$(-\Delta+c^2)\frac{e^{\alpha |x|}}{|x|^{\frac{d-1}2}}=\left(c^2-\alpha^2+\frac{(d-1)(d-3)}{4|x|^2}\right)\frac{e^{\alpha |x|}}{|x|^{\frac{d-1}2}}.$$ In dimension $d\geq3$ the right side is always non-negative for $\alpha^2\leq c^2$. We can then simply take $\alpha=c$ and obtain similarly as before that $$\beta_\mu-u_\mu\leq \frac{e^{c(|x|-R_\mu+h)}}{|x|^{\frac{d-1}2}}.$$ Note that the bound is blowing up at the origin but it gives us~\eqref{eq:exp_decay} for $r\geq1$, with $C=e^{ch}$. For $r\leq1$ we can simply use that, since $u_\mu$ is decreasing, $$\beta_\mu-u_\mu\leq \beta_\mu-u_\mu(1)\leq e^{c(1-R_\mu+h)}\leq e^{c(1+h)}e^{c(|x|-R_\mu)}.$$ Finally, upon increasing again the constant $C$ to cover the interval $[R_\mu-h,R_\mu+h]$ where $u_\mu$ is bounded by 1, we obtain~\eqref{eq:exp_decay} in dimensions $d\geq3$. As usual, the two-dimensional case requires a bit more care. 
When $d=2$ we introduce $w_\mu:=\sqrt{r}(\beta_\mu-u_\mu)$ which satisfies $$\left(-w_\mu''+\frac{c^2}4w_\mu\right)\leq\left(\frac{1}{r^2}-3c^2\right) \frac{w_\mu}{4}\leq 0,\qquad\text{on $\left[\frac{1}{\sqrt3 c},R_\mu-h\right]$.}$$ The function $$z_\mu:=e^{-\frac{c}2r}\left(w'_\mu+\frac{c}2 w_\mu\right)=e^{-cr}\left(e^{\frac{c}2r}w_\mu\right)'$$ satisfies $z'_\mu\geq0$ on $\big[1/(\sqrt3 c),R_\mu-h\big]$ and therefore we find $$\left(e^{\frac{c}2r}w_\mu\right)'\leq z_\mu(R_\mu-h)e^{cr}\qquad \text{for all}\quad \frac{1}{\sqrt3 c}\leq r\leq R_\mu-h.$$ Integrating over $\big[1/(\sqrt3 c),R_\mu-h\big]$ and using that $z_\mu$ and $w_\mu$ are increasing, we obtain $$\beta_\mu-u_\mu(r)\leq \left(e^{1/\sqrt{3}}+\frac{1}{c}\right)\frac{z_\mu(R_\mu-h)}{\sqrt{r}}e^{\frac{c}2r}\qquad \text{for all}\quad \frac{1}{\sqrt3 c}\leq r\leq R_\mu-h.$$ Using that $u_\mu'(R_\mu-h)$ and $u_\mu(R_\mu-h)$ are bounded, we have $$z_\mu(R_\mu-h)\leq C\sqrt{R_\mu-h}\;e^{-\frac{c}2(R_\mu-h)}$$ for some constant $C$, and hence we have shown the bound $$\beta_\mu-u_\mu(r)\leq \left(e^{1/\sqrt{3}}+\frac{1}{c}\right) C e^{\frac{c}2(r-R_\mu+h)}\qquad \text{for all}\quad \frac{1}{\sqrt3 c}\leq r\leq R_\mu-h.$$ Increasing the constant $C$ to get the bound on $[0,(\sqrt3 c)^{-1}]\cup[R_\mu-h,R_\mu+h]$, we obtain~\eqref{eq:exp_decay} in dimension $d=2$ as well. Next we turn to the derivatives. The equation~\eqref{eq:ODE} can also be rewritten in the form \begin{equation} \left(r^{d-1}u_\mu' \right)'=-r^{d-1}g_\mu(u_\mu). 
\label{eq:ODE2} \end{equation} After integrating over $[r,\infty)$ and on $[0,r]$, this gives the estimate on $|v_\mu'(r)|$ in~\eqref{eq:exp_decay_derivative} after using~\eqref{eq:exp_decay} together with $$|g_\mu(v)|\leq \begin{cases}C|v| & \text{for $0\leq v\leq \gamma$,}\\ C(\beta_\mu-v) &\text{for $\gamma\leq v\leq \beta_\mu$.} \end{cases}$$ Note that for $r\leq1$ we have the more precise bound \begin{equation} |u_\mu'(r)|\leq\frac1{r^{d-1}}\int_0^rs^{d-1}|g_\mu(u_\mu(s))|\,ds\leq Cre^{-cR_\mu}. \label{eq:exp_decay_derivative_origin} \end{equation} For the second derivative we can therefore use the equation~\eqref{eq:ODE} to obtain the corresponding bound in~\eqref{eq:exp_decay_derivative}. \end{proof} We are now able to prove the convergence~\eqref{eq:limit_mu_mu_*} of the statement, even though we still do not know the behavior of $R_\mu$ in terms of $\mu$. \begin{lemma}[Global convergence]\label{lem:CV_u_derivative} We have the uniform convergence \begin{equation} \lim_{\mu\to\mu_*}\norm{u_\mu-U_*(\cdot-R_\mu)}_{L^\infty(\R_+)}=0 \label{eq:CV_Lii} \end{equation} and the convergence of the derivatives \begin{equation} \lim_{\mu\to\mu_*}\norm{u_\mu'-U_*'(\cdot-R_\mu)}_{L^p(\R_+)}=0, \label{eq:CV_derivative} \end{equation} in $L^p(\R_+)$ for all $1\leq p\leq\infty$. \end{lemma} \begin{proof} The exponential bounds from Lemma~\ref{lem:exp_bounds} give $|v'_\mu(r)-U_*'(r)|\leq C e^{-c|r|}$ for $r\geq-R_\mu$. From the dominated convergence theorem, this shows that $$\lim_{\mu\to\mu_*}\int_{-R_\mu}^\infty|v'_\mu(r)-U_*'(r)|^p\,dr=\lim_{\mu\to\mu_*}\int_{0}^\infty|u'_\mu(r)-U_*'(r-R_\mu)|^p\,dr=0.$$ The convergence of the derivatives in $L^1$, together with the fact that $v_\mu(0)=U_*(0)$, implies the uniform convergence~\eqref{eq:CV_Lii}. Finally, the uniform convergence for $v_\mu'-U_*'$ follows from the similar $L^1$ convergence of $v_\mu''-U_*''$ on $[-R_\mu,\infty)$. \end{proof} Next we expand the mass in terms of $R_\mu$.
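Before stating it, let us record the leading-order heuristic: $u_\mu$ stays close to the plateau value $\beta_*$ on $[0,R_\mu]$ and is exponentially small beyond, so one expects
$$\int_0^\infty r^{d-1}u_\mu(r)^2\,dr\approx\beta_*^2\int_0^{R_\mu}r^{d-1}\,dr=\frac{\beta_*^2}{d}\,R_\mu^{d},$$
with corrections coming from the transition layer of width $O(1)$ around $R_\mu$ and from the deviation $\beta_\mu-\beta_*=O(\mu_*-\mu)$. The next lemma makes this precise.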
\begin{lemma}[Expansion of the mass]\label{lem:CV_L2} We have \begin{equation} \int_0^\infty r^{\alpha}u_\mu(r)^2\,dr=\frac{\beta_*^2}{\alpha+1}R_\mu^{\alpha+1}+O(R_\mu^{\alpha})+O\big(R_\mu^{\alpha+1}(\mu_*-\mu)\big), \label{eq:claim_L2c} \end{equation} for all $\alpha>0$. At $\alpha=d-1$ this implies \begin{equation} M(\mu)=\norm{u_\mu}^2_{L^2(\R^d)}=\frac{|\bS^{d-1}|\beta_*^2}{d}R_\mu^d+O(R_\mu^{d-1})+O\big(R_\mu^d(\mu_*-\mu)\big). \label{eq:claim_L2} \end{equation} \end{lemma} We will prove later that $R_\mu\sim C(\mu_*-\mu)^{-1}$ so that the two errors on the right side are actually of the same order. \begin{proof} Recall that $\beta_\mu$ is the second root of $g_\mu=g_*+(\mu_*-\mu)u$. From the implicit function theorem, we obtain \begin{equation} \beta_\mu=\beta_*+\frac{\beta_*}{g_*'(\beta_*)}(\mu-\mu_*)+O\big((\mu-\mu_*)^2\big). \label{eq:approx_beta_mu} \end{equation} Using the upper bound~\eqref{eq:exp_decay} we then find \begin{align*} \int_0^\infty r^\alpha u_\mu(r)^2\,dr&\leq u_\mu(0)^2\int_0^{R_\mu} r^\alpha \,dr+C\int_{0}^\infty (r+R_\mu)^\alpha e^{-cr}\,dr \\ &\leq \frac{u_\mu(0)^2}{\alpha+1}R_\mu^{\alpha+1}+O(R_\mu^\alpha)\\ &=\frac{\beta_*^2}{\alpha+1}R_\mu^{\alpha+1}+O\left(R_\mu^{\alpha+1}(\mu_*-\mu)\right)+O(R_\mu^\alpha). \end{align*} The last equality uses $u_\mu(0)\leq \beta_\mu\leq\beta_*+C(\mu_*-\mu)$. Using the lower bound in~\eqref{eq:exp_decay}, we also obtain \begin{align*} \int_0^\infty r^\alpha u_\mu(r)^2\,dr&\geq \int_{0}^{R_\mu} r^\alpha \left(\beta_\mu-Ce^{-c(R_\mu-r)}\right)^2\,dr \\ &=\frac{\beta_*^2}{\alpha+1}R_\mu^{\alpha+1}+O(R_\mu^{\alpha}) \end{align*} since $\beta_\mu\geq\beta_*$. This gives the stated expansion~\eqref{eq:claim_L2c}, hence~\eqref{eq:claim_L2} after passing to spherical coordinates. \end{proof} Finally, we obtain the exact behavior of $R_\mu$ in terms of $\mu_*-\mu$.
\begin{lemma}[Behavior of $R_\mu$]\label{lem:R_mu} We have \begin{equation} R_\mu\underset{\mu\to\mu_*}\sim \frac{2\sqrt2(d-1)}{\beta_*^2(\mu_*-\mu)}\int_0^{\beta_*} \sqrt{|G_*(s)|}\,ds. \label{eq:R_mu} \end{equation} \end{lemma} \begin{proof} We integrate~\eqref{eq:integrated_ODE} and obtain \begin{equation*} \frac12\int_0^\infty u_\mu'(r)^2\,dr-(d-1)\int_0^\infty\int_r^\infty\frac{u_\mu'(s)^2}{s}\,ds\,dr+\int_0^\infty G_\mu(u_\mu(r))\,dr=0. \end{equation*} After integrating by parts we find $$-(d-1)\int_0^\infty\int_r^\infty\frac{u_\mu'(s)^2}{s}\,ds\,dr=-(d-1)\int_0^\infty u'_\mu(r)^2\,dr$$ and the Pohozaev-type relation \begin{equation} \left(d-\frac32\right)\int_0^\infty u_\mu'(r)^2\,dr=\int_0^\infty G_\mu(u_\mu(r))\,dr. \label{eq:Pohozaev} \end{equation} We split the second integral in the form \begin{multline*} \int_0^\infty G_\mu(u_\mu(r))\,dr=\int_{-R_\mu}^\infty G_\mu(v_\mu(r))\,dr\\=\int_{-R_\mu}^0 \big(G_\mu(v_\mu(r))-G_\mu(\beta_\mu)\big)\,dr+\int_{0}^\infty G_\mu(v_\mu(r))\,dr+R_\mu G_\mu(\beta_\mu). 
\end{multline*} When $\mu$ is in a neighborhood of $\mu_*$, we have the bounds $$|G_\mu(v)|\leq Cv^2\qquad\text{for all $v\in[0,\gamma]$}$$ and $$|G_\mu(v)-G_\mu(\beta_\mu)|\leq C(v-\beta_\mu)^2\qquad\text{for all $v\in[\gamma,\beta_\mu]$}.$$ With the exponential bounds~\eqref{eq:exp_decay}, this allows us to use Lebesgue's dominated convergence theorem and deduce that $$\lim_{\mu\to\mu_*}\int_{0}^\infty G_\mu(v_\mu(r))\,dr=\int_0^{\infty}G_*(U_*(r))\,dr,$$ $$\lim_{\mu\to\mu_*}\int_{-R_\mu}^0 \big(G_\mu(v_\mu(r))-G_\mu(\beta_\mu)\big)\,dr=\int_{-\infty}^0G_*(U_*(r))\,dr.$$ Using the $L^2$ convergence~\eqref{eq:CV_derivative} of $u'_\mu$, we therefore obtain from~\eqref{eq:Pohozaev} \begin{align} \lim_{\mu\to\mu_*}R_\mu G_\mu(\beta_\mu)&=\left(d-\frac32\right)\int_{-\infty}^\infty U_*'(r)^2\,dr-\int_{-\infty}^\infty G_*(U_*(r))\,dr\nonumber\\ &=\sqrt{2}(d-1)\int_{0}^{\beta_*}|G_*(s)|^{\frac12}\,ds.\label{eq:limit_R_G} \end{align} The second equality follows from the relation $U_*'(r)^2=-2G_*(U_*(r))$, which gives $\big(d-\frac32\big)\int_\R U_*'^2-\int_\R G_*(U_*)=(d-1)\int_\R U_*'^2$, together with the change of variables $s=U_*(r)$, for which $\int_\R U_*'(r)^2\,dr=\int_0^{\beta_*}\sqrt{2|G_*(s)|}\,ds$. Using the expansion of $\beta_\mu$ in~\eqref{eq:approx_beta_mu} together with $G_*(\beta_*)=g_*(\beta_*)=0$, this implies $$G_\mu(\beta_\mu)=G_*(\beta_\mu)+\frac{\mu_*-\mu}{2}\beta_\mu^2=\frac{\mu_*-\mu}{2}\beta_*^2+O\big((\mu-\mu_*)^2\big)$$ and after inserting in~\eqref{eq:limit_R_G} we obtain~\eqref{eq:R_mu}. \end{proof} Inserting~\eqref{eq:R_mu} into~\eqref{eq:claim_L2} we obtain immediately that \begin{equation} M(\mu) =\frac{\Lambda}{(\mu_*-\mu)^d}+O\left((\mu_*-\mu)^{-d+1}\right)_{\mu\to\mu_*} \label{eq:behavior_M} \end{equation} with the constant $\Lambda$ introduced in the statement. This is the first limit in Theorem~\ref{thm:limit_mu_star}. It only remains to prove the convergence for $M'(\mu)$, which requires a detailed analysis of the linearized operator. \subsection{The linearized operator} The difficulty with the derivative $M'(\mu)=-2\langle u_\mu,(\mathcal{L}_\mu)_{\rm rad}^{-1}u_\mu\rangle$ is that the first eigenvalue of $\mathcal{L}_\mu$ tends to zero.
Indeed, since $u_\mu-U_*(|x|-R_\mu)\to0$ in $L^\infty(\R^d)$, as proved before in Lemma~\ref{lem:CV_u_derivative}, the intuition is that the operator $\mathcal{L}_\mu$ behaves like $-\Delta-g_*'(0)=-\Delta+\mu_*$ at infinity and like $-\Delta-g_*'(\beta_*)$ close to the origin. These are two positive operators. On the other hand, the restriction to the radial sector behaves like the operator $$L_*:=-\frac{\rm d^2}{{\rm d}r^2}-g'_*(U_*)\qquad\text{on $L^2(\R)$,}$$ in the neighborhood of $|x|=R_\mu$. This is because in this region the $d$-dependent term $-(d-1)({\rm d}/{\rm d}r)/r$ becomes negligible. Note that $L_*U_*'=0$ and that $U_*'$ has a constant sign, so that $0$ is the lowest eigenvalue of $L_*$. This proves that $L_*\geq0$ with $\ker(L_*)={\rm span}(U_*')$. On the other hand, there is a spectral gap above $\lambda_1(L_*)=0$ since the essential spectrum starts at $\min\left(\mu_*,-g'_*(\beta_*)\right)>0$ and the first eigenvalue is always simple, by the Perron-Frobenius theorem. From this discussion, we conclude that the first eigenvalue $\lambda_1(\mu)$ of the operator $\mathcal{L}_\mu$ should tend to 0, that the corresponding eigenfunction $\phi_\mu$ should behave like $U_*'(\cdot -R_\mu)$ and that $\mathcal{L}_\mu$ should have a uniform spectral gap above its first eigenvalue, when restricted to the radial sector. The following result confirms this intuition.
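Before the precise statement, let us record a heuristic for the size of $\lambda_1(\mu)$. Differentiating the ODE~\eqref{eq:ODE} with respect to $r$ yields the exact relation
$$\mathcal{L}_\mu u_\mu'=-\frac{d-1}{|x|^2}\,u_\mu'.$$
Since $u_\mu'$ concentrates in the layer $|x|\simeq R_\mu$, the factor $|x|^{-2}$ is essentially $(R_\mu)^{-2}$ there, so that $u_\mu'$ is an approximate eigenfunction of $\mathcal{L}_\mu$ with approximate eigenvalue $-(d-1)/(R_\mu)^2$.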
\begin{lemma}[The linearized operator in the limit $\mu\to\mu_*$]\label{lem:linearized} The lowest eigenvalue of $\mathcal{L}_\mu$ behaves as \begin{equation} \lambda_1(\mu)=-\frac{d-1}{(R_\mu)^2}+o\left(\frac1{(R_\mu)^2}\right)_{\mu\to\mu_*} \label{eq:lambda_1} \end{equation} and the corresponding normalized positive eigenfunction $\phi_\mu$ satisfies \begin{equation} \lim_{\mu\to\mu_*}\norm{\phi_\mu+\frac{U_*'(\cdot-R_\mu)}{\kappa(R_\mu)^{\frac{d-1}{2}}}}_{L^2(\R^d)}=0 \label{eq:CV_L2_phi_mu} \end{equation} with $$\kappa=2^{\frac14}|\bS^{d-1}|^{\frac12}\left(\int_{0}^{\beta_*}\sqrt{|G_*(v)|}\,dv\right)^{\frac12}.$$ In addition, we have the bound \begin{equation} P_\mu^\perp(\mathcal{L}_\mu)_{\rm rad}P_\mu^\perp\geq cP_\mu^\perp \label{eq:estim_cL_orthogonal} \end{equation} for some $c>0$ where $P_\mu^\perp=1-|\phi_\mu\rangle\langle\phi_\mu|$ is the projection on the orthogonal of $\phi_\mu$, within the sector of radial functions. \end{lemma} \begin{proof} We split the proof into several steps. \medskip \noindent\textbf{Step 1. Upper bound on $\lambda_1(\mu)$.} We recall that $u_\mu'$ satisfies the equation $$\mathcal{L}_\mu u_\mu'+\frac{d-1}{|x|^2}u_\mu'=0.$$ By the variational principle, this proves immediately that $$\lambda_1(\mu)\leq \norm{u_\mu'}_{L^2(\R^d)}^{-2}\pscal{u_\mu',\mathcal{L}_\mu u_\mu'}=-(d-1)\frac{\int_0^\infty r^{d-3}u_\mu'(r)^2\,dr}{\int_0^\infty r^{d-1}u_\mu'(r)^2\,dr}.$$ We obtain from Lemma~\ref{lem:CV_u_derivative} and the same analysis as in Lemma~\ref{lem:CV_L2} that $$\int_0^\infty r^{\alpha}u_\mu'(r)^2\,dr=(R_\mu)^{\alpha}\int_\R U_*'(r)^2\,dr+o(R_\mu^\alpha),\qquad \forall \alpha\geq0$$ and this gives in dimension $d\geq3$ the upper bound \begin{equation} \lambda_1(\mu)\leq -\frac{d-1}{(R_\mu)^2}+o\left((\mu_*-\mu)^2\right). \label{eq:upper_bd_lambda_1} \end{equation} Dimension $d=2$ requires a little more attention. 
In this case we can for instance use~\eqref{eq:limit_d_term} which gives \begin{align*} (d-1)\int_0^\infty \frac{u_\mu'(r)^2}{r}\,dr &=G_\mu(u_\mu(0))\\ &=G_\mu(\beta_\mu)+O(e^{-R_\mu})\\ &=\frac{d-1}{R_\mu}\int_{\R}U_*'(r)^2\,dr+o(R_\mu^{-1}). \end{align*} In the second line we have used~\eqref{eq:exp_decay} whereas in the third line we have used~\eqref{eq:limit_R_G}. We therefore obtain the same upper bound~\eqref{eq:upper_bd_lambda_1} in dimension $d=2$. \medskip \noindent\textbf{Step 2. Convergence.} Let $\psi_\mu$ be the first (radial positive) eigenfunction of $\mathcal{L}_\mu$, normalized so that $$\psi_\mu(R_\mu)=-U_*'(0)=\sqrt{-2G_*(\gamma)}.$$ The function $\psi_\mu$ solves the linear equation $$\left(-\Delta-g_\mu'(u_\mu)-\lambda_1(\mu)\right)\psi_\mu=0.$$ Standard elliptic regularity gives that $\psi_\mu$ is bounded in $C^2(B_{R_\mu+h}\setminus B_{R_\mu-h})$ for every fixed $h>0$. Since $\lambda_1(\mu)<0$ we also obtain $$\left(-\Delta-g_\mu'(u_\mu)\right)\psi_\mu\leq 0\qquad \text{on $\R^d$}.$$ By arguing exactly as in the proof of Lemma~\ref{lem:exp_bounds}, we can then obtain a uniform bound in the form \begin{equation} 0< \psi_\mu(r)\leq Ce^{-c|R_\mu-r|} \label{eq:exp_decay_psi_mu} \end{equation} for some constants $C,c>0$. Using the equation $$\left(r^{d-1}\psi_\mu'\right)'=-r^{d-1}\left(\lambda_1(\mu)+g_\mu'(u_\mu)\right)\psi_\mu$$ and the fact that $|\lambda_1(\mu)|\leq \|g_\mu'\|_{L^\infty((0,1))}$ is bounded, we can deduce that $$|\psi'_\mu(r)|\leq \begin{cases} Ce^{-c|R_\mu-r|}& \text{on $[0,\infty)$,}\\ Cre^{-cR_\mu} & \text{on $[0,1]$}. \end{cases}$$ After extracting a subsequence, we can thus assume that $\psi_{\mu_n}(\cdot+R_{\mu_n})\to V$ strongly in $L^1\cap L^\infty(\R)$ and $\lambda_1(\mu_n)\to\lambda$, with $-V''-g'_*(U_*)V=\lambda V$. Since $V\geq 0$ and $V(0)=-U_*'(0)>0$ we must then have $V=-U_*'$ and $\lambda=0$.
We have therefore proved that $\lambda_1(\mu)\to0$ and $\psi_\mu(\cdot+R_\mu)\longrightarrow -U_*'$ in $L^1\cap L^\infty(\R)$. \medskip \noindent\textbf{Step 3. Lower bound on $\lambda_1(\mu)$.} Next we derive the lower bound on $\lambda_1(\mu)$. Since $u_\mu'$ has a constant sign and satisfies $\mathcal{L}_\mu u_\mu'+(d-1)|x|^{-2}u_\mu'=0$, it must be the first eigenvector of the operator $\mathcal{L}_\mu +(d-1)|x|^{-2}$. In other words, we have \begin{equation} \mathcal{L}_\mu +\frac{d-1}{|x|^2}\geq0 \label{eq:compare_cL_mu} \end{equation} in the sense of quadratic forms. Hence we can use that $$\lambda_1(\mu)=\frac{\pscal{\psi_\mu,\mathcal{L}_\mu\psi_\mu}}{\norm{\psi_\mu}^2_{L^2(\R^d)}}\geq-(d-1)\frac{\int_0^\infty r^{d-3}\psi_\mu(r)^2\,dr}{\int_0^\infty r^{d-1}\psi_\mu(r)^2\,dr}.$$ The pointwise bounds~\eqref{eq:exp_decay_psi_mu} allow us to pass to the limit exactly as for $u_\mu'$ and conclude that $$\int_0^\infty r^{\alpha}\psi_\mu(r)^2\,dr=(R_\mu)^{\alpha}\int_\R U_*'(r)^2\,dr+o(R_\mu^\alpha),\qquad \forall \alpha\geq0.$$ This gives the desired lower bound in dimensions $d\geq3$. The proof does not quite work in dimension $d=2$, since in this case $\int_{0}^\infty r^{-1}\psi_\mu(r)^2\,dr=+\infty$. Instead, we choose a radial localization function $\chi_\mu\in C^\infty_c(\R^2)$ so that $\chi_\mu\equiv1$ on the ball of radius $R_\mu-2\sqrt{R_\mu}$, $\chi_\mu\equiv0$ outside of the ball of radius $R_\mu-\sqrt{R_\mu}$ with $|\nabla\chi_\mu|\leq C/\sqrt{R_\mu}$ and we set $\eta_\mu:=\sqrt{1-\chi_\mu^2}$.
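For the reader's convenience, we record the standard IMS localization formula that is applied next: since $\chi_\mu^2+\eta_\mu^2=1$, every real function $\psi$ satisfies
$$\int_{\R^2}|\nabla\psi|^2=\int_{\R^2}|\nabla(\chi_\mu\psi)|^2+\int_{\R^2}|\nabla(\eta_\mu\psi)|^2-\int_{\R^2}\big(|\nabla \chi_\mu|^2+|\nabla \eta_\mu|^2\big)\psi^2,$$
which follows by expanding the two squares and using $\chi_\mu\nabla\chi_\mu+\eta_\mu\nabla\eta_\mu=\frac12\nabla(\chi_\mu^2+\eta_\mu^2)=0$.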
The IMS localization formula tells us that \begin{align*} \lambda_1(\mu)\norm{\psi_\mu}_{L^2(\R^2)}^2&= \int_{\R^2}|\nabla \psi_\mu|^2-\int_{\R^2}g_\mu'(u_\mu)\psi_\mu^2\\ &= \pscal{\chi_\mu\psi_\mu,\mathcal{L}_\mu\chi_\mu\psi_\mu}+\pscal{\eta_\mu\psi_\mu,\mathcal{L}_\mu\eta_\mu\psi_\mu}\\ &\qquad\qquad-\int_{\R^2}(|\nabla \chi_\mu|^2+|\nabla \eta_\mu|^2)\psi_\mu^2\\ &\geq-(d-1)|\bS^1|\int_{0}^\infty\frac{\eta_\mu^2\psi_\mu^2}{r}\,dr-Ce^{-c\sqrt{R_\mu}}\\ &\geq-\frac{(d-1)|\bS^1|}{R_\mu-2\sqrt{R_\mu}}\int_{0}^\infty\psi_\mu^2\,dr-Ce^{-c\sqrt{R_\mu}}. \end{align*} The convergence in $L^2$ allows us to conclude. In the first inequality we have used that $g_\mu'(u_\mu(r))<0$ for $r\leq R_\mu-\sqrt{R_\mu}$ since $u_\mu$ is (exponentially) close to $\beta_\mu$ in this range and $g_\mu'(\beta_\mu)<0$. This implies $\pscal{\chi_\mu\psi_\mu,\mathcal{L}_\mu\chi_\mu\psi_\mu}\geq0$. We have also used~\eqref{eq:compare_cL_mu} for the second term and the exponential bound~\eqref{eq:exp_decay_psi_mu} for the localization error. The normalized eigenfunction in the statement is $\phi_\mu=\norm{\psi_\mu}_{L^2(\R^d)}^{-1}\psi_\mu$ with \begin{align*} \norm{\psi_\mu}^2_{L^2(\R^d)}&=R_\mu^{d-1}|\bS^{d-1}|\int_{\R}U_*'(r)^2\,dr+o(R_\mu^{d-1})\\ &=R_\mu^{d-1}|\bS^{d-1}|\sqrt2\int_{0}^{\beta_*}\sqrt{|G_*(v)|}\,dv+o(R_\mu^{d-1}) \end{align*} and the result~\eqref{eq:CV_L2_phi_mu} follows. \medskip \noindent\textbf{Step 4. Lower bound on the orthogonal to $\phi_\mu$.} Let us now prove~\eqref{eq:estim_cL_orthogonal}. We can argue by contradiction and assume that there exists a subsequence $\mu_n\to\mu_*$ such that $\lambda_2(\mu_n)\to0$, where $\lambda_2(\mu_n)$ is then the second eigenvalue of $\mathcal{L}_\mu$ within the sector of radial functions. But the exact same arguments as before then give that the corresponding eigenfunction $\tilde\psi_\mu$ satisfies $\tilde\psi_\mu(\cdot+R_\mu)\to cU_*'$, which cannot hold because the functions have to be orthogonal to each other. 
Hence we must have $\liminf_{\mu\to\mu_*}\lambda_2(\mu)>0$. This concludes the proof of Lemma~\ref{lem:linearized}. \end{proof} We finally use Lemma~\ref{lem:linearized} to derive the stated limit~\eqref{eq:limit_M_mu_infty} for $M'(\mu)$. We write \begin{align*} M'(\mu)&=-2\pscal{u_\mu,(\mathcal{L}_\mu)^{-1}u_\mu}\\ &=-2\left(\frac{|\pscal{u_\mu,\phi_\mu}|^2}{\lambda_1(\mu)}+\pscal{P_\mu^\perp u_\mu,(\mathcal{L}_\mu)^{-1}P_\mu^\perp u_\mu}\right)\\ &=-2\frac{|\pscal{u_\mu,\psi_\mu}|^2}{\lambda_1(\mu)\norm{\psi_\mu}_{L^2(\R^d)}^2}+O(R_\mu^{d}), \end{align*} where we have used $P_\mu^{\perp}(\mathcal{L}_\mu)^{-1}P_\mu^{\perp}\leq P_\mu^\perp /c$ to infer $$\pscal{P_\mu^\perp u_\mu,(\mathcal{L}_\mu)^{-1}P_\mu^\perp u_\mu}\leq \frac{2}{c}M(\mu)=O(R_\mu^{d})$$ by Lemma~\ref{lem:CV_L2}. From the previous convergence properties, we have $$\pscal{u_\mu,\psi_\mu}=-|\bS^{d-1}|R_\mu^{d-1}\int_{\R} U_*U_*'+o(R_\mu^{d-1})=\frac{|\bS^{d-1}|\beta_*^2}{2}R_\mu^{d-1}(1+o(1)),$$ since $-\int_\R U_*U_*'\,dr=\frac12\big(U_*(-\infty)^2-U_*(+\infty)^2\big)=\frac{\beta_*^2}{2}$, and hence we obtain the limit for $M'(\mu)$ in the statement from~\eqref{eq:lambda_1}. This concludes the proof of Theorem~\ref{thm:limit_mu_star}.\qed \section{Proof of Theorem~\ref{thm:prop_I_lambda} on the variational principle $I(\lambda)$}\label{sec:prop_I_lambda} It is classical that $I(\lambda)\leq0$ and that $I(\lambda)$ is non-increasing. First we prove that $I$ is concave. Letting $v(x)=u(\lambda^{1/d}x)$, which satisfies $\int_{\R^d}|v|^2=1$ whenever $\int_{\R^d}|u|^2=\lambda$, we can rewrite $$I(\lambda)=\lambda\, J\!\left(\lambda^{-\frac2d}\right)$$ where \begin{multline} J(\epsilon):=\inf_{\substack{v\in H^1(\R^d)\cap L^{p+1}(\R^d)\\ \int_{\R^d}|v|^2=1}}\bigg\{\frac{\epsilon}{2}\int_{\R^d}|\nabla v(x)|^2\,dx+\frac1{p+1}\int_{\R^d}|v(x)|^{p+1}\,dx\\ -\frac1{q+1}\int_{\R^d}|v(x)|^{q+1}\,dx\bigg\} \label{eq:def_J} \end{multline} The function $J$ is non-decreasing, concave and non-positive and this implies that $I$ itself is concave.
This is because we have \begin{equation} I''(\lambda)=-\frac{2(d-2)}{d^2}\lambda^{-\frac2d-1} J'\!\left(\lambda^{-\frac2d}\right)+\frac4{d^2}\lambda^{-\frac4d-1} J''\!\left(\lambda^{-\frac2d}\right)\leq0 \label{eq:I_concave} \end{equation} in the sense of distributions on $(0,\infty)$. From the concavity of $I$ we deduce that there exists a unique $\lambda_c$ such that $I\equiv0$ on $[0,\lambda_c]$ and $I$ is strictly decreasing (and hence negative) on $(\lambda_c,\infty)$. In dimension $d\geq3$ we even see from~\eqref{eq:I_concave} that $I$ is strictly concave on $(\lambda_c,\infty)$. The proof that there exists a minimizer for all $\lambda>\lambda_c$ is very classical. By rearrangement inequalities~\cite{LieLos-01}, we can restrict the infimum to radial-decreasing functions. If $\{u_n\}$ is a minimizing sequence consisting of such functions for $I(\lambda)$, then we can assume after passing to a subsequence that $u_n\to u$ weakly in $H^{1}(\R^d)\cap L^{p+1}(\R^d)$ and strongly in $L^r(\R^d)$ for all $2<r<\min(p+1,2^*)$ with $2^*=2d/(d-2)$ when $d\geq3$ and $2^*=\infty$ when $d=2$, by Strauss' compactness lemma for radial functions~\cite{Strauss-77,BerLio-83}. In particular, $u_n\to u$ strongly in $L^{q+1}(\R^d)$. By Fatou's lemma we then have $$I(\lambda')\leq \mathcal{E}(u)\leq \liminf_{n\to\infty}\mathcal{E}(u_n)=I(\lambda),$$ where $\lambda'=\int_{\R^d}u^2$. Since $I(\lambda)<I(\lambda')$ for $\lambda'<\lambda$ when $\lambda>\lambda_c$, we must then have $\lambda'=\lambda$ and the convergence is strong in $L^2(\R^d)$. In particular, $u$ is a minimizer. Any minimizer for $I(\lambda)$, when it exists, can be chosen positive radial-decreasing after rearrangement. It solves~\eqref{powernl} for some $\mu$. 
From the Pohozaev identity~\eqref{pohozaev} we obtain $$\frac{d-2}{2d}\int_{\R^d}|\nabla u(x)|^2\,dx=\int_{\R^d}G_\mu(u(x))\,dx=-I(\lambda)-\frac{\mu\lambda}{2}+\frac12\int_{\R^d}|\nabla u(x)|^2\,dx$$ and hence \begin{equation} \mu=\frac2\lambda\left(-I(\lambda)+\frac1d \int_{\R^d}|\nabla u(x)|^2\,dx\right) \label{eq:relation_mu_I_kinetic} \end{equation} which is strictly positive since $I(\lambda)\leq0$ (except of course in the trivial case $\lambda=0$ where $u\equiv0$ is the only solution). Since $u\geq0$ is radial-decreasing, it must therefore coincide with the unique corresponding $u_\mu$. Next we look at $\lambda=\lambda_c$. For $q<1+4/d$ a simple scaling argument shows that $I(\lambda)<0$ for all $\lambda>0$, hence $\lambda_c=0$ in this case. The unique minimizer is then $u_0=0$. For $q\geq 1+4/d$, it is useful to characterize $\lambda_c$ through the inequality~\eqref{eq:Gagliardo-Nirenberg-type}. We have by definition \begin{equation} \frac1{q+1}\int_{\R^d}|u(x)|^{q+1}\,dx\leq \frac12\int_{\R^d}|\nabla u(x)|^2\,dx+\frac1{p+1}\int_{\R^d}|u(x)|^{p+1}\,dx \end{equation} for all $u\in H^1(\R^d)\cap L^{p+1}(\R^d)$ such that $\int_{\R^d}|u|^2\leq\lambda_c$. Replacing $u$ by $\ell^{d/2}u(\ell\cdot)$ and optimizing over $\ell$, we obtain \begin{multline} \int_{\R^d}|u(x)|^{q+1}\,dx\leq (q+1)\frac{dp-d-4}{2}\times\\ \times\left(\frac{1}{d(p-q)}\int_{\R^d}|\nabla u(x)|^2\,dx\right)^{1-\theta}\left(\frac{2}{(p+1)(dq-d-4)}\int_{\R^d}|u(x)|^{p+1}\,dx\right)^{\theta} \end{multline} for all $u\in H^1(\R^d)\cap L^{p+1}(\R^d)$ such that $\int_{\R^d}|u|^2\leq\lambda_c$, with $$\theta:=\frac{q-1-\frac{4}d}{p-1-\frac{4}d}\in(0,1).$$ When $\theta=0$ the formulas have to be extended by continuity in an obvious manner but the corresponding optimal $\ell$ vanishes. 
This gives \begin{multline} \norm{u}_{L^{q+1}(\R^d)}^{q+1}\leq (q+1)\frac{dp-d-4}{2}\left(\frac{1}{d(p-q)}\right)^{1-\theta}\left(\frac{2}{(p+1)(dq-d-4)}\right)^{\theta}\times\\ \times\left(\lambda_c^{-\frac12}\norm{u}_{L^2(\R^d)}\right)^{q-1-\theta(p-1)}\norm{\nabla u}_{L^2(\R^d)}^{2(1-\theta)}\norm{u}_{L^{p+1}(\R^d)}^{\theta(p+1)} \end{multline} for all $u\in H^1(\R^d)\cap L^{p+1}(\R^d)$. In addition, we have equality everywhere for $u$ a minimizer of $I(\lambda_c)$, when it exists, rescaled in the appropriate manner as above. This shows that the best constant in the Gagliardo-Nirenberg-type inequality \begin{equation} \norm{u}_{L^{q+1}(\R^d)}^{q+1}\leq C_{p,q,d}\norm{u}_{L^2(\R^d)}^{q-1-\theta(p-1)}\norm{\nabla u}_{L^2(\R^d)}^{2(1-\theta)}\norm{u}_{L^{p+1}(\R^d)}^{\theta(p+1)} \label{eq:Gagliardo-Nirenberg-type_bis} \end{equation} is exactly given by $$C_{p,q,d}=(q+1)\frac{dp-d-4}{2}\left(\frac{1}{d(p-q)}\right)^{1-\theta}\left(\frac{2}{(p+1)(dq-d-4)}\right)^{\theta}\lambda_c^{\frac{1+\theta(p-1)-q}2}.$$ From the usual Gagliardo-Nirenberg inequality, it is easily seen that $C_{p,q,d}<\infty$ for $q\geq1+4/d$ and therefore $\lambda_c>0$. In the simpler case $q=1+4/d$ we find $$C_{p,1+4/d,d}=\frac{d+2}{d}\lambda_c^{-\frac{2}d}$$ and~\eqref{eq:Gagliardo-Nirenberg-type_bis} is indeed the usual Gagliardo-Nirenberg inequality. The corresponding optimizer is the function $Q$ which solves the NLS equation $-\Delta Q-Q^q+Q=0$ and then \begin{equation} \lambda_c=\int_{\R^d}Q(x)^2\,dx, \qquad\text{for $q=1+\frac4d$}. \label{eq:value_lambda_c} \end{equation} From Theorem~\ref{thm:limit_mu_0}, this exactly coincides with $M(0)$. The function $Q$ can however not be an optimizer for $I(\lambda_c)$ because the inequality does not involve the $L^{p+1}$ norm, hence $Q$ does not solve the appropriate equation. This is due to the fact that we have to scale it with $\ell\to0$ as above. 
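As a consistency check, the value of $C_{p,1+4/d,d}$ can be recovered directly from the general formula above: at $q=1+\frac4d$ one has $\theta=0$ (the factor raised to the power $\theta$ being interpreted as $1$ by continuity, as explained before), and

```latex
d(p-q)=dp-d-4,\qquad \frac{1+\theta(p-1)-q}{2}=\frac{1-q}{2}=-\frac{2}{d},
\qquad\text{so that}\qquad
C_{p,1+\frac4d,d}=(q+1)\,\frac{dp-d-4}{2}\cdot\frac{1}{d(p-q)}\cdot\lambda_c^{-\frac2d}
=\frac{q+1}{2}\,\lambda_c^{-\frac2d}=\frac{d+2}{d}\,\lambda_c^{-\frac2d},
```

using $q+1=2+\frac4d=\frac{2(d+2)}{d}$; this recovers the constant stated above.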
As a conclusion we have proved that $\lambda_c>0$ for all $q\geq1+4/d$ and that there cannot be an optimizer for $I(\lambda_c)$ at $q=1+4/d$. It remains to show the existence for $\lambda=\lambda_c$ and $q>1+4/d$. Let us consider a sequence $\lambda_n\searrow \lambda_c$ and call $u_n=u_{\mu_n}$ a sequence of corresponding radial-decreasing minimizers. The sequence of multipliers $\mu_n$ cannot tend to 0 because $\mathcal{E}(u_{\mu_n})=I(\lambda_n)<0$ and we know that $\mathcal{E}(u_\mu)$ is always positive for $\mu$ close to the origin by Corollary~\ref{cor:Energy}. On the other hand it can also not converge to $\mu_*$ because there $\mathcal{E}(u_\mu)$ is unbounded. Hence after extracting a subsequence we have $\mu_n\to\mu\in(0,\mu_*)$ and $u_{\mu_n}\to u_\mu$ strongly in $(H^1\cap L^\infty)(\R^d)$. The function $u_\mu$ is the sought-after minimizer. If $0\leq\lambda<\lambda_c$, then there cannot be a minimizer $u$. If there was one $u$ (positive and radial-decreasing without loss of generality), then it would solve~\eqref{powernl} for some $\mu$. Using again~\eqref{eq:relation_mu_I_kinetic} and $I(\lambda)=0$ we find $\mu>0$. Since $$\mathcal{E}\big((1+\epsilon)u_\mu\big)=-\mu\lambda\epsilon+o(\epsilon)$$ we find that $I(\lambda)$ becomes negative on the right of $\lambda$, which can only happen at $\lambda_c$ by definition. This concludes the proof of Theorem~\ref{thm:prop_I_lambda}.\qed \begin{remark}[Compactness of minimizing sequences] The strict monotonicity of $J$ in~\eqref{eq:def_J} implies that $I(t\lambda)>tI(\lambda)$ for every $\lambda>\lambda_c$ and every $t\in(0,1)$. This, in turn, implies that $I(\lambda)<I(t\lambda)+I((1-t)\lambda)$. By the concentration-compactness method~\cite{Lions-84,Lions-84b}, these `binding inequalities' imply that all the minimizing sequences for $I(\lambda)$ converge strongly in $H^1(\R^d)$ to a minimizer, up to space translations and a subsequence. 
\end{remark} \section{Proof of Theorem~\ref{thmuniqnondeg}}\label{sec:proof_thm_uniqueness} In this section we provide the full proof of Theorem~\ref{thmuniqnondeg}, although we sometimes refer to the literature for some classical parts or to~\cite{LewRot-15} for arguments which coincide with the ones in that paper. Since we are interested in proving the uniqueness and the non-degeneracy of positive radial solution to~\eqref{nleq}, we consider the associated ordinary differential equation \begin{equation}\label{nlradeq} \left\{ \begin{aligned} &u''+\frac{d-1}{r}u'+g(u)=0 \text{ on }\R^*_+\\ &u'(0)=0 \end{aligned} \right. \end{equation} and we focus on showing the uniqueness and non-degeneracy of positive solutions such that $(u(r),u'(r))\to 0$ when $r\to\infty$. This system has a local energy, given by \begin{equation}\label{locenergy} H(r)=\frac{u'(r)^2}{2}+G(u(r)),\qquad \text{ with } G(\eta)=\int_0^\eta g(s)\,ds, \end{equation} which decreases along the trajectories, since $$H'(r)=-\frac{d-1}{r}u'(r)^2\leq0.$$ We parametrize the solutions $u_y$ to~\eqref{nlradeq} by $u_y(0)=y$. Since we are interested in positive solution with $\|u_y\|_{\infty}<\beta$, then $y<\beta$. Hence, following \cite{McLeod-93,LewRot-15}, we introduce the three sets \begin{align*} &S_+=\{y\in (0,\beta): \min_\R u_y >0\},\\ &S_0=\{y\in (0,\beta): u_y>0 \text{ and } \lim_{r\to +\infty}u_y(r)=0\},\\ &S_-=\{y\in (0,\beta): u_y(r_y)=0 \text{ for some (first) } r_y>0\}, \end{align*} which form a partition of $(0,\beta)$. In case $y\in S_0$, we set $r_y=+\infty$. One should think of plotting the solution in the plane $(u',u)$ as in Figure~\ref{fig:portrait}. Then, as we will show, $S_+$ exactly correspond to all the solutions that cross the vertical axis, while staying above the horizontal axis at all times. On the other hand, $S_-$ consists of those crossing the horizontal axis first (we will show they cannot cross the vertical axis before). 
We are particularly interested in the set $S_0$ containing the remaining solutions which are converging to the point $(0,0)$ at infinity while staying in the quadrant $(u'<0,u>0)$. Our goal is indeed to show that $S_0$ is reduced to one point. A transition between $S_-$ and $S_+$ is typically a point in $S_0$ and this is actually how one can prove the existence of solutions by the shooting method. Here we \emph{assume} the existence of one such solution, hence we have $S_0\neq\emptyset.$ Points in $S_0$ typically occur as transition points between $S_-$ and $S_+$. The main idea of the proof is to show that for any $y\in S_0$, we must have \begin{equation} (y-\eta,y)\in S_+\qquad\text{and}\qquad (y,y+\eta)\in S_- \label{eq:transition} \end{equation} for some sufficiently small $\eta>0$. In other words, there can only exist transitions from $S_-$ to $S_+$ and never the other way around, when $y$ is increased starting from $y=0$. This will imply uniqueness. The way to show~\eqref{eq:transition} is to prove that the variation with respect to the initial condition $y$, \begin{equation} v_y:=\frac{\partial}{\partial y}u_y, \label{eq:def_v} \end{equation} tends to $-\infty$ at infinity, as well as its derivative $v_y'$. This implies that the curves move enough to cross either the horizontal or the vertical axis when $y$ is moved a bit, for a sufficiently large $r$. The function $v_y$ in~\eqref{eq:def_v} turns out to be the zero-energy solution of the linearized operator $\Delta+g'(u)$ with $v(0)=1$. The fact that $v_y$ diverges implies $v_y\notin L^2(\R_+,r^{d-1}\,dr)$, which means that the kernel of $\Delta+g'(u)$ cannot contain any non-trivial radial function. It is then classical~\cite{Weinstein-85} that this implies the non-degeneracy~\eqref{eq:non_degenerate}. \begin{figure}[t] \centering \includegraphics[width=7cm]{portrait.pdf} \caption{Phase portrait of the solutions in the plane $(u',u)$. 
The interval $(0,\beta)$ is partitioned into the sets $S_+$, $S_-$ and $S_0$. Solutions with an initial datum in $S_+$ cross first the vertical axis and stay positive for all times, whereas solutions in $S_-$ cross first the horizontal axis. The set $S_0$ contains the solutions that stay in the quadrant for all times and converge to the origin. The goal is to prove that $S_0$ is reduced to one point as in the picture. \label{fig:portrait}} \end{figure} We turn to the proof of the theorem, which we split into four steps corresponding to Lemmas~\ref{proppartition}--\ref{lem:nondegenerate}, respectively. In the first we show some rather classical facts about the sets $S_-$, $S_+$ and $S_0$. \begin{lemma}[Properties of the sets $S_+,S_-,S_0$]\label{proppartition}\ \begin{enumerate} \item We have $(0,\alpha]\subset S_+$. \item The set $S_-$ is open. \item\label{proppartition2} If $y\in S_0\cup S_-$, then $u'_y<0$ on $(0,r_y)$. In particular, $u_y$ is strictly decreasing on $(0,r_y)$. \item\label{proppartition3} If $y\in S_0$, then $\lim_{r\to+\infty} u'_y(r)=0$ and $\int_0^y g(s)\,ds>0$. Moreover, $u_y$ has the following behavior at infinity $$ u_y(r)\underset{r\to+\infty}{\sim} C\,\frac{e^{-\sqrt{-g'(0)}r}}{r^{\frac{d-1}{2}}}\quad u'_y(r)\underset{r\to+\infty}{\sim} -\sqrt{-g'(0)}\,C\,\frac{e^{-\sqrt{-g'(0)}r}}{r^{\frac{d-1}{2}}} $$ for some $C>0$. \item If $y\in S_+$, then $u'_y$ vanishes at least once and, for the first positive root $r'_y$ of $u'_y$, we have $H(r'_y)<0$. \item The set $S_+$ is open. \end{enumerate} \end{lemma} \begin{remark} From (\ref{proppartition3}), we see that if $G(\eta)\le 0$ for all $\eta\in (0,\beta)$, then $S_0=\emptyset$. 
Hence, if $S_0\neq\emptyset$, as we assume in our case, we can define $$\gamma:=\inf\{\eta\in(0,\beta), G(\eta)>0\}.$$ \end{remark} \begin{proof} \textit{(1)} If $y\in (0,\alpha]$, then we have $$H(r)\leq H(0)=G(y)=\int_0^yg(t)\,dt<0$$ for all $r$ since the local energy $H$ is decreasing along a solution and $g<0$ on $(0,\alpha)$. Therefore $u(r)$ cannot vanish (at a zero of $u$ we would have $H(r)\geq0$). This proves that $(0,\alpha]\subset S_+$. \smallskip \noindent \textit{(2)} At any point $r>0$ so that $u_y(r)=0$ we must have $u_y'(r)\neq0$, otherwise $u_y\equiv0$. The implicit function theorem then proves that $r_y$ depends smoothly on the initial condition $y$ and hence $S_-$ is open. \smallskip \noindent \textit{(3)} This is~\cite[Lem.~4]{LewRot-15} and the argument goes as follows. For $y\in S_0\cup S_-\subset (\alpha,\beta)$ we have $u''_y(0)=-g(y)/d<0$ since $g$ is positive on $(\alpha,\beta)$. This shows that $u_y'(r)<0$ for small $r>0$. On the other hand, for $y\in S_-$ we have $u_y'(r_y)<0$ since $r_y$ is by definition the first zero of $u_y$. If $u'$ changes sign before $r_y$ then $u_y$ must have a local strict minimum at some point $0<r'<r_y$ and then there must be another point $r''$ at which $u_y(r'')=u_y(r')$. But then $$\frac{u_y'(r'')^2}{2}=H(r'')-H(r')=-(d-1)\int_{r'}^{r''}\frac{u_y'(s)^2}{s}\,ds<0,$$ a contradiction. Therefore $u_y$ must vanish before $u'_y$ and the solution crosses first the horizontal axis in the phase portrait. The argument is similar when $r_y=+\infty$. \smallskip \noindent \textit{(4)} If $y\in S_0$, then $u'_y<0$ and for $r$ large enough $u''_y(r)=-\frac{d-1}{r}u'_y(r)-g(u_y(r))>0$. Hence, $u'_y$ has a limit at infinity, which can only be zero since $u$ tends to zero. Next, because of the monotonicity of the energy $H$, \begin{equation} \int_0^y g(s)\,ds=H(0)>\lim_{r\to+\infty}H(r)=0. 
\label{eq:estim_G} \end{equation} Finally, the explicit decay rate of $u_y$ and $u'_y$ is a classical fact whose proof can for instance be found in~\cite{BerLioPel-81}. \smallskip \noindent \textit{(5)} This is \cite[Lem.~5]{LewRot-15}, which follows the presentation in \cite{Frank-13}. If $y=\alpha$, then $u_y\equiv \alpha$ and $H(r)<0$ for all $r\in \R^+$. Hence, let $y\neq \alpha$. First of all, we prove that $u'_y$ must vanish. Otherwise, for all $y\in S_+\cap (0,\alpha)$, $u_y$ is increasing and, for all $y\in S_+\cap (\alpha,\beta)$, $u_y$ is decreasing. This implies that $\lim_{r\to+\infty }u_y(r)=\alpha$. Next, let $U(r)=r^{\frac{d-1}{2}}(u_y(r)-\alpha)$. The function $U$ solves the equation $$ U''=\left(\frac{(d-1)(d-3)}{4r^2}- \frac{g(u_y)}{u_y-\alpha}\right)U $$ and, in the limit $r\to+\infty$, $U''\sim -g'(\alpha) U$ with $g'(\alpha)>0$. This leads to a contradiction. Hence $u'_y$ vanishes and we call $r'_y$ its first root. To prove $H(r'_y)<0$, we consider two cases. First, if $y\le \alpha$, $H(r'_y)<H(0)<0$. If $y>\alpha$, then $u_y$ has a local minimum at $r'_y$. Hence, from equation~\eqref{nlradeq}, we obtain $g(u_y(r'_y))<0$ which implies $0<u_y(r'_y)<\alpha$. Finally, $H(r'_y)<0$. \smallskip \noindent \textit{(6)} To prove that $S_+$ is open, we proceed again as in \cite[Lem.~5]{LewRot-15}. We know that $(0,\gamma)\subset S_+$ ( where $\gamma=\inf\{\eta\in(0,\beta), G(\eta)>0\}$). So let $y\in S_+\cap [\gamma,\beta)$ and $z$ in a small neighborhood of $y$. As a consequence, $u_z$ has a local minimum at $r'_z$ with $H(r'_z)<0$ which implies $G(u_z(r))<0$ for all $r>r'_z$. Hence, there exists $\varepsilon>0$ such that $\varepsilon\le u_z(r)\le \gamma-\varepsilon$. Therefore, $z\in S_+$. \end{proof} Let now $v_y$ be the unique solution to the linear problem \begin{equation}\label{nlradeqlin} \left\{ \begin{aligned} &L(v):=v''+\frac{d-1}{r}v'+g'(u_y)v=0 \text{ on }(0,\infty),\\ &v(0)=1,\\ &v'(0)=0. \end{aligned} \right. 
\end{equation} The function $v_y$ is the variation of $u_y$ with respect to the initial condition $u_y(0)=y$, that is, $v_y=\partial_yu_y$. We have the following proposition on the solution to~\eqref{nlradeqlin}, which is the core of the proof of the theorem. \begin{lemma}[Solution of the linearized problem]\label{proplinear}Let $y\in S_0$. Then \begin{enumerate} \item $v_y$ vanishes exactly once. \item $v_y(r)$ and $v_y'(r)$ diverge exponentially fast to $-\infty$ as $r\to+\infty$. \end{enumerate} \end{lemma} \begin{proof} This is exactly \cite[Lem. 7, Lem. 8]{LewRot-15} and we will not reproduce all the details here. The argument is based on the Wronskian identity \begin{equation}\label{eqwronskian} (r^{d-1}(v_y f' - fv'_y))'=r^{d-1}v_y L(f) \end{equation} for different choices of the test function $f$, where $L(f)$ is defined in~\eqref{nlradeqlin}. In particular, a simple calculation shows that \begin{align*} L(u_y)&=u_yg'(u_y)-g(u_y)\\ L(u_y')&=\frac{d-1}{r^2}u'_y \end{align*} and \begin{equation*} L(ru_y')=-2g(u_y). \end{equation*} \smallskip \noindent \textit{(1)} We argue by contradiction and assume that $v_y>0$. Using $f=u_y'<0$ provides $$(r^{d-1}(v_y u_y'' - u_y'v'_y))'=(d-1)r^{d-3}u'_yv_y<0$$ since $v_y>0$ and $u'_y<0$. This shows that $r^{d-1}(v_y u_y'' - u_y'v'_y)=r^{d-1}v_y^2(u_y'/v_y)'$ is decreasing and since it vanishes at the origin, $u'_y/v_y$ is decreasing. Since this function is negative close to the origin, we have $0<v_y\leq -c u'_y$ for some $c>0$ and all $r\geq 1$. In addition, $r^{d-1}(v_y u_y'' - u_y'v'_y)\leq -c$ for (another) $c>0$ and all $r\geq 1$. But $|v_y u_y''|\leq c|u_y''|\,|u_y'|$ decreases exponentially at infinity and hence $r^{d-1}u_y'v'_y\geq c/2$ for $r$ large enough. From the behavior at infinity of $u'_y$ this proves that $-v'_y\geq c/2r^{(1-d)/2}e^{\sqrt{g'(0)}r}$ for large $r$, which shows that $v'_y\to-\infty$, a contradiction. The proof that it vanishes only once is the same as in~\cite[p~p. 
357--358]{Tao-06}, deforming solutions starting from the constant solution $u\equiv \alpha$ at $y=\alpha$ and using that there are no double zeroes. \smallskip \noindent \textit{(2)} For the proof of the central fact that $v_y,v'_y\to-\infty$, we call $r_*$ the unique zero of $v_y$ at which we must have $v_y'(r_*)<0$. We then take in the Wronskian identity $f=u_y+cru'_y$ where $c=-u_y(r_*)/(r_*u'_y(r_*))>0$ is chosen so that $f(r_*)=0$. Then we obtain $$(r^{d-1}(v_y f' - fv'_y))'=r^{d-1}v_y\Big(u_yg'(u_y)-(1+2c)g(u_y)\Big)=r^{d-1}v_y I_\lambda(u_y)$$ with $\lambda=1+2c$. The function $r^{d-1}(v_y f' - fv'_y)$ vanishes both at $0$ and at $r_*$, hence its derivative must vanish at least once in $(0,r_*)$. At this point $\tilde r$ we have $I_\lambda(u_y(\tilde r))=0$ and since $I_\lambda$ is assumed to have only one root over $(0,\beta)$, this shows that $I_\lambda(u_y)>0$ hence $(r^{d-1}(v_y f' - fv'_y))'<0$ on $(r_*,\infty)$. The argument is then similar as above to show that $v_y,v'_y\to-\infty$, see~\cite[Lem.~8]{LewRot-15}. \end{proof} Note that Lemma~\ref{proplinear} shows that, if $u_y$ is a positive radial solutions to~\eqref{nleq} which vanishes at $+\infty$ then it is \emph{radial} non-degenerate. Indeed, since the unique solution $v_y$ to~\eqref{nlradeqlin} diverges exponentially fast when $r\to\infty$, the kernel of $L$ as an operator on $L^2(\R_+,r^{d-1}dr)$ is trivial. That the kernel in the full space $L^2(\R^d)$ is spanned by the partial derivatives of $u$ will be proved later in Lemma~\ref{lem:nondegenerate}. At this point we have all the tools for proving the uniqueness of positive solutions to~\eqref{nlradeq}. This is done by using the following third proposition. \begin{lemma}\label{propisolated} Let $y\in S_0$. Then there exists $\varepsilon>0$ such that $(y-\varepsilon,y)\subset S_+$ and $(y,y+\varepsilon)\subset S_-$. \end{lemma} As above, the proof goes as in~\cite[Lem. 3(b)]{McLeod-93}. \begin{proof} Let $y\in S_0$. 
Choose $a>0$ such that $g'(s)\le g'(0)/2$ for all $s\in[0,a)$ and $\bar R$ such that $u_y(r)\le a$ for all $r\ge \bar R$. Such an $\bar R$ exists since for all $y\in S_0$, $u_y$ is strictly decreasing and goes to $0$ at $+\infty$. Moreover, thanks to Lemma~\ref{proplinear}, we can choose $R\ge \bar R$ such that $v_y(R)<0$ and $v'_y(R)<0$. As a consequence, there exists $\varepsilon>0$ such that for all $z\in (y,y+\varepsilon)$, we have $0<u_z(R)<u_y(R)$ and $u'_z(R)<u'_y(R)<0$. In particular, the function $w:=u_z-u_y$ is such that $w(R)<0$ and $w'(R)<0$. Suppose, by contradiction, $z\in S_0\cup S_+$. Then $w$ must tend to $0$ or become positive at some point. Therefore, the function $w$ must have a negative minimum at $R'>R$. As a consequence, by using~\eqref{nlradeq}, we obtain \begin{equation*} w''(R')=-(g(u_z(R'))-g(u_y(R')))=- g'(\theta)w(R') \end{equation*} for some $0<u_z(R')<\theta<u_y(R')\le a$. Hence, $g'(\theta)\le g'(0)/2<0$. This leads to $w''(R')<0$ which is a contradiction. As a conclusion, $z\in S_-$. The argument is the same for $z\in (y-\varepsilon,y)$. \end{proof} Lemma~\ref{propisolated} implies that any $y\in S_0$ is an isolated point. Since $S_+$ and $S_-$ are open sets, they can only be separated by points in $S_0$. Now, the lemma says that a point in $S_0$ can only serve as a transition between $S_+$ below and $S_-$ above. Hence, there can be only one transition of this type and $S_0$ contains at most one point. This concludes the proof of uniqueness. Our last step is classical and consists in showing the non-degeneracy in the whole space $L^2(\R^d)$. \begin{lemma}[Non-degeneracy in $L^2(\R^d)$]\label{lem:nondegenerate} Let $u$ a positive radial solution to~\eqref{nleq} with $\|u\|_{\infty}<\beta$ and such that $u(|x|)\to 0$ as $|x|\to +\infty$. Let $\mathcal L_{\rm tot}$ the linearized operator at $u$. 
Hence, for any $v\in L^2(\R^d,\C)$, $\mathcal L_{\rm tot} v:=\mathcal{L}{\rm Re}(v)+i\mathcal{L}'{\rm Im}(v)$ with $$\mathcal{L}=-\Delta- g'(u),\qquad \mathcal{L}'=-\Delta -\frac{g(u)}{u}.$$ Then we have $$\ker(\mathcal{L})={\rm span}(\partial_{x_1}u,\ldots,\partial_{x_d}u),\qquad \ker(\mathcal{L}')={\rm span}(u).$$ \end{lemma} \begin{proof} Since $u$ tends to zero at infinity, the two potentials $-g'(u)$ and $-g(u)/u$ are uniformly bounded. Therefore the operators $\mathcal{L}$ and $\mathcal{L}'$ are self-adjoint on $L^2(\R^d)$, with domain $H^2(\R^d)$ and form domain $H^1(\R^d)$. Moreover they satisfy the Perron-Frobenius property, that their first eigenvalue, when it exists, is necessarily simple with a positive eigenfunction, by the Perron-Frobenius theorem~\cite{ReeSim4}. Finally, any positive eigenfunction is necessarily the first one. Since $u$ is a positive solution to~\eqref{nleq}, then $\mathcal{L}'u=0$. Then $0$ is the first eigenvalue of $\mathcal{L}'$ and it is non-degenerate, hence $\ker(\mathcal{L}')={\rm span}(u)$. The argument for $\mathcal{L}$ follows~\cite{Weinstein-85}. First of all, we decompose $L^2(\R^d)$ in angular momentum sectors as $$L^2(\R^d)=\bigoplus\limits_{\ell\ge 0} L^2(\R_+,r^{d-1}dr)\otimes \mathcal K_\ell$$ with $K_\ell$ the $\ell$th eigenspace of the Laplace-Beltrami operator on the sphere $\mathbb{S}^{d-1}$. Next, since $\mathcal{L}$ commutes with space rotations, it can be written as $\mathcal{L}=\bigoplus\limits_{\ell\ge 0} A^{(\ell)}\otimes \mathds{1}$ where \begin{equation*} A^{(\ell)}v=-v''-\frac{d-1}{r}v'+\frac{\ell(\ell+d-2)}{r^2}v-g'(u)v \end{equation*} with Neumann boundary condition at $r=0$ for $\ell=0$ and Dirichlet boundary condition for $\ell\ge 1$. By the variational principle, the first eigenvalue of $A^{(\ell)}$ in increasing with $\ell$. Each $A^{(\ell)}$ has the Perron-Frobenius property in $L^2(\R_+,r^{d-1}\,dr)$. 
Now, the translation-invariance gives $u'\in \ker(A^{(1)})$ and, thanks to Lemma~\ref{proppartition2}, $u'<0$. Therefore $0$ is the first eigenvalue of $A^{(1)}$ and $\ker(A^{(1)})={\rm span}(u')$. Next, for any $\ell \ge 2$, the first eigenvalue of $A^{(\ell)}$ must be positive since $\lambda_1(A^{(\ell)})>\lambda_1(A^{(1)})$, hence $\ker(A^{(\ell)})=\{0\}$ for $\ell\ge 2$. Finally, it remains to determine $\ker(A^{(0)})$. But $-A^{(0)}$ is simply the operator $L$ defined in~\eqref{nlradeqlin} and we have shown in Lemma~\ref{proplinear} that the kernel of $L$ as an operator on $L^2(\R_+,r^{d-1}dr)$ is trivial. As a conclusion, $\ker(\mathcal{L})={\rm span}(\partial_{x_1}u,\ldots,\partial_{x_n}u)$. \end{proof} This concludes the proof of Theorem~\ref{thmuniqnondeg}.\qed
1,314,259,993,695
arxiv
\section{Introduction and the main result}\label{sectIntro} The one-dimensional Kardar-Parisi-Zhang (KPZ) universality class~\cite{KPZ86} of stochastic growth models has received a lot of attention in recent years, see e.g.~the surveys and lecture notes~\cite{FS10,Cor11,QS15,BG12,Qua11,Fer10b,Tak16,Zyg18}. Two of the most studied models in this class are the exponential/geometric last passage percolation (LPP) and the totally asymmetric simple exclusion process (TASEP). In both cases, one can define a height function $h(x,t)$, where $x$ stands for space (one-dimensional in our case) and $t$ for time. At a large time $t$, under the $2/3-1/3$ scaling, one expects to see a non-trivial limit process. To illustrate it, consider the scaling around the origin \begin{equation} h_t^{\rm resc}(u)=\frac{h(u t^{2/3},t)-t h_{\rm ma}(u t^{-1/3})}{t^{1/3}} \end{equation} with $h_{\rm ma}(\xi)=\lim_{t\to\infty} t^{-1} h(\xi t)$ being the (deterministic) macroscopic limit shape. The limit process depends on the geometry of the initial condition. One natural initial condition is the stationary one and the limit process in this case, called Airy$_{\rm stat}$, has been determined in~\cite{BFP09}. For non-random initial conditions, the two main cases are: \begin{enumerate} \item[(a)] curved limit shape $h_{\rm ma}$: one expects the weak limit $\lim_{t\to\infty} h^{\rm resc}_t(u)=a_1 {\cal A}_2(a_2 u)$, with ${\cal A}_2$ being the Airy$_2$ process~\cite{PS02} and $a_1,a_2$ are model-dependent parameters (see~\cite{PS02,Jo03b,BF07} for LPP and TASEP setting and~\cite{Dim20} for a non-determinantal case), \item[(b)] flat limit shape $h_{\rm ma}$: one expects the weak limit $\lim_{t\to\infty} h^{\rm resc}_t(u)=a_1' {\cal A}_1(a_2' u)$, with ${\cal A}_1$ being the Airy$_1$ process~\cite{Sas05}, with again $a_1',a_2'$ model-dependent parameters (see~\cite{Sas05,BFPS06,BFP06}). 
\end{enumerate} As universal limit objects in the KPZ universality class, the $Airy_{\rm stat}$, as well as ${\cal A}_1$ and ${\cal A}_2$ (which also are stationary stochastic processes in $\mathbb{R}$) have attracted much attention. It is known that the one point marginal for ${\cal A}_2$ is the GUE Tracy-Widom distribution from random matrix theory~\cite{PS02}, whereas the one point marginal for ${\cal A}_1$ is a scalar multiple of the GOE Tracy-Widom distribution~\cite{Sas05,BFPS06}. The next fundamental question is naturally to understand the two point functions for these processes. Although there are explicit formulae available for the multi-point distributions, extracting asymptotics from these complicated formulae is non-trivial. Widom in~\cite{Wid03} (see also~\cite{aVm03} for a conditional result) proved that \begin{equation} {\rm Cov}({\cal A}_2(0),{\cal A}_2(u))=2 u^{-2}+\mathcal{O}(u^{-4})\textrm{ as }u\to\infty. \end{equation} Although algebraically there are many similarities between the processes ${\cal A}_1$ and ${\cal A}_{2}$ (see the review~\cite{Fer07}), the method used in~\cite{Wid03} can not be directly applied to the case of the Airy$_1$ process, and the question of understanding the decay of correlations in the Airy$_1$ process had remained open until now. A numerical study~\cite{BFP08} clearly showed that the decay of the covariance for the Airy$_1$ process is very different from that of Airy$_2$, in that it decays super-exponentially fast, i.e., $-\ln \Cov({\cal A}_1(0),{\cal A}_1(u)) \sim u^{\delta}$ for some $\delta>1$. Unfortunately, the numerical data of~\cite{BFP08} are coming from a Matlab program developed in~\cite{Born08} and uses the 10-digits machine precision. From the data it was not possible to conjecture the true value of $\delta$. The reason behind the difference in the decay of the covariances of ${\cal A}_2$ and ${\cal A}_1$ can be explained as follows. 
In the curved limit shape situation, the space-time regions which essentially determine the values of $h(0,t)$ and $h(u t^{2/3},t)$ have an intersection whose size decays polynomially in $u$. In contrast, for the flat limit shape, except on a set whose probability goes to zero super-exponentially fast in $u$, these regions are disjoint. The goal of this paper is to prove that the decay of covariance for the Airy$_1$ process is super-exponential with $\delta=3$. More precisely, we prove upper and lower bounds of the covariance where exponents have a matching leading order term. The following theorem is the main result of this paper. \begin{thm}\label{ThmMain} There exist constants $c,c'>0$ such that for $u>1$ \begin{equation} e^{-c u \ln(u)} e^{-\frac43 u^3}\leq \Cov({\cal A}_1(0),{\cal A}_1(u)) \leq e^{c'u^2}e^{-\frac43 u^3}. \end{equation} \end{thm} Clearly, the threshold $u>1$ above is arbitrary, and by changing the constants $c,c'$ we can get the same bounds for any $u$ bounded away from $0$. The upper and the lower bounds in Theorem~\ref{ThmMain} are proved separately with very different arguments. In Section~\ref{sectUpperBound} we prove the upper bound; see Corollary~\ref{corUB}. For this purpose we consider the point-to-line exponential last passage percolation (LPP) which is known to converge to the Airy$_1$ process under an appropriate scaling limit. Corollary~\ref{corUB} is an immediate consequence of Theorem~\ref{thmUpperBound} which proves the corresponding decorrelation statement in the LPP setting. The strategy for the upper bound follows the intuition that the decorrelation comes from the fact that the point-to-line geodesics for two initial points far from each other use mostly disjoint sets of random variables. 
To make this precise, we prove and use results controlling the transversal fluctuations of the point-to-line geodesics and their coalescence probabilities (see Theorem~\ref{ThmUB6Bis} and Theorem~\ref{ThmCrossings}) that are of independent interest. The proof is mainly based on probabilistic arguments, but uses one point moderate deviation estimates for the point-to-point and point-to-line exponential LPP with optimal exponents. Such results have previously been proved in~\cite{LR10}, but an estimate with the correct leading order term in the upper tail exponent is required for our purposes. These are obtained in Lemma~\ref{lemUB2} and Lemma~\ref{lemUB2B} by using asymptotic analysis. In Section~\ref{SectLowerBound} we prove the lower bound; see Theorem~\ref{ThmLowerBound}. For the lower bound we start with Hoeffding's covariance formula, which says that the covariance of two random variables is given by the double integral of the difference between their joint distribution and the product of the two marginals; see \eqref{covid}. The joint distribution of the Airy$_1$ process is given in terms of a Fredholm determinant (see \eqref{Fr}) and the proof uses analytic arguments to obtain precise estimates for these Fredholm determinants. A crucial probabilistic step here, however, is the use of the FKG inequality applied in the LPP setting, which, upon taking an appropriate scaling limit yields that the aforementioned integrand is always non-negative; see Lemma~\ref{lemFKG}. This allows one to lower bound the covariance by estimating the integrand only on a suitably chosen compact set, which nonetheless leads to a lower bound with the correct value of the leading order exponent. We finish this section with a brief discussion of some related works. Studying the decay of correlations in exponential LPP has recently received considerable attention. 
Following the conjectures in the partly rigorous work~\cite{FS16}, the decay of correlations in the time direction has been studied for the stationary and droplet initial conditions in~\cite{FO18}, where precise first order asymptotics were obtained (see also~\cite{FO22,BKLD22} for works on the half-space geometry). Similar, but less precise, estimates for the droplet and flat initial conditions were obtained in~\cite{BG18, BGZ19}. All these works also rely on understanding the localization and geometry of geodesics in LPP, some of those results are also useful for us. The lower bounds in~\cite{BG18, BGZ19} also use the FKG inequality in the LPP setting and provide bounds valid in the pre-limit. One might expect that similar arguments can lead to a bound similar, but quantitatively weaker, to Theorem~\ref{thmUpperBound} valid in the LPP setting. \paragraph{Acknowledgements.} The work of O. Busani and P.L. Ferrari was partly funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy - GZ 2047/1, projekt-id 390685813. P.L. Ferrari was also supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer 211504053 - SFB 1060. R. Basu is partially supported by a Ramanujan Fellowship (SB/S2/RJN-097/2017) and a MATRICS grant (MTR/2021/000093) from SERB, Govt. of India, DAE project no. RTI4001 via ICTS, and the Infosys Foundation via the Infosys-Chandrasekharan Virtual Centre for Random Geometry of TIFR. \newpage \section{Upper Bound}\label{sectUpperBound} In this section we prove the upper bound of Theorem~\ref{ThmMain}. \subsection{Last passage percolation setting} We consider exponential last passage percolation (LPP) on $\mathbb{Z}^2$. Let $\omega_{i,j}\sim\exp(1)$, $i,j\in\mathbb{Z}$, be independent exponentially distributed random variables with parameter $1$. 
For points $u,v\in \mathbb{Z}^2$ with $u\prec v$, i.e., $u_1\leq v_1$ and $u_2\leq v_2$, we denote the passage time between the points $u$ and $v$ by \begin{equation} L_{u,v}=\max_{\pi:u\to v} \sum_{(i,j)\in\pi} \omega_{i,j}, \end{equation} where the maximum is taken over all up-right paths from $u$ to $v$ in $\mathbb{Z}^2$. Denote by $\Gamma_{u,v}$ the geodesic from $u$ to $v$, that is, the path $\pi$ maximizing the above sum. Furthermore, let $\mathcal{L}_n=\{(x,y)\in\mathbb{Z}^2\,|\, x+y=n\}$ and denote by \begin{equation} L_{u,\mathcal{L}_n}=\max_{\pi:u\to\mathcal{L}_n} \sum_{(i,j)\in\pi}\omega_{i,j} \end{equation} the point-to-line last passage time, where the maximum is taken over all up-right paths going from $u$ to $\mathcal{L}_n$. Let us mention some known limiting results of exponential LPP. Let\footnote{We do not write explicitly the rounding to integer values, i.e., $(x,y)$ stands for $(\lfloor x\rfloor,\lfloor y\rfloor)$.} \begin{equation} I(u)=u(2N)^{2/3}(1,-1),\quad J(u)=(N,N)+u(2N)^{2/3}(1,-1) \end{equation} and define the rescaled LPP \begin{equation}\label{eq1.4} L_N^*(u)=\frac{L_{I(u),\mathcal{L}_{2N}}-4N}{2^{4/3}N^{1/3}}, \quad L_N(u)=\frac{L_{I(u),(N,N)}-4N}{2^{4/3}N^{1/3}}. \end{equation} Then, by the result on TASEP with density $1/2$~\cite{Sas05,BFPS06} which can be transferred to LPP using slow decorrelation~\cite{CFP10b}, we know that \begin{equation}\label{eqUB5} \lim_{N\to\infty} L_N^*(u)= 2^{1/3}{\cal A}_1(2^{-2/3}u), \end{equation} where ${\cal A}_1$ is the Airy$_1$ process, in the sense of finite-dimensional distributions. Similarly, (see~\cite{Jo03b} for the geometric case and~\cite{BP07} for a two-parameter extension) \begin{equation} \label{eqa2} \lim_{N\to\infty} L_N(u)= {\cal A}_2(u)-u^2, \end{equation} with ${\cal A}_2$ is the Airy$_2$ process~\cite{PS02}, where in~\cite{Jo03} the convergence is weak convergence on compact sets. \begin{figure}[t!] 
\centering \includegraphics[height=5.5cm]{FigSettingUB} \caption{ (Left): In the typical scenario, the geodesics associated with $L_N^*(0)$ (blue) and $L_N^*(u)$ (green) do not cross the straight line connecting $I(u/2)$ and $J(u/2)$. This suggests that $L_N^*(0)$ and $L_N^*(u)$ are almost independent. (Right): The untypical case where the blue and green geodesics meet. In that case the geodesics will meet around the point $J(u/2)$. The probability of this event is the main contributor to the covariance of $L_N^*(0)$ and $L_N^*(u)$. } \label{FigGeometryUB} \end{figure} We also denote by $\Gamma_N$ (resp.\ $\Gamma^*_N$) the (almost surely unique) \emph{geodesic} attaining $L_{(0,0),(N,N)}$ (resp.\ $L_{(0,0),\mathcal{L}_{2N}}$). For a directed path $\pi$, we denote by $\pi(t)=(x-y)/2$ where $(x,y)$ is the unique point (if it exists) where $\pi$ intersects $\mathcal{L}_t$. The parameter $t$ will be thought of as \emph{time} and $\pi(t)$ will be the \emph{position} of the path at time $t$. For instance, if $\Gamma^*_N$ ends at $J(u)$, then $\Gamma^*_N(2N)=u(2N)^{2/3}$; see also Figure~\ref{FigGeometryUB}. We shall also denote by $L(\pi)$ the passage time of the path $\pi$, i.e., the sum of the weights on $\pi$. The rescaled last passage times $L_N^*(u)$ and $L_N(u)$ have super-exponential upper and lower tails (see e.g.~Appendices of~\cite{FO18} for a collection of such results and references), which implies that the limit of their covariance is the covariance of their limit, i.e., \begin{equation}\label{eq2.7} \lim_{N\to\infty} {\rm Cov}\left(L_N^*(u),L_N^*(0)\right)= 2^{2/3}{\rm Cov}\left({\cal A}_1(2^{-2/3}u),{\cal A}_1(0)\right). \end{equation} So if ${\rm Cov}\left(L_N^*(u),L_N^*(0)\right)\sim e^{-\beta u^3}$, then ${\rm Cov}\left({\cal A}_1(u),{\cal A}_1(0)\right)\sim e^{-4 \beta u^3}$. \medskip We first state the upper bound on ${\rm Cov}\left(L_N^*(u),L_N^*(0)\right)$ which is the main result in this section.
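As a concrete illustration of the objects defined above, the point-to-point passage time satisfies the recursion $L_{(0,0),(i,j)}=\omega_{i,j}+\max\{L_{(0,0),(i-1,j)},L_{(0,0),(i,j-1)}\}$, so it can be computed by dynamic programming. The following sketch (purely illustrative and not part of the proofs; the function name and grid size are ours) checks numerically that $L_{(0,0),(N,N)}\approx 4N$ for i.i.d.\ ${\rm Exp}(1)$ weights:

```python
import random

def lpp_passage_time(weights):
    """Maximal weight of an up-right path from (0,0) to (m-1,n-1),
    via the recursion L(i,j) = w(i,j) + max(L(i-1,j), L(i,j-1))."""
    m, n = len(weights), len(weights[0])
    L = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            prev = max(L[i - 1][j] if i > 0 else 0.0,
                       L[i][j - 1] if j > 0 else 0.0)
            L[i][j] = weights[i][j] + prev
    return L[-1][-1]

# With Exp(1) weights, L_{(0,0),(N,N)} = 4N + fluctuations of order N^{1/3}.
random.seed(0)
N = 50
w = [[random.expovariate(1.0) for _ in range(N + 1)] for _ in range(N + 1)]
ratio = lpp_passage_time(w) / N  # close to 4 already for moderate N
```

After dividing by $N$, the fluctuations around $4$ are of order $N^{-2/3}$, consistent with the $2^{4/3}N^{1/3}$ scaling in \eqref{eq1.4}.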
\begin{thm}\label{thmUpperBound} For $N^{1/14}\gg u>1$, \begin{equation} {\rm Cov}\left(L_N^*(u),L_N^*(0)\right)\leq e^{cu^2}e^{- \frac13 u^3} \end{equation} for some $c>0$. \end{thm} The following corollary, proving the upper bound in Theorem~\ref{ThmMain}, is immediate from \eqref{eq2.7} and the above theorem. \begin{cor}\label{corUB} For $u>1$, we have \begin{equation} {\rm Cov}\left({\cal A}_1(u),{\cal A}_1(0)\right) \leq e^{cu^2}e^{-\frac43 u^3} \end{equation} for some $c>0$. \end{cor} Before proceeding further, let us explain the heuristic idea behind the proof of Theorem~\ref{thmUpperBound}. Let $T$ denote the straight line joining $I(u/2)$ and $J(u/2)$. Let $\tilde{L}_{N}(0)$ (resp.\ $\tilde{L}_{N}(u)$) denote the rescaled last passage time from $(0,0)$ to $\mathcal{L}_{2N}$ (resp.\ from $I(u)$ to $\mathcal{L}_{2N}$) in the LPP restricted to use the randomness only to the left (resp.\ to the right) of $T$. Since $\tilde{L}_{N}(0)$ and $\tilde{L}_{N}(u)$ depend on disjoint sets of vertex weights and hence are independent, one expects that the leading order behaviour of the covariance is given by the probability that $L_N^*(0)\neq \tilde{L}_{N}(0)$ and $L_N^*(u)\neq \tilde{L}_{N}(u)$ (parts of the sample space where only one of these two events holds can also contribute, but our arguments will show that these contributions are not of a higher order, see Figure~\ref{FigGeometryUB}). Now, $$\Pb(L_N^*(0)\neq \tilde{L}_{N}(0))=\Pb(L_N^*(u)\neq \tilde{L}_{N}(u))=\Pb\Big(\sup_{0\leq t\leq 2N}\Gamma^*_N(t)\geq \tfrac{1}{2}u(2N)^{2/3}\Big)$$ and the probability of the last event is $\lesssim e^{-\frac{1}{6}u^3}$ by Theorem~\ref{ThmUB6Bis} below. The proof is completed by showing that the probability of the intersection of the two events has an upper bound which is of the same order (at the level of exponents) as their product.
This final step is obtained by considering the two cases, one where the point-to-line geodesics do not intersect, and the second where they do. The first part is bounded using the BK inequality where the probability of the geodesics intersecting is upper bounded separately in Theorem~\ref{ThmCrossings}. \subsection{Localization estimates of geodesics} As explained above, a key step in the proof is to get precise estimates for the probability that the geodesic behaves atypically, i.e., it exits certain given regions. To this end, the main result of this subsection provides the following localization estimate for $\Gamma^*_N$ that is of independent interest. \begin{thm}\label{ThmUB6Bis} For $N^{1/14}\gg u>1$ we have \begin{equation} \Pb\Big(\sup_{0\leq t\leq 2N}\Gamma^*_N(t)\geq u(2N)^{2/3}\Big)\leq e^{cu^2}e^{-\frac{4}{3} u^3} \end{equation} for some constant $c>0$. \end{thm} Although we do not prove a matching lower bound, the constant $\frac{4}{3}$ is expected to be optimal; see the discussion following Lemma~\ref{lemUB3}. Transversal fluctuation estimates for geodesics in LPP are of substantial interest and have found many applications. This is the first optimal upper bound in this direction for point-to-line geodesics. For point-to-point geodesics, similar estimates (albeit with unspecified constants in front of the cubic exponent) are proved for Poissonian LPP~\cite{BSS14} and exponential LPP~\cite{BGZ19}, see also~\cite{HS20} for a lower bound. In fact, we shall need to use the following estimate from~\cite{BGZ19,BF20b}. \begin{lem}[Proposition~C.9 of~\cite{BGZ19}]\label{lemUB5} For $N^{1/3}\gg u>1$ we have \begin{equation} \Pb\Big(\sup_{0\leq t\leq 2N}\Gamma_N(t)\geq u(2N)^{2/3}\Big)\leq e^{-c u^3} \end{equation} for some constant $c>0$.
\end{lem} For the proof of Theorem~\ref{ThmUB6Bis} as well as the results in the subsequent subsections of this section, we shall use as input some lower and upper tail estimates of various LPP, which are collected and, if needed, proved in Appendix~\ref{a:LPP}. The first step is to prove the special case $t=2N$ of Theorem~\ref{ThmUB6Bis}; we get an estimate of the probability that $\Gamma^*_N$ ends in \mbox{${\cal D}_u=\cup_{v\geq u} J(v)$}, that is, $\Gamma^*_N(2N)\ge u(2N)^{2/3}$. \begin{lem}\label{lemUB3} For all $N^{1/9}\gg u>1$, we have \begin{equation}\label{eqUBLocLine} \Pb(\Gamma^*_N(2N)\geq u(2N)^{2/3})\leq e^{cu^2}e^{-\frac43u^3} \end{equation} for some $c>0$. \end{lem} Again, we do not prove a matching lower bound, but the constant $\frac{4}{3}$ should be optimal. Indeed, notice that by \eqref{eqa2}, one expects that $(2N)^{-2/3}\Gamma^{*}_{N}(2N)$ weakly converges to the almost surely unique maximizer ${\cal M}$ of ${\cal A}_{2}(u)-u^2$. The distribution of ${\cal M}$ has been studied in~\cite{Sch12,BKS12} whence it is known that $\Pb(|{\cal M}|\ge u)\sim e^{-\frac43u^3}$. \begin{proof}[Proof of Lemma~\ref{lemUB3}] Let $C_0$ be such that (using Lemma~\ref{lemUB1}) \begin{equation}\label{eq2.12} \Pb(L_{(0,0),(N,N)}<4N- C_0u2^{4/3} N^{1/3})\le e^{-\frac{4}{3}u^3}. \end{equation} We have \begin{equation} \begin{aligned} \Pb(\Gamma^*_N(2N)\leq u(2N)^{2/3})\geq &\, 1-\Pb(L_{(0,0),{\cal D}_u}\geq 4N-C_0u 2^{4/3}N^{1/3})\\ &-\Pb(L_{(0,0),(N,N)}<4N-C_0u 2^{4/3} N^{1/3}). \end{aligned} \end{equation} Using our definition of $C_0$ and Lemma~\ref{lemUB2}, we get for $N^{1/3}\gg u\ge C_0+1$ \begin{equation} \Pb(\Gamma^*_N(2N)\geq u(2N)^{2/3}) \leq C e^{-\frac43 (u^2-C_0u)^{3/2}}+ e^{-\frac{4}{3}u^3}\leq e^{cu^2} e^{-\frac{4}{3} u^3}. \end{equation} By adjusting the constant $c$ if necessary, we get the same conclusion for $u\in [1,C_0+1]$. 
\end{proof} Our next result is a similar localization estimate for the point-to-line geodesic $\Gamma^*_N$ at an intermediate time $t$. \begin{lem}\label{lemUB3B} Let $t=2\tau N$. Fix any $\theta\geq 1$ and take $\min\{\tau,1-\tau\}\geq u^{-\theta}$. Assume that $N^{1/(9+3\theta)}\gg u>\max\{1,\tfrac43\sqrt{\tau}C_0\}$ with $C_0$ as in \eqref{eq2.12}. Then there exists a constant $c>0$ such that \begin{equation} \label{eq:t1} \Pb(\Gamma^*_N(t)\geq u(2N)^{2/3}) \leq e^{-\frac43 u^3 \tau^{-3/2}}e^{\frac12 u \tau^{-3/2}} e^{u^2(c+ 2 C_0 \tau^{-1})}. \end{equation} In particular, by choosing another constant $c'>0$, \begin{equation} \label{eq:t2} \Pb(\Gamma^*_N(t)\geq u(2N)^{2/3})\leq e^{-\frac43 u^3+c' u^2} \end{equation} for all $N^{1/(9+3\theta)}\gg u>1$. \end{lem} Notice that \eqref{eq:t1} provides a stronger bound compared to \eqref{eq:t2} for small $\tau$, which is expected as the transversal fluctuation should grow with $\tau$. Indeed, for $\tau \ll 1$, one expects even stronger bounds, see Theorem~3 of~\cite{BSS19}. For the proof of Theorem~\ref{ThmUB6Bis}, \eqref{eq:t2} would suffice, but we record the stronger estimate \eqref{eq:t1} as it will be used in the next subsection. \begin{proof}[Proof of Lemma~\ref{lemUB3B}] We start with a simple rough bound. Notice that since the geodesics are almost surely unique, by planarity they cannot cross each other multiple times. Consequently, if $\Gamma^*_N(2N)\leq A(2N)^{2/3}$ for some $A>0$, the geodesic $\Gamma^{*}_{N}$ lies to the left of the point-to-point geodesic from $I(A)$ to $J(A)$. Therefore, the maximal transversal fluctuation of the point-to-line geodesic can be upper bounded by the sum of the fluctuation at the endpoint plus the maximal transversal fluctuation of a point-to-point geodesic. Similar arguments will be used multiple times in the sequel and will be referred to as the ordering of geodesics.
It follows that \begin{equation}\label{Oub5} \begin{aligned} \Pb(\Gamma^*_N(t)\geq C_1 u\tau^{-1/2} (2N)^{2/3})&\leq \Pb(\Gamma^*_N(2N)\geq \tfrac12 C_1 u\tau^{-1/2} (2N)^{2/3})\\ &+\Pb(\Gamma_N(t)\geq \tfrac12 C_1 u\tau^{-1/2} (2N)^{2/3}). \end{aligned} \end{equation} Applying the bounds of Lemma~\ref{lemUB3} and Lemma~\ref{lemUB5} with the constant $C_1$ large enough, we get that \begin{equation}\label{Oub5b} \Pb(\Gamma^*_N(t)\geq C_1 u\tau^{-1/2} (2N)^{2/3})\leq e^{-\frac43 u^3 \tau^{-3/2}}. \end{equation} Next we need to bound the probability that $\Gamma^*_N(t)$ is in $[u(2N)^{2/3},C_1u\tau^{-1/2}(2N)^{2/3}]$. Define $K(v)=(t/2,t/2)+v(2N)^{2/3}(1,-1)$. Then, for any $S\in \mathbb{R}$, \begin{multline}\label{eqB4} \Pb(C_1 u\tau^{-1/2}(2N)^{2/3}>\Gamma^*_N(t)\geq u(2N)^{2/3}) \leq \Pb(L_{(0,0),\mathcal{L}_{2N}}\leq S)\\ +\Pb\Big(\sup_{u\leq v\leq C_1 u\tau^{-1/2}} (L_{(0,0),K(v)}+\tilde L_{K(v),\mathcal{L}_{2N}})>S\Big), \end{multline} where $\tilde L_{K(v),\mathcal{L}_{2N}}=L_{K(v),\mathcal{L}_{2N}}-\omega_{K(v)}$ is the LPP without the first point\footnote{The tail bounds in Appendix~\ref{a:LPP} clearly continue to hold even after removing the random variable at $K(v)$ which is ${\rm Exp}(1)$-distributed. The advantage is that in this way $L_{(0,0),K(v)}$ and $\tilde L_{K(v),\mathcal{L}_{2N}}$ are independent random variables.}. We set \begin{equation} S=4N-a u^2 2^{4/3} N^{1/3}. \end{equation} By Corollary~\ref{CorUB2}, setting $a=C_0 \tau^{-1/2}u^{-1}\ll N^{2/3}$ with $C_0$ as in \eqref{eq2.12}, we see that \begin{equation} \Pb(L_{(0,0),\mathcal{L}_{2N}}\leq S)\leq e^{-\frac43 u^3 \tau^{-3/2}}. \end{equation} It remains to bound the last term in \eqref{eqB4}.
We have \begin{equation}\label{eqB6} \begin{aligned} &\Pb\Big(\sup_{u\leq v\leq C_1 u \tau^{-1/2}} (L_{(0,0),K(v)}+\tilde L_{K(v),\mathcal{L}_{2N}})>S\Big)\\ &\leq \sum_{k=0}^{C_1 u/(\sqrt{\tau}\delta)-u-1} \hspace{-1em}\Pb\Big(\sup_{u+k \delta\leq v\leq u+(k+1)\delta} (L_{(0,0),K(v)}+\tilde L_{K(v),\mathcal{L}_{2N}})>S\Big)\\ &\leq \sum_{k=0}^{C_1 u/(\sqrt{\tau}\delta)-u-1} \hspace{-1em}\Pb\Big(\sup_{v\geq u+k\delta} L_{(0,0),K(v)}+\sup_{u+k\delta\leq v\leq u+(k+1)\delta}\tilde L_{K(v),\mathcal{L}_{2N}}>S\Big), \end{aligned} \end{equation} where $\delta>0$ will be chosen later. Note that the two suprema above are independent. Denoting $X_k=\sup_{v\geq u+k\delta} L_{(0,0),K(v)}$ and $Y=\sup_{-\delta/2\leq v\leq \delta/2}\tilde L_{K(v),\mathcal{L}_{2N}}$, note also that $\sup_{u+k\delta\leq v\leq u+(k+1)\delta}\tilde L_{K(v),\mathcal{L}_{2N}}$ has the same law as $Y$ for each $k$. Hence, \begin{equation}\label{eqB7} \eqref{eqB6} =\sum_{k=0}^{C_1 u/(\sqrt{\tau}\delta)-u-1} \Pb(X_k+Y>S) \leq \frac{C_1 u}{\sqrt{\tau}\delta} \Pb(X_0+Y>S) \end{equation} since $X_k\leq X_0$ for any $k\geq 0$. As $X_0$ and $Y$ are independent, it is expected that the leading term should behave as \begin{equation}\label{App9} \Pb(X_0>S^*) \Pb(Y>S-S^*) \end{equation} with $S^*$ chosen such that \eqref{App9} is maximal. Since we want to maximize over a finite number of points (not going to infinity as $N$ does), we instead look for the maximum of \begin{equation}\label{App9B} \Pb(X_0>S^*) \Pb(Y>S-S^*-\eta 2^{4/3} N^{1/3}) \end{equation} for some small positive discretization step $\eta$; see Figure~\ref{FigDiscretization}. The natural scale of the fluctuation of $X_0$ is $\tau^{1/3} 2^{4/3}N^{1/3}$ and the one for $Y$ is $(1-\tau)^{1/3}2^{4/3}N^{1/3}$, so we choose $\eta$ at most $\tfrac14\min\{(1-\tau)^{1/3},\tau^{1/3}\}$. \begin{figure}[t!]
\centering \includegraphics[height=6cm]{FigDiscretization} \caption{The probability $\Pb(X_0+Y>S)$ is smaller than the sum of the probabilities $\Pb(X_0\geq A_1, Y\geq A_2)$ with $(A_1,A_2)$ being the discretized points on $A_1+A_2=S-\eta 2^{4/3}N^{1/3}$.} \label{FigDiscretization} \end{figure} We want to discretize the interval $[4\tau N-\frac{u^2}{\tau}2^{4/3}N^{1/3},4\tau N-a u^2 2^{4/3}N^{1/3}]$ into pieces of size $\eta 2^{4/3} N^{1/3}$. The interval must be non-empty, which is ensured if $u$ is not too small. For that purpose let us assume that $u>\max\{1,\frac43 \sqrt{\tau}C_0\}$. Define the number of discretized points and then $\eta$ by \begin{equation} M=\left\lceil u^2(\tau^{-1}-a)\frac{4}{\min\{(1-\tau)^{1/3},\tau^{1/3}\}}\right\rceil, \quad \eta=u^2(\tau^{-1}-a)\frac{1}{M}. \end{equation} Our assumption on $u$ ensures that $\frac1\tau-a\geq \frac{1}{4\tau}$, or equivalently $\tau a\leq \frac34$, as well as $\eta/u^2\leq \frac14 2^{-1/3}$. Therefore $\tau a+\eta/u^2<1$. Let $S_1=4\tau N-\frac{u^2}{\tau}2^{4/3}N^{1/3}$ and $S_2=S-S_1=4(1-\tau)N+\frac{1-\tau a}{\tau}u^2 2^{4/3}N^{1/3}$, $S_3=4\tau N-a u^22^{4/3}N^{1/3}$. Let $S^*$ be the maximizer of \eqref{App9B}. Then \begin{equation}\label{eqB17} \begin{aligned} \eqref{eqB7} &\leq \frac{C_1 u}{\sqrt{\tau}\delta} \sum_{k=0}^{M-1}\Pb\left(X_0>S_1+k\eta 2^{4/3}N^{1/3},Y>S_2- (k+1)\eta2^{4/3}N^{1/3}\right)\\ &+\frac{C_1 u}{\sqrt{\tau}\delta} \Pb\left(X_0>S_3\right)+\frac{C_1 u}{\sqrt{\tau}\delta}\Pb\left(Y>S_2\right)\\ &\leq\frac{C_1 u}{\sqrt{\tau}\delta} M\Pb\left(X_0>S^*\right) \Pb\left(Y>S-S^*- \eta 2^{4/3} N^{1/3}\right)\\ &+\frac{C_1 u}{\sqrt{\tau}\delta} \Pb\left(X_0>S_3\right)+\frac{C_1 u}{\sqrt{\tau}\delta}\Pb\left(Y>S_2\right). \end{aligned} \end{equation} Instead of finding $S^*$ that maximizes \eqref{App9B}, we look for an upper bound of \eqref{App9B} by maximizing the product of the upper bounds of the individual terms. For this, we take $\delta=(1-\tau)^{2/3}$.
Then by rescaling Lemma~\ref{lemUB2}, for $s_1\ll (\tau N)^{2/9}$ and $u\ll (\tau N)^{1/9}$ \begin{equation}\label{eqB10} \Pb\Big(X_0>4\tau N-\frac{u^2}{\tau} 2^{4/3} N^{1/3}+s_1 \tau^{1/3} 2^{4/3}N^{1/3}\Big)\leq C e^{-\frac43 s_1^{3/2}} \end{equation} and by rescaling Proposition~\ref{PropUB5}, for $s_2\ll (1-\tau)^{2/3}N^{2/3}$, we get \begin{equation}\label{eqB9} \Pb\Big(Y>4(1-\tau) N+s_2(1-\tau)^{1/3} 2^{4/3}N^{1/3}\Big)\leq C s_2 e^{-\frac43 s_2^{3/2}}. \end{equation} Thus we need to maximize the product of the terms in \eqref{eqB10} and \eqref{eqB9}. All the terms in the sum in \eqref{eqB17} correspond to $s_1$ and $s_2$ being non-negative and also of $\mathcal{O}(\frac{u^2}{\tau \min\{\tau^{1/3},(1-\tau)^{1/3}\}})$. Therefore we can ignore the polynomial pre-factor in \eqref{eqB9} and look for $s_1,s_2\geq 0$ such that \begin{equation} -\frac{u^2}{\tau}+s_1 \tau^{1/3}+s_2(1-\tau)^{1/3}=-(a u^2+\eta),\quad s_1^{3/2}+s_2^{3/2}\textrm{ is minimal}. \end{equation} Denote $\tilde a=a+\eta /u^2$, which satisfies $\tau\tilde a<1$ by the above assumptions. Plugging in the value of $s_2$ as a function of $s_1$ into $s_1^{3/2}+s_2^{3/2}$ and computing its minimum through the derivative (which forces $s_1\propto \tau^{2/3}$ and $s_2\propto (1-\tau)^{2/3}$), we get \begin{equation} s_1=\left(\tfrac{1}{\tau}-\tilde a\right) \tau^{2/3}u^2,\quad s_2= \left(\tfrac{1}{\tau}-\tilde a\right) (1-\tau)^{2/3} u^2,\quad s_1^{3/2}+s_2^{3/2}=u^3\left(\tfrac{1}{\tau}-\tilde a\right)^{3/2}.
\end{equation} With this choice of $s_1$ and $s_2$, we write with a minor abuse of notation \begin{equation} \begin{aligned} &S^*=4\tau N-\frac{u^2}{\tau} 2^{4/3} N^{1/3}+s_1\tau^{1/3}2^{4/3}N^{1/3},\\ &S-S^*-\eta 2^{4/3}N^{1/3}=4(1-\tau)N+s_2(1-\tau)^{1/3}2^{4/3}N^{1/3}, \end{aligned} \end{equation} and \begin{equation}\label{eqB12} \Pb\left(X_0>S^*\right) \Pb\left(Y>S-S^*-\eta 2^{4/3}N^{1/3}\right)\leq C u^{2+4\theta/3} e^{-\frac43 u^3 \left(\tfrac{1}{\tau}-\tilde a\right)^{3/2}} \end{equation} for some constant $C>0$, where we have used the a priori bound on $s_2$ together with the assumption on $\tau$ to get the polynomial pre-factor. For $\tau \tilde a <1$, $\left(\tau^{-1}-\tilde a\right)^{3/2}\geq \tau^{-3/2}(1-\tfrac32 \tau a-\frac38 u^{-2})$ so that we get \begin{equation}\label{eq2.36} \eqref{eqB12} \leq C u^{2+4\theta/3} e^{-\frac43 u^3 \tau^{-3/2}} e^{\frac12 u \tau^{-3/2}} e^{2 u^2 C_0 \tau^{-1}}. \end{equation} This is the $\tau$-dependent bound which is useful for small enough $\tau$, but the exponent is minimal when $\tau\to 1$. Finally, notice that the bound on $\Pb(X_0>S_3)$ (resp.\ $\Pb(Y>S_2)$) corresponds to the bound with the value $s_2=0$ (resp.\ $s_1=0$), thus they are also smaller than \eqref{eqB12}. Thus we have shown that $\eqref{eqB17}\leq (M+2)\frac{C_1 u}{\sqrt{\tau}\delta} \times \eqref{eq2.36}$. The prefactor is only a polynomial in $u$ (using again the assumptions on $\tau$) and can be absorbed into the $u^2$ term in the exponent by adjusting the constant. Thus we have proved \eqref{eq:t1}, and \eqref{eq:t2} follows by observing that $\tau\to 1$ is the worst case. The condition $u\ll N^{1/(9+3\theta)}$ ensures that all the conditions on $s_1$, $s_2$, $u$ mentioned above are satisfied. \end{proof} We can now prove Theorem~\ref{ThmUB6Bis}. \begin{proof}[Proof of Theorem~\ref{ThmUB6Bis}] We shall prove the result for $u$ sufficiently large; the result for all $u>1$ then follows by adjusting the constant $c$.
Let us set $\varepsilon=\delta u^{-3/2}$ for some $\delta>0$ to be chosen later ($\delta$ will be small but fixed and in particular will not depend on $u$ or $N$). Without loss of generality let us also assume that $\varepsilon N$ and $1/\varepsilon$ are both integers. Let us define the sequence of points \begin{equation} v_0=I(u-1), \cdots, v_{j}=I(u-1)+(j\varepsilon N, j\varepsilon N), \cdots, v_{\varepsilon^{-1}}=J(u-1). \end{equation} Let $A_{j}$ denote the event that $\Gamma^*_{N}(2j \varepsilon N)\ge (u-1)(2N)^{2/3}$ for $j=1,2, \ldots, \varepsilon^{-1}$, and let $B_{j}$ denote the event that $\sup_{t}\Gamma_{v_{j-1},v_{j}}(t) \ge u (2N)^{2/3}$ for $j=1,2,\ldots, \varepsilon^{-1}$, where $\Gamma_{v_{j-1},v_{j}}$ denotes the geodesic from $v_{j-1}$ to $v_{j}$. By ordering of geodesics, it follows that on \begin{equation} \Big(\bigcap _{j} A_j^{c}\Big) \cap \Big(\bigcap _{j} B_j^{c}\Big) \end{equation} one has $\sup_{0\leq t\leq 2N} \Gamma^*_{N}(t) \le u(2N)^{2/3}$; see Figure~\ref{FigLocalization}. \begin{figure}[t!] \centering \includegraphics[height=5cm]{FigLocalization} \caption{The thick blue line is the geodesic $\Gamma^*_N$. It passes to the left of the points $v_1,v_2,\ldots,v_{\varepsilon^{-1}}$. The green thin lines are the point-to-point geodesics from $v_j$ to $v_{j+1}$, $j=0,\ldots,\varepsilon^{-1}-1$, which stay to the left of the dashed line joining $I(u)$ and $J(u)$.} \label{FigLocalization} \end{figure} Hence, \begin{equation} \Pb\Big(\sup_{0\leq t\leq 2N}\Gamma^*_N(t)\geq u(2N)^{2/3}\Big)\leq \sum_{j} \Pb(A_j)+ \sum_{j}\Pb(B_j). \end{equation} It follows from Lemma~\ref{lemUB3B} that for each $j$, \begin{equation} \Pb(A_{j}) \leq e^{c(u-1)^2} e^{-\frac{4}{3}(u-1)^3}\leq e^{c' u^2} e^{-\frac43 u^3} \end{equation} for some new constant $c'>0$. Notice now that \begin{equation} \Pb(B_{j})=\Pb\Big(\sup_{0\leq t\leq 2\varepsilon N}\Gamma_{\varepsilon N}(t) \ge \delta^{-2/3}u(2\varepsilon N)^{2/3}\Big).
\end{equation} We now use $u\ll N^{2/9}$ (which implies $u\ll (\varepsilon N)^{1/3}$) and choose $\delta$ sufficiently small such that by Lemma~\ref{lemUB5} we have \begin{equation} \Pb(B_{j}) \le e^{-\frac{4}{3}u^3}. \end{equation} With this choice it follows that \begin{equation} \Pb\Big(\sup_{0\leq t\leq 2N}\Gamma^*_N(t)\geq u(2N)^{2/3}\Big)\le 2\varepsilon^{-1}e^{c u^2}e^{-\frac{4}{3}u^3}. \end{equation} Since $\varepsilon^{-1}=\mathcal{O}(u^{3/2})$, the result follows by adjusting the value of $c$. \end{proof} \subsection{Coalescence probability} Consider the point-to-line geodesics $\Gamma^*_{N}$ and $\tilde{\Gamma}^{*}_{N}$ to $\mathcal{L}_{2N}$ from $(0,0)$ and $I(u)$ respectively. Owing to the almost sure uniqueness of geodesics, if $\Gamma^*_{N}$ and $\tilde{\Gamma}^{*}_{N}$ meet, they \emph{coalesce} almost surely. Coalescence of geodesics is an important phenomenon in random growth models including first and last passage percolation and has attracted a lot of attention. For exponential LPP, tail estimates for the distance to coalescence have been obtained for point-to-point geodesics started at distinct points and ending at a common faraway point (or for semi-infinite geodesics going in the same direction); see~\cite{BSS19, Pim16, Zha20} for more on this. For point-to-line geodesics started at initial points that are far (on-scale), one expects the probability of coalescence to be small. Our next result proves an upper bound to this effect and is of independent interest. \begin{thm}\label{ThmCrossings} In the above set-up, for $N^{1/14}\gg u>1$ \begin{equation} \Pb(\Gamma^*_N\cap \tilde{\Gamma}^{*}_N\neq \emptyset)\leq e^{-\frac13 u^3+c u^2} \end{equation} for some constant $c>0$. \end{thm} The rest of this section deals with the proof of Theorem~\ref{ThmCrossings}. We divide it into several smaller results.
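The coalescence mechanism is easy to observe numerically. If one computes, by backward dynamic programming, the passage time $G(i,j)$ from each vertex to a common endpoint, then every geodesic to that endpoint greedily follows the larger of $G(i+1,j)$ and $G(i,j+1)$, so two geodesics agree from their first common vertex onwards. A small illustrative sketch for point-to-point geodesics (ours, not part of the argument; with continuous weights ties occur with probability zero):

```python
import random

def geodesics_to_corner(weights, starts):
    """For each start, return the geodesic (up-right maximizing path)
    to the common endpoint (m-1, n-1)."""
    m, n = len(weights), len(weights[0])
    # G[i][j]: last passage time from (i,j) to (m-1,n-1)
    G = [[0.0] * n for _ in range(m)]
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            rest = max(G[i + 1][j] if i + 1 < m else 0.0,
                       G[i][j + 1] if j + 1 < n else 0.0)
            G[i][j] = weights[i][j] + rest

    def geodesic(i, j):
        path = [(i, j)]
        while (i, j) != (m - 1, n - 1):
            # step toward the larger remaining passage time (a.s. unique)
            if i + 1 == m:
                j += 1
            elif j + 1 == n:
                i += 1
            elif G[i + 1][j] >= G[i][j + 1]:
                i += 1
            else:
                j += 1
            path.append((i, j))
        return path

    return [geodesic(i, j) for (i, j) in starts]
```

Since the continuation from any vertex is a deterministic function of the table $G$, the two paths returned for distinct starting points coincide from their first common vertex on; in the point-to-line setting of Theorem~\ref{ThmCrossings} one would additionally maximize over the endpoint on $\mathcal{L}_{2N}$, but the coalescence mechanism is identical.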
As always, we shall assume without loss of generality that $u$ is sufficiently large; extending the results to all $u>1$ is achieved by adjusting constants. First of all, due to Theorem~\ref{ThmUB6Bis}, the probability that the two geodesics in the statement of Theorem~\ref{ThmCrossings} meet outside the rectangle $\mathcal{R}(u)$ with corners $(0,0)$, $I(u)$, $J(u)$, $(N,N)$ is smaller than the estimate we want to prove. Thus we can restrict to bounding the probability that the two geodesics intersect in $\mathcal{R}(u)$; a stronger result is proved in Lemma~\ref{PropNotCrossingFar} below. As the number of points in $\mathcal{R}(u)$ where the geodesics $\tilde{\Gamma}^{*}_{N}$ and $\Gamma^*_N$ could meet is $\mathcal{O}(N^{5/3})$, we need to discretize space. We therefore divide $\mathcal{R}(u)$ into a grid of size $\varepsilon N\times (2 \varepsilon N)^{2/3}$, where $\varepsilon$ will be taken small enough (but not too small, namely of order $u^{-2}$); see Figure~\ref{FigGrid}. \begin{figure}[t!] \centering \includegraphics[height=6cm]{FigDiscreteCrossing} \caption{Illustration of the grid as discretization of space-time. In the space direction the length is $(2\varepsilon N)^{2/3}$ and in the time direction $\varepsilon N$.} \label{FigGrid} \end{figure} For $\tau$ an integer multiple of $\varepsilon$, let $A(\tau,v)$ be the event that the first intersection of $\Gamma^*_N$ and $\tilde{\Gamma}^{*}_N$ occurs at time $t \in (2\tau N,2\tau N+2\varepsilon N]$, and they then cross the anti-diagonal grid segment (of length $(2 \varepsilon N)^{2/3}$) at time $2\tau N+2\varepsilon N$ with mid-point given by \begin{equation} P(\tau,v)=(\tau N+\varepsilon N+v(2N)^{2/3},\tau N+\varepsilon N-v (2N)^{2/3}). \end{equation} Notice that the number of choices of $\tau$ and $v$ is $\mathcal{O} (\varepsilon^{-5/3})$, which by our choice of $\varepsilon$ is at most a polynomial in $u$. Thus we need to prove that $\Pb(A(\tau,v))$ is at most $e^{-\frac13 u^3+c u^2}$ for any $\tau,v$.
The proof of Theorem~\ref{ThmCrossings} is completed by taking a union bound. Our first rough estimate deals with values of $v$ which are close to $0$ or $u$ and also small values of $\tau$. The basic idea is that in these cases the probability bounds coming from considering the transversal fluctuation of a single geodesic are sufficient. \begin{lem}\label{PropNotCrossingFar} Let $N^{1/14}\gg u >1$. For any $v$ satisfying $\min\{v,u-v\}\leq u (1-2^{-2/3})-\varepsilon^{2/3}$ and for any $\tau\leq 2^{-2/3}-\varepsilon$, \begin{equation} \Pb(A(\tau,v))\leq e^{-\frac13 u^3+c u^2} \end{equation} for some constant $c>0$. \end{lem} \begin{proof} For $u-v\leq u(1-2^{-2/3})-\varepsilon^{2/3}$, we have \begin{equation} \Pb(A(\tau,v))\leq \Pb\Big(\sup_t \Gamma^*_N(t)\geq 2^{-2/3} u (2N)^{2/3}\Big)\leq e^{-\frac13 u^3+c u^2}, \end{equation} where the last inequality follows from Theorem~\ref{ThmUB6Bis}. The same argument gives the desired result for $v\leq (1-2^{-2/3})u$ by considering the transversal fluctuation of the geodesic $\tilde{\Gamma}_{N}^*$. Next, notice that $A(\tau,v)$ implies that either $\Gamma^*_N(2\tau N+2\varepsilon N)\geq \frac12 u (2N)^{2/3}$ or $\tilde{\Gamma}^{*}_{N}(2\tau N+2\varepsilon N)\leq \frac12 u (2N)^{2/3}$, since after meeting they follow the same path. For $\tau+\varepsilon \le 2^{-2/3}$, by Lemma~\ref{lemUB3B} (use the first inequality with $\tau\mapsto \tau+\varepsilon\leq 2^{-2/3}$) each of these events has probability bounded by $e^{-\frac13 u^3+c u^2}$ for some constant $c>0$, completing the proof. \end{proof} We now proceed towards dealing with the remaining case. Define the segment \begin{equation} {\cal S}_v=\{(\tau N+k,\tau N-k)| (v-1)(2N)^{2/3}\leq k\leq (v+1)(2N)^{2/3}\}. \end{equation} Let $C_2$ be large enough such that \begin{equation}\label{eqB27} \Pb(L_{(0,0),\mathcal{L}_{2N}}\leq 4N-C_2 u 2^{4/3}N^{1/3})\leq e^{-\frac13 u^3}. \end{equation} For a path $\gamma$, recall that $L(\gamma)$ denotes the passage time of that path.
Define the event \begin{multline} B(\tau,v)=\{\exists\, \gamma_1,\gamma_2\,|\, \gamma_1:(0,0)\to {\cal S}_v,\gamma_2:I(u)\to {\cal S}_v,\gamma_1\cap\gamma_2=\emptyset\, \textrm{ and }\\\min\{L(\gamma_1)+\tilde L_{{\cal S}_v,\mathcal{L}_{2N}},L(\gamma_2)+\tilde L_{{\cal S}_v,\mathcal{L}_{2N}}\}\geq 4N-C_2 u 2^{4/3}N^{1/3}\}, \end{multline} where in $\tilde L$ we remove the first point. Then we have the following estimate. \begin{figure}[t!] \centering \includegraphics[height=4.5cm]{FigSandwitching} \caption{Magnification of the \emph{local} geometry of geodesics used in the sandwiching of Lemma~\ref{AtauvBound}. The segment ${\cal S}_v$ is the dashed one. Notice that the lower line is not $\tau=0$.} \label{FigSandwitching} \end{figure} \begin{lem}\label{AtauvBound} Assume $(1-2^{-2/3})u-\varepsilon^{2/3}<v<2^{-2/3}u+\varepsilon^{2/3}$ and $2^{-2/3}-\varepsilon\leq \tau\leq 1-\varepsilon$ (notice that if the geodesics coalesce then $A(\tau,v)$ must hold for some $\tau\le 1-\varepsilon$; the case $\tau=1$ need not be considered). For $N^{1/11}\gg u>1$, there exists a $\delta>0$ small enough (not depending on $u$ and $N$) such that with $\varepsilon=\delta u^{-2}$, \begin{equation} \Pb(A(\tau,v))\leq \Pb(B(\tau,v))+4 C e^{-\frac13 u^3} \end{equation} for some constant $C>0$. \end{lem} \begin{proof}We prove it for $u>4(1+C_2)$ with $C_2$ as in \eqref{eqB27}. Then by adjusting the constant $C$ it is true also for $u>1$. Denote by $\Gamma_1$ and $\Gamma_2$ the geodesics from $(0,0)$ to $P(\tau,v-\frac12\varepsilon^{2/3})$ and from $I(u)$ to $P(\tau,v+\frac12\varepsilon^{2/3})$ respectively. These two points are the endpoints of the grid interval whose midpoint is $P(\tau,v)$; see also Figure~\ref{FigSandwitching}.
Define the events \begin{equation} \begin{aligned} B_1&=\{\Gamma_1(2\tau N)\leq (v-1)(2N)^{2/3}\},\\ B_2&=\{\Gamma_2(2\tau N)\geq (v+1)(2N)^{2/3}\},\\ B_3&=\{L_{(0,0),\mathcal{L}_{2N}}\leq 4N-C_2 u 2^{4/3}N^{1/3}\},\\ B_4&=\{L_{I(u),\mathcal{L}_{2N}}\leq 4N-C_2 u 2^{4/3}N^{1/3}\}. \end{aligned} \end{equation} Let us show that \begin{equation}\label{eqB32} A(\tau,v)\subseteq B(\tau,v)\cup B_1\cup B_2 \cup B_3 \cup B_4. \end{equation} Observe that \begin{equation} A(\tau,v) \subseteq B_1\cup B_2\cup B_3 \cup B_4 \cup (A(\tau,v)\cap B_1^c\cap B_2^c\cap B_3^c\cap B_4^c) \end{equation} and the last event is included in $B(\tau,v)$. Indeed, on $A(\tau,v)\cap B_1^c\cap B_2^c$, the geodesics $\Gamma^*_N$ and $\tilde{\Gamma}^{*}_N$ must cross ${\cal S}_v$. Now, let $\gamma_1$ and $\gamma_2$ be the portions of $\Gamma^*_N$ and $\tilde{\Gamma}^{*}_N$ respectively before time $2\tau N$. On $A(\tau,v)$, by definition, $\gamma_1$ and $\gamma_2$ must be disjoint. On $B_3^c\cap A(\tau,v)$ it holds that $L(\gamma_1)+L_{{\cal S}_v,\mathcal{L}_{2N}}\geq L_{(0,0),\mathcal{L}_{2N}}\geq 4N-C_2 u 2^{4/3}N^{1/3}$, and a similar inequality holds on $B_4^c \cap A(\tau,v)$ replacing $\gamma_1$ by $\gamma_2$. Thus the event $B(\tau,v)$ is satisfied. To complete the proof, we apply a union bound to \eqref{eqB32} and bound the probabilities $\Pb(B_i)$. By the choice of $C_2$, $\Pb(B_3)$ and $\Pb(B_4)$ are both bounded by $e^{-\frac13 u^3}$. Lemma~\ref{lemBoundGamma1} below shows that $\Pb(B_1)\leq C e^{-\frac13 u^3}$ for some $C>0$ and by symmetry $\Pb(B_2)\leq C e^{-\frac13 u^3}$ as well. This completes the proof. \end{proof} \begin{lem}\label{lemBoundGamma1} Assume $(1-2^{-2/3})u-\varepsilon^{2/3}<v<2^{-2/3}u+\varepsilon^{2/3}$ and $2^{-2/3}-\varepsilon\leq \tau\leq 1$.
For $N^{1/14}\gg u >1$, there exists a $\delta>0$ small enough (not depending on $u$ and $N$) such that with $\varepsilon=\delta u^{-2}$, \begin{equation} \Pb(\Gamma_1(2\tau N)\leq (v-1) (2N)^{2/3})\leq C e^{-\frac13 u^3} \end{equation} for some $C>0$. \end{lem} \begin{proof} $\Gamma_1$ is the geodesic from $(0,0)$ to $Q_1=P(\tau,v-\frac12\varepsilon^{2/3})$. We want to bound the probability that $\Gamma_1(2\tau N)\leq (v-1)(2N)^{2/3}$. For $x_1,x_2,x_3>0$, define the events \begin{equation} \begin{aligned} E_1&=\{L_{(0,0),Q_1}\leq 4(\tau+\varepsilon)N-\tfrac{v^2}{\tau+\varepsilon} 2^{4/3}N^{1/3}-x_1 2^{4/3}N^{1/3}\},\\ E_2&=\{L_{(0,0),\mathcal{L}_{2\tau N}}\geq 4\tau N+x_2 2^{4/3}N^{1/3}\},\\ E_3&=\Big\{\sup_{w\leq (v-1)(2N)^{2/3}}L_{(\tau N+w,\tau N-w),Q_1}\geq 4\varepsilon N\\ &\hspace{11em}-\tfrac{1}{\varepsilon}(1-\tfrac12 \varepsilon^{2/3})^22^{4/3}N^{1/3}+x_3 2^{4/3} N^{1/3}\Big\}. \end{aligned} \end{equation} By a first order approximation, we have $L_{(0,0),Q_1}\simeq 4(\tau+\varepsilon)N-\frac{v^2}{\tau+\varepsilon}2^{4/3}N^{1/3}+\mathcal{O}(v\varepsilon^{2/3}N^{1/3})$. So, by Lemma~\ref{lemUB1}, we have $\Pb(E_1)\leq C e^{-c x_1^3}$ for $x_1$ of at least the order of $u$, and such that $x_1\ll N^{2/3}$. Next, by Lemma~\ref{lemUB2B}, we have $\Pb(E_2)\leq C e^{-\frac43 x_2^{3/2}\tau^{-1/2}}$ for $x_2\ll N^{2/9}$. Finally, using Lemma~\ref{lemUB2} (with the variables $(N,u)$ in Lemma~\ref{lemUB2} replaced by $(\varepsilon N,\varepsilon^{-2/3}(1-\tfrac12 \varepsilon^{2/3}))$), we get $\Pb(E_3)\leq C e^{-\frac43 x_3^{3/2}\varepsilon^{-1/2}}$ provided \begin{equation}\label{coe} x_3 \varepsilon^{-1/3}\ll(\varepsilon N)^{2/9} \text{\,\, and \,\, } N^{-1/7}\ll\varepsilon. \end{equation} Under the condition \begin{equation}\label{co} -\frac{v^2}{\tau+\varepsilon}-x_1\geq x_2-\frac{1}{\varepsilon}(1-\tfrac12 \varepsilon^{2/3})^2+x_3, \end{equation} we have \begin{equation}\label{eqA.54} \Pb(\Gamma_1(2\tau N)\leq (v-1) (2N)^{2/3}) \leq \Pb(E_1)+\Pb(E_2)+\Pb(E_3).
\end{equation} We assume already that $\varepsilon$ is small enough so that $\tau\geq 1/2$. First take $x_1=u/(3c)^{1/3}$ so that $\Pb(E_1)\leq C e^{-\frac13 u^3}$. Then take $x_2=u^2$, which ensures $\Pb(E_2)\leq C e^{-\frac13 u^3}$ as well. Finally we take $x_3=u^2 \varepsilon^{1/3}$, which gives $\Pb(E_3)\leq C e^{-\frac13 u^3}$. To satisfy the condition \eqref{co}, it is enough to take $\varepsilon=\delta u^{-2}$ with $\delta$ small enough (independent of $u$). Finally, note that \eqref{coe} is satisfied provided $u\ll N^{\tfrac{1}{14}}$. \end{proof} To complete the proof of Theorem~\ref{ThmCrossings} we need to obtain a bound on the event $B(\tau,v)$. \begin{lem}\label{LemEventB} Assume $(1-2^{-2/3})u-\varepsilon^{2/3}<v<2^{-2/3}u+\varepsilon^{2/3}$ and $2^{-2/3}-\varepsilon\leq \tau\leq 1-\varepsilon$. For $N^{1/9}\gg u>4(1+C_2)$, there exists a constant $\delta>0$ small enough such that with $\varepsilon=\delta u^{-2}$, \begin{equation} \Pb(B(\tau,v))\leq e^{-\frac13 u^3+c u^2} \end{equation} for some constant $c>0$ independent of $\tau,v$. The constant $C_2$ is as in \eqref{eqB27}. \end{lem} \begin{proof} For any $s_1,s_2\in\mathbb{R}_+\cup \{-\infty\}$ we define $D(s_1,s_2)$ to be the event that there exist disjoint paths $\gamma_1$ and $\gamma_2$ as in the definition of $B(\tau,v)$, such that \begin{equation} \begin{aligned} L(\gamma_1)&\geq 4\tau N-\frac{(v-1)^2}{\tau}2^{4/3}N^{1/3}+ s_1 2^{4/3}N^{1/3},\\ L(\gamma_2)&\geq 4\tau N-\frac{(u-v-1)^2}{\tau}2^{4/3}N^{1/3}+ s_2 2^{4/3}N^{1/3}. \end{aligned} \end{equation} For $s_3\in\mathbb{R}_+\cup \{-\infty\}$, we define the event \begin{equation} C(s_3)=\{\tilde L_{{\cal S}_v,\mathcal{L}_{2N}}\geq 4(1-\tau)N+s_3 2^{4/3}N^{1/3}\}. \end{equation} Recall the constant $C_2$ from \eqref{eqB27}. As in the proof of Lemma~\ref{lemUB3B}, we use a discretization with a fixed width $0<\eta<1$; we will not repeat all the details.
The minor difference is that now we have a couple of constraints: \begin{equation} s_1+s_3=\tfrac1\tau (v-1)^2-\eta-C_2 u,\quad s_2+s_3 = \tfrac1\tau (u-v-1)^2-\eta-C_2 u. \end{equation} In the discretization of Lemma~\ref{lemUB3B}, see \eqref{eqB17}, we separated explicitly two terms, which corresponds to taking $S_2=-\infty$ and $S_3=-\infty$. Here we do the same, but instead of writing those terms separately, we consider subsets allowing positive numbers and $-\infty$. More precisely, define the set \begin{equation} \begin{aligned} \Theta=\{&s_1,s_2,s_3 \in\mathbb{R}_+\cup \{-\infty\} |\, s_3\in \eta \mathbb{Z}, s_1\vee 0+s_3\vee 0 = \tfrac1\tau (v-1)^2-\eta-C_2 u\\ &\textrm{and } s_2\vee 0+s_3\vee 0 = \tfrac1\tau (u-v-1)^2-\eta-C_2 u\}. \end{aligned} \end{equation} Then \begin{equation}\label{Oin} B(\tau,v)\subset \bigcup_{s_1,s_2,s_3\in \Theta} C(s_3)\cap D(s_1,s_2). \end{equation} The number of elements of $\Theta$ is, for any $v$ satisfying the assumptions of the lemma, of order $u^2/\tau$. Since $\tau\geq 1/2$ (for $\varepsilon\leq 2^{-2/3}-1/2$), the sum contains $\mathcal{O}(u^2)$ many terms. Therefore, using the independence of $C(s_3)$ and $D(s_1,s_2)$, \begin{equation} \Pb(B(\tau,v))\leq C u^2 \max_{s_1,s_2,s_3\in \Theta} \Pb(C(s_3))\Pb(D(s_1,s_2)) . \end{equation} As $\gamma_1$ and $\gamma_2$ ``occur disjointly'', by the BK (Berg-Kesten) inequality (see e.g.\ Theorem~7 of~\cite{AGH18} for a statement applicable in the above scenario) we get \begin{equation}\label{eqB40} \begin{aligned} \Pb(D(s_1,s_2))&\leq \Pb(L_{(0,0),{\cal S}_v}\geq 4\tau N-\tfrac1{\tau}(v-1)^22^{4/3}N^{1/3}+s_1 2^{4/3} N^{1/3})\\ &\times \Pb(L_{I(u),{\cal S}_v}\geq 4\tau N-\tfrac1{\tau} (u-v-1)^22^{4/3}N^{1/3}+s_2 2^{4/3} N^{1/3}). \end{aligned} \end{equation} Set ${\cal D}_{\tau,v}=\{(\tau N+k,\tau N-k)| k\geq (v-1)(2N)^{2/3}\}$.
Then $L_{(0,0),{\cal S}_v}\leq L_{(0,0),{\cal D}_{\tau,v}}$ and Lemma~\ref{lemUB2} (after rescaling $N\to \tau N$) leads to \begin{equation} \Pb(L_{(0,0),{\cal S}_v}\geq 4\tau N-\tfrac1{\tau}(v-1)^22^{4/3}N^{1/3}+s_1 2^{4/3} N^{1/3})\leq C e^{-\frac43 \frac{s_1^{3/2}}{\tau^{1/2}}} \end{equation} and similarly for the bound on the second term in \eqref{eqB40}, so that we have \begin{equation} \Pb(D(s_1,s_2))\leq C e^{-\frac43 \frac{s_1^{3/2}+s_2^{3/2}}{\tau^{1/2}}}, \end{equation} for $2<u\ll N^{1/9}$, $0<s_1,s_2\ll N^{2/9}$. To estimate $\Pb(C(s_3))$ we divide the segment ${\cal S}_v$ into pieces of length $(1-\tau)^{2/3}(2N)^{2/3}$ to which we can apply a rescaled version of Proposition~\ref{PropUB5}. We have $2/(1-\tau)^{2/3}\leq 2/\varepsilon^{2/3}$ such pieces (we used $1-\tau\geq \varepsilon$). Using a union bound we then get, for $s_3 (1-\tau)^{-1/3}\ll N^{2/3}$, \begin{equation}\label{eq43} \Pb(C(s_3))\leq \frac{C \max\{1,s_3 (1-\tau)^{-1/3}\}}{\varepsilon^{2/3}} e^{-\frac43 \frac{s_3^{3/2}}{(1-\tau)^{1/2}}}. \end{equation} Let $\varepsilon=\delta u^{-2}$ for $\delta>0$ small enough as in the proof of Lemma~\ref{lemBoundGamma1}. We have \begin{equation}\label{Oub4} \Pb(B(\tau,v))\leq C u^4 \delta^{-1}\max_{s_1,s_2,s_3\in\Theta} e^{-\frac43 \frac{s_1^{3/2}+s_2^{3/2}}{\tau^{1/2}}} e^{-\frac43 \frac{s_3^{3/2}}{(1-\tau)^{1/2}}}. \end{equation} Therefore we concentrate now on finding the maximum of \begin{equation} e^{-\frac43 \frac{s_1^{3/2}+s_2^{3/2}}{\tau^{1/2}}}e^{-\frac43 \frac{s_3^{3/2}}{(1-\tau)^{1/2}}} \end{equation} for $s_1,s_2,s_3\in \Theta$. Define $\tilde s_3=s_3+\eta+C_2 u$. Then for given $s_3$ on $\Theta$, we have \begin{equation} s_1=\frac{(v-1)^2}{\tau}-\tilde s_3,\quad s_2=\frac{(u-v-1)^2}{\tau}-\tilde s_3. \end{equation} So we need to maximize \begin{equation} M(v,\tau,s_3)=-\frac43 \frac{((v-1)^2/\tau-\tilde s_3)^{3/2} +((u-v-1)^2/\tau-\tilde s_3)^{3/2}}{\sqrt{\tau}}-\frac43 \frac{s_3^{3/2}}{\sqrt{1-\tau}}.
\end{equation} In principle, to get the bound on $B(\tau,v)$, we would need to find $s_3$ maximizing $M(v,\tau,s_3)$. In the statement we want a bound uniform in $\tau,v$. This means that we need to maximize the result over $\tau,v$ as well. In short, we maximize $M$ over $s_3,v,\tau$, and we may thus perform the maximization in any convenient order. First notice that for given $\tau,s_3$, $M(v,\tau,s_3)$ is maximized at $v=u/2$, for which \begin{equation} M(u/2,\tau,s_3)=-\frac43 \frac{2((u-2)^2/(4\tau)-\tilde s_3)^{3/2}}{\sqrt{\tau}}-\frac43 \frac{s_3^{3/2}}{\sqrt{1-\tau}}. \end{equation} Computing the derivative with respect to $s_3$ we get that, for a given $\tau$, the maximum is at $s_3^*=\frac{[(u-2)^2-4\tau(\eta+C_2 u)](1-\tau)}{\tau(4-3\tau)}>0$ under the assumption $u\geq 4(1+C_2)$ and $\eta<1$. So we get \begin{equation}\label{eq2.74} \begin{aligned} M(u/2,\tau,s_3^*)&=-\frac43 \frac{[(u-2)^2-4\tau (\eta+C_2 u)]^{3/2}}{4\tau^{3/2} \sqrt{4-3\tau}}\\ &\leq -\frac13 [(u-2)^2-4\tau (\eta+C_2 u)]^{3/2}\leq -\frac13 u^3+c u^2 \end{aligned} \end{equation} for some constant $c>0$. Inserting \eqref{eq2.74} into \eqref{Oub4} and choosing an appropriate new constant $c$ leads to the claimed result. \end{proof} \subsection{Proof of Theorem~\ref{thmUpperBound}} As before, we shall prove the bound first for sufficiently large $u$, and adjust $c$ later to deduce the same for all $u>1$. Recall that $L_N^*(u)$ is the rescaled LPP from $I(u)$ to $\mathcal{L}_{2N}$, see \eqref{eq1.4}. Let us use the notations $X=L_N^*(0)$ and $Y=L_N^*(u)$. For $j,j'\in \{1,2,\ldots, u-1\}$, let $S^{j}_N$ denote the weight of the maximum weight path $\pi^j$ from $(0,0)$ to $\mathcal{L}_{2N}$ such that $\pi^j(t) < j(2N)^{2/3}$ for all $t$, and similarly, let $S^{u,j'}_N$ denote the weight of the maximum weight path $\tilde \pi^{j'}$ from $I(u)$ to $\mathcal{L}_{2N}$ such that $\tilde \pi^{j'}(t) > (u-j')(2N)^{2/3}$ for all $t$.
Let us also set \begin{equation} X_j=2^{-4/3}N^{-1/3}(S^{j}_N-4N), \quad Y_{j'}=2^{-4/3}N^{-1/3}(S_N^{u,j'}-4N). \end{equation} For notational convenience, we shall also write $X_0=Y_0=0$ and $X_{u}=X$, $Y_{u}=Y$. Now, writing \begin{equation} X=\sum_{j=0}^{u-1} X_{j+1}-X_{j},\quad Y=\sum_{j'=0}^{u-1} Y_{j'+1}-Y_{j'}, \end{equation} and using the bilinearity of covariance it is enough to prove that for some $c>0$ and for all $j, j'\in \{0,1,\ldots, u-1\}$ \begin{equation}\label{eq:jj} {\rm Cov}(X_{j+1}-X_{j}, Y_{j'+1}-Y_{j'}) \le e^{cu^2}e^{-\frac{1}{3}u^3}. \end{equation} Notice first that $X_{j+1}-X_{j}$ and $Y_{j'+1}-Y_{j'}$ depend on disjoint sets of vertex weights and hence are independent unless $j+j'\ge u-1$. Hence we only need to consider $(j,j')$ such that $j+j'\ge u-1$. For such a pair, noticing $X\ge X_{j+1}\ge X_{j}$ and $Y\ge Y_{j'+1}\ge Y_{j'}$ it follows that \begin{equation} {\rm Cov}(X_{j+1}-X_{j}, Y_{j'+1}-Y_{j'}) \le \mathbbm{E}[(X-X_j)(Y-Y_{j'})]. \end{equation} For convenience of notation, let $\Gamma_1$ and $\Gamma_2$ locally denote the geodesics from $(0,0)$ and $I(u)$ respectively to $\mathcal{L}_{2N}$. We define \begin{equation} A_{j}=\Big\{\sup_{0\leq t\leq 2N} \Gamma_1(t) \ge j(2N)^{2/3}\Big\},\quad B_{j'}=\Big\{\inf_{0\leq t\leq 2N} \Gamma_2(t) \le (u-j')(2N)^{2/3}\Big\}. \end{equation} Clearly, $(X-X_j)(Y-Y_{j'})=0$ on the complement of $A_{j}\cap B_{j'}$ and $X-X_{j}$ and $Y-Y_{j'}$ are positive random variables with super-exponential (uniform in $j,j'$) tails (indeed we can just use the upper tail bounds for $X$ and $Y$).
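As an aside, the fact that random variables with (super-)exponential upper tails have $p$-norms growing at most linearly in $p$ — used in the next step — can be illustrated with a minimal numerical sketch for an exponential variable (a toy stand-in, not the LPP observables themselves):

```python
import math

# Quick numerical illustration (not part of the argument): for X ~ Exp(1),
# E[X^p] = Gamma(p+1) = p!, hence ||X||_p = (p!)^{1/p}.  By Stirling,
# (p!)^{1/p} ~ p/e, i.e. the p-norm grows (at most) linearly in p,
# matching a bound of the form ||X||_p <= C p (here C = 1 suffices).
def exp1_p_norm(p: int) -> float:
    return math.gamma(p + 1) ** (1.0 / p)

for p in range(1, 31):
    assert exp1_p_norm(p) <= p

# the ratio ||X||_p / p decreases towards 1/e ~ 0.368
assert abs(exp1_p_norm(50) / 50 - 1 / math.e) < 0.05
```

Variables with super-exponential tails are stochastically dominated by exponential ones in the upper tail, so the linear bound applies a fortiori.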
Using the notation $\| X \|_p=\mathbbm{E}(|X|^p)^{1/p}$ and the fact that the $p$-th norm of a random variable with super-exponential tails can grow at most linearly in $p$, we know that there exists a constant $C$ such that for all $j,j'$ and all $p\ge 1$ $||X-X_{j}||_{p}, ||Y-Y_{j'}||_{p}\le Cp.$ Using the H\"older inequality we have \begin{equation} \mathbbm{E}[(X-X_{j})(Y-Y_{j'})]\le ||\mathbbm{1}_{A_j\cap B_{j'}}||_{q}||(X-X_{j})(Y-Y_{j'})||_{p} \end{equation} where $p^{-1}+q^{-1}=1$. By the Cauchy-Schwarz inequality \begin{equation} ||(X-X_{j})(Y-Y_{j'})||_{p}\le ||X-X_{j}||_{2p}||Y-Y_{j'}||_{2p}\le C p^2 \end{equation} for some new constant $C>0$. It therefore follows that \begin{equation} \label{eq:momentbound} \mathbbm{E}[(X-X_{j})(Y-Y_{j'})]\le C'p^2 \Pb(A_{j}\cap B_{j'})^{1/q} \end{equation} for $p,q \ge 1$ with $p^{-1}+q^{-1}=1$. By Lemma~\ref{lemAjBj} below, we have \begin{equation} \label{eq:ajbjbound} \Pb(A_{j}\cap B_{j'})\le e^{-\frac{1}{3}u^{3}+cu^2} \end{equation} for some $c>0$. We choose $p=u$ so that $1/q=1-\frac{1}{u}$. Therefore, plugging \eqref{eq:ajbjbound} in \eqref{eq:momentbound} it follows that \begin{equation} \mathbbm{E}[(X-X_{j})(Y-Y_{j'})]\le Cu^2 e^{(cu^2-u^3/3)(1-\frac{1}{u})}\le e^{-\frac13 u^3+c' u^2} \end{equation} for some new constant $c'$. This establishes \eqref{eq:jj} and Theorem \ref{thmUpperBound} follows by summing over $(j,j')$. \qed \begin{lem}\label{lemAjBj} In the above set-up, for $u$ large enough and $j+j'\ge u-1$ we have \begin{equation} \Pb(A_{j}\cap B_{j'})\le e^{-\frac{1}{3}u^{3}+cu^2} \end{equation} for some $c>0$. \end{lem} \begin{proof} Notice first that arguing as in the proof of Lemma~\ref{PropNotCrossingFar}, if $j\ge 2^{-2/3}u$ then $\Pb(A_{j})\le e^{-\frac{1}{3}u^{3}+cu^2}$ and similarly if $j'\ge 2^{-2/3}u$ then $\Pb(B_{j'})\le e^{-\frac{1}{3}u^{3}+cu^2}$. Therefore it suffices to consider only the cases $\max\{j,j'\}\le 2^{-2/3}u$.
This, together with $j+j'\ge u-1$ also implies that $\min\{j,j'\}\ge (1-2^{-2/3})u-1>0$ for $u$ sufficiently large. Observe now that \begin{equation} \Pb(A_{j}\cap B_{j'})\le \Pb(\Gamma_{1}\cap \Gamma_2 \neq \emptyset)+\Pb(A_j\cap B_{j'}\cap \{\Gamma_{1}\cap \Gamma_2 =\emptyset\}). \end{equation} By Theorem~\ref{ThmCrossings} it follows that the first term is upper bounded by $e^{-\frac{1}{3}u^{3}+cu^2}$ and hence it suffices to show that \begin{equation} \label{eq:ajbj2} \Pb(A_j\cap B_{j'}\cap \{\Gamma_{1}\cap \Gamma_2 =\emptyset\})\le e^{-\frac13 u^3+cu^2} \end{equation} for some $c>0$. Since the two geodesics do not intersect, we would like to use the BK inequality to get an upper bound. However, we cannot do this directly, since the property of being a geodesic depends on all the random variables and not on subsets. We will therefore prove a similar statement for arbitrary paths of typical length. First, we need to approximate the events $A_j$ and $B_{j'}$. For $\varepsilon>0$ to be chosen later, let $\tilde{A}_{j}$ denote the event that there exists $k\in \{1,2, \ldots ,\frac{1}{\varepsilon}\}$ such that $\Gamma_1(2k\varepsilon N)\ge (j-1)(2N)^{2/3}$. Similarly, let $\tilde{B}_{j'}$ denote the event that there exists $k\in \{1,2, \ldots ,\frac{1}{\varepsilon}\}$ such that $\Gamma_2(2k\varepsilon N)\le (u-j'+1)(2N)^{2/3}$. By choosing $\varepsilon=\delta u^{-3/2}$ for $\delta$ sufficiently small but fixed and arguing as in the proof of Theorem~\ref{ThmUB6Bis} it follows that \begin{equation} \Pb(A_{j} \setminus \tilde{A}_{j}), \Pb(B_{j'}\setminus \tilde B_{j'}) \le e^{-\frac{1}{3}u^{3}+c u^2}. \end{equation} Therefore \eqref{eq:ajbj2} reduces to showing \begin{equation} \label{eq:ajbj3} \Pb(\tilde{A}_j\cap \tilde{B}_{j'}\cap \{\Gamma_{1}\cap \Gamma_2 =\emptyset\})\le e^{-\frac13 u^3+c u^2} \end{equation} for some $c>0$. Let $C_2$ be such that (using Lemma~\ref{lemUB1}) \begin{equation} \Pb(X\le -C_2 u)=\Pb(Y\le -C_2u) \le e^{-\frac{1}{3}u^{3}}.
\end{equation} Observe that on $\tilde{A}_j\cap \tilde{B}_{j'}\cap \{\Gamma_{1}\cap \Gamma_2 =\emptyset\} \cap \{X>-C_2u\}\cap \{Y>-C_2u\}$ there exist disjoint paths $\gamma_1$ and $\gamma_2$ from $(0,0)$ and $I(u)$ respectively to $\mathcal{L}_{2N}$ with $L(\gamma_1), L(\gamma_2)\ge 4N-C_2u 2^{4/3}N^{1/3}$ such that there exist $k_1,k_2\in \{1,2,\ldots, \frac{1}{\varepsilon}\}$ with $\gamma_1(2k_1\varepsilon N)\ge (j-1)(2N)^{2/3}$ and $\gamma_2(2k_2\varepsilon N)\leq (u-j'+1)(2N)^{2/3}$. By using the BK inequality as before we get that \begin{equation} \label{eq:ajbj4} \Pb(\tilde{A}_j\cap \tilde{B}_{j'}\cap \{\Gamma_{1}\cap \Gamma_2 =\emptyset\}) \le \Pb(\hat{A}_{j})\Pb(\hat{B}_{j'})+2e^{-\frac{1}{3}u^{3}} \end{equation} where $\hat{A}_{j}$ denotes the event that there exists a path $\gamma_1$ from $(0,0)$ to $\mathcal{L}_{2N}$ satisfying $L(\gamma_1)>4N-C_2u2^{4/3}N^{1/3}$ and $\gamma_1(2k\varepsilon N)\ge (j-1)(2N)^{2/3}$ for some $k$ and $\hat{B}_{j'}$ denotes the event that there exists a path $\gamma_2$ from $I(u)$ to $\mathcal{L}_{2N}$ satisfying $L(\gamma_2)>4N-C_2u2^{4/3}N^{1/3}$ and $\gamma_2(2k\varepsilon N)\le (u-j'+1)(2N)^{2/3}$ for some $k$. We claim that \begin{equation}\label{eq2.92} \Pb(\hat{A}_{j})\le e^{cu^2}e^{-\frac{4}{3}(j-1)^3},\quad \Pb(\hat{B}_{j'}) \le e^{cu^2}e^{-\frac{4}{3}(j'-1)^3}. \end{equation} Postponing the proof of \eqref{eq2.92} for now, let us first complete the proof of the lemma. By Jensen's inequality together with $j+j'\ge u-1$ it follows that \begin{equation} (j-1)^3+ (j'-1)^3 \ge \frac{1}{4}(j+j'-2)^{3}\ge \frac{(u-3)^3}{4} \end{equation} and hence \begin{equation} \Pb(\hat{A}_{j})\Pb(\hat{B}_{j'})\le e^{-\frac{1}{3}u^3+c u^2} \end{equation} for some $c>0$. This, together with \eqref{eq:ajbj4} establishes \eqref{eq:ajbj3}. To conclude the proof we show \eqref{eq2.92}. The idea is to follow the proof of Lemma~\ref{lemUB3B}.
However the first bound \eqref{Oub5b} in that proof applies only to geodesics, while here we have to show it for any paths with a length larger than $4N-C_2 u 2^{4/3}N^{1/3}$. We will prove that for any path $\gamma_1$ satisfying the conditions of $\hat A_j$, for any $\tau\in\{\varepsilon,2\varepsilon,\ldots,1\}$ \begin{equation}\label{eq2.90} \Pb(\gamma_1(2 \tau N)\geq M (2N)^{2/3})\leq C u^2 e^{-\frac43 u^3} \end{equation} for $M=\sqrt{C_2 u+ u^2}$. Then the rest of the proof of Lemma~\ref{lemUB3B} applies, except that the sum in \eqref{eqB7} goes until $M-u-1$. Denote $K(v)=(\tau N,\tau N)+v(2N)^{2/3}(1,-1)$ and divide the possible points where $\gamma_1$ crosses the line $\mathcal{L}_{2\tau N}$ as \begin{equation} \{K(v),v\in [M,\tfrac{\tau N}{(2N)^{2/3}}]\}={\cal I}_0\bigcup_{\ell=M}^{N^{1/10}-1} {\cal I}_\ell, \end{equation} with ${\cal I}_0=\{K(v),v\in [N^{1/10},\tfrac{\tau N}{(2N)^{2/3}}]\}$ and ${\cal I}_\ell=\{K(v), v\in [\ell,\ell+1)\}$. Then we have, for any choice of $A_\ell$ and $B_\ell=4N-C_2 u 2^{4/3}N^{1/3}-A_\ell$, \begin{equation} \begin{aligned} \Pb(\gamma_1(2 \tau N)\geq M (2N)^{2/3})&\leq \sum_{\ell} \Pb\left(L_{(0,0),{\cal I}_\ell}+L_{{\cal I}_\ell,\mathcal{L}_{2N}}\geq 4N-C_2 u 2^{4/3}N^{1/3}\right)\\ &\leq \sum_\ell \Pb\left(L_{(0,0),{\cal I}_\ell}\geq A_\ell\right)+\Pb\left(L_{{\cal I}_\ell,\mathcal{L}_{2N}}\geq B_\ell\right).
\end{aligned} \end{equation} By rescaling Lemma~\ref{lemUB2}, for $s_1\ll (\tau N)^{2/9}$ and $u\ll (\tau N)^{1/9}$ \begin{equation} \begin{aligned} \Pb\Big(L_{(0,0),{\cal I}_\ell}>4\tau N-\frac{\ell^2}{\tau} 2^{4/3} N^{1/3}+ s_1 \tau^{1/3}2^{4/3}N^{1/3}\Big)&\leq C e^{-\frac43 s_1^{3/2}},\\ \Pb\Big(L_{(0,0),{\cal I}_0}>4\tau N-\frac{N^{1/5}}{\tau} 2^{4/3} N^{1/3}+s_1 \tau^{1/3}2^{4/3}N^{1/3}\Big)&\leq C e^{-\frac43 s_1^{3/2}}, \end{aligned} \end{equation} and by rescaling Proposition~\ref{PropUB5} (and a union bound over $(1-\tau)^{-2/3}$ subsegments per $(2N)^{2/3}$-length), for $s_2\ll (1-\tau)^{2/3}N^{2/3}$, we get \begin{equation} \begin{aligned} \Pb\Big(L_{{\cal I}_\ell,\mathcal{L}_{2N}}>4(1-\tau) N+s_2 (1-\tau)^{1/3}2^{4/3}N^{1/3}\Big)&\leq C s_2(1-\tau)^{-2/3} e^{-\frac43 s_2^{3/2}},\\ \Pb\Big(L_{{\cal I}_0,\mathcal{L}_{2N}}>4(1-\tau) N+ s_2 (1-\tau)^{1/3}2^{4/3}N^{1/3}\Big)&\leq C s_2 N^{1/3}(1-\tau)^{-2/3} e^{-\frac43 s_2^{3/2}}. \end{aligned} \end{equation} We take, with $\alpha_\ell=-\frac{\ell^2 (1-\tau)^{1/3}+C_2 u \tau^{4/3}}{((1-\tau)^{1/3}+\tau^{1/3})\tau}$, $A_\ell=4\tau N+\alpha_\ell 2^{4/3}N^{1/3}$. Setting $\ell=M+\tilde \ell$, we get \begin{equation} s_1=s_2=\frac{\ell^2-C_2 u \tau}{((1-\tau)^{1/3}+\tau^{1/3})\tau}\geq u^2+\tilde \ell^2. \end{equation} From this, it follows that \begin{equation}\label{eq2.100} \Pb\left(L_{(0,0),{\cal I}_\ell}\geq A_\ell\right)+\Pb\left(L_{{\cal I}_\ell,\mathcal{L}_{2N}}\geq B_\ell\right)\leq 2C \delta^{-2/3} u \ell^2 e^{-\frac43 u^3}e^{-\frac43 (\ell-M)^{3/2}} \end{equation} and thus $\sum_{\ell=M}^{N^{1/10}-1}\eqref{eq2.100}\leq C' \delta^{-2/3}u^2 e^{-\frac43 u^3}$, while the contribution of ${\cal I}_0$ is smaller by a factor $\mathcal{O}(e^{-\frac43 N^{3/20}})$. Applying a union bound over the $\varepsilon^{-1}=u^{3/2}/\delta$ time intervals and using the estimate \eqref{eq2.90}, we get that any path in $\hat A_j$ is localized within a distance $M(2N)^{2/3}$ with probability at least $1-C e^{-u^3}$.
\end{proof} \section{Lower Bound}\label{SectLowerBound} In this section we prove the lower bound of Theorem~\ref{ThmMain}. \begin{thm}\label{ThmLowerBound} There exists a constant $c>0$ such that \begin{equation} \Cov({\cal A}_1(0),{\cal A}_1(u))\geq e^{-c u \ln(u)} e^{-\frac43 u^3}. \end{equation} \end{thm} We begin by explaining the strategy of the proof. Hoeffding's covariance identity~\cite{Hoef40}, which comes from integration by parts on $\mathbb{R}_+$ and $\mathbb{R}_-$ separately, states that \begin{equation}\label{covid} \Cov(X,Y)=\int_\mathbb{R} ds_1\int_\mathbb{R} ds_2\left[\Pb(X\leq s_1,Y\leq s_2)-\Pb(X\leq s_1)\Pb(Y\leq s_2)\right]. \end{equation} We therefore define the following functions \begin{equation}\label{Fs} \begin{aligned} F(u;s_1,s_2)&=\Pb\left({\cal A}_1(0)\leq s_1,{\cal A}_1(u)\leq s_2\right),\\ f(s)&=\Pb\left({\cal A}_1(0)\leq s\right)=\Pb\left({\cal A}_1(u)\leq s\right), \end{aligned} \end{equation} where in the last equality we used the stationarity of ${\cal A}_1$. As we would like to use \eqref{covid} with $X={\cal A}_1(0)$ and $Y={\cal A}_1(u)$, we are interested in finding the asymptotic behavior of \begin{equation}\label{Ff2} F(u;s_1,s_2)-f(s_1)f(s_2) \quad \text{ as } u\rightarrow\infty. \end{equation} As $u\to\infty$, the random variables ${\cal A}_1(0)$ and ${\cal A}_1(u)$ become independent of each other; thus we define $\cal E$ through \begin{equation}\label{Ff} F(u;s_1,s_2)=f(s_1)f(s_2)(1+{\cal E}(u;s_1,s_2)), \end{equation} where ${\cal E}\rightarrow 0$ when $u\rightarrow \infty$ (at least for $s_1$ and $s_2$ independent of $u$). Using \eqref{Ff} in \eqref{Ff2} and \eqref{covid} we obtain \begin{equation}\label{int} \text{Cov}({\cal A}_1(0),{\cal A}_1(u))=\int_\mathbb{R} ds_1 \int_\mathbb{R} ds_2 f(s_1)f(s_2){\cal E}(u;s_1,s_2). \end{equation} Next, by the FKG inequality, see Lemma~\ref{lemFKG} below, the integrand in \eqref{int} is positive for all $u\geq 0$.
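For completeness, here is a sketch of where the identity \eqref{covid} comes from (under suitable integrability assumptions): writing $X=\int_{\mathbb{R}}(\mathbbm{1}_{[X>s_1]}-\mathbbm{1}_{[0>s_1]})\,ds_1$ and similarly for $Y$, Fubini's theorem gives

```latex
\begin{aligned}
\Cov(X,Y)&=\int_{\mathbb{R}}ds_1\int_{\mathbb{R}}ds_2\,
  \Cov\big(\mathbbm{1}_{[X>s_1]},\mathbbm{1}_{[Y>s_2]}\big)\\
&=\int_{\mathbb{R}}ds_1\int_{\mathbb{R}}ds_2
  \left[\Pb(X>s_1,Y>s_2)-\Pb(X>s_1)\Pb(Y>s_2)\right],
\end{aligned}
```

and the bracket is unchanged when both events are replaced by their complements, since $\Pb(A\cap B)-\Pb(A)\Pb(B)=\Pb(A^c\cap B^c)-\Pb(A^c)\Pb(B^c)$; this yields \eqref{covid}.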
We can therefore restrict the integration in \eqref{int} to a compact subset of $\mathbb{R}^2$ to obtain the following lower bound \begin{equation}\label{eq1.7} \text{Cov}({\cal A}_1(0),{\cal A}_1(u))\geq \int_\alpha^{\beta} ds_1 \int_\alpha^\beta ds_2 f(s_1)f(s_2) {\cal E}(u;s_1,s_2) \end{equation} for any choice of $\alpha<\beta$. Thus the goal of the computations below is to show that $\cal E$ is of order $e^{-\frac43 u^3}$, up to subleading factors, for $s_1,s_2$ in some chosen intervals of size $\mathcal{O}(1)$ where $f(s_1)$ and $f(s_2)$ are bounded away from $0$. \begin{lem}\label{lemFKG} For any $s_1,s_2\in\mathbb{R}$, \begin{equation}\label{fkga} \Pb({\cal A}_1(0)\leq s_1,{\cal A}_1(u)\leq s_2)-\Pb({\cal A}_1(0)\leq s_1)\Pb({\cal A}_1(u)\leq s_2)\geq 0. \end{equation} \end{lem} \begin{proof} Recalling the notation from \eqref{eq1.4}, notice that both $L^{*}_{N}(0)$ and $L_{N}^{*}(u)$ are increasing functions of the weights $\omega_{i,j}\sim \exp(1)$ (and they depend only on finitely many vertex weights). For any $t_1,t_2$ it therefore follows that the events $\{L^{*}_{N}(0)\le t_1\}$ and $\{L^{*}_{N}(u)\le t_2\}$ are both decreasing and hence by the FKG inequality they are positively correlated (note that the FKG inequality is often stated for measures on finite distributive lattices satisfying the FKG lattice condition, but more general versions for product measures on finite products of totally ordered measure spaces applicable in the above scenario are available; see e.g.\ Lemma~2.1 of~\cite{Kesten2003} or Corollary~2 of~\cite{kemperman1977fkg}), and therefore \begin{equation}\label{fkg} \Pb(L^{*}_{N}(0)\le t_1,L^{*}_{N}(u)\le t_2)-\Pb(L^{*}_{N}(0)\le t_1)\Pb(L^{*}_{N}(u)\le t_2)\geq 0. \end{equation} Using that the Airy$_1$ process is a scaling limit of $L^*$ (see~\eqref{eqUB5}), the proof is complete. \end{proof} We first derive an expression for ${\cal E}(u;s_1,s_2)$. Let us begin with the Fredholm representation of the function $F$.
We have from~\cite{Sas05,BFPS06,Fer07} \begin{equation}\label{Fr} F(u;s_1,s_2)=\det(\mathbbm{1}-K) \end{equation} where $K$ is a $2\times 2$ matrix kernel \begin{equation}\label{K} K=\left( \begin{array}{cc} K_{1,1} & K_{1,2} \\ K_{2,1} & K_{2,2} \\ \end{array} \right) \end{equation} with entries given by the extended kernel of the Airy$_1$ process~\cite{Sas05,BFPS06,Fer07} \begin{equation}\label{Ker} \begin{aligned} K_{1,1}(x,y)&=\mathbbm{1}_{[x>s_1]}\mathbbm{1}_{[y>s_1]}\Ai(x+y),\\ K_{1,2}(x,y)&=\mathbbm{1}_{[x>s_1]}\mathbbm{1}_{[y>s_2]}\Big(\Ai(x+y+u^2) e^{(x+y)u+\tfrac23 u^3}-\frac{e^{-(x-y)^2/4u}}{\sqrt{4\pi u}}\Big),\\ K_{2,1}(x,y)&=\mathbbm{1}_{[x>s_2]}\mathbbm{1}_{[y>s_1]}\Ai(x+y+u^2) e^{-(x+y)u-\tfrac23 u^3},\\ K_{2,2}(x,y)&=\mathbbm{1}_{[x>s_2]}\mathbbm{1}_{[y>s_2]}\Ai(x+y), \end{aligned} \end{equation} where $\Ai$ denotes the Airy function. Also, for the one-point distributions, we have $f(s_i)=\det(\mathbbm{1}-K_{i,i})$ for $i=1,2$. The first step is the following result. \begin{lem}\label{LemmaDec} With the above notations \begin{equation}\label{FdK} 1+{\cal E}(u;s_1,s_2)=\det(\mathbbm{1}-\widetilde K) \end{equation} where \begin{equation} \widetilde K=(\mathbbm{1}-K_{1,1})^{-1}K_{1,2}(\mathbbm{1}-K_{2,2})^{-1}K_{2,1}. 
\end{equation} \end{lem} \begin{proof} Similar to~\cite{Wid03}, we compute \begin{equation} \begin{aligned} & \det\left(\mathbbm{1}-\left( \begin{array}{cc} K_{1,1} & K_{1,2} \\ K_{2,1} & K_{2,2} \\ \end{array} \right) \right) = \det\left(\mathbbm{1}-\left( \begin{array}{cc} K_{1,1} & 0 \\ 0 & K_{2,2} \\ \end{array} \right) - \left( \begin{array}{cc} 0 & K_{1,2} \\ K_{2,1} & 0 \\ \end{array} \right) \right)\\ &= \det\left(\mathbbm{1}-\left( \begin{array}{cc} K_{1,1} & 0 \\ 0 & K_{2,2} \\ \end{array} \right)\right) \\&\qquad \times \det\left(\mathbbm{1}-\left( \begin{array}{cc} (\mathbbm{1}-K_{1,1})^{-1} & 0 \\ 0 & (\mathbbm{1}-K_{2,2})^{-1} \\ \end{array} \right) \left( \begin{array}{cc} 0 & K_{1,2} \\ K_{2,1} & 0 \\ \end{array} \right)\right) \\ &=\det(\mathbbm{1}-K_{1,1})\det(\mathbbm{1}-K_{2,2}) \det\left(\mathbbm{1}-\left( \begin{array}{cc} 0 &-G \\ -H & 0 \\ \end{array} \right)\right), \end{aligned} \end{equation} where we set $G=-(\mathbbm{1}-K_{1,1})^{-1}K_{1,2}$ and $H=-(\mathbbm{1}-K_{2,2})^{-1}K_{2,1}$. Moreover, \begin{equation} \begin{aligned} \det\left( \begin{array}{cc} \mathbbm{1} &G \\ H & \mathbbm{1} \\ \end{array} \right) = \det\left(\left( \begin{array}{cc} \mathbbm{1} & G \\ H & \mathbbm{1} \\ \end{array} \right) \left( \begin{array}{cc} \mathbbm{1} & 0 \\ -H & \mathbbm{1} \\ \end{array} \right) \right)=\det(\mathbbm{1}-G\, H), \end{aligned} \end{equation} where \begin{equation} G\, H = (\mathbbm{1}-K_{1,1})^{-1}K_{1,2}(\mathbbm{1}-K_{2,2})^{-1}K_{2,1}. \end{equation} Since $\det(\mathbbm{1}-K_{\ell,\ell})=f(s_\ell)$, for $\ell=1,2$ as we mentioned above, \eqref{FdK} follows from \eqref{Ff}. \end{proof} Next we would like to approximate the Fredholm determinant in \eqref{FdK} by that of a simpler kernel. \begin{prop}\label{PropR1} Let us define \begin{equation} R_1(u;s_1,s_2):=\det(\mathbbm{1}-\tilde{K})-\det(\mathbbm{1}-K_{1,2}K_{2,1}). 
\end{equation} Then, for any $s_1,s_2\geq 0$, there exists a constant $C_2>0$ such that \begin{equation}\label{ubd} |R_1(u;s_1,s_2)|\leq \frac{C_2 e^{-\min\{s_1,s_2\}}}{u^2}e^{-\tfrac43 u^3-2(s_1+s_2)u} \end{equation} for any $u\geq \max\{\frac12,\sqrt{s_1+s_2}\}$. \end{prop} The proof of this proposition is in Section~\ref{sectProofProp34}. \subsection{The leading term}\label{sec:tr} Lemma~\ref{LemmaDec} and Proposition~\ref{PropR1} suggest \begin{equation} {\cal E}(u;s_1,s_2)\sim \det(\mathbbm{1}-K_{1,2}K_{2,1})-1 \quad \text{ as }u\rightarrow \infty. \end{equation} For a trace class operator $\cal K$, the Fredholm determinant is given by \begin{equation} \det(\mathbbm{1}-{\cal K})=\sum_{n=0}^{\infty}\frac{(-1)^n}{n!}\int_{\mathbb{R}^n} dx_1\cdots dx_n\det[{\cal K}(x_i,x_j)]_{i,j=1}^n. \end{equation} When ${\cal K}=K_{1,2}K_{2,1}$, this translates to \begin{equation}\label{asy2} \det(\mathbbm{1}-K_{1,2}K_{2,1})= 1-\Tr\big(K_{1,2}K_{2,1}\big)+R_2(u;s_1,s_2), \end{equation} where $\Tr\big(K_{1,2}K_{2,1}\big)=\int_\mathbb{R} dx \int_\mathbb{R} dy K_{1,2}(x,y)K_{2,1}(y,x)$ and \begin{equation} \label{Req} R_2(u;s_1,s_2)=\sum_{n=2}^{\infty}\frac{(-1)^n}{n!}\int_{s_1}^\infty\cdots\int_{s_1}^\infty dx_1\cdots dx_n \det[K_{1,2}K_{2,1}(x_i,x_j)]_{i,j=1}^n. \end{equation} From \eqref{Ker}, it is clear that the upper tail of $K_{1,2}K_{2,1}$ in either variable is determined by that of the function $\Ai$, which is known to decay super-exponentially, see \eqref{asy}. As each of the determinants in \eqref{Req} is a sum of products of elements of the order of $\Tr\big(K_{1,2}K_{2,1}\big)$, one expects $R_2$ to be dominated by the trace term and therefore that \begin{equation}\label{tra} \det(\mathbbm{1}-K_{1,2}K_{2,1})\sim 1-\Tr(K_{1,2}K_{2,1}) \end{equation} if $\Tr(K_{1,2}K_{2,1})$ is small. Let us move on to the computation of $\Tr(K_{1,2}K_{2,1})$.
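Both the factorization of Lemma~\ref{LemmaDec} and the first-order expansion \eqref{asy2} have exact finite-dimensional analogues, which can be sanity-checked numerically (a quick illustration with random matrices, not part of the proof; the sizes and scales below are arbitrary choices):

```python
import numpy as np

# Finite-dimensional sanity check of two facts used above:
# (i)  the block-determinant factorization behind Lemma LemmaDec,
# (ii) the first-order expansion det(1 - K) = 1 - Tr K + O(||K||^2).
rng = np.random.default_rng(0)
n = 6
K11, K12, K21, K22 = (0.1 * rng.standard_normal((n, n)) for _ in range(4))
I = np.eye(n)

# (i) det(I - K) = det(I - K11) det(I - K22) det(I - Ktilde), with
#     Ktilde = (I - K11)^{-1} K12 (I - K22)^{-1} K21
lhs = np.linalg.det(np.eye(2 * n) - np.block([[K11, K12], [K21, K22]]))
Ktilde = np.linalg.inv(I - K11) @ K12 @ np.linalg.inv(I - K22) @ K21
rhs = np.linalg.det(I - K11) * np.linalg.det(I - K22) * np.linalg.det(I - Ktilde)
assert abs(lhs - rhs) < 1e-10

# (ii) for a kernel of small norm the determinant is 1 - trace up to
#      quadratic corrections, mirroring the role played by R_2
A = 1e-4 * rng.standard_normal((n, n))
assert abs(np.linalg.det(I - A) - (1 - np.trace(A))) < 1e-6
```

The factorization in (i) is an exact algebraic identity, while (ii) has an error quadratic in the entry scale, which is the finite-dimensional counterpart of bounding $R_2$.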
We write the kernel entries $(K_{1,2} K_{2,1})(x,y)$ as well as its trace using complex integral representations, which will then be analyzed. We start with the following identities (see e.g.\ Appendix~A of~\cite{BFP09} for the first and last, while the second is a standard Gaussian integral) \begin{equation}\label{eqintRepr} \begin{aligned} &\frac{1}{2\pi{\rm i}}\int_{\gamma_a} d\xi e^{-\xi^3/3+u\xi^2+x\xi}=\Ai(x+u^2)e^{\frac23u^3+ux},\\ & \frac1{2\pi{\rm i}}\int_{\gamma_a} d\xi e^{u\xi^2+x\xi}= \frac{e^{-\frac{x^2}{4u}}}{\sqrt{4\pi u}},\\ &\frac{1}{2\pi{\rm i}} \int_{\gamma_b} d\eta e^{\frac{\eta^3}3-u\eta^2-x\eta} = \Ai(x+u^2)e^{-\frac23 u^3-ux}, \end{aligned} \end{equation} where \begin{align} \gamma_a&=a+{\rm i}\mathbb{R} \quad \text{for $a<u$}\label{contour1}\\ \gamma_b&=b+{\rm i}\mathbb{R} \quad \text{for $b>u$}\label{contour2}. \end{align} Plugging these identities into \eqref{Ker} we can write \begin{equation}\label{eq1.24} \begin{aligned} K_{1,2}(x,y)&=\mathbbm{1}_{x>s_1}\mathbbm{1}_{y>s_2}\frac{1}{2\pi{\rm i}}\int_{\gamma_a} d\xi \Big[e^{-\xi^3/3+u\xi^2+(x+y)\xi}-e^{u\xi^2+(x-y)\xi}\Big],\\ K_{2,1}(x,y)&=\mathbbm{1}_{x>s_2}\mathbbm{1}_{y>s_1}\frac{1}{2\pi{\rm i}} \int_{\gamma_b} d\eta e^{\frac{\eta^3}3-u\eta^2-(x+y)\eta}. \end{aligned} \end{equation} This leads to the following representation of the kernel $K_{1,2}K_{2,1}$: \begin{equation} \begin{aligned}\label{eq7} &(K_{1,2}K_{2,1})(x,y)=\int_\mathbb{R} dz K_{1,2}(x,z) K_{2,1}(z,y)\\ &=\frac{\mathbbm{1}_{x>s_1}\mathbbm{1}_{y>s_1}}{(2\pi{\rm i})^2}\int_{\gamma_a} d\xi\int_{\gamma_b} d\eta e^{u\xi^2+x\xi}e^{\frac{\eta^3}3-u\eta^2-y\eta}\Bigg[\frac{e^{-\frac{\xi^3}3-s_2(\eta-\xi)}}{\eta-\xi}-\frac{e^{-s_2(\eta+\xi)}}{\eta+\xi}\Bigg]. 
\end{aligned} \end{equation} Setting $x=y$ and integrating over $x$ (on $[s_1,\infty)$ due to the indicator functions) we get \begin{equation} \begin{aligned}\label{K12K21} \Tr(K_{1,2}K_{2,1})=& \frac{1}{(2\pi{\rm i})^2}\int_{\gamma_a} d\xi \int_{\gamma_b} d\eta\, e^{\frac{\eta^3}3-u\eta^2-(s_1+s_2)\eta}\frac1{\eta-\xi}\\ &\times\Big[e^{-\xi^3/3+u\xi^2 +(s_1+s_2)\xi}\frac{1}{\eta-\xi}-e^{u\xi^2+(s_1-s_2)\xi}\frac{1}{\xi+\eta}\Big]. \end{aligned} \end{equation} We begin by determining the asymptotic behavior of $\Tr(K_{1,2}K_{2,1})$. \begin{prop}\label{propTrace} For all $0\leq s_1,s_2\leq \sqrt{u}$, \begin{equation}\label{eq3.23} -\Tr(K_{1,2}K_{2,1})= \frac{1}{16\pi u^4}e^{-2(s_1+s_2)u-\frac43 u^3}\left[s_1s_2+\mathcal{O}(u^{-1/4})\right] \end{equation} as $u\to\infty$. \end{prop} \begin{proof} To get the asymptotic behavior of the trace, we need to choose the parameters $a,b$ in the integration contours. We do it in a way that the dominant parts in the exponents of the contour integrals in \eqref{K12K21} are minimized. \begin{equation}\label{eq3.32} \begin{aligned} \eqref{K12K21}&= \frac{1}{(2\pi{\rm i})^2}\int_{\gamma_a} d\xi_1 \int_{\gamma_b} d\eta\, e^{\frac{\eta^3}3-u\eta^2-(s_1+s_2)\eta}e^{-\xi_1^3/3+u\xi_1^2 +(s_1+s_2)\xi_1}\frac1{(\eta-\xi_1)^2}\\ &-\frac{1}{(2\pi{\rm i})^2}\int_{\gamma_a} d\xi_2 \int_{\gamma_b} d\eta\, e^{\frac{\eta^3}3-u\eta^2-(s_1+s_2)\eta}e^{u\xi_2^2+(s_1-s_2)\xi_2}\frac{1}{(\xi_2+\eta)(\eta-\xi_2)}. \end{aligned} \end{equation} Now we need to choose the parameters $a,b$. For that reason we search for the minimizers of the different exponents in \eqref{eq3.32}: \begin{enumerate} \item[(a)] The stationarity condition \begin{equation} \frac{d}{d\xi_1}\Big(-\frac{\xi_1^3}3+u\xi_1^2+(s_1+s_2)\xi_1\Big)=-\xi_1^2+2u\xi_1+(s_1+s_2)=0 \end{equation} is solved by $\xi_1=u-\sqrt{u^2+s_1+s_2}=:a_1$ or $\xi_1=u+\sqrt{u^2+s_1+s_2}$. The solution which satisfies the constraint $\Re(\xi_1)<u$ in \eqref{contour1} is also the minimum.
\item[(b)] For \begin{equation} \frac{d}{d\xi_2}\Big(u\xi_2^2+(s_1-s_2)\xi_2\Big)=2u\xi_2+(s_1-s_2) \end{equation} we see that $\xi_2=(s_2-s_1)/(2u)=:a_2$ is the minimum. \item[(c)] Similarly, \begin{equation} \frac{d}{d\eta}\Big(\frac{\eta^3}3-u\eta^2-(s_1+s_2)\eta\Big)=\eta^2-2u\eta-(s_1+s_2)=0 \end{equation} has the solutions $\eta=u\pm\sqrt{u^2+s_1+s_2}$; the minimum is attained at $\eta=u+\sqrt{u^2+s_1+s_2}=:b$, which satisfies $\Re(\eta)>u$. \end{enumerate} So let us use the following change of variables \begin{equation}\label{chov} \xi_1=a_1+\frac{z}{\sqrt{u}},\quad \xi_2=a_2+\frac{z}{\sqrt{u}},\quad \eta=b+\frac{w}{\sqrt{u}} \end{equation} with $z,w\in{\rm i}\mathbb{R}$ into \eqref{eq3.32}. The two terms are analyzed in the same way, thus we write the details only for the first one. Denote $\sigma=(s_1+s_2)/u^2\leq 2 u^{-3/2}$ and consider the first term in \eqref{eq3.32}. We have \begin{equation} \begin{aligned} &e^{\frac{\eta^3}3-u\eta^2-(s_1+s_2)\eta}e^{-\frac13\xi_1^3+u\xi_1^2 +(s_1+s_2)\xi_1}= e^{-\frac43u^3 (1+\sigma)^{3/2}} e^{\sqrt{1+\sigma}(w^2+z^2)} e^{\frac{w^3-z^3}{3 u^{3/2}}}\\ &=e^{-\frac43 u^3-2(s_1+s_2)u-\frac{(s_1+s_2)^2}{2u}(1+\mathcal{O}(\sigma))} e^{(z^2+w^2)(1+\mathcal{O}(\sigma;z u^{-3/2};w u^{-3/2}))} \end{aligned} \end{equation} and for the prefactor\footnote{The notation $\mathcal{O}(a_1;...\,;a_k)$ stands for $\mathcal{O}(a_1)+...+\mathcal{O}(a_k)$.} \begin{equation} \frac1{(\eta-\xi_1)^2} = \frac{1}{4u^2}(1+\mathcal{O}(z u^{-3/2};w u^{-3/2};\sigma)). \end{equation} Each term in the exponential which is cubic in $z,w\in{\rm i}\mathbb{R}$ is purely imaginary, thus its exponential is bounded by $1$. Furthermore, the quadratic terms in $z$ and $w$ have a positive prefactor $\sqrt{1+\sigma}\geq 1$. Thus integrating over $|z|>u^{1/4}$ and/or $|w|>u^{1/4}$ we get a correction term of order $e^{-\sqrt{u}}$ times the value of the integrand at $z=w=0$.
For the rest of the integral, for which $|z|,|w|\leq u^{1/4}$, the error terms satisfy $\mathcal{O}(z u^{-3/2};w u^{-3/2};\sigma)=\mathcal{O}(u^{-5/4})$. So the first term in \eqref{eq3.32} is given by \begin{multline}\label{eq3.39} \frac{e^{-\frac43 u^3-2(s_1+s_2)u-\frac{(s_1+s_2)^2}{2u}(1+\mathcal{O}(\sigma))}}{4 u^3} \bigg[\mathcal{O}(e^{-\sqrt{u}})\\ +\frac{(1+\mathcal{O}(u^{-5/4}))}{(2\pi{\rm i})^2}\int_{-{\rm i} u^{1/4}}^{{\rm i} u^{1/4}} dz \int_{-{\rm i} u^{1/4}}^{{\rm i} u^{1/4}} dw e^{(z^2+w^2)(1+\mathcal{O}(u^{-5/4}))}\bigg]. \end{multline} Finally, extending the Gaussian integrals to ${\rm i}\mathbb{R}$, we get an error term $\mathcal{O}(e^{-\sqrt{u}})$ only and using \begin{equation} \frac1{(2\pi{\rm i})^2}\int_{{\rm i}\mathbb{R}}dz \int_{{\rm i}\mathbb{R}}dw\,e^{w^2+z^2}=\frac1{4\pi} \end{equation} we get \begin{equation}\label{eq3.41} \begin{aligned} \eqref{eq3.39}&=\frac{e^{-\frac43 u^3-2(s_1+s_2)u-\frac{(s_1+s_2)^2}{2u}(1+\mathcal{O}(\sigma))}}{16\pi u^3}(1+\mathcal{O}(u^{-5/4}))\\ &=\frac{e^{-\frac43 u^3-2(s_1+s_2)u}}{16 \pi u^3}\left[1-\frac{(s_1+s_2)^2}{2u}+\mathcal{O}(u^{-5/4})\right]. \end{aligned} \end{equation} A similar computation for the second term in \eqref{eq3.32} leads to \begin{equation}\label{eq3.42} -\frac{e^{-\frac43 u^3-2(s_1+s_2)u}}{16 \pi u^3}\left[1-\frac{s_1^2+s_2^2}{2u}+\mathcal{O}(u^{-5/4})\right]. \end{equation} Summing up \eqref{eq3.41} and \eqref{eq3.42} we obtain \begin{equation} -\Tr(K_{1,2}K_{2,1})=\frac{e^{-\frac43 u^3-2(s_1+s_2)u}}{16 \pi u^4}\left[s_1s_2+ \mathcal{O}(u^{-1/4})\right]. \end{equation} \end{proof} \subsection{Bounding lower order terms} To show \eqref{tra} we need to get a bound on $R_2(u;s_1,s_2)$ from \eqref{Req}, which is $o(\Tr\big(K_{1,2}K_{2,1}\big))$. \begin{prop}\label{propR2} There exists a constant $C>0$ such that \begin{equation} |R_2(u;s_1,s_2)|\leq \frac{C}{u^6} e^{-4(s_1+s_2)u} e^{-\frac83 u^3} \end{equation} uniformly in $s_1,s_2\in \mathbb{R}$.
\end{prop} \begin{proof} In \eqref{eq7} we use the change of variables \begin{equation} \xi = \frac{z}{\sqrt{u}},\quad \eta =2u+\frac{w}{\sqrt{u}}\label{chov4}, \end{equation} where $z,w\in{\rm i}\mathbb{R}$. This leads to \begin{equation}\label{eq1.40} \eqref{eq7} = \mathbbm{1}_{x>s_1}\mathbbm{1}_{y>s_1}\frac{e^{-\frac43 u^3-2u (s_2+y)}}{(2\pi{\rm i})^2 u}\int_{{\rm i}\mathbb{R}}dz\int_{{\rm i}\mathbb{R}}dw e^{z^2+w^2} P(z,w,x,y) \end{equation} with \begin{equation} P(z,w,x,y)=e^{\frac{x z}{\sqrt{u}}+\frac{w^3}{3 u^{3/2}}-\frac{w y}{\sqrt{u}}} \bigg[ \frac{e^{-\frac{z^3}{3 u^{3/2}}-s_2\frac{w-z}{\sqrt{u}}}}{2u+\frac{w-z}{\sqrt{u}}}-\frac{e^{-s_2\frac{z+w}{\sqrt{u}}}}{2u+\frac{z+w}{\sqrt{u}}} \bigg]. \end{equation} Since $x,y\in\mathbb{R}$ and $z,w\in{\rm i}\mathbb{R}$, we get the simple bound $|P(z,w,x,y)|\leq \frac{1}{u}$, while $e^{z^2+w^2}=e^{-|z|^2-|w|^2}$ is positive. Performing the Gaussian integral we then get \begin{equation} |\eqref{eq1.40}|\leq \mathbbm{1}_{x>s_1}\mathbbm{1}_{y>s_1}\frac{e^{-\frac43 u^3-2 (s_2+y)u}}{4\pi u^2}. \end{equation} Let $K=K_{1,2}K_{2,1}$. Hadamard's inequality\footnote{Let $A$ be an $n\times n$ matrix with $|A_{i,j}|\leq 1$. Then $|\det(A)|\leq n^{n/2}$.} gives \begin{equation} |\det[K(x_i,x_j)]_{i,j=1}^n|\leq n^{n/2} \prod_{j=1}^n \frac{e^{-\frac{4u^3}{3}-2(x_j+s_2)u}\mathbbm{1}_{x_j>s_1}}{4\pi u^2} \end{equation} so that \begin{equation} \bigg|\int_{x_1\geq s_1}dx_1\cdots\int_{x_n\geq s_1}dx_n\det[K(x_i,x_j)]_{i,j=1}^n \bigg|\leq n^{n/2}\bigg(\frac{e^{-\frac{{4u^3}}{3}-2(s_1+s_2)u}}{8\pi u^3}\bigg)^n. \end{equation} Denote $M=\frac{e^{-\frac{{4u^3}}{3}-2(s_1+s_2)u}}{8\pi u^3}$. Then there exists $C>0$ such that \begin{equation} |R_2(u;s_1,s_2)|\leq\sum_{n=2}^{\infty}\frac{M^nn^{n/2}}{n!}\leq C M^2\leq Cu^{-6}e^{-\frac{{8u^3}}{3}-4(s_1+s_2)u}, \end{equation} where the second inequality uses that $n^{n/2}/n!$ decays superexponentially in $n$ (by Stirling's formula). This completes the proof. \end{proof} We have now all the ingredients to complete the proof of Theorem~\ref{ThmLowerBound}.
\begin{proof}[Proof of Theorem~\ref{ThmLowerBound}] We have \begin{equation} \begin{aligned} F(u;s_1,s_2)\stackrel{\eqref{FdK}}{=}&f(s_1)f(s_2)\det(\mathbbm{1}-\tilde{K})\\ \stackrel{\eqref{ubd}}{=}&f(s_1)f(s_2)[\det(\mathbbm{1}-K_{1,2}K_{2,1})+R_1(u;s_1,s_2)]\\ \stackrel{\eqref{asy2}}{=}&f(s_1)f(s_2)[1-\Tr(K_{1,2}K_{2,1})+R(u;s_1,s_2)], \end{aligned} \end{equation} where \begin{equation} R(u;s_1,s_2)=R_1(u;s_1,s_2)+R_2(u;s_1,s_2). \end{equation} It follows that \begin{equation} F(u;s_1,s_2)-f(s_1)f(s_2)=-f(s_1)f(s_2)\Tr(K_{1,2}K_{2,1})+f(s_1)f(s_2)R(u;s_1,s_2). \end{equation} We shall apply the lower bound \eqref{eq1.7} for the covariance with an appropriate choice of $\alpha=\alpha(u)$ and $\beta=\beta(u)$ such that the contribution of the error term $f(s_1)f(s_2)R(u;s_1,s_2)$ is of subleading order. We shall choose $\alpha>0$ depending on $u$ and for concreteness set $\beta=\alpha+1$. Thus we integrate over a region of area $1$. By Proposition~\ref{propR2}, the error term $R_2(u;s_1,s_2)$ is much smaller than the leading term coming from the trace, see Proposition~\ref{propTrace} (of order $e^{-\frac43 u^3}$ smaller) for $s_1,s_2\ll u^2$. However, the exponential term in $u$ in the error bound of $R_1(u;s_1,s_2)$ coming from \eqref{ubd} is of the same order, namely $e^{-2(s_1+s_2)u-\frac43 u^3}$. The difference is that in the trace we have a prefactor $\sim \frac{s_1 s_2}{u^4}$, while in the bound for $R_1$ we have a prefactor $\sim\frac{e^{-\min\{s_1,s_2\}}}{u^2}$. Thus, in order to ensure that the contribution from $R_1$ is subleading, we can take $s_1,s_2\sim c\ln(u)$ for any $c\geq 3$. Therefore, choosing $\alpha=3\ln(u)$ and using the fact that for all $x\geq 0$, $f(x)=F_{\rm GOE}(2^{2/3}x)$ is bounded away from $0$ (in fact, one can numerically check that $f(x)\in [F_{\rm GOE}(0),1]=[0.83...,1]$), we finally get the claimed result.
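To make the comparison explicit for the above choice (a rough size comparison, with all constants suppressed): for $s_1,s_2\in[\alpha,\beta]=[3\ln(u),3\ln(u)+1]$ we have \begin{equation*} \frac{s_1 s_2}{u^4}\asymp \frac{\ln^2(u)}{u^4}, \qquad \frac{e^{-\min\{s_1,s_2\}}}{u^2}\leq \frac{u^{-3}}{u^2}=\frac{1}{u^5}, \end{equation*} so the contribution of $R_1$ is indeed smaller than the trace term by a factor of order $1/(u\ln^2(u))$.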
\end{proof} \subsection{Proof of Proposition~\ref{PropR1}}\label{sectProofProp34} To prove Proposition~\ref{PropR1}, we begin with the following well-known bound (see e.g.\ Lemma~4(d), Chapter XIII.17 of~\cite{RS78III}) \begin{equation}\label{ub13} |\det(\mathbbm{1}-\tilde{K})-\det(\mathbbm{1}-K_{1,2}K_{2,1})|\leq \|Q\|_1 e^{\|Q\|_1+2\|K_{1,2}K_{2,1}\|_1+1}, \end{equation} where $Q=\tilde{K}-K_{1,2}K_{2,1}$ and $\|\cdot\|_1$ denotes the trace norm (see e.g.~\cite{Sim00}). From Lemmas~\ref{lem:K12K21} and~\ref{lem:Qb} below, the exponent in the display above is bounded and the result follows. \qed In the remainder of this section we prove the two results, Lemma~\ref{lem:K12K21} and Lemma~\ref{lem:Qb}, used above. We first need to prove some auxiliary bounds. From the identity \begin{equation}\label{id} (\mathbbm{1}-K_{1,1})^{-1}=\mathbbm{1}+(\mathbbm{1}-K_{1,1})^{-1}K_{1,1} \end{equation} we see that \begin{equation}\label{id2} \begin{aligned} Q=&\tilde{K} - K_{1,2} K_{2,1}=(\mathbbm{1}-K_{1,1})^{-1}K_{1,1}K_{1,2}(\mathbbm{1}-K_{2,2})^{-1}K_{2,2}K_{2,1}\\ &+K_{1,2}(\mathbbm{1}-K_{2,2})^{-1}K_{2,2}K_{2,1}+(\mathbbm{1}-K_{1,1})^{-1}K_{1,1}K_{1,2}K_{2,1}. \end{aligned} \end{equation} Recall that from \eqref{ub13} we need to bound $\|Q\|_1$. Thus it is enough to bound the $\|\cdot\|_1$-norm of each of the terms on the right-hand side of \eqref{id2}. Since $K_{1,2}(x,y)$ does not decay as $x=y\to \infty$, we could either work in weighted $L^2$ spaces or, as we do here, introduce some weighting in the kernel elements. Namely, define \begin{equation} \begin{aligned} \bar{K}_{1,2}^L(x,y)&=e^{-\frac x2} K_{1,2}(x,y), \quad \bar{K}_{1,2}^R(x,y)= K_{1,2}(x,y)e^{-\frac y2},\\ \bar{K}_{1,1}(x,y)&= K_{1,1}(x,y) e^{\frac y2},\quad \bar{K}_{2,2}(x,y)= e^{\frac x2}K_{2,2}(x,y).
\end{aligned} \end{equation} Also, we use the norm inequalities $\|A B\|_1\leq \|A\|_2 \|B\|_2$ and $\|A\|_2\leq \|A\|_1$, with $\|\cdot\|_2$ the Hilbert-Schmidt norm given by $\|K\|_2=\Big(\int dxdy (K(x,y))^2\Big)^{1/2}$. These norm inequalities, the fact that $K_{i,i}$ and $(\mathbbm{1}-K_{i,i})^{-1}$ commute and the identity $K_{1,1}K_{1,2}=\bar{K}_{1,1}\bar{K}_{1,2}^L$ lead to \begin{equation}\label{ub17a} \|(\mathbbm{1}-K_{1,1})^{-1}K_{1,1}K_{1,2}K_{2,1}\|_1\leq \|(\mathbbm{1}-K_{1,1})^{-1}\|_2\|\bar{K}_{1,1}\|_2\|\bar{K}^L_{1,2}\|_2\|K_{2,1}\|_2. \end{equation} Moreover, using $K_{1,2}K_{2,2}=\bar{K}^R_{1,2}\bar{K}_{2,2}$, we get \begin{equation}\label{ub17b} \begin{aligned} \|K_{1,2}(\mathbbm{1}-K_{2,2})^{-1}K_{2,2}K_{2,1}\|_1 &=\|K_{1,2}K_{2,2}(\mathbbm{1}-K_{2,2})^{-1}K_{2,1}\|_1 \\ &\leq \|\bar{K}_{1,2}^R\|_2\|\bar{K}_{2,2}\|_2\|(\mathbbm{1}-K_{2,2})^{-1}\|_2\|K_{2,1}\|_2, \end{aligned} \end{equation} and finally \begin{equation}\label{ub17c} \begin{aligned} & \|(\mathbbm{1}-K_{1,1})^{-1}K_{1,1}K_{1,2}(\mathbbm{1}-K_{2,2})^{-1}K_{2,2}K_{2,1}\|_1 \\ &\leq \|(\mathbbm{1}-K_{1,1})^{-1}\|_2\|\bar{K}_{1,1}\|_2\|\bar{K}^L_{1,2}\|_2\|(\mathbbm{1}-K_{2,2})^{-1}\|_2\|K_{2,2}\|_2\|K_{2,1}\|_2. \end{aligned} \end{equation} We turn to bound each of the terms on the right hand side of each of the inequalities in \eqref{ub17a}-\eqref{ub17c}. We also use the following change of variables: for $a,b\in \mathbb{R}$, define \begin{equation}\label{LT} \begin{aligned} \omega = (x-a)+(y-b),\quad \zeta=(x-a)-(y-b), \end{aligned} \end{equation} which give $x=a+\frac12(\omega+\zeta)$ and $y=b+\frac12(\omega-\zeta)$. So the integral over $(x,y)\in [a,\infty)\times[b,\infty)$ becomes an integral over $(\omega,\zeta)$ with $\omega\geq 0$ and $\zeta\in [-\omega,\omega]$. 
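For completeness, the transformed domain of integration can be checked directly: by \eqref{LT}, $x-a=\tfrac12(\omega+\zeta)$ and $y-b=\tfrac12(\omega-\zeta)$, so that \begin{equation*} x\geq a \text{ and } y\geq b \iff \omega+\zeta\geq 0 \text{ and } \omega-\zeta\geq 0 \iff \omega\geq 0 \text{ and } \zeta\in[-\omega,\omega]. \end{equation*}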
\begin{lem}\label{lem1.6} Uniformly for all $s_1,s_2\geq 0$, \begin{equation}\label{Ki} \|K_{i,i}\|_2\leq \tfrac12 e^{-2 s_i},\quad \|\bar{K}_{i,i}\|_2\leq e^{-s_i},\quad \|(\mathbbm{1}-K_{i,i})^{-1}\|_2\leq 2, \end{equation} for $i\in\{1,2\}$. \end{lem} \begin{proof} The bounds for $i=1$ and $i=2$ are fully analogous, thus we restrict to the case $i=1$ here. To bound $\|K_{1,1}\|_2^2$ we use \eqref{eqAiExpBound}, which gives \begin{equation} \begin{aligned} \|K_{1,1}\|^2_2&=\int_{s_1}^\infty dx \int_{s_1}^\infty dy \big[\Ai(x+y)\big]^2 \leq \int_{s_1}^\infty dx \int_{s_1}^\infty dy e^{-2(x+y)} \leq \tfrac14 e^{-4 s_1}. \end{aligned} \end{equation} Thus $\|K_{1,1}\|_2\leq \frac12 e^{-2 s_1}$. Similarly \begin{equation} \begin{aligned} \|\bar{K}_{1,1}\|^2_2&=\int_{s_1}^\infty dx \int_{s_1}^\infty dy \big[e^{\frac{y}{2}}\Ai(x+y)\big]^2\leq\int_{s_1}^\infty dx \int_{s_1}^\infty dy e^{-(x+2y)}=\tfrac12 e^{-3 s_1}, \end{aligned} \end{equation} which implies the second bound in \eqref{Ki}. Finally, using \begin{equation} \|(\mathbbm{1}-K_{i,i})^{-1}\|_2\leq (1-\|K_{i,i}\|_2)^{-1}, \end{equation} which holds whenever $\|K_{i,i}\|_2<1$, we get the last inequality. \end{proof} \begin{lem}\label{lem1.7} For $s_1,s_2\geq 0$ and $u>0$ \begin{equation}\label{K21} \begin{aligned} \|K_{2,1}\|_2\leq \frac{1}{2u^{3/2}}\exp\Big[-2(s_1+s_2)u-\tfrac43 u^3\Big]. \end{aligned} \end{equation} \end{lem} \begin{proof} Using the change of variables \eqref{LT} with $a=s_2,b=s_1$, it follows that \begin{equation}\label{eq4} \begin{aligned} \|K_{2,1}\|^2_2&=\int_{s_2}^\infty dx\int_{s_1}^\infty dy\big[\Ai(x+y+u^2)\big]^2 e^{-2(x+y)u-\tfrac43 u^3}\\ &=\frac12\int_0^\infty d\omega \int_{-\omega}^\omega d\zeta \big[\Ai(\omega+s_1+s_2+u^2)\big]^2 e^{-2(\omega+s_1+s_2)u-\tfrac43 u^3}, \end{aligned} \end{equation} where we used that the change of variables \eqref{LT} satisfies $dx\,dy=\tfrac12\,d\omega\,d\zeta$. Next, using \eqref{eqAiBetterBound} (with $x=\omega+s_1+s_2$) we get \begin{equation}\label{ub5} \begin{aligned} \eqref{eq4} &= e^{-2(s_1+s_2)u-\tfrac43 u^3}\int_0^\infty d\omega \big[\Ai(\omega+s_1+s_2+u^2)\big]^2\omega e^{-2\omega u}\\ &\leq \frac{e^{-4(s_1+s_2)u-\tfrac83 u^3}}{u}\int_0^\infty d\omega\, \omega e^{-4\omega u}= \frac1{16 u^3}e^{-4(s_1+s_2)u-\tfrac83 u^3}. \end{aligned} \end{equation} Taking the square root, \eqref{K21} follows. \end{proof} \begin{lem}\label{lem1.8} For $s_1,s_2\geq 0$, there exists a constant $C_1>0$ such that \begin{equation}\label{Krl} \begin{aligned} \|\bar{K}^R_{1,2}\|_2,\|\bar{K}^L_{1,2}\|_2\leq \frac{C_1}{\sqrt{u}} e^{-s_1/2} \end{aligned} \end{equation} for all $u\geq \sqrt{s_1+s_2}$. \end{lem} \begin{proof}[Proof of Lemma~\ref{lem1.8}] By symmetry in $x,y$ the bounds for $\|\bar{K}^L_{1,2}\|_2^2$ and for $\|\bar{K}^R_{1,2}\|_2^2$ are the same. So we consider $\|\bar{K}^L_{1,2}\|_2^2$ only. Using $(A-B)^2\leq 2 (A^2+B^2)$ we get \begin{equation}\label{eq1.68} \begin{aligned} \|\bar{K}^L_{1,2}\|_2^2&=\int_{s_1}^\infty dx \int_{s_2}^\infty dy e^{-x}\Big[\Ai(x+y+u^2) e^{(x+y)u+\tfrac23 u^3}-\frac{e^{-(x-y)^2/4u}}{\sqrt{4\pi u}}\Big]^2\\ &\leq 2 \int_{s_1}^\infty dx \int_{s_2}^\infty dy e^{-x}\Big[(\Ai(x+y+u^2))^2 e^{2(x+y)u+\tfrac43 u^3}+\frac{e^{-(x-y)^2/2u}}{4\pi u}\Big]. \end{aligned} \end{equation} The second term in \eqref{eq1.68} can be bounded by integrating in $y$ over $\mathbb{R}$ and then integrating over $x\geq s_1$. This gives \begin{equation} 2 \int_{s_1}^\infty dx \int_{s_2}^\infty dy e^{-x} \frac{e^{-(x-y)^2/2u}}{4\pi u}\leq \frac{1}{\sqrt{2\pi u}} e^{-s_1}. \end{equation} For the first term in \eqref{eq1.68}, we use the change of variables \eqref{LT} with $a=s_1$, $b=s_2$ and obtain \begin{equation}\label{ub9} \begin{aligned} &2 \int_{s_1}^\infty dx \int_{s_2}^\infty dy e^{-x}(\Ai(x+y+u^2))^2 e^{2(x+y)u+\tfrac43 u^3}\\ &= e^{-s_1} \int_0^\infty d\omega (\Ai(\omega+s_1+s_2+u^2))^2 e^{2 (\omega+s_1+s_2) u+\tfrac43 u^3}\!\int_{-\omega}^\omega \!
d\zeta e^{-\frac12(\omega+\zeta)} \\ &\leq 8 e^{-s_1} \int_0^\infty d\omega (\Ai(\omega+s_1+s_2+u^2))^2 e^{2 (\omega+s_1+s_2) u+\tfrac43 u^3}\\ &\stackrel{\eqref{eqAiExpBound2}}{\leq} 8 e^{-s_1} \int_0^\infty d\omega e^{-\frac43 (\omega+s_1+s_2+u^2)^{3/2}+2 (\omega+s_1+s_2) u+\tfrac43 u^3}. \end{aligned} \end{equation} Let $A(\omega)=-\frac43 (\omega+s_1+s_2+u^2)^{3/2}+2 (\omega+s_1+s_2) u+\tfrac43 u^3$. Then for all $s_1,s_2,u\geq 0$ it is a concave function in $\omega$ and thus \begin{equation} \begin{aligned} A(\omega)&\leq A(0)+A'(0)\omega\\ &=2 (s_1+s_2) u + \frac43 u^3 - \frac43 (s_1+s_2 + u^2)^{3/2}-2(\sqrt{s_1+s_2+u^2}-u)\omega. \end{aligned} \end{equation} The term independent of $\omega$ is never positive. Indeed, $x\mapsto \frac43(x+u^2)^{3/2}$ is convex and thus bounded below by its linear approximation at $x=0$, which is $\frac43 u^3+2ux$. So we have \begin{equation} \begin{aligned} \eqref{ub9}\leq \frac{4 e^{-s_1} }{\sqrt{s_1+s_2+u^2}-u}\leq \frac{4 e^{-s_1}}{(\sqrt{2}-1)u}, \end{aligned} \end{equation} where in the last step we used the assumption that $u\geq \sqrt{s_1+s_2}$. \end{proof} \begin{lem}\label{lem:K12K21} There exists a constant $C_1>0$ such that for $s_1,s_2\geq 0$, \begin{equation}\label{ub16} \|K_{1,2}K_{2,1}\|_1\leq \frac{C_1}{2u^{2}} e^{-\frac43 u^3-2(s_1+s_2)u} e^{s_2}\leq \frac{C_1}{2u^{2}} e^{-\frac43 u^3} \end{equation} for all $u\geq \max\{\frac12,\sqrt{s_1+s_2}\}$. \end{lem} \begin{proof} Denote \begin{equation} \bar{K}_{2,1}(x,y)=e^{\frac{x}2}K_{2,1}(x,y). \end{equation} Then we have the bound \begin{equation}\label{ub15} \begin{aligned} \|K_{1,2}K_{2,1}\|_1\leq \|\bar{K}_{1,2}^{R}\|_2\|\bar{K}_{2,1}\|_2. \end{aligned} \end{equation} From Lemma~\ref{lem1.8}, we get $\|\bar{K}_{1,2}^{R}\|_2\leq\frac{C_1}{u^{1/2}}$.
Furthermore, using \eqref{eqAiBetterBound}, we have \begin{equation} \begin{aligned} \|\bar{K}_{2,1}\|_2^2&=\int_{s_2}^\infty dx\int_{s_1}^\infty dy\,e^{x}\big[\Ai(x+y+u^2)\big]^2 e^{-2(x+y)u-\tfrac43 u^3}\\ &\leq \frac{1}{u} \int_{s_2}^\infty dx\int_{s_1}^\infty dy\,e^{x} e^{-4(x+y)u-\tfrac83 u^3}\leq e^{-\frac83 u^3} e^{-4(s_1+s_2)u+s_2}\frac{1}{4u^2(4u-1)}\\ &\leq e^{-\frac83 u^3} e^{-4(s_1+s_2)u+s_2}\frac{1}{8u^3} \end{aligned} \end{equation} for all $u\geq 1/2$. \end{proof} We are now ready to bound $\|Q\|_1$. \begin{lem}\label{lem:Qb} With $C_1$ the constant of Lemma~\ref{lem1.8}, uniformly for $s_1,s_2\geq 0$, \begin{equation} \|Q\|_1\leq \frac{4 C_1 e^{-\min\{s_1,s_2\}}}{u^2}e^{-2(s_1+s_2)u-\tfrac43 u^3} \end{equation} for all $u\geq \max\{\frac12,\sqrt{s_1+s_2}\}$. \end{lem} \begin{proof} Applying the bounds of Lemmas~\ref{lem1.6} and~\ref{lem1.8} to \eqref{ub17a}-\eqref{ub17c}, we obtain \begin{equation} \|Q\|_1\leq \frac{8 C_1}{u^{1/2}} e^{-\min\{s_1,s_2\}} \|K_{2,1}\|_2 \end{equation} for all $s_1,s_2\geq 0$ and all such $u$. Then, applying the bound of Lemma~\ref{lem1.7} leads to \begin{equation} \|Q\|_1\leq \frac{4C_1 e^{-\min\{s_1,s_2\}}}{u^2}e^{-2(s_1+s_2)u-\tfrac43 u^3}. \end{equation} \end{proof}
\section*{Introduction} Let $R$ be a commutative ring. It is well known that the prime ideals of $R$ classify the homomorphisms from $R$ to division rings. Indeed, for any prime ideal $P$ of $R$, we obtain a homomorphism from $R$ to a division ring via the natural homomorphism $R\rightarrow Q(R/P)$, where $Q(R/P)$ denotes the field of fractions of $R/P$. Conversely, if $\varphi\colon R\rightarrow D$ is a homomorphism from $R$ to a division ring $D$, then $P=\ker \varphi$ is a prime ideal of $R$, $\varphi$ factors through $R\rightarrow Q(R/P)$ and therefore the division subring of $D$ generated by the image of $R$ is $R$-isomorphic to $Q(R/P)$. Moreover, let $P\subseteq P'$ be prime ideals of $R$. The localization of $R/P$ at the prime ideal $P'/P$ yields a local subring of $Q(R/P)$ with residue field isomorphic to $Q(R/P')$. This implies that any fraction $ab^{-1}$, with $a,b\in R$, which is defined in $Q(R/P')$ is also defined in $Q(R/P)$. Also, looking at the determinants of matrices, one sees that any matrix with entries in $R$ that becomes invertible in $Q(R/P')$ also becomes invertible in $Q(R/P)$. If the ring $R$ is not commutative, prime ideals no longer classify the homomorphisms to division rings. It may even happen that $R$ has infinitely many different ``fields of fractions'', see for example \cite[Section~9]{Lam2}. Let $R$ be any ring. An epic $R$-division ring is a ring homomorphism $R\rightarrow K$ where $K$ is a division ring generated by the image of $R$. In \cite{CohnfreeringsI}, P. M. Cohn showed that the epic $R$-division rings are characterized up to $R$-isomorphism by the collection of square matrices over $R$ which are carried to singular matrices over $K$. This set of matrices is called the singular kernel of $R\rightarrow K$. He also gave the precise conditions for a set of square matrices over $R$ to be a singular kernel, calling such a collection a prime matrix ideal of $R$.
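In the commutative setting the singular kernel is completely explicit (we record this standard observation here only as an illustration): for a prime ideal $P$ of a commutative ring $R$, a square matrix $A$ over $R$ becomes singular over $Q(R/P)$ if and only if its determinant vanishes in the field $Q(R/P)$, that is, $$\{A \text{ a square matrix over } R \colon A \text{ singular over } Q(R/P)\}=\{A\colon \det A\in P\},$$ since the determinant of the image of $A$ in $Q(R/P)$ is the class of $\det A$ modulo $P$.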
The name comes from the fact that, if we endow the set of square matrices over $R$ with two operations of sum and product (one of them partial), these sets behave similarly to prime ideals. These operations are defined so that, for square matrices over a commutative ring, the determinant of the sum of matrices equals the sum of the determinants and the determinant of a product of matrices equals the product of the determinants. Also in \cite{CohnfreeringsI}, Cohn showed that if $\mathcal{P}$, $\mathcal{P}'$ are prime matrix ideals of $R$ and $R\rightarrow K_\mathcal{P}$, $R\rightarrow K_{\mathcal{P}'}$ are the corresponding epic $R$-division rings, then $\mathcal{P}\subseteq\mathcal{P}'$ if and only if there exists a local subring of $K_\mathcal{P}$ containing the image of $R$ with residue class division ring $R$-isomorphic to $K_{\mathcal{P}'}$. We say that there exists a specialization from $K_\mathcal{P}$ to $K_{\mathcal{P}'}$. Furthermore, if a rational expression built up from elements of $R$ makes sense in $K_{\mathcal{P}'}$, then it can also be evaluated in $K_\mathcal{P}$. P. M. Cohn also provided conditions on square matrices over $R$ equivalent to the existence of (injective) homomorphisms from $R$ to division rings and to the existence of a \emph{best} epic $R$-division ring in the sense that a rational expression that makes sense in some epic $R$-division ring makes sense in it. In \cite{Malcolmsondetermining}, P. Malcolmson described several alternative ways of determining epic $R$-division rings. One of them is induced from the notion of rank of a matrix over a division ring. If $\varphi\colon R\rightarrow K$ is an epic $R$-division ring, we can associate to each matrix over $R$ the rank of this matrix when considered over $K$ via $\varphi$. He determined which functions from the set of matrices over $R$ with values in $\mathbb{N}$ are rank functions induced from epic $R$-division rings.
Another alternative way of determining epic $R$-division rings described by P. Malcolmson is induced from the notion of dimension over a division ring. More precisely, if $R\rightarrow K$ is an epic $R$-division ring, we can associate with each finitely presented right $R$-module $M$ the number $\dim_K(M\otimes_R K)\in\mathbb{N}$. He described which functions from the class of finitely presented right $R$-modules with values in $\mathbb{N}$ are induced from epic $R$-division rings as dimensions. Another important feature of rank functions is that, at least theoretically, it is easy to determine when there exists a specialization from one epic $R$-division ring to another in terms of the rank functions defined by P. Malcolmson. In \cite{Schofieldbook}, A. Schofield gave another notion equivalent to that of epic $R$-division rings in terms of a rank function that satisfies certain natural conditions. This time it is a function from the class of homomorphisms between finitely generated projective right $R$-modules with values in $\mathbb{N}$. We would like to remark that Sylvester rank functions with values in $\mathbb{R}_+$ have proved useful in many different situations \cite{AraClaramuntUniqueness,AraClaramuntSylvester,Eleklamplighter,Elekinfinitedimensional,GoodearlvonNeumann,JaikinZapirainbasechangeatiyah,JaikinLopezstrongatiyah,Schofieldbook}. The theory of group graded rings has played an important role in Ring Theory (see for example \cite{Hazrat_2016}, \cite{NastasescuvanOystaeyenMethodsgraded}) and many of the results in classical ring theory have a mirrored version for group graded rings. Furthermore, if $R$ is a filtered ring, it has proved fruitful to study the associated graded ring, which usually is a simpler object, in order to obtain information about the original ring. The main aim of this article is to develop Cohn's theory on division rings in the context of group graded rings.
More precisely, let $\Gamma$ be a group and $R=\bigoplus\limits_{\gamma\in \Gamma}R_\gamma$ be a $\Gamma$-graded ring. A $\Gamma$-graded epic $R$-division ring is a homomorphism of $\Gamma$-graded rings $R\rightarrow K$ where $K$ is a $\Gamma$-graded division ring generated by the image of $R$. Matrices over $R$ represent homomorphisms between finitely generated free $R$-modules. Homomorphisms of $\Gamma$-graded modules between $\Gamma$-graded free $R$-modules are given by (what we call) \emph{homogeneous matrices}. These are $m\times n$ matrices $A$ for which there exist $\alpha_1,\dotsc,\alpha_m,\beta_1,\dotsc,\beta_n\in \Gamma$ such that each $(i,j)$ entry of $A$ belongs to $R_{\alpha_i\beta_j^{-1}}$. We show that $\Gamma$-graded epic $R$-division rings $R\rightarrow K$ are characterized, up to $R$-isomorphism of $\Gamma$-graded rings, by the collection of homogeneous matrices which are carried to singular matrices over $K$. Such a set is called the gr-singular kernel of $R\rightarrow K$. We give the precise conditions under which a collection of homogeneous matrices over $R$ is a gr-singular kernel, thus defining the concept of a gr-prime matrix ideal. If $\mathcal{P},\mathcal{P}'$ are gr-prime matrix ideals of $R$ and $R\rightarrow K_\mathcal{P}$, $R\rightarrow K_{{\mathcal{P}'}}$ are the corresponding $\Gamma$-graded epic $R$-division rings, then $\mathcal{P}\subseteq\mathcal{P}'$ if and only if there exists a $\Gamma$-graded local subring of $K_\mathcal{P}$ that contains the image of $R$ with residue class $\Gamma$-graded division ring $R$-isomorphic to $K_{\mathcal{P}'}$ as $\Gamma$-graded rings. Furthermore, if a homogeneous rational expression obtained from elements of $R$ makes sense in $K_{\mathcal{P}'}$, then it can also be evaluated in $K_{\mathcal{P}}$.
We then provide conditions on the set of square homogeneous matrices over $R$ that characterize when there exists an (injective) homomorphism of $\Gamma$-graded rings from $R$ to a $\Gamma$-graded division ring and when there exists a \emph{best} $\Gamma$-graded epic $R$-division ring. We also provide the graded concepts corresponding to the different rank functions defined by Malcolmson and Schofield. We show that they give alternative ways of determining $\Gamma$-graded epic $R$-division rings in terms of rank functions from the set of homogeneous matrices, from the class of $\Gamma$-graded finitely presented modules and from the class of $\Gamma$-graded homomorphisms between $\Gamma$-graded projective $R$-modules, respectively, all of them with values in $\mathbb{N}$. In the study of division rings, one of the pioneering works carrying the information from the associated graded ring to the original filtered ring was \cite{Cohnembeddingringsskewfields}. P. M. Cohn showed that if a ring $R$ endowed with a valuation with values in $\mathbb{Z}$ is such that its associated graded ring is a (graded) Ore domain, then $R$ can be embedded in a division ring. Other proofs of this result can be found in \cite{Lichtmanvaluationmethods} and in \cite{AsensioVandenBerghVanOystaeyen} together with \cite{Limicrolocalizationskewfields}. More recently, a generalization of the result by Cohn has been given by A. I. Valitskas \cite{Valitskas2014filtered}. We believe that our work could be helpful for generalizing the result of Cohn further than has been done by Valitskas. An elementary application of our theory is as follows. Suppose that $R$ is a ring graded by a group $\Gamma$.
As an immediate consequence of \cite[Proposition~1.2.2]{NastasescuvanOystaeyenMethodsgraded}, one obtains that if there exists an (injective) homomorphism from $R$ to a division ring, then there exists an (injective) homomorphism of $\Gamma$-graded rings from $R$ to a $\Gamma$-graded division ring. Thus if one shows that there do not exist (injective) homomorphisms of $\Gamma$-graded rings from $R$ to $\Gamma$-graded division rings, then there do not exist (injective) homomorphisms from $R$ to division rings. See Section~\ref{sec:grprimespectrum} for other similar results. We end this introduction by showing that the existence of an (injective) homomorphism from a $\Gamma$-graded ring $R$ to a division ring is not equivalent to the existence of a homomorphism of $\Gamma$-graded rings from $R$ to a $\Gamma$-graded division ring. To that end, we produce an easy example of a graded ring which admits no homomorphism to a division ring but is embeddable in a graded division ring. Let $T$ be the ring obtained as the localization of $\mathbb{Z}$ at the prime ideal $3\mathbb{Z}$. Let $R$ be the ring $T[i]\subseteq \mathbb{C}$. Let $C_2=\langle x\rangle$ be the cyclic group of order two, and let $\sigma\colon C_2\rightarrow \Aut(R)$ be the homomorphism of groups which sends $x$ to the automorphism induced by the complex conjugation. Set now $S=R[C_2;\sigma]$. That is, $S$ is the skew group ring of $C_2$ over $R$ induced by $\sigma$. Hence $S$ is a $C_2$-graded ring, $S=S_e+S_x$ where $S_e=R$ and $S_x=Rx$ and the product is determined by $xr=\overline{r}x$ for all $r\in R$. Clearly $S$ is embeddable in the $C_2$-graded division ring $\mathbb{Q}(i)[C_2;\sigma]$. Suppose that there exists a homomorphism of rings from $S$ to a division ring $K$. Let $\varphi\colon S\rightarrow K$ be such a homomorphism. Since $x^2=1$ in $S$, we have $(1-x)(1+x)=1-x^2=0$, and hence either $\varphi(1+x)=0$ or $\varphi(1-x)=0$. If $\varphi(1+x)=0$, then $0=\varphi(1+x)=1+\varphi(x)$. Thus $\varphi(x)=-1$.
But then $(-1)\varphi(i)=\varphi(xi)=\varphi(-ix)=-\varphi(i)(-1)=\varphi(i)$. Since $\varphi(i)\neq 0$, it follows that $K$ has characteristic two. This is a contradiction because $\varphi$ induces a homomorphism from $R=S_e$ to $K$ and $2$ is invertible in $R$. In the same way, it can be shown that if $\varphi(1-x)=0$, then $\varphi(x)=1$ and, again, this implies that the characteristic of $K$ is $2$, a contradiction. \medskip In Section~\ref{sec:basicdefinitions}, we introduce some of the notation that will be used throughout the paper and provide a short survey of the results on graded rings that will be used. Let $\Gamma$ be a group. A $\Gamma$-almost graded division ring is a (not necessarily graded) homomorphic image of a $\Gamma$-graded division ring. For example, let $K$ be a field and consider the group ring $K[\Gamma]$. It is a $\Gamma$-graded division ring, and the augmentation map $K[\Gamma]\rightarrow K$, which is not a homomorphism of $\Gamma$-graded rings, endows $K$ with a structure of $\Gamma$-almost graded division ring. In the nongraded context, this concept is not necessary because a nonzero homomorphic image of a division ring is again a division ring. In Section~\ref{sec:almostgradeddivisionrings}, we show that if $R$ is a $\Gamma$-graded ring, $\varphi\colon R\rightarrow D$ is a homomorphism of $\Gamma$-graded rings with $D$ a $\Gamma$-graded division ring and $\psi\colon D\rightarrow E$ is a ring homomorphism, where $E$ is a nonzero ring, then the homogeneous matrices over $R$ that become invertible via $\varphi$ and via $\psi\varphi$ are the same. Thus (a posteriori) $\psi(D)$ determines a $\Gamma$-graded epic $R$-division ring. The main results in Section~\ref{sec:gradedrationalclosure} are as follows. Let $\varphi\colon R\rightarrow D$ be a homomorphism of $\Gamma$-graded rings and let $\Sigma$ be a set of square homogeneous matrices with entries in $R$. Suppose that the matrices of $\Sigma$ become invertible in $D$ via $\varphi$.
Then, under certain natural conditions on $\Sigma$, the entries of the inverses of the matrices in $\Sigma$ are the homogeneous elements of a $\Gamma$-graded subring of $D$. Moreover, if $D$ is a $\Gamma$-graded division ring and $\Sigma$ the set of homogeneous matrices that become invertible under $\varphi$, then any homogeneous element of $D$ is an entry of the inverse of some matrix in $\Sigma$. Section~\ref{sec:categoryspecializations} begins by showing that the universal localization $R_\Sigma$ of the $\Gamma$-graded ring $R$ at a set of homogeneous matrices is again a $\Gamma$-graded ring. Then it is shown that a homomorphism of $\Gamma$-graded rings $\varphi\colon R\rightarrow D$, where $D$ is a $\Gamma$-graded division ring, is an epimorphism in the category of $\Gamma$-graded rings if and only if $D$ is generated by the image of $\varphi$. If this is the case, we say that $(D,\varphi)$ is a $\Gamma$-graded epic $R$-division ring and we prove that if $\Sigma$ is the set of square homogeneous matrices that become invertible in $D$ via $\varphi$, then $R_\Sigma$ is a $\Gamma$-graded local ring with $\Gamma$-graded residue division ring $R$-isomorphic to $D$. Then the concept of gr-specialization between $\Gamma$-graded epic $R$-division rings is defined. The section ends by showing that the existence of a gr-specialization from $(K,\varphi)$ to another $\Gamma$-graded epic $R$-division ring $(K',\varphi')$ is equivalent to saying that all the homogeneous rational expressions (from elements of $R$) that make sense in $(K',\varphi')$ make sense in $(K,\varphi)$ too, and that it is also equivalent to the fact that any homogeneous matrix over $R$ that becomes invertible in $(K',\varphi')$ becomes invertible in $(K,\varphi)$ too. Section~\ref{sec:Malcolmsonscriterion} is devoted to the proof of the graded version of the so-called Malcolmson criterion \cite{Malcolmsonscriterion} and an important consequence.
This criterion determines the kernel of the natural homomorphism from $R$ to the universal localization $R_\Sigma$ of $R$ at certain sets of homogeneous matrices. As a corollary one obtains a sufficient condition for the ring $R_\Sigma$ not to be the zero ring. Both results play a key role in the following section, but the proof of Malcolmson's criterion is very long and technical. The concept of a gr-prime matrix ideal is given in Section~\ref{sec:grprimematrixidealyields} and it is shown that the different $\Gamma$-graded epic $R$-division rings are determined by the gr-prime matrix ideals up to $R$-isomorphism of $\Gamma$-graded rings. In Section~\ref{sec:grmatrixideals}, the concepts of a gr-matrix ideal and of the radical of a gr-matrix ideal are defined, and the gr-matrix ideal generated by a set of square homogeneous matrices is characterized. Then it is proved that gr-prime matrix ideals behave like prime ideals in a commutative ring. All these concepts are used to provide necessary and sufficient conditions for the existence of homomorphisms (embeddings) of $\Gamma$-graded rings to $\Gamma$-graded division rings. The basic theory of Sylvester rank functions in the graded context with values in $\mathbb{N}$ is developed in Section~\ref{sec:grSylvesterrankfunctions}. The main difference with the ungraded case stems from the fact that, in the graded case, the same homogeneous matrix can define more than one homomorphism between $\Gamma$-graded free modules. As far as we know, this is the first paper where Sylvester rank functions are considered for graded objects. In Section~\ref{sec:grprimespectrum}, we deal with a new situation that appears in the graded context. If $\Gamma$ is a group and $R$ is a $\Gamma$-graded ring, then the ring $R$ can be considered as a $\Gamma/\Omega$-graded ring for any normal subgroup $\Omega$ of $\Gamma$. Thus there are $\Gamma$-graded and $\Gamma/\Omega$-graded versions of the concepts studied before.
In this section, we try to relate them. Note that when $\Omega=\Gamma$, a $\Gamma/\Omega$-graded epic $R$-division ring is simply an $R$-division ring and thus one can relate the theory of $\Gamma$-graded division rings and the theory of division rings as developed by Cohn. The last section is devoted to identifying inverse limits in the category of $\Gamma$-graded epic $R$-division rings, with specializations as morphisms, with certain ultraproducts of $\Gamma$-graded epic $R$-division rings. In the context of division rings, a similar result was given in \cite[Section~7]{Lichtmanvaluationmethodsingroupringsand}, but our proof is more direct and general even when specialized to the ungraded case. A second paper is in preparation, in which we deal, among other topics, with the graded versions of the weak algorithm, (semi)firs and (pseudo-)Sylvester domains. \medskip We would like to finish this introduction by pointing out that most of the techniques used in this paper are adaptations of the ones from the works by P. M. Cohn and P. Malcolmson. We just take credit for realizing that they can be applied in the more general setting of group graded rings. \section{Basic definitions and notation}\label{sec:basicdefinitions} Rings are supposed to be associative with $1$. We recall that a \emph{domain} is a nonzero ring such that for elements $x,y$ of the ring, the equality $xy=0$ implies that either $x=0$ or $y=0$. A \emph{division ring} is a nonzero ring such that every nonzero element is invertible. For a ring $R$, we denote by $\mathbb{M}(R)$ the set of all square matrices over $R$ of any size. Also, for each $n\geq 1$ and each $i$ with $1\leq i\leq n$, let $e_i$ denote the column $$\left(\begin{smallmatrix} 0\\\vdots\\1\\\vdots\\0 \end{smallmatrix}\right)$$ in which the $i$-th entry is $1$ and the other entries are zero. Let $A\in M_n(R)$. We say that $A$ is \emph{full} if whenever $A=PQ$, with $P\in M_{n\times r}(R)$ and $Q\in M_{r\times n}(R)$, then $r\geq n$.
If we think of $A$ as an endomorphism of the free (right) $R$-module $R^n$, this means that $A$ does not factor through $R^r$ with $r<n$. We say that $A$ is \emph{hollow} if it has an $r\times s$ block of zeros where $r+s>n$. It is well known that a hollow matrix is not full. Let $S$ be a ring and $f\colon R\rightarrow S$ be a ring homomorphism. For each matrix $M$ with entries in $R$, we denote by $M^f$ the matrix whose entries are the images of the entries of $M$ by $f$, that is, if $a_{ij}\in R$ is the $(i,j)$-entry of $M$, then the $(i,j)$-entry of $M^f$ is $f(a_{ij})$. Given a set of matrices $\Sigma$, we denote $\Sigma^f=\{M^f\colon M\in\Sigma\}$. We say that the ring homomorphism $f\colon R\rightarrow S$ is \emph{$\Sigma$-inverting} if the matrix $M^f$ is invertible in $S$ for each $M\in\Sigma$. \bigskip We proceed to give some basics on group graded rings that can be found in \cite{NastasescuvanOystaeyenMethodsgraded} and \cite{Hazrat_2016}, for example. Let $\Gamma$ be a group; the identity element of $\Gamma$ will be denoted by $e$. A ring $R$ is called a \emph{$\Gamma$-graded ring} if $R=\bigoplus\limits_{\gamma\in\Gamma}R_\gamma$ where each $R_\gamma$ is an additive subgroup of $R$ and $R_\gamma R_\delta\subseteq R_{\gamma\delta}$ for all $\gamma,\delta\in\Gamma$. The \emph{support of $R$} is defined as the set $\supp R=\{\gamma\in\Gamma\colon R_\gamma\neq \{0\}\}$. The set $\h(R)=\bigcup\limits_{\gamma\in\Gamma}R_\gamma$ is called the set of \emph{homogeneous elements} of $R$. It is well known that the identity element $1\in R$ belongs to $R_e$, that $R_e$ is a subring of $R$ and that if $x\in R_\gamma$ is invertible in $R$, then $x^{-1}\in R_{\gamma^{-1}}$. A (two-sided) ideal $I$ of $R$ is called a \emph{graded ideal} if $I=\bigoplus\limits_{\gamma\in\Gamma}(I\cap R_\gamma)$. Thus $I$ is a graded ideal if and only if for any $x\in I$, $x=\sum x_i$, where $x_i\in \h(R)$, implies that $x_i\in I$.
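\medskip To fix ideas, a basic example: let $\Gamma=\mathbb{Z}$ (written additively) and let $R=k[x,x^{-1}]$ be the ring of Laurent polynomials over a field $k$, graded by $R_n=kx^n$ for $n\in\mathbb{Z}$. The homogeneous elements are the monomials $\lambda x^n$, every nonzero homogeneous element is invertible, and $\supp R=\mathbb{Z}$. In particular, the only graded ideals of $R$ are $\{0\}$ and $R$, although $R$ has plenty of nongraded ideals, such as the one generated by $x-1$.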
Observe that if $X\subseteq \h(R)$, then the ideal of $R$ generated by $X$ is a graded ideal. If $I$ is a graded ideal, then the quotient ring $R/I$ is a $\Gamma$-graded ring with $R/I=\bigoplus_{\gamma\in\Gamma}(R/I)_\gamma$, where $(R/I)_\gamma=(R_\gamma+I)/I$. A \emph{$\Gamma$-graded domain} is a nonzero $\Gamma$-graded ring such that if $x,y\in\h(R)$, the equality $xy=0$ implies that either $x=0$ or $y=0$. A \emph{$\Gamma$-graded division ring} is a nonzero $\Gamma$-graded ring such that every nonzero homogeneous element is invertible. A commutative $\Gamma$-graded division ring is a \emph{$\Gamma$-graded field}. Clearly, any $\Gamma$-graded division ring is a $\Gamma$-graded domain. A $\Gamma$-graded ring $R$ is called a $\Gamma$-\emph{graded local ring} if the two-sided ideal $\mathfrak{m}$ generated by the noninvertible homogeneous elements is a proper ideal. In this case, the $\Gamma$-graded ring $R/\mathfrak{m}$ is a $\Gamma$-graded division ring and it will be called the \emph{residue class $\Gamma$-graded division ring} of $R$. For $\Gamma$-graded rings $R$ and $S$, a \emph{homomorphism of $\Gamma$-graded rings} $f\colon R\rightarrow S$ is a ring homomorphism such that $f(R_\gamma)\subseteq S_\gamma$ for all $\gamma\in\Gamma$. An \emph{isomorphism of $\Gamma$-graded rings} is a homomorphism of $\Gamma$-graded rings which is bijective. Notice that the inverse is also an isomorphism of $\Gamma$-graded rings. Let $\Omega$ be a normal subgroup of $\Gamma$. Consider the $\Gamma$-graded ring $R=\bigoplus_{\gamma\in\Gamma}R_\gamma$. It can be regarded as a $\Gamma/\Omega$-graded ring as follows $$R=\bigoplus_{\alpha\in\Gamma/\Omega} R_\alpha,\quad \textrm{where } R_\alpha=\bigoplus_{\gamma\in\alpha}R_\gamma.$$ \medskip Let $R$ be a $\Gamma$-graded ring. 
A $\Gamma$-\emph{graded (right) $R$-module} $M$ is defined to be a right $R$-module with a direct sum decomposition $M=\bigoplus_{\gamma\in\Gamma} M_\gamma$, where each $M_\gamma$ is an additive subgroup of $M$ such that $M_\lambda R_\gamma\subseteq M_{\lambda\gamma}$ for all $\lambda,\gamma\in\Gamma$. A submodule $N$ of $M$ is called a graded submodule if $N=\bigoplus_{\gamma\in\Gamma}(N\cap M_\gamma)$. In this case, the factor module $M/N$ forms a $\Gamma$-graded $R$-module with $M/N=\bigoplus_{\gamma\in\Gamma} (M/N)_\gamma$, where $(M/N)_{\gamma}=(M_\gamma+N)/N$. For $\Gamma$-graded $R$-modules $M$ and $N$, a \emph{homomorphism of $\Gamma$-graded $R$-modules} $f\colon M\rightarrow N$ is a homomorphism of $R$-modules such that $f(M_\gamma)\subseteq N_\gamma$ for all $\gamma\in\Gamma$. In this case, $\ker f$ is a graded submodule of $M$ and $\im f$ is a graded submodule of $N$. If $\Omega$ is a normal subgroup of $\Gamma$, then a $\Gamma$-graded $R$-module $M=\bigoplus_{\gamma\in\Gamma}M_\gamma$ can be regarded as a $\Gamma/\Omega$-graded module over the $\Gamma/\Omega$-graded ring $R$ as follows $$M=\bigoplus_{\alpha\in\Gamma/\Omega}M_\alpha,\quad \textrm{where } M_\alpha=\bigoplus_{\gamma\in \alpha} M_\gamma.$$ Moreover, a homomorphism of $\Gamma$-graded $R$-modules is also a homomorphism of $\Gamma/\Omega$-graded $R$-modules. Let $\{M_i\colon i\in I\}$ be a set of $\Gamma$-graded $R$-modules. Then $\bigoplus_{i\in I}M_i$ has a natural structure of $\Gamma$-graded $R$-module given by $(\bigoplus_{i\in I}M_i)_\gamma=\bigoplus_{i\in I}(M_i)_\gamma$. Let $M$ be a $\Gamma$-graded right $R$-module and $N$ be a $\Gamma$-graded left $R$-module. Then the tensor product $M\otimes_R N$ has a natural structure of $\Gamma$-graded $\mathbb{Z}$-module where $(M\otimes_R N)_\gamma=\{\sum_i m_i\otimes n_i\colon m_i\in M_{\gamma'}, n_i\in N_{\gamma''}, \gamma'\gamma''=\gamma\}$. Let $M$ be a $\Gamma$-graded $R$-module.
For $\delta\in\Gamma$, we define the \emph{$\delta$-shifted} $\Gamma$-graded $R$-module $M(\delta)$ as $$M(\delta)=\bigoplus_{\gamma\in\Gamma} M(\delta)_\gamma,\quad \textrm{where } M(\delta)_\gamma=M_{\delta\gamma}.$$ A $\Gamma$-graded $R$-module $F$ is called a \emph{$\Gamma$-graded free $R$-module} if $F$ is a free $R$-module with a homogeneous basis. It is well known that the $\Gamma$-graded free $R$-modules are of the form $$\bigoplus_{i\in I}R(\delta_i),\quad \textrm{where } I \textrm{ is an indexing set and } \delta_i\in \Gamma.$$ If $I=\{1,\dotsc,n\}$, then $\bigoplus_{i\in I}R(\delta_i)=R(\delta_1)\oplus\dotsb\oplus R(\delta_n)$ will also be denoted by $R^n(\overline{\delta})$, where $\overline{\delta}=(\delta_1,\dotsc,\delta_n)\in\Gamma^n$. A $\Gamma$-graded $R$-module $P$ is called a \emph{$\Gamma$-graded projective module} if for any diagram of $\Gamma$-graded $R$-modules and homomorphisms of $\Gamma$-graded modules $$\xymatrix{ & P\ar[d]^u \ar@{-->}[dl]_h & \\ M\ar[r]^g & N \ar[r] & 0 } $$ there is a homomorphism of $\Gamma$-graded $R$-modules $h\colon P\rightarrow M$ with $gh=u$. As in the ungraded case, the following statements are equivalent ways of saying that $P$ is a $\Gamma$-graded projective module \begin{enumerate} \item $P$ is $\Gamma$-graded and projective as an $R$-module. \item Every short exact sequence of homomorphisms of $\Gamma$-graded $R$-modules $0\rightarrow L\rightarrow M\rightarrow P\rightarrow 0$ splits via a homomorphism of $\Gamma$-graded $R$-modules. \item $P$ is isomorphic, as a $\Gamma$-graded $R$-module, to a direct summand of a $\Gamma$-graded free $R$-module. \end{enumerate} Let $P$ be a $\Gamma$-graded projective $R$-module and let $\Omega$ be a normal subgroup of $\Gamma$. If we regard $P$ as a $\Gamma/\Omega$-graded $R$-module, then $P$ is also projective as a $\Gamma/\Omega$-graded $R$-module. \bigskip Let $\Gamma$ be a group and $R=\bigoplus\limits_{\gamma\in\Gamma}R_\gamma$ be a $\Gamma$-graded ring.
Following \cite{Hazrat_2016}, for $\overline{\alpha}=(\alpha_1,\dotsc,\alpha_m)\in\Gamma^m$ and $\overline{\beta}=(\beta_1,\dotsc,\beta_n)\in\Gamma^n$, set $$M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]=\begin{pmatrix} R_{\alpha_1\beta_1^{-1}} & R_{\alpha_1\beta_2^{-1}} & \dotsb & R_{\alpha_1\beta_n^{-1}} \\ R_{\alpha_2\beta_1^{-1}} & R_{\alpha_2\beta_2^{-1}} & \dotsb & R_{\alpha_2\beta_n^{-1}} \\ \vdots & \vdots & \ddots & \vdots \\ R_{\alpha_m\beta_1^{-1}} & R_{\alpha_m\beta_2^{-1}} & \dotsb & R_{\alpha_m\beta_n^{-1}} \end{pmatrix}.$$ That is, $M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]$ consists of the matrices whose $(i,j)$-entry belongs to $R_{\alpha_i\beta_j^{-1}}$. Such a matrix $A\in M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]$ gives a homomorphism of $\Gamma$-graded $R$-modules $$R^n(\overline{\beta})\rightarrow R^m(\overline{\alpha}),\quad \left(\begin{smallmatrix} x_1 \\ \vdots \\ x_n \end{smallmatrix}\right)\mapsto A \left(\begin{smallmatrix} x_1 \\ \vdots \\ x_n \end{smallmatrix}\right),$$ and in this way $M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]$ can be identified with the set of all homomorphisms of $\Gamma$-graded $R$-modules $R^n(\overline{\beta})\rightarrow R^m(\overline{\alpha})$. By $A\in\mathfrak{M}_{m\times n}(R)$, we mean that $A\in M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]$ for some $\overline{\alpha}\in\Gamma^m$ and $\overline{\beta}\in\Gamma^{n}$. It is important to note that, for a matrix $A\in \mathfrak{M}_{m\times n}(R)$, it is possible that $A\in M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]\cap M_{m\times n}(R)[\overline{\alpha'}][\overline{\beta'}]$ even if $\overline{\alpha}\neq \overline{\alpha'}$ or $\overline{\beta}\neq \overline{\beta'}$. The matrix $A$ belongs to that intersection precisely when, whenever the $(i,j)$-entry of $A$ is not zero, $\alpha_i\beta_j^{-1}= \alpha'_i{\beta_j'}^{-1}$.
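\medskip For instance, take $\Gamma=\mathbb{Z}$ (written additively) and $R=k[x,x^{-1}]$ with $R_n=kx^n$ over a field $k$, so that the $(i,j)$-entry of a matrix in $M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]$ is a monomial of degree $\alpha_i-\beta_j$. For $\overline{\alpha}=(2,0)$ and $\overline{\beta}=(1,0)$, the set $M_2(R)[\overline{\alpha}][\overline{\beta}]$ consists of the matrices $$\begin{pmatrix} ax & bx^2 \\ cx^{-1} & d \end{pmatrix},\quad a,b,c,d\in k,$$ and such a matrix defines a homomorphism of $\mathbb{Z}$-graded $R$-modules $R(1)\oplus R(0)\rightarrow R(2)\oplus R(0)$. The matrix $\left(\begin{smallmatrix} x & 0 \\ 0 & 0 \end{smallmatrix}\right)$ illustrates the non-uniqueness of the tuples: since only its $(1,1)$-entry is nonzero, it belongs to $M_2(R)[(2,\gamma)][(1,\gamma')]$ for all $\gamma,\gamma'\in\mathbb{Z}$.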
We set $$\mathfrak{M}_\bullet(R)=\bigcup_{m,n}\mathfrak{M}_{m\times n}(R).$$ We remark that if $A\in M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]$ and $B\in M_{n\times p}(R)[\overline{\beta}][\overline{\varepsilon}]$, then $AB\in M_{m\times p}(R)[\overline{\alpha}][\overline{\varepsilon}]$. In this situation, we will say that $A,B$ are \emph{compatible}. When $m=n$, we will write $M_{n}(R)[\overline{\alpha}][\overline{\beta}]$ and $\mathfrak{M}_n(R)$. The set of all such square matrices will be denoted by $\mathfrak{M}(R)$, that is, $$\mathfrak{M}(R)=\bigcup\limits_n \mathfrak{M}_n(R).$$ If $A\in M_{n}(R)[\overline{\alpha}][\overline{\beta}]$ is an invertible matrix, then $A^{-1}\in M_{n}(R)[\overline{\beta}][\overline{\alpha}]$. If $\Sigma\subseteq \mathfrak{M}(R)$, we will write $\Sigma_n[\overline{\alpha}][\overline{\beta}]$ to denote the set $\Sigma\cap M_n(R)[\overline{\alpha}][\overline{\beta}]$. \medskip A matrix $A\in \mathfrak{M}_n(R)$ is \emph{gr-full} if whenever $A=PQ$ for some matrices $P\in M_{n\times r}(R)[\overline{\alpha}][\overline{\lambda}]$, $Q\in M_{r\times n}(R)[\overline{\lambda}][\overline{\beta}]$, then $r\geq n$. If we think of $A$ as a homomorphism of $\Gamma$-graded modules between two $\Gamma$-graded free $R$-modules, this means that for all $\overline{\alpha},\overline{\beta}\in\Gamma^n$ such that $A$ defines a graded homomorphism $R^n(\overline{\beta})\rightarrow R^n(\overline{\alpha})$, this homomorphism never factors through a $\Gamma$-graded free module $R^r(\overline{\lambda})$ with $r< n$. Suppose that $A\in M_n(R)[\overline{\alpha}][\overline{\beta}]$ and that $E\in M_n(R)$ is the permutation matrix obtained by permuting the rows of $I_n$ according to the permutation $\sigma\in S_n$. Then $E\in M_n(R)[\overline{\alpha'}][\overline{\alpha}]$, where $\overline{\alpha'}=(\alpha_{\sigma(1)},\dotsc,\alpha_{\sigma(n)})$, and $EA\in M_n(R)[\overline{\alpha'}][\overline{\beta}]$.
Similarly, the matrix $E\in M_{n}(R)[\overline{\beta}][{\overline{\beta'}}]$, where $\overline{\beta'}=(\beta_{\sigma(1)},\dotsc,\beta_{\sigma(n)})$, and $AE\in M_n(R)[\overline{\alpha}][\overline{\beta'}]$. Hence, for permutation matrices $E,F$ of appropriate size, a matrix $A\in \mathfrak{M}(R)$ is gr-full if, and only if, $EAF$ is gr-full. A hollow matrix $A\in\mathfrak{M}(R)$ is not gr-full. Indeed, suppose that $A$ has an $r\times s$ block of zeros. There exist permutation matrices $E,F$ such that $EAF=\left(\begin{smallmatrix}T & 0\\U & V \end{smallmatrix}\right)$, that is, the block of $r\times s$ zeros is in the north-east corner. Then \[ \begin{pmatrix} T & 0\\U & V \end{pmatrix}= \begin{pmatrix} T & 0\\0 & I \end{pmatrix} \begin{pmatrix} I & 0\\U & V \end{pmatrix}, \] where $T\in M_{r\times(n-s)}(R)[\overline{\alpha}][\overline{\beta}]$, $U\in M_{(n-r)\times(n-s)}(R)[\overline{\delta}][\overline{\beta}]$, $V\in M_{(n-r)\times s}(R)[\overline{\delta}][\overline{\varepsilon}]$ for some sequences $\overline{\alpha},\overline{\beta},\overline{\delta},\overline{\varepsilon}$ of elements of $\Gamma$. The result now follows because $\left(\begin{smallmatrix} T & 0\\0 & I \end{smallmatrix}\right)\in M_{n\times (2n-r-s)}(R)[\overline{\alpha}*\overline{\delta}][\overline{\beta}*\overline{\delta}]$ and $\left(\begin{smallmatrix} I & 0\\U & V \end{smallmatrix}\right)\in M_{(2n-r-s)\times n}(R)[\overline{\beta}*\overline{\delta}][\overline{\beta}*\overline{\varepsilon}]$, and $2n-r-s<n$ since $r+s>n$. \medskip Let $D$ be a $\Gamma$-graded division ring and $M$ be a $\Gamma$-graded $D$-module. As in the ungraded case, the following assertions hold true \begin{enumerate}[(1)] \item Any $\Gamma$-graded $D$-module is graded free. \item Any $D$-linearly independent subset of $M$ consisting of homogeneous elements can be extended to a homogeneous basis of $M$. \item Any two homogeneous bases of $M$ over $D$ have the same cardinality. \item If $N$ is a $\Gamma$-graded submodule of $M$, then $\dim_D(N)+\dim_D(M/N)=\dim_D(M)$.
\end{enumerate} We remark that, over a $\Gamma$-graded division ring, the concepts of gr-full matrix and of invertible matrix coincide. \medskip Let $D$ be a $\Gamma$-graded division ring. Let $A\in \mathfrak{M}_{m\times n}(D)$. The \emph{elementary homogeneous row (column) operations} on $A$ are \begin{enumerate}[(1)] \item Interchange two rows (columns) of $A$. \item Multiply a row on the left (a column on the right) by a nonzero homogeneous element. \item Suppose that $A\in M_{m\times n}(D)[\overline{\alpha}][\overline{\beta}]$. Multiply row $i$ on the left by an element of $D_{\alpha_j\alpha_i^{-1}}$ and add the result to row $j$ (multiply column $i$ on the right by an element of $D_{\beta_i\beta_j^{-1}}$ and add the result to column $j$). \end{enumerate} Notice that those three operations on the rows (columns) can be obtained multiplying $A$ on the left (right) by an invertible matrix in $M_m(D)[\overline{\alpha'}][\overline{\alpha}]$ (in $M_n(D)[\overline{\beta}][\overline{\beta'}]$). The \emph{rank} of $A$ is the dimension of the right $D$-module spanned by its columns. The matrix $A$ can be regarded as a $D$-linear map of right $D$-modules $D(\beta_1)\oplus\dotsb\oplus D(\beta_n)\rightarrow D(\alpha_1)\oplus\dotsb\oplus D(\alpha_m)$. The rank of $A$ coincides with the dimension of the image of this map. The rank of $A$ can also be computed by reducing the matrix $A$ to column echelon form by homogeneous column operations: it is the number of nonzero columns of the column echelon form. The rank of $A$ also equals the dimension of the left $D$-module spanned by its rows. The matrix $A$ can be regarded as a $D$-linear map of left $D$-modules $D(\alpha_1^{-1})\oplus\dotsb\oplus D(\alpha_m^{-1})\rightarrow D(\beta_1^{-1})\oplus\dotsb\oplus D(\beta_n^{-1})$. The rank of $A$ coincides with the dimension of the image of this map. The rank of $A$ can also be computed by reducing the matrix $A$ to row echelon form by homogeneous row operations.
It is the number of nonzero rows of the row echelon form. Furthermore, the rank of $A$ coincides with the size of a largest invertible square submatrix (obtained by eliminating rows and/or columns). We will denote the rank of $A$ by $\drank(A)$. \bigskip \section{Almost graded division rings}\label{sec:almostgradeddivisionrings} \emph{Throughout this section, let $\Gamma$ be a group}. We say that a ring $R$ is a \emph{$\Gamma$-almost graded ring} if there is a family $\{R_\gamma\colon \gamma\in\Gamma\}$ of additive subgroups $R_\gamma$ of $R$ such that $1\in R_e$, $R=\sum\limits_{\gamma\in\Gamma}R_\gamma$ and $R_\gamma R_{\gamma'}\subseteq R_{\gamma\gamma'}$ for all $\gamma,\gamma'\in\Gamma$. The name almost graded ring was chosen to be compatible with the definition of almost strongly graded rings given in \cite[p.14]{NastasescuvanOystaeyenMethodsgraded}. We define $\supp{R}=\{\gamma\in\Gamma\colon R_\gamma\neq \{0\}\}$. Given two $\Gamma$-almost graded rings $R$ and $S$, a ring homomorphism $f\colon R\rightarrow S$ is a \emph{homomorphism of $\Gamma$-almost graded rings} if $f(R_\gamma)\subseteq S_\gamma$ for all $\gamma\in\Gamma$. Clearly, any $\Gamma$-graded ring $R=\bigoplus\limits_{\gamma\in\Gamma}R_\gamma$ is a $\Gamma$-almost graded ring in the natural way. Given two $\Gamma$-graded rings $R,S$, a homomorphism of $\Gamma$-almost graded rings is in fact a homomorphism of $\Gamma$-graded rings. Let $R$ be a $\Gamma$-graded ring, $S$ be a ring and $f\colon R\rightarrow S$ be a ring homomorphism. Then $\im f$ is a $\Gamma$-almost graded ring with $(\im f)_\gamma=f(R_\gamma)$ and the restriction $f\colon R\rightarrow \im f$ is a homomorphism of $\Gamma$-almost graded rings. Furthermore, any $\Gamma$-almost graded ring can be regarded in this way. More precisely, suppose that $R=\sum\limits_{\gamma\in\Gamma}R_\gamma$ is a $\Gamma$-almost graded ring. Set $\widetilde{R}_\gamma$ to be a disjoint copy of $R_\gamma$.
If $a\in R_\gamma$, denote by $\tilde{a}\in \widetilde{R}_\gamma$ the copy of $a\in R_\gamma$. Consider the $\Gamma$-graded additive group $\widetilde{R}=\bigoplus\limits_{\gamma\in\Gamma} \widetilde{R}_\gamma$. Define $\widetilde{R}_\gamma\times \widetilde{R}_{\gamma'}\rightarrow \widetilde{R}_{\gamma\gamma'}$ by $(\tilde{a},\tilde{b})\mapsto \widetilde{ab}$, and extend it by distributivity to $\widetilde{R}\times\widetilde{R}\rightarrow \widetilde{R}$. This endows $\widetilde{R}$ with a structure of $\Gamma$-graded ring such that $\supp\widetilde{R}=\supp R$ and $\varphi\colon\widetilde{R}=\bigoplus\limits_{\gamma\in\Gamma}\widetilde{R}_\gamma\rightarrow R=\sum\limits_{\gamma\in\Gamma}R_\gamma$ determined by $\tilde{a}\mapsto a$ for all $a\in R_\gamma,\ \gamma\in \Gamma$, is a homomorphism of $\Gamma$-almost graded rings such that $\varphi(\widetilde{R}_\gamma)=R_\gamma$ and $\im\varphi=R$. Another important example is as follows. If $S=\bigoplus_{\alpha\in\Gamma/\Omega} S_\alpha$ is a $\Gamma/\Omega$-graded ring, then $S$ can be endowed with a structure of $\Gamma$-almost graded ring by defining $S_\gamma=S_\alpha$ for all $\gamma\in\alpha$. Suppose that $R=\bigoplus_{\gamma\in\Gamma}R_\gamma$ is a $\Gamma$-graded ring and that $\Omega$ is a normal subgroup of $\Gamma$. The ring $R$ is a $\Gamma/\Omega$-graded ring by defining $R_\alpha=\bigoplus_{\gamma\in\alpha} R_\gamma$ for each $\alpha\in \Gamma/\Omega$. If $f\colon R\rightarrow S$ is a homomorphism of $\Gamma/\Omega$-graded rings, then it is a homomorphism of $\Gamma$-almost graded rings. Let $\Gamma'$ be the subgroup of $\Gamma$ generated by $\supp R$. Observe that if $\Omega$ is a normal subgroup of $\Gamma'$ (instead of $\Gamma$), $S$ is a $\Gamma'/\Omega$-graded ring and $f\colon R\rightarrow S$ a homomorphism of $\Gamma'/\Omega$-graded rings, then $f$ is a homomorphism of $\Gamma'$-almost graded rings.
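\medskip As a concrete example, let $\Gamma=\mathbb{Z}$, let $R=k[x,x^{-1}]$ be graded by $R_n=kx^n$ over a field $k$, and let $f\colon R\rightarrow k$ be the $k$-algebra homomorphism with $f(x)=1$. Then $\im f=k$ is a $\mathbb{Z}$-almost graded ring with $(\im f)_n=f(R_n)=k$ for every $n\in\mathbb{Z}$; the sum $\sum_{n\in\mathbb{Z}}(\im f)_n$ is far from direct, so these components do not make $\im f$ a $\mathbb{Z}$-graded ring. The construction above recovers the grading: $\widetilde{\im f}\cong k[x,x^{-1}]$, the group ring of $\mathbb{Z}$ over $k$.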
\bigskip We say that a nonzero ring $E$ is a \emph{$\Gamma$-almost graded division ring} if $E$ is a $\Gamma$-almost graded ring such that every nonzero element $x\in E_\gamma$, $\gamma\in\Gamma$, is invertible with inverse $x^{-1}\in E_{\gamma^{-1}}$. Note that if $E$ is a $\Gamma$-almost graded division ring, then $\widetilde{E}$ is a $\Gamma$-graded division ring. The following easy result tells us that $\Gamma$-almost graded division rings are graded division rings, although not necessarily with grading group $\Gamma$. \begin{lemma}\label{lem:almostgradeddivisionrings} Let $E$ be a $\Gamma$-almost graded division ring. The following assertions hold true. \begin{enumerate}[\rm(1)] \item If $0\neq b\in E_\gamma$, then $bE_{\gamma'}=E_{\gamma\gamma'}$ and $E_{\gamma'}b=E_{\gamma'\gamma}$ for all $\gamma'\in\Gamma$. \item $E_\gamma\cdot E_{\gamma'}=E_{\gamma\gamma'}$ for all $\gamma,\gamma'\in\Gamma$. \item $\supp E$ is a subgroup of $\Gamma$. \item There exists a normal subgroup $N$ of $\supp E$ such that $E$ is a $\frac{\supp E}{N}$-graded division ring. \end{enumerate} \end{lemma} \begin{proof} (1) The inclusion $bE_{\gamma'}\subseteq E_{\gamma\gamma'}$ is clear. Conversely, if $u\in E_{\gamma\gamma'}$, then $u=b\cdot b^{-1}u$ where $b^{-1}u\in E_{\gamma'}$. The other equality is proved analogously. (2) is a consequence of (1). Since $1\in E_e$, (3) follows from (2). (4) First note that, for each $\gamma\in\Gamma$, the condition $E_\gamma\cap E_e\neq\{0\}$ implies that $E_\gamma=E_e$. Indeed, if $0\neq b\in E_\gamma\cap E_e$, (1) implies that $E_\gamma=bE_e=E_e$. Define $N=\{\gamma\in\Gamma\colon E_\gamma=E_e\}$. We show that $N$ is a normal subgroup of $\supp E$. Clearly $e\in N$. If $\gamma,\gamma'\in N$, then $E_{\gamma\gamma'}=E_\gamma E_{\gamma'}=E_eE_e=E_e$. Now, if $\gamma\in N$, then $E_e=E_\gamma E_{\gamma^{-1}}=E_eE_{\gamma^{-1}}=E_{\gamma^{-1}}$. Thus $\gamma^{-1}\in N$. Suppose $\gamma\in N$ and $\sigma\in\supp E$. Then $E_{\sigma\gamma\sigma^{-1}}=E_\sigma E_\gamma E_{\sigma^{-1}}=E_\sigma E_e E_{\sigma^{-1}} = E_e$.
Thus $\sigma\gamma\sigma^{-1}\in N$. Now let $\gamma,\gamma'\in \supp E$. Then $$\gamma^{-1}\gamma'\in N \Leftrightarrow E_{\gamma^{-1}\gamma'}=E_e \Leftrightarrow E_\gamma=E_{\gamma'},$$ and the result follows. \end{proof} Let $R$ be a $\Gamma$-graded ring, $S$ be a ring and $f\colon R\rightarrow S$ be a ring homomorphism. For each $\gamma\in\Gamma$, define $$(S_0)_\gamma=f(R_\gamma).$$ If $n\geq 0$, and $(S_n)_\gamma$ has been defined for each $\gamma\in\Gamma$, define \[(T_{n+1})_\gamma=\{y^{-1}\colon y\in (S_n)_{\gamma^{-1}} \textrm{ and $y$ is invertible in }S \},\] and let $(S_{n+1})_\gamma$ be the additive subgroup of $S$ generated by $$\{x_1x_2\dotsm x_r\colon r\in\mathbb{N},\, x_i\in (S_n)_{\gamma_i}\cup(T_{n+1})_{\gamma_i},\, \gamma_1\gamma_2\dotsm\gamma_r=\gamma\}.$$ Now set $(\DC(f))_\gamma$ to be the additive subgroup of $S$ generated by $\bigcup_{n\geq 0}(S_n)_\gamma$. Then the subring of $S$ defined by $$\DC(f)=\textrm{Additive subgroup generated by }\bigcup_{\gamma\in\Gamma}(\DC(f))_\gamma$$ is the \emph{almost graded division closure of $f\colon R\rightarrow S$}. Note that $\DC(f)$ is a $\Gamma$-almost graded ring such that if $x\in(\DC(f))_\gamma$ and $x$ is invertible in $S$, then $x^{-1}\in (\DC(f))_{\gamma^{-1}}$. It is the least subring of $S$ that contains $\im f$ and is closed under inversion of almost homogeneous elements, that is, of the elements of the subgroups $(\DC(f))_\gamma$. If $\DC(f)=S$ and $\DC(f)$ is a $\Gamma$-almost graded division ring, we say that $S$ is the \emph{$\Gamma$-almost graded division ring generated by $\im f$}. Notice also that if $S$ is a division ring, then $\DC(f)$ is a $\Gamma$-almost graded division ring. Note that if $S$ is a $\Gamma$-graded ring, and $f\colon R\rightarrow S$ is a homomorphism of $\Gamma$-graded rings, then $(S_n)_\gamma\subseteq S_\gamma$ for each $n\geq 0$. Therefore $(\DC(f))_\gamma\subseteq S_\gamma$ and $\DC(f)$ is a $\Gamma$-graded subring of $S$.
It is the least subring of $S$ that contains $\im f$ and is closed under inversion of homogeneous elements. Moreover, if $S$ is a $\Gamma$-graded division ring, then $\DC(f)$ is a $\Gamma$-graded division subring of $S$. In this case, if $S=\DC(f)$ we say that $S$ is the \emph{$\Gamma$-graded division ring generated by $\im f$}. \begin{proposition}\label{prop:almostdivisionring} Let $\Gamma$ be a group, $D=\bigoplus_{\gamma\in\Gamma}D_\gamma$ be a $\Gamma$-graded division ring, and let $f\colon D\rightarrow S$ be a ring homomorphism with $S$ a nonzero ring. The following assertions hold true. \begin{enumerate}[\rm (1)] \item $\DC(f)$ is a $\Gamma$-almost graded division ring with $$(\DC(f))_\gamma=(\im f)_\gamma=\{f(x)\colon x\in D_\gamma\}$$ and $D=\widetilde{\DC(f)}$. \item The sets $$\Upsilon=\{A\in\mathfrak{M}(D)\colon A \textrm{ is invertible over }D\},$$ $$\Sigma=\{A\in\mathfrak{M}(D)\colon A^f \textrm{ is invertible over } S\}$$ coincide. \item If $R$ is a $\Gamma$-graded ring and $\varphi\colon R\rightarrow D$ is a homomorphism of $\Gamma$-graded rings, then the sets $$\Upsilon_\varphi=\{A\in\mathfrak{M}(R)\colon A^{\varphi} \textrm{ is invertible over }D\},$$ $$\Sigma_\varphi=\{A\in\mathfrak{M}(R)\colon A^{(f\varphi)} \textrm{ is invertible over } S\}$$ coincide. \end{enumerate} \end{proposition} \begin{proof} (1) has already been proved. (2) Clearly, if $A\in \Upsilon$, then $A\in\Sigma$. Suppose now that $A\in M_n(D)[\overline{\alpha}][\overline{\beta}]$ is such that $A\notin\Upsilon.$ Then there exists a nonzero homogeneous column $\left(\begin{smallmatrix}x_1\\ \vdots \\ x_n \end{smallmatrix}\right)\in M_{n\times 1}(D)[\overline{\beta}][\delta]$ such that $A\left(\begin{smallmatrix}x_1\\ \vdots \\ x_n \end{smallmatrix}\right)=0$. Note that $\left(\begin{smallmatrix}x_1\\ \vdots \\ x_n \end{smallmatrix}\right)^f\neq 0$: some $x_i$ is a nonzero homogeneous element, hence invertible in $D$, so $f(x_i)$ is invertible in $S$ and, since $S$ is not the zero ring, $f(x_i)\neq 0$.
Thus $A^f\left(\begin{smallmatrix}x_1\\ \vdots \\ x_n \end{smallmatrix}\right)^f=0$, which implies that $A\notin\Sigma$. (3) follows from (2) because $\Sigma_\varphi=\{A\in\mathfrak{M}(R)\colon A^\varphi\in\Sigma\}$ and $\Upsilon_\varphi=\{A\in\mathfrak{M}(R)\colon A^\varphi\in\Upsilon\}.$ \end{proof} \section{Graded rational closure}\label{sec:gradedrationalclosure} \emph{Throughout this section, let $\Gamma$ be a group.} \medskip We begin this section by introducing some important notation that will be used throughout. For $\overline{\alpha}=(\alpha_1,\dotsc,\alpha_n)\in\Gamma^n$, $\overline{\alpha'}=(\alpha_1',\dotsc,\alpha_m')\in\Gamma^m$ and $\delta\in\Gamma$, we define $$\overline{\alpha}*\overline{\alpha'}\coloneqq(\alpha_1,\dotsc,\alpha_n,\alpha_1',\dotsc,\alpha_m')\in\Gamma^{n+m},\qquad \overline{\alpha}\cdot \delta\coloneqq(\alpha_1\delta,\dotsc,\alpha_n\delta)\in\Gamma^n.$$ Let $R$ be a $\Gamma$-graded ring and $S$ be a ring. For each $A\in\mathfrak{M}_n(R)$, the last column will be called $A_\infty$ and the matrix consisting of the remaining $n-1$ columns will be called $A_\bullet$. We will write $A=(A_\bullet\ A_\infty)$. For each sequence $\overline{\alpha}=(\alpha_1,\dots,\alpha_n)\in\Gamma^n$, the last element $\alpha_n$ will be denoted by $\alpha_\infty$, and $(\alpha_1,\dotsc,\alpha_{n-1})$ will be denoted by $\alpha_\bullet$. Thus $\overline{\alpha}=\alpha_\bullet*\alpha_\infty$. For $u\in M_{n\times 1}(S)$, the last entry of $u$ will be denoted by $u_\infty$ and the $(n-1)\times 1$ column consisting of the remaining entries will be denoted by $u_\bullet$. Hence $u=\left(\begin{smallmatrix} u_\bullet\\u_\infty \end{smallmatrix}\right).$ We remark that if $n=1$, then $A_\bullet,\alpha_\bullet,u_\bullet$ are empty and thus $A=A_\infty$, $\overline{\alpha}=\alpha_\infty$ and $u=u_\infty$.
If $A\in\mathfrak{M}_{n\times (n+1)}(R)$, we will denote by $A_0$ its first column, by $A_\infty$ its last column and by $A_\bullet$ the matrix consisting of the other $n-1$ columns, that is, we will write $A=(A_0\ A_\bullet\ A_\infty)$. We will call the matrix $(A_0\ A_\bullet)$ the \emph{numerator} of $A$ and the matrix $(A_\bullet\ A_\infty)$ the \emph{denominator} of $A$. If $A\in M_{n\times (n+1)}(R)[\overline{\alpha}][\overline{\beta}]$, we suppose $\overline{\beta}$ is divided as $\beta_0*\beta_\bullet*\beta_\infty$. If $u\in M_{(n+1)\times 1}(S)$, we will write $u=\left(\begin{smallmatrix} u_0 \\ u_\bullet \\ u_\infty \end{smallmatrix}\right).$ Again, we remark that if $n=1$, then $A_\bullet$, $\beta_\bullet$, $u_\bullet$ are empty and thus $A=(A_0\ A_\infty)$, $\overline{\beta}=(\beta_0, \beta_\infty)$ and $u=\left(\begin{smallmatrix} u_0 \\ u_\infty \end{smallmatrix}\right)$. \bigskip Let $R=\bigoplus\limits_{\gamma\in\Gamma} R_\gamma$ be a $\Gamma$-graded ring and $\Sigma\subseteq \mathfrak{M}(R)$. We say that the subset $\Sigma$ of $\mathfrak{M}(R)$ is \emph{gr-lower semimultiplicative} if it satisfies the following two conditions: \begin{enumerate}[(i)] \item $(1)\in \Sigma$, i.e. the identity matrix of size $1\times 1$ belongs to $\Sigma$. \item If $A\in\Sigma_n[\overline{\alpha}][\overline{\beta}]$ and $B\in\Sigma_m[\overline{\alpha'}][\overline{\beta'}]$, then the matrix $\begin{pmatrix} A & 0 \\ C & B \end{pmatrix}\in\Sigma$ for any $C\in M_{m\times n}(R)[\overline{\alpha'}][\overline{\beta}]$. Notice that the matrix $\begin{pmatrix} A & 0 \\ C & B \end{pmatrix}\in M_{n+m}(R)[\overline{\alpha}*\overline{\alpha'}][\overline{\beta}*\overline{\beta'}]$. \end{enumerate} A gr-upper semimultiplicative subset of $\mathfrak{M}(R)$ is defined analogously. A subset $\Sigma$ of $\mathfrak{M}(R)$ is \emph{gr-multiplicative} if it satisfies the following two conditions \begin{enumerate}[(i)] \item $\Sigma$ is gr-lower semimultiplicative.
\item If $A\in\Sigma$, then $EAF\in\Sigma$ for any permutation matrices $E,F$ of appropriate size. \end{enumerate} \begin{remark}\label{rem:grmultiplicative} We remark that if $\Sigma$ is gr-multiplicative then it is also a gr-upper semimultiplicative subset of $\mathfrak{M}(R)$. Indeed, suppose that $A\in\Sigma_n[\overline{\alpha}][\overline{\beta}]$, $B\in\Sigma_m[\overline{\alpha'}][\overline{\beta'}]$ and $C\in M_{n\times m}(R)[\overline{\alpha}][\overline{\beta'}]$. Then, since $\Sigma$ is gr-lower semimultiplicative, $\left(\begin{smallmatrix} B & 0 \\ C & A \end{smallmatrix}\right)\in\Sigma$. But now $\left(\begin{smallmatrix} A & C \\ 0 & B \end{smallmatrix}\right)=E^{-1}\left(\begin{smallmatrix} B & 0 \\ C & A \end{smallmatrix}\right)E\in\Sigma$ for some permutation matrix $E$, as desired. \end{remark} \begin{proposition} Let $R$ be a $\Gamma$-graded ring, $S$ be a ring and $f\colon R\rightarrow S$ be a ring homomorphism. Then the set $$\Sigma=\{M\in\mathfrak{M}(R)\colon M^f \textrm{ is invertible over } S \}$$ is gr-multiplicative. \end{proposition} \begin{proof} Clearly the $1\times 1$ matrix $(1)\in \Sigma$. Let $A\in\Sigma_n[\overline{\alpha}][\overline{\beta}]$, $B\in\Sigma_m[\overline{\alpha'}][\overline{\beta'}]$ and $C\in M_{m\times n}(R)[\overline{\alpha'}][\overline{\beta}]$. Then the matrix $\begin{pmatrix} A & 0 \\ C & B \end{pmatrix}$ belongs to $\Sigma$ because $\begin{pmatrix} A & 0 \\ C & B \end{pmatrix}^f$ is invertible with inverse \[ \begin{pmatrix}{(A^f)}^{-1}&0\\-(B^f)^{-1}(C^f)(A^f)^{-1}&(B^f)^{-1}\end{pmatrix}. \] Notice that if $E,F$ are permutation matrices, then $E^f, F^f$ are also permutation matrices. Hence if $A\in \Sigma$, then $EAF\in\Sigma$, since $(EAF)^f$ is invertible with inverse $(F^f)^{-1}(A^f)^{-1}(E^f)^{-1}$. \end{proof} Note that if $S$ is a $\Gamma$-graded ring, $f\colon R\rightarrow S$ is a graded homomorphism and $A\in M_n(R)[\overline{\alpha}][\overline{\beta}]$, then $A^f \in M_n(S)[\overline{\alpha}][\overline{\beta}]$.
Moreover, if $A^f$ is invertible, then $(A^f)^{-1} \in M_n(S)[\overline{\beta}][\overline{\alpha}]$, and the $(j,i)$-entry of $(A^f)^{-1}$ belongs to $S_{\beta_j\alpha_i^{-1}}$. With this in mind, we make the following definition. Let $R=\bigoplus\limits_{\gamma\in\Gamma} R_\gamma$ be a $\Gamma$-graded ring and $\Sigma\subseteq \mathfrak{M}(R)$. Let $S$ be a ring (not necessarily graded) and $f\colon R\rightarrow S$ be a $\Sigma$-inverting ring homomorphism. For $\gamma\in\Gamma$, we define the \emph{homogeneous rational closure of degree $\gamma$} as the set $(Q_f(\Sigma))_\gamma$ consisting of all $x\in S$ for which there exist a positive integer $n$, indices $i,j\in\{1,\dotsc,n\}$, tuples $\overline{\alpha},\overline{\beta}\in\Gamma^n$ and a matrix $A\in\Sigma_n[\overline{\alpha}][\overline{\beta}]$ such that $\gamma=(\alpha_i\beta_j^{-1})^{-1}=\beta_j\alpha_i^{-1}$ and $x$ is the $(j,i)$-entry of $(A^f)^{-1}$. The \emph{homogeneous rational closure} is the set $$Q_f(\Sigma)=\bigcup\limits_{\gamma\in\Gamma}(Q_f(\Sigma))_\gamma.$$ The \emph{graded rational closure}, denoted by $R_f(\Sigma)$, is the additive subgroup of $S$ generated by $Q_f(\Sigma)$. When the set $\Sigma$ is gr-lower semimultiplicative, the graded rational closure $R_f(\Sigma)$ is a subring of $S$, as the following results show. \begin{lemma}\label{lem:homogeneousclosure} Let $R$ be a $\Gamma$-graded ring and $\Sigma$ be a gr-lower semimultiplicative subset of $\mathfrak{M}(R)$. Let $S$ be a ring and $f\colon R\rightarrow S$ be a $\Sigma$-inverting ring homomorphism. Fix $\gamma\in \Gamma$. For $x\in S$, the following conditions are equivalent. \begin{enumerate}[\rm(1)] \item $x\in (Q_f(\Sigma))_\gamma$. \item There exist $\overline{\alpha},\overline{\beta}\in\Gamma^n$ and $A\in\Sigma_n[\overline{\alpha}][\overline{\beta}]$ such that $\alpha_i=e$, $\beta_j=\gamma$ and $x$ is the $(j,i)$-entry of $(A^f)^{-1}$.
\item There exist $\overline{\alpha},\overline{\beta}\in\Gamma^n$, $A\in\Sigma_n[\overline{\alpha}][\overline{\beta}]$ and $u\in M_{n\times 1}(S)$ such that $\alpha_i=e$, $\beta_j=\gamma$, $u_j=x$ and $A^fu=e_i$. \item There exist $\overline{\alpha},\overline{\beta}\in\Gamma^n$, $A\in\Sigma_n[\overline{\alpha}][\overline{\beta}]$, $a\in M_{n\times 1}(R)[\overline{\alpha}][{e}]$ and $u\in M_{n\times 1}(S)$ such that $\beta_j=\gamma$, $u_j=x$ and $A^fu=a^f$. \item There exist $\overline{\alpha},\overline{\beta}\in\Gamma^n$, $A\in\Sigma_n[\overline{\alpha}][\overline{\beta}]$, $a\in M_{n\times 1}(R)[\overline{\alpha}][{e}]$ and $u\in M_{n\times 1}(S)$ such that $\beta_\infty=\gamma$, $u_\infty=x$ and $A^fu=a^f$. \item There exist $\overline{\alpha},\overline{\beta}\in\Gamma^n$, $A\in\Sigma_n[\overline{\alpha}][\overline{\beta}]$, $b\in M_{1\times n}(R)[\gamma][\overline{\beta}]$ and $c\in M_{n\times 1}(R)[\overline{\alpha}][{e}]$ such that $x=b^f(A^f)^{-1}c^f$. \item There exist $\overline{\alpha}\in\Gamma^n$, $\overline{\beta}\in\Gamma^{n+1}$, $A\in M_{n\times (n+1)}(R)[\overline{\alpha}][\overline{\beta}]$ and $u\in M_{(n+1)\times 1} (S)$ such that $\beta_0=e$, $\beta_\infty=\gamma$, $u_0=1$, $u_\infty=x$, $(A_\bullet\ A_\infty)\in \Sigma$ and $A^fu=0$. \end{enumerate} \end{lemma} \begin{proof} (1)$\Rightarrow$(2) Let $A\in\Sigma_n[\overline{\alpha}][\overline{\beta}]$ and $i,j\in\{1,\dotsc,n\}$ be such that $x$ is the $(j,i)$-entry of $(A^f)^{-1}$ and $\gamma=(\alpha_i\beta_j^{-1})^{-1}=\beta_j\alpha_i^{-1}$. Then $A$ can be regarded as a matrix in $\Sigma_n[\overline{\alpha}\cdot \alpha_i^{-1}][\overline{\beta}\cdot\alpha_i^{-1}]$, and thus (2) follows. (2)$\Rightarrow$(3) Suppose that (2) holds. Let $u$ be the $i$th column of $(A^f)^{-1}$. Then $A^fu=e_i$, as desired. (3)$\Rightarrow$(4) This is clear because $e_i\in M_{n\times 1}(R)[\overline{\alpha}][e]$ and $e_i^f=e_i$. (4)$\Rightarrow$(5) Let $A\in\Sigma$, $i$, $j$, $a$ and $u$ be as in (4). 
Suppose that $A^fu=a^f$ with $u_j=x$. Since $\Sigma$ is gr-lower semimultiplicative, the matrix $\begin{pmatrix}A&0\\-e_j^t&1\end{pmatrix}\in M_{n+1}(R)[\overline{\alpha}*\beta_j][\overline{\beta}*\beta_j]$ belongs to $\Sigma$. The matrix $\begin{pmatrix}a\\0\end{pmatrix}\in M_{(n+1)\times 1}(R)[\overline{\alpha}*\beta_j][e]$. Now (5) follows from the following equality \[ \begin{pmatrix}A^f&0\\-e_j^t&1\end{pmatrix}\begin{pmatrix}u\\x\end{pmatrix}=\begin{pmatrix}a^f\\0\end{pmatrix} =\begin{pmatrix}a\\0\end{pmatrix}^f. \] (5)$\Rightarrow$(6) From (5) we obtain that $u=(A^f)^{-1}a^f$. Hence $$x=(e_n^t)^fu=(e_n^t)^f(A^f)^{-1}a^f.$$ Now (6) follows because $e_n^t\in M_{1\times n}(R)[\gamma][\overline{\beta}]$. (6)$\Rightarrow$(1) Let $A$, $b$ and $c$ be as in (6). Then, since $\Sigma$ is gr-lower semimultiplicative, \[ \begin{pmatrix}1&0&0\\c&A&0\\0&b&1\end{pmatrix}\in\Sigma_{n+2}[e*\overline{\alpha}*\gamma][e*\overline{\beta}*\gamma]. \] Moreover, \[ \begin{pmatrix}1&0&0\\c^f&A^f&0\\0&b^f&1\end{pmatrix}^{-1}=\begin{pmatrix}1&0&0\\-(A^f)^{-1}c^f&(A^f)^{-1}&0\\b^f(A^f)^{-1}c^f&-b^f(A^f)^{-1}&1\end{pmatrix}. \] Thus $x=b^f(A^f)^{-1}c^f$ belongs to $(Q_f(\Sigma))_\gamma$. (5)$\Leftrightarrow$(7) Suppose $A\in M_n(R)[\overline{\alpha}][\overline{\beta}]$ with $\beta_\infty=\gamma$, $a\in M_{n\times 1}(R)[\overline{\alpha}][e]$ and $u\in M_{n\times 1}(S)$ with $u_\infty=x$. Then the equality $A^fu=a^f$ is equivalent to the equality \[ \begin{pmatrix}-a^f&A^f\end{pmatrix}\begin{pmatrix}1\\u\end{pmatrix}=0. \] Notice that $(-a\ A)\in M_{n\times (n+1)}(R)[\overline{\alpha}][e*\overline{\beta}]$. \end{proof} \begin{theorem}\label{theo:epimorphism} Let $R$ be a $\Gamma$-graded ring and $\Sigma$ be a gr-lower semimultiplicative subset of $\mathfrak{M}(R)$. Let $S$ be a ring and $f\colon R\rightarrow S$ a $\Sigma$-inverting ring homomorphism. Then \begin{enumerate}[\rm(1)] \item For each $\gamma\in \Gamma$, $f(R_\gamma)\subseteq(Q_f(\Sigma))_\gamma$. 
\item If $\gamma\in \Gamma$ and $x,y\in(Q_f(\Sigma))_\gamma$, then $x+y\in(Q_f(\Sigma))_\gamma$. \item If $\gamma,\delta\in\Gamma$ and $x\in(Q_f(\Sigma))_\gamma$, $y\in(Q_f(\Sigma))_\delta$, then $xy\in(Q_f(\Sigma))_{\gamma\delta}$. \end{enumerate} Hence $R_f(\Sigma)$ is a $\Gamma$-almost graded ring (which is a subring of $S$) that contains $\mathrm{im}(f)$. Furthermore \begin{enumerate}[\rm(1)] \setcounter{enumi}{3} \item The restriction $f\colon R\rightarrow R_f(\Sigma)$ is a ring epimorphism. \item If $S$ is a $\Gamma$-graded ring and $f\colon R\rightarrow S$ is a homomorphism of $\Gamma$-graded rings, then $(Q_f(\Sigma))_\gamma\subseteq S_\gamma$ for each $\gamma\in\Gamma$ and $R_f(\Sigma)=\bigoplus\limits_{\gamma\in\Gamma} (Q_f(\Sigma))_\gamma$ is a $\Gamma$-graded subring of $S$ such that $\h(R_f(\Sigma))=Q_f(\Sigma)$. \end{enumerate} \end{theorem} \begin{proof} (1) Let $r\in R_\gamma$. Since $f(1)f(r)=f(r)$, where $1\in M_1(R)[\gamma][\gamma]$ and $r \in M_1(R)[\gamma][e]$, Lemma~\ref{lem:homogeneousclosure}(5) implies that $f(r)\in (Q_f(\Sigma))_\gamma$. (2) Let $x,y\in(Q_f(\Sigma))_\gamma$. 
By Lemma~\ref{lem:homogeneousclosure}(5), there exist $\overline{\alpha},\overline{\beta}\in\Gamma^n$, $A\in\Sigma_n[\overline{\alpha}][\overline{\beta}]$, $a\in M_{n\times 1}(R)[\overline{\alpha}][{e}]$ and $u\in M_{n\times 1}(S)$ such that $\beta_\infty=\gamma$, $u_\infty=x$ and $$A^fu=(A_\bullet^f\ A_\infty^f)\begin{pmatrix} u_\bullet \\ x \end{pmatrix}=a^f.$$ There also exist $B\in\Sigma_{n'}[\overline{\alpha'}][\overline{\beta'}]$, $b\in M_{n'\times 1}(R)[\overline{\alpha'}][{e}]$ and $v\in M_{n'\times 1}(S)$ such that $\beta'_\infty=\gamma$, $v_\infty=y$ and $$B^fv=(B_\bullet^f\ B_\infty^f)\begin{pmatrix} v_\bullet \\ y \end{pmatrix}=b^f.$$ Then, since $\Sigma$ is gr-lower semimultiplicative, the matrix $\left(\begin{array}{cc|c}A_\bullet&A_\infty&0\\\hline0&-B_\infty&B\end{array}\right)\in\Sigma_{n+n'}[\overline{\alpha}*\overline{\alpha'}][\overline{\beta}*\overline{\beta'}]$, the column $\begin{pmatrix}a\\b\end{pmatrix}\in M_{(n+n')\times 1}(R)[\overline{\alpha}*\overline{\alpha'}][e]$ and we have the following equality \[ \left(\begin{array}{cc|c}A_\bullet^f&A_\infty^f&0\\\hline0&-B_\infty^f&B^f\end{array}\right)\begin{pmatrix}u_\bullet\\x\\ \hline v_\bullet\\x+y\end{pmatrix}=\begin{pmatrix}a^f\\b^f\end{pmatrix}=\begin{pmatrix}a\\b\end{pmatrix}^f. \] Hence $x+y\in (Q_f(\Sigma))_\gamma$. (3) Let $x\in(Q_f(\Sigma))_\gamma$ and $y\in(Q_f(\Sigma))_\delta$. There exist $A\in\Sigma_n[\overline{\alpha}][\overline{\beta}]$, $a\in M_{n\times 1}(R)[\overline{\alpha}][{e}]$ and $u\in M_{n\times 1}(S)$ such that $\beta_\infty=\gamma$, $u_\infty=x$ and $A^fu=a^f$. There also exist $B\in\Sigma_{n'}[\overline{\alpha'}][\overline{\beta'}]$, $b\in M_{n'\times 1}(R)[\overline{\alpha'}][{e}]$ and $v\in M_{n'\times 1}(S)$ such that $\beta'_\infty=\delta$, $v_\infty=y$ and $B^fv=b^f$. 
Now, since $\Sigma$ is gr-lower semimultiplicative, $\left(\begin{array}{cc|c}B_\bullet&B_\infty&0\\\hline0&-a&A\end{array}\right)\in \Sigma_{n'+n}[\overline{\alpha'}*\overline{\alpha}\beta'_\infty][\overline{\beta'}*\overline{\beta}\beta'_\infty]$ with $(\overline{\beta'}*\overline{\beta}\beta'_\infty)_\infty=\gamma\delta$, $\begin{pmatrix}b\\0\end{pmatrix} \in M_{(n'+n)\times 1}(R)[\overline{\alpha'}*\overline{\alpha}\beta'_\infty][e]$ and we have the equality \[ \left(\begin{array}{cc|c}B_\bullet^f&B_\infty^f&0\\\hline0&-a^f&A^f\end{array}\right)\begin{pmatrix}v_\bullet\\ y\\\hline u_\bullet y\\xy\end{pmatrix}=\begin{pmatrix}b^f\\0\end{pmatrix}=\begin{pmatrix}b\\0\end{pmatrix}^f. \] Hence $xy\in(Q_f(\Sigma))_{\gamma\delta}$. From (1)--(3), it is easy to show that $R_f(\Sigma)$ is a $\Gamma$-almost graded ring and a subring of $S$. (4) Let $g,h\colon R_f(\Sigma)\rightarrow T$ be ring homomorphisms such that $gf=hf$. If $x\in (Q_f(\Sigma))_\gamma$, then $x$ is an entry of a square matrix $B$ which is the inverse of $A^f$ for some $A\in\Sigma$. From $A^fB=BA^f=I$, it follows that $A^{gf}B^g=B^gA^{gf}=I$ and $A^{hf}B^h=B^hA^{hf}=I$. Since $A^{gf}=A^{hf}$, we get $B^g=B^h$, and hence $g(x)=h(x)$. Since $R_f(\Sigma)$ is generated as an additive group by the sets $(Q_f(\Sigma))_\gamma$, $\gamma\in\Gamma$, we conclude that $g=h$, that is, $f\colon R\rightarrow R_f(\Sigma)$ is a ring epimorphism. (5) Now suppose that $S$ is a $\Gamma$-graded ring and $f\colon R\rightarrow S$ is a homomorphism of $\Gamma$-graded rings. Let $x\in(Q_f(\Sigma))_\gamma$. There exist $A\in\Sigma_n[\overline{\alpha}][\overline{\beta}]$, $a\in M_{n\times 1}(R)[\overline{\alpha}][{e}]$ and $u\in M_{n\times 1}(S)$ such that $\beta_\infty=\gamma$, $u_\infty=x$ and $A^fu=a^f$. Notice that $A^f\in M_n(S)[\overline{\alpha}][\overline{\beta}]$ is an invertible matrix and that $a^f\in M_{n\times 1}(S)[\overline{\alpha}][e]$. The matrix $(A^f)^{-1}\in M_n(S)[\overline{\beta}][\overline{\alpha}]$. Now $(A^f)^{-1}$ and $a^f$ are compatible and $u=(A^f)^{-1}a^f$. Then $x=u_\infty\in S_{\beta_\infty}$, that is, $x\in S_\gamma$. 
By (1)--(3), it is easy to prove that $R_f(\Sigma)$ is a graded subring of $S$ whose set of homogeneous elements equals $Q_f(\Sigma)$. \end{proof} \begin{lemma}[Cramer's rule]\label{lem:Cramersrule} Let $R$ be a $\Gamma$-graded ring and $\Sigma$ be a subset of $\mathfrak{M}(R)$. Let $S$ be a ring and $f\colon R\rightarrow S$ be a $\Sigma$-inverting ring homomorphism. Let $\gamma\in\Gamma$ and $x\in (Q_f(\Sigma))_\gamma$. Suppose that $\overline{\alpha}\in\Gamma^n$, $\overline{\beta}\in\Gamma^{n+1}$, $A\in M_{n\times (n+1)}(R)[\overline{\alpha}][\overline{\beta}]$ and $u\in M_{(n+1)\times 1} (S)$ are such that $\beta_0=e$, $\beta_\infty=\gamma$, $u_0=1$, $u_\infty=x$, $(A_\bullet\ A_\infty)\in \Sigma$ and $A^fu=0$. Then the following assertions hold true. \begin{enumerate}[\rm(1)] \item $x$ is invertible in $S$ if, and only if, the matrix $(A_0\ A_\bullet)^f$ is invertible in $M_n(S)$. \item $x$ is a regular element of $S$ if, and only if, the matrix $(A_0\ A_\bullet)^f$ is a regular element of $M_n(S)$. \item If $x=0$, then the matrix $(A_0\ A_\bullet)^f$ is not full over $S$. Furthermore, if $S$ is a $\Gamma$-graded ring and $f\colon R\rightarrow S$ is a homomorphism of graded rings, then the matrix $(A_0\ A_\bullet)^f\in M_n(S)[\overline{\alpha}][\beta_0*\beta_\bullet]$ is not gr-full over $S$. \end{enumerate} \end{lemma} \begin{proof} First note the equality \begin{equation}\label{eq:Cramer} \begin{pmatrix}A_\bullet^f&-A_0^f\end{pmatrix}=\begin{pmatrix}A_\bullet^f&A_\infty^f\end{pmatrix}\begin{pmatrix}I&u_\bullet\\0&x\end{pmatrix}. \end{equation} Also notice that the homogeneous matrix $(A_\bullet^f\ A_\infty^f)$ is invertible in $M_n(S)$ because $f$ is $\Sigma$-inverting. (1) Suppose that $x$ is invertible in $S$. Then \[ \begin{pmatrix}I&u_\bullet\\0&x\end{pmatrix} \] is invertible in $M_n(S)$. Hence $(A_\bullet^f\ -A_0^f)$ is invertible, and therefore $(A_0^f\ A_\bullet^f)$ is invertible in $M_n(S)$. 
Conversely, suppose that $(A_0^f\ A_\bullet^f)$ is invertible in $M_n(S)$. Hence the fact that $(A_\bullet^f\ -A_0^f)$ is invertible and \eqref{eq:Cramer} imply that \[ \begin{pmatrix}I&u_\bullet\\0&x\end{pmatrix} \] is invertible in $M_n(S)$. Thus there exists $\left(\begin{smallmatrix}v&w\\y&z\end{smallmatrix}\right)\in M_n(S)$ such that \[ \begin{array}{cc} \begin{pmatrix}I&u_\bullet\\0&x\end{pmatrix}\begin{pmatrix}v&w\\y&z\end{pmatrix}=I,\quad\begin{pmatrix}v&w\\y&z\end{pmatrix}\begin{pmatrix}I&u_\bullet\\0&x\end{pmatrix}=I. \end{array} \] Thus $xz=1$, $y=0$ and $yu_\bullet+zx=1$, whence $zx=1$. Therefore $z$ is a two-sided inverse of $x$, and $x$ is invertible in $S$. (2) Follows easily from \eqref{eq:Cramer}. (3) Suppose that $x=0$. Then \eqref{eq:Cramer} can be expressed as \[ \begin{array}{rcl} \begin{pmatrix}A_\bullet^f&-A_0^f\end{pmatrix}&=&\begin{pmatrix}A_\bullet^f&A_\infty^f\end{pmatrix}\begin{pmatrix}I&u_\bullet\\0&x\end{pmatrix}\\&=&\begin{pmatrix}A_\bullet^f&A_\infty^f\end{pmatrix}\begin{pmatrix}I&u_\bullet\\0&0\end{pmatrix}\\&=&\begin{pmatrix}A_\bullet^f&A_\infty^f\end{pmatrix}\begin{pmatrix}I\\0\end{pmatrix}\begin{pmatrix}I&u_\bullet\end{pmatrix} \end{array} \] which implies that $(A_\bullet^f\ -A_0^f)$ is not full and therefore $(A_0^f\ A_\bullet^f)$ is not full. If moreover $f\colon R\rightarrow S$ is a homomorphism of $\Gamma$-graded rings, then $(A_\bullet\ -A_0)^f\in M_n(S)[\overline{\alpha}][\beta_\bullet*e]$, $(A_\bullet\ A_\infty)^f\in M_n(S)[\overline{\alpha}][\beta_\bullet*\beta_\infty]$, $\begin{pmatrix}I\\0\end{pmatrix}\in M_{n\times(n-1)}(S)[\beta_\bullet*\beta_\infty][\beta_\bullet]$ and $(I\ u_\bullet)\in M_{(n-1)\times n}(S)[\beta_\bullet][\beta_\bullet*e]$. This implies that the matrix $(A_\bullet\ -A_0)^f$ is not gr-full, which in turn implies that $(A_0\ A_\bullet)^f$ is not gr-full, as desired. \end{proof} Given $A$ and $x$ as in Lemma~\ref{lem:Cramersrule}, we say that $(A_0\ A_\bullet)$ is the \emph{numerator of $x$} and $(A_\bullet\ A_\infty)$ is the \emph{denominator of $x$}. 
Thus, $x$ is invertible in $S$ if, and only if, the image under $f$ of its numerator is invertible in $M_n(S)$. \begin{theorem} Let $R$ be a $\Gamma$-graded ring. Let $S$ be a ring and $f\colon R\rightarrow S$ be a ring homomorphism. Set $$\Sigma=\{A\in\mathfrak{M}(R)\colon A^f \textrm{ is invertible over }S \}.$$ If $x\in (Q_f(\Sigma))_\gamma$ is invertible in $S$, then $x^{-1}\in (Q_f(\Sigma))_{\gamma^{-1}}$. Moreover, if $S$ is a $\Gamma$-almost graded division ring and $f\colon R\rightarrow S$ is a homomorphism of $\Gamma$-almost graded rings, then $R_f(\Sigma)$ is a $\Gamma$-almost graded division subring of $S$. \end{theorem} \begin{proof} Let $x\in(Q_f(\Sigma))_\gamma$ be invertible in $S$. By Lemma~\ref{lem:homogeneousclosure}(7), there exist $A\in M_{n\times (n+1)}(R)[\overline{\alpha}][\overline{\beta}]$ and $u\in M_{(n+1)\times 1} (S)$ such that $\beta_0=e$, $\beta_\infty=\gamma$, $u_0=1$, $u_\infty=x$, $(A_\bullet\ A_\infty)\in \Sigma$ and $A^fu=(A_0^f\ A_\bullet^f\ A_\infty^f) \left(\begin{smallmatrix} 1 \\ u_\bullet \\ x \end{smallmatrix}\right)=0$. Equivalently, $A_0^f+A_\bullet^f u_\bullet +A_\infty^f x=0$. Hence $A_0^fx^{-1}+A_\bullet^f u_\bullet x^{-1}+A_\infty^f=0$, or equivalently \[ \begin{pmatrix}A_\infty^f&A_\bullet^f&A_0^f\end{pmatrix}\begin{pmatrix}1\\u_\bullet x^{-1}\\x^{-1}\end{pmatrix}=0. \] Since $x$ is invertible, Cramer's rule implies that the matrix $(A_0^f\ A_\bullet^f)$ is invertible over $S$. Thus $(A_\bullet^f\ A_0^f)$ is invertible over $S$, and therefore $(A_\bullet\ A_0)\in \Sigma$. Moreover, notice that $(A_\infty\ A_\bullet\ A_0)\in M_{n\times(n+1)}(R)[\overline{\alpha}][\beta_\infty*\beta_\bullet*\beta_0]$, and that this matrix can also be regarded as an element of $M_{n\times (n+1)}(R)[\overline{\alpha}\beta_\infty^{-1}][\beta_\infty\beta_\infty^{-1}*\beta_\bullet\beta_\infty^{-1}*\beta_0\beta_\infty^{-1}]$. 
By Lemma~\ref{lem:homogeneousclosure}(7), and observing the equality $\beta_\infty\beta_\infty^{-1}*\beta_\bullet\beta_\infty^{-1}*\beta_0\beta_\infty^{-1}= e*\beta_\bullet\beta_\infty^{-1}*\gamma^{-1}$ in $\Gamma^{n+1}$, we get that $x^{-1}\in (Q_f(\Sigma))_{\gamma^{-1}}$. The second part follows because, by Theorem~\ref{theo:epimorphism}, $R_f(\Sigma)$ is a $\Gamma$-almost graded subring of $S$ which, by the foregoing, is closed under inverses of homogeneous elements. \end{proof} \begin{corollary} Let $R$ be a $\Gamma$-graded ring, $K$ be a $\Gamma$-graded division ring and $f\colon R\rightarrow K$ be a homomorphism of $\Gamma$-graded rings. If $$\Sigma=\{A\in\mathfrak{M}(R)\colon A^f \textrm{ is invertible over }K \}$$ and $K$ is generated as a $\Gamma$-graded division ring by the image of $f$, then $K=R_f(\Sigma)$. \qed \end{corollary} We end this section with an interesting result that will not be used in later sections: we show that two elements (and, by induction, any finite number of elements) can be brought to a common denominator. \begin{lemma} Let $R$ be a $\Gamma$-graded ring and $\Sigma$ be a gr-lower semimultiplicative subset of $\mathfrak{M}(R)$. Let $S$ be a ring and $f\colon R\rightarrow S$ a $\Sigma$-inverting ring homomorphism. If $x\in(Q_f(\Sigma))_\gamma$ and $y\in(Q_f(\Sigma))_\delta$ for some $\gamma,\delta\in \Gamma$, then they can be brought to a common denominator. \end{lemma} \begin{proof} Let $x\in(Q_f(\Sigma))_\gamma$ and $y\in(Q_f(\Sigma))_\delta$. There exist $A\in M_{n\times (n+1)}(R)[\overline{\alpha}][\overline{\beta}]$ and $u\in M_{(n+1)\times 1} (S)$ such that $\beta_0=e$, $\beta_\infty=\gamma$, $u_0=1$, $u_\infty=x$, $(A_\bullet\ A_\infty)\in \Sigma$ and $A^fu=(A_0^f\ A_\bullet^f\ A_\infty^f) \left(\begin{smallmatrix} 1 \\ u_\bullet \\ x \end{smallmatrix}\right)=0$. 
There also exist $B\in M_{n'\times (n'+1)}(R)[\overline{\alpha'}][\overline{\beta'}]$ and $v\in M_{(n'+1)\times 1} (S)$ such that $\beta_0'=e$, $\beta_\infty'=\delta$, $v_0=1$, $v_\infty=y$, $(B_\bullet\ B_\infty)\in \Sigma$ and $B^fv=(B_0^f\ B_\bullet^f\ B_\infty^f) \left(\begin{smallmatrix} 1 \\ v_\bullet \\ y \end{smallmatrix}\right)=0$. Then $$\left(\begin{array}{c|cc|cc}A_0^f&A_\bullet^f&A_\infty^f&0&0\\\hline0&0&-B_\infty^f&B_\bullet^f&B_\infty^f\end{array}\right)\left(\begin{array}{c} 1 \\\hline u_\bullet \\ x \\\hline 0\\ x \end{array}\right)=0 $$ and $$\left(\begin{array}{c|cc|cc}0&A_\bullet^f&A_\infty^f&0&0\\\hline B_0^f&0&-B_\infty^f&B_\bullet^f&B_\infty^f\end{array}\right)\left(\begin{array}{c} 1\\ \hline 0 \\0\\ \hline v_\bullet\\ y\end{array}\right)=0.$$ Now $\left(\begin{array}{c|cc|cc}A_0&A_\bullet&A_\infty&0&0\\\hline0&0&-B_\infty&B_\bullet&B_\infty\end{array}\right)\in M_{{(n+n')}\times (n+n'+1)}(R)[\overline{\alpha}*\overline{\alpha'}{\beta_{\infty}'}^{-1}\beta_\infty] [\overline{\beta}*\overline{\varepsilon}]$ where $\overline{\varepsilon}=(\beta_\bullet'{\beta_{\infty}'}^{-1}\beta_\infty,\, \beta_\infty)$. The matrix $\left(\begin{array}{c|cc|cc}0&A_\bullet&A_\infty&0&0\\\hline B_0&0&-B_\infty&B_\bullet&B_\infty\end{array}\right)$ belongs to $M_{{(n+n')}\times (n+n'+1)}(R)[\overline{\alpha}\beta_\infty^{-1}\beta_\infty'*\overline{\alpha'}][\overline{\nu}]$ where $\overline{\nu}= \beta_0'* \beta_\bullet\beta_\infty^{-1}\beta'_\infty* \beta_\infty'* \beta_\bullet'* \beta_\infty'$. In both presentations the last $n+n'$ columns form the same matrix $\left(\begin{array}{cc|cc}A_\bullet&A_\infty&0&0\\\hline0&-B_\infty&B_\bullet&B_\infty\end{array}\right)$, which belongs to $\Sigma$ because $\Sigma$ is gr-lower semimultiplicative. Hence $x$ and $y$ have been brought to a common denominator. \end{proof} \section{The category of graded $R$-division rings and gr-specializations}\label{sec:categoryspecializations} This section is an adaptation of \cite[Section~7.2]{Cohnfreeeidealringslocalization} to the graded situation. \medskip \emph{Throughout this section, let $\Gamma$ be a group.} Let $R=\bigoplus\limits_{\gamma\in\Gamma}R_\gamma$ be a $\Gamma$-graded ring. 
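Graded division rings are used throughout what follows; as a reminder, we recall the standard example (this illustration is ours and is not part of the adapted source):

```latex
% Illustration (not from the original text): a graded division ring
% that is not a division ring.
Let $k$ be a field and let $K=k[x,x^{-1}]$ be the ring of Laurent
polynomials, $\mathbb{Z}$-graded by $K_n=kx^{n}$ for $n\in\mathbb{Z}$.
Every nonzero homogeneous element has the form $ax^{n}$ with
$a\in k\setminus\{0\}$ and is invertible, with inverse
$a^{-1}x^{-n}\in K_{-n}$. Hence $K$ is a $\mathbb{Z}$-graded division
ring, although $K$ itself is not a division ring: for instance,
$1+x$ is not invertible in $K$.
```

Here the grading group is written additively; in the notation of this section, $\Gamma=\mathbb{Z}$ with identity $e=0$.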
A \emph{$\Gamma$-graded $R$-ring} is a pair $(K,\varphi)$ where $K$ is a $\Gamma$-graded ring and $\varphi\colon R\rightarrow K$ is a homomorphism of graded rings. A \emph{graded $R$-subring} of $(K,\varphi)$ is a graded subring $L$ of $K$ such that $\varphi(R)\subseteq L$. A \emph{$\Gamma$-graded $R$-division ring} is a $\Gamma$-graded $R$-ring $(K,\varphi)$ such that $K$ is a $\Gamma$-graded division ring. If $K=\DC(\varphi)$, that is, if $K$ is the $\Gamma$-graded division ring generated by the image of $\varphi$, we say that $(K,\varphi)$ is a \emph{$\Gamma$-graded epic $R$-field}. A \emph{homomorphism of $\Gamma$-graded $R$-rings} between $\Gamma$-graded $R$-rings $(K,\varphi)$ and $(K',\varphi')$ is a homomorphism of graded rings $f\colon K\rightarrow K'$ such that $\varphi'=f\circ \varphi$. If, moreover, $f\colon K\rightarrow K'$ is an isomorphism of $\Gamma$-graded rings, we say that $f$ is an \emph{isomorphism of $\Gamma$-graded $R$-rings}. \medskip Let now $\Sigma\subseteq \mathfrak{M}(R)$. The \emph{universal localization of $R$ at $\Sigma$} is a pair $(R_\Sigma,\lambda)$ where $R_\Sigma$ is a ring and $\lambda\colon R\rightarrow R_\Sigma$ is a $\Sigma$-inverting homomorphism such that for any other $\Sigma$-inverting ring homomorphism $f\colon R\rightarrow S$ there exists a unique ring homomorphism $F\colon R_\Sigma \rightarrow S$ with $f=F\lambda$. Now we give some important properties of $R_\Sigma$. \begin{proposition}\label{prop:basicsonuniversallocalization} Let $R$ be a $\Gamma$-graded ring and let $\Sigma\subseteq \mathfrak{M}(R)$. Then the following statements hold true. \begin{enumerate}[\rm(1)] \item The universal localization $(R_\Sigma,\lambda)$ of $R$ at $\Sigma$ exists. \item $\lambda\colon R\rightarrow R_\Sigma$ is a ring epimorphism. 
\item The ring $R_\Sigma$ is a $\Gamma$-graded ring, $\lambda \colon R\rightarrow R_\Sigma$ is a homomorphism of $\Gamma$-graded rings, and $(R_\Sigma,\lambda)$ is unique up to isomorphism of $\Gamma$-graded $R$-rings. \item Suppose that $S$ is a $\Gamma$-graded ring, $f\colon R\rightarrow S$ is a $\Sigma$-inverting homomorphism of $\Gamma$-graded rings and $F\colon R_\Sigma\rightarrow S$ is the unique homomorphism of rings such that $f=F\lambda$. Then $F\colon R_\Sigma\rightarrow S$ is a homomorphism of $\Gamma$-graded $R$-rings. Moreover, if $\Sigma$ is gr-lower semimultiplicative, then $\im F=R_f(\Sigma)$. \end{enumerate} \end{proposition} \begin{proof} First we construct a free ring $\mathbb{Z}\langle X\rangle$, where the set $X$ is defined as follows. For each $\gamma\in\Gamma$ and $r\in R_\gamma$, consider a symbol $x_r^\gamma$. For each matrix $A=(a_{ij})\in\Sigma$, fix $(\overline{\alpha},\overline{\beta})$ such that $A\in M_{n}(R)[\overline{\alpha}][\overline{\beta}]$ and consider a matrix $A^*=(a_{ij}^*)$ whose entries are symbols. Then let $X$ be the disjoint union $$X=\{x_{r}^{\gamma}\colon r\in R_\gamma,\gamma\in\Gamma\}\cup \{a_{ij}^*\colon a_{ij} \textrm{ is the $(i,j)$-entry of } A\in\Sigma\}. $$ Now we turn $\mathbb{Z}\langle X\rangle$ into a $\Gamma$-graded ring by giving degrees to the elements of $X$. If $r\in R_\gamma$, we set $x_{r}^{\gamma}$ to be of degree $\gamma$. If $A=(a_{ij})\in\Sigma$ with fixed $(\overline{\alpha},\overline{\beta})$, then $a_{ij}\in R_{\alpha_i\beta_j^{-1}}$, and thus we let $a_{ij}^*$ be of degree $\beta_i\alpha_j^{-1}$. Notice that $A^*\in M_n(\mathbb{Z}\langle X\rangle) [\overline{\beta}][\overline{\alpha}]$. Let $I$ be the ideal of $\mathbb{Z}\langle X\rangle$ generated by the homogeneous elements of any of the following forms: \begin{itemize} \item $x_{r+s}^{\gamma}-x_{r}^{\gamma}-x_{s}^{\gamma}$ for $r,s\in R_\gamma$. \item $x_{rs}^{\gamma\delta}-x_{r}^{\gamma}x_{s}^{\delta}$ for $r\in R_\gamma$ and $s\in R_\delta$. 
\item $x_1^e - 1$. \item $\sum_k x_{a_{i,k}}^{\alpha_i\beta_k^{-1}}a_{k,j}^*-\delta_{i,j}$ for $A\in\Sigma$. \item $\sum_k a_{i,k}^*x_{a_{k,j}}^{\alpha_k\beta_j^{-1}}-\delta_{i,j}$ for $A\in\Sigma$. \end{itemize} Set $R_\Sigma=\mathbb{Z}\langle X\rangle /I$ and let $\lambda \colon R\rightarrow R_\Sigma$ be the ring homomorphism determined by $\lambda (r)=\overline{x_r^\gamma}$ for each $r\in R_\gamma$, $\gamma\in\Gamma$. Since $I$ is a graded ideal of $\mathbb{Z}\langle X\rangle$, the ring $R_\Sigma$ is a $\Gamma$-graded ring and $\lambda$ is a homomorphism of graded rings. Suppose that $f\colon R\rightarrow S$ is a $\Sigma$-inverting ring homomorphism. For each $A=(a_{ij})\in\Sigma$ with fixed $(\overline{\alpha},\overline{\beta})$, suppose that $(A^f)^{-1}=(b_{ij})$. Then there exists a unique ring homomorphism $F'\colon \mathbb{Z}\langle X\rangle\rightarrow S$ such that $F'(x_r^\gamma)=f(r)$ for each $r\in R_\gamma$, $\gamma\in\Gamma$, and $F'(a_{ij}^*)=b_{ij}$. Note that $I\subseteq \ker F'$, and let $F\colon R_\Sigma\rightarrow S$ be the induced homomorphism. Hence $F\lambda=f$, as desired. To prove the uniqueness and the fact that $\lambda\colon R\rightarrow R_\Sigma$ is a ring epimorphism, notice that from $F\lambda=f$, we obtain that $F(\overline{x_r^\gamma})=f(r)$, and now the same argument as in Theorem~\ref{theo:epimorphism}(4) shows that $F\big(\overline{a_{ij}^*}\big)=b_{ij}$. Now we proceed to show (4). If $S$ is a $\Gamma$-graded ring and $f\colon R\rightarrow S$ is a $\Sigma$-inverting homomorphism of graded rings, then $f(r)\in S_\gamma$ for each $r\in R_\gamma$, $\gamma\in\Gamma$; moreover, $A^f\in M_n(S)[\overline{\alpha}][\overline{\beta}]$, $(A^f)^{-1}\in M_n(S)[\overline{\beta}][\overline{\alpha}]$ and thus $b_{ji}\in S_{\beta_{j}\alpha_{i}^{-1}}$ for each $A=(a_{ij})\in \Sigma_n[\overline{\alpha}][\overline{\beta}]$. Hence $F'$ and $F$ are homomorphisms of $\Gamma$-graded rings. 
Now $R_f(\Sigma)$ is the subring of $S$ generated by $\im f$ and the entries of the inverses of the matrices in $\Sigma^f$, and that is exactly the image of $F$. \end{proof} Now our aim is to show that if $(K,\varphi)$ is a $\Gamma$-graded epic $R$-field, then $\varphi\colon R\rightarrow K$ is in fact an epimorphism of ($\Gamma$-graded) rings. For the sake of completeness, we prefer to give a proof of the following lemma, although it could also be obtained as a direct consequence of \cite[Proposition~7.2.1]{Cohnfreeeidealringslocalization} and the fact that if $f\colon R\rightarrow S$ is a homomorphism of $\Gamma$-graded rings that is an epimorphism in the category of $\Gamma$-graded rings, then it is an epimorphism in the category of rings. The proof of this fact is as follows: if $g_1,g_2\colon S\rightarrow T$ are homomorphisms of rings such that $g_1f=g_2f$, there exist homomorphisms of $\Gamma$-graded rings $\widetilde{g_1}\colon S\rightarrow \widetilde{\im g_1f}$, $\widetilde{g_2}\colon S\rightarrow \widetilde{\im g_2f}$ and a homomorphism of rings $\pi\colon\widetilde{\im g_1f}\rightarrow T$ such that $\widetilde{g_1}f=\widetilde{g_2}f$ and $g_1=\pi\widetilde{g_1}$, $g_2=\pi\widetilde{g_2}$. Since $f$ is an epimorphism of $\Gamma$-graded rings, $\widetilde{g_1}=\widetilde{g_2}$, and thus $g_1=g_2$. \begin{lemma} Let $R$, $S$ be $\Gamma$-graded rings and $f\colon R\rightarrow S$ be a homomorphism of $\Gamma$-graded rings. The following statements are equivalent. \begin{enumerate}[\rm(1)] \item $f$ is an epimorphism of $\Gamma$-graded rings. \item In the $\Gamma$-graded $S$-bimodule $S\otimes_RS$, $x\otimes 1=1\otimes x$ for all $x\in S$. \item The natural map $\mu\colon S\otimes_R S\rightarrow S$ determined by $\mu(x\otimes y)=xy$ is an isomorphism of graded $S$-bimodules. \end{enumerate} \end{lemma} \begin{proof} $(1)\Rightarrow (2)$ Consider the $\Gamma$-graded additive group $M=S\oplus (S\otimes_R S)$. 
It can be endowed with a structure of $\Gamma$-graded ring via the multiplication $(x,u)(y,v)=(xy,xv+uy)$. Notice that if $(x,u)\in M_\gamma$ and $(y,v)\in M_\delta$, then $x,u$ have degree $\gamma$ and $y,v$ have degree $\delta$. Hence $xy$ and $xv+uy$ have degree $\gamma\delta$. Consider the homomorphisms of $\Gamma$-graded rings $g,h\colon S\rightarrow M$ defined by $g(x)=(x,0)$ and $h(x)=(x,x\otimes 1-1\otimes x)$. Since $gf=hf$ and $f$ is an epimorphism of graded rings, $g=h$, and therefore $x\otimes 1=1\otimes x$ for all $x\in S$. $(2)\Rightarrow (1)$ Let $g,h\colon S\rightarrow T$ be homomorphisms of $\Gamma$-graded rings such that $gf=hf$. Then there exists a well-defined map $F\colon S\otimes_R S\rightarrow T$, $x\otimes y\mapsto g(x)h(y)$. For each $x\in S$, since $x\otimes 1=1\otimes x$, we obtain that $g(x)=F(x\otimes 1)=F(1\otimes x)=h(x)$. Thus $g=h$, as desired. $(2)\Rightarrow (3)$ First note that $\mu$ is a homomorphism of $\Gamma$-graded $S$-bimodules. Clearly $\mu$ is surjective. Now, since $\mu\left(\sum_i x_i\otimes y_i\right)=\sum_i x_iy_i$, injectivity follows from the fact that $\sum_i x_i\otimes y_i=\sum_ix_i(1\otimes y_i)=\sum_ix_i(y_i\otimes 1)= \sum_ix_iy_i\otimes 1=\left(\sum_i x_iy_i\right)\otimes 1$. $(3)\Rightarrow (2)$ Since for each $x\in S$, $\mu(x\otimes 1)=x=\mu(1\otimes x)$ and $\mu$ is an isomorphism, the result follows. \end{proof} \begin{proposition} Let $R$ be a $\Gamma$-graded ring, $K$ be a $\Gamma$-graded division ring and $f\colon R\rightarrow K$ be a homomorphism of $\Gamma$-graded rings. Then $f$ is an epimorphism of graded rings if, and only if, $K=\DC(f)$. \end{proposition} \begin{proof} Suppose that $f\colon R\rightarrow K$ is an epimorphism of $\Gamma$-graded rings. Consider the graded division subring $\DC(f)$ of $K$. Let $\mathcal{B}$ be a set of homogeneous elements of $K$ that is a basis of $K$ as a right $\DC(f)$-module. 
Since $f$ is an epimorphism, the multiplication map $K\otimes_RK\rightarrow K$ is an isomorphism by the foregoing lemma; as it factors through the natural surjection $K\otimes_RK\rightarrow K\otimes_{\DC(f)}K$, the multiplication map $K\otimes_{\DC(f)}K\rightarrow K$ is also an isomorphism. Then we have the following isomorphisms of graded right $\DC(f)$-modules \begin{eqnarray*} K\cong K\otimes_{\DC(f)}K & \cong & \left(\bigoplus_{b\in\mathcal{B}}b\DC(f) \right)\otimes_{\DC(f)}K \cong\bigoplus_{b\in\mathcal{B}}\left(b\DC(f)\otimes_{\DC(f)}K\right) \\ &\cong&\bigoplus_{b\in\mathcal{B}}b\otimes_{\DC(f)} K \cong \bigoplus_{b\in\mathcal{B}}K(\gamma_b),\end{eqnarray*} for some $\gamma_b\in\Gamma$. Hence $\mathcal{B}$ must consist of just one element. Conversely, suppose that $\DC(f)=K$. Let $$\Sigma=\{A\in\mathfrak{M}(R)\colon A^f \textrm{ is invertible over }K\}.$$ Then $K=\DC(f)=R_f(\Sigma)$. By Theorem~\ref{theo:epimorphism}(4), $f\colon R\rightarrow K$ is a ring epimorphism, and therefore an epimorphism of $\Gamma$-graded rings. \end{proof} \begin{theorem}\label{theo:gradedlocal} Let $R$ be a $\Gamma$-graded ring. \begin{enumerate}[\rm(1)] \item If $\Sigma\subseteq\mathfrak{M}(R)$ is such that the universal localization $R_\Sigma$ is a $\Gamma$-graded local ring with maximal graded ideal $\mathfrak{m}$, then $R_\Sigma/\mathfrak{m}$ is a $\Gamma$-graded epic $R$-division ring. \item Let $K$ be a $\Gamma$-almost graded division ring and $f\colon R\rightarrow K$ be a homomorphism of $\Gamma$-almost graded rings such that $\DC(f)=K$. Let $$\Sigma=\{A\in\mathfrak{M}(R)\colon A^f \textrm{ is invertible over }K \}.$$ The following assertions hold true. \begin{enumerate}[\rm(a)] \item $R_\Sigma$ is a $\Gamma$-graded local ring. \item If $\mathfrak{m}$ is the maximal graded ideal of $R_\Sigma$, then $R_\Sigma/\mathfrak{m}$ is a $\Gamma$-graded epic $R$-division ring satisfying the following statements. 
\begin{enumerate}[\rm(i)] \item There exists a surjective homomorphism of $\Gamma$-almost graded rings $\widetilde{F}\colon R_\Sigma/\mathfrak{m}\rightarrow K$ such that the following diagram is commutative $$\xymatrix{R\ar[r]^\lambda\ar[rd]_f & R_\Sigma \ar[d]^F\ar[r]^\pi & R_\Sigma/\mathfrak{m}\ar[ld]^{\widetilde{F}} \\ & K & }$$ \item If $K$ is a $\Gamma$-graded division ring, then $\widetilde{F}\colon R_\Sigma/\mathfrak{m} \rightarrow K$ is an isomorphism of $\Gamma$-graded epic $R$-division rings. \end{enumerate} \end{enumerate} \end{enumerate} \end{theorem} \begin{proof} (1) The homomorphism $\lambda \colon R\rightarrow R_\Sigma$ is a ring epimorphism by Proposition~\ref{prop:basicsonuniversallocalization}(2). The natural homomorphism $\pi\colon R_\Sigma\rightarrow R_\Sigma/\mathfrak{m}$ is surjective. Therefore $\pi\lambda\colon R\rightarrow R_\Sigma/\mathfrak{m}$ is a ring epimorphism, and thus $(R_\Sigma/\mathfrak{m},\pi\lambda)$ is a $\Gamma$-graded epic $R$-division ring. (2) Let $\lambda\colon R\rightarrow R_\Sigma$ be the canonical homomorphism. Hence there exists a unique homomorphism of $\Gamma$-almost graded $R$-rings $F\colon R_\Sigma\rightarrow K$ such that $F\lambda=f$. Set $\mathfrak{m}=(\ker F)_g$, in other words, $\mathfrak{m}=\bigoplus\limits_{\gamma\in\Gamma}(\ker F\cap (R_\Sigma)_\gamma)\subseteq \ker F$. Let $x\in (R_\Sigma)_\gamma\setminus\mathfrak{m}$. Then $F(x)\neq 0$, and hence $F(x)\in K_\gamma$ is invertible in $K$ because $K$ is a $\Gamma$-almost graded division ring. By Proposition~\ref{prop:basicsonuniversallocalization}(4), $R_\Sigma=R_\lambda(\Sigma)$. 
Thus, there exist $\overline{\alpha}\in\Gamma^n$, $\overline{\beta}\in\Gamma^{n+1}$, $A\in M_{n\times (n+1)}(R)[\overline{\alpha}][\overline{\beta}]$ and $u\in M_{(n+1)\times 1} (R_\Sigma)$ such that $\beta_0=e$, $\beta_\infty=\gamma$, $u_0=1$, $u_\infty=x$, $(A_\bullet\ A_\infty)\in \Sigma$ and $$(A_0^\lambda\ A_\bullet^\lambda\ A_\infty^\lambda)\left(\begin{smallmatrix}1 \\ u_\bullet \\ x \end{smallmatrix}\right)=0.$$ Applying $F$ to the entries of the matrices involved we obtain $$(A_0^f\ A_\bullet^f\ A_\infty^f)\left(\begin{smallmatrix}1 \\ u_\bullet^F \\ F(x) \end{smallmatrix}\right)=0.$$ Since $F(x)$ is invertible, Cramer's rule (Lemma~\ref{lem:Cramersrule}) implies that $(A_0^f\ A_\bullet^f)$ is invertible over $K$. Therefore $(A_0\ A_\bullet)\in\Sigma$ and $(A_0^\lambda\ A_\bullet^\lambda)$ is invertible over $R_\Sigma$. Again by Cramer's rule, $x$ is invertible in $R_\Sigma$. Hence $R_\Sigma$ is a $\Gamma$-graded local ring where $\mathfrak{m}$ is the ideal generated by the noninvertible homogeneous elements of $R_\Sigma$, and (a) is proved. The ring $R_\Sigma/\mathfrak{m}$ is a $\Gamma$-graded division ring and, by (1), (b) follows. (i) and (ii) follow, respectively, because $\mathfrak{m}\subseteq \ker F$ and $\mathfrak{m}=\ker F$ if $K$ is a $\Gamma$-graded division ring. \end{proof} Now we proceed to define the category of graded epic $R$-division rings and gr-specializations. \medskip Let $R$ be a $\Gamma$-graded ring. Suppose that $(K,\varphi)$, $(L,\psi)$ are $\Gamma$-graded epic $R$-division rings and set $$\Sigma=\{A\in\mathfrak{M}(R)\colon A^\psi \textrm{ is invertible over } L\}.$$ If there exists a homomorphism of $\Gamma$-graded $R$-rings $\Phi\colon R_\Sigma\rightarrow K$, we define the \emph{core of $L$ in $K$} as $\mathfrak{C}_{L}(K)=\Phi(R_\Sigma)$. We remark that $\Phi$, if it exists, is unique, and observe that, by Proposition~\ref{prop:basicsonuniversallocalization}(4), $\mathfrak{C}_L(K)=R_\varphi(\Sigma)$. 
By Theorem~\ref{theo:gradedlocal}(2)(a), $R_\Sigma$ is a $\Gamma$-graded local ring. Therefore $\mathfrak{C}_L(K)$ is a $\Gamma$-graded local subring of $K$ that contains $R$. Moreover, the natural homomorphism of $\Gamma$-graded $R$-rings $\Psi\colon R_\Sigma\rightarrow L$ factors through $\mathfrak{C}_L(K)$ in a unique way, because $L\cong R_\Sigma/\mathfrak{m}$ where $\mathfrak{m}$ is the maximal graded ideal of $R_\Sigma$. A \emph{gr-subhomomorphism} is a homomorphism of $\Gamma$-graded $R$-rings $f\colon K_f\rightarrow L$ where $K_f$ is a graded $R$-subring of $K$ such that $x^{-1}\in K_f$ for each $x\in \h(K_f)\setminus \ker f$. Note that $K_f$ is a graded local subring of $K$ because any homogeneous element not in the graded ideal $\ker f$ is invertible. Hence $K_f/\ker f$ is a $\Gamma$-graded $R$-division ring contained in $L$. This implies that $f$ is a surjective homomorphism of $\Gamma$-graded $R$-rings and that $K_f/\ker f\cong L$ is a $\Gamma$-graded epic $R$-division ring. For each $A\in\Sigma$, consider $A^\varphi$, which belongs to $\mathfrak{M}(K)$. Since $K_f$ is a $\Gamma$-graded local $R$-ring whose residue graded division ring is $L$, we get that $A^\varphi$ is invertible in $K_f$. Thus there exists a unique homomorphism of graded $R$-rings $\Phi\colon R_\Sigma\rightarrow K_f\subseteq K$ and a commutative diagram of homomorphisms of $\Gamma$-graded $R$-rings \begin{equation}\label{eq:commutativityspecializations} \xymatrixrowsep{3pt} \xymatrixcolsep{20pt} \xymatrix{ & K_f\ar[dd]^f \\ R_\Sigma\ar[rd]_\Psi\ar[ru]^\Phi & \\ & L } \end{equation} Thus $\mathfrak{C}_L(K)$ is contained in the domain of any gr-subhomomorphism from $K$ to $L$; it is a $\Gamma$-graded local $R$-subring of $K$; the restriction of any gr-subhomomorphism to $\mathfrak{C}_L(K)$ is again a gr-subhomomorphism; and all such restrictions coincide on $\mathfrak{C}_L(K)$, because of the commutativity of \eqref{eq:commutativityspecializations}.
Now we give another description of $\mathfrak{C}_L(K)$. Let $f\colon K_f\rightarrow L$ be a gr-subhomomorphism between the $\Gamma$-graded epic $R$-division rings $(K,\varphi)$, $(L,\psi)$. For each $\gamma\in \Gamma$ define $(c(f)_0)_\gamma=\varphi(R_\gamma)$, and if $n\geq 0$, let $(c(f)_{n+1})_\gamma$ be the additive subgroup of $K$ generated by $$\{x_1\dotsm x_r\colon r\geq 1,\ x_i\in (c(f)_n)_{\gamma_i} \textrm{ or } x_i=y_i^{-1} \textrm{ where } y_i\in (c(f)_n)_{\gamma_i^{-1}}\setminus\ker f,\ \gamma_1\dotsm\gamma_r=\gamma \}.$$ Then define $c(f)_\gamma=\bigcup\limits_{n\geq 0} (c(f)_n)_\gamma$, and $C_L(K)=\bigoplus_{\gamma\in\Gamma} c(f)_\gamma.$ Note that $C_L(K)$ is a $\Gamma$-graded local $R$-subring of $K_f$ with maximal graded ideal $C_L(K)\cap\ker f$ and such that the restriction $f\colon C_L(K)\rightarrow L$ is a gr-subhomomorphism. If we take $K_f=\mathfrak{C}_L(K)$, then we obtain that $C_L(K)\subseteq \mathfrak{C}_L(K)$, but since $\mathfrak{C}_L(K)$ is contained in the domain of any gr-subhomomorphism, we get that $C_L(K)=\mathfrak{C}_L(K)$. Roughly speaking, this equality means that any homogeneous rational expression in the elements of (the image of) $R$ that makes sense in $L$ also makes sense in $K$, and the elements of $K$ obtained from such rational expressions form $\mathfrak{C}_L(K)$. Since, whenever gr-subhomomorphisms between the $\Gamma$-graded epic $R$-division rings $(K,\varphi)$ and $(L,\psi)$ exist, they all coincide on the core, we make the following definition. A \emph{gr-specialization} is the unique homomorphism of $\Gamma$-graded $R$-rings $f\colon \mathfrak{C}_L(K)\rightarrow L$. Suppose that $(K,\varphi)$, $(L,\psi)$ and $(M,\phi)$ are $\Gamma$-graded epic $R$-division rings.
If $f\colon K_f\rightarrow L$ and $g\colon L_g\rightarrow M$ are gr-subhomomorphisms, then the restriction $gf\colon P=f^{-1}(L_g)\rightarrow M$ is a gr-subhomomorphism, which will be called the \emph{composition gr-subhomomorphism} of $f$ and $g$. Indeed, suppose that $z\in\h(P)\setminus \ker(gf)$. Since $g(f(z))\neq 0$, we have $f(z)^{-1}\in L_g$. Moreover, $f(z)\neq 0$, so $z^{-1}\in K_f$; since $f(z^{-1})=f(z)^{-1}\in L_g$, it follows that $z^{-1}\in P$. We define the composition of the corresponding gr-specializations as the gr-specialization corresponding to the composition gr-subhomomorphism of $f$ and $g$; in other words, it is the unique homomorphism of $\Gamma$-graded $R$-rings $\mathfrak{C}_M(K)\rightarrow M$. It follows that the composition of gr-specializations is associative. Note that the only gr-subhomomorphism from the $\Gamma$-graded epic $R$-division ring $(K,\varphi)$ to $(K,\varphi)$ is the identity map on $K$. Therefore $\mathfrak{C}_K(K)=K$ and the corresponding gr-specialization is the identity map. We define the category $\mathcal{E}_R$ as the category whose objects are the $\Gamma$-graded epic $R$-division rings and whose morphisms are the gr-specializations. We remark that there is at most one morphism between two objects in this category and that isomorphisms correspond to isomorphisms of $\Gamma$-graded $R$-rings. Indeed, if the composition of two gr-specializations $f$ and $g$ is the identity gr-specialization, then they have to be isomorphisms of $\Gamma$-graded $R$-rings. An initial object $(K,\varphi)$ in the category $\mathcal{E}_R$ is a \emph{universal $\Gamma$-graded epic $R$-division ring}. In other words, there exists a gr-specialization from $(K,\varphi)$ to any other $\Gamma$-graded epic $R$-division ring $(L,\psi)$. If, moreover, $\varphi\colon R\rightarrow K$ is injective, we say that this initial object is a \emph{universal $\Gamma$-graded epic $R$-division ring of fractions of $R$}. Now we give the following important result.
\begin{theorem}\label{theo:specialization} Let $R$ be a $\Gamma$-graded ring and let $(K_1,\varphi_1)$, $(K_2,\varphi_2)$ be $\Gamma$-graded epic $R$-division rings. Set $$\Sigma_i=\{A\in\mathfrak{M}(R)\colon A^{\varphi_i} \textrm{ is invertible over } K_i \},\ i=1,2.$$ The following statements are equivalent. \begin{enumerate}[\rm(1)] \item There exists a gr-specialization from $(K_1,\varphi_1)$ to $(K_2,\varphi_2)$. \item $\Sigma_2\subseteq \Sigma_1$. \item There exists a homomorphism $R_{\Sigma_2}\rightarrow R_{\Sigma_1}$ of $\Gamma$-graded $R$-rings. \end{enumerate} Furthermore, if there exists a gr-specialization from $(K_1,\varphi_1)$ to $(K_2,\varphi_2)$ and another gr-specialization from $(K_2,\varphi_2)$ to $(K_1,\varphi_1)$, then $K_1$ and $K_2$ are isomorphic $\Gamma$-graded $R$-rings. \end{theorem} \begin{proof} $(1)\Rightarrow(2)$ By definition, there exists a homomorphism of $\Gamma$-graded $R$-rings $\mathfrak{C}_{K_2}(K_1)\rightarrow K_2$. By definition of $\mathfrak{C}_{K_2}(K_1)$, any matrix in $\Sigma_2$ is invertible over $\mathfrak{C}_{K_2}(K_1)\subseteq K_1$. Thus $\Sigma_2\subseteq \Sigma_1$. $(2)\Rightarrow(3)$ If $\Sigma_2\subseteq \Sigma_1$, the universal property of $R_{\Sigma_2}$ implies the existence of a homomorphism of $\Gamma$-graded $R$-rings $R_{\Sigma_2}\rightarrow R_{\Sigma_1}$. $(3)\Rightarrow(1)$ Consider the unique homomorphisms of $\Gamma$-graded $R$-rings $\Phi_i\colon R_{\Sigma_i}\rightarrow K_i$, $i=1,2$. Let $h\colon R_{\Sigma_2}\rightarrow R_{\Sigma_1}$ be a homomorphism of $\Gamma$-graded $R$-rings. Then there exists the homomorphism of graded $R$-rings $\Phi_1h\colon R_{\Sigma_2}\rightarrow K_1$. By what has been explained above, $\Phi_2$ factors through $\mathfrak{C}_{K_2}(K_1)$, and this gives the desired gr-specialization. Now suppose that there exist gr-specializations $f\colon \mathfrak{C}_{K_2}(K_1)\rightarrow K_2$ and $g\colon \mathfrak{C}_{K_1}(K_2)\rightarrow K_1$.
Then the composition $gf$ gives a gr-specialization from $K_1$ to itself. Thus it has to be the identity. Similarly, the composition $fg$ gives a gr-specialization from $K_2$ to itself. Hence, $f$ is an isomorphism in the category $\mathcal{E}_R$ of $\Gamma$-graded epic $R$-division rings. Therefore, $f$ is an isomorphism of graded $R$-rings. \end{proof} \begin{corollary}\label{coro:divisionringuniversallocalization} Let $R$ be a $\Gamma$-graded ring. Suppose that there exists $\Omega\subseteq\mathfrak{M}(R)$ such that $(R_\Omega,\lambda)$, where $\lambda\colon R\rightarrow R_\Omega$ is the canonical homomorphism, is a $\Gamma$-graded (epic) $R$-division ring. Then the only gr-specializations to $(R_\Omega,\lambda)$ are isomorphisms of $\Gamma$-graded $R$-rings. \end{corollary} \begin{proof} Suppose there exists a gr-specialization from the $\Gamma$-graded epic $R$-division ring $(K,\varphi)$ to $(R_\Omega,\lambda)$. By Theorem~\ref{theo:specialization}(3), there exists a (unique) homomorphism of $\Gamma$-graded $R$-rings $R_\Omega\rightarrow R_\Sigma\rightarrow K$, where $$\Sigma=\{A\in\mathfrak{M}(R)\colon A^\varphi \textrm{ is invertible over } K\}.$$ Now, since $R_\Omega$ and $K$ are $\Gamma$-graded epic $R$-division rings, the image of $R_\Omega$ must be $K$ and therefore they are isomorphic as $\Gamma$-graded $R$-rings. \end{proof} \begin{corollary} Let $R$ be a $\Gamma$-graded ring with a universal $\Gamma$-graded epic $R$-division ring $(U,\rho)$. Suppose that $\Sigma\subseteq\mathfrak{M}(R)$ is such that there exists a homomorphism of $\Gamma$-graded rings $R_\Sigma\rightarrow L$ for some $\Gamma$-graded division ring $L$. Then $(U,\rho)$ is a universal $\Gamma$-graded epic $R_\Sigma$-division ring. \end{corollary} \begin{proof} Consider the canonical homomorphism $\lambda\colon R\rightarrow R_\Sigma$. Let $f\colon R_\Sigma\rightarrow L$ be a homomorphism of $\Gamma$-graded rings with $L$ a $\Gamma$-graded division ring.
Then $(\DC(f),f\lambda)$ is a $\Gamma$-graded epic $R$-division ring such that the matrices in $\Sigma$ become invertible. Hence, by Theorem~\ref{theo:specialization}, $\Sigma^\rho$ consists of invertible matrices in $U$. Thus there exists a unique homomorphism of $\Gamma$-graded rings $\psi\colon R_\Sigma\rightarrow U$ and $(U,\psi)$ is a $\Gamma$-graded epic $R_\Sigma$-division ring. Consider a $\Gamma$-graded epic $R_\Sigma$-division ring $(K,\varphi)$. The composition $\varphi\lambda\colon R\rightarrow K$ is an epimorphism of $\Gamma$-graded rings, because $\lambda$ and $\varphi$ are. Hence $(K,\varphi\lambda)$ is a $\Gamma$-graded epic $R$-division ring and therefore there exists a gr-specialization from $(U,\rho)$ to $(K,\varphi\lambda)$. \end{proof} Adapting \cite[p.426]{Cohnfreeeidealringslocalization} to the graded context, we give some examples to illustrate the concepts of universal graded division ring and graded division rings that are universal localizations. Let $R=\bigoplus\limits_{\gamma\in\Gamma}R_\gamma$ be a commutative $\Gamma$-graded domain. Then the localization of $R$ at the set $\h(R)\setminus\{0\}$ of nonzero homogeneous elements yields a $\Gamma$-graded epic $R$-field $(F,\varphi)$. We point out that $F=\bigoplus\limits_{\gamma\in\Gamma}F_\gamma$ is a $\Gamma$-graded field with $$F_\gamma=\{ab^{-1}\mid a\in R_\delta, b\in R_\varepsilon, \delta\varepsilon^{-1}=\gamma\}$$ for each $\gamma\in\Gamma$. Furthermore, if $(K,\psi)$ is a $\Gamma$-graded epic $R$-division ring, then $\ker \psi$ is a graded prime ideal of $R$. That is, $\ker\psi\neq R$ and if $x,y\in \h(R)$ are such that $xy\in\ker\psi$, then $x\in\ker\psi$ or $y\in \ker\psi$. Hence $\h(R)\setminus\ker \psi$ is a multiplicative subset of $R$. Then the localization of $R$ at $\h(R)\setminus\ker \psi$ is a $\Gamma$-graded local subring of $F$ with $\Gamma$-graded residue division ring $R$-isomorphic to $K$.
Therefore $(F,\varphi)$ is a universal $\Gamma$-graded epic $R$-division ring of fractions that is a universal localization. Let $S=E\times F$ be the direct product of two $\Gamma$-graded fields $E=\bigoplus\limits_{\gamma\in\Gamma} E_\gamma$ and $F=\bigoplus\limits_{\gamma\in\Gamma} F_\gamma$. Then $S=\bigoplus\limits_{\gamma\in\Gamma} S_\gamma$ is a $\Gamma$-graded ring with $S_\gamma=E_\gamma\times F_\gamma$. Suppose $(D,\rho)$ is a $\Gamma$-graded epic $S$-division ring. Since $(1,1)=(1,0)+(0,1)$ and $(1,0)(0,1)=(0,0)$, then either $\rho(1,0)=0$ or $\rho(0,1)=0$. If $\rho(1,0)=0$, then $\rho(E\times\{0\})=0$ and if $\rho(0,1)=0$, then $\rho(\{0\}\times F)=0$. Hence $S$ has only two epic $S$-division rings, which are $E$ and $F$. Note that neither of them is a universal $\Gamma$-graded epic $S$-division ring. On the other hand, both are universal localizations. For example, $E$ is the universal localization of $S$ at $\{(1,1)\}\cup\{(a,0)\mid a\in \h(E)\setminus\{0\}\}$. Let now $E$ be a $\Gamma$-graded field. Then the polynomial ring $E[x]=\bigoplus\limits_{\gamma\in\Gamma} E[x]_\gamma$ is a $\Gamma$-graded ring with $$E[x]_\gamma=E_\gamma[x]=\{a_0+a_1x+\dotsb+a_nx^n\mid a_i\in E_\gamma, n\in\mathbb{N}\}.$$ The ideal $(x^2)$ is a graded ideal of $E[x]$. Hence $T=E[x]/(x^2)$ is a $\Gamma$-graded local ring with maximal graded ideal $(x)/(x^2)$. Then $E$ is the unique $\Gamma$-graded epic $T$-division ring, and thus $E$ is a universal $\Gamma$-graded epic $T$-division ring. Notice that $E$ is not a universal localization at matrices in $\mathfrak{M}(T)$ because the matrices which become invertible in $E$ are already invertible in $T$, since $E$ is the $\Gamma$-graded residue division ring of $T$. The ring $U=T\times F$ with $T$ as before and $F$ a $\Gamma$-graded field has $E$ and $F$ as $\Gamma$-graded epic $U$-division rings, but only $F$ is a universal localization.
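As a simple concrete instance of the first example, take $\Gamma=\mathbb{Z}$ and $R=k[t]$, the polynomial ring over a field $k$ graded by $\deg t=1$, so that $R_n=kt^n$ for $n\geq 0$ and $R_n=0$ for $n<0$. Every nonzero homogeneous element of $R$ is of the form $at^n$ with $a\in k\setminus\{0\}$, and localizing at these elements yields the Laurent polynomial ring $$F=k[t,t^{-1}],\qquad F_n=kt^n \ \textrm{ for each } n\in\mathbb{Z},$$ in agreement with the description of $F_\gamma$ above: if $a=ct^j\in R_j$ and $b=dt^l\in R_l$, then $ab^{-1}=(cd^{-1})t^{j-l}\in F_{j-l}$. Thus $(k[t,t^{-1}],\varphi)$ is a universal $\mathbb{Z}$-graded epic $k[t]$-division ring of fractions.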
\section{Malcolmson's Criterion}\label{sec:Malcolmsonscriterion} \emph{Throughout this section, let $\Gamma$ be a group.} \medskip In this section, we show that the natural extension of the results and arguments of the paper by P. Malcolmson \cite{Malcolmsonscriterion} works for $\Gamma$-graded rings. The main results of this section, and the only ones that will be used later, are Theorem~\ref{theo:Malcolmsoncriterion} and Corollary~\ref{coro:Malcolmsoncriterion}. The proof of Theorem~\ref{theo:Malcolmsoncriterion} is very technical and most of this long section is devoted to proving it. In this section, for ease of exposition, we use the following notation. By the expression \emph{$A$ is a homogeneous matrix}, we mean $A\in \mathfrak{M}_\bullet (R)$. We will also use the terms \emph{homogeneous row}, \emph{homogeneous column} to emphasize that the matrix in question is a row or a column, respectively. If $A\in M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]$, but we do not want to make reference to the size of $A$, we will say \emph{$A$ is a homogeneous matrix of distribution $(\alpha,\beta)$}. Also, the sequence $\overline{\alpha}\gamma$ will be denoted by $\alpha\gamma$ for each $\overline{\alpha}\in\Gamma^n$ and $\gamma\in \Gamma$. \begin{theorem}\label{theo:Malcolmsoncriterion} Let $R$ be a $\Gamma$-graded ring and $\Sigma$ be a gr-lower semimultiplicative subset of $\mathfrak{M}(R)$. Consider the canonical homomorphism of $\Gamma$-graded rings $\lambda\colon R\rightarrow R_\Sigma$.
For $\gamma\in\Gamma$, a homogeneous element $r\in R_\gamma$ belongs to $\ker \lambda$ if and only if there exist $L,M,P,Q\in\Sigma$, homogeneous rows $J,U$ and homogeneous columns $W,V$ such that \[ \left(\begin{array}{cc|c} L&0&W\\0&M&0\\\hline 0&J&r \end{array}\right)= \left(\begin{array}{c} P\\\hline U \end{array}\right) \left(\begin{array}{c|c} Q&V \end{array}\right), \] where $P,U,Q,V$ have distributions $(\pi,\omega),(\gamma,\omega),(\omega,\theta),(\omega,e)$, respectively. \end{theorem} \begin{corollary}\label{coro:Malcolmsoncriterion} Let $\Gamma$ be a group, $R$ be a $\Gamma$-graded ring and $\Sigma$ be a gr-multiplicative subset of $\mathfrak{M}(R)$ consisting of gr-full matrices. Then $R_\Sigma$ is a nonzero $\Gamma$-graded ring. \end{corollary} \begin{proof} It is enough to prove that $1\in R_e$ is not in the kernel of the canonical homomorphism of graded rings $\lambda\colon R\rightarrow R_\Sigma$. Suppose that $1\in \ker \lambda$. Then, by Theorem~\ref{theo:Malcolmsoncriterion}, there exist $L,M,P,Q\in\Sigma$, homogeneous rows $J,U$ and homogeneous columns $W,V$ such that \[ \left(\begin{array}{cc|c} L&0&W\\0&M&0\\\hline 0&J&1 \end{array}\right)= \left(\begin{array}{c} P\\\hline U \end{array}\right) \left(\begin{array}{c|c} Q&V \end{array}\right), \] where $P,U,Q,V$ have distributions $(\pi,\omega),(\gamma,\omega),(\omega,\theta),(\omega,e)$, respectively. Performing elementary column operations, we obtain \[ \left(\begin{array}{cc|c} L&-WJ&W\\0&M&0\\\hline 0&0&1 \end{array}\right)= \left(\begin{array}{c} P\\\hline U \end{array}\right) \left(\begin{array}{c|c} Q'&V \end{array}\right), \] where $P,U,Q',V$ have distributions $(\pi,\omega),(\gamma,\omega),(\omega,\theta),(\omega,e)$, respectively. Since $\Sigma$ is gr-multiplicative, it is also upper gr-semimultiplicative by Remark~\ref{rem:grmultiplicative}. Thus the matrix $\left(\begin{smallmatrix} L&-WJ&W\\0&M&0\\ 0&0&1 \end{smallmatrix}\right)$ belongs to $\Sigma$, but it is not gr-full, a contradiction.
Therefore, $1\notin\ker\lambda$. \end{proof} \subsection{Equivalence relation} Let $\Gamma$ be a group and $R$ be a $\Gamma$-graded ring. Let $\Sigma$ be a gr-lower semimultiplicative subset of $\mathfrak{M}(R)$. For $\gamma\in \Gamma$, let $(T_\Sigma)_\gamma$ be the set of $5$-tuples $(F,A,X,\alpha,\beta)$ where $A\in \Sigma$ has distribution $(\alpha,\beta)$, $F$ is a homogeneous row of distribution $(\gamma,\beta)$, and $X$ is a homogeneous column of distribution $(\alpha,e)$. Let $(F,A,X,\alpha,\beta), (G,B,Y,\delta,\varepsilon)\in (T_\Sigma)_\gamma$. We say that $$(F,A,X,\alpha,\beta)\sim (G,B,Y,\delta,\varepsilon)$$ if and only if there exist $L,M,P,Q\in \Sigma$, homogeneous rows $J,U$ and homogeneous columns $W,V$ such that \begin{equation}\label{eq:equivalencerelation} \left(\begin{array}{cccc|c} A&0&0&0&X\\0&B&0&0&Y\\0&0&L&0&W\\0&0&0&M&0\\\hline F&-G&0&J&0 \end{array}\right)= \left(\begin{array}{c} P\\\hline U \end{array}\right) \left(\begin{array}{c|c} Q&V \end{array}\right) \end{equation} where $P,U,Q,V$ have distributions $(\pi,\omega),(\gamma,\omega),(\omega, \theta),(\omega,e)$, respectively, and, if we write $$\pi=\pi_1*\pi_2*\pi_3*\pi_4\ \textrm{ and } \ \theta=\theta_1*\theta_2*\theta_3*\theta_4,$$ then $\pi_1=\alpha$, $\pi_2=\delta$, $\theta_1=\beta$, $\theta_2=\varepsilon$. The right hand side of \eqref{eq:equivalencerelation} will also be denoted by \[ \left(\begin{array}{cccc} P_{11}&P_{12}&P_{13}&P_{14}\\P_{21}&P_{22}&P_{23}&P_{24}\\P_{31}&P_{32}&P_{33}&P_{34}\\P_{41}&P_{42}&P_{43}&P_{44}\\\hline U_{1}&U_{2}&U_{3}&U_{4} \end{array}\right) \left(\begin{array}{cccc|c} Q_{11}&Q_{12}&Q_{13}&Q_{14}&V_{1}\\Q_{21}&Q_{22}&Q_{23}&Q_{24}&V_{2}\\Q_{31}&Q_{32}&Q_{33}&Q_{34}&V_{3}\\Q_{41}&Q_{42}&Q_{43}&Q_{44}&V_{4} \end{array}\right).
\] \begin{lemma} Let $(F,A,X,\alpha,\beta),(G,B,Y,\delta,\varepsilon)\in (T_\Sigma)_\gamma$. If there is a factorization as a product of homogeneous matrices, with $L,M,P,Q\in \Sigma$ and with the corresponding distributions, of one of the following forms \begin{enumerate}[\rm(1)] \item $ \left(\begin{array}{cc|c} A&0&X\\0&B&Y\\\hline F&-G&0 \end{array}\right)= \left(\begin{array}{c} P\\\hline U \end{array}\right) \left(\begin{array}{c|c} Q&V \end{array}\right) $ \item $ \left(\begin{array}{ccc|c} A&0&0&X\\0&B&0&Y\\0&0&L&W\\\hline F&-G&0&0 \end{array}\right)= \left(\begin{array}{c} P\\\hline U \end{array}\right) \left(\begin{array}{c|c} Q&V \end{array}\right) $ \item $ \left(\begin{array}{ccc|c} A&0&0&X\\0&B&0&Y\\0&0&M&0\\\hline F&-G&J&0 \end{array}\right)= \left(\begin{array}{c} P\\\hline U \end{array}\right) \left(\begin{array}{c|c} Q&V \end{array}\right) $ \end{enumerate} \noindent then $(F,A,X,\alpha,\beta)\sim (G,B,Y,\delta,\varepsilon)$. \end{lemma} \begin{proof} (1) Suppose $P,U,Q,V$ have distributions $(\pi,\omega)$, $(\gamma,\omega)$, $(\omega,\theta)$, $(\omega,e)$ where $\pi_1=\alpha$, $\pi_2=\delta$, $\theta_1=\beta$ and $\theta_2=\varepsilon$, and that we have the factorization \[ \left(\begin{array}{cc|c} A&0&X\\0&B&Y\\\hline F&-G&0 \end{array}\right)= \left(\begin{array}{cc} P_{11}&P_{12}\\P_{21}&P_{22}\\\hline U_1&U_2 \end{array}\right) \left(\begin{array}{cc|c} Q_{11}&Q_{12}&V_1\\Q_{21}&Q_{22}&V_2 \end{array}\right) \] Thus, we have the factorization \[ \left(\begin{array}{cccc|c} A&0&0&0&X\\0&B&0&0&Y\\0&0&1&0&1\\0&0&0&1&0\\\hline F&-G&0&1&0 \end{array}\right) = \left(\begin{array}{cccc} P_{11}&P_{12}&0&0\\P_{21}&P_{22}&0&0\\0&0&1&0\\0&0&0&1\\\hline U_{1}&U_{2}&0&1 \end{array}\right) \left(\begin{array}{cccc|c} Q_{11}&Q_{12}&0&0&V_{1}\\Q_{21}&Q_{22}&0&0&V_{2}\\0&0&1&0&1\\0&0&0&1&0 \end{array}\right) \] where the factors of the right hand side have distributions \[ \begin{array}{cccc}
(\alpha*\delta*e*\gamma*\gamma,&\omega_1*\omega_2*e*\gamma),&(\omega_1*\omega_2*e*\gamma,&\beta*\varepsilon*e*\gamma*e) \end{array} \] \noindent (2) Suppose $P,U,Q,V$ have distributions $(\pi,\omega)$, $(\gamma,\omega)$, $(\omega,\theta)$, $(\omega,e)$ where $\pi_1=\alpha$, $\pi_2=\delta$, $\theta_1=\beta$ and $\theta_2=\varepsilon$ and we have the factorization: \[ \left(\begin{array}{ccc|c} A&0&0&X\\0&B&0&Y\\0&0&L&W\\\hline F&-G&0&0 \end{array}\right)= \left(\begin{array}{ccc} P_{11}&P_{12}&P_{13}\\P_{21}&P_{22}&P_{23}\\P_{31}&P_{32}&P_{33}\\\hline U_1&U_2&U_3 \end{array}\right) \left(\begin{array}{ccc|c} Q_{11}&Q_{12}&Q_{13}&V_1\\Q_{21}&Q_{22}&Q_{23}&V_2\\Q_{31}&Q_{32}&Q_{33}&V_3 \end{array}\right) \] Thus, we have the following equality \[\setlength{\arraycolsep}{4pt} \left(\begin{array}{cccc|c} A&0&0&0&X\\0&B&0&0&Y\\0&0&L&0&W\\0&0&0&1&0\\\hline F&-G&0&1&0 \end{array}\right) = \left(\begin{array}{cccc} P_{11}&P_{12}&P_{13}&0\\P_{21}&P_{22}&P_{23}&0\\P_{31}&P_{32}&P_{33}&0\\0&0&0&1\\\hline U_{1}&U_{2}&U_{3}&1 \end{array}\right) \left(\begin{array}{cccc|c} Q_{11}&Q_{12}&Q_{13}&0&V_{1}\\Q_{21}&Q_{22}&Q_{23}&0&V_{2}\\Q_{31}&Q_{32}&Q_{33}&0&V_{3}\\0&0&0&1&0 \end{array}\right) \] where the factors of the right hand side have distributions \[ \begin{array}{cccc} (\alpha*\delta*\pi_3*\gamma*\gamma,&\omega_1*\omega_2*\omega_3*\gamma),&(\omega_1*\omega_2*\omega_3*\gamma,&\beta*\varepsilon*\theta_3*\gamma*e) \end{array} \] \noindent(3) Suppose $P,U,Q,V$ have distributions $(\pi,\omega)$, $(\gamma,\omega)$, $(\omega,\theta)$, $(\omega,e)$ where $\pi_1=\alpha$, $\pi_2=\delta$, $\theta_1=\beta$ and $\theta_2=\varepsilon$ and that we have the factorization \[ \left(\begin{array}{ccc|c} A&0&0&X\\0&B&0&Y\\0&0&M&0\\\hline F&-G&J&0 \end{array}\right)= \left(\begin{array}{ccc} P_{11}&P_{12}&P_{13}\\P_{21}&P_{22}&P_{23}\\P_{31}&P_{32}&P_{33}\\\hline U_1&U_2&U_3 \end{array}\right) \left(\begin{array}{ccc|c} Q_{11}&Q_{12}&Q_{13}&V_1\\Q_{21}&Q_{22}&Q_{23}&V_2\\Q_{31}&Q_{32}&Q_{33}&V_3 
\end{array}\right) \] Thus, we have the factorization \[ \setlength{\arraycolsep}{2.5pt}\left(\begin{array}{cccc|c} A&0&0&0&X\\0&B&0&0&Y\\0&0&M&0&0\\0&0&0&M&0\\\hline F&-G&0&J&0 \end{array}\right) = \left(\begin{array}{cccc} P_{11}&P_{12}&P_{13}&0\\P_{21}&P_{22}&P_{23}&0\\P_{31}&P_{32}&P_{33}&0\\P_{31}&P_{32}&P_{33}&M\\\hline U_{1}&U_{2}&U_{3}&J \end{array}\right) \left(\begin{array}{cccc|c} Q_{11}&Q_{12}&Q_{13}&0&V_{1}\\Q_{21}&Q_{22}&Q_{23}&0&V_{2}\\Q_{31}&Q_{32}&Q_{33}&0&V_{3} \\0&0&-1&1&0 \end{array}\right) \] where the factors of the right hand side have distributions \[ \begin{array}{cccc} (\alpha*\delta*\pi_3*\pi_3*\gamma,&\omega_1*\omega_2*\omega_3*\theta_3),&(\omega_1*\omega_2*\omega_3*\theta_3,&\beta*\varepsilon*\theta_3*\theta_3*e) \end{array}, \] respectively. \end{proof} \begin{lemma} For each $\gamma\in\Gamma$, the relation $\sim$ defined in $(T_\Sigma)_\gamma$ is an equivalence relation. \end{lemma} \begin{proof} Let $(F,A,X,\alpha,\beta), (G,B,Y,\delta,\varepsilon), (H,C,Z,\zeta,\eta)\in (T_\Sigma)_\gamma$. The relation $\sim$ is reflexive. Indeed, we have the factorization \[ \left(\begin{array}{cc|c} A&0&X\\0&A&X\\\hline F&-F&0 \end{array}\right)= \left(\begin{array}{cc} I&0\\I&-A\\\hline 0&F \end{array}\right) \left(\begin{array}{cc|c} A&0&X\\I&-I&0 \end{array}\right) \] where the factors are homogeneous matrices that have distributions $(\alpha*\alpha*\gamma\,,\,\alpha*\beta)$ and $(\alpha*\beta\,,\,\beta*\beta*e)$, respectively. This shows that $(F,A,X,\alpha,\beta)\sim(F,A,X,\alpha,\beta)$. Suppose now that $(F,A,X,\alpha,\beta)\sim (G,B,Y,\delta,\varepsilon)$.
There exist $L,M,P,Q\in \Sigma$, homogeneous rows $J,U$ and homogeneous columns $W,V$ such that \begin{equation}\label{eq:equivalencerelation2} \setlength{\arraycolsep}{1.5pt}\left(\begin{array}{cccc|c} A&0&0&0&X\\0&B&0&0&Y\\0&0&L&0&W\\0&0&0&M&0\\\hline F&-G&0&J&0 \end{array}\right)= \left(\begin{array}{cccc} P_{11}&P_{12}&P_{13}&P_{14}\\P_{21}&P_{22}&P_{23}&P_{24}\\P_{31}&P_{32}&P_{33}&P_{34}\\P_{41}&P_{42}&P_{43}&P_{44}\\\hline U_{1}&U_{2}&U_{3}&U_{4} \end{array}\right) \left(\begin{array}{cccc|c} Q_{11}&Q_{12}&Q_{13}&Q_{14}&V_{1}\\Q_{21}&Q_{22}&Q_{23}&Q_{24}&V_{2}\\Q_{31}&Q_{32}&Q_{33}&Q_{34}&V_{3}\\Q_{41}&Q_{42}&Q_{43}&Q_{44}&V_{4} \end{array}\right), \end{equation} where $P,U,Q,V$ have distributions $(\pi,\omega),(\gamma,\omega),(\omega, \theta),(\omega,e)$, respectively, and \linebreak $\pi_1=\alpha$, $\pi_2=\delta$, $\theta_1=\beta$, $\theta_2=\varepsilon$. Then we have the factorization $$\scalebox{0.85}{$ \setlength{\arraycolsep}{2pt} \left(\begin{array}{cccccc|c} B&0&0&0&0&0&Y\\0&A&0&0&0&0&X\\0&0&B&0&0&0&0\\0&0&0&L&0&0&W\\0&0&0&0&M&0&0\\0&0&0&0&0&B&0\\\hline G&-F&0&0&-J&G&0 \end{array}\right)= \left(\begin{array}{cccccc} I&0&0&0&0&0\\0&P_{11}&P_{12}&P_{13}&P_{14}&0\\-I&P_{21}&P_{22}&P_{23}&P_{24}&0\\0&P_{31}&P_{32}&P_{33}&P_{34}&0\\0&P_{41}&P_{42}&P_{43}&P_{44}&0\\-I&P_{21}&P_{22}&P_{23}&P_{24}&B\\\hline 0&-U_1&-U_2&-U_3&-U_4&G \end{array}\right) \left(\begin{array}{cccccc|c} B&0&0&0&0&0&Y\\Q_{12}&Q_{11}&Q_{12}&Q_{13}&Q_{14}&0&V_1\\Q_{22}&Q_{21}&Q_{22}&Q_{23}&Q_{24}&0&V_2\\ Q_{32}&Q_{31}&Q_{32}&Q_{33}&Q_{34}&0&V_3\\Q_{42}&Q_{41}&Q_{42}&Q_{43}&Q_{44}&0&V_4\\0&0&-I&0&0&I&0 \end{array}\right), $}$$ where the factors have distributions $(\delta*\alpha*\delta*\pi_3*\pi_4*\delta*\gamma\,,\, \delta*\omega_1*\omega_2*\omega_3*\omega_4*\varepsilon)$ and $(\delta*\omega_1*\omega_2*\omega_3*\omega_4*\varepsilon\ , \ \varepsilon*\beta*\varepsilon*\theta_3* \theta_4*\varepsilon*e)$ respectively. 
Hence, $(G,B,Y,\delta,\varepsilon) \sim(F,A,X,\alpha,\beta)$, and the symmetric property of the relation $\sim$ is proved. Now we proceed to prove that $\sim$ satisfies the transitive property. Suppose that $(F,A,X,\alpha,\beta)\sim (G,B,Y,\delta,\varepsilon)$ and $(G,B,Y,\delta,\varepsilon)\sim(H,C,Z,\zeta,\eta)$. Hence, there exist $L,M,P,Q\in \Sigma$, homogeneous rows $J,U$ and homogeneous columns $W,V$ as in \eqref{eq:equivalencerelation2}, and there exist $L',M',P',Q'\in \Sigma$, homogeneous rows $J',U'$ and homogeneous columns $W',V'$ such that \[ \setlength{\arraycolsep}{1.5pt}\left(\begin{array}{cccc|c} B&0&0&0&Y\\0&C&0&0&Z\\0&0&L'&0&W'\\0&0&0&M'&0\\\hline G&-H&0&J'&0 \end{array}\right)= \left(\begin{array}{cccc} P_{11}'&P_{12}'&P_{13}'&P_{14}'\\P_{21}'&P_{22}'&P_{23}'&P_{24}'\\P_{31}'&P_{32}'&P_{33}'&P_{34}'\\P_{41}'&P_{42}'&P_{43}'&P_{44}'\\\hline U_{1}'&U_{2}'&U_{3}'&U_{4}' \end{array}\right) \left(\begin{array}{cccc|c} Q_{11}'&Q_{12}'&Q_{13}'&Q_{14}'&V_{1}'\\Q_{21}'&Q_{22}'&Q_{23}'&Q_{24}'&V_{2}'\\Q_{31}'&Q_{32}'&Q_{33}'&Q_{34}'&V_{3}'\\Q_{41}'&Q_{42}'&Q_{43}'&Q_{44}'&V_{4}' \end{array}\right), \] where $P',U',Q',V'$ have distributions $(\pi',\omega'),(\gamma,\omega'),(\omega', \theta'),(\omega',e)$, respectively, and $\pi_1'=\delta$, $\pi_2'=\zeta$, $\theta_1'=\varepsilon$, $\theta_2'=\eta$. 
Then we have the factorization of the matrix \[ \left(\begin{array}{ccccccccccc|c} C & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & Z \\ 0 & A & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & X \\ 0 & 0 & B & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & Y \\ 0 & 0 & 0 & L & 0 & 0 & 0 & 0 & 0 & 0 & 0 & W \\ 0 & 0 & 0 & 0 & M & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & L' & 0 & 0 & 0 & 0 & 0 & W' \\ 0 & 0 & 0 & 0 & 0 & 0 & B & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & C & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & L' & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & M' & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & M & 0 \\ \hline H &-F & 0 & 0 & 0 & 0 &-G & H & 0 &-J' &J & 0 \end{array}\right) \] as a product of the matrices \[ \scalebox{0.78}{$\setlength{\arraycolsep}{0.4pt} \left(\begin{array}{ccccccccccc} I & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & P_{11} & P_{12} & P_{13} & P_{14} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & P_{21} & P_{22} & P_{23} & P_{24} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & P_{31} & P_{32} & P_{33} & P_{34} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & P_{41} & P_{42} & P_{43} & P_{44} & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & I & 0 & 0 & 0 & 0 & 0 \\ 0 & -P_{21} & -P_{22} & -P_{23} & -P_{24} & 0 & P_{11}' & P_{12}' & P_{13}' & P_{14}' & 0 \\ -I & 0 & 0 & 0 & 0 & 0 & P_{21}' & P_{22}' & P_{23}' & P_{24}' & 0 \\ 0 & 0 & 0 & 0 & 0 & -I & P_{31}' & P_{32}' & P_{33}' & P_{34}' & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & P_{41}' & P_{42}' & P_{43}' & P_{44}' & 0 \\ 0 & -P_{41} & -P_{42} & -P_{43} & -P_{44} & 0 & 0 & 0 & 0 & 0 & M \\ \hline 0 & -U_1 & -U_2 & -U_3 & -U_4 & 0 & -U_1' & -U_2' & -U_3' & -U_4' & J \end{array}\right)\!\! 
\left(\begin{array}{ccccccccccc|c} C & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & Z \\ 0 & Q_{11} & Q_{12} & Q_{13} & Q_{14} & 0 & 0 & 0 & 0 & 0 & 0 & V_1 \\ 0 & Q_{21} & Q_{22} & Q_{23} & Q_{24} & 0 & 0 & 0 & 0 & 0 & 0 & V_2 \\ 0 & Q_{31} & Q_{32} & Q_{33} & Q_{34} & 0 & 0 & 0 & 0 & 0 & 0 & V_3 \\ 0 & Q_{41} & Q_{42} & Q_{43} & Q_{44} & 0 & 0 & 0 & 0 & 0 & 0 &V_4 \\ 0 & 0 & 0 & 0 & 0 & L' & 0 & 0 & 0 & 0 & 0 & W' \\ Q_{12}' & 0 & Q_{11}' & 0 & 0 & Q_{13}' & Q_{11}' & Q_{12}' & Q_{13}' & Q_{14}' & 0 & V_1'\\ Q_{22}' & 0 & Q_{21}' & 0 & 0 & Q_{23}' & Q_{21}' & Q_{22}' & Q_{23}' & Q_{24}' & 0 & V_2'\\ Q_{32}' & 0 & Q_{31}' & 0 & 0 & Q_{33}' & Q_{31}' & Q_{32}' & Q_{33}' & Q_{34}' & 0 & V_3'\\ Q_{42}' & 0 & Q_{41}' & 0 & 0 & Q_{43}' & Q_{41}' & Q_{42}' & Q_{43}' & Q_{44}' & 0 & V_4'\\ 0 & 0 & 0 & 0 & I & 0 & 0 & 0 & 0 & 0 & I & 0 \\ \end{array}\right)$}, \] where the factors have distributions $$(\zeta*\alpha*\delta*\pi_3*\pi_4*\pi_3'*\delta*\zeta*\pi_3'*\pi_4'*\pi_4*\gamma \ , \ \zeta*\omega_1*\omega_2*\omega_3*\omega_4*\pi_3'*\omega_1'*\omega_2'*\omega_3'*\omega_4'*\theta_4),$$ $$(\zeta*\omega_1*\omega_2*\omega_3*\omega_4*\pi_3'*\omega_1'*\omega_2'*\omega_3'*\omega_4'*\theta_4 \ , \ \eta*\beta*\varepsilon*\theta_3*\theta_4*\theta_3'*\varepsilon*\eta*\theta_3'*\theta_4'*\theta_4*e),$$ respectively. This factorization implies that $(F,A,X,\alpha,\beta)\sim (H,C,Z,\zeta,\eta)$, as desired. \end{proof} \subsection{Operations}\label{subsec:operations} Let $\gamma\in \Gamma$. If $(F',A',X',\alpha',\beta'), (F,A,X,\alpha,\beta)\in (T_\Sigma)_\gamma$, then we define $$(F',A',X',\alpha',\beta') + (F,A,X,\alpha,\beta)= \left(\begin{pmatrix}F'&F\end{pmatrix},\begin{pmatrix}A'&0\\0&A\end{pmatrix},\begin{pmatrix}X'\\X\end{pmatrix}, \alpha'*\alpha,\beta'*\beta\right).$$ Note that it belongs to $(T_\Sigma)_\gamma$. 
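Although it is not needed in the proofs, it may help to keep in mind the heuristic behind these operations, which is the same as in Malcolmson's ungraded construction \cite{Malcolmsonscriterion}: the tuple $(F,A,X,\alpha,\beta)$ is intended to represent the element $F^\lambda(A^\lambda)^{-1}X^\lambda$ of $R_\Sigma$. The block sum just defined then represents the sum of the represented elements, since, for invertible $A$ and $A'$, $$\begin{pmatrix}F'&F\end{pmatrix}\begin{pmatrix}A'&0\\0&A\end{pmatrix}^{-1}\begin{pmatrix}X'\\X\end{pmatrix}=F'(A')^{-1}X'+FA^{-1}X,$$ and the product defined below represents the product of the represented elements, since $$\begin{pmatrix}0&F'\end{pmatrix}\begin{pmatrix}A&0\\-X'F&A'\end{pmatrix}^{-1}\begin{pmatrix}X\\0\end{pmatrix}=F'(A')^{-1}X'FA^{-1}X,$$ where, to lighten notation, we write $F$ instead of $F^\lambda$, and so on.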
If $(F',A',X',\alpha',\beta')\in (T_\Sigma)_{\gamma'}$ and $(F,A,X,\alpha,\beta)\in (T_\Sigma)_\gamma$, then we define \[ (F',A',X',\alpha',\beta')\cdot(F,A,X,\alpha,\beta)= \setlength{\arraycolsep}{1.2pt} \left(\begin{pmatrix}0&F'\end{pmatrix},\begin{pmatrix}A&0\\-X'F& A'\end{pmatrix},\begin{pmatrix}X\\0\end{pmatrix}, \alpha*\alpha'\gamma, \beta*\beta'\gamma\!\right). \] Note that this element belongs to $(T_\Sigma)_{\gamma'\gamma}$ because the homogeneous matrix $(0\ F')$ has distribution $(\gamma'\gamma,\beta*\beta'\gamma)$ and the homogeneous matrix $\left(\begin{smallmatrix} X \\ 0 \end{smallmatrix}\right)$ has distribution $(\alpha*\alpha'\gamma,e)$. If $(F,A,X,\alpha,\beta)\in (T_\Sigma)_\gamma$, we define $$-(F,A,X,\alpha,\beta)=(-F,A,X,\alpha,\beta)\in (T_\Sigma)_\gamma.$$ Finally, if $r\in R_\gamma$, we define $$\mu(r)=(r,1,1,e,e)\in (T_\Sigma)_\gamma.$$ Now we prove a series of lemmas that show the compatibility of the operations just defined and the equivalence relation $\sim$. \begin{lemma}\label{lem:easywelldefined} The following assertions hold true. 
\begin{enumerate}[\rm(1)] \item If $(F',A,X,\alpha,\beta),(F,A,X,\alpha,\beta)\in(T_\Sigma)_\gamma$, then $$(F',A,X,\alpha,\beta)+(F,A,X,\alpha,\beta)\sim(F'+F,A,X,\alpha,\beta).$$ \item If $(F,A,X',\alpha,\beta),(F,A,X,\alpha,\beta)\in(T_\Sigma)_\gamma$, then $$(F,A,X',\alpha,\beta)+(F,A,X,\alpha,\beta)\sim (F,A,X'+X,\alpha,\beta).$$ \item If $r\in R_{\gamma'}$ and $(F,A,X,\alpha,\beta)\in(T_\Sigma)_\gamma$, then $$\mu(r)\cdot(F,A,X,\alpha,\beta)\sim(rF,A,X,\alpha,\beta)\in (T_\Sigma)_{\gamma\gamma'}.$$ \item If $(F',A',X',\alpha',\beta')\in(T_\Sigma)_{\gamma'}$ and $r\in R_\gamma$, then $$(F',A',X',\alpha',\beta')\cdot\mu(r)\sim(F',A',X'r,\alpha'\gamma,\beta'\gamma)\in (T_\Sigma)_{\gamma\gamma'}.$$ \end{enumerate} \end{lemma} \begin{proof} (1) It follows from the following factorization \[ \left(\begin{array}{ccc|c} A&0&0&X\\0&A&0&X\\0&0&A&X\\\hline F'&F&-F'-F&0 \end{array}\right)= \left(\begin{array}{ccc} I&0&0\\I&A&0\\I&0&A\\\hline 0&F&-F'-F \end{array}\right) \left(\begin{array}{ccc|c} A&0&0&X\\-I&I&0&0\\-I&0&I&0 \end{array}\right) \] where the factors of the right hand side have distributions $(\alpha*\alpha*\alpha*\gamma\, , \, \alpha*\beta*\beta)$ and $(\alpha*\beta*\beta\, , \, \beta*\beta*\beta*e)$, respectively. (2) It follows from the equality \[ \left(\begin{array}{ccc|c} A&0&0&X'\\0&A&0&X\\0&0&A&X'+X\\\hline F&F&-F&0 \end{array}\right)= \left(\begin{array}{ccc} I&0&0\\0&I&0\\I&I&A\\\hline 0&0&-F \end{array}\right) \left(\begin{array}{ccc|c} A&0&0&X'\\0&A&0&X\\-I&-I&I&0 \end{array}\right) \] where the factors of the right hand side have distributions $(\alpha*\alpha*\alpha*\gamma\, , \, \alpha*\alpha*\beta)$ and $(\alpha*\alpha*\beta\, , \, \beta*\beta*\beta*e)$, respectively.
(3) It follows from the factorization \[ \left(\begin{array}{ccc|c} A&0&0&X\\-F&1&0&0\\0&0&A&X\\\hline0&r&-rF&0 \end{array}\right)= \left(\begin{array}{ccc} I&0&0\\0&1&0\\I&0&A\\\hline 0&r&-rF \end{array}\right) \left(\begin{array}{ccc|c} A&0&0&X\\-F&1&0&0\\-I&0&I&0 \end{array}\right) \] where the factors of the right hand side have distributions $(\alpha*\gamma*\alpha*\gamma'\gamma \, , \, \alpha*\gamma*\beta)$ and $(\alpha*\gamma*\beta \, , \, \beta*\gamma*\beta*e)$, respectively. (4) It follows from the factorization \[ \left(\begin{array}{ccc|c} 1&0&0&1\\-X'r&A'&0&0\\0&0&A'&X'r\\\hline0&F'&-F'&0 \end{array}\right)= \left(\begin{array}{ccc} 1&0&0\\0&I&0\\X'r&I&A'\\\hline 0&0&-F' \end{array}\right) \left(\begin{array}{ccc|c} 1&0&0&1\\-X'r&A'&0&0\\0&-I&I&0 \end{array}\right) \] where the factors of the right hand side have distributions $(e*\alpha'\gamma*\alpha'\gamma*\gamma'\gamma \, , \, e*\alpha'\gamma*\beta'\gamma)$ and $(e*\alpha'\gamma*\beta'\gamma \, , \, e*\beta'\gamma*\beta'\gamma*e)$, respectively. \end{proof} \begin{lemma}\label{lem:welldefined} The relation $\sim$ is compatible with the operations defined on the $(T_\Sigma)_\gamma$'s. More precisely, the following assertions hold true. \begin{enumerate}[\rm(1)] \item For $x',x\in(T_\Sigma)_\gamma$, then $x+x'\sim x'+x$. \item For $x',x,y\in(T_\Sigma)_\gamma$ such that $x\sim y$, then $x'+x\sim x'+y$ and $x+x'\sim y+x'$. \item For $x,y\in(T_\Sigma)_\gamma$ and $x'\in(T_\Sigma)_{\gamma'}$ such that $x\sim y$, then $x'x\sim x'y$ and $xx'\sim yx'$. \item For $x,y\in(T_\Sigma)_\gamma$ such that $x\sim y$, then $-x\sim -y$. \end{enumerate} \end{lemma} \begin{proof} (1) Let $(F',A',X',\alpha',\beta'),(F,A,X,\alpha,\beta)\in (T_\Sigma)_\gamma$. 
The equality \[ \setlength{\arraycolsep}{4pt}\left(\begin{array}{cccc|c} A'&0&0&0&X'\\0&A&0&0&X\\0&0&A&0&X\\0&0&0&A'&X'\\\hline F'&F&-F&-F'&0 \end{array}\right)= \left(\begin{array}{cccc} I&0&0&0\\0&I&0&0\\0&I&A&0\\I&0&0&A'\\\hline 0&0&-F&-F' \end{array}\right) \left(\begin{array}{cccc|c} A'&0&0&0&X'\\0&A&0&0&X\\0&-I&I&0&0\\-I&0&0&I&0 \end{array}\right) \] where the factors of the right hand side have distributions $(\alpha'*\alpha*\alpha*\alpha'*\gamma\,,\,\alpha'*\alpha*\beta*\beta')$ and $(\alpha'*\alpha*\beta*\beta'\,,\,\beta'*\beta*\beta*\beta'*e)$, respectively, shows (1). (2) First note that, by (1), it is enough to prove that $x'+x\sim x'+y$. Now let $\gamma,\gamma'\in\Gamma$, let $(F',A',X',\alpha',\beta')\in (T_\Sigma)_{\gamma'}$ and let $(F,A,X,\alpha,\beta),(G,B,Y,\delta,\varepsilon)\in(T_\Sigma)_\gamma$ be such that $(F,A,X,\alpha,\beta)\sim(G,B,Y,\delta,\varepsilon)$. Thus, there exist $L,M,P,Q\in \Sigma$, homogeneous rows $J,U$, and homogeneous columns $W,V$ as in \eqref{eq:equivalencerelation}.
The result follows because the matrix \[ \left(\begin{array}{ccccccccc|c} A'&0&0&0&0&0&0&0&0&X'\\0&A&0&0&0&0&0&0&0&X\\0&0&A'&0&0&0&0&0&0&X'\\0&0&0&B&0&0&0&0&0&Y\\0&0&0&0&L&0&0&0&0&W\\ 0&0&0&0&0&A&0&0&0&0\\0&0&0&0&0&0&B&0&0&0\\0&0&0&0&0&0&0&L&0&0\\0&0&0&0&0&0&0&0&M&0\\\hline F'&F&-F'&-G&0&F&-G&0&J&0 \end{array}\right) \] can be expressed as the product of the homogeneous matrices \[ \setlength{\arraycolsep}{1.5pt}\left(\begin{array}{ccccccccc} I&0&0&0&0&0&0&0&0\\ 0&I&0&0&0&0&0&0&0\\ I&0&A'&0&0&0&0&0&0\\ 0&0&0&I&0&0&0&0&0\\ 0&0&0&0&I&0&0&0&0\\ 0&-I&0&0&0&P_{11}&P_{12}&P_{13}&P_{14}\\ 0&0&0&-I&0&P_{21}&P_{22}&P_{23}&P_{24}\\ 0&0&0&0&-I&P_{31}&P_{32}&P_{33}&P_{34}\\ 0&0&0&0&0&P_{41}&P_{42}&P_{43}&P_{44}\\\hline 0&0&-F'&0&0&U_1&U_2&U_3&U_4 \end{array}\right) \left(\begin{array}{ccccccccc|c} A'&0&0&0&0&0&0&0&0&X'\\ 0&A&0&0&0&0&0&0&0&X\\ -I&0&I&0&0&0&0&0&0&0\\ 0&0&0&B&0&0&0&0&0&Y\\ 0&0&0&0&L&0&0&0&0&W\\ 0&Q_{11}&0&Q_{12}&Q_{13}&Q_{11}&Q_{12}&Q_{13}&Q_{14}&V_1\\ 0&Q_{21}&0&Q_{22}&Q_{23}&Q_{21}&Q_{22}&Q_{23}&Q_{24}&V_2\\ 0&Q_{31}&0&Q_{32}&Q_{33}&Q_{31}&Q_{32}&Q_{33}&Q_{34}&V_3\\ 0&Q_{41}&0&Q_{42}&Q_{43}&Q_{41}&Q_{42}&Q_{43}&Q_{44}&V_4\\ \end{array}\right) \] that have distributions $(\alpha'*\alpha*\alpha'*\delta*\pi_3*\alpha*\delta*\pi_3*\pi_4*\gamma\,,\, \alpha'*\alpha*\beta'*\delta*\pi_3*\omega_1*\omega_2*\omega_3*\omega_4)$ and $(\alpha'*\alpha*\beta'*\delta*\pi_3*\omega_1*\omega_2*\omega_3*\omega_4\,,\, \beta'*\beta*\beta'*\varepsilon*\theta_3*\beta*\varepsilon*\theta_3*\theta_4*e)$, respectively. (3) Let $\gamma,\gamma'\in\Gamma$, let $(F',A',X',\alpha',\beta')\in (T_\Sigma)_{\gamma'}$ and let $(F,A,X,\alpha,\beta),\linebreak (G,B,Y,\delta,\varepsilon)\in(T_\Sigma)_\gamma$ be such that $(F,A,X,\alpha,\beta)\sim(G,B,Y,\delta,\varepsilon)$. Thus, there exist $L,M,P,Q\in \Sigma$, homogeneous rows $J,U$, and homogeneous columns $W,V$ as in \eqref{eq:equivalencerelation}.
We prove first that $$ (F',A',X',\alpha',\beta')\cdot(F,A,X,\alpha,\beta)\sim (F',A',X',\alpha',\beta')\cdot(G,B,Y,\delta,\varepsilon).$$ It follows because the following homogeneous matrix \[ \left(\begin{array}{cccccccccc|c} A&0&0&0&0&0&0&0&0&0&X\\ -X'F&A'&0&0&0&0&0&0&0&0&0\\ 0&0&B&0&0&0&0&0&0&0&Y\\ 0&0&-X'G&A'&0&0&0&0&0&0&0\\ 0&0&0&0&L&0&0&0&0&0&W\\ 0&0&0&0&0&A&0&0&0&0&0\\ 0&0&0&0&0&0&B&0&0&0&0\\ 0&0&0&0&0&0&0&L&0&0&0\\ 0&0&0&0&0&0&0&0&M&0&0\\ 0&0&0&0&0&-X'F&X'G&0&-X'J&A'&0\\\hline 0&F'&0&-F'&0&0&0&0&0&F'&0 \end{array}\right) \] has the factorization, as a product of homogeneous matrices, \[ \scalebox{0.8}{$\setlength{\arraycolsep}{1pt} \left(\begin{array}{cccccccccc} I&0&0&0&0&0&0&0&0&0\\ 0&I&0&0&0&0&0&0&0&0\\ 0&0&I&0&0&0&0&0&0&0\\ 0&0&0&I&0&0&0&0&0&0\\ 0&0&0&0&I&0&0&0&0&0\\ -I&0&0&0&0&P_{11}&P_{12}&P_{13}&P_{14}&0\\ 0&0&-I&0&0&P_{21}&P_{22}&P_{23}&P_{24}&0\\ 0&0&0&0&-I&P_{31}&P_{32}&P_{33}&P_{34}&0\\ 0&0&0&0&0&P_{41}&P_{42}&P_{43}&P_{44}&0\\ 0&-I&0&I&0&-X'U_1&-X'U_2&-X'U_3&-X'U_4&A'\\\hline 0&0&0&0&0&0&0&0&0&F' \end{array}\right) \left(\begin{array}{cccccccccc|c} A&0&0&0&0&0&0&0&0&0&X\\ -X'F&A'&0&0&0&0&0&0&0&0&0\\ 0&0&B&0&0&0&0&0&0&0&Y\\ 0&0&-X'G&A'&0&0&0&0&0&0&0\\ 0&0&0&0&L&0&0&0&0&0&W\\ Q_{11}&0&Q_{12}&0&Q_{13}&Q_{11}&Q_{12}&Q_{13}&Q_{14}&0&V_1\\ Q_{21}&0&Q_{22}&0&Q_{23}&Q_{21}&Q_{22}&Q_{23}&Q_{24}&0&V_2\\ Q_{31}&0&Q_{32}&0&Q_{33}&Q_{31}&Q_{32}&Q_{33}&Q_{34}&0&V_3\\ Q_{41}&0&Q_{42}&0&Q_{43}&Q_{41}&Q_{42}&Q_{43}&Q_{44}&0&V_4\\ 0&I&0&-I&0&0&0&0&0&I&0 \end{array}\right)$} \] where the factors have distributions $$(\alpha*\alpha'\gamma*\delta*\alpha'\gamma*\pi_3*\alpha*\delta*\pi_3*\pi_4*\alpha'\gamma*\gamma'\gamma \, , \, \alpha*\alpha'\gamma*\delta*\alpha'\gamma*\pi_3*\omega_1*\omega_2*\omega_3*\omega_4*\beta'\gamma),$$ $$(\alpha*\alpha'\gamma*\delta*\alpha'\gamma*\pi_3*\omega_1*\omega_2*\omega_3*\omega_4*\beta'\gamma \, , \, \beta*\beta'\gamma*\varepsilon*\beta'\gamma*\theta_3*\beta*\varepsilon*\theta_3*\theta_4*\beta'\gamma*e),$$ respectively.
Now let $\gamma,\gamma'\in\Gamma$, let $(F,A,X,\alpha,\beta)\in(T_\Sigma)_\gamma$ and let $(F',A',X',\alpha',\beta'),\linebreak (G',B',Y',\delta',\varepsilon')\in (T_\Sigma)_{\gamma'}$ be such that $(F',A',X',\alpha',\beta')\sim(G',B',Y',\delta',\varepsilon')$. Thus, there exist $L',M',P,Q\in \Sigma$, homogeneous rows $J',U$, and homogeneous columns $W',V$ such that \begin{equation} \setlength{\arraycolsep}{1.5pt}\left(\begin{array}{cccc|c} A'&0&0&0&X'\\0&B'&0&0&Y'\\0&0&L'&0&W'\\0&0&0&M'&0\\\hline F'&-G'&0&J'&0 \end{array}\right)= \left(\begin{array}{cccc} P_{11}&P_{12}&P_{13}&P_{14}\\P_{21}&P_{22}&P_{23}&P_{24}\\P_{31}&P_{32}&P_{33}&P_{34}\\P_{41}&P_{42}&P_{43}&P_{44}\\\hline U_{1}&U_{2}&U_{3}&U_{4} \end{array}\right) \left(\begin{array}{cccc|c} Q_{11}&Q_{12}&Q_{13}&Q_{14}&V_{1}\\Q_{21}&Q_{22}&Q_{23}&Q_{24}&V_{2}\\Q_{31}&Q_{32}&Q_{33}&Q_{34}&V_{3}\\Q_{41}&Q_{42}&Q_{43}&Q_{44}&V_{4} \end{array}\right), \end{equation} where $P,U,Q,V$ have distributions $(\pi',\omega'),(\gamma',\omega'),(\omega', \theta'),(\omega',e)$, respectively, and \linebreak $\pi_1'=\alpha'$, $\pi_2'=\delta'$, $\theta_1'=\beta'$, $\theta_2'=\varepsilon'$.
We show that $$(F',A',X',\alpha',\beta')\cdot(F,A,X,\alpha,\beta)\sim (G',B',Y',\delta',\varepsilon')\cdot (F,A,X,\alpha,\beta).$$ It follows because the following homogeneous matrix \[ \left(\begin{array}{ccccccccccc|c} A&0&0&0&0&0&0&0&0&0&0&X\\ -X'F&A'&0&0&0&0&0&0&0&0&0&0\\ 0&0&A&0&0&0&0&0&0&0&0&X\\ 0&0&-Y'F&B'&0&0&0&0&0&0&0&0\\ 0&0&0&0&A&0&0&0&0&0&0&X\\ 0&0&0&0&-W'F&L'&0&0&0&0&0&0\\ 0&0&0&0&0&0&A&0&0&0&0&0\\ 0&0&0&0&0&0&0&A&0&0&0&X\\ 0&0&0&0&0&0&0&-X'F&A'&0&0&0\\ 0&0&0&0&0&0&0&0&0&L'&0&0\\ 0&0&0&0&0&0&0&0&0&0&M'&0\\\hline 0&F'&0&-G'&0&0&0&F'&-G'&0&J'&0 \end{array}\right) \] can be expressed as the following product of homogeneous matrices \[ \scalebox{0.75}{$\setlength{\arraycolsep}{1pt} \left(\begin{array}{ccccccccccc} I&0&0&0&0&0&0&0&0&0&0\\ 0&I&0&0&0&0&0&0&0&0&0\\ 0&0&I&0&0&0&0&0&0&0&0\\ 0&0&0&I&0&0&0&0&0&0&0\\ 0&0&I&0&A&0&0&0&0&0&0\\ 0&0&0&0&0&I&0&0&0&0&0\\ I&0&-I&0&0&0&A&0&0&0&0\\ 0&-I&0&0&0&0&-X'F&P_{11}&P_{12}&P_{13}&P_{14}\\ 0&0&0&-I&0&0&0&P_{21}&P_{22}&P_{23}&P_{24}\\ 0&0&0&0&-W'F&-I&0&P_{31}&P_{32}&P_{33}&P_{34}\\ 0&0&0&0&0&0&0&P_{41}&P_{42}&P_{43}&P_{44}\\\hline 0&0&0&0&0&0&0&U_1&U_2&U_3&U_4 \end{array}\right) \left(\begin{array}{ccccccccccc|c} A&0&0&0&0&0&0&0&0&0&0&X\\ -X'F&A'&0&0&0&0&0&0&0&0&0&0\\ 0&0&A&0&0&0&0&0&0&0&0&X\\ 0&0&-Y'F&B'&0&0&0&0&0&0&0&0\\ 0&0&-I&0&I&0&0&0&0&0&0&0\\ 0&0&0&0&-W'F&L'&0&0&0&0&0&0\\ -I&0&-I&0&0&0&I&0&0&0&0&0\\ 0&Q_{11}&-V_1F&Q_{12}&0&Q_{13}&0&Q_{11}&Q_{12}&Q_{13}&Q_{14}&0\\ 0&Q_{21}&-V_2F&Q_{22}&0&Q_{23}&0&Q_{21}&Q_{22}&Q_{23}&Q_{24}&0\\ 0&Q_{31}&-V_3F&Q_{32}&0&Q_{33}&0&Q_{31}&Q_{32}&Q_{33}&Q_{34}&0\\ 0&Q_{41}&-V_4F&Q_{42}&0&Q_{43}&0&Q_{41}&Q_{42}&Q_{43}&Q_{44}&0 \end{array}\right)$} \] where the factors have distributions $(\alpha*\alpha'\gamma*\alpha*\delta'\gamma*\alpha*\pi_3'\gamma*\alpha*\alpha'\gamma*\delta'\gamma* \pi_3'\gamma*\pi_4'\gamma*\gamma'\gamma \, , \, \alpha*\alpha'\gamma*\alpha*\delta'\gamma*\beta*\pi_3'\gamma*\beta*\omega_1'\gamma*\omega_2'\gamma* \omega_3'\gamma*\omega_4'\gamma)$ and
$(\alpha*\alpha'\gamma*\alpha*\delta'\gamma*\beta*\pi_3'\gamma*\beta*\omega_1'\gamma*\omega_2'\gamma* \omega_3'\gamma*\omega_4'\gamma\,,\, \beta*\beta'\gamma*\beta*\varepsilon'\gamma*\beta*\theta_3'\gamma*\beta*\beta'\gamma*\varepsilon'\gamma *\theta_3'\gamma*\theta_4'\gamma*e)$, respectively. (4) Let $(F,A,X,\alpha,\beta),(G,B,Y,\delta,\varepsilon)\in(T_\Sigma)_\gamma$ be such that $(F,A,X,\alpha,\beta)\sim(G,B,Y,\delta,\varepsilon)$. Thus, there exist $L,M,P,Q\in \Sigma$, homogeneous rows $J,U$, and homogeneous columns $W,V$ as in \eqref{eq:equivalencerelation}. The result follows because we have the factorization \[ \setlength{\arraycolsep}{3pt}\left(\begin{array}{cccc|c} A&0&0&0&X\\0&B&0&0&Y\\0&0&L&0&W\\0&0&0&M&0\\\hline -F&G&0&-J&0 \end{array}\right) = \left(\begin{array}{cccc} P_{11}&P_{12}&P_{13}&P_{14}\\ P_{21}&P_{22}&P_{23}&P_{24}\\ P_{31}&P_{32}&P_{33}&P_{34}\\ P_{41}&P_{42}&P_{43}&P_{44}\\\hline -U_{1}&-U_{2}&-U_{3}&-U_{4} \end{array}\right) \left(\begin{array}{cccc|c} Q_{11}&Q_{12}&Q_{13}&Q_{14}&V_{1}\\ Q_{21}&Q_{22}&Q_{23}&Q_{24}&V_{2}\\ Q_{31}&Q_{32}&Q_{33}&Q_{34}&V_{3}\\ Q_{41}&Q_{42}&Q_{43}&Q_{44}&V_{4} \end{array}\right) \] where the factors have distributions $ (\alpha*\delta*\pi_3*\pi_4*\gamma\,,\,\omega_1*\omega_2*\omega_3*\omega_4)$ and $(\omega_1*\omega_2*\omega_3*\omega_4\,,\, \beta*\varepsilon*\theta_3*\theta_4*e)$, respectively. \end{proof} \subsection{Graded ring structure} We define $(\mathcal{R}_\Sigma)_\gamma$ as the set of equivalence classes in $(T_\Sigma)_\gamma$ under the equivalence relation $\sim$. The equivalence class of $(F,A,X,\alpha,\beta)\in (T_\Sigma)_\gamma$ will be denoted by $[F,A,X,\alpha,\beta]$. In Section~\ref{subsec:operations}, we proved that the operation + is well defined in $(\mathcal{R}_\Sigma)_\gamma$ for each $\gamma\in\Gamma$. \begin{lemma} Let $\gamma\in\Gamma$.
Then $(\mathcal{R}_\Sigma)_\gamma$ is an abelian group with sum defined by $$[F',A',X',\alpha',\beta']+[F,A,X,\alpha,\beta]= \left[\begin{pmatrix}F'&F\end{pmatrix},\begin{pmatrix}A'&0\\0&A\end{pmatrix},\begin{pmatrix}X'\\X\end{pmatrix}, \alpha'*\alpha,\beta'*\beta\right].$$ \end{lemma} \begin{proof} The operation is well defined and commutative by Lemma~\ref{lem:welldefined}(2) and (1). Now we show that the operation is associative. Let $[F'',A'',X'',\alpha'',\beta'']$,\linebreak $[F',A',X',\alpha',\beta']$, $[F,A,X,\alpha,\beta]\in (\mathcal{R}_\Sigma)_\gamma$. Then \[ \begin{array}{cl} &[F'',A'',X'',\alpha'',\beta'']+\left([F',A',X',\alpha',\beta']+[F,A,X,\alpha,\beta]\right)\\ =&[F'',A'',X'',\alpha'',\beta'']+\left[\begin{pmatrix}F'&F\end{pmatrix},\begin{pmatrix}A'&0\\0&A\end{pmatrix},\begin{pmatrix}X'\\X\end{pmatrix},\alpha'*\alpha,\beta'*\beta\right]\\ =&\left[\begin{pmatrix}F''&F'&F\end{pmatrix},\begin{pmatrix}A''&0&0\\0&A'&0\\0&0&A\end{pmatrix},\begin{pmatrix}X''\\X'\\X\end{pmatrix},\alpha''*\alpha'*\alpha,\beta''*\beta'*\beta\right]\\ =&\left[\begin{pmatrix}F''&F'\end{pmatrix},\begin{pmatrix}A''&0\\0&A'\end{pmatrix},\begin{pmatrix}X''\\X'\end{pmatrix},\alpha''*\alpha',\beta''*\beta'\right]+[F,A,X,\alpha,\beta]\\ =&\left([F'',A'',X'',\alpha'',\beta'']+[F',A',X',\alpha',\beta']\right)+[F,A,X,\alpha,\beta], \end{array} \] as desired. The element $\mu(0)=[0,1,1,e,e]$ is the zero element. Indeed, let $(F,A,X,\alpha,\beta)\in (T_\Sigma)_\gamma$. Then we have the following factorization \[ \left(\begin{array}{ccc|c} A&0&0&X\\0&1&0&1\\0&0&A&X\\\hline F&0&-F&0 \end{array}\right)= \left(\begin{array}{ccc} I&0&0\\0&1&0\\I&0&A\\\hline 0&0&-F \end{array}\right) \left(\begin{array}{ccc|c} A&0&0&X\\0&1&0&1\\-I&0&I&0 \end{array}\right) \] by Lemma~\ref{lem:easywelldefined} where the factors have distributions $(\alpha*e*\alpha*\gamma\,,\,\alpha*e*\beta)$ and $(\alpha*e*\beta\, ,\, \beta*e*\beta*e)$, respectively. Thus, $[F,A,X,\alpha,\beta]+[0,1,1,e,e]=[F,A,X,\alpha,\beta]$.
Given $(F,A,X,\alpha,\beta)\in (T_\Sigma)_\gamma$, the element $[-F,A,X,\alpha,\beta]$ is well defined by Lemma~\ref{lem:welldefined}(4). We claim that it is the additive inverse of $[F,A,X,\alpha,\beta]$ in $(\mathcal{R}_\Sigma)_\gamma$. To see this, consider the following factorization \[ \left(\begin{array}{ccc|c} A&0&0&X\\0&A&0&X\\0&0&1&1\\\hline F&-F&0&0 \end{array}\right)= \left(\begin{array}{ccc} I&0&0\\I&A&0\\0&0&1\\\hline 0&-F&0 \end{array}\right) \left(\begin{array}{ccc|c} A&0&0&X\\-I&I&0&0\\0&0&1&0 \end{array}\right) \] where the factors have distributions $(\alpha*\alpha*e*\gamma\,,\,\alpha*\beta*e)$ and $(\alpha*\beta*e\,,\, \beta*\beta*e*e)$, respectively. It shows that $[F,A,X,\alpha,\beta]+[-F,A,X,\alpha,\beta]=[0,1,1,e,e]$, as claimed. \end{proof} In Section~\ref{subsec:operations}, we showed that the product functions $(\mathcal{R}_\Sigma)_{\gamma'}\times (\mathcal{R}_\Sigma)_\gamma\rightarrow (\mathcal{R}_\Sigma)_{\gamma'\gamma}$ are well defined. Now we define $\mathcal{R}_\Sigma=\bigoplus_{\gamma\in\Gamma} (\mathcal{R}_\Sigma)_\gamma$. By the foregoing lemma, it is an additive group. We now prove that it is a $\Gamma$-graded ring with the induced product. \begin{lemma} $\mathcal{R}_\Sigma$ is a $\Gamma$-graded ring with the product determined by the rule \[ [F',A',X',\alpha',\beta']\cdot[F,A,X,\alpha,\beta]= \setlength{\arraycolsep}{1.2pt} \left[\begin{pmatrix}0&F'\end{pmatrix},\begin{pmatrix}A&0\\-X'F& A'\end{pmatrix},\begin{pmatrix}X\\0\end{pmatrix}, \alpha*\alpha'\gamma, \beta*\beta'\gamma\!\right], \] for any $(F',A',X',\alpha',\beta')\in (T_\Sigma)_{\gamma'}$ and $(F,A,X,\alpha,\beta)\in (T_\Sigma)_\gamma$. \end{lemma} \begin{proof} By Lemma~\ref{lem:welldefined}(3), the product is well defined. By Lemma~\ref{lem:easywelldefined}(3) and (4), the identity element is $[1,1,1,e,e]$. Now we proceed to show that the product is associative.
Let\linebreak $(F'',A'',X'',\alpha'',\beta'')\in(T_\Sigma)_{\gamma''},$ $(F',A',X',\alpha',\beta')\in(T_\Sigma)_{\gamma'}$ and $(F,A,X,\alpha,\beta)\in(T_\Sigma)_{\gamma}$. Then \[ \begin{array}{cl} &\left([F'',A'',X'',\alpha'',\beta'']\cdot[F',A',X',\alpha',\beta']\right)\cdot [F,A,X,\alpha,\beta]\\ =&\left[\begin{pmatrix}0&F''\end{pmatrix},\begin{pmatrix}A'&0\\-X''F'&A''\end{pmatrix},\begin{pmatrix}X'\\0\end{pmatrix}, \alpha'*\alpha''\gamma',\beta'*\beta''\gamma'\right]\cdot[F,A,X,\alpha,\beta]\\ =&\setlength{\arraycolsep}{2pt}\left[\begin{pmatrix}0&0&F''\end{pmatrix},\begin{pmatrix}A&0&0\\-X'F&A'&0\\0&-X''F'&A''\end{pmatrix},\begin{pmatrix}X\\0\\0\end{pmatrix},\alpha*\alpha'\gamma*\alpha''\gamma'\gamma,\beta*\beta'\gamma*\beta''\gamma'\gamma\right]\\ =&[F'',A'',X'',\alpha'',\beta'']\cdot\left[\begin{pmatrix}0&F'\end{pmatrix},\begin{pmatrix}A&0\\-X'F&A'\end{pmatrix},\begin{pmatrix}X\\0\end{pmatrix},\alpha*\alpha'\gamma,\beta*\beta'\gamma\right]\\ =&[F'',A'',X'',\alpha'',\beta'']\cdot\left([F',A',X',\alpha',\beta']\cdot[F,A,X,\alpha,\beta]\right), \end{array} \] which shows that the product is associative. It remains to show that the distributive laws are satisfied. Let $(F',A',X',\alpha',\beta'),$ $(G',B',Y',\delta',\varepsilon')\in(T_\Sigma)_{\gamma'}$ and $(F,A,X,\alpha,\beta)\in(T_\Sigma)_{\gamma}$. First note that \begin{align} \left( [F',A',X',\alpha',\beta']+[G',B',Y',\delta',\varepsilon'] \right)\cdot [F,A,X,\alpha,\beta] =\phantom{aaaaaaaaaaaaaa}\nonumber\\ \left[\begin{pmatrix}0&F'&G'\end{pmatrix},\begin{pmatrix}A&0&0\\-X'F&A'&0 \\ -Y'F&0&B' \end{pmatrix},\begin{pmatrix}X\\0\\0\end{pmatrix},\alpha*\alpha'\gamma*\delta'\gamma,\beta*\beta'\gamma*\varepsilon'\gamma\right].
\label{eq:distributive1} \end{align} Second observe that \begin{align} [F',A',X',\alpha',\beta']\cdot[F,A,X,\alpha,\beta]+[G',B',Y',\delta',\varepsilon']\cdot[F,A,X,\alpha,\beta]=\phantom{aaa}\nonumber\\ \setlength{\arraycolsep}{1.8pt} \left[\begin{pmatrix}0&F'&0&G'\end{pmatrix},\begin{pmatrix}A&0&0&0\\-X'F&A'&0&0 \\ 0&0&A&0\\ 0&0&-Y'F&B' \end{pmatrix},\begin{pmatrix}X\\0\\X\\0\end{pmatrix},\alpha*\alpha'\gamma*\alpha*\delta'\gamma,\beta*\beta'\gamma*\beta*\varepsilon'\gamma\right].\label{eq:distributive2} \end{align} The fact that \eqref{eq:distributive1} equals \eqref{eq:distributive2} follows because the homogeneous matrix \[ \left( \begin{array}{ccccccc|c} A&0&0&0&0&0&0&X\\ -X'F&A'&0&0&0&0&0&0\\ -Y'F&0&B'&0&0&0&0&0\\ 0&0&0&A&0&0&0&X\\ 0&0&0&-X'F&A'&0&0&0\\ 0&0&0&0&0&A&0&X\\ 0&0&0&0&0&-Y'F&B'&0\\\hline 0&F'&G'&0&-F'&0&-G'&0 \end{array} \right) \] factorizes as the product of homogeneous matrices \[ \setlength{\arraycolsep}{3pt} \left(\begin{array}{ccccccc} I&0&0&0&0&0&0\\ 0&I&0&0&0&0&0\\ 0&0&I&0&0&0&0\\ I&0&0&A&0&0&0\\ 0&I&0&-X'F&A'&0&0\\ I&0&0&0&0&A&0\\ 0&0&I&0&0&-Y'F&B'\\\hline 0&0&0&0&-F'&0&-G' \end{array}\right) \left(\begin{array}{ccccccc|c} A&0&0&0&0&0&0&X\\ -X'F&A'&0&0&0&0&0&0\\ -Y'F&0&B'&0&0&0&0&0\\ -I&0&0&I&0&0&0&0\\ 0&-I&0&0&I&0&0&0\\ -I&0&0&0&0&I&0&0\\ 0&0&-I&0&0&0&I&0 \end{array}\right) \] where the factors have distributions $$(\alpha*\alpha'\gamma*\delta'\gamma*\alpha*\alpha'\gamma*\alpha*\delta'\gamma*\gamma'\gamma \, ,\, \alpha*\alpha'\gamma*\delta'\gamma*\beta*\beta'\gamma*\beta*\varepsilon'\gamma),$$ $$(\alpha*\alpha'\gamma*\delta'\gamma*\beta*\beta'\gamma*\beta*\varepsilon'\gamma \, , \, \beta*\beta'\gamma*\varepsilon'\gamma*\beta*\beta'\gamma*\beta*\varepsilon'\gamma*e),$$ respectively. Let now $(F',A',X',\alpha',\beta')\in (T_\Sigma)_{\gamma'}$ and $(F,A,X,\alpha,\beta)$,$(G,B,Y,\delta,\varepsilon)\in (T_\Sigma)_\gamma$.
First note that \begin{align} [F',A',X',\alpha',\beta']\cdot\left( [F,A,X,\alpha,\beta]+[G,B,Y,\delta,\varepsilon] \right) =\phantom{aaaaaaaaaaaaaa}\nonumber\\ \left[\begin{pmatrix}0&0&F'\end{pmatrix},\begin{pmatrix}A&0&0\\0&B&0 \\ -X'F&-X'G&A' \end{pmatrix},\begin{pmatrix}X\\Y\\0\end{pmatrix},\alpha*\delta*\alpha'\gamma,\beta*\varepsilon*\beta'\gamma\right]. \label{eq:distributive3} \end{align} Second observe that \begin{align} [F',A',X',\alpha',\beta']\cdot[F,A,X,\alpha,\beta]+[F',A',X',\alpha',\beta']\cdot[G,B,Y,\delta,\varepsilon]=\phantom{aaaaa}\nonumber\\ \setlength{\arraycolsep}{1.8pt} \left[\begin{pmatrix}0&F'&0&F'\end{pmatrix},\begin{pmatrix}A&0&0&0\\-X'F&A'&0&0 \\ 0&0&B&0\\ 0&0&-X'G&A' \end{pmatrix},\begin{pmatrix}X\\0\\Y\\0\end{pmatrix},\alpha*\alpha'\gamma*\delta*\alpha'\gamma,\beta*\beta'\gamma*\varepsilon*\beta'\gamma\right].\label{eq:distributive4} \end{align} The fact that \eqref{eq:distributive3} equals \eqref{eq:distributive4} follows because the homogeneous matrix \[ \left( \begin{array}{ccccccc|c} A&0&0&0&0&0&0&X\\ -X'F&A'&0&0&0&0&0&0\\ 0&0&B&0&0&0&0&Y\\ 0&0&-X'G&A'&0&0&0&0\\ 0&0&0&0&A&0&0&X\\ 0&0&0&0&0&B&0&Y\\ 0&0&0&0&-X'F&-X'G&A'&0\\\hline 0&F'&0&F'&0&0&-F'&0 \end{array} \right) \] factorizes as the product of homogeneous matrices \[ \setlength{\arraycolsep}{3pt} \left(\begin{array}{ccccccc} I&0&0&0&0&0&0\\ 0&I&0&0&0&0&0\\ 0&0&I&0&0&0&0\\ 0&0&0&I&0&0&0\\ I&0&0&0&A&0&0\\ 0&0&I&0&0&B&0\\ 0&I&0&I&-X'F&-X'G&A'\\\hline 0&0&0&0&0&0&-F' \end{array}\right) \left(\begin{array}{ccccccc|c} A&0&0&0&0&0&0&X\\ -X'F&A'&0&0&0&0&0&0\\ 0&0&B&0&0&0&0&Y\\ 0&0&-X'G&A'&0&0&0&0\\ -I&0&0&0&I&0&0&0\\ 0&0&-I&0&0&I&0&0\\ 0&-I&0&-I&0&0&I&0 \end{array}\right) \] where the factors have distributions $$(\alpha*\alpha'\gamma*\delta*\alpha'\gamma*\alpha*\delta*\alpha'\gamma*\gamma'\gamma \, ,\, \alpha*\alpha'\gamma*\delta*\alpha'\gamma*\beta*\varepsilon*\beta'\gamma),$$ $$(\alpha*\alpha'\gamma*\delta*\alpha'\gamma*\beta*\varepsilon*\beta'\gamma \, , \, 
\beta*\beta'\gamma*\varepsilon*\beta'\gamma*\beta*\varepsilon*\beta'\gamma*e),$$ respectively. \end{proof} \subsection{Universal localization property} \begin{proposition}\label{prop:universallocalization} Consider the map $\mu\colon R\rightarrow \mathcal{R}_\Sigma$ determined by $\mu(r)=[r,1,1,e,e]$ for all $r\in R_\gamma$, $\gamma\in\Gamma$. Then the pair $(\mathcal{R}_\Sigma,\mu)$ is the universal localization of $R$ at $\Sigma$. \end{proposition} \begin{proof} By Lemma~\ref{lem:easywelldefined}(1) and (3), $\mu$ is a homomorphism of $\Gamma$-graded rings. By $E_i$ we will denote the column matrix whose $i$-th entry is 1 and whose other entries are zero, and by $E_i^T$ its transpose, the row matrix whose $i$-th entry is 1 and whose other entries are zero. Let $A=(a_{ij})\in\Sigma$ be an $n\times n$ homogeneous matrix of distribution $(\alpha,\beta)$. We claim that the $n\times n$ matrix $B=([E_i^T, A, E_j,\alpha,\beta])_{ij}$ is the inverse of $A^\mu$. First observe that $[E_i^T, A, E_j,\alpha,\beta]\in (\mathcal{R}_\Sigma)_{\beta_i\alpha_j^{-1}}$ because $E_i^T$ has distribution \linebreak $(\beta_i\alpha_j^{-1},\beta\alpha_j^{-1})$, $A$ has distribution $(\alpha\alpha_j^{-1}, \beta\alpha_j^{-1})$ and $E_j$ has distribution $(\alpha\alpha_j^{-1},e)$. Thus, $([E_i^T, A, E_j,\alpha,\beta])_{ij}$ is homogeneous of distribution $(\beta,\alpha)$. Second, using Lemma~\ref{lem:easywelldefined}(3) and (1), we obtain that the product of the $i$-th row of $A^\mu$ with the $j$-th column of $B$ equals \[ \begin{array}{lcl} \sum_k\mu(a_{i,k})[E_k^T,A,E_j,\alpha,\beta]& =&\sum_k[a_{i,k}E_k^T,A,E_j,\alpha,\beta]\\ &=&[\sum_ka_{i,k}E_k^T,A,E_j,\alpha,\beta]\\ &=&[E_i^TA,A,E_j,\alpha,\beta]\in (\mathcal{R}_\Sigma)_{\alpha_i\alpha_j^{-1}}. \end{array} \] Third, we show that $$[E_i^TA,A,E_j,\alpha,\beta]=\mu(\delta_{ij})= [\delta_{ij},1,1,e,e]=\left\{\begin{smallmatrix} [1,1,1,e,e] \textrm{ if } i=j, \\ [0,1,1,e,e] \textrm{ if } i\neq j.
\end{smallmatrix}\right.$$ It follows from the following factorization \[ \left(\begin{array}{cc|c} A&0&E_j\\0&1&1\\\hline E_i^TA&-\delta_{ij}&0 \end{array}\right)= \left(\begin{array}{cc} I&0\\0&1\\\hline E_i^T&-\delta_{ij} \end{array}\right) \left(\begin{array}{cc|c} A&0&E_j\\0&1&1 \end{array}\right) \] where the factors have distributions $(\alpha\alpha_j^{-1}*e*\alpha_i\alpha_j^{-1}\, , \, \alpha\alpha_j^{-1}*e)$ and \linebreak$(\alpha\alpha_j^{-1}*e,\beta\alpha_j^{-1}*e*e)$, respectively. Therefore $B$ is the right inverse of $A^\mu$. Now we proceed to prove that $B$ is the left inverse of $A^\mu$. Using Lemma~\ref{lem:easywelldefined}(4) and (2), we obtain that the product of the $i$-th row of $B$ with the $j$-th column of $A^\mu$ equals \[ \begin{array}{lcl} \sum_k[E_i^T,A,E_k,\alpha,\beta]\mu(a_{k,j})& =&\sum_k[E_i^T,A,E_ka_{k,j},\alpha,\beta]\\ &=&[E_i^T,A,\sum_kE_ka_{k,j},\alpha,\beta]\\ &=&[E_i^T,A,AE_j,\alpha,\beta]\in(\mathcal{R}_\Sigma)_{\beta_i\beta_j^{-1}}. \end{array} \] As before, we show that $[E_i^T,A,AE_j,\alpha,\beta]= \mu(\delta_{ij})$. It follows from \[ \left(\begin{array}{cc|c} A&0&AE_j\\0&1&1\\\hline E_i^T&-\delta_{ij}&0 \end{array}\right)= \left(\begin{array}{cc} A&0\\0&1\\\hline E_i^T&-\delta_{ij} \end{array}\right) \left(\begin{array}{cc|c} I&0&E_j\\0&1&1 \end{array}\right) \] where the factors have distributions $(\alpha\beta_j^{-1}*e*\beta_i\beta_j^{-1} \,,\, \beta\beta_j^{-1}*e)$ and $(\beta\beta_j^{-1}*e\,,\,\beta\beta_j^{-1}*e*e)$, respectively. Therefore, the claim is proved. It remains to prove that $\mu\colon R\rightarrow \mathcal{R}_\Sigma$ is universal.
Note that if $(F,A,X,\alpha,\beta)\in (T_\Sigma)_\gamma$ with $F=(f_1,\dotsc,f_n)$ and $X=\left(\begin{smallmatrix} x_1\\ \vdots \\ x_n \end{smallmatrix}\right)$, then \begin{eqnarray} F^\mu(A^\mu)^{-1}X^\mu &=&\sum_{i,j}\mu(f_i)[E_i^T,A,E_j,\alpha,\beta]\mu(x_j) \nonumber \\ &=&\sum_{i,j}[f_iE_i^T,A,E_jx_j,\alpha,\beta] \nonumber \\ &=&[\sum_if_iE_i^T,A,\sum_jE_jx_j,\alpha,\beta] \nonumber \\ &=&[F,A,X,\alpha,\beta]. \label{eq:uniqueness} \end{eqnarray} Let now $S$ be a $\Gamma$-graded ring and $\varphi\colon R\rightarrow S$ be a $\Sigma$-inverting homomorphism of graded rings. We define $\Phi\colon \mathcal{R}_\Sigma \rightarrow S$ as follows. Let $(F,A,X,\alpha,\beta)\in (T_\Sigma)_\gamma$, then \[ \Phi([F,A,X,\alpha,\beta]) =F^\varphi(A^\varphi)^{-1}X^\varphi\in S_\gamma. \] Now we show that $\Phi$ is well defined. Let $(F,A,X,\alpha,\beta),(G,B,Y,\delta,\varepsilon)\in (T_\Sigma)_\gamma$ be such that $(F,A,X,\alpha,\beta)\sim(G,B,Y,\delta,\varepsilon)$. Then there exist $L,M,P,Q\in\Sigma$, homogeneous rows $J,U$ and homogeneous columns $W,V$ such that \[ \left(\begin{array}{cccc|c} A&0&0&0&X\\0&B&0&0&Y\\0&0&L&0&W\\0&0&0&M&0\\\hline F&-G&0&J&0 \end{array}\right)= \left(\begin{array}{c} P\\\hline U \end{array}\right) \left(\begin{array}{c|c} Q&V \end{array}\right) \] where $P,U,Q,V$ have distributions $(\pi,\omega),(\gamma,\omega),(\omega, \theta),(\omega,e)$, respectively, and \linebreak $\pi_1=\alpha$, $\pi_2=\delta$, $\theta_1=\beta$, $\theta_2=\varepsilon$.
Then \begin{eqnarray*} 0 & = & U^\varphi V^\varphi \\ & = &U^\varphi Q^\varphi(Q^\varphi)^{-1}(P^\varphi)^{-1}P^\varphi V^\varphi \\ & = &(U Q)^\varphi((PQ)^\varphi)^{-1}(PV)^\varphi \\ & = & \left(\begin{array}{cccc}F^\varphi&-G^\varphi&0&J^\varphi\end{array}\right) \left(\begin{array}{cccc}A^\varphi&0&0&0\\0&B^\varphi&0&0\\0&0&L^\varphi&0\\0&0&0&M^\varphi\end{array}\right)^{-1} \left(\begin{array}{c}X^\varphi\\Y^\varphi\\W^\varphi\\0\end{array}\right)\\ & = &\setlength{\arraycolsep}{3pt}\left(\begin{array}{cccc}F^\varphi&-G^\varphi&0&J^\varphi\end{array}\right) \left(\begin{array}{cccc}(A^\varphi)^{-1}&0&0&0\\0&(B^\varphi)^{-1}&0&0\\0&0&(L^\varphi)^{-1}&0\\0&0&0&(M^\varphi)^{-1}\end{array}\right)\left(\begin{array}{c}X^\varphi\\Y^\varphi\\W^\varphi\\0\end{array}\right)\\ & = & F^\varphi (A^\varphi)^{-1}X^\varphi-G^\varphi (B^\varphi)^{-1}Y^\varphi+ 0(L^\varphi)^{-1}W^\varphi+J^\varphi(M^\varphi)^{-1}0\\ & = &F^\varphi (A^\varphi)^{-1}X^\varphi-G^\varphi (B^\varphi)^{-1}Y^\varphi, \end{eqnarray*} which shows that $\Phi$ is well defined. Let now $(F',A',X',\alpha',\beta')$, $(F,A,X,\alpha,\beta)\in (T_\Sigma)_\gamma$. Then \begin{flalign*} \Phi([F',A',X',\alpha',\beta']+[F,A,X,\alpha,\beta]) &\setlength{\arraycolsep}{2pt} = \left(\begin{array}{cc}F'&F\end{array}\right)^\varphi\left(\left(\begin{array}{cc}A'&0\\0&A\end{array}\right)^\varphi\right)^{-1}\left(\begin{array}{c}X'\\X\end{array}\right)^\varphi\\&= \setlength{\arraycolsep}{2pt}\left(\begin{array}{cc}{F'}^\varphi&F^\varphi\end{array}\right)\left(\begin{array}{cc}({A'}^{\varphi})^{-1}&0\\0&(A^\varphi)^{-1}\end{array}\right)\left(\begin{array}{c}{X'}^\varphi\\X^\varphi\end{array}\right)\\&={F'}^\varphi ({A'}^\varphi)^{-1}{X'}^\varphi+F^\varphi (A^{\varphi})^{-1}X^\varphi\\&=\Phi([F',A',X',\alpha',\beta'])+ \Phi([F,A,X,\alpha,\beta]). \end{flalign*} Thus, $\Phi$ is an additive map. Let $(F',A',X',\alpha',\beta')\in(T_\Sigma)_{\gamma'}$ and $(F,A,X,\alpha,\beta)\in(T_\Sigma)_\gamma$.
Then \begin{flalign*} \Phi([F',A',X',\alpha',\beta']&\cdot[F,A,X,\alpha,\beta]) \\ & = \setlength{\arraycolsep}{2pt}\left(\begin{array}{cc}0&F'\end{array}\right)^\varphi \left(\left(\begin{array}{cc}A&0\\-X'F&A'\end{array}\right)^\varphi\right)^{-1} \left(\begin{array}{c}X\\0\end{array}\right)^\varphi \\ & = \setlength{\arraycolsep}{2pt} \left(\begin{array}{cc} 0 &{F'}^\varphi \end{array}\right)\left(\begin{array}{cc} A^{\varphi} & 0\\ -{X'}^\varphi F^\varphi &{A'}^\varphi \end{array}\right)^{-1} \left(\begin{array}{c}X^\varphi\\0\end{array}\right) \\ & = \setlength{\arraycolsep}{3pt} \left(\begin{array}{cc}0&{F'}^{\varphi}\end{array}\right)\left(\begin{array}{cc}(A^\varphi)^{-1}& 0\\ ({A'}^\varphi)^{-1}{X'}^\varphi F^\varphi (A^\varphi)^{-1} &({A'}^\varphi)^{-1}\end{array}\right)\left(\begin{array}{c}X^\varphi\\0\end{array}\right)\\ & = {F'}^\varphi ({A'}^\varphi)^{-1}{X'}^\varphi F^\varphi (A^{\varphi})^{-1}X^\varphi\\ & = \Phi([F',A',X',\alpha',\beta'])\cdot \Phi([F,A,X,\alpha,\beta]). \end{flalign*} Hence, $\Phi$ is a homomorphism of graded rings. Clearly $\Phi\mu=\varphi$. The uniqueness of $\Phi$ now follows from \eqref{eq:uniqueness}. \end{proof} \subsection{Proof of Theorem~\ref{theo:Malcolmsoncriterion}} By Proposition~\ref{prop:universallocalization}, $\mu\colon R\rightarrow \mathcal{R}_\Sigma$ is the universal localization of $R$ at $\Sigma$. Thus $\lambda(r)=0$ if and only if $\mu(r)=0$. Hence, suppose that $r\in R_\gamma$ is such that $\mu(r)=0$. It means that $[r,1,1,e,e]\sim [0,1,1,e,e]$.
Thus there exist $L,M,P,Q\in \Sigma$, homogeneous rows $J,U$ and homogeneous columns $W,V$ such that \[\setlength{\arraycolsep}{3pt} \left(\begin{array}{cccc|c} 1&0&0&0&1\\0&1&0&0&1\\0&0&L&0&W\\0&0&0&M&0\\\hline r&0&0&J&0 \end{array}\right) = \left(\begin{array}{cccc} P_{11}&P_{12}&P_{13}&P_{14}\\P_{21}&P_{22}&P_{23}&P_{24}\\P_{31}&P_{32}&P_{33}&P_{34}\\P_{41}&P_{42}&P_{43}&P_{44}\\\hline U_{1}&U_{2}&U_{3}&U_{4} \end{array}\right) \left(\begin{array}{cccc|c} Q_{11}&Q_{12}&Q_{13}&Q_{14}&V_{1}\\Q_{21}&Q_{22}&Q_{23}&Q_{24}&V_{2}\\Q_{31}&Q_{32}&Q_{33}&Q_{34}&V_{3}\\Q_{41}&Q_{42}&Q_{43}&Q_{44}&V_{4} \end{array}\right) \] where $P$ has distribution $(\pi,\omega)$, $U$ has distribution $(\gamma,\omega)$, $Q$ has distribution $(\omega,\theta)$ and $V$ has distribution $(\omega,e)$. Now the following equality \[\setlength{\arraycolsep}{2pt} \left(\begin{array}{ccccc|c} 1&0&0&0&0&1\\0&1&0&0&0&1\\0&0&L&0&0&W\\0&0&0&M&0&0\\0&0&0&0&1&0\\\hline 0&0&0&-J&r&r \end{array}\right) = \left(\begin{array}{ccccc} P_{11}&P_{12}&P_{13}&P_{14}&0\\P_{21}&P_{22}&P_{23}&P_{24}&0\\P_{31}&P_{32}&P_{33}&P_{34}&0\\P_{41}&P_{42}&P_{43}&P_{44}&0\\-P_{11}&-P_{12}&-P_{13}&-P_{14}&1\\\hline -U_{1}&-U_{2}&-U_{3}&-U_{4}&r \end{array}\right) \left(\begin{array}{ccccc|c} Q_{11}&Q_{12}&Q_{13}&Q_{14}&0&V_{1}\\Q_{21}&Q_{22}&Q_{23}&Q_{24}&0&V_{2}\\Q_{31}&Q_{32}&Q_{33}&Q_{34}&0&V_{3}\\ Q_{41}&Q_{42}&Q_{43}&Q_{44}&0&V_{4}\\1&0&0&0&1&1 \end{array}\right) \] where the homogeneous matrices of the right hand side have distributions $(e*e*\pi_3*\pi_4*e*\gamma\,,\,\omega_1*\omega_2*\omega_3*\omega_4*e)$ and $(\omega_1*\omega_2*\omega_3*\omega_4*e\,,\, e*e*\theta_3*\theta_4*e*e )$, respectively, shows the result.
Conversely, suppose there exist $L,M,P,Q\in\Sigma$, homogeneous rows $J,U$ and homogeneous columns $W,V$ such that \[ \left(\begin{array}{cc|c} L&0&W\\0&M&0\\\hline 0&J&r \end{array}\right)= \left(\begin{array}{c} P\\\hline U \end{array}\right) \left(\begin{array}{c|c} Q&V \end{array}\right), \] where $P,U,Q,V$ have distributions $(\pi,\omega),(\gamma,\omega),(\omega,\theta),(\omega,e)$, respectively. It follows that $[0,1,1,e,e]\sim[r,1,1,e,e]$ because \[ \left(\begin{array}{cccc|c} 1&0&0&0&1\\0&1&0&0&1\\0&0&L&0&W\\0&0&0&M&0\\\hline 0&-r&0&J&0 \end{array}\right) = \left(\begin{array}{ccc} 1&0&0\\0&1&0\\0&0&P\\\hline 0&-r&U \end{array}\right) \left(\begin{array}{ccc|c} 1&0&0&1\\0&1&0&1\\0&0&Q&V \end{array}\right) \] where the factors of the right hand side have distributions $(e*e*\pi*\gamma\,,\, e*e*\omega)$ and $(e*e*\omega\,,\, e*e*\theta*e)$, respectively. \section{A gr-prime matrix ideal yields a graded division ring, and vice versa}\label{sec:grprimematrixidealyields} This section is an adaptation to the graded context of the first part of \cite[Section~7.3]{Cohnfreeeidealringslocalization} and the second part of \cite[Section~7.4]{Cohnfreeeidealringslocalization}. For the proof of the main result, Theorem~\ref{theo:primematrixequalsdivisionring}, instead of using an analog of the first part of \cite[Section~7.4]{Cohnfreeeidealringslocalization}, we use Corollary~\ref{coro:Malcolmsoncriterion}. Theorem~\ref{theo:primematrixequalsdivisionring} could also have been proved via a graded version of \cite{Malcolmsonprimematrixideal} that can be found in \cite{DanielMAT0148}. \medskip \emph{Throughout this section, let $\Gamma$ be a group.} \medskip Let $R$ be a $\Gamma$-graded ring. If $(K,\varphi)$ is a graded epic $R$-division ring, the set $$\{A\in\mathfrak{M}(R)\colon A^\varphi \textrm{ is not invertible over } K\}$$ will be called the \emph{gr-singular kernel of $(K,\varphi)$}. Now we show that gr-singular kernels are gr-prime matrix ideals.
The aim of this section is to show that gr-singular kernels determine graded epic $R$-division rings in a similar way as commutative $R$-fields are determined by prime ideals of $R$. Given an $n\times n$ matrix $A$ with entries in $R$, if we write $A=(A_1\ A_2\, \dotsc\, A_n)$ we understand that $A_1,\dotsc,A_n$ are the columns of $A$. And if we write $A=\left(\begin{smallmatrix} A_1\\ \vdots \\ A_n \end{smallmatrix}\right)$ we understand that $A_1,\dotsc,A_n$ are the rows of $A$. Given two matrices $A,B\in\mathfrak{M}(R)$, we define the \emph{diagonal sum} of $A$ and $B$ as $$A\oplus B = \left(\begin{array}{cc} A&0 \\ 0 & B \end{array}\right).$$ Notice that if $A\in M_m(R)[\overline{\alpha}][\overline{\beta}]$ and $B\in M_n(R)[\overline{\alpha'}][\overline{\beta'}]$, then $A\oplus B\in M_{m+n}(R)[\overline{\alpha}*\overline{\alpha'}][\overline{\beta}*\overline{\beta'}]$. Let $A,B\in M_n(R)[\overline{\alpha}][\overline{\beta}]$. If they differ at most in the $i$-th column, then we define the \emph{determinantal sum} of $A$ and $B$ with respect to the $i$-th column as $$A\nabla B=(A_1\ \dotsc\ A_i+B_i\ \dotsc\ A_n).$$ Similarly, if they differ at most in the $i$-th row we define the determinantal sum of $A$ and $B$ with respect to the $i$-th row as $$A\nabla B=\left(\begin{array}{c} A_1 \\ \vdots \\ A_i+B_i \\ \vdots \\A_n\end{array}\right).$$ The matrix $A\nabla B$, when defined, has the same distribution as $A$ and $B$. Note that the operation $\oplus$ is associative. On the other hand, the operation $\nabla$ is not always defined, and as a consequence it is not associative. Notice that distributive laws are satisfied. More precisely, if $C$ is another homogeneous matrix, then $C\oplus(A\nabla B)=(C\oplus A) \nabla (C\oplus B)$ and $(A\nabla B)\oplus C=(A\oplus C)\nabla (B\oplus C)$ whenever $A\nabla B$ is defined. 
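For concreteness, we include a small illustrative computation (ours, with generic homogeneous entries; it is not used in the sequel). Take $2\times 2$ matrices with the same distribution that differ at most in the first column:

```latex
% Illustration (not part of the original argument): generic 2x2 matrices
% over R differing only in the first column.
\[
A=\begin{pmatrix} a_{1} & c_{1} \\ a_{2} & c_{2} \end{pmatrix},\qquad
B=\begin{pmatrix} b_{1} & c_{1} \\ b_{2} & c_{2} \end{pmatrix}.
\]
% Determinantal sum with respect to the first column: add the differing
% columns and keep the common one; diagonal sum: block-diagonal juxtaposition.
\[
A\nabla B=\begin{pmatrix} a_{1}+b_{1} & c_{1} \\ a_{2}+b_{2} & c_{2} \end{pmatrix},
\qquad
A\oplus B=\left(\begin{array}{cc|cc} a_{1} & c_{1} & 0 & 0\\ a_{2} & c_{2} & 0 & 0\\ \hline 0 & 0 & b_{1} & c_{1}\\ 0 & 0 & b_{2} & c_{2} \end{array}\right).
\]
% In the ungraded commutative case, linearity of det in one column gives
% det(A \nabla B) = det A + det B, while det(A \oplus B) = det A det B,
% which motivates both names.
```

Note that $A\nabla B$ keeps the common distribution of $A$ and $B$, while $A\oplus B$ carries the concatenated distribution, in accordance with the definitions above.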
On the other hand, if $B,C\in M_n(R)[\overline{\alpha}][\overline{\beta}]$ differ in at most one row or column, $A\in M_n(R)[\overline{\alpha'}][\overline{\alpha}]$ and $D\in M_n(R)[\overline{\beta}][\overline{\beta'}]$, then it may happen that $$A(B\nabla C)\neq AB\nabla AC,\qquad (B\nabla C)D\neq BD\nabla CD,$$ because, for example, $AB$ and $AC$ (respectively $BD$ and $CD$) may differ in more than one row or column. But in some cases one can apply the distributive law. Let $X\in \mathfrak{M}(R)$ and suppose that $X$ is either a diagonal matrix or a permutation matrix. Then $$X(B\nabla C)= XB\nabla XC,\qquad (B\nabla C)X = BX\nabla CX.$$ Moreover, we can regard $X\in M_n(R)[\overline{\alpha'}][\overline{\alpha}] \cap M_n(R)[\overline{\beta}][{\overline{\beta'}}]$ for some $\overline{\alpha'},\overline{\beta'}\in \Gamma^n$. Thus $X(B\nabla C)\in M_n(R)[\overline{\alpha'}][\overline{\beta}]$ and $(B\nabla C)X\in M_n(R)[\overline{\alpha}][\overline{\beta'}]$. \medskip Let $R$ be a $\Gamma$-graded ring. A subset $\mathcal{P}$ of $\mathfrak{M}(R)$ is a \emph{gr-prime matrix ideal} if the following conditions are satisfied. \begin{enumerate}[(PM1)]\label{def:grprimematrixideal} \item $\mathcal{P}$ contains all the homogeneous matrices that are not gr-full; \item If $A,B\in\mathcal{P}$ and their determinantal sum (with respect to a row or column) exists, then $A\nabla B\in\mathcal{P}$; \item If $A\in\mathcal{P}$, then $A\oplus B\in\mathcal{P}$ for all $B\in\mathfrak{M}(R)$; \item For $A,B\in\mathfrak{M}(R)$, $A\oplus B\in\mathcal{P}$ implies that $A\in\mathcal{P}$ or $B\in\mathcal{P}$; \item $1\notin \mathcal{P}$; \item If $A\in\mathcal{P}$ and $E,F$ are permutation matrices of appropriate size, then $EAF\in\mathcal{P}$. \end{enumerate} We remark that when $\Gamma=\{1\}$, that is, in the ungraded case, (PM6) is a consequence of (PM1)--(PM5), as shown in \cite[(g), p.~431]{Cohnfreeeidealringslocalization}. We have not been able to obtain (PM6) from the others in the general graded case.
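For orientation, the following ungraded commutative sanity check (ours, not part of the source's development) shows how (PM1)--(PM6) abstract the set of matrices whose determinant lies in a prime ideal:

```latex
% Ungraded commutative illustration (ours). Let R be commutative with the
% trivial grading and let p be a prime ideal of R. Put
\[
\mathcal{P}=\{A\in\mathfrak{M}(R)\colon \det A\in\mathfrak{p}\}.
\]
% (PM1): a non-full n x n matrix factors through a smaller size, so
%        det A = 0 \in p by the Cauchy--Binet formula.
% (PM2): det is additive in the row or column where A and B differ, so
%        det(A \nabla B) = det A + det B \in p.
% (PM3): det(A \oplus B) = det A \cdot det B \in p.
% (PM4): det A \cdot det B \in p forces det A \in p or det B \in p,
%        since p is prime.
% (PM5): det(1) = 1 \notin p.
% (PM6): det(EAF) = \pm \det A, so EAF lies in P whenever A does.
```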
\begin{proposition}\label{prop:viceversa} Let $R$ be a $\Gamma$-graded ring. Let $K$ be a $\Gamma$-almost graded division ring and $\varphi\colon R\rightarrow K$ be a homomorphism of $\Gamma$-almost graded rings. Then $$\mathcal{P}=\{A\in\mathfrak{M}(R)\colon A^\varphi \textrm{ is not invertible }\}$$ is a gr-prime matrix ideal. Therefore, the following assertions hold true. \begin{enumerate}[\rm(1)] \item If $(K,\varphi)$ is a $\Gamma$-graded epic $R$-division ring, then the gr-singular kernel of $(K,\varphi)$ is a $\Gamma$-gr-prime matrix ideal. \item Let $N$ be a normal subgroup of $\Gamma$ and consider $R$ as a $\Gamma/N$-graded ring. Let $(K,\varphi)$ be a $\Gamma/N$-graded epic $R$-division ring. Then $$\mathcal{P}=\{A\in\mathfrak{M}_\Gamma(R)\colon A^\varphi \textrm{ is not invertible }\}$$ is a $\Gamma$-gr-prime matrix ideal. \end{enumerate} \end{proposition} \begin{proof} Let $K$ be a $\Gamma$-almost graded division ring and $\varphi\colon R\rightarrow K$ be a homomorphism of $\Gamma$-almost graded rings. First suppose that $K=\DC(\varphi)$ and let $$\Sigma=\mathfrak{M}(R)\setminus\mathcal{P}=\{A\in\mathfrak{M}(R)\colon A^\varphi \textrm{ is invertible over } K\}.$$ By Theorem~\ref{theo:gradedlocal}(2), $R_\Sigma$ is a local ring. If $\mathfrak{m}$ is the maximal graded ideal of $R_\Sigma$, there exists a surjective homomorphism of $\Gamma$-almost graded rings $\widetilde{\Phi}\colon R_\Sigma/\mathfrak{m}\rightarrow K$ such that the following diagram is commutative $$\xymatrix{R\ar[r]^\lambda\ar[rd]_\varphi & R_\Sigma \ar[d]^\Phi\ar[r]^\pi & R_\Sigma/\mathfrak{m}\ar[ld]^{\widetilde{\Phi}} \\ & K & }$$ By Proposition~\ref{prop:almostdivisionring}(3), the sets $\{A\in\mathfrak{M}(R)\colon A^{(\pi\lambda)} \textrm{ is invertible over } R_\Sigma/\mathfrak{m}\}$ and $\{A\in\mathfrak{M}(R)\colon A^{(\widetilde{\Phi}\pi\lambda)} \textrm{ is invertible over }K\}$ are equal. 
Because this last set equals $\Sigma$, we get that $\Sigma=\{A\in\mathfrak{M}(R)\colon A^{(\pi\lambda)} \textrm{ is invertible over } R_\Sigma/\mathfrak{m}\}.$ Now, since $(R_\Sigma/\mathfrak{m}, \pi\lambda)$ is a $\Gamma$-graded epic $R$-division ring, it is enough to prove (1). Thus, suppose that $K$ is a $\Gamma$-graded division ring and $(K,\varphi)$ is a $\Gamma$-graded epic $R$-division ring. Let $\mathcal{P}=\{A\in\mathfrak{M}(R)\colon A^\varphi \textrm{ is not invertible over } K\}$. If $A\in\mathfrak{M}(R)$ is not gr-full, then $A^\varphi$ is not gr-full. Since $K$ is a $\Gamma$-graded division ring, $A^\varphi$ is not invertible over $K$. Thus, (PM1) is satisfied. Let now $A,B\in\mathcal{P}_n[\overline{\alpha}][\overline{\beta}]$ such that $A\nabla B$ is defined. We may suppose that $A,B$ differ on the first column. Hence $A=(A_1\ C_2\,\dotsc\,C_n)$ and $B=(B_1\ C_2\,\dotsc\,C_n)$. Since $A^\varphi$ and $B^\varphi$ are not invertible over $K$, the columns of $A^\varphi$ and $B^\varphi$ are right linearly dependent over $K$. If the columns $C_2^\varphi,\dotsc,C_n^\varphi$ are right linearly dependent over $K$, then the columns of $(A\nabla B)^\varphi$ are right linearly dependent over $K$ and thus $A\nabla B\in\mathcal{P}$. Hence we can suppose that there exist homogeneous elements $a_1,\dotsc,a_n, b_1,\dotsc,b_n\in K$, with $a_1,b_1\neq 0$, such that $$A_1a_1+C_2a_2+\dotsb+C_na_n=0,\quad B_1b_1+C_2b_2+\dotsb+C_nb_n=0.$$ But then $$A_1+B_1+ C_2(a_2a_1^{-1}+b_2b_1^{-1})+\dotsb +C_n(a_na_1^{-1}+b_nb_1^{-1})=0,$$ which shows that $A\nabla B\in\mathcal{P}$. Thus (PM2) is proved. Let $A\in\mathcal{P}$ and $B\in\mathfrak{M}(R)$, then $A^\varphi$ is not invertible over $K$, but then $A^\varphi\oplus B^\varphi=(A\oplus B)^\varphi$ is not invertible over $K$. It implies (PM3). Now suppose that $A,B\in\mathfrak{M}(R)$ are such that $A\oplus B\in\mathcal{P}$. It means that the homogeneous matrix $A^\varphi\oplus B^\varphi$ is not invertible over $K$. 
This implies that either $A^\varphi$ or $B^\varphi$ is not invertible. That is, $A\in\mathcal{P}$ or $B\in\mathcal{P}$, and (PM4) follows. Clearly, (PM5) is satisfied. Let $A\in\mathcal{P}$ and $E,F$ be permutation matrices with entries in $R$. Notice that $E^\varphi, F^\varphi$ are permutation matrices with entries in $K$. Thus, if $(EAF)^\varphi=E^\varphi A^\varphi F^\varphi$ were invertible over $K$, then $A^\varphi=(E^\varphi)^{-1}(EAF)^\varphi (F^\varphi)^{-1}$ would be invertible over $K$, a contradiction. Thus $EAF\in\mathcal{P}$ and (PM6) is shown. \end{proof} \begin{lemma}\label{lem:grprimematrixideal} Let $R$ be a $\Gamma$-graded ring and $\mathcal{P}$ be a gr-prime matrix ideal. Let $A,B\in\mathfrak{M}(R)$. The following assertions hold true. \begin{enumerate}[\rm(1)] \item If $A$ and $B$ are such that $C=A\nabla B$ exists and $B$ is not gr-full, then $A\in\mathcal{P}$ if and only if $C\in\mathcal{P}$. \item Let $A\in\mathcal{P}$. The result of adding a suitable right multiple of one column of $A$ to another column again lies in $\mathcal{P}$. More precisely, if $A\in M_n(R)[\overline{\alpha}][\overline{\beta}]$ and $a\in R_{\beta_i\beta_j^{-1}}$, then $(A_1\,\dotsc\, A_{j-1}\ A_j+A_ia\ A_{j+1}\,\dotsc\, A_n)\,$ belongs to $\mathcal{P}$. \item If $A\oplus B\in\mathcal{P}$, then $B\oplus A\in\mathcal{P}$. \item Suppose that $A\in M_m(R)[\overline{\alpha}][\overline{\beta}]$ and $B\in M_n(R)[\overline{\delta}][\overline{\varepsilon}]$. For $C\in M_{n\times m}(R)[\overline{\delta}][\overline{\beta}]$, \[ \begin{pmatrix} A&0\\C&B \end{pmatrix}\in\mathcal{P}\ \textrm{ if and only if } \ \begin{pmatrix} A&0\\0&B \end{pmatrix}\in\mathcal{P}. \] Similarly, for $C\in M_{m\times n}(R)[\overline{\beta}][\overline{\varepsilon}]$, \[ \begin{pmatrix} A&C\\0&B \end{pmatrix}\in\mathcal{P}\ \textrm{ if and only if } \ \begin{pmatrix} A&0\\0&B \end{pmatrix}\in\mathcal{P}. \] \item The set $\mathfrak{M}(R)\setminus \mathcal{P}$ is gr-multiplicative.
\item No identity matrix belongs to $\mathcal{P}$. \item Suppose that $A\in M_n(R)[\overline{\alpha}][\overline{\beta}]$ and $B\in M_n(R)[\overline{\beta}][\overline{\delta}]$. Then $AB\in\mathcal{P}$ if, and only if, $A\oplus B\in\mathcal{P}$. \item No invertible matrix in $\mathfrak{M}(R)$ belongs to $\mathcal{P}$. \item Suppose that $A$ and $B$ are such that $C=A\nabla B$ exists and $B\in\mathcal{P}$. Then $A\in\mathcal{P}$ if, and only if, $C\in\mathcal{P}$. \end{enumerate} \begin{proof} (1) By (PM1) and (PM2), if $A\in\mathcal{P}$, then $C\in\mathcal{P}$. Conversely, suppose that $C\in\mathcal{P}$. Clearly $A=C\nabla B'$ where $B'$ is obtained from $B$ by changing the sign of a row or column. Now $A\in\mathcal{P}$ because $B'$ is not gr-full. (2) Suppose that $\overline{\beta}=\beta_1*\overline{\beta'}$ and $c\in R_{\beta_2\beta_1^{-1}}$; this treats the case $i=2$, $j=1$, and the general case is analogous. If $A=(A_1\ A_2\,\dotsc\,A_n)$, then \[ \begin{array}{lcl} (A_1+A_2c\ A_2\ \dots\ A_n)& = & (A_1\ A_2\ \dots\ A_n)\nabla(A_2c\ A_2\ \dots\ A_n)\\ & = & A\nabla (A_2\ A_3\ \dots\ A_n) \begin{pmatrix} c & 1 & & 0\\&&\ddots&\\0&0&&1 \end{pmatrix} \end{array}. \] Thus, the right hand side is a determinantal sum of $A$ and $(A_2c\ A_2\ \dots\ A_n)$, which is not gr-full. Indeed, it is the product of $(A_2\ A_3\ \dots\ A_n)\in M_{n\times(n-1)}(R)[\overline{\alpha}][\overline{\beta'}]$ and $\left(\begin{smallmatrix} c & 1 & & 0\\&&\ddots&\\0&0&&1 \end{smallmatrix}\right)\in M_{(n-1)\times n}(R)[\overline{\beta'}][\overline{\beta}]$. Hence, by (1), $(A_1+A_2c\ A_2\ \dots\ A_n)\in\mathcal{P}$. (3) It follows from (PM6). (4) We show the first statement; the other can be proved analogously. If we write $A=(A_1\ A')$ and $C=(C_1\ C')$, then \[ \begin{pmatrix} A&0\\C&B \end{pmatrix}= \begin{pmatrix} A_1&A'&0\\0&C'&B \end{pmatrix}\nabla \begin{pmatrix} 0&A'&0\\C_1&C'&B \end{pmatrix} \] The second matrix of the right hand side is a matrix with a submatrix that is a block of zeros of size $m\times (n+1)$. Since $m+n+1>m+n$, that matrix is hollow and therefore not gr-full.
By (1), \[ \begin{pmatrix} A&0\\C&B \end{pmatrix}\in\mathcal{P}\ \textrm{ if and only if }\ \begin{pmatrix} A_1&A'&0\\0&C'&B \end{pmatrix}\in\mathcal{P} \] Similarly, one can repeat the argument applied to columns of $A'$ and $C'$ and so on, to obtain the desired result. (5) Let $\Sigma=\mathfrak{M}(R)\setminus \mathcal{P}$. By (PM5), $1\in \Sigma$. By (PM4), $A\oplus B\in\Sigma$ if $A,B\in\Sigma$. Now (4) implies that $\Sigma$ is lower gr-semimultiplicative. Finally, (PM6) shows that $\Sigma$ is gr-multiplicative. (6) It follows from (PM4) and (PM5). (7) First notice that, by (PM3), (PM4) and (6), a matrix $C\in \mathfrak{M}(R)$ belongs to $\mathcal{P}$ if and only if $C\oplus I\in \mathcal{P}$ for the identity matrix $I$ of the same size as $C$. We claim that $C\in\mathcal{P}$ if and only if $-C\in\mathcal{P}$. Indeed, \begin{flalign*} \begin{pmatrix} C&0\\0&I \end{pmatrix}\in\mathcal{P}\stackrel{(2)}{\Leftrightarrow} \begin{pmatrix} C&-C\\0&I \end{pmatrix}\in\mathcal{P} \stackrel{(2)}{\Leftrightarrow} \begin{pmatrix} 0&-C\\I&I \end{pmatrix}\in\mathcal{P} & \\ \stackrel{\textrm{(PM6)}}{\Leftrightarrow} \begin{pmatrix} -C&0\\I&I \end{pmatrix}\in\mathcal{P}\stackrel{(4)}{\Leftrightarrow} \begin{pmatrix} -C&0\\0&I \end{pmatrix}\in\mathcal{P}, \end{flalign*} and the claim is proved. Then \begin{flalign*} \begin{pmatrix} A&0\\0&B \end{pmatrix}\in\mathcal{P} \stackrel{(4)}{\Leftrightarrow} \begin{pmatrix} A&0\\I&B \end{pmatrix}\in\mathcal{P}\stackrel{(2)}{\Leftrightarrow} \begin{pmatrix} A&-AB\\I&0 \end{pmatrix}\in\mathcal{P}& \\ \stackrel{\textrm{(PM6)}}{\Leftrightarrow} \begin{pmatrix} -AB&A\\0&I \end{pmatrix}\in\mathcal{P}\stackrel{(4)}{\Leftrightarrow} \begin{pmatrix} -AB&0\\0&I \end{pmatrix}\in\mathcal{P}, \end{flalign*} and, by the claim, the result follows. (8) If $A\in M_n(R)[\overline{\alpha}][\overline{\beta}]$ is invertible, then $A^{-1}\in M_n(R)[\overline{\beta}][\overline{\alpha}]$. Since $AA^{-1}=I\notin\mathcal{P}$, (7) implies that $A\oplus A^{-1}\notin\mathcal{P}$.
Now (PM3) shows that $A\notin\mathcal{P}$. (9) By (PM2), if $A\in\mathcal{P}$, then $C\in\mathcal{P}$. Conversely, suppose that $C\in\mathcal{P}$. Clearly $A=C\nabla B'$ where $B'$ is obtained from $B$ by changing the sign of a row or column. More precisely, $B'$ is the product of $B$ by a diagonal matrix $D$ whose diagonal elements are $1$ or $-1$. Now $B\oplus D \in\mathcal{P}$ because $B\in\mathcal{P}$. Thus $B'\in\mathcal{P}$ by (7). Therefore $A\in\mathcal{P}$ by (PM2). \end{proof} The proof of Lemma~\ref{lem:grprimematrixideal} is very similar to the one for the ungraded case, see for example \cite[p.~430--431]{Cohnfreeeidealringslocalization}. The main difference is that we were not able to show \cite[(d), p.~430]{Cohnfreeeidealringslocalization} because not every multiple of a column can be added to another column so that the matrix remains homogeneous. As a consequence, the proof of Lemma~\ref{lem:grprimematrixideal}(7) is also different. The following result is well known and can be found, for example, in \cite[Proposition~1.1.31]{Hazrat_2016}. \begin{lemma}\label{lem:gradedlocal} Let $R$ be a $\Gamma$-graded ring. Then $R$ is a $\Gamma$-graded local ring if and only if $R_e$ is a local ring.\qed \end{lemma} The proof of the following lemma follows the proof of the ungraded result in \cite[Proposition~7.2.6]{Cohnfreeeidealringslocalization}. \begin{lemma}\label{lem:locuniversalgradedlocal} Let $R$ be a $\Gamma$-graded ring, $\Sigma$ be a gr-multiplicative subset of $\mathfrak{M}(R)$ and $\lambda\colon R\rightarrow R_\Sigma$ be the natural homomorphism of $\Gamma$-graded rings.
Then $R_\Sigma$ is a $\Gamma$-graded local ring if and only if it satisfies the following two conditions: \begin{enumerate}[\rm(1)] \item $R_\Sigma\neq\{0\}$; \item For a matrix $A\in\Sigma_n[\overline{\alpha'}*e][\overline{\beta'}*e]$, if $B$, the $(n,n)$-minor of $A$, is such that $B^\lambda$ is not invertible over $R_\Sigma$, then $(A-e_{nn})^\lambda$ is invertible over $R_\Sigma$, where $e_{nn}$ denotes the matrix with $1$ in the $(n,n)$ entry and zeros everywhere else. \end{enumerate} \begin{proof} Consider the canonical homomorphism of $\Gamma$-graded rings $\lambda\colon R\rightarrow R_\Sigma$. Suppose that $R_\Sigma$ is a $\Gamma$-graded local ring with maximal graded ideal $\mathfrak{m}$ and canonical homomorphism $\pi\colon R_\Sigma\rightarrow R_\Sigma/\mathfrak{m}$. Since $R_\Sigma$ is graded local, by definition, $R_\Sigma\neq \{0\}$. Recall that any matrix $C\in \mathfrak{M}(R_\Sigma)$ is invertible if and only if $C^\pi$ is invertible over $R_\Sigma/\mathfrak{m}$. Let $A\in\Sigma_n[\overline{\alpha'}*e][\overline{\beta'}*e]$ be such that $B^\lambda$ is not invertible over $R_\Sigma$, where $B$ is the $(n,n)$-minor of $A$. It is enough to show that $(A-e_{nn})^{\pi\lambda}$ is invertible. Some non-trivial left linear combination (over the graded division ring $R_\Sigma/\mathfrak{m}$) with homogeneous coefficients of the rows of $B^{\pi\lambda}$ is zero. If we take the corresponding left linear combination of the first $n-1$ rows of $A^{\pi\lambda}$, we obtain $(0,0,\dotsc,0,c)$ where $c$ is homogeneous and $c\neq 0$, because $A^{\pi\lambda}$ is invertible. We now subtract from the last row of $A^{\pi\lambda}$ the left multiple by $c^{-1}$ of this combination of the other rows and obtain the matrix $(A-e_{nn})^{\pi\lambda}$, which is therefore invertible over $R_\Sigma/\mathfrak{m}$ because it is the product of the invertible matrix corresponding to these elementary row operations and $A^{\pi\lambda}$. Conversely, suppose now that conditions (1) and (2) are satisfied. By Lemma~\ref{lem:gradedlocal}, it is enough to prove that $(R_\Sigma)_e$ is a local ring.
Let $x\in (R_\Sigma)_e$. By Lemma~\ref{lem:homogeneousclosure}(3), there exist $\overline{\alpha},\overline{\beta}\in\Gamma^n$, $A\in\Sigma_n[\overline{\alpha}][\overline{\beta}]$ and $u\in M_{n\times 1}(R_\Sigma)$ such that $\alpha_i=e$, $\beta_j=e$, $u_j=x$ and $A^\lambda u=e_i$. Since $\Sigma$ is gr-multiplicative, we may suppose that $A\in \Sigma_n[\overline{\alpha'}*e][\overline{\beta'}*e]$, $u_n=x$ and $A^\lambda u=e_n$. Suppose $x$ is not invertible in $R_\Sigma$. Equivalently, by Lemma~\ref{lem:Cramersrule}, the matrix $(A_\bullet^\lambda\, e_n^\lambda)$ is not invertible in $R_\Sigma$. This implies that the $(n,n)$-minor of $(A_\bullet^\lambda\,\, e_n^\lambda)$, which is the image under $\lambda$ of the $(n,n)$-minor of $A$, is not invertible in $R_\Sigma$. Hence, by condition (2), $(A-e_{nn})^\lambda$ is invertible over $R_\Sigma$. Then the matrix $(A^\lambda)^{-1}(A-e_{nn})^\lambda=I-(A^\lambda)^{-1}e_{nn}$ is invertible in $R_\Sigma$. Since this matrix is of the form $$\begin{pmatrix} 1&0&\cdots&0&* \\ 0 & 1 & \cdots &0 & *\\ \vdots & & \ddots & &\vdots\\ 0&0&\cdots&1&*\\ 0 & 0 & \cdots &0 &1-x\end{pmatrix},$$ we obtain that $1-x$ is invertible in $R_\Sigma$, as desired. \end{proof} Let $R$ be a graded ring and let $\mathcal{P}$ be a gr-prime matrix ideal. The universal localization of $R$ at the set $\Sigma=\mathfrak{M}(R)\setminus \mathcal{P}$ will be denoted by $R_\mathcal{P}$ (instead of $R_\Sigma$). \begin{theorem}\label{theo:primematrixequalsdivisionring} Let $\Gamma$ be a group and $R$ be a $\Gamma$-graded ring. The following assertions hold true. \begin{enumerate}[\rm(1)] \item If $\mathcal{P}$ is any gr-prime matrix ideal of $R$, then the localization $R_\mathcal{P}$ is a graded local ring. Moreover, its residue class $\Gamma$-graded division ring is a $\Gamma$-graded epic $R$-division ring such that its gr-singular kernel equals $\mathcal{P}$.
\item If $(K,\varphi)$ is a $\Gamma$-graded epic $R$-division ring, with gr-singular kernel $\mathcal{P}$, then $\mathcal{P}$ is a gr-prime matrix ideal and the $\Gamma$-graded local ring $R_\mathcal{P}$ has residue class graded division ring $R$-isomorphic to $K$. \end{enumerate} \end{theorem} \begin{proof} (1) First note that $\Sigma=\mathfrak{M}(R)\setminus\mathcal{P}$ is a gr-multiplicative subset of $\mathfrak{M}(R)$ by Lemma~\ref{lem:grprimematrixideal}(5). By Corollary~\ref{coro:Malcolmsoncriterion}, since $\Sigma$ consists of gr-full matrices, we get that $R_\mathcal{P}$ is a nonzero $\Gamma$-graded ring. Consider now an $n\times n$ matrix $A\in\Sigma_n[\overline{\alpha'}*e][\overline{\beta'}*e]$ and let $B$ be its $(n,n)$-minor. Suppose that $B^\lambda$ is not invertible over $R_\Sigma$. Hence, $B$ belongs to $\mathcal{P}$. Write $A=(A'\ A_n)$ where $A_n$ is the last column of $A$. Then $$A-e_{nn}=(A'\, A_n)\nabla (A'\, -e_n).$$ Now note that $A=(A'\, A_n)\in \Sigma$, and that $(A'\, -e_n)\in\mathcal{P}$: indeed, $(A'\, -e_n)=\left(\begin{smallmatrix} B&0\\ \ast&-1 \end{smallmatrix}\right)$, which belongs to $\mathcal{P}$ by (PM3) and Lemma~\ref{lem:grprimematrixideal}(4) because $B\in\mathcal{P}$. By Lemma~\ref{lem:grprimematrixideal}(9), $A-e_{nn}\notin \mathcal{P}$, or equivalently $A-e_{nn}\in\Sigma$. Therefore $(A-e_{nn})^\lambda$ is invertible over $R_\Sigma$. Now Lemma~\ref{lem:locuniversalgradedlocal} implies that $R_\Sigma$ is a $\Gamma$-graded local ring. By Theorem~\ref{theo:gradedlocal}(2)(a), the residue class graded division ring is a graded epic $R$-division ring. By construction, the gr-singular kernel equals $\mathcal{P}$. (2) By Proposition~\ref{prop:viceversa}, $\mathcal{P}$ is a gr-prime matrix ideal. By (1), $R_\mathcal{P}$ is a $\Gamma$-graded local ring and its residue class graded division ring is a graded epic $R$-division ring with gr-singular kernel $\mathcal{P}$. Then, by Theorem~\ref{theo:gradedlocal}(b)(ii), $K$ and the residue class graded division ring of $R_\Sigma$ are isomorphic $\Gamma$-graded $R$-rings.
\end{proof} The following is Theorem~\ref{theo:specialization}, but expressed in terms of gr-prime matrix ideals. \begin{corollary}\label{coro:specialization} Let $R$ be a $\Gamma$-graded ring, $(K_i,\varphi_i)$, $i=1,2$, be $\Gamma$-graded epic $R$-division rings with gr-singular kernels $\mathcal{P}_i$, respectively. The following statements are equivalent. \begin{enumerate}[\rm(1)] \item There exists a gr-specialization from $K_1$ to $K_2$. \item $\mathcal{P}_1\subseteq \mathcal{P}_2$. \item There exists a homomorphism $R_{\Sigma_2}\rightarrow R_{\Sigma_1}$ of $\Gamma$-graded $R$-rings. \end{enumerate} Furthermore, if there exists a gr-specialization from $K_1$ to $K_2$ and another gr-specialization from $K_2$ to $K_1$, then $K_1$ and $K_2$ are isomorphic graded $R$-rings. \qed \end{corollary} \begin{corollary} Let $R$ be a $\Gamma$-graded ring and $(K,\varphi)$ be a graded epic $R$-division ring with gr-singular kernel $\mathcal{P}$. Let $\gamma\in\Gamma$. Consider the universal localization $\lambda\colon R\rightarrow R_\mathcal{P}$ and let $\Phi\colon R_\mathcal{P} \rightarrow K$ be the homomorphism of $\Gamma$-graded rings such that $\varphi=\Phi\lambda$. \begin{enumerate}[\rm(1)] \item Let $x\in K_\gamma$. Then $x=0$ if and only if its numerator belongs to $\mathcal{P}$. \item Let $x\in (R_\mathcal{P})_\gamma$. Then $x\in\ker\Phi$ if and only if its numerator belongs to $\mathcal{P}$. \end{enumerate} \end{corollary} \begin{proof} Suppose that $(A_0\, A_\bullet)$ is the numerator of $x$. (1) By Lemma~\ref{lem:Cramersrule}(1), $x$ is invertible if and only if $(A_0\, A_\bullet)^\varphi$ is invertible over $K$, that is, if and only if $(A_0\, A_\bullet)$ does not belong to $\mathcal{P}$. Since $K$ is a graded division ring and $x$ is homogeneous, $x=0$ if and only if $x$ is not invertible, that is, if and only if $(A_0\, A_\bullet)$ belongs to $\mathcal{P}$. (2) By Lemma~\ref{lem:Cramersrule}(1), $x$ is invertible if and only if $(A_0\, A_\bullet)^\lambda$ is invertible over $R_\mathcal{P}$.
Since $R_\mathcal{P}$ is a graded local ring with residue class graded division ring $R$-isomorphic to $K$, $x$ is invertible if and only if $(A_0\, A_\bullet)^{\Phi\lambda}$ is invertible over $K$. Moreover, since $\ker\Phi$ is the maximal graded ideal of $R_\mathcal{P}$ and $x$ is homogeneous, $x\in\ker \Phi$ if and only if $x$ is not invertible, that is, if and only if $(A_0\, A_\bullet)$ belongs to $\mathcal{P}$. \end{proof} \begin{corollary} Let $R$ and $R'$ be $\Gamma$-graded rings with gr-prime matrix ideals $\mathcal{P}$ and $\mathcal{P}'$, respectively, and with corresponding graded epic $R$-division ring $(K,\varphi)$ and graded epic $R'$-division ring $(K',\varphi')$, respectively. Let $f\colon R\rightarrow R'$ be a homomorphism of $\Gamma$-graded rings. The following assertions hold true. \begin{enumerate}[\rm(1)] \item $f$ extends to a gr-specialization if, and only if, $\mathcal{P}^f\subseteq\mathcal{P}'$. \item $f$ extends to a homomorphism $K\rightarrow K'$ if, and only if, $\mathcal{P}^f\subseteq \mathcal{P}'$ and $\Sigma^f\subseteq \Sigma'$, where $\Sigma=\mathfrak{M}(R)\setminus \mathcal{P}$ and $\Sigma'=\mathfrak{M}(R')\setminus\mathcal{P}'$. \end{enumerate} \end{corollary} \begin{proof} (1) First note that the set ${\mathcal{P}''}=\{A\in\mathfrak{M}(R)\colon A^{f}\in\mathcal{P}'\}$ is a gr-prime matrix ideal whose corresponding graded epic $R$-division ring is $\varphi'f\colon R\rightarrow \DC(\varphi'f)$. By Corollary~\ref{coro:specialization}, there exists a gr-specialization from $(K,\varphi)$ to $\DC(\varphi'f)$ if, and only if, $\mathcal{P}\subseteq \mathcal{P}''$, and $\mathcal{P}\subseteq \mathcal{P}''$ holds precisely when $\mathcal{P}^f\subseteq\mathcal{P}'$. (2) If $\mathcal{P}^f\subseteq \mathcal{P}'$ and $\Sigma^f\subseteq \Sigma'$, then $\mathcal{P}=\mathcal{P}''$, and therefore the gr-specialization of (1) is in fact an isomorphism by Corollary~\ref{coro:specialization}. \end{proof} \section{Gr-matrix ideals}\label{sec:grmatrixideals} In this section, the concepts, arguments and proofs are an adaptation of the ones in \cite[Section~7.3]{Cohnfreeeidealringslocalization} to the graded context. \medskip \emph{Throughout this section, let $\Gamma$ be a group}. \medskip Let $R$ be a $\Gamma$-graded ring.
A subset $\mathcal{I}$ of $\mathfrak{M}(R)$ is a \emph{gr-matrix pre-ideal} if the following conditions are satisfied. \begin{enumerate}[({I}1)] \item $\mathcal{I}$ contains all the homogeneous matrices that are not gr-full; \item If $A,B\in\mathcal{I}$ and their determinantal sum (with respect to a row or column) exists, then $A\nabla B\in\mathcal{I}$; \item If $A\in\mathcal{I}$, then $A\oplus B\in\mathcal{I}$ for all $B\in\mathfrak{M}(R)$; \item If $A\in\mathcal{I}$ and $E,F$ are permutation matrices of appropriate size, then $EAF\in\mathcal{I}$. \end{enumerate} \medskip If, moreover, we have \begin{enumerate}[({I}5)] \item For $A\in\mathfrak{M}(R)$, if $A\oplus 1\in\mathcal{I}$, then $A\in\mathcal{I}$, \end{enumerate} we call $\mathcal{I}$ a \emph{gr-matrix ideal}. \bigskip Clearly, $\mathfrak{M}(R)$ is a gr-matrix ideal. A \emph{proper} gr-matrix ideal is a gr-matrix ideal different from $\mathfrak{M}(R)$. \begin{lemma}\label{lem:grmatrixideal} Let $R$ be a $\Gamma$-graded ring and $\mathcal{I}$ be a gr-matrix pre-ideal. Let $A,B\in \mathfrak{M}(R)$. The following assertions hold true. \begin{enumerate}[\rm(1)] \item If $A$ and $B$ are such that $C=A\nabla B$ exists and $B$ is not gr-full, then $A\in\mathcal{I}$ if and only if $C\in\mathcal{I}$. \item Let $A\in\mathcal{I}$. The result of adding a suitable right multiple of one column of $A$ to another column again lies in $\mathcal{I}$. More precisely, if $A\in M_n(R)[\overline{\alpha}][\overline{\beta}]$ and $a\in R_{\beta_i\beta_j^{-1}}$, then $(A_1\,\dotsc\, A_{j-1}\ A_j+A_ia\ A_{j+1}\,\dotsc\, A_n)\,$ belongs to $\mathcal{I}$. \item If $A\oplus B\in\mathcal{I}$, then $B\oplus A\in\mathcal{I}$. \item Suppose that $A\in M_m(R)[\overline{\alpha}][\overline{\beta}]$ and $B\in M_n(R)[\overline{\delta}][\overline{\varepsilon}]$.
For $C\in M_{n\times m}(R)[\overline{\delta}][\overline{\beta}]$, \[ \begin{pmatrix} A&0\\C&B \end{pmatrix}\in\mathcal{I}\ \textrm{ if and only if } \ \begin{pmatrix} A&0\\0&B \end{pmatrix}\in\mathcal{I}. \] Similarly, for $C\in M_{m\times n}(R)[\overline{\beta}][\overline{\varepsilon}]$, \[ \begin{pmatrix} A&C\\0&B \end{pmatrix}\in\mathcal{I}\ \textrm{ if and only if } \ \begin{pmatrix} A&0\\0&B \end{pmatrix}\in\mathcal{I}. \] \end{enumerate} If, moreover, $\mathcal{I}$ is a gr-matrix ideal, then the following assertions hold true. \begin{enumerate}[\rm(1)] \setcounter{enumi}{4} \item Suppose that $A\in M_n(R)[\overline{\alpha}][\overline{\beta}]$, $B\in M_n(R)[\overline{\beta}][\overline{\delta}]$. Then $AB\in\mathcal{I}$ if and only if $A\oplus B\in\mathcal{I}$. \item If $A$ and $B$ are such that $C=A\nabla B$ exists and $B\in\mathcal{I}$, then $A\in\mathcal{I}$ if and only if $C\in\mathcal{I}$. \item If an identity matrix $I_n$, $n\geq1$, belongs to $\mathcal{I}$, then $\mathcal{I}=\mathfrak{M}(R)$. \end{enumerate} \end{lemma} \begin{proof} Note that (I1), (I2), (I3), (I4) are the same as (PM1), (PM2), (PM3), (PM6). Hence (1)--(6) follow in exactly the same way as in Lemma~\ref{lem:grprimematrixideal}. To prove (7), note that if $I_n\in\mathcal{I}$ for some $n\geq 1$, then repeated application of (I5) shows that the $1\times 1$ matrix $1$ belongs to $\mathcal{I}$. By (I3), any identity matrix $I_m$, $m\geq 1$, belongs to $\mathcal{I}$. Again, using (I3), $I_m\oplus A\in\mathcal{I}$ for any positive integer $m$ and matrix $A\in\mathfrak{M}(R)$. By (5), any $A\in\mathfrak{M}(R)$ belongs to $\mathcal{I}$, as desired. \end{proof} One could think of defining a gr-prime matrix ideal as a gr-matrix ideal $\mathcal{I}$ such that the following two conditions are satisfied. \begin{enumerate}[({I}1)] \setcounter{enumi}{5} \item $\mathcal{I}$ is a proper gr-matrix ideal. \item $\mathcal{I}$ satisfies (PM4). \end{enumerate} We proceed to show that both definitions are equivalent.
Let $\mathcal{P}$ be a gr-prime matrix ideal, i.e.\ (PM1)--(PM6) on page~\pageref{def:grprimematrixideal} are satisfied. Clearly, $\mathcal{P}$ satisfies (I1)--(I4) and (I7). By (PM5), $1\notin\mathcal{P}$. Therefore, by (PM4), if $A\oplus 1\in\mathcal{P}$, then $A\in\mathcal{P}$ for any $A\in\mathfrak{M}(R)$. Hence (I5) is satisfied. Again by (PM5), $\mathcal{P}$ is a proper gr-matrix ideal. Conversely, suppose that $\mathcal{I}$ satisfies (I1)--(I7). Clearly (PM1)--(PM4), (PM6) are satisfied. By Lemma~\ref{lem:grmatrixideal}(7) and (I6), (PM5) is satisfied, as desired. \bigskip It is easy to prove that any intersection of gr-matrix (pre-)ideals is again a gr-matrix (pre-)ideal. Thus, given a subset $\mathcal{S}\subseteq \mathfrak{M}(R)$, we define the \emph{gr-matrix (pre-)ideal generated by $\mathcal{S}$} as the intersection of the gr-matrix (pre-)ideals $\mathcal{I}$ that contain $\mathcal{S}$. That is, $\bigcap_{\mathcal{S}\subseteq\mathcal{I}}\mathcal{I}$. Note that this gr-matrix (pre-)ideal is contained in any gr-matrix (pre-)ideal that contains $\mathcal{S}$. Now we fix some notation that will be used in what follows. Let $\mathcal{W}\subseteq\mathfrak{M}(R)$. We say that a matrix $C\in\mathfrak{M}(R)$ is a determinantal sum of elements of $\mathcal{W}$ if there exist $A_1,\dotsc,A_m\in\mathcal{W}$, $m\geq 1$, such that $A_1\nabla A_2\nabla\dotsc\nabla A_m$ exists for some choice of parentheses and equals $C$. We will write $\mathcal{N}$ to denote the subset of $\mathfrak{M}(R)$ consisting of the matrices which are not gr-full. We will denote the set of all identity matrices by $\mathfrak{I}$. If $\mathcal{X}\subseteq\mathfrak{M}(R)$, we denote by $\mathcal{D}(\mathcal{X})$ the set of all matrices in $\mathfrak{M}(R)$ which are of the form $E(X\oplus A)F$ where $X\in\mathcal{X}$, $A\in\mathfrak{M}(R)$ and $E,F$ are permutation matrices of appropriate sizes. We remark that we allow $A$ to be the empty matrix $\mathbb{O}$.
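As a quick check of the notation (our example, with a hypothetical homogeneous $1\times 1$ matrix $X=(x)$ and an arbitrary homogeneous $1\times 1$ matrix $A=(a)$):

```latex
% Notation check (ours). Two elements of D({X}) for X=(x):
\[
X\oplus A=\begin{pmatrix} x & 0\\ 0 & a \end{pmatrix}\in\mathcal{D}(\{X\}),
\qquad
E\,(X\oplus A)\,F=\begin{pmatrix} 0 & a\\ x & 0 \end{pmatrix}\in\mathcal{D}(\{X\})
\ \text{ for }\ E=\begin{pmatrix} 0&1\\1&0 \end{pmatrix},\ F=I_2.
\]
% Taking A to be the empty matrix recovers X itself as an element of D({X}).
```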
\begin{lemma}\label{lem:idealgenerated} Let $R$ be a $\Gamma$-graded ring and $\mathcal{A}$ be a gr-matrix pre-ideal. Suppose that $\Sigma\subset \mathfrak{M}(R)$ satisfies the following two conditions: \begin{enumerate}[\rm(i)] \item $1\in\Sigma$; \item if $P,Q\in\Sigma$, then $P\oplus Q\in\Sigma$. \end{enumerate} Then the following assertions hold true. \begin{enumerate}[\rm(1)] \item The set $\mathcal{A}/\Sigma\coloneqq\{A\in\mathfrak{M}(R)\colon A\oplus P\in\mathcal{A} \textrm{ for some } P\in\Sigma\}$ is a gr-matrix ideal containing $\mathcal{A}$. \item The gr-matrix ideal $\mathcal{A}/\Sigma$ is proper if and only if $\mathcal{A}\cap\Sigma=\emptyset$. \item The gr-matrix ideal $\mathcal{A}/\mathfrak{I}$ is the gr-matrix ideal generated by $\mathcal{A}$. \end{enumerate} \end{lemma} \begin{proof} (1) Let $A\in\mathcal{A}$. By (I3), $A\oplus 1\in\mathcal{A}$. Since $1\in\Sigma$, $A\in\mathcal{A}/\Sigma$. Hence $\mathcal{A}\subseteq \mathcal{A}/\Sigma$ and, by (I1), all matrices which are not gr-full belong to $\mathcal{A}$. Therefore $\mathcal{A}/\Sigma$ satisfies (I1). Let $A,B\in\mathcal{A}/\Sigma$ be such that $A\nabla B$ is well defined. There exist $P,Q\in\Sigma$ such that $A\oplus P, B\oplus Q\in \mathcal{A}$. By (I3), $A\oplus P\oplus Q$ and $B\oplus Q\oplus P$ belong to $\mathcal{A}$. By (I4), $B\oplus P\oplus Q\in\mathcal{A}$. Now $(A\nabla B)\oplus P\oplus Q=(A\oplus P\oplus Q)\nabla(B\oplus P\oplus Q)\in\mathcal{A}$ by (I2). Hence $A\nabla B\in\mathcal{A}/\Sigma$ and $\mathcal{A}/\Sigma$ satisfies (I2). Let $A\in\mathcal{A}/\Sigma$ and $B\in\mathfrak{M}(R)$. There exists $P\in\Sigma$ such that $A\oplus P\in\mathcal{A}$. By (I3), $A\oplus P\oplus B\in\mathcal{A}$. Now (I4) implies that $A\oplus B\oplus P\in\mathcal{A}$. Hence $A\oplus B\in\mathcal{A}/\Sigma$, and $\mathcal{A}/\Sigma$ satisfies (I3). Let $A\in\mathcal{A}/\Sigma$ and $E,F$ be permutation matrices of the same size as $A$. There exists $P\in\Sigma$ such that $A\oplus P\in\mathcal{A}$.
Since $E\oplus I$ and $F\oplus I$ are also permutation matrices, (I4) implies that $(E\oplus I)(A\oplus P)(F\oplus I)=EAF\oplus P\in\mathcal{A}$. Hence $EAF\in\mathcal{A}/\Sigma$ and (I4) is satisfied. Let now $A\in\mathfrak{M}(R)$ be such that $A\oplus 1\in\mathcal{A}/\Sigma$. Thus there exists $P\in \Sigma$ such that $A\oplus 1\oplus P\in\mathcal{A}$. Since $1\oplus P\in\Sigma$, it follows that $A\in\mathcal{A}/\Sigma$, and $\mathcal{A}/\Sigma$ satisfies (I5). (2) Suppose that $\mathcal{A}\cap\Sigma\neq\emptyset$. Let $P\in\mathcal{A}\cap\Sigma$ and $M\in\mathfrak{M}(R)$. Then $P\oplus M\in\mathcal{A}$ by (I3). By (I4), $M\oplus P\in\mathcal{A}$. Hence $M\in\mathcal{A}/\Sigma$. Therefore, $\mathcal{A}/\Sigma=\mathfrak{M}(R)$. Conversely, suppose that $\mathcal{A}/\Sigma=\mathfrak{M}(R)$. Thus, $1\in\mathcal{A}/\Sigma$ and there exists $P\in\Sigma$ such that $1\oplus P\in\mathcal{A}$. Notice that $1\oplus P\in\Sigma$ by (i) and (ii). Therefore $\mathcal{A}\cap\Sigma\neq \emptyset$. (3) Clearly $\mathfrak{I}$ satisfies conditions (i) and (ii). Thus $\mathcal{A}/\mathfrak{I}$ is a gr-matrix ideal that contains $\mathcal{A}$ by (1). Let now $\mathcal{B}$ be a gr-matrix ideal such that $\mathcal{A}\subseteq \mathcal{B}$. If $A\in\mathcal{A}/\mathfrak{I}$, then there exists $n\geq 1$ such that $A\oplus I_n\in\mathcal{A}\subseteq \mathcal{B}$. By applying (I5) repeatedly, we obtain that $A\in\mathcal{B}$, as desired. \end{proof} \begin{lemma}\label{lem:idealgenerated2} Let $R$ be a $\Gamma$-graded ring and let $\mathcal{X}\subseteq\mathfrak{M}(R)$. Let $\mathcal{A}(\mathcal{X})$ be the subset of $\mathfrak{M}(R)$ consisting of all the matrices that can be expressed as a determinantal sum of elements of $\mathcal{N}\cup\mathcal{D}(\mathcal{X})$. The following assertions hold true. \begin{enumerate}[\rm(1)] \item $\mathcal{A}(\mathcal{X})$ is the gr-matrix pre-ideal generated by $\mathcal{X}$. \item $\mathcal{A}(\mathcal{X})/\mathfrak{I}$ is the gr-matrix ideal generated by $\mathcal{X}$.
\item The gr-matrix ideal generated by $\mathcal{X}$ is proper if and only if $ \mathcal{A}(\mathcal{X})\cap\mathfrak{I}=\emptyset$. \end{enumerate} \end{lemma} \begin{proof} (1) $\mathcal{X}\subseteq \mathcal{A}(\mathcal{X})$ because $X=I(X\oplus\mathbb{O})I$ for all $X\in\mathcal{X}$. By definition of $\mathcal{A}(\mathcal{X})$, every homogeneous matrix that is not gr-full belongs to $\mathcal{A}(\mathcal{X})$. For the same reason, if $A,B\in\mathcal{A}(\mathcal{X})$ and $A\nabla B$ is defined, then $A\nabla B\in\mathcal{A}(\mathcal{X})$. Let $A\in \mathcal{A}(\mathcal{X})$ and $B\in\mathfrak{M}(R)$. That $A\oplus B\in\mathcal{A}(\mathcal{X})$ follows from the following three facts. First, for any $U,V\in\mathfrak{M}(R)$, when defined, $(U\nabla V)\oplus M=(U\oplus M)\nabla (V\oplus M)$. Second, for $X\in\mathcal{X}$, $U,M\in\mathfrak{M}(R)$ and permutation matrices $E,F$ of suitable size, $E(X\oplus U)F\oplus M=(E\oplus I)(X\oplus U\oplus M)(F\oplus I)$. Third, if $U$ is not gr-full, then $U\oplus M$ is not gr-full for all $M\in\mathfrak{M}(R)$. Indeed, if $U=U_1U_2$, then $U\oplus M=(U_1\oplus M)(U_2\oplus I)$. If $A\in\mathcal{A}(\mathcal{X})$ and $E,F$ are permutation matrices of appropriate size, then $EAF\in\mathcal{A}(\mathcal{X})$. This follows from the following facts. First, if $U,V\in\mathfrak{M}(R)$ and $E,F$ are permutation matrices such that $E(U\nabla V)F$ is defined, then $E(U\nabla V)F=EUF\nabla EVF$. Second, for $X\in\mathcal{X}$, $U\in\mathfrak{M}(R)$ and permutation matrices $E,F,P,Q$ of appropriate sizes, $P(E(X\oplus U)F)Q=(PE)(X\oplus U)(FQ)$. Third, if $U\in\mathfrak{M}(R)$ is not gr-full, and $E,F$ are permutation matrices of appropriate size, then $EUF$ is not gr-full. Indeed, if $U=U_1U_2$, then $EUF=(EU_1)(U_2F)$. Therefore, $\mathcal{A}(\mathcal{X})$ is a gr-matrix pre-ideal that contains $\mathcal{X}$. Let now $\mathcal{B}$ be a gr-matrix pre-ideal such that $\mathcal{X}\subseteq\mathcal{B}$.
By (I1), $\mathcal{N}\subseteq\mathcal{B}$. By (I3) and (I4), $E(X\oplus A)F\in\mathcal{B}$ for all $X\in\mathcal{X}$, $A\in\mathfrak{M}(R)$ and permutation matrices $E,F$ of appropriate size. By (I2), $\mathcal{A}(\mathcal{X})\subseteq\mathcal{B}$. (2) Any gr-matrix ideal containing $\mathcal{X}$ must contain $\mathcal{A}(\mathcal{X})$. By Lemma~\ref{lem:idealgenerated}(3), the result follows. (3) By (2), the gr-matrix ideal generated by $\mathcal{X}$ equals $\mathcal{A}(\mathcal{X})/\mathfrak{I}$. By Lemma~\ref{lem:idealgenerated}(2), $\mathcal{A}(\mathcal{X})/\mathfrak{I}$ is proper if and only if $\mathcal{A}(\mathcal{X})\cap\mathfrak{I}=\emptyset$. \end{proof} \begin{corollary}\label{coro:leastideal} Let $R$ be a $\Gamma$-graded ring. The set $\mathcal{A}(\mathcal{N})/\mathfrak{I}$ is the least gr-matrix ideal. Hence $R$ has proper gr-matrix ideals if and only if no matrix of $\mathfrak{I}$ can be expressed as a determinantal sum of matrices of $\mathcal{N}$. \end{corollary} \begin{proof} The set $\mathcal{N}$ is contained in each gr-matrix ideal. By Lemma~\ref{lem:idealgenerated2}(2), $\mathcal{A}(\mathcal{N})/\mathfrak{I}$ is the gr-matrix ideal generated by $\mathcal{N}$. Thus all gr-matrix ideals contain the gr-matrix ideal $\mathcal{A}(\mathcal{N})/\mathfrak{I}$. Since any matrix in $\mathfrak{M}(R)$ of the form $E(X\oplus A)F$, where $X\in\mathcal{N}$, $A\in\mathfrak{M}(R)$ and $E,F$ are permutation matrices of appropriate sizes, again belongs to $\mathcal{N}$, we have $\mathcal{D}(\mathcal{N})=\mathcal{N}$. Thus, $\mathcal{A}(\mathcal{N})$ consists of the matrices in $\mathfrak{M}(R)$ that can be expressed as a determinantal sum of matrices from $\mathcal{N}$. Now $R$ has proper gr-matrix ideals if and only if $\mathcal{A}(\mathcal{N})/\mathfrak{I}$ is proper. By Lemma~\ref{lem:idealgenerated}(2), this is equivalent to $\mathcal{A}(\mathcal{N})\cap\mathfrak{I}=\emptyset$.
In other words, no matrix of $\mathfrak{I}$ can be expressed as a determinantal sum of matrices of $\mathcal{N}$. \end{proof} \begin{lemma}\label{lem:quotientideal} Let $R$ be a $\Gamma$-graded ring, $\mathcal{I}$ be a gr-matrix ideal and $\mathcal{Z}\subseteq\mathfrak{M}(R)$. Then the set $\mathcal{I}_\mathcal{Z}=\{A\in\mathfrak{M}(R)\colon A\oplus Z\in\mathcal{I} \textrm{ for all }Z\in\mathcal{Z}\}$ is a gr-matrix ideal. \end{lemma} \begin{proof} Let $A\in\mathfrak{M}(R)$ and suppose it is not gr-full. If $A=BC$, then $A\oplus Z=(B\oplus Z)(C\oplus I)$ for all $Z\in\mathcal{Z}$. Thus $A\in\mathcal{I}_\mathcal{Z}$ and (I1) is satisfied. Let $A,B\in\mathcal{I}_\mathcal{Z}$ and suppose that $A\nabla B$ exists. Then $(A\nabla B)\oplus Z=(A\oplus Z)\nabla(B\oplus Z)$ for all $Z\in\mathcal{Z}$. Since $A\oplus Z,B\oplus Z\in\mathcal{I}$, we obtain $(A\nabla B)\oplus Z\in\mathcal{I}$ for all $Z\in\mathcal{Z}$. Hence $A\nabla B\in\mathcal{I}_\mathcal{Z}$, and (I2) is satisfied. Let $A\in\mathcal{I}_\mathcal{Z}$ and $B\in\mathfrak{M}(R)$. Since $A\oplus Z\in\mathcal{I}$ for all $Z\in\mathcal{Z}$ and $\mathcal{I}$ is a gr-matrix ideal, $A\oplus Z\oplus B\in\mathcal{I}$ for all $Z\in\mathcal{Z}$. By (I4), $A\oplus B\oplus Z\in\mathcal{I}$ for all $Z\in\mathcal{Z}$. Therefore $A\oplus B\in\mathcal{I}_\mathcal{Z}$ and (I3) is satisfied. If $A\in\mathcal{I}_\mathcal{Z}$, $Z\in\mathcal{Z}$ and $E,F$ are permutation matrices of appropriate size, then $EAF\oplus Z=(E\oplus I)(A\oplus Z)(F\oplus I)$. This shows that $EAF\in\mathcal{I}_\mathcal{Z}$ and (I4) is satisfied. Suppose now that $A\in\mathfrak{M}(R)$ and that $A\oplus 1\in\mathcal{I}_\mathcal{Z}$. Hence $A\oplus 1\oplus Z\in\mathcal{I}$ for all $Z\in\mathcal{Z}$. By (I4), $A\oplus Z\oplus 1\in\mathcal{I}$ for all $Z\in\mathcal{Z}$. Now, by (I5), $A\oplus Z\in\mathcal{I}$ for all $Z\in\mathcal{Z}$, which shows that $A\in\mathcal{I}_\mathcal{Z}$. Therefore (I5) is satisfied.
\end{proof} Let $\mathcal{A}_1,\mathcal{A}_2$ be two gr-matrix ideals of a $\Gamma$-graded ring $R$. The \emph{product of $\mathcal{A}_1$ and $\mathcal{A}_2$}, denoted by $\mathcal{A}_1\mathcal{A}_2$, is the gr-matrix ideal generated by the set $$\{A_1\oplus A_2\colon A_1\in\mathcal{A}_1, A_2\in\mathcal{A}_2\}.$$ A helpful description of $\mathcal{A}_1\mathcal{A}_2$ is given in the following lemma. \begin{lemma} Let $R$ be a $\Gamma$-graded ring and $\mathcal{X}_1,\mathcal{X}_2\subseteq\mathfrak{M}(R)$. Set $$\mathcal{X}=\{X_1\oplus X_2\colon X_1\in\mathcal{X}_1, X_2\in\mathcal{X}_2\}.$$ Let $\mathcal{A}_1$ be the gr-matrix ideal generated by $\mathcal{X}_1$, $\mathcal{A}_2$ be the gr-matrix ideal generated by $\mathcal{X}_2$ and $\mathcal{A}$ be the gr-matrix ideal generated by $\mathcal{X}$. Then $\mathcal{A}=\mathcal{A}_1\mathcal{A}_2$. As a consequence, for any $A,B\in\mathfrak{M}(R)$, $\langle A\rangle\langle B\rangle=\langle A\oplus B\rangle$, where $\langle A\rangle$ denotes the gr-matrix ideal generated by $\{A\}$. \end{lemma} \begin{proof} First, $\mathcal{A}\subseteq \mathcal{A}_1\mathcal{A}_2$ because $X_1\oplus X_2\in\mathcal{A}_1\mathcal{A}_2$ for all $X_1\in\mathcal{X}_1,X_2\in\mathcal{X}_2$. Now observe that $X_1\oplus X_2\in\mathcal{X}\subseteq \mathcal{A}$ for all $X_1\in\mathcal{X}_1$, $X_2\in\mathcal{X}_2$. By (I4), $X_2\oplus X_1\in\mathcal{A}$ for all $X_1\in\mathcal{X}_1$, $X_2\in\mathcal{X}_2$. Hence $\mathcal{X}_2$ is contained in the gr-matrix ideal $\mathcal{A}_{\mathcal{X}_1}$. Thus, $\mathcal{A}_2\subseteq \mathcal{A}_{\mathcal{X}_1}$. This implies that $A_2\oplus X_1\in\mathcal{A}$ for all $A_2\in\mathcal{A}_2$ and $X_1\in\mathcal{X}_1$. Again by (I4), $X_1\oplus A_2\in\mathcal{A}$ for all $A_2\in\mathcal{A}_2$ and $X_1\in\mathcal{X}_1$. Therefore $\mathcal{X}_1$ is contained in the gr-matrix ideal $\mathcal{A}_{\mathcal{A}_2}$. Thus $\mathcal{A}_1\subseteq \mathcal{A}_{\mathcal{A}_2}$.
This means that $A_1\oplus A_2\in\mathcal{A}$ for all $A_1\in\mathcal{A}_1$ and $A_2\in\mathcal{A}_2$. Therefore $\mathcal{A}_1\mathcal{A}_2\subseteq \mathcal{A}$. \end{proof} Now we show that gr-prime matrix ideals behave like graded prime ideals of graded rings. \begin{proposition} Let $R$ be a $\Gamma$-graded ring. For a proper gr-matrix ideal $\mathcal{P}$, the following statements are equivalent. \begin{enumerate}[\rm(1)] \item $\mathcal{P}$ is a gr-prime matrix ideal. \item For gr-matrix ideals $\mathcal{A}_1,\mathcal{A}_2$, if $\mathcal{A}_1\mathcal{A}_2\subseteq \mathcal{P}$, then $\mathcal{A}_1\subseteq \mathcal{P}$ or $\mathcal{A}_2\subseteq \mathcal{P}$. \item For gr-matrix ideals $\mathcal{A}_1,\mathcal{A}_2$ that contain $\mathcal{P}$, if $\mathcal{A}_1\mathcal{A}_2\subseteq \mathcal{P}$, then $\mathcal{A}_1=\mathcal{P}$ or $\mathcal{A}_2=\mathcal{P}$. \end{enumerate} \end{proposition} \begin{proof} Suppose (1) holds true. Let $\mathcal{A}_1,\mathcal{A}_2$ be gr-matrix ideals such that $\mathcal{A}_1\nsubseteq\mathcal{P}$ and $\mathcal{A}_2\nsubseteq\mathcal{P}$. Hence there exist $A_1\in\mathcal{A}_1\setminus\mathcal{P}$ and $A_2\in\mathcal{A}_2\setminus\mathcal{P}$. Since $\mathcal{P}$ is a gr-prime matrix ideal, $A_1\oplus A_2\notin\mathcal{P}$. This implies that $\mathcal{A}_1\mathcal{A}_2\nsubseteq\mathcal{P}$. Therefore (2) holds true. Clearly (2) implies (3). Suppose (3) holds true and let $A_1,A_2\in\mathfrak{M}(R)$ be such that $A_1\oplus A_2\in\mathcal{P}$. Let $\mathcal{A}_1,\mathcal{A}_2$ be the gr-matrix ideals generated by $\mathcal{P}\cup\{A_1\}$ and $\mathcal{P}\cup\{A_2\}$, respectively. Notice that $X_1\oplus X_2\in\mathcal{P}$ for $X_1\in \mathcal{A}_1$, $X_2\in\mathcal{A}_2$. Hence $\mathcal{A}_1\mathcal{A}_2\subseteq \mathcal{P}$. By (3), either $\mathcal{A}_1=\mathcal{P}$ or $\mathcal{A}_2=\mathcal{P}$. Hence $A_1\in\mathcal{P}$ or $A_2\in\mathcal{P}$, and (1) is satisfied. \end{proof} Let $\mathcal{A}$ be a gr-matrix ideal.
The \emph{radical of $\mathcal{A}$} is defined as the set $$\sqrt{\mathcal{A}}=\{A\in\mathfrak{M}(R)\colon \oplus^r\!\! A\in\mathcal{A} \textrm{ for some positive integer }r\}.$$ We say that a proper gr-matrix ideal $\mathcal{A}$ is \emph{gr-semiprime} if $\sqrt{\mathcal{A}}=\mathcal{A}$. \begin{lemma} Let $R$ be a $\Gamma$-graded ring and let $\mathcal{A}$ be a gr-matrix ideal. The following assertions hold true. \begin{enumerate}[\rm(1)] \item $\sqrt{\mathcal{A}}$ is a gr-matrix ideal that contains $\mathcal{A}$. \item $\sqrt{\sqrt{\mathcal{A}}}=\sqrt{\mathcal{A}}$. \item If $\mathcal{A}$ is a gr-prime matrix ideal, then $\sqrt{\mathcal{A}}=\mathcal{A}$. \end{enumerate} \end{lemma} \begin{proof} (1) If $A\in\mathcal{A}$, then, for $r=1$, we obtain that $A=\oplus^1 A\in\mathcal{A}$. Hence $\mathcal{A}\subseteq \sqrt{\mathcal{A}}$. In particular, all homogeneous matrices which are not gr-full belong to $\sqrt{\mathcal{A}}$. Thus $\sqrt{\mathcal{A}}$ satisfies (I1). Let $A,B\in\sqrt{\mathcal{A}}$ be such that $A\nabla B$ exists. There exist $r,s\geq 1$ such that $\oplus^r A$, $\oplus^s B\in\mathcal{A}$. Set $n=r+s+1$. To prove that $\sqrt{\mathcal{A}}$ satisfies (I2), it is enough to show that $\oplus^n(A\nabla B)\in\mathcal{A}$. For that aim, using $(A\nabla B)\oplus P=(A\oplus P)\nabla(B\oplus P)$, one can prove by induction on $n$ that $\oplus^n(A\nabla B)$ is a determinantal sum of elements of the form \begin{equation}\label{eq:radical} C_1\oplus C_2\oplus\dotsb\oplus C_n \end{equation} where each $C_i$ equals $A$ or $B$. By the choice of $n$, there are at least $r$ $C_i$'s equal to $A$ or at least $s$ $C_i$'s equal to $B$. In either case, there exist permutation matrices $E,F$ of appropriate size such that $$C_1\oplus C_2\oplus\dotsb\oplus C_n=\left\{ \begin{array}{l} E((\oplus^rA)\oplus C_{r+1}'\oplus \dotsb \oplus C_n')F \\ E((\oplus^s B)\oplus C_{s+1}'\oplus \dotsb \oplus C_n')F \end{array}\right.
.$$ By (I3) and (I4), the matrices in \eqref{eq:radical} belong to $\mathcal{A}$. Now (I2) implies that $\oplus^n(A\nabla B)\in\mathcal{A}$, as desired. Let now $A\in\sqrt{\mathcal{A}}$ and $B\in\mathfrak{M}(R)$. There exists $r\geq 1$ such that $\oplus^rA\in\mathcal{A}$. The equality $\oplus^r(A\oplus B)=E((\oplus^rA)\oplus(\oplus^r B))F$ holds for some permutation matrices $E,F$. Hence $\oplus^r(A\oplus B)\in\mathcal{A}$. Thus $A\oplus B\in\sqrt{\mathcal{A}}$ and $\sqrt{\mathcal{A}}$ satisfies (I3). Let $A\in\sqrt{\mathcal{A}}$ be such that $\oplus^rA\in\mathcal{A}$ for some $r\geq 1$. For permutation matrices $E,F$ of appropriate size, $$\oplus^r(EAF)=(\oplus^rE)(\oplus^rA)(\oplus^rF)\in\mathcal{A}.$$ Therefore $EAF\in\sqrt{\mathcal{A}}$ and $\sqrt{\mathcal{A}}$ satisfies (I4). If $X\in\mathfrak{M}(R)$ is such that $X\oplus 1\in\sqrt{\mathcal{A}}$, then there exists $t\geq 1$ such that $\oplus^t(X\oplus 1)\in\mathcal{A}$. But now $(\oplus^t X)\oplus I_t=E(\oplus^t(X\oplus 1))F\in\mathcal{A}$ for suitable permutation matrices $E,F$. Applying (I5) repeatedly, we get that $\oplus^tX\in\mathcal{A}$, and therefore $X\in\sqrt{\mathcal{A}}$. Hence $\sqrt{\mathcal{A}}$ satisfies (I5). (2) By (1), $\sqrt{\mathcal{A}}\subseteq \sqrt{\sqrt{\mathcal{A}}}$. Let now $A\in\sqrt{\sqrt{\mathcal{A}}}$. This means that $\oplus^rA\in\sqrt{\mathcal{A}}$ for some positive integer $r$. Hence there exists a positive integer $s$ such that $\oplus^s(\oplus^r A)\in\mathcal{A}$. Thus, $\oplus^{rs}A=\oplus^s(\oplus^r A)\in\mathcal{A}$. Therefore $A\in\sqrt{\mathcal{A}}$, as desired. (3) Suppose $\mathcal{A}$ is a gr-prime matrix ideal and let $A\in\sqrt{\mathcal{A}}$. Hence $\oplus^rA\in\mathcal{A}$ for some positive integer $r$. By (PM4), $A\in\mathcal{A}$, as desired. \end{proof} \begin{proposition}\label{prop:maximalisprime} Let $R$ be a $\Gamma$-graded ring. Suppose that the nonempty subset $\Sigma$ of $\mathfrak{M}(R)$ and the gr-matrix ideal $\mathcal{A}$ satisfy the following two conditions.
\begin{enumerate}[\rm(i)] \item $A\oplus B\in\Sigma$ for all $A,B\in\Sigma$; \item $\mathcal{A}\cap\Sigma=\emptyset$. \end{enumerate} Then the set $W$ of gr-matrix ideals $\mathcal{B}$ such that $\mathcal{A}\subseteq \mathcal{B}$ and $\mathcal{B}\cap\Sigma=\emptyset$ has maximal elements and each such maximal element is a gr-prime matrix ideal. \end{proposition} \begin{proof} Let $(\mathcal{C}_i)_{i\in I}$ be a nonempty chain in $W$. Set $\mathcal{C}=\bigcup_{i\in I}\mathcal{C}_i$. It is not difficult to show that $\mathcal{C}$ is a gr-matrix ideal. Then clearly $\mathcal{A}\subseteq\mathcal{C}_i\subseteq\mathcal{C}$ and $\mathcal{C}\cap\Sigma=(\bigcup_{i\in I}\mathcal{C}_i)\cap\Sigma= \bigcup_{i\in I}(\mathcal{C}_i\cap\Sigma)=\emptyset$. By Zorn's lemma, $W$ has maximal elements. Suppose that $\mathcal{P}$ is a maximal element of $W$. Since $\mathcal{P}\cap\Sigma=\emptyset$, $\mathcal{P}$ is a proper gr-matrix ideal. Let $\mathcal{A}_1,\mathcal{A}_2$ be gr-matrix ideals such that $\mathcal{P}\subsetneq\mathcal{A}_1$, $\mathcal{P}\subsetneq\mathcal{A}_2$. Since $\mathcal{P}$ is maximal in $W$, there exist $A_1\in\mathcal{A}_1\cap\Sigma$, $A_2\in\mathcal{A}_2\cap\Sigma$. Then $A_1\oplus A_2\in\Sigma$ and, since $\mathcal{P}\cap\Sigma=\emptyset$, $A_1\oplus A_2 \notin\mathcal{P}$. Therefore $\mathcal{A}_1\mathcal{A}_2\nsubseteq \mathcal{P}$, and $\mathcal{P}$ is a gr-prime matrix ideal by the foregoing proposition. \end{proof} \begin{corollary}\label{coro:primeidealsexistence} Let $R$ be a $\Gamma$-graded ring. Let $\mathcal{A}$ be a proper gr-matrix ideal. Then there exist maximal gr-matrix ideals $\mathcal{P}$ with $\mathcal{A}\subseteq \mathcal{P}$, and such maximal gr-matrix ideals are gr-prime matrix ideals. In particular, if there are proper gr-matrix ideals, then gr-prime matrix ideals exist. \end{corollary} \begin{proof} By Lemma~\ref{lem:grmatrixideal}(7), no identity matrix belongs to $\mathcal{A}$. Apply now Proposition~\ref{prop:maximalisprime} to $\mathcal{A}$ and $\Sigma=\mathfrak{I}$.
\end{proof} \begin{proposition}\label{prop:radicalofideal} Let $R$ be a $\Gamma$-graded ring. For each proper gr-matrix ideal $\mathcal{A}$, the radical $\sqrt{\mathcal{A}}$ is the intersection of all gr-prime matrix ideals that contain $\mathcal{A}$. \end{proposition} \begin{proof} Let $\mathcal{P}$ be a gr-prime matrix ideal such that $\mathcal{A}\subseteq\mathcal{P}$. If $A\in\sqrt{\mathcal{A}}$, then $\oplus^rA\in\mathcal{A}\subseteq\mathcal{P}$ for some positive integer $r$. By (PM4), $A\in\mathcal{P}$. Thus $\sqrt{\mathcal{A}}\subseteq\mathcal{P}$. Let now $A\in\mathfrak{M}(R)\setminus\sqrt{\mathcal{A}}$. Notice that such an $A$ exists because $\sqrt{\mathcal{A}}\subseteq\mathcal{P}$ and $\mathcal{P}$ is proper. If we apply Proposition~\ref{prop:maximalisprime} to $\mathcal{A}$ and $\Sigma=\{\oplus^rA\colon r \textrm{ a positive integer}\}$, we obtain a gr-prime matrix ideal $\mathcal{P}'$ such that $\mathcal{A}\subseteq\mathcal{P}'$ and $\mathcal{P}'\cap\Sigma=\emptyset$. Therefore $A$ does not belong to the intersection of the gr-prime matrix ideals that contain $\mathcal{A}$. \end{proof} \begin{corollary} Let $R$ be a $\Gamma$-graded ring. A proper gr-matrix ideal is gr-semiprime if and only if it is the intersection of gr-prime matrix ideals. \qed \end{corollary} Let $R$ be a $\Gamma$-graded ring. By Corollary~\ref{coro:leastideal}, $\mathcal{A}(\mathcal{N})/\mathfrak{I}$ is the least gr-matrix ideal. We define the \emph{gr-matrix nilradical} of $R$ as the gr-matrix ideal $\mathfrak{N}=\sqrt{\mathcal{A}(\mathcal{N})/\mathfrak{I}}$. \begin{theorem} Let $R$ be a $\Gamma$-graded ring. The following assertions are equivalent. \begin{enumerate}[\rm(1)] \item There exists a $\Gamma$-graded epic $R$-division ring $(K,\varphi)$. \item There exists a homomorphism of $\Gamma$-almost graded rings from $R$ to a $\Gamma$-almost graded division ring. \item The gr-matrix nilradical is a proper gr-matrix ideal. \item No identity matrix can be expressed as a determinantal sum of elements of $\mathcal{N}$.
\end{enumerate} \end{theorem} \begin{proof} (1) is equivalent to (2) by Theorem~\ref{theo:gradedlocal}(2)(b). One could also argue as follows. By Proposition~\ref{prop:viceversa}, (2) implies the existence of gr-prime matrix ideals, and therefore of $\Gamma$-graded epic $R$-division rings by Theorem~\ref{theo:primematrixequalsdivisionring}. If (1) holds, the gr-singular kernel $\mathcal{P}$ of $\varphi$ is a gr-prime matrix ideal by Theorem~\ref{theo:primematrixequalsdivisionring}. Hence the least gr-matrix ideal $\mathcal{A}(\mathcal{N})/\mathfrak{I}$ is proper and, by Proposition~\ref{prop:radicalofideal}, $\mathfrak{N}\subseteq\mathcal{P}$. Thus $\mathfrak{N}$ is proper and (3) holds. If (3) holds, then $\mathcal{A}(\mathcal{N})/\mathfrak{I}$ is a proper gr-matrix ideal. By Corollary~\ref{coro:leastideal}, (4) holds. Suppose that (4) holds true. Again by Corollary~\ref{coro:leastideal}, there exist proper gr-matrix ideals. By Corollary~\ref{coro:primeidealsexistence}, gr-prime matrix ideals exist. Now Theorem~\ref{theo:primematrixequalsdivisionring} implies (1). \end{proof} \begin{theorem} Let $R$ be a $\Gamma$-graded ring. There exists a universal $\Gamma$-graded epic $R$-division ring if and only if the gr-matrix nilradical is a gr-prime matrix ideal. \end{theorem} \begin{proof} By Corollary~\ref{coro:specialization}, the existence of a universal $\Gamma$-graded epic $R$-division ring is equivalent to the existence of a least gr-prime matrix ideal $\mathcal{P}$. Hence the least gr-matrix ideal $\mathcal{A}(\mathcal{N})/\mathfrak{I} \subseteq \mathcal{P}$ is proper. By Proposition~\ref{prop:radicalofideal}, $\mathfrak{N}$ is the intersection of all gr-prime matrix ideals. Hence $\mathfrak{N}=\mathcal{P}$. Conversely, if $\mathfrak{N}$ is a gr-prime matrix ideal, then $\mathcal{A}(\mathcal{N})/\mathfrak{I}$ is proper and, by Proposition~\ref{prop:radicalofideal}, $\mathfrak{N}$ is the intersection of all gr-prime matrix ideals. Therefore $\mathfrak{N}$ is the least gr-prime matrix ideal. \end{proof} \begin{proposition}\label{prop:invertiblematrices} Let $R$ be a $\Gamma$-graded ring and let $P,Q\in\mathfrak{M}(R)$.
There exists a homomorphism of $\Gamma$-graded rings $\varphi\colon R\rightarrow K$ to a $\Gamma$-graded division ring $K$ such that $P^\varphi$ is invertible over $K$ and $Q^\varphi$ is not invertible over $K$ if and only if no matrix of the form $I\oplus (\oplus^r P)$ can be expressed as a determinantal sum of matrices of $\mathcal{N}\cup \mathcal{D}(\{Q\})$. \end{proposition} \begin{proof} The existence of such $(K,\varphi)$ is equivalent to the existence of gr-prime matrix ideals $\mathcal{P}$ such that $Q\in\mathcal{P}$ and $P\notin\mathcal{P}$. The existence of such gr-prime matrix ideals is equivalent to the condition $P\notin\sqrt{\langle Q\rangle}$, where $\langle Q\rangle$ denotes the gr-matrix ideal generated by $Q$. Hence it is equivalent to the condition that no matrix of the form $\oplus^rP$ belongs to $\langle Q\rangle$. By Lemma~\ref{lem:idealgenerated2}(2), $\langle Q\rangle$ is of the form $\mathcal{A}(\{Q\})/\mathfrak{I}$. Therefore, by Lemmas~\ref{lem:idealgenerated} and \ref{lem:idealgenerated2}, everything is equivalent to the condition that no matrix of the form $I\oplus(\oplus^rP)$ can be expressed as a determinantal sum of matrices of $\mathcal{N}\cup\mathcal{D}(\{Q\})$, as desired. \end{proof} \begin{corollary} Let $R$ be a $\Gamma$-graded ring and let $P,Q\in\mathfrak{M}(R)$. The following assertions hold true. \begin{enumerate}[\rm(1)] \item There exists a $\Gamma$-graded epic $R$-division ring $(K,\varphi)$ such that $P^\varphi$ is invertible over $K$ if and only if no matrix of the form $I\oplus(\oplus^rP)$ can be expressed as a determinantal sum of matrices of $\mathcal{N}$. \item There exists a $\Gamma$-graded epic $R$-division ring $(K,\varphi)$ such that $Q^\varphi$ is not invertible over $K$ if and only if no identity matrix can be expressed as a determinantal sum of matrices of $\mathcal{N}\cup\mathcal{D}(\{Q\})$. \end{enumerate} \end{corollary} \begin{proof} (1) In Proposition~\ref{prop:invertiblematrices}, let $Q=0$.
(2) In Proposition~\ref{prop:invertiblematrices}, let $P=1$. \end{proof} \begin{theorem} Let $R$ be a $\Gamma$-graded ring. The following assertions are equivalent. \begin{enumerate}[\rm(1)] \item There exists a $\Gamma$-graded epic $R$-division ring of fractions $(K,\varphi)$. \item There exists a homomorphism of $\Gamma$-almost graded rings $\varphi\colon R\rightarrow K$ with $K$ a $\Gamma$-almost graded division ring such that $\varphi(x)\neq 0$ for each $x\in\h(R)\setminus\{0\}$. \item $R$ is a $\Gamma$-graded domain and no matrix of the form $aI$ with $a\in\h(R)\setminus\{0\}$ can be expressed as a determinantal sum of matrices of $\mathcal{N}$. \item No diagonal matrix with nonzero homogeneous elements on the main diagonal can be expressed as a determinantal sum of matrices of $\mathcal{N}$. \end{enumerate} \end{theorem} \begin{proof} (1) and (2) are equivalent by Theorem~\ref{theo:gradedlocal}(2)(b). Suppose that (1) holds true. Then, for each diagonal matrix $A$ as in (4), $A^\varphi$ is invertible. Thus, $A\notin\mathcal{P}$, the gr-prime matrix ideal given as the gr-singular kernel of $\varphi$. In particular, $A$ cannot be expressed as a determinantal sum of matrices in $\mathcal{N}$. Thus (4) holds. Suppose (4) holds. Clearly no matrix of the form $aI$ with $a\in\h(R)\setminus\{0\}$ can be expressed as a determinantal sum of matrices of $\mathcal{N}$. Thus, to prove (3), it remains to show that $R$ is a $\Gamma$-graded domain. Let $a,b\in\h(R)$ be of degrees $\gamma,\delta\in\Gamma$, respectively. If $ab=0$, then $\left(\begin{smallmatrix} a & 0 \\ 0 & b \end{smallmatrix} \right) \in M_2(R)[(\gamma,e)][(e,\delta^{-1})]$. Then we can express $\left(\begin{smallmatrix} a & 0 \\ 0 & b \end{smallmatrix} \right)= \left(\begin{smallmatrix} a & 0 \\ 1 & b \end{smallmatrix} \right)\nabla \left(\begin{smallmatrix} 0 & 0 \\ -1 & b \end{smallmatrix} \right)$ as a determinantal sum of matrices in $M_2(R)[(\gamma,e)][(e,\delta^{-1})]$.
Note that $\left(\begin{smallmatrix} 0 & 0 \\ -1 & b \end{smallmatrix} \right)$ is hollow, and hence it is not gr-full. Furthermore, since $ab=0$, $\left(\begin{smallmatrix} a & 0 \\ 1 & b \end{smallmatrix} \right)= \left(\begin{smallmatrix} a \\ 1 \end{smallmatrix} \right) \left(\begin{smallmatrix} 1 & b \end{smallmatrix} \right)$, where the factors belong to $M_{2\times 1}(R)[(\gamma,e)][e]$ and $M_{1\times 2}(R)[e][(e,\delta^{-1})]$, respectively, so this matrix is not gr-full either. Hence $\left(\begin{smallmatrix} a & 0 \\ 0 & b \end{smallmatrix} \right)$ can be expressed as a determinantal sum of matrices from $\mathcal{N}$. By (4), either $a=0$ or $b=0$. Hence $R$ is a $\Gamma$-graded domain and (3) holds. Suppose now that (3) holds. If there does not exist a $\Gamma$-graded epic $R$-division ring of fractions, then, by Corollary~\ref{coro:ultraproductoffractions}, there exists a nonzero $a\in\h(R)$ such that $a^\varphi$ is not invertible for every homomorphism of $\Gamma$-graded rings $\varphi\colon R\rightarrow K$ with $K$ a $\Gamma$-graded division ring. Hence the $1\times 1$ homogeneous matrix $(a)$ belongs to the intersection of all gr-prime matrix ideals, i.e.\ $(a)\in\mathfrak{N}$. Hence $\oplus^r(a)\in\mathcal{A}(\mathcal{N})/\mathfrak{I}$ for some positive integer $r$. Thus $I_s\oplus(\oplus^r(a))=I_s\oplus aI_r$ can be written as a determinantal sum of matrices of $\mathcal{N}$ for some positive integer $s$. Then, since $aI_s\oplus I_r\in \mathfrak{M}(R)$ and it is diagonal, $aI_{r+s}=(aI_s\oplus I_r)(I_s\oplus aI_r)$ is a determinantal sum of matrices of $\mathcal{N}$, a contradiction. Therefore (1) holds. \end{proof} \section{gr-Sylvester rank functions}\label{sec:grSylvesterrankfunctions} Throughout this section, let $\Gamma$ be a group. \medskip The aim of this section is to show that the different definitions of gr-Sylvester rank functions (with values in $\mathbb{N}$) given below are equivalent to one another and to the definition of a gr-prime matrix ideal, and thus they uniquely determine homomorphisms to graded division rings.
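To orient the reader, the following sketch (an illustration only: the verification of the axioms given below is routine and adapted from the ungraded case, so we do not claim it as part of the formal development) records the prototypical gr-Sylvester matrix rank function.

```latex
\begin{example}
% Sketch only: the routine verification of (MatRF1)--(MatRF4) for this
% example is omitted; it follows the ungraded case.
Let $\varphi\colon R\rightarrow K$ be a homomorphism of $\Gamma$-graded
rings with $K$ a $\Gamma$-graded division ring. For
$A\in\mathfrak{M}_\bullet(R)$, regard $A^\varphi$ as a homomorphism
between $\Gamma$-graded free $K$-modules and set
\[
  \rank(A)=\dim_K\bigl(\operatorname{im} A^{\varphi}\bigr),
\]
the rank of the image, which is a graded free $K$-module. Then $\rank$
takes values in $\mathbb{N}$, and a square matrix $A$ of size $n\times n$
satisfies $\rank(A)=n$ precisely when $A^\varphi$ is invertible over $K$;
the square matrices of smaller rank are exactly the members of the
gr-singular kernel of $\varphi$ appearing above.
\end{example}
```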
We will adapt the definitions, results and proofs of \cite[Sections~1 and 3]{Malcolmsondetermining} and \cite[p.~94--98]{Schofieldbook} to the graded situation. In defining gr-Sylvester rank functions, the main difference with the ungraded case stems from the fact that, in the graded case, the same matrix $A\in\mathfrak{M}_\bullet(R)$ can define more than one homomorphism between $\Gamma$-graded free modules. This is reflected in properties (MatRF4), (ModRF4) and (MapRF5) below. We begin this section by providing the different definitions of gr-Sylvester rank functions for a $\Gamma$-graded ring (with values in $\mathbb{N}$), together with some of their basic properties. \medskip Let $R$ be a $\Gamma$-graded ring. A \emph{gr-Sylvester matrix rank function} for $R$ is a map $\rank\colon \mathfrak{M}_{\bullet}(R)\rightarrow \mathbb{N}$ that satisfies the following conditions: \begin{enumerate}[({MatRF}1)] \item $\rank((1))=1$, where $(1)$ is the identity matrix of size $1\times1$. \item $\rank(AB)\leq \min\{\rank(A),\rank(B)\}$ for all compatible matrices $A,B\in\mathfrak{M}_\bullet(R)$. \item $\rank\left(\begin{smallmatrix} A & 0 \\ 0 & B\end{smallmatrix}\right)=\rank(A)+\rank(B)$ for all $A,B\in\mathfrak{M}_\bullet(R)$. \item $\rank\left(\begin{smallmatrix} A & C \\ 0 & B\end{smallmatrix}\right)\geq\rank(A)+\rank(B)$ for all $A,B,C\in\mathfrak{M}_\bullet(R)$ such that $A$ has distribution $(\overline{\alpha},\overline{\beta})$, $B$ has distribution $(\overline{\delta},\overline{\varepsilon})$ and $C$ has distribution $(\overline{\alpha}, \overline{\varepsilon})$ for some finite sequences $\overline{\alpha}$, $\overline{\beta}$, $\overline{\delta}$, $\overline{\varepsilon}$ of elements of $\Gamma$. \end{enumerate} Let $\rank_1,\rank_2$ be two gr-Sylvester matrix rank functions for $R$. We say that $\rank_1\leq \rank_2$ if $\rank_1(A)\leq \rank_2(A)$ for all $A\in\mathfrak{M}_\bullet(R)$.
In this way, a partial order is defined on the set of gr-Sylvester matrix rank functions for $R$. The following lemma describes some useful properties of gr-Sylvester matrix rank functions. \begin{lemma}\label{lem:grrankproperties} Let $R$ be a $\Gamma$-graded ring and $\rank\colon\mathfrak{M}_\bullet(R)\rightarrow\mathbb{N}$ be a gr-Sylvester matrix rank function. Let $A\in M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]$. The following assertions hold true. \begin{enumerate}[\rm(1)] \item $\rank(I_n)=n$ for all positive integers $n$. \item $\rank(Z)=0$ for all zero matrices of any size. \item The condition $\rank(A)\geq 0$ follows from {\rm(MatRF1)--(MatRF4)}. \item $\rank(A)=\rank(PA)=\rank(AQ)$ for all invertible matrices $P\in M_m(R)[\overline{\alpha'}][\overline{\alpha}]$ and $Q\in M_{n}(R)[\overline{\beta}][\overline{\beta'}]$. \item If $P\in\mathfrak{M}(R)$ is invertible of size $n\times n$, then $\rank(P)=n$. \item $\rank(A)\leq\min(m,n)$. \item $\rank(A)\leq \rank\left(\begin{smallmatrix} A\\ B \end{smallmatrix}\right)$ and $\rank(A)\leq \rank (A\ C)$ for all $B\in M_{m'\times n}(R)[\overline{\alpha'}][\overline{\beta}]$ and $C\in M_{m\times n'}(R) [\overline{\alpha}][\overline{\beta'}]$. \end{enumerate} \end{lemma} \begin{proof} (1) follows from (MatRF1) and (MatRF3). (2) We denote the zero matrix of size $m\times n$ by $0_{m\times n}$. From $$(1\ 0_{1\times m})\left(\begin{array}{ll} 1 & 0 \\ 0 & 0_{m\times n} \end{array}\right) \left(\begin{array}{l} 1\\ 0_{n\times 1} \end{array}\right)=(1),\quad \left(\begin{array}{l}1 \\ 0_{m\times 1} \end{array}\right)(1)(1\ 0_{1\times n}) = \left(\begin{array}{ll} 1 & 0 \\ 0 & 0_{m\times n} \end{array}\right),$$ applying (MatRF1)--(MatRF3), we obtain that $1+\rank(0_{m\times n})\geq 1$ and $1\geq 1+\rank(0_{m\times n})$, respectively. Hence $\rank(0_{m\times n})=0$. (3) Let $Z$ be a zero matrix of appropriate size.
By (2) and (MatRF2), $0=\rank(Z)=\rank(AZ)\leq \min\{\rank(A),\rank(Z)\}\leq \rank(A).$ (4) By (MatRF2), $$\rank(PA)\leq\min\{\rank(P),\rank(A)\}\leq \rank(A),$$ $$\rank(A)=\rank(P^{-1}PA)\leq \min\{\rank(P^{-1}),\rank(PA)\}\leq\rank(PA).$$ Hence $\rank(A)=\rank(PA)$. The other case is shown in the same way. (5) By (4), $\rank(I)=\rank(PI)=\rank(P)$. (6) Since $A$ is $m\times n$, using (1), we obtain $\rank(A)=\rank(I_mA)\leq \min\{\rank(I_m),\rank(A)\}\leq m$ and $\rank(A)=\rank(AI_n)\leq\min\{\rank(A),\rank(I_n)\}\leq n$. (7) follows from $$\rank(A)=\rank\left(\Big(I_m\ \ 0_{m\times m'}\Big) \left(\begin{array}{c} A\\ B\end{array}\right)\right) \leq \rank\left(\left(\begin{array}{c} A\\ B\end{array}\right)\right),$$ $$\rank(A)=\rank\left(\Big(A\ \ C\Big) \left(\begin{array}{c} I_n\\ 0_{n'\times n}\end{array}\right)\right) \leq \rank\left( \Big(A\ \ C \Big)\right).$$ \end{proof} \medskip Let $R$ be a $\Gamma$-graded ring. We will denote the forgetful functor from the category of finitely presented $\Gamma$-graded $R$-modules to the category of finitely presented $R$-modules by $\mathcal{F}$. A \emph{gr-Sylvester module rank function} for $R$ is a function $\di$ on the class of finitely presented $\Gamma$-graded (right) $R$-modules with values in $\mathbb{N}$ such that \begin{enumerate}[({ModRF}1)] \item $\di(R)=1$, where $R$ is considered as a (right) $R$-module in the natural way. \item $\di(M_1\oplus M_2)=\di(M_1)+\di(M_2)$. \item For any exact sequence $M_1\rightarrow M_2\rightarrow M_3\rightarrow 0$ of graded homomorphisms between finitely presented $\Gamma$-graded $R$-modules, $$\di(M_3)\leq \di(M_2)\leq \di(M_1)+\di(M_3).$$ \item Let $f\colon R^n(\overline{\beta})\rightarrow R^m(\overline{\alpha})$ and $f'\colon R^n(\overline{\beta'})\rightarrow R^m(\overline{\alpha'})$ be homomorphisms of $\Gamma$-graded $R$-modules. If $\mathcal{F}(f)=\mathcal{F}(f')$, then $\di(\coker f)=\di(\coker f')$.
\end{enumerate} Let $\di_1,\di_2$ be two gr-Sylvester module rank functions for $R$ with values in $\mathbb{N}$. We say that $\di_1\leq \di_2$ if $\di_1(M)\leq \di_2(M)$ for all finitely presented $\Gamma$-graded $R$-modules $M$. In this way, there is defined a partial order in the set of gr-Sylvester module rank functions for $R$. The following easy but important remarks are in order. \begin{lemma} Let $R$ be a $\Gamma$-graded ring and $\di$ be a gr-Sylvester module rank function. \begin{enumerate}[{\rm (1)}] \item $\di(0)=0$, where $0$ denotes the zero module. \item $\di(M)\geq 0$ for all finitely presented $\Gamma$-graded $R$-modules $M$; this follows from {\rm(ModRF1)--(ModRF4)}. \item If $M$ and $N$ are isomorphic as $\Gamma$-graded $R$-modules, then $\di(M)=\di(N)$. \item $\di(R(\theta))=1$ for all $\theta\in\Gamma$. \item In {\rm(ModRF4)}, the condition $\mathcal{F}(f)=\mathcal{F}(f')$ implies that the lengths of the sequences $\overline{\alpha}$ and $\overline{\alpha'}$ (respectively $\overline{\beta}$ and $\overline{\beta'}$) coincide. \item $\di(R^n(\overline{\delta}))=\di(R^n(\overline{\delta'}))$ for any finite sequences $\overline{\delta}$, $\overline{\delta'}$ of the same length. \end{enumerate} \end{lemma} \begin{proof} (1) follows from (ModRF2) and (ModRF3) applied to the exact sequence $0\rightarrow 0\oplus 0\oplus 0\rightarrow 0\rightarrow 0$. (2) follows by applying (ModRF3) to the exact sequence $M\rightarrow 0\rightarrow 0\rightarrow 0$. (3) Make $M_1=0$, $M_2=M$ and $M_3=N$ in (ModRF3). (4) The natural inclusions in the first component $R\rightarrow R\oplus R(\theta)$ and $R\rightarrow R\oplus R$ have the same underlying ungraded homomorphism, and their cokernels are $R(\theta)$ and $R$, respectively. Hence (ModRF4) implies that $\di(R(\theta))=\di(R)=1$. (5) Trivial. (6) holds true by (ModRF2) and (4). \end{proof} Let $R$ be a $\Gamma$-graded ring.
A \emph{gr-Sylvester map rank function} for $R$ is a function $\rho$ on the class of all homomorphisms of $\Gamma$-graded (right) $R$-modules between finitely generated $\Gamma$-graded projective $R$-modules with values in $\mathbb{N}$ such that \begin{enumerate}[({MapRF}1)] \item $\rho(1_R)=1$, where $1_R$ denotes the identity map on $R$. \item $\rho(gf)\leq \min\{\rho(f),\rho(g)\}$. \item $\rho\left(\begin{smallmatrix}f & 0 \\ 0 & g \end{smallmatrix}\right)=\rho(f)+\rho(g)$. \item $\rho\left(\begin{smallmatrix}f & h \\ 0 & g \end{smallmatrix}\right)\geq \rho(f)+\rho(g)$. \item Let $f\colon R^n(\overline{\beta})\rightarrow R^m(\overline{\alpha})$ and $f'\colon R^n(\overline{\beta'})\rightarrow R^m(\overline{\alpha'})$ be homomorphisms of $\Gamma$-graded $R$-modules. If $\mathcal{F}(f)=\mathcal{F}(f')$, then $\rho(f)=\rho(f')$. \end{enumerate} Let $\rho_1,\rho_2$ be two gr-Sylvester map rank functions for $R$. We say that $\rho_1\leq \rho_2$ if $\rho_1(f)\leq \rho_2(f)$ for all homomorphisms $f$ of $\Gamma$-graded modules between finitely generated $\Gamma$-graded projective modules. In this way, there is defined a partial order in the set of gr-Sylvester map rank functions for $R$. The following assertions can be proved very much as in Lemma~\ref{lem:grrankproperties}. \begin{lemma}\label{lem:grmapproperties} Let $R$ be a $\Gamma$-graded ring and $\rho$ be a gr-Sylvester map rank function. Let $f\colon P\rightarrow Q$ be a homomorphism of $\Gamma$-graded $R$-modules between the finitely generated $\Gamma$-graded projective modules $P$ and $Q$. The following assertions hold true. \begin{enumerate}[\rm(1)] \item $\rho(1_{R^n(\overline\theta)})=n$ where $1_{R^n(\overline\theta)}$ denotes the identity homomorphism of the $\Gamma$-graded free $R$-module ${R^n(\overline\theta)}$. \item $\rho(0)=0$ where $0$ denotes the zero homomorphism between any finitely generated $\Gamma$-graded projective $R$-modules.
\item The condition $\rho(f)\geq 0$ follows from {\rm(MapRF1)--(MapRF4)}. \item $\rho(f)=\rho(gf)=\rho(fh)$ for all isomorphisms of $\Gamma$-graded $R$-modules between finitely generated $\Gamma$-graded projective $R$-modules $g\colon Q\rightarrow Q'$, $h\colon P'\rightarrow P$. \item If $f$ is invertible, then $\rho(f)=\rho(1_P)=\rho(1_Q).$ \item $\rho(f)\leq\min(\rho(1_P),\rho(1_Q))$. \item $\rho(f)\leq \rho\left(\begin{smallmatrix} f\\ g \end{smallmatrix}\right)$ and $\rho(f)\leq \rho (f\ h)$ for all homomorphisms of $\Gamma$-graded $R$-modules $g\colon P\rightarrow Q'$ and $h\colon P'\rightarrow Q$ with $P',Q'$ being finitely generated $\Gamma$-graded projective $R$-modules. \end{enumerate} \end{lemma} \begin{proof} (1) Notice that the identity maps $1_{R(\theta)}\colon R(\theta)\rightarrow R(\theta)$ and $1_R\colon R\rightarrow R$ are such that $\mathcal{F}(1_R)=\mathcal{F}(1_{R(\theta)})$. Hence $\rho(1_{R(\theta)})=1$ for all $\theta\in\Gamma$. Now apply (MapRF3) to $1_{R^n(\overline{\theta})}$ for any $\overline{\theta}\in\Gamma^n$. (2) We denote by $0_{MN}$ the zero homomorphism $M\rightarrow N$ between the $\Gamma$-graded $R$-modules $M$ and $N$. From the equalities $$(1_R\ 0_{QR})\left(\begin{array}{ll} 1_R & 0_{PR} \\ 0_{RQ} & 0_{PQ} \end{array}\right) \left(\begin{array}{l} 1_R\\ 0_{RP} \end{array}\right)=(1_R),$$ $$\left(\begin{array}{l}1_R \\ 0_{RQ} \end{array}\right)(1_R)(1_R\ 0_{PR}) = \left(\begin{array}{ll} 1_R & 0_{PR} \\ 0_{RQ} & 0_{PQ} \end{array}\right),$$ we obtain that $1+\rho(0_{PQ})=\rho\left(\begin{smallmatrix} 1_R & 0_{PR} \\ 0_{RQ} & 0_{PQ} \end{smallmatrix}\right)\geq \rho(1_R)=1$ and $1=\rho(1_R)\geq \rho\left(\begin{smallmatrix} 1_R & 0_{PR} \\ 0_{RQ} & 0_{PQ} \end{smallmatrix}\right)=1+\rho(0_{PQ})$. Hence $\rho(0_{PQ})=0$. (3) By (2) and (MapRF2), $0=\rho(0_{PQ})=\rho(0_{QQ}f)\leq \min\{\rho(0_{QQ}),\rho(f)\}\leq \rho(f)$.
(4) By (MapRF2), $$\rho(gf)\leq\min\{\rho(g),\rho(f)\}\leq \rho(f),$$ $$\rho(f)=\rho(g^{-1}gf)\leq \min\{\rho(g^{-1}),\rho(gf)\}\leq\rho(gf).$$ Hence $\rho(f)=\rho(gf)$. The other case is shown in the same way. (5) By (4), $\rho(1_P)=\rho(f1_P)=\rho(f)$ and similarly $\rho(f)=\rho(1_Q)$. (6) Using (MapRF2), we obtain $\rho(f)=\rho(1_Qf)\leq \min\{\rho(1_Q),\rho(f)\}\leq \rho(1_Q)$ and $\rho(f)=\rho(f1_P)\leq\min\{\rho(f),\rho(1_P)\}\leq \rho(1_P)$. (7) follows from $$\rho(f)=\rho\left(\Big(1_Q\ \ 0_{Q'Q}\Big) \left(\begin{array}{c} f\\ g\end{array}\right)\right) \leq \rho\left(\left(\begin{array}{c} f\\ g\end{array}\right)\right),$$ $$\rho(f)=\rho\left(\Big(f\ \ h\Big) \left(\begin{array}{c} 1_P\\ 0_{PP'}\end{array}\right)\right) \leq \rho\left( \Big(f\ \ h \Big)\right).$$ \end{proof} \subsection{Equivalence between gr-prime matrix ideals and gr-Sylvester matrix rank functions} \label{subsec:gr-prime_gr-Sylvester_matrix} The proofs and arguments contained in this subsection are easy and natural adaptations of those in \cite[Section~3]{Malcolmsondetermining}. \medskip Let $D$ be a $\Gamma$-graded division ring. It is not difficult to show that $\mathfrak{M}_\bullet(D)\rightarrow \mathbb{N}$, $A\mapsto \drank(A)$, satisfies (MatRF1)--(MatRF4). Let $R$ be a $\Gamma$-graded ring and let $(K,\varphi)$ be a $\Gamma$-graded epic $R$-division ring. One can induce a gr-rank function $\rank_\varphi$ for $R$ by defining $\rank_\varphi(A)=\drank(A^\varphi)$ for all $A\in\mathfrak{M}_\bullet(R)$. Note that non-isomorphic $\Gamma$-graded epic $R$-division rings induce different gr-rank functions because the gr-singular kernels do not coincide, by Theorem~\ref{theo:primematrixequalsdivisionring}. The aim of this subsection is to show that there are no other gr-rank functions for $R$. \begin{lemma}\label{lem:grrankalternative} Let $R$ be a $\Gamma$-graded ring.
Suppose that $\rank\colon\mathfrak{M}_\bullet(R)\rightarrow \mathbb{N}$ is a gr-rank function and that $\rank(A)=n$ for some $A\in\mathfrak{M}_\bullet(R)$. The following assertions hold true. \begin{enumerate}[\rm(1)] \item If $A'$ is obtained by eliminating a column (a row) of $A$, then $$\rank(A)-1\leq \rank(A')\leq \rank(A).$$ \item If every matrix $A'$ obtained by eliminating one of the columns (rows) of $A$ satisfies $\rank(A')<\rank(A)$, then $A$ has exactly $n$ columns (rows). \item $n$ is the largest natural number such that there is a square submatrix $B$ of $A$ with $\rank(B)=\textrm{size of } B$. \end{enumerate} \end{lemma} \begin{proof} (1) Suppose that $A=(A'\ a)$, where $a$ is the last column of $A$. By Lemma~\ref{lem:grrankproperties}(7), $\rank(A')\leq\rank(A)$. Moreover $$\rank(A)=\rank(A'\ a)\leq\rank\left(\begin{array}{cc} A'&a\\ 0 & 1 \end{array}\right)= \rank\left(\begin{array}{cc} A'&0\\ 0 & 1 \end{array}\right)=\rank(A')+1,$$ where we have used Lemma~\ref{lem:grrankproperties}(4) in the second equality. When the column we eliminate is not the last one, the result follows by Lemma~\ref{lem:grrankproperties}(4). When $A'$ is obtained by eliminating some row, the result can be proved analogously. (2) We claim that any matrix $A'$ obtained by eliminating $m$ of the columns of $A$ satisfies $\rank(A')=n-m$. We prove the claim by induction on $m$. If $m=1$, the result follows from (1) and the hypothesis. Suppose that the claim holds true for $m\leq k-1$. Let $A'$ be the result of eliminating $k$ of the columns of $A$, and let $a_1,a_2$ be two different columns of $A$ from among those eliminated in obtaining $A'$.
By the induction hypothesis, $$\rank(A'\ a_1)=n-(k-1),\quad \rank(A'\ a_2)=n-(k-1),\quad \rank(A'\ a_1\ a_2)=n-(k-2).$$ Now \begin{eqnarray*} \rank(A')+\rank(A'\ a_1\ a_2) & \leq & \rank\begin{pmatrix} A'&0&a_1&0\\ 0&A'&a_1&a_2 \end{pmatrix} =\rank\begin{pmatrix} A'&a_1&0&0\\ 0&a_1&A'&a_2 \end{pmatrix}\\ &=& \rank\begin{pmatrix} A'&a_1&0&0\\ A'&a_1&A'&a_2 \end{pmatrix} = \rank\begin{pmatrix} A'&a_1&0&0\\ 0&0&A'&a_2 \end{pmatrix}\\ & = & \rank(A'\ a_1)+\rank(A'\ a_2), \end{eqnarray*} where we have used Lemma~\ref{lem:grrankproperties}(4) in all but the last equality. Thus $\rank(A')\leq n-k$. On the other hand, applying (1) several times, we get that $n-k\leq \rank(A')$, and the claim is proved. Since $\rank(A)\geq 0$ for all $A\in\mathfrak{M}_\bullet(R)$, the claim proves the result. When we eliminate rows instead of columns, the result follows analogously. (3) By (1), we can successively eliminate columns and rows of $A$ whose elimination does not decrease the rank; by (2), this process stops at a square submatrix $B$ of $A$ of size exactly $n$ with $\rank(B)=n$. On the other hand, by Lemma~\ref{lem:grrankproperties}(4) and (7), every square submatrix $C$ of $A$ satisfies $\rank(C)\leq\rank(A)=n$, so no square submatrix of size larger than $n$ can have rank equal to its size. \end{proof} Now we are ready to prove the main result of this subsection. \begin{theorem}\label{theo:grprimematrix_grSylvestermatrix} Let $R$ be a $\Gamma$-graded ring. There is an anti-isomorphism of partially ordered sets \begin{eqnarray*} \{\textrm{gr-prime matrix ideals of }R \}&\longrightarrow& \{\textrm{gr-Sylvester matrix rank functions for }R\} \\ \mathcal{P}&\longmapsto& \rank_{\mathcal{P}} \\ \mathcal{P}_{\rank{}} &\longmapsfrom& \rank \end{eqnarray*} defined as follows. \begin{enumerate}[\rm(a)] \item If $\mathcal{P}$ is a gr-prime matrix ideal of $R$ and $A\in\mathfrak{M}_\bullet(R)$, then $\rank_\mathcal{P}(A)$ is the size of the largest square submatrix of $A$ which is not in $\mathcal{P}$. Equivalently, if $(K,\varphi_\mathcal{P})$ is a $\Gamma$-graded epic $R$-division ring with singular kernel $\mathcal{P}$, then $\rank_\mathcal{P}(A)=\drank(A^{\varphi_\mathcal{P}})$.
\item Conversely, given a gr-Sylvester matrix rank function $\rank\colon \mathfrak{M}_\bullet(R)\rightarrow \mathbb{N}$, then the set $$\mathcal{P}_{\rank}=\{A\in\mathfrak{M}(R)\colon\rank(A)<\textrm{size of }A\}$$ is a gr-prime matrix ideal. \end{enumerate} \end{theorem} \begin{proof} Theorem~\ref{theo:primematrixequalsdivisionring} proves that the two ways of describing the correspondence $\mathcal{P}\mapsto \rank_{\mathcal{P}}{}$ are equivalent. By the comment at the beginning of Section~\ref{subsec:gr-prime_gr-Sylvester_matrix}, $\rank_{\mathcal{P}}{}$ is a gr-rank function for $R$. Moreover, if $\mathcal{P}\subseteq\mathcal{Q}$ are gr-prime matrix ideals, then $\rank_{\mathcal{P}{}}\geq \rank_{\mathcal{Q}{}}$ because every square submatrix which is not in $\mathcal{Q}$ is also not in $\mathcal{P}$. (b) Let now $\rank{}$ be a gr-Sylvester matrix rank function for $R$. Let $\mathcal{P}_{\rank{}}$ be defined as in (b) of the statement of the theorem. We have to show that $\mathcal{P}_{\rank{}}$ is a gr-prime matrix ideal. Let $A\in\mathfrak{M}(R)$ be an $n\times n$ matrix which is not gr-full. There exist $\overline{\alpha},\overline{\beta}\in \Gamma^n$ such that $A\in M_n(R)[\overline{\alpha}][\overline{\beta}]$, $\overline{\delta}\in \Gamma^{n-1}$ and matrices $B\in M_{n\times (n-1)}(R)[\overline{\alpha}][\overline{\delta}]$, $C\in M_{(n-1)\times n}(R)[\overline{\delta}][\overline{\beta}]$ such that $A=BC$. By Lemma~\ref{lem:grrankproperties}(6), $\rank(B)\leq n-1$ and $\rank(C)\leq n-1$. By (MatRF2), $\rank(A)=\rank(BC)\leq \min\{\rank(B),\rank(C)\}<n$. Thus $A\in\mathcal{P}_{\rank{}}$ and $\mathcal{P}_{\rank{}}$ satisfies (PM1). Let $A\in\mathcal{P}_{\rank{}}$, $B\in\mathfrak{M}(R)$. Since $\rank(A)<\textrm{size of }A$, then $\rank(A\oplus B)=\rank(A)+\rank(B)<\textrm{size of } A\oplus B$. Thus (PM3) follows. Let $A\in\mathfrak{M}(R)$ and suppose that $A\oplus 1\in\mathcal{P}_{\rank{}}$. This means that $\rank(A)+1=\rank(A\oplus 1)<1+\textrm{size of } A$.
Hence $\rank(A)<\textrm{size of }A$. Therefore $A\in\mathcal{P}_{\rank{}}$ and (PM5) is satisfied. By definition of gr-Sylvester matrix rank function, $(1)\notin\mathcal{P}_{\rank{}}$ and thus (PM4) holds. By Lemma~\ref{lem:grrankproperties}(4), (PM6) follows. It remains to show (PM2). Let $A,A'\in\mathcal{P}_{\rank{}}$ be such that $A\nabla A'$ exists with respect to the last column. Suppose that $A=(B\ c)$, $A'=(B\ c')$. Then $A\nabla A'=(B\ c+c')$. We claim that $\rank(A\nabla A')\leq \max\{\rank(A),\rank(A')\}$. This claim implies that $A\nabla A'\in\mathcal{P}_{\rank{}}$. Then the case of the determinantal sum with respect to any other column follows from Lemma~\ref{lem:grrankproperties}(4) and the claim. Now we prove the claim. \begin{eqnarray} \rank(A)+\rank(A')=\rank(B\ c)+\rank(B\ c') & = & \rank\begin{pmatrix} B & c & 0 & 0 \\ 0 & 0 & B & c' \end{pmatrix} = \rank\begin{pmatrix} B & c & 0 & 0 \\ -B & 0 & B & c' \end{pmatrix} \nonumber \\ & = & \rank\begin{pmatrix} B & c & 0 & 0 \\ 0 & c & B & c' \end{pmatrix}= \rank\begin{pmatrix} B & c & 0 & c \\ 0 & c & B & c+c' \end{pmatrix} \nonumber\\ &\geq & \rank(B) +\rank(c\ B\ c+c').\label{eq:grrank} \end{eqnarray} If $\rank(A)\leq\rank(B)$, then $\rank(A)=\rank(B)$ by Lemma~\ref{lem:grrankproperties}(7). By \eqref{eq:grrank}, $$\rank(A')\geq \rank(c\ B\ c+c')\geq \rank(B\ c+c')=\rank(A\nabla A'),$$ as desired. If $\rank(A)>\rank(B)$, then $$\rank(A)\geq \rank(B)+1=\rank\begin{pmatrix} B & 0 \\ 0 & 1 \end{pmatrix}=\rank \begin{pmatrix} B & c+c' \\ 0 & 1 \end{pmatrix}\geq \rank(B\ c+c')=\rank(A\nabla A').$$ Interchanging the roles of $A$ and $A'$, we get that if $\rank(A')\leq \rank(B)$, then $\rank(A)\geq \rank(A\nabla A')$ and that if $\rank(A')>\rank(B)$, then $\rank(A')\geq \rank(A\nabla A')$. Thus, the claim is proved. It remains to show that the maps $\mathcal{P}\mapsto\rank_{\mathcal{P}}{}$ and $\rank{}\mapsto \mathcal{P}_{\rank{}}$ are mutually inverse.
If $\mathcal{P}$ is a gr-prime matrix ideal, the gr-prime matrix ideal that corresponds to $\rank_\mathcal{P}{}$ is the set of matrices $A\in\mathfrak{M}(R)$ such that $\rank_\mathcal{P}(A)<\textrm{size of } A$. That is, the set of matrices $A\in\mathfrak{M}(R)$ whose largest square submatrix not in $\mathcal{P}$ has size less than the size of $A$. In other words, the matrices $A\in\mathcal{P}$. Therefore $\rank_\mathcal{P}{}\mapsto\mathcal{P}$. On the other hand, let now $\rank{}$ be a gr-rank function for $R$. Let $\rank_{\mathcal{P}_{\rank{}}}{}$ be the gr-rank function associated to $\mathcal{P}_{\rank{}}$. If $A\in\mathfrak{M}_\bullet(R)$, then $\rank_{\mathcal{P}_{\rank{}}}(A)$ equals the size of a largest square submatrix of $A$ which is not in $\mathcal{P}_{\rank{}}$. That is, the size of a largest square submatrix $B$ of $A$ such that $\rank(B)=\textrm{size of }B$. Hence we have to show that $\rank(A)=n$ if and only if $n$ is the size of a largest square submatrix $B$ of $A$ such that $\rank(B)=n$. But this now follows from Lemma~\ref{lem:grrankalternative}(3). \end{proof} \subsection{Equivalence between gr-Sylvester rank functions} The following result can be proved in exactly the same way as in \cite[Lemma~2]{Malcolmsondetermining}, where the ungraded case is shown. \begin{lemma}\label{lem:gradedSchanuel} Let $R$ be a $\Gamma$-graded ring. If $0\rightarrow K\rightarrow Q\rightarrow M\rightarrow 0$ and $0\rightarrow K'\rightarrow Q'\rightarrow M\rightarrow 0$ are exact sequences of $\Gamma$-graded $R$-modules with $Q$ and $Q'$ $\Gamma$-graded projective $R$-modules and with $K\subseteq Q$ and $K'\subseteq Q'$, then there is an automorphism of $\Gamma$-graded $R$-modules of $Q\oplus Q'$ which maps $K\oplus Q'$ onto $Q\oplus K'$. \qed \end{lemma} The next result was first stated in \cite[p.~97]{Schofieldbook} for the ungraded case. Our proof follows that of \cite[Theorem~4]{Malcolmsondetermining}.
We remark that the fact that the rank functions take values in $\mathbb{N}$ is not used in the proof. \begin{theorem}\label{theo:module_map_equivalent} Let $R$ be a $\Gamma$-graded ring. There is an anti-isomorphism of partially ordered sets \begin{eqnarray*} \left\{\begin{array}{c} \textrm{gr-Sylvester module rank } \\ \textrm{functions for }R \end{array}\right\}&\longrightarrow & \left\{\begin{array}{c} \textrm{gr-Sylvester map rank } \\ \textrm{functions for }R \end{array}\right\} \\ \di & \longmapsto & \rho_{\di} \\ \di_\rho & \longmapsfrom & \rho \end{eqnarray*} defined as follows. \begin{enumerate}[\rm(a)] \item If $\di$ is a gr-Sylvester module rank function for $R$ and $f\colon P\rightarrow Q$ is a homomorphism of $\Gamma$-graded $R$-modules with $P,Q$ finitely generated $\Gamma$-graded projective $R$-modules, then $\rho_{\di}(f)=\di(Q)-\di(\coker f)$. \item Conversely, let $\rho$ be a gr-Sylvester map rank function and suppose that \linebreak $f\colon P\rightarrow Q$ is a homomorphism of $\Gamma$-graded $R$-modules with $P,Q$ finitely generated $\Gamma$-graded projective $R$-modules such that $\coker f=M$. Then \linebreak $\di_\rho(M)=\rho(1_Q)-\rho(f)$. \end{enumerate} \end{theorem} \begin{proof} First we show that the correspondence is an anti-isomorphism of partially ordered sets. Let $\rho_1\leq\rho_2$ be two gr-Sylvester map rank functions. Let $M$ be a finitely presented $\Gamma$-graded $R$-module and suppose that $R^n(\overline{\beta})\stackrel{f}{\rightarrow} R^m(\overline{\alpha})\rightarrow M\rightarrow 0$ is a presentation of $M$ as $\Gamma$-graded $R$-module. Then $\rho_1(1_{R^m(\overline{\alpha})})= \rho_2(1_{R^m(\overline{\alpha})})=m$ and $\rho_1(f)\leq \rho_2(f)$. Hence $\di_{\rho_1}(M)=\rho_1(1_{R^m(\overline{\alpha})})-\rho_1(f)\geq \rho_2(1_{R^m(\overline{\alpha})})-\rho_2(f)=\di_{\rho_2}(M)$. Secondly, we show that the correspondences are mutually inverse.
Let $\di$ be a gr-Sylvester module rank function. Let $M$ be a finitely presented $\Gamma$-graded $R$-module. Then, given a graded presentation of $M$, $P\stackrel{f}{\rightarrow} Q\rightarrow M\rightarrow 0$, $\di_{\rho_{\di}}(M)=\rho_{\di}(1_Q)-\rho_{\di}(f)=\di(Q)-0-(\di(Q)-\di(M))=\di(M)$. Conversely, let $\rho$ be a gr-Sylvester map rank function. Let $f\colon P\rightarrow Q$ be a homomorphism of $\Gamma$-graded $R$-modules with $P,Q$ finitely generated $\Gamma$-graded projective $R$-modules. Then $\rho_{\di_{\rho}}(f)=\di_{\rho}(Q)-\di_{\rho}(\coker f)=\rho(1_Q)-0-(\rho(1_Q)-\rho(f))=\rho(f)$, as desired. (a) Suppose that $\di$ is a gr-Sylvester module rank function. For a homomorphism of $\Gamma$-graded $R$-modules $f\colon P\rightarrow Q$ between finitely generated $\Gamma$-graded projective $R$-modules define $\rho_{\di}(f)=\di(Q)-\di(\coker f)$. Notice that it is well defined, since $\coker f$ is a finitely presented $\Gamma$-graded $R$-module. We must show that $\rho_{\di}$ satisfies (MapRF1)--(MapRF5). Clearly $\rho_{\di}(1_R)=\di(R)-\di(0)=\di(R)=1$. Thus (MapRF1) is satisfied. Let $g\colon Q\rightarrow T$ be a graded homomorphism between finitely generated $\Gamma$-graded projective $R$-modules. By definition, $\rho_{\di}(gf)=\di(T)-\di(\coker(gf))$. From the natural exact sequence of graded modules $T/\im(gf)\rightarrow T/\im g\rightarrow 0\rightarrow 0$, by (ModRF3), we get that $\di(\coker g)\leq \di(\coker(gf))$. Thus $\rho_{\di}(gf)\leq\di(T)-\di(\coker g)=\rho_{\di}(g)$. Let $\pi_1\colon Q\rightarrow Q/\im f$ and $\pi_2\colon T\rightarrow T/\im (gf)$ be the natural homomorphisms (of $\Gamma$-graded $R$-modules), and consider $\overline{g}\colon Q/\im f\rightarrow T/\im(gf)$ induced from $g\colon Q\rightarrow T$.
It is not difficult to show that the following sequence of homomorphisms of $\Gamma$-graded $R$-modules is exact $$Q\stackrel{\left(\begin{smallmatrix} g\\ \pi_1\end{smallmatrix}\right)}{\longrightarrow}T\oplus Q/\im f\stackrel{(-\pi_2\ \overline{g})}{\longrightarrow} T/\im(gf)\longrightarrow 0.$$ Hence, by (ModRF3), $\di(\coker(gf))\leq \di(T)+\di(\coker f)\leq \di(\coker (gf))+\di(Q)$. Now, subtracting $\di(\coker(gf))+\di(\coker f)$ on both sides of the second inequality we obtain $\rho_{\di}(gf)=\di(T)-\di(\coker(gf))\leq \di(Q)-\di(\coker f)=\rho_{\di}(f)$. Thus (MapRF2) is satisfied. Let $f\colon P\rightarrow Q$, $f'\colon P'\rightarrow Q'$ be homomorphisms of $\Gamma$-graded $R$-modules where $P,Q,P',Q'$ are finitely generated $\Gamma$-graded projective $R$-modules. Now \begin{eqnarray*} \rho_{\di}\left(\left(\begin{smallmatrix}f & 0 \\ 0 & f' \end{smallmatrix}\right)\right) & = & \di(Q\oplus Q')-\di\left(\coker \left(\begin{smallmatrix}f & 0 \\ 0 & f' \end{smallmatrix}\right)\right) \\ & = & \di(Q)+\di(Q')-\di(\coker f\oplus \coker f')\\ & = & \di(Q)-\di(\coker f)+\di(Q')-\di(\coker f') \\ & = & \rho_{\di}(f)+\rho_{\di}(f'). \end{eqnarray*} Thus (MapRF3) is satisfied. Let now $g\colon P'\rightarrow Q'$ and $h\colon P'\rightarrow Q$ be homomorphisms of $\Gamma$-graded $R$-modules, and consider the homomorphism of $\Gamma$-graded $R$-modules $\left(\begin{smallmatrix} f & h \\ 0 & g \end{smallmatrix}\right)\colon P\oplus P'\rightarrow Q\oplus Q'$. Let $\overline{\iota}\colon Q/\im f\rightarrow \frac{Q\oplus Q'}{\im \left(\begin{smallmatrix} f & h \\ 0 & g \end{smallmatrix}\right)}$ be induced from the natural inclusion $Q\rightarrow Q\oplus Q'$. Let $\overline{\pi}\colon \frac{Q\oplus Q'}{\im \left(\begin{smallmatrix} f & h \\ 0 & g \end{smallmatrix}\right)} \rightarrow Q'/\im g$ be induced from the natural projection $Q\oplus Q'\rightarrow Q'$.
It is not difficult to show that $$\frac{Q}{\im f}\longrightarrow \frac{Q\oplus Q'}{\im\left(\begin{smallmatrix} f & h \\ 0 & g \end{smallmatrix}\right) }\longrightarrow \frac{Q'}{\im g}\longrightarrow 0$$ is an exact sequence of finitely presented $\Gamma$-graded $R$-modules. From (ModRF3), one obtains that $\di(\coker \left(\begin{smallmatrix} f & h \\ 0 & g \end{smallmatrix}\right))\leq \di(\coker f)+\di(\coker g)$. Therefore \begin{eqnarray*} \rho_{\di}\left( \left(\begin{smallmatrix} f & h \\ 0 & g \end{smallmatrix}\right)\right) & = & \di(Q\oplus Q')-\di\left(\coker \left(\begin{smallmatrix} f & h \\ 0 & g \end{smallmatrix}\right)\right) \\ & \geq & \di(Q)+\di(Q')-\di(\coker f)-\di(\coker g) \\ & = & \rho_{\di}(f)+\rho_{\di}(g), \end{eqnarray*} and (MapRF4) is satisfied. Let now $f\colon R^n(\overline{\beta})\rightarrow R^m(\overline{\alpha})$ and $f'\colon R^n(\overline{\beta'})\rightarrow R^m(\overline{\alpha'})$ be homomorphisms of $\Gamma$-graded $R$-modules such that $\mathcal{F}(f)=\mathcal{F}(f')$. By (ModRF4), $\di(\coker f)=\di(\coker f')$ and $\di(R^m(\overline{\alpha}))=\di(R^m(\overline{\alpha'}))$. Hence $\rho_{\di}(f)=\rho_{\di}(f')$ and (MapRF5) is satisfied. Therefore $\rho_{\di}$ is a gr-Sylvester map rank function. \medskip (b) Suppose that $\rho$ is a gr-Sylvester map rank function. If $M$ is a finitely presented $\Gamma$-graded $R$-module and $f\colon P\rightarrow Q$ is a homomorphism of $\Gamma$-graded $R$-modules with $P,Q$ finitely generated $\Gamma$-graded projective $R$-modules such that $\coker f=M$, then we define $\di_\rho(M)=\rho(1_Q)-\rho(f)$. We must show that $\di_\rho$ is well defined and satisfies (ModRF1)--(ModRF4). We begin by showing that $\di_\rho$ is well defined. Suppose that $P\stackrel{f}{\rightarrow}Q\rightarrow M \rightarrow 0$ and $P'\stackrel{f'}{\rightarrow}Q'\rightarrow M \rightarrow 0$ are two graded presentations with $P,P',Q,Q'$ finitely generated $\Gamma$-graded projective $R$-modules.
By Lemma~\ref{lem:gradedSchanuel}, there exists an automorphism $h$ of $\Gamma$-graded $R$-modules of $Q\oplus Q'$ which maps $\im f\oplus Q'$ onto $Q\oplus \im f'$. Since $Q\oplus Q'$ is a $\Gamma$-graded projective $R$-module, we obtain the following diagram of homomorphisms of $\Gamma$-graded $R$-modules $$\xymatrix{Q\oplus Q'\ar[r]\ar@<-1ex>[d]_u\ar@/^1pc/[rr]^{\left(\begin{smallmatrix} f & 0 \\ 0 & 1_{Q'} \end{smallmatrix}\right)} & \im f \oplus Q'\ar@{^{(}->}[r]\ar@<-1ex>[d]_h & Q\oplus Q'\ar@<-1ex>[d]_h \\ Q\oplus Q'\ar[r]\ar@<-1ex>[u]_{u'}\ar@/_1pc/[rr]_{\left(\begin{smallmatrix} 1_Q & 0 \\ 0 & f' \end{smallmatrix}\right)} & Q \oplus \im f'\ar@{^{(}->}[r]\ar@<-1ex>[u]_{h^{-1}} & Q\oplus Q'\ar@<-1ex>[u]_{h^{-1}} }$$ Hence we obtain $h\left(\begin{smallmatrix}f & 0 \\ 0 & 1_{Q'} \end{smallmatrix}\right) =\left(\begin{smallmatrix}1_Q & 0 \\ 0 & f' \end{smallmatrix}\right)u$ and $h^{-1}\left(\begin{smallmatrix}1_Q & 0 \\ 0 & f' \end{smallmatrix}\right)=\left(\begin{smallmatrix}f & 0 \\ 0 & 1_{Q'} \end{smallmatrix}\right) u'$ and, by (MapRF2), $$\rho(f)+\rho(1_{Q'})=\rho\left(h\left(\begin{smallmatrix}f & 0 \\ 0 & 1_{Q'} \end{smallmatrix}\right)\right)\leq \rho\left(\left(\begin{smallmatrix}1_Q & 0 \\ 0 & f' \end{smallmatrix}\right)\right) = \rho(f')+\rho(1_Q),$$ $$\rho(f')+\rho(1_Q)=\rho\left(h^{-1}\left(\begin{smallmatrix}1_Q & 0 \\ 0 & f' \end{smallmatrix}\right) \right)\leq \rho\left(\left(\begin{smallmatrix}f & 0 \\ 0 & 1_{Q'} \end{smallmatrix}\right)\right)= \rho(f)+\rho(1_{Q'}).$$ This implies $\rho(1_{Q'})-\rho(f')\leq \rho(1_Q)-\rho(f)$ and $\rho(1_Q)-\rho(f)\leq \rho(1_{Q'})-\rho(f')$. Therefore $\di_{\rho}(M)=\rho(1_Q)-\rho(f)$ is well defined. Consider the graded presentation $0\stackrel{0}{\rightarrow} R\stackrel{1_R}{\rightarrow} R\rightarrow 0$ of $R$. By definition, $\di_\rho(R)=\rho(1_R)-\rho(0)=1-0=1$. Thus (ModRF1) is satisfied. Let $M_1,M_2$ be finitely presented $\Gamma$-graded $R$-modules.
Let $P_1\stackrel{f_1}{\rightarrow} Q_1\rightarrow M_1\rightarrow 0$ and $P_2\stackrel{f_2}{\rightarrow} Q_2\rightarrow M_2\rightarrow 0$ be exact sequences of homomorphisms of $\Gamma$-graded $R$-modules with $P_1,P_2,Q_1,Q_2$ finitely generated $\Gamma$-graded projective $R$-modules. Then $P_1\oplus P_2\stackrel{\left(\begin{smallmatrix}f_1 & 0 \\ 0 & f_2 \end{smallmatrix}\right)}{\longrightarrow} Q_1\oplus Q_2\longrightarrow M_1\oplus M_2\rightarrow 0$ is a $\Gamma$-graded presentation of $M_1\oplus M_2$. By definition and (MapRF3), $\di_\rho(M_1\oplus M_2)=\rho(1_{Q_1\oplus Q_2})-\rho\left(\left(\begin{smallmatrix}f_1 & 0 \\ 0 & f_2 \end{smallmatrix}\right)\right)=\rho(1_{Q_1})-\rho(f_1)+\rho(1_{Q_2})-\rho(f_2)=\di_{\rho}(M_1)+\di_\rho(M_2)$. Hence (ModRF2) is satisfied. Let $M_1\stackrel{u}{\rightarrow}M_2\stackrel{v}{\rightarrow}M_3\rightarrow 0$ be an exact sequence of homomorphisms of $\Gamma$-graded $R$-modules with $M_1,M_2,M_3$ finitely presented $\Gamma$-graded $R$-modules. Let $P_1\stackrel{f_1}{\rightarrow}Q_1\stackrel{g_1}{\rightarrow}M_1\rightarrow 0$ and $P_3\stackrel{f_3}{\rightarrow}Q_3\stackrel{g_3}{\rightarrow}M_3\rightarrow 0$ be exact sequences of homomorphisms of $\Gamma$-graded $R$-modules with $P_1,Q_1,P_3,Q_3$ finitely generated $\Gamma$-graded projective $R$-modules. 
By diagram chasing, and using that $Q_3$ is projective to lift $g_3$ along $v$, it is easy to obtain the following commutative diagram, with exact rows and columns, of homomorphisms of $\Gamma$-graded $R$-modules, where $\iota$ and $\pi$ are the natural inclusion and projection, respectively: $$ \xymatrix{ & & 0\ar[d] & 0\ar[d] & \\ & &\ker g\ar[r]^{\pi_{|\ker g}}\ar[d] & \ker g_3\ar[r]\ar[d] & 0 \\ 0\ar[r] & Q_1\ar[r]^\iota\ar[d]_{g_1} & Q_1\oplus Q_3\ar[r]^\pi\ar[d]^g & Q_3 \ar[r]\ar[d]^{g_3} & 0 \\ & M_1\ar[d]\ar[r]^u & M_2\ar[d]\ar[r]^v & M_3\ar[d]\ar[r] & 0 \\ & 0 & 0 & 0 & }$$ Since $\ker g$ is a finitely generated $\Gamma$-graded $R$-module, there exist a finitely generated $\Gamma$-graded projective $R$-module $P$ and a surjective homomorphism of $\Gamma$-graded $R$-modules $f\colon P\rightarrow\ker g$. In this way, we obtain a commutative diagram, with exact rows and columns, of homomorphisms of $\Gamma$-graded $R$-modules $$\xymatrix{ & & P\ar[d]^f\ar@{=}[r] & P\ar[d]^{\pi f=f_3'} & \\ 0\ar[r] & Q_1\ar[d]^{g_1}\ar[r]^\iota & Q_1\oplus Q_3\ar[d]^g\ar[r]^\pi & Q_3\ar[r]\ar[d]^{g_3} & 0 \\ & M_1\ar[r]^u\ar[d] & M_2\ar[d]\ar[r]^v & M_3\ar[r]\ar[d] & 0 \\ & 0 & 0 & 0 & } $$ Note that $\iota f_1(P_1)\subseteq \ker g$ and that $f\colon P\rightarrow Q_1\oplus Q_3$ is of the form $\left(\begin{smallmatrix}\lambda \\ f_3' \end{smallmatrix}\right)$ for some homomorphism of $\Gamma$-graded $R$-modules $\lambda\colon P\rightarrow Q_1$.
Thus we can modify the foregoing diagram to obtain $$\xymatrix{ & & P_1\oplus P\ar[d]^{\scriptsize\left(\begin{smallmatrix} f_1 & \lambda \\ 0 & f_3' \end{smallmatrix}\right)}\ar[r]^\pi & P\ar[d]^{f_3'} & \\ 0\ar[r] & Q_1\ar[d]^{g_1}\ar[r]^\iota & Q_1\oplus Q_3\ar[d]^g\ar[r]^\pi & Q_3\ar[r]\ar[d]^{g_3} & 0 \\ & M_1\ar[r]^u\ar[d] & M_2\ar[d]\ar[r]^v & M_3\ar[r]\ar[d] & 0 \\ & 0 & 0 & 0 & } $$ Then, by (MapRF4), \begin{eqnarray*} \di_{\rho}(M_2) & = & \rho(1_{Q_1\oplus Q_3})-\rho\left(\left(\begin{smallmatrix} f_1 & \lambda \\ 0 & f_3' \end{smallmatrix}\right)\right) \\ & \leq & \rho(1_{Q_1})+\rho(1_{Q_3}) -\rho(f_1)-\rho(f_3') \\ & = & \di_{\rho}(M_1)+\di_{\rho}(M_3). \end{eqnarray*} Moreover, \begin{eqnarray*} \di_{\rho}(M_2) & = & \rho(1_{Q_1\oplus Q_3})-\rho\left(\left(\begin{smallmatrix} f_1 & \lambda \\ 0 & f_3' \end{smallmatrix}\right)\right) \\ & \geq & \rho(1_{Q_1}) + \rho(1_{Q_3})-\rho(f'_3)-\rho((f_1\ \lambda)) \\ & = & \di_{\rho}(M_3)+\rho(1_{Q_1})-\rho((f_1\ \lambda)) \\ & \geq & \di_{\rho}(M_3), \end{eqnarray*} where we have used the fact that $\left(\begin{smallmatrix} f_1 & \lambda \\ 0 & f_3' \end{smallmatrix}\right)=\left(\begin{smallmatrix} f_1 & \lambda & 0 \\ 0& 0 & f_3' \end{smallmatrix}\right)\left(\begin{smallmatrix} 1_{P_1} & 0 \\ 0 & 1_P \\ 0 & 1_P \end{smallmatrix}\right)$, and properties (MapRF2), (MapRF3) on the first inequality, and Lemma~\ref{lem:grmapproperties}(6) on the second inequality. Therefore (ModRF3) is satisfied. (ModRF4) follows easily. Indeed, let $f\colon R^n(\overline{\beta})\rightarrow R^m(\overline{\alpha})$, $f'\colon R^n(\overline{\beta'})\rightarrow R^m(\overline{\alpha'})$ be homomorphisms of $\Gamma$-graded $R$-modules such that $\mathcal{F}(f)=\mathcal{F}(f')$. By (MapRF5), $\rho(f)=\rho(f')$ and therefore $\di_{\rho}(\coker f)=\rho(1_{R^m(\overline{\alpha})})-\rho(f)= \rho(1_{R^m(\overline{\alpha'})}) -\rho(f')=\di_{\rho}(\coker f')$. 
\end{proof} The next result was given in \cite[Theorem~4]{Malcolmsondetermining} in the ungraded context. \begin{theorem}\label{theo:module_matrix_equivalent} Let $R$ be a $\Gamma$-graded ring. There is an anti-isomorphism of partially ordered sets \begin{eqnarray*} \left\{\begin{array}{c} \textrm{gr-Sylvester matrix rank } \\ \textrm{functions for }R \end{array}\right\}&\longrightarrow & \left\{\begin{array}{c} \textrm{gr-Sylvester module rank } \\ \textrm{functions for }R \end{array}\right\} \\ \rank & \longmapsto & \di_{\rank} \\ \rank_{\di} & \longmapsfrom & \di \end{eqnarray*} defined as follows. \begin{enumerate}[\rm(a)] \item If $\rank$ is a gr-Sylvester matrix rank function for $R$ and $M$ is a finitely presented $\Gamma$-graded $R$-module with presentation $R^n(\overline{\beta})\stackrel{A}{\rightarrow} R^m(\overline{\alpha})\rightarrow M\rightarrow 0$, where $A\in M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]$, we define $\di_{\rank}(M)=m-\rank(A)$. \item Conversely, let $\di$ be a gr-Sylvester module rank function for $R$. If $A\in M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]$, we consider $A$ as a homomorphism of $\Gamma$-graded $R$-modules $R^n(\overline{\beta})\rightarrow R^m(\overline{\alpha})$ and define $\rank_{\di}(A)=m-\di(R^m(\overline{\alpha})/A(R^n(\overline{\beta})))$. \end{enumerate} \end{theorem} \begin{proof} First we show that the correspondence is an anti-isomorphism of partially ordered sets. Let $\rank_1\leq\rank_2$ be two gr-Sylvester matrix rank functions. Let $M$ be a finitely presented $\Gamma$-graded $R$-module and suppose that $R^n(\overline{\beta})\stackrel{A}{\rightarrow} R^m(\overline{\alpha})\rightarrow M\rightarrow 0$ is a presentation of $M$ as a $\Gamma$-graded $R$-module. Then $\rank_1(1_{R^m(\overline{\alpha})})= \rank_2(1_{R^m(\overline{\alpha})})=m$ and $\rank_1(A)\leq \rank_2(A)$.
Hence $\di_{\rank_1}(M)=\rank_1(1_{R^m(\overline{\alpha})})-\rank_1(A)\geq \rank_2(1_{R^m(\overline{\alpha})})-\rank_2(A)=\di_{\rank_2}(M)$. Secondly, we show that the correspondences are mutually inverse. Let $\di$ be a gr-Sylvester module rank function. Let $M$ be a finitely presented $\Gamma$-graded $R$-module. Then, given a graded presentation of $M$, $R^n(\overline{\beta})\stackrel{A}{\rightarrow} R^m(\overline{\alpha})\rightarrow M\rightarrow 0$, $\di_{\rank_{\di}}(M)=\rank_{\di}(1_{R^m(\overline{\alpha})})-\rank_{\di}(A)=\di(R^m(\overline{\alpha}))-0-(\di(R^m(\overline{\alpha}))-\di(M))=\di(M)$. Conversely, let $\rank$ be a gr-Sylvester matrix rank function. Let $A\in M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]$. Then $\rank_{\di_{\rank}}(A)=\di_{\rank}(R^m({\overline{\alpha}}))-\di_{\rank}(\coker A)=\rank(1_{R^m(\overline{\alpha})})-0-(\rank(1_{R^m(\overline{\alpha})})-\rank(A))=\rank(A)$, as desired. (b) Suppose that $\di$ is a gr-Sylvester module rank function for $R$. Let $A\in M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]$ and consider $A$ as a homomorphism of $\Gamma$-graded $R$-modules $R^n(\overline{\beta})\rightarrow R^m(\overline{\alpha})$. Define $\rank_{\di}(A)=\di(R^m(\overline{\alpha}))-\di(\coker A)=m-\di(R^m(\overline{\alpha})/A(R^n(\overline{\beta}))).$ The fact that $\rank_{\di}$ is well defined follows from (ModRF4). That is, the matrix $A$ may define different homomorphisms of $\Gamma$-graded modules, but the value of $\rank_{\di}(A)$ is the same. The proof that $\rank_{\di}$ satisfies (MatRF1)--(MatRF4) follows in the same way as the proof that $\rho_{\di}$ satisfies (MapRF1)--(MapRF5) in Theorem~\ref{theo:module_map_equivalent}. (a) Suppose now that $\rank$ is a gr-Sylvester matrix rank function for $R$.
Let $M$ be a finitely presented $\Gamma$-graded $R$-module with presentation $R^n(\overline{\beta})\stackrel{A}{\rightarrow} R^m(\overline{\alpha})\rightarrow M\rightarrow 0$, where $A\in M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]$, and define $\di_{\rank}(M)=m-\rank(A)$. One can show that $\di_{\rank}$ is well defined and satisfies (ModRF1)--(ModRF4) in the same way that one proves that $\di_{\rho}$ is well defined and satisfies (ModRF1)--(ModRF4). There is another way to prove that $\di_{\rank}$ is well defined and satisfies (ModRF1)--(ModRF4). Let $(K,\varphi)$ be the $\Gamma$-graded epic $R$-division ring corresponding to $\rank$ via Theorem~\ref{theo:grprimematrix_grSylvestermatrix}. Then, for a finitely presented $\Gamma$-graded $R$-module $M$, the $K$-module $M\otimes_R K$ is a $\Gamma$-graded free $K$-module. One can define $\di(M)=\dim_K(M\otimes_R K)$. It is not difficult to show that $\di$ satisfies (ModRF1)--(ModRF4). Now let $M$ be a finitely presented $\Gamma$-graded $R$-module with presentation $R^n(\overline{\beta})\stackrel{A}{\rightarrow} R^m(\overline{\alpha})\rightarrow M\rightarrow 0$, where $A\in M_{m\times n}(R)[\overline{\alpha}][\overline{\beta}]$. Then, by Theorem~\ref{theo:grprimematrix_grSylvestermatrix}, $\dim_K(M\otimes_R K)$ equals $m$ minus the number of columns of $A^\varphi$ which are right linearly independent over $K$, and that is exactly $m-\rank(A)=\di_{\rank}(M)$. \end{proof} It is worth noting the following corollary, which is simply a rewriting of parts of Theorems~\ref{theo:specialization}, \ref{theo:module_map_equivalent}, \ref{theo:module_matrix_equivalent} and Corollary~\ref{coro:specialization}. \begin{corollary} Let $R$ be a $\Gamma$-graded ring.
Suppose that $(K_1,\varphi_1)$ and $(K_2,\varphi_2)$ are $\Gamma$-graded epic $R$-division rings with corresponding gr-prime matrix ideals $\mathcal{P}_1$, $\mathcal{P}_2$, respectively, and gr-Sylvester rank functions $\rank_1$, $\rank_2$, $\di_1$, $\di_2$, $\rho_1$, $\rho_2$, respectively. The following assertions are equivalent \begin{enumerate}[\rm(1)] \item There exists a gr-specialization from $(K_1,\varphi_1)$ to $(K_2, \varphi_2)$. \item $\mathcal{P}_1\subseteq \mathcal{P}_2$. \item $\rank_1\geq \rank_2$. \item $\di_1\leq \di_2$. \item $\rho_1\geq \rho_2$. \end{enumerate} \end{corollary} \section{gr-prime spectrum}\label{sec:grprimespectrum} \emph{Throughout this section, let $\Gamma$ be a group and $\Omega\subseteq \Omega'$ be normal subgroups of $\Gamma$}. \medskip Let $R$ be a $\Gamma$-graded ring. It can be considered as a $\Gamma/\Omega$-graded ring too. Now we introduce some notation in order to clarify which structure of graded object is being considered. We will denote by $\mathfrak{M}^\Gamma(R)$, $\mathfrak{M}^\Gamma_\bullet(R)$ and by $\mathfrak{M}^{\Gamma/\Omega}(R)$, $\mathfrak{M}^{\Gamma/\Omega}_\bullet(R)$ the corresponding sets of matrices. Notice that $\mathfrak{M}^\Gamma(R)\subseteq \mathfrak{M}^{\Gamma/\Omega}(R)$ and $\mathfrak{M}^\Gamma_\bullet(R)\subseteq \mathfrak{M}^{\Gamma/\Omega}_\bullet(R)$. We denote by $\Spec_\Gamma(R)$ the set of all $\Gamma$-gr-prime matrix ideals and by $\Spec_{\Gamma/\Omega}(R)$ the set of all $\Gamma/\Omega$-gr-prime matrix ideals. If $\Omega=\Gamma$, we will write $\Spec(R)$ instead of $\Spec_{\Gamma/\Gamma}(R)$. Note that $\Spec(R)$ is the usual set of prime matrix ideals. It follows directly from the definition that if $\mathcal{P}$ is a $\Gamma/\Omega$-gr-prime matrix ideal, then $\mathcal{P}\cap \mathfrak{M}^\Gamma(R)$ is a $\Gamma$-gr-prime matrix ideal. 
Hence, there exists a map $$\Spec_{\Gamma/\Omega}(R)\rightarrow \Spec_{\Gamma}(R),\quad \mathcal{P}\mapsto \mathcal{P}\cap\mathfrak{M}^\Gamma(R).$$ Suppose now that $\rank^{\Gamma/\Omega}\colon \mathfrak{M}_\bullet^{\Gamma/\Omega}(R)\rightarrow \mathbb{N}$ is a $\Gamma/\Omega$-gr-Sylvester matrix rank function. It follows from the definition that the restriction of $\rank^{\Gamma/\Omega}$ to $\mathfrak{M}_\bullet^\Gamma(R)$ induces a $\Gamma$-gr-Sylvester matrix rank function $\rank^\Gamma\colon \mathfrak{M}_\bullet^\Gamma(R)\rightarrow \mathbb{N}$. In this way, there is a function \begin{align*} \left\{\begin{array}{c} \Gamma/\Omega\textrm{-gr-Sylvester matrix} \\ \textrm{rank functions for }R \end{array}\right\} & & \longrightarrow & & \left\{\begin{array}{c} \Gamma\textrm{-gr-Sylvester matrix} \\ \textrm{rank functions for }R \end{array}\right\}, \quad \rank^{\Gamma/\Omega}\mapsto \rank^\Gamma. \end{align*} Similarly, since any $\Gamma$-graded (finitely generated, finitely presented, projective) $R$-module is also a $\Gamma/\Omega$-graded (finitely generated, finitely presented, projective) $R$-module and any homomorphism between $\Gamma$-graded $R$-modules is also a homomorphism of $\Gamma/\Omega$-graded $R$-modules, by restriction, we obtain functions \begin{align*} \left\{\begin{array}{c} \Gamma/\Omega\textrm{-gr-Sylvester module} \\ \textrm{rank functions for }R \end{array}\right\} & & \longrightarrow & & \left\{\begin{array}{c} \Gamma\textrm{-gr-Sylvester module} \\ \textrm{rank functions for }R \end{array}\right\}, \quad \di^{\Gamma/\Omega}\mapsto \di^\Gamma. \end{align*} and \begin{align*} \left\{\begin{array}{c} \Gamma/\Omega\textrm{-gr-Sylvester map} \\ \textrm{rank functions for }R \end{array}\right\} & & \longrightarrow & & \left\{\begin{array}{c} \Gamma\textrm{-gr-Sylvester map} \\ \textrm{rank functions for }R \end{array}\right\}, \quad \rho^{\Gamma/\Omega}\mapsto \rho^\Gamma.
\end{align*} Considering $R$ as a $\Gamma/\Omega$-graded ring and $\Omega'/\Omega$ as a normal subgroup of $\Gamma/\Omega$, we obtain maps $\Spec_{\Gamma/\Omega'}(R)\rightarrow\Spec_{\Gamma/\Omega}(R)$ for each pair of such normal subgroups of $\Gamma$. Hence if $\Spec_{\Gamma/\Omega}(R)$ is empty, then $\Spec_{\Gamma/\Omega'}(R)$ is also empty. In other words, if there does not exist a $\Gamma/\Omega$-graded epic $R$-division ring, then there does not exist a $\Gamma/\Omega'$-graded epic $R$-division ring. In particular, for $\Omega'=\Gamma$ we obtain maps $\Spec(R)\rightarrow\Spec_{\Gamma/\Omega}(R)$, $\mathcal{Q}\mapsto\mathcal{Q}\cap\mathfrak{M}^{\Gamma/\Omega}(R)$, for each normal subgroup $\Omega$ of $\Gamma$. Therefore, if there exists a normal subgroup $\Omega$ of $\Gamma$ such that there does not exist a $\Gamma/\Omega$-graded epic $R$-division ring, then there does not exist an epic $R$-division ring. Let $\mathcal{Q}'\in\Spec_{\Gamma/\Omega'}(R)$ and let $\mathcal{Q}=\mathcal{Q}'\cap\mathfrak{M}^{\Gamma/\Omega}(R)$ be the corresponding element in $\Spec_{\Gamma/\Omega}(R)$. Let $(K_\mathcal{Q'},\varphi_\mathcal{Q'})$ be the $\Gamma/\Omega'$-graded epic $R$-division ring determined by $\mathcal{Q'}$, and let $(K_\mathcal{Q},\varphi_\mathcal{Q})$ be the $\Gamma/\Omega$-graded epic $R$-division ring determined by $\mathcal{Q}$. Let $x$ be a homogeneous element of $R$ considered as a $\Gamma/\Omega$-graded ring. Notice that it is also a homogeneous element of $R$ considered as a $\Gamma/\Omega'$-graded ring. If $x\notin\ker \varphi_{\mathcal{Q}'}$, then $x\in\mathfrak{M}^{\Gamma/\Omega'}(R)\setminus\mathcal{Q}'$. Thus $x\in\mathfrak{M}^{\Gamma/\Omega}(R)\setminus\mathcal{Q}$, and therefore $x\notin\ker\varphi_\mathcal{Q}$. Hence if $\varphi_\mathcal{Q'}$ is injective, then $\varphi_\mathcal{Q}$ is also injective.
In other words, if $(K_{\mathcal{Q}'},\varphi_{\mathcal{Q}'})$ is a $\Gamma/\Omega'$-graded epic $R$-division ring of fractions, then $(K_{\mathcal{Q}},\varphi_{\mathcal{Q}})$ is also a $\Gamma/\Omega$-graded epic $R$-division ring of fractions. Therefore, if there exists a normal subgroup $\Omega$ of $\Gamma$ such that there does not exist a $\Gamma/\Omega$-graded epic $R$-division ring of fractions, then there does not exist an epic $R$-division ring of fractions. Let $\mathcal{P'}\in\Spec_{\Gamma/\Omega'}(R)$ and set $\mathcal{P}=\mathcal{P}'\cap\mathfrak{M}^{\Gamma/\Omega}(R)\in \Spec_{\Gamma/\Omega}(R)$. If $\mathcal{P}'\subseteq\mathcal{Q}'$, then $\mathcal{P}\subseteq\mathcal{Q}$. Hence the existence of a specialization from $(K_{\mathcal{P}'},\varphi_{\mathcal{P}'})$ to $(K_{\mathcal{Q}'},\varphi_{\mathcal{Q}'})$ implies the existence of a specialization from $(K_{\mathcal{P}},\varphi_\mathcal{P})$ to $(K_\mathcal{Q},\varphi_\mathcal{Q})$ by Corollary~\ref{coro:specialization}. Notice that it could happen that $\mathcal{Q}=\mathcal{P}$. Also, if the map $\Spec_{\Gamma/\Omega'}(R)\rightarrow \Spec_{\Gamma/\Omega}(R)$ is surjective and $R$ has a universal $\Gamma/\Omega'$-graded epic $R$-division ring (of fractions), then $R$ has a universal $\Gamma/\Omega$-graded epic $R$-division ring (of fractions). Suppose that for each $\Gamma$-graded epic $R$-division ring $D$ there exists a ring homomorphism from $D$ to a division ring. Then $\Spec_{\Gamma/\Omega}(R)\rightarrow\Spec_\Gamma(R)$ is surjective for each $\Omega\lhd \Gamma$. Let $(D,\varphi)$ be a $\Gamma$-graded epic $R$-division ring with $\Gamma$-singular kernel $\mathcal{P}$. Let $\phi\colon D\rightarrow E$ be a ring homomorphism with $E$ a division ring. Consider the composition $\phi\circ\varphi\colon R\rightarrow E$. It is a homomorphism of $\Gamma/\Omega$-almost graded rings with $E$ a $\Gamma/\Omega$-almost graded division ring.
By Theorem~\ref{theo:gradedlocal}(2)(b), there exist a $\Gamma/\Omega$-graded epic $R$-division ring $(D',\psi)$, where $\psi\colon R\rightarrow D'$, and a ring homomorphism $\rho\colon D'\rightarrow E$ such that $\phi\varphi=\rho\psi$. By Proposition~\ref{prop:almostdivisionring}, $$\{A\in\mathfrak{M}^{\Gamma}(R)\colon A^{(\phi\varphi)} \textrm{ is invertible over } E\}=\{A\in\mathfrak{M}^{\Gamma}(R)\colon A^\varphi\textrm{ is invertible over } D\},$$ $$\{A\in\mathfrak{M}^{\Gamma/\Omega}(R)\colon A^\psi \textrm{ is invertible over } D'\}= \{A\in\mathfrak{M}^{\Gamma/\Omega}(R)\colon A^{(\rho\psi)} \textrm{ is invertible over } E\}.$$ Now, since $\mathfrak{M}^{\Gamma}(R)\subseteq\mathfrak{M}^{\Gamma/\Omega}(R)$, we get that $$\{A\in\mathfrak{M}^{\Gamma}(R)\colon A^\varphi\textrm{ is invertible over } D\}= \{A\in\mathfrak{M}^{\Gamma/\Omega}(R)\colon A^\psi \textrm{ is invertible over } D'\}\cap \mathfrak{M}^\Gamma(R).$$ Hence, if $\mathcal{P'}$ is the $\Gamma/\Omega$-singular kernel of $(D',\psi)$, then $\mathcal{P}=\mathcal{P'}\cap\mathfrak{M}^\Gamma(R)$. We gather together what we have just proved in the following result. \begin{theorem}\label{theo:relationsbetweenspectra} Let $R$ be a $\Gamma$-graded ring. The following assertions hold true. \begin{enumerate}[\rm(1)] \item If there does not exist a $\Gamma/\Omega$-graded epic $R$-division ring (of fractions), then there does not exist a $\Gamma/\Omega'$-graded epic $R$-division ring (of fractions). Therefore, if there exists a normal subgroup $\Omega$ of $\Gamma$ such that there does not exist a $\Gamma/\Omega$-graded epic $R$-division ring (of fractions), then there does not exist an epic $R$-division ring (of fractions).
\item Let $(K_\mathcal{P'},\varphi_\mathcal{P'})$, $(K_\mathcal{Q'},\varphi_\mathcal{Q'})$ be $\Gamma/\Omega'$-graded epic $R$-division rings. If there exists a specialization from $(K_\mathcal{P'},\varphi_\mathcal{P'})$ to $(K_\mathcal{Q'},\varphi_\mathcal{Q'})$, then there exists a specialization between the corresponding $\Gamma/\Omega$-graded epic $R$-division rings. \item If the map $\Spec_{\Gamma/\Omega'}(R)\rightarrow \Spec_{\Gamma/\Omega}(R)$, $\mathcal{Q'}\mapsto \mathcal{Q'}\cap\mathfrak{M}^{\Gamma/\Omega}(R)$, is surjective, then the existence of a universal $\Gamma/\Omega'$-graded epic $R$-division ring implies the existence of a universal $\Gamma/\Omega$-graded epic $R$-division ring. Therefore, if $\Spec(R)\rightarrow \Spec_{\Gamma/\Omega}(R)$, $\mathcal{Q'}\mapsto \mathcal{Q'}\cap\mathfrak{M}^{\Gamma/\Omega}(R)$, is surjective, the existence of a universal epic $R$-division ring implies the existence of a universal $\Gamma/\Omega$-graded epic $R$-division ring. \item If for each $\Gamma$-graded epic $R$-division ring there exists a ring homomorphism to a division ring, then $\Spec_{\Gamma/\Omega}(R)\rightarrow\Spec_\Gamma(R)$, $\mathcal{Q}\mapsto \mathcal{Q}\cap \mathfrak{M}^\Gamma(R)$, is surjective. \qed \end{enumerate} \end{theorem} Note that there is a corresponding statement of Theorem~\ref{theo:relationsbetweenspectra} in terms of the different gr-Sylvester rank functions instead of gr-prime matrix ideals. \medskip Let $R=\bigoplus_{\gamma\in\Gamma}R_\gamma$ be a $\Gamma$-graded ring. In the foregoing, we gave a correspondence from the set of $\Gamma/\Omega$-graded epic $R$-division rings to the set of $\Gamma$-graded epic $R$-division rings. We proceed to give a more down-to-earth description of this correspondence. Recall that $R$ can be regarded as a $\Gamma/\Omega$-graded ring by setting $R=\bigoplus_{\alpha\in\Gamma/\Omega}R_\alpha$, where $R_\alpha=\bigoplus_{\gamma\in\alpha}R_\gamma$ for each $\alpha\in\Gamma/\Omega$.
Let $E=\bigoplus_{\alpha\in\Gamma/\Omega}E_\alpha$ be a $\Gamma/\Omega$-graded division ring. Consider the group ring $E[\Gamma]=\bigoplus_{\gamma\in\Gamma}E\gamma$. We construct a $\Gamma$-graded division ring $D=\bigoplus_{\gamma\in\Gamma}D_\gamma$ which is a $\Gamma$-graded subring of $E[\Gamma]$ in the same way as in \cite[Proposition~1.2.2]{NastasescuvanOystaeyenMethodsgraded}. For each $\gamma\in\Gamma$, there exists a unique $\alpha\in\Gamma/\Omega$ such that $\gamma\in\alpha$. Set $D_\gamma=E_\alpha \gamma\subseteq E\gamma$. Note that $$D_\gamma D_{\gamma'}=E_\alpha\gamma E_{\alpha'}\gamma'=E_\alpha E_{\alpha'}\gamma\gamma'\subseteq E_{\alpha\alpha'}\gamma\gamma'=D_{\gamma\gamma'}.$$ Hence $D$ is a $\Gamma$-graded ring. Since $E$ is a $\Gamma/\Omega$-graded division ring, any nonzero homogeneous element of $D$ is invertible. Thus $D$ is a $\Gamma$-graded division ring. Suppose that $(E,\varphi)$ is a $\Gamma/\Omega$-graded epic $R$-division ring. Let $\gamma\in\Gamma$ and $\alpha\in\Gamma/\Omega$ be such that $\gamma\in\alpha$. For each $a_\gamma\in R_\gamma$, $\varphi(a_\gamma)\in E_\alpha$. Then define $\psi(a_\gamma)=\varphi(a_\gamma)\gamma\in D_\gamma$. In this way, we obtain a homomorphism of $\Gamma$-graded rings $\psi\colon R\rightarrow D$. Let $A=(a_{ij})\in M_n(R)[\overline{\delta}][\overline{\varepsilon}].$ We claim that $A^\varphi$ is invertible in $E$ if and only if $A^\psi$ is invertible in $D$. Indeed, let $\alpha_i,\beta_j\in\Gamma/\Omega$ be such that $\delta_i\in\alpha_i$, $\varepsilon_j\in\beta_j$, and note that $A^\varphi=(b_{ij})$ with $b_{ij}\in E_{\alpha_i\beta_j^{-1}}$. If $A^\varphi$ is invertible, then $(A^\varphi)^{-1}=(c_{ij})$ with $c_{ij}\in E_{\beta_i\alpha_j^{-1}}$, and $A^{\psi}=(b_{ij}\delta_i\varepsilon_j^{-1})$ is invertible in $D$ with inverse $(A^{\psi})^{-1}=(c_{ij}\varepsilon_i\delta_j^{-1})$. Conversely, if $A^\psi$ is invertible with inverse $(A^\psi)^{-1}=(d_{ij}\varepsilon_i\delta_j^{-1})$ where $d_{ij}\in E_{\beta_i\alpha_j^{-1}}$, then $(A^\varphi)^{-1}=(d_{ij})$.
Hence, let $\mathcal{P}\in\Spec_{\Gamma/\Omega}(R)$. If $(E,\varphi)$ is the $\Gamma/\Omega$-graded epic $R$-division ring associated to $\mathcal{P}$, then the $\Gamma$-graded epic $R$-division ring associated to $\mathcal{P}\cap\mathfrak{M}^\Gamma(R)$ is determined by the homomorphism of $\Gamma$-graded rings $\psi\colon R\rightarrow D$. That is, it is the $\Gamma$-graded epic $R$-division ring $\psi\colon R\rightarrow D'$, where $D'$ is the $\Gamma$-graded division subring of $D$ generated by $\im\psi$. \bigskip Now we proceed to give an important family of examples of Theorem~\ref{theo:relationsbetweenspectra}(4). Let $(\Gamma,<)$ be an ordered group. Let $D=\bigoplus_{\gamma\in\Gamma}D_\gamma$ be a $\Gamma$-graded division ring. Given a map $f\colon\Gamma\rightarrow D$, let $\supp f=\{\gamma\in\Gamma\colon f(\gamma)\neq 0\}$. We will write $f$ as a series. Thus, $f=\sum_{\gamma\in\Gamma}a_\gamma$ means that $f(\gamma)=a_\gamma\in D$ for each $\gamma\in\Gamma$. Consider the set $$D((\Gamma;<))=\left\{f=\sum_{\gamma\in\Gamma}a_\gamma\colon a_\gamma\in D_\gamma \textrm{ for all }\gamma\in\Gamma,\ \supp f\textrm{ is well ordered}\right\}.$$ $D((\Gamma;<))$ is an abelian group under the natural sum. That is, for $f=\sum_{\gamma\in\Gamma}a_\gamma$ and $f'=\sum_{\gamma\in\Gamma}a'_\gamma$, $$f+f'=\sum_{\gamma\in\Gamma}(a_\gamma+a'_\gamma).$$ One can then define the product in $D((\Gamma;<))$ as $$ff'=\sum_{\gamma\in\Gamma}\left(\sum_{\delta\varepsilon=\gamma}a_\delta a'_{\varepsilon}\right).$$ These operations endow $D((\Gamma;<))$ with a ring structure. We regard $D$ as a subring of $D((\Gamma;<))$ identifying $D$ with the series of $D((\Gamma;<))$ of finite support. Malcev and Neumann independently showed that $D((\Gamma;<))$ is in fact a division ring \cite{Malcev}, \cite{Neumann}. Hence, we have just shown that every division ring graded by an ordered group admits a ring homomorphism to a division ring.
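As an illustration of the Malcev--Neumann construction, consider the following standard example, which is not needed in the sequel. Take $\Gamma=\mathbb{Z}$ with its usual order and let $D=k[t,t^{-1}]$ be the Laurent polynomial ring over a field $k$, graded by $D_n=kt^n$; every nonzero homogeneous element is invertible, so $D$ is a $\mathbb{Z}$-graded division ring.

```latex
% A subset of \mathbb{Z} is well ordered precisely when it is bounded below,
% so in this case the Malcev--Neumann series ring is
D((\mathbb{Z};<))
  =\left\{\sum_{n\geq n_0} a_n t^n \colon n_0\in\mathbb{Z},\ a_n\in k\right\}
  \cong k((t)),
% the field of formal Laurent series over k; in particular it is a
% division ring, in accordance with the Malcev--Neumann theorem.
```

The embedding $D\hookrightarrow D((\mathbb{Z};<))$ identifies a Laurent polynomial with the corresponding series of finite support.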
Now we proceed to show that every $D((\Gamma;<))$ contains a $\Gamma/\Omega$-graded division ring and that it corresponds to $D$ via $\Spec_{\Gamma/\Omega}(R)\rightarrow \Spec_{\Gamma}(R)$. Let $\Omega$ be a normal subgroup of $\Gamma$. Consider $D$ as a $\Gamma/\Omega$-graded ring. For each $\alpha\in\Gamma/\Omega$, define the subset of $D((\Gamma;<))$ $$E_\alpha=\left\{f=\sum_{\gamma\in\Gamma}a_\gamma\in D((\Gamma;<))\colon \supp f\subseteq \alpha \right\}.$$ Note that $E_\alpha$ is an additive subgroup of $D((\Gamma;<))$. Let $\alpha,\beta\in\Gamma/\Omega$. Suppose that $f=\sum_{\gamma\in\Gamma}a_\gamma\in E_\alpha$ and $f'=\sum_{\gamma\in\Gamma}a_\gamma'\in E_\beta$. Then $$ff'=\sum_{\gamma\in\Gamma}\left(\sum_{\delta\varepsilon=\gamma}a_\delta a'_{\varepsilon}\right)\in E_{\alpha\beta}.$$ Hence $E_\alpha E_\beta\subseteq E_{\alpha\beta}$. Moreover, if $\alpha\in\Gamma/\Omega$, $$E_\alpha\cap \left(\sum_{\beta\in\Gamma/\Omega,\, \beta\neq \alpha} E_\beta \right)=\{0\},$$ because $\Gamma$ is the disjoint union $\Gamma=\bigcup_{\beta\in\Gamma/\Omega}\beta.$ Hence $E(\Omega)=\bigoplus_{\alpha\in\Gamma/\Omega}E_\alpha$ is a $\Gamma/\Omega$-graded ring. Furthermore, let $f=\sum_{\gamma\in\Gamma}a_\gamma\in E_\alpha$, $f\neq 0$. Then $f$ is invertible in $D((\Gamma;<))$ with inverse $$f^{-1}=\left(\sum_{n\geq0}(-1)^ng^n\right)a_{\gamma_0}^{-1}$$ where $\gamma_0=\min\supp f$ and $g=\sum_{\gamma\in\Gamma,\,\gamma\neq\gamma_0} a_{\gamma_0}^{-1}a_\gamma$. Since $\supp f\subseteq \alpha$, we have $\gamma_0\in\alpha$ and $\gamma_0^{-1}\in\alpha^{-1}$, and thus $g\in E_e$, where $e$ denotes the identity element of $\Gamma/\Omega$. Hence $g^n\in E_e$ for each integer $n\geq 0$ and $\sum_{n\geq 0}(-1)^ng^n\in E_e$. Thus $f^{-1}\in E_{\alpha^{-1}}$, that is, $\supp f^{-1}\subseteq \alpha^{-1}$. Therefore $E(\Omega)$ is a $\Gamma/\Omega$-graded division ring and the embedding $\phi_\Omega\colon D\hookrightarrow E(\Omega)$ is a homomorphism of $\Gamma/\Omega$-graded rings.
Let $D(\Omega)$ be the $\Gamma/\Omega$-graded division subring of $E(\Omega)$ generated by $D$. Then $(D(\Omega),\phi_\Omega\colon D\hookrightarrow D(\Omega))$ is a $\Gamma/\Omega$-graded epic $D$-division ring. Let $R=\bigoplus_{\gamma\in\Gamma} R_\gamma$ be a $\Gamma$-graded ring, where $(\Gamma,<)$ is an ordered group. Let $\mathcal{P}\in\Spec_\Gamma(R)$ with corresponding $\Gamma$-graded epic $R$-division ring $(K,\varphi)$. Consider $K((\Gamma;<))$. Then for each $\Omega\lhd \Gamma$, we get that $\Spec_{\Gamma/\Omega}(R)\rightarrow \Spec_\Gamma(R)$ is surjective. Indeed, let $\mathcal{Q}\in\Spec_{\Gamma/\Omega}(R)$ be the $\Gamma/\Omega$-gr-prime matrix ideal corresponding to the $\Gamma/\Omega$-graded epic $R$-division ring $(K(\Omega),\phi_\Omega\varphi)$; then $\mathcal{Q}\mapsto\mathcal{P}$ by Proposition~\ref{prop:almostdivisionring}. We would like to remark that $D(\Gamma)$, the division subring of $D((\Gamma;<))$ generated by $D$, does not depend on the order $<$ of $\Gamma$ by \cite{Hughes} or \cite{DicksHerberaSanchez}. Hence, since $D(\Omega)$ is just $\DC(\phi_\Omega)$, $D(\Omega)$ does not depend on the order $<$ of $\Gamma$ either. \medskip We end this section with a concrete application of the foregoing results. Let $K$ be a field, $X$ be a nonempty set and $K\langle X\rangle$ be the free $K$-algebra on $X$. It is well known that $K\langle X\rangle$ has a universal division ring of fractions \cite[Section~7.5]{Cohnfreeeidealringslocalization}. Now let $\Gamma$ be a group and $X\rightarrow \Gamma$, $x\mapsto \hat{x}$, be a map. Then $K\langle X\rangle =\bigoplus_{\gamma\in\Gamma} K\langle X\rangle_\gamma$ is a $\Gamma$-graded ring, where $K\langle X\rangle_\gamma$ is the $K$-vector space spanned by the monomials $x_1x_2\dotsm x_r$ such that $\hat{x}_1\hat{x}_2\cdots \hat{x}_r=\gamma$.
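To illustrate the grading on the free algebra just defined, here is a standard instance, included only as an example: take $X=\{x,y\}$, $\Gamma=\mathbb{Z}$ and $\hat{x}=\hat{y}=1$.

```latex
% With \hat{x}=\hat{y}=1, the component K<x,y>_n is spanned by the words of
% length n, so the grading is the usual grading by total degree; for instance
K\langle x,y\rangle_0=K,\qquad
K\langle x,y\rangle_2=Kx^2\oplus Kxy\oplus Kyx\oplus Ky^2,\qquad
K\langle x,y\rangle_n=0 \ \textrm{for } n<0.
% Choosing instead \hat{x}=1 and \hat{y}=-1 yields a grading in which, e.g.,
% the monomials xy and yx are homogeneous of degree 0.
```

Since $(\mathbb{Z},<)$ with its usual order is an ordered group, both gradings fall within the scope of the preceding discussion.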
If $(\Gamma,<)$ is an ordered group, then $K\langle X\rangle$ has a universal $\Gamma$-graded epic division ring of fractions by the foregoing example and Theorem~\ref{theo:relationsbetweenspectra}(3),(4). \section{Inverse limits and ultraproducts in the category of graded epic $R$-division rings} \label{sec:inverselimitsandultraproducts} For details on filters and ultrafilters we refer the reader to \cite{BourbakiGeneralTobology}. Let $I$ be a nonempty set. A \emph{filter} on $I$ is a set $\mathfrak{F}$ of subsets of $I$ which has the following properties: \begin{enumerate}[(F1)] \item Every subset of $I$ that contains a set of $\mathfrak{F}$ belongs to $\mathfrak{F}$. \item Every finite intersection of sets of $\mathfrak{F}$ belongs to $\mathfrak{F}$. \item The empty set is not in $\mathfrak{F}$. \end{enumerate} The set of filters on $I$ is partially ordered by inclusion. An \emph{ultrafilter} on $I$ is a maximal filter. By \cite[Theorem~1, p.~60]{BourbakiGeneralTobology}, each filter is contained in an ultrafilter. An ultrafilter $\mathfrak{U}$ on $I$ has the following property: if $J,K$ are subsets of $I$ such that $J\cup K=I$, then either $J\in\mathfrak{U}$ or $K\in \mathfrak{U}$. The concrete ultrafilters we will be dealing with are constructed as follows. Let $(I,\leq)$ be a directed preordered set. For each $i\in I$, the set $\mathfrak{S}(i)=\{j\in I\colon i\leq j\}$ is called a \emph{section} of $I$ relative to $i$. The set $\mathfrak{S}$ consisting of all sections relative to elements of $I$ is a filter base, and there exists a filter containing $\mathfrak{S}$ \cite[Proposition~2, p.~59]{BourbakiGeneralTobology}. Therefore there exists an ultrafilter on $I$ containing $\mathfrak{S}$. \bigskip Let $\Gamma$ be a group. Let $I$ be a set and $\mathfrak{U}$ be an ultrafilter on $I$. For each $i\in I$, let $R_i=\bigoplus_{\gamma\in\Gamma}R_{i\gamma}$ be a $\Gamma$-graded ring. We proceed to define the graded ultraproduct of the family $\{R_i\}_{i\in I}$ following \cite{IonNita}.
Consider the ring $P=\prod_{i\in I} R_i$ and the following subset $S$ of $P$: $$S=\bigoplus_{\gamma\in\Gamma}\left(\prod_{i\in I} R_{i\gamma} \right).$$ Note that $S$ is a subring of $P$ which is $\Gamma$-graded with $S_\gamma=\prod_{i\in I}R_{i\gamma}$. For each $\gamma\in\Gamma$, if $x=(x_{i\gamma})_{i\in I}\in S_\gamma$, let $z(x)=\{i\in I\colon x_{i\gamma}=0\}$. The set $Z_\gamma=\{x\in S_\gamma\colon z(x)\in\mathfrak{U}\}$ is an additive subgroup of $S_\gamma$. Moreover, if $y\in S_\delta$ and $x\in Z_\gamma$, then $yx\in Z_{\delta\gamma}$ and $xy\in Z_{\gamma\delta}$. Therefore $Z=\bigoplus_{\gamma\in\Gamma}Z_\gamma$ is a graded ideal of $S$. Then the $\Gamma$-graded ring $U=S/Z$ is called the \emph{graded ultraproduct} of the family of $\Gamma$-graded rings $\{R_i\}_{i\in I}$. A homogeneous element $x\in U_\gamma$ is the class of an element $(x_i)_{i\in I}\in S_\gamma$, where each $x_i\in R_{i\gamma}$. We will write $x=[(x_i)_{i\in I}]_\mathfrak{U}$. Observe that if $x=[(x_i)_{i\in I}]_\mathfrak{U}$ and $y=[(y_i)_{i\in I}]_\mathfrak{U}$, then $x=y$ if and only if the set $\{i\in I\colon x_i=y_i\}\in\mathfrak{U}$. Suppose that $(R_i,\varphi_i)$ is a $\Gamma$-graded $R$-ring for each $i\in I$; that is, $\varphi_i\colon R\rightarrow R_i$ is a homomorphism of $\Gamma$-graded rings. Then there exists a unique homomorphism of rings $\varphi'\colon R\rightarrow \prod_{i\in I}R_i$ such that $\pi_i\varphi'=\varphi_i$ for each $i\in I$, where $\pi_i\colon \prod_{i\in I}R_i\rightarrow R_i$ denotes the canonical projection. Observe that $\im \varphi'\subseteq S$. Composing with the natural homomorphism $S\rightarrow S/Z=U$, we obtain a homomorphism of $\Gamma$-graded rings $\varphi\colon R\rightarrow U$. Hence $U$ is a $\Gamma$-graded $R$-ring in a natural way. This fact and the following lemma will be very useful in this section. \begin{lemma}\label{lem:ultraproduct} Let $\Gamma$ be a group. Let $I$ be a nonempty set and $\mathfrak{U}$ be an ultrafilter on $I$.
\begin{enumerate}[\rm(1)] \item If $R_i$ is a $\Gamma$-graded division ring for each $i\in I$, then the ultraproduct $U$ of the family $\{R_i\}_{i\in I}$ is a $\Gamma$-graded division ring. \item If $R_i$ is a $\Gamma$-graded local ring with graded maximal ideal $\mathfrak{m}_i$ for each $i\in I$, then the ultraproduct $U$ of the family $\{R_i\}_{i\in I}$ is a $\Gamma$-graded local ring with residue $\Gamma$-graded division ring $V$, the ultraproduct of the family of $\Gamma$-graded division rings $\{R_i/\mathfrak{m}_i\}_{i\in I}$. \end{enumerate} \end{lemma} \begin{proof} (1) Let $x\in U_\gamma$. Then $x=[(x_i)_{i\in I}]_\mathfrak{U}$ for some $x_{i}\in R_{i\gamma}$. If $x$ is nonzero, then $J=\{i\in I\colon x_i\neq 0\}\in \mathfrak{U}$. For each $i\in I$, define $$x_i'=\left\{\begin{array}{ll} x_i^{-1} & \textrm{if } i\in J \\ 0 & \textrm{if } i\notin J\end{array}\right. .$$ Notice that $x_i'\in R_{i\gamma^{-1}}$ for each $i \in I$. Then $x'=[(x_i')_{i\in I}]_\mathfrak{U}\in U_{\gamma^{-1}}$ and $xx'=x'x=1$, as desired. (2) A homogeneous element $x=[(x_i)_{i\in I}]_\mathfrak{U}$ is invertible in $U$ if and only if the set $\{i\in I\colon x_i \textrm{ is invertible in }R_i\}\in\mathfrak{U}$ if and only if $\{i\in I\colon x_i\notin \mathfrak{m}_i\}\in \mathfrak{U}$. Therefore, the ideal $\mathfrak{m}$ generated by the homogeneous noninvertible elements, that is, by the set $\{[(x_i)_{i\in I}]_{\mathfrak{U}}\in \h(U)\colon \{i\in I\colon x_i\in\mathfrak{m}_i\}\in\mathfrak{U}\}$, is a proper ideal of $U$. Hence $U$ is a graded local ring. It is not difficult to prove that the projections $R_i\rightarrow R_i/\mathfrak{m}_i$, $a\mapsto\overline{a}$, induce a surjective homomorphism of $\Gamma$-graded rings $U\rightarrow V$, $[(x_i)_{i\in I}]_\mathfrak{U}\mapsto [(\overline{x_i})_{i\in I}]_\mathfrak{U}$. Note that a homogeneous element $[(x_i)_{i\in I}]_\mathfrak{U}$ is in the kernel if and only if it belongs to $\mathfrak{m}$.
\end{proof} The following corollary was used in Section~\ref{sec:grmatrixideals}. \begin{corollary}\label{coro:ultraproductoffractions} Let $R$ be a $\Gamma$-graded domain. Suppose that, for each $a\in\h(R)\setminus\{0\}$, there exists a homomorphism of $\Gamma$-graded rings $\varphi_a\colon R\rightarrow K_a$, where $K_a$ is a $\Gamma$-graded division ring such that $\varphi_a(a)\neq 0$. Then there exists a $\Gamma$-graded epic $R$-division ring of fractions. \end{corollary} \begin{proof} Let $I=\h(R)\setminus\{0\}$. For each $a\in I$, let $I_a=\{\lambda\in I\colon \varphi_\lambda(a)\neq 0\}$. Let $E=\{a_1,\dotsc,a_n\}$ be a finite subset of $I$. Then $\bigcap_{i=1}^n I_{a_i}\neq \emptyset$ because, $R$ being a $\Gamma$-graded domain, $a_1\dotsb a_n\in I$ and $\varphi_{a_1\dotsb a_n}(a_i)\neq 0$ for each $i=1,\dotsc,n$. Hence the set $\mathfrak{B}=\{I_a\colon a\in I\}$ is a set of subsets of $I$ such that no finite subset of $\mathfrak{B}$ has empty intersection. By \cite[Proposition~1, p.~58]{BourbakiGeneralTobology}, there exists a filter on $I$ containing $\mathfrak{B}$. By \cite[Theorem~1, p.~60]{BourbakiGeneralTobology}, there exists an ultrafilter $\mathfrak{U}$ on $I$ containing $\mathfrak{B}$. By Lemma~\ref{lem:ultraproduct}(1), the ultraproduct $U$ of the family $\{K_a\}_{a\in I}$ is a $\Gamma$-graded division ring and there exists a homomorphism of $\Gamma$-graded rings $\varphi\colon R\rightarrow U$, defined by $\varphi(x)=[(\varphi_a(x))_{a\in I}]_{\mathfrak{U}}$. Since $I_x\in\mathfrak{U}$ for each $x\in\h(R)\setminus\{0\}$, we get that $\varphi(x)\neq 0$ for each $x\in\h(R)\setminus\{0\}$. Therefore $\varphi$ is injective. \end{proof} Let $R$ be a $\Gamma$-graded ring. Consider the category $\mathcal{E}_R$ of $\Gamma$-graded epic $R$-division rings with specializations as morphisms defined in Section~\ref{sec:categoryspecializations}. First we describe inverse systems in this category.
An inverse system in $\mathcal{E}_R$ is a pair $((K_i,\varphi_i)_{i\in I}, (\psi_{i,j})_{i\geq j})$ where $(I,\leq)$ is a directed preordered set, $(K_i,\varphi_i)$ is a $\Gamma$-graded epic $R$-division ring for each $i\in I$, and $\psi_{i,j}$ is a specialization from $(K_i,\varphi_i)$ to $(K_j,\varphi_j)$, $i\geq j$, such that \begin{equation}\label{eq:inversesystem} \psi_{j,k}\circ\psi_{i,j}=\psi_{i,k} \textrm{ for all } i,j,k\in I, i\geq j\geq k. \end{equation} Observe that \eqref{eq:inversesystem} is superfluous: since there is at most one specialization between two graded epic $R$-division rings, once the specializations exist, the equality in \eqref{eq:inversesystem} holds automatically. Now we look at inverse limits. An inverse limit of the inverse system \linebreak $((K_i,\varphi_i)_{i\in I}, (\psi_{i,j})_{i\geq j})$ in $\mathcal{E}_R$ is a pair $((K,\varphi),(\psi_i)_{i\in I})$ where $(K,\varphi)$ is a $\Gamma$-graded epic $R$-division ring and $\psi_i$ is a specialization from $(K,\varphi)$ to $(K_i,\varphi_i)$ for each $i\in I$ such that the following properties are satisfied: \begin{enumerate}[(i)] \item $\psi_{i,j}\circ \psi_i=\psi_j$ for all $i,j\in I$, $i\geq j$. \item For each pair $((L,\varphi'),(\psi_i')_{i\in I})$ that satisfies (i), i.e. $\psi_{i,j}\circ \psi_i'=\psi'_j$ for $i\geq j$, there is a specialization $\phi$ from $(L,\varphi')$ to $(K,\varphi)$ such that $\psi_i\circ \phi=\psi_i'$ for all $i\in I$. \end{enumerate} Again note that (i) and the equality of specializations in (ii) are superfluous. \begin{theorem} Let $R$ be a $\Gamma$-graded ring. Let $((K_i,\varphi_i)_{i\in I}, (\psi_{i,j})_{i\geq j})$ be an inverse system in $\mathcal{E}_R$ indexed on the directed nonempty preordered set $(I,\leq)$. Consider an ultrafilter $\mathfrak{U}$ on $I$ that contains all the sections $\mathfrak{S}(i)$, $i\in I$, of $I$.
Set $$\Sigma_i=\{A\in\mathfrak{M}(R)\colon A^{\varphi_i} \textrm{ is invertible over } K_i\},\quad i\in I.$$ The following assertions hold true. \begin{enumerate}[\rm(1)] \item There exists a $\Gamma$-graded epic $R$-division ring $(K,\varphi)$ which is the inverse limit of $((K_i,\varphi_i)_{i\in I}, (\psi_{i,j})_{i\geq j})$. \item If $\Sigma=\{A\in\mathfrak{M}(R)\colon A^\varphi \textrm{ is invertible over }K\}$, then $\Sigma=\bigcup_{i\in I}\Sigma_i$. \item The $\Gamma$-graded epic $R$-division ring determined by the ultraproduct $U$ of the family $\{(K_i,\varphi_i)\}_{i\in I}$ equals $(K,\varphi)$. \item The ultraproduct $V$ of the family $\{R_{\Sigma_i}\}_{i\in I}$ is a $\Gamma$-graded local ring with residue $\Gamma$-graded division ring equal to $U$. \item $R_\Sigma$ embeds in $V$. \end{enumerate} \end{theorem} \begin{proof} For each $i\in I$, let $\mathcal{P}_i$ be the singular kernel of $\varphi_i\colon R\rightarrow K_i$. (1) Since there exists a specialization $\psi_{i,j}\colon K_i\rightarrow K_j$ for $i,j\in I$, $i\geq j$, we have $\mathcal{P}_i\subseteq\mathcal{P}_j$ by Corollary~\ref{coro:specialization}. Thus, the family of prime matrix ideals $\{\mathcal{P}_i\}_{i\in I}$ is directed from below. Set $\mathcal{P}=\bigcap_{i\in I}\mathcal{P}_i$. It is not difficult to prove that $\mathcal{P}$ satisfies (PM1)--(PM3), (PM5) and (PM6) in the definition of gr-prime matrix ideal. To show that $\mathcal{P}$ satisfies (PM4), let $A,B\in\mathfrak{M}(R)\setminus\mathcal{P}$. There exist $i,j\in I$ such that $A\notin \mathcal{P}_i$ and $B\notin \mathcal{P}_j$. Since $I$ is directed, there exists $k\in I$ such that $i\leq k$, $j\leq k$. Thus $\mathcal{P}_k\subseteq\mathcal{P}_i$ and $\mathcal{P}_k\subseteq \mathcal{P}_j$, and neither $A$ nor $B$ belongs to the gr-prime matrix ideal $\mathcal{P}_k$. Hence $A\oplus B\notin\mathcal{P}_k$, and therefore $A\oplus B\notin\mathcal{P}$.
Let $(K,\varphi)$ be the $\Gamma$-graded epic $R$-division ring corresponding to $\mathcal{P}$. Since $\mathcal{P}\subseteq \mathcal{P}_i$ for all $i\in I$, there exists a unique specialization $\psi_i\colon K\rightarrow K_i.$ Consider now a pair $((L,\varphi'),(\psi_i')_{i\in I})$ where $L$ is a $\Gamma$-graded epic $R$-division ring and $\psi_i'$ is a gr-specialization from $(L,\varphi')$ to $(K_i,\varphi_i)$ for each $i\in I$. Let $\mathcal{Q}$ be the singular kernel of $\varphi'\colon R\rightarrow L$. Then $\mathcal{Q}\subseteq\mathcal{P}_i$ for each $i\in I$. Hence $\mathcal{Q}\subseteq \mathcal{P}$. Therefore, there exists a specialization $\phi\colon L\rightarrow K$ by Corollary~\ref{coro:specialization}. (2) By (1), the singular kernel of $(K,\varphi)$ equals $\mathcal{P}=\bigcap_{i\in I}\mathcal{P}_i$. Hence, $\Sigma=\mathfrak{M}(R)\setminus\mathcal{P}=\bigcup_{i\in I}\Sigma_i$. (3) Let $U$ be the graded ultraproduct of the family $\{(K_i,\varphi_i)\}_{i\in I}$ of $\Gamma$-graded epic $R$-division rings, and let $\varphi\colon R\rightarrow U$ be the canonical homomorphism of $\Gamma$-graded rings. Let $L$ be the $\Gamma$-graded division subring of $U$ generated by the image of $\varphi$. Consider the $\Gamma$-graded epic $R$-division ring $(L,\varphi)$. Let $A\in\bigcup_{i\in I}\Sigma_i$ be such that $A\in M_n(R)[\overline{\alpha}][\overline{\beta}]$ for some $\overline{\alpha},\overline{\beta}\in \Gamma^n$. By Theorem~\ref{theo:specialization}, note that $\Sigma_j\subseteq\Sigma_i$ for all $i,j$, $i\geq j$. Let $t\in I$ be such that $A\in\Sigma_t$. Then $A\in\Sigma_i$ for all $i\in\mathfrak{S}(t)$. Define $M_i=\left\{\begin{array}{ll} (A^{\varphi_i})^{-1} & \textrm{if } i\in\mathfrak{S}(t) \\ 0 & \textrm{if } i\notin\mathfrak{S}(t). \end{array}\right.$ If $M_i$ has $(u,v)$-entry $m_{uv}^i\in (K_i)_{\beta_u\alpha_v^{-1}}$, let $M=(m_{uv})\in M_n(L)$ where $m_{uv}=[(m^i_{uv})_{i\in I}]_\mathfrak{U}$. Note now that $A^\varphi M=MA^{\varphi}=I$.
Indeed, if $A^\varphi M=([c^i_{uv}]_\mathfrak{U})$, then $[c^i_{uv}]_\mathfrak{U}\in U_e$, $\mathfrak{S}(t)\subseteq\{i\in I\colon c_{uu}^i=1\}$ and $\mathfrak{S}(t)\subseteq\{i\in I\colon c^i_{uv}=0,\, u\neq v\}$. Similarly one can show that $MA^\varphi=I$. Conversely, suppose that $A\in M_n(R)[\overline{\alpha}][\overline{\beta}]$ is such that $A^\varphi$ is invertible over $U$. Let $(A^{\varphi})^{-1}=(m_{uv})$, $u,v=1,\dotsc,n$, where $m_{uv}=[(m_{uv}^i)_{i\in I}]_\mathfrak{U}$. For each $i\in I$, let $M_i\in M_n(K_i)[\overline{\alpha}][\overline{\beta}]$ be the matrix with $(u,v)$-entry $m^i_{uv}$. Since $A^\varphi(A^\varphi)^{-1}$ and $(A^\varphi)^{-1}A^\varphi$ equal the identity matrix, the set $$\mathcal{U}=\{i\in I\colon M_i \textrm{ is the inverse of } A^{\varphi_i} \textrm{ over } K_i\}$$ belongs to $\mathfrak{U}$. In particular $\mathcal{U}\neq \emptyset$. If $i\in\mathcal{U}$, then $A\in\Sigma_i$, and therefore $A\in\bigcup_{i\in I}\Sigma_i$. Hence, the singular kernel of $(L,\varphi)$ is $\mathcal{P}=\mathfrak{M}(R)\setminus(\bigcup_{i\in I}\Sigma_i)$. (4) is Lemma~\ref{lem:ultraproduct}(2). (5) First observe that $R_\Sigma=\varinjlim R_{\Sigma_i}$. Let $\tau_i\colon R_{\Sigma_i}\rightarrow R_\Sigma$ and $\lambda_i\colon R\rightarrow R_{\Sigma_i}$ be the natural homomorphisms of $\Gamma$-graded rings. By (3), there exists a natural homomorphism of $\Gamma$-graded rings $R_\Sigma\rightarrow U$. Since $V$ is $\Gamma$-graded local with residue $\Gamma$-graded division ring equal to $U$, the universal property of $R_\Sigma$ induces a homomorphism of $\Gamma$-graded rings $\rho\colon R_\Sigma\rightarrow V$. We must prove that $\rho$ is injective. Suppose that $x\in\ker\rho$ is homogeneous. Then $x$ is the $(u,v)$-entry of the inverse over $R_\Sigma$ of a matrix $A$ with $A\in\Sigma=\bigcup_{i\in I}\Sigma_i.$ Let $i\in I$ be such that $A\in\Sigma_i$. Note that $A\in\Sigma_l$ for all $l\in I$, $i\leq l$. Let $x^l_{uv}$ be the $(u,v)$-entry of the inverse of $A^{\lambda_l}$, $i\leq l$. Then $\tau_l(x^l_{uv})=x$ for each $l\geq i$.
Then $\rho(x)=[(z_{uv}^l)_{l\in I}]_\mathfrak{U}$ where $z_{uv}^l=\left\{\begin{array}{ll}x_{uv}^l & \textrm{if } i\leq l\\ 0 & \textrm{otherwise} \end{array}\right.$. Since $\rho(x)=0$, we have $[(z_{uv}^l)_{l\in I}]_\mathfrak{U}=0$, and thus the set $\{j\in I\colon z_{uv}^j=0\}\in\mathfrak{U}$. Since the section $\mathfrak{S}(i)$ also belongs to $\mathfrak{U}$, there exists $j\geq i$ with $x_{uv}^j=z_{uv}^j=0$, and hence $x=\tau_j(x_{uv}^j)=0$. \end{proof} Note that the proof of (3) also shows (1) in a more elementary way. \bibliographystyle{amsplain}
\section{Introduction} \label{sec:intro} The uniqueness theorem states that all black hole solutions of the Einstein-Maxwell equations of gravitation and electromagnetism can be completely characterized by only three parameters: mass, electric charge, and angular momentum \cite{Chrusciel:2012jk,Herdeiro:2015waa}. The Kerr-Newman black hole is the rotating and charged black hole solution of the Einstein-Maxwell field equations. However, rotating and charged black hole solutions can also be found in other theories. For example, there is a rotating and charged black hole solution known as the Kerr-Sen black hole, which solves a set of classical equations of motion arising in the low energy limit of heterotic string field theory. In 1992, Sen \cite{Sen:1992ua} was the first to derive this solution, obtaining it by transforming the Kerr metric into the Kerr-Sen one. The Kerr-Newman and Kerr-Sen black holes have quite similar physical properties, since both are rotating and charged black hole solutions. The motivation for studying Kerr-Sen black holes is that the Universe may not precisely follow Einstein-Maxwell theory, but rather a more complex one, namely, string theory. If this is the case, the expected rotating and charged black hole solution would be the Kerr-Sen solution instead of the Kerr-Newman one.\\ \indent Hod proposed that Kerr-Newman black holes can support linear charged scalar fields in the near-extremal regime \cite{Hod:2014baa}. These bound states are stationary scalar configurations in the black hole background which are regular at the horizon and outside it. They are called stationary scalar clouds \cite{Hod:2014baa,Herdeiro:2014goa}. The importance of these clouds is that they suggest the existence of hairy black holes in the fully nonlinear regime, i.e., considering the full Einstein-scalar theory.
This was first shown in \cite{Herdeiro:2014goa}, complemented by \cite{Herdeiro:2015gia}, and conjectured as a general mechanism in \cite{Herdeiro:2014ima}. A stationary bound state of a charged massive scalar field in a black hole background must exhibit an exponentially decaying radial behavior at spatial infinity \cite{Benone:2014ssa}. By analyzing the radial equation (\ref{SV04}), we found that the asymptotic radial solutions match this requirement \cite{Siahaan:2015xna}, \be \label{IN01} R_{lm}\left( r \right) \approx \left\{ \begin{array}{l} {e^{ - i\left( {\omega - {\omega _c}} \right){r_*}}} ~~~~~{\rm{ for }}~r \to {r_ + },\\ {e^{ - i{r_*}\sqrt {{\omega ^2} - {\mu ^2}} }} ~~~{\rm{ for }}~r \to \infty ,{\rm{ }} \end{array} \right. \ee where $\omega_c$ is the critical frequency \be \label{IN02} {\omega _c} \equiv m{\Omega _H} + q{\Phi _H}. \ee Here, $r_+$ is the radial coordinate of the outer horizon defined in Eq. (\ref{KSG07}), ${\Omega _H}$ is the angular velocity, and ${\Phi _H}$ is the electrostatic potential defined in Eqs. (\ref{KSG15}) and (\ref{KSG16}). The ``tortoise'' radial coordinate $r_*$ is defined by \be \label{IN03} \frac{{d{r_*}}}{{dr}} = \frac{{{\Delta _{KS}} + 2Mr}}{{{\Delta _{KS}}}}. \ee The critical frequency $\omega_c$ in Kerr-Sen black hole backgrounds is the same as the one found in the Kerr-Newman case. This is not surprising, because both describe the spacetime outside a charged and rotating black hole \cite{Siahaan:2015xna}. In this paper, we study whether there is a scalar cloud surrounding a Kerr-Sen black hole.\\ \indent The paper is organized as follows. In Sec. \ref{sec:kerr-sen} we review the Kerr-Sen geometry and the corresponding Hawking temperature, angular velocity, and electrostatic potential. In Sec. \ref{sec:angular-radial} we separate the scalar field equation on Kerr-Sen geometry into the angular and radial parts. In Sec.
\ref{sec:far} we study and derive the radial solution in the far region, defined by $x\gg\tau$. Then in Sec. \ref{sec:near} we derive the radial solution in the near region defined by $x\ll 1$, and the solution in the matching region defined by $\tau\ll x\ll1$ is given in Sec. \ref{sec:matching}. Finally, the stationary bound state of charged massive scalar fields in the Kerr-Sen black hole is given in Sec. \ref{sec:bound-state}. In Sec. \ref{sec:GMGHS} we review the Gibbons-Maeda-Garfinkle-Horowitz-Strominger (GMGHS) spacetime and stationary charged scalar clouds in GMGHS spacetime. In the last section we give our conclusions. In this paper we use units in which $G = c = \hbar = 1$. \section{Kerr-Sen Geometry} \label{sec:kerr-sen} In 1992, Sen derived a four-dimensional charged and rotating black hole solution in the low energy limit of heterotic string field theory. The string theory effective action in four dimensions is given by \cite{Siahaan:2015xna,Ghezelbash:2012qn,Siahaan:2015ljs} \be \label{KSG01} S = \int {{d^4}x\sqrt { - g} } {e^{ - \tilde \Phi }}\left( {R - \frac{1}{8}{F_{\mu \nu }}{F^{\mu \nu }} + {g^{\mu \nu }}{\partial _\mu }\tilde \Phi {\partial _\nu }\tilde \Phi - \frac{1}{{12}}{H_{\kappa \lambda \mu }}{H^{\kappa \lambda \mu }}} \right), \ee where $g$ is the determinant of the tensor metric $g_{\mu\nu}$, $R$ is the scalar curvature, $\tilde \Phi$ is the dilaton field, $F_{\mu\nu}$ is the field-strength tensor \be \label{KSG02} {F_{\mu \nu }} = {\partial _\mu }{A_\nu } - {\partial _\nu }{A_\mu }, \ee and ${H_{\kappa \lambda \mu }}$ is the third-rank tensor field \be \label{KSG03} {H_{\kappa \mu \nu }} = {\partial _\kappa }{B_{\mu \nu }} + {\partial _\nu }{B_{\kappa \mu }} + {\partial _\mu }{B_{\nu \kappa }} - \frac{1}{4}\left( {{A_\kappa }{F_{\mu \nu }} + {A_\nu }{F_{\kappa \mu }} + {A_\mu }{F_{\nu \kappa }}} \right). \ee Note that $B_{\nu\sigma}$ is a second-rank antisymmetric tensor gauge field. 
Sen applied a transformation to the Kerr solution of the vacuum Einstein equations to obtain the charged rotating black hole solution of the theory (\ref{KSG01}), known as the Kerr-Sen solution. In Boyer-Lindquist coordinates ($t,r,\theta,\phi$), the Kerr-Sen metric in the Einstein frame reads \cite{Siahaan:2015xna,Wu:2003qb} \bea \nonumber d{s^2} &=& - \left( {1 - \frac{{2Mr}}{{{\rho ^2}}}} \right)d{t^2} + {\rho ^2}\left( {\frac{{d{r^2}}}{{{\Delta _{KS}}}} + d{\theta ^2}} \right) - \frac{{4Mra}}{{{\rho ^2}}}{\sin ^2}\theta dtd\phi \\\label{KSG04}&\quad&+ \left( {{\rho ^2} + {a^2}{{\sin }^2}\theta + \frac{{2Mr{a^2}{{\sin }^2}\theta }}{{{\rho ^2}}}} \right){\sin ^2}\theta d{\phi ^2}, \eea where \bea \label{KSG05} \Delta _{KS} &=& {r^2} + 2(b - M)r + {a^2},\\ \label{KSG06} \rho^2 &=& {r^2} + 2br + {a^2}{\cos ^2}\theta ,\\ \label{KSG07} {r_ \pm } &=& M - b \pm \sqrt {{{\left( {M - b} \right)}^2} - {a^2}} ,\\ \label{KSG08} b &=& \frac{{{Q^2}}}{{2M}}, \eea and $r_+$ is the outer horizon and $r_-$ is the inner horizon. The nonvanishing components of the Kerr-Sen contravariant metric tensor in the Einstein frame are \bea \nonumber {g^{tt}} &=& \frac{{{\Delta _{KS}}{a^2}{{\sin }^2}\theta - {{\left( {{r^2}+2br + {a^2}} \right)}^2}}}{{{\Delta _{KS}}\rho^2 }},~{g^{rr}} = \frac{{{\Delta _{KS}}}}{\rho^2 },~\\\label{KSG09} {g^{\theta \theta }} &=& \frac{1}{\rho^2 },~{g^{\phi \phi }} = \frac{{{\Delta _{KS}} - {a^2}{{\sin }^2}\theta }}{{{\Delta _{KS}}\rho^2 {{\sin }^2}\theta }},~\\\nonumber {g^{t\phi }} &=& {g^{\phi t}} = - \frac{2Mar}{{{\Delta _{KS}}\rho^2 }}, \eea with $\sqrt{-g}=\rho^2 \sin\theta$. The metric (\ref{KSG04}) describes a black hole with mass $M$, charge $Q$, and angular momentum $J=Ma$.
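As a quick numerical sanity check of the horizon structure in Eqs. (\ref{KSG05})--(\ref{KSG08}) (a sketch only; the parameter values $M$, $Q$, $a$ are illustrative assumptions), one can verify that $r_\pm$ are the roots of $\Delta_{KS}$, so that $r_+ + r_- = 2(M-b)$ and $r_+ r_- = a^2$:

```python
import math

# Numerical sanity check of the horizon structure, Eqs. (KSG05)-(KSG08).
# The parameter values M, Q, a are illustrative assumptions.
M, Q, a = 1.0, 0.4, 0.9
b = Q**2 / (2.0 * M)                       # Eq. (KSG08)

disc = (M - b)**2 - a**2                   # horizons exist iff a <= M - b
r_p = (M - b) + math.sqrt(disc)            # outer horizon r_+
r_m = (M - b) - math.sqrt(disc)            # inner horizon r_-

def Delta_KS(r):
    """Eq. (KSG05): Delta_KS = r^2 + 2(b - M) r + a^2."""
    return r**2 + 2.0 * (b - M) * r + a**2

# Vieta's formulas for the roots of Delta_KS:
sum_ok  = abs((r_p + r_m) - 2.0 * (M - b)) < 1e-12
prod_ok = abs(r_p * r_m - a**2) < 1e-12
```

In particular, the extremal limit corresponds to $a = M - b$, where the two horizons merge.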
The solutions for nongravitational fields are \bea \label{KSG10} \tilde \Phi &=& - \frac{1}{2}\ln \frac{{{\rho ^2}}}{{{r^2} + {a^2}{{\cos }^2}\theta }},\\ \label{KSG11} {A_t} &=& - \frac{{Qr}}{{{\rho ^2}}},\\ \label{KSG12} {A_\phi } &=& \frac{{Qra{{\sin }^2}\theta }}{{{\rho ^2}}},\\ \label{KSG13} {B_{t\phi }} &=& \frac{{bra{{\sin }^2}\theta }}{{{\rho ^2}}}, \eea and the related Hawking temperature, angular velocity, and electrostatic potential at the horizon are given by \bea \label{KSG14} {T_H} &=& \frac{{{r_ + } - {r_ - }}}{{8\pi M{r_ + }}},\\ \label{KSG15} {\Omega _H} &=& \frac{a}{{2M{r_ + }}},\\ \label{KSG16} {\Phi _H} &=& \frac{Q}{{2M}}. \eea \indent Setting $b=0$, the nongravitational fields (\ref{KSG10})--(\ref{KSG13}) vanish, and therefore the Kerr-Sen metric (\ref{KSG04}) reduces to the Kerr metric. Instead, turning off the rotational parameter $a$ followed by a coordinate transformation $r\rightarrow r-Q^2/M$ transforms the Kerr-Sen solution into the GMGHS solution, which describes a static electrically charged black hole in string theory \cite{Gibbons:1987ps,Garfinkle:1990qj}. \section{Separation of Variables} \label{sec:angular-radial} Let us consider a massive charged scalar field $\Phi$, with mass $\mu$ and charge $q$, outside a Kerr-Sen black hole, obeying the Klein-Gordon wave equation \cite{Konoplya:2013rxa} \be \label{SV01} \frac{1}{{\sqrt { - g} }}{\partial _\alpha }\left( {{g^{\alpha \beta }}\sqrt { - g} {\partial _\beta }\Phi } \right) - 2iq{A_\alpha }{g^{\alpha \beta }}{\partial _\beta }\Phi - {q^2}{g^{\alpha \beta }}{A_\alpha }{A_\beta }\Phi - {\mu ^2}\Phi = 0.
\ee To solve the equation above, as usual we can use the ansatz \cite{Brill:1972xj} \be \label{SV02} \Phi = \sum\limits_{l,m} {{\Phi _{lm}}} =\sum\limits_{l,m} {e^{i\left( {m\phi - \omega t} \right)}}R_{lm}\left( r \right)S_{lm}\left( \theta \right), \ee where $\omega$ is the frequency of the wave field, $l$ is the spheroidal harmonic index, and $m$ is the azimuthal harmonic index. Substituting Eq. (\ref{SV02}) into (\ref{SV01}) yields two separated equations, namely the angular part \be \label{SV03} \frac{1}{{\sin \theta }}\frac{d}{{d\theta }}\left( {\sin \theta \frac{dS_{lm}\left( \theta \right)}{{d\theta }}} \right) + \left[ {\lambda_{lm} + {a^2}\left( {{\mu ^2} - {\omega ^2}} \right) - {a^2}{{\cos }^2}\theta \left( {{\mu ^2} - {\omega ^2}} \right) - \frac{{{m^2}}}{{{{\sin }^2}\theta }}} \right]S_{lm}\left( \theta \right) = 0, \ee and the radial part \be \label{SV04} \frac{d}{{dr}}\left( {\Delta _{KS}\frac{dR_{lm}\left( r \right)}{{dr}}} \right) + \left[ {\frac{{{G^2}}}{\Delta _{KS}} - {\mu ^2}\left( {{r^2}+ 2br + {a^2}} \right) + 2am\omega - \lambda_{lm}} \right]R_{lm}\left( r \right) = 0, \ee with \be \label{SV05} G = \omega \left( {{r^2} + 2br + {a^2}} \right) - qQr - am. \ee The coupling constant $\lambda_{lm}$ may be expanded as a power series \cite{Dolan:2007mj} \be \label{SV06} {\lambda _{lm}} + {a^2}\left( {{\mu ^2} - {\omega ^2}} \right) = l\left( {l + 1} \right) + \sum\limits_{k = 1}^\infty {{c_k}{a^{2k}}{{\left( {{\mu ^2} - {\omega ^2}} \right)}^k},} \ee where the coefficients $c_k$ are given in \cite{Abra}.\\ \indent The charged massive scalar field with frequency $\omega=\omega_c$ represents the stationary bound state of the charged massive scalar field in Kerr-Sen spacetime. Following \cite{Hod:2014baa}, we introduce the new dimensionless variables \bea \label{SV07} x &\equiv& \frac{{r - {r_ + }}}{{{r_ + }}},\\\label{SV08} \tau &\equiv& \frac{{{r_ + } - {r_ - }}}{{{r_ + }}},\\\label{SV09} k &\equiv& 2{\omega _c}{r_ + } - qQ.
\eea Therefore, the radial Teukolsky equation (\ref{SV04}) can be rewritten in the form \be \label{SV10} x\left( {x + \tau } \right)\frac{{{d^{2}R(x)}}}{{d{x^2}}} + \left( {2x + \tau } \right)\frac{dR(x)}{{dx}} + \mathcal{V}R\left( x \right) = 0, \ee where \bea \label{SV11} \mathcal{V} &=& \frac{{{G^2}}}{{{r_ + }^2x\left( {x + \tau } \right)}} - \lambda + 2am{\omega _c} - {\mu ^2}\left[ {{r_ + }^2{{\left( {x + 1} \right)}^2} + 2b{r_ + }\left( {x + 1} \right) + {a^2}} \right],\\ \label{SV12} G &=& r_ + ^2{\omega _c}{x^2} + (2b{\omega _c}+k){r_ + } x. \eea \section{Far-Region Analysis} \label{sec:far} Now we analyze the radial Teukolsky equation (\ref{SV10}) within the region $x \gg \tau $. In this far region, Eq. (\ref{SV10}) can approximately be expressed as \be \label{FR01} {x^2}\frac{{{d^{2}R(x)}}}{{d{x^2}}} + 2x\frac{dR(x)}{{dx}} + {\mathcal{V}_{far}}R\left( x \right) = 0, \ee with $\mathcal{V}\equiv{\mathcal{V}_{far}}$, \be \label{FR02} {\mathcal{V}_{far}} \equiv {\left( {\left( {2b + r_ + ^{}x} \right){\omega _c} + k} \right)^2} - \lambda + 2am{\omega _c} - {\mu ^2}\left[ {{r_ + }^2{{\left( {x + 1} \right)}^2} + 2b{r_ + }\left( {x + 1} \right) + {a^2}} \right]. \ee The solution of Eq. (\ref{FR01}) is \cite{Abra} \bea \nonumber R\left( x \right) &=& {\mathcal{C}_1} \times {\left( {2\tilde\varepsilon } \right)^{\frac{1}{2} + \tilde\beta }}{x^{ - \frac{1}{2} + \tilde\beta }}{e^{ - \tilde\varepsilon x}}M\left( {\frac{1}{2} + \tilde\beta - \tilde\kappa ,1 + 2\tilde\beta ;2\tilde\varepsilon x} \right)\\\label{FR03} &\quad& +~{\mathcal{C}_2} \times {\left( {2\tilde\varepsilon } \right)^{\frac{1}{2} - \tilde\beta }}{x^{ - \frac{1}{2} - \tilde\beta }}{e^{ - \tilde\varepsilon x}}M\left( {\frac{1}{2} - \tilde\beta - \tilde\kappa ,1 - 2\tilde\beta ;2\tilde\varepsilon x} \right), \eea with \be \label{FR04} \tilde\varepsilon \equiv {r_ + }\sqrt {{\mu ^2} - {\omega _c}^2}. \ee In Eq. 
(\ref{FR03}) above, $M(a,b;z)$ is the confluent hypergeometric (Kummer) function, and ${\mathcal{C}_1},~{\mathcal{C}_2}$ are some normalization constants. In addition, \bea \label{FR05} {\tilde\beta ^2} &\equiv& \frac{1}{4} + \lambda - 2am{\omega _c} - {\left( {2b{\omega _c} + k} \right)^2} + {\mu ^2}\left( {{r_ + }^2 + 2b{r_ + } + {a^2}} \right),\\\label{FR06} \tilde\kappa &\equiv& \frac{{{\omega _c}\left( {2b{\omega _c} + k} \right) - {\mu ^2}\left( {b + {r_ + }} \right)}}{{\sqrt {{\mu ^2} - {\omega _c}^2} }}. \eea \section{Near-Region Analysis} \label{sec:near} Next we analyze the radial Teukolsky equation (\ref{SV10}) within the region $x\ll1$. In this near region, the effective potential (\ref{SV11}) approaches \bea \label{NR01} {\mathcal{V}_{near}} &\equiv& \frac{{{{\left( {2b{\omega _c} + k} \right)}^2}x}}{{\left( {x + \tau } \right)}} - \lambda + 2am{\omega _c} - {\mu ^2}\left[ {{r_ + }^2 + 2b{r_ + } + {a^2}} \right]. \eea The solution of Eq. (\ref{SV10}) with $\mathcal{V}\equiv\mathcal{V}_{near}$ is \cite{Abra} \be \label{NR02} {R}\left( x \right) = {\left( {\frac{x}{\tau } + 1} \right)^{{-i (2b{\omega _c}+k)}}}~_2{F_1}\left( {\frac{1}{2} + \tilde\beta - i {(2b{\omega _c}+k)} ,\frac{1}{2} - \tilde\beta - i {(2b{\omega _c}+k)} ;1; - \frac{x}{\tau }} \right), \ee where $_2{F_1}\left( {a,b;c;z} \right)$ is the hypergeometric function and $\tilde\beta$ is the same as in Eq. (\ref{FR05}). \section{Matching-Region Analysis} \label{sec:matching} Consider near-extremal black holes, for which \be \label{MR00} \tau \ll 1. \ee For near-extremal Kerr-Sen black holes there is a matching region $\tau\ll x\ll1$ in which Eqs. (\ref{FR03}) and (\ref{NR02}) can be matched. For $x\ll1$, the limit of Eq.
(\ref{FR03}) is given by \cite{Hod:2014baa,Hartman:2009nz} \be \label{MR01} R \to {\mathcal{C}_1} \times {\left( {2\tilde\varepsilon } \right)^{\frac{1}{2} + \tilde\beta }}{x^{ - \frac{1}{2} + \tilde\beta }} + {\mathcal{C}_2} \times {\left( {2\tilde\varepsilon } \right)^{\frac{1}{2} - \tilde\beta }}{x^{ - \frac{1}{2} - \tilde\beta }}. \ee For $x\gg\tau$, the limit of Eq. (\ref{NR02}) is given by \cite{Hod:2014baa,Hartman:2009nz} \be \label{MR02} R \to {\mathcal{C}_3}\times{x^{ - \frac{1}{2} + \tilde\beta }} + {\mathcal{C}_4}\times{x^{ - \frac{1}{2} - \tilde\beta }}, \ee where \bea \label{MR03} {\mathcal{C}_3} &=& {\tau ^{\frac{1}{2} - \tilde\beta }}\frac{{\Gamma \left( {2\tilde\beta } \right)}}{{\Gamma \left( {\frac{1}{2} + \tilde\beta - i(2b{\omega _c}+k)} \right)\Gamma \left( {\frac{1}{2} + \tilde\beta + i {(2b{\omega _c}+k)} } \right)}},\\\label{MR04} {\mathcal{C}_4} &=& {\tau ^{\frac{1}{2} + \tilde\beta }}\frac{{\Gamma \left( {-2\tilde\beta } \right)}}{{\Gamma \left( {\frac{1}{2} - \tilde\beta - i(2b{\omega _c}+k)} \right)\Gamma \left( {\frac{1}{2} - \tilde\beta + i {(2b{\omega _c}+k)} } \right)}}. \eea Matching (\ref{MR01}) and (\ref{MR02}) in this region, we find the two normalization constants $\mathcal{C}_{1},~\mathcal{C}_{2}$ of Eq. (\ref{MR01}), \bea \label{MR05} {\mathcal{C}_1} &=& {\tau ^{\frac{1}{2} - \tilde\beta }}{\left( {2\tilde\varepsilon } \right)^{ - \left( {\frac{1}{2} + \tilde\beta } \right)}}\frac{{\Gamma \left( {2\tilde\beta } \right)}}{{\Gamma \left( {\frac{1}{2} + \tilde\beta - i(2b{\omega _c}+k)} \right)\Gamma \left( {\frac{1}{2} + \tilde\beta + i {(2b{\omega _c}+k)} } \right)}},\\\label{MR06} {\mathcal{C}_2} &=& {\tau ^{\frac{1}{2} + \tilde\beta }}{\left( {2\tilde\varepsilon } \right)^{ - \left( {\frac{1}{2} - \tilde\beta } \right)}}\frac{{\Gamma \left( {-2\tilde\beta } \right)}}{{\Gamma \left( {\frac{1}{2} - \tilde\beta - i(2b{\omega _c}+k)} \right)\Gamma \left( {\frac{1}{2} - \tilde\beta + i {(2b{\omega _c}+k)} } \right)}}. 
\eea The expansion for $x\to\infty$ of Eq. (\ref{FR03}) is given by \cite{Hod:2014baa,Hod:2012px} \[R \to \left[{\mathcal{C}_1} \times {\left( {2{\tilde\varepsilon} } \right)^{\tilde\kappa} }{x^{ - 1 + {\tilde\kappa} }}{\left( { - 1} \right)^{ - \frac{1}{2} - {\tilde\beta} + {\tilde\kappa} }}\frac{{\Gamma \left( {1 + 2{\tilde\beta} } \right)}}{{\Gamma \left( {\frac{1}{2} + {\tilde\beta} + {\tilde\kappa} } \right)}}\right.\] \[\left.~+ {\mathcal{C}_2} \times {\left( {2{\tilde\varepsilon} } \right)^{\tilde\kappa} }{x^{ - 1 + {\tilde\kappa} }}{\left( { - 1} \right)^{ - \frac{1}{2} + {\tilde\beta} + {\tilde\kappa} }}\frac{{\Gamma \left( {1 - 2{\tilde\beta} } \right)}}{{\Gamma \left( {\frac{1}{2} - {\tilde\beta} + {\tilde\kappa} } \right)}}\right]{e^{ - {\tilde\varepsilon} x}}\] \be \label{MR07} + \left[ {{\mathcal{C}_1} \times {{\left( {2{\tilde\varepsilon} } \right)}^{ - {\tilde\kappa} }}{x^{ - 1 - {\tilde\kappa} }}\frac{{\Gamma \left( {1 + 2{\tilde\beta} } \right)}}{{\Gamma \left( {\frac{1}{2} + {\tilde\beta} - {\tilde\kappa} } \right)}} + {\mathcal{C}_2} \times {{\left( {2{\tilde\varepsilon} } \right)}^{ - {\tilde\kappa} }}{x^{ - 1 - {\tilde\kappa} }}\frac{{\Gamma \left( {1 - 2{\tilde\beta} } \right)}}{{\Gamma \left( {\frac{1}{2} - {\tilde\beta} - {\tilde\kappa} } \right)}}} \right]{e^{{\tilde\varepsilon} x}}. \ee \section{Stationary Bound States of Charged Massive Scalar Fields} \label{sec:bound-state} The bound states of the charged massive scalar fields are characterized by an exponentially decaying radial solution at infinity.
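Before imposing the decay condition, one can check numerically that the far-region radial function (\ref{FR03}) indeed solves (\ref{FR01}). Using (\ref{FR04})--(\ref{FR06}), the potential (\ref{FR02}) takes the compact form $\mathcal{V}_{far}=-\tilde\varepsilon^2x^2+2\tilde\varepsilon\tilde\kappa x+\tfrac{1}{4}-\tilde\beta^2$, i.e., Eq. (\ref{FR01}) is a Whittaker equation for $xR$. The sketch below verifies this for the first branch of (\ref{FR03}); the values of $\tilde\varepsilon$, $\tilde\beta$, $\tilde\kappa$ are free illustrative choices, and the Kummer series is truncated by hand:

```python
import math

# Finite-difference check that the first branch of Eq. (FR03),
#   R(x) = x^{-1/2+beta} e^{-eps x} M(1/2+beta-kappa, 1+2beta; 2 eps x),
# solves x^2 R'' + 2x R' + V_far R = 0 with
#   V_far = -eps^2 x^2 + 2 eps kappa x + 1/4 - beta^2.
# eps, beta, kappa are free illustrative values, not derived from a black hole.
eps, beta, kappa = 0.2, 0.8, 0.3

def kummer_M(a, b, z, terms=60):
    """Truncated series of Kummer's confluent hypergeometric function M(a,b;z)."""
    s, term = 1.0, 1.0
    for k in range(terms):
        term *= (a + k) / ((b + k) * (k + 1)) * z
        s += term
    return s

def R(x):
    return x**(-0.5 + beta) * math.exp(-eps * x) * kummer_M(
        0.5 + beta - kappa, 1.0 + 2.0 * beta, 2.0 * eps * x)

def residual(x, h=1e-4):
    """x^2 R'' + 2x R' + V_far R, with derivatives by central differences."""
    d1 = (R(x + h) - R(x - h)) / (2.0 * h)
    d2 = (R(x + h) - 2.0 * R(x) + R(x - h)) / h**2
    v = -eps**2 * x**2 + 2.0 * eps * kappa * x + 0.25 - beta**2
    return x**2 * d2 + 2.0 * x * d1 + v * R(x)

res = max(abs(residual(x)) for x in (0.5, 1.0, 2.0, 4.0))
```

The residual is zero up to finite-difference error, as expected for a solution of the Whittaker-type equation.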
This shows that the coefficient of the growing exponent ${e^{ \tilde\varepsilon x}}$ in (\ref{MR07}) must vanish, \be \label{BS01} {{\mathcal{C}_1}\times {{\left( {2\tilde\varepsilon } \right)}^{ - \tilde\kappa }}{x^{ - 1 - \tilde\kappa }}\frac{{\Gamma \left( {1 + 2\tilde\beta } \right)}}{{\Gamma \left( {\frac{1}{2} + \tilde\beta - \tilde\kappa } \right)}} + {\mathcal{C}_2}\times {{\left( {2\tilde\varepsilon } \right)}^{ - \tilde\kappa }}{x^{ - 1 - \tilde\kappa }}\frac{{\Gamma \left( {1 - 2\tilde\beta } \right)}}{{\Gamma \left( {\frac{1}{2} - \tilde\beta - \tilde\kappa } \right)}}}=0. \ee Substituting Eqs. (\ref{MR05}) and (\ref{MR06}) into (\ref{BS01}), one finds \be \label{BS02} \frac{1}{{\Gamma \left( {\frac{1}{2} + \tilde\beta - \tilde\kappa } \right)}}= {\left( {2\tilde\varepsilon \tau } \right)^{2\tilde\beta }}{\left( {\frac{{\Gamma \left( { - 2\tilde\beta } \right)}}{{\Gamma \left( {2\tilde\beta } \right)}}} \right)^2}\frac{{\Gamma \left( {\frac{1}{2} + \tilde\beta - i\tilde k} \right)\Gamma \left( {\frac{1}{2} + \tilde\beta + i\tilde k} \right)}}{{\Gamma \left( {\frac{1}{2} - \tilde\beta - i\tilde k } \right)\Gamma \left( {\frac{1}{2} - \tilde\beta + i \tilde k } \right)\Gamma \left( {\frac{1}{2} - \tilde\beta - \tilde\kappa } \right)}}, \ee where $\tilde k = {2b{\omega _c} + k} $. The right-hand side of Eq. (\ref{BS02}) is of order $\mathcal{O}\left( {{{\left( {\tilde\varepsilon \tau } \right)}^{2\tilde\beta }}} \right) \ll 1$; therefore, Eq. (\ref{BS02}) can be written in the form \be \label{BS03} \frac{1}{2} + \tilde\beta - \tilde\kappa = \mathcal{O}\left( {{{\left( {\tilde\varepsilon \tau } \right)}^{2\tilde\beta }}} \right) - {n}, \ee with ${n} = 0,1,2,3, \ldots $. Equation (\ref{BS03}) can be solved in the regime \be \label{BS04} \tilde\varepsilon \ll 1. \ee Taking note of Eqs.
(\ref{SV06})--(\ref{SV09}) and (\ref{FR04})--(\ref{FR06}) within the regime (\ref{BS04}), one finds \bea \label{BS05} \tilde\beta &=& {\tilde\beta _0} + \mathcal{O}\left( {{\tilde\varepsilon ^2}} \right),\\ \label{BS06} \tilde\kappa &=& \frac{\tilde\alpha }{\tilde\varepsilon } + \mathcal{O}\left( \tilde\varepsilon \right), \eea where \bea \label{BS07} {\tilde\beta _0} &\equiv& \sqrt {{{\left( {l + \frac{1}{2}} \right)}^2} - 2ma{\omega _c} - \varpi + \chi},\\ \label{BS08} \tilde\alpha &\equiv& {r_ + }{\omega _c}\left( {{\omega _c}\left( {b + {r_ + }} \right) - qQ} \right), \eea with $\varpi={{\left( {2{\omega _c}\left( {b + {r_ + }} \right) - qQ} \right)}^2}$ and $\chi={\omega _c}^2\left( {{r_ + }^2 + 2b{r_ + } + {a^2}} \right)$. Substituting Eqs. (\ref{BS05}) and (\ref{BS06}) into (\ref{BS03}), one finds \be \label{BS09} \tilde\varepsilon = \frac{\tilde\alpha }{{\frac{1}{2} + {\tilde\beta _0} + n}}. \ee Taking note of Eqs. (\ref{BS05}), (\ref{BS06}), and (\ref{BS09}), one recognizes that $\tilde\alpha>0$ is a required condition for the existence of the stationary bound-state charged massive scalar fields. Finally, from Eq. (\ref{FR04}), the stationary bound states of the charged massive scalar fields in Kerr-Sen spacetime satisfy \be \label{BS10} \mu {r_ + } = \sqrt{{{\tilde\varepsilon} ^2} + {{\left( {{\omega _c}{r_ + }} \right)}^2}}. \ee It turns out that Eq. (\ref{BS10}) for Kerr-Sen spacetime resembles the corresponding equation for Kerr-Newman spacetime found by Hod in \cite{Hod:2014baa}. However, the two cases differ in the explicit expressions for $\tilde\varepsilon$, $\tilde\beta_0$, and $\tilde\alpha$, which depend on $r_+$\footnote{For Kerr-Newman spacetime, ${r_ + } = M + \sqrt {{M^2} - {Q^2} - {a^2}}$, and for Kerr-Sen spacetime, ${r_ + } = \left( {M - b} \right) + \sqrt {{{\left( {M - b} \right)}^2} - {a^2}} $, consistent with Eq. (\ref{KSG07}).}.
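The resonance spectrum above is straightforward to evaluate numerically. The sketch below (all black hole and field parameters are assumed sample values, chosen so that $\tilde\beta_0$ is real and $\tilde\alpha>0$) computes $\omega_c$, $\tilde\beta_0$, $\tilde\alpha$, $\tilde\varepsilon$, and the cloud mass from Eqs. (\ref{BS07})--(\ref{BS10}):

```python
import math

# Illustrative evaluation of the cloud spectrum (BS07)-(BS10).
# All numerical parameters (M, Q, a, l, m, q) are assumed sample values,
# chosen so that beta_0 is real and alpha > 0.
M, Q, a = 1.0, 0.4, 0.9
b   = Q**2 / (2.0 * M)                                 # Eq. (KSG08)
r_p = (M - b) + math.sqrt((M - b)**2 - a**2)           # outer horizon, Eq. (KSG07)

l, m, q = 1, 1, 0.5
Omega_H = a / (2.0 * M * r_p)                          # Eq. (KSG15)
Phi_H   = Q / (2.0 * M)                                # Eq. (KSG16)
w_c     = m * Omega_H + q * Phi_H                      # critical frequency, Eq. (IN02)

varpi = (2.0 * w_c * (b + r_p) - q * Q)**2
chi   = w_c**2 * (r_p**2 + 2.0 * b * r_p + a**2)
beta0 = math.sqrt((l + 0.5)**2 - 2.0 * m * a * w_c - varpi + chi)   # Eq. (BS07)

alpha = r_p * w_c * (w_c * (b + r_p) - q * Q)          # Eq. (BS08)

n   = 0                                                # fundamental overtone
eps = alpha / (0.5 + beta0 + n)                        # Eq. (BS09)
mu  = math.sqrt(eps**2 + (w_c * r_p)**2) / r_p         # cloud mass, Eq. (BS10)
```

For this sample point $\tilde\alpha>0$ and $\mu>\omega_c$, so $\tilde\varepsilon=r_+\sqrt{\mu^2-\omega_c^2}$ is real, as required for a bound state.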
\section{Charged Scalar Clouds in GMGHS Spacetime} \label{sec:GMGHS} A static spherically symmetric charged black hole in the low energy limit of heterotic string field theory in four dimensions was first found by Gibbons and Maeda in \cite{Gibbons:1987ps} and independently obtained by Garfinkle, Horowitz, and Strominger in \cite{Garfinkle:1990qj} three years later. The metric that describes this GMGHS spacetime is \cite{Li:2015bfa} \be \label{SVGMGHS01} d{s^2} = - \left( {1 - \frac{{2M}}{r}} \right)d{t^2} + {\left( {1 - \frac{{2M}}{r}} \right)^{ - 1}}d{r^2} + r\left( {r - \frac{{{Q^2}}}{M}} \right)\left( {d{\theta ^2} + {{\sin }^2}\theta \, d{\phi ^2}} \right). \ee The corresponding vector potential and dilaton field are \bea \label{SVGMGHS02} {A_t} &=& - \frac{Q}{r},\\ \label{SVGMGHS03} {e^{2\tilde \Phi }} &=& 1 - \frac{{{Q^2}}}{{Mr}}. \eea The event horizon of the GMGHS black hole is located at ${r_ + } = 2M$, while ${r_ - } = {Q^2}/M$. We start by analyzing a charged massive scalar field $\Phi$, with mass $\mu$ and charge $q$, outside a GMGHS black hole, obeying the following Klein-Gordon wave equation: \be \label{SVGMGHS04} \frac{1}{{\sqrt { - g} }}{\partial _\alpha }\left( {{g^{\alpha \beta }}\sqrt { - g} {\partial _\beta }\Phi } \right) - 2iq{A_\alpha }{g^{\alpha \beta }}{\partial _\beta }\Phi - {q^2}{g^{\alpha \beta }}{A_\alpha }{A_\beta }\Phi - {\mu ^2}\Phi = 0.
\ee To solve the equation above, as usual we use the ansatz of the scalar field \cite{Degollado:2013eqa}, \be \label{SVGMGHS05} \Phi=\sum\limits_{l,m}\Phi_{lm} = \sum\limits_{l,m}{e^{ - i\omega t}}R_l\left( r \right){Y_{lm}}\left( {\theta ,\phi } \right), \ee where $l$ is the spherical harmonic index and $m$ is the azimuthal harmonic index with $- l \le m \le l$. One then finds the radial equation \cite{Li:2013jna} \be \label{SVGMGHS06} {\Delta _{G}}\frac{d}{{dr}}\left( {{\Delta _{G}}\frac{dR_l\left( r \right)}{{dr}}} \right) + UR_l\left( r \right) = 0, \ee where \be \label{SVGMGHS07} {\Delta _G} = \left( {r - {r_ + }} \right)\left( {r - {r_ - }} \right), \ee and the potential $U$ is given by \be \label{SVGMGHS08} U = {\left( {r - \frac{{{Q^2}}}{M}} \right)^2}{\left( {\omega r - qQ} \right)^2} - {\Delta _G}\left[ {{\mu ^2}r\left( {r - \frac{{{Q^2}}}{M}} \right) + l\left( {l + 1} \right)} \right]. \ee \indent The charged massive scalar field with frequency $\omega=q\Phi_H$ represents the stationary bound state of the charged massive scalar field in GMGHS spacetime, where $\Phi_H=Q/2M$ is the electrostatic potential at the horizon \cite{Li:2015bfa}. Following \cite{Hod:2014baa,Li:2015bfa}, we introduce the new dimensionless variables \bea \label{SVGMGHS09} x &\equiv& \frac{{r - {r_ + }}}{{{r_ + }}},\\\label{SVGMGHS10} \tau &\equiv& \frac{{{r_ + } - {r_ - }}}{{{r_ + }}}, \eea in terms of which the radial equation (\ref{SVGMGHS06}) becomes \be \label{SVGMGHS11} x\left( {x + \tau } \right)\frac{{{d^{2}R\left( x \right)}}}{{d{x^2}}} + \left( {2x + \tau } \right)\frac{dR\left( x \right)}{{dx}} + \mathcal{U}R\left( x \right) = 0, \ee where \be \label{SVGMGHS12} \mathcal{U} = {q^2}{Q^2}x\left( {x + \tau } \right) - {\mu ^2}\left( {{r_ + }^2{{\left( {x + 1} \right)}^2} - 2{Q^2}\left( {x + 1} \right)} \right) - l\left( {l + 1} \right). \ee By setting $\mu=0$, i.e., for a stationary massless charged scalar field, Eq.
(\ref{SVGMGHS11}) becomes \cite{Li:2015bfa} \be \label{SVGMGHS13} x\left( {x + \tau } \right)\frac{{{d^{2}R\left( x \right)}}}{{d{x^2}}} + \left( {2x + \tau } \right)\frac{dR\left( x \right)}{{dx}} + \left[{q^2}{Q^2}x\left( {x + \tau } \right) - l\left( {l + 1} \right)\right]R\left( x \right) = 0. \ee Following Ref. \cite{Li:2015bfa}, Eq. (\ref{SVGMGHS13}) can be solved within the asymptotic highly charged regime $qQ\gg1$\footnote{In this asymptotic highly charged regime, the charge $q$ of the scalar field and the charge $Q$ of the black hole interact strongly electrically.}, and within the near-horizon region $x\ll\tau$ \cite{Hod:2010hw,Hod:2012zzb,Konoplya:2013rxa,Hod:2014tqa} the radial equation (\ref{SVGMGHS13}) can be approximated by \be \label{SVGMGHS14} x\frac{{{d^{2}R\left( x \right)}}}{{d{x^2}}} + \frac{dR\left( x \right)}{{dx}} + {q^2}{Q^2}x R\left( x \right) = 0. \ee The solution of the radial equation (\ref{SVGMGHS14}) is then given by the Bessel function of the first kind, \be \label{SVGMGHS15} R\left( x \right) = {J_0}\left( {qQx} \right). \ee \indent The radial solution (\ref{SVGMGHS15}) describes the stationary scalar field around a static charged black hole in the low energy limit of heterotic string field theory, namely, the GMGHS black hole \cite{Li:2015bfa}. This does not resemble the situation for a static charged black hole in Einstein-Maxwell theory, known as the Reissner-Nordstr\"{o}m black hole. Starting from the Kerr-Newman case ($a\ne0$) of Ref. \cite{Hod:2014baa} and setting the rotational parameter $a=0$ (a nonrotating charged Reissner-Nordstr\"{o}m black hole), one finds that Eq. (29) in Ref. \cite{Hod:2014baa} has no solution, which leads to the conclusion that the Reissner-Nordstr\"{o}m black hole cannot support the existence of stationary scalar clouds \cite{Hod:2013eea}. However, setting the rotational parameter $a=0$ for the Kerr-Sen black hole, one finds $\tilde\alpha\ne0$; therefore, there is a solution for Eq. (\ref{BS03}).
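One can verify Eq. (\ref{SVGMGHS15}) numerically: the sketch below checks, by finite differences, that $R(x)=J_0(qQx)$ satisfies the near-horizon equation (\ref{SVGMGHS14}). The value of $qQ$ is an illustrative assumption, and $J_0$ is evaluated through its standard integral representation so that the sketch is self-contained:

```python
import math

# Finite-difference check that R(x) = J_0(qQ x), Eq. (SVGMGHS15), solves the
# near-horizon radial equation (SVGMGHS14): x R'' + R' + (qQ)^2 x R = 0.
# qQ is an illustrative value; J_0 is computed from its integral
# representation J_0(u) = (1/pi) * int_0^pi cos(u sin t) dt.
qQ = 10.0

def J0(u, steps=2000):
    """Bessel function of the first kind of order zero (trapezoidal quadrature)."""
    h = math.pi / steps
    s = 0.5 * (math.cos(0.0) + math.cos(u * math.sin(math.pi)))
    for k in range(1, steps):
        s += math.cos(u * math.sin(k * h))
    return s * h / math.pi

def R(x):
    return J0(qQ * x)

def residual(x, h=1e-4):
    """x R'' + R' + (qQ)^2 x R, with derivatives by central differences."""
    d1 = (R(x + h) - R(x - h)) / (2.0 * h)
    d2 = (R(x + h) - 2.0 * R(x) + R(x - h)) / h**2
    return x * d2 + d1 + qQ**2 * x * R(x)

res = max(abs(residual(x)) for x in (0.05, 0.15, 0.3))
```

The residual vanishes up to finite-difference error, confirming the Bessel-function form of the near-horizon cloud.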
This is consistent with the work of Li et al. in \cite{Li:2015bfa}, which concludes that a stationary scalar field can exist around charged black holes in the low energy limit of heterotic string field theory. \section{Summary and Discussion} \label{sec:summary} In summary, we have studied stationary massive charged scalar clouds in the Kerr-Sen black hole spacetime. In \cite{Hod:2014baa}, Hod showed that the Kerr-Newman black hole can support stationary massive charged scalar clouds by analytically solving the Klein-Gordon equation for a stationary charged massive scalar field. The authors of Ref. \cite{Huang:2016qnk} also numerically investigated stationary massive charged scalar clouds in the Kerr-Newman black hole spacetime and found that, for fixed black hole parameters, the mass $\mu$ and charge $q$ of the scalar clouds are confined to a finite region of the parameter space of the scalar fields. The Kerr-Newman scalar clouds were recently promoted to a fully nonlinear solution (Kerr-Newman black holes with scalar hair) in Ref. \cite{Delgado:2016jxq}. Since the Kerr-Newman and Kerr-Sen black holes share several similarities in their physical properties, this motivated us to perform such an analysis for the Kerr-Sen black hole; of course, the existence of clouds in the Kerr-Sen case shows that a new family of fully nonlinear solutions will also exist there. In this paper, we have shown that the Kerr-Sen black hole can also support stationary massive charged scalar clouds.\\ \indent Stationary charged scalar clouds can form, at the linear level, in the background of a static electrically charged black hole solution in the low energy limit of heterotic string field theory, namely, the GMGHS black hole: the gravitational attraction is exactly balanced by the electromagnetic repulsion. The existence of this stationary scalar field was demonstrated numerically and analytically in the important work by Li, Zhao, Wu, and Zhang \cite{Li:2015bfa}.
This does not resemble the situation in Einstein-Maxwell theory, where the Reissner-Nordstr\"om black holes cannot support the existence of stationary scalar clouds because the gravitational attraction and electromagnetic repulsion cannot reach equilibrium \cite{Hod:2013eea,Degollado:2013eqa,Li:2015bfa}. Lastly, we note that it is worth performing a numerical analysis of the linear stationary charged scalar clouds for the rotating and charged black hole in the low energy limit of heterotic string field theory. Intuitively, we expect the same result for the stationary scalar clouds surrounding a Kerr-Sen black hole. Related numerical works can be found in Refs. \cite{Hwang:2011mn,Hansen:2013vha}, although the situation there is not exactly the same; some of those results may approach the stationary solutions obtained in this paper. In those cases, scalar or charge clouds appear to form because of the AdS background. In Refs. \cite{Hansen:2014rua,Hansen:2015dxa,Nakonieczna:2015umf,Nakonieczna:2016iof}, various string-inspired models allow for various kinds of scalar hair (a normal scalar as well as a Brans-Dicke scalar); in that case, the hair appears due to a special coupling with the electric charge. \section*{Acknowledgments} I am very thankful to my supervisor Haryanto M.~Siahaan for all his support and the knowledge he shared with me. I also thank Professor Carlos A. R. Herdeiro and the anonymous referee for reading this manuscript and for their useful comments and suggestions.
\section{Introduction} \label{sec:intro} The discovery of the $125$~GeV Higgs boson at the LHC~\cite{Aad:2012tfa, Chatrchyan:2012xdj} exemplifies the success of the Standard Model (SM). Given the current experimental precision, physics beyond the SM (BSM) is not yet excluded; however, in view of the null results of the direct searches, any BSM physics that is discovered is likely to be exotic in nature. It is therefore important to determine to what extent different BSM scenarios are still viable. Inspired by supersymmetric models, the Two-Higgs-Doublet Model (2HDM), which adds a second Higgs doublet, is one of the simplest and most commonly studied extensions of the SM. Manohar and Wise (MW), starting from the principle of Minimal Flavor Violation (MFV), proposed an alternative model~\cite{Manohar:2006ga}. It follows from MFV that the scalar sector can have only two color representations, singlet and octet. They therefore constructed an extension of the SM by adding a color-octet electroweak doublet scalar. The phenomenology of the model has been studied in detail~\cite{Gresham:2007ri, Gerbush:2007fe, Burgess:2009wm, He:2011ti, Dobrescu:2011aa, Bai:2011aa, Arnold:2011ra, Kribs:2012kz, Reece:2012gi, Cao:2013wqa, He:2013tla, Cheng:2015lsa, Martinez:2016fyd, Hayreter:2017wra}, including aspects such as the production of scalars, lower limits on the scalar masses and possible constraints on the parameter space. Combining the above motivations, Refs.~\cite{Cheng:2016tlc, Cheng:2017tbn} recently proposed a new model containing aspects of both the MW model and the 2HDM. In particular, the scalar sector of the model in consideration consists of two color-singlet electroweak doublets, $\Phi_{1, 2}$, and one color-octet electroweak doublet, $S$. The MW model is the limiting case with $\Phi_2 \to 0$, whereas the 2HDM is recovered in the limit $S \to 0$. Due to the existence of these two limiting cases we will refer to this model as the \textit{2HDMW}.
The inclusive character of the 2HDMW makes it capable of accommodating new physics while remaining compatible with established experimental observations. For example, it is a viable model for LHC physics in terms of $h$-signal strengths, since these are not necessarily affected at tree level. It has also been suggested that the 2HDMW can emerge naturally from GUT theories \cite{Georgi:1979df,Dorsner:2006dj,Perez:2016qbo}; the 2HDMW is therefore one of the possible low-scale realizations after the breaking of a more general symmetry. In the meantime, $CP$-violating phases are introduced in the scalar sector in its most general formulation. Ref.~\cite{Cheng:2016tlc} investigated tree-level constraints on the 2HDMW arising from symmetries and perturbative unitarity. A study of LHC phenomenology was also performed, finding that the color-octet scalar added to the 2HDM could produce large corrections to the one-loop couplings of the Higgs boson to two gluons or photons. Ref.~\cite{Cheng:2017tbn} derived the one-loop beta functions for the scalar couplings in the 2HDMW, and the evolution of the renormalization group equations (RGEs) was then used to place upper limits on the parameters of the model. Similar analyses were performed in studies of the SM~\cite{Callaway:1988ya, Sher:1988mj}, the MW model~\cite{He:2013tla}, and the 2HDM~\cite{Chakrabarty:2014aya, Chowdhury:2015yja, Ferreira:2015rha}. The parameter space was further constrained in Ref.~\cite{Cheng:2017tbn} by requiring no Landau poles (LPs) below a certain high energy scale $\Lambda$, a stable scalar potential, and perturbative unitarity at all scales below $\Lambda$. The perturbative unitarity constraints imposed on the model in Refs.~\cite{Cheng:2016tlc, Cheng:2017tbn} are leading order (LO), and a considerable region of the parameter space survives. Although instructive, the preceding studies of the constraints on the 2HDMW are not yet comprehensive.
It is reasonable to expect that including corrections at higher orders can noticeably modify the surviving parameter space. However, the behavior of higher-order corrections is usually complicated, and there is no simple answer as to whether their impact tightens or relaxes the viable ranges of the couplings. In this paper, we utilize the generic tools provided by Refs.~\cite{Grinstein:2015rtl, Cacchio:2016qyh, Murphy:2017ojk} to explore these perturbative unitarity bounds at next-to-leading order (NLO) and, for the first time, impose them on a color-octet scalar. On the other hand, positivity conditions are only known for the 2HDM. With the additional color octet coming into play, one should reconsider the scalar potential as a whole and ensure the existence of a global minimum. Solving this problem completely is extremely challenging. This paper is also the first work to extend the set of positivity conditions to both the MW and the 2HDMW models. More generally, this work focuses on theoretical constraints on the 2HDMW. An investigation of experimental bounds on the model is left for future work. The rest of the paper is organized as follows: The 2HDMW model is defined in Section~\ref{sec:model}. The theoretical constraints are explained in Sec.~\ref{sec:constraints}. Following that, our results for the surviving parameter space are presented in Sec.~\ref{sec:results}. Concluding remarks are given in Sec.~\ref{sec:conclusions}. \section{The model} \label{sec:model} As stated above, the scalar sector of the model consists of two color-singlet electroweak doublets $\Phi_{1, 2}$, and one color-octet electroweak doublet $S$.
The most general renormalizable potential of the scalar sector is \cite{Cheng:2016tlc, Branco:2011iw}: \begin{align} V_{\text{\tiny{gen}}} & = m_{11}^2\Phi_1^\dagger\Phi_1^{\phantom{\dagger}} + m_{22}^2\Phi_2^\dagger\Phi_2^{\phantom{\dagger}} - m_{12}^2 \left( \Phi_1^\dagger\Phi_2^{\phantom{\dagger}} +\Phi_2^\dagger\Phi_1^{\phantom{\dagger}}\right) + \tfrac12 \lambda_1\left(\Phi_1^\dagger\Phi_1^{\phantom{\dagger}}\right)^2 + \tfrac12 \lambda_2\left(\Phi_2^\dagger\Phi_2^{\phantom{\dagger}}\right)^2 \nonumber \\ &\phantom{{}={}} + \lambda_3 \left(\Phi_1^\dagger\Phi_1^{\phantom{\dagger}}\right) \left(\Phi_2^\dagger\Phi_2^{\phantom{\dagger}}\right) + \lambda_4 \left(\Phi_1^\dagger\Phi_2^{\phantom{\dagger}}\right) \left(\Phi_2^\dagger\Phi_1^{\phantom{\dagger}}\right) + \tfrac12 \left[ \lambda_5 \left(\Phi_1^\dagger\Phi_2^{\phantom{\dagger}}\right)^2 + {\rm h.c.} \right] \nonumber \\ &\phantom{{}={}} + \left[ \lambda_6 \left( \Phi_1^\dagger\Phi_1^{\phantom{\dagger}} \right) \left( \Phi_1^\dagger\Phi_2^{\phantom{\dagger}} \right) +\lambda_7 \left( \Phi_2^\dagger\Phi_2^{\phantom{\dagger}} \right) \left( \Phi_1^\dagger\Phi_2^{\phantom{\dagger}} \right) + {\rm h.c.} \right] \nonumber \\ &\phantom{{}={}} + 2 m_S^2 {\rm Tr}\left(S^{\dagger i} S^{\phantom{\dagger}}_i\right) + \mu_1 {\rm Tr}\left(S^{\dagger i} S^{\phantom{\dagger}}_i S^{\dagger j} S^{\phantom{\dagger}}_j\right) + \mu_2 {\rm Tr}\left(S^{\dagger i} S^{\phantom{\dagger}}_j S^{\dagger j} S^{\phantom{\dagger}}_i\right) + \mu_3 {\rm Tr}\left(S^{\dagger i} S^{\phantom{\dagger}}_i\right) \left(S^{\dagger j} S^{\phantom{\dagger}}_j\right) \nonumber \\ &\phantom{{}={}} + \mu_4 {\rm Tr}\left(S^{\dagger i} S^{\phantom{\dagger}}_j\right) \left(S^{\dagger j} S^{\phantom{\dagger}}_i\right) + \mu_5 {\rm Tr}\left(S^{\phantom{\dagger}}_i S^{\phantom{\dagger}}_j\right) \left(S^{\dagger i} S^{\dagger j}\right) + \mu_6 {\rm Tr}\left(S^{\phantom{\dagger}}_i S^{\phantom{\dagger}}_j S^{\dagger j} S^{\dagger i}\right) \nonumber \\ 
&\phantom{{}={}} + \nu_1 \Phi_1^{\dagger i}\Phi_{1i}^{\phantom{\dagger}} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_j\right) + \nu_2 \Phi_1^{\dagger i}\Phi_{1j}^{\phantom{\dagger}} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_i\right) \nonumber \\ &\phantom{{}={}} +\left[ \nu_3 \Phi_1^{\dagger i}\Phi_1^{\dagger j} {\rm Tr}\left(S^{\phantom{\dagger}}_i S^{\phantom{\dagger}}_j\right) +\nu_4 \Phi_1^{\dagger i} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_j S^{\phantom{\dagger}}_i\right) +\nu_5 \Phi_1^{\dagger i} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_i S^{\phantom{\dagger}}_j\right) + {\rm h.c.}\right] \nonumber \\ &\phantom{{}={}} + \omega_1 \Phi_2^{\dagger i}\Phi_{2i}^{\phantom{\dagger}} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_j\right) + \omega_2 \Phi_2^{\dagger i}\Phi_{2j}^{\phantom{\dagger}} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_i\right) \nonumber \\ &\phantom{{}={}} +\left[ \omega_3 \Phi_2^{\dagger i}\Phi_2^{\dagger j} {\rm Tr}\left(S^{\phantom{\dagger}}_i S^{\phantom{\dagger}}_j\right) +\omega_4 \Phi_2^{\dagger i} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_j S^{\phantom{\dagger}}_i\right) +\omega_5 \Phi_2^{\dagger i} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_i S^{\phantom{\dagger}}_j\right) +{\rm h.c.}\right] \nonumber \\ &\phantom{{}={}} +\left[ \kappa_1 \Phi_1^{\dagger i} \Phi_{2i}^{\phantom{\dagger}} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_j\right) +\kappa_2 \Phi_1^{\dagger i} \Phi_{2j}^{\phantom{\dagger}} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_i\right) +\kappa_3 \Phi_1^{\dagger i} \Phi_2^{\dagger j} {\rm Tr}\left(S^{\phantom{\dagger}}_j S^{\phantom{\dagger}}_i\right) +{\rm h.c.}\right] \label{eq:genpot} \end{align} All interactions among $S$, $\Phi_1$, and $\Phi_2$, as well as the self-interactions, are included. In Eq.~\eqref{eq:genpot}, we use $i,j$ as $SU(2)$ indices and the notation $S_i = S_i^A T^A$, where $A$ is a color index. The trace is taken over the color indices.
The physical parameters of this model are the masses of the $\Phi_1$ and $\Phi_2$ fields, which we denote, as in the 2HDM, by $m_h$, $m_H$ and $m_A$ for the neutral bosons and by $m_{H^\pm}$ for the charged Higgs particles, as well as the octet masses $m_R$, $m_I$ and $m_{S^\pm}$ for the neutral scalar, the neutral pseudoscalar and the charged octet scalar of the MW model. Moreover, following the convention in the literature, we call the two angles diagonalizing the mass matrices of the 2HDM sector $\alpha$ and $\beta$. We apply the following conditions to reduce the number of parameters in the scalar potential and the Yukawa Lagrangian, defined below in Eq.~\eqref{eq:pot} and Eq.~\eqref{eq:yukawa}, respectively. \begin{itemize} \item We restrict the 2HDM sector to be $CP$-conserving. \item Custodial symmetry~\cite{Sikivie:1980hm, Pomarol:1993mu, Grzadkowski:2010dj}: We adopt the less restrictive implementation discussed in~\cite{Cheng:2016tlc}. The mass degeneracies $m_{H^\pm} = m_A$ and $m_{S^\pm} = m_I$ result from custodial symmetry. \item We impose a $\mathbb{Z}_2$ symmetry, which is only softly broken by quadratic terms. This prevents tree-level flavor changing neutral currents (FCNCs), and further reduces the number of free parameters. The charge assignments we consider are given in Table~\ref{tab:types}. Note that the original MW paper was motivated by the principle of minimal flavor violation~\cite{Chivukula:1987py, DAmbrosio:2002vsn}. This is in contrast with our approach of imposing a $\mathbb{Z}_2$ symmetry, which is motivated by the practicality of reducing the number of parameters in the scalar potential while still maintaining some ability to generate flavor effects.
\end{itemize} \begin{table} \centering \begin{tabular}{| c || c | c | c | c | c | c |} \hline & $\Phi_1$ & $\Phi_2$ & $S$ & $U_R$ & $D_R$ & $Q_L$ \\ \hline Type I & $-$ & $+$ & $-$ & $-$ & $-$ & $+$ \\ Type IIu & $-$ & $+$ & $-$ & $-$ & $+$ & $+$ \\ Type IId & $-$ & $+$ & $+$ & $-$ & $+$ & $+$ \\ \hline \end{tabular} \caption{$\mathbb{Z}_2$ charge assignments in the 2HDMW that forbid tree level FCNCs. In type IIu (IId) the color-octet scalar $S$ only interacts with up-type (down-type) quarks.} \label{tab:types} \end{table} The scalar potential of the model with the aforementioned constraints imposed reads \begin{align} V_{\text{\tiny{fit}}} & = m_{11}^2\Phi_1^\dagger\Phi_1^{\phantom{\dagger}} + m_{22}^2\Phi_2^\dagger\Phi_2^{\phantom{\dagger}} - m_{12}^2 \left( \Phi_1^\dagger\Phi_2^{\phantom{\dagger}} +\Phi_2^\dagger\Phi_1^{\phantom{\dagger}}\right) + \tfrac12 \lambda_1\left(\Phi_1^\dagger\Phi_1^{\phantom{\dagger}}\right)^2 + \tfrac12 \lambda_2\left(\Phi_2^\dagger\Phi_2^{\phantom{\dagger}}\right)^2 \nonumber \\ &\phantom{{}={}} + \lambda_3 \left(\Phi_1^\dagger\Phi_1^{\phantom{\dagger}}\right) \left(\Phi_2^\dagger\Phi_2^{\phantom{\dagger}}\right) + \tfrac12 \lambda_4 \left[ \left(\Phi_1^\dagger\Phi_2^{\phantom{\dagger}}\right) + \left(\Phi_2^\dagger\Phi_1^{\phantom{\dagger}}\right) \right]^2 \nonumber \\ &\phantom{{}={}} + 2 m_S^2 {\rm Tr}\left(S^{\dagger i} S^{\phantom{\dagger}}_i\right) + \mu_1 \left[ {\rm Tr}\left(S^{\dagger i} S^{\phantom{\dagger}}_i S^{\dagger j} S^{\phantom{\dagger}}_j\right) +{\rm Tr}\left(S^{\dagger i} S^{\phantom{\dagger}}_j S^{\dagger j} S^{\phantom{\dagger}}_i\right) +2 {\rm Tr}\left(S^{\phantom{\dagger}}_i S^{\phantom{\dagger}}_j S^{\dagger j} S^{\dagger i}\right) \right] \nonumber \\ &\phantom{{}={}} + \mu_3 {\rm Tr}\left(S^{\dagger i} S^{\phantom{\dagger}}_i\right) \left(S^{\dagger j} S^{\phantom{\dagger}}_j\right) + \mu_4 \left[ {\rm Tr}\left(S^{\dagger i} S^{\phantom{\dagger}}_j\right) \left(S^{\dagger j} 
S^{\phantom{\dagger}}_i\right) + {\rm Tr}\left(S^{\phantom{\dagger}}_i S^{\phantom{\dagger}}_j\right) \left(S^{\dagger i} S^{\dagger j}\right) \right] \nonumber \\ &\phantom{{}={}} + \nu_1 \Phi_1^{\dagger i}\Phi_{1i}^{\phantom{\dagger}} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_j\right) +\tfrac12 \nu_2 \left[ \Phi_1^{\dagger i}\Phi_{1j}^{\phantom{\dagger}} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_i\right) +\Phi_1^{\dagger i}\Phi_1^{\dagger j} {\rm Tr}\left(S^{\phantom{\dagger}}_i S^{\phantom{\dagger}}_j\right) + {\rm h.c.}\right] \nonumber \\ &\phantom{{}={}} +\nu_4 \left[ \Phi_1^{\dagger i} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_j S^{\phantom{\dagger}}_i\right) +\Phi_1^{\dagger i} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_i S^{\phantom{\dagger}}_j\right) + {\rm h.c.}\right] \nonumber \\ &\phantom{{}={}} + \omega_1 \Phi_2^{\dagger i}\Phi_{2i}^{\phantom{\dagger}} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_j\right) +\tfrac12 \omega_2 \left[ \Phi_2^{\dagger i}\Phi_{2j}^{\phantom{\dagger}} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_i\right) +\Phi_2^{\dagger i}\Phi_2^{\dagger j} {\rm Tr}\left(S^{\phantom{\dagger}}_i S^{\phantom{\dagger}}_j\right) + {\rm h.c.}\right] \nonumber \\ &\phantom{{}={}} +\omega_4 \left[ \Phi_2^{\dagger i} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_j S^{\phantom{\dagger}}_i\right) +\Phi_2^{\dagger i} {\rm Tr}\left(S^{\dagger j} S^{\phantom{\dagger}}_i S^{\phantom{\dagger}}_j\right) + {\rm h.c.}\right] , \label{eq:pot} \end{align} where $\omega_4 = 0$ in the Type I and the Type IIu 2HDMW and $\nu_4 = 0$ in the Type IId 2HDMW, leaving us with four dimensionful and twelve dimensionless parameters. The masses of the scalars and their mixing angles are obtained by diagonalizing the mass matrices of this model; the expressions for these physical parameters were presented in Eqs.~(6) and (7) of Ref.~\cite{Cheng:2016tlc}.
For an overview of all assumptions, a comparison with the limiting cases of the 2HDM and the MW model, and an account of the free parameters, we refer to Table~\ref{tab:modeloverview}. The general Yukawa Lagrangian of the 2HDMW in the flavor eigenstate basis is given by \begin{eqnarray} L_{Y} &= \left(- \eta_1^D {\left( Y_D \right)^a}_b {\bar D}_{R, a }\Phi_1^\dag Q_L^b - \eta_2^D {\left( Y_D \right)^a}_b {\bar D}_{R, a } \Phi_2^\dag Q_L^b - \eta_1^U {\left( Y_U \right)^a}_b {\bar U}_{R, a}{\tilde \Phi}_1^\dag Q_L^b \right. \nonumber\\ &-\left. \eta_S^D {\left( Y_D \right)^a}_b {\bar D}_{R, a}S^\dag Q_L^b - \eta_S^U {\left( Y_U \right)^a}_b {\bar U}_{R, a }{\tilde S}^\dag Q_L^b \right) + {\rm h.c.} , \label{eq:yukawa} \end{eqnarray} where the $\eta_i$ are complex constants.\footnote{Note that contrary to the 2HDM convention, up-type quarks do not couple to $\Phi_2$ in our notation.} In the Type I 2HDMW we have $\eta_2^D \equiv 0$ and in Type IIu (IId) $\eta_1^D\equiv 0$ and $\eta_S^D\equiv 0$ ($\eta_S^U\equiv 0$). We use the convention ${\tilde H}_i = \varepsilon_{ij} H_j^*$, where $H=\Phi_{1,2},S$, and $a,b$ are flavor indices. \begin{table} {\centering \begin{tabular}{l|cc|cc|cc} & MW & dof. & 2HDM & dof.
& 2HDMW & dof.\\ \hline General & -- & (16) & -- & (13) & -- & (42)\\ \hline $CP$ conservation & ${\rm Im}[\nu_{i+2}]=0$ & (14) & ${\rm Im}[m_{12}^2]=0$, & (10) & ${\rm Im}[m_{12}^2]={\rm Im}[\lambda_{i+4}]=0$, & (30)\\ & & & ${\rm Im}[\lambda_{i+4}]=0$ & & ${\rm Im}[\nu_{i+2}]={\rm Im}[\omega_{i+2}]={\rm Im}[\kappa_i]=0$ &\\ \hline Custodial & $\mu_1=\mu_2=\frac12 \mu_6$, & (9) & ${\rm Im}[m_{12}^2]=0$, & (9) & ${\rm Im}[m_{12}^2]={\rm Im}[\lambda_{i+4}]=0$, & (24)\\ symmetry & $\mu_4=\mu_5$, & & ${\rm Im}[\lambda_{i+4}]=0$, & & $\lambda_4=\lambda_5$, &\\ case 1 & $\nu_2=2\nu_3$, & & $\lambda_4=\lambda_5$ & & $\mu_1=\mu_2=\frac12 \mu_6$, $\mu_4=\mu_5$, &\\ of Ref.\cite{Cheng:2016tlc} & $\nu_4=\nu_5^*$ & & & & $\nu_2=2\nu_3$, $\nu_4=\nu_5^*$, &\\ & & & & & $\omega_2=2\omega_3$, $\omega_4=\omega_5^*$, &\\ & & & & & $\kappa_2=\kappa_3$ &\\ \hline $\mathbb{Z}_2$ symmetry & -- & (16) & $\lambda_6=\lambda_7=0$ & (9) & $\omega_4=\omega_5=\lambda_6=\lambda_7=0$, & (28)\\ I/IIu &&&&& $\kappa_i=0$ &\\ \hline $\mathbb{Z}_2$ symmetry & $\nu_4=\nu_5=0$ & (12) & $\lambda_6=\lambda_7=0$ & (9) & $\nu_4=\nu_5=\lambda_6=\lambda_7=0$, & (28)\\ IId &&&&& $\kappa_i=0$ &\\ \hline Everything I/IIu & & (9) & & (7) & & (16)\\ \hline Everything IId & & (8) & & (7) & & (16)\\ \hline \end{tabular} } \caption{Overview of the different model assumptions, their implementation, and the number of free parameters (``dof.'') in the corresponding scalar potentials. The index $i$ runs from $1$ to $3$. The last two lines are combinations of all assumptions and thus represent the $CP$-conserving, custodially and $\mathbb{Z}_2$ symmetric models used for our fits.} \label{tab:modeloverview} \end{table} \section{Theory constraints} \label{sec:constraints} \subsection{Priors} For our analysis we make use of the open-source package \texttt{HEPfit}\xspace \cite{hepfit}, which is linked to the Bayesian Analysis Toolkit \cite{Caldwell:2008fw}.
Even though we will not apply experimental constraints and thus do not strictly rely on a fitting tool, we chose this set-up for the following reasons: BAT can also deal with flat likelihood distributions, and \texttt{HEPfit}\xspace is optimized for a fast evaluation of the constraints. The sampling covers the whole parameter space, so we cannot miss relevant regions; this is not guaranteed with a random scattering approach. Furthermore, the presented \texttt{HEPfit}\xspace implementation of the 2HDMW, as well as of the MW and 2HDM limiting cases, is publicly available \cite{hepfit} and can be used in future \texttt{HEPfit}\xspace studies of these models including experimental data. For more information about \texttt{HEPfit}\xspace see Refs.~\cite{deBlas:2016ojx, Cacchio:2016qyh}.\\ In our Bayesian fits we use flat priors for the 2HDMW parameters with the following ranges: \begin{align*} &-50<\lambda_i,\mu_i,\nu_i,\omega_i<50\\ &-2<\log_{10}(\tan\beta)<2\\ &0 \;\text{GeV}^2<m_S^2<(1000 \;\text{GeV})^2 \end{align*} We fix $\beta-\alpha$ to $\pi/2$ in order to align the light Higgs $h$ with the SM Higgs and reproduce its signal strength values at tree level, and we set $m_{12}^2=0$ because its value is not relevant here. Note that we do not require $m_h=125$ GeV in the 2HDMW; this constraint can always be fulfilled by adjusting $m_{12}^2$. Only in the MW limiting case ($\Phi_2=0$) do we impose that the SM-like Higgs has a mass of $125.18\pm0.16$ GeV \cite{Aad:2015zhl,Sirunyan:2017exp}, which results in an almost fixed $\lambda_1$, as in the SM. \subsection{Unitarity} \label{sec:unitarity} The unitarity of the $S$-matrix can be used to place constraints on the parameters of a theory~\cite{Lee:1977eg} (see also~\cite{He:2013tla, Kanemura:1993hm, Horejsi:2005da, Ginzburg:2005dt, Grinstein:2015rtl, Goodsell:2018tti, Goodsell:2018fex}).
If a certain combination of parameters becomes too large, an amplitude will appear to be non-unitary at a given order in perturbation theory. We will refer to these constraints as perturbative unitarity bounds, or just unitarity bounds for short, even though the more accurate statement is that perturbation theory is breaking down. Considering only two-to-two scattering, these constraints take the following forms at various orders in perturbation theory \begin{align} \label{eq:Ubounds} &\text{LO:} \quad \left(a_j^{(0)}\right)^2 \leq \frac{1}{4}, \\ &\text{NLO:} \quad 0 \leq \left(a_j^{(0)}\right)^2 + 2\, a_j^{(0)} \,\text{Re}\left(a_j^{(1)}\right) \leq \frac{1}{4} , \nonumber \\ &\text{NLO+:} \quad \left[a_j^{(0)} + \text{Re}\left(a_j^{(1)}\right)\right]^2 \leq \frac{1}{4}, \nonumber \end{align} where $a_j^{(\ell)}$ is the contribution at the $\ell$th order in perturbation theory to the $j$th partial wave amplitude. The NLO+ inequality includes the square of the NLO correction, and thus contains some, but not all, of the NNLO contributions to the partial-wave amplitude. When considering the scattering of scalars at high energy, only the $j = 0$ partial wave amplitude is important. The matrix of partial wave amplitudes is given by \begin{equation} \left(\mathbf{a_0}\right)_{i,f} = \frac{1}{16 \pi s} \int_{-s}^0 \! dt \, \mathcal{M}_{i \to f}(s, t) , \end{equation} and we use $a_0$ to indicate the eigenvalues of $\mathbf{a_0}$. The two-to-two scattering matrix at tree level in the neutral, color-singlet channel of the 2HDMW was recently derived in Refs.~\cite{Cheng:2016tlc, Cheng:2017tbn}. As the scalar potential, Eq.~\eqref{eq:genpot}, contains only quartic interaction terms, the NLO unitarity bounds can be computed approximately using the algorithm of Ref.~\cite{Murphy:2017ojk}.
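The three criteria of Eq.~\eqref{eq:Ubounds} can be sketched as simple inequality checks on a single eigenvalue. In the following Python snippet the tree-level value $a_0^{(0)}$ is invented, and for the NLO piece we keep only the model-independent term proportional to $\left(a_0^{(0)}\right)^2$, dropping the beta-function contribution.

```python
import math

# Hedged sketch of the LO, NLO and NLO+ unitarity criteria for one partial-wave
# eigenvalue. a0 is an illustrative tree-level value; a1 keeps only the
# model-independent (i - 1/pi) a0^2 piece of the NLO correction.

def lo_bound(a0):
    return a0**2 <= 0.25

def nlo_bound(a0, a1):
    val = a0**2 + 2*a0*a1.real
    return 0.0 <= val <= 0.25

def nlo_plus_bound(a0, a1):
    return (a0 + a1.real)**2 <= 0.25

a0 = 0.45                      # illustrative tree-level eigenvalue
a1 = (1j - 1/math.pi)*a0**2    # unitarity part of the NLO correction

assert lo_bound(a0) and nlo_bound(a0, a1) and nlo_plus_bound(a0, a1)
assert not lo_bound(0.6)       # 0.36 > 1/4 already fails at LO
```

For this sample point the negative real part of the NLO term loosens the NLO and NLO+ inequalities relative to LO, illustrating why the one-loop criteria can admit larger couplings.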
A virtue of this approach is its simplicity: it relies only on knowledge of the LO partial wave matrix and of the one-loop scalar contributions to the beta functions of the theory. This algorithm builds on previous work in Refs.~\cite{Grinstein:2015rtl, Cacchio:2016qyh}, and results for the special case of the 2HDM can be found in those references. The NLO contribution to an eigenvalue is given by a sum of two terms \begin{equation} a_0^{(1)} = a_{0, \sigma}^{(1)} + a_{0, \beta}^{(1)} . \end{equation} The first term follows from the unitarity of the theory, and is proportional to the square of the LO eigenvalue \begin{equation} a_{0, \sigma}^{(1)} = \left(i - \frac{1}{\pi}\right) \left(a_0^{(0)}\right)^2 . \end{equation} The second term depends on the one-loop beta functions of the theory, and can be written using the well-known formula for the first-order perturbation of the eigenvalues of an eigensystem for which the exact LO solution is known, \begin{equation} a_{0, \beta}^{(1)} = \vec{x}_{(0)}^{\top} \cdot \mathbf{a}_{0, \beta}^{(1)} \cdot \vec{x}_{(0)} , \end{equation} where $\vec{x}_{(0)}$ are the LO eigenvectors and \begin{equation} \mathbf{a}_{0, \beta}^{(1)} = - \frac{3}{2} \left.\mathbf{a}_{0}^{(0)}\right|_{\lambda_m \to \beta_{\lambda_m}} , \end{equation} with $\beta_{\lambda_m}$ being the beta function associated with the coupling $\lambda_m$. The approximation is derived in the limit where the center-of-mass energy is much greater than the other scales in the problem.\footnote{Recently, finite $m^2/s$ corrections have been studied for colorless scalar SM extensions \cite{Goodsell:2018tti,Goodsell:2018fex}.} As such, we only start enforcing the unitarity bounds for RGE scales above $ 750$~GeV$\:\approx\!\sqrt{10}\:v$, and do not impose unitarity bounds when running from the EW scale to 750~GeV.
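The eigenvalue-perturbation step above can be illustrated with a toy $2\times2$ example. The matrices below are invented stand-ins for $\mathbf{a}_{0}^{(0)}$ and $\mathbf{a}_{0,\beta}^{(1)}$, not an actual 2HDMW partial-wave matrix; the point is only that $\vec{x}_{(0)}^{\top}\cdot\mathbf{a}_{0,\beta}^{(1)}\cdot\vec{x}_{(0)}$ reproduces the shift of the exact eigenvalue to first order.

```python
import math

# Hedged toy illustration of first-order eigenvalue perturbation: the shift of an
# LO eigenvalue is x0^T . A1 . x0, with x0 the normalized LO eigenvector.
# M0 and A1 are invented 2x2 inputs, not a 2HDMW partial-wave matrix.

M0 = [[2.0, 1.0],
      [1.0, 2.0]]    # toy LO matrix: largest eigenvalue 3, eigenvector (1,1)/sqrt(2)
A1 = [[0.1, 0.0],
      [0.0, 0.3]]    # toy perturbation, playing the role of -3/2 a0|_{lambda -> beta}

x0 = [1/math.sqrt(2), 1/math.sqrt(2)]

def quad_form(v, M):
    return sum(v[i]*M[i][j]*v[j] for i in range(2) for j in range(2))

shift = quad_form(x0, A1)                      # first-order eigenvalue shift

# exact largest eigenvalue of M0 + A1 from the 2x2 closed form
tr  = (M0[0][0] + A1[0][0]) + (M0[1][1] + A1[1][1])
det = (M0[0][0] + A1[0][0])*(M0[1][1] + A1[1][1]) - (M0[0][1] + A1[0][1])**2
exact = (tr + math.sqrt(tr**2 - 4*det))/2

assert abs(shift - 0.2) < 1e-12
assert abs(exact - (3.0 + shift)) < 0.01       # first-order estimate is close
```

The first-order estimate $3.2$ agrees with the exact eigenvalue to better than a percent for this perturbation size.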
We also enforce the smallness of higher-order corrections to the partial wave amplitudes with the following constraint~\cite{Grinstein:2015rtl, Cacchio:2016qyh, Durand:1992wb, Chowdhury:2017aav} \begin{equation} R^{\prime} \equiv \frac{\left|a_0^{(1)}\right|}{\left|a_0^{(0)}\right|} < 1 \end{equation} for each eigenvalue of the partial wave matrix, as long as $|a_0^{(0)}|>0.01$. \subsection{Boundedness from below} \label{sec:boundedness} In order to have a potential which is bounded from below, we extract positivity conditions from the generic potential \eqref{eq:genpot}, assuming only that all couplings are real. Setting all but one or two of the real scalar fields to zero, we require the resulting coefficient matrix to be copositive \cite{Kannike:2016fmd}: \begin{align} &\mu=\mu_1 + \mu_2 + \mu_6 + 2 (\mu_3 + \mu_4 + \mu_5) > 0 \label{eq:MWfirst} \\ &\mu_1 + \mu_2 + \mu_3 + \mu_4 > 0\\ &14 (\mu_1 + \mu_2) + 5 \mu_6 + 24 (\mu_3 + \mu_4) - 3 \left| 2 (\mu_1 + \mu_2) - \mu_6 \right| > 0\\ &5 (\mu_1 + \mu_2 + \mu_6) + 6 (2\mu_3 + \mu_4 + \mu_5) - \left| \mu_1 + \mu_2 + \mu_6 \right| > 0\\ &\nu_1 + \sqrt{\lambda_1 \mu }> 0\\ &\nu_1 + \nu_2 - 2|\nu_3| + \sqrt{\lambda_1 \mu}> 0\\ &\lambda_1 + \frac14 \mu + \nu_1 + \nu_2 + 2 \nu_3 - \frac{1}{\sqrt{3}}|\nu_4+\nu_5| >0\\ &\lambda_1> 0\label{eq:MWlast}\\ &\lambda_2> 0\\ &\lambda_3+\sqrt{\lambda_1 \lambda_2}> 0\\ &\lambda_3+\lambda_4-|\lambda_5|+\sqrt{\lambda_1 \lambda_2}> 0\\ &\frac12(\lambda_1+\lambda_2)+\lambda_3+\lambda_4+\lambda_5-2|\lambda_6+\lambda_7|> 0 \label{eq:2HDMlast}\\ &\omega_1 + \sqrt{\lambda_2 \mu}> 0 \label{eq:mixfirst}\\ &\omega_1 + \omega_2 - 2|\omega_3| + \sqrt{\lambda_2 \mu}> 0\\ &\lambda_2 + \frac14 \mu + \omega_1 + \omega_2 + 2 \omega_3 - \frac{1}{\sqrt{3}}|\omega_4+\omega_5| >0\label{eq:mixlast} \end{align} We want to stress that these conditions are necessary but not sufficient, since we did not analyze the cases with three or more non-zero fields, leaving the $\kappa_i$ unconstrained.
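Conditions of the type $\lambda_3+\sqrt{\lambda_1 \lambda_2}> 0$ follow from the copositivity criterion for a $2\times2$ coefficient matrix. The Python sketch below encodes that criterion; the coupling values are illustrative, not fit results.

```python
import math

# Hedged sketch of the two-field copositivity criterion behind conditions such as
# lambda_3 + sqrt(lambda_1 lambda_2) > 0: a symmetric 2x2 matrix [[a, b], [b, c]]
# is copositive iff a >= 0, c >= 0 and b + sqrt(a*c) >= 0.

def copositive_2x2(a, b, c):
    # short-circuit keeps sqrt's argument non-negative
    return a >= 0 and c >= 0 and b + math.sqrt(a*c) >= 0

# In the direction (|Phi_1|^2, |Phi_2|^2): a = lambda_1/2, c = lambda_2/2,
# b = lambda_3/2, so copositivity matches lambda_3 + sqrt(lambda_1 lambda_2) > 0
# up to the boundary.
lam1, lam2, lam3 = 1.0, 2.0, -1.2                 # illustrative couplings
assert copositive_2x2(lam1/2, lam3/2, lam2/2)
assert not copositive_2x2(lam1/2, -1.0, lam2/2)   # too negative a cross term fails
```

A moderately negative cross coupling can thus be rescued by large enough diagonal quartics, exactly as the square-root terms in the inequalities above express.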
While the pure 2HDM inequalities \eqref{eq:MWlast} to \eqref{eq:2HDMlast} have been known before \cite{Deshpande:1977rw,Branco:2011iw}, we are not aware of such conditions for the Manohar-Wise model; this is why we derive \eqref{eq:MWfirst} to \eqref{eq:MWlast} in the most general way. Finally, \eqref{eq:mixfirst} to \eqref{eq:mixlast} only appear in the 2HDMW. In our simplified potential $V_{\text{\tiny{fit}}}$, the positivity conditions reduce to \begin{align} &\mu'=4\mu_1 + 2\mu_3 + 4\mu_4 > 0, \qquad 5 \mu_1 + 3 \mu_3 + 3 \mu_4 - \left| \mu_1 \right| > 0, \\ &\nu_1 + \sqrt{\lambda_1 \mu'}> 0, \qquad \nu_1 + 2\nu_2 + \sqrt{\lambda_1 \mu'} > 0, \qquad \lambda_1 + \frac14 \mu' + \nu_1 + 2\nu_2 - \frac{2}{\sqrt{3}}|\nu_4| > 0, \nonumber \\ &\lambda_1> 0, \qquad \lambda_2> 0, \qquad \lambda_3+\sqrt{\lambda_1 \lambda_2} > 0, \qquad \lambda_3+2\lambda_4+\sqrt{\lambda_1 \lambda_2} > 0, \nonumber \\ &\omega_1 + \sqrt{\lambda_2 \mu'}> 0, \qquad \omega_1 + 2\omega_2 + \sqrt{\lambda_2 \mu'}> 0, \qquad \lambda_2 + \frac14 \mu' + \omega_1 + 2\omega_2 - \frac{2}{\sqrt{3}}|\omega_4| > 0. \nonumber \end{align} \subsection{Positivity of the mass squares} Additional bounds are derived from requiring the masses of the colored scalars to be real: \begin{equation} \nu_1 c_{\beta}^2 + \omega_1 s_{\beta}^2 > - \frac{4 m_S^2}{v^2} , \quad (\nu_1 + 2 \nu_2) c_{\beta}^2 + (\omega_1 + 2 \omega_2) s_{\beta}^2 > - \frac{4 m_S^2}{v^2} , \end{equation} where $v = \sqrt{v_1^2 + v_2^2} \approx 246$~GeV, and where $s_{\beta}$ and $c_{\beta}$ are the sine and cosine of $\beta$, respectively, with $\tan\beta = v_1 / v_2$. We must have $m_S^2 > 0$ so that the vacuum preserves $SU(3)_C$. Note that the mass splitting between the colored states is \begin{equation} \frac{2}{v^2} \left(m_R^2 - m_{S^{\pm}}^2\right) = \nu_2 c_{\beta}^2 + \omega_2 s_{\beta}^2 \label{eq:mRsqminusmIsq}, \end{equation} and $m_{S^{\pm}}^2 = m_I^2$ due to custodial symmetry.
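The reduced positivity conditions for $V_{\text{\tiny{fit}}}$ and the colored-scalar mass bounds above can be coded directly as a point-by-point filter; a minimal Python sketch follows. All numerical inputs are illustrative sample points, not fitted 2HDMW values.

```python
import math

# Hedged sketch: the reduced boundedness-from-below conditions of V_fit and the
# colored-scalar mass-positivity bounds, coded directly from the inequalities
# above. Sample points are invented for illustration.

def vfit_bounded_below(l1, l2, l3, l4, m1, m3, m4, n1, n2, n4, w1, w2, w4):
    mu = 4*m1 + 2*m3 + 4*m4                    # mu' of the reduced conditions
    if not (l1 > 0 and l2 > 0 and mu > 0):     # needed before taking square roots
        return False
    return all([
        5*m1 + 3*m3 + 3*m4 - abs(m1) > 0,
        n1 + math.sqrt(l1*mu) > 0,
        n1 + 2*n2 + math.sqrt(l1*mu) > 0,
        l1 + mu/4 + n1 + 2*n2 - 2*abs(n4)/math.sqrt(3) > 0,
        l3 + math.sqrt(l1*l2) > 0,
        l3 + 2*l4 + math.sqrt(l1*l2) > 0,
        w1 + math.sqrt(l2*mu) > 0,
        w1 + 2*w2 + math.sqrt(l2*mu) > 0,
        l2 + mu/4 + w1 + 2*w2 - 2*abs(w4)/math.sqrt(3) > 0,
    ])

def colored_masses_real(n1, n2, w1, w2, tanb, mS2, v=246.0):
    sb2 = tanb**2/(1 + tanb**2)                # sin^2(beta), with tan(beta) = v1/v2
    cb2 = 1 - sb2
    bound = -4*mS2/v**2
    return (n1*cb2 + w1*sb2 > bound) and ((n1 + 2*n2)*cb2 + (w1 + 2*w2)*sb2 > bound)

assert vfit_bounded_below(0.5, 0.5, 0.2, 0.1, 0.3, 0.2, 0.1, 0.1, 0.0, 0.1, 0.1, 0.0, 0.1)
assert not vfit_bounded_below(-0.5, 0.5, 0.2, 0.1, 0.3, 0.2, 0.1, 0.1, 0.0, 0.1, 0.1, 0.0, 0.1)
assert colored_masses_real(0.1, 0.0, 0.1, 0.0, 1.0, 500.0**2)
assert not colored_masses_real(-20.0, 0.0, -20.0, 0.0, 1.0, 100.0)
```

Such a filter is the point-wise analogue of what the sampling described in the Priors subsection evaluates at each proposed parameter point.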
\subsection{Renormalization group stability} So far we have only discussed theory constraints at the electroweak scale. Assuming the validity of the model up to some higher scale imposes bounds on the parameters: scenarios that define a viable model at $m_Z$ could feature one (or more) quartic couplings with an unstable behavior under the renormalization group evolution to a higher scale. This could be due to a Landau pole; moreover, the boundedness-from-below criteria described in Section \ref{sec:boundedness} should be fulfilled at any scale. Furthermore, the unitarity conditions should be applied at least above some scale, $\mu_u$, as they are computed in the limit $\mu_u\gg \sqrt{\lambda_i} v$, with $\lambda_i$ being a quartic coupling of the theory. Here, we chose $\mu_u=750$ GeV as in \cite{Cacchio:2016qyh}. We only take into account the quartic coupling terms from \cite{Cheng:2017tbn} and neglect the contributions of Yukawa and gauge couplings to the RGEs. In Ref.~\cite{Cheng:2017tbn} it was shown how the parameter space is constrained in the three cases $\log_{10}(\Lambda/1\ {\rm GeV})=10,13,19$ if one uses LO unitarity and 2HDM stability. \section{Results} \label{sec:results} While the boundedness-from-below constraints are straightforward to apply, we want to discuss the different unitarity constraints in the 2HDMW before we consider higher scales and the effect of the theory constraints on the physical parameters for both the 2HDMW and the MW model. Due to the large number of degrees of freedom, we present the direct comparison of \textit{model} parameters in most cases. The results are also translated into \textit{physical} parameters, such as the scalar masses. The contours in the figures presented below are the 100\% posterior probability regions.
If we change the prior distribution of $\tan\beta$, for instance, replacing a flat $\log_{10}(\tan\beta)$ by a flat $\tan\beta$ prior, this will modify the shape of the posterior distributions (probably only slightly), but not the 100\% limits. \subsection{Different unitarity constraints} \begin{figure} \begin{picture}(500,280)(0,0) \centering \put(-10,0){\includegraphics[width=\textwidth]{Unitarity.pdf}} \put(400,230){\includegraphics[width=70pt]{HEPfitLogo.png}} \end{picture} \caption{Comparison of the different unitarity bounds in the $\lambda_4$ vs.~$\lambda_3$ and $\mu_4$ vs.~$\mu_3$ planes at the electroweak scale. The bounds shaded by colors other than red are obtained without renormalization-group running. Tree-level unitarity constrains the quartic couplings to the beige areas; the two sets of one-loop conditions NLO and NLO+ force the couplings to stay within the pink and light blue regions, respectively. The purple contour delimits the area compatible with the $R'$ conditions. The different unitarity bounds at the electroweak scale need to be compared to the regions with stable running including unitarity up to a scale of 1 TeV (red).} \label{fig:unitarity} \end{figure} In Figure \ref{fig:unitarity} we show the effects of the LO, NLO and NLO+ criteria on the $\lambda_4$ vs.~$\lambda_3$ and $\mu_4$ vs.~$\mu_3$ planes as well as the impact of the $R'$ conditions explained in Section \ref{sec:unitarity} at the electroweak scale. Note that these bounds are calculated without running the renormalization-group equations, except for the red region. We observe that -- contrary to the 2HDM case, cf.~Figure 2 of Ref.~\cite{Cacchio:2016qyh} -- the quartic couplings enjoy more freedom if we apply the NLO(+) or the $R'$ criteria instead of LO unitarity.
The reason for this is that the LO unitarity conditions only depend on a few quartic couplings and disallow extreme values for them, while in the NLO(+) case, large quartic couplings can be compensated by tuning some of the other quartic couplings. Along the diagonal of the left-hand panel of Figure \ref{fig:unitarity} we can observe the consequence of not applying the $R'$ criteria if the LO unitarity condition is accidentally small: In the small strip with $|\lambda_4+\lambda_3|\leq 0.01$ the quartic coupling $\lambda_4$ can be larger than $11$ in magnitude. If we compare all sets of unitarity constraints with the region that is stable at least up to 1 TeV and compatible with NLO+ unitarity and the $R'$ conditions, we observe that the latter is a very strong bound. We would like to stress that we recommend using the NLO(+) unitarity conditions only at scales significantly larger than the electroweak vev because beyond LO the quartic couplings are running couplings evaluated at an energy much larger than $v$. \subsection{Combination of all theoretical constraints} \begin{figure} \centering \begin{picture}(500,290)(0,0) \put(0,0){\includegraphics[width=\textwidth]{RGrunningI.pdf}} \put(300,240){\includegraphics[width=70pt]{HEPfitLogo.png}} \end{picture} \caption{RG stability in the $\lambda_4$ vs.~$\lambda_3$ and $\nu_4$ vs.~$\nu_2$ planes of the 2HDMW of type I at the electroweak scale. The blue contours represent all scenarios that lead to a stable potential up to $1$ TeV without imposing any unitarity constraint, whereas NLO+ unitarity and $R'$ are added to the set of constraints for the red regions. The dark red region is compatible with all theory bounds and with a stable potential up to $63$ TeV.} \label{fig:running} \end{figure} In Figure \ref{fig:running} we illustrate the combination of the theory constraints with stability up to a certain scale in the $\lambda_4$ vs.~$\lambda_3$ and $\nu_4$ vs.~$\nu_2$ planes as representative examples of the 2HDMW.
The limits obtained from the global fit to all quartic couplings of the 2HDMW and the MW limiting case can be found in Table \ref{tab:quarticlimits}. In this section, we analyze three different scenarios: In the first case, we run all quartic couplings to the stability scale of $\mu_{st}=1$ TeV, checking at each step of the RG evolution whether the potential is bounded from below and whether all quartic couplings are in the perturbative regime, that is, smaller than $4\pi$ in magnitude. We find that with these constraints, the absolute value of the quartic couplings at the electroweak scale cannot exceed limits between $3.3$ and $8.5$ ($1.7$ and $7.5$) without applying any unitarity bound to the 2HDMW (MW). This has to be confronted with the second scenario, for which we add the NLO+ unitarity constraints as well as the $R'$ criteria at scales above $750$ GeV to the previous fit. The impact on the parameters is quite sizable: In Figure \ref{fig:running} we see that the allowed regions shrink by a factor of $1.5$ to $2$. The maximally allowed values for the quartic couplings range from $2.2$ to $5.7$ in the 2HDMW and from $1.3$ to $5.6$ in the MW, see Table \ref{tab:quarticlimits}. For comparison, the theory-only upper limit on the quartic couplings of the 2HDM is $5.75$~\cite{Cacchio:2016qyh}. Finally, we impose that the scalar potential with all discussed theory bounds is stable up to even higher scales $\Lambda$. Originally, we wanted to test high scales of $10^{4.8}$, $10^{7.6}$, $10^{12}$ and $10^{19}$ GeV, going in evenly spaced steps in $\log_{10} \Lambda$ towards the Planck scale, but our fitting set-up turned out to become unstable beyond $10^{4.8}$ GeV. If we choose $10^{4.8}$ GeV$\approx 63$ TeV as our high scale example, all parameters have to be between $-1.8$ and $2.2$ in the 2HDMW and between $-1.6$ and $2.2$ in the MW.
Hence, the limits at $63$ TeV are reduced to roughly $2/5$ of the ones obtained with stability at $1$ TeV. For completeness, we show the pairwise correlations of the bounds between all couplings in Figs.~\ref{fig:triangle-plots1} and~\ref{fig:triangle-plots2} in the appendix. In the MW limiting case the role of $\lambda_1$ is different, as it is the only parameter on which the mass of the SM-like Higgs depends; it is thus basically fixed by the Higgs mass measurements. Also, we do not impose any $\mathbb{Z}_2$ symmetry on the MW model, that is, we treat $\nu_4$ as a free parameter. The main difference between the limits on the quartic couplings in the 2HDMW and MW models is in how negative $\nu_1$ can be. Since $\lambda_1$ is fixed in the MW model, the positivity conditions limit the size of $\nu_1$. The Higgs trilinear coupling is sensitive to $\nu_1$ at the one-loop level. Thus larger values of $\nu_1$ are advantageous in trying to observe double-Higgs boson production at the LHC.
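The stability criterion behind these scenarios -- evolve the couplings upward and reject any point that loses perturbativity ($|\lambda_i|<4\pi$) or boundedness below the target scale -- can be illustrated with a schematic one-coupling toy model. The beta function and the coefficient $c$ below are generic placeholders, not the actual coupled 2HDMW RGEs of Ref.~\cite{Cheng:2017tbn}:

```python
from math import exp, log, pi

def max_stable_scale(lam0, mu0=91.2, mu_max=1e19, c=3 / (8 * pi**2), dt=1e-3):
    """Euler-evolve the toy RGE d(lam)/d(ln mu) = c*lam^2 upward from mu0
    and return the scale (in GeV) at which perturbativity |lam| < 4*pi
    first fails, or mu_max if the coupling stays perturbative throughout.
    Schematic only: c and the beta function stand in for the full RGEs.
    """
    lam, t, t_end = lam0, 0.0, log(mu_max / mu0)
    while t < t_end:
        if abs(lam) >= 4 * pi:      # perturbativity lost -> reject the point
            return mu0 * exp(t)
        lam += c * lam**2 * dt      # one Euler step in t = ln(mu / mu0)
        t += dt
    return mu_max
```

In this toy model a coupling of $4$ at $m_Z$ loses perturbativity in the multi-TeV range, i.e. such a point would survive the $1$ TeV check but fail the $63$ TeV one, while a coupling of $0.5$ stays perturbative all the way up.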
\begin{table} {\footnotesize \begin{tabular}{c|cccc|cccc|} & \multicolumn{4}{c|}{2HDMW limits} & \multicolumn{4}{c|}{MW limits}\\ Unitarity & -- & LO & NLO+,R' & NLO+,R' & -- & LO & NLO+,R' & NLO+,R' \\ $\mu_{st}$ & 1 TeV & 1 TeV & 1 TeV & 63 TeV & 1 TeV & 1 TeV & 1 TeV & 63 TeV \\ &\cellcolor[HTML]{3377ff} & &\cellcolor[HTML]{ff2222} &\cellcolor[HTML]{990000} & & & &\\ \hline $\lambda_1$ & [0, 3.9] & [0, 3.9] & [0, 2.7] & [0, 1.0] & \multicolumn{4}{c|}{$0.2585\pm 0.0007$} \\ $\lambda_2$ & [0, 3.9] & [0, 3.9] & [0, 2.7] & [0, 1.0] & \multicolumn{4}{c|}{--} \\ $\lambda_3$ & [-3.4, 5.8] & [-3.2, 5.5] & [-2.4, 4.2] & [-0.9, 1.6] & \multicolumn{4}{c|}{--} \\ $\lambda_4$ & [-3.3, 3.8] & [-3.2, 3.5] & [-2.2, 2.5] & [-0.9, 0.9] & \multicolumn{4}{c|}{--} \\ $\mu_1$ & [-5.5, 6.0] & [-5.3, 5.8] & [-3.8, 4.1] & [-1.5, 1.4] & [-5.3, 5.8] & [-5.3, 2.0] & [-3.6, 4.0] & [-1.4, 1.2] \\ $\mu_3$ & [-8.5, 7.8] & [-8.1, 7.7] & [-5.2, 5.7] & [-1.8, 2.2] & [-8.5, 7.5] & [0.0, 4.4] & [-5.1, 5.6] & [-1.6, 2.2] \\ $\mu_4$ & [-3.7, 4.9] & [-3.3, 4.8] & [-2.3, 3.2] & [-0.9, 1.2] & [-3.6, 4.8] & [-4.0, 2.3] & [-2.1, 3.1] & [-0.7, 1.2] \\ $\nu_1$ & [-4.7, 6.3] & [-4.5, 5.6] & [-3.1, 4.6] & [-1.2, 1.7] & [-1.7, 6.3] & [-1.2, 6.4] & [-1.3, 4.3] & [-0.8, 1.6] \\ $\nu_2$ & [-4.0, 5.2] & [-3.6, 5.0] & [-2.7, 3.5] & [-1.1, 1.3] & [-3.3, 5.1] & [-6.2, 6.4] & [-2.3, 3.4] & [-1.0, 1.3] \\ $\nu_4$ & [-5.0, 5.0] & [-4.8, 4.7] & [-3.3, 3.3] & [-1.3, 1.3] & [-4.6, 4.5] & [-7.6, 7.7] & [-2.9, 2.9] & [-1.1, 1.1] \\ $\omega_1$ & [-4.7, 6.3] & [-4.5, 6.0] & [-3.1, 4.5] & [-1.2, 1.7] & \multicolumn{4}{c|}{--} \\ $\omega_2$ & [-4.0, 5.2] & [-3.9, 5.1] & [-2.8, 3.5] & [-1.1, 1.3] & \multicolumn{4}{c|}{--} \\ $\omega_4$ & [-4.9, 4.9] & [-4.8, 4.7] & [-3.2, 3.3] & [-1.3, 1.3] & \multicolumn{4}{c|}{--} \\ \hline $m_A-m_H$ [GeV] & [-390, 440] & [-340,400] & [-340, 360] & [-210, 230] & \multicolumn{4}{c|}{--} \\ $m_R-m_I$ [GeV] & [-320, 370] & [-280,330] & [-260, 310] & [-170, 190] & [-250, 300] & [-100, 230] & [-180, 
250] & [-150, 180] \\ \end{tabular} } \caption{Limits on the quartic couplings and two mass differences with different assumptions. The second to fifth columns contain the 2HDMW results. Note that $\nu_4$ ($\omega_4$) is only non-zero in the case(s) of the type I, IIu (IId) 2HDMW. Columns six to nine contain the results of the MW limiting case. In this case, $\lambda_1=m_h^2/v^2$.} \label{tab:quarticlimits} \end{table} Comparing our results with those of Ref.~\cite{Cheng:2017tbn}, we find that our allowed ranges for the quartic couplings assuming stability and NLO unitarity up to 63 TeV are roughly of the same size as the previous limits using LO unitarity and no MW positivity up to $2\cdot 10^4$ TeV. \begin{figure} \centering \begin{picture}(500,290)(0,0) \put(0,0){\includegraphics[width=\textwidth]{MassDiff.pdf}} \put(300,240){\includegraphics[width=70pt]{HEPfitLogo.png}} \end{picture} \caption{Comparison of different stability scales in the $m_H-m_A$ vs.~$m_H$ and $m_R-m_I$ vs.~$m_R$ planes of the Type I 2HDMW. For the color code we refer to Fig.~\ref{fig:running}.} \label{fig:masses} \end{figure} The limits on the quartic couplings can be translated into bounds on the physical model parameters. As in the 2HDM, we observe strong restrictions on the difference between $m_H^2$ and $m_A^2$ \cite{Cacchio:2016qyh, Chowdhury:2017aav}; moreover, $m_I^2$ cannot deviate very much from $m_R^2$, see Figure \ref{fig:masses}. The former mass-square difference depends on the values of the $\lambda_i$, while $m_R^2-m_I^2$ is proportional to $\nu_2 c^2_\beta+\omega_2 s^2_\beta$, see eq.~\eqref{eq:mRsqminusmIsq}. The linear dependence of the upper limit of the mass splittings for light $m_H$ and $m_R$ comes from requiring both masses to be positive. This feature does not appear in the analogous limit in Figure 6 of~\cite{Cacchio:2016qyh} because the mass splittings in that figure are plotted against the mass of a third Higgs boson.
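To make the second restriction explicit, consider the special case $\nu_2=\omega_2$, for which the $\beta$-dependence in eq.~\eqref{eq:mRsqminusmIsq} drops out; the numbers below are a hypothetical illustration, not a fit result:

```latex
m_R^2 - m_I^2 \;=\; \frac{v^2}{2}\left(\nu_2 c_\beta^2 + \omega_2 s_\beta^2\right)
\;\overset{\nu_2=\omega_2}{=}\; \frac{\nu_2\, v^2}{2}
\;\approx\; 1.1\times 10^{5}\ \mathrm{GeV}^2 \qquad (\nu_2 = 3.5),
```

so that, for instance, $m_I=300$~GeV implies $m_R\approx 443$~GeV, a splitting of roughly $140$~GeV; here $\nu_2=3.5$ is the NLO+ upper limit at $\mu_{st}=1$~TeV from Table~\ref{tab:quarticlimits}.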
Fits to the mass differences $m_A-m_H$ and $m_R-m_I$ in the three mentioned scenarios yield upper bounds between $440$ and $170$ GeV in the 2HDMW and between $300$ and $150$ GeV in the MW model, see the last two rows of Table \ref{tab:quarticlimits}. For comparison with the 2HDMW, the theory-only limit on the mass splitting in the 2HDM is 360~GeV~\cite{Cacchio:2016qyh}. Although Figs.~\ref{fig:running} and \ref{fig:masses} were obtained for the Type I 2HDMW, the 2HDMW limits in Table \ref{tab:quarticlimits} hold for all three types, except that either $\nu_4$ or $\omega_4$ has to be set to zero, depending on the type. \section{Conclusions} \label{sec:conclusions} We have studied the NLO unitarity bounds on the 2HDMW, which extends the scalar sector of the 2HDM with an additional color-octet scalar. Although less constraining than the LO unitarity bounds at the electroweak scale, the NLO unitarity constraints become stronger when running up to scales above $1$~TeV. However, compared with the MW model, which is the limiting case of the 2HDMW, the common quartic couplings, i.e.\ the $\mu$'s and $\nu$'s, are allowed larger ranges under these constraints. In addition, we have derived a set of necessary conditions to bound the 2HDMW potential from below for the first time. These conditions constrain all but a few of the quartic couplings. They are also applicable to the limiting case of the MW model. Finally, we have combined all theoretical constraints and found limits on the couplings assuming stability at different scales. Requiring a stable potential at a higher scale favors smaller mass differences between pairs of neutral scalars, such as $m_A-m_H$ and $m_R-m_I$. The next obvious step would be a combination with experimental constraints, for which our publicly available \texttt{HEPfit}\xspace implementation could be used. \section*{Acknowledgments} We thank C.~Murgui, A.~Pich and G.~Valencia for fruitful discussions.
We thank the INFN Roma Tre Cluster, where most of the fits were performed. The work of OE was supported by the Agencia Estatal de Investigaci\'{o}n (AEI, ES) and the European Regional Development Fund (ERDF, EU) [Grants No.~FPA2014-53631-C2-1-P, FPA2017-84445-P and SEV-2014-0398]. The work of CM was supported by the United States Department of Energy under Grant Contract {DE-SC0012704}.
\section{Introduction} Besides being the most reactive lanthanide, the rare earth (RE) element Eu stands out in forming many intermetallic compounds in a divalent state, i.\,e. having a valence of nearly +2. According to Hund's rule Eu$^{2+}$ (4f$^7$) has a strong magnetic moment connected to localized 4f-electrons, whereas Eu$^{3+}$ (4f$^6$) has zero moment \cite{sampathkumaran_new_1981}. Despite the quite different electronic and magnetic configurations, both valence states can be rather close in energy. As a consequence, in these compounds a variety of competing phenomena such as heavy-fermion like behaviour, antiferromagnetic ordering or a valence transition occur \cite{onuki_divalent_2017}. In particular, a transition of the valence state of Eu$^{2+}\rightarrow$ Eu$^{3+}$ may be tuned e.\,g. by temperature \cite{sampathkumaran_new_1981}, pressure \cite{adams_effect_1991} or high magnetic fields \cite{mitsuda_field-induced_1997}. The valence transition is accompanied by a drastic change of the ionic radius of Eu by about 15\%. Prototypical EuPd$_2$Si$_2$ is known to be a valence-fluctuating material with a temperature-induced valence change of Eu from 2.2 to 2.8 within a narrow temperature interval between 150 and 170\,K \cite{sampathkumaran_new_1981}. This valence transition is accompanied by a large reduction ($\sim$2\%) of its a-lattice parameter upon cooling at ambient pressure \cite{onuki_divalent_2017}. The c-lattice parameter instead remains mainly temperature-independent; note that EuPd$_2$Si$_2$ already has a minimal Si-Si bond length of 2.462\,\AA\, (ICSD 657595). Recent research is focusing on the particularly strong coupling between electronic fluctuations and the lattice degrees of freedom near the second-order critical endpoint in the $p$-$T$ phase diagram \cite{kliemt_strong_2022}, although it is not unambiguously clear whether EuPd$_2$Si$_2$ is on the high- or low-pressure side of this endpoint and how disorder or defects influence the valence transition.
Chemical substitution of, e.\,g., Pd with larger Au atoms, equivalent to applying negative chemical pressure, is claimed to lead to a first-order phase transition in Eu(Pd$_{1-x}$Au$_x$)$_2$Si$_2$ with x $>$ 0.1 \cite{segre_valence_1982} such that the critical endpoint may become accessible via application of hydrostatic pressure. As evident from resonant photoemission studies of EuPd$_2$Si$_2$, divalent Eu is found for the outermost Eu layers irrespective of temperature \cite{wertheim_final-state_1985}, suggesting different properties at the surface as compared to the bulk. Ball milling EuPd$_2$Si$_2$ into the nanoparticle range leads to a broad distribution of valence transitions and possibly magnetic ordering at temperatures below 8\,K \cite{iyer_eu_2018}. To our knowledge, no thin-film-specific report regarding this ternary compound exists up to now, leaving several questions regarding the coupling between lattice and electronic degrees of freedom unanswered. Furthermore, for many RE-based compounds synthesis of single-phase epitaxial thin films has not yet been achieved successfully due to various difficulties during preparation \cite{chatterjee_heavy_2021}. In this study we report experimental results concerning the growth of EuPd$_2$Si$_2$ epitaxial thin films and their temperature-dependent structural, magnetic and valence properties. \section{Experimental Procedure} Epitaxially grown EuPd$_2$Si$_2$ thin films were prepared on MgO(001) substrates using molecular beam epitaxy (MBE) in an ultra-high vacuum (UHV) chamber with a base pressure below $1\times10^{-10}$\,mbar. During growth the pressure in the chamber did not exceed $1\times 10^{-8}$\,mbar. The substrates are commercially available (Crystec GmbH) single-side epi-polished (R$_a <$ 0.5 nm) pieces of MgO(001) with a size of $10\times10\times0.5\,$mm$^3$.
Any chemical treatment was avoided to minimize exposure to water, which is known to hydroxylate MgO and thus reduce surface quality \cite{braun_situ_2020}. MgO crystallizes in the simple rocksalt structure (space group 225) with a cubic lattice parameter of a = 4.212\,\AA\, \cite{smith_low-temperature_1968} at room temperature. The MgO(001) surface exposes a square array of Mg- and O-ions, representing the most stable plane \cite{crozier_preparation_1992}. EuPd$_2$Si$_2$ has a tetragonal structure (space group 139) with lattice constants a = 4.231\,\AA\, and c = 9.86\,\AA\, (ICSD 657595). The substrate material was chosen because of the low misfit of $(a_{\text{EuPd}_2\text{Si}_2}-a_{\text{MgO}})/a_{\text{EuPd}_2\text{Si}_2} = 0.45\%$ at room temperature, implying a cuboid-on-cube growth with an out-of-plane c-axis orientation for the EuPd$_2$Si$_2$ thin film.\\ Before growth the substrates were degassed at 400$^\circ$C and thermally cleaned at 1000$^\circ$C under UHV for one hour each to desorb possible surface contaminations. The temperature was then lowered to 450$^\circ$C for growth and allowed to stabilize. Sample temperature was measured indirectly via a radiatively coupled type-C thermocouple (95\%W/5\%Re - 74\%W/26\%Re) behind the heater. For the codeposition of the low-vapor-pressure materials Si and Pd, electron beam evaporators were used, whereas Eu was sublimated from a boron nitride crucible inside an effusion cell at a temperature of 470$^\circ$C measured at the bottom of the crucible. The evaporation rates of Si and Pd were continuously quantified throughout the deposition process and feedback-controlled via two water-cooled quartz microbalances (qmb) to ensure constant growth rates. For Eu deposition another qmb was used to check the rate directly before and after the deposition, showing no measurable deviation. During growth the temperature of the effusion cell was kept constant to better than 0.1$^\circ$C.
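The misfit quoted above follows directly from the two room-temperature lattice constants; as a quick numerical cross-check (our own illustration):

```python
a_eps = 4.231  # a-axis of EuPd2Si2 in Angstrom (ICSD 657595)
a_mgo = 4.212  # cubic lattice parameter of MgO in Angstrom

misfit = (a_eps - a_mgo) / a_eps
print(f"misfit = {100 * misfit:.2f} %")  # -> misfit = 0.45 %
```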
After growth the samples were allowed to cool to room temperature before an amorphous Si capping layer was deposited to prevent oxidation in air during further investigations at ambient pressure. Between different growth steps the film surface properties were analyzed via reflection high-energy electron diffraction (RHEED) with a 15\,keV/5\,$\mu$A electron beam. Scanning electron microscopy (SEM) images were acquired ex-situ to study the surface morphology in a FEI Nova NanoLab 600. Furthermore, the surface topography of the thin films was determined through atomic force microscopy (AFM), utilizing a Nanonis Nanosurf in non-contact mode under ambient conditions. For structural characterization high- and low-angle X-ray diffraction was done with a Bruker D8 Discover high-resolution diffractometer using Cu\,K$_\alpha$ radiation with a parallelized primary beam and a diffracted-side monochromator in air. To analyze the low- and high-angle oscillations Bruker's DiffracPlus Leptos software was used. Additional low-temperature diffraction data was collected with a Siemens D500 diffractometer equipped with a Cryogenics closed-cycle helium refrigeration system down to 10\,K under high vacuum. To study the transport properties, the thin films were patterned by means of UV-photolithography and low-energy Ar-ion etching to define 6-contact Hall bar structures with a cross-area of 30\,$\times$\,100\,$\mu \text{m}^2$. Low-temperature magnetotransport measurements were then performed using a variable temperature insert inside an Oxford helium flow cryostat between 3\,K and 300\,K in a magnetic field up to 5\,T.
For evaluation of the Eu mean valence, Hard X-ray Photoelectron Spectroscopy (HAXPES) measurements of the Eu 3d core levels and the valence band at a photon energy of 3.4\,keV were conducted at beamline P22 of the storage ring PETRA III at DESY in Hamburg (Germany) using a time-of-flight momentum microscope \cite{babenkov_high-accuracy_2019,medjanik_progress_2019}. Typical Si-cap layer thickness was around 2\,nm for samples used for HAXPES and 6\,nm otherwise. \section{Results and discussion} \subsection{RHEED, AFM and SEM} \begin{figure}[tb] \begin{center} \includegraphics[width=0.8\textwidth]{RHEED2.pdf} \caption{\label{rheed} a) SEM image of a 24\,nm (A1) and a 46\,nm thin film (B1) including Si capping layer. A small crystal with facets is observed for the thicker film. The scale bar corresponds to 1\,$\mu$m. b) RHEED pattern before (top half) and directly after thin film deposition (bottom half) without any change of sample position. c) AFM image of sample B1 with added Si capping layer. The area of the micrograph is 770$\times$770\,nm$^2$. } \end{center} \end{figure} In-situ RHEED azimuthal scans show a fourfold symmetry upon rotation around the surface normal of the pure MgO substrate after annealing at 1000$^\circ$C. After completion of the EuPd$_2$Si$_2$ thin film deposition again a fourfold symmetry of narrow streaks appears, see Fig.\,\ref{rheed} b. Comparison of the directions for the main symmetry axes from the substrate and the thin film implies a parallel alignment of their crystallographic a-axes. Even for thicker films broad Kikuchi bands arise, pointing to a laterally well-ordered crystalline film. In contrast, the RHEED pattern vanishes completely after deposition of the Si capping layer indicating an amorphous overlayer structure with a thickness of some nanometers.
SEM (Fig.\,\ref{rheed} a) and AFM (Fig.\,\ref{rheed} c) images of the Si-capped thin films reveal a locally smooth but stepped topography with a typical lateral island size on the order of 50-200\,nm and a mean surface roughness of $S_a \approx 3\,$nm. \subsection{X-ray diffraction and reflectometry} \begin{figure*}[tb] \includegraphics[width=\textwidth]{XRD-full2.pdf} \caption{\label{xrd} a) X-ray reflectometry scan (black) of a 46\,nm thin film (sample B1) with corresponding fit (red). b) Longitudinal symmetric X-ray scan near the EuPd$_2$Si$_2$(002)-reflex with fit of the Laue oscillations. c) $\phi$-scan around sample normal of MgO$\{4\bar{2}0\}$- and EuPd$_2$Si$_2\{0\bar{2}8\}$-reflexes in asymmetric geometry. d) Longitudinal symmetric scan of the same EuPd$_2$Si$_2$ thin film and a pure MgO(001) substrate. e) Temperature-dependent relative change of lattice parameters, $\Delta x/x$ for $x=a$ and $c$, from cubic MgO and tetragonal EuPd$_2$Si$_2$ normalized at 300\,K (obtained from several reflexes as indicated in the legend). For comparison, MgO single crystal data from Durand \cite{durand_coefficient_1936} are shown. Broad curves are a guide to the eye only.} \end{figure*} Despite the measured roughness, Kiessig fringes arising from a relatively smooth interface and surface are visible in symmetric low-angle X-ray reflectometry (XRR) scans (Fig.\,\ref{xrd} a) above 2$\Theta$ = 4$^\circ$, yielding an average film thickness $d$ of 46\,nm for EuPd$_2$Si$_2$ and approximately 2\,nm for Si, respectively, for sample B1 (see Table\,\ref{table} for a detailed comparison). Deviations between the experimental data and the fit may result from the oxidation of the Si capping layer leading to a partial SiO$_x$ overlayer, which is not included in the fitting procedure.
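The fitted film thickness can be sanity-checked with the standard small-angle estimate, in which neighbouring Kiessig fringe maxima well above the critical angle are separated by $\Delta(2\Theta)\approx\lambda/d$. This is our own back-of-the-envelope estimate, neglecting refraction; the actual fit was done with the Leptos software:

```python
from math import degrees

wavelength = 1.5406e-10  # Cu K-alpha1 wavelength in m
d = 46e-9                # fitted film thickness of sample B1 in m

# expected Kiessig fringe period in 2*Theta (small-angle approximation)
period = degrees(wavelength / d)
print(f"fringe period ~ {period:.2f} deg in 2*Theta")  # -> ~0.19 deg
```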
The symmetric high-angle X-ray diffraction scan (Fig.\,\ref{xrd} d) shows (00$\ell$)-reflexes of EuPd$_2$Si$_2$ with even number $\ell$ appearing up to the 10th order, suggesting a well-ordered epitaxial thin film. On the left-hand side of the EuPd$_2$Si$_2$(008)-reflex a small shoulder is visible for all films, which is due to scattering from the sample holder and appears even without any sample. At 35.0$^\circ$ an additional but small EuPd$_2$Si$_2$(112)-reflex appears, implying a minor contribution from misaligned crystallites as evident from the much higher structure factor of the (112)-reflex as compared, e.\,g.\,, to the EuPd$_2$Si$_2$(004)-reflex. For thicker films the higher order EuPd$_2$Si$_2$(224)-reflex at 74.0$^\circ$ starts to become visible. Laue oscillations next to the EuPd$_2$Si$_2$(002)- and (004)-reflex (Fig.\,\ref{xrd} b) indicate a high degree of structural order for the out-of-plane direction with a crystalline coherence length $L_c$ that is $\sim$90\% of the total layer thickness as obtained by XRR. From the precise position of the EuPd$_2$Si$_2$(0$\bar{2}$8)-reflex, a complete relaxation of the in-plane lattice constant towards the single-crystal bulk value of 4.237\,\AA\, is observed for all thicknesses investigated here. $\phi$-scans in asymmetric reflection geometry of the MgO$\{4\bar{2}0\}$- and EuPd$_2$Si$_2\{0\bar{2}8\}$-reflexes show four regularly spaced peaks at the same $\phi$-angles, respectively, proving a parallel alignment of the corresponding a-axes. The intensity of the MgO\{4$\bar{2}$0\}-reflexes in the asymmetric X-ray $\phi$-scan around the surface normal differs, which is caused by the high crystallinity and thus very sharp reciprocal lattice points in combination with a very small tilt offset of about 0.01$^\circ$ of the sample.
As a result, the normal vectors of the sample surface and the diffractometer's $\phi$-circle are not exactly parallel, leading to an effective $\omega$-tilt during acquisition of the $\phi$-scan. In the case of the EuPd$_2$Si$_2$\{0$\bar{2}$8\}-reflexes this effect is negligible, since these represent rather broad peaks, such that a change in the $\omega$-angle causes only a minor change with respect to the measured intensity, as is directly evident from the much higher full width at half maximum (FWHM) visible in transverse scans (not shown). The $\phi$-measurements thus confirm the epitaxial relationship deduced from RHEED: \begin{center} MgO(100)$\parallel$EuPd$_2$Si$_2$(100) \& MgO[001]$\parallel$EuPd$_2$Si$_2$[001] \end{center} Extending the X-ray structural analysis to low temperatures yields further insight into the coupling between the two lattices and a clamping effect with respect to the in-plane lattice constant of EuPd$_2$Si$_2$. Using the asymmetric reflection geometry we track the position of selected reflexes in reciprocal space using iteratively coupled 2$\Theta/\omega$- and pure $\omega$-scans for each temperature to find the exact reflex position with maximum intensity. Fig.\,\ref{xrd} e shows the resulting relative changes of the a- and c-axis lattice constants with temperature for the substrate and for a 46\,nm EuPd$_2$Si$_2$ thin film. Due to the low thermal expansion of MgO of less than $1\times 10^{-5}\,\text{K}^{-1}$ at room temperature \cite{smith_low-temperature_1968}, a maximum linear shrinkage of the substrate lattice by -0.14\,\% between 300\,K and 10\,K is expected. The measured temperature-dependent out-of-plane lattice constant of MgO\{001\} is thus in good agreement, within the experimental error, with values of the thermal expansion for MgO single crystals given in Ref.\,\cite{durand_coefficient_1936}.
For the out-of-plane c-axis lattice constant of the EuPd$_2$Si$_2$ thin film we find a reduction of -0.23\% from both the (0$\bar{2}$8)- and the (004)-reflex, with a nearly linear decrease from 300\,K down to 70\,K. Below 70\,K the c-axis lattice constant remains unchanged. The a-axis lattice parameter can be assessed via the EuPd$_2$Si$_2$(0$\bar{2}$8)-reflex, showing roughly the same thermal expansion as the out-of-plane cubic lattice constant of the MgO substrate. Note in particular that no abrupt change of the a-axis for EuPd$_2$Si$_2$ occurs, in contrast to the rapid reduction of $\sim$2\% observed below 150\,K for single crystals \cite{kliemt_strong_2022}. As a consequence of the low thermal expansion of MgO and a strong coupling, i.\,e., a clamping effect of the thin film to the substrate, a high tensile biaxial in-plane strain is expected. \subsection{HAXPES} \begin{figure} \begin{center} \includegraphics[width=0.7\textwidth]{HAXPES-Eu-50nm.pdf} \caption{\label{haxpes} Temperature dependence of the Eu 3d core spectra obtained with a photon energy $h\nu$ of 3.4 keV at 300\,K (red) and 20\,K (blue) for a 44\,nm thin film (sample B2). v represents the mean valence as derived from the ratio of the areas of the corresponding 3d core orbitals. The specified binding energies are referred to the Fermi level.} \end{center} \end{figure} To investigate the electronic states in the EuPd$_2$Si$_2$ thin films in more detail, hard X-ray photoemission spectroscopy experiments were conducted on a 44\,nm thin film with a 2\,nm Si capping layer (sample B2) with the aim of studying the Eu mean valence v above and below the bulk phase transition temperature. Photoelectrons are excited with a photon energy of 3.4\,keV and the energy resolution is set to 200\,meV. At this photon energy the inelastic mean free path of the photoelectrons is $\sim$4\,nm \cite{seah_quantitative_1979} and one obtains bulk-related information rather than surface properties.
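The mean valence v shown in Fig.\,\ref{haxpes} is derived from the ratio of the Eu$^{2+}$ and Eu$^{3+}$ peak areas; the commonly used spectral-weight prescription can be sketched as follows (the intensity values below are placeholders, not the measured areas):

```python
def mean_valence(i_eu2, i_eu3):
    """Eu mean valence from integrated Eu2+ and Eu3+ 3d intensities:
    v = 2 + I(3+) / (I(2+) + I(3+))."""
    return 2.0 + i_eu3 / (i_eu2 + i_eu3)

# placeholder example: 3% trivalent spectral weight gives v = 2.03
print(round(mean_valence(0.97, 0.03), 2))  # -> 2.03
```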
Due to the spin-orbit interaction the Eu 3d spectrum is split into a 3d$_{5/2}$- and a 3d$_{3/2}$-component, whereby a chemical shift separates the Eu$^{2+}$- and Eu$^{3+}$-components by about 10\,eV in each case, allowing a precise determination of the mean valence from the corresponding peak areas \cite{mimura_temperature-induced_2011}. The comparison between the high (300\,K) and low (20\,K) temperature spectra reveals no significant change with respect to the ratio of the Eu$^{2+}$-/Eu$^{3+}$-components and a temperature-independent valence near 2.0 (see Fig.\,\ref{haxpes}). HAXPES measurements of the valence region (not shown) reveal the same density of states at 300\,K and at 20\,K and in particular an absence of Eu$^{3+}$ 4f states, confirming the presence of a pure Eu$^{2+}$ state even at 20\,K. In contrast, HAXPES measurements of polycrystalline bulk samples of EuPd$_2$Si$_2$ show a strong redistribution of the peak heights upon cooling below the valence transition temperature \cite{mimura_temperature_2004}. Remarkably, the temperature-induced valence transition is thus completely suppressed. \subsection{Magnetotransport properties} \begin{figure} \includegraphics[width=\textwidth]{Cryo-full2.pdf} \caption{\label{transport} a) Temperature dependence of the electrical 4-wire resistivity for three different films (samples A1, B1, C1) of thicknesses as indicated. b) Normalized transverse magnetoresistance measured at constant temperatures with typical magnetoresistance curves for sample B1 shown in the inset. c) Hall resistivity at 5\,T determined at constant temperature, see text for details.} \end{figure} All transport measurements (on samples A1, B1, C1, see Table\,\ref{table}) were performed using a constant current of 100\,$\mu$A nearly parallel to the EuPd$_2$Si$_2$ a-axis, resulting in a current density of less than $2 \times 10^8\,\text{A/m}^2$.
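The quoted upper bound on the current density can be verified assuming the current flows through a cross-section of Hall bar width times film thickness (our assumption about the geometry):

```python
current = 100e-6  # measurement current in A
width = 30e-6     # Hall bar width in m

for thickness in (24e-9, 46e-9, 64e-9):  # film thicknesses of samples A1, B1, C1
    j = current / (width * thickness)    # current density in A/m^2
    assert j < 2e8                       # consistent with the quoted bound
    print(f"{thickness * 1e9:.0f} nm: j = {j:.1e} A/m^2")
```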
The linear increase of the voltage with increasing current as observed at selected temperatures confirms the absence of heating effects. The temperature dependence of the resistivity (Fig.\,\ref{transport} a) reveals a metallic behaviour of the EuPd$_2$Si$_2$ thin films above $\sim$50\,K. At lower temperatures, a flat maximum around 16\,K with a small minimum around 20\,K appears, possibly indicating a phase transition. This resistivity anomaly occurs at a nearly thickness-independent temperature $T^*$. The residual resistivity ratio (RRR) between 280\,K and 3\,K amounts to $\sim$1.2 with a trend towards larger values for thicker films. With increasing film thickness the resistivity decreases strongly, which may be attributed to a denser layer with stronger overlapping neighbouring islands, where the electrical transport is strongly influenced by grain boundary scattering. In contrast, EuPd$_2$Si$_2$ single crystals containing small flux inclusions exhibit only an RRR slightly above 2 with a residual resistivity of approximately 20\,$\mu\Omega$cm for current direction perpendicular to the c-axis \cite{kliemt_strong_2022}. Application of a magnetic field perpendicular to the thin film leads to a very small negative magnetoresistance (MR) with a parabolic shape (inset of Fig.\,\ref{transport} b). The temperature dependence of MR(B = 5\,T) shows a minimum at the same characteristic temperature $T^*$ for samples A1 and B1.\\ Measurements of the Hall effect reveal a linear behavior with respect to the applied magnetic field at all temperatures and in magnetic fields up to 5\,T. For fixed magnetic field the temperature-dependent Hall resistivity exhibits a sample-independent maximum at $T^*$ (see Fig.\,\ref{transport} c). As to the reason for the observed anomaly we can only speculate at this stage, as we were so far not successful in acquiring temperature- and field-dependent magnetization data.
Nevertheless, the sizable value of the Hall conductivity of, e.\,g., $\sim$20\,$(\Omega\,\mathrm{cm})^{-1}$ for sample C1, in conjunction with the suppressed valence transition and thus the constant charge carrier density, indicates a magnetic origin of the anomaly. As the magnitude of the magnetoresistance increases with decreasing temperature but shows neither saturation nor hysteresis effects as a function of magnetic field, a ferromagnetic ground state is unlikely. A simple collinear antiferromagnetic ground state is, however, also unlikely, as an enhanced Hall effect of topological origin is not expected to occur in that case \cite{smejkal_anomalous_2022}. However, even for a simple centrosymmetric crystal structure a topological Hall effect in the antiferromagnetic state may appear, as has recently been shown experimentally for EuAl$_4$ \cite{shang_anomalous_2021}. \begin{table} \begin{center} \caption{\label{table} Structural parameters and corresponding magnetotransport and valence properties of four samples. The deposition parameters for all samples are the same except for the deposition time and therefore the total thickness $d$. na means `not available'.\\} \begin{tabular}{ccccccc} Sample & $d$/nm & $L_c$/nm & RRR & $\rho_{5\,K}$/$\mu\Omega\,$cm & $T^*$/K & v(20\,K)\\ \hline A1 & 24 & 22 & 1.1 & 135.3 & 18 & na\\ B1 & 46 & 42 & 1.2 & 79.8 & 19 & na\\ B2 & 44 & 42 & na & na & na & 2.03\\ C1 & 64 & 61 & 1.2 & 74.0 & 19 & na\\ \end{tabular} \end{center} \end{table} \section{Conclusion} EuPd$_2$Si$_2$ grows as an epitaxial, (001)-oriented thin film on MgO(001) substrates using molecular beam epitaxy, with a typical lateral island size of the order of 100\,nm, while the film is fully coherent in the out-of-plane direction. The evolution of a stepped surface indicates an island-like growth mode for EuPd$_2$Si$_2$ on MgO(001), which is typical for metal-on-insulator growth as, e.\,g., for Pd on MgO \cite{renaud_growth_1999}. 
For all thin films investigated here a relaxed growth takes place, where the a- and c-axis lattice constants are equal to their bulk values at room temperature. A simple cuboid-on-cube model with the epitaxial relationship MgO(100)$\parallel$EuPd$_2$Si$_2$(100) and MgO[001]$\parallel$EuPd$_2$Si$_2$[001] was identified by RHEED and XRD experiments. Due to a clamping effect of the EuPd$_2$Si$_2$ thin film to the MgO substrate with its negligible thermal expansion, the abrupt change of the a-lattice constant of EuPd$_2$Si$_2$ known from bulk material is suppressed, leading to a highly strained thin film upon cooling. It is important to note that the a-lattice parameter of the EuPd$_2$Si$_2$ thin film shows the same temperature dependence as that of the substrate over the whole measured temperature range. In contrast, for the simple Eu/Nb/Al$_2$O$_3$ system a thickness dependence of the clamping temperature was found, where the Eu lattice is free to expand upon heating above a certain temperature. Below this clamping temperature T$_{cl}$ the Eu in-plane lattice constants follow the behavior of the sapphire substrate, which may be explained by the temperature-dependent mobility of defects near the interface \cite{soriano_clamping_2004}. For the EuPd$_2$Si$_2$ thin films the valence transition is consequently suppressed, and a temperature-independent mean valence near 2.0 even at 20\,K is derived from the HAXPES experiments. Magnetotransport measurements instead indicate a phase transition between 16 and 20\,K, reflecting a possible magnetic ordering. The exact influence of the clamping on, e.\,g., an antiferromagnetic ground state is still to be elucidated, but research along these lines is under way. \section{Acknowledgment} Special thanks are due to F. Ritter for assistance in collecting the low-temperature x-ray data and T. Reimer (Gutenberg University Mainz) for argon ion beam etching of the lithographically patterned samples. 
This work was supported by the German Research Foundation (DFG) via the TRR288 (422213477, projects A04, A03 and B04). We thank the Bundesministerium f\"ur Bildung und Forschung (BMBF) through the project EffSpin-HAXPES (project number 05K16UMC) for additional funding. \bibliographystyle{unsrt}
\section{#1}\setcounter{equation}{0}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \def\Fus#1#2#3#4#5#6{ F_{#5#6}\left[ \begin{array} [c]{cc}% #3 & #2\\ #4 & #1% \end{array} \right]} \begin{document} \begin{titlepage} \thispagestyle{empty} \begin{flushright} UT-07-09\\ hep-th/0702221\\ \end{flushright} \vskip 1.5 cm \vspace*{2.5cm} \begin{center} \noindent{\textbf{\LARGE{ Black Hole - String Transition \\\vspace{0.5cm} and Rolling D-brane \vspace{0.5cm}\\ }}} \vskip 1.5cm \noindent{\large{Yu Nakayama}\footnote{E-mail: nakayama@hep-th.phys.s.u-tokyo.ac.jp}}\\ \vspace{1cm} \noindent{\small{\textit{Department of Physics, Faculty of Science, University of Tokyo}} \\ \vspace{2mm} \small{\textit{Hongo 7-3-1, Bunkyo-ku, Tokyo 113-0033, Japan}}} \end{center} \vspace{1cm} \newpage \vspace*{4cm} \begin{abstract} We investigate the black hole - string transition in the two-dimensional Lorentzian black hole system from the exact boundary states that describe the rolling D-brane falling down into the two-dimensional black hole. The black hole - string phase transition is one of the fundamental properties of non-supersymmetric black holes in string theory, and we will reveal the nature of the phase transition from the exactly solvable world-sheet conformal field theory viewpoint. Since the two-dimensional Lorentzian black hole system ($SL(2;\mathbb{R})_k/U(1)$ coset model at level $k$) typically appears as the near-horizon geometry of various singularities such as NS5-branes in string theory, our results can be regarded as a probe of such singularities by the non-supersymmetric rolling D-brane. The exact construction of boundary states for the rolling D0-brane falling down into the two-dimensional black hole enables us to probe the phase transition at $k=1$ directly in the physical amplitudes. Along the way, we address three fundamental questions in string theory as a consistent theory of quantum gravity: small charge limit vs. 
large charge limit of non-supersymmetric quantum black holes, analyticity vs. non-analyticity in physical amplitudes and physical observables, and unitarity vs. open-closed duality in time-dependent string backgrounds. This work is based on the PhD thesis submitted to the Department of Physics, Faculty of Science, University of Tokyo, which was defended in January 2007. \end{abstract} \end{titlepage} \tableofcontents \newpage \sectiono{Introduction}\label{sec:1} {\bf From the Heavens} {\it A luminous star, of the same density as the Earth, and whose diameter should be two hundred and fifty times larger than that of the Sun, would not, in consequence of its attraction, allow any of its rays to arrive at us; it is therefore possible that the largest luminous bodies in the universe may, through this cause, be invisible} (Laplace: 1798). It was Laplace who first predicted the existence of the black hole from Newtonian mechanics. More than a hundred years later, in 1915, while serving on the Russian front in World War I, Schwarzschild discovered the exact static black hole solution of Einstein's general relativity. Ever since, black holes have continued to attract broad attention in theoretical physics. Black holes are fascinating and indeed mysterious. It is remarkable that some properties of the black hole are quite reminiscent of those of thermodynamics: it has a definite temperature, energy and entropy, and moreover it satisfies the laws of thermodynamics. To understand this coincidence, it had long been suggested that quantum gravity would explain the microscopic statistical origin of the thermodynamic properties of the black hole. Furthermore, black holes challenge the validity of quantum mechanics. Hawking radiation, predicted from quantum mechanics, leads to the evaporation of the black hole, which ironically results in the failure of unitary evolution of the quantum system. 
These mysterious aspects of black holes have continued to enchant generations of theoretical physicists. Over the past two decades, theoretical physicists have gained more and more confidence in string theory as a candidate for the final theory of everything. The theory of everything, by tacit implication, should include a consistent theory of quantum gravity with sufficient predictive power. The best arena to test quantum gravity is quantum black hole systems, where the semiclassical analysis leads to the puzzling issues raised above. Whether string theory resolves these issues is a major challenge for string theorists. One of the greatest achievements of string theory so far is a microscopic explanation of the entropy of (near-)BPS black holes with large charges. String theory, along with various dualities, has enabled us to ``count'' the microscopic states forming such black holes. The counting successfully agrees with the classical Bekenstein-Hawking entropy formula of the corresponding macroscopic black hole. The situation, however, remains unclear when one studies non-BPS black holes with small charge. Large quantum corrections, both in the string coupling and from large-curvature effects, prevent a quantitative enumeration of the quantum states corresponding to the black hole. Qualitatively, it has been suggested that the so-called black hole - string phase transition occurs when we consider such a small charge black hole. One of the motivations of this thesis is to understand the black hole - string phase transition in exactly solvable string theory backgrounds. In this thesis, we study the exact dynamics of a rolling D-brane in the two-dimensional black hole system. The two-dimensional black hole system not only gives a toy model for exactly solvable black hole systems in string theory, but can also be embedded in the full superstring theory as a solution corresponding to black NS5-branes. 
Although our model is rather specific, we will uncover many important and universal features of quantum gravity such as the black hole - string transition. In particular, we would like to ask three fundamental questions about the nature of quantum gravity, or of string theory as a candidate for the theory of everything. The first problem we would like to address in this thesis is the small charge limit of the non-supersymmetric black hole and its relation to the black hole - string transition. By studying the black hole - string transition in the two-dimensional black hole, we would like to explicitly show the phase transition between the large charge limit and the small charge limit of non-BPS black hole systems. The origin of the phase transition is the existence of two characteristic temperatures in string theory: one is the Hawking temperature associated with the Hawking radiation from the black hole, and the other is the Hagedorn temperature of the underlying string theory. The relation between the two temperatures is of utmost importance in understanding the black hole - string phase transition, and we will show that the phase transition occurs exactly when these two temperatures coincide in the two-dimensional black hole system by examining the properties of the exact boundary states of the probe rolling D-brane. A related issue is whether the genuine two-dimensional non-critical string theory (i.e. with a two-dimensional target space) admits a black hole solution. This question has long remained unanswered. Actually, the two-dimensional black hole in the two-dimensional non-critical string theory is located well below the black hole - string phase transition point, suggesting that its physical interpretation as a black hole is problematic. Our study will also support this negative conclusion. The second problem we would like to investigate in this thesis is the relation between analyticity and non-analyticity in amplitudes and physical quantities. 
It is well known that in supersymmetric situations, holomorphy (analyticity) plays a crucial role in determining the exact BPS properties of the theory. On the other hand, to discuss phase transitions such as the black hole - string transition, the non-analyticity of the physical quantities is essential. Throughout this thesis, the interplay between analyticity and non-analyticity appears intermittently. In particular, we highlight the universality of the decaying D-brane and the subtleties associated with Wick rotation in curved spaces in this context. The third problem we would like to study is the consistency between unitarity and the open-closed duality. Unitarity is one of the crucial ingredients of quantum theory. In the first quantized string theory, however, unitarity in time-dependent backgrounds is not always manifest, especially in the Euclidean world-sheet formulation. The simplest consequence of unitarity is the optical theorem. In the time-dependent physics associated with D-brane decay, however, it is not obvious whether the analytic continuation involved is consistent with the requirements of unitarity. Indeed, a careless Wick rotation between the Lorentzian and Euclidean world-sheet theories leads to inconsistent results that violate the optical theorem; these are only remedied by a careful treatment of the pole contributions that are picked up in the process of Wick rotation. The rolling D-brane in the two-dimensional black hole system is an excellent arena to check the validity of the prescriptions for the Wick rotation proposed in the literature. {\bf Down to Earth} So far, we have stated the celestial motivations of the thesis. What about the terrestrial motivations? In other words, what practical physics can we learn from the study of the rolling D-brane in two-dimensional black holes? 
The dynamics of the rolling D-brane in the two-dimensional black hole closely resembles that of the rolling tachyon associated with D-brane decay in flat space. Indeed, our study suggests that this tachyon - radion correspondence reveals rather universal features of the closed string radiation rate from decaying D-branes. String (particle) production in time-dependent systems such as dynamical D-branes is an interesting subject of theoretical physics in itself, but it also has potential applications to quantum cosmology based on superstring theory. In recent observational cosmology, the existence of an inflationary epoch of our universe has been confirmed with increasing accuracy. It is, therefore, a great challenge for string theory to provide a natural setup for inflation. One viable scenario for string inflation is the so-called brane inflation, where the potential between a D-brane and an anti-D-brane provides the inflaton field. Recent studies show that brane inflation can be embedded in the flux compactification of type II string theory with all moduli fixed. The end-point of brane inflation is the pair annihilation of the D-brane and the anti-D-brane. This is the point where the effective field theory approximation for brane inflation breaks down and stringy effects dominate. The reheating of the universe associated with the decay of the inflaton differs drastically between the brane inflation scenario and the conventional field theory scenario. To understand the reheating process driven by open string tachyon condensation, the universality of the D-brane decay radiation rate that we will discuss in this thesis is crucial. We will also see that large closed string loops form during the D-brane decay and that they dominate the radiated energy once the fundamental string charge is induced. 
The subsequent evolution of such macroscopic strings will be of great importance for understanding and estimating the relic cosmic strings in our universe, which might be observed experimentally in the near future, providing direct evidence for string theory. In this way, the study of D-brane decay has potential applications to quantum cosmology. We believe that our results, especially the universal properties of the decaying D-branes, will become a fundamental background for realistic brane inflation models with successful reheating. {\bf Organization of the Thesis} The organization of this thesis is as follows. In section \ref{sec:2}, we review the two-dimensional black hole from the space-time viewpoint. In section \ref{sec:3}, we review the two-dimensional black hole from the conformal field theory viewpoint. In section \ref{sec:4}, we introduce the concept of the black hole - string transition. In section \ref{sec:5}, we study the rolling tachyon dynamics and introduce the tachyon - radion correspondence conjecture. In section \ref{sec:6}, we study the D-branes in the two-dimensional black hole system in the Euclidean signature. In section \ref{sec:7}, we construct the exact boundary states for the rolling D-brane in the two-dimensional black hole in the Lorentzian signature. In section \ref{sec:8}, we study the closed string radiation rate from the rolling D-brane and probe the black hole - string transition. In section \ref{sec:9}, we present some discussions and the conclusion of the thesis. In the appendices we collect useful facts used in the main part of the thesis. In appendix \ref{sec:A}, we fix our conventions and collect useful formulae. In appendix \ref{sec:B}, we present miscellaneous topics whose detailed discussion is omitted from the main text. Parts of this thesis are based on published papers. In particular, a large portion of the discussions in sections \ref{sec:7} and \ref{sec:8} is based on \cite{Nakayama:2005pk,Nakayama:2006qm}. 
\newpage \sectiono{Two-dimensional Black Hole: Space-Time Viewpoint}\label{sec:2} In this section, we review the two-dimensional black hole from the space-time viewpoint. We will see that string theory is replete with exactly solvable backgrounds containing the two-dimensional black hole system. By studying such backgrounds, we can understand the $\alpha'$-exact physics of string theory near singularities. The organization of this section is as follows. In section \ref{sec:2-1}, we introduce the black NS5-brane background as the most typical string solution based on the two-dimensional black hole system. In section \ref{sec:2-2}, we generalize the construction to study string theory near various singularities. In section \ref{sec:2-3}, we review the basic aspects of the classical two-dimensional black hole system. In particular, we focus on the thermodynamic properties in section \ref{sec:2-4}. \subsection{(Black) NS5-brane background}\label{sec:2-1} As is often said, string theory is not a theory of strings only. It turns out to contain other higher-dimensional nonperturbative objects such as D-branes and NS-branes. Stable D-branes are charged under the Ramond-Ramond fields, and are defined as objects on which perturbative strings can end. On the other hand, NS-branes are charged under the Kalb-Ramond $B_{\mu\nu}$ field, and do not possess a perturbative definition. They can be constructed as solitonic solutions of the equations of motion of the effective ten-dimensional supergravity. Historically, all these important ingredients of string theory were discovered as exact (BPS) solitonic solutions of the effective ten-dimensional supergravity. The tension of the D-branes is proportional to $1/g_s$ while the tension of the NS-branes is proportional to $1/g_s^2$, where $g_s$ denotes the string coupling constant. Hence, in the perturbative limit (i.e. 
$g_s \to 0$), all these objects are infinitely massive compared with the perturbative string spectrum and can be neglected as excitations. Rather, we regard the existence of such solitonic objects as defining super-selection sectors of the perturbative string theory. The moduli space of string theory is connected by various dualities. In particular, one of the most important recent achievements is the advent of the gauge - gravity correspondence. Before this new development, it had been believed that a local quantum field theory cannot realize a gravitational theory (Weinberg-Witten theorem \cite{Weinberg:1980kq}). However, the holographic realization of the gauge theory avoids this no-go theorem in a remarkable manner, and has enabled us to study strongly coupled gauge theories via weakly coupled gravity. Explicit realizations in string theory involve the low-energy decoupling limit (Maldacena limit \cite{Maldacena:1997re}) of the localized excitations: the most famous example is the low-energy field theory limit of the open-string theory living on the D3-brane in flat ten-dimensional space, which yields the duality between type IIB string theory on $AdS_5 \times \mathbb{S}^5$ and the $\mathcal{N}=4$ supersymmetric Yang-Mills theory on $\mathbb{R}^{1,3}$ (or $\mathbb{R}^1 \times \mathbb{S}^3$) \cite{Maldacena:1997re,Aharony:1999ti}. The decoupling limit of the localized degrees of freedom and the gauge - gravity correspondence are not only important for the understanding of strongly coupled gauge theories, but are also essential for understanding the quantum gravitational nature of string theory. What is the microscopic origin of the black hole entropy? What are the fundamental degrees of freedom of quantum gravity? How does (or does not) string theory resolve the information paradox? These questions have been partially answered through the gauge - gravity correspondence for D-branes. 
The decoupling limit is essentially the near horizon limit of the corresponding supergravity background, and the properties of black holes can be understood through the gauge - gravity correspondence in this way. For NS5-branes, the situation is more involved. Compared with D-branes, the NS5-brane is more geometrical in its origin. Indeed, as we will see in section \ref{sec:2-2}, it is T-dual to a singular geometry, and it is not obvious what the localized degrees of freedom in the decoupling limit are. On the other hand, the closed string background for the near horizon limit of the NS5-brane is exactly solvable, so we are able to understand the gauge - gravity correspondence beyond the supergravity approximation. Our starting point is the supergravity solution for the extremal NS5-brane: the solution contains a nontrivial dilaton and the metric\footnote{Throughout this thesis, we use the string frame for supergravity solutions.} \begin{align} \mbox{d} s^2 \equiv G_{\mu\nu} \mbox{d} x^\mu \mbox{d} x^\nu =- \mbox{d} t^2 + \left(1+\frac{k\alpha'}{r^2}\right) \left(\mbox{d} r^2+r^2 \mbox{d} \Omega_3^2\right)+ \mbox{d} {\bf y}^2_{\mathbb{R}^5}~, ~~~ e^{2\Phi(r)} = g_s^2 \left(1+\frac{k\alpha'}{r^2}\right) \ , \label{exNS5} \end{align} along with $k$-units of NS-NS $H_3$-flux penetrating through $\mathbb{S}^3$: \begin{align} H_{mnp} = -\epsilon_{mnp}^{\ \ \ \ q} \partial_q \Phi(r) \ , \end{align} where $x^m$ ($m=6,\cdots,9$) are transverse to the 5-brane. Thus, $k$ refers to the number of NS5-branes at $r=0$, ${\bf y}$ are the spatial coordinates of the planar NS5-brane worldvolume, and $g_s$ is the string coupling constant at infinity. The background preserves 16 supercharges of the type II (A or B) supergravity. Following the decoupling-limit argument given above, we take the near horizon limit of the geometry \eqref{exNS5} by zooming in on the $r^2 \ll k\alpha'$ region. Neglecting the constant term (i.e. 
$1$) in the harmonic function $\left(1+\frac{k\alpha'}{r^2}\right)$, we obtain the near horizon limit of the extremal NS5-brane background \cite{Rey:1989xi,Rey:1989xj,Rey:1991uu,Callan:1991dj,Callan:1991at} \begin{align} \mbox{d} s^2 = - \mbox{d} t^2 + k\alpha' \mbox{d} \rho^2 + k\alpha' \mbox{d} \Omega_3^2 + \mbox{d} {\bf y}_{\mathbb{R}^5}^2 ~, \qquad {\Phi} = -\rho + \text{constant}\ , \label{NH ext NS5} \end{align} where $r = \sqrt{k\alpha'} \exp \rho$. This near horizon background remarkably admits an exact conformal field theory description involving a linear dilaton theory and the $SU(2)_k$ super Wess-Zumino-Novikov-Witten (WZNW) model:\footnote{Here, $k$ is the level of the total current of the super $SU(2)$ WZNW model and $\sqrt{\frac{2}{k}}$ is the amount of background charge for the linear dilaton system.} \begin{align} \Big[ \mathbb{R}_t \times \mathbb{R}_{\rho , \sqrt{2\over k}} \times SU(2)_k \Big]_\perp \times \Big[\mathbb{R}^5 \Big]_{||}\ . \end{align} The first part describes the five-dimensional curved space-time transverse to the NS5-brane while the second part describes the flat spatial directions parallel to the NS5-brane. The criticality condition for superstring theories is satisfied for any $k$ because \begin{align} \left( 1 + \frac{6}{k} + \frac{1}{2}\right) + 3 \times \left( \frac{k-2}{k} + \frac{1}{2}\right)+6\times \left(1 + \frac{1}{2} \right) =15 \label{crit} \ . \end{align} Although exactly solvable, the string background is singular due to the linear dilaton direction $\rho$. At large negative $\rho$, the string coupling constant diverges and string perturbation theory is ill-defined. Physically, there exists a core of NS5-branes at $r=0$, and the dynamical degrees of freedom on the NS5-branes cannot be neglected. There are several ways to regularize this linear dilaton singularity so that the string world-sheet perturbation theory makes sense with sufficient predictive power. 
One way to do this is to introduce non-extremality into the geometry. Let us consider the non-extremal or black NS5-brane solution of the ten-dimensional type II supergravity: \begin{align} \mbox{d} s^2 = -\left(1-\frac{r_0^2}{r^2}\right) \mbox{d} t^2 + \left(1+\frac{k\alpha'}{r^2}\right) \left(\frac{\mbox{d} r^2}{1-\frac{r_0^2}{r^2}} +r^2 \mbox{d} \Omega_3^2\right)+ \mbox{d} {\bf y}^2_{\mathbb{R}^5}~, ~~~ e^{2\Phi(r)} = g_s^2 \left(1+\frac{k\alpha'}{r^2}\right) \ \label{blackNS5} \end{align} along with $k$-units of NS-NS $H_3$-flux penetrating through $\mathbb{S}^3$ again. Here $r=r_0$ is the location of the event horizon of the black NS5-brane. One type of near-horizon limit is $r_0 \rightarrow 0$ and $g_s \rightarrow 0$ independently, leading to the `throat geometry' of extremal NS5-branes, which reduces to \eqref{NH ext NS5}. Another type of near-horizon limit is $r_0 \rightarrow 0$ and $g_s \rightarrow 0$ while keeping the energy density above the extremal configuration, $\mu \equiv {r_0^2}/{g_s^2 \alpha'}$, fixed. It yields the `throat geometry' of the near-extremal NS5-branes \eqref{blackNS5} \cite{Maldacena:1997cg,Kutasov:2000jp}: \begin{align} \hspace{-5mm} \mbox{d} s^2 = - \tanh^2\rho \, \mbox{d} t^2 + k\alpha' \mbox{d} \rho^2 + k\alpha' \mbox{d} \Omega_3^2 + \mbox{d} {\bf y}_{\mathbb{R}^5}^2 ~, \qquad e^{2\Phi} = \frac{k}{\mu \cosh^2 \rho} ~, \label{NH black NS5} \end{align} where $r= r_0\cosh \rho$. For the $(t,\rho)$-subspace, the metric and the dilaton coincide with those of the two-dimensional black hole with Lorentzian signature. This Lorentzian black hole can be described by the Kazama-Suzuki supercoset conformal field theory $SL(2; \mathbb{R})_k / U(1)$ (where the $U(1)$ subgroup is chosen to be the non-compact component, i.e. a space-like direction) of central charge $c=3(1+2/k)$. 
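As a consistency check (a short computation of our own, using only the definitions above), substituting $r = r_0 \cosh\rho$ into \eqref{blackNS5} and keeping only the $k\alpha'/r^2$ term of the harmonic function in the throat region reproduces \eqref{NH black NS5}:
\begin{align}
1-\frac{r_0^2}{r^2} = \tanh^2\rho \ , \qquad
\frac{k\alpha'}{r^2}\,\frac{\mbox{d} r^2}{1-\frac{r_0^2}{r^2}}
= \frac{k\alpha'}{r_0^2\cosh^2\rho}\,\frac{r_0^2\sinh^2\rho\,\mbox{d}\rho^2}{\tanh^2\rho}
= k\alpha'\,\mbox{d}\rho^2 \ , \qquad
e^{2\Phi} = \frac{g_s^2\, k\alpha'}{r_0^2\cosh^2\rho} = \frac{k}{\mu\cosh^2\rho} \ , \nonumber
\end{align}
where we used $\mbox{d} r = r_0\sinh\rho\,\mbox{d}\rho$ and $\mu = r_0^2/(g_s^2\alpha')$.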
Likewise, taking account of the NS-NS $H_3$-flux penetrating through $\mathbb{S}^3$, which is omitted in (\ref{NH black NS5}), the angular part $\mathbb{S}^3$ can be described by the (super) $SU(2)$-WZNW model as we have seen in the extremal case. In this way, the string background of the nonextremal NS5-brane is reduced to a solvable superconformal field theory system:\footnote {Here again, $k$ denotes the level of the total current of the super WZNW models. Namely, $k+2$, $k-2$ are the levels of the bosonic $SL(2)$ and $SU(2)$ currents.} \begin{align} \Big[ {SL(2;\mathbb{R})_{k} \over U(1)} \times SU(2)_{k} \Big]_\perp \times \Big[\, \mathbb{R}^5 \, \Big]_{||}~. \label{SCFT black NS5} \end{align} Here, the first part describes the five-dimensional curved space-time (including the time direction) transverse to the NS5-brane, while the second part describes the flat spatial directions parallel to the NS5-brane. The criticality condition is satisfied for any $k$ as in \eqref{crit}. As we will review in the next section, the classical geometry of the two-dimensional black hole itself is not singularity free. Although the Schwarzschild-like coordinates used in \eqref{NH black NS5} exhibit no singularity at all, the event horizon lies at $\rho=0$, and the coordinates can be extended inside the horizon. In the maximally extended geometry, we observe a curvature and dilaton singularity, as is the case with the usual Schwarzschild black hole. It is interesting, however, that despite the appearance of the singularity, the exact SCFT formulation \eqref{SCFT black NS5} remains perfectly well-defined, at least formally. Another way to regularize the linear dilaton singularity, while keeping the space-time supersymmetry in contrast with the above non-extremal resolution, is to separate the positions of the NS5-branes in a ring-like manner and study the smeared solution \cite{Sfetsos:1998xd} (see also \cite{Israel:2004ir,Itzhaki:2005zr}). 
The background is described by the coset model \begin{align} \frac{\Big[ {SL(2;\mathbb{R})_{k} \over U(1)} \times \frac{SU(2)_{k}}{U(1)} \Big]_\perp}{\mathbb{Z}_k} \times \Big[\, \mathbb{R}^{1,5} \, \Big]_{||}~. \label{SCFT sNS5} \end{align} Here the $\mathbb{Z}_k$ orbifold serves as a GSO projection\footnote{With a slight abuse of convention, the Gliozzi-Scherk-Olive (GSO) projection has a two-fold meaning in this thesis (and in much of the literature). One is the summation over spin structures \cite{Gliozzi:1976qd}, and the other is the restriction to the integral $U(1)_R$ charge sector of the internal SCFT. Both are imperative to preserve the target-space supersymmetry.} that restricts the spectrum to the sector with integral $U(1)_R$ charge so that the space-time supercharge is well-defined. Intuitively, we have extracted a particular $U(1)$ direction from the $SU(2)$ WZNW model and combined it with the linear dilaton direction to construct the Euclidean ${SL(2;\mathbb{R})_{k} \over U(1)}$ coset model by a marginal deformation.\footnote{The $U(1)$ subgroup here is chosen to be the compact direction.} The linear dilaton direction together with the $U(1)$ direction is deformed to the ${SL(2;\mathbb{R})_{k} \over U(1)}$ coset model, which does not possess a dilaton singularity. To see the geometrical meaning of this deformation, we write the coset part $\Big[ {SL(2;\mathbb{R})_{k} \over U(1)} \times \frac{SU(2)_{k}}{U(1)} \Big]_\perp $ as \begin{align} \mbox{d} s^2 = \alpha'k(\mbox{d} \theta^2 + \tan^2\theta \mbox{d}\tilde{\phi}^2_2 + \mbox{d}\rho^2 + \tanh^2\rho \mbox{d}\tilde{\phi}_1^2) \ , \ \ e^{2\Phi} = \frac{1}{\cos^2\theta\cosh^2\rho} \ . \label{startm} \end{align} It is interesting to note that this geometry does {\it not} admit any Killing spinor needed for manifest supersymmetry: the supersymmetry is recovered only after taking the $\mathbb{Z}_k$ orbifold \cite{Israel:2004ir} (see also \cite{Bakas:1994ba,Bergshoeff:1994cb,Bakas:1995hc} for earlier discussions). 
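As a quick check (in the level conventions of the footnotes in this section), the criticality condition is again satisfied for any $k$, since the orbifold projection does not change the central charge:
\begin{align}
c\left[\tfrac{SL(2;\mathbb{R})_{k}}{U(1)}\right] + c\left[\tfrac{SU(2)_{k}}{U(1)}\right] + c\left[\mathbb{R}^{1,5}\right]
= 3\left(1+\frac{2}{k}\right) + 3\left(1-\frac{2}{k}\right) + 6\times\frac{3}{2} = 15 \ . \nonumber
\end{align}
The $k$-dependent contributions of the two Kazama-Suzuki cosets cancel against each other, which is why the construction works for any number of NS5-branes.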
The $\mathbb{Z}_k$ orbifold is defined as $(\tilde{\phi}_1,\tilde{\phi}_2) \sim (\tilde{\phi}_1+2\pi/k,\tilde{\phi}_2+2\pi/k)$. We define new coordinates \begin{align} \tilde{\phi}_1 = \phi_1+\phi_2/k \ , \ \ \tilde{\phi}_2 = \phi_2/k \end{align} so that the $\mathbb{Z}_k$ orbifold simply acts as $(\phi_1,\phi_2) \sim (\phi_1, \phi_2 + 2\pi)$. In the new coordinates, the metric reads \begin{align} \mbox{d} s^2 = \alpha'k(\mbox{d}\theta^2 +\mbox{d}\rho^2 + \tanh^2\rho \mbox{d}\phi_1^2) + 2\alpha'\tanh^2\rho \mbox{d}\phi_1\mbox{d}\phi_2 + \frac{\alpha'}{k}(\tan^2\theta + \tanh^2\rho) \mbox{d}\phi_2^2 \ . \end{align} Since the $\phi_2$ direction has the usual $2\pi$ periodicity, one can perform a T-duality along the $\phi_2$ direction. Applying Buscher's rule (see appendix \ref{busc}), we obtain \begin{align} \mbox{d} s^2 &= \alpha'k\left(\mbox{d}\theta^2 + \mbox{d}\rho^2 + \frac{\tan^2\theta\tanh^2\rho}{\tan^2\theta+\tanh^2\rho} \mbox{d}\phi_1^2 + \frac{\mbox{d}\hat{\phi}_2^2}{\tan^2\theta + \tanh^2\rho} \right) \cr B&= \frac{\alpha'k\tanh^2\rho}{\tan^2\theta+ \tanh^2\rho}\mbox{d}\phi_1\wedge \mbox{d}\hat{\phi}_2 \ , \ \ e^{2\Phi} = \frac{1}{\cos^2\theta\cosh^2\rho(\tan^2\theta+\tanh^2\rho)} \ . \label{rmet} \end{align} In the asymptotic region $\rho \to \infty$, we recover the NS5-brane solution \eqref{NH ext NS5}, and we can also see that the NS5-branes are now localized along the ring $\theta = \rho=0$, where the dilaton diverges (see figure \ref{fig:ring} for a description of our coordinate system). In other words, the NS5-branes are located along the ring in the $(x^8,x^9)$ plane.\footnote{Our parametrization is $x^6 = \rho_0 \sinh\rho \sin \theta \cos \phi_1$, $x^7 = \rho_0 \sinh\rho \sin \theta \sin\phi_1$, $x^8 = \rho_0 \cosh\rho\cos\theta \cos \hat{\phi}_2$, $x^9 = \rho_0\cosh\rho \cos \theta \sin\hat{\phi}_2$. 
} In this sense, the geometry still appears singular, but as we will discuss later, this is just an artefact of a naive application of T-duality: the trumpet singularity in \eqref{rmet} will be resolved by the ``winding tachyon condensation". Another quick way to see the absence of a singularity is to revisit our starting point \eqref{startm}: it does not possess any dilaton singularity. It is also clear that the coset \eqref{SCFT sNS5} is manifestly singularity free as an SCFT up to a harmless orbifold structure. Although we will not do it explicitly here, we can begin with the appropriate (smeared) harmonic function ansatz for NS5-branes distributed along a ring and reproduce the metric \eqref{rmet} purely from the supergravity solution by taking a suitable near horizon limit \cite{Sfetsos:1998xd}. In this approach, the space-time supersymmetry is manifest. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\linewidth,keepaspectratio,clip]{ring.eps} \end{center} \caption{NS5-branes are localized along the ring in the $(x^8,x^9)$ plane with $\theta = \rho = 0$.} \label{fig:ring} \end{figure} \subsection{Noncritical superstring and LST}\label{sec:2-2} In section \ref{sec:2-1}, we discussed the relation between the two-dimensional black hole systems and the near horizon NS5-brane configurations. It is possible to generalize this construction and describe the singular limits of various geometries by exactly solvable conformal field theories. The construction is similar to the Gepner construction for compact Calabi-Yau spaces \cite{Gepner:1987vz,Gepner:1987qi}, and it may be called the ``non-compact Gepner construction" \cite{Ghoshal:1995wm,Ooguri:1995wj,Giveon:1999zm,Lerche:2000uy,Hori:2002cd,Eguchi:2004ik}. In this subsection, we review this construction. Thanks to this generalized ``non-compact Gepner construction", most of the results we will present in later sections can be applied to various singular Calabi-Yau spaces.
\subsubsection{noncompact Calabi-Yau and wrapped NS5-branes}\label{sec:2-2-1} Our starting points for constructing exactly solvable world-sheet conformal field theories for singular Calabi-Yau spaces from the two-dimensional black hole and minimal models are the following two claims. {\bf Calabi-Yau / Landau-Ginzburg correspondence} \cite{Vafa:1988uu,Lerche:1989uy,Martinec:1988zu,Witten:1993yc} Let us consider the algebraic varieties defined by \begin{align} \sum_{i=1}^{n+2} x_i^{r_i} = 0 \label{nocv} \end{align} in the weighted projective space $\mathbb{WCP}_{n+1}\left(\frac{1}{r_1},\cdots ,\frac{1}{r_{n+2}}\right)$. The Calabi-Yau condition reads $\sum_{i=1}^{n+2}\frac{1}{r_i} = 1$. The Calabi-Yau / Landau-Ginzburg correspondence says that the (quantum) sigma model defined on the $n$-dimensional algebraic variety \eqref{nocv} is (weakly) equivalent\footnote{The precise meaning of the weak equivalence can be found e.g. in \cite{Hori:2000kt}.} to the $\mathcal{N} = 2$ supersymmetric Landau-Ginzburg orbifold theory with the superpotential \begin{align} W(X_i) = \sum_{i=1}^{n+2} X_i^{r_i} \ . \label{lgo} \end{align} The orbifold projection serves as a GSO projection demanding the integrality of the $U(1)_R$-charge of the total model. The Calabi-Yau condition can be understood as the criticality condition for the SCFT with the central charge $\hat{c} = c/3 = n$. When all $r_i$ are positive, the resulting model is nothing but the Gepner construction for compact Calabi-Yau spaces (see also \cite{Greene:1988ut,Witten:1993yc}). When some of the $r_i$ are negative, the Calabi-Yau manifold is non-compact and the definition of the Landau-Ginzburg orbifold needs extra care, as we will discuss momentarily.
{\bf Landau-Ginzburg / minimal model correspondence} \cite{Lerche:1989uy} The $\mathcal{N}=2$ supersymmetric Landau-Ginzburg model with the superpotential $W(X) = X^{k}$ is equivalent to the $\mathcal{N}=2$ $(k-2)$-th minimal model with the central charge $\hat{c} = c/3 = 1 - \frac{2}{k}$. The minimal model has an algebraic formulation, but an alternative construction is based on the Kazama-Suzuki coset $SU(2)_k/U(1)$ associated with the level $k$ $SU(2)$ current algebra.\footnote{We always stick to the convention where $k$ denotes the {\it total} level of the current algebra: the bosonic $SU(2)$ current algebra has the level $\kappa = k-2$ and the bosonic $SL(2;\mathbb{R})$ current algebra has the level $\kappa = k+2$.} The Kazama-Suzuki construction guarantees that the coset CFT associated with the $\mathcal{N}=1$ current algebra actually possesses $\mathcal{N}=2$ superconformal symmetry when the target space is a special Kahler manifold (a hermitian symmetric space) \cite{Kazama:1988uz,Kazama:1988qp}. This condition is satisfied in our simplest case, and the theory is equivalent to the $\mathcal{N}=2$ minimal model.\footnote{Actually, if the denominator $H$ in the coset $G/H$ is a Cartan subgroup of $G$, the coset admits the $\mathcal{N}=2$ superconformal symmetry even if it is a non-symmetric space \cite{Kazama:1988va}.} We can formally generalize the above discussion to define the $\mathcal{N}=2$ supersymmetric Landau-Ginzburg model with the negative power superpotential $W(X) = X^{-k}$. The analytic continuation of the central charge for the positive power superpotential yields $\hat{c} = c/3 = 1 + \frac{2}{k}$. The Kazama-Suzuki coset construction has a natural generalization in this case as well. Instead of the $SU(2)_k/U(1)$ supercoset model, we consider the $SL(2;\mathbb{R})_k/U(1)$ supercoset model, whose central charge is also given by $\hat{c} = c/3 = 1 + \frac{2}{k}$. This CFT will be reviewed in section \ref{sec:3}.
Since the Lagrangian formulation based on the Landau-Ginzburg model with the negative power superpotential does not seem to be well-defined while the $SL(2;\mathbb{R})_k/U(1)$ coset does, the precise claim of the non-compact Gepner construction is that the Landau-Ginzburg orbifold appearing in \eqref{lgo} should be understood as the $SL(2;\mathbb{R})_k/U(1)$ coset model. At this point, it would be interesting to mention that the formal Landau-Ginzburg description suggests a duality between the $\mathcal{N}=2$ Liouville theory and the $SL(2;\mathbb{R})_k/U(1)$ coset model. We begin with the topological path integral for the partition function on the sphere: \begin{align} Z &= \int \mbox{d} X \mbox{d} \bar{X} \frac{1}{g_s^2} e^{-W(X)-\bar{W}(\bar{X})} \cr &=\int \mbox{d} X \mbox{d} \bar{X} \frac{1}{g_s^2}e^{-X^{-k}-\bar{X}^{-k}} \ . \label{pi} \end{align} Introducing the $\mathcal{N}=2$ Liouville coordinate $X^{-k} = e^{\frac{1}{\mathcal{Q}}\Phi}$ with $\mathcal{Q}^2 = \frac{2}{k}$, we can rewrite the path integral \eqref{pi} as \begin{align} Z = \int \mbox{d}\Phi \mbox{d}\bar{\Phi} \frac{1}{g_s^2} \exp\left({-\mathcal{Q}\mathrm{Re}\Phi -e^{\frac{1}{\mathcal{Q}}\Phi}- e^{\frac{1}{\mathcal{Q}}\bar{\Phi}}}\right) \ . \end{align} The important step is to regard the measure factor $\exp\left({-\mathcal{Q}\mathrm{Re}\Phi}\right)$ as a space-dependent coupling constant, namely, linear dilaton background with the slope $\mathcal{Q}$. The remaining action shows the structure of the $\mathcal{N}=2$ Liouville superpotential $W(\Phi) = e^{\frac{1}{\mathcal{Q}}\Phi}$. This heuristic equivalence between the $SL(2;\mathbb{R})/U(1)$ coset model and the $\mathcal{N}=2$ Liouville theory at the topological level will be made more precise in later section \ref{sec:3-4}. 
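All three descriptions invoked here carry the same central charge: the analytic continuation of the minimal-model value to $W=X^{-k}$, the $SL(2;\mathbb{R})_k/U(1)$ coset, and the $\mathcal{N}=2$ Liouville theory with $\mathcal{Q}^2=2/k$ all give $\hat c = 1+2/k$. A small numeric sketch of this bookkeeping (helper names are ours):

```python
from fractions import Fraction

def chat_minimal(k):
    # N=2 minimal model / LG with W = X^k: c-hat = 1 - 2/k
    return 1 - Fraction(2, k)

def chat_cigar(k):
    # N=2 SL(2,R)_k/U(1) super-coset: c = 3 + 6/k, c-hat = c/3
    return Fraction(3 * k + 6, 3 * k)

def chat_liouville(Q2):
    # N=2 Liouville with linear dilaton slope Q: c-hat = 1 + Q^2
    return 1 + Q2

for k in (1, 2, 3, 10):
    # continuation k -> -k of the minimal-model value ...
    lg_negative = chat_minimal(-k)
    # ... agrees with the cigar and with N=2 Liouville at Q^2 = 2/k
    assert lg_negative == chat_cigar(k) == chat_liouville(Fraction(2, k))
```

The matching of $\hat c$ is necessary but of course not sufficient for the duality; the precise statement is the subject of section \ref{sec:3-4}.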
Now combining these two facts, we can construct the equivalent description for (non-compact) Calabi-Yau spaces by considering tensor products of $SL(2;\mathbb{R})/U(1)$ coset models ($\mathcal{N}=2$ Liouville theory) and $SU(2)/U(1)$ coset models ($\mathcal{N}=2$ minimal models) with appropriate GSO projections. We call such a construction a generalized (non-compact) Gepner construction. Let us discuss some simple examples.

\noindent {\bf 1) $A_{k-1}$ type ALE spaces}

We take $n=2$, and set $-r_1 = r_2 = k$, $r_3=r_4=2$. From the projective invariance, we can set $x_1= - \mu$ without loss of generality.\footnote{Note that we are considering a noncompact space, so the domain of the projective coordinate $x_1$ is in $\mathbb{C}^* \equiv \mathbb{C} - \{0\}$. } The resulting algebraic variety is given by \begin{align} x_2^k + x_3^2 + x_4^2 = \mu^k \label{deak} \end{align} in $\mathbb{C}^3$, which is nothing but the $A_{k-1}$ type ALE space with a deformation parametrized by $\mu$. On the other hand, the noncompact Gepner construction yields \begin{align} \frac{\Big[ {SL(2;\mathbb{R})_{k} \over U(1)} \times \frac{SU(2)_{k}}{U(1)} \Big]}{\mathbb{Z}_k} \end{align} because the massive theory with the quadratic superpotential $W(X) = X^2$ will decouple under the renormalization group flow. We now recognize that the resulting theory is the same as the near horizon limit of the $k$ NS5-brane solutions discussed in section \ref{sec:2-1}. This shows an equivalence between the near horizon limit of $k$ NS5-brane solutions and the $A_{k-1}$ type ALE spaces. They are indeed related to each other via T-duality. The deformation parameter $\mu$ in the ALE space corresponds to the separation of NS5-branes. We can easily generalize the construction to other ALE spaces with ADE singularities.

\noindent {\bf 2) Generalized conifolds}

We next consider the case of Calabi-Yau three-folds $(n=3)$. We take $r_2,r_3,r_4,r_5> 0$ with $\sum_{i=2}^5 r_i^{-1} > 1$, and set $r_1 = \frac{1}{1-\sum_{i=2}^5 r_i^{-1}} < 0$.
After fixing the projective invariance by eliminating $x_1$, the resultant Calabi-Yau space is the so-called generalized (deformed) conifold \begin{align} x_2^{r_2} + x_3^{r_3} + x_4^{r_4} + x_5^{r_5} = \mu \ \label{dgc} \end{align} in $\mathbb{C}^4$. Mathematically, we can regard it as a complex structure deformation of the Brieskorn-Pham type singularity (see section \ref{sec:2-2-3}). The Gepner construction leads to the orbifolded tensor products of $\mathcal{N}=2$ minimal models with one $SL(2;\mathbb{R})_{-r_1}/U(1)$ coset model. The simplest example is the case with $r_1=-1$ and $r_2 = r_3= r_4=r_5=2$. The geometry is the deformed conifold: \begin{align} x_2^2 + x_3^2 + x_4^2 + x_5^2 = \mu \ . \end{align} The noncompact Gepner construction is given by the $SL(2;\mathbb{R})_1/U(1)$ coset model with the level 1 parent current algebra. This is the famous Ghoshal-Vafa duality \cite{Ghoshal:1995wm}.

\noindent {\bf 3) ALE($A_{k-1}$) fibration over weighted projective spaces}

We finally consider the model with two negative charges: $n=3$, $r_1=-k(1+\frac{k_1}{k_2})$, $r_2 = -k(1+\frac{k_2}{k_1})$, $r_3 = k$, and $r_4=r_5=2$. The Landau-Ginzburg superpotential is given by \begin{align} W(X_i) = X_1^{-k(1+\frac{k_1}{k_2})} + X_2^{-k(1+\frac{k_2}{k_1})} + X_3^{k} \ . \end{align} By introducing the new variables $Z= \log X_1 + \log X_2$, $Y=kk_1\log X_1 - kk_2 \log X_2$ and $X^k = e^{kZ} X_3^k $, we can rewrite the Landau-Ginzburg superpotential as \begin{align} W = e^{-kZ}(X^k + e^{Y/k_1} + e^{-Y/k_2}) \ , \end{align} with the linear dilaton $\Phi = -\mathrm{Re} Z$. After integrating over $Z$, the topological path integral is localized along the locus\footnote{We have added the superpotential term $W_1^2 + W_2^2$ by hand to match the dimensionality.} \begin{align} e^{y/k_1} + e^{-y/k_2} + x^k + w_1^2 + w_2^2 = 0 \ , \end{align} which exhibits the structure of an ALE($A_{k-1}$) fibration over $\mathbb{WCP}^1(k_1,k_2)$.
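Each of the weight assignments used in the examples can be checked against the Calabi-Yau condition $\sum_i 1/r_i = 1$ stated below \eqref{nocv}; a short numeric sketch (the helper `is_calabi_yau` is ours):

```python
from fractions import Fraction

def is_calabi_yau(weights):
    # criticality condition sum_i 1/r_i = 1 for the hypersurface (nocv)
    return sum(Fraction(1) / Fraction(r) for r in weights) == 1

k = 5
# 1) A_{k-1} ALE space: (r_i) = (-k, k, 2, 2)
assert is_calabi_yau([-k, k, 2, 2])
# 2) deformed conifold: (r_i) = (-1, 2, 2, 2, 2)
assert is_calabi_yau([-1, 2, 2, 2, 2])
# 3) ALE(A_{k-1}) fibration over WCP^1(k_1, k_2)
k1, k2 = Fraction(2), Fraction(3)
r1 = -k * (1 + k1 / k2)
r2 = -k * (1 + k2 / k1)
assert is_calabi_yau([r1, r2, k, 2, 2])
```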
In the simplest case $k_1=k_2$, we obtain the ALE($A_{k-1}$) fibration over $\mathbb{CP}^1$. The geometry of the two $SL(2;\mathbb{R})/U(1)$ cosets and one $SU(2)/U(1)$ coset can be analysed in a similar way as in section \ref{sec:2-1}, and the result is given by the NS5-brane solution wrapped around $\mathbb{CP}^1$, where we have chosen $k_1=k_2=1$ for simplicity (see \cite{Hori:2002cd} for details). This is expected from the fact that the $A_{k-1}$ singularity is T-dual to flat $k$ NS5-branes, and we could perform the fiber-wise T-duality for the ALE($A_{k-1}$) fibration over $\mathbb{CP}^1$. The partition functions and elliptic genera of these noncompact Gepner models have been studied in \cite{Eguchi:2004yi,Eguchi:2004ik,Eguchi:2006tu}. In this section we have restricted ourselves to the Landau-Ginzburg construction where the theory is defined as (an orbifold of) the direct product of Landau-Ginzburg models with monomial superpotentials. Geometrically, they correspond to (deformations of) the Brieskorn-Pham type singularities. It is possible to construct more general Landau-Ginzburg orbifolds with generic polynomial superpotentials. The generalized models have potential applications to the singular loci of the $\mathcal{N}=2$ supersymmetric Yang-Mills theories (Argyres-Douglas points) and their deformations. The exact quantization of the world-sheet theory beyond the topological subsector, however, is a difficult task and we will not pursue these generalizations any further in this thesis. \subsubsection{singular limit and LST}\label{sec:2-2-2} In section \ref{sec:2-1}, we discussed that the superstring solution of $k$ coincident NS5-branes corresponds to the linear dilaton background, while the supersymmetric deformation (separation of NS5-branes in a ring-like manner) corresponds to the $SL(2;\mathbb{R})/U(1)$ coset background (i.e. the two-dimensional Euclidean black hole).
Here we would like to take a similar singular limit in the more general noncritical superstring backgrounds discussed in section \ref{sec:2-2-1}.\footnote{What we mean by ``noncritical" here is that the SCFTs involved do not necessarily possess an apparent 10-dimensional background, as is the case with the Gepner construction for compact Calabi-Yau spaces. In a narrower sense, we sometimes call a theory ``noncritical" when it possesses a Liouville direction.} It is particularly easy to see the singular limit if we start with the $\mathcal{N}=2$ Liouville description: it has a superpotential \begin{align} W(\Phi) = \mu e^{\frac{1}{\mathcal{Q}}\Phi} \ , \end{align} and the parameter $\mu$ directly corresponds to the deformation parameter appearing e.g. in \eqref{deak} and \eqref{dgc}. Thus the singular limit $\mu \to 0$ is equivalent to switching off the Liouville potential so that we are left with the linear dilaton theory. The duality between the $\mathcal{N}=2$ Liouville theory and the $SL(2;\mathbb{R})/U(1)$ coset theory then confirms the statement that the singular limits of the non-compact Gepner models correspond to replacing the $SL(2;\mathbb{R})/U(1)$ coset part by the $\mathcal{N}=2$ supersymmetric linear dilaton theory with the same central charge and the same asymptotic dilaton gradient. Let us formulate the proposal discussed above in a more precise way \cite{Giveon:1999zm}. We begin with the type II string theory on a singular Calabi-Yau variety $X^{2n}$ of complex dimension $n$, defined as the vicinity of a hypersurface singularity \begin{align} F(z_1,\cdots, z_{n+1}) = 0 \label{defeq} \end{align} in $\mathbb{C}^{n+1}$, where $F$ is a quasi-homogeneous polynomial on $\mathbb{C}^{n+1}$. This means that $F$ has degree $1$ under the $\mathbb{C}^{*}$ action: \begin{align} z_i \to \lambda^{r_i} z_i \ .
\label{Cac} \end{align} Now we can define a locally holomorphic $n$-form $\Omega$ as \begin{align} \Omega = \frac{\mbox{d} z_1\wedge \dots \wedge \mbox{d} z_n}{\partial F/\partial z_{n+1}} \ \end{align} on the patch $\partial F/\partial z_{n+1}\neq 0$. It can be extended to the other patches where $\partial F/\partial z_i \neq 0$ with similar expressions, and glued together to form a globally well-defined holomorphic $n$-form with charge $r_{\Omega} = \sum_i r_i -1$ under the $\mathbb{C}^{*}$ action \eqref{Cac}. The varieties $X^{2n}$ constructed in this way are Gorenstein\footnote{Gorenstein means that $X^{2n}-\{0\}$ admits a nowhere vanishing holomorphic $n$-form.} and equipped with a natural $\mathbb{C}^{*}$ action \eqref{Cac} by construction. We consider the type II string theory on $\mathbb{R}^{d-1,1}\times X^{2n}$ in the vicinity of the isolated singular point $y_0$ in the decoupling limit $g_s \to 0$. The proposed dual theory is the type II string theory on $\mathbb{R}^{d-1,1} \times \mathbb{R}_{\phi} \times \mathcal{N}$. Here $\mathcal{N}$ is the infrared limit of the sigma model on the manifold $\mathcal{N} = X^{2n}/\mathbb{R}_+$, where the division by $\mathbb{R}_+$ is the action on $z_i$ as \eqref{Cac} with $\lambda \in \mathbb{R}_+$. The infrared limit\footnote{We assume $r_{\Omega}>0$ so that $\mathcal{N}$ is Fano, meaning that the curvature of the Einstein metric on it is positive.} of the sigma model on $\mathcal{N}$ is given by a Landau-Ginzburg model with superpotential $F(Z_i)$ and an additional $\mathbb{S}^1$ circle.\footnote{As a CFT, they are not a simple direct product but an orbifold. We need to impose the GSO projection to preserve the target-space supersymmetry.} Here the $\mathbb{S}^1$ direction corresponds to the $U(1)_R$ symmetry associated with the residual $U(1)$ action \eqref{Cac} with $|\lambda| =1$.
The quotient space $\mathcal{N}/U(1)$ is equivalent to the Landau-Ginzburg model with superpotential $W= F(Z_i)$ from the standard Landau-Ginzburg / non-linear sigma model correspondence. The linear dilaton slope is determined by the total criticality condition of the string theory. This construction is equivalent to the one discussed above by turning off the $\mathcal{N}=2$ Liouville superpotential or $SL(2;\mathbb{R})/U(1)$ deformation. For later purposes, it is worthwhile studying the normalizability of such deformations. Consider the variation of the complex structure of $X^{2n}$: \begin{align} F(z_i) + \sum_a t_a A_a(z_i) = 0 \ . \end{align} Here $t_a$ are complex deformation parameters, and $A_a(z_i)$ are complex structure deformations of the defining equation \eqref{defeq}. The Kahler potential of the Weil-Petersson metric for such complex structure deformations is given by the formula \cite{Candelas:1990pi} \begin{align} K = - \log \int_{X^{2n}} \Omega \wedge \bar{\Omega} \ . \end{align} To discuss the normalizability of the perturbation associated with $A_a$, we have to evaluate \begin{align} \frac{\partial^2}{\partial t_a \partial \bar{t}_a} \Omega \wedge \bar{\Omega} \ . \label{metd} \end{align} This can be done by the simple scaling argument \cite{Gukov:1999ya}: if $A_a$ scales under \eqref{Cac} as $\lambda^{r_a}$, $t_a$ should scale as $\lambda^{1-r_a}$, so \eqref{metd} scales with a weight \begin{align} \omega_a = 2\left(\sum_{i}r_i + r_a - 2\right) \ . \end{align} The modes satisfying $r_a > 1-r_{\Omega}$ are non-normalizable deformations as $|z_i| \to \infty$ while the modes satisfying $r_a < 1-r_{\Omega}$ are normalizable deformations. The non-normalizable deformations should be regarded as boundary conditions we have to impose at infinity to define a theory. Different boundary conditions would give rise to different theories. On the other hand, the normalizable deformations should be regarded as fluctuating fields after the quantization. 
Their values can be varied within a given theory. As we will discuss later in section \ref{sec:3-1}, the normalizability of the deformations from the space-time viewpoint presented here is deeply connected with the normalizability of the corresponding operators in the world-sheet $\mathcal{N}=2$ linear dilaton theory. Indeed, one can regard this agreement as nontrivial support for the duality proposed in \cite{Giveon:1999zm} and reviewed here. As an example, let us consider a class of generalized conifolds defined by the hypersurface \begin{align} F(z_i) = H(z_1,z_2) + z_3^2 + z_4^2 \ \end{align} in $\mathbb{C}^4$. It can be regarded as an NS5-brane wrapped around the Riemann surface $H(z_1,z_2)= 0$, along the lines of the argument reviewed at the end of section \ref{sec:2-2-1}. We begin with the $A_{n-1}$ type Brieskorn-Pham singularity with $H(z_1,z_2) = z_1^n + z_2^2$, and consider perturbations of the form $z_1^a$ $(a=0,1,\cdots,n-2)$. The $U(1)_R$-charges are given by $r_{\Omega} = \frac{1}{n}+\frac{1}{2}$ and $r_a = \frac{a}{n}$. From the condition $r_a > 1-r_{\Omega}$, we conclude that the deformations with \begin{align} a > \frac{n}{2}-1 \label{normc} \end{align} are non-normalizable \cite{Gukov:1999ya,Giveon:1999zm}. We can also understand the normalizability of these deformations from the dual $\mathcal{N}=2$ supersymmetric four-dimensional field theory viewpoint by studying the Seiberg-Witten theory near the Argyres-Douglas points \cite{Argyres:1995jj,Argyres:1995xn,Eguchi:1996vu}. To conclude this section, we would like to revisit the question: what is the decoupling limit of the theory defined on the singularities \eqref{defeq}? We have reviewed the proposed {\it dual} string theory defined as (deformations of) the $\mathcal{N}=2$ linear dilaton theory coupled with the Landau-Ginzburg model. We here summarize the low-energy decoupled physics from the original NS5-brane construction in flat ten-dimensional Minkowski space.
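The bound \eqref{normc} follows from the quoted charges by simple arithmetic: $r_a > 1-r_\Omega$ with $r_\Omega = \frac{1}{n}+\frac{1}{2}$ and $r_a = \frac{a}{n}$ gives $a > \frac{n}{2}-1$. A numeric sketch (helper names are ours):

```python
from fractions import Fraction

def r_Omega(n):
    # weights of F = z1^n + z2^2 + z3^2 + z4^2 are (1/n, 1/2, 1/2, 1/2)
    return Fraction(1, n) + Fraction(3, 2) - 1

def non_normalizable(a, n):
    # the deformation z1^a carries charge r_a = a/n
    return Fraction(a, n) > 1 - r_Omega(n)

# the condition r_a > 1 - r_Omega reproduces the bound a > n/2 - 1
for n in range(2, 12):
    for a in range(0, n - 1):
        assert non_normalizable(a, n) == (a > Fraction(n, 2) - 1)
```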
The decoupled theory has a conventional name, ``little string theory (LST)" \cite{Losev:1997hx}.\footnote{See \cite{Aharony:1999ks} for an early review.} \begin{itemize} \item As a decoupled six-dimensional theory, it has $\mathcal{N}=(2,0)$ (type IIA) or $\mathcal{N}=(1,1)$ (type IIB) supersymmetry. The theory is non-local. \item The theories follow the ADE classification. \item The IR limit is the six-dimensional super Yang-Mills theory in type IIB and the six-dimensional interacting (2,0) SCFT in type IIA \cite{Seiberg:1997ax}. \item The BPS excitations include a string whose tension is set by the string scale $l_s$ (the little string), and the theory shows a Hagedorn-like thermodynamics with the inverse Hagedorn temperature $\beta_{\mathrm{Hg}} \sim 2\pi\sqrt{2k}$ (see section \ref{sec:2-4}). Most of these high-energy states are nonperturbative in nature. \end{itemize} After compactification (by wrapping NS5-branes on projective spaces for instance), we obtain four- (or two-) dimensional effective theories, which include the Seiberg-Witten theory near the Argyres-Douglas singularities, corresponding to the Calabi-Yau 3-fold singularities. The properties of these theories, known as LSTs, are less well understood than those of the field theories living on D-branes. However, the closed string dual theory is exactly quantizable in many cases, unlike the R-R backgrounds in the near horizon limit of D-branes. The study of $\alpha'$-exact information is an interesting subject in its own right, besides the application to the dual theories, and we will pursue this direction in the following sections. \subsubsection{obstruction for conical metrics}\label{sec:2-2-3} So far, we have assumed the existence of the Calabi-Yau varieties defined by the hypersurface singularity \eqref{defeq}. In the compact Calabi-Yau case, such as the variety defined in \eqref{nocv} with all $r_i$ positive, the Calabi-Yau theorem guarantees the existence of the unique Ricci-flat Kahler metric once the Calabi-Yau condition is satisfied.
The existence of the Calabi-Yau metric on the hypersurface singularities \eqref{defeq}, however, is an open problem (see \cite{SE} for a review). From the $\mathbb{C}^{*}$ action \eqref{Cac} on the complex variables $z_i$, it is natural to assume the conical metric \begin{align} \mbox{d} s^2_{X^{2n}} = \mbox{d} r^2 + r^2 \mbox{d} s^2_L \ ,\label{metc} \end{align} where $r\ge 0$ denotes the radial direction $\mathbb{R}_+$ and $\mbox{d} s^2_L$ is the Sasaki-Einstein metric of the link $L$ associated with the non-compact Calabi-Yau variety $X^{2n}-\{0\} = \mathbb{R}_+ \times L $. The Sasaki condition is equivalent to the statement that the total metric is Kahler, and the Einstein condition is equivalent to the statement that the total metric is Ricci flat. It turns out to be extremely difficult to give necessary and sufficient conditions for the existence of such a conical metric (or equivalently, of the Sasaki-Einstein metric on the link $L$), while infinitely many examples of explicit metrics have been constructed quite recently \cite{Gauntlett:2004yd,Cvetic:2005ft,Cvetic:2005vk}. For definiteness we restrict ourselves to the Brieskorn-Pham type singularities defined by the particular polynomial \begin{align} F(z_i) = \sum_{i=1}^{n+1} z_i^{a_i} \ . \label{bpsin} \end{align} The corresponding hypersurface singularities $X^{2n}$ are always isolated and Gorenstein. However, from the following physical reasoning, we believe that not every singularity possesses a conical metric. Assuming the existence of such a conical metric, we can compute the volume of the hypothetical Sasaki-Einstein link $L$ by the formula \cite{Bergman:2001qi} \begin{align} \mathrm{Vol}(L) = \frac{r_{\Omega}^n}{n^n\prod_{i=1}^{n+1}r_i} \mathrm{Vol}(\mathbb{S}^{2n-1}) \ , \end{align} where $\mathrm{Vol}(\mathbb{S}^{2n-1}) = \frac{2\pi^n}{(n-1)!}$.\footnote{We have assumed that the Reeb-vector (conformal $U(1)_R$-symmetry) is given by the natural $\mathbb{C}^{*}$ action \eqref{Cac}.
If this is not the case, we have to determine the ``correct" Reeb-vector by using the $Z$-minimization \cite{Martelli:2005tp,Martelli:2006yb} ($a$-maximization \cite{Intriligator:2003jj}) principle. } Via the AdS-CFT correspondence, the central charge $a$ of the dual SCFT living on D3-branes placed at the tip of the cone is related to the volume of the Sasaki-Einstein link $L$ \cite{Gubser:1998vd} as \begin{align} a \propto \frac{1}{\mathrm{Vol}(L)} \ . \end{align} On the other hand, the conjectured $a$-theorem states that $a$ is a decreasing function along the renormalization group flow from the UV to the IR. Geometrically speaking, relevant deformations of \eqref{bpsin} should increase the volume.\footnote{From the pure gravity viewpoint, this is a consequence of the weaker energy condition \cite{Freedman:1999gp}.} However, this statement is clearly violated in some explicit examples, such as the series $F = z_1^k + z_2^2 + z_3^2 + z_4^2$ with $k>4$, where $\mathrm{Vol}(k+1) > \mathrm{Vol}(k)$, contradicting the $a$-theorem. Recently, \cite{Gauntlett:2006vf} has given two mathematical obstructions to the existence of a conical Calabi-Yau metric for such varieties. {\bf The Bishop obstruction} For the existence of the conical Calabi-Yau metric \eqref{metc}, $\mathrm{Vol}(L) < \mathrm{Vol}(\mathbb{S}^{2n-1})$ is necessary. From the dual gauge theory viewpoint, the condition corresponds to the fact that by an appropriate Higgsing that decreases $a$, we can reach the $\mathcal{N}=4$ SYM theory. {\bf The Lichnerowicz obstruction} When $X^{2n}$ admits a holomorphic function with $U(1)_R$-charge $\lambda <1$, the conical Calabi-Yau metric does not exist. For the Brieskorn-Pham type singularities, this is equivalent to the statement that $r_{\Omega} \le n r_a$ for any deformation. From the dual gauge theory viewpoint, the condition corresponds to the unitarity bound of the dual operators for the deformations.
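The claimed violation for the series $F = z_1^k + z_2^2 + z_3^2 + z_4^2$ can be checked directly from the volume formula, which for this family reduces to $\mathrm{Vol}(L)/\mathrm{Vol}(\mathbb{S}^5) = (k+2)^3/(27k^2)$. A numeric sketch (helper names are ours):

```python
from fractions import Fraction

def vol_ratio(k, n=3):
    # Vol(L)/Vol(S^{2n-1}) for F = z_1^k + z_2^2 + ... + z_{n+1}^2
    weights = [Fraction(1, k)] + [Fraction(1, 2)] * n
    r_omega = sum(weights) - 1
    prod = Fraction(1)
    for r in weights:
        prod *= r
    return r_omega ** n / (Fraction(n) ** n * prod)

# closed form for n = 3
assert all(vol_ratio(k) == Fraction((k + 2) ** 3, 27 * k ** 2) for k in range(2, 10))
# the Bishop bound Vol(L) < Vol(S^5) holds for moderate k ...
assert all(vol_ratio(k) < 1 for k in range(2, 21))
# ... and is violated for larger k (the Bishop obstruction)
assert vol_ratio(21) > 1
# the volume decreases up to k = 4 but grows for k > 4, clashing with the a-theorem
assert vol_ratio(2) > vol_ratio(3) > vol_ratio(4) < vol_ratio(5) < vol_ratio(6)
```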
It can be shown that the Lichnerowicz obstruction also eliminates a possible violation of the $a$-theorem for the Brieskorn-Pham type singularities (see appendix \ref{Lic}). As an example, let us consider the Calabi-Yau four-fold defined by \begin{align} F = z_1^k + z_2^2 + z_3^2 + z_4^2 + z_5^2 = 0 \ . \label{fourc} \end{align} The conical Calabi-Yau metric only exists for $k=2$, and the other varieties are obstructed by the Lichnerowicz bound. Thus the $AdS_3\times L_7$ compactification of M-theory is only possible for $k=2$. This should be so because otherwise the $a$-theorem would be violated or the weaker energy condition would be spoiled. On the other hand, one can consider the two-dimensional string compactification on such a hypothetical conical Calabi-Yau manifold \eqref{fourc} and add $N_f$ fundamental strings in the noncompact $\mathbb{R}^{1,1}$ directions at the tip of the cone. Due to the gravitational backreaction, we can see that the near horizon geometry would be $AdS_3 \times \mathcal{N}$ with the constant string coupling $g_s^2 \sim \frac{1}{N_f}$, where $\mathcal{N}$ has been introduced in section \ref{sec:2-2-2} as the infrared limit of the sigma model on the hypothetical Sasaki-Einstein link $L$ associated with \eqref{fourc}.
As discussed in this section, the Sasaki-Einstein link $L$ is obstructed, but the string theory on $AdS_3 \times \mathbb{S}^1 \times (LG(F)\sim \mathcal{N}/U(1))$ has a well-defined perturbative description based on the level $k$ $SL(2;\mathbb{R})$ current algebra times the Landau-Ginzburg orbifold defined by $F(Z_i)$ (or $\mathcal{N}=2 $ minimal models) \cite{Aharony:1999ti}.\footnote{This does not imply the existence of such a Sasaki-Einstein metric because we are sitting at the Gepner-point of the sigma model, where the geometrical description is questionable due to large $\alpha'$ corrections.} Interestingly, unlike the $a$-theorem associated with the M-theory compactification on $AdS_3 \times \mathcal{N}$, the $c$-theorem for the dual CFT of $AdS_3 \times \mathbb{S}^1 \times (LG(F)\sim \mathcal{N}/U(1))$ is always satisfied because the dual CFT central charge, which is determined from the curvature of $AdS_3 \sim SL(2;\mathbb{R})$, is given by $c=6kQ_1$.\footnote{For instance, $k=\frac{n}{n+1}$ for $A_{n-1}$ type Calabi-Yau four-folds.} In a similar fashion, the near horizon geometry of every Brieskorn-Pham singularity admits the noncritical string construction based on the non-compact Gepner models presented in this section, irrespective of the above-mentioned obstructions. It would be interesting to understand the obstructions to the existence of conical metrics for such singularities from the noncritical string theory viewpoint. For instance, we can translate the Lichnerowicz obstruction into the claim that every relevant deformation up to $z_i^{a_i-3}$ should be normalizable.
\subsection{Classical two-dimensional black hole}\label{sec:2-3} In section \ref{sec:2-1}, we introduced the two-dimensional black hole geometry as a near horizon limit of the black NS5-brane solutions in the type II superstring theory: \begin{align} \mbox{d} s^2 = k\alpha'(- \tanh^2\rho \, \mbox{d} t^2 + \mbox{d} \rho^2) \ , \label{tbhm} \end{align} with the nontrivial dilaton gradient $e^{2\Phi} = \frac{k}{\mu \cosh^2 \rho}$.\footnote{We have rescaled the normalization of $t$ for simplicity of notation. We will sometimes do this in the following without notice, for it is convenient to stick to the $2\pi$ periodicity in the Euclidean time direction after the Wick rotation.} It has been claimed in the literature that the background is perturbatively $\alpha'$-exact in the type II superstring theory, while the bosonic two-dimensional black hole might receive perturbative world-sheet $\alpha'\sim \frac{1}{k}$ corrections \cite{Tseytlin:1992ri}. We will discuss the physical importance of the nonperturbative corrections later in section \ref{sec:3-4}. In this subsection, we review the classical geometry of the two-dimensional black hole. First of all, the metric \eqref{tbhm} has an event horizon at $\rho = 0$, but the $(t,\rho)$ coordinates do not cover the whole causal region of the black hole. One can maximally extend the geometry \eqref{tbhm} by introducing the Kruskal coordinates \begin{align} u = \sinh\rho e^{t} \ , \ \ v= -\sinh\rho e^{-t} \ , \ \ \mbox{d} s^2 = -2k\frac{\mbox{d} u\mbox{d} v}{1-uv} \ , \ \ e^{2\Phi} = \frac{k}{\mu(1-uv)} \ . \end{align} Note that in two dimensions it is always possible to introduce the conformal coordinates $(u,v)$ locally, with the conformally flat metric $\mbox{d} s^2 = f(u,v)\mbox{d} u\mbox{d} v$. In these coordinates, the event horizon is located at $uv = 0$, and inside the horizon we encounter a singularity at $uv=1$, where the curvature and the dilaton diverge. The Kruskal diagram can be found in figure \ref{fig:kruscal}.
The causal region of the Lorentzian black hole background has four boundaries: the past and future horizons ${\cal H}^\pm$, and the past and future asymptotic infinities ${\cal I}^\pm$. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\linewidth,keepaspectratio,clip]{kruscal.eps} \end{center} \caption{Kruskal diagram for the two-dimensional black hole system.} \label{fig:kruscal} \end{figure} We can also study the global structure of the metric by using the Penrose coordinates, and one can write down the Penrose diagram (see figure \ref{fig:bh}) of the two-dimensional black hole system, which looks exactly the same as that of the four-dimensional Schwarzschild black hole (upon neglecting the $\mathbb{S}^2$ angular directions). This is one of the motivations to study the two-dimensional black hole system as an exactly solvable toy model for the four-dimensional Schwarzschild black hole. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\linewidth,keepaspectratio,clip]{bh.eps} \end{center} \caption{Penrose diagram for the two-dimensional black hole.} \label{fig:bh} \end{figure} The geodesic motion of a particle minimally coupled to the geometry, \begin{align} S = \int \mbox{d} s \ = \int \mbox{d}\tau \sqrt{\frac{\dot{u}\dot{v}}{1-uv}} \ , \end{align} where the dot denotes the derivative with respect to $\tau$, is given by \begin{align} \begin{cases} \ddot{u}(1-uv) = -v\dot{u}^2 \cr \ddot{v}(1-uv) = -u \dot{v}^2 \ . \end{cases} \end{align} Later, we will compare this with the string motion and the D-particle motion, both of which show quite distinct properties. Finally, we would like to study the ``mass" of the two-dimensional black hole. For this purpose, it is convenient to use the Schwarzschild(-like) coordinates \eqref{blackNS5}: \begin{align} \mbox{d} s^2 = -\left(1-\frac{2M}{r}\right)\mbox{d} t^2 + \frac{k}{1-\frac{2M}{r}}\frac{\mbox{d} r^2}{r^2} \ , \ \ e^{-2\Phi} = r \ , \end{align} where $ r_0 = 2M$ is the location of the horizon.
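The quoted geodesic equations follow because, for a conformal metric $\mbox{d}s^2 \propto f(u,v)\,\mbox{d}u\,\mbox{d}v$, the only nonvanishing Christoffel symbols are $\Gamma^u_{uu}=\partial_u\ln f$ and $\Gamma^v_{vv}=\partial_v\ln f$; with $f=(1-uv)^{-1}$ this gives $\Gamma^u_{uu} = v/(1-uv)$. A small finite-difference sketch of this identity (helper names are ours):

```python
import math

def f(u, v):
    # conformal factor of the Kruskal metric, ds^2 ~ du dv / (1 - uv)
    return 1.0 / (1.0 - u * v)

def dlnf_du(u, v, eps=1e-6):
    # central finite difference of ln f with respect to u
    return (math.log(f(u + eps, v)) - math.log(f(u - eps, v))) / (2 * eps)

# Gamma^u_{uu} = d_u ln f = v/(1-uv), which is exactly the coefficient in
# the geodesic equation u''(1-uv) = -v u'^2 (and similarly for v).
for (u, v) in [(0.1, 0.2), (0.3, -0.4), (-0.5, 0.6)]:
    assert abs(dlnf_du(u, v) - v / (1.0 - u * v)) < 1e-6
```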
From this expression, it is clear that we can shift the value of $M$ multiplicatively, $M\to aM$, by rescaling $r$ as $r \to r/a$. Therefore, the physical meaning of the ``mass'' of the two-dimensional black hole is solely determined by the value of the string coupling constant (dilaton) at the horizon $r=2M$. This corresponds to the fact that the mass parameter $M$ is related to the world-sheet $\mathcal{N}=2$ Liouville cosmological constant $\mu$ in the dual $\mathcal{N}=2$ Liouville theory, where $\mu$ can be shifted by a shift of the Liouville coordinate (see section \ref{sec:3-4} for more about the duality). Because of this property, the Hawking temperature of the two-dimensional black hole is independent of $M$, unlike the case of higher-dimensional black holes. Similarly, many features of the string theory in the two-dimensional black hole background, such as scattering amplitudes, show a rather trivial dependence on $M$.\footnote{It is customary to set $M=1$, as we will do in most parts of the thesis.} This is related to the Knizhnik-Polyakov-Zamolodchikov (KPZ) scaling law of the dual Liouville theory \cite{Knizhnik:1988ak}. \subsection{Wick rotation: thermodynamic properties}\label{sec:2-4} In this section, we would like to study thermodynamic properties of the two-dimensional black hole (and hence of the LST on the black NS5-branes).
It is a well-known but profound fact that black hole systems show thermodynamic properties \cite{Bekenstein:1973ur,Bardeen:1973gs}: \begin{itemize} \item Surface gravity $\kappa$ (temperature $T$) is constant over the horizon of a stationary black hole (the zeroth law: $T=\frac{\kappa}{2\pi}$) \item $\mbox{d} M = \frac{1}{8\pi} \kappa \mbox{d} A + \Omega \mbox{d} Q$ (the first law: $S= \frac{A}{4}$) \item $\delta A \ge 0$ in any physical process (the second law) \item It is impossible to achieve $\kappa = 0$ by any physical process (the third law) \end{itemize} Here $M$ is the mass of the black hole, $A = 4S$ is the area of the event horizon ($=$ entropy), $Q$ is the charge, and $\Omega$ is its chemical potential. One of the biggest motivations to study quantum gravity, such as string theory, is to understand the black hole thermodynamics from the microscopic viewpoint. Let us begin with the temperature of the two-dimensional black hole. One convenient way to compute the Hawking temperature of black hole systems \cite{Hawking:1974sw} is to use the Euclidean path integral formalism \cite{Gibbons:1976ue}. In our case, we can study the Wick rotation (i.e. $ t\to i\tau_E$) of the Lorentzian two-dimensional black hole \eqref{tbhm} (with $t$ rescaled as mentioned before): \begin{align} \mbox{d} s^2_E = \tanh^2\rho \, \mbox{d} \tau_E^2 + k\alpha' \mbox{d} \rho^2 \ . \label{tbhmE} \end{align} To avoid a conical singularity at the origin $\rho=0$, we have to set the periodicity of the Euclidean time direction to $\beta_{\rm Hw} = 2\pi\sqrt{k\alpha'}$: $\tau_E \sim \tau_E + \beta_{\rm Hw}$. In the Euclidean path integral formulation, we regard this periodicity as the inverse of the Hawking temperature of the black hole, $T_{\rm Hw} = \frac{1}{\beta_{\rm Hw}} = \frac{1}{2\pi\sqrt{k\alpha'}}$, because in the Matsubara formalism, the periodicity of the Euclidean time corresponds to the inverse temperature.
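The periodicity can be extracted mechanically: near the tip, $\tanh^2\rho \approx \rho^2$, so in terms of the proper radius $R = \sqrt{k\alpha'}\,\rho$ the metric becomes $\mbox{d} R^2 + R^2\,\mbox{d}\big(\tau_E/\sqrt{k\alpha'}\big)^2$, which is smooth flat space only if $\tau_E/\sqrt{k\alpha'}$ has period $2\pi$. A sympy sketch of this expansion:

```python
import sympy as sp

rho, k, ap = sp.symbols('rho k alphap', positive=True)  # ap stands for alpha'

# leading behaviour of g_{tau tau} = tanh^2(rho) near the tip rho = 0
lead = sp.series(sp.tanh(rho)**2, rho, 0, 4).removeO()  # = rho**2

# smoothness at the tip then fixes the Euclidean time period,
# beta = 2 pi sqrt(k alpha'), and hence the Hawking temperature
beta_Hw = 2*sp.pi*sp.sqrt(k*ap)
T_Hw = 1/beta_Hw
```

Note that $T_{\rm Hw}$ depends only on the level $k$, not on any mass parameter, in accord with the discussion of the previous subsection.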
Note that in the large $k$ semi-classical limit, we have a vanishing Hawking temperature, so that the back-reaction associated with the Hawking radiation is negligible and the black hole geometry is infinitely long-lived (i.e. eternal). As we mentioned in section \ref{sec:2-3}, it is also interesting to note that the Hawking temperature does not depend on the mass $m$ of the two-dimensional black hole. In the context of the dual LST defined in section \ref{sec:2-3}, we will regard this temperature as the (non-perturbative) Hagedorn temperature of the LST. There are several derivations of the Hawking temperature other than the Euclidean path integral method. Recently, \cite{Robinson:2005pd,Iso:2006wa} proposed a new derivation based on the gravitational anomaly in the vicinity of the event horizon.\footnote{See also \cite{Christensen:1977jc} for a derivation based on the trace anomaly.} We briefly review their derivation focusing on the two-dimensional case (see \cite{Iso:2006ut,Murata:2006pt,Vagenas:2006qb,Setare:2006hq} for related works). Let us take the very near-horizon limit (Rindler limit)\footnote{Although there is nothing wrong with taking the Rindler limit in general relativity, the limit is a little bit subtle in string theory because string theory introduces a ``minimal size'' (string scale) to the geometry. In our example, the central charges of the original two-dimensional black hole and its very near-horizon limit are different, so we need an extra compensation of the central charge in order to preserve the criticality condition. Since we are only interested in the classical thermodynamics, we will neglect this subtlety for the time being. See also the discussion of the stretched horizon of the two-dimensional black holes in section \ref{sec:4}.} of the two-dimensional black hole: \begin{align} \mbox{d} s^2 = -\frac{2(r-r_0)}{\sqrt{k\alpha'}} \mbox{d} t^2 + \frac{\sqrt{k\alpha'}}{2(r-r_0)} \mbox{d} r^2 \ , \ \ \Phi = \mathrm{const} \ .
\end{align} Now let us suppose we neglect the classically irrelevant in-falling modes of any scalar field propagating in the vicinity of the horizon $r=r_0$. The massless scalar fields with this boundary condition are effectively chiral, so they will show a gravitational anomaly: \begin{align} \nabla_\mu T^{\mu}_{\ \nu} = \frac{1}{\sqrt{-g}}\partial_\mu N^\mu_{\ \nu} \end{align} with the explicit component expression \begin{align} N_{t}^t = N_{r}^r = 0 \ , \ \ N_t^r = \frac{1}{192\pi} \frac{4}{k\alpha'} \ , \ \ N_{r}^t = -\frac{1}{192\pi (r-r_0)^2} \ . \end{align} In particular, it yields a pure flux contribution \begin{align} \Phi = N_t^r|_{r=r_0} = \frac{1}{192\pi} \frac{4}{k\alpha'} \ . \label{fl} \end{align} To cancel the gravitational anomaly, we need a quantum contribution that can be attributed to the Hawking radiation from the black hole. The black body radiation with the temperature $T_{\rm Hw}$ gives rise to the flux \begin{align} \Phi = \frac{\pi}{12}T^2_{\rm Hw} \ , \label{flt} \end{align} and the comparison between \eqref{fl} and \eqref{flt} establishes the Hawking temperature $T_{\rm Hw} = \frac{1}{2\pi\sqrt{k\alpha'}}$. We will later see similar effects related to the choice of boundary conditions of the wavefunction at the horizon when we study D-brane motions in the two-dimensional black hole geometries. Let us move on to the other thermodynamic quantities. Since the temperature does not depend on the mass of the two-dimensional black hole, we see that the black hole entropy is given by \begin{align} S(m) = \beta_{\rm Hw} m = 2\pi\sqrt{\alpha' k} m \ . \label{entr} \end{align} In higher dimensions, we could identify the entropy of the black hole with the area of the event horizon (i.e. the Bekenstein formula $S=\frac{A}{4}$), but in the two-dimensional space-time, the event horizon is just a point and we cannot apply the Bekenstein formula. Instead, we have defined the entropy through the thermodynamic relation $\beta = \frac{\partial S}{\partial m} $.
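The matching of \eqref{fl} and \eqref{flt} is a one-line consistency check, $\frac{1}{192\pi}\frac{4}{k\alpha'} = \frac{\pi}{12}\big(\frac{1}{2\pi\sqrt{k\alpha'}}\big)^2$; in sympy:

```python
import sympy as sp

k, ap = sp.symbols('k alphap', positive=True)  # ap stands for alpha'

flux_anomaly = sp.Rational(1, 192)/sp.pi * 4/(k*ap)   # anomaly-induced flux
T_Hw = 1/(2*sp.pi*sp.sqrt(k*ap))                      # Hawking temperature
flux_thermal = sp.pi/12 * T_Hw**2                     # black body flux

mismatch = sp.simplify(flux_anomaly - flux_thermal)   # vanishes identically
```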
This formula predicts the high-energy density of states of the LST. Assuming a microscopic explanation of the black hole entropy \eqref{entr} from the LST, the density of states of the LST should be given by \begin{align} \rho(E) \sim e^{2\pi\sqrt{\alpha' k} E } \ , \end{align} in the high energy limit $E\to \infty$. \newpage \sectiono{Two-dimensional Black Hole: CFT Viewpoint}\label{sec:3} In this section, we review the two-dimensional black hole from the exactly solvable CFT viewpoint. We begin with the Euclidean version of the two-dimensional black hole and then move on to the Lorentzian two-dimensional black hole via an appropriate Wick rotation. This is because the Euclidean version is much better understood than the Lorentzian counterpart. The organization of this section is as follows. In section \ref{sec:3-1}, we begin with the classical geometries for the $SL(2;\mathbb{R})/U(1)$ coset model that yields an exact CFT model for the two-dimensional black hole system. In section \ref{sec:3-2}, we review the Euclidean spectrum of the $SL(2;\mathbb{R})/U(1)$ coset model. In section \ref{sec:3-3}, we deal with the Lorentzian case in detail. In section \ref{sec:3-4}, we comment on the duality between the $SL(2;\mathbb{R})/U(1)$ coset model and the $\mathcal{N}=2$ Liouville theory and discuss implications of the associated winding tachyon condensation. \subsection{Classical geometries for $SL(2;\mathbb{R})/U(1)$ coset}\label{sec:3-1} From the world-sheet viewpoint, the reason why we are interested in the two-dimensional black hole system is that we can quantize the string theory on it by using the $SL(2;\mathbb{R})/U(1)$ coset CFT. In this subsection, we would like to give an overview of the correspondence between the $SL(2;\mathbb{R})/U(1)$ coset model and the two-dimensional black hole system from the gauged Wess-Zumino-Novikov-Witten (WZNW) model construction \cite{Witten:1991yr,Elitzur:1991cb,Mandal:1991tz}.
It is possible to define a coset CFT such as the $SL(2;\mathbb{R})/U(1)$ model purely from the algebraic viewpoint (at least at the level of the left-right chiral $SL(2;\mathbb{R})/U(1)$ representations: the most difficult point is to construct the modular invariant combinations), but we would like to begin with the Lagrangian construction based on \cite{Witten:1991yr}. This is because the construction directly gives the geometric interpretation of the model as the non-linear sigma model on the two-dimensional black hole in the semi-classical limit $(k \to \infty)$. The path integral formulation based on the Lagrangian can also be used to derive the (formally) modular invariant partition function of the Euclidean two-dimensional black hole \cite{Hanany:2002ev} (see appendix \ref{part}). The ungauged WZNW model for a general Lie group $G$ has the following action: \begin{align} S_{WZNW}(g) = \frac{\kappa}{8\pi} \int_{\Sigma} \mbox{d}^2 x \sqrt{|\gamma|}\gamma^{ij} \mathrm{tr} (g^{-1}\partial_i g g^{-1} \partial_j g) + i\kappa\Gamma(g) \ . \label{WZWa} \end{align} The Wess-Zumino term $\Gamma(g)$ is given by \begin{align} \Gamma(g) = \frac{1}{12\pi} \int_B \mathrm{tr} (g^{-1}\mbox{d} g\wedge g^{-1}\mbox{d} g\wedge g^{-1}\mbox{d} g ) \ , \end{align} where $B$ is a three-dimensional manifold whose boundary is $\Sigma$, and $\kappa$ denotes the level of the current algebra realized by the WZNW model. When the Lie group $G$ is compact, the level $\kappa$ should be quantized so that the Wess-Zumino term contributes to the path integral unambiguously for an arbitrary choice of $B$. In our case, however, since the Lie group is non-compact and $H^3(SL(2;\mathbb{R}),\mathbb{R}) = 0$, the quantization condition on the level $\kappa$ is unnecessary. In the following, we take $G$ to be $SL(2;\mathbb{R})$. The action \eqref{WZWa} possesses a global $SL(2;\mathbb{R})\times SL(2;\mathbb{R})$ symmetry $g \to a gb^{-1}$, with $a,b \in SL(2;\mathbb{R})$.
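In the generator basis used below, $T^3 = \frac{1}{2}\sigma_2$, $T^{\pm} = \pm\frac{1}{2}(\sigma_3 \pm i\sigma_1)$, the finite-dimensional commutation relations $[T^3, T^{\pm}] = \pm T^{\pm}$ and $[T^+, T^-] = -2T^3$ mirror the single-pole terms of the current-algebra OPEs. A quick matrix check in sympy:

```python
import sympy as sp

# Pauli matrices
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])

# sl(2,R) generators in the basis used for the current algebra
T3 = s2/2
Tp = (s3 + sp.I*s1)/2
Tm = -(s3 - sp.I*s1)/2

def comm(A, B):
    """Matrix commutator [A, B]."""
    return A*B - B*A

# [T3, T+-] = +- T+- and [T+, T-] = -2 T3
```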
Quantum mechanically, it is elevated to a current algebra with level $\kappa$: the chiral current \begin{align} j^A(z) = \kappa \mathrm{Tr}(T^A \partial g g^{-1}) \ , \end{align} with $T^3= \frac{1}{2}\sigma_2$, $T^{\pm} = \pm \frac{1}{2}(\sigma_3 \pm i\sigma_1)$, satisfies the OPE of the affine $\widehat{SL(2;\mathbb{R})_{\kappa}}$ current algebra \begin{align} \begin{cases} j^3(z)j^3(0) \sim -\frac{\kappa}{2z^2} \cr j^3(z) j^{\pm}(0) \sim \pm \frac{1}{z}j^{\pm}(0) \cr j^+(z)j^{-}(0) \sim \frac{\kappa}{z^2} - \frac{2}{z}j^{3}(0) \end{cases} \ . \end{align} The bosonic $SL(2;\mathbb{R})$ WZNW model has the central charge $c = \frac{3\kappa}{\kappa-2}$. We will gauge an anomaly-free subgroup of the global symmetry of the $SL(2;\mathbb{R})$ WZNW model to obtain the Lagrangian formulation of the coset CFT. Let us first begin with the Euclidean coset. The $SL(2;\mathbb{R})$ WZNW model has a negative-signature direction in $J^3 \sim i\sigma_2$ and the target space is a Lorentzian manifold. We gauge the (compact) $U(1)$ subgroup generated by \begin{align} \delta g = \epsilon (i\sigma_2 \cdot g + g \cdot (i\sigma_2)) \ , \end{align} or by setting \begin{align} a = b^{-1} = h= \begin{pmatrix} \cos\epsilon & \sin\epsilon \\ -\sin\epsilon & \cos\epsilon \end{pmatrix} \ . \label{gauget} \end{align} To promote the global axial symmetry $g \to hgh $ to a gauge symmetry, we have to introduce the gauge connection $A_i$ that transforms as $\delta A_i = -\partial_i \epsilon$ under the gauge transformation \eqref{gauget} with a space-dependent gauge parameter $\epsilon(x_i)$. The covariantized action reads \begin{align} S_{\mathrm{gauged}} &= S_{\mathrm{WZNW}}(g) + \cr &+ \frac{\kappa}{2\pi} \int \mbox{d}^2z \bar{A} \mathrm{Tr}\left(i\sigma_2 g^{-1}\partial g\right) + A\mathrm{Tr}\left(i\sigma_2 \, \bar{\partial} g \, g^{-1}\right) + A\bar{A}\left(-2 + \mathrm{Tr}\left(i\sigma_2 g i\sigma_2 g^{-1}\right) \right) \ .
\label{gWZW} \end{align} We call this gauged WZNW model the $SL(2;\mathbb{R})^{(A)}/U(1)$ axial coset. To obtain the classical geometry of the $SL(2;\mathbb{R})^{(A)}/U(1)$ coset CFT, we will integrate out the gauge field by fixing the gauge\footnote{In terms of the Euler angle parametrization (see appendix \ref{a-1-2}), $g = e^{i\sigma_2 \frac{t-\phi}{2}} e^{r\sigma_1}e^{i\sigma_2\frac{t+\phi}{2}}$ and we set $t=0$, where $\phi = \theta - \frac{\pi}{2}$. It is clear that this gauge fixing is always possible and unique.} \begin{align} g = \cosh r + \sinh r \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix} \ . \end{align} The resulting sigma model for the gauge fixed coordinates $(r,\theta)$ is given by the action \begin{align} S = \frac{\kappa}{2\pi} \int \mbox{d}^2z \left(\partial r \bar{\partial} r + \tanh^2 r \partial \theta \bar{\partial} \theta \right) \ , \label{axcea} \end{align} with the dilaton gradient $e^{2\Phi} = \frac{k}{\mu \cosh^2 r}$, which originates from the one-loop determinant factor for the gauge field $A_i$. The geometry one reads off from the sigma model action is the Euclidean two-dimensional black hole we have introduced in section \ref{sec:2}. In the bosonic coset model, we expect perturbative (and nonperturbative) $\alpha'$ corrections to this gauge-fixing procedure, and the corrected sigma model was proposed in \cite{Dijkgraaf:1992ba}. In the supersymmetric Kazama-Suzuki coset, it is believed that there are no perturbative $\alpha'$ corrections to the metric. Nonperturbative corrections, which will be reviewed in section \ref{sec:3-4}, are present, however, and they are one of the key elements to understand the ``black hole - string transition''. The Lorentzian coset is obtained by gauging the non-compact subgroup \begin{align} \delta g = \epsilon\left(\sigma_3 g + g \sigma_3\right) \ .
\end{align} We fix the gauge by setting \begin{align} g = \begin{pmatrix} a & u \\ -v & a \end{pmatrix} \end{align} with the determinant constraint $uv = 1- a^2$. The gauge fixing condition is valid for $1-uv>0$, and we can see that $u$ and $v$ are gauge invariant coordinates. With this gauge fixing condition,\footnote{Strictly speaking, the coset is a double cover of the $(u,v)$ plane, where the two sheets are distinguished by the sign of $a$. We will neglect this small subtlety throughout the thesis.} the target space is spanned by the $(u,v)$ plane. After integrating out the gauge field $A_i$, the resulting sigma model is given by \begin{align} S = -\frac{\kappa}{4\pi} \int \mbox{d}^2x \sqrt{|\gamma|}\frac{\gamma^{ij}\partial_i u\partial_j v}{1-uv} \ , \end{align} which reproduces the classical two-dimensional black hole system discussed in section \ref{sec:2}. Here we have used a Lorentzian signature world-sheet so that the sigma model with a Lorentzian signature target space is well-defined. In this thesis, we mainly focus on the supersymmetric generalization of the two-dimensional black hole based on the supersymmetric $SL(2;\mathbb{R})_k/U(1)$ coset model. The starting point is the bosonic $SL(2;\mathbb{R})_\kappa /U(1)$ coset model with the level $\kappa = k+2$ bosonic current algebra. In addition to the bosonic action \eqref{gWZW}, we introduce the fermionic part: \begin{align} S_f = \frac{1}{2\pi} \int \mbox{d}^2z \left(\psi^+(\bar{\partial}-\bar{A})\psi^- + \psi^- (\bar{\partial}+\bar{A}) \psi^{+} + \tilde{\psi}^+ (\partial -A) \tilde{\psi}^- + \tilde{\psi}^-(\partial +A)\tilde{\psi}^+ \right) \ , \end{align} with the OPE $\psi^+(z)\psi^-(0) \sim 1/z$, $\psi^{\pm}(z)\psi^{\pm}(0) \sim 0$. Let us concentrate on the Euclidean case for definiteness. From the Kazama-Suzuki construction, we can realize the $\mathcal{N}=2$ superconformal symmetry on the supersymmetric $SL(2;\mathbb{R})/U(1)$ coset model.
The explicit realization is given by \begin{align} T(z) & = \frac{1}{k}(j^{1}j^{1} +j^2j^2) - \frac{1}{2}(\psi^+\partial \psi^- - \partial \psi^+ \psi^-) \cr G^{\pm}(z) &= \frac{1}{\sqrt{k}} \psi^{\pm} j^{\mp} \cr J(z) & = \psi^+\psi^- + \frac{2}{k}(j^{3} + \psi^+\psi^-) \ , \end{align} whose central charge is given by $c = 3(1+\frac{2}{k}) $. The gauging current is defined by $J^3 = j^3 + \psi^+ \psi^-$, which commutes with all the elements of the $\mathcal{N}=2$ superalgebra. The fermionic part of the Lorentzian case is obtained by the analytic continuation, formally replacing $\psi^+ = \frac{1}{\sqrt{2}}(\psi^1 + i \psi^2) \to \frac{1}{\sqrt{2}}(\psi^1 + \psi^3)$ and $\psi^- = \frac{1}{\sqrt{2}}(\psi^1 - i \psi^2) \to \frac{1}{\sqrt{2}}(\psi^1 - \psi^3)$. From the path integral viewpoint, instead of treating the (Euclidean) $SL(2;\mathbb{R})^{(A)}/U(1)$ coset, it is more convenient to study the equivalent description based on the $\mathbb{H}_3^+/\mathbb{R}$ coset model. The $\mathbb{H}_3^+ = SL(2;\mathbb{C})/SU(2)$ model is defined by the sigma model on the upper sheet \begin{align} g =\begin{pmatrix} a & u \\ \bar{u} & b \end{pmatrix} = \begin{pmatrix} e^{\phi} & e^{\phi}\bar{\gamma} \\ e^{\phi}\gamma & e^{\phi}\gamma\bar{\gamma} + e^{-\phi} \end{pmatrix} \ , \end{align} where we have introduced a real field $\phi$ and a complex field $\gamma$ with its complex conjugate $\bar{\gamma}$. The sigma model has the action \begin{align} S = -\frac{\kappa}{2\pi} \int \mbox{d}^2z \left(\partial\phi \bar{\partial}\phi + e^{2\phi}\partial\gamma\bar{\partial}\gamma \right) \ . \end{align} The model has a positive definite action and the path integral is well-defined (unlike the ungauged $SL(2;\mathbb{R})$ WZNW model).\footnote{However, the model has an imaginary $H_3$ flux, so the physical interpretation is unclear.} The two-dimensional Euclidean black hole is obtained by axially gauging the noncompact $U(1)$ direction generated by $\sigma_2$ (e.g.
one can choose a gauge $a = b$ and obtain the metric $ds^2 = \kappa\frac{dud\bar{u}}{1+|u|^2}$). This construction has the advantage that the parent sigma model has a well-defined Euclidean path integral, while the $SL(2;\mathbb{R})/U(1)$ coset does not, because its parent WZNW model has a Lorentzian signature. So far, we have studied the axial coset of the $SL(2;\mathbb{R})$ WZNW model, but it is also possible to gauge the vector symmetry \begin{align} \delta g = \epsilon (i\sigma_2 \cdot g - g \cdot (i\sigma_2)) \ . \end{align} This symmetry has a fixed point, and the corresponding effective action \begin{align} S = \frac{\kappa}{2\pi} \int \mbox{d}^2z \left(\partial \rho \bar{\partial} \rho + \frac{1}{\tanh^2 \rho} \partial \tilde{\theta} \bar{\partial} \tilde{\theta} \right) \ , \label{efftru} \end{align} which is also known as the trumpet model, has a singularity at $\rho=0$. However, from the algebraic coset viewpoint, there is no singularity at all: we have just replaced the right-moving $\bar{J}_3$ current of the $SL(2;\mathbb{R})$ WZNW model with $-\bar{J}_3$. In order to specify the model, we have to determine the periodicity of the variable $\tilde{\theta}$ in the effective action \eqref{efftru}. The angular variable $\theta$ in the cigar geometry has a natural periodicity $2\pi$ coming from the $SL(2;\mathbb{R})$ WZNW model. In the trumpet case, there is no a priori natural periodicity of $\tilde{\theta}$ because it corresponds to a non-contractible loop in $SL(2;\mathbb{R})$, and any periodicity is allowed if we study the universal cover of $SL(2;\mathbb{R})$. In terms of the Euler angle parametrization (see appendix \ref{a-1-2}), $g = e^{i\sigma_2 \frac{t-\phi}{2}} e^{r\sigma_1}e^{i\sigma_2\frac{t+\phi}{2}}$ and we set $\phi=0$. In this sense, the natural periodicity for $t = \tilde{\theta}$ is $2\pi$. We {\it define} the $SL(2;\mathbb{R})^{(V)}/U(1)$ vector coset model to have $2\pi$ periodicity in $\tilde{\theta}$.
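The singular behaviour at $\rho=0$ is easily seen from the curvature. For a two-dimensional metric $\mbox{d} s^2 = \mbox{d}\rho^2 + f(\rho)^2\,\mbox{d}\theta^2$ the Ricci scalar is $R = -2f''/f$; the cigar ($f = \tanh\rho$) is smooth at the tip, while the trumpet ($f = 1/\tanh\rho$) has a curvature singularity there. A sympy sketch (with the overall factor of the level stripped off, which only rescales $R$):

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)

def ricci_scalar(f):
    """Ricci scalar R = -2 f''/f for ds^2 = drho^2 + f(rho)^2 dtheta^2."""
    return sp.simplify(-2*sp.diff(f, rho, 2)/f)

R_cigar = ricci_scalar(sp.tanh(rho))      # 4/cosh(rho)^2: finite everywhere
R_trumpet = ricci_scalar(1/sp.tanh(rho))  # -4/sinh(rho)^2: blows up at rho=0
```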
From the algebraic construction, the vector coset merely changes the sign convention of the right-moving current $\bar{J}_3$, which reminds us of T-duality or mirror symmetry. Along this line of reasoning, an alternative natural definition of the vector coset is to take $ 2\pi/k$ periodicity in $\tilde{\theta}$.\footnote{This convention is the one given in \cite{Dijkgraaf:1992ba}.} This is indeed motivated by Buscher's T-duality rule: if we perform the T-duality on our original cigar model \eqref{axcea} with $2\pi$ periodicity in $\theta$, we obtain the trumpet model with $ 2\pi/k$ periodicity. We call this model the ``$\mathbb{Z}_k$ orbifold of the vector coset $SL(2;\mathbb{R})^{(V)}/U(1)$''. The $\mathbb{Z}_k$ orbifold of the $SL(2;\mathbb{R})^{(V)}/U(1)$, or $\mathbb{Z}_k$ orbifold of the trumpet model, is the same as the cigar model as a CFT (up to a GSO projection in the supersymmetric case).\footnote{Let us mention a similar structure in the $SU(2)$ case. It is known that the axial coset $SU(2)^{(A)}/U(1)$ is the $\mathbb{Z}_k$ orbifold of the vector coset $SU(2)^{(V)}/U(1)$. However, in this case, the $\mathbb{Z}_k$ orbifold of the vector coset $SU(2)^{(V)}/U(1)$ is the same model as the original $SU(2)^{(V)}/U(1)$. Therefore, we can also say that the T-dual of the $SU(2)^{(A)}/U(1)$ coset is the $SU(2)^{(V)}/U(1)$. In the $SL(2;\mathbb{R})$ case, although the asymptotic spectrum of the $\mathbb{Z}_k$ orbifold of the $SL(2;\mathbb{R})^{(V)}/U(1)$ coincides with that of the $SL(2;\mathbb{R})^{(V)}/U(1)$, the (un-regularized) partition functions are different from each other. Here we implicitly assumed that $k$ is an integer, but the situation is more involved when $k$ is not an integer because the meaning of the $\mathbb{Z}_k$ orbifold is obscure.
Note that we have defined the $\mathbb{Z}_k$ orbifold of the $SL(2;\mathbb{R})^{(V)}/U(1)$ as the T-dual of the $SL(2;\mathbb{R})^{(A)}/U(1)$, which makes perfect sense even for irrational $k$.} Since the vector coset model with the trumpet geometry is related to the cigar geometry by T-duality, there should be no singularity at all in the vector coset model as a CFT. What happens to the apparent singularity of the classical geometry? We will come back to this problem in section \ref{sec:3-4}. The equivalence between the axial coset and the ($\mathbb{Z}_k$ orbifold of the) vector coset leads to a remarkable observation made in \cite{Dijkgraaf:1992ba} --- the duality between the singularity and the horizon. If we gauge the vector symmetry for the Lorentzian coset, we end up with the same Lorentzian two-dimensional black hole.\footnote{This is up to global duplications. For instance, if one considers an axial coset of the universal cover of $SL(2;\mathbb{R})$, we have infinitely many copies of the Lorentzian two-dimensional black hole.} However, the analytic continuation of the trumpet geometry (vector coset) has a natural interpretation as the region inside the singularity: \begin{align} \mbox{d} s^2 = \alpha' k\left(- \frac{1}{\tanh^2\rho} \mbox{d} t^2 + \mbox{d}\rho^2\right) \ . \end{align} In this way, the axial-vector duality suggests a duality exchanging the regions parametrized by $uv$ and $1-uv$ while keeping $t$ fixed. In particular, it exchanges the region outside the horizon and the one inside the singularity. This duality would shed new light on the physics of the black hole and especially on the relation between T-duality and the ``winding tachyon'' in the Lorentzian two-dimensional black hole. It is, however, fair to say that the precise physical meaning of the duality is far from being well-understood in the Lorentzian signature. To close this section, we discuss the pure two-dimensional background obtained by setting $k=1/2$ or $\kappa = 9/4$.
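These values follow from the criticality conditions: the bosonic coset has central charge $c = \frac{3\kappa}{\kappa - 2} - 1$, which equals $26$ precisely at $\kappa = 9/4$, while the supersymmetric coset has $c = 3(1+\frac{2}{k})$, which equals $15$ precisely at $k = 1/2$. A symbolic check:

```python
import sympy as sp

k, kappa = sp.symbols('k kappa', positive=True)

# bosonic SL(2,R)_kappa/U(1): c = 3 kappa/(kappa-2) - 1, critical at c = 26
kappa_crit = sp.solve(sp.Eq(3*kappa/(kappa - 2) - 1, 26), kappa)

# supersymmetric coset: c = 3 (1 + 2/k), critical at c = 15
k_crit = sp.solve(sp.Eq(3*(1 + 2/k), 15), k)
```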
Then the model is a critical string theory by itself, and one can regard it as a pure two-dimensional background \cite{Witten:1991yr}. In the Euclidean signature, there is a proposed dual matrix model \cite{Kazakov:2000pm} and the theory is supposedly exactly solvable. In the context of more general dilaton gravity in the two-dimensional space-time, the classical solutions are investigated in \cite{Grumiller:2005sq}. \subsection{Euclidean spectrum}\label{sec:3-2} Let us study the spectrum of the Euclidean $SL(2;\mathbb{R})/U(1)$ coset model. \subsubsection{algebraic coset}\label{sec:3-2-1} We begin with the algebraic structure of the coset model from the noncompact para-fermion construction. Let $\Phi_{jm}(z)$ be the holomorphic part of a primary field of the $SL(2;\mathbb{R})$ WZNW model with bosonic level $\kappa$. It has a (bosonic) left-moving $j^3_0$ eigenvalue $m$: \begin{align} j^3(z) \Phi_{jm}(0) = m \frac{\Phi_{jm}}{z} \ . \end{align} It is also the holomorphic part of a primary for the supersymmetric $SL(2;\mathbb{R})$ WZNW model with the same eigenvalue for $J^3_0$. Here, the supersymmetric current $J^3$ is defined by \begin{align} J^3 = j^3 -\psi^-\psi^+ \ . \end{align} For later convenience, we introduce the bosonized currents\footnote{Note that the gauging current must be time-like in the Euclidean coset.} \begin{align} \partial H &= i\psi^+\psi^- \cr J^3 &= -\sqrt{\frac{k}{2}}\partial X_3 \cr j^3 &= J^3 + i\partial H = -\sqrt{\frac{\kappa}{2}}\partial x_3 \ . \end{align} Using $X_3$, or $x_3$, we can decompose $\Phi_{jm}$ as \begin{align} \Phi_{jm} = U_{jm} e^{m\sqrt{\frac{2}{k}}X_3} = V_{jm} e^{m\sqrt{\frac{2}{\kappa}}x_3} \ .
\label{decb} \end{align} The para-fermion fields $U_{jm}$ and $V_{jm}$ have the conformal dimensions \begin{align} \Delta(U_{jm}) &= \frac{-j(j+1)+m^2}{k} \cr \Delta(V_{jm}) &= -\frac{j(j+1)}{\kappa-2} + \frac{m^2}{\kappa} \ , \end{align} and they are (holomorphic) primaries of the supersymmetric Euclidean coset $SL(2;\mathbb{R})/U(1)$ and the bosonic coset, respectively. For the supersymmetric case, we should also decompose the $U(1)$ fermionic operators as \begin{align} e^{inH} = e^{n\sqrt{\frac{2}{k}}X_3} e^{in\sqrt{\frac{c}{3}} X_R} \ , \label{decf} \end{align} where the bosonized $U(1)_R$ current (of the $\mathcal{N}=2$ SUSY algebra: see appendix \ref{SCA2}) is defined by \begin{align} J = i\sqrt{\frac{c}{3}}\partial X_R \ , \end{align} with \begin{align} iH = \sqrt{\frac{2}{k}}X_3 + i\sqrt{1+\frac{2}{k}}X_R \ . \end{align} Under the decompositions \eqref{decb} and \eqref{decf}, the (holomorphic) primary fields of the supersymmetric $SL(2;\mathbb{R})/U(1)$ coset take the form \begin{align} V^{n}_{jm} = V_{jm} e^{i(\frac{2m}{k+2} +n)\sqrt{\frac{c}{3}}X_R} \ , \end{align} which has the conformal dimension \begin{align} \Delta(V_{jm}^n) = \frac{-j(j+1) + (m+n)^2}{k} + \frac{n^2}{2} \ , \end{align} and the $U(1)_R$ charge \begin{align} R(V_{jm}^n) = \frac{2m}{k} + \frac{nc}{3} \ . \end{align} The (half-)integer $n$ is the amount of the spectral flow of the $\mathcal{N} = 2 $ superconformal algebra. The structure of the descendants, depending on the quantum numbers $(j,m,n)$, is completely fixed by that of $SL(2;\mathbb{R})$ (see appendix \ref{a-1}), or alternatively by the representation theory of the $\mathcal{N}=2 $ superconformal algebra. \subsubsection{spectrum from partition function}\label{sec:3-2-2} To obtain the full spectrum of the CFT from the holomorphic data discussed in section \ref{sec:3-2-1}, we need to combine left-moving parts and right-moving parts in a consistent way.
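As a check on the holomorphic data of section \ref{sec:3-2-1}, the dimension of $V^n_{jm}$ is obtained by adding the free-boson contribution $\frac{c}{6}q^2$ of the vertex operator $e^{iq\sqrt{c/3}X_R}$, $q = \frac{2m}{k+2}+n$, to $\Delta(V_{jm})$, and similarly for the $R$-charge; the algebra can be verified symbolically:

```python
import sympy as sp

j, m, n = sp.symbols('j m n', real=True)
k = sp.symbols('k', positive=True)
kappa = k + 2
c = 3*(1 + 2/k)

# bosonic parafermion dimension Delta(V_{jm})
delta_V = -j*(j + 1)/(kappa - 2) + m**2/kappa

# the X_R vertex operator e^{i q sqrt(c/3) X_R} contributes (c/6) q^2
q = 2*m/kappa + n
delta_total = delta_V + c/6*q**2     # = (-j(j+1) + (m+n)^2)/k + n^2/2
R_charge = c/3*q                     # = 2m/k + n c/3
```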
For example, in the compact $SU(2)$ WZNW model, the complete classification of the modular invariant partition functions (and hence the spectra) is given by the so-called ADE classification. In the non-compact case, we have not yet achieved such a systematic classification. Physically, however, we are primarily interested in the two-dimensional black hole interpretation of the $SL(2;\mathbb{R})/U(1)$ coset model, so we will only consider the simplest realization from the gauged WZNW model on which we focused in section \ref{sec:3-1}. We begin with the partition function for the bosonic Euclidean two-dimensional black hole \cite{Hanany:2002ev} \begin{align} Z_{\mathbb{H}^{3(A)}_{+}/\mathbb{R}} = \int_{\Sigma} \frac{\mbox{d}^2 u}{\tau_2} \frac{e^{\frac{u_2^2}{\tau_2}}}{\sqrt{\tau_2}|\theta_1(\tau,u)|^2 } \sqrt{\tau_2}|\eta(\tau)|^2 \sum_{m,\omega\in \mathbb{Z}} e^{-\frac{\pi \kappa}{\tau_2}|\omega\tau - m + u|^2} \ . \label{ppo} \end{align} See appendix \ref{part} for a summary of various partition functions. Unfortunately, the partition function is divergent, and the leading divergence comes from the integration near $u_1 = u_2 = 0$. The diverging factor can be attributed to the volume divergence in the radial $\rho$ direction of the cigar model. Thus, at the leading order, we have \begin{align} Z_{\mathbb{H}^{3(A)}_{+}/\mathbb{R}} \sim \frac{1}{2\pi} (\log\epsilon) Z_{\mathrm{free}}(\tau) Z_{\sqrt{\kappa}}(\tau) + \text{finite part} \ , \label{lpf} \end{align} which gives the asymptotic degrees of freedom realized by a free non-compact boson (with the linear dilaton), \begin{align} Z_{\mathrm {free}}(\tau) = \frac{1}{\sqrt{\tau_2}|\eta(\tau)|^2} \ , \end{align} and a compact boson with radius $R^2 = \kappa$, \begin{align} Z_{\sqrt{\kappa}} = \frac{\sqrt{\kappa}}{\sqrt{\tau_2}|\eta(\tau)|^2} \sum_{m,\omega\in \mathbb{Z}} e^{-\frac{\pi \kappa}{\tau_2}|\omega\tau - m|^2} \ .
\end{align} The appearance of the compact boson is due to the summation over the lattice $(n,\omega)$, which arises from the zeros of $\theta_1(\tau,u)$ in \eqref{ppo}. A more precise (but formal) manipulation \cite{Hanany:2002ev} leads to the following decomposition of the partition function: \begin{align} Z_{\mathbb{H}^{3(A)}_{+}/\mathbb{R}} = \int^{-1/2}_{-(\kappa-1)/2} \mbox{d} j \, \mathrm{Tr}_{\hat{D}^+_j\otimes\hat{D}^+_j} q^{L_0}\bar{q}^{\bar{L}_0} + \sum_{\omega,n}\int_0^\infty \mbox{d} p \, 2\rho(p) \mathrm{Tr}_{\hat{C}_{-\frac{1}{2}+ip}\otimes\hat{C}_{-\frac{1}{2}+ip}} q^{L_0}\bar{q}^{\bar{L}_0} + \dots \ , \label{decompp} \end{align} where the Hilbert space $\hat{D}^+_j\otimes\hat{D}^+_j$ consists of the discrete representations of $SL(2;\mathbb{R})$ with the constraints $J_0^3 -\bar{J}_0^3 = n$, $J_0^3 + \bar{J}_0^3 = \kappa \omega $ and no contribution from the $J^3_{n<0}$ oscillators. The same restriction is imposed on the continuous representations $\hat{C}_{-\frac{1}{2}+ip}\otimes\hat{C}_{-\frac{1}{2}+ip}$. The density of states for the continuous representations is given by \begin{align} \rho(p) = \frac{2\log \epsilon}{2\pi} + \frac{1}{4\pi i} \frac{\partial}{\partial p} \log \frac{\Gamma(-ip+\frac{1}{2}-m)\Gamma(-ip+\frac{1}{2}+\bar{m})}{\Gamma(+ip+\frac{1}{2}+\bar{m})\Gamma(+ip+\frac{1}{2}-m)} \ , \label{dnsi} \end{align} where $m = \frac{1}{2}(n+\kappa \omega)$, $\bar{m} = -\frac{1}{2}(n-\kappa\omega)$ are the eigenvalues of $J_0^3$ and $\bar{J}_0^3$. The density of states appearing here is consistent with the reflection amplitude (or sphere two-point function) of the two-dimensional black hole, as we will review in section \ref{sec:3-2-3}. The leading diverging part proportional to $\log \epsilon$ agrees with \eqref{lpf}. There are, however, several subtleties associated with the decomposition \eqref{decompp}. First of all, the expression \eqref{decompp} is {\it not} modular invariant although our starting point \eqref{ppo} is formally invariant.
The failure is due to the nontrivial $p$-dependent density of states \eqref{dnsi} and the contribution from the discrete series. In other words, the regularization rule for the character decomposition \eqref{decompp} does not preserve the modular invariance. The only thing one can say is that the leading part (i.e.\ the partition function per unit volume as $\epsilon \to 0$) given in \eqref{lpf} is modular invariant. Another subtlety is related to the omitted terms in \eqref{decompp}. The regularization procedure proposed in \cite{Hanany:2002ev} (see also \cite{Maldacena:2000hw} for related models) actually leaves us with finite terms that cannot be written in terms of the characters appearing in \eqref{decompp} \cite{Israel:2004ir}. Again, this depends on the regularization scheme, and one natural (but not unique) solution is to omit this part, as we will implicitly assume in the following. Despite all these subtleties, the decomposition \eqref{decompp} seems to capture important physics of the two-dimensional black hole. In particular, it predicts the existence of a discrete spectrum localized near the tip of the cigar. Indeed, the range of the discrete representations $-\frac{\kappa-1}{2}<j<-\frac{1}{2}$ will be independently checked by the Cardy analysis of the boundary states for the $SL(2;\mathbb{R})/U(1)$ coset model, as we will see in section \ref{sec:6-2-3}. Also, the mini-superspace analysis for the two-dimensional black hole reproduces the zero-slope limit of the results given here, including the density of states \eqref{dnsi}. We will review the mini-superspace analysis in section \ref{sec:3-2-3}. Before moving on to the mini-superspace analysis, we will briefly present a generalization to the supersymmetric $SL(2;\mathbb{R})/U(1)$ coset model. 
The partition function is given by \begin{align} Z^{(NS)}(\tau) = \int_{\Sigma} \frac{\mbox{d} u^2}{\tau_2} \frac{|\theta_3(\tau,u)|^2}{\sqrt{\tau_2}|\theta_1(\tau,u)|^2 } \sqrt{\tau_2}|\eta(\tau)|^2 \sum_{m,\omega\in \mathbb{Z}} e^{-\frac{\pi k}{\tau_2}|\omega\tau - m + u|^2} \ , \end{align} and the character decomposition is obtained as \begin{align} \int^{-1/2}_{-(k+1)/2} \mbox{d} j \mathrm{Tr}_{\hat{D}^+_j\otimes\hat{D}^+_j} q^{L_0}\bar{q}^{\bar{L}_0} + \sum_{\omega,n \in \mathbb{Z}}\int_0^\infty \mbox{d} p \, 2 \rho(p) \mathrm{Tr}_{\hat{C}_{-\frac{1}{2}+ip}\otimes\hat{C}_{-\frac{1}{2}+ip}} q^{L_0}\bar{q}^{\bar{L}_0} + \dots \ . \label{decomps} \end{align} In this case, the trace should be taken over the (NS-NS) Hilbert space of the supersymmetric coset instead of the bosonic one. Explicitly, \begin{align} \mathrm{Tr}_{\hat{C}_{-\frac{1}{2}+ip}\otimes\hat{C}_{-\frac{1}{2}+ip}} q^{L_0}\bar{q}^{\bar{L}_0} = q^{\frac{p^2+m^2}{k}}\bar{q}^{\frac{p^2+\bar{m}^2}{k}} \frac{|\theta_3(\tau)|^2}{|\eta(\tau)|^6} \ , \end{align} and \begin{align} \int^{-1/2}_{-(k+1)/2} \mbox{d} j \mathrm{Tr}_{\hat{D}^+_j\otimes\hat{D}^+_j} q^{L_0}\bar{q}^{\bar{L}_0} &= \sum_{\omega,n \in \mathbb{Z}} \sum_{j \in \mathcal{J}_{\omega,n}} \chi_{\mathrm{dis}, 1+j+\frac{k}{2}, m +\frac{k}{2}}(\tau) \chi_{\mathrm{dis}, 1+j+\frac{k}{2}, \bar{m} + \frac{k}{2}}(\bar{\tau}) \cr \mathcal{J}_{\omega,n} &= \left[-\frac{k+1}{2},-\frac{1}{2}\right) \cap \left(\frac{k\omega-n}{2} + \mathbb{Z}\right) \cr \chi_{\mathrm{dis}, j,j+n}({\tau}) &= \frac{q^{\frac{(j+n)^2}{k}-\frac{1}{4k}}}{1+q^{n+1/2}}\frac{\theta_3(\tau,0)}{\eta(\tau)^3} \ . \end{align} The partition functions of the other sectors are readily obtained by performing the spectral flow symmetry of the $\mathcal{N}=2$ SCA. 
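To make the discrete set $\mathcal{J}_{\omega,n}$ concrete, here is a small enumeration sketch (the function and the sample parameter values are our own illustration; only the defining interval and lattice are taken from the text). Since the interval has length $k/2$, a unit-spaced lattice always meets it in either $\lfloor k/2 \rfloor$ or $\lceil k/2 \rceil$ points, irrespective of $(\omega,n)$:

```python
import math

def discrete_js(k, w, n):
    # J_{w,n} = [-(k+1)/2, -1/2) intersected with the lattice (k*w - n)/2 + Z
    j0 = (k * w - n) / 2.0
    lo, hi = -(k + 1) / 2.0, -0.5
    m_lo = math.ceil(lo - j0)          # smallest integer shift landing above lo
    return [j0 + m for m in range(m_lo, m_lo + math.ceil(k) + 2) if lo <= j0 + m < hi]

# The interval has length k/2, so a unit-spaced lattice meets it in either
# floor(k/2) or ceil(k/2) points, irrespective of (w, n).
for k in (3.0, 4.0, 2 * 6 / (6 + 2)):     # the last value is k = 2n/(n+2) with n = 6
    for w in (-1, 0, 2):
        for n in (0, 1, 5):
            count = len(discrete_js(k, w, n))
            assert count in (math.floor(k / 2), math.ceil(k / 2))
```

For instance, at integer level $k=4$ with $\omega = n = 0$ the allowed values are $j = -2, -1$.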
We note that in order to obtain a superstring compactification with other sectors (such as $\mathcal{N}=2$ minimal models), we have to project down to the sectors with integral $U(1)_R$-charge so that the space-time supersymmetry is well-defined (GSO projection). We can read some important physics from the spectrum of the Euclidean two-dimensional black hole: \begin{itemize} \item The continuous representations have a mass gap, which is consistent with the (asymptotic) linear dilaton background. Due to the mass gap, the would-be graviton is massive, which is again consistent with the statement that the LST is a non-gravitational theory. \item The discrete representations correspond to local dynamical degrees of freedom that carry a winding quantum number and are localized near the tip of the cigar. From the space-time point of view, they are normalizable deformations of the background localized in the vicinity of the singularity. The improved unitarity bound perfectly agrees with the geometrical normalizability condition discussed in section \ref{sec:2-2-2}. \end{itemize} The improved unitarity bound has an important application in obtaining the normalizable deformations of the LST. As promised, we will derive the geometrical bound \eqref{normc} from the improved unitarity bound of the $SL(2;\mathbb{R})/U(1)$ coset model. The dual string theory for the generalized conifold \begin{align} z_1^{n} + z_2^2 + z_3^2 + z_4^2 = 0 \ \end{align} is given by the $(n-2)$-th $\mathcal{N}=2$ minimal model coupled to the $SL(2;\mathbb{R})/U(1)$ coset with the level $k = \frac{2n}{n+2}$. The vertex operators corresponding to massless deformations of the geometry can be obtained by combining (anti-)chiral primary operators of the $\mathcal{N}=2$ minimal model and the $SL(2;\mathbb{R})/U(1)$ coset model restricted to $h=\bar{h}=\frac{1}{2}$. 
Labeling the chiral primaries of the minimal model by $l$ $(0 \le l \le n-2)$ with the $U(1)_R$ charge $Q_R = \frac{l}{n}$, we obtain the conformal condition: \begin{align} \frac{l}{n} + \frac{2m}{k} = 1 \ . \end{align} On the other hand, the improved unitarity constraint is \begin{align} 1 \le 2m \le 1+k \ , \end{align} which gives the constraint \begin{align} l = 0 , 1 , \cdots , \left[\frac{n-2}{2}\right] \ . \end{align} The bound is in perfect agreement with \eqref{normc}. \subsubsection{minisuperspace analysis}\label{sec:3-2-3} A complementary method to read off the spectrum of the sigma model is to use the point-particle approximation known as the mini-superspace approximation. Let us consider the Euclidean two-dimensional black hole background, known as the `cigar geometry': \begin{equation} \mbox{d} s^2 \equiv G_{ij} \mbox{d} x^i \mbox{d} x^j = 2k (\mbox{d} \rho^2 + \tanh^2\rho \mbox{d} \theta^2) \qquad \mbox{and} \qquad e^{\Phi} = \frac{e^{\Phi_0}}{\cosh\rho} ~. \label{Euclidean cigar} \end{equation} Recall that $k$ sets the characteristic curvature radius in units of the string scale and hence controls string world-sheet effects, while $e^{\Phi_0}$ sets the maximum value of the string coupling at the tip $\rho=0$ of the cigar geometry. We shall assume the limit $k \gg 1$ and $e^{-\Phi_0} \gg 1$: this limit suppresses both string world-sheet and space-time quantum effects and makes it possible to truncate the closed string spectrum to zero-modes, viz. the mini-superspace approximation. In the mini-superspace approach, the difference between bosonic strings (with no world-sheet supersymmetry) and fermionic strings (with ${\cal N}=2$ world-sheet supersymmetry) becomes unimportant. 
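As a sanity check that $k$ indeed sets the curvature scale of \eqref{Euclidean cigar}, note that a metric of the form $\mbox{d} s^2 = 2k(\mbox{d}\rho^2 + f(\rho)^2 \mbox{d}\theta^2)$ has Gaussian curvature $K = -f''/(2kf)$, which for $f = \tanh\rho$ evaluates to $K = \mathrm{sech}^2\rho/k$: maximal at the tip and bounded by $1/k$. A quick numerical confirmation (this check is our own illustration, not part of the original analysis):

```python
import math

def K_cigar(rho, k, h=1e-4):
    # Gaussian curvature of ds^2 = 2k (drho^2 + f(rho)^2 dtheta^2), f = tanh:
    # K = -f''/(2k f), with f'' evaluated by a central finite difference
    f = math.tanh
    f2 = (f(rho + h) - 2 * f(rho) + f(rho - h)) / h ** 2
    return -f2 / (2 * k * f(rho))

k = 100.0
for rho in (0.3, 1.0, 2.5):
    exact = 1.0 / (k * math.cosh(rho) ** 2)    # closed form: sech^2(rho)/k
    assert abs(K_cigar(rho, k) - exact) < 1e-7
# K is maximal at the tip and bounded by 1/k, so k >> 1 means weak curvature
```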
The closed string Hamiltonian $L_0 + \overline{L}_0$ is reduced in the mini-superspace approximation to the target space Laplacian $\Delta_0$, where: \begin{align} \Delta_0 &= -\frac{1}{e^{-2\Phi} \sqrt{G}} \partial_i \left(e^{-2\Phi} \sqrt{G} G^{ij} \partial_j\right) \equiv -\frac{1}{2k}[\partial_\rho^2 +2\coth2\rho\partial_\rho + \coth^2\rho\partial_\theta^2] ~. \label{Laplacian} \end{align} The Hamiltonian is defined with respect to the volume element: \begin{align} \mbox{d} \mbox{Vol} = e^{-2\Phi}\sqrt{G} \mbox{d} \rho \mbox{d} \theta := {2k} \sinh \rho \cosh \rho \mbox{d} \rho \, \mbox{d} \theta \equiv k \sinh 2\rho \mbox{d} \rho \, \mbox{d} \theta~, \label{vol cigar} \end{align} inherited from the Haar measure on the $SL(2;\mathbb{R})$ group manifold. In the volume element, the dilaton factor $e^{-2\Phi}$ is taken into account, as the inner product for closed string states is defined by the world-sheet two-point correlators on the sphere. The normalized eigenfunctions are obtained straightforwardly \cite{Dijkgraaf:1992ba,Ribault:2003ss}. They are: \begin{align} \phi_n^j(\rho,\theta) & = -\frac{\Gamma^2(-j+\frac{|n|}{2})}{\Gamma(|n|+1)\Gamma(-2j-1)} e^{in\theta} \times\cr & \times \left[ \sinh^{|n|}\rho \cdot F\left(j+1+\frac{|n|}{2},-j+\frac{|n|}{2};|n|+1;-\sinh^2\rho \right)\right] ~ , \label{ef} \end{align} where $F(\alpha, \beta; \gamma; z)$ is the Gaussian hypergeometric function. These eigenfunctions correspond to the primary state vertex operators of conformal weights \begin{align} h= \bar{h}= -\frac{j(j+1)}{k-2} + \frac{n^2}{4k} \qquad \mbox{or} \qquad h= \bar{h}= -\frac{j(j+1)}{k} + \frac{n^2}{4k} \end{align} for bosonic\footnote{The eigenvalue is actually proportional to $-\frac{j(j+1)}{k}+ \frac{n^2}{4k}$. We will return to this small mismatch at the end of this subsection.} and fermionic strings, respectively. 
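The quoted eigenvalue can be checked directly: acting with the radial part of \eqref{Laplacian} on \eqref{ef} by finite differences should reproduce $-\frac{2j(j+1)}{k} + \frac{n^2}{2k}$, i.e.\ twice the mini-superspace weight of the footnote. A short numerical sketch (our own illustration; real $j$ is used for simplicity, and the helper functions are ours):

```python
import math

def hyp2f1(a, b, c, z, terms=200):
    # truncated Gauss series; adequate for |z| < 1 as used below
    total, term = 1.0, 1.0
    for m in range(terms):
        term *= (a + m) * (b + m) / ((c + m) * (m + 1)) * z
        total += term
    return total

def f_radial(rho, j, n):
    # radial part of the eigenfunction (ef), up to overall normalization
    s = math.sinh(rho)
    nu = abs(n)
    return s ** nu * hyp2f1(j + 1 + nu / 2, -j + nu / 2, nu + 1, -s * s)

def laplacian_ratio(rho, j, n, k, h=1e-4):
    # (Delta_0 phi)/phi, Delta_0 = -(1/2k)[d_rho^2 + 2 coth(2 rho) d_rho - n^2 coth^2(rho)]
    f0 = f_radial(rho, j, n)
    f1 = (f_radial(rho + h, j, n) - f_radial(rho - h, j, n)) / (2 * h)
    f2 = (f_radial(rho + h, j, n) - 2 * f0 + f_radial(rho - h, j, n)) / h ** 2
    lap = f2 + 2 / math.tanh(2 * rho) * f1 - n ** 2 / math.tanh(rho) ** 2 * f0
    return -lap / (2 * k * f0)

k, j, n = 10.0, -0.7, 2
expected = -2 * j * (j + 1) / k + n ** 2 / (2 * k)   # twice the weight h = hbar
for rho in (0.4, 0.6, 0.8):
    assert abs(laplacian_ratio(rho, j, n, k) - expected) < 1e-5
```

The ratio is independent of $\rho$, confirming that \eqref{ef} is an eigenfunction of \eqref{Laplacian}.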
We shall focus on the continuous series, parametrise the radial quantum number $j$ as $j= -\frac{1}{2}+ i\frac{p}{2}$ $(p\in \mathbb{R})$, and label the eigenfunctions as $\phi^p_n(\rho,\theta)$ instead of $\phi^j_n(\rho,\theta) $. We adopt the convention that, in the asymptotic region $\rho \sim \infty$, the vertex operators with $p>0$ correspond to the incoming waves and those with $p<0$ correspond to the outgoing waves. The eigenfunctions \eqref{ef} are then normalized as \begin{align} \Big(\phi^p_n, \phi^{p'}_{n'} \Big) = \delta_{n,n'} \Big\lbrack 2 \pi \delta(p-p')+ {\cal R}_0(p',n) \, 2 \pi \delta(p+p') \Big\rbrack ~,\label{inner product} \end{align} where the inner product is defined with respect to the volume element \eqref{vol cigar}. Here, $ {\cal R}_0(p,n)$ refers to the reflection amplitude of the mini-superspace analysis: \begin{align} {\cal R}_0(p,n) = \frac{\Gamma(+ip)\Gamma^2(\frac{1}{2}-\frac{ip}{2}+\frac{n}{2})} {\Gamma(-ip)\Gamma^2(\frac{1}{2}+\frac{ip}{2}+\frac{n}{2})} \ . \label{cref amp} \end{align} That is, from the definition \eqref{ef}, the eigenfunctions are seen to obey the mini-superspace reflection relation: \begin{align} \phi^{-p}_n(\rho,\theta) = {\cal R}_0(-p,|n|) \, \phi^{+p}_n(\rho,\theta)~. \label{cref rel} \end{align} We shall refer to ${\cal R}_0(p,n)$ as the `mini-superspace' reflection amplitude, valid strictly within the mini-superspace approximation at $k \rightarrow \infty$, and anticipate string world-sheet effects at finite $k$. Notice that no winding states wrapping around the $\theta$-direction are present, since by definition the mini-superspace approximation retains states with zero winding only. 
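Since the numerator and denominator of \eqref{cref amp} are complex conjugates for real $p$ and integer $n$, ${\cal R}_0$ is a pure phase, and reflecting twice gives back the original wave: ${\cal R}_0(-p,n)\,{\cal R}_0(p,n)=1$. A small numerical check (the complex-$\Gamma$ implementation via the standard Lanczos approximation is our own addition, for illustration only):

```python
import cmath, math

# Lanczos approximation for the Gamma function at complex argument (g = 7)
_G = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    if z.real < 0.5:                      # reflection formula
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _G[0] + sum(_G[i] / (z + i) for i in range(1, 9))
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def R0(p, n):
    # mini-superspace reflection amplitude (cref amp)
    ip = 1j * p
    num = cgamma(ip) * cgamma(0.5 - ip / 2 + n / 2) ** 2
    den = cgamma(-ip) * cgamma(0.5 + ip / 2 + n / 2) ** 2
    return num / den

for p in (0.3, 1.0, 2.7):
    for n in (0, 1, 4):
        assert abs(abs(R0(p, n)) - 1.0) < 1e-10      # pure phase for real p
        assert abs(R0(-p, n) * R0(p, n) - 1.0) < 1e-10   # reflection is an involution
```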
Utilizing the analytic continuation formula of the hypergeometric functions: \begin{align} F(\alpha,\beta;\gamma;z) &= \frac{\Gamma(\gamma)\Gamma(\beta-\alpha)} {\Gamma(\beta)\Gamma(\gamma-\alpha)} (-z)^{-\alpha}F(\alpha,\alpha+1-\gamma; \alpha+1-\beta;1/z) \cr &+ \frac{\Gamma(\gamma)\Gamma(\alpha-\beta)} {\Gamma(\alpha)\Gamma(\gamma-\beta)}(-z)^{-\beta} F(\beta,\beta+1-\gamma;\beta+1-\alpha;1/z) \label{eq:inv} \ , \end{align} the eigenfunction \eqref{ef} can be decomposed into \begin{align} \phi^p_n(\rho,\theta) = \phi^p_{L,n}(\rho,\theta) + {\cal R}_0(p,|n|) \phi^p_{R,n}(\rho,\theta) ~, \label{decomp ef} \end{align} where \begin{align} \phi^p_{L,n}(\rho,\theta) &\equiv e^{in\theta} (\sinh \rho)^{-1-ip}\, F\Big(\frac{1}{2}+\frac{ip+n}{2},\frac{1}{2}+\frac{ip-n}{2}; 1+ip; -\frac{1}{\sinh^2\rho} \Big) ~,\nonumber\\ & \sim e^{-\rho}e^{-ip\rho+in\theta} \qquad \mbox{at} \qquad \rho \, \rightarrow\, +\infty~ \label{phiL} \end{align} and \begin{align} \phi^p_{R,n}(\rho,\theta) &\equiv e^{in\theta} (\sinh \rho)^{-1+ip}\, F\Big(\frac{1}{2}-\frac{ip+n}{2},\frac{1}{2}-\frac{ip-n}{2}; 1-ip; -\frac{1}{\sinh^2\rho} \Big) \nonumber\\ & \sim e^{-\rho}e^{ip\rho+in\theta} \qquad \mbox{at} \qquad \rho \, \rightarrow\, +\infty \label{phiR} \end{align} refer to the left- and the right-movers, respectively, at $\rho \rightarrow +\infty$, and ${\cal R}_0(p, |n|)$ is defined in \eqref{cref amp}. Obviously, they are related to each other under the reflection of the radial momentum: $\phi^{+p}_{R, n} = \phi^{-p}_{L, n}$, which is also evident from \eqref{decomp ef} and \eqref{cref amp}. These mini-superspace wave functions \eqref{decomp ef} constitute the starting point for constructing boundary states of D-branes in the Euclidean two-dimensional black hole background. We close the mini-superspace analysis with remarks concerning the Wick rotation of the results to the Lorentzian background and the string world-sheet effects present at finite $k$. 
\begin{enumerate} \item The decomposition of $\phi^p_n$ into $\phi^p_{L,n}$ and $\phi^p_{R,n}$ cannot be globally defined over the entire cigar geometry. They are ill-defined around the tip $\rho =0$, and the reflection relation \eqref{cref rel} implies that $\phi^{-p}_n$ is not independent of $\phi^{+p}_n$. Therefore, of the continuous series, only the eigenfunctions $\phi^p_n$ with $p>0,~ n\in \mathbb{Z}$ span the physical Hilbert space of the closed strings on the Euclidean two-dimensional black hole. On the other hand, the situation will become further complicated once Wick-rotated to the Lorentzian two-dimensional black hole. \item Notice that $\phi^p_n$ is not analytic with respect to the angular quantum number $n$, as it depends on its absolute value, $|n|$. This leads to an ambiguity in the Wick rotation from the Euclidean to the Lorentzian background, under which, roughly speaking, $i n$ is replaced by the energy $\omega$. As for the mini-superspace reflection amplitude ${\cal R}_0(p, n)$, since ${\cal R}_0(p,-n)= {\cal R}_0(p,n)$ holds for all $n \in \mathbb{Z}$, it is unnecessary to take the absolute value $|n|$ in \eqref{cref rel}, \eqref{decomp ef}. When performing the Wick rotation, we will start from the expression ${\cal R}_0(p,|n|)$. In other words, we analytically continue ${\cal R}_0(p,n)$ if $n > 0$ and ${\cal R}_0(p,-n)$ if $n<0$. \item It is evident that $|{\cal R}_0(p,n)|=1$, viz, the mini-superspace reflection amplitude is purely a phase shift in the Euclidean black hole background. It is of utmost importance that, in the Lorentzian black hole background, $n$ is analytically continued to a pure imaginary value, and the modulus of the reflection amplitude becomes less than unity. \item For the fermionic Euclidean $SL(2; \mathbb{R})/U(1)$ conformal field theory, the exact result for the reflection amplitude (i.e. taking into account all string world-sheet effects) is known \cite{Teschner:1997ft,Teschner:1999ug,Giveon:1999px,Giveon:1999tq}. 
In our notations, it is \begin{align} {\cal R}(j,m,\bar{m}) = \nu(k)^{-2j-1}\, \frac{\Gamma(1+\frac{2j+1}{k})}{\Gamma(1-\frac{2j+1}{k})} \frac{\Gamma(2j+1)\Gamma(-j+m)\Gamma(-j-\bar{m})}{\Gamma(-2j-1) \Gamma(j+1+m)\Gamma(j+1-\bar{m})}, \label{qref amp}\end{align} where \begin{align} \nu(k)\equiv \frac{1}{\pi}\frac{\Gamma(1-\frac{1}{k})} {\Gamma(1+\frac{1}{k})}~, \qquad m=\frac{kw+n}{2}~, \qquad \bar{m} = \frac{kw-n}{2}~. \nonumber \end{align} Denoting by $\Phi_{j;m,\bar{m}}$ the vertex operator with conformal weights $h= \frac{m^2-j(j+1)}{k}$, $\bar{h}= \frac{\bar{m}^2-j(j+1)}{k}$, the exact reflection relation reads \begin{align} \Phi_{-(j+1);m,\bar{m}} = {\cal R}(-(j+1),m,\bar{m}) \Phi_{j;m,\bar{m}}~. \label{qref rel} \end{align} The mini-superspace reflection amplitude ${\cal R}_0(p,n)$ is then related to the exact one ${\cal R}(j,m,\bar{m})$ by taking the $k\,\rightarrow\,\infty$ limit as mentioned above (up to an overall constant): \begin{align} {\cal R}_0(p,n) = \lim_{k\rightarrow + \infty}\, {\cal R}(j=-\frac{1}{2}+\frac{ip}{2}, m=\frac{n}{2}, \bar{m}=-\frac{n}{2})~. \end{align} \end{enumerate} Although the mini-superspace approximation can only describe the momentum modes of the full spectrum, it is possible to study the winding modes via T-duality while remaining within the mini-superspace approximation. In the remaining part of this subsection, we will study the mini-superspace spectrum for the T-dualized trumpet geometry (i.e. the $\mathbb{Z}_k$ orbifold of the vector coset $SL(2;\mathbb{R})^{(V)}/U(1)$). The T-dual classical geometry is the trumpet geometry \begin{align} \mbox{d} s^2 = 2\left(k \mbox{d}\rho^2 + \frac{\mbox{d}\tilde{\theta}^2}{k\tanh^2\rho}\right) \ , \ \ e^{\Phi} = \frac{\mu}{\sinh\rho} \ . \end{align} Note that we have a curvature and dilaton singularity at $\rho = 0$. For later purposes, let us discuss the mini-superspace analysis for the bulk spectrum. 
The mini-superspace spectrum is determined by the eigenfunctions of the Laplacian \begin{align} \Delta &= - \frac{1}{e^{-2\Phi}\sqrt{\det G}} \partial_i e^{-2\Phi} \sqrt{\det{G}} G^{ij} \partial_j \cr &= -\frac{2}{k}\left[\partial_\rho^2 + (\coth\rho+ \tanh\rho) \partial_\rho + k^2 \tanh^2\rho\partial_{\tilde{\theta}}^2 \right] \ . \end{align} The (delta-function normalizable) eigenfunctions are given by \begin{align} \phi_{p,w}(\rho,\tilde{\theta}) &= C_1 e^{iw\tilde{\theta}}(\cosh\rho)^{-1-ip} F\left(\frac{1}{2}-\frac{kw}{2}+\frac{ip}{2},\frac{1}{2}+\frac{kw}{2}+ \frac{ip}{2};1+ip;\frac{1}{\cosh^2\rho}\right) \cr &+ C_2 e^{iw\tilde{\theta}}(\cosh\rho)^{-1+ip} F\left(\frac{1}{2}-\frac{kw}{2}-\frac{ip}{2},\frac{1}{2}+\frac{kw}{2}- \frac{ip}{2};1-ip;\frac{1}{\cosh^2\rho}\right) \ . \end{align} It is not a priori clear which boundary condition one should impose, because the trumpet geometry has a singularity at $\rho = 0$. A natural guess is to impose $\phi_{p,w}(\rho = 0,\tilde{\theta}) \equiv 0$. With the convenient normalization $C_1= 1$, this boundary condition amounts to \begin{align} C_2 = \mathcal{R}_0(p,w) = \frac{\Gamma(ip)\Gamma(\frac{1}{2}-\frac{ip}{2}+\frac{kw}{2})\Gamma(\frac{1}{2}-\frac{ip}{2}-\frac{kw}{2})}{\Gamma(-ip)\Gamma(\frac{1}{2}+\frac{ip}{2}+\frac{kw}{2})\Gamma(\frac{1}{2}+\frac{ip}{2}-\frac{kw}{2})} \ , \label{ref dual} \end{align} which is consistent with the semiclassical limit of the exact reflection amplitude that descends from the $SL(2;\mathbb{R})$ WZNW model (or the $\mathbb{H}_3^+$ model). Here we have used the formulae in appendix A to evaluate the behavior of the hypergeometric function near the singularity. Before we close our study of the mini-superspace Euclidean two-dimensional black hole system, we introduce the so-called ``exact string background" for the bosonic two-dimensional black hole. 
As we have seen, for the bosonic string case, the spectrum shows $1/k$ corrections as \begin{align} h = \bar{h} = -\frac{j(j+1)}{k-2} + \frac{n^2}{4k} \ , \label{extst} \end{align} compared with the mini-superspace results \begin{align} h_0 = \bar{h}_0 = -\frac{j(j+1)}{k} + \frac{n^2}{4k} \ . \end{align} To cure this small mismatch, \cite{Dijkgraaf:1992ba} introduced the following improved Laplacian \begin{align} \Delta'_0 = -\frac{1}{k-2}\left(\frac{\partial^2}{4\partial \rho^2} + \coth 2\rho \frac{\partial}{2\partial \rho} + (\coth^2\rho-\frac{2}{k})\frac{\partial^2}{\partial \theta^2}\right) \ , \label{imlap} \end{align} to reproduce the exact $1/k$-corrected spectrum \eqref{extst}. The corresponding metric is \begin{align} ds^2 = 2(k-2)\left(\mbox{d}\rho^2 + \frac{\mbox{d}\theta^2}{(\coth^2\rho -\frac{2}{k})} \right) \end{align} with the dilaton \begin{align} e^{2\Phi} = \mu \sinh2\rho\sqrt{\coth^2\rho-\frac{2}{k}} \ . \end{align} In the literature, it has been shown that this background is a solution of the bosonic string equation of motion in a particular renormalization scheme \cite{Tseytlin:1992ri}. However, from the modern viewpoint, the reflection amplitude obtained from \eqref{imlap} is the same as the one from \eqref{Laplacian}, and does not capture the non-perturbative $1/k$ corrections that appear in the exact result \eqref{qref amp}. We will study the origin of the non-perturbative corrections to the mini-superspace reflection amplitude in section \ref{sec:3-4}. \subsection{Lorentzian spectrum}\label{sec:3-3} \subsubsection{classical string in two-dimensional black hole}\label{sec:3-3-1} We would like to study classical string solutions in the two-dimensional black hole geometry. For this purpose, we can directly solve the string equation of motion (and the Virasoro constraint) on the classical background, or we can study the gauged WZNW model before integrating out the gauge constraint \cite{Bars:1994sv}. 
If one takes the axial gauge $A=0$ in the classical gauged WZNW action \eqref{gWZW}, the solution can be constructed as follows.\footnote{Since we are studying the Lorentzian target space, we use the Lorentzian signature world-sheet.} The classical solution of the parent WZNW model is written as \begin{align} g(\sigma_+,\sigma_-) =g_L(\sigma_+)g_R(\sigma_-)^{-1} \ , \end{align} where we parametrize $g_L(\sigma_+)$, $g_R(\sigma_-)^{-1}$ $\in SL(2;\mathbb{R})$ as \begin{align} g_L = \begin{pmatrix} a_L & u_L \\ -v_L & b_L \end{pmatrix} \ , \ \ g_R^{-1} = \begin{pmatrix} b_R & -u_R \\ v_R & a_R \end{pmatrix} \ , \\ g =\begin{pmatrix} u_Lv_R+a_Lb_R & -a_Lu_R+u_La_R \\ b_Lv_R-v_Lb_R & v_Lu_R+b_La_R \end{pmatrix} \ , \end{align} with the determinant constraint $u_Lv_L+a_Lb_L = u_Rv_R+a_Rb_R = 1$. Now the current constraint $J_2 = \bar{J}_2=0$ reduces to \begin{align} \mathrm{Tr}(\sigma_3\partial_+ g_Lg_L^{-1})= v_L \partial_+ u_L + b_L \partial_+ a_L = -u_L\partial_+ v_L - a_L\partial_+ b_L = 0 \ , \label{curcon} \end{align} and the Virasoro constraint becomes\footnote{It is interesting to study general solutions of the coset CFT without imposing the Virasoro constraint. We will later come back to this point.} \begin{align} (a_L\partial_+ u_L - u_L \partial_+ a_L)(b_L\partial_+v_L - v_L \partial_+ b_L) = 0 \ . \label{vircon} \end{align} The right-moving part satisfies similar equations. Due to the determinant constraint, the two equations in \eqref{curcon} are not independent, so we expect the appearance of one arbitrary function in the full solution. Indeed, \eqref{curcon} and \eqref{vircon} suggest that either $\partial_+ u_L = \partial_+ a_L =0$ or $\partial_+v_L = \partial_+ b_L =0$ should be satisfied. 
Then the general solutions can be expressed as \begin{align} g_L = \begin{pmatrix} a_L & u_L \\ -v_L(\sigma^+) &\frac{1}{a_L} - \frac{u_L}{a_L} v_L(\sigma^+) \end{pmatrix} \ \text{or} \ \ \begin{pmatrix} \frac{1}{b_L} - \frac{v_L}{b_L} u_L(\sigma^+) &u_L(\sigma^+) \\ -v_L &b_L \end{pmatrix} \cr g_R^{-1} = \begin{pmatrix} \frac{1}{a_R} -\frac{u_R}{a_R} v_R(\sigma^-) & -u_R \\ v_R(\sigma^-) & a_R\end{pmatrix} \ \text{or} \ \ \begin{pmatrix} b_R & -u_R(\sigma^-) \\ v_R &\frac{1}{b_R} - \frac{v_R}{b_R} u_R(\sigma^-) \end{pmatrix} \ . \end{align} Combining the left-moving part and the right-moving part, we obtain four possible solutions in total: \begin{align} g_A(\sigma_+,\sigma_-) &= \begin{pmatrix} \frac{1}{b}(1-{u}(\sigma^+){v}(\sigma^-)) & {u}(\sigma^+) \\ -{v}(\sigma^-) & b \end{pmatrix} \cr g_B(\sigma_+,\sigma_-) &= \begin{pmatrix} a & \bar{u}(\sigma^-) \\ -\bar{v}(\sigma^+) & \frac{1}{a}(1-\bar{u}(\sigma^-)\bar{v}(\sigma^+)) \end{pmatrix} \cr g_C(\sigma_+,\sigma_-) &= \begin{pmatrix} {a}(\sigma^-) & c_1\\ \frac{1}{c_1}(-1+{a}(\sigma^{-}){b}(\sigma^+)) &{b}(\sigma^+) \end{pmatrix} \cr g_D(\sigma_+,\sigma_-) &= \begin{pmatrix} \bar{a}(\sigma^+) & \frac{1}{c_2}(1-\bar{a}(\sigma^+)\bar{b}(\sigma^-)) \\ -c_2 & \bar{b}(\sigma^-) \end{pmatrix} \ , \end{align} where $u(\sigma^+), v(\sigma^-), \bar{u}(\sigma^{-}), \bar{v}(\sigma^{+}), a(\sigma^{-}), b(\sigma^+) , \bar{a}(\sigma^{+}), \bar{b}(\sigma^{-})$ are arbitrary real functions, and $(b,a,c_1,c_2)$ are real integration constants. We can read off the gauge invariant motion of strings from the $u$ and $v$ components: \begin{align} &A: & u&=u(\sigma^+) \ , & v&= v(\sigma^-) \cr &B: & u&=\bar{u}(\sigma^-) \ ,& v&= \bar{v}(\sigma^+) \cr &C: & u&=c_1 \ , & v&= \frac{1}{c_1}(1-a(\sigma^-)b(\sigma^+)) \cr &D: & u&= \frac{1}{c_2}(1-\bar{a}(\sigma^+)\bar{b}(\sigma^-)) \ , & v&= c_2 \ . 
\label{sols} \end{align} It is interesting to note that the solutions $A$ and $B$ are actually solutions of the string equation of motion for any two-dimensional target space metric written in the conformal (Kruskal) coordinates: $\mbox{d} s^2 = f(u,v) \mbox{d} u\mbox{d} v$. We still have a gauge degree of freedom associated with the conformal transformation: $\sigma^+ \to f(\sigma^+)$ and $\sigma^- \to \bar{f}(\sigma^-)$. By using this gauge symmetry, we can {\it locally} gauge away the arbitrary functions in \eqref{sols} so that they reduce to the point-particle (collapsed) string solution. For example, in the solution $A$ (similarly for $B$), we can expand \begin{align} u = u_0 + p_u(\tau + \sigma) + \sum_{n\neq 0} \alpha_n e^{-in(\tau + \sigma)} \ . \end{align} As is the case with flat space, we can gauge away the oscillatory part, and due to the periodicity of $\sigma$ for closed strings, we have to set $p_u = 0$ unless the target space has periodic directions. In the solution $C$ (similarly for $D$), we can locally set $ v = v(\tau)$ independent of $\sigma$ by using the conformal transformation. The resultant string motion is nothing but the geodesic for a massless point particle in the two-dimensional black hole. Thus the classical solution of the string equation collapses to a massless point particle in the two-dimensional black hole background, which seems to be consistent with the vertex operator analysis, where only the tachyon fields are classically dynamical. However, if one allows folded string solutions, we can construct more general string solutions, as we will review in the following section \ref{sec:3-3-2}. This can be done by patching different solutions of \eqref{sols} within the same world-sheet. \subsubsection{folded strings}\label{sec:3-3-2} For illustrative purposes, let us begin with the folded string solution \cite{Bardeen:1975gx,Maldacena:2005hi} in two-dimensional flat space $(T,X)$. 
We can fix the world-sheet conformal invariance by choosing the gauge $\tau = T$.\footnote{This static gauge choice is not always possible, especially in a curved space-time background.} The classical equation of motion is given by $\partial_+\partial_- X = 0$ with the Virasoro constraint\footnote{If one embeds the two-dimensional Minkowski space in a higher dimensional space-time, the Virasoro constraint can be relaxed. For instance, if one introduces a contribution from zero modes other than $(T,X)$, the resultant two-dimensional motion becomes massive rather than massless.} \begin{align} -2 + (\partial_+ X)^2 = 0 \ , \ \ -2 + (\partial_- X)^2 = 0 \ , \end{align} which amounts to $\partial_+ X = \pm \sqrt{2}$. If we restrict ourselves to solutions with a continuous first derivative, they reduce to massless particles. To obtain the folded string solution, we can set $X= X_+(\sigma^+) + X_-(\sigma^-)$, where $X_+$ and $X_-$ are periodic functions with the same period and with derivatives $\pm \sqrt{2}$. In other words, we partition the world-sheet, assign different solutions that satisfy $ \partial_+ X = \pm \sqrt{2}$, and patch them together so that the full solution is periodic in the $\sigma$ direction. A simple solution with a static center of mass is given by \begin{align} X = \sqrt{2}(|\sigma^+|_{\mathrm{per}} + |\sigma^-|_{\mathrm{per}}) \ , \label{minfold} \end{align} where the periodic absolute value function $|\sigma^+|_{\mathrm{per}}$ is given by $|\sigma^+|_{\mathrm{per}} = |\sigma^+|$ for $-\pi<\sigma^+ \le \pi$ and periodically extended outside this interval. More complicated solutions are possible, and one simple way to obtain them is to perform a target space Lorentz boost on the solution \eqref{minfold}. The resultant string solutions describe the folded string with a moving center of mass. Note that the Lorentz boost breaks our original gauge choice. Next, we will consider the linear dilaton background $\Phi = QX$. 
The equation of motion does not change, but the Virasoro constraint is modified as \begin{align} - 2 + (\partial_+ X)^2 - Q\partial_+^2 X = 0 \ , \ \ - 2 + (\partial_- X)^2 - Q\partial_-^2 X = 0 \ . \end{align} The point is that, due to the existence of the second derivatives in the Virasoro constraint, solutions such as \eqref{minfold} are no longer allowed. The most general solutions are given by \begin{align} X = X_0 -Q \log\left(\cosh\frac{\sqrt{2}\tau}{Q} + \cosh\frac{\sqrt{2}\sigma}{Q}\right) \ . \end{align} Except for the special limit where the string collapses to a point particle, the solution is not periodic in the $\sigma$ direction. Thus only the long string stretched to infinity is an allowed folded string solution. This agrees with the fact that there are no closed string states in the linear dilaton theory other than the massless tachyon. The inclusion of the Liouville potential does not alter the qualitative features of the classical string solutions. Let us now move on to the folded string solutions in the two-dimensional black hole. First, we partition the string world-sheet as in figure \ref{fig:partition}. We know that the solution should be locally given by \eqref{sols}. We fix the world-sheet conformal invariance by imposing the boundary condition: \begin{align} &A: & u&=u_0 +p^+\sigma^+_{\mathrm{per}}\ , & v&=v_0+p^-\sigma^-_{\mathrm{per}} \ ,\cr &B: & u&=u_0 +p^+\sigma^-_{\mathrm{per}}\ , & v&=v_0+p^-\sigma^+_{\mathrm{per}} \ , \label{first} \end{align} where $\sigma_{\mathrm{per}}^+ = \sigma^+ - m\pi$ for $-\frac{\pi}{2}\le \sigma^+ - m\pi <\frac{\pi}{2}$ and periodically identified outside this range. 
If we had considered a flat Minkowski space $\mbox{d} s^2 = \mbox{d} u\mbox{d} v$, the solutions in the regions $C$ and $D$ would be independent of $\sigma$: \begin{align} &C_{\mathrm{flat}}: & u&=u_0 + \frac{\pi p^{+}}{2}\ , & v&=v_0 + \frac{\pi p^-}{2}+ p^-(\sigma_{\mathrm{per}}^++\sigma_{\mathrm{per}}^-) \ , \cr &D_{\mathrm{flat}}: & u&=u_0 +\frac{\pi p^+}{2} +p^+(\sigma^+_{\mathrm{per}}+\sigma^-_{\mathrm{per}})\ , & v&=v_0+\frac{\pi p^-}{2} \ , \end{align} so that we have a fold as in \eqref{minfold}. In the two-dimensional black-hole case, the solution in the $C$ and $D$ regions is given by \begin{align} C_{\mathrm{2DBH}}: u&=u_0 + \frac{\pi p^{+}}{2}\ , \cr v &=\frac{1}{u_0+\frac{\pi p^+}{2}} \left(1-\frac{[1-(u_0+\frac{\pi p^+}{2})(v_0+p^-\sigma_{\mathrm{per}}^+)]\times{[1-(u_0+\frac{\pi p^+}{2})(v_0+p^-\sigma_{\mathrm{per}}^-)]}}{[1-(u_0+\frac{\pi p^+}{2})(v_0-\frac{\pi p^-}{2})]} \right) \ , \cr D_{\mathrm{2DBH}}: u& =\frac{1}{v_0+\frac{\pi p^-}{2}} \left(1-\frac{[1-(v_0+\frac{\pi p^-}{2})(u_0+p^+\sigma_{\mathrm{per}}^+)]\times{[1-(v_0+\frac{\pi p^-}{2})(u_0+p^+\sigma_{\mathrm{per}}^-)]}}{[1-(v_0+\frac{\pi p^-}{2})(u_0-\frac{\pi p^+}{2})]} \right) \cr v&=v_0+\frac{\pi p^-}{2} \ . \end{align} We could continue this analysis, but the important point is that in the regions $A'$ and $B'$ the solution turns out to be linear in $\sigma^+$ and $\sigma^-$ again, just as in \eqref{first}. Thus the structure repeats itself, and a simple recursion formula to derive the full solution exists (see \cite{Bars:1994sv}). Physically, the pulsating string falls into the black hole as a massive particle. The folds move with the speed of light because $u$ or $v$ is constant. 
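The flat-space folded string \eqref{minfold} underlying this construction can be checked directly: away from the folds it satisfies the Virasoro constraint $(\partial_\pm X)^2 = 2$, and it is $2\pi$-periodic in $\sigma$, as required for a closed string. A minimal numerical sketch (our own illustration; the evaluation points are arbitrary generic points avoiding the folds):

```python
import math

SQ2 = math.sqrt(2)

def per_abs(x):
    # |x|_per : equals |x| on (-pi, pi], extended with period 2*pi
    y = math.fmod(x + math.pi, 2 * math.pi)
    if y <= 0:
        y += 2 * math.pi
    return abs(y - math.pi)

def X(tau, sigma):
    # folded string of (minfold): X = sqrt(2) (|sigma^+|_per + |sigma^-|_per)
    return SQ2 * (per_abs(tau + sigma) + per_abs(tau - sigma))

h = 1e-6
for tau, sigma in ((0.3, 0.7), (1.1, -0.4), (2.0, 2.5)):
    # derivatives along sigma^+ and sigma^- by central differences
    dXp = (X(tau + h, sigma + h) - X(tau - h, sigma - h)) / (4 * h)
    dXm = (X(tau + h, sigma - h) - X(tau - h, sigma + h)) / (4 * h)
    assert abs(dXp ** 2 - 2.0) < 1e-6   # Virasoro: (d_+ X)^2 = 2 away from folds
    assert abs(dXm ** 2 - 2.0) < 1e-6   # Virasoro: (d_- X)^2 = 2 away from folds
    assert abs(X(tau, sigma + 2 * math.pi) - X(tau, sigma)) < 1e-12  # closed string
```

At the kinks (the folds) the first derivative jumps between $\pm\sqrt{2}$, which is precisely what the patching of regions $A$, $B$, $C$, $D$ implements in the black hole case.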
\begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\linewidth,keepaspectratio,clip]{partition.eps} \end{center} \caption{In order to obtain the folded string solution, we partition the world-sheet and assign different solutions on each patch.} \label{fig:partition} \end{figure} The folded string solution in the two-dimensional Liouville background was identified as an {\it open string} attached to the FZZT brane \cite{Fateev:2000ik,Teschner:2000md} in a certain limit \cite{Maldacena:2005hi}. The scattering amplitude computed from the CFT analysis completely matches the matrix model computation in the non-singlet sector \cite{Fidkowski:2005ck,Kostov:2006dy}. In this sense, it is natural to regard the non-singlet sector (or winding sector) of the matrix model as corresponding to the folded string solution in the Lorentzian target space theory of the Liouville background. Therefore, one might expect that the folded string solution in the two-dimensional black hole background should play an important role in constructing the dual matrix model for the two-dimensional black hole with Lorentzian target space signature. At present, we do not have a conclusive argument for or against this direction, but we present some remarks here: \begin{itemize} \item It is important to note that the folded string solution in the Liouville background is in the open string sector and not in the closed string sector. This should be contrasted with the winding sector in the Euclidean two-dimensional black hole (and its hypothetical analytic continuation to the Lorentzian signature black hole). \item To obtain the long stretched string solution, the existence of the linear dilaton in the Virasoro constraint was crucial. In our semiclassical analysis of the two-dimensional black hole, the existence of the nontrivial dilaton was neglected. This is the reason why we obtained a folded string solution that is asymptotically identical to the flat space solution \eqref{minfold}. 
Since such a solution was excluded in the linear dilaton case, we might expect that only the long string solution would survive in the full quantization.\footnote{Just adding the dilaton term to the classical energy-momentum tensor is not consistent in the classical treatment because the one-loop correction to the equation of motion, together with the classical contribution from the dilaton, guarantees the one-loop holomorphic structure of the energy-momentum tensor.} \item To support such long string solutions, we need a D-brane localized at asymptotic infinity. However, the two-dimensional Lorentzian black hole does {\it not} admit such a D-brane solution from the classical DBI action analysis (see section \ref{sec:7-1}). \end{itemize} We leave the role of the folded strings in the full quantization of the Lorentzian two-dimensional black hole system for future studies. In most of the following sections, we will concentrate on the mini-superspace approximation (point-particle approximation). Before closing our discussion on the classical string solutions in the two-dimensional black hole, we present, for completeness, the most general solutions of the classical sigma model without imposing the Virasoro constraint \cite{deVega:1993pm}. First we introduce arbitrary three-component vectors lying on the hyperboloids \begin{align} \vec{A}\cdot\vec{A} &= - (A_0)^2 + (A_1)^2 + (A_2)^2 = 1 \cr \vec{B}\cdot\vec{B} &= - (B_0)^2 + (B_1)^2 + (B_2)^2 = 1 \ .
\end{align} Then the most general solutions of the sigma model (in the Schwarzschild-like coordinates) are given by \begin{align} \cosh^2 \rho(\sigma,\tau) &= \frac{1}{2}\left(1+\vec{A}(\sigma^+)\cdot \vec{B}(\sigma^-)\right) \cr t(\sigma,\tau) &= \frac{1}{2}\int^{\sigma^+}_a\left(\frac{\epsilon_{abc}A'_aA_bB_c}{\vec{A}\cdot\vec{B}-1}\right)(x_+,b)dx_+ \cr &-\frac{1}{2}\int^{\sigma^-}_b\left(\frac{\epsilon_{abc}B'_aA_bB_c}{\vec{A}\cdot\vec{B}-1}\right)(\sigma^+,x_-)dx_- + c \ . \end{align} The energy-momentum tensor takes the form \begin{align} T_{++} = -\frac{A'(\sigma^+)^2}{8} \ , \ \ T_{--} = -\frac{B'(\sigma^-)^2}{8} \ . \end{align} If one imposes the Virasoro constraint, the solutions reduce to \eqref{sols}. \subsubsection{spectrum from partition function?}\label{sec:3-3-3} The partition function for the Lorentzian two-dimensional black hole should be able to determine its spectrum in principle. However, in practice the subtleties associated with the non-compactness of the target space and the Lorentzian signature make the partition function ill-defined and prevent us from reading off the spectrum. At the classical level, the Lorentzian two-dimensional black hole is obtained by the Wick rotation $\theta \to i t$ in the cigar geometry. The vertex operator (corresponding to the massive character) should be Wick rotated as well \begin{align} \Delta(j,m=-\bar{m}=\frac{n}{2}) &= -\frac{j(j+1)}{k-2} + \frac{n^2}{4k} \cr \to \Delta(j,\omega) &= -\frac{j(j+1)}{k-2} - \frac{\omega^2}{4k} \end{align} based on the naive Wick rotation $n \to i \omega$. Since the time-direction $t$ is non-compact, $\omega$ should be continuous unlike the Euclidean two-dimensional black hole case, where $n$ is quantized.
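As a quick check of this naive continuation (the signs depend on the Wick rotation conventions adopted above), the Euclidean plane wave along the compact direction continues to an ordinary Lorentzian plane wave:
\begin{align}
e^{in\theta}\Big|_{\theta = it,\, n = i\omega} = e^{i(i\omega)(it)} = e^{-i\omega t} \ , \qquad \frac{n^2}{4k}\Big|_{n = i\omega} = -\frac{\omega^2}{4k} \ ,
\end{align}
which reproduces the sign flip in the conformal weight $\Delta(j,\omega)$.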
For the same reason, the Lorentzian black hole does not have winding states along the $t$ direction.\footnote{Since the discrete states that descend from the $SL(2;\mathbb{R})$ primaries have a winding quantum number in the Euclidean black hole, the corresponding states do not seem to exist in the Lorentzian black hole, but this is a controversial issue.} From the mini-superspace analysis we will review in section \ref{sec:3-3-4}, the classical spectrum is obtained from this naive analytic continuation (with some care about the boundary conditions). We would like to make an attempt to read the spectrum from the Lorentzian partition function with some hindsight from the mini-superspace analysis. The proposed partition function (with a suitable analytic continuation) for the Lorentzian $SL(2;\mathbb{R})/U(1)$ coset is \begin{align} Z_{SL(2;\mathbb{R})/U(1)} = \int_{\mathbb{R}^2} \frac{\mbox{d} v^2}{\tau_2}\frac{e^{\frac{v_1^2}{\tau_2}-\frac{\pi k}{\tau_2}|v|^2} }{\sqrt{\tau_2}|\theta_1(\tau,iv)|^2 } \sqrt{\tau_2}|\eta(\tau)|^2 \ . \label{ppl} \end{align} Here we study the bosonic case first for simplicity. Since we are integrating over the whole complex plane spanned by $v$, this partition function is formally the same as that for the Euclidean axial coset \eqref{ppo}. The conclusion that the Euclidean coset and the Lorentzian coset nevertheless have the same spectrum seems too quick, however, given the fact that the mini-superspace approximation gives a totally different answer. In the Euclidean case, the divergence near $iv = u = n+ \omega \tau$ gives a bulk contribution of the noncompact boson (with a linear dilaton) coupled to a compact boson with radius $R^2 = k$, where the summation over $n$ and $\omega$ shows the existence of the compact direction. In the Lorentzian case, we propose that the torus modulus $\tau$ should be Lorentzian, namely that $\tau$ and $\bar{\tau}$ are real and mutually independent.
On the Lorentzian torus, the divergence of the partition function \eqref{ppl} only appears at $v=0$, leading to the contribution of the noncompact boson (with a linear dilaton) coupled to a non-compact boson:\footnote{On the Lorentzian torus, the Euclidean coset partition function still has infinitely many origins of divergence at $ u = n+\omega \tau$, which give a compact boson.} \begin{align} Z \sim \log \epsilon Z_{free}^2 + \text{finite terms}\ . \end{align} The leading order partition function seems to agree with the mini-superspace analysis.\footnote{One should note that the partition function on the Lorentzian torus needs a careful $i\epsilon$ prescription. This is an interesting but subtle subject, and we will not delve into the details here.} More precise discussions are needed to determine the finite part of the partition function. The situation is more involved than in the Euclidean case, and so far no conclusive agreement is available. One interesting related question is how the target-space supersymmetry is broken in the (world-sheet supersymmetric) Lorentzian two-dimensional black hole background. If we restrict ourselves to the leading order partition function, we can certainly define a GSO projection, and the partition function (at the leading order) vanishes: asymptotically the theory is essentially free, the spectrum coincides with that of the supersymmetric linear dilaton theory, and the target-space supersymmetry operator can be constructed. Therefore, the target-space supersymmetry should be broken near the horizon, where the curvature effects will be present. To study this problem, let us consider the type II GSO projected partition function for the two-dimensional Lorentzian black hole (coupled to free transverse sectors, represented by free CFTs for simplicity).
Our original partition function for the Lorentzian $SL(2;\mathbb{R})/U(1)$ Kazama-Suzuki coset is the diagonal modular invariant one: \begin{align} Z^{(NS)}(\tau) = \int_{\mathbb{R}^2} \frac{\mbox{d} v^2}{\tau_2} \frac{|\theta_3(\tau,iv)|^2}{\sqrt{\tau_2}|\theta_1(\tau,iv)|^2 } \sqrt{\tau_2}|\eta(\tau)|^2 e^{-\frac{\pi k}{\tau_2}|v|^2} \ \end{align} for NS-NS sectors (other sectors can be obtained by replacing $\theta_3(\tau,u)$ with $\theta_a(\tau,u)$ $(a=0,1,2)$ from the spectral flow). A natural candidate for the type II partition function is \begin{align} Z(\tau) = \int_{\mathbb{R}^2} \frac{\mbox{d} v^2}{\tau_2} \frac{|\theta_3(\tau,iv)\theta_3^3 - \theta_2(\tau,iv)\theta_2^3 \pm \theta_1(\tau,iv)\theta_1^3 - \theta_0(\tau,iv)\theta_0^3|^2}{\sqrt{\tau_2}|\theta_1(\tau,iv)|^2 } \sqrt{\tau_2}|\eta(\tau)|^{-14} e^{-\frac{\pi k}{\tau_2}|v|^2} \ , \label{typii} \end{align} where $\theta_a^3 = \theta_a(\tau,0)^3$ have been introduced from the free CFT contributions. The expression is almost supersymmetric: for example, if we first take the leading diverging part at $v=0$, then the fermionic oscillator part gives zero after summing over the spin structures due to the abstruse identity of Jacobi. A remaining uncancelled part, which describes the breaking of the target-space supersymmetry, is of particular interest. By using the Riemann quartic identity \eqref{rqi}, we can rewrite \eqref{typii} as \begin{align} Z(\tau) = \int_{\mathbb{R}^2} \frac{\mbox{d} v^2}{\tau_2} \frac{|\theta_1(\tau,i\frac{v}{2})|^8}{\sqrt{\tau_2}|\theta_1(\tau,iv)|^2 } \sqrt{\tau_2}|\eta(\tau)|^{-14} e^{-\frac{\pi k}{\tau_2}|v|^2} \ .
\label{typeiit} \end{align} Now one can see that the leading order divergence near the origin $(v=0)$ is indeed removed, which suggests that the bulk part of the spectrum is supersymmetric.\footnote{Unfortunately, for complex $\tau$, the partition function still shows a volume divergence at $iv = n + \omega \tau$ with a pair of {\it odd} integers $(n,\omega)$, where the GSO projection acts {\it oppositely}. We do not fully understand the origin of this failure of bulk cancellation. Since these divergences seem unphysical in the Lorentzian partition function if we stick to the Lorentzian torus, we do not see their physical relevance.} It is still difficult to evaluate \eqref{typeiit} to uncover the non-supersymmetric spectrum of the two-dimensional Lorentzian black hole, partly because the formal $q$ expansion of \eqref{typeiit} gives a divergent series. Let us suppose that the dominant part of the $v$ integral comes from near the origin $v=0$, since in the large $k$ limit the Gaussian factor would provide strong convergence for the integral (this is not well justified because of the oscillatory nature of the $\theta$ functions on the Lorentzian torus). The subsequent integration over $v$ would lead to \begin{align} Z(\tau) \sim |\eta(\tau)|^4 \ . \end{align} The partition function looks like that of a free zero-dimensional bosonic string (probably localized near the horizon). The support of the (non-supersymmetric) bosonic degrees of freedom should be localized because we do not have a diverging volume factor. It seems likely that the supersymmetry is broken only locally in the vicinity of the horizon. One may naively guess that this should correspond to the Lorentzian version of the winding condensation near the tip of the cigar in the Euclidean two-dimensional black hole as we will see in section \ref{sec:3-4}.
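For reference, the bulk cancellation at $v=0$ invoked above rests on the abstruse identity of Jacobi (in our notation $\theta_0$ denotes what is often written $\theta_4$, and $\theta_1(\tau,0)=0$):
\begin{align}
\theta_3(\tau,0)^4 - \theta_2(\tau,0)^4 - \theta_0(\tau,0)^4 = 0 \ ,
\end{align}
so the numerator of \eqref{typii} vanishes at the origin of the $v$ plane.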
A further study of this subject is of great interest, and a more precise definition of the (almost) supersymmetric partition function \eqref{typeiit} and its evaluation is highly desirable. \subsubsection{mini-superspace approximation}\label{sec:3-3-4} Since it is difficult to read the spectrum of the Lorentzian two-dimensional black hole directly from the partition function, it is very important to study the classical spectrum based on the mini-superspace approximation. The Wick rotation of the mini-superspace eigenfunctions in the Euclidean cigar geometry \eqref{ef} is not so trivial. Fortunately, the Lorentzian eigenfunctions have already been classified thoroughly in \cite{Dijkgraaf:1992ba}. The complete basis for waves outside the black hole horizon is spanned by the following four types of eigenfunctions\footnote{Here we adopt a slightly different normalization from \cite{Dijkgraaf:1992ba}.} of the Lorentzian Klein-Gordon operator. For those with the eigenvalue $\frac{p^2}{4k}-\frac{\omega^2}{4k}+ \frac{1}{4k}$ of the Klein-Gordon operator, the four eigenfunctions are \begin{align} U^p_{\omega}(\rho,t) &= - \frac{\Gamma^2(\nu_+)}{\Gamma(1-i\omega)\Gamma(-ip)} e^{-i\omega t} (\sinh \rho)^{-i\omega} F(\nu_+,\nu^*_-;1-i\omega;-\sinh^2\rho) \nonumber\\ &\sim e^{-i\omega t - i\omega \ln \rho} \qquad \mbox{as} \qquad \rho\,\rightarrow\,0~, \label{U} \\ V^p_{\omega}(\rho,t) &= - \frac{\Gamma^2(\nu^*_+)}{\Gamma(1+i\omega)\Gamma(ip)} e^{-i\omega t} (\sinh \rho)^{i\omega} F(\nu^*_+,\nu_-;1+i\omega;-\sinh^2\rho) \nonumber\\ &\sim e^{-i\omega t + i\omega \ln \rho} \qquad \mbox{as} \qquad \rho\,\rightarrow\,0~, \label{V} \\ L^p_{\omega} (\rho,t ) &= e^{-i\omega t} (\sinh \rho)^{-1-ip} F(\nu^*_+, \nu^*_-;1+ip; -\frac{1}{\sinh^2 \rho}) \nonumber\\ &\sim e^{-\rho} e^{-ip\rho -i\omega t} \qquad \mbox{as} \qquad \rho\,\rightarrow\,\infty~, \label{L} \\ R^p_{\omega} (\rho,t ) &= e^{-i\omega t} (\sinh \rho)^{-1+ip} F(\nu_+, \nu_-;1-ip; -\frac{1}{\sinh^2 \rho}) \nonumber\\ &\sim
e^{-\rho} e^{+ip\rho -i\omega t} \qquad \mbox{as} \qquad \rho\,\rightarrow\,\infty \label{R} \end{align} with the notations \begin{align} \nu_{\pm} = \frac{1}{2} - i\left(\frac{p}{2}\pm \frac{\omega}{2}\right)~. \nonumber \end{align} These eigenfunctions are defined by the following analytic continuations of the mini-superspace Euclidean eigenfunctions: \begin{align} U^p_{\omega}(\rho,t)& = \left\{ \begin{array}{ll} \phi^p_{n=+i\omega}(\rho,\theta = +it)~ ~~ (\omega>0,~n<0) \\ \phi^p_{n=-i\omega}(\rho,\theta = -it)~ ~~ (\omega<0,~n>0) \end{array} \right. \nonumber\\ V^p_{\omega}(\rho,t)&= \left\{ \begin{array}{ll} \phi^{-p}_{n=-i\omega}(\rho,\theta = - it)~ ~~ (\omega>0,~n<0) \\ \phi^{-p}_{n=+i\omega}(\rho,\theta = +it)~ ~~ (\omega<0,~n>0) \end{array} \right. \nonumber\\ L^p_{\omega}(\rho,t)&= \phi^p_{L,n=i\omega} (\rho, \theta=+it) \nonumber\\ R^p_{\omega}(\rho,t)&= \phi^p_{R,n=i\omega} (\rho, \theta=+it) ~, \label{ac UVLR} \end{align} where the $n<0$ and $n>0$ ranges are mapped to $\omega>0$ and $\omega<0$, respectively. As discussed in \cite{Dijkgraaf:1992ba}, only two out of the four eigenfunctions are linearly independent. In particular, \begin{align} V_\omega^p(\rho, t) = U^{p*}_\omega(\rho, -t) \qquad \mbox{and} \qquad R_\omega^p(\rho, t) = L^{p*}_\omega (\rho, -t). \nonumber \end{align} The reason why we introduce the above four eigenfunctions is that they encode four possible boundary conditions (we assume $p>0$ here) in the Lorentzian black hole background. Recall that, for the region outside the horizon of the eternal black hole, the boundaries consist of four segments: the `future (past) horizon' $t=+\infty, \, \rho=0$ ($t=-\infty, \, \rho=0$), denoted by ${\cal H}^+$ (${\cal H}^-$), and the `future (past) infinity' $t=+\infty, \, \rho=+\infty$ ($t=-\infty, \, \rho=+\infty$), denoted by ${\cal I}^+$ (${\cal I}^-$).
The four eigenfunctions $U, V, L, R$ are the ones obeying boundary conditions: \begin{align} U^p_{\omega} &= 0 ~~ \mbox{at}~ {\cal H}^-~,~~~ V^p_{\omega} = 0 ~~ \mbox{at}~~ {\cal H}^+~, \cr ~~~ L^p_{\omega} &= 0~ (R^p_{\omega} = 0)~~ \mbox{at}~~ {\cal I}^+~, ~~~ R^p_{\omega} = 0 ~(L^p_{\omega} = 0) ~~ \mbox{at}~~ {\cal I}^- \end{align} for $\omega>0$ ($\omega <0$). See Figure \ref{wave}. \begin{figure}[htbp] \begin{center} \includegraphics[width=13cm,height=10cm] {wave.eps} \end{center} \caption{The boundary conditions of the Lorentzian eigenfunctions ($\omega>0$ sector). For $\omega<0$, the figures for $L$ and $R$ should be interchanged.} \label{wave} \end{figure} By Wick rotating the mini-superspace reflection relations \eqref{cref rel}, we obtain linear relations among the Lorentzian eigenfunctions: \begin{align} U^p_{\omega} = L^{p}_{\omega} + {\cal R}_0(p,\omega) R^p_{\omega} \qquad \mbox{and} \qquad V^p_{\omega} = R^{p}_{\omega} + {\cal R}^*_0(p,\omega) L^p_{\omega}~. \label{decomp ef 2} \end{align} Equivalently, \begin{align} L^p_{\omega} &= \frac{1}{1-|{\cal R}_0(p,\omega)|^2} \left\{ U^p_{\omega} - {\cal R}_0(p,\omega) V^p_{\omega} \right\} \cr \quad \mbox{and} \quad R^p_{\omega} &= \frac{1}{1-|{\cal R}_0(p,\omega)|^2} \left\{ V^p_{\omega} - {\cal R}^*_0(p,\omega) U^p_{\omega} \right\}. \label{decomp ef 2-2} \end{align} Here, the mini-superspace reflection amplitude ${\cal R}_0(p,\omega)$ in Lorentzian theory is given by \begin{align} {\cal R}_0(p,\omega) = \frac{\Gamma(+ip)\Gamma^2(\nu_+)} {\Gamma(-ip)\Gamma^2(\nu^*_-)} \equiv - \frac{B(\nu_+,\nu_-)}{B(\nu_+^*,\nu_-^*)} \cdot \frac{\cosh \pi \left(\frac{p-\omega}{2}\right)} {\cosh \pi \left(\frac{p+\omega}{2}\right)}~. 
\label{cref amp 2} \end{align} Notice that, in sharp contrast to the Euclidean black hole, the reflection amplitude is less than unity due to the second factor: \begin{align} |{\cal R}_0(p,\omega)|^2 = {\cosh^2 \pi \left({p-\omega \over 2}\right) \over \cosh^2 \pi \left({p+\omega \over 2} \right)} \leq 1. \label{inequality} \end{align} The inequality is saturated if and only if $p\,\omega = 0$; for $p>0$ this means $\omega = 0$. The inequality \eqref{inequality} shall play a prominent role in understanding string dynamics in the Lorentzian black hole background. The mini-superspace reflection relations for $U^p_{\omega}$, $V^p_{\omega}$ are also expressible in a form similar to the Euclidean ones. Recalling that ${\cal R}_0(-p, \omega) {\cal R}_0(+p, \omega) = 1$, \begin{align} U^{-p}_{\omega}(\rho,t)= {\cal R}_0(-p,\omega) U^{p}_{\omega}(\rho,t) \qquad \mbox{and} \qquad V^{-p}_{\omega}(\rho,t)= {\cal R}^*_0(-p,\omega) V^{p}_{\omega}(\rho,t)~, \label{cref rel UV} \end{align} while $L^p_{\omega}$ and $R^p_{\omega}$ are simply related by reflection: \begin{align} L^{-p}_{\omega} (\rho,t) = R^{+p}_{\omega} (\rho,t)~. \label{cref rel LR} \end{align} Moreover, $U^p_{\omega}$ and $V^p_{\omega}$ are linearly independent except in the special kinematic regime, $\omega=0$. Notice also that, in the relation \eqref{inequality}, the reflection amplitude involves the mini-superspace contribution only, not the full-fledged stringy one. Before proceeding further, we shall collect explicit relations among the inner products of Lorentzian primary fields, where the inner product is defined with respect to the Lorentzian measure $\mbox{d} v_L = k\sinh 2\rho \mbox{d} \rho \mbox{d} t$.
Taking quantum numbers $p$, $\omega$ fixed and dropping the delta function factors $2\pi\delta(p-p')$, $2\pi\delta(\omega-\omega')$ for notational simplicity, we have \begin{align} & (U^p_{\omega}, U^p_{\omega}) = (V^p_{\omega}, V^p_{\omega}) = N_0(p,\omega)~, \qquad N_0(p,\omega) \equiv \frac{1+|{\cal R}_0(p,\omega)|^2}{2} \nonumber\\ & (U^p_{\omega},V^p_{\omega}) = {\cal R}_0^*(p,\omega)~, \nonumber\\ & (L_{\omega}^p,L_{\omega}^p)= (R^p_{\omega}, R^p_{\omega})= \frac{1}{2}~, \qquad \quad (L^p_{\omega}, R^p_{\omega})= 0 ~, \nonumber\\ & (U^p_{\omega},L^p_{\omega}) = (V^p_{\omega}, R^p_{\omega}) =\frac{1}{2}~, \qquad \quad (R^p_{\omega},U^p_{\omega}) = (V^p_{\omega}, L^p_{\omega}) =\frac{{\cal R}_0(p,\omega)}{2}~. \label{inner product UVLR} \end{align} The inner products involving $L^p_\omega$ and $R^p_\omega$ are readily evaluated since the dominant contributions are supported in the asymptotic region $\rho \gg 0$, yielding the volume factor $2\pi\delta(0)$. The remaining inner products can be extracted from the linear relations \eqref{decomp ef 2}, \eqref{decomp ef 2-2}.\footnote {We checked these inner products numerically using MATHEMATICA.} We also fixed the overall normalization factors from consistency with the Euclidean inner product \eqref{inner product} under the $\omega\,\rightarrow\, 0$ limit. Notice also that \begin{align} N_0(-p,\omega) = \left|{\cal R}_0(-p,\omega)\right|^2 \, N_0(+p,\omega)~, \end{align} as is consistent with the mini-superspace reflection relation \eqref{cref rel UV}. It is easy to construct the exact string vertex operators or primary states corresponding to the mini-superspace eigenfunctions $U$, $V$, $L$, $R$.
To be specific, we shall consider primarily the fermionic $SL(2;\mathbb{R})_k/U(1)$ supercoset conformal field theory.\footnote {For the bosonic $SL(2;\mathbb{R})_{\kappa}/U(1)$ coset conformal field theory, we instead have $h= \bar{h}= \frac{p^2}{4(\kappa-2)}-\frac{\omega^2}{4\kappa} +\frac{1}{4(\kappa-2)}$, and ${\cal R}(p,\omega)\equiv {\cal R}_0(p,\omega) \frac{\Gamma\left(1+\frac{ip}{\kappa-2}\right)} {\Gamma\left(1-\frac{ip}{\kappa-2}\right)}$. } The primary states $\ket{U^p_{\omega}}$, $\ket{V^p_{\omega}}$ are the ones of conformal weights $h= \bar{h}= \frac{p^2}{4k}-\frac{\omega^2}{4k} +\frac{1}{4k}$ and obey the exact reflection relations \begin{align} \ket{U^{-p}_{\omega}}= {\cal R}(-p,\omega) \ket{U^{p}_{\omega}}~, ~~~ \ket{V^{-p}_{\omega}}= {\cal R}^*(-p,\omega) \ket{V^{p}_{\omega}}~, \label{qref rel UV} \end{align} and the exact reflection amplitude is given by \begin{align} {\cal R}(p,\omega)\equiv {\cal R}_0(p,\omega) \frac{\Gamma\Big(1+\frac{ip}{k}\Big)} {\Gamma\Big(1-\frac{ip}{k}\Big)}. \label{exactra} \end{align} Notice that the string world-sheet effect entering through the $1/k$-correction is a pure phase. Thus, the exact reflection probability $\vert {\cal R} (p, \omega) \vert^2$ remains unmodified from the mini-superspace approximation result $\vert {\cal R}_0(p, \omega) \vert^2$ given in \eqref{inequality}. We shall normalize the primary states $\ket{U^p_{\omega}}$, $\ket{V^p_{\omega}}$ ($p>0$) as \begin{align} & \bra{U^p_{\omega}} U^{p'}_{\omega'}\rangle = \bra{V^p_{\omega}} V^{p'}_{\omega'}\rangle = N(p,\omega) \, 2\pi\delta(p-p') 2\pi\delta(\omega-\omega') ~, \nonumber \\ & \bra{V^p_{\omega}} U^{p'}_{\omega'} \rangle = {\cal R}^* (p,\omega) \, 2\pi\delta(p-p') 2\pi\delta(\omega-\omega') ~, \label{norm UV} \end{align} where the new normalization factor $N(p,\omega)$ is simply defined by replacing ${\cal R}_0$ with ${\cal R}$ in $N_0(p,\omega)$.
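Both the pure-phase nature of the $1/k$-correction in \eqref{exactra} and the modulus \eqref{inequality} follow from standard Gamma function identities; as a quick check for real $p$ and $\omega$ (a sketch in our conventions):
\begin{align}
\left|\frac{\Gamma\big(1+\frac{ip}{k}\big)}{\Gamma\big(1-\frac{ip}{k}\big)}\right| = 1 \ , \qquad \left|\Gamma\Big(\tfrac{1}{2}+ix\Big)\right|^2 = \frac{\pi}{\cosh \pi x} \ \Rightarrow \ |{\cal R}_0(p,\omega)|^2 = \frac{|\Gamma(\nu_+)|^4}{|\Gamma(\nu_-^*)|^4} = \frac{\cosh^2 \pi\left(\frac{p-\omega}{2}\right)}{\cosh^2 \pi\left(\frac{p+\omega}{2}\right)} \ ,
\end{align}
where we used $\Gamma(1-\frac{ip}{k}) = \overline{\Gamma(1+\frac{ip}{k})}$ and $|\Gamma(ip)| = |\Gamma(-ip)|$, so that the exact reflection probability indeed coincides with the mini-superspace one.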
The primary states $\ket{L^p_{\omega}}$, $\ket{R^p_{\omega}}$ are also definable by using the linear relations \eqref{decomp ef 2} or \eqref{decomp ef 2-2} but now with ${\cal R}_0$ replaced by ${\cal R}$. Notice that $\ket{U^p_{\omega}}$, $\ket{V^p_{\omega}}$ are the ones analytically continuable to the Euclidean primary states $\ket{\phi^{\pm p}_n}$, and they are often referred to as the `Hartle-Hawking vacua'. On the other hand, the states $\ket{L^p_{\omega}}$, $\ket{R^p_{\omega}}$ do not have Euclidean counterparts. Recall that, over the Euclidean black hole background, $\phi^p_{L,n}$, $\phi^p_{R,n}$ behave badly in the vicinity of $\rho = 0$ and hence are ill-defined. We also find it useful to introduce the dual basis $\widehat{\bra{U^p_{\omega}}}$, $\widehat{\bra{V^p_{\omega}}}$ ($p,p'>0$) with inner products \begin{align} & \widehat{\bra{U^p_{\omega}}} U^{p'}_{\omega'} \rangle = \widehat{\bra{V^p_{\omega}}} V^{p'}_{\omega'} \rangle = 2\pi\delta(p-p')2\pi\delta(\omega-\omega')~, \qquad \widehat{\bra{U^p_{\omega}}} V^{p'}_{\omega'} \rangle = \widehat{\bra{V^p_{\omega}}} U^{p'}_{\omega'} \rangle = 0~. \label{hat U V} \end{align} Explicitly, they are given by \begin{align} \widehat{\bra{U^p_{\omega}}} &= \frac{2}{1-\left|{\cal R}(p, \omega) \right|^2} \left\{ \bra{L^p_{\omega}} - {\cal R}^* (p, \omega) \bra{R^p_{\omega}} \right\}~, \cr \qquad \widehat{\bra{V^p_{\omega}}} &= \frac{2}{1-\left|{\cal R}(p, \omega) \right|^2} \left\{ \bra{R^p_{\omega}} - {\cal R}(p, \omega) \bra{L^p_{\omega}} \right\}~. \label{def hat U V} \end{align} As such, these dual states obey the following exact reflection relations: \begin{align} \widehat{\bra{U^{-p}_{\omega}}} = {\cal R}(p,\omega) \widehat{\bra{U^{p}_{\omega}}} \qquad \mbox{and} \qquad \widehat{\bra{V^{-p}_{\omega}}} = {\cal R}(p,\omega)^* \widehat{\bra{V^{p}_{\omega}}}~. \label{ref hat U V} \end{align} A remark is in order.
The dual states $\widehat{\bra{U^{p}_{\omega}}}$, $\widehat{\bra{V^{p}_{\omega}}}$ are {\em not\/} Wick rotatable to the Euclidean dual basis $\bra{\phi^{+p}_n}$, $\bra{\phi^{-p}_n}$, since $|{\cal R}(p,\omega)|=1$ for $\omega \in i\mathbb{R}$. The correct procedure would be to first define Wick rotations for the `ket' states, and then define their dual states within the Lorentzian Hilbert space. Nevertheless, one-point correlators in the Lorentzian theory, from which a set of physical observables can be computed, ought always to be analytically continuable to the one-point correlators in the Euclidean theory. Roughly speaking, ambiguities inherent to the Wick rotation of dual states drop out upon taking inner products. \subsection{Duality and winding tachyon condensation}\label{sec:3-4} One of the salient features of the (Euclidean) $SL(2;\mathbb{R})/U(1)$ coset model is the so-called Fateev-Zamolodchikov-Zamolodchikov (FZZ) duality \cite{FZZ}. Mathematically speaking, this duality has enabled us to compute exact two- and three-point functions of the $SL(2;\mathbb{R})/U(1)$ coset model and revealed their pole structures. Physically speaking, on the other hand, it has established a duality between the winding tachyon condensation and the singularity resolution of the geometry, and uncovered, from an exact CFT perspective, the importance of the winding tachyon condensation near the classical singularities. Let us formulate the FZZ duality in the $\mathcal{N}=2$ supersymmetric case. The FZZ duality states: {\bf FZZ duality} ($\mathcal{N}=2$): the supersymmetric $SL(2;\mathbb{R})/U(1)$ coset model with level $k$ is equivalent (up to chirality) to the $\mathcal{N}=2$ Liouville field theory (see e.g. \cite{Nakayama:2004vk} for reference): \begin{align} L = \int \mbox{d}^4\theta \Phi^\dagger \Phi + \int \mbox{d}^2\theta W(\Phi) + h.c. \cr W(\Phi) = \mu e^{\frac{1}{Q}\Phi} \ , \end{align} where $Q^2 = \frac{2}{k}$.
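As a standard consistency check of this duality (a well-known fact we recall here, not part of the derivation), the central charges of the two sides match:
\begin{align}
c_{\rm coset} = 3 + \frac{6}{k} = 3\left(1 + Q^2\right) = c_{{\cal N}=2 \ {\rm Liouville}} \ ,
\end{align}
where we used $Q^2 = \frac{2}{k}$; both sides equal $\frac{3(k+2)}{k}$.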
The appearance of the chirality flip suggests the T-dual nature of the duality. Indeed, the FZZ duality can be proved in the more general context of mirror symmetry. Physically, the appearance of the $\mathcal{N}=2$ Liouville potential in the T-dualized set-up can be anticipated as follows. As we studied in section \ref{sec:3-1}, the T-dual of the $SL(2;\mathbb{R})^{(A)}/U(1)$ axial coset model, whose classical geometry is the cigar, is classically described by the trumpet geometry. However, the trumpet geometry has a singularity coming from the fixed point of the (T-dualized) $U(1)$ angular direction. To avoid the existence of a naked singularity of the space-time, the (T-dualized) winding tachyon will condense. From the world-sheet viewpoint, the (winding) tachyon condensation is nothing but the $\mathcal{N}=2$ Liouville superpotential.\footnote{To avoid a possible confusion, we note that the original winding tachyon condensation becomes a non-winding tachyon with $\tilde{\theta}$ momentum after taking the T-duality.} The operator correspondence of the FZZ duality is almost clear. In the asymptotic region, one can write the vertex operators of primary states in the $SL(2;\mathbb{R})^{(A)}/U(1)$ coset model by using those of the linear dilaton times the $U(1)$ angular direction. We then perform the T-duality to write them down as asymptotic vertex operators of primary states in the $\mathcal{N}=2$ Liouville theory. The descendant structure is completely fixed by the $\mathcal{N}=2$ superconformal algebra. There are several ``proofs'' of the FZZ duality available in the literature. In the original work, FZZ gave a direct computation of the two- and three-point functions of both models (including winding violating correlation functions) and showed the equivalence between the two models whenever the computation based on the screening operator is available.
In \cite{Hori:2001ax}, the duality has been established rigorously at the level of the topological field theory from the viewpoint of mirror symmetry (T-duality of the linear sigma model that flows to the $SL(2;\mathbb{R})/U(1)$ coset in the infrared). As is the case with the usual mirror symmetry, it is natural to expect that the full conformal field theories are dual to each other, and indeed there is much supporting evidence for this. In another interesting derivation of the FZZ duality \cite{Tong:2003ik}, the domain wall dynamics of a certain $2+1$ dimensional gauge theory has been studied, resulting in two complementary descriptions --- the $SL(2;\mathbb{R})/U(1)$ coset model on one hand and the $\mathcal{N}=2$ Liouville theory on the other hand. We will not review the derivation of the FZZ duality (see any of the references above, or consult \cite{Nakayama:2004vk} for a brief summary of the related discussions). Instead, we will see some physical consequences of the duality in the remaining part of this section. Let us begin with the comparison between the classical two-point function of the $SL(2;\mathbb{R})/U(1)$ coset CFT from the mini-superspace approximation and the exact one.
The mini-superspace result (see \eqref{cref amp} and \eqref{ref dual}) is \begin{align} {\cal R}_0(j,m,\bar{m}) = \frac{\Gamma(2j+1)\Gamma(-j+m)\Gamma(-j-\bar{m})}{\Gamma(-2j-1) \Gamma(j+1+m)\Gamma(j+1-\bar{m})}, \end{align} where \begin{align} m=\frac{kw+n}{2}~, \qquad \bar{m} = \frac{kw-n}{2}~, \nonumber \end{align} while the exact result is \begin{align} {\cal R}(j,m,\bar{m}) &= \nu(k)^{-2j-1}\, \frac{\Gamma(1+\frac{2j+1}{k})}{\Gamma(1-\frac{2j+1}{k})} \frac{\Gamma(2j+1)\Gamma(-j+m)\Gamma(-j-\bar{m})}{\Gamma(-2j-1) \Gamma(j+1+m)\Gamma(j+1-\bar{m})}, \cr \nu(k) &\equiv \frac{1}{\pi}\frac{\Gamma(1-\frac{1}{k})} {\Gamma(1+\frac{1}{k})}~. \label{strt} \end{align} The effects of the winding tachyon condensation can be seen in the $1/k$-suppressed factor $\frac{\Gamma(1+\frac{2j+1}{k})}{\Gamma(1-\frac{2j+1}{k})}$ in the exact formula. As is well-known in Liouville field theory, the poles in the correlation function appear when the screening interaction coming from the $\mathcal{N}=2$ superpotential $W= \mu e^{\frac{1}{Q}\Phi}$ satisfies the screening condition for the Liouville momentum. Indeed, the perturbative Liouville insertion predicts poles in the two-point functions exactly as indicated by the factor $\frac{\Gamma(1+\frac{2j+1}{k})}{\Gamma(1-\frac{2j+1}{k})}$. Another important aspect of the FZZ duality is that it has provided a perspective on the winding number non-conservation process. In the $SL(2;\mathbb{R})^{(A)}/U(1)$ axial coset model, one can define an asymptotic winding quantum number $\omega$. However, since the cigar geometry has a trivial fundamental group, the winding number is not a conserved quantity. In the free field construction of the $SL(2;\mathbb{R})^{(A)}/U(1)$ coset model (such as the one based on the Wakimoto construction of the $SL(2;\mathbb{R})$ current algebra), it is difficult to compute the winding number violating correlation functions.
Indeed this was the first motivation of FZZ to propose the dual description.\footnote{At the same time, FZZ also gave an ingenious way to compute the winding violating correlation functions within the $SL(2;\mathbb{R})/U(1)$ coset model by introducing the dual operators.} The situation is worse in the naively T-dualized trumpet geometry. In the trumpet metric, it appears that we have a $U(1)$ isometry along $\tilde{\theta}$, the dual coordinate for $\theta$, suggesting that the momentum quantum number as well as the winding quantum number are well-defined conserved quantities. The non-conservation of the winding number (or of the momentum mode in the T-dual picture) is quite obscure here: the origin of the winding non-conservation, i.e. the fixed point of the $U(1)$ action, has now become the singularity of the target space.\footnote{It is well-known that when we gauge the axial symmetry, the vector current has an anomaly and vice versa, and this is indeed the origin of this apparent paradox. By the same token, the $U(1)$ isometry of the vector coset is broken down to $\mathbb{Z}_k$. We should therefore be careful when we talk about the ``T-duality'' of the trumpet.} The resolution of this puzzle is given by the FZZ duality. In the T-dualized picture, the singularity is removed by the tachyon condensation, or the $\mathcal{N}=2$ Liouville superpotential. At the same time, the $\mathcal{N}=2$ superpotential explicitly breaks the translation invariance along the $\theta$ direction, which is the origin of the momentum non-conservation in the T-dual picture. Actually, the explicit breaking of the momentum conservation is quite useful for computing the winding number violating process in the $SL(2;\mathbb{R})/U(1)$ coset model: by a direct insertion of the $\mathcal{N}=2$ Liouville superpotential, the winding number violating process can be computed perturbatively.
We end this section with three remarks. \begin{itemize} \item The supersymmetric $SL(2;\mathbb{R})/U(1)$ coset model has a conserved $U(1)_R$ current. By taking a quotient of the theory by this $U(1)_R$ current, we obtain the duality between the bosonic $SL(2;\mathbb{R})/U(1)$ coset model and the sine-Liouville theory \cite{Karczmarek:2004bw}. The sine-Liouville theory has the potential \begin{align} V(\phi,Y) &= \mu (S^+ + S^-) \cr S^{\pm} &= e^{-\frac{1}{{\cal Q}}(\phi \pm \sqrt{1+{\cal Q}^2}iY)} \equiv e^{-\sqrt{\frac{\kappa-2}{2}} \phi \mp \sqrt{\frac{\kappa}{2}} iY}~, ~~~ ({\cal Q}=\sqrt{2/(\kappa-2)})~. \end{align} As a side remark, we note that the potential preserves the $W_{\infty}$ symmetry, which makes the model integrable \cite{Baseilhac:1998eq,Lukyanov:2003nj}. In their original work, FZZ proposed the duality between the bosonic $SL(2;\mathbb{R})/U(1)$ coset model and the bosonic sine-Liouville theory. \item There is a small controversial issue in the interpretation of the FZZ duality. Our standpoint has been that the dual description of the cigar geometry is given by the $\mathcal{N}=2$ Liouville theory, and the $\mathcal{N}=2$ Liouville superpotential does not appear in the original cigar geometry explicitly (otherwise the source of the winding number non-conservation would be two-fold). The other common interpretation of the FZZ duality is that the winding tachyon condensation (the $\mathcal{N}=2$ Liouville superpotential written in the dual coordinate) also appears in the original cigar geometry. This interpretation is natural in the sense that it gives a natural explanation of the coexistence of the poles coming from the geometry part and the Liouville insertion part. Whichever interpretation one may take, we believe that what we call the supersymmetric $SL(2;\mathbb{R})/U(1)$ theory and the $\mathcal{N}=2$ Liouville theory are identical, and the structure constants, e.g. the two-point function, are uniquely given by formulae like \eqref{strt}.
\item So far, we have focused on the Euclidean $SL(2;\mathbb{R})/U(1)$ coset model. However, things are less clear in the Lorentzian $SL(2;\mathbb{R})/U(1)$ coset model, where the dual $\mathcal{N}=2$ Liouville theory is unavailable. A naive analytic continuation of the $\mathcal{N}=2$ superpotential gives a wrong Liouville wall, localized near the weakly coupled region \cite{Hikida:2004mp}. Furthermore, the Lorentzian coset does not have a winding mode, so the interpretation in terms of the winding tachyon condensation is not evident. Nevertheless, we believe that the exact structure constants are given by the analytic continuation of the exact results for the Euclidean coset, since the analytic continuation correctly reproduces the mini-superspace part. A clear explanation of the origin of the extra poles in the Lorentzian coset is still an open question.\footnote{The origin might be given by the degrees of freedom near the horizon which we mentioned in section \ref{sec:3-2-3}. A related interpretation based on the idea of the stretched horizon has been given in \cite{Kutasov:2005rr}.} \end{itemize} \newpage \sectiono{Black Hole - String Transition}\label{sec:4} In this section, we review the ``black hole - string transition". The transition is believed to be a fundamental property of the quantum black hole in the non-BPS regime. We will also see that the transition is related to the thermal winding tachyon condensation. The organization of the section is as follows. In section \ref{sec:4-1}, we formulate the ``black hole - string transition" in general dimensions. In section \ref{sec:4-2}, we specialize to the two-dimensional case, where an $\alpha'$-exact treatment is possible. In section \ref{sec:4-3}, we briefly summarize the current status of the black hole - string transition in other solvable backgrounds. \subsection{In general dimensions}\label{sec:4-1} One of the most profound results in (semi-)classical gravity is the thermodynamics of the black hole. 
Thus one of the most significant benchmarks of any theory of quantum gravity is to provide a satisfactory understanding of the thermodynamics of the black hole. In particular, understanding the black hole entropy from the microscopic viewpoint has been one of the greatest achievements of string theory as a quantum theory of gravity \cite{Strominger:1996sh}. Let us consider the Schwarzschild black hole in string theory as the simplest example of a non-extremal black hole system.\footnote{The exact quantization of the string in the Schwarzschild black hole is not known. However, since one can make the curvature of the Schwarzschild black hole arbitrarily small outside the horizon, it is natural to assume the existence of string solutions asymptotically given by the Schwarzschild black hole. The existence of the $SL(2;\mathbb{R})/U(1)$ two-dimensional black hole strongly supports this assumption.} The Schwarzschild black hole in any dimension is completely determined by the parameter $r_h$ that sets the horizon size. When $r_h \gg l_s $, the classical supergravity description is good (at least outside of the horizon), and we can trust the effective supergravity action to discuss the properties of the black hole. If one gradually decreases the horizon size $r_h$, the effects of higher derivative corrections coming from the underlying quantum gravity become important. Within superstring theory, some higher derivative corrections are known, and it has been shown that these corrections beautifully explain the apparent mismatch between the macroscopic derivation of the small charge BPS black hole entropy and the microscopic derivation from string theory (see e.g. \cite{Dabholkar:2004yr}). 
In the non-extremal cases we are discussing now, we have not yet completely grasped the structure of the higher derivative corrections or achieved a quantitative match of the black hole entropy, but the guiding principle is summarized by the so-called ``black hole - string transition" or ``black hole - string crossover" introduced in \cite{'tHooft:1987tz,Holzhey:1991bx,Susskind:1993ws,Horowitz:1996nw,Sen:1995in}. When $r_h \le l_s$, the geometrical description of the black hole breaks down and it should be replaced with a microscopic description based on quantum strings. This is natural because string theory has a natural cutoff given by the string length $l_s$, and objects smaller than $l_s$ do not possess an ordinary geometrical meaning. The principle of the ``black hole - string transition" is that the black hole can be understood either as a highly excited string or as a classical solution of the (higher derivative) gravity. In particular, the crossover is parametrically smooth as a function of the coupling constant $g_s$ and the string length $l_s$. In the Schwarzschild black hole example, we can roughly estimate the ``black hole - string crossover" point as follows. Let us consider the four-dimensional Schwarzschild black hole for definiteness. The four-dimensional Newton constant $G$ is given by $G \sim g_s^2 l_s^2$, so the Schwarzschild radius of the string is estimated as $r_{0} = m_{\rm str} G \sim m_{\rm str} g_s^2 l_s^2$ with the mass of the excited string $m^2_{\rm str} \sim \frac{n}{l_s^2}$, where $n$ denotes the oscillator level. At the black hole - string transition point $r_0 \sim l_s$, we have \begin{align} \frac{l_s^2}{G} \sim \frac{1}{g_s^2} \sim \sqrt{n} \ . \end{align} Thus, the classical Bekenstein entropy is given by $S_{Bek} \sim \frac{r_0^2}{G} \sim \frac{l_s^2}{G} \sim \sqrt{n}$, which indeed agrees with the entropy of the perturbative string expected from the Cardy formula up to a numerical factor. 
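To spell out the last step (a schematic estimate in string units), the Cardy growth of states $\rho(M) \sim e^{\beta_{\rm Hg}M}$ with $\beta_{\rm Hg} \sim l_s$ gives the free-string entropy
\begin{align}
S_{\rm str} \sim \beta_{\rm Hg}\, m_{\rm str} \sim l_s \cdot \frac{\sqrt{n}}{l_s} \sim \sqrt{n} \ ,
\end{align}
which indeed matches $S_{Bek} \sim \sqrt{n}$ at the crossover point, up to a numerical factor.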
Alternatively, one can say that the requirement of a smooth matching of the entropy demands that $r_0 \sim l_s$ should be the ``black hole - string transition" point. Another important concept associated with the $\alpha'$ corrections to the geometry is the stretched horizon \cite{Susskind:1993aa,Susskind:1993ws,Kutasov:2005rr}. We can formulate the stretched horizon based on the local temperature of the geometry. As is the case with the two-dimensional black hole, any neutral black hole has an intrinsic temperature determined by the periodicity of the Euclidean time (the Hawking temperature). From the Lorentzian viewpoint, this temperature is the one measured by the observer at spatial infinity. For an observer at a fixed proper distance $R$ from the horizon, the Hawking radiation is observed at a much higher temperature \begin{align} T_{u}(R) = \frac{T_{\rm Hw}}{\sqrt{g_{00}(R)}} \ , \label{uhw} \end{align} due to the gravitational red-shift. On the other hand, string theory has a ``highest temperature" determined by the Hagedorn temperature. Since the number of perturbative string states grows exponentially as a function of energy (mass), \begin{align} Z(\beta) = \mathrm{Tr}\, e^{-\beta E} \sim \int \mbox{d} M \rho(M) e^{-\beta M} \ , \label{hgr} \end{align} with the density of states given by $\rho(M) \sim e^{\beta_{\rm Hg} M} $, the partition function of the perturbative string theory is ill-defined beyond the Hagedorn temperature, i.e. for $\beta < \beta_{\rm Hg}$. There we expect that the string interactions become much more important and the free-string picture breaks down. Now let us return to the Hawking radiation. From \eqref{uhw}, one can see that the local temperature becomes infinite at the classical horizon. Therefore, before reaching the event horizon we encounter the radius where the local temperature exceeds the Hagedorn temperature. The local Hagedorn transition blurs the local geometry near the black hole horizon. 
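As a schematic check of this statement, take the four-dimensional Schwarzschild case with $g_{00} = 1 - r_0/r$ and $T_{\rm Hw} = 1/(4\pi r_0)$. Near the horizon, the proper distance is $R \simeq 2\sqrt{r_0(r-r_0)}$, so \eqref{uhw} reduces to the universal Rindler form
\begin{align}
T_u(R) \simeq \frac{1}{4\pi r_0}\cdot \frac{2r_0}{R} = \frac{1}{2\pi R} \ .
\end{align}
Setting $T_u(R_s) = T_{\rm Hg}$ then locates the blurred region at the proper distance $R_s = \beta_{\rm Hg}/2\pi$ from the horizon, which is of order the string length.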
This is what we call the stretched horizon. Note that we can make the curvature at the horizon arbitrarily small, and in this regime the size of the stretched horizon is of order one in string units. It is interesting to consider some extreme limits of the above discussion. The first example is the large $T_{\rm Hw}$ limit: what happens if the Hawking temperature at asymptotic infinity is larger than $T_{\rm Hg}$? We expect that the stretched horizon completely blurs the black hole geometry. Indeed, in the leading-order estimate for the four-dimensional Schwarzschild black hole, we have $\beta_{\rm Hw} \sim r_0$ and $\beta_{\rm Hg} = \text{const}$, and such a scenario occurs when $r_0 \sim l_s$. It is interesting to note that this condition roughly coincides with that for the ``black hole - string transition". In section \ref{sec:4-2}, we will see that this coincidence becomes exact (after taking $\alpha'$ corrections into account) in the two-dimensional black hole, which is an exactly solvable string background. Another limit is the (extremal) charged black hole solution. In the charged black hole examples, the above discussion based on the Hagedorn temperature and the Hawking temperature should be generalized. This is because, as pointed out in \cite{Horowitz:1996nw}, we can arbitrarily lower the Hawking temperature while keeping possible $\alpha'$ corrections large. In other words, one can make the transition temperature arbitrarily lower than the Hagedorn temperature. The most extreme case is the (BPS) extremal black hole, where the Hawking temperature is zero. The generalization proposed in \cite{Giveon:2005jv} states that the ``black hole - string transition" occurs when the Hawking temperature coincides with the temperature of the free string with the same mass {\it and} charge. We will briefly review their discussion later in section \ref{sec:4-3}. It is also instructive to recapitulate the problem from the Euclidean approach. 
In flat Minkowski space, the Hagedorn divergence of the partition function can be attributed to the thermal winding tachyon condensation \cite{Polchinski:1985zf,Sathiapalan:1986db,Kogan:1987jd,O'Brien:1987pn,Atick:1988si}. We begin with the more precise version of \eqref{hgr}: \begin{align} \beta F &= \mathrm{Tr}_{\mathrm{phys}} \log (1- e^{-\beta E}) \cr &= \int_{0}^{\infty}\mbox{d}\tau_2 \int_{-1/2}^{1/2} \mbox{d}\tau_1 \frac{1}{\tau_2} \mathrm{Tr}_{\mathrm{CFT}} q^{L_0}\bar{q}^{\bar{L}_0} \ , \label{hgg} \end{align} where $q = e^{2\pi i \tau}$ with $\tau = \tau_1 + i\tau_2$. In the second line, we have introduced the Schwinger parameter $\tau_2$ and imposed the level matching condition by \begin{align} \int_{-1/2}^{1/2} \mbox{d}\tau_1 e^{2\pi i \tau_1 (L_0-\bar{L}_0)} \ . \end{align} The trace is taken over the original space-like CFT together with an additional free $\mathbb{S}^1$ CFT of circumference $\beta$ {\it restricted to the momentum modes}. The Hagedorn divergence appears in the ultraviolet region $\tau_2 \to 0$. Now let us use Polchinski's trick \cite{Polchinski:1985zf} to rewrite the thermal partition function \eqref{hgg} as the string 1-loop partition function \begin{align} \beta F &= \int_{\mathcal{F}}\frac{\mbox{d}^2\tau}{\tau_2} \mathrm{Tr}_{\mathrm{CFT}\times \mathbb{S}^1} q^{L_0}\bar{q}^{\bar{L}_0} \ , \end{align} where $\mathcal{F}$ is the fundamental domain of the torus moduli space, and the trace is now taken over the original CFT together with the free $\mathbb{S}^1$ CFT {\it including winding modes}. The Hagedorn divergence is now translated into the IR instability $\tau_2 \to \infty$. Apart from the ground state tachyon, which should be GSO-projected out in the supersymmetric theory, a possible instability comes from the thermal winding tachyon, whose mass is given schematically by \begin{align} m(\beta)^2 = -1 + \beta^2 \ . \end{align} When $m(\beta)^2 < 0$, the Hagedorn instability occurs. In this way, we can understand the Hagedorn divergence as the appearance of the winding tachyon in the Euclideanized thermal string theory. 
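For orientation, we record the precise flat-space version of this schematic mass formula. For the bosonic string, the $w=\pm 1$, $n=0$ mode on the thermal circle of circumference $\beta$ has
\begin{align}
m(\beta)^2 = \left(\frac{\beta}{2\pi\alpha'}\right)^2 - \frac{4}{\alpha'} \ ,
\end{align}
which becomes tachyonic precisely for $\beta < \beta_{\rm Hg} = 4\pi\sqrt{\alpha'}$. In the type II string, the GSO projection retains this mode in the odd winding sector with $m(\beta)^2 = \left(\frac{\beta}{2\pi\alpha'}\right)^2 - \frac{2}{\alpha'}$, reproducing $\beta_{\rm Hg} = \pi\sqrt{8\alpha'}$.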
The argument above suggests that when the thermal circle shrinks enough to admit a ``winding tachyon" in the Euclidean spectrum, the Hagedorn phase transition occurs. Assuming a semiclassical quantization of strings in the Schwarzschild black hole, a similar situation occurs in the thermal string theory in the Euclidean Schwarzschild black hole background. The thermal winding tachyon has an effective mass \begin{align} m^2(r) = -1 + r_0^2\left(1-\frac{r_0}{r}\right) \ . \end{align} At the radii where $m^2(r)$ becomes negative, the black hole develops a stretched horizon, and when $m^2(\infty) < 0$, we expect the ``black hole - string" phase transition. We will see later that the winding tachyon is crucial in the two-dimensional black hole and its exact ``black hole - string" phase transition. Recently, Horowitz \cite{Horowitz:2005vp} studied the real-time winding tachyon condensation in the black hole system. If one considers a compactified black string solution, the extra dimension can develop a winding tachyon condensation as the direction shrinks toward the black hole singularity. After the winding tachyon condensation, the black hole evaporates into a bubble of nothing. This process has been proposed as a new interesting end point of the Hawking black hole evaporation (see \cite{Ross:2005ms,Bergman:2005qf,Horowitz:2006mr,Dine:2006we} for related studies). The winding tachyon condensation could also provide a solution to the cosmological singularity problem, as studied in \cite{McGreevy:2005ci,Nakayama:2006gt}. \subsection{Two-dimensional black hole case}\label{sec:4-2} To discuss the ``black hole - string transition" introduced in section \ref{sec:4-1} in a more quantitative manner, it is imperative to study an exact string background rather than the approximate Schwarzschild black hole solution. 
In particular, the arguments related to the (thermal) winding tachyon condensation are rather speculative, and a demonstration based on an exactly solvable string background would be highly desirable. As we have seen in section \ref{sec:2}, the simplest exactly solvable (non-BPS) black hole background is the two-dimensional black hole. In this subsection, we specialize to the ``black hole - string transition" in the two-dimensional black hole.\footnote{Of course, what we mean by the ``two-dimensional black hole" includes its embedding into superstring theory, such as the black NS5-brane background, so our results have a direct application to the ten-dimensional critical string theories.} Let us consider the supersymmetric $SL(2;\mathbb{R})/U(1)$ coset model. As we discussed in section \ref{sec:2-4}, the two-dimensional black hole has the Hawking temperature \begin{align} T_{\rm Hw} = \frac{1}{\beta_{\rm Hw}} = \frac{1}{2\pi\sqrt{\alpha'k}} \ . \end{align} Since the two-dimensional black hole is asymptotically a linear dilaton theory, the Hagedorn temperature receives a $1/k$-corrected shift compared with the flat Minkowski theory\footnote{When we mention the Hagedorn temperature of the two-dimensional black hole, we always assume that the criticality condition of the string theory is satisfied by adding {\it non-dilatonic} CFTs. The NS5-brane background is a typical example.}: \begin{align} T_{\rm Hg} = \frac{1}{\beta_{\rm Hg}} = \frac{1}{4\pi\sqrt{1-\frac{1}{2k}}} \ . \label{Hg} \end{align} To derive this formula, one should first note that the $SL(2;\mathbb{R})/U(1)$ coset model has a mismatch between the genuine central charge $c^{SL(2;\mathbb{R})/U(1)} = 3 + \frac{6}{k}$ and the effective central charge $c_{\mathrm{eff}}^{SL(2;\mathbb{R})/U(1)} = 3$ due to the asymptotic linear dilaton.\footnote{We can also see this directly from the one-loop partition function and the spectrum. 
See section \ref{sec:3} and appendix A.} Therefore, the total theory has a deficit effective central charge $c_{\mathrm{eff}}^{\mathrm total} = 12 - \frac{6}{k}$ after the subtraction of the ghost contribution. Now we recall the Cardy formula: \begin{align} \rho(M) \sim \exp\left(2\pi\sqrt{\frac{c_{\mathrm{eff}}}{12}}M + 2\pi \sqrt{\frac{\bar{c}_{\mathrm {eff}}}{12}}M \right) \ , \end{align} which immediately gives the Hagedorn temperature \eqref{Hg}. For later reference, we present here the analogous formulae for the bosonic two-dimensional black hole. The Hawking temperature and the Hagedorn temperature are given by \begin{align} T_{\rm Hw} = \frac{1}{\beta_{\rm Hw}} = \frac{1}{2\pi\sqrt{\alpha'\kappa}} \ , \label{hwb} \end{align} and \begin{align} T_{\rm Hg} =\frac{1}{\beta_{\rm Hg}} = \frac{1}{4\pi\sqrt{2-\frac{1}{2(\kappa-2)}}} \ . \label{Hgb} \end{align} There is no apparent reason to exclude possible $\alpha'$ corrections to the Hawking temperature in the bosonic string theory, but the exact string quantization reveals that the formula \eqref{hwb} is the correct one.\footnote{In general, the Hawking temperature is classically determined solely from the information near the event horizon (the Rindler limit), where the curvature and the $\alpha'$ corrections could become large. The effects of such $\alpha'$ corrections and a possible renormalization of the Hawking temperature are interesting subjects to study.} We will return to this problem when we discuss the exact boundary states of the probe rolling D-brane in section \ref{sec:8}. On the other hand, the Hagedorn temperature here is obtained from the exact string quantization and is trustworthy. From the general discussion in section \ref{sec:4-1}, we expect the ``black hole - string" transition at $k=1$ (or $\kappa=3$ for the bosonic case), when the Hawking temperature and the Hagedorn temperature coincide. At this point, the stretched horizon becomes so large that it swallows the complete space-time. 
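As a consistency check, note that the formulae above are written in units where $\alpha'=2$, as can be seen from their flat-space limits $k,\kappa \to \infty$. Equating the two temperatures in the supersymmetric case then gives
\begin{align}
2\pi\sqrt{2k} = 4\pi\sqrt{1-\frac{1}{2k}} \ \ \Leftrightarrow \ \ k^2 - 2k + 1 = (k-1)^2 = 0 \ ,
\end{align}
so the Hawking and Hagedorn temperatures coincide only at $k=1$, where they touch with a double root. The same computation with \eqref{hwb} and \eqref{Hgb} in the bosonic case gives $(\kappa-3)^2 = 0$, i.e. $\kappa=3$.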
This ``black hole - string transition" in the two-dimensional black hole is induced by strong $\alpha'$ corrections: when $k$ is large (recall that the $1/k$ corrections correspond to $\alpha'$ corrections), the Hawking temperature is much lower than the Hagedorn temperature, and the geometry is not disturbed by the back-reaction of the Hawking radiation. When $k$ becomes smaller, the $1/k$ corrections become more and more important, and at the phase transition point, i.e. at $k=1$, the physics changes drastically. One of the main focuses of this thesis is to study this transition from the rolling D-brane probe. At this point, we would like to point out that the ``black hole - string" phase transition of the two-dimensional black hole does not involve the string coupling $g_s$. This is one of the virtues of the two-dimensional black hole: we can clearly separate the (typically more difficult) problem of the genus expansion from the more tractable $\alpha'$ corrections in order to understand the ``black hole - string transition". What is the origin of the strong $1/k$ correction? As we have mentioned earlier in section \ref{sec:3-1}, the metric of the supersymmetric two-dimensional black hole does not receive perturbative $1/k$ corrections. The origin of the (nonperturbative) $1/k$ corrections that trigger the ``black hole - string transition" is most clearly seen in the Wick-rotated Euclidean two-dimensional black hole, for which the dual description is available. In the dual description, the two-dimensional black hole is described by the $\mathcal{N}=2$ Liouville theory. The $\mathcal{N}=2$ superpotential \begin{align} W(\Phi) = \mu \int d^2\theta e^{\frac{1}{Q}\Phi} \label{Liousup} \end{align} can be seen as the localized (winding) tachyon condensation.\footnote{The duality between the $SL(2;\mathbb{R})^{(A)}/U(1)$ coset and the $\mathcal{N}=2$ Liouville theory is a kind of T-duality as we discussed in section \ref{sec:3-4}. 
Thus the condensation of the momentum mode in the $\mathcal{N}=2$ Liouville theory can be regarded as the condensation of the winding mode in the original $SL(2;\mathbb{R})^{(A)}/U(1)$ coset model.} The condensation is a localized mode because the Liouville momentum $j$ corresponding to the superpotential \eqref{Liousup} is not in the continuous series $j=-\frac{1}{2}+ip$, but lies in the discrete series. The crucial observation was already made in \cite{Kutasov:1990ua} in the context of the noncritical superstring theory. The spirit is close to the discussions given in sections \ref{sec:2-3} and \ref{sec:3-4}. The superpotential \eqref{Liousup} is a normalizable perturbation if $\frac{1}{Q}>\frac{Q}{2}$ (i.e. $k >1$) and a non-normalizable deformation otherwise. In the language of the noncritical string theory, the $\mathcal{N}=2$ super Liouville potential satisfies the Seiberg bound \cite{Seiberg:1990eb} only when $\frac{1}{Q}<\frac{Q}{2}$ holds. This directly means that the $\mathcal{N}=2$ Liouville description is good for $k<1$ and the two-dimensional black hole description is good for $k>1$. The transition point is exactly at $k=1$.\footnote{Another interesting observation related to the $k=1$ transition is the following. If we consider a two-dimensional $U(1)$ gauge theory in the ultraviolet that flows to the $SL(2;\mathbb{R})/U(1)$ coset theory in the infrared (as was introduced in \cite{Hori:2001ax} to prove the mirror duality to the ${\cal N}=2$ Liouville theory), the central charge of the $U(1)$ gauge theory is given by $9$. Since the IR $SL(2;\mathbb{R})/U(1)$ coset theory has a central charge $c=3(1+\frac{2}{k})$, there is an apparent contradiction with Zamolodchikov's $c$-theorem if the level $k<1$ is considered. However, we should note that the $SL(2;\mathbb{R})/U(1)$ coset theory is dilatonic, so that the effective central charge is always given by $3$.} We can repeat the same analysis for the bosonic $SL(2;\mathbb{R})/U(1)$ coset. 
The duality between the bosonic $SL(2;\mathbb{R})/U(1)$ coset and the sine-Liouville theory, together with the Seiberg bound, leads to the conclusion that $\kappa=3$ is the phase transition point. The potential is given by \begin{align} V = \mu (S^+ + S^-) \ , \ \ \ S^{\pm} = e^{-\frac{1}{{\cal Q}}(\phi \pm \sqrt{1+{\cal Q}^2}iY)} \equiv e^{-\sqrt{\frac{\kappa-2}{2}} \phi \mp \sqrt{\frac{\kappa}{2}} iY}~, ~~~ ({\cal Q}=\sqrt{2/(\kappa-2)})~, \end{align} and the normalizability changes precisely at $\kappa = 3$. Assuming that this occurs when the Hawking temperature and the Hagedorn temperature coincide, we have verified that the Hawking temperature of the bosonic two-dimensional black hole is not renormalized. We will see further support from the probe rolling D-brane later in section \ref{sec:8}. We can also argue for this transition without using the dual Liouville picture \cite{Karczmarek:2004bw}. The black hole perturbation descends from the $SL(2;\mathbb{R})$ states \begin{align} J_{-1}^+ \bar{J}_{-1}^+|j=-1; m = \bar{m} = -1 \rangle \ . \end{align} The normalizability of such states (see section \ref{sec:3-2-2}) demands \begin{align} -\frac{1+k}{2} < j < -\frac{1}{2} \ \end{align} with $j=-1$; the left inequality then requires $k>1$, which suggests the same phase transition point $k=1$ (or $\kappa=3$). The situation in the Lorentzian two-dimensional black hole is less clear. We cannot naively Wick-rotate the winding tachyon potential \eqref{Liousup} because the time is continuous and there is no apparent winding mode in the Lorentzian two-dimensional black hole. The same can be said of the Hagedorn instability of the free string theory in flat Minkowski space: the existence of the thermal winding tachyon in the Wick-rotated theory does not imply a tachyonic instability in the real-time physics. Rather, it should be understood as the phase transition associated with the thermal dissolution of strings. 
At temperatures beyond the Hagedorn point, there would be no distinction between the gas of strings and the black hole. \subsection{Other solvable backgrounds}\label{sec:4-3} There are many other exactly solvable string theory backgrounds that exhibit the ``black hole - string transition". Most of the examples are more or less related to the $SL(2;\mathbb{R})$ WZNW model. In this subsection, we briefly review the transition in such backgrounds. The black hole - string transition across $k=1$ also has a natural interpretation in terms of the holographic principle, as recently discussed in \cite{Giveon:2005mi}. Adding $Q_1$ fundamental strings to $k$ NS5-branes (more generally, Calabi-Yau singularities) as we reviewed in section \ref{sec:2-4}, one obtains the familiar bulk geometry of the $AdS_3/CFT_2$ duality. In this context, the density of states of the dual conformal field theory is given by the naive Cardy formula $S=2\pi\sqrt{\frac{cL_0}{6}}+2\pi\sqrt{\frac{\bar{c}\bar{L}_0}{6}}$ with $c = 6 k Q_1$ for $k>1$, but not for $k<1$. Rather, the central charge that should be used in the Cardy formula is replaced by an effective one, $c_{\rm eff}= 6Q_1(2-\frac{1}{k})$ \cite{Kutasov:1990ua}. The origin of the difference between $c_{\mathrm{eff}}$ and $c$ is again the normalizability of a certain operator. The $SL(2;\mathbb{C})$-invariant vacuum of the dual CFT corresponds to the states \begin{align} J_{-1}^+ \bar{J}_{-1}^+|j=-1; m = \bar{m} = -1 \rangle \ \end{align} in the world-sheet $SL(2;\mathbb{R})$ WZNW model, and as we have seen several times, for $k>1$ the operator is normalizable, and $c_{\mathrm{eff}} = c$. On the other hand, for $k<1$ the operator is non-normalizable, and we expect $c_{\mathrm{eff}} <c$. A short computation based on the string description gives $c_{\rm eff}= 6Q_1(2-\frac{1}{k})$. 
We note that for $k>1$, the BTZ black hole excitation is normalizable and the partition function and the entropy are dominated by the Bekenstein-Hawking entropy of the BTZ black hole, while for $k<1$, the BTZ black hole excitation is non-normalizable and the entropy is solely explained by the string excitations. This argument is in complete agreement with the ``black hole - string transition" picture at $k=1$. Another interesting generalization is the two-dimensional charged black hole. We consider the asymmetric coset \begin{align} \frac{SL(2;\mathbb{R})_k\times U(1)_L}{U(1)} \ , \label{chtbk} \end{align} where the $U(1)$ gauging acts on one of the (space-like) left-moving currents in $SL(2;\mathbb{R})$ and on a linear combination of the right-moving currents of $SL(2;\mathbb{R})$ and $U(1)_L$. After the Kaluza-Klein reduction, the geometry of \eqref{chtbk} is described by the metric ($Q^2 = \frac{2}{k}$) \begin{align} \mbox{d} s^2 = \mbox{d}\phi^2 - \left(\frac{\tanh\frac{Q}{2}\phi}{1-a^2\tanh^2\frac{Q}{2}\phi}\right)^2 \mbox{d}\theta^2 \ , \end{align} the dilaton \begin{align} \Phi = \Phi_0 - \frac{1}{2}\log\left(1+(1-a^2)\sinh^2\frac{Q}{2}\phi\right) \ , \end{align} and the gauge field \begin{align} A = \frac{a\tanh^2\frac{Q}{2}\phi}{1-a^2\tanh^2\frac{Q}{2}\phi} \mbox{d}\theta \ . \end{align} Here $a^2$ is related to the mass $m$ and the charge $q$ of the black hole as \begin{align} a^2 = \frac{m-\sqrt{m^2-q^2}}{m+\sqrt{m^2-q^2}} \ . \end{align} At $a = 0$, the model reduces to the undeformed $SL(2;\mathbb{R})/U(1)$ black hole (and a compact boson). The Hawking temperature of the black hole (read off e.g. from the Euclidean geometry) is given by \begin{align} \beta_{\rm Hw} = \frac{4\pi}{Q} \frac{1}{1-a^2} \ . \end{align} On the other hand, the Hagedorn temperature is given by \begin{align} T_{\rm Hg} =\frac{1}{\beta_{\rm Hg}} = \frac{1}{4\pi\sqrt{1-\frac{1}{2k}}} \ , \end{align} irrespective of the deformation parameter $a$. 
From the world-sheet perspective, the ``black hole - string transition" of the charged two-dimensional black hole is inherited from the $SL(2;\mathbb{R})$ WZNW model, and the transition point should be $k=1$. This is different from the naive guess based on the relation $\beta_{\rm Hw} = \beta_{\rm Hg}$. The resolution proposed in \cite{Giveon:2005jv} is that the more precise definition of the transition temperature is the point where the Hawking temperature coincides with the temperature of the string that has the same mass and charge as the black hole. In this example, the entropy of the string with charge $q$ is given by\footnote{The shift is due to the $q$ units of right-moving $U(1)$ charge: we are summing over the string states with fixed $U(1)$ charge $q$ instead of summing over all states.} \begin{align} S = 2\pi \sqrt{1-\frac{Q^2}{4}} \left(m+\sqrt{m^2-q^2}\right) \end{align} resulting in the corresponding string temperature \begin{align} \beta_{\rm str} = \left.\frac{\partial S}{\partial m} \right|_{q} = \sqrt{1-\frac{Q^2}{4}}\frac{4\pi}{1-a^2} \ . \end{align} It is easy to see that the condition $\beta_{\rm str} = \beta_{\rm Hw}$ exactly reproduces the CFT result: it requires $\sqrt{1-Q^2/4} = 1/Q$, i.e. $(Q^2-2)^2 = 0$, and hence $Q^2=2$, or $k=1$. \newpage \sectiono{Tachyon Radion Correspondence}\label{sec:5} In this section, we review the tachyon radion correspondence, which is one of the greatest motivations to study the rolling D-brane in the two-dimensional black hole system. The correspondence says that the dynamics of the open-string tachyon condensation may be geometrically realized by the rolling D-brane system. The organization of the section is as follows. In section \ref{sec:5-1}, we give an overview of the rolling tachyon problem. In section \ref{sec:5-2}, we study the closed string emission rate from the rolling tachyon boundary states and their variations.\footnote{This part of the thesis is based on \cite{Nakayama:2006qm}.} In section \ref{sec:5-3}, we study the correspondence at the classical level. 
In section \ref{sec:5-4}, we summarize our results on the quantum correspondence. In section \ref{sec:5-5}, some cosmological implications are studied. \subsection{Rolling tachyon}\label{sec:5-1} \subsubsection{Overview}\label{sec:5-1-1} In the early days of string theory, the tachyon used to be thought of as a nuisance in constructing realistic models for the particle physics of our world. In recent years, open-string tachyons have obtained civil rights and have played more and more important roles in advancing our knowledge of nonperturbative D-brane physics with (spontaneously) broken SUSY. In addition, they have even provided phenomenological applications such as brane inflation. More recently, closed string tachyons (especially localized winding tachyons) have attracted much attention in relation to topology change \cite{Adams:2001sv,Adams:2005rb} and the resolution of singularities \cite{McGreevy:2005ci}. One of the important steps in understanding the physics of unstable D-branes is Sen's conjecture and its subsequent developments (see \cite{Sen:2004nf} for a review), which states that the decay process of unstable D-branes can be regarded as open string tachyon condensation. In particular, the energy difference between the false (perturbative) tachyonic vacuum and the true vacuum of the open string tachyon potential should exactly account for the tension of the decaying D-brane. Furthermore, the cohomology of the open string theory at the true vacuum must vanish. In the context of open string field theory, these conjectures have been analytically proved in \cite{Schnabl:2005gv,Ellwood:2006ba}. A more interesting aspect of the tachyon dynamics is its time evolution \cite{Sen:2002nu,Sen:2002in,Sen:2002an}. 
Based on the effective field theory analysis (later confirmed by the exact boundary state analysis), it was found that the late time evolution of the open-string tachyon gives rise to the so-called ``tachyon matter", which is a pressureless fluid. Such a ``rolling tachyon" evolution has provided us with a novel understanding of the tachyon condensation and of time-dependent physics in string theory. The feasibility of constructing the exact boundary states enables us to study the highly non-supersymmetric time evolution in a quantitative way. Let us begin with the effective DBI-type action for the rolling tachyon \begin{align} S = - \int \mbox{d} t V(T)\sqrt{1-\dot{T}^2} \ . \label{tDBI} \end{align} Since we will focus on the homogeneous decay, we have assumed the D0-brane action without loss of generality. The effective potential $V(T)$ takes the form \begin{align} V(T) = M_0 \frac{1}{\cosh\frac{T}{2x}} \ , \end{align} where the D0-brane tension $M_0 \propto \frac{1}{g_s} $. For the non-BPS D-branes in the supersymmetric theory, $x=1$, and for the unstable D-branes in the bosonic string theory, $x=1/2$. The solution of the equation of motion is given by \begin{align} \sinh\frac{T}{2x} = a\cosh\frac{t}{2x} \ , \label{seom} \end{align} leading to the classical energy momentum tensor \begin{align} T_{\mu\nu} = \frac{V(T)\partial_\mu T\partial_\nu T}{\sqrt{1+\eta^{\rho\sigma}\partial_\rho T\partial_\sigma T}} - V(T) \eta_{\mu\nu} \sqrt{1+\eta^{\rho\sigma}\partial_\rho T\partial_\sigma T} \ . \end{align} We are interested in the late time behavior of the energy momentum tensor, which is explicitly given by \begin{align} T_{00} & \sim E \cr T_{ij} & \sim -E \exp(-t/x) \delta_{ij} \ , \end{align} where $E = M_0/\sqrt{1+a^2}$. As mentioned before, we have obtained pressureless dust as the final product of the D0-brane decay. The energy momentum tensor yields the coupling of the rolling D-brane to gravity. 
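One can check directly that \eqref{seom} yields this constant energy. Differentiating \eqref{seom} gives $\dot{T}\cosh\frac{T}{2x} = a\sinh\frac{t}{2x}$, and using $\cosh^2\frac{T}{2x} = 1 + a^2\cosh^2\frac{t}{2x}$ we find
\begin{align}
1-\dot{T}^2 = \frac{\cosh^2\frac{T}{2x} - a^2\sinh^2\frac{t}{2x}}{\cosh^2\frac{T}{2x}} = \frac{1+a^2}{\cosh^2\frac{T}{2x}} \ , \qquad
E = \frac{V(T)}{\sqrt{1-\dot{T}^2}} = \frac{M_0}{\sqrt{1+a^2}} \ ,
\end{align}
so the conserved energy of \eqref{tDBI} is indeed $t$-independent and equal to the value quoted above.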
In order to study the coupling to higher string modes, we need the exact boundary state that describes the rolling D-brane. In the boundary conformal field theory approach, we introduce the boundary interaction\footnote{We focus on the bosonic case for simplicity. The generalization to the non-BPS D-branes in superstring theory is straightforward.} \begin{align} \delta S_{\mathrm{full}} = \tilde{\lambda} \int \mbox{d} s \cosh X^0(s) \ , \label{fulli} \end{align} for the ``full S-brane" model, and \begin{align} \delta S_{\mathrm{half}} = \lambda \int \mbox{d} s e^{X^0(s)} \ , \label{halfi} \end{align} for the ``half S-brane" model. Here $X^0$ denotes the target-space time coordinate and the integration is taken over the boundary of the world-sheet parametrized by $s$. There are several different ways to obtain the boundary states. Originally Sen \cite{Sen:2002nu} proposed to obtain the boundary states for \eqref{fulli} by starting with the (compactified) space-like model (the boundary sine-Gordon model) and performing the Wick rotation. In the ``half S-brane" model, Gutperle and Strominger \cite{Gutperle:2003xf} proposed to use the Wick rotation of the Liouville theory in the zero linear dilaton limit (time-like Liouville theory). In coordinate space, the boundary states obtained from different prescriptions behave differently (mainly in the region $ X^0 < 0$), but in momentum (energy) space, they are related to each other in the zero mode sector. To see this, let us expand the rolling tachyon boundary states as \begin{align} |B\rangle &= i\int_C \mbox{d} t \left[ \rho (t) |0\rangle + \sigma (t) \alpha^0_{-1}\bar{\alpha}^0_{-1}|0\rangle + \cdots \right] \cr &= i\int \mbox{d}\omega \left[ \tilde{\rho} (\omega) |\omega \rangle + \tilde{\sigma} (\omega) \alpha^0_{-1}\bar{\alpha}^0_{-1}|\omega\rangle + \cdots \right] \ . \end{align} Since we are dealing with a time-dependent theory based on analytic continuation, the contour choice will affect the physics.
The zero mode density $\tilde{\rho}(\omega)$ has been computed as \begin{align} i \int_{C_{real}}\mbox{d} t \rho_{full} (t) e^{i\omega t} &= \left( e^{-i\omega \log\hat{\lambda}} - e^{i\omega \log \hat{\lambda}}\right) \frac{\pi}{\sinh\pi \omega} \cr i \int_{C_{real}} \mbox{d} t \rho_{half} (t) e^{i\omega t} & = e^{-i\omega \log\hat{\lambda}} \frac{\pi}{\sinh\pi \omega} \cr i \int_{C_{HH}} \mbox{d} t \rho_{full} (t) e^{i\omega t} & = e^{-i\omega \log\hat{\lambda}} \frac{\pi}{\sinh\pi \omega} \ . \end{align} Note that the Hartle-Hawking contour $C_{HH}$ integral of the full S-brane solution coincides with the real contour $C_{real}$ integral of the half S-brane solution (see figure \ref{fig:realHH}). This is intuitively expected because the half S-brane solution describes the later half of the dynamics of the rolling D-brane (decaying brane), and the Hartle-Hawking contour effectively sets the initial condition at $t=0$ to give a decaying D-brane. We also note that the boundary time-like Liouville field approach directly gives the same zero mode boundary wavefunction for the half S-brane solution. Thus we can conclude that the various approaches yield essentially identical results for the zero mode boundary wavefunction (i.e. the coupling to the scalar tachyon mode). \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\linewidth,keepaspectratio,clip]{realHH.eps} \end{center} \caption{Different integration contours give different boundary states.} \label{fig:realHH} \end{figure} Nevertheless, there are differences in the nonzero mode sectors between the boundary sine-Gordon approach and the boundary time-like Liouville approach. The origin of the difference is that the descendants for the boundary sine-Gordon model are based on the $SU(2)$ current algebra (at the self-dual radius), while those for the boundary Liouville theory are based on the Virasoro algebra.
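For concreteness, the half S-brane entry above can be reproduced by an elementary contour computation, assuming the standard zero-mode profile $\rho_{half}(t) = \left(1+\hat{\lambda}e^{t}\right)^{-1}$ (in $\alpha'=1$ units). Comparing the integral along $\mathbb{R}$ with that along $\mathbb{R}+2\pi i$, where the integrand is periodic up to $e^{-2\pi\omega}$, only the pole at $t = i\pi - \log\hat{\lambda}$ contributes:
\begin{align}
\left(1 - e^{-2\pi\omega}\right) \int_{-\infty}^{\infty} \mbox{d} t\, \frac{e^{i\omega t}}{1+\hat{\lambda}e^{t}}
 &= 2\pi i \, \mathrm{Res}_{t = i\pi - \log\hat{\lambda}}\, \frac{e^{i\omega t}}{1+\hat{\lambda}e^{t}}
 = -2\pi i \, e^{-\pi\omega}\, e^{-i\omega\log\hat{\lambda}} \ , \cr
i \int_{C_{real}} \mbox{d} t\, \rho_{half}(t)\, e^{i\omega t}
 &= i \left(-\frac{i\pi}{\sinh\pi\omega}\right) e^{-i\omega\log\hat{\lambda}}
 = e^{-i\omega\log\hat{\lambda}}\, \frac{\pi}{\sinh\pi\omega} \ .
\end{align}
The reflected piece of the full S-brane profile, $\rho_{half}(-t)$, contributes the opposite phase factor, which accounts for the difference of the two phases in the first line above.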
For on-shell amplitudes (and the energy-momentum tensor) we can gauge away these differences, as we will do in section \ref{sec:5-2} to compute the closed string emission rate. However, at least in the two-dimensional noncritical string example, it has been stressed in \cite{Sen:2004yv} that such off-shell boundary states will be important to generate infinitely many conserved charges in addition to the energy momentum tensor. We will revisit the problem later in the discussion of the rolling D-brane, so we will not delve into the details any further at this point and concentrate on the physics associated with the zero mode. \subsection{Radiation from rolling tachyon boundary states}\label{sec:5-2} In this subsection, we would like to study the closed string emission rate from the rolling tachyon by using the exact boundary states. We will present rather technical aspects of the computation for two reasons. One is that we are going to compare the results for the rolling tachyon and the rolling D-brane in later sections in detail. The other is to understand the nontrivial relation between unitarity (the optical theorem) and the open-closed duality, which we revisit in the more nontrivial rolling D-brane case in section \ref{sec:8-2}. Before entering into the computation, we summarize the main physics involved. \begin{itemize} \item At the one-loop level, all the energy of the D0-brane is converted into closed string radiation: the radiation rate shows a power-like divergence. \item Most of the energy is converted into highly massive strings whose mass is effectively cut off by $M\sim 1/g_s$. \item The emitted strings are highly non-relativistic. \item For D$p$-branes, as we increase $p$, the divergence becomes milder, but the spectral density is still power-like and higher moments diverge.
\item The inclusion of the {\it space-like} linear dilaton makes the divergence disappear due to the exponential suppression of the growth of the density of states. \item On the contrary, the {\it time-like} linear dilaton (along the rolling tachyon direction) does not affect the divergence. This is a first hint of the universality of the decay of unstable D-branes. \end{itemize} Now we begin our study of the closed string emission rate from the unstable D-brane. As a slight generalization of section \ref{sec:5-1}, we consider the unstable D-brane in the linear dilaton background. For the boundary states, we will use the ones obtained from the time-like Liouville theory because, with a time-like linear dilaton, the corresponding boundary states from the boundary sine-Gordon theory are unavailable. In the limit of vanishing time-like linear dilaton, however, the two computations agree as expected. The dilaton gradient is set by: \begin{align} \Phi = {1 \over \sqrt{\alpha'}} (Q\, X^0 + {\bf V} \cdot {\bf X}), \qquad \mbox{where} \qquad Q \equiv \beta - {1 \over \beta} ~ \qquad(\beta \ge 1)~. \label{dilaton}\end{align} This sets the critical dimension $D$ of the bosonic string theory through \begin{align} 26 = D - 6 Q^2 + 6 {\bf V}^2, \qquad \mbox{so} \qquad c_{\rm eff} = 6 Q_\beta^2 - 6 {\bf V}^2~, \end{align} where $Q_\beta \equiv (\beta + 1/\beta)$. The effective central charge $c_{\rm eff}$ sets the growth of the density of closed string states \cite{Kutasov:1990ua}: \begin{align} \rho^{(c)}(M) \sim e^{4\pi \sqrt{\frac{c_{\msc{eff}}}{24} \alpha' M^2}} \label{dos} \end{align} up to a subleading pre-exponential factor of $M$. It grows more slowly than the density of states of flat space-time (obtained by setting $Q={\bf V} = 0$). \subsubsection{closed string emission}\label{sec:5-2-1} Let us consider the decay of an unstable D-brane in the linear dilaton background.
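The Cardy-type growth \eqref{dos} can be made concrete in the simplest case: the level-$N$ degeneracy of a single free boson ($c_{\rm eff}=1$) is the partition number $p(N)$, with Hardy-Ramanujan asymptotics $p(N) \simeq e^{2\pi\sqrt{N/6}}/(4\sqrt{3}\,N)$, the $c_{\rm eff}=1$ instance of \eqref{dos}. A quick numerical illustration of this growth law (Python, standard library; an illustration only, not the full $\eta^{-(D-2)}$ counting):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n):
    """Number of partitions of n (level-n degeneracy of one free boson),
    via Euler's pentagonal-number recurrence."""
    if n < 0:
        return 0
    if n == 0:
        return 1
    total, k = 0, 1
    while k * (3 * k - 1) // 2 <= n:
        sign = -1 if k % 2 == 0 else 1
        total += sign * (p(n - k * (3 * k - 1) // 2)
                         + p(n - k * (3 * k + 1) // 2))
        k += 1
    return total

assert p(100) == 190569292                 # known exact value

# Hardy-Ramanujan (Cardy with c_eff = 1): p(N) ~ exp(2 pi sqrt(N/6))/(4 sqrt(3) N)
N = 100
hr = math.exp(2 * math.pi * math.sqrt(N / 6)) / (4 * math.sqrt(3) * N)
assert abs(hr / p(N) - 1) < 0.1            # already within a few percent at N = 100
```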
Consider the radiative transition of a D$p$-brane to a single closed string state of mass $M$ (set by the integer-valued oscillator level $N=\widetilde{N}$), whose on-shell energy-momentum $(\omega, {\bf k})$ obeys \begin{align} \Big(\omega_E - {i Q \over \sqrt{\alpha'}}\Big)^2 - \Big({\bf k}_E + {i {\bf V} \over \sqrt{\alpha'}} \Big)^2 =(\omega^2 -{\bf k}^2) = M^2 \quad \mbox{where} \quad \frac{1}{4}\, \alpha' M^2 = N - {c_{\rm eff} \over 24}, \end{align} where $(\omega_E, {\bf k}_{E})$ and $(\omega, {\bf k})$ are energy-momenta in the Einstein and the string frame, respectively. In string loop perturbation theory, the transition amplitude is computed by the disk one-point function $\langle \exp ((-i \omega + \frac{Q}{\sqrt{\alpha'}}) X^0) \, \exp ((i {\bf k} + \frac{{\bf V}}{\sqrt{\alpha'}}) \cdot {\bf X}) \rangle_{\msc{disk}}$ with the D$p$-brane boundary condition,\footnote{We only consider the case when the D-brane has a Neumann boundary condition in the space-like linear dilaton direction.} where the vertex operator is separated into temporal and spatial parts as indicated. The two parts factorize in the gauge where no oscillators along the temporal direction are allowed. Consequently, the transition probability ${\cal P}(\omega)$ of the radiative process is governed entirely by the temporal part (see (3.29) in \cite{Karczmarek:2003xm}): \begin{align} {\cal P}(\omega) = \Big| \left\langle e^{ (- i \omega +\frac{Q}{\sqrt{\alpha'}}) X^0} e^{(i {\bf k} + \frac{{\bf V}}{\sqrt{\alpha'}})\cdot{\bf X}} \right\rangle_{\msc{disk}} \Big|^2 &= \Big| {1 \over \beta} \Gamma(1 + i \omega\sqrt{\alpha'} \beta) \Gamma(- i \omega \sqrt{\alpha'} /\beta)\Big|^2 \nonumber\\ &={\pi^2/\beta^2 \over \sinh (\pi \omega\sqrt{\alpha'} \beta) \sinh (\pi \omega\sqrt{\alpha'} / \beta)}~.
\end{align} Then, at leading order in string perturbation theory, the total number of emitted closed strings from the decay of a D$p$-brane ($p\ge 1$) extended along the ${\bf V}$-direction is computed as \begin{align} \overline{\cal N} = N_p^2 V_p\sum_M \sqrt{\rho^{(c)}(M)} \int_{-\infty}^\infty \!{\rmd^{D-1-p} {\bf k} \over (2 \pi)^{D-1-p}} \, {1 \over 2 \omega} {\cal P}(\omega)~, \label{closed} \end{align} \noindent where the overall coefficient is abbreviated as $N_p = \pi^{\frac{D-4}{4}} (2 \pi)^{\frac{D-2}{4}-p}$ and $V_p$ is the D$p$-brane volume. In (\ref{closed}), the sum is over all final closed string states of mass $M$ with oscillator excitations symmetric between the left- and right-moving sectors. Such oscillator excitations are equivalent in combinatorics to open string excitations, so the density of the final states is given by the square root of \eqref{dos}. Owing to the slowed-down Hagedorn growth of the density of states $\rho^{(c)}(M)$, the total emission number $\overline{\cal N}$ in (\ref{closed}) (and its higher spectral moments) is ultraviolet convergent so long as the linear dilaton has a nonzero spatial component, ${\bf V} \ne 0$, as first observed in \cite{Karczmarek:2003xm}. Notice also that the temporal component of the linear dilaton does not alter the ultraviolet behavior. This is most readily seen for small ${\bf V}$ by expanding the density of states. To study the anatomy of the ultraviolet behavior, we shall now perform a Fourier transformation and re-express $\overline{\cal N}$ in the open string channel. \subsubsection{open string channel viewpoint}\label{sec:5-2-2} Physical observables such as $\overline{\cal N}$ ought to be well-defined under the Fourier transform from the closed string channel to the open string one because \begin{enumerate} \item We start with the defining expression of $\overline{\cal N}$, consistent with the optical theorem in the closed string channel. \item The expression is closed in the Euclidean signature.
Hence we are free from any subtlety that may arise from analytic continuations between the Euclidean and Lorentzian signatures of the space-time. \end{enumerate} As in \cite{Karczmarek:2003xm}, we expand the transition probability ${\cal P}(\omega)$ in a convergent power series, whose terms can be interpreted as D-instantons arrayed along the imaginary time coordinate: \begin{align} {\cal P}(\omega) = {4 \pi^2\over \beta^2} \sum_{n,m=0}^\infty e^{-2 \pi \alpha' \omega W(m,n)} \end{align} where the locations of the D-instantons are denoted as \begin{align} \alpha' W(m,n) = \sqrt{\alpha'}\Big[\Big(n+{1 \over 2} \Big) \beta + \Big(m+{1 \over 2} \Big){1 \over \beta}\Big] \ge \sqrt{\alpha'}~. \end{align} Thus, we take \begin{align} \overline{\cal N} = \Big({2 \pi N_p\over \beta}\Big)^2 V_p \sum_M \int_{-\infty}^\infty {\rmd^{D-1-p} {\bf k} \over (2\pi)^{D-1-p}} \sum_{m,n=0}^\infty {1 \over 2 \omega} e^{ - 2 \pi \alpha' \omega W(m,n)} \end{align} and rewrite each D-instanton contribution parametrically via the closed string channel modulus $t_c$ as \begin{align} {1 \over 2 \omega} e^{ - 2 \pi \alpha' \omega W(m,n)} &= \frac{\pi \alpha'}{2} \int_{-\infty}^\infty {\rmd k_0 \over 2 \pi} \int_0^\infty \rmd {t_c} \, e^{-2\pi t_c \cdot \frac{1}{4}\alpha'(k_0^2 + {\bf k}^2 + M^2)} e^{2 \pi i \alpha' k_0 W(m,n)} \ , \end{align} which gives \begin{align} \overline{\cal N} = & \Big({2 \pi N_p\over \beta}\Big)^2 V_p \frac{\pi \alpha'}{2} \sum_{m,n=0}^\infty \int_0^\infty \rmd {t_c} \int_{-\infty}^\infty {\rmd k_0 \over 2 \pi} \int_{-\infty}^\infty {\rmd^{D-1-p} {\bf k} \over (2 \pi)^{D-1-p}} \, e^{-2\pi t_c \cdot \frac{1}{4}\alpha'(k_0^2 + {\bf k}^2)} e^{2 \pi i \alpha' k_0 W(m,n)}\nonumber\\ & \hskip3.7cm \times \sum_M \sqrt{\rho^{(c)}(M)} e^{-2\pi t_c \cdot \frac{1}{4}\alpha' M^2}. \end{align} Here, we exchanged the order of summations and integrations, and first performed the integrals over the off-shell momenta $(k_0, {\bf k})$ and the sum over the mass level $M$.
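The D-instanton expansion above is nothing but the double geometric series $1/\sinh A = 2\sum_{n\ge 0} e^{-(2n+1)A}$ applied to each sinh factor of ${\cal P}(\omega)$. A minimal numerical check (Python, standard library; $\alpha'=1$ and the values of $\beta$, $\omega$ are illustrative):

```python
import math

alpha_p = 1.0                     # alpha' = 1 for the check
beta = 1.3                        # illustrative dilaton parameter

def P_closed(omega):
    """P(omega) = (pi^2/beta^2) / (sinh(pi omega beta) sinh(pi omega / beta))."""
    return (math.pi ** 2 / beta ** 2) / (
        math.sinh(math.pi * omega * math.sqrt(alpha_p) * beta)
        * math.sinh(math.pi * omega * math.sqrt(alpha_p) / beta))

def P_instanton(omega, kmax=200):
    """Same quantity as a double sum over imaginary D-instantons located at
    alpha' W(m,n) = sqrt(alpha') [(n+1/2) beta + (m+1/2)/beta]."""
    s = 0.0
    for n in range(kmax):
        for m in range(kmax):
            W = math.sqrt(alpha_p) * ((n + 0.5) * beta + (m + 0.5) / beta)
            s += math.exp(-2 * math.pi * omega * W)
    return (4 * math.pi ** 2 / beta ** 2) * s

omega = 0.8
assert abs(P_closed(omega) - P_instanton(omega)) < 1e-12 * P_closed(omega)
```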
The sum over $M$ yields the modular covariant partition function $Z^{(c)}(q_c)$ in terms of the Dedekind eta function: \begin{align} Z^{(c)} (q_c) &\equiv \sum_M \sqrt{\rho^{(c)}(M)} \, \, q_c^{\frac{1}{4}\alpha'M^2} \qquad \mbox{where} \qquad q_c \equiv e^{-2\pi t_c} \nonumber\\ &= \eta^{-(D-2)}(q_c)~. \end{align} Integrations over the $(D-p)$-dimensional momenta $(k_0, {\bf k})$ yield $(2 \pi^4 \alpha' t_c)^{-(D-p)/2}$ times the Gaussian damping factor $e^{-2\pi \alpha' W^2(m,n)/ t_c}$. We now perform the modular transformation to the open string channel $t_c = 1/t_o$, where $t_o$ is the modulus of the open string channel and $q_o \equiv e^{- 2\pi t_o}$. Putting all these together, we finally have \begin{align} \overline{\cal N} = C_p \, V_p \sum_{m,n=0}^\infty \int_0^\infty {\rmd t_o \over t_o} t_o^{-p/2}\, e^{- 2\pi t_o \alpha' W^2(m,n) } \, \eta^{-(D-2)} (q_o), \label{openexp}\end{align} with $C_p = \Big({2 \pi N_p\over \beta}\Big)^2 \frac{\pi \alpha'}{2}(2\alpha'\pi^4)^{-\frac{D-p}{2}}$, reproducing the result reported in \cite{Karczmarek:2003xm}. As it stands, the final expression \eqref{openexp} is at odds with the intuition based on, for example, Schwinger pair production in a (time-dependent) electric field, since the integral over the open string modulus $t_o$ is still intact. If the total emission number could be interpreted as arising from an on-shell two-particle branch cut in the open string channel, the modulus integral ought to be absent! Therefore, to understand the underlying physics better, we shall now compute the cylinder amplitude directly and then extract the imaginary part via the optical theorem. \subsubsection{Lorentzian cylinder amplitude}\label{sec:5-2-3} Unitarity, and the optical theorem thereof, combined with the open-closed string channel duality, should enable us to extract the emission number $\overline{\cal N}$ of closed strings from the decaying D$p$-brane as the imaginary part of the cylinder amplitude.
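The channel map $t_c = 1/t_o$ invoked above rests on the modular property $\eta(i/t) = \sqrt{t}\,\eta(it)$ of the Dedekind eta function. A quick numerical check of this property from the product representation (Python, standard library):

```python
import math

def eta(t, nmax=400):
    """Dedekind eta at purely imaginary argument:
    eta(i t) = q^{1/24} prod_{n>=1} (1 - q^n), with q = exp(-2 pi t)."""
    q = math.exp(-2 * math.pi * t)
    prod = 1.0
    for n in range(1, nmax + 1):
        prod *= 1.0 - q ** n
    return q ** (1.0 / 24.0) * prod

# modular transformation used for t_c = 1/t_o: eta(i/t) = sqrt(t) eta(i t)
for t in (0.3, 0.7, 1.0, 2.5):
    assert abs(eta(1.0 / t) - math.sqrt(t) * eta(t)) < 1e-12
```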
In the closed string channel diagram, the computation reduces to \eqref{closed}, as in quantum field theory. It is, however, somewhat nontrivial to evaluate the imaginary part of the cylinder amplitude directly from the open string channel. Here we present the {\sl ab initio} derivation, refining the one in \cite{Karczmarek:2003xm}, by starting with a manifestly well-defined Lorentzian cylinder amplitude. We begin with the cylinder amplitude in the closed string channel in which both the world-sheet and the target space-time signatures are taken Lorentzian. Omitting overall numerical factors for the moment, the amplitude is given by \begin{align} Z_{\rm cylinder} = i \pi \alpha' V_p \int_{s_c^{\rm IR}}^{s_c^{\rm UV}} \rmd s_c \int_{-\infty}^{\infty} \frac{\rmd \omega_L}{2\pi} \, \frac{\pi^2/\beta^2 \cdot q_c^{-(1-i\hat{\epsilon})^2\frac{1}{4}\alpha'\omega_L^2} }{\sinh(\pi\beta\omega_L \sqrt{\alpha'}) \sinh(\pi\omega_L \sqrt{\alpha'}/\beta)} \, Z_{{\cal M}}^{(c)}(q_c) \ , \label{cl} \end{align} where $q_c = e^{2\pi i \tau_c}$ with $\tau_c = s_c + i\epsilon$, and $Z_{{\cal M}}^{(c)}(q_c)$ represents the contribution from the closed string zero-modes and oscillator parts.\footnote{We are using a different normalization for the modulus parameters from \cite{Karczmarek:2003xm}: $t$(KLMS) = $(\pi/4)t$(here). In addition, they adopted the $\alpha'=1$ convention.} The Lorentzian world-sheet is regularized by the $i \epsilon$-prescription, while the Lorentzian space-time is regularized by the $-i \hat{\epsilon}$-prescription. $s_c^{\rm {UV}}$ ($s_c^{{\rm IR}}$) is an ultraviolet (infrared) regulator of the closed string channel modulus. With these prescriptions, the integral over $\omega_L$ is convergent so long as $2 \hat{\epsilon} s_c^{\rm UV} > \epsilon >0$ is retained.
Defining the open string modular parameter as $q_o = e^{-2\pi i \tau_o}$ where $\tau_o = s_o - i \epsilon$ with $s_o = 1/s_c$, one can rewrite \eqref{cl} in terms of the open string channel energy $\omega_L'$ as \begin{align} Z_{\rm cylinder} &= V_p \int_{s_o^{\rm UV}}^{s_o^{\rm IR}} {\rmd s_o} \int_{-\infty}^{\infty} \rmd \omega'_L \left(i \pi \alpha' \int_{-\infty}^{\infty} \rmd \omega_L \frac{\cos(\pi \alpha' \omega_L \omega'_L)}{\sinh(\pi\beta \omega_L \sqrt{\alpha'} ) \sinh(\pi\omega_L\sqrt{\alpha'}/\beta)} \right) \nonumber \\ & \hskip4cm \times \, q_o^{-(1+i\hat{\epsilon}')^2\frac{1}{4}\alpha'{\omega_L'}^2} Z_{{\cal M}}^{(o)}(q_o) \, , \end{align} where $s^{\rm IR}_o\equiv 1/s^{\rm UV}_c$ and $s^{\rm UV}_o\equiv 1/s^{\rm IR}_c$ are the cut-offs of the open string modulus. As opposed to the closed string channel, we have to adopt the $+i\hat{\epsilon}'$-prescription for the Lorentzian space-time, and the above integral is well-defined as long as $2 \hat{\epsilon}' s^{\rm UV}_o > \epsilon$. The expression in the large parentheses yields the open string density of states, $\rho^{(o)}(\omega'_L)$. It is infrared divergent at $\omega_L= 0$.
To regularize it, we minimally subtract the double pole\footnote{This subtraction does not affect the imaginary part of the partition function we are primarily interested in.} so that \begin{align} \rho^{(o)}(\omega'_L)_{\mathrm{reg}} &= i \pi \alpha' \int_{-\infty}^{\infty} \rmd \omega_L \left(\frac{\cos(\pi\alpha' \omega_L\omega'_L)}{\sinh(\pi\beta\omega_L \sqrt{\alpha'}) \sinh(\pi\omega_L \sqrt{\alpha'}/\beta)} - \frac{1}{\pi^2 \alpha'\omega_L^2} \right) \cr &= -{2}\partial_{\omega'_L} \log S_\beta\left({Q}_\beta + i\sqrt{\alpha'}\omega'_L\right) \ , \label{dens} \end{align} where the `$q$-Gamma function' $S_\beta(x)$ is defined by\footnote {Here the normalization of the variable $x$ differs by a factor of 2 from the one given in \cite{FZZ}.} \begin{align} -\partial_x \log S_\beta (x) = \int_{-\infty}^{\infty} \rmd t \left(\frac{\cosh((x-Q_\beta)t)}{2\sinh(\beta t)\sinh(t/\beta)} -\frac{1}{2t^2} \right)\ \end{align} for $\mathrm{Re}(x) < 2 Q_\beta$, analytically continued to the whole complex plane.\footnote{Notice that the Lorentzian density \eqref{dens} is well-defined without the analytic continuation. We stress that this should be contrasted against the approach of \cite{Karczmarek:2003xm}.} See, for example, \cite{FZZ,Nakayama:2004vk}. Now we perform the Wick rotation both in the target space and on the world-sheet. First, Wick rotate the open string channel energy as $\omega'_L\, \rightarrow\, e^{i(\frac{\pi}{2}-0)} \omega'_L$ and set $\omega'_L = i \omega'$ $(\omega' \in \mathbb{R})$. Then, we can safely Wick rotate the world-sheet Schwinger parameter as $s_o \, \rightarrow \, - i t_o$ ($t_o>0$). Notice that we need to perform the Euclidean rotation in opposite directions for the closed and the open string channels due to the difference in the $i\epsilon$-prescriptions. There is no obstruction to this contour deformation because $\partial_x \log S_\beta (x) $ has poles only on the real axis.
We will see that this is specific to the decaying D-brane situation and does not hold generally. In fact, in section \ref{sec:8-2} dealing with the rolling D-branes, we shall show that there exist extra contributions from crossing poles in the course of the contour rotation and that their contributions are essential for maintaining the unitarity. After Wick rotating the world-sheet, the cylinder amplitude in the open string sector is given by \begin{align} Z_{\rm cylinder} = -2 V_p \int_{0}^\infty \rmd t_o \int_{(1-i0)\mathbb{R}} \rmd \omega' \partial_{\omega'} \log S_\beta \left({Q_\beta} -\sqrt{\alpha'} \omega'\right)q_o^{\frac{1}{4}\alpha'{\omega'}^2} Z_M^{(o)}(q_o) \ . \end{align} The imaginary part of the partition function comes from the simple poles of the $q$-Gamma function $S_\beta \left(Q_\beta -\sqrt{\alpha'} \omega'\right)$ at $\frac{1}{2}\omega' = W(m,n)$ for $n,m \in \mathbb{Z}_{\ge 0} $ and the simple zeros for $n,m \in \mathbb{Z}_{< 0}$. Therefore, collecting imaginary parts from the contour integration over $\omega'$ and applying the optical theorem, we finally obtain \begin{align} \overline{\cal N} = \mathrm{Im}\, Z_{\rm cylinder} = C_p \, V_p \sum_{n,m=0}^\infty \int_0^\infty {\rmd t_o \over t_o} t_o^{-\frac{p}{2}}\, e^{-2\pi t_o \alpha' W^2 (m,n)} \, \eta^{-(D-2)}(q_o) \ , \end{align} where we have evaluated the free oscillator part explicitly and reinstated the overall numerical factors. This is in perfect agreement with \eqref{openexp}, and it may be interpreted as a nontrivial check of unitarity and open-closed duality in the Lorentzian signature. \subsubsection{D-brane decay in two-dimensional string theory}\label{sec:5-2-4} By a similar method, one can compute the spectral observables from the D-brane decay in two-dimensional string theory \cite{Klebanov:2003km}.
The boundary state for the unstable D-brane in two dimensions is given by the ZZ-brane boundary state \cite{Zamolodchikov:2001ah}: \begin{align} \langle e^{(i k + 2/\sqrt{\alpha'})\phi} \rangle_{\msc{disk}} = \mu^{-\frac{i}{2} \sqrt{\alpha'} k} \frac{2 \sqrt{\pi}}{\Gamma(1-ik\sqrt{\alpha'})\Gamma(ik\sqrt{\alpha'})} \ . \end{align} Combining it with the rolling tachyon boundary states, the total emission number of closed strings is given by \begin{align} \overline{\cal N} = N_o^2 \int^{\infty}_{0} \rmd k \int^{\infty}_{0} \frac{\rmd \omega}{2\omega} {\cal P} (\omega, k) \delta(\omega-k) \ , \end{align} where the on-shell condition $\omega = k$ is imposed, and the transition probability is \begin{align} {\cal P} (\omega, k) = \left| \langle e^{- i\omega X^{0}} e^{ (ik+2/\sqrt{\alpha'})\phi} \rangle_{\msc{disk}} \right|^2 = \frac{\sinh^2(\pi k\sqrt{\alpha'})}{\sinh^2(\pi \omega \sqrt{\alpha'})} \ . \end{align} We see that, after performing the $k$-integration, the resultant total emission number is ultraviolet divergent. To express $\overline{\cal N}$ in the open string channel, we repeat the analysis of section \ref{sec:5-2-2} and expand the transition probability in arrays of imaginary D-instantons. The result is \begin{align} \overline{\cal N} &= N_o^2 \sum_{m,n = 0}^\infty \int_0^\infty \rmd k \int_0^\infty \rmd t_c \int_{-\infty}^{\infty} \frac{\rmd k_0}{2\pi} e^{-2\pi t_c \cdot \frac{1}{4}\alpha' (k_0^2 + k^2)} e^{2\pi i \alpha' k_0 W(m,n)} \sinh^2(\pi k\sqrt{\alpha'}) \Big\vert_{\beta \rightarrow 1} \cr &= N_o^2 \sum_{m,n = 0}^\infty \int_{0}^\infty \frac{\rmd t_o}{t_o} \Big({1 \over q_o} - 1 \Big) q_o^{ \alpha' W^2(m,n)} \Big\vert_{\beta \rightarrow 1} \ , \label{twodim} \end{align} where we have reinstated $W(m,n)$ for the purpose of regularization.\footnote{Because of the subtraction of the singular vector in $(1/q_o - 1)$, the resultant amplitude is {\it non-unitary}.} The expression exhibits an ultraviolet divergence as $t_o \to \infty$.
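The $\sinh^2(\pi k\sqrt{\alpha'})$ factor in ${\cal P}(\omega,k)$ traces back to the identity $|\Gamma(1+ik)|^2 = \pi k/\sinh(\pi k)$: since $\Gamma(ik) = \Gamma(1+ik)/(ik)$, one has $|\Gamma(1-ik)\Gamma(ik)|^{2} = \pi^2/\sinh^2(\pi k)$, which sits in the denominator of the ZZ one-point function and inverts to the $\sinh^2$ in the numerator of ${\cal P}$. A numerical check in $\alpha'=1$ units, using a Lanczos approximation for the complex Gamma function (the coefficients are the standard $g=7$ set; Python, standard library):

```python
import cmath, math

# standard Lanczos coefficients (g = 7, 9 terms); valid for Re(z) > 1/2
_g = 7
_c = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Complex Gamma(z) for Re(z) > 1/2 via the Lanczos approximation."""
    z = z - 1
    x = _c[0]
    for i in range(1, _g + 2):
        x += _c[i] / (z + i)
    t = z + _g + 0.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

# |Gamma(1+ik)|^2 = pi k / sinh(pi k); together with Gamma(ik) = Gamma(1+ik)/(ik)
# this gives |Gamma(1-ik) Gamma(ik)|^2 = pi^2 / sinh^2(pi k).
for k in (0.3, 1.0, 2.5):
    lhs = abs(cgamma(1 + 1j * k)) ** 2
    rhs = math.pi * k / math.sinh(math.pi * k)
    assert abs(lhs - rhs) < 1e-10 * rhs
```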
On the other hand, it is possible to obtain the same radiation rate from the direct evaluation of the imaginary part of the Lorentzian cylinder amplitude in the open-string channel as was done in section \ref{sec:5-2-3}: \begin{align} Z_{\rm cylinder} = i N_o^2 \int_0^\infty \rmd s_c \int_{-\infty}^{\infty} \frac{\rmd \omega_L}{2\pi} \int_0^{\infty} \frac{\rmd k}{2\pi} \, \frac{\sinh(\pi \sqrt{\alpha'} k)^2 }{\sinh(\pi \sqrt{\alpha'} \omega_L)^2}\, q_c^{\frac{1}{4}\alpha'(-\omega^2_L + k^2)} \ . \end{align} After rewriting the open string density in terms of the $q$-Gamma function as in section \ref{sec:5-2-3}, we obtain the open string channel expression of the partition function. We then find the imaginary part from the poles located at $\frac{1}{2}\omega' = W(m,n)$, and reproduce \eqref{twodim}. This confirms that the partition function is manifestly unitary, obeying the optical theorem. Here again, the regularization $\beta \to 1$ is implicit. \subsubsection{ZZ brane decay in various dimensions}\label{sec:5-2-5} It is possible to generalize the discussion of section \ref{sec:5-2-4} to ZZ branes in various dimensions by introducing the time-like linear dilaton theory. If we write the dilaton slopes of the Liouville and time-like linear dilaton directions as $V_{\phi} = b+{b}^{-1}$ and $V_{t} = \beta - \beta^{-1}$ respectively, the criticality condition is given by \begin{align} 26 = D+6V_{\phi}^2 - 6V_{t}^2 \ . \label{crirr} \end{align} We can combine the one-point function for the ZZ brane (with general $b$) and the decaying D-brane boundary states for the boundary time-like Liouville theory to compute the radiation rate, as was studied in \cite{He:2006bm}. A cancellation similar to the one discussed in section \ref{sec:5-2-1} gives the UV power-like structure of the closed string radiation rate.\footnote{Technically speaking, for $D>26$, we encounter a closed string IR divergence \cite{He:2006bm}.} The result suggests again the universality of the decaying D-brane spectrum.
In this construction, owing to the criticality condition \eqref{crirr}, the existence of the time-like dilaton is unavoidable. In the following, we study the decay of the ZZ brane in $\mathcal{N}=2$ Liouville theory (or the D0-brane in the Euclidean $SL(2;\mathbb{R})/U(1)$ coset model; see section \ref{sec:6} for more details) to study more realistic models, where $\beta = 1$ (i.e. the flat limit) is feasible (see \cite{Israel:2006ip} for a particular case; this subsection is based on a generalization of their results). The absolute square of the boundary wavefunction for the $\mathcal{N}=2$ ZZ-brane is given by \begin{align} |\Psi(p,m)|^2 = \delta_{m,\bar{m}} \frac{\sinh(2\pi p)\sinh(2\pi p/k)}{\cosh(2\pi p) + \cos(\pi m)} \ , \label{zzwv} \end{align} while that for the ($\mathcal{N}=1$ supersymmetric) rolling D-brane with time-like linear dilaton $V_{t} = \beta - \beta^{-1}$ is \begin{align} |\Psi(E)|^2 = \frac{1}{\sinh(\beta E) \sinh(\beta^{-1}E)} \ . \label{rrd} \end{align} We can evaluate the on-shell ($E^2 = M^2 + \frac{p^2}{2k} + \frac{m^2}{2k}$) emission with fixed transverse mass $M$ as \begin{align} N(M) &= \int \mbox{d} p \sum_m |\Psi(p,m)|^2|\Psi(E(p,m))|^2 \cr &\sim \int \mbox{d} p e^{\frac{\pi}{k}p - \pi(\beta + \beta^{-1})\sqrt{M^2+\frac{p^2}{2k}}} \cr & \sim e^{-2\pi M \sqrt{(\frac{\beta+\beta^{-1}}{2})^2 - \frac{1}{2k}}} \ , \end{align} where in the last line we have used the saddle point approximation. On the other hand, the density of states of the emitted closed strings for large $M$ is given by \begin{align} \sqrt{\rho^{(c)}} \sim e^{4\pi M \sqrt{\frac{c_{\mathrm{eff}}}{24}}} = e^{2\pi M \sqrt{(\frac{\beta+\beta^{-1}}{2})^2 - \frac{1}{2k}}} \ . \end{align} Thus, we see an exact cancellation of the exponential part of the closed string emission rate, leaving us with the familiar power-like universal closed string emission rate. We have several comments here. \begin{itemize} \item We can analyse the bosonic case in the same way.
The first difference is that $k$ in \eqref{zzwv} should be replaced with $\kappa-2$. The second difference is that $E$ in \eqref{rrd} should be replaced with $\sqrt{2}E$. The final closed string emission rate changes, as a consequence, to $N(M) \sim e^{-2\pi M \sqrt{(\beta+\beta^{-1})^2/2 - \frac{1}{2(\kappa-2)}}}$, which will cancel against the bosonic Hagedorn density of states $\sqrt{\rho^{(c)}} \sim e^{4\pi M \sqrt{\frac{c_{\mathrm{eff}}}{24}}} \sim e^{2\pi M \sqrt{(\beta+\beta^{-1})^2/2 - \frac{1}{2(\kappa-2)}}}$. \item For simplicity, we studied the emission rate from the closed string perspective. The open string computation along the lines of sections \ref{sec:5-2-2} and \ref{sec:5-2-3} is straightforward, and we will not repeat it here. \item The conclusion here is independent of the level $k$ of the $SL(2;\mathbb{R})/U(1)$ coset, which, on the one hand, suggests a universality of the D-brane decay. On the other hand, it seems curious to observe that nothing special happens at $k=1$, where we expect a ``black hole - string transition". As we will see in sections \ref{sec:7} and \ref{sec:8} in detail, the rolling (or Euclidean hairpin) D-brane captures or probes the ``black hole - string transition". We will return to this question in section \ref{sec:9}. \end{itemize} \subsubsection{electric field and long string formation}\label{sec:5-2-6} One simple generalization of the rolling D-brane was, as we studied in section \ref{sec:5-2}, the inclusion of the linear dilaton. Another simple generalization is to introduce a constant electric field on the D-brane, i.e. we introduce the fundamental string charge \cite{Mukhopadhyay:2002en,Rey:2003xs}. In order to introduce the constant electric field (say $F^{01} = \epsilon$) on the D-brane, we can use the stringy version of the Lorentz boost. By successive applications of T-duality, a Lorentz boost, and the inverse T-duality, we end up with the boundary states with electric flux.
Operationally, the transformation is \begin{align} |0 \rangle \to \gamma |0\rangle \ , \ \ t\to \gamma^{-1} t \ , \ \ \omega \to \gamma \omega \cr \begin{pmatrix} \alpha^0 \\ \alpha^1 \end{pmatrix} \to \Lambda^{-1} \begin{pmatrix} \alpha^0 \\ \alpha^1 \end{pmatrix} \ , \ \ \ \begin{pmatrix} \bar{\alpha}^0 \\ \bar{\alpha}^1 \end{pmatrix} \to \Lambda \begin{pmatrix} \bar{\alpha}^0 \\ \bar{\alpha}^1 \end{pmatrix} \ , \end{align} where \begin{align} \Lambda = \gamma \begin{pmatrix} 1& \epsilon \\ \epsilon & 1 \end{pmatrix} \ , \ \ \gamma = \frac{1}{\sqrt{1-\epsilon^2}} \ . \end{align} From this transformation law, the energy momentum tensor can easily be read off as \begin{align} T_{00} & \sim E \gamma \cr T_{01} & \sim -E\epsilon^2\gamma -E \gamma^{-1} \exp(-\gamma^{-1}t) \cr T_{11} & \sim -E \gamma^{-1} \exp(-\gamma^{-1}t) \ , \end{align} in the $t\to \infty$ limit. The study of the closed string radiation from the boundary states is straightforward. When the $x^1$ direction is noncompact, the result is \begin{align} \langle N \rangle = \sum_M \int \mbox{d} k \frac{|\Psi(\omega_{k,M})|^2}{2\omega_{k,M}} \simeq \int^{\infty} \mbox{d} M \sqrt{\rho^{(c)}(M)}e^{-2\pi \gamma M} = \int^{\infty}\mbox{d} M e^{-2\pi (\gamma-1) M} \end{align} and the total emission rate is exponentially suppressed, essentially due to the Lorentz time delay \cite{Nagami:2003yz,Nagami:2003mr}. Now let us suppose that the $x^1$ direction is compactified with radius $R$. In this case, we have to sum over the winding modes: \begin{align} N(M) = \sum_w \int \mbox{d} k \frac{|\Psi(\omega_{k,M})|^2}{2\omega_{k,M}} \simeq \sum_w\int \mbox{d} k e^{- 2\pi\gamma (\sqrt{(wR)^2 + k^2 + M^2}-\epsilon Rw)} \ .
\end{align} For large $M$, the summation over $w$ can be evaluated by the saddle point method, which leads to \begin{align} \langle N \rangle \sim \int^\infty \mbox{d} M \sqrt{\rho^{(c)}(M)} N(M) \sim \int^\infty \mbox{d} M M^{\beta} \ . \end{align} We recover the power-like behavior of the emission rate \cite{Gutperle:2004be}.\footnote{The power $\beta$ is determined from the details of the model, e.g. the dimensionality of the D-brane and the details of the internal CFT.} The computation reveals that the winding modes dominate the emission rate in the electrified D-brane decay. Mathematically, this is due to the (T-dualized) Lorentz invariance in the $R\to 0$ limit. Physically, the decay of the D-brane produces many long macroscopic strings as the final decay product, which has a cosmological significance, as we will review in section \ref{sec:5-5}. \subsection{Classical correspondence}\label{sec:5-3} The Dirac-Born-Infeld form of the rolling tachyon effective action \eqref{tDBI} suggests a possible geometrical interpretation of the open string tachyon condensation. Such a geometrical interpretation of the rolling tachyon process would shed new light on our understanding of the nature of the open string tachyon and its condensation. It would also provide a guiding principle for a geometrical interpretation of the closed string tachyon condensation, since even the qualitative properties of closed string tachyon condensation are poorly understood compared with the open string case. In \cite{Kutasov:2004dj}, an interesting connection between the D-brane motion in the (near horizon) NS5-brane background and the rolling tachyon dynamics was pointed out. Since the NS5-brane has a tension proportional to $1/g_s^2$, in perturbative string theory we can regard it as a fixed background in which the D-brane, whose tension is proportional to $1/g_s$, moves.
In other words, in perturbative string theory the probe D-brane approximation is good and trustworthy. The effective action for the D-brane motion in an NS-NS background (i.e. without any R-R fields) is given by the Dirac-Born-Infeld action \begin{align} S = -T_p\int \mbox{d}^{p+1} \sigma e^{-\Phi}\sqrt{-\det(X^*[G+B]_{\mu\nu})} \ , \end{align} where $X^*[G+B]$ denotes the pullback to the D$p$-brane world-volume. As proposed in \cite{Kutasov:2004dj}, let us consider the D0-brane motion in the near horizon NS5-brane geometry \eqref{NH ext NS5}.\footnote{If one considers a homogeneous motion of the D-brane, the net result does not depend on the spatial dimension of the D-brane. We also assume that the D-brane sits at a point in the internal space $\mathbb{S}^3$.} Let us fix the world-sheet reparametrization invariance by taking the static gauge $\sigma^0 = t$. In this gauge, the DBI action reduces to \begin{align} S = -T_0 \int \mbox{d} t e^{\frac{\rho}{\sqrt{2k}}}\sqrt{1-\dot{\rho}^2} \ , \label{DBIr} \end{align} where the dot denotes the derivative with respect to $\sigma^0 = t$, and we have rescaled the radial direction $\rho$ so that we have a canonical kinetic term. Let us compare the effective action for the radion field $\rho$ \eqref{DBIr} with the open string tachyon effective action \eqref{tDBI}. It is clear that in the large (negative) $\rho$ region these two expressions essentially coincide with each other.\footnote{If one takes $k=2$, the coincidence becomes exact, including the numerical factor in the tachyon (radion) potential.} This is the classical ``tachyon - radion correspondence": one can identify the effective action for the rolling tachyon problem with the effective action for the rolling D-brane in the NS5-brane, or linear dilaton, background. The ``radion field" $\rho$ plays the role of the tachyon field $T$ here.
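Since the Lagrangian in \eqref{DBIr} has no explicit time dependence, its conserved energy furnishes a first integral of the radion motion; the following short check (added here for orientation) makes the late-time kinematics manifest:
\begin{align}
E = \dot{\rho}\,\frac{\partial L}{\partial \dot{\rho}} - L = \frac{T_0 \, e^{\frac{\rho}{\sqrt{2k}}}}{\sqrt{1-\dot{\rho}^2}} \ , \qquad
\dot{\rho}^2 = 1 - \left(\frac{T_0}{E}\right)^2 e^{\frac{2\rho}{\sqrt{2k}}} \ ,
\end{align}
so the D-brane accelerates toward $\rho \to -\infty$ and asymptotically approaches the speed of light, mirroring the rolling tachyon limit $\dot{T} \to 1$ in which the pressure vanishes.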
Note, however, that the radion field is not actually tachyonic: although it has a run-away potential, it has no unstable extremum, since it is a massless field at the tree level. One can readily solve the classical equation of motion based on the action \eqref{DBIr} as \begin{align} e^{-\frac{\rho}{\sqrt{2k}}} = c \cosh\left(\frac{t}{\sqrt{2k}}\right) \ , \label{seoms} \end{align} which agrees with the late time behavior of the rolling tachyon problem \eqref{seom}. The energy momentum tensor can be read as \begin{align} T_{00} &= E\delta(\rho-\rho_0(t)) \cr T_{0\rho} &= E \tanh\left(\frac{t}{\sqrt{2k}}\right) \delta(\rho-\rho_0(t)) \cr T_{ij} &= -E \mathrm{sech^2}\left(\frac{t}{\sqrt{2k}}\right) \delta(\rho-\rho_0(t)) \delta_{ij} \ \ \ (i,j = 1,\cdots,p) \ , \end{align} where $\rho_0(t)$ is the classical solution of the radion motion \eqref{seoms}. As expected, the energy momentum tensor tends to that of a pressureless dust as $t\to \infty$. The $(0\rho)$ component has a natural interpretation as the momentum transfer in the $\rho$ direction, because the decaying D-brane moves in the $\rho$ direction almost at the speed of light as $t\to \infty$. What is the end point of the ``radion condensation"? In the case of the open string tachyon condensation, Sen's conjecture states that we end up with the closed string vacuum, where the open string excitations become infinitely massive and disappear from the physical spectrum. From the effective field theory approach taken here, it is difficult to establish this statement in a satisfactory manner, because in the large $|\rho|$ regime the effective string coupling becomes large due to the linear dilaton gradient. One way to study this might be to uplift the system to M-theory (e.g. by using the interpolating metric proposed in \cite{Aharony:1998ub}). The subsequent physics, however, is intuitively clear: the D-brane will be absorbed into the NS5-brane and form a non-threshold bound state.
The open string spectrum on the D-brane should be modified so that it matches the excitations on the bound state.\footnote{It would be an interesting open problem to study the tachyon - radion correspondence from the open string field theory and prove the analogue of Sen's conjecture.} There are several generalizations of the problem. One interesting question is whether we can obtain an effective DBI action whose potential is exactly identical to that of the rolling tachyon, and not only in the large $\rho$ region. This is possible by considering an array of NS5-branes on $\mathbb{R}^3\times \mathbb{S}^1$ rather than a stack of NS5-branes in $\mathbb{R}^4$ \cite{Kutasov:2004ct}. Because the attractive forces from the NS5-branes on either side of the D-brane point in opposite directions, the potential of the D-brane can have a local extremum: \begin{align} S = -T_0 \int \mbox{d} t \frac{1}{\cosh\frac{\rho}{\sqrt{2k}}}\sqrt{1-\dot{\rho}^2} \ ,\label{aDBIr} \end{align} which completely agrees with \eqref{tDBI}. Unfortunately, unlike for the NS5-branes on $\mathbb{R}^4$, the exact quantization of the rolling D-brane in this geometry is unavailable.\footnote{The exact boundary states for a {\it static} (unstable) brane in a similar background have been constructed in \cite{Eguchi:2004ik}, which reproduces the mass of the geometrical tachyon (i.e. radion).} Another interesting generalization is to consider the D-brane motion in the non-extremal black NS5-brane background. Interestingly, after a simple coordinate transformation, the classical motion of the D-brane in the non-extremal NS5-brane background (outside of the horizon) is identical to that in the extremal NS5-brane background.
To see this, we note that by introducing the `tachyon' variable $Y \equiv \log \sinh \rho$, the DBI Lagrangian of the D0-brane can be cast into that of the rolling tachyon: \begin{align} L_{\rm D0} &= - e^{-\Phi} \sqrt{\left({\mbox{d} s \over \mbox{d} t}\right)^2} = - V(Y) \sqrt{1 - \dot{Y}^2} \qquad \mbox{where} \qquad V(Y) = M_0 \, e^Y ~, \label{nonextremal D0} \end{align} if we restrict ourselves to the region outside the horizon. An important point is that, in sharp contrast to the extremal background \eqref{NH ext NS5}, the dilaton is finite everywhere. Thus, the strong coupling singularity is now capped off by the horizon. The construction of the exact boundary states for the rolling D-brane in the two-dimensional black hole (or non-extremal NS5-brane) is one of the main themes of this thesis. For another example of an exactly solvable deformation, one can introduce constant electric fields as we did in the rolling tachyon example. This has been studied in \cite{Nakayama:2004ge}, where we have constructed exact boundary states and have shown the correspondence between the electrified rolling tachyon problem and the electrified rolling radion problem even with $\alpha' \sim 1/k$ corrections. As yet another generalization, the rotating D-brane solution in the NS5-brane background has also been studied in \cite{Kutasov:2004dj}, which could be regarded as a rotationally Lorentz-boosted solution as pointed out in \cite{Nakayama:2004ge}, but the exact boundary state is yet to be constructed. Other classical studies of D-brane motion in related backgrounds include \cite{Yavartanoo:2004wb,Panigrahi:2004qr,Ghodsi:2004wn,Sahakyan:2004cq,Toumbas:2004fe,Bak:2004tp,Chen:2004vw,Kluson:2005qx,Lapan:2005qz,Thomas:2005am,Thomas:2005fw,Kluson:2005dr,Kluson:2005eb,Kluson:2005zw,Papantonopoulos:2006eg,Okuyama:2006zr,Gumjudpai:2006hg}.
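To make the change of variables behind \eqref{nonextremal D0} explicit, the following sketch assumes the Lorentzian cigar form of the non-extremal geometry, $\mbox{d} s^2 \propto -\tanh^2\!\rho \, \mbox{d} t^2 + \mbox{d} \rho^2$ with $e^{-\Phi} \propto \cosh\rho$ (units chosen so that the overall $k\alpha'$ factor is absorbed, and the constant prefactor into $M_0$):
\begin{align}
L_{\rm D0} = - M_0 \cosh\rho \,\sqrt{\tanh^2\!\rho - \dot{\rho}^2}
= - M_0 \sinh\rho \,\sqrt{1 - \coth^2\!\rho \,\dot{\rho}^2}
= - M_0 \, e^{Y} \sqrt{1-\dot{Y}^2} \ ,
\end{align}
where we used $\dot{Y} = \coth\rho \, \dot{\rho}$ and $e^{Y} = \sinh\rho$, reproducing the exponential `tachyon' potential quoted above.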
Before concluding this subsection, we would like to stress again that the correspondence at the level of the effective action holds only in the large $\rho$ or $T$ regime, which is precisely where the effective action analysis loses its validity, because the effective string coupling grows there. Therefore, the quantum correspondence we will prove in later sections, based on the one-loop string perturbation theory, is actually not so obvious, and we should rather regard it as a highly nontrivial statement of the universality of the properties of decaying D-branes. \subsection{Quantum correspondence}\label{sec:5-4} So far, we have mainly discussed the classical correspondence between the rolling tachyon problem and the rolling radion problem at the level of the effective action. Aside from the debate over the effectiveness of the rolling tachyon DBI-like action \eqref{tDBI}, we have one tunable parameter $k$ in the rolling radion problem, so it is important to analyse a possible $k$ dependence of this correspondence. We know from the discussion in section \ref{sec:2} that $1/k$ measures the $\alpha'$ corrections to the background geometry. When $k$ becomes larger, the classical geometry, and hence the DBI action, is more trustworthy. On the other hand, when $k$ becomes smaller, the geometry receives large $\alpha'$ corrections and the effective action approach may break down. In particular, the exact correspondence at the level of the effective action requires $k=2$, which is rather in the strongly coupled regime.\footnote{In the bosonic case, we need to set $k=1$.} Furthermore, if one considers the two-dimensional black hole geometry (as the non-extremal NS5-brane background), the appearance of the stretched horizon blurs the geometry. In addition, we expect a ``black hole - string transition" at $k=1$. It is of utmost interest to probe such a phase transition with the rolling D-brane.
In the following sections, we will construct the exact boundary states for such rolling D-branes in the NS5-brane background, and reveal the nature of the $1/k \sim \alpha'$ corrections to the tachyon - radion correspondence. After the construction of the exact boundary states, we study the closed string radiation rate as we did in the rolling tachyon case in section \ref{sec:5-2} and compare the results. For convenience, we summarize our main physical results here \cite{Nakayama:2005pk}: \begin{enumerate} \item The closed string emission rate from the rolling D-brane (which will be computed in section \ref{sec:8-2}) yields exactly the same behavior as that from the rolling tachyon (which was computed in section \ref{sec:5-2}). In particular, the power-like behavior of the spectral density does not depend on $k$ (up to an overall normalization). This is true as long as $k>1$, and confirms the tachyon - radion correspondence from the exact boundary states. \item The independence from the extra parameter $k$, which governs even the world-sheet (stringy) $\alpha'$ corrections, suggests a universal nature of the decaying D-brane: all the energy of the D-brane will be radiated as a gas of closed strings, whose dominant contribution comes from the highly massive (long) strings. If one introduces a fundamental string charge, as an electric flux, the dominant contributions from the rolling D-brane again come from the winding strings, as we have seen in the rolling tachyon problem in section \ref{sec:5-2-6}. \item The situation changes drastically if one studies the case $k<1$. The closed string emission rate is exponentially suppressed, and the tachyon - radion correspondence breaks down. This is in accord with the ``black hole - string transition" at $k=1$ discussed in section \ref{sec:4}. Our result is the first physical manifestation of the ``black hole - string transition" in the two-dimensional black hole probed by the rolling D-brane.
\end{enumerate} \subsection{Cosmological implications}\label{sec:5-5} From the early days of its invention, the rolling tachyon system has also been studied in the context of cosmological applications. In particular, the realization of inflation in string theory has attracted more and more attention recently, with increasing evidence for the existence of such a period in the history of our universe (see \cite{Linde:2005dd} and references therein). Indeed, one of the simplest proposals for inflation from string theory is tachyon inflation, where the (open string) tachyon plays the role of the inflaton \cite{Kofman:2002rh,Frolov:2002rr,Li:2002et,Fairbairn:2002yp,Shiu:2002qe}. The tachyon - radion correspondence discussed so far enables us to consider a variety of radion (or geometrical tachyon) inflation models. Through the classical tachyon - radion correspondence, many features of tachyon inflation can be translated into those of radion inflation in greater generality \cite{Thomas:2005fu}. The starting point of tachyon (radion) inflation is the (minimal) coupling of the DBI-like action \eqref{tDBI} or \eqref{DBIr} to gravity: \begin{align} L_{\mathrm{eff}} = \sqrt{-g} \left(\frac{R}{16\pi G} - V(T)\sqrt{1+g^{\mu\nu}\partial_\mu T \partial_\nu T} \right) \ . \label{frwl} \end{align} For our realistic application, we consider the four-dimensional (non-compact) space-time, and set $8\pi G = M_p^{-2}$ with the four-dimensional Planck mass $M_p$. Under the assumption of a Friedmann-Robertson-Walker isotropic universe, the four dimensional metric can be written as\footnote{Since inflation flattens space exponentially, we have assumed a spatially flat universe for simplicity.} \begin{align} \mbox{d} s^2 = -\mbox{d} t^2 + a(t)^2 \mbox{d} x^2_i \ \ \ (i = 1,2,3) \ .
\end{align} We begin with the equation of motion for $T$: \begin{align} \frac{\ddot{T}}{1-\dot{T}^2} + 3H\dot{T} + \frac{V'}{V} = 0 \ , \end{align} where the prime denotes the derivative with respect to $T$ (i.e. $V' = \partial V(T)/\partial T$) and the dot denotes the time derivative. $H$ here denotes the Hubble parameter $H \equiv \dot{a}/{a}$. In the slow-roll approximation (i.e. $\dot{T}\ll 1$), the Friedmann equation reads \begin{align} H^2 = \left(\frac{\dot{a}}{a}\right)^2 = \frac{V(T)}{3M_p^2} \ , \end{align} and the slow-roll equation reduces to \begin{align} 3H\dot{T} = -\frac{V'(T)}{V} \ . \end{align} For the slow-roll parameter $\eta \sim (H')^2/H^4$ to be small enough, we must require \begin{align} H^2 \gg \frac{(V')^2}{V^2} \ . \end{align} Suppose our (geometrical) tachyon potential has a local extremum, as is the case with the rolling tachyon and the geometrical tachyon in the array of NS5-branes background. Inflation near the local extremum is possible if $H^2 \gg |m^2|$, where $m^2$ is the mass squared of the (geometric) tachyon. The condition is equivalent to \begin{align} \frac{g_s}{v l_s} \gg \frac{C}{k l_s} \ , \label{gom} \end{align} where $v$ is the volume of the compactification,\footnote{We are assuming a direct product type compactification. If we consider a warped compactification, we can relax the condition.} and if we keep track of every numerical factor, we find $C\sim 260$. In the original rolling tachyon problem, $k = 2$, and it is difficult to find a consistent solution in perturbative string theory while maintaining the COBE normalization $H/M_p \sim 10^{-5}$ \cite{Kofman:2002rh,Frolov:2002rr}. In the geometric tachyon case, we have the extra parameter $k$, and if we choose a large enough $k$, it is possible to satisfy the condition \eqref{gom} consistently with the COBE normalization. We can also satisfy the slow-roll condition in the geometric tachyon case. For instance, $\eta \ll 1$ is equivalent to the condition $H \gg |m|$ for $T < O(1)$.
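As an illustrative consequence of these slow-roll equations (a standard manipulation added here for orientation, not part of the original derivation), the number of e-folds accumulated while the tachyon rolls from $T_i$ to $T_f$ can be written as
\begin{align}
N = \int H \, \mbox{d} t = \int_{T_i}^{T_f} \frac{H}{\dot{T}} \, \mbox{d} T
\simeq \int_{T_i}^{T_f} \frac{3H^2 V}{|V'|} \, |\mbox{d} T|
= \frac{1}{M_p^2}\int_{T_i}^{T_f} \frac{V^2}{|V'|} \, |\mbox{d} T| \ ,
\end{align}
where we used the slow-roll equation and $H^2 = V/3M_p^2$; a potential satisfying $H^2 \gg (V')^2/V^2$ thus yields many e-folds per unit field excursion.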
The key point here is that we have an extra tunable parameter $k$ to obtain a sustainable inflation in the case of the geometric tachyon, unlike in the original rolling tachyon cosmology, where such a tunable parameter is absent. Nevertheless, this rolling tachyon (radion) type cosmology still has a serious drawback, as pointed out in \cite{Kofman:2002rh,Frolov:2002rr,Shiu:2002qe}. The problem is related to how the inflation will end. Since the effective potential for the rolling tachyon (radion) runs away exponentially as $T \to \infty$, there is no minimum about which the tachyon can oscillate. Therefore, it is allegedly impossible to reheat the universe to produce matter, i.e. after the tachyon inflation we would end up with an empty universe, which is of course unacceptable. Again, in the context of the geometric tachyon (radion), we can avoid this reheating problem by preparing a ring of NS5-branes and letting the D-brane evolve inside the ring \cite{Thomas:2005fu}. The effective potential then has global minima, and the oscillation around the minima produces the reheating needed to generate matter and hence our galaxies. We could continue this line of reasoning and study, for instance, the spectral index of the cosmic microwave background, but this is not the main scope of this thesis. Rather, we would like to point out how inaccurate this kind of effective action analysis for the rolling D-brane is {\it after coupling to the massive closed string sector}. In particular, the stringy treatment of the decay of the D-brane completely changes the nature of the reheating from such rolling tachyon (or D-brane) systems. Let us begin with the following illustrative toy example. In the above discussions, couplings of the (decaying or rolling) D-brane to higher massive stringy modes (except for the graviton) have been neglected. In the usual field theory, we expect corrections to the effective action of order $\sim M_{p}^2/M^2$, where $M^2$ denotes the mass squared of the fields integrated out.
The point is that we have to sum over infinitely many massive fields: e.g. in Kaluza-Klein theory, $M^2 \propto n^2$, where $n$ denotes the internal momenta, so the summation over $n$ schematically gives \begin{align} \sum_n \frac{1}{M_n^2} \sim \sum_n \frac{1}{n^2} = \frac{\pi^2}{6} \ , \end{align} which is finite. However, in string theory, the number of massive string modes grows exponentially, $\rho(M) \sim \exp(\beta_{\rm Hg} M)$, as we discussed in connection with the Hagedorn temperature in section \ref{sec:4-1}. Thus the summation over all the massive modes with the coupling $\sim 1/M_s^2$ clearly diverges. In reality, the coupling to the massive closed string sectors is much softer, being exponentially suppressed as $\exp(-\beta M)$. A case-by-case computation is needed to see which exponential factor dominates, but from our results (summarized in section \ref{sec:5-4}) it seems universal that the exponential parts cancel out and the closed string backreaction is characterized by a power-like behavior, irrespective of the superficial strength of the $\alpha' \sim 1/k$ corrections.\footnote{It is also interesting to note that the exponential suppression of the higher massive modes occurs when $k<1$, in the regime where the supergravity approximation is invalid.} In this way, we can conclude that the reheating of the universe through the rolling tachyon - radion is more effective than one might expect from the naively truncated effective action. As the direct calculation shows, almost all of the energy is radiated as closed strings without any need for oscillation around an extremum.\footnote{As we have discussed in section \ref{sec:5-2}, the emission rate is finite (power-like) for higher dimensional branes. However, this does not mean that the truncated effective action (DBI+FRW, such as \eqref{frwl}) is adequate for discussing the closed string backreaction.
It just means that it is more effective for the brane to decay by disconnecting patches of the D-brane as D0 particles (assuming it is uncharged). Mathematically, it is just an artefact of the one-point decay, and the one-point decay is no more effective than the higher-point decay.} The actual problem, therefore, is how we can transmit energy from the radiated (massive) closed strings to the standard model sector. This problem is rather model dependent and we will not study it any further in detail here (see e.g. \cite{Kofman:2005yz,Frey:2005jk,Chialva:2005zy,Chen:2006ni} for recent studies). As we have studied in section \ref{sec:5-2}, the final decay product of the rolling tachyon and the rolling D-brane is most likely long closed strings. This can be directly seen by assigning fundamental string charges to the unstable D-branes; even without assigning such charges, it is intuitively expected to be so by considering pair production. This is in good agreement with the usual Kibble mechanism of producing long macroscopic strings in the universe: causally disconnected regions create long strings, which then evolve independently. It would be of great interest to study this problem quantitatively from the string theory viewpoint and determine the remnant density of cosmic strings associated with the D-brane decay (see \cite{Polchinski:2004ia} for a review of cosmic strings from superstring theory). Such studies would verify or even exclude the geometric tachyon inflation. It is also of great importance to revisit the reheating process of various D-brane inflation scenarios to see whether the classical oscillatory contribution is really dominant over the emission of the highly massive string modes. \newpage \sectiono{D-branes in Two-dimensional Black Hole}\label{sec:6} We now begin our study of D-branes in the two-dimensional black hole background. In this section, we review the D-branes in the Euclidean two-dimensional black hole.
The organization of the section is as follows. In section \ref{sec:6-1}, we classically analyze the D-branes in the two-dimensional black hole and derive the mini-superspace boundary wavefunctions. In section \ref{sec:6-2}, we review the exact boundary states describing the D-branes in the two-dimensional black hole system. \subsection{Classical D-branes}\label{sec:6-1} \subsubsection{DBI analysis}\label{sec:6-1-1} The classification of D-branes in general curved backgrounds is given by the solutions of the Dirac-Born-Infeld action coupled with the Chern-Simons action.\footnote{Of course, one could imagine unstable D-branes whose effective action is {\it not} given by the DBI action + Chern-Simons, but they are outside the scope of our discussion.} The total effective action is \begin{align} S = -\mu_p \int \mbox{d}^{p+1}\xi e^{-\Phi}\sqrt{-\det(G_{ab}+B_{ab}+F_{ab})} +i\mu_p\int e^{F_2+B_2} \wedge \sum_q C_q \ , \label{DBICS} \end{align} where the summation over $C_q$ should be taken over all R-R fields in the theory we are considering. In this section, we study the D-branes in the Euclidean two-dimensional black hole: \begin{align} \mbox{d} s^2 = k\alpha' (\tanh^2\rho \, \mbox{d} \theta^2 + \mbox{d} \rho^2) \ , \qquad e^{2\Phi} = \frac{k}{\mu \cosh^2 \rho} \ . \end{align} In the Euclidean two-dimensional black hole background, there is neither a Kalb-Ramond $B_{\mu\nu}$ field nor any R-R fields, so the effective action is simply given by the DBI term with a possible electromagnetic flux $F_{\mu\nu}$ on it. Since we are in the Euclidean signature, the DBI action \eqref{DBICS} should be Wick-rotated in an appropriate manner:\footnote{Our ``Wick rotation" here is nothing but adding a (dummy) extra decoupled time direction and setting a trivial Neumann boundary condition along the time direction. In particular, we will assume $F + B$ is real.} \begin{align} S^E= \mu_p \int \mbox{d}^{p}\xi e^{-\Phi}\sqrt{\det(G_{ab}+B_{ab}+F_{ab})} \ . \end{align} We begin with the (Euclidean) D0-brane.
The D0-brane is a point particle and the DBI action on it is simply given by \begin{align} S_{\mathrm{D}0} = \mu_0 e^{-\Phi} \propto \cosh\rho \ . \end{align} It is clear that the extremum of the action is obtained at $\rho=0$. Thus we conclude that the D0-brane is localized at the tip of the cigar. Next we study the D1-brane. The DBI action for the D1-brane is given by \begin{align} S_{\mathrm{D}1} = \mu_1 \int \mbox{d}\theta \cosh\rho(\theta) \sqrt{\rho'(\theta)^2 + \tanh^2\rho(\theta)} \ , \end{align} where we have fixed the reparametrization invariance by using the gauge $\xi = \theta$. The equation of motion is easily solved from the ``energy" conservation \begin{align} \text{const} = \cosh\rho \frac{\tanh^2{\rho}}{\sqrt{(\rho')^2 + \tanh^2\rho}} \ \end{align} as \begin{align} \sinh(\rho) \cos(\theta -\theta_0) = \sinh\rho_0 \ . \label{trsce} \end{align} For later purposes, we note that if one uses the complex coordinates \begin{align} u = \sinh\rho e^{i\theta} \ , \ \ \bar{u} = \sinh\rho e^{-i\theta} \ , \end{align} the classical trajectory \eqref{trsce} takes the form of a straight line in the complex $u$ plane. This can also be seen from the fact that the DBI action takes the flat form \begin{align} S_{D1} = \mu_1 \int \mbox{d}\xi \sqrt{\frac{du}{d\xi}\frac{d\bar{u}}{d\xi}} \ \end{align} in this coordinate. We finally examine the D2-brane. In this case, we can introduce a magnetic flux $F_{\rho\theta} = f(\rho,\theta)$. By fixing the reparametrization invariance as $\xi_1 = \rho$, $\xi_2 =\theta$, the DBI action reads \begin{align} S_{\mathrm{D}2} = \mu_2 \int \mbox{d}\theta \mbox{d}\rho \cosh\rho \sqrt{\tanh^2\rho + f^2(\rho,\theta)} \ . \end{align} From the Gauss law constraint, we have \begin{align} c = \frac{\cosh\rho f(\rho,\theta)}{\sqrt{\tanh^2\rho + f^2(\rho,\theta)}} \ , \end{align} which determines the magnetic flux as \begin{align} f^2(\rho,\theta) = \frac{c^2\tanh^2\rho}{\cosh^2\rho - c^2} \ .
\label{magfl} \end{align} If $c>1$, the D2-brane partially wraps the cigar and has a boundary at $\rho = \mathrm{arccosh}(c)$, because at that value of $\rho$ the magnetic field blows up. On the other hand, if $c<1$, the D2-brane wraps the whole cigar. In the latter case, the magnetic field on the D2-brane induces a D0-brane charge near the tip of the cigar, which should be quantized. Writing $c = \sin\sigma \le 1$, we obtain the classical quantization condition as \begin{align} \frac{\sigma - \sigma'}{2\pi} k \in \mathbb{Z} \ . \end{align} In quite a similar fashion, we can also study the classical D-branes in the T-dualized trumpet background: \begin{align} \mbox{d} s^2 = k\alpha'\left(\mbox{d}\rho^2 + \frac{1}{\tanh^2\rho} \mbox{d}\tilde{\theta}^2\right) \ , \ \ e^{2\Phi} = \frac{k}{\mu \sinh^2 \rho} \ . \end{align} Since the discussion is completely parallel, we only present the results. A D0-brane (or possibly a D1-brane) could be localized at $\rho = 0$. Since $\rho=0$ is a singularity of the trumpet geometry, the presence of such D-branes is not obvious at all. Formally, we can regard it as a T-dual of the D0-brane of the cigar geometry. The D1-brane is given by the solution of the DBI action \begin{align} S_{\mathrm{D}1} = \mu_{\mathrm{D}1} \int \mbox{d}\tilde{\theta}\sinh\rho\sqrt{\frac{1}{\tanh^2\rho}+({\rho}')^2} \ , \end{align} in the static gauge. The solution is given by \begin{align} \cosh\rho \cos(\tilde{\theta}-\tilde{\theta}_0) = \gamma \ . \end{align} When $\gamma>1$, the D1-brane is connected, while when $\gamma<1$, the D1-brane passes through the singularity and may become disconnected. Naturally, the D1-brane in the trumpet geometry is regarded as a T-dual of the D2-brane in the cigar geometry. The parameter $\gamma$ corresponds to the parameter $c$ in the cigar geometry.\footnote{The parameter $\tilde{\theta}_0$ can be T-dualized to the holonomy of the gauge field $A_0$ in the cigar.
Since the D2-brane has a nontrivial fundamental group $\pi_1 = \mathbb{Z}$, different $A_0$ gives a different D-brane (for $c>1$).} The D2-brane is classified by the solution of the DBI action \begin{align} S_{D2} = \mu_{D2}\int \mbox{d}\rho \mbox{d}\tilde{\theta} \sinh\rho \sqrt{\frac{1}{\tanh^2\rho} + F^2} \ . \end{align} The Gauss law constraint gives \begin{align} F^2 = \frac{\beta^2}{\tanh^2\rho(\sinh^2\rho-\beta^2)} \ . \end{align} The D2-brane always has a boundary at $\rho = \mathrm{arcsinh}(\beta)$. The D2-brane in the trumpet geometry naturally corresponds to the T-dual of the D1-brane in the cigar. The parameter identification is obviously given by $\beta = \sinh\rho_0$ appearing in \eqref{trsce}. \subsubsection{group theoretical viewpoint}\label{sec:6-1-2} In section \ref{sec:6-1-1}, we studied the classical D-branes in the Euclidean two-dimensional black hole from the effective DBI action. Since the two-dimensional black hole system can be realized as the $SL(2;\mathbb{R})/U(1)$ coset model, we can also study the classification of the D-branes from the gauged WZNW model \cite{Maldacena:2001ky,Gawedzki:2001ye,Elitzur:2001qd,Fredenhagen:2001kw,Ishikawa:2001zu,Yogendran:2004dm}. Indeed, all the D-branes discussed in section \ref{sec:6-1-1} descend from the branes in the parent $SL(2;\mathbb{R})$ WZNW model. The starting point is the D-branes in the parent $SL(2;\mathbb{R})$ WZNW model. We focus on the maximally symmetric D-branes for technical simplicity. As we proceed, we will see that the maximally symmetric D-branes suffice to reproduce all the D-branes obtained in the DBI analysis of section \ref{sec:6-1-1}. The maximally symmetric D-branes in the WZNW model are classified by the (twined) conjugacy classes of the group $G$, with a possible quantization condition \cite{Kato:1996nu,Alekseev:1998mc,Birke:1999ik,Behrend:1999bn,Felder:1999ka}. We call them A-branes (conjugacy classes) and B-branes (twined conjugacy classes) respectively.
In our $SL(2;\mathbb{R})$ group with the Euler angle parametrization $g = e^{i\sigma_2 \frac{t-\theta}{2}} e^{\rho\sigma_1}e^{i\sigma_2\frac{t+\theta}{2}}$, the conjugacy class is given by \begin{align} \mathrm{Tr} (g) = 2 \cos{t} \cosh{\rho} \equiv 2\kappa , \label{conj} \end{align} and the twined conjugacy class is given by \begin{align} \mathrm{Tr} (\sigma_1 g) = 2\cos{\theta} \sinh{\rho} \equiv 2\kappa' \ , \label{tconj} \end{align} up to conjugation. The D-branes in the axial coset model are obtained by gauging $g \to hgh$, where $h^a= e^{i\sigma_2 a}$ in our case. For the A-brane, we have to sum over the gauge orbit of the parent D-brane \eqref{conj} parametrized by $\kappa$ in order to obtain a gauge invariant object. The gauge transformation of the conjugacy class is given by \begin{align} \mathrm{Tr}(h^agh^a) = 2\cos(t+a) \cosh{\rho} , \end{align} so the gauge invariant orbit of the parent D-brane is given by \begin{align} \cosh{\rho} \ge \kappa \ . \label{d2wzw} \end{align} Projecting it down onto the coset coordinates (in the gauge $t=0$) is now trivial, and we obtain the D2-brane (partially) wrapping the cigar, whose world volume is restricted by the condition \eqref{d2wzw}. The shapes of the A-branes obtained here are in complete agreement with the ones obtained from the DBI analysis in section \ref{sec:6-1-1}. The precise parameter identification is $c=\kappa$ for the D2-brane.\footnote{There are several independent ways to justify this parameter identification. For instance, one can show it directly from a detailed study of the boundary conditions of the gauged WZNW model with boundaries. In \cite{Walton:2002db}, it was shown, using the T-duality technique, that the parameter $\kappa$ is indeed the field strength appearing in the effective action of the D-brane at the boundary.
Their study of the $SU(2)/U(1)$ model can be translated to our Euclidean $SL(2;\mathbb{R})/U(1)$ model with no essential modifications.} We also note that the A-brane is invariant under the isometry of the coset in this construction. Similarly, from the parent B-brane we can construct the D1-brane of the coset. In this case, since the twined conjugacy class is already gauge invariant, we can directly project \eqref{tconj} down onto the coset coordinates. The resulting D1-brane trajectory is given by \begin{align} \sinh{\rho}\cos{\theta} = \kappa' \ , \end{align} which is nothing but the one obtained in \eqref{trsce} from the DBI analysis (with $\theta_0 =0$). The B-brane constructed in this way breaks the isometry of the coset, so it has a Nambu-Goldstone mode along the $\theta$ direction. This corresponds to the rotation of $\sigma_1$ and $\sigma_3$ in the definition of the twined conjugacy class \eqref{tconj}. We could repeat the same analysis for the vector coset ($\sim$ trumpet geometry). Since the argument is completely parallel, we skip the detailed discussion and simply note that the results agree with the DBI analysis. \subsubsection{mini-superspace boundary wavefunction}\label{sec:6-1-3} In string theory, D-branes can be described either from the open string viewpoint or from the closed string viewpoint (i.e. channel duality). Technically, this is achieved by the modular transformation of the cylinder amplitudes. The boundary state $|B\rangle$ is defined by \begin{align} Z_{\mathrm{cylinder}} \equiv \int \mbox{d} t_o \mathrm{Tr}_o e^{-\pi H_o t_o} = \int \frac{\mbox{d} t_c}{t_c} \langle B| e^{-\pi H_ct_c}| B\rangle \ , \end{align} where $H_o = L_0$ is the open string Hamiltonian while $H_c = L_0 + \bar{L}_0$ is the closed string Hamiltonian.
The boundary state $|B\rangle$ satisfies the gluing condition \begin{align} (L_{n} - \bar{L}_{-n}) |B \rangle = 0 , \end{align} for the energy-momentum tensor (and similar gluing conditions for any other conserved currents, if any; see section \ref{sec:6-2} for further details). At the level of the minisuperspace approximation, the boundary states can be seen as the coupling of the D-brane to the closed string zero mode: \begin{align} \langle B |_{\mathrm{mini}} = \int_0^\infty \frac{\mbox{d} p}{2\pi} \Psi_{0}(p,n) \langle\langle p,n| \ , \label{miniwv} \end{align} where $|p,n\rangle\rangle$ is the so-called Ishibashi state \cite{Ishibashi:1988kg} associated with the primary states $|p,n\rangle$ (see section \ref{sec:6-2} for details); in the mini-superspace approximation, however, there is no difference between the two, $|p,n\rangle\rangle \sim |p,n\rangle$, because they differ only in the non-zero mode sector. The semiclassical boundary wavefunction $\Psi_{0} (p,n)$ is obtained from the overlap between the D-brane and the primary state $|p,n\rangle$ as \begin{align} \Psi_0(p,n) = \langle B|p,n\rangle \ . \end{align} The explicit form of the primary state $|p,n\rangle$ and the classical trajectory have been given in the minisuperspace approximation, as we have studied in section \ref{sec:3-2-3} and section \ref{sec:6-1-1}. In the following, we compute the minisuperspace boundary wavefunction $\Psi_0$ for each D-brane studied in section \ref{sec:6-1-1}. The results will be compared with the proposed exact boundary states in section \ref{sec:6-2}. We expect that they will agree with each other in the semi-classical limit ($k \to \infty$), and indeed they do, as we will see. Let us begin with the D0-brane. Classically, the D0-brane is localized at the tip of the cigar $\rho = 0$, and the boundary wavefunction is simply given by the minisuperspace wavefunction for $|p,n\rangle$ evaluated at $\rho = 0$.
From the explicit minisuperspace wavefunction \eqref{ef}, we can easily derive \begin{align} \Psi_0^{\mathrm{D}0}(p,n) = - \delta_{n,0} \frac{\Gamma^2(-j)}{\Gamma(-2j-1)} = -\delta_{n,0} \frac{\Gamma^2(\frac{1}{2}-\frac{ip}{2})}{\Gamma(-ip)} \ . \label{minico} \end{align} We note that the D0-brane does not couple to the momentum mode along $\theta$, which is consistent with the interpretation that the D0-brane is an A-brane (see section \ref{sec:6-1-2}). The exact analysis shows that it couples to the winding mode and the discrete states localized near the tip of the cigar, but the minisuperspace analysis cannot capture them. Next let us consider the D1-brane. Classically, the D1-brane has the shape of the hairpin. A semiclassical D-brane boundary wavefunction is the weighted sum of the wavefunctions of closed string states restricted to the location of the D-brane. In the mini-superspace approximation, as is implicit in \cite{Ribault:2003ss}, the weighted sum equals the overlap between the mini-superspace wavefunction and the delta function constraint restricting the $(\rho, \theta)$ coordinates to the hairpin trajectory \eqref{trsce} (with respect to the volume element \eqref{vol cigar}). The result is \begin{align} & \int_0^\infty \sinh \! \rho \, \mbox{d} \sinh \! \rho \int_{-\frac{\pi}{2}+\theta_0}^{\frac{\pi}{2}+\theta_0} \mbox{d}\theta\, \delta \Big(\cos (\theta-\theta_0) \sinh \rho-\sinh \rho_0 \Big) \phi^p_{n}(\rho,\theta) \cr &= \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \!\mbox{d} \theta' \, \frac{\sinh \rho_0}{\cos^2 \theta'} \phi^p_{n}(\widehat{\rho}(\rho_0,\theta'),\theta') e^{i n \theta_0} , \end{align} where $\theta'= (\theta-\theta_0)$ and $\widehat{\rho}(\rho_0,\theta')$ refers to the solution of $\cos \theta' \sinh \rho= \sinh \rho_0$.
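The measure factor in the second line follows from a one-line evaluation of the delta function. Writing $x = \sinh\rho$ and $\theta' = \theta - \theta_0$, the radial integral localizes as \begin{align} \int_0^\infty x \, \mbox{d} x \, \delta\big(x \cos\theta' - \sinh\rho_0\big) = \frac{x}{\cos\theta'}\bigg|_{x = \frac{\sinh\rho_0}{\cos\theta'}} = \frac{\sinh\rho_0}{\cos^2\theta'} \ , \end{align} while the phase $e^{in\theta_0}$ arises from the angular dependence $\phi^p_n \propto e^{in\theta}$ after the shift $\theta = \theta' + \theta_0$.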
Using the decomposition \eqref{decomp ef}, we then have to evaluate the integrals \begin{align} & \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \mbox{d} \theta \, \frac{\sinh \rho_0}{\cos^2 \theta} \, \phi^p_{L,n}(\hat{\rho}(\rho_0,\theta),\theta) = \frac{2\pi\Gamma(ip)} {\Gamma\left(\frac{1}{2}+\frac{ip+n}{2}\right) \Gamma\left(\frac{1}{2}+\frac{ip-n}{2}\right)} \, e^{-ip\rho_0} ~, \nonumber\\ & \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \mbox{d} \theta \, \frac{\sinh \rho_0}{\cos^2 \theta} \, \phi^p_{R,n}(\hat{\rho}(\rho_0,\theta),\theta) = \frac{2\pi\Gamma(-ip)} {\Gamma\left(\frac{1}{2}-\frac{ip+n}{2}\right) \Gamma\left(\frac{1}{2}-\frac{ip-n}{2}\right)} \, e^{+ ip\rho_0} ~. \label{evaluation overlap phi} \end{align} Details of the computation are relegated to Appendix \ref{mini}. Using the mini-superspace reflection amplitude \eqref{cref amp}, we then obtain \begin{align} &\Psi^{(0)}_{\rm D1}(\rho_0,\theta_0;p,n) = \frac{2\pi\Gamma(ip)} {\Gamma\left(\frac{1}{2}+\frac{ip+n}{2}\right) \Gamma\left(\frac{1}{2}+\frac{ip-n}{2}\right)} \, e^{in\theta_0} \left(e^{-ip\rho_0} + (-1)^n e^{+ip\rho_0}\right)~. \label{hairpin D1 classical} \end{align} The D1-brane couples to the momentum modes, as is clear from the geometry, which is consistent with the interpretation that the D1-brane is a B-brane (see section \ref{sec:6-1-2}). Finally, we study the D2-brane. The D2-brane is parametrized by the parameter $c$ appearing in the amount of the magnetic flux \eqref{magfl}. Since the qualitative features of the D2-brane seem to be different for $c>1$ and $c<1$, it is natural to study them separately. As the mini-superspace analysis for the D2-brane has not been available in the literature, we would like to present it in some detail here.\footnote{The author would like to thank S.~Ribault for stimulating discussions on this problem.} Let us begin with the case when $c>1$. The D2-brane only partially wraps the cigar because at $\rho = \mathrm{arccosh}(c)$, the field strength diverges.
We parametrize $c = \cosh r_0$. Since the D2-brane couples to the winding states, the minisuperspace analysis is only possible for the zero winding sector.\footnote{We could avoid this problem in the T-dual picture, which will be discussed later.} The (zero momentum/winding) minisuperspace wavefunction is given by \begin{align} \phi_{p,m=0}(\rho) &= -\frac{\Gamma^2(-j)}{\Gamma(-2j-1)}F(j+1,-j;1;-\sinh^2\rho) \cr &= (\sinh\rho)^{-1-ip} F\left(\frac{1}{2} + \frac{ip}{2}, \frac{1}{2} + \frac{ip}{2}; 1+ ip;-\frac{1}{\sinh^2\rho}\right) \cr &+ \frac{\Gamma(ip)\Gamma^2(\frac{1}{2}-\frac{ip}{2})}{\Gamma(-ip)\Gamma^2(\frac{1}{2}+\frac{ip}{2})}(\sinh\rho)^{-1+ip} F\left(\frac{1}{2} - \frac{ip}{2}, \frac{1}{2} - \frac{ip}{2}; 1- ip;-\frac{1}{\sinh^2\rho}\right) \ , \end{align} where $j = -\frac{1}{2} + \frac{ip}{2}$. The boundary wavefunction in the minisuperspace approximation is given by \begin{align} \Psi_{2}(r_0)^{\mathrm{mini}} = \int_{r_0}^{\infty} \mbox{d}\rho \cosh\rho \frac{\sinh\rho}{\sqrt{\cosh^2\rho-\cosh^2r_0}} \phi_{p,m=0}(\rho) \ . \end{align} Now we can perform the integration as follows: \begin{align} & \int_{r_0}^\infty \mbox{d}\rho \cosh\rho \frac{\sinh\rho}{\sqrt{\cosh^2\rho-\cosh^2r_0}} (\sinh\rho)^{-1-ip} F\left(\frac{1}{2}+\frac{ip}{2},\frac{1}{2}+\frac{ip}{2};1+ip;-\frac{1}{\sinh^2\rho}\right) \cr &= \frac{\Gamma(ip+1)}{\Gamma(\frac{1}{2}+\frac{ip}{2})^2} \sum_{n=0}^{\infty} (-1)^n (\sinh^2 r_0)^{-n-\frac{ip}{2}}\frac{\sqrt{\pi}}{2}\frac{\Gamma(n+\frac{ip}{2})\Gamma(\frac{1}{2}+\frac{ip}{2}+n)}{\Gamma(ip+1+n)n!} \cr &= \frac{\Gamma(\frac{ip}{2})}{\Gamma(\frac{1}{2}+\frac{ip}{2})} (\sinh^2 r_0)^{-\frac{ip}{2}} \frac{\sqrt{\pi}}{2} F\left(\frac{ip}{2},\frac{1}{2}+\frac{ip}{2};ip+1;-\frac{1}{\sinh^2 r_0}\right) \cr &= \frac{\pi\Gamma(ip)}{\Gamma(\frac{1}{2}+\frac{ip}{2})^2} e^{-ipr_0} \ . \end{align} We refer to Appendix \ref{mini} for the last equality (see also \cite{Nakayama:2005pk}).
Combining it with the second integral, which can be treated in the same manner, we obtain \begin{align} \Psi_2(r_0)^{\mathrm{mini}} = \frac{\pi\Gamma(ip)}{\Gamma(\frac{1}{2}+\frac{ip}{2})^2} \cos(pr_0) \ . \label{direce} \end{align} We can see that the boundary wavefunction for the partially wrapped D2-brane is consistent with the class 2 boundary wavefunction proposed in \cite{Fotopoulos:2004ut}, at least for the zero winding sector (see section \ref{sec:6-2} for details). We can repeat our analysis for $c \le 1$ and reproduce the minisuperspace limit of the class 3 boundary states in the zero-winding sector. In the T-dual picture, the (partially wrapped) D2-brane in the cigar geometry is supposed to be given by the D1-brane in the trumpet geometry.\footnote{One subtle point of the trumpet geometry is that the semiclassical limit is unclear. We regard $k\to \infty$ as the semiclassical limit for the $\rho$ direction, but for the $\tilde{\theta}$ direction it apparently is not. We will neglect this subtlety for the moment.} Let us now move on to the minisuperspace wavefunction for the D1-brane in the trumpet geometry. From the semiclassical DBI action \begin{align} L = \sinh\rho \sqrt{\dot{\rho}^2 + k^{-2} \coth^2\rho} \ , \end{align} the equation of motion is easily integrated with the help of energy conservation: \begin{align} L - \dot{\rho} \frac{\partial L}{\partial \dot{\rho}} = \text{const} \ , \end{align} and one can see that the classical D1-brane is described by the trajectory \begin{align} \cosh \rho = \frac{\cosh r_0}{\cos \left[(\tilde{\theta} - \tilde{\theta}_0)/k\right]} \ . \end{align} The appearance of the $1/k$ in the argument of the cosine is important.
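To make the integration explicit, note that on the trajectory with turning point $\rho = r_0$ (where $\dot{\rho} = 0$), the conserved quantity evaluates to \begin{align} L - \dot{\rho}\frac{\partial L}{\partial \dot{\rho}} = \frac{k^{-2}\sinh\rho \coth^2\rho}{\sqrt{\dot{\rho}^2 + k^{-2}\coth^2\rho}} = \frac{\cosh r_0}{k} \ . \end{align} Solving for $\dot{\rho} = \mbox{d}\rho/\mbox{d}\tilde{\theta}$ gives \begin{align} \frac{\mbox{d}\rho}{\mbox{d}\tilde{\theta}} = \frac{\cosh\rho \sqrt{\cosh^2\rho - \cosh^2 r_0}}{k \sinh\rho \cosh r_0} \ , \end{align} and the substitution $v = \cosh r_0 / \cosh\rho$ reduces this to $\mbox{d} v/\sqrt{1-v^2} = -\mbox{d}\tilde{\theta}/k$, which integrates precisely to the trajectory above.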
If $k$ is an even integer, the asymptotic form of the D-brane trajectory is given by two coincident branes, while for an odd integer $k$, it is given by two parallel branes placed at antipodal points on $\mathbb{S}^1$.\footnote{For general $k$, the asymptotic positions of the two branes break the (discrete) periodic symmetry.} The semiclassical boundary wavefunction is obtained by integrating the classical closed string wavefunction over the classical D-brane trajectory as \begin{align} \Psi_{2}(\tilde{\theta}_0,r_0)^{\mathrm{mini}} &= e^{iw\tilde{\theta}_0}\int_1^{\infty} \cosh\rho \mbox{d}(\cosh\rho) \int_{-\frac{k\pi}{2}}^{\frac{k\pi}{2}} \mbox{d}\tilde{\theta} \delta(\cos(\tilde{\theta}/k)\cosh\rho-\cosh r_0) \phi_{p,w}(\rho,\tilde{\theta}) \cr &= e^{iw\tilde{\theta}_0}\int_{-\frac{k\pi}{2}}^{\frac{k\pi}{2}} \mbox{d}\tilde{\theta} \frac{\cosh r_0}{\cos^2(\tilde{\theta}/k)} \phi_{p,w}(\hat{\rho}(r_0,\tilde{\theta}),\tilde{\theta}) \ , \end{align} where $\hat{\rho}(r_0,\tilde{\theta})$ is the solution of $\cos (\tilde{\theta}/k) \cosh \rho = \cosh r_0 $. The integration is feasible due to the formula \begin{align} \int_{-\frac{k\pi}{2}}^{\frac{k\pi}{2}} \mbox{d}\tilde{\theta} \frac{\cosh r_0}{\cos^2(\tilde{\theta}/k)} e^{iw\tilde{\theta}} (\cosh\rho)^{-1-ip} F\left(\frac{1}{2}-\frac{kw}{2}+\frac{ip}{2},\frac{1}{2}+\frac{kw}{2}+ \frac{ip}{2};1+ip;\frac{\cos^2(\tilde{\theta}/k)}{\cosh^2r_0}\right)\ \cr = \frac{2\pi \Gamma(ip)}{\Gamma(\frac{1}{2}+\frac{ip}{2}+\frac{kw}{2})\Gamma(\frac{1}{2}+\frac{ip}{2}-\frac{kw}{2})} e^{-ip r_0} \ , \label{evaluation overlap phi2} \end{align} whose derivation is relegated to Appendix B (see also \cite{Nakayama:2005pk}).
In this way, we derive the minisuperspace limit of the boundary wavefunction that describes the D1-brane in the trumpet geometry: \begin{align} \Psi_{2}(\tilde{\theta}_0,r_0)^{\mathrm{mini}} = N(b)\frac{\Gamma(2j+1)}{\Gamma(1+j+\frac{kw}{2})\Gamma(1+j-\frac{kw}{2})} e^{iw \tilde{\theta}_0} \cos(r_0(2j+1)) \ , \label{cltrum} \end{align} where $j = -\frac{1}{2} + \frac{ip}{2}$. This should be identified with the boundary wavefunction for the partially wrapped D2-brane via T-duality. Note that for the zero winding sector $w=0$, the wavefunction agrees with the direct evaluation \eqref{direce}. In a similar manner, the classical boundary wavefunction for the totally wrapped D2-brane is given by \begin{align} \Psi_{3}(\sigma,\theta_0) = \Gamma(2j+1)e^{i\theta_0\omega} \left[\frac{\Gamma(-j+\frac{kw}{2})}{\Gamma(j+1+\frac{kw}{2})}e^{i\sigma(2j+1)} + \frac{\Gamma(-j-\frac{kw}{2})}{\Gamma(j+1-\frac{kw}{2})}e^{-i\sigma(2j+1)} \right] \ . \label{clth} \end{align} The parameters $r_0$ and $\sigma$ are supposedly related to the magnetic flux on the D2-brane: \begin{align} F = \frac{\beta^2 \tanh^2\rho}{\cosh^2\rho - \beta^2} d\theta d\rho \ , \end{align} where $\beta = \sin \sigma $ for $\beta \le 1$ and $\beta = \cosh r_0$ for $\beta \ge 1$. We expect that these two classes of branes coincide in the limit $\beta = 1$ (i.e. $\sigma = \pm \frac{\pi}{2}$ and $r_0 = 0$). From the geometry of the semiclassical D-brane, we expect that if one takes a suitable limit of the boundary states, the class 2 D-brane (partially wrapped D-brane) will coincide with the class 3 D-brane (totally wrapped D-brane). To compare these two branes, we can directly show that \begin{align} \Psi_3(\pi/2,-k\pi/2) + \Psi_3(\pi/2,k\pi/2) = \Psi_{2}(0,0) \ . \end{align} This result also shows that the boundary wavefunction \eqref{clth} describes the half-cut D2-brane.
\subsubsection{embedding into NS5-branes}\label{sec:6-1-4} We have obtained the classical D-brane solutions in the (Euclidean) two-dimensional black hole background. As we have reviewed in section \ref{sec:2}, we can embed the two-dimensional black hole into the superstring theory as NS5-branes (or more generally little string theories on singular Calabi-Yau spaces). Here we would like to summarize some of the D-brane solutions in the NS5-brane background to see how one can construct them from those in the two-dimensional black hole system \cite{Elitzur:2000pq,Lerche:2000uy,Gava:2001gv,Eguchi:2003ik,Eguchi:2004yi,Eguchi:2004ik,Israel:2005fn}. Let us first concentrate on the D1-brane solution in the ring-like separated NS5-brane solution \eqref{rmet} corresponding to \begin{align} \frac{\Big[ {SL(2;\mathbb{R})_{k} \over U(1)} \times \frac{SU(2)_{k}}{U(1)} \Big]_\perp}{\mathbb{Z}_k} \ . \end{align} Naturally, we can combine various D-branes in the $SL(2;\mathbb{R})/U(1)$ coset and the $SU(2)/U(1)$ coset to construct D-branes in this background. We further focus on the two-plane $x^8=x^9 = 0$, setting $\theta =0$. The first combination is the D0-brane in the cigar and the D1-brane in the bell. The result is a D1-brane stretching between NS5-branes as in figure \ref{fig:nsbranes}. In the context of the LST, we interpret them as W-bosons. The second combination is the (uncut) D1-brane in the trumpet and the D0-brane in the bell. The resulting geometry is the straight line on the $x^8=x^9=0$ plane as in figure \ref{fig:nsbranes}. The third combination is the cut D1-brane in the trumpet and the D0-brane in the bell. It corresponds to the semi-infinite D-brane attached to the NS5-branes as in figure \ref{fig:nsbranes}. The geometries of the D3-branes are much more complicated. We refer to \cite{Ribault:2003sg,Israel:2005fn} for a detailed study of the D3-brane geometries in the NS5-brane background.
Recently, a static D-brane configuration in the black hole background has attracted much attention for a possible application to the phase transition of fundamental matter in QCD \cite{Mateos:2006nu}. In our two-dimensional black hole setup, it amounts to the study of the D-brane in the black NS5-brane background. In the Rindler limit studied in \cite{Mateos:2006nu}, there is no difference between the black NS5-brane and the black D-brane. It would be interesting to study the exact boundary states for the D-branes in the black NS5-brane background to probe the $\alpha'$ corrections to the phase transition discussed there. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\linewidth,keepaspectratio,clip]{nsbranes.eps} \end{center} \caption{The left figure shows W-bosons in LST. The central figure shows an uncut D1-brane. The right figure shows cut D1-branes attached to the NS5-branes.} \label{fig:nsbranes} \end{figure} \subsection{Exact boundary states}\label{sec:6-2} \subsubsection{Ishibashi states}\label{sec:6-2-1} To construct the exact Cardy boundary states for the D-branes in the two-dimensional black hole background, we begin with the Ishibashi states. For definiteness, we first concentrate on the bosonic axial coset, which is given by the Euclidean cigar. The coset Ishibashi states naturally descend from those for the parent current algebra. The Ishibashi state satisfies the boundary condition \begin{align} (L_{n} - \bar{L}_{-n}) |A \rangle \rangle &= 0 \cr (J_{n} -\bar{J}_{-n}) |A \rangle \rangle &= 0 \ , \end{align} for the A-brane and \begin{align} (L_{n} - \bar{L}_{-n}) |B \rangle \rangle &= 0 \cr (J_{n} + \bar{J}_{-n}) |B \rangle \rangle &= 0 \ , \end{align} for the B-brane. In terms of the primary states of the coset, the A-boundary condition means $m = \bar{m} = \frac{k\omega}{2}$, and the B-boundary condition means $m = -\bar{m} = \frac{n}{2}$.
Physically, the A-branes couple to the winding states while the B-branes couple to the momentum states in the coset. The Ishibashi states are naturally classified by the characters of the coset model. For the continuous series, we have the following normalization \begin{align} _B\langle \langle p',n'|e^{-\pi t(L_0 + \bar{L}_0)}| p,n \rangle \rangle_B = \left[\delta(p-p') + R(p,n)\delta(p+p')\right]\delta_{n,n'}\frac{q^{-\frac{p^2}{4(k-2)}+\frac{n^2}{4k}}}{\eta(\tau)^2} \cr _A\langle \langle p',\omega'|e^{-\pi t(L_0 + \bar{L}_0)}| p,\omega \rangle \rangle_A = \left[\delta(p-p') + R(p,\omega)\delta(p+p')\right]\delta_{\omega,\omega'}\frac{q^{-\frac{p^2}{4(k-2)}+\frac{\omega^2}{4}}}{\eta(\tau)^2} \ , \end{align} where the subscripts denote the boundary condition (either A-type or B-type), and $R(p,n)$ (or $R(p,\omega)$) denotes the reflection amplitude. The Ishibashi state is parametrized by the radial momentum $p$ and the angular momentum $n$ (or the winding number $\omega$). For the supersymmetric coset, we impose the following boundary conditions on the Ishibashi states: \begin{align} (L_n - \bar{L}_{-n}) | A \rangle \rangle = 0 \cr (G^{\pm}_r - i\bar{G}^{\mp}_{-r}) | A \rangle \rangle = 0 \cr (J_n - \bar{J}_{-n}) |A \rangle \rangle = 0 \ , \end{align} for A-type boundary conditions, and \begin{align} (L_n - \bar{L}_{-n}) | B \rangle \rangle = 0 \cr (G^{\pm}_r - i\bar{G}^{\pm}_{-r}) | B \rangle \rangle = 0 \cr (J_n + \bar{J}_{-n}) |B \rangle \rangle = 0 \ , \end{align} for B-type boundary conditions. Both types of boundary conditions are compatible with the diagonal $\mathcal{N}=1$ superconformal symmetry \begin{align} (G_r - i \bar{G}_{-r})|A \ \mathrm{or} \ B\rangle\rangle = 0 \ , \end{align} where $G_r = G^+_r + G^-_r$ is the supercurrent that should be gauged in the fermionic string theory.
Physically, the A-type boundary condition corresponds to a Dirichlet boundary condition along the angular direction of the cigar, and the B-type boundary condition corresponds to a Neumann boundary condition. The Ishibashi states for the supersymmetric coset for the continuous series are parametrized by three quantum numbers $(p,m,s)$. Our normalization is \begin{align} _A\langle\langle p',\omega',s'| e^{-\pi\tau_c(L_0+\bar{L}_0)} e^{i\pi y(J_0+\bar{J}_0)}|p,\omega,s\rangle \rangle_A \cr = \delta_{\omega',\omega}(\delta(p-p')+\delta(p+p')R(j,\frac{k\omega}{2},\frac{k\omega}{2})) \mathrm{ch}_{j,\frac{k\omega}{2},s}(i\tau_c,y) \ , \end{align} for the A-brane, and \begin{align} _B\langle\langle p',n',s'| e^{-\pi\tau_c(L_0+\bar{L}_0)} e^{i\pi y(J_0+\bar{J}_0)}|p,n,s\rangle \rangle_B \cr = \delta_{n',n}(\delta(p-p')+\delta(p+p')R(j,\frac{n}{2},-\frac{n}{2})) \mathrm{ch}_{j,\frac{n}{2},s}(i\tau_c,y) \ , \end{align} for the B-brane. Here $s$ denotes the spectral flow parameter. Note that the boundary condition demands $m=\bar{m}=\frac{k\omega}{2}$ for the A-brane and $m=-\bar{m}= \frac{n}{2}$ for the B-brane. The $\mathcal{N}=2$ character $\mathrm{ch}_{j,m,s}$ is defined as \begin{align} \mathrm{ch}_{j,m,s}(\tau,y) = q^{\frac{p^2}{4k}+\frac{(m+s)^2}{k}+\frac{s^2}{2}}z^{\frac{2m}{k}+s} \frac{\theta_3(\tau,y)}{\eta(\tau)^3} \ , \end{align} for the NS sector ($z=e^{2\pi i y}$). \subsubsection{exact boundary wavefunction}\label{sec:6-2-2} Let us first summarize the exact boundary wavefunctions for the D-branes whose classical properties we discussed in section \ref{sec:6-1}. We relegate a (partial) derivation of the exact boundary wavefunctions based on the modular bootstrap to section \ref{sec:6-2-3}. For the bosonic two-dimensional black hole, we expand the Cardy boundary states as \begin{align} \langle B | = \int \mbox{d} p \sum_m \Psi(p,m) \langle\langle p,m | + (\text{discrete}) \ .
\end{align} Compared with the minisuperspace approximation \eqref{miniwv}, we have allowed for the winding states (for the A-branes) and a possible discrete state contribution. In the following, we focus on the continuous part. The discrete part can be read off from the analytic continuation of the boundary wavefunction $\Psi(p,m)$ with respect to the parameters of the continuous series, restricted to the values corresponding to the discrete series (i.e. $\Psi(j=m,m)$). The exact boundary wavefunction for the D0-brane (class 1 A-type brane) is given by \begin{align} \Psi_{\mathrm{D}0}(j,\omega) = \nu_b^{2j+1} \frac{\Gamma(-j+\frac{k\omega}{2})\Gamma(-j-\frac{k\omega}{2})}{\Gamma(-2j-1)\Gamma(1-b^2(2j+1))} \ , \label{exco} \end{align} where $b = (k-2)^{-1/2}$, and $\nu_b = \frac{\Gamma(1-b^2)}{\Gamma(1+b^2)}$. It is easy to see that the exact boundary wavefunction \eqref{exco} reduces to the mini-superspace result \eqref{minico} in the large $k$ limit by setting $\omega = 0$, up to a $p$-independent overall normalization factor. The exact boundary state for the D0-brane couples to winding states. It also couples to the discrete series localized near the tip of the cigar. The exact boundary wavefunction for the D1-brane (class 2' B-type brane) is given by \begin{align} \Psi_{\mathrm{D}1}(j,n)^{r,\theta_0} = \nu_b^{2j+1} e^{in\theta_0} \frac{\Gamma(2j+1)\Gamma(1+b^2(2j+1))}{\Gamma(1+j+\frac{n}{2})\Gamma(1+j-\frac{n}{2})} (e^{-r(2j+1)}+(-1)^n e^{r(2j+1)}) \ . \label{clss2'} \end{align} The D1-brane only couples to the momentum states. In particular, it does not couple to any discrete states. In the classical limit $k\to \infty$, the boundary wavefunction reproduces that of the minisuperspace approximation \eqref{hairpin D1 classical}.
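As a quick consistency check of the last statement, set $2j+1 = ip$ and take $b \to 0$ (i.e. $k \to \infty$), so that $\nu_b^{2j+1} \to 1$ and $\Gamma(1+b^2(2j+1)) \to 1$. The exact wavefunction \eqref{clss2'} then reduces to \begin{align} \Psi_{\mathrm{D}1}(j,n)^{r,\theta_0} \longrightarrow e^{in\theta_0} \frac{\Gamma(ip)}{\Gamma\left(\frac{1}{2}+\frac{ip+n}{2}\right)\Gamma\left(\frac{1}{2}+\frac{ip-n}{2}\right)} \left(e^{-ipr} + (-1)^n e^{+ipr}\right) \ , \end{align} which agrees with the minisuperspace result \eqref{hairpin D1 classical} under the identification $r = \rho_0$, up to the overall factor of $2\pi$.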
The exact boundary wavefunction for the partially wrapped D2-brane (class 2 A-type brane) is given by \begin{align} \Psi_{\mathrm{D}2}(j,\omega)^{r_0,\tilde{\theta}_0} = \nu_b^{2j+1} \frac{\Gamma(2j+1) \Gamma(1+b^2(2j+1))}{\Gamma(1+j+\frac{kw}{2})\Gamma(1+j-\frac{kw}{2})} e^{iw \tilde{\theta}_0} \cos(r_0(2j+1)) \ . \label{exbt} \end{align} It does not couple to the discrete states localized at the tip of the cigar, as is expected from the geometry. We can readily see that the classical limit ($k\to \infty$) of \eqref{exbt} reduces to the minisuperspace wavefunction \eqref{cltrum}. Finally, the exact boundary wavefunction of the totally wrapped D2-brane (class 3 A-type brane) is given by \begin{align} \Psi_{\mathrm{D}2'}(j,\omega)^{\sigma,{\theta}_0} &= \nu_b^{2j+1} \Gamma(1+b^2(2j+1))\Gamma(2j+1)e^{i\theta_0\omega} \times \cr &\times \left[\frac{\Gamma(-j+\frac{kw}{2})}{\Gamma(j+1+\frac{kw}{2})}e^{i\sigma(2j+1)} + \frac{\Gamma(-j-\frac{kw}{2})}{\Gamma(j+1-\frac{kw}{2})}e^{-i\sigma(2j+1)} \right] \ ,\label{exbthr} \end{align} with the relative quantization condition $\sigma-\sigma' = \frac{2\pi m}{k-2}$, $m \in \mathbb{Z}$. Other possible exact boundary states for the two-dimensional black hole have been proposed in the literature \cite{Ahn:2004qb,Hosomichi:2004ph,Ribault:2005pq}. However, they possess neither sensible open string spectra\footnote{Most of them contain a tachyon in their spectra. Furthermore, they often have imaginary conformal weights when we study overlaps with class 2 (class 2') branes.} nor corresponding semiclassical limits, so we will not discuss them in detail here. Their properties are in many ways similar to those of the generalized ZZ branes proposed in \cite{Zamolodchikov:2001ah}. As such, they could be important for understanding the nonperturbative contributions to the partition function of the two-dimensional Euclidean black hole.
The boundary wavefunctions for the supersymmetric two-dimensional black hole are essentially the same as those for the bosonic one. In the NS sector, the only difference is to replace $b^2 = \frac{1}{k-2}$ with $\frac{1}{k}$. The boundary wavefunctions for the other sectors are obtained by the spectral flow. \subsubsection{Cardy condition and modular bootstrap}\label{sec:6-2-3} There are several different ways to derive the boundary wavefunctions for the D-branes in the $SL(2;\mathbb{R})/U(1)$ coset model. One of the simplest is to obtain them by descent from the branes in the parent $SL(2;\mathbb{R})$ WZNW model (or $\mathbb{H}^3_+$ model). This method has a small drawback for our purposes because we would first have to derive the boundary states for the $SL(2;\mathbb{R})$ WZNW model (or the $\mathbb{H}_3^+$ model) \cite{Ponsot:2001gt}. In this section, we take another route, which uses the so-called ``modular bootstrap" method to derive all the A-branes in the Euclidean two-dimensional black hole.\footnote{As we will see, we cannot derive the boundary wavefunctions for B-branes in this approach. We need a more technically involved strategy such as the conformal bootstrap to derive them.} The modular bootstrap method is intimately related to the Cardy condition for boundary states \cite{Cardy:1989ir}. The Cardy condition is the physical constraint on the open string spectra between two different D-branes. Let us denote the boundary states for such a pair of branes as $|a\rangle$ and $|b\rangle$.
The Cardy condition says that the open string spectrum between these two D-branes should decompose into open string characters with non-negative integer multiplicities: \begin{align} Z_{a,b} = \mathrm{Tr}_{a,b} q^{L_0} = \sum_i n^i_{a,b}\chi_i(q) \ , \label{cardyc} \end{align} where $\chi_i(q)$ is the open string character, and $n^i_{a,b}$ should be non-negative integers by the unitarity of the theory.\footnote{Implicitly, we are assuming here that the open strings between $|a\rangle$ and $|b\rangle$ are bosonic. Otherwise negative multiplicities are allowed for fermions. For the NS-NS overlap, we expect that the overlap should contain bosonic excitations.} Now we modular transform the open string character $\tau \to -1/\tau$ in order to obtain the closed string description: \begin{align} \langle a|e^{i\tau \pi H_c}| b\rangle = Z_{ab}(q) = \sum_{i,j} n^{i}_{a,b}S_{ij} \chi_j(\tilde{q}) \ , \end{align} with $\tilde{q} \equiv e^{-2\pi i/\tau}$, where $S_{ij}$ is the modular $S$-matrix for the characters $\chi_i$. At this point, it is not immediately obvious whether the multiplicities $n^{i}_{a,b}$ are all non-negative integers if one introduces an arbitrary set of boundary states $|a\rangle$. This integrality condition is the Cardy condition for boundary states. The Cardy condition guarantees a physical interpretation of the open string spectra from the open-closed duality. Actually, there is a canonical solution of the Cardy condition based on the Verlinde formula \cite{Verlinde:1988sn}. We assume that the boundary state is a superposition of the Ishibashi states with normalization \begin{align} \langle\langle i|e^{i\tau \pi H_c}|j\rangle\rangle = \delta_{ij}\chi_i(q) \ . \end{align} We then assume the existence of the simplest boundary state (identity brane) $|\hat{0}\rangle$ whose self-overlap gives the identity representation in the open string sector: $n^i_{\hat{0},\hat{0}} = \delta^i_0$.
Since the fusion of the identity representation with itself gives back the identity, we have the relation \begin{align} |\langle \hat{0}|j\rangle|^2 = S_{0,j} \ . \end{align} As a consequence, the state $|\hat{0}\rangle$ can be written as \begin{align} |\hat{0}\rangle = \sum_j\sqrt{S_{0,j}}|j\rangle\rangle \ , \label{sob} \end{align} up to an overall phase factor (possibly dependent on $j$). Now we {\it define} the Cardy boundary states as \begin{align} |a\rangle = \sum_j \frac{S_{aj}}{\sqrt{S_{0j}}}|j \rangle\rangle \ , \label{sobb} \end{align} which contain the open string spectrum $n^{i}_{\hat{0}a} = \delta_a^i $ in the overlap with the identity brane. These Cardy states satisfy the Cardy condition \eqref{cardyc}: \begin{align} \langle a|j \rangle \langle j| b \rangle = \frac{S_{aj}S_{jb}}{S_{0j}} = \sum_i S_{ij} n^{i}_{ab} \ , \end{align} where $n^{i}_{ab}$ is given by the fusion coefficient $\mathcal{N}^{i}_{ab}$, which is a matrix of non-negative integers. The last equality is due to a remarkable identity known as the Verlinde formula. The Verlinde formula can be shown for unitary compact CFTs by studying the monodromy constraint for the torus amplitudes. Let us move on to the boundary wavefunctions for A-branes in the two-dimensional black hole. Our first assumption is the existence of the identity brane, which will be identified with the D0-brane at the tip of the cigar. We assume that the self-overlap of this identity brane yields the identity representation summed over the spectral flow in the open string spectrum. Summation over the spectral flow is needed in order to guarantee the quantization of the $U(1)_R$ charge in the closed string spectrum.\footnote{Otherwise, we obtain the D-brane in the decompactified theory, which has been studied in \cite{Ahn:2003tt,Ahn:2004qb}, in the context of the $\mathcal{N}=2$ Liouville theory. We also note that our summation over the spectral flow is different from \cite{Eguchi:2003ik}, where the summation is taken over $n\in k \mathbb{Z}$ for integral $k$.
For the A-brane, the latter summation leads to integral $U(1)_R$ charges (i.e. fractional $\omega$ quantum numbers).} For definiteness, we study the NS-sector of the supersymmetric $SL(2;\mathbb{R})/U(1)$ coset. Our assumption mentioned above is \begin{align} Z_{00} = \langle 0|e^{-\pi\tau_c(L_0+\bar{L}_0 -\frac{c}{12})+iy(J_0+\bar{J}_0)}|0\rangle = \sum_n \frac{\mathrm{ch}_{0,n,n}(\tau_o)(1-q_o)}{(1+yq_o^{\frac{1}{2}+n})(1+y^{-1}q_o^{\frac{1}{2}-n})} \ . \label{ajka} \end{align} The modular S-transformation of the extended character of the identity representation (the identity character summed over the spectral flow) is given by \cite{Eguchi:2003ik}: \begin{align} Z_{00} = \int_{-\frac{1}{2}+i\mathbb{R}_+} \mbox{d} j \sum_{m\in \frac{k}{2}\mathbb{Z}} \frac{i\sin(\pi(2j+1))\sin\frac{\pi}{k}(2j+1)}{2\sin\pi(j+m)\sin\pi(j-m)} \mathrm{ch}_{j,m,0}(\tau_c) + \text{(discrete)} \ . \label{idmod} \end{align} The discrete terms are a little bit trickier to obtain, and we refer the reader to the original papers \cite{Eguchi:2003ik} for their complete form. The boundary wavefunction is essentially obtained by taking the square root of the modular S-matrix in analogy with \eqref{sob}. Expanding the identity boundary state as \begin{align} \langle 0|= \int_{-\frac{1}{2}+i\mathbb{R}_+} \mbox{d} j \sum_m \Psi(j,m)_0 \langle\langle j,m|\ , \end{align} we obtain \begin{align} \Psi(j,m)_0 = \nu_b^{2j+1} \frac{\Gamma(-j+\frac{k\omega}{2})\Gamma(-j-\frac{k\omega}{2})}{\Gamma(-2j-1)\Gamma(1-b^2(2j+1))} \ . \end{align} We should note that this condition does not determine the $(j,m)$-dependent phase factor of the boundary wavefunction. The ambiguity of the phase, however, is completely fixed by the reflection relation \begin{align} \Psi(-p,m) = R(p,m) \Psi(p,m) \ , \end{align} together with the Hermiticity condition $\Psi(p,m)^\dagger = \Psi(-p,-m)$.
The next assumption is the overlap between the identity brane $|0\rangle$ and a general brane $|a \rangle$ labeled by a character of the $SL(2;\mathbb{R})/U(1)$ coset model, in analogy with \eqref{sobb}: \begin{align} \langle 0|e^{-\pi\tau_c(L_0+\bar{L}_0 -\frac{c}{12})+iy(J_0+\bar{J}_0)}|a\rangle = \chi_a(i\tau_o,y) \ . \label{anz} \end{align} We assume that the open string character appearing on the right-hand side is given by the continuous series summed over the spectral flow (class 2 brane) or the discrete series summed over the spectral flow (class 3 brane). The S-modular transformation of the extended character for the continuous series (parametrized by $J$ and $M$) is particularly easy: \begin{align} \sum_{n\in \mathbb{Z}} \mathrm{ch}_{J,M+n,n}(\tau_o,y) = -i \sum_{m\in \frac{k}{2}\mathbb{Z}} \int_{-\frac{1}{2}+i\mathbb{R}_+} \mbox{d} j \mathrm{ch}_{j,m,0}(\tau_c,y)e^{-\frac{4\pi iMm}{k}}\cos[\frac{\pi}{k}(2j+1)(2J+1)] \ . \end{align} From the modular bootstrap ansatz \eqref{anz}, we obtain the boundary wavefunction corresponding to the continuous series as \begin{align} \Psi(p,m)_{J,M} &= \Psi(-p,-m)_{0}^{-1} e^{-\frac{4\pi iMm}{k}} \cos[\frac{\pi}{k}(2j+1)(2J+1)] \cr &= \nu_b^{2j+1}\frac{\Gamma(2j+1)\Gamma(1+b^2(2j+1))}{\Gamma(1+j+\frac{kw}{2})\Gamma(1+j-\frac{kw}{2})} e^{-\frac{4\pi iMm}{k}}\cos[\frac{\pi}{k}(2j+1)(2J+1)] \ . \end{align} With the parameter identification $ r_0 = \frac{\pi}{k}(2J+1)$, $\tilde{\theta}_0 = -2\pi M$, we have obtained the boundary wavefunction for the partially wrapped D2-brane.
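The identification can be checked term by term. For the A-brane, the Ishibashi label satisfies $m = \frac{k\omega}{2}$, so that \begin{align} e^{-\frac{4\pi i M m}{k}} = e^{-2\pi i M \omega} = e^{i\omega\tilde{\theta}_0} \ , \qquad \cos\left[\frac{\pi}{k}(2j+1)(2J+1)\right] = \cos\left(r_0(2j+1)\right) \ , \end{align} with $\tilde{\theta}_0 = -2\pi M$ and $r_0 = \frac{\pi}{k}(2J+1)$, reproducing the $\omega$- and $j$-dependence of the exact class 2 wavefunction \eqref{exbt}.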
The S-modular transformation of the extended character for the discrete series is more involved: \begin{align} &\sum_{n\in \mathbb{Z}} \frac{y^{-\frac{2}{k}(M+n)}\mathrm{ch}_{J,M+n,n}(\tau_o,y)}{1+y^{-1}q_o^{\frac{1}{2}+J-M-n}} \cr &= -i \sum_{m\in \frac{k}{2}\mathbb{Z}} \int_{-\frac{1}{2}+i\mathbb{R}_+} \mbox{d} j \mathrm{ch}_{j,m,0}(\tau_c,y) e^{2\pi i M \omega} \left[\frac{e^{i(2J+1) (2j+1)}}{\sin\pi(j-\frac{k\omega}{2})} - \frac{e^{-i(2J+1)(2j+1)}}{\sin\pi(j+\frac{k\omega}{2})}\right] \cr &+ \text{(discrete)} \ . \end{align} From the modular bootstrap ansatz \eqref{anz}, we obtain the boundary wavefunction corresponding to the discrete series as \begin{align} &\Psi(p,m)_{J,M} \cr &= \nu_b^{2j+1}\Gamma(1+b^2(2j+1))\Gamma(2j+1)e^{2\pi i M \omega} \times \cr &\times \left[\frac{\Gamma(-j+\frac{k\omega}{2})}{\Gamma(j+1+\frac{k\omega}{2})}e^{i\pi(m-j)+i(2J+1)(2j+1)} - \frac{\Gamma(-j-\frac{k\omega}{2})}{\Gamma(j+1-\frac{k\omega}{2})}e^{i\pi(m+j)-i(2J+1)(2j+1)} \right] \ , \end{align} where the parameter identification with the class 3 brane is $\theta_0 = 2\pi M + \frac{k\pi}{2} $ and $\sigma = -\frac{\pi}{2} + \frac{\pi}{k}(2J+1)$. So far, we have obtained as many branes as there are extended characters of the $\mathcal{N}=2$ supersymmetry (or the $SL(2;\mathbb{R})/U(1)$ coset). We, however, have to check whether the obtained D-branes satisfy the Cardy condition among themselves. Namely, we have to compute the cylinder amplitudes, decompose them into the open string characters, and verify the positive definiteness of the density of states for the continuous series and the positive integer multiplicities for the discrete series. For compact unitary CFTs this was automatically guaranteed by the Verlinde formula; in our noncompact case, it is not a trivial problem.
Indeed, although almost all overlaps are consistent with the Cardy condition, in the self-overlaps between class 3 branes for irrational values of $k$ we encounter negative multiplicities of the discrete series in their spectra \cite{Ribault:2003ss}.\footnote{For integer values of $k$, this subtlety is avoided \cite{Israel:2005fn}. For general fractional values of $k$, the situation is more involved and the results depend on the combination of the other sectors embedded in the full string theory and the appropriate GSO condition we impose. A case-by-case study can be found in \cite{Eguchi:2003ik}.} One might wonder what goes wrong with the modular bootstrap for the B-branes. The gist is that there is no identity brane for the B-boundary conditions. One can formally write down the modular bootstrap equations like \eqref{idmod}, but there does not exist any analytic solution compatible with the reflection amplitudes for B boundary conditions. Due to this lack of an identity B-brane, the whole construction of the modular bootstrap breaks down. The coset construction from the descent of branes in the $\mathbb{H}_3^+$ model was given in \cite{Ribault:2003ss}. The conformal bootstrap for the dual $\mathcal{N}=2$ Liouville theory can be found in \cite{Ahn:2003tt,Hosomichi:2004ph}. \newpage \sectiono{Rolling D-brane in Two-dimensional Black Hole}\label{sec:7} In this section, we study the D-branes in the Lorentzian two-dimensional black hole. The organization of the section is as follows. In section \ref{sec:7-1}, we study the classical D-branes in the Lorentzian two-dimensional black hole. In section \ref{sec:7-2}, we construct the boundary states for the rolling D-brane from the Wick rotation of the class 2 brane in the Euclidean two-dimensional black hole system.\footnote{This part of the thesis is based on \cite{Nakayama:2005pk}.} In section \ref{sec:7-3}, we study some properties of our boundary wavefunction focusing on $1/k$ corrections.
\subsection{Classical D-branes}\label{sec:7-1} \subsubsection{DBI analysis}\label{sec:7-1-1} The classical D-branes in the Lorentzian two-dimensional black hole are classified by the solutions of the equations of motion coming from the DBI action \begin{align} S^L = \mu_{p+1} \int \mbox{d}^{p+1}\xi e^{-\Phi} \sqrt{-\det(G_{ab}+B_{ab}+F_{ab})} \ . \end{align} The classical background is given by \begin{align} \mbox{d} s^2 =k\alpha'( - \tanh^2\rho \, \mbox{d} t^2 + \mbox{d} \rho^2) , \qquad e^{2\Phi} = \frac{k}{\mu \cosh^2 \rho} \ , \end{align} or, when we are interested in the global structure of the solution, we use the Kruskal coordinates \begin{align} \mbox{d} s^2 = -2k\frac{\mbox{d} u\mbox{d} v}{1-uv} \ , \ \ e^{2\Phi} = \frac{k}{\mu(1-uv)} \ , \label{krscc} \end{align} obtained by the coordinate transformation $u = \sinh \rho \, e^{t}$, $v=-\sinh\rho \, e^{-t}$. We begin with the D(-1) instanton. The physical meaning of such a D-brane is somewhat unclear in the Lorentzian signature, but the ``effective action" \begin{align} S_{-1} \propto e^{-\Phi} \propto \sqrt{1-uv} \end{align} is extremized at $u=v=0$, or $\rho = 0$. Next we study the D0-brane. In the local coordinates outside the horizon, we can write down the DBI action as \begin{align} S_0 = \mu_0 \int \mbox{d} t \cosh\rho(t)\sqrt{-\dot{\rho}(t)^2 + \tanh^2\rho(t)} \ , \label{dbidzero} \end{align} where we have fixed the reparametrization invariance by taking the temporal gauge $\xi_0 = t$. From the energy conservation, we obtain \begin{align} \text{const} = \cosh\rho\frac{\tanh^2\rho}{\sqrt{-\dot{\rho}^2 + \tanh^2\rho}} \ , \end{align} which can be integrated to \begin{align} \sinh\rho \, \cosh(t-t_0) = \text{const} \ . \label{mot} \end{align} The D0-brane motion \eqref{mot} also follows from the Wick rotation $\theta \to it$ applied to the hairpin brane \eqref{trsce} in the Euclidean two-dimensional black hole.
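As a quick consistency check (not spelled out in the original derivation), one can verify \eqref{mot} directly: differentiating $\sinh\rho\,\cosh(t-t_0)=\text{const}$ along the trajectory gives \begin{align} \dot{\rho} = -\tanh\rho \tanh(t-t_0) \ , \qquad -\dot{\rho}^2 + \tanh^2\rho = \frac{\tanh^2\rho}{\cosh^2(t-t_0)} \ , \end{align} so that the conserved energy evaluates to \begin{align} \cosh\rho\frac{\tanh^2\rho}{\sqrt{-\dot{\rho}^2+\tanh^2\rho}} = \cosh\rho\tanh\rho \, \cosh(t-t_0) = \sinh\rho\,\cosh(t-t_0) \ , \end{align} which is indeed the constant appearing in \eqref{mot}.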
As we mentioned in section \ref{sec:5-3}, the action \eqref{dbidzero} can be rewritten in the same form as the rolling D-brane in the linear dilaton background by introducing the `tachyon' variable $Y \equiv \log \sinh \rho$: \begin{align} L_{\rm D0} = - V(Y) \sqrt{1 - \dot{Y}^2} \qquad \mbox{where} \qquad V(Y) = M_0 \, e^Y ~, \label{nonextremal D0} \end{align} which leads us to the ``tachyon - radion correspondence" discussed in section \ref{sec:5}. To study the global structure, we use the Kruskal coordinates \eqref{krscc}. In this coordinate system, the DBI action takes the flat form \begin{align} S = \mu_0 \int \mbox{d}\xi \sqrt{\frac{\mbox{d} u}{\mbox{d} \xi}\frac{\mbox{d} v}{\mbox{d}\xi}} \ . \end{align} The equation of motion is solved by a straight line in the $(u,v)$ plane. It is interesting to note that the D0-brane does not feel the existence of the singularity at $uv=1$. The classical trajectory is analytically continued inside the singularity in a trivial way. This is because the curvature singularity is cancelled against the dilaton singularity, which appears in the DBI action in the opposite way. The coupling to the dilaton is a crucial difference between the D-brane and an ordinary particle (such as a point-like F-string or the folded string solution discussed in section \ref{sec:3-3-2}) in the two-dimensional black hole background. Let us finally consider the D1-brane. The DBI action in the Kruskal coordinates is \begin{align} S_1 = \mu_1 \int \mbox{d} u\mbox{d} v \sqrt{1-uv}\sqrt{\frac{1}{(1-uv)^2} - F_{uv}^2} \ \end{align} in the gauge $\xi_0 = u$, $\xi_1 = v$. The Gauss law constraint \begin{align} f = \frac{\sqrt{1-uv}F_{uv}}{\sqrt{\frac{1}{(1-uv)^2}-F^2_{uv}}} \end{align} is solved by \begin{align} F_{uv}^2 = \frac{f^2}{(1-uv+f^2)(1-uv)^2} \ . \end{align} When $f^2>0$, the world-sheet of the D1-string covers the whole physical region of the two-dimensional black hole, and possibly has a boundary inside the singularity at $uv=1+f^2$.
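The location of this boundary can be read off directly (a short check of ours, not in the original text): squaring the Gauss law constraint gives \begin{align} F_{uv}^2\left(1-uv+f^2\right) = \frac{f^2}{(1-uv)^2} \ , \end{align} which reproduces the solution for $F_{uv}^2$ quoted above and shows that the field strength diverges precisely on the curve $uv = 1+f^2$, where the D1-string terminates.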
When $f^2 = -\kappa^2 <0$, the D1-string has a boundary at $uv = 1-\kappa^2$, and describes a long folded string. However, the DBI action for the long folded string becomes imaginary, so the solution is overcritical and unphysical.\footnote{This is a general feature of Lorentzian solutions for D-branes with a boundary coming from the blow-up of the field strength. In the Lorentzian signature, the terms inside the square root of the DBI action are bounded, or in other words the field strength has a critical value. Thus any D-brane that has a boundary due to the divergence of the field strength is overcritical and hence unphysical, unlike the case in the Euclidean signature.} \subsubsection{group theoretical viewpoint}\label{sec:7-1-2} As we have done in section \ref{sec:6-1-2}, we can also study the classical D-branes in the Lorentzian two-dimensional black hole from the coset construction. We parametrize the parent $SL(2;\mathbb{R})$ element $g$ as \begin{align} g = \begin{pmatrix} a & u \cr -v & b \end{pmatrix} \ , \ \ uv + ab = 1 \ . \end{align} Under the axial gauge transformation $\delta g = \epsilon(\sigma_3 g + g \sigma_3) $, $(u,v)$ is invariant and serves as a gauge invariant coordinate describing the two-dimensional black hole (i.e. we can identify it with $(u,v)$ in the Kruskal coordinates \eqref{krscc}). The maximally symmetric D-branes in the parent $SL(2;\mathbb{R})$ WZNW model are classified by the (twined) conjugacy classes of the group. The conjugacy class is given by \begin{align} \mathrm{Tr}(g) = a + b \end{align} and the twined conjugacy class is given by \begin{align} \mathrm{Tr}(\sigma_1 g) = u-v \ , \label{twcl} \end{align} up to a conjugation (i.e. Lorentz boost between $\sigma_1$ and $\sigma_2$). To derive the D-branes in the coset model, we have to project the twined conjugacy class to the coset variables for A-branes.
For B-branes, we first take a superposition of the gauge orbit of the conjugacy class so that we obtain a gauge invariant object as a D-brane. Let us begin with the A-branes. The twined conjugacy class \eqref{twcl} is invariant under the axial gauge transformation, so the D-brane (D0-brane) is classified by the equation \begin{align} 2\kappa = \mathrm{Tr}(\sigma_1 g ) = u- v \ , \end{align} which gives a straight line in the Kruskal coordinates of the two-dimensional black hole, as we observed in section \ref{sec:7-1-1} from the DBI analysis. More general branes are obtained by the Lorentz boost: \begin{align} ue ^{t_0} - ve^{-t_0} = 2\kappa \ . \end{align} The existence of such boosted D-branes is consistent with the existence of the Nambu-Goldstone modes associated with the symmetry breaking due to the A-brane, as we have reviewed in section \ref{sec:6-1-2}. Let us move on to the B-branes. In this case, the conjugacy class is not invariant under the axial gauge transformation. In order to obtain an invariant object that can be projected down to the coset, we need to sum over the gauge orbit. Fixing the trace as $a+b = 2\kappa$, the determinant constraint reads \begin{align} uv = 1-\kappa^2 + (a-\kappa)^2 \ge 1-\kappa^2 \ . \end{align} It is not difficult to see that the gauge orbit of the conjugacy class precisely agrees with the domain bounded by the last inequality. The string configuration reproduces the folded D1-string obtained from the DBI analysis in section \ref{sec:7-1-1}. We note, however, that the solution is overcritical and unphysical, as we have seen in section \ref{sec:7-1-1}.\footnote{It is not uncommon that the group theoretical classification of the D-branes in the Lorentzian coset gives unphysical D-branes (see e.g. \cite{Hikida:2005vd}). Our identification of the parameter is different from the one given in \cite{Yogendran:2004dm}, which solves a small puzzle raised there.
The extra $i$ comes from a (hypothetical) time-like T-duality \cite{Hull:1998vg} which we need to perform to obtain the parameter identification according to the discussion given in \cite{Walton:2002db}. } We also see that the D-string that covers the whole physical region of the two-dimensional black hole cannot be obtained from a simple descent from the D-branes in the $SL(2;\mathbb{R})$ WZNW model without an analytic continuation. \subsection{Boundary states from Wick rotation}\label{sec:7-2} \subsubsection{analytic continuation of boundary states}\label{sec:7-2-1} In this section, we shall construct the exact boundary state describing the D0-brane moving in the Lorentzian two-dimensional black hole background. Recall that the Lorentzian two-dimensional black hole (`Lorentzian cigar') background is obtainable by the Wick rotation $\theta = it$ of the Euclidean one \eqref{Euclidean cigar} \begin{equation} \mbox{d} s^2 = 2k(\mbox{d} \rho^2 - \tanh^2\! \rho \, \mbox{d} t^2) \qquad \mbox{and} \qquad e^{\Phi} = \frac{e^{\Phi_0}}{\cosh\rho} ~. \label{Lorentzian cigar} \end{equation} Wick-rotating the geodesic of the Euclidean D1-brane, we found the geodesic of the Lorentzian D0-brane in \eqref{mot} as \begin{align} \cosh(t-t_0) \sinh \rho = \sinh \rho_0~, \label{trajectory D0} \end{align} where $t_0$, $\rho_0$ are free parameters. Notice that the D0-brane reaches the horizon $\rho = 0$ at $t \rightarrow \pm \infty$ irrespective of the values of $\rho_0$ and $t_0$. Thus, formally, the Lorentzian D0-brane boundary state is obtainable by Wick rotation of the Euclidean D1-brane boundary state \eqref{clss2'} if we are interested in the physics outside the event horizon.\footnote{Some classical analysis of D-brane dynamics was attempted in \cite{Yogendran:2004dm} within the Dirac-Born-Infeld approach.} Reconstructing boundary states of the Lorentzian D-brane from those of the Euclidean D-brane is generically not unique. 
Rather, the following potential subtleties need to be faced: \begin{itemize} \item The Euclidean momentum $n$ along the asymptotic circle of the cigar is quantized, while the corresponding quantum number in the Lorentzian theory ({\em i.e.} the energy) takes continuous values. \item The Wick rotations of primary states are not necessarily unique. Often, appropriate boundary conditions should be specified. \end{itemize} As for the first point, which has to do with the Matsubara formulation, we can formally avoid the difficulty of quantized momentum by the following heuristic consideration. Suppose the boundary wave function $\widetilde{f}(n,\alpha)$ ($n\in \mathbb{Z}$ is the quantized Euclidean energy, and $\alpha$ denotes the remaining quantum numbers not touched here) is given by the Fourier transform of a periodic function $f(x+2\pi, \alpha)=f(x, \alpha)$. We then obtain \begin{align} \hspace{-1cm} \bra{B}= \sum_{\alpha}\sum_{n\in\mathbb{Z}}\,\widetilde{f}(n,\alpha)\dbra{n,\alpha} &= \sum_{\alpha}\sum_{n\in\mathbb{Z}}\, \frac{1}{2\pi}\int_{-\pi}^{\pi} \mbox{d} x\, f(x,\alpha) e^{in x} \, \dbra{n,\alpha} \nonumber\\ &= \sum_{\alpha} \int_{-\infty}^{\infty} \frac{\mbox{d} q}{2\pi}\, \int_{-\infty}^{\infty} \mbox{d} x\, f(x,\alpha) e^{iqx} \, \dbra{q,\alpha}~, \label{formal extension} \end{align} where we used the identity $ \sum_{n\in \mathbb{Z}}\, \delta(q-n) = \sum_{m\in \mathbb{Z}}\, e^{2\pi i m q} $ in obtaining the last expression. If $f(x, \alpha)$ is analytic along the entire real $x$-axis, the Wick rotation can be performed straightforwardly. Often, however, $f(x, \alpha)$ is non-analytic on the real $x$-axis, and the integral in the last expression is ill-defined. This turns out to be the case for the boundary wave function of the Euclidean D1-brane \eqref{clss2'}: in the coordinate space, the wave function has branch cuts and singularities along the real $x$-axis.
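The delta-comb identity used in the last step of \eqref{formal extension} is just Poisson resummation. As a quick numerical sanity check (our illustration, not part of the original argument), integrating the comb against a Gaussian $f(q)=e^{-q^2}$ reduces it to the classical theta-function identity $\sum_n e^{-n^2} = \sqrt{\pi}\sum_m e^{-\pi^2 m^2}$, which can be verified directly:

```python
import math

# Poisson resummation: sum_n delta(q - n) = sum_m exp(2*pi*i*m*q).
# Integrated against f(q) = exp(-q^2), whose Fourier transform is
# sqrt(pi)*exp(-w^2/4), it reduces to the identity
#   sum_n exp(-n^2) = sqrt(pi) * sum_m exp(-pi^2 * m^2).
lhs = sum(math.exp(-n * n) for n in range(-50, 51))
rhs = math.sqrt(math.pi) * sum(math.exp(-(math.pi * m) ** 2) for m in range(-50, 51))
print(abs(lhs - rhs))  # the two sums agree to machine precision
```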
In such cases, the best we can do is to adopt a slightly deformed integration contour ${\cal C}$ in $x$-space\footnote {To be more precise, we should allow the use of some decomposition $$ f(x,\alpha) = f_1(x,\alpha)+ f_2(x,\alpha)+\cdots~, $$ and take different contours for each piece $f_i(x,\alpha)$. } to render the Fourier integral well-defined: \begin{align} \bra{B'} \Big\vert_{\rm Euclidean} := \sum_{\alpha}\int_{-\infty}^{\infty} \frac{\mbox{d} q}{2\pi}\, \int_{{\cal C}} \mbox{d} x\, f(x,\alpha) \, e^{iqx} \, \dbra{q,\alpha} ~. \label{formal extension 2} \end{align} Likewise, the disk one-point function of the vertex operator $\Phi^{\rm Euclidean}_{q,\alpha}$ (associated with the Ishibashi state $\dbra{q,\alpha}$) is evaluated as the deformed contour integral: \begin{align} \Big< \Phi^{\rm Euclidean}_{q,\alpha} \Big>_{\msc{disk}} = {}_{\rm E}\!\langle B' \vert q, \alpha \rangle\rangle = \int_{{\cal C}} \mbox{d} x\, f(x,\alpha) e^{iqx}~. \label{formal disk amp 1} \end{align} Assuming sufficient analyticity, one then defines the Wick rotation of the states \eqref{formal extension 2} by the contour deformation of ${\cal C}$ accompanied by the continuation $q\,\rightarrow\, i\omega$, $x \rightarrow \, i t$: \begin{align} \bra{B'} \Big\vert_{\rm Lorentzian} := \sum_{\alpha}\int_{-\infty}^{\infty} \frac{i \mbox{d} \omega}{2\pi}\, \int_{-\infty}^{\infty} i\mbox{d} t\, f(it, \alpha) e^{-i\omega t} \, \dbra{i\omega,\alpha} ~. \label{Wick rotation 0} \end{align} This is essentially the procedure taken in \cite{Nakayama:2004yx}. Of course, we potentially have an ambiguity in the choice of the contour ${\cal C}$, and the correct choice should be determined by the physics under study.
In the present case $\bra{B}$ corresponds to \eqref{clss2'} and $\bra{B'}$ is given by \begin{align} {}_{\rm D1}\bra{B';\rho_0,\theta_0} = \int_0^{\infty} \frac{\mbox{d} p}{2\pi}\, \int_{-\infty}^{\infty} \frac{\mbox{d} q}{2\pi}\, \Psi'_{\rm D1} (\rho_0,\theta_0;p,q) \, \dbra{p,q} ~, \end{align} where \begin{align} &\Psi'_{\rm D1}(\rho_0,\theta_0;p,q) \cr &= \frac{\sinh(\pi p)} {\left|\cosh\Big(\pi\frac{p+iq}{2}\Big)\right|^2} \, \frac{\pi\Gamma(ip)\Gamma\Big(1+\frac{ip}{k}\Big)} {\Gamma\Big(\frac{1}{2}+\frac{ip+q}{2}\Big) \Gamma\Big(\frac{1}{2}+\frac{ip-q}{2}\Big)} \, e^{iq\theta_0} \left[e^{-ip\rho_0} +\frac {\cosh\left(\pi\frac{p-i|q|}{2}\right)} {\cosh\left(\pi\frac{p+i|q|}{2}\right)} e^{ip\rho_0}\right] \cr &\equiv B\left(\frac{1}{2}-\frac{ip-q}{2}, \frac{1}{2}-\frac{ip+q}{2}\right) \Gamma\left(1+\frac{ip}{k}\right) \, e^{iq\theta_0}\left[e^{-ip\rho_0} +\frac{\cosh\left(\pi\frac{p-i|q|}{2}\right)} {\cosh\left(\pi\frac{p+i|q|}{2}\right)} e^{ip\rho_0}\right]. \label{D1'} \end{align} \begin{figure}[htbp] \begin{center} \includegraphics[width=13cm,height=10cm] {cont0.eps} \end{center} \caption{The red (green broken) line is the contour ${\cal C}^+$ (${\cal C}^-$) used for the $p > 0$ ($p<0$) sector when continuing to the Lorentzian time. Notice that an infinite array of branch cuts, $\frac{\pi}{2}+2n\pi < x < \frac{3\pi}{2} +2n\pi$ $(n\in \mathbb{Z})$, repeats along the real $x$-axis (the Euclidean time).} \label{c-array} \end{figure} Here $B(p,q) \equiv \Gamma(p)\Gamma(q)/\Gamma(p+q)$ denotes Euler's beta function. The integration contour ${\cal C}$ we choose is shown in Figure \ref{c-array} \cite{Nakayama:2004yx}. As in \eqref{evaluation overlap phi}, we separately evaluated the integrals of $\phi^{p}_{L,q}$ and $\phi^p_{R,q}$ based on the decomposition \eqref{decomp ef}. For the convergence of the integrals, we choose the contour ${\cal C}^+$ for $\phi^{p}_{L,q}$ ($p>0$ sector) and ${\cal C}^-$ for $\phi^p_{R,q}$ ($p<0$ sector).
This choice of integration contours yields an extra damping factor $\sinh(\pi p) / {|\cosh\left(\pi\frac{p+iq}{2}\right)|^2}$, which improves the ultraviolet behavior of the wavefunction and makes it possible to take the Wick rotation sensibly. The non-trivial phase factor $ {\cosh\left(\pi\frac{p-i|q|}{2}\right)}/ {\cosh\left(\pi\frac{p+i|q|}{2}\right)} $ in the second term originates from the reflection amplitude, and it reduces to $(-1)^n$ when $q=n\in \mathbb{Z}$. The second subtlety implies that $\dbra{i\omega, \alpha}$ is not uniquely defined in \eqref{Wick rotation 0}. This issue arises in a background with a horizon, or equivalently, without a globally definable timelike Killing vector. As such, this subtlety did not arise for the extremal NS5-brane geometry (described asymptotically by the free linear dilaton theory) considered in \cite{Nakayama:2004yx}. In section \ref{sec:7-2-4}, within the mini-superspace analysis for the Lorentzian two-dimensional black hole, we shall clarify this subtlety. An alternative, sensible prescription of the analytic continuation is to define the disk one-point correlator {\em directly\/} via the Lorentzian Fourier transform: \begin{align} \Big< \Phi^{\msc{Lorentzian}}_{\omega,\alpha} \Big>_{\msc{disk}} = \int_{-\infty}^{\infty} \mbox{d} t\, f(it,\alpha) e^{-i\omega t}~. \label{formal disk amp 2} \end{align} This is {\em not\/} always equivalent to the former method elaborated above. In fact, the latter method does not necessarily guarantee that the boundary state so constructed is expandable in terms of the Lorentzian Ishibashi states that are analytically continued from the Euclidean ones.\footnote{Recently, we have succeeded in the direct evaluation of the overlap integral. See appendix \ref{direct} for details.} In section \ref{sec:3-3-4}, we have reviewed the primary states for the Lorentzian two-dimensional black hole.
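The reduction of the reflection phase factor in \eqref{D1'} to $(-1)^n$ at integer $q=n$ follows from $\cosh(x+iy)=\cosh x\cos y + i\sinh x\sin y$: the numerator and denominator are complex conjugates of each other, real for even $n$ and purely imaginary for odd $n$. A small numerical check (ours, for illustration):

```python
import cmath

def reflection_phase(p, q):
    # cosh(pi*(p - i*q)/2) / cosh(pi*(p + i*q)/2): a pure phase for real p, q
    return cmath.cosh(cmath.pi * (p - 1j * q) / 2) / cmath.cosh(cmath.pi * (p + 1j * q) / 2)

# At integer q = n the phase collapses to (-1)^n, for any real momentum p.
for n in range(-4, 5):
    for p in (0.3, 1.0, 2.7):
        assert abs(reflection_phase(p, n) - (-1) ** n) < 1e-12
print("phase factor reduces to (-1)^n at integer q")
```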
Having obtained the Lorentzian primary states, we shall now construct several interesting classes of boundary states for a D0-brane propagating in the black hole background. We have seen that the D0-brane propagates along the trajectory \eqref{trajectory D0}. The two-dimensional black hole is eternal, so, in addition to the past and the future asymptotic infinities, the causal propagation region has the past horizon ${\cal H}^-$ surrounding the white hole singularity and the future horizon ${\cal H}^+$ surrounding the black hole singularity. As such, by imposing a variety of possible boundary conditions, we can construct interesting classes of boundary states. \subsubsection{boundary state of D0-brane absorbed to future horizon}\label{sec:7-2-2} Consider first the boundary state obeying the boundary condition $\psi(\rho,t)\,\rightarrow\, 0$ at the past horizon ${\cal H}^-$, viz. the primary states $\ket{U^p_{\omega}}$. This boundary condition is relevant for the scattering of a D0-brane off the black hole, since the condition represents absorption only, and no emission, of the D0-brane by the black hole. The D0-brane boundary state obeying such an absorbing boundary condition is then expanded solely by the Ishibashi states ${}^{\widehat{U}}\dbra{p,\omega}$, $\dket{p,\omega}^U$ that are associated with the primary states $\widehat{\bra{U^p_{\omega}}}$, $\ket{U^p_{\omega}}$: \begin{align} & {}_{\msc{absorb}}\!\bra{B;\rho_0,t_0} = \int_0^{\infty}\frac{\mbox{d} p}{2\pi} \int_{-\infty}^{\infty}\frac{\mbox{d} \omega}{2\pi}\, \Psi_{\msc{absorb}}(\rho_0,t_0;p,\omega) \, {}^{\widehat{U}}\!\dbra{p,\omega}~, \nonumber\\ & \ket{B;\rho_0,t_0}_{\msc{absorb}} = \int_0^{\infty}\frac{\mbox{d} p}{2\pi} \int_{-\infty}^{\infty}\frac{\mbox{d} \omega}{2\pi}\, \Psi^*_{\msc{absorb}}(\rho_0,t_0;p,\omega) \, \dket{p,\omega}^U~.
\label{falling D0 0} \end{align} The boundary wavefunction $\Psi_{\msc{absorb}}(\rho_0,t_0;p,\omega)$ is then interpreted as the disk one-point correlator: \begin{align} \Psi_{\msc{absorb}}(\rho_0,t_0;p,\omega) &= \langle U^p_{\omega} \rangle_{\msc{disk}} \equiv {}_{\msc{absorb}}\!\bra{B;\rho_0,t_0} U^p_{\omega} \rangle~. \label{falling disk} \end{align} The boundary wavefunction \eqref{falling disk} is then obtained by taking the Wick rotation $q\,\rightarrow\, i\omega$ ($q\,\rightarrow\, -i\omega$) for $q<0$ ($q>0$) in \eqref{D1'} (recall \eqref{ac UVLR}):\footnote {In reality, there is a further overall factor $i$, but, for notational simplicity, we will absorb it into the definition of the Ishibashi states.} \begin{align} \hspace{-1cm} \Psi_{\msc{absorb}}(\rho_0,t_0;p,\omega) = B(\nu_+, \nu_-) \Gamma\Big(1+\frac{ip}{k}\Big) \, e^{-i\omega t_0}\left[ e^{-ip \rho_0} - \frac{\cosh\left(\pi \frac{p-\omega}{2}\right)} {\cosh\left(\pi \frac{p+\omega}{2}\right)} e^{ip\rho_0 } \right]~. \label{falling D0} \end{align} The relative minus sign in the second term of $\Psi_{\msc{absorb}}(\rho_0,t_0;p,\omega)$ originates from the fact that the contour rotation defining the Wick rotation has opposite directions for ${\cal C}^+$ (suitable for $p>0$) and ${\cal C}^-$ (suitable for $p<0$). See figure \ref{c-array}. This boundary wavefunction \eqref{falling D0} satisfies the exact reflection relation \begin{align} \Psi_{\msc{absorb}}(\rho_0,t_0;-p,\omega)= {\cal R}(-p,\omega) \, \Psi_{\msc{absorb}}(\rho_0,t_0;p,\omega)~. \label{ref falling D0} \end{align} With such a boundary condition, the boundary wavefunction \eqref{falling D0} has no overlap with the D0-brane's trajectory \eqref{trajectory D0} in the far past region $t\ll t_0$.
In fact, the trajectory \eqref{trajectory D0} starts from the past horizon ${\cal H}^-$ at $t=-\infty$, reaches the time-symmetric point $\rho = \rho_0$ at $t = t_0$, and then falls back into the future horizon ${\cal H}^+$ at $t=+\infty$, while the wavefunction $U^p_{\omega}$ does not have any component outgoing from ${\cal H}^-$. We thus interpret the boundary state \eqref{falling D0} as describing the future half of the classical trajectory \eqref{trajectory D0}. We shall hence call it the `absorbed D-brane'. By utilizing the radion-tachyon correspondence, the rolling radion (as described by the boundary state \eqref{falling D0}) can also be interpreted as the rolling tachyon. In the latter interpretation, the D0-brane absorbed to the future horizon is the counterpart of the future-half S-brane \cite{Gutperle:2002ai,Strominger:2002pc,Larsen:2002wc}, in which the tachyon rolls down the potential hill at asymptotic future $t \rightarrow + \infty$ and emits radiation. \subsubsection{boundary state of D0-brane emitted from past horizon}\label{sec:7-2-3} Consider next the boundary condition $\psi(\rho,t)\,\rightarrow\, 0$ at ${\cal H}^+$, viz. use the basis $\dket{p,\omega}^V$, ${}^{\widehat{V}}\!\dbra{p,\omega}$ instead of $\dket{p,\omega}^U$, ${}^{\widehat{U}}\!\dbra{p,\omega}$. We can first rewrite \eqref{D1'} in a form which only includes the $p<0$ Ishibashi states by means of the reflection relation. Then, we can analytically continue the states $\ket{\phi^{-p}_q}$ ($p>0$) into $\ket{V^p_\omega}$. The resultant boundary state is obtained by simply replacing $p\,\rightarrow\,-p$, $\omega\, \rightarrow\, -\omega$ in \eqref{falling D0}: \begin{align} & {}_{\msc{emitted}}\!\bra{B;\rho_0,t_0} = \int_0^{\infty}\frac{\mbox{d} p}{2\pi} \int_{-\infty}^{\infty}\frac{\mbox{d} \omega}{2\pi}\, \Psi_{\msc{emitted}}(\rho_0,t_0;p,\omega) \, {}^{\widehat{V}}\!\dbra{p,\omega}~.
\nonumber\\ & \ket{B;\rho_0,t_0}_{\msc{emitted}} = \int_0^{\infty}\frac{\mbox{d} p}{2\pi} \int_{-\infty}^{\infty}\frac{\mbox{d} \omega}{2\pi}\, \Psi^*_{\msc{emitted}}(\rho_0,t_0;p,\omega) \, \dket{p,\omega}^V ~. \label{emitted D0} \end{align} where \begin{align} \Psi_{\msc{emitted}}(\rho_0,t_0;p,\omega) = B(\nu^*_+, \nu^*_-) \Gamma\left(1-\frac{ip}{k}\right) \, e^{-i\omega t_0}\left[ e^{ip\rho_0} - \frac{\cosh\left(\pi \frac{p-\omega}{2}\right)} {\cosh\left(\pi \frac{p+\omega}{2}\right)} e^{-ip\rho_0} \right]~. \nonumber \end{align} Obviously, the emitted D0-brane wavefunction is the time-reversal of the absorbed D0-brane wavefunction \eqref{falling D0}: \begin{align} \Psi_{\msc{emitted}}(\rho_0,t_0;p,\omega) = \Psi^*_{\msc{absorb}}(\rho_0,-t_0;p,\omega) ~. \nonumber \end{align} Namely, it describes the D0-brane emitted from the past horizon at asymptotic past $t=-\infty$. By the choice of the boundary condition, this boundary state \eqref{emitted D0} describes only the past half of the classical D0-brane trajectory \eqref{trajectory D0}. The exact reflection relation has the form \begin{align} & \Psi_{\msc{emitted}}(\rho_0,t_0;-p,\omega)= {\cal R}^*(-p,\omega) \, \Psi_{\msc{emitted}}(\rho_0,t_0;p,\omega)~. \label{ref emiited D0} \end{align} Again, in light of the radion-tachyon correspondence, the D0-brane emitted from the past horizon is the counterpart of the past-half S-brane in tachyon rolling. The radiation creeps up the tachyon potential hill from past infinity and forms an unstable D-brane. \subsubsection{boundary state of time-symmetric D0-brane}\label{sec:7-2-4} The third possible boundary state is obtainable by {\em directly\/} taking the analytic continuation in the disk one-point amplitudes, as we already mentioned. 
Recalling \eqref{ac UVLR}, we shall analytically continue the disk amplitudes as (assuming $p>0$) \begin{align} \langle \phi^{+p}_q \rangle_{\msc{disk}}~\longrightarrow~ \langle U^p_{\omega} \rangle_{\msc{disk}} \qquad \mbox{and} \qquad \langle \phi^{-p}_q \rangle_{\msc{disk}}~\longrightarrow~ \langle V^p_{\omega} \rangle_{\msc{disk}}~. \label{ac disk amp} \end{align} The Euclidean one-point amplitudes $\langle \phi^{\pm p}_q \rangle_{\msc{disk}}$ are given in \eqref{D1'}, and can be expressed as contour integrals as in \eqref{formal disk amp 1}. Recall that $\langle \phi^p_{L,q} \rangle_{\msc{disk}}$, $\langle \phi^{p}_{R,q} \rangle_{\msc{disk}}$ are prescribed by the contour integrals over ${\cal C}^+$, ${\cal C}^-$ in figure \ref{c-array}. We shall thus analytically continue them to the real time axis (imaginary $x$-axis). In this way, we extract the Lorentzian disk one-point amplitudes as \begin{align} & \langle U^p_{\omega} \rangle_{\msc{disk}} = \langle U^p_{\omega} \rangle_{\msc{disk}}^{(\msc{absorb})} \qquad \mbox{and} \qquad \langle V^p_{\omega} \rangle_{\msc{disk}} = \langle V^p_{\omega} \rangle_{\msc{disk}}^{(\msc{emitted})}~, ~~~ \label{rel disc amp} \end{align} where the right-hand sides are simply the amplitudes associated with the `absorbed' and `emitted' D0-branes considered in the previous subsections and explicitly given in \eqref{falling D0} and \eqref{emitted D0}. Since $U^p_{\omega}$ and $V^p_{\omega}$ constitute a complete basis for the Lorentzian primary fields, the amplitudes \eqref{rel disc amp} yield yet another Lorentzian D0-brane boundary state. As is obvious from the above construction, this state keeps the time-reversal symmetry manifest and reproduces the entire classical trajectory \eqref{trajectory D0}; that is, it describes a D0-brane emitted from the past horizon and reabsorbed into the future horizon.
From the viewpoint of the boundary conformal theory, this would be considered the most natural one since it captures the entire classical trajectory of the D0-brane. In the radion-tachyon correspondence, this state is the counterpart of the full S-brane \cite{Gutperle:2002ai,Sen:2002nu,Sen:2002in,Sen:2002an}. Explicitly, the time-symmetric boundary states are given by \begin{align} &{}_{\msc{symm}}\!\bra{B;\rho_0,t_0} \cr &={}_{\msc{absorb}}\!\bra{B;\rho_0,t_0} + {}_{\msc{emitted}}\!\bra{B;\rho_0,t_0} \cr &= \int_0^{\infty}\frac{\mbox{d} p}{2\pi} \int_{-\infty}^{\infty}\frac{\mbox{d} \omega}{2\pi}\, \left[ 2\Psi_{\msc{symm}}(\rho_0,t_0;p,\omega) \, {}^L\!\dbra{p,\omega} + 2\Psi^*_{\msc{symm}}(\rho_0,-t_0;p,\omega) \, {}^R\!\dbra{p,\omega} \right] \cr & \ket{B;\rho_0,t_0}_{\msc{symm}} \cr &= \ket{B;\rho_0,t_0}_{\msc{absorb}} + \ket{B;\rho_0,t_0}_{\msc{emitted}} \cr &= \int_0^{\infty}\frac{\mbox{d} p}{2\pi} \int_{-\infty}^{\infty}\frac{\mbox{d} \omega}{2\pi}\, \left[ 2\Psi^*_{\msc{symm}}(\rho_0,t_0;p,\omega) \, \dket{p,\omega}^L + 2\Psi_{\msc{symm}}(\rho_0,-t_0;p,\omega) \, \dket{p,\omega}^R \right] ~, \label{symmetric D0} \end{align} where \begin{align} \Psi_{\msc{symm}}(\rho_0,t_0;p,\omega) = B(\nu_+, \nu_-) \Gamma\left(1+\frac{ip}{k}\right) \, e^{-ip\rho_0-i\omega t_0} \end{align} and ${}^L\!\dbra{p,\omega}$, $\dket{p,\omega}^L$, ${}^R\!\dbra{p,\omega}$, $\dket{p,\omega}^R$ are the Ishibashi states constructed over the primary states $\bra{L^p_{\omega}}$, $\ket{L^p_{\omega}}$, $\bra{R^p_{\omega}}$, $\ket{R^p_{\omega}}$,\footnote{The extra factor of `2' was introduced for convenience. Recall \eqref{inner product UVLR}.} respectively. One can readily check that the second lines in \eqref{symmetric D0} are indeed correct by evaluating the disk one-point amplitudes from them. 
For instance, using \eqref{inner product UVLR}, we obtain \begin{align} \langle U^p_{\omega} \rangle_{\msc{disk}}^{(\msc{symm})} &= {}_{\msc{symm}}\!\bra{B;\rho_0,t_0} U^p_{\omega} \rangle \nonumber\\ &= \Psi_{\msc{symm}}(\rho_0,t_0;p,\omega) +{\cal R}(p,\omega) \Psi^*_{\msc{symm}}(\rho_0,-t_0;p,\omega) \nonumber\\ &= B(\nu_+,\nu_-) \Gamma\left(1+\frac{ip}{k}\right) e^{-i\omega t_0} \, \left[ e^{-ip\rho_0} - \frac{\cosh\left(\pi \frac{p-\omega}{2}\right)} {\cosh\left(\pi \frac{p+\omega}{2}\right)} e^{ip\rho_0} \right] \nonumber\\ &= \langle U^p_{\omega} \rangle^{(\msc{absorb})}_{\msc{disk}} \equiv {}_{\msc{absorb}}\!\bra{B;\rho_0,t_0}U^p_{\omega}\rangle ~. \end{align} Other one-point amplitudes can be checked analogously. Two remarks are in order. First, notice that, though the disk one-point amplitudes are analytically continuable, the symmetric boundary states \eqref{symmetric D0} by themselves are {\em not\/} analytically continuable to the Euclidean boundary state \eqref{D1'}. This should not be surprising, as the Lorentzian Hilbert space is generated by {\sl twice} as many generators as the Euclidean theory. In other words, the Lorentzian bases $\ket{U^p_{\omega}}$, $\ket{V^p_{\omega}}$ correspond to $\ket{\phi^p_n}$, $\ket{\phi^{-p}_n}$ in the Euclidean theory, which were however linearly dependent due to the reflection relation. Nevertheless, the boundary state \eqref{symmetric D0} is a consistent one and yields disk one-point amplitudes that can be correctly continued to the Euclidean ones. Second, the full Lorentzian Hilbert space is decomposed as \begin{align} {\cal H} = {\cal H}^U \oplus {\cal H}^V \qquad \mbox{and} \qquad \widehat{{\cal H}} = \widehat{{\cal H}^U} \oplus \widehat{{\cal H}^V}~, \label{decomp Hilb} \end{align} where ${\cal H}^U$ (${\cal H}^V$) is spanned by $\ket{U^p_{\omega}}$ ($\ket{V^p_{\omega}}$) and their descendants.
The dual space $\widehat{{\cal H}^U}$ ($\widehat{{\cal H}^V}$) is similarly spanned by $\widehat{\bra{U^p_{\omega}}}$, ($\widehat{\bra{V^p_{\omega}}}$). Here, the Hilbert subspaces ${\cal H}^{U}$, $\widehat{{\cal H}^{U}}$ (${\cal H}^{V}$, $\widehat{{\cal H}^{V}}$) correspond to the boundary condition $\psi(\rho,t)\,\rightarrow\, 0$ at ${\cal H}^{-}$ (${\cal H}^+$). The `absorbed' and `emitted' D0-brane boundary states \eqref{falling D0}, \eqref{emitted D0} are consistent {\sl only} in the subspaces ${\cal H}^U$, ${\cal H}^V$ ($\widehat{{\cal H}^U}$, $\widehat{{\cal H}^V}$), while the `symmetric' D0-brane boundary state \eqref{symmetric D0} is well-defined in the entire Hilbert space ${\cal H}$ ($\widehat{{\cal H}}$). We thus have simple relations \begin{align} \ket{B;\rho_0,t_0}_{\msc{absorb}} &= P_U \, \ket{B;\rho_0,t_0}_{\msc{symm}} & \mbox{and} \qquad {}_{\msc{absorb}}\! \bra{B;\rho_0,t_0} &= {}_{\msc{symm}}\!\bra{B;\rho_0,t_0}\,\widehat{P_U}~,\cr \ket{B;\rho_0,t_0}_{\msc{emitted}} &= P_V \, \ket{B;\rho_0,t_0}_{\msc{symm}} & \mbox{and} \qquad {}_{\msc{emitted}}\!\bra{B;\rho_0,t_0} &= {}_{\msc{symm}}\!\bra{B;\rho_0,t_0}\,\widehat{P_V}~, \label{proj symmetric D0} \end{align} where $P_{U, V}$ ($\widehat{P_{U,V}}$) denotes projection of the Hilbert space ${\cal H}$ to ${\cal H}^{U,V}$ ($\widehat{{\cal H}^{U, V}}$). \subsection{Rolling D-brane gathers moss}\label{sec:7-3} As we have discussed in section \ref{sec:4}, it is of critical importance to study the $1/k$ corrections to the boundary states in order to understand the ``black hole - string transition" probed by our rolling D-brane. We first note that the boundary wavefunction itself is an analytic function with respect to $k$,\footnote{A possible exception is the factor $\nu_b^{2j+1}$. 
However, this factor can be absorbed (renormalized) into the cosmological constant operator of the $\mathcal{N}=2$ Liouville theory, or the mass of the two-dimensional black hole, so we will neglect this small subtlety.} so the boundary wavefunction itself is a well-defined quantity even for $k<1$. An alternative way to confirm this is to note that, at least in the Euclidean signature, the boundary wavefunction satisfies the conformal bootstrap equation for the dual $\mathcal{N}=2$ Liouville theory, whose description is more reliable for $k<1$. To see the effect of the $1/k$ corrections clearly, it is convenient to go to the coordinate space representation rather than the momentum space representation. For simplicity, we take the linear dilaton (extremal) limit of the boundary wavefunction (see section \ref{sec:8-5} for details of this limit). In the momentum space we have \begin{align} \Psi(\rho_0,t_0;p,\omega) = \frac{1}{2} B(\nu_+,\nu_-) \Gamma\left(1+i\frac{p}{k}\right)\, e^{-ip\rho_0-i\omega t_0}~, \quad \mbox{where} \quad \nu_{\pm} \equiv \frac{1}{2}- i\frac{p\pm \omega}{2}~. \end{align} We can Fourier transform this boundary wavefunction to obtain the boundary wavefunction in the coordinate space: \begin{align} \Psi(\rho,t) = \frac{\sqrt{k}}{\pi(2\cosh t)^{k+1}} \exp\left[-k\rho- \frac{e^{-k\rho}}{(2\cosh t)^{k}} \right] \ . \end{align} It is localized along the classical trajectory: \begin{align} \rho_0(t) = - \log(2\cosh t) \ \end{align} in the semiclassical limit (i.e. $k\to \infty$). For finite $k$, the classical trajectory is smeared. To go further, we study the energy-momentum distribution for finite $k$. Expanding the boundary states and reading off the coupling to gravity, we obtain (see \cite{Nakayama:2004ge} for details of the computation) \begin{align} T_{00} = \left(\frac{e^{-\rho}}{2\cosh t}\right)^{k-1} \exp\left[-\left(\frac{e^{-\rho}}{2\cosh t}\right)^{k}\right] \ .
\end{align} The distribution of the energy is of Poisson type, and the maximum of the energy density is now located at \begin{align} \frac{e^{-\rho}}{2\cosh t} = 1-\frac{1}{k} \ . \end{align} The variance of the distribution is computed as \begin{align} \Delta\rho \simeq \sqrt{\frac{1}{2(k-1)}} \ , \label{ditvar} \end{align} which can be regarded as the smearing factor for the classical trajectory due to the $\alpha'$ corrections. One might say that the rolling D-brane gathers moss in the $\alpha'$ corrected black hole background. The moss could be identified with the analytic continuation of the winding tachyon \cite{Kutasov:2005rr}. Indeed, a similar smearing factor in the Euclidean hairpin brane can be understood from the open string winding tachyon condensation near the tip of the Euclidean hairpin. We again emphasize that the coordinate space wavefunction itself is an analytic function with respect to $k$. However, below $k=1$, the variance of the smeared D-brane trajectory \eqref{ditvar} diverges, which means that the boundary wavefunction does not have a sensible interpretation as a rolling D-brane in the classical two-dimensional black hole any more. The transition point exactly coincides with the ``black hole - string transition" point we discussed in section \ref{sec:4}. The classical black hole no longer appears as a black hole at this point, and the D-brane cannot roll down into the hole as a probe. In section \ref{sec:8}, we compute the closed string radiation rate from the rolling D-brane, and see explicitly that the radiation rate also reveals such a phase transition as expected. As a consequence, we will see that the ``tachyon - radion correspondence" and the universality of the decaying D-brane break down. As a generalization of the construction, we can introduce the fundamental string charge (electric flux) along the rolling D-brane boundary states. The construction is based on the Lorentz boost technique reviewed in section \ref{sec:5-2-6}.
The corresponding boundary states have been studied in \cite{Nakayama:2004ge,Chen:2004vw}. \newpage \sectiono{Black Hole - String Transition from Probe Rolling D-brane}\label{sec:8} In this section we compute the closed string radiation rate from the rolling D-brane. The organization of this section is as follows. In section \ref{sec:8-1}, we compute the closed string radiation rate from the closed string perspective. In section \ref{sec:8-2}, we study the same closed string radiation rate from the open string perspective, establishing the consistency between unitarity and channel duality. In section \ref{sec:8-3}, we discuss the black hole - string transition from the probe rolling D-brane. In section \ref{sec:8-4}, the boundary states and radiation in the R-R sector are discussed. In section \ref{sec:8-5}, we study the extremal NS5-brane limit. Finally in section \ref{sec:8-6}, we present the physical interpretations of Hartle-Hawking states for rolling D-branes.\footnote{This section is based on \cite{Nakayama:2005pk,Nakayama:2006qm}.} \subsection{Radiation out of rolling D-brane from closed string viewpoint}\label{sec:8-1} In the background of the black hole, the D0-brane moves along a geodesic, and we have constructed a variety of boundary states describing the geodesic motion, specified by appropriate boundary conditions. Both by gravity and by the strong string coupling gradient, the D$p$-brane is pulled in and finds its minimum energy and mass at the location of the NS5-brane. The D$p$-brane is supersymmetric in flat space-time, but preserves no supersymmetry in the black NS5-brane background. Even in the extremal NS5-brane background, until the D$p$-brane dissociates into the NS5-brane and forms a non-threshold bound state, the space-time supersymmetry is completely broken. In these respects, the D$p$-brane propagating in the NS5-brane background is much like an excited D$p$-brane (with many excited open strings attached to it) in flat space-time.
Decay of the latter via closed string emission was studied extensively for $p=1$ \cite{Callan:1996dv,Das:1996wn}: the decay spectrum was found to match exactly the Hawking radiation of the non-extremal black hole made out of these excited D-branes, and the effective temperature of the excited open string modes agrees exactly with the Hawking temperature. In this section, we shall find certain analogous results for the closed string radiation off the rolling D0-brane, though special features also arise. As the D0-brane is pulled in, its acceleration grows and it radiates off the binding energy into closed string modes. Details of the radiation spectra would differ for different choices of the boundary conditions, viz. for different boundary states of the D0-brane. In this section, as a probe of the black hole geometry and D-brane dynamics therein, we shall analyze the spectral distribution of the closed string radiation off the rolling D0-particle. By applying the optical theorem, the radiation rate during the radion-rolling process is obtainable as the imaginary part of the annulus amplitude in the closed string channel.\footnote{For the tachyon rolling process in flat space-time background, the amplitude was evaluated first in \cite{Lambert:2003zr,Karczmarek:2003xm}.} Denote by $\mbox{d} {\cal N}(p, M)$ the differential number density of the radiation at a fixed value of the radial momentum $p$ and the mass-level $M$. By the definition of the D-brane boundary state, the radiation number density $\mbox{d} {\cal N}$ is then given in terms of the boundary wave functions: \begin{align} \mbox{d} {\cal N} (p,M) &:= {\mbox{d} p \over 2 \pi} {\mbox{d} M \over (2 \pi)^d} \int{\mbox{d} \omega} \, \Big< \Psi(\omega, p, M) \Big\vert \delta(L_0 + \overline{L}_0) \Big\vert \Psi (\omega, p, M) \Big> \nonumber \\ &= \frac{\mbox{d} p}{2 \pi} \frac{\mbox{d} M}{(2 \pi)^d} \frac{1}{2 \omega(p,M)}\, \Big| \Psi(p,\omega(p,M))\Big|^2 ~.
\label{radiation rate 0} \end{align} Here, $\omega, p$ are the energy and the radial momentum in the two-dimensional Lorentzian background, $M$ is the total mass (conformal weight) of the remaining subspaces of dimension $d$ (including mass gap), $\Psi(\omega, p, M)$ is the boundary wave function (including that of the remaining subspace), and $\omega(p,M)(>0)$ is the on-shell energy of the radiated closed string state determined by the on-shell condition $L_0 + \overline{L}_0 = 0$ including the ghost contribution. From kinematical considerations, it is obvious that the differential number density \eqref{radiation rate 0} is nonzero only when the D-brane is rolling. Of particular physical interest is the spectral distribution in the phase-space, as measured by the independent moments, {\em e.g.} \begin{align} \Big< \omega^m M^n \Big> &= \int \frac{\mbox{d} p}{2 \pi} \frac{\mbox{d} M}{(2 \pi)^d} \omega^m(p, M) M^n \frac{1}{2 \omega(p, M)} \Big|\Psi(p, \omega(p,M))\Big|^2 \nonumber \end{align} for $m, n =0, 1, 2, \cdots$. We shall evaluate these spectral observables by first evaluating the integral over the radial momentum $p$ by saddle-point approximation. In doing so, we pay particular attention to the asymptotic behavior as the mass-level $M$ becomes large. We shall then evaluate the integral over the mass-level (conformal weight) $M$, and extract the spectral observables. Consider the boundary state \eqref{falling D0} describing a D0-brane absorbed by the future horizon. The radiation emitted by the D0-brane is decomposable into `incoming' (toward the horizon) and `outgoing' (toward the null infinity) components in the far future.
The positive energy sector is spanned by the wavefunctions $U^p_{\omega}$, which have the following asymptotic behavior at $t\rightarrow +\infty$: \begin{align} & U^p_{\omega}(\rho,t) \sim e^{- i\omega \ln \rho -i \omega t} + d(p, \omega) e^{-\rho} e^{+ ip \rho -i\omega t} \qquad \mbox{where} \qquad |d(p, \omega)| \sim e^{-\pi p}~. \label{as U} \end{align} Here, we assumed $\omega \sim M \gg 0$. The first and the second terms correspond to the incoming wave supported around $\rho=0$ and the outgoing wave supported in the region $\rho\sim +\infty$, respectively. The damping factor $d(p,\omega)$ originates from the exact reflection amplitude ${\cal R}(p,\omega)$. (See \eqref{decomp ef 2}, \eqref{decomp ef 2-2}.) To obtain the radiation number density, we need to evaluate $\left|\Psi(p,\omega)\right|^2 \times |U^{p}_{\omega}(\rho,t)|^2$. At future infinity, the interference term in $|U^p_\omega|^2$ drops off upon taking the $p$-integral. Therefore, after integrating over the radial momentum $p$, the partial radiation distribution is seen to consist of the `incoming' and `outgoing' parts: \begin{align} {\cal N}(M)_{\msc{in}} &\equiv \int_0^M \mbox{d} M {\mbox{d} {\cal N}_{\msc{in}} \over \mbox{d} M} = \int_0^{\infty} \frac{\mbox{d} p}{2 \pi} \frac{1}{2\omega(p,M)} \Big|\Psi(p,\omega(p,M))\Big|^2 \nonumber\\ {\cal N}(M)_{\msc{out}} &\equiv \int_0^M \mbox{d} M {\mbox{d} {\cal N}_{\msc{out}} \over \mbox{d} M} = \int_0^{\infty} \frac{\mbox{d} p}{2 \pi}\frac{1}{2\omega(p,M)} \Big|d(p,\omega(p,M)) \Big|^2 \Big|\Psi(p,\omega(p,M))\Big|^2~. \label{radiation rate 1} \end{align} We shall now evaluate the branching ratio between the two radiation rates \eqref{radiation rate 1} with emphasis on possible string world-sheet effects. To this end, consider the conformal field theory defined by $SL(2;\mathbb{R})/U(1) \times {\cal M}$, where $SL(2;\mathbb{R})/U(1)$ denotes the (super)coset model and ${\cal M}$ denotes a unitary (super)conformal field theory of central charge $c_{\cal M}$.
Such (super)conformal field theories cover a variety of interesting string theory backgrounds. For the fermionic string, superconformal invariance asserts that the central charge ought to be critical: \begin{align} 3\Big(1 + \frac{2}{k}\Big) + c_{\cal M} =15~, \nonumber \end{align} where $k$ denotes the level of the super $SL(2;\mathbb{R})$ current algebra. If the background describes a stack of black NS5-branes, ${\cal M}= SU(2)_{k} \times \mathbb{R}^5$, where $k$ equals the NS5-brane charge. Likewise, for the bosonic string case, conformal invariance asserts that the central charge should take the critical value: \begin{align} 2 + \frac{6}{\kappa-2} + c_{{\cal M}} =26~, \end{align} where now $\kappa$ refers to the level of the bosonic $SL(2;\mathbb{R})$ current algebra. For the background describing the black hole in two-dimensional string theory, ${\cal M}$ is empty and $\kappa$ should be set to $9/4$. It would be illuminating to analyze the branching ratio for the `rolling closed string', viz. a closed string state of fixed transverse mass $M$ and radial momentum $p$ propagating in the black hole geometry. The branching ratio is simply given by the reflection amplitude (see \eqref{exactra}): \begin{align} \left. {{\cal N}_{\rm out}(p, \omega) \over {\cal N}_{\rm in } (p, \omega)} \right|_{\rm closed \, string} = |{\cal R}(p, \omega)|^2 = {\cosh^2 \pi \left(\omega-p \over 2 \right) \over \cosh^2 \pi \left( {\omega+p \over 2}\right)}~. \label{tachyon} \end{align} As emphasized below \eqref{exactra}, string world-sheet effects are present for the reflection amplitude ${\cal R}$ itself but, being an overall phase, they drop out of \eqref{tachyon}. The $k$-dependence enters in the branching ratio \eqref{tachyon} only through the on-shell dispersion relation $\omega = \sqrt{p^2 + 2 k M^2}$.
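The exponential suppression encoded in \eqref{tachyon} is easy to illustrate numerically. The following short stdlib-Python sketch (ours, purely illustrative; the function names are not from the text) evaluates $|{\cal R}(p,\omega)|^2$ in log space; for $\omega > p$ with both arguments $\pi(\omega \mp p)/2$ large, the branching ratio approaches $e^{-2\pi p}$:

```python
import math

def log_cosh(x: float) -> float:
    # log(cosh x) = |x| + log(1 + e^{-2|x|}) - log 2, stable for large |x|
    return abs(x) + math.log1p(math.exp(-2.0 * abs(x))) - math.log(2.0)

def reflection_prob(p: float, omega: float) -> float:
    """|R(p, omega)|^2 = cosh^2(pi(omega - p)/2) / cosh^2(pi(omega + p)/2)."""
    a = math.pi * (omega - p) / 2.0
    b = math.pi * (omega + p) / 2.0
    return math.exp(2.0 * (log_cosh(a) - log_cosh(b)))

# For omega > p with both arguments large, |R|^2 approaches exp(-2 pi p):
for p, omega in [(2.0, 6.0), (4.0, 10.0)]:
    print(p, reflection_prob(p, omega), math.exp(-2.0 * math.pi * p))
```

At $\omega = p$ (the two-dimensional massless kinematics) the same formula gives $1/\cosh^2 \pi p$, consistent with the exponential suppression at high energy noted below.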
For the two-dimensional case, first studied in \cite{Dijkgraaf:1992ba} and \cite{Giveon:2003wn}, $k=1/2$, $M=0$ and $\omega = p$, so the scattering probability is exponentially suppressed as the energy increases. For a fixed transverse mass $M$ {\sl and} the forward radial momentum $p$, the reflection probability of the infalling D0-brane is given precisely by the same result as \eqref{tachyon}: \begin{align} \left. {{\cal N}_{\rm out}(p, \omega) \over {\cal N}_{\rm in } (p, \omega)} \right|_{\rm D0-brane} = |{\cal R}(p, \omega)|^2 = {\cosh^2 \pi \left(\omega-p \over 2 \right) \over \cosh^2 \pi \left( {\omega+p \over 2}\right)}~. \end{align} This is simply because back-scattering of the boundary wave function originates from that of the closed string wave function: roughly speaking, the boundary wave function is defined by the overlap of the closed string wave function with the classical trajectory of the D0-brane. Radiation out of the falling D0-brane is coherent, so we integrate over the radial momentum $p$ as in \eqref{radiation rate 1} in extracting the branching ratio. We shall first analyze the partial radiation distribution at large mass-level, $M \rightarrow \infty$. More precisely, we shall examine the asymptotic behavior of ${\cal N} (M)$ multiplied by the phase-space `degeneracy factor' $\rho(M)\sim e^{\frac{1}{2}M \beta_{\rm Hg}}$, where $\beta_{\rm Hg}$ denotes the inverse Hagedorn temperature. The closed string states that couple to the boundary states are left-right symmetric, so we need to take the square root of the usual degeneracy factor in the closed string sector.
Here, the inverse Hagedorn temperature is given by \begin{align} \beta_{\rm Hg} = 4\pi \sqrt{1-\frac{1}{2k}}~, ~~~ \label{Hagedorn super} \end{align} for the superstring theory, and \begin{align} \beta_{\rm Hg} = 4\pi \sqrt{2-\frac{1}{2(\kappa-2)}}~, ~~~ \label{Hagedorn bosonic} \end{align} for the bosonic string theory, where the $1/k $ $(1/\kappa)$-correction is interpreted as the string world-sheet effects of the two-dimensional background. These results are derivable from the Cardy formula with the `effective central charge' $c_{\msc{eff}}=c-24 h_{\msc{min}} $ \cite{Kutasov:1990ua}, where $h_{\msc{min}}$ refers to the lowest conformal weight of normalizable primary states. \subsubsection{radiation distribution in superstring theory}\label{sec:8-1-1} \label{radiation super} Let us begin with the spectral distribution in superstring theories. We shall focus exclusively on the NS-NS sector of the radiation and defer the analysis of the R-R sector to section \ref{sec:8-4}. The on-shell condition of a closed string state in the NS-NS sector is given by \begin{align} & -\frac{\omega^2}{4k} + \frac{p^2}{4k}+ \frac{1}{4k}+ \Delta_{\cal M} = \frac{1}{2}~, \label{on-shell super} \end{align} where $\Delta_{\cal M}$ denotes the conformal weight of the ${\cal M}$-part. The on-shell energy is given by \begin{align} \omega \equiv \omega(p,M) = \sqrt{p^2+2k M^2} \qquad \mbox{where} \qquad M^2 \equiv 2 \left(\Delta_{\cal M} + \frac{1}{4k} - \frac{1}{2} \right)~. \nonumber \end{align} Consider now a D0-brane propagating outside the black hole and absorbed into the future horizon. The relevant boundary wave function was constructed in \eqref{falling D0} and, from it, the differential radiation number distributions \eqref{radiation rate 1} can be computed.
At large $\omega$ and $p$, using Stirling's approximation, we find that \begin{align} {\cal N} (M)_{\msc{in}} &= \int_0^{\infty}\frac{\mbox{d} p}{2 \pi} \frac{1} {2\omega(p,M)}\, \Big| \Psi_{\rm absorb} (\rho_0,t_0;p,\omega(p,M))\Big|^2 \nonumber \\ &\sim {1 \over M} \int_0^{\infty}{\mbox{d} p}\, e^{+\pi\left(1-\frac{1}{k}\right)p - \pi\sqrt{p^2+2k M^2}}~ \label{N M super in}\\ \nonumber \\ {\cal N} (M)_{\msc{out}} &= \int_0^{\infty}\frac{\mbox{d} p}{2 \pi} \, \Big|d(p,\omega(p,M))\Big|^2 \, \frac{1} {2 \omega(p,M)} \Big| \Psi_{\rm absorb} (\rho_0,t_0;p,\omega(p,M))\Big|^2 \nonumber \\ &\sim {1 \over M} \int_0^{\infty} {\mbox{d} p} \, e^{-\pi \left(1+\frac{1}{k}\right)p - \pi\sqrt{p^2+ 2k M^2}}~. \label{N M super out} \end{align} In the second lines, we have taken $M$ large, viz. $\omega \gg p \gg 1$, and kept only the leading terms. Thus, for each fixed but large $M$, the partial number distributions take the forms: \begin{align} {\cal N} (M)_{\msc{in}} \sim \int_0^{\infty} \mbox{d} p \, \sigma_{\msc{in}}(p) e^{-\frac{1}{2}\beta_{\rm Hw} M} \qquad \mbox{and} \qquad {\cal N}(M)_{\msc{out}} \sim \int_0^{\infty} \mbox{d} p \, \sigma_{\msc{out}}(p) e^{-\frac{1}{2}\beta_{\rm Hw} M}~, \label{grey body} \end{align} where \begin{align} \beta_{\rm Hw} = 2\pi \sqrt{2k}~, \label{Hawking temp super} \end{align} is the inverse Hawking temperature of the fermionic two-dimensional black hole. As discussed above, the radiation off the D-brane in the NS5-brane background is analogous to the decay of an excited D-brane in flat ambient space-time. Indeed, the asymptotic expression \eqref{grey body} suggests that open string excitations of energy $M$ on the rolling D0-brane are populated according to the distribution function $\exp (-{1 \over 2} \beta_{\rm Hw} M)$ and decay into closed string radiation. In this interpretation, the distribution function encodes the change of available states for open string excitations on the D0-brane after emitting radiation of energy $M$.
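The asymptotics just quoted are easy to check numerically. Using $|\Gamma(\tfrac{1}{2}-iy)|^2=\pi/\cosh \pi y$ and $|\Gamma(1+iy)|^2 = \pi y/\sinh \pi y$, the modulus of the linear-dilaton boundary wavefunction of section \ref{sec:7-3} reduces to elementary functions. The following stdlib-Python sketch (ours, purely illustrative; the additive constant $\ln(\pi^2/k)$ is our bookkeeping for this normalization) verifies the Stirling slope in \eqref{N M super in}, \eqref{N M super out}, and then extracts the large-$M$ exponents after the $p$-integration by direct quadrature; for $k>1$ they come out as $-\frac{1}{2}\beta_{\rm Hg}$ (incoming) and $-\frac{1}{2}\beta_{\rm Hw}$ (outgoing), anticipating the saddle-point analysis below:

```python
import math

def log_cosh(x):
    # log(cosh x), stable for large |x|
    return abs(x) + math.log1p(math.exp(-2 * abs(x))) - math.log(2)

def log_sinh(x):  # assumes x > 0
    return x + math.log1p(-math.exp(-2 * x)) - math.log(2)

def log_abs_psi_sq(p, omega, k):
    """ln|Psi|^2 for Psi = (1/2) B(nu_+, nu_-) Gamma(1 + ip/k),
    nu_pm = 1/2 - i(p +- omega)/2, via Gamma-function reflection formulas."""
    return (-math.log(4) + 2 * math.log(math.pi)
            - log_cosh(math.pi * (p + omega) / 2)
            - log_cosh(math.pi * (omega - p) / 2)
            + log_sinh(math.pi * p) - math.log(math.pi * p)
            + math.log(math.pi * p / k) - log_sinh(math.pi * p / k))

k = 4.0
# (a) Stirling check: ln|Psi|^2 - [pi(1 - 1/k)p - pi*omega] -> const, omega >> p >> 1
drift = [log_abs_psi_sq(p, 4 * p, k) - (math.pi * (1 - 1 / k) * p - math.pi * 4 * p)
         for p in (5.0, 10.0)]

def log_N(M, incoming, pmax, n=40000):
    # log of the p-integral with integrand exp(s*pi*(1 - s/k)*p - pi*sqrt(p^2 + 2k M^2)),
    # s = +1 (incoming), s = -1 (outgoing); log-sum-exp Riemann sum, crude but adequate
    s = 1.0 if incoming else -1.0
    dp = pmax / n
    logs = [s * math.pi * (1 - s / k) * (i * dp)
            - math.pi * math.sqrt((i * dp) ** 2 + 2 * k * M * M) for i in range(n + 1)]
    m = max(logs)
    return m + math.log(sum(math.exp(v - m) for v in logs) * dp)

# (b) large-M exponents after the p-integration
slope_in = (log_N(60.0, True, 450.0) - log_N(40.0, True, 450.0)) / 20.0
slope_out = (log_N(60.0, False, 60.0) - log_N(40.0, False, 60.0)) / 20.0
print(drift[1] - drift[0])                                   # should be near 0
print(slope_in, -2 * math.pi * math.sqrt(1 - 1 / (2 * k)))   # compare with -beta_Hg/2
print(slope_out, -math.pi * math.sqrt(2 * k))                # compare with -beta_Hw/2
```

The $1/M$ and $1/2\omega$ prefactors are dropped in the quadrature since they are immaterial for the exponential slopes.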
Curiously, the `effective temperature' of the excited closed strings is set by the Hawking temperature of the nonextremal NS5-brane, not that of a black hole that would have been made of the D0-brane. It is tempting to interpret this as indicating that the D0-brane represents a class of possible excitation modes of the black NS5-brane. The closed string states of energy $M$ emitted by the D0-brane are certainly coherent, but according to this interpretation, they can still be recast in an effective thermal distribution set by the Hawking temperature of the two-dimensional black hole. We will later return to the origin of such effective thermal behavior of the rolling D0-brane from the viewpoint of Euclidean cylinder amplitudes, extending the argument of \cite{Dijkgraaf:1992ba} about the Hawking radiation in the purely closed string background. The functions $\sigma_{\rm in}$ and $\sigma_{\rm out}$ are interpretable as the black hole `greybody' factors for the incoming and outgoing parts of the radiation. The factor 1/2 in the exponent of the Boltzmann distribution function reflects the fact that only left-right symmetric closed string states can appear in the boundary states and the radiated closed string modes. The `greybody factors' $\sigma_{*}(p)$ depend on the radial momentum $p$ exponentially, so the radiation distribution would be modified {\sl once} the radial momentum $p$ is integrated out. Below, we shall show this explicitly. We are primarily interested in keeping track of string world-sheet effects set by the value of the level $k$. We shall consider different ranges of the level $k$ separately, and focus on the asymptotic behaviors at large $M$ via the saddle point methods. \begin{description} \item[(i) \underline{$k > 1$}: ] \hfill\break This is the case for the black NS5-brane background. Consider first the incoming part.
Since $1- \frac{1}{k}> 0$, the dominant contribution in the $p$-integral arises from the saddle point: \begin{align} p \sim p_* = \frac{k-1}{\sqrt{1-\frac{1}{2k}}} M~.\nonumber \end{align} Substituting this into \eqref{N M super in}, we obtain \begin{align} {\cal N} (M)_{\msc{in}} \sim e^{-2\pi M \sqrt{1-\frac{1}{2k}}} = e^{-\frac{1}{2}M \beta_{\rm Hg}}~, \label{eq:in} \end{align} up to pre-exponential powers of $M$. Taking account of the density of states $\rho(M) \sim e^{\frac{1}{2}M \beta_{\rm Hg}}$, we find that $\rho(M) {\cal N}(M)_{\msc{in}}$ scales with powers of $M$, and is independent of $k$. More explicitly, for the black NS5-brane ${\cal M}= SU(2)_{k} \times \mathbb{R}^5$, the incoming radiation distribution of the D$p$-brane parallel to the NS5-brane yields \begin{align} {\cal N}(M)_{\msc{in}} &\sim {1 \over M} \int {\mbox{d}^{5-p} {\bf k}_{\perp} \over (2 \pi)^{5-p}} \int_0^\infty {\mbox{d} p} \, e^{\pi (1-\frac{1}{k})p-\pi\sqrt{p^2+2k(M^2+{\bf k}_{\perp}^2)}} \nonumber \\ &\sim M^{2-\frac{{p}}{2}}\, e^{-2\pi M\sqrt{1-\frac{1}{2k}}} \ . \nonumber \end{align} Taking account of the density of states $\rho(M) \sim M^{-3}e^{2\pi M\sqrt{1-\frac{1}{2k}}}$, the average radiation number distribution is given by \begin{align} \frac{\overline{{\cal N}}_{\msc{in}}}{V_p} \sim \int^{M_{\rm D}} {\mbox{d} M \over M} \, M^{-\frac{p}{2}} \qquad \mbox{where} \qquad M_{\rm D} \sim {\cal O}({1 \over g_{\rm st}}) \ . \label{conL} \end{align} This result coincides with the computations of \cite{Lambert:2003zr,Karczmarek:2003xm}, and corroborates the radion-tachyon correspondence. Interestingly, the incoming part of the radiation number distribution in the nonextremal NS5-brane background is exactly the same as the distribution in the extremal NS5-brane background. Later, in section \ref{sec:8-6}, we shall carefully examine the extremal limit and its consequences.
As in the extremal case, \eqref{conL} implies that nearly all the D0-brane potential energy is released into closed string radiations before it falls into the black hole. On the other hand, for the outgoing radiation, the far infrared $p \sim 0$ dominates the momentum integral. We thus obtain \begin{align} {\cal N} (M)_{\msc{out}} \sim e^{-2\pi M \sqrt{\frac{k}{2}}} = e^{-\frac{1}{2}M \beta_{\rm Hw}}~, \nonumber \end{align} displaying effective thermal distribution set by the Hawking temperature. Taking account of the density of states, \begin{align} \rho(M) {\cal N} (M)_{\msc{out}} \sim e^{\frac{1}{2}M \left(\beta_{\rm Hg}-\beta_{\rm Hw}\right)} = e^{2\pi M \left(\sqrt{1-\frac{1}{2k}}-\sqrt{\frac{k}{2}} \right)}~. \nonumber \end{align} This is ultraviolet finite for any $k$ since \begin{align} \left(1-\frac{1}{2k}\right) -\frac{k}{2} = -\frac{1}{2k} \left(k-1\right)^2 < 0~. \label{eq:ini} \end{align} We thus conclude that the radiation number distribution is mostly in the incoming part: \begin{align} \left. {{\cal N}_{\rm out}(M) \rho (M) \over {\cal N}_{\rm in}(M) \rho (M)} \right\vert_{\rm falling \,\, D0} \sim {e^{-{1 \over 2} \beta_{\rm Hg} M } \over e^{-{1 \over 2} \beta_{\rm Hw} M}} = e^{2 \pi M \left( \sqrt{1 -{1 \over 2 k}} - \sqrt{k \over 2} \right)} \ll 1. \nonumber \end{align} Intuitively, this may be understood as follows: for the absorbed boundary state, the boundary condition is such that the D0-brane flux is directed from past null infinity to the future horizon. This also corroborates the observation that $T_{t\rho}$-component of D0-brane's energy-momentum tensor is nonzero and increases monotonically as the D0-brane approaches the future horizon. The outgoing part of the distribution is exponentially small compared to the incoming part and exhibits effective thermal distribution at the Hawking temperature. Notice that, despite being so, this outgoing part has nothing to do with the Hawking radiation of the black hole. 
The latter is a feature of the background itself. A priori, the outgoing radiation could be in a distribution characterized by a temperature different from the Hawking temperature. As mentioned above, it is tempting to interpret the coincidence of the two temperatures as a consequence of maintaining equilibrium between the black NS5-brane and the D0-brane. \item[(ii) \underline{$\frac{1}{2} < k \leq 1$}: ] \hfill\break This is the regime which includes the conifold geometry at $k=1$. Since $1-\frac{1}{k} \leq 0$, the dominant contribution to the momentum integral is from $p \sim 0$, not only for the outgoing radiation but also for the incoming one. We thus obtain \begin{align} {\cal N}(M)_{\msc{in}} \sim {\cal N} (M)_{\msc{out}} \sim e^{-2\pi M \sqrt{\frac{k}{2}}} \equiv e^{-\frac{1}{2}M \beta_{\rm Hw}}~, \end{align} viz. both are in effective thermal distribution set by the Hawking temperature. All spectral moments are manifestly ultraviolet finite since, at large $M$, the exponential growth of the density of the final closed string states is insufficient to overcome the suppression by the distribution. Thus, \begin{align} {{\cal N}_{\rm out}(M) \rho(M) \over {\cal N}_{\rm in} (M) \rho (M)} \Big|_{\rm falling \, D0} \sim 1. \nonumber \end{align} We interpret this as indicating that the D0-brane does not radiate off most of its energy before falling into the horizon. \item[(iii) \underline{$k=\frac{1}{2}$} : ] \hfill\break This special case corresponds to empty ${\cal M}$. The two-dimensional background permits no transverse degrees of freedom of the string. The physical spectrum includes the massless tachyon only, with $M=0$ and $\rho(M)=1$. For the on-shell configurations, there is now a crucial difference from the previous cases. The radial momentum $p$ is fixed by the on-shell condition as $\omega = \pm p$, so it should not be integrated over for the final states.
Consequently, we cannot decompose the radiation distribution into incoming and outgoing radiations, and only the total distribution is physically relevant. We thus obtain the following large-$\omega$ behavior of the radiation distribution: \begin{align} {\cal N}(\omega)\sim e^{-2\pi \omega} \equiv e^{-\omega \beta_{\rm Hw}}~. \label{radiation 2D BH super} \end{align} Again, we have found an effective thermal distribution at the Hawking temperature! Notice the absence of the extra factor of 1/2, in contrast to the previous regimes. This is not a contradiction. In the present case, the transverse oscillators are absent and the string behaves as a point particle. Again, the D0-brane does not radiate off most of its energy before falling across the black hole horizon. In the linear dilaton regime, the boundary states and a possible connection with the matrix model for the two-dimensional type 0A/0B string theory have been discussed in \cite{Lapan:2005qz}. Given the phase transition we observed, however, the classical intuition of such ``rolling D-brane" in the two-dimensional noncritical string theory is rather questionable. It would be interesting to give an interpretation from the dual matrix models. \end{description} \subsubsection{radiation distribution in bosonic string theory}\label{sec:8-1-2} The analysis for the bosonic string case proceeds along much the same route. The boundary state for the infalling D0-brane includes the string world-sheet correction factor $\Gamma\left(1+i\frac{p}{\kappa-2}\right)$, where again $\kappa$ refers to the level of the bosonic $SL(2;\mathbb{R})/U(1)$ coset model. The on-shell condition now reads \begin{align} & -\frac{\omega^2}{4\kappa} + \frac{p^2}{4(\kappa-2)}+ \frac{1}{4(\kappa-2)} + \Delta_{\cal M} = 1~, \label{on-shell bosonic} \end{align} where $\Delta_{\cal M}$ denotes the conformal weight in the ${\cal M}$-sector.
This is solved by \begin{align} & \omega \equiv \omega(p,M) = \sqrt{\frac{\kappa}{\kappa-2}p^2+2\kappa M^2} \qquad \mbox{where} \qquad M^2 \equiv 2 \left(\Delta_{\cal M} + \frac{1}{4(\kappa-2)} - 1\right)~. \end{align} The partial radiation number distribution in the large-$M$ limit is given by: \begin{align} & {\cal N}(M)_{\msc{in}} \sim {1 \over M} \int_0^{\infty} {\mbox{d} p} \, e^{+\pi \left(1- \frac{1}{\kappa-2}\right)p - \pi \sqrt{\frac{\kappa}{\kappa-2} p^2 + 2\kappa M^2}}~. \label{N M bosonic in} \\ & {\cal N}(M)_{\msc{out}} \sim {1 \over M} \int_0^{\infty} {\mbox{d} p} \, e^{-\pi \left(1+ \frac{1}{\kappa-2}\right)p - \pi \sqrt{\frac{\kappa}{\kappa-2}p^2+ 2\kappa M^2}}~. \label{N M bosonic out} \end{align} Thus, as in the superstring case, there can arise several distinct behaviors depending on how stringy the background is. \begin{description} \item[(i) \underline{$\kappa > 3$}: ] \hfill\break Consider first the incoming radiation part. Since $1-\frac{1}{\kappa-2} > 0$, the dominant contribution to the momentum integral in \eqref{N M bosonic in} is from the saddle point \begin{align} p \sim p_* = \frac{\kappa-3} {\sqrt{2-\frac{1}{2(\kappa-2)}}} M~. \nonumber \end{align} We thus obtain, up to pre-exponential powers of $M$, \begin{align} {\cal N}(M)_{\msc{in}} \sim e^{-2\pi M \sqrt{2-\frac{1}{2(\kappa-2)} } } \, = \, e^{-\frac{1}{2} M \beta_{\rm Hg}}~, \nonumber \end{align} where $\beta_{\rm Hg}$ denotes the inverse Hagedorn temperature of the bosonic string theory \eqref{Hagedorn bosonic}. In this way, we again find the power-law behavior of $\rho(M) {\cal N} (M)_{\msc{in}}$ at large $M$, independent of the level $\kappa$. For the outgoing radiation part, again $p\sim 0$ dominates the momentum integral in \eqref{N M bosonic out}. The result is \begin{align} {\cal N} (M)_{\msc{out}} \sim e^{-2 \pi M \sqrt{\frac{\kappa}{2}}} = e^{-\frac{1}{2} M \beta_{\rm Hw}}~.
\nonumber \end{align} Here, \begin{align} \beta_{\rm Hw} \equiv 2\pi \sqrt{2\kappa} \nonumber \end{align} is the inverse Hawking temperature of the bosonic two-dimensional black hole. We then obtain \begin{align} \rho(M) {\cal N} (M)_{\msc{out}} \sim e^{\frac{1}{2}\left(\beta_{\rm Hg}-\beta_{\rm Hw}\right)M} = e^{2\pi M \left[\sqrt{2-\frac{1}{2(\kappa-2)}} -\sqrt{\frac{\kappa}{2}} \right] }~. \nonumber \end{align} As in the superstring case, the exponent is always negative definite: \begin{align} \left(2-\frac{1}{2(\kappa-2)}\right) -\frac{\kappa}{2} = -\frac{\left(\kappa-3\right)^2}{2(\kappa-2)} \leq 0~, \nonumber \end{align} so the outgoing radiation distribution (as well as spectral moments) is manifestly ultraviolet finite. The physical interpretation of the above results is the same as in the superstring case: The D0-brane falling into the black hole has a nonzero component $T_{t\rho}$ of the energy-momentum tensor, which entails that the dominant part of the closed string radiation is incoming toward the future horizon. The outgoing part of the radiation is exponentially suppressed, and is in effective thermal distribution set by the Hawking temperature. Again, this distribution is distinct from the Hawking radiation of the two-dimensional black hole. As for the fermionic string, the branching ratio is exponentially suppressed. \item[(ii) \underline{$\frac{9}{4} < \kappa \leq 3$}: ] \hfill\break In this regime, $1- \frac{1}{\kappa-2} \leq 0$ and the momentum integrals for both incoming and outgoing radiation distributions are dominated by $p \sim 0$: \begin{align} {\cal N}(M)_{\msc{in}} \sim {\cal N}(M)_{\msc{out}} \sim e^{-2 \pi M \sqrt{\frac{\kappa}{2}}} \equiv e^{-\frac{1}{2}M \beta_{\rm Hw}}~. \nonumber \end{align} Both are in effective thermal distribution at the Hawking temperature, and all spectral moments are manifestly ultraviolet finite since, at large $M$, the growth of the density of states does not overcome the suppression by the distribution.
The branching ratio remains of order unity. \item[(iii) \underline{$\kappa=\frac{9}{4}$} : ] \hfill\break This is the most familiar situation: the black hole in two-dimensional bosonic string theory, originally studied in \cite{Witten:1991yr,Elitzur:1991cb,Mandal:1991tz,Bars:1990rb}. The physical spectrum of the closed string consists only of the massless tachyon, so we again need to set $M=0$ and $\rho(M)=1$. The calculation is slightly more complicated than in the supersymmetric case: the canonically normalized energy is \begin{align} E = \frac{\sqrt{2}}{3}\omega = \sqrt{2}p~, \nonumber \end{align} so we obtain \begin{align} & {\cal N} (E) \sim e^{\pi \left(1-\frac{1}{1/4}\right) p -\pi \sqrt{\frac{9/4}{1/4}}p} = e^{-3 \sqrt{2}\pi E} \equiv e^{- E \beta_{\rm Hw}}~. \nonumber \end{align} It again shows an effective thermal distribution of the radiated closed string modes at the Hawking temperature: $\beta_{\rm Hw} = 2\pi \sqrt{2\kappa} = 3\pi \sqrt{2}$. \end{description} \subsubsection{radiation distribution for emitted or time-symmetric boundary states}\label{sec:8-1-3} The closed string radiation for the other types of boundary states, viz. the `emitted' \eqref{emitted D0} or the `symmetric' \eqref{symmetric D0} D0-branes, can be studied analogously. For the emitted D0-brane boundary state \eqref{emitted D0}, by time-reversal, we should observe the radiation distribution in the far past: $t\sim -\infty$. The relevant decomposition corresponding to \eqref{as U} is given by (assuming $\omega>0$, $p>0$) \begin{align} V^p_{\omega}(\rho,t) \sim e^{i\omega \ln \rho-i\omega t} + d^*(p,\omega) e^{-\rho} e^{-ip\rho -i\omega t} ~, \label{as V} \end{align} where the first term is supported near the past horizon and the second term corresponds to the incoming wave from the null infinity. Obviously, we find precisely the same behavior of the radiation distribution as for the absorbed D0-brane once the roles of the `in' and `out' states are reversed.
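The saddle-point estimates in cases (i)--(iii) above can be cross-checked numerically. The following sketch (an illustrative check, not part of the derivation; the values of $\kappa$ are arbitrary samples) verifies that the exponent of the integrand in \eqref{N M bosonic in} is maximized at the claimed saddle $p_*$ with value $-2\pi M\sqrt{2-\frac{1}{2(\kappa-2)}}$, and that the algebraic identity controlling the sign of the net exponent of $\rho(M){\cal N}(M)_{\msc{out}}$ holds.

```python
import math

def exponent(p, kappa, M=1.0):
    # exponent of the integrand in eq. (N M bosonic in), in units of pi
    return (1.0 - 1.0/(kappa - 2))*p - math.sqrt(kappa/(kappa - 2)*p*p + 2*kappa*M*M)

for kappa in (3.5, 4.0, 7.0):
    s = 2.0 - 1.0/(2*(kappa - 2))
    p_star = (kappa - 3)/math.sqrt(s)          # claimed saddle location (M = 1)
    # exponent at the saddle equals -2*sqrt(2 - 1/(2(kappa-2))), i.e. -beta_Hg/(2*pi)
    assert abs(exponent(p_star, kappa) + 2*math.sqrt(s)) < 1e-12
    # p_star is indeed a local maximum of the exponent
    assert exponent(p_star + 1e-3, kappa) < exponent(p_star, kappa)
    assert exponent(p_star - 1e-3, kappa) < exponent(p_star, kappa)
    # identity (2 - 1/(2(kappa-2))) - kappa/2 = -(kappa-3)^2 / (2(kappa-2))
    assert abs((s - kappa/2) + (kappa - 3)**2/(2*(kappa - 2))) < 1e-12
```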
So, for $k>1$, ${\cal N}(M)_{\rm in} \sim \exp ( - {1 \over 2} \beta_{\rm Hw} M)$ while ${\cal N}(M)_{\rm out} \sim \exp (-{1 \over 2} \beta_{\rm Hg} M)$ and, for $ 1 \ge k > 1/2$, ${\cal N}(M)_{\rm in}$, ${\cal N}(M)_{\rm out} \sim \exp (-{1 \over 2} \beta_{\rm Hw} M)$. Consider next the boundary state describing the D0-brane with the symmetric boundary condition \eqref{symmetric D0}. Recalling the relations \eqref{rel disc amp}, one finds that the radiation rates are simply obtained by adding the contributions from the `absorbed' and `emitted' D0-brane boundary states. Thus, the radiation distributions behave as ${\cal N}(M)_{\rm in}$, ${\cal N}(M)_{\rm out} \sim \exp (-{1 \over 2} \beta_{\rm Hg} M)$ for $k>1$, and the dependence on the Hawking temperature disappears.\footnote{The dependence on the Hawking temperature is exponentially suppressed, and hence completely negligible compared to other power-suppressed subleading terms.} We then find that the `detailed balance' ${\cal N}(M)_{\msc{in}} = {\cal N}(M)_{\msc{out}}$ is obeyed. This is as expected, since the boundary state \eqref{symmetric D0} is defined so that it keeps the time-reversal symmetry and the one-particle state unitarity manifest. \subsubsection{revisiting the radiation distribution from the thermal string propagator}\label{sec:8-1-4} To close this section, we discuss the radiation distribution from a different angle. Although the argument given here is somewhat heuristic, it is quite helpful for grasping the physical intuition and for understanding where the thermal-like behavior of the closed string radiation comes from. The argument is much like the one given in \cite{Dijkgraaf:1992ba}, where the Hawking radiation of the two-dimensional black hole is discussed by means of the closed string thermal propagator. In a sense, our discussion is an extension of it to the open string sector.
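The mechanism by which the Bose factor $1/(e^{\beta_{\msc{Hw}}\omega_{p,M}}-1)$ will emerge below from the sum over thermal KK momenta is the standard Matsubara summation of the Euclidean propagator. A minimal numerical sketch of the underlying identity (the values of $p$, $M$, $\beta$ are illustrative and not tied to the specific wavefunctions $\Psi_{D1}$):

```python
import math

def matsubara_sum(p, M, beta, nmax=200000):
    # sum over thermal KK momenta 2*pi*n/beta of the Euclidean propagator
    total = 1.0/(p*p + M*M)
    for n in range(1, nmax + 1):
        total += 2.0/(p*p + (2*math.pi*n/beta)**2 + M*M)
    return total

p, M, beta = 0.7, 1.3, 2.0
omega = math.hypot(p, M)     # on-shell energy sqrt(p^2 + M^2)
# closed form: (beta/(2 omega)) coth(beta omega/2) = (beta/(2 omega)) (1 + 2/(e^{beta omega}-1))
closed_form = (beta/(2*omega))*(1.0 + 2.0/math.expm1(beta*omega))
assert abs(matsubara_sum(p, M, beta) - closed_form) < 1e-3
```

The second term of the closed form carries the Bose distribution; this is the structure isolated after the contour deformation.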
We start with the (thermal) cylinder amplitude for the D1-brane on the Euclidean cigar \eqref{clss2'} \footnote{To be more precise, we consider the fermionic black-hole of level $k$ and focus on the space-time bosons. For the space-time fermions, the thermal KK momentum should be a half-integer $n\in 1/2 + \mathbb{Z}$, rather than $n\in \mathbb{Z}$, which leads to the fermionic distribution $1/(e^{\beta_{\msc{Hw}}\omega_{p,M}}+1) $ instead of $1/(e^{\beta_{\msc{Hw}}\omega_{p,M}}-1)$ in the following argument.}. This is approximately evaluated as (we omit the parameters $\rho_0$, $\theta_0$ for simplicity) \begin{align} & {\cal A}_{\msc{cylinder}}^{(E)} \cr &= \int_0^{\infty} dT\, {}_{D1} \bra{B} e^{-\pi T H^{(c)}} \ket{B}_{D1} \approx \sum_M \sum_{n\in \mathbb{Z}} \int dp \, \frac{1} {p^2+ \left(\frac{2\pi n}{\beta_{\msc{Hw}}} \right)^2+M^2} \, \sqrt{\rho(M)} \left|\Psi_{D1}(p,n)\right|^2~ \cr & = \frac{\beta_{\msc{Hw}}}{2\pi} \sum_M \int dpdq \, \sqrt{\rho(M)} \frac{\left|\Psi_{D1}\left(p,\frac{\beta_{\msc{Hw}}q}{2\pi} \right)\right|^2}{p^2+q^2+M^2}\, \left(1+ \sum_{m\in \mathbb{Z}_{>0}}e^{i \beta_{\msc{Hw}} m q} + \sum_{m\in \mathbb{Z}_{>0}}e^{-i \beta_{\msc{Hw}} m q} \right)~. \label{thermal cylinder} \end{align} Here $p$ is the radial momentum and $n$ is the KK momentum along the asymptotic circle of the cigar (the thermal circle). $M$ is again the transverse mass in the ${\cal M}$-sector and $\rho(M)$ is the density of closed string states. $\beta_{\msc{Hw}}\equiv 2\pi \sqrt{2k}$ again denotes the inverse Hawking temperature. Let us try to Wick rotate it by a contour deformation of the $q$-integration, in a similar manner to \cite{Dijkgraaf:1992ba}.
Setting $q=i \omega$\footnote {Here, $\omega$, $p$ are normalized as $L_0 = - \frac{1}{2} \omega^2 + \frac{1}{2}p^2 + \cdots$, rather than $L_0 = -\frac{1}{4k} \omega^2 + \frac{1}{4k} p^2 + \cdots$.}, ${\cal A}^{(L)}_{\msc{cylinder}} = -i {\cal A}^{(E)}_{\msc{cylinder}}$, we formally obtain \begin{align} & {\cal A}^{(L)}_{\msc{cylinder}} \cr & \approx \frac{\beta_{\msc{Hw}}}{2\pi} \sum_M \int dp\, \sqrt{\rho(M)}\, \left\lbrack \int d\omega \, \frac{ \left|\Psi_{D1}\left(p,\frac{i \beta_{\msc{Hw}}\omega}{2\pi} \right) \right|^2 }{p^2+M^2-\omega^2+i\epsilon} -\frac{2\pi i }{\omega_{p,M}} \frac{\left|\Psi_{D1}\left(p, \frac{i \beta_{\msc{Hw}}\omega_{p,M}} {2\pi}\right)\right|^2} {e^{\beta_{\msc{Hw}} \omega_{p,M}}-1} \right\rbrack ~, \label{evaluation AL} \end{align} where $\omega_{p,M} \equiv \sqrt{p^2+M^2}$ is the on-shell energy and we used $$ \left|\Psi_{D1}\left(p,-\frac{i \beta_{\msc{Hw}}\omega_{p,M} } {2\pi}\right)\right|^2 = \left|\Psi_{D1}\left(p,\frac{i \beta_{\msc{Hw}}\omega_{p,M} } {2\pi}\right)\right|^2~. $$ Because we have $ \left|\Psi_{D1}\left(p,\frac{i \beta_{\msc{Hw}}\omega}{2\pi} \right) \right|^2 \propto e^{\frac{1}{2}\beta_{\msc{Hw}} |\omega|}$, the first term (including the Feynman propagator) gives a UV divergent contribution. This is not surprising, and explains why the naive Wick-rotation of \eqref{clss2'} does not work. The second term shows a `thermal-like' form and actually contributes to the imaginary part of the cylinder amplitude we are interested in.
It gives the expected behavior: \begin{align} \Im \, {\cal A}_{\msc{thermal}}^{(L)} &\propto \frac{1}{\omega_{p,M}} \frac{1}{e^{\beta_{\msc{Hw}}\omega_{p,M}}-1} \, \sqrt{\rho(M)} \left|\Psi_{D1}\left(p,\frac{i \beta_{\msc{Hw}}\omega_{p,M}}{2\pi} \right)\right|^2 \cr &\sim \frac{\sqrt{\rho(M)} \sigma(p)}{\omega_{p,M}}\, e^{-\frac{1}{2}\beta_{\msc{Hw}} \omega_{p,M}}~, \label{thermal behavior} \end{align} which reproduces the previous results \eqref{grey body}, including the correct grey body factor $\sigma(p)$. Recall that, in our construction of Lorentzian boundary states, the presence of the damping factor was crucial, which reads as $\frac{\sinh \pi \sqrt{2k} p} {\cosh \left\lbrack \pi \sqrt{\frac{k}{2}}(p+\omega) \right\rbrack \cosh \left\lbrack \pi \sqrt{\frac{k}{2}}(p-\omega)\right\rbrack} $ in the convention here. This factor shows the same asymptotic behavior in the large $\omega$ or large $p$ region as the Boltzmann distribution functions $1/(e^{\beta_{\msc{Hw}}\omega}\pm 1)$. In this sense, our Wick-rotation of boundary states would be roughly identified with the procedure which keeps only the second term in \eqref{evaluation AL}, suggesting the origin of the thermal-like distribution derived from our Lorentzian boundary states. Another helpful argument is obtained by starting with the open string channel of the thermal cylinder amplitude \eqref{thermal cylinder}. Let us focus on the asymptotic region $\rho \gg 0$ for simplicity, in which the hairpin D1-brane \eqref{clss2'} appears just as the two halves of a $D1$-$\bar{D}1$ system which are Dirichlet along the thermal circle (so, identified as the `$sD$-$s\bar{D}$ system' \cite{Strominger:2002pc}), as pointed out in \cite{Nakayama:2004yx}. In this setup, for a simple kinematical reason, we find {\em on-shell} closed string states in the cylinder amplitude, while only {\em off-shell} states in the open string channel.
As discussed {\em e.g.} in \cite{Sugawara:2002rs,Sugawara:2003xt}, using the modular transformation, we can show that the thermal distribution of {\em physical} closed string states emitted/absorbed by the $sD$-$s\bar{D}$ system is captured by the {\em unphysical} open string winding modes along the thermal circle\footnote {This is a simple extension of the standard argument for thermal toroidal partition functions \cite{Polchinski:1985zf,Sathiapalan:1986db,Kogan:1987jd,O'Brien:1987pn,Atick:1988si}. For instance, the Hagedorn behavior is interpretable as the tachyonic instability due to the {\em unphysical} winding modes along the thermal circle. }. In particular, the unit of winding energy should determine the temperature of the thermal distribution of closed string states coupled with the $sD$-$s\bar{D}$ system. In the present case it is identified with the interval of the hairpin $(= \frac{1}{2}\beta_{\msc{Hw}})$, which is just associated with the $D1$-$\bar{D}1$ open string. (Note that, taking the GSO projection suitably into account, we find that the zero winding modes, {\em i.e.} the $D1$-$D1$ or $\bar{D}1$-$\bar{D}1$ strings, cancel out. See \cite{Nakayama:2004yx}.) This is the simplest explanation of why we get the thermal-like distribution $\propto e^{-\frac{1}{2}\beta_{\msc{Hw}}\omega_{p,M}}$ from the cylinder amplitude \eqref{thermal cylinder}. Curiously, all the {\em regular} solutions of the $D0$-brane motion are just straight lines in the Kruskal coordinates, and thus are Wick-rotated to hairpin profiles with the {\em same} interval, $\frac{1}{2}\beta_{\msc{Hw}}$. This fact leads us to the same thermal-like behaviors \eqref{grey body} characterized by the Hawking temperature (before integrating $p$ out)\footnote {One might ask why D0-brane motion with a different `temperature' is not considered. However, such D0-branes correspond to singular hairpin profiles and thus to singular Lorentzian trajectories.
These cannot be solutions of the DBI action of D0-branes, because of the divergence of the velocity at the singular points. Quite interestingly, this feature is similar to Hawking's original idea: requiring a smooth Euclidean geometry fixes the asymptotic periodicity of Euclidean time, which yields the temperature characterizing the radiation from the black hole.}, as is already pointed out. \subsection{Radiation out of rolling D-brane from open string viewpoint}\label{sec:8-2} \subsubsection{open string channel viewpoint}\label{sec:8-2-1} What is the nature of the ultraviolet behavior of the emission number $\overline{\cal N}$, and how does it compare to the decay of the rolling D-brane? To answer these questions, we shall now recast \eqref{ImZ 0} in the open string channel, following the technical procedures considered in section \ref{sec:5-2} and the appendix of \cite{Karczmarek:2003xm}. In the closed string channel, the closed string radiation has been computed as (see section \ref{sec:8-1}) \begin{align} \overline{\cal N} = N^2_{\rm NS} \sum_M \sqrt{\rho^{(c)}(M)} \int_0^{\infty} {\rmd p \over 2 \pi} \, \frac{1}{2\omega} {\cal P}(p, \omega) ~, \label{ImZ 0}\end{align} where $N_{\rm NS}$ is an appropriate numerical factor, $\omega = \sqrt{p^2 + 2kM^2}$ is the on-shell energy of the emitted closed string, and \begin{align} {\cal P}(p, \omega) \equiv \left|\Psi(p,\omega)\right|^2 = \frac {\sinh\pi\sqrt{\frac{\alpha'}{2}} p} {\left(\cosh \pi \sqrt{\frac{\alpha'}{2}} p + \cosh \pi \sqrt{\frac{\alpha'}{2}} \omega \right) \sinh \frac{\pi}{k}\sqrt{\frac{\alpha'}{2}} p }~ \label{powerspec} \end{align} is the transition probability.
We begin by expanding the transition probability ${\cal P}(p,\omega)$ of the D0-brane \eqref{powerspec} as a power series of imaginary-brane contributions: \begin{align} & {\cal P}(p,\omega) = \sum_{n=1}^{\infty} a_n(p) e^{-\pi n\omega\sqrt{\frac{\alpha'}{2}}} ~, \label{expansion} \\ & a_n(p)= 2(-1)^{n+1} \frac{\sinh\left(\pi n \sqrt{\frac{\alpha'}{2}} p \right)} {\sinh (\frac{\pi}{k}\sqrt{\frac{\alpha'}{2}} p)}~. \label{a n} \end{align} As before, we parametrically rewrite \eqref{ImZ 0} as \begin{align} \overline{\cal N} &= N_{\rm NS}^2 \sum_M \sqrt{\rho^{(c)}(M)} \nonumber\\ % &\times \int_0^{\infty} {\rmd p \over 2 \pi\sqrt{2k}} \sum_{n=1}^{\infty} \int_{-\infty}^{\infty} {\rmd k_0 \over 2 \pi \sqrt{2k}}\, \int_0^{\infty} \frac{\alpha'}{2} \rmd t_c \, a_n(p) e^{\pi i n \sqrt{\frac{\alpha'}{2}} k_0} e^{-2\pi t_c \frac{1}{4}\alpha'\left(\frac{k_0^2+p^2}{2k}+M^2\right)}~, \qquad \label{ImZ 1} \end{align} by introducing the Schwinger parameter $t_c$ in the closed string channel.\footnote {Strictly speaking, we could have the closed string tachyon $M^2<0$, and the rewriting \eqref{ImZ 1} would not be completely correct due to the infrared divergence. We can avoid this difficulty by considering the GSO projected amplitude. Since we are concerned with the large $M$ asymptotics, we shall ignore this issue to avoid inessential complexity. } We now evaluate each contribution separately. We begin with the sum over the transverse mass $M$. By definition, the sum gives the modular invariant cylinder amplitude of the ${\cal M}$-sector: \begin{align} \sum_M \sqrt{\rho^{(c)}(M)} e^{-2\pi t_c \frac{\alpha'}{4} M^2} &= Z^{(c)}_{{\cal M}}(q_c) \qquad \mbox{where} \qquad q_c = e^{- 2 \pi t_c} \nonumber\\ &= Z^{(o)}_{{\cal M}}(q_o) \qquad \mbox{where} \qquad q_o = e^{- 2 \pi t_o} \quad (t_o \equiv 1/t_c) \end{align} by applying the standard open-closed duality and expressing the result in terms of the open string Schwinger parameter $t_o$.
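The imaginary-brane expansion \eqref{expansion}, \eqref{a n} is a geometric-series rearrangement of \eqref{powerspec}, convergent for $\omega > p$, and can be checked numerically. A minimal sketch in the dimensionless variables $x=\sqrt{\alpha'/2}\,p$, $y=\sqrt{\alpha'/2}\,\omega$ (the level $k$ and the sample points are arbitrary choices for illustration):

```python
import math

def P_closed(x, y, k):
    # transition probability, eq. (powerspec), with x = sqrt(alpha'/2) p, y = sqrt(alpha'/2) omega
    return math.sinh(math.pi*x) / ((math.cosh(math.pi*x) + math.cosh(math.pi*y))
                                   * math.sinh(math.pi*x/k))

def P_series(x, y, k, nmax=200):
    # imaginary-brane expansion, eqs. (expansion) and (a n); converges for y > x
    total = 0.0
    for n in range(1, nmax + 1):
        # sinh(pi n x) * exp(-pi n y), combined to avoid overflow at large n
        term = 0.5*(math.exp(-math.pi*n*(y - x)) - math.exp(-math.pi*n*(y + x)))
        total += 2*(-1)**(n + 1) * term / math.sinh(math.pi*x/k)
    return total

k = 2.0
for x, y in [(0.3, 1.1), (0.5, 2.0), (1.0, 1.8)]:
    assert abs(P_closed(x, y, k) - P_series(x, y, k)) < 1e-9
```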
The amplitude $Z^{(o)}_{{\cal M}}(t_o)$ asymptotes at large $t_o$ to (corresponding to the ultraviolet behavior in the closed string channel): \begin{align} Z_{{\cal M}}^{(o)}(t_o) \sim t_o^{\gamma}\, e^{2\pi t_o \cdot \frac{c_{{\cal M}}}{24}} = t_o^{\gamma} \, e^{\pi t_o \left(1-\frac{1}{2k}\right)} \qquad \mbox{for} \qquad t_o \, \rightarrow\, +\infty. \end{align} Here, the exponent $\gamma$ is determined by the number of non-compact Neumann directions in the ${\cal M}$-sector. Such details, however, are not relevant for our discussions. The Gaussian integral over $k_0$ is readily evaluated, resulting in \begin{align} & \overline{\cal N} = N^2_{\rm NS} \sqrt{\frac{\alpha'}{2}} \int_0^{\infty} {\rmd p \over 2 \pi\sqrt{2k}} \sum_{n=1}^{\infty} \int_0^{\infty} \frac{\rmd t_o}{t_o^2} \sqrt{t_o} \, a_n(p) \, e^{-\pi t_o \frac{n^2k}{2} -\frac{2\pi}{t_o} \frac{\alpha'}{4}\frac{p^2}{2k}} \cdot Z^{(o)}_{{\cal M}}(t_o)~. \end{align} The $k_0$-integral yields the Boltzmann factor with the temperature determined by the Euclidean periodicity ($1/Q$ in our case) of the `hairpin brane' \cite{Ribault:2003ss,Eguchi:2003ik,Ahn:2003tt,Lukyanov:2003nj}, which is the Euclidean rotation of the rolling D-brane, as clarified in \cite{Nakayama:2005pk,Kutasov:2005rr}. This is essentially the same as the standard argument for the thermal tachyon in thermal string theory \cite{Polchinski:1985zf,Sathiapalan:1986db,Kogan:1987jd,O'Brien:1987pn,Atick:1988si}. Our goal is to re-express the rate \eqref{ImZ 0} in the open string channel, so we shall Fourier transform the closed string momentum $p$ to the open string momentum $p'$. This requires a careful treatment, because the momentum-dependent coefficients $a_n(p)$ in \eqref{a n} could be exponentially growing functions. In such cases, the Fourier transform may not exist in a naive sense.
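Before turning to this subtlety, the Gaussian $k_0$-integral just performed can be verified numerically. A minimal sketch in $\alpha'=2$ units, checking $\int \rmd k_0\, e^{\pi i n k_0 - 2\pi t_c \frac{k_0^2}{4k}} = \sqrt{2k/t_c}\, e^{-\pi t_o \frac{n^2 k}{2}}$ (the sample values of $n$, $k$, $t_c$ are arbitrary):

```python
import math

def k0_integral(n, k, t_c, L=20.0, N=200000):
    # trapezoidal evaluation of the k0-integral in eq. (ImZ 1), alpha' = 2 units;
    # the imaginary part vanishes by symmetry, so only the cosine is integrated
    h = 2*L/N
    total = 0.0
    for i in range(N + 1):
        k0 = -L + i*h
        w = 0.5 if i in (0, N) else 1.0
        total += w*math.cos(math.pi*n*k0)*math.exp(-math.pi*t_c*k0*k0/(2*k))*h
    return total

n, k, t_c = 1, 2.0, 1.0
t_o = 1.0/t_c
expected = math.sqrt(2*k/t_c)*math.exp(-math.pi*t_o*n*n*k/2)   # Gaussian formula
assert abs(k0_integral(n, k, t_c) - expected) < 1e-6
```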
We start with the identity: \begin{align} & e^{-2\pi t_c \cdot \frac{1}{4} \alpha' \frac{p^2}{2k}} = \sqrt{t_o} \int_{\mathbb{R}+i\xi} \sqrt{\frac{\alpha'}{2}}\frac{\rmd p'}{\sqrt{2k}}\, e^{-2\pi t_o \cdot \frac{1}{4} \alpha' \frac{p^{'2}}{2k}+2\pi i \cdot \frac{1}{2} \alpha' \frac{p p'}{2k}} \qquad \mbox{for} \qquad \xi \in \mathbb{R}~. \label{gauss xi} \end{align} In the $p$-integral, the function $e^{2\pi i \frac{1}{2} \alpha' \frac{p p'}{2k}}$ works as a damping factor and renders the integral finite if the parameter $\xi$ is chosen suitably. For later convenience, we shall decompose $a_n(p)$ as \begin{align} & a_n(p) = a_n^+(p)-a_n^-(p)~, \nonumber\\ & a^{\pm}_n(p) \equiv (-1)^{n+1} \frac{e^{\pm\pi n \sqrt{\frac{\alpha'}{2}} p}} {\sinh \left(\frac{\pi}{k} \sqrt{\frac{\alpha'}{2}}\, p\right)}~. \label{a pm} \end{align} Observing the asymptotic behavior of the coefficients $a^+_n(p)$, we readily find that the closed string channel momentum integral $\displaystyle \int \rmd p\, a^+_n(p) \, e^{2\pi i \cdot \frac{1}{2} \alpha' \frac{p p'}{2k}}$ is well-defined as long as $\xi^+_n$ is chosen within the range $(nk-1)<\sqrt{\frac{\alpha'}{2}}\xi^+_n < (nk+1)$. We can then safely exchange the order of the integrals. Carrying out the $p$-integral first, we find~\footnote {Here, we are temporarily shifting the contour as $\mathbb{R} \, \rightarrow\, \mathbb{R}-i0$ to avoid the pole $p=0$. We eventually restore it back to $\mathbb{R}$ {\sl after} taking the difference $a_n(p) \equiv a_n^+(p)-a_n^-(p)$.
The final result \eqref{ImZ final} remains intact, even if another contour shift $\mathbb{R}+i0$ is taken, as is easily checked.} \begin{align} & \int_{\mathbb{R}-i0}\sqrt{\frac{\alpha'}{2}}\frac{\rmd p}{\sqrt{2k}}\, a^+_n(p) e^{-2\pi t_c \frac{\alpha'}{4} \frac{p^2}{2k}} = (-1)^{n+1}\frac{i\sqrt{t_o k}}{\sqrt{2}} \int_{\mathbb{R} + i\xi^+_n} \frac{\rmd p'}{\sqrt{2k}}\, \frac {e^{\pi \left(\sqrt{\frac{\alpha'}{2}}\frac{p'}{2}-i \frac{nk}{2}\right) -2\pi t_o \frac{\alpha'}{4}\frac{p^{'2}}{2k}}} {\cosh \pi \left(\sqrt{\frac{\alpha'}{2}}\frac{p'}{2}-i\frac{nk}{2}\right)}~. \qquad \label{evaluation 2} \end{align} Finally, we shift the contour back, $\mathbb{R}+i\xi_n^+ \, \rightarrow\, \mathbb{R}$, so that the open string momentum $p'$ is real-valued. In this step, we cross poles, so we need to take care of the pole contributions. (See Figure \ref{contour1}.) \begin{figure}[htbp] \begin{center} \includegraphics[width=13cm,height=10cm] {contour1.eps} \end{center} \caption{ Deformation of the contour from the broken line to the solid line picks up pole contributions.} \label{contour1} \end{figure} The relevant poles are located at \begin{align} & \sqrt{\frac{\alpha'}{2}}p' = i {\alpha_m }~, ~~~ \alpha_m \equiv nk -2\left(m+\frac{1}{2}\right) \quad \mbox{where} \quad m=0,1, \ldots, \left\lbrack \frac{nk}{2}-\frac{1}{2}\right\rbrack~, \qquad \quad\label{poles} \end{align} where $\lbrack ~~ \rbrack$ denotes the Gauss symbol, and their residues are evaluated as $(-1)^{n+1}\frac{i}{\pi} e^{\pi t_o \frac{\alpha_m^2}{2k}}$.
We thus obtain \begin{align} \int_{\mathbb{R}-i0}\sqrt{\frac{\alpha'}{2}}\frac{\rmd p}{\sqrt{2k}}\, a^+_n(p) e^{-2\pi t_c \frac{\alpha'}{4} \frac{p^2}{2k}} &= (-1)^{n+1}\frac{i\sqrt{kt_o}}{\sqrt{2}} \int_{-\infty}^{\infty} \sqrt{\frac{\alpha'}{2}}\frac{\rmd p'}{\sqrt{2k}}\, \frac {e^{\pi \left(\sqrt{\frac{\alpha'}{2}}\frac{p'}{2}-i \frac{nk}{2}\right) -2\pi t_o \frac{\alpha'}{4} \frac{p^{'2}}{2k}}} {\cosh \pi \left(\sqrt{\frac{\alpha'}{2}}\frac{p'}{2}-i\frac{nk}{2}\right)} \nonumber\\ &+ 2(-1)^{n+1} \sqrt{t_o} \sum_{m=0}^{\left\lbrack \frac{nk}{2}-\frac{1}{2}\right\rbrack}\, e^{\frac{\pi t_ok}{2} \left[n-\frac{2}{k}\left(m+\frac{1}{2}\right) \right]^2}~. \label{evaluation 3} \end{align} The integral of $a^-_n(p)$ is calculated in a similar way. This time, we should start with the contour $\mathbb{R}+i \xi^-_n$ with $(-nk- 1) < \sqrt{\frac{\alpha'}{2}}\xi^-_n < (-nk+1)$ and, after performing the $p$-integral first, again shift it back to $\mathbb{R}+i\xi^-_n\,\rightarrow\,\mathbb{R}$. The relevant pole contributions come from $\sqrt{\frac{\alpha'}{2}}p'=-i\alpha_m$ ($m=0,1,\ldots, \left\lbrack \frac{nk}{2}-\frac{1}{2}\right\rbrack$), and we obtain \begin{align} \int_{\mathbb{R}-i0} \sqrt{\frac{\alpha'}{2}} \frac{\rmd p}{\sqrt{2k}}\, a^-_n(p)\, e^{-2\pi t_c \frac{\alpha'}{4} \frac{p^2}{2k}} &= (-1)^{n+1}\frac{i\sqrt{t_o k}}{\sqrt{2}} \int_{-\infty}^{\infty} \sqrt{\frac{\alpha'}{2}}\frac{\rmd p'}{\sqrt{2k}}\, \frac {e^{\pi \left(\sqrt{\frac{\alpha'}{2}}\frac{p'}{2}+i \frac{nk}{2}\right) -2\pi t_o \frac{\alpha'}{4} \frac{p^{'2}}{2k}}} {\cosh \pi \left(\sqrt{\frac{\alpha'}{2}}\frac{ p'}{2}+i\frac{nk}{2}\right)} \nonumber\\ &- 2(-1)^{n+1} \sqrt{t_o} \sum_{m=0}^{\left\lbrack \frac{nk}{2}-\frac{1}{2}\right\rbrack}\, e^{\frac{\pi t_o k}{2} \left[ n-\frac{2}{k}\left(m+\frac{1}{2}\right) \right]^2}~. \label{evaluation 4} \end{align} Notice that the relative sign change in the pole term compared to the $a^+_n(p)$ integral originates from the orientation of the integration contour around each pole.
Therefore, we find \begin{align} \int_0^{\infty}\sqrt{\frac{\alpha'}{2}}\frac{\rmd p}{\sqrt{2k}}\, a_n(p)e^{-2\pi t_c \frac{\alpha'}{4} \frac{p^2}{2k}} &= \frac{1}{2} \int_{-\infty}^{\infty} \sqrt{\frac{\alpha'}{2}}\frac{\rmd p}{\sqrt{2k}}\, \left(a_n^+(p) - a_n^-(p)\right)e^{-2\pi \alpha' \frac{t_c}{4} \frac{p^2}{2k}} \nonumber\\ &= (-1)^{n+1}\frac{\sqrt{t_ok}}{\sqrt{2}} \int_{-\infty}^{\infty}\sqrt{\frac{\alpha'}{2}}\frac{\rmd p'}{\sqrt{2k}}\, \frac{\sin \left(\pi n k \right) e^{-2\pi t_o \frac{\alpha'}{4} \frac{p^{'2}}{2k}}} {\cosh \left(\pi \sqrt{\frac{\alpha'}{2}} p'\right) +\cos \left(\pi n k\right)} \nonumber\\ &+ 2 (-1)^{n+1} \sqrt{t_o} \sum_{m=0}^{\left\lbrack \frac{nk}{2}-\frac{1}{2} \right\rbrack} \, e^{\frac{\pi t_o k}{2} \left[ n-\frac{2}{k}\left(m+\frac{1}{2}\right) \right]^2}~. \label{evaluation 5} \end{align} In this way, we derive the desired open string channel expression of the total radiation rate: \begin{align} \overline{{\cal N}} &= N^2_{\rm NS} \int_0^{\infty} \frac{\rmd t_o}{t_o}\, \Big( F_{\rm naive} (t_o) + F_{\rm pole} (t_o) \Big)~, \nonumber\\ F_{\rm naive} (t_o) &= \sqrt{\frac{k}{2}}\int_{-\infty}^{\infty} \sqrt{\frac{\alpha'}{2}} \frac{\rmd p'}{\sqrt{2k}} \sum_{n=1}^{\infty} \, (-1)^{n+1} \frac{\sin \left(\pi nk\right) e^{-\pi t_o \left( \frac{\alpha'}{2} \frac{p^{'2}}{2k} + \frac{n^2 k}{2}\right)}} {\cosh \left(\pi \sqrt{\frac{\alpha'}{2}} p' \right) + \cos \left(\pi n k\right)} \, Z_{{\cal M}}^{(o)}(t_o) \nonumber\\ F_{\rm pole}(t_o) &= 2 \sum_{n=1}^{\infty} (-1)^{n+1} \sum_{m=0}^{\left\lbrack \frac{nk}{2}-\frac{1}{2}\right\rbrack} \,e^{\pi t_o \left[\frac{2}{k}\left(m+\frac{1}{2}\right)^2- 2n \left(m+\frac{1}{2}\right)\right]}\, Z_{{\cal M}}^{(o)}(t_o)~.
\label{ImZ final} \end{align} The first term in \eqref{ImZ final} coincides with the total radiation claimed by \cite{Okuyama:2006zr} modulo an inessential numerical factor~\footnote {It differs slightly from the one given in \cite{Okuyama:2006zr} in that we study the fermionic string, while \cite{Okuyama:2006zr} studies the bosonic string.}. It remains finite as $t_o\,\rightarrow\, + \infty$. The second term, which the analysis of \cite{Okuyama:2006zr} missed altogether, is of crucial importance. It is evident that the $m=0$ term is the leading contribution for each $n$.\footnote{Here we have assumed that $k>1$.} Recalling $Z^{(o)}_{{\cal M}}(t_o)\, \sim \, e^{\pi t_o \left(1-\frac{1}{2k}\right)}$ asymptotically (up to pre-exponential power corrections), each $m=0$ term behaves as \begin{align} \sim e^{\pi t_o \left(\frac{1}{2k}- n\right) + \pi t_o \left(1-\frac{1}{2k}\right)} = e^{\pi t_o (1-n)} \qquad \mbox{as} \qquad t_o\,\rightarrow\, + \infty~. \label{evaluation m=0 term} \end{align} Therefore, we get the leading contribution from the $n=1$ term, which shows a massless behavior. Hence, we have reproduced the Hagedorn-growth behavior expected in \cite{Nakayama:2004yx,Sahakyan:2004cq, Nakayama:2004ge,Nakayama:2005pk}. Notice that all the $n>1$ contributions are massive, and thus are not relevant in the ultraviolet regime of the closed string radiation. \subsubsection{Lorentzian cylinder amplitude}\label{sec:8-2-2} In the previous section, we recast the total emission number $\overline{{\cal N}}$ of the rolling D0-brane, defined as the sum over the on-shell states of the emitted closed string \eqref{ImZ 0}, in the open string channel. Now, by the optical theorem and the channel duality, we ought to be able to obtain $\overline{{\cal N}}$ equally well from the cylinder amplitude evaluated in the open string channel.
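The exponent bookkeeping behind \eqref{evaluation m=0 term} — combining the pole exponents of $F_{\rm pole}$ in \eqref{ImZ final} with the asymptotics of $Z^{(o)}_{{\cal M}}$ — can be checked with exact rational arithmetic. A small sketch (the value of $k$ is only a sample with $k>1$):

```python
from fractions import Fraction

k = Fraction(5, 2)   # sample level, k > 1

def pole_exponent(n, m, k):
    # coefficient of pi*t_o in the (n, m) term of F_pole, eq. (ImZ final)
    return Fraction(2, 1)/k*Fraction(2*m + 1, 2)**2 - 2*n*Fraction(2*m + 1, 2)

Z_M_exponent = 1 - Fraction(1, 2)/k     # Z_M^(o) ~ exp(pi*t_o*(1 - 1/(2k)))

for n in range(1, 6):
    # the m = 0 term combined with Z_M^(o) grows like exp(pi*t_o*(1 - n)):
    # massless behavior for n = 1, massive (suppressed) for n > 1
    assert pole_exponent(n, 0, k) + Z_M_exponent == 1 - n
    # m = 0 indeed maximizes the exponent over the allowed range of m
    mmax = int(n*k/2 - Fraction(1, 2))  # Gauss symbol [nk/2 - 1/2]
    assert all(pole_exponent(n, m, k) <= pole_exponent(n, 0, k) for m in range(mmax + 1))
```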
In this section, we shall compute explicitly the cylinder amplitude in the open string channel and show that its imaginary part reproduces precisely the result \eqref{ImZ final}. This serves as a non-trivial check of our previous analysis, confirming consistency with unitarity and with the open-closed channel duality. Notice in particular that the channel duality is far from obvious for a world-sheet of Lorentzian signature. For definiteness, we continue to focus on the NS sector. We start with the cylinder amplitude with Lorentzian world-sheet~\footnote {Here, we stress the importance of taking the world-sheet Lorentzian. The Fourier transformation from the closed to the open channel is well-defined only for the Lorentzian $\omega_L$ in space-time. Accordingly, we need to take the Lorentzian world-sheet so that the cylinder amplitude becomes well-defined.} $Z_{\rm cylinder}$: \begin{align} & Z_{\rm cylinder} \nonumber\\ &= i \frac{\alpha'}{2} \int_{s_c^{\rm UV}}^{s_c^{\rm IR}} \rmd s_c \int_{0}^{\infty}\frac{\rmd p}{\sqrt{2k}} \int_{-\infty}^{\infty} \frac{\rmd \omega_L}{\sqrt{2k}} \, \frac {\sinh \left(\pi \sqrt{\frac{\alpha'}{2}}p\right)} {\left[\cosh\left(\pi \sqrt{\frac{\alpha'}{2}} \omega_L\right) +\cosh\left(\pi \sqrt{\frac{\alpha'}{2}}p \right)\right] \sinh(\frac{\pi}{k} \sqrt{\frac{\alpha'}{2}} p)} \nonumber\\ & \hskip3cm \times \frac{q_c^{\frac{1}{4}\alpha'(\frac{p^{2}}{2k}-(1-i\hat{\epsilon})^2\frac{\omega_L^{2}}{2k})}} {\eta(q_c)^2} \frac{\th_3(q_c)}{\eta(q_c)} \cdot Z_{{\cal M}}^{(c)}(q_c) \cdot \eta(q_c)^2 \frac{\eta(q_c)}{\th_3(q_c)}~. \label{zl} \end{align} Here, we again adopt the $i\epsilon$-prescription for the Lorentzian world-sheet, and the $-i \hat{\epsilon}$-prescription for the Lorentzian space-time. The integration is well-defined so long as $2 \hat{\epsilon} s_c^{\rm UV} > \epsilon >0$ is retained. The second line in \eqref{zl} combines the contributions of the SL(2)$_k$/U(1), ${\cal M}$, and world-sheet ghost sectors.
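The modular transformation to the open string channel performed below relies on the Fourier transform identity \eqref{FT formula}, which follows from $\int_0^{\infty}\frac{\sin(c x)}{\sinh(\pi x)}\rmd x = \frac{1}{2}\tanh\frac{c}{2}$. A quick numerical quadrature check (the sample values of $a$ and of the conjugate variable are arbitrary; since the integrand is even, twice the half-line integral is computed):

```python
import math

def lhs(a, kk, L=25.0, N=250000):
    # 2 * \int_0^L sin(pi a x) cos(2 pi kk x) / sinh(pi x) dx, trapezoidal rule;
    # the integrand has the finite limit a at x = 0
    h = L/N
    total = 0.5*a
    for i in range(1, N):
        x = i*h
        total += math.sin(math.pi*a*x)*math.cos(2*math.pi*kk*x)/math.sinh(math.pi*x)
    return 2*h*total

def rhs(a, kk):
    # right-hand side of eq. (FT formula)
    return math.sinh(math.pi*a)/(math.cosh(2*math.pi*kk) + math.cosh(math.pi*a))

for a, kk in [(0.5, 0.3), (1.2, 0.7), (0.9, 1.5)]:
    assert abs(lhs(a, kk) - rhs(a, kk)) < 1e-6
```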
The ghost contribution $\eta(q_c)^2 \frac{\eta(q_c)}{\th_3(q_c)}$ is seen to cancel out the contribution of the longitudinal oscillators. Thus, the amplitude simplifies to \begin{align} & \hskip0.5cm Z_{\rm cylinder} \nonumber\\ &= i\frac{\alpha'}{2} \int_{s_c^{\rm UV}}^{s_c^{\rm IR}} \rmd s_c \int_{0}^{\infty} \frac{\rmd p }{\sqrt{2k}}\int_{-\infty}^{\infty} \frac{\rmd \omega_L}{\sqrt{2k}} \, \frac {\sinh \left(\pi \sqrt{\frac{\alpha'}{2}}p \right) q_c^{\frac{1}{4}\alpha'(\frac{p^{2}}{2k}-(1-i\hat{\epsilon})^2\frac{\omega_L^{2}}{2k})} \cdot Z_{{\cal M}}^{(c)}(q_c) } {\left[\cosh\left(\pi \sqrt{\frac{\alpha'}{2}} \omega_L\right) +\cosh\left(\pi \sqrt{\frac{\alpha'}{2}} p \right)\right] \sinh(\frac{\pi}{k} \sqrt{\frac{\alpha'}{2}} p)}~. \nonumber\\ & \label{L cylinder closed} \end{align} We now modular transform \eqref{L cylinder closed} to the open string channel. Define again the open string modulus as $q_o=e^{-2\pi i \tau_o}$, where $\tau_o = s_o - i \epsilon$ and $s_o = 1/s_c$. Using the Fourier transform identity \begin{align} & \int_{-\infty}^{\infty}\rmd x\, \frac{\sin(\pi a x)}{\sinh(\pi x)} e^{-2\pi i kx} = \frac{\sinh (\pi a)}{\cosh(2\pi k)+ \cosh (\pi a)} ~, \qquad (\left|\mbox{Im}\,a\right| < 1) \label{FT formula} \end{align} we then obtain \begin{align} & \hskip+0.5cm Z_{\rm cylinder} = \nonumber\\ & \hskip-0.8cm \frac{i\alpha'}{4} \int_{s_o^{\rm UV}}^{s_o^{\rm IR}} \frac{\rmd s_o}{s_o} \int_{-\infty}^{\infty}\frac{\rmd p'}{\sqrt{2k}} \int_{-\infty}^{\infty}\frac{\rmd \omega_L'}{\sqrt{2k}}\, \frac {\sinh \left(\pi \sqrt{\frac{\alpha'}{2}} \omega'_L \right) q_o^{\frac{1}{4}\alpha'(\frac{p^{'2}}{2k}-(1+i\hat{\epsilon}')^2\frac{\omega_L^{'2}}{2k})} \cdot Z_{{\cal M}}^{(o)}(q_o) } {\left[\cosh\left(\pi \sqrt{\frac{\alpha'}{2}} \omega'_L \right) +\cosh\left(\pi \sqrt{\frac{\alpha'}{2}}p' \right)\right]\sinh(\frac{\pi}{k} \sqrt{\frac{\alpha'}{2}} \omega_L')}.\,\,\,\,\,\, \nonumber\\ & \label{L cylinder open} \end{align} Again $s_o^{\rm UV} \equiv 1/s_c^{\rm IR}$, $s_o^{\rm IR}\equiv 1/s_c^{\rm UV}$ are the cut-offs, and the expression \eqref{L cylinder open} is well-defined so long as $2 \hat{\epsilon}' s_o^{\rm UV} > \epsilon$. \subsubsection{analytic continuation}\label{sec:8-2-3} We shall now analytically continue both the space-time and the world-sheet to Euclidean signature. We have to perform the continuation carefully, so as to keep the original amplitude \eqref{L cylinder open} unchanged (up to the cut-offs). As in the previous section, we should first Wick rotate in space-time $\omega'_L\, \rightarrow\, e^{i(\frac{\pi}{2}-0)} \omega'_L$ with $\omega'_L = i \omega'$ $(\omega' \in \mathbb{R})$, and then rotate the world-sheet $s_o \, \rightarrow \, - i t_o$ ($t_o>0$). We shall omit the cut-offs from now on. We reach the expression \begin{align} Z_{\rm cylinder} = Z_{\rm naive} + Z_{\rm pole}~, \end{align} where the first part is the contribution from the naive continuation, while the second part originates from the poles crossed by the rotated contour: $\omega'_L\, \rightarrow\, e^{i(\frac{\pi}{2}-0)} \omega'_L$. See Figure \ref{contour2}. \begin{figure}[htbp] \begin{center} \includegraphics[width=13cm,height=10cm] {contour2.eps} \end{center} \caption{ The $\omega_L$-integral with the Lorentzian contour (broken line) and the Euclidean contour (solid line).} \label{contour2} \end{figure} The first part $Z_{\rm naive}$ is given by \begin{align} & \hspace{+0.2cm} Z_{\rm naive} = \nonumber\\ & \int_{0}^{\infty} \frac{\rmd t_o}{t_o} \int_{-\infty}^{\infty}\!\!\frac{\rmd p'}{\sqrt{2k}} \int_{(1-i0)\mathbb{R}}\frac{\rmd \omega'}{\sqrt{2k}}\, \frac {- \frac{1}{4}\alpha' \sin \left(\pi \sqrt{\frac{\alpha'}{2}}\omega' \right) q_o^{\frac{1}{4}\alpha'(\frac{p^{'2}}{2k}+\frac{\omega^{'2}}{2k})} \cdot Z_{{\cal M}}^{(o)}(q_o) } {\left[\cos\left(\pi \sqrt{\frac{\alpha'}{2}} \omega' \right) +\cosh\left(\pi \sqrt{\frac{\alpha'}{2}} p'\right)\right] \sin(\frac{\pi}{k}\sqrt{\frac{\alpha'}{2}} \omega')}~.
\quad \label{Z naive} \end{align} The second part $Z_{\rm pole}$ arises from the poles located at \begin{align} & \sqrt{\frac{\alpha'}{2}}\omega'_L = \left\{ \begin{array}{ll} \sqrt{\frac{\alpha'}{2}}|p'| + {i} \left(2m+1\right) & ~~ m\in \mathbb{Z}_{\geq 0} \\ -\sqrt{\frac{\alpha'}{2}}|p'| + {i} \left(2m+1\right) & ~~ m\in \mathbb{Z}_{<0 } \end{array} \right. \end{align} whose residues are (after taking the open string channel modulus Euclidean, $q_o=e^{-2\pi t_o}$) \begin{align} & \frac{i}{2} \sqrt{\frac{2}{\alpha'}}\cdot \frac{\sqrt{2}}{2 \pi\sqrt{k}} \frac{e^{\pm \frac{i\pi}{k}t_o(2m+1)\sqrt{\frac{\alpha'}{2}} |p'| - \pi t_o \frac{2}{k}\left(m+\frac{1}{2}\right)^2}} {\sinh \frac{\pi}{k} \left(\pm \sqrt{\frac{\alpha'}{2}} |p'|+i \left(2m+1\right)\right)} ~. \end{align} We thus obtain \begin{align} & Z_{\msc{pole}} = \int_0^{\infty} \frac{\rmd t_o}{t_o}\, \left\lbrack \sum_{m\geq 0} \int_{0}^{\infty}\sqrt{\frac{\alpha'}{2}}\frac{\rmd p'}{\sqrt{2k}} -\sum_{m<0} \int_{-\infty}^{0}\sqrt{\frac{\alpha'}{2}}\frac{\rmd p'}{\sqrt{2k}} \right\rbrack \, {e^{\frac{i\pi}{k} t_o (2m+1)\sqrt{\frac{\alpha'}{2}} p' - \pi t_o \frac{2}{k} \left(m+\frac{1}{2}\right)^2}} \nonumber\\ & \hskip3cm \times 2\pi i \cdot \frac{i\sqrt{2}}{4 \pi\sqrt{k}} \left\lbrack \frac{1}{\sinh \frac{\pi}{k} \left(\sqrt{\frac{\alpha'}{2}} p'+i \left(2m+1\right)\right)} + (p' \leftrightarrow -p') \right\rbrack Z_{{\cal M}}^{(o)}(q_o)\nonumber\\ & \hspace{1cm} = - 2\sqrt{\frac{2}{k}} \int_0^{\infty} \frac{\rmd t_o}{t_o}\, \sum_{m=0}^{\infty} \int_{0}^{\infty}\sqrt{\frac{\alpha'}{2}}\frac{\rmd p'}{\sqrt{2k}}\, \frac{e^{\frac{i\pi}{k} t_o (2m+1)\sqrt{\frac{\alpha'}{2}} p' - \pi t_o \frac{2}{k}\left(m+\frac{1}{2}\right)^2}} {\sinh \frac{\pi}{k} \left( \sqrt{\frac{\alpha'}{2}} p'+i \left(2m+1 \right)\right)} \cdot Z_{{\cal M}}^{(o)}(q_o) ~.
\nonumber\\ & \label{Z pole} \end{align} We have thus obtained manifestly convergent open string channel expressions \eqref{Z naive}, \eqref{Z pole} for the cylinder amplitude in Lorentzian signature of the space-time. \subsubsection{optical theorem at work}\label{sec:8-2-4} With the Lorentzian (in space-time) cylinder amplitude \eqref{Z naive}, \eqref{Z pole} available, we now apply unitarity and obtain the total emission number $\overline{\cal N}$ via the imaginary part of $Z_{\rm cylinder}$. In the analysis of \cite{Okuyama:2006zr}, only the naive contribution $Z_{\msc{naive}}$ was considered. Taking the imaginary part picks up the infinitely many poles located on the real $\omega'$-axis (the imaginary $\omega'_L$-axis), depicted in Figure \ref{contour2}. Their contributions yield \begin{align} & \mbox{Im}\, Z_{\msc{naive}} \nonumber\\ & = - \frac{1}{2} \int_{0}^{\infty} \frac{\rmd t_o}{t_o} \int_{-\infty}^{\infty} \sqrt{\frac{\alpha'}{2}}\frac{\rmd p'}{\sqrt{2k}} \sum_{\stackrel{n\neq 0}{n\in \mathbb{Z}}} \, \pi \mbox{sgn}\,(n) \, \frac{(-1)^n\sqrt{k}}{\pi \sqrt{2}} \frac {\sin\left(\pi n k\right) e^{-\pi t_o \left( \frac{\alpha'}{2} \frac{p^{'2}}{2k}+\frac{n^2 k}{2}\right)}} {\cos\left(\pi n k \right) +\cosh \left(\pi \sqrt{\frac{\alpha'}{2}} p' \right)} \cdot Z_{{\cal M}}^{(o)}(q_o) \nonumber\\ &= -\sum_{n=1}^{\infty}\int_{0}^{\infty} \frac{\rmd t_o}{t_o} \int_{-\infty}^{\infty} \sqrt{\frac{\alpha'}{2}} \frac{\rmd p'}{\sqrt{2k}} \, \frac{(-1)^n\sqrt{k}}{\sqrt{2}} \frac {\sin\left(\pi n k \right) e^{-\pi t_o \left(\frac{\alpha'}{2} \frac{p^{'2}}{2k}+\frac{n^2k}{2}\right)}} {\cos\left(\pi n k\right) +\cosh \left(\pi \sqrt{\frac{\alpha'}{2}}p'\right)} \cdot Z_{{\cal M}}^{(o)}(q_o)~, \label{ImZ 1st} \end{align} reproducing the first term in \eqref{ImZ final}. We next evaluate the contribution from the pole part $Z_{\msc{pole}}$ \eqref{Z pole}.
As is easily seen, taking the imaginary part just amounts to extending the integration region of $p'$ in \eqref{Z pole} to the whole real axis $(-\infty, \infty)$. By closing the $p'$-contour in the upper half plane, we thus obtain \begin{align} & \mbox{Im}\, Z_{\msc{pole}} \nonumber\\ &= i\sqrt{\frac{2}{k}} \int_0^{\infty} \frac{\rmd t_o}{t_o}\, \sum_{m=0}^{\infty} \int_{-\infty}^{\infty}\sqrt{\frac{\alpha'}{2}}\frac{\rmd p'}{\sqrt{2k}}\, \frac{e^{\frac{i\pi}{k} t_o (2m+1)\sqrt{\frac{\alpha'}{2}} p' - \pi t_o \frac{2}{k}\left(m+\frac{1}{2}\right)^2}} {\sinh \frac{\pi}{k} \left(\sqrt{\frac{\alpha'}{2}} p'+i \left(2m+1\right)\right)} \cdot Z_{{\cal M}}^{(o)}(q_o) \nonumber\\ &= 2\pi i \cdot i\sqrt{\frac{2}{k}} \int_0^{\infty} \frac{\rmd t_o}{t_o}\, \sum_{m=0}^{\infty} \sum_{\stackrel{n> \frac{1}{k}\left(2m+1\right)} {n\in \mathbb{Z}_{>0}}} \, \frac{(-1)^n\sqrt{k}}{\pi \sqrt{2}} e^{-\pi t_o n (2m+1)+\pi t_o \frac{2}{k}\left(m+\frac{1}{2}\right)^2} \cdot Z_{{\cal M}}^{(o)}(q_o) \nonumber\\ &= -2 \int_0^{\infty} \frac{\rmd t_o}{t_o} \, \sum_{n=1}^{\infty} \sum_{m=0}^{\left\lbrack \frac{nk}{2}-\frac{1}{2}\right\rbrack} \, (-1)^n e^{-\pi t_o n (2m+1)+\pi t_o \frac{2}{k}\left(m+\frac{1}{2}\right)^2} \cdot Z_{{\cal M}}^{(o)}(q_o)~. \label{ImZ pole} \end{align} The summation range follows from the fact that the poles of the $\sinh$ factor, located at $\sqrt{\frac{\alpha'}{2}}\,p' = i\left(nk-(2m+1)\right)$, lie in the upper half plane precisely when $n > \frac{1}{k}\left(2m+1\right)$. In the last line, we exchanged the order of the double summation. The final result agrees perfectly with the total emission number $\overline{\cal N}$ in \eqref{ImZ final} evaluated via direct computation of the transition amplitudes on the Euclidean world-sheet. \subsubsection{imaginary D-instantons: decaying versus rolling} In the previous sections, we studied spectral observables in causal processes involving the decay of an unstable D-brane and the rolling of an accelerated D-brane.
The main result of this work is that the transformation of the total emission number $\overline{\cal N}$ and the cylinder amplitude $Z_{\rm cylinder}$ from the closed string channel to the open string channel requires careful analytic continuation on the world-sheet and that, unlike results claimed elsewhere in the literature, the analytic continuation we adopt gives results consistent with unitarity via the optical theorem $\overline{\cal N} = \mbox{Im}\, Z_{\rm cylinder}$. In particular, we found that the cylinder amplitude consists in general of two parts, $Z_{\rm cylinder} = Z_{\rm naive} + Z_{\rm pole}$, and the second part is crucial for ensuring unitarity through its imaginary part. While we dealt with decaying or rolling processes of the D-brane, the rules we developed ought to extend to other real-time processes such as open string and D-brane dynamics in an electric field or a plane-wave background. In this section, we highlight several important steps we noted in establishing consistency between the channel duality and unitarity. Throughout this work, the strategy for recasting the closed string emission spectra in the open string channel was to expand the transition probability ${\cal P}(\omega, {\bf p})$ in a power series of `imaginary D-instantons' \cite{Maloney:2003ck,Lambert:2003zr,Gaiotto:2003rm}, viz. contributions of localized states at time $2 \pi i \alpha' W(m,n)$ for decaying D-branes and at time $(2 \pi i \sqrt{\frac{k}{2}}) n \sqrt{\frac{\alpha'}{2}}$ for rolling D-branes, respectively. A crucial difference we noted for the rolling D-brane in the NS5-brane background, $k>1$, is that the weight of the $n$-th imaginary D-instanton, $a_n(p)$, is a non-trivial function of $p$. We emphasized above that the momentum dependence came about because the accelerated D-brane rolls in the two-dimensional subspace $\mathbb{R}_t \times \mathbb{R}_\phi$.
Being process dependent, it could be that, in general, {\em the weights are exponentially growing functions of momentum, and their Fourier transforms are not necessarily well-defined.} This was indeed the case for the rolling D-brane. We thus prescribed the Fourier transform of the D-instanton weight by analytic continuation via a deformed integration contour. The prescription then yielded in the open string channel the contribution $Z_{\rm pole}$ beyond the naive one $Z_{\rm naive}$. Moreover, whereas the naive contribution is always ultraviolet finite, the pole contribution exhibited an ultraviolet divergence. Since $\overline{\cal N}$ (or any higher spectral moment) is ultraviolet divergent, we concluded that the presence of the ultraviolet divergent $Z_{\rm pole}$ is crucial for consistency with unitarity and the channel duality. From the mathematical viewpoint, we found that the pole contribution $Z_{\rm pole}$ in \eqref{ImZ final} is present in so far as we adopt a mathematically well-posed prescription of the Fourier transform. From the physics viewpoint, we can also argue that the first term $Z_{\rm naive}$ by itself cannot be the correct answer and the second term $Z_{\rm pole}$ ought to dominate over the first one. In the range $k>1$, it is easy to see that $\mbox{Im}\,Z_{\rm naive}$ can take a negative value if we tune the value of $k$ suitably within this range. If $Z_{\rm naive}$ were all there is to the cylinder amplitude, the negative value of its imaginary part would contradict the fact that the total emission number $\overline{\cal N}$ is positive by definition. Moreover, for integer values of $k$, which correspond to rolling D-branes in the background of $k$ coincident NS5-branes, we observe that the first term $Z_{\rm naive}$ vanishes identically since the integrand vanishes. The above observations indicate that an extra contribution beyond the naive one, $Z_{\rm naive}$, ought to be present in the cylinder amplitude.
On the other hand, we do not have any contradiction of the cylinder amplitude with unitarity once the contribution $Z_{\rm pole}$ is taken into account. This is because $Z_{\rm pole}$ is dominant over $Z_{\rm naive}$ (generically divergent) and always positive. We conclude that our prescription for the cylinder amplitude renders the total emission number, as extracted from the optical theorem via $\overline{\cal N} = \mbox{Im}\,Z$, always positive and well-defined. The situation is in sharp contrast to that for the decaying D-brane. There, as recapitulated in section 2, the D-instanton weights were constant ($a_{n,m} = 1$), so the issue of the Fourier transform was moot from the outset. Again, as explained in section 2, the momentum independence came about because the unstable D-brane decays at rest (or trivially Lorentz boosted). The situation in the NS5-brane phase $k > 1$ is also in contrast to that in the extreme string phase \cite{Giveon:2005mi}, $1/2 \le k \leq 1$, or in the `out-going' radiation in the nonextremal NS5-brane background (which involves the two-dimensional black hole geometry) \cite{Nakayama:2005pk}. For these, the leading weight $a_1(p)$ is a bounded function and has a well-defined Fourier transform. Thus, there does not arise any extra contribution beyond $Z_{\rm naive}$. We thus obtain via the optical theorem an ultraviolet finite total emission number.\footnote {Even in the deep stringy phase $1/2 \le k \leq 1$, $a_n(p)$ is exponentially divergent for sufficiently large $n$. Therefore, the formula given in \cite{Okuyama:2006zr} still has to be corrected. However, only the $n=1$ term could cause the Hagedorn divergence as noted above. Hence, this correction does not modify the ultraviolet behavior of the emission number density.} In the previous work \cite{Nakayama:2005pk}, we also noted that the first D-instanton weight $a_1(p)$ is identifiable with the `grey body factor' $\sigma(p)$ in the total emission number $\overline{\cal N}$.
There, the identification was based on a saddle-point analysis valid at large mass $M \rightarrow \infty$ in the closed string channel. The present result in the open string channel, where the leading ultraviolet divergence arises from the weight $a_1(p)$, then supports the identification.\footnote{Footnote 3 of \cite{Okuyama:2006zr} claims the saddle-point approximation used in our earlier works is invalid. We disagree with their claim: the relevant integral is of the type \begin{align} \int^\infty \rmd p \, \exp \Big[-M f\Big({p \over M}\Big)\Big]. \nonumber \end{align} As $M \rightarrow \infty$, the saddle-point approximation is well justified in so far as $$ f(p_*/M) \sim {\cal O}(1)~, ~~~ f''(p_*/M) >0~, ~~~ f^{(2n)}(p_*/M) \sim {\cal O}(1)~, ~~ (n \geq 2)~, ~~~ (p_*~:~ \mbox{saddle})~, $$ and this is indeed our case.} \subsubsection{comparisons} From our analysis, it became clear that \cite{Karczmarek:2003xm} obtained the correct result for the decay of an unstable D-brane because the contour rotation in the Fourier transform did not encounter any pole (since the D-instanton weights $a_n(p)$ were $p$-independent constants), so the naive manipulation yielded the correct result. In \cite{Okuyama:2006zr}, the prescription of \cite{Karczmarek:2003xm} was taken literally also for the rolling of the accelerated D-brane. It was then concluded that $Z_{\rm naive}$ constitutes the total cylinder amplitude. We showed throughout this work that this is incorrect since it overlooks the pole contribution $Z_{\rm pole}$. Only after taking this extra contribution into account did we show that the cylinder amplitude is consistent with the channel duality and unitarity.
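The mechanism at stake here, a contour rotation in a Fourier-type integral sweeping across poles and thereby generating extra residue terms, can be checked numerically on an elementary toy integral (this example is ours and is unrelated to the specific string amplitudes above): for $a>0$, closing the contour of $\int \rmd x\, e^{iax}/(x^2+1)$ in the upper half plane picks up the pole at $x=i$, giving $\pi e^{-a}$.

```python
import numpy as np

# Toy check of the residue theorem behind the contour-rotation argument:
# direct numerical quadrature versus the residue answer pi * exp(-a)
# from the single pole at x = i in the upper half plane.

def fourier_integral(a, cutoff=1000.0, step=0.01):
    """Direct numerical evaluation of Re \\int dx e^{iax}/(x^2+1)."""
    x = np.arange(-cutoff, cutoff, step)
    return float(np.sum(np.cos(a * x) / (1.0 + x ** 2)) * step)

def residue_value(a):
    """Residue-theorem answer from the pole at x = i (valid for a > 0)."""
    return float(np.pi * np.exp(-a))
```

The two evaluations agree to the accuracy of the quadrature; in the string amplitude the analogous residue terms are precisely what assemble into $Z_{\rm pole}$.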
Finally, we find it illuminating to understand why $\overline{\cal N}$ exhibited a Hagedorn divergence in the two-dimensional string theory studied in \cite{Klebanov:2003km}, whereas it is ultraviolet finite in the linear dilaton background studied in \cite{Karczmarek:2003xm} in two-dimensional space-time (that is, $c_{\msc{eff}}=0$). The reason is that the boundary wave function (D-brane transition amplitude) of the former has non-trivial $p$-dependence that exponentially diverges, whereas the latter does not. We finish this subsection with a comment on the origin of the ``black hole - string transition" from the open string viewpoint. Operationally, the transition occurs because in the double summation of \eqref{ImZ pole}, the lightest contribution $(m=0, n=1)$ is outside the range of summation for $k<1$: indeed, setting $m=0$, $n=1$ in the condition $n > \frac{1}{k}(2m+1)$ requires $k>1$. In other words, the lightest open string exchange mode, contributing to the power-like divergence of the imaginary part, is projected out. This suggests that the long-range interaction between the branes is drastically different for $k<1$ and $k>1$. It would be of great interest to uncover this phenomenon and explain the intuitive reason for the breaking of the tachyon - radion correspondence from the open string viewpoint. \subsection{Black hole - string transition}\label{sec:8-3} It has been a recurrent theme \cite{'tHooft:1987tz,Holzhey:1991bx,Horowitz:1996nw,Susskind:1993ws,Sen:1995in} that an elementary particle or a string is a black hole: a configuration consisting of (multiple) strings with high enough total mass is equivalent to a black hole of the same mass and other conserved charges, as we have reviewed in section \ref{sec:4}. This raises the question of whether a given configuration is most effectively described in terms of strings or black holes. By the black hole - string transition, we will refer to such a change of the effective description for a configuration involving massive string excitations.
Roughly speaking, the string is dual to the black hole and vice versa. An immediate, interesting question is whether the two-dimensional black hole geometry is also subject to the black hole - string transition and, if so, what precisely the dual of the geometry would be. In this section, we shall investigate this transition by studying the rolling dynamics of a D0-brane placed in the background. If the background undergoes the transition between the black hole and the string configurations, the propagation of a probe D0-brane would be affected accordingly. The transition is triggered by $k$ or $\kappa$, which measures the characteristic curvature scale of the background in string units and hence the strength of string world-sheet effects. We shall explore a signal of the transition by examining the spectral distribution of the closed string radiation out of the rolling D0-brane. Other physical observables associated with the D0-brane would certainly be equally viable probes. Though straightforward to analyze, we shall not consider them in this work. \subsubsection{probing black hole - string transition via D-brane}\label{sec:8-3-1} In section \ref{sec:8-2}, we observed that ${\cal N}(M)_{\msc{in}} \gg {\cal N}(M)_{\msc{out}}$ for both the supersymmetric and bosonic string theories when the string world-sheet effects are weak enough, viz. $k > 1$ and $\kappa > 3$, respectively. Obviously, such behavior can be interpreted as indicating that the background on which the radiative process takes place is indeed a black hole: the D0-brane falls into the horizon and the subsequent radiation is mostly absorbed by the black hole. On the other hand, the behavior ${\cal N} (M)_{\msc{in}}\sim {\cal N} (M)_{\msc{out}} \gg \rho(M)^{-1}$ for $k<1$ or $\kappa < 3$ does not seem to bear features present in the black hole background: while the D0-brane falls inward, the subsequent radiation is not mostly absorbed by the black hole but disperses away.
Since this is the regime where the string world-sheet effects are significant, the background may be described most effectively in terms of strings. We are thus led to conclude that the background, whose stringy effects are controlled by the parameter $k$ or $\kappa$, would undergo a phase transition between the black hole and the string across $k=1$ or $\kappa = 3$. In a different physical context, this so-called ``black hole - string transition" was studied recently \cite{Karczmarek:2004bw,Giveon:2005mi}. What distinguishes our consideration and result from \cite{Karczmarek:2004bw,Giveon:2005mi} is that we are probing a possible phase transition of the (closed string) background by introducing a D0-brane in it and studying the open string dynamics. The possible existence of such a phase transition was first hinted at in \cite{Kutasov:1990ua} in the closed string sector, where it was observed that the ${\cal N}=2$ Liouville superpotential becomes normalizable once $k>1$, violating the Seiberg bound. Recall that the marginal interaction term is \begin{align} S^{\pm} = \psi^{\mp} e^{-\frac{1}{{\cal Q}}(\phi \pm iY)}~, ~~~({\cal Q}=\sqrt{2/k}) \end{align} for the ${\cal N}=2$ Liouville theory, and \begin{align} S^{\pm} = e^{-\frac{1}{{\cal Q}}(\phi \pm \sqrt{1+{\cal Q}^2}iY)} \equiv e^{-\sqrt{\frac{\kappa-2}{2}} \phi \mp \sqrt{\frac{\kappa}{2}} iY}~, ~~~ ({\cal Q}=\sqrt{2/(\kappa-2)})~, \end{align} for the bosonic sine-Liouville theory, respectively. Both interactions are normalizable (exponentially falling off in the asymptotic far region) if the curvature is sufficiently small that $k>1$ or $\kappa >3$ is satisfied. As is well-known, the ${\cal N}=2$ Liouville or sine-Liouville theory is T-dual to the $SL(2;\mathbb{R})/U(1)$ coset theory \cite{FZZ,Giveon:1999px,Giveon:1999tq,Hori:2001ax}, so the condition on the level $k$ or $\kappa$ ought naturally to descend to the two-dimensional black hole description.
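The two thresholds can be seen in a uniform way (a rough count on our part, assuming the standard convention that, in a linear dilaton background of slope ${\cal Q}$, an exponential $e^{-a\phi}$ is normalizable for $a > {\cal Q}/2$): both interactions above decay as $e^{-\phi/{\cal Q}}$, so normalizability amounts to
\begin{align}
\frac{1}{{\cal Q}} > \frac{{\cal Q}}{2}
\quad \Longleftrightarrow \quad
{\cal Q}^2 < 2
\quad \Longleftrightarrow \quad
\left\{
\begin{array}{ll}
k>1 & \left({\cal Q}^2 = 2/k\right)~, \\
\kappa>3 & \left({\cal Q}^2 = 2/(\kappa-2)\right)~,
\end{array}
\right.
\nonumber
\end{align}
in accord with the values quoted above.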
Indeed, such an aspect was discussed in \cite{Karczmarek:2004bw} purely in the language of the $SL(2;\mathbb{R})/U(1)$ coset theory (see also \cite{Hori:2001ax}). Their reasoning is closely related to the non-formation of the black hole in two-dimensional string theory (see also \cite{Friess:2004tq} for the discussion concerning this issue from the matrix model viewpoint). In the strong curvature regime, $k<1$, the background is described more effectively in terms of the ${\cal N}=2$ Liouville theory, as it is weakly coupled. Evidently, the black hole interpretation of the $SL(2;\mathbb{R})/U(1)$ theory is less clear in this region, because the classical ${\cal N}=2$ Liouville theory does not admit an interpretation in terms of black hole geometry in any obvious way. We emphasize that such a black hole - string transition is not likely to arise perturbatively and could arise only from nonperturbative string world-sheet effects, as we have reviewed in sections \ref{sec:3} and \ref{sec:4}. For instance, tree-level closed string amplitudes are manifestly analytic with respect to the level $k$. These amplitudes exhibit a finite absorption rate (thus displaying the non-unitarity of the reflection amplitudes) regardless of the value of $k$. In fact, finite-$k$ corrections to the amplitudes yield an irrelevant phase factor \cite{Dijkgraaf:1992ba,Giveon:2003wn}. However, as was first observed in \cite{Nakayama:2004ge}, the situation changes drastically if we consider the closed string radiation from the rolling D-brane in such a background. In \cite{Nakayama:2004ge}, it was shown that the distribution of radiation off the D0-brane in the extremal NS5-brane background becomes ultraviolet finite for $k < 1$.
In the previous section, extending the analysis of \cite{Nakayama:2004ge}, we have shown that the $k=1$ transition shows up manifestly in the open string sector, in the sense that the branching ratio between the incoming and the outgoing radiation distributions (as well as the spectral moments) behaves very differently across $k=1$. Remarkably, retaining the finite $1/k$-correction, which originated from consistency with the exact reflection relations, was crucial in obtaining physically sensible results {\sl even for} $k\gg 1$. The cancellation between the radiation distribution and the exponential growth of the density of states at large $M$ is quite nontrivial, and relied crucially on the precise functional dependence on $k$. An `order parameter' of the transition is thus provided by the radiation distribution of the rolling D-brane. The phase transition across $k=1$ manifests itself in that, while the radiation distribution from the falling D-brane exhibits a power-like ultraviolet divergence for $k>1$, it becomes finite for $k<1$. Thus, the rolling D-brane in the $k<1$ regime does {\it not} yield a large back-reaction, unlike in the $k>1$ case. This is also consistent with the assertion that a black hole cannot be formed in the two-dimensional string theory: it seems difficult to construct a two-dimensional black hole by injecting D-branes into the linear dilaton (or usual Liouville) theory.\footnote {Such a possibility was proposed in \cite{Karczmarek:2004bw}.} It is also worth mentioning that the radion-tachyon correspondence is likely to fail in the two-dimensional string theory ($k=1/2$). In fact, had we had such a correspondence, the rolling radion of the D0-brane could be identified with the rolling tachyon of the ZZ-brane in the Liouville theory. On the other hand, it is known that the radiation distribution of the ZZ-brane exhibits a power-like ultraviolet divergence \cite{Klebanov:2003km} at leading order in string perturbation theory, while that of the falling D0-brane does not.
\subsubsection{holographic viewpoint}\label{sec:8-3-2} The black hole - string transition across $k=1$ also has a natural interpretation in terms of the holographic principle, as recently discussed in \cite{Giveon:2005mi}. Adding $Q_1$ fundamental strings to $k$ NS5-branes, one obtains the familiar bulk geometry of the $AdS_3/CFT_2$-duality. In this context, the density of states of the dual conformal field theory is given by the naive Cardy formula $S=2\pi\sqrt{\frac{cL_0}{6}}+2\pi\sqrt{\frac{\bar{c}\bar{L}_0}{6}}$ with $c = 6 k Q_1$ for $k>1$, but not for $k<1$. Rather, the central charge that should be used in the Cardy formula is replaced by an effective one, $c_{\rm eff}= 6Q_1(2-\frac{1}{k})$ \cite{Kutasov:1990ua}. Similar effects also showed up in the double scaling limit of the `little string theory' (LST) \cite{Giveon:1999px,Giveon:1999tq}.\footnote {Even though the original `little string theory' is the theory of NS5-branes, so that $k$ should be a positive integer, one can also consider models with fractional values of the level $k$, which are generically less than 1. This is achieved by considering the {\em wrapped\/} NS5-brane backgrounds, or compactifications on a Calabi-Yau threefold having a rational singularity \cite{Giveon:1999zm}. From the regularized torus partition function, one can prove that there are no normalizable massless states (corresponding to the `Lehmann-Symanzik-Zimmermann poles' \cite{Aharony:2004xn}) in such string vacua if $k<1$, as was discussed in {\em e.g.} \cite{Eguchi:2004ik,Eguchi:2004yi}. } We shall now show that such a change of the central charge is also imperative for reproducing the closed string radiation distribution correctly from the dual holographic picture. It is an interesting attempt to reproduce the phase transition in the radiation distribution of the rolling D-brane across $k=1$ from the holographic viewpoint.
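As a quick sanity check of the two central charges quoted above (a numeric sketch with helper names of our own, not part of any cited computation), note that they coincide exactly at the transition point $k=1$ and that the effective one is smaller everywhere else, since $2-\frac{1}{k}<k$ is equivalent to $(k-1)^2>0$:

```python
# Compare the naive Cardy central charge c = 6 k Q1 with the effective one
# c_eff = 6 Q1 (2 - 1/k) quoted in the text. Helper names are ours.

def c_naive(k, Q1):
    """Naive Cardy central charge for Q1 fundamental strings on k NS5-branes."""
    return 6.0 * k * Q1

def c_eff(k, Q1):
    """Effective central charge entering the Cardy formula for k < 1."""
    return 6.0 * Q1 * (2.0 - 1.0 / k)

# The two agree exactly at k = 1; for any other k > 0, c_eff < c_naive
# because 2 - 1/k < k is equivalent to (k - 1)^2 > 0.
```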
In \cite{Sahakyan:2004cq}, it was proposed that the rolling D-brane should correspond to the decay of a certain defect in the dual LST. We shall now extend that analysis to the $k<1$ case and explore the phase transition. The relevant holographic description is based on the following two assumptions. \begin{enumerate} \item \underline{fixed radiation number distribution}: The radiation distribution for a fixed mass $M$ is determined by the large $k$ behavior of the pressure in the far future (past). This is equivalent to the statement that the decay of the radion is described by a `holographic tachyon condensation'. We assume that there is no phase transition at $k=1$ for a fixed mass $M$.\footnote {Theoretically, there is no reason to exclude a finite $1/k$ correction here. We only need this assumption phenomenologically in order to reproduce the ten-dimensional calculation even for $k>1$. A priori, the tachyon condensation (in the critical bosonic string) itself may receive large string world-sheet corrections. In the Dirac-Born-Infeld action analysis, such potential corrections were completely dropped.} In our convention, the distribution is given by \begin{equation} {\cal N}(M)_{\rm LST} \sim e^{-2\pi M \sqrt{\frac{k}{2}}} \ . \label{asum} \end{equation} \item \underline{change of density of states}: The final density of closed little string states in the `holographic tachyon condensation' is given by the square root of the full nonperturbative density of states in LST. As is discussed in \cite{Giveon:2005mi}, the full nonperturbative density of states of the LST is believed to exhibit a phase transition at $k=1$: for $k>1$, the density of states is related to the Hawking temperature as \begin{equation} n(M)_{\rm LST} \sim e^{{4\pi} M \sqrt{\frac{k}{2}}} \ .
\label{asuma} \end{equation} In other words, the Hagedorn temperature in LST should be equated with the Hawking temperature \cite{Aharony:1998ub} (see also, {\em e.g.} \cite{Harmark:2000hw,Berkooz:2000mz,Kutasov:2000jp}). On the other hand, for $k<1$, because of the non-normalizability of the black hole excitation, the nonperturbative density of states of the LST is equivalent to the density of states of the (dual) perturbative string theory \cite{Giveon:2005mi}: \begin{equation} n(M)_{\rm LST} \sim e^{4\pi M \sqrt{1-\frac{1}{2k}}} \ . \label{asumb} \end{equation} \end{enumerate} With these assumptions, we can estimate the average radiation number of the `holographic tachyon condensation' to be \begin{align} \overline{{\cal N}}_{\rm LST} = \int_0^\infty \mbox{d} M {\cal N}(M) \, \sqrt{n(M)_{\rm LST}} \ . \nonumber \end{align} Note that, in contrast to the bulk string theory calculation, we have no integration over the radial momentum. Substituting \eqref{asum} and \eqref{asuma} or \eqref{asumb} according to the value of $k$, we obtain \begin{align} \overline{{\cal N}}_{\rm LST} \sim \int^\infty \mbox{d} M \, e^{-{2\pi}M {\sqrt{\frac{k}{2}}} +{2\pi}M \sqrt{\frac{k}{2}}} \nonumber \end{align} for $k>1$, showing powerlike ultraviolet divergent behavior because of the complete cancellation in the exponent, and \begin{align} \overline{{\cal N}}_{\rm LST} \sim \int^\infty \mbox{d} M \, e^{-{2\pi M } {\sqrt{\frac{k}{2}}}+2\pi M \sqrt{1-\frac{1}{2k}}} \ , \nonumber \end{align} for $k<1$, showing exponential suppression in the ultraviolet. It is easy to see that this holographic dual computation reproduces the bulk computation presented in section \ref{radiation super} up to a subleading power dependence \eqref{eq:in}, \eqref{eq:ini}.\footnote {The exact determination of the pre-exponential power part is beyond the scope of the rough estimate presented here. 
It requires the full computational ability in the LST.} It should be noted, however, that the cancellation between the radiation distribution and the density of states has a different origin in the dual holographic description as compared to the bulk side. In the holographic description, the origin of the phase transition is the nonperturbative density of states in LST, while the radiation distribution at a fixed mass level $M$ keeps its functional form unchanged. On the other hand, in the bulk theory, the origin of the cancellation is that the radiation distribution changes at $k=1$ due to the disappearance of the non-trivial saddle point in the integration over the radial momentum $p$, while the density of states is always given by the same formula. Thus the agreement between the two descriptions is quite non-trivial, and we believe that our results provide yet another piece of evidence for the holographic duality for the NS5-brane and black hole physics. Though we presented the dual description based on some assumptions, we can turn the logic around and regard our results as support for those assumptions. In particular, the quantum gravity phase transition at $k=1$ in the dual theory proposed in \cite{Giveon:2005mi} is crucial for understanding the radiation distribution out of a defect decay in the dual LST. We thus propose our discussion in this section as strong support for the black hole - string transition. \subsection{Boundary states and radiation in Ramond-Ramond sector}\label{sec:8-4} In the case of the fermionic black hole background, the rolling D0-brane would also radiate off closed string states in the Ramond-Ramond (R-R) sector. In this section, we shall construct the R-R boundary state of the D0-brane and compute the radiation rates. Since the world-sheet theory corresponds to an ${\cal N} =2$ superconformal field theory, correlation functions of the R-R sector and boundary states are readily obtainable by performing the standard ${\cal N}=2$ spectral flow.
We shall begin with a discussion of the properties of the reflection amplitudes in the R-R sector (see \cite{Giveon:2003wn} in the context of the two-dimensional black hole). Recall that the reflection relation was given in the NS-NS sector as \begin{align} U^{-p}_{\omega}(\rho,t)^{\rm NS}= {\cal R}^{\rm NS}(-p,\omega) U^{p}_{\omega}(\rho,t)^{\rm NS} \quad \mbox{and} \quad V^{-p}_{\omega}(\rho,t)^{\rm NS}= {\cal R}^{\rm NS *}(-p,\omega) V^{p}_{\omega}(\rho,t)^{\rm NS}~, \nonumber \end{align} where the exact reflection amplitude $ {\cal R}^{\rm NS}(-p,\omega)$ was defined by \begin{align} {\cal R}^{\rm NS}(p,\omega) = \frac{\Gamma(1+\frac{ip}{k})\Gamma(+ip) \Gamma^2(\frac{1}{2}-i\frac{p+\omega}{2})} {\Gamma(1-\frac{ip}{k})\Gamma(-ip) \Gamma^2(\frac{1}{2}+i\frac{p-\omega}{2})} \ . \nonumber \end{align} To obtain the reflection relation of the R-R sector, we shall perform the spectral flow by half a unit of the ${\cal N}=2$ $U(1)$ current. In sharp contrast to the ${\cal N}=2$ Liouville theory, the reflection amplitude now depends on the spin structure of the R-R sector.\footnote {This is because, in the ${\cal N}=2$ Liouville theory, the reflection amplitudes for the momentum modes have a symmetry under $\omega \to -\omega$.} Explicitly, the spectral flow is defined as $\omega \to \omega \pm i$, where the $+$ sign corresponds to spin ($+,-$) states and the $-$ sign corresponds to spin ($-,+$) states (in the $(\frac{1}{2},\frac{1}{2})$ picture): in the $\rho \to \infty$ limit, they are described by $S^{\pm} e^{-\rho} e^{-ip \rho-i\omega t}$, and the conformal weight is given by $h = \frac{p^2-\omega^2+1}{4k} + \frac{1}{8}$. Therefore, for the R-R states with spin ($+,-$), the exact reflection amplitudes become \begin{equation} {\cal R}^{\rm R+}(p,\omega) = \frac{\Gamma(1+\frac{ip}{k})\Gamma(+ip) \Gamma^2(1-i\frac{p+\omega}{2})} {\Gamma(1-\frac{ip}{k})\Gamma(-ip) \Gamma^2(1+i\frac{p-\omega}{2})} \ .
\label{refRp} \end{equation} Equivalently, if we take the spin ($-,+$) R-R states, the exact reflection amplitudes become \begin{equation} {\cal R}^{\rm R-}(p,\omega) = \frac{\Gamma(1+\frac{ip}{k})\Gamma(+ip)\Gamma^2(-i\frac{p+\omega}{2})} {\Gamma(1-\frac{ip}{k})\Gamma(-ip)\Gamma^2(+i\frac{p-\omega}{2})} \ . \label{refRm} \end{equation} It is important to notice that the latter amplitudes have a second-order zero in the light-cone direction $p = \omega >0$ (recall that $p>0$ in our convention). Similarly, we could derive the reflection relation for the $(\pm,\pm)$ spin structures, but the resultant amplitudes are compatible only with the analytic continuation to the `winding time' (in the interior of the singularity), so we shall not delve into further details. Consider next the boundary wave function of the R-R sector. For definiteness, we shall take the absorbed D0-brane \eqref{falling D0} (we focus on the $t_0=0$ case for simplicity): \begin{align} {}_{\msc{absorb}}\!\bra{B,{\rm NS};\rho_0} = \int_0^{\infty}\frac{\mbox{d} p}{2\pi} \int_{-\infty}^{\infty}\frac{\mbox{d} \omega}{2\pi}\, \Psi_{\msc{absorb:NS}}(\rho_0;p,\omega) \, {}^{\widehat{U}}\!\dbra{p,\omega} \nonumber \end{align} where \begin{align} \Psi_{\msc{absorb:NS}}(\rho_0;p,\omega) = \frac{\Gamma(\frac{1}{2}-i\frac{p+\omega}{2}) \Gamma(\frac{1}{2}-i\frac{p-\omega}{2})}{\Gamma(1-ip)} \Gamma\left(1+\frac{ip}{k}\right) \, \left[ e^{-ip\rho_0} - \frac{\cosh\left(\pi \frac{p-\omega}{2}\right)} {\cosh\left(\pi \frac{p+\omega}{2}\right)} e^{+ip\rho_0} \right]~.
\nonumber \end{align} The boundary wave functions of the R-R sector are then derived by applying the ${\cal N}=2$ spectral flow $\omega \to \omega \pm i$: \begin{align} \Psi_{\msc{absorb:R}+}(\rho_0;p,\omega) = \frac{\Gamma(-i\frac{p+\omega}{2}) \Gamma(1-i\frac{p-\omega}{2})}{\Gamma(1-ip)} \Gamma\left(1+\frac{ip}{k}\right) \, \left[ e^{-ip\rho_0} + \frac{\sinh\left(\pi \frac{p-\omega}{2}\right)} {\sinh\left(\pi \frac{p+\omega}{2}\right)} e^{+ip\rho_0} \right]~, \nonumber \end{align} and \begin{align} \Psi_{\msc{absorb:R}-}(\rho_0;p,\omega) = \frac{\Gamma(1-i\frac{p+\omega}{2}) \Gamma(-i\frac{p-\omega}{2})}{\Gamma(1-ip)} \Gamma\left(1+\frac{ip}{k}\right) \, \left[ e^{-ip\rho_0} + \frac{\sinh\left(\pi \frac{p-\omega}{2}\right)} {\sinh\left(\pi \frac{p+\omega}{2}\right)} e^{+ip\rho_0} \right]~, \nonumber \end{align} for the two opposite spin structures. These boundary wave functions are of course consistent with the exact reflection amplitudes \eqref{refRp}, \eqref{refRm}. From these boundary wave functions, we can deduce some physical properties of the boundary states in the R-R sector: \begin{itemize} \item For $k > {1 \over 2}$, in the saddle-point approximation of the radial momentum integral, the R-R sector behaves in the same way as the NS-NS sector; in particular, the absolute values of the reflection amplitudes behave in a similar manner. Thus, the radiation distribution of the R-R sector is the same as that of the NS-NS sector. \item For $k = {1 \over 2}$, viz. the two-dimensional black hole, considerable differences arise. Both the boundary wave functions and the reflection amplitudes show a singularity (or zero) when we take a particular spin structure. It is not clear what the origin of these singularities at lightlike on-shell momenta $p = \omega$ would be. We note that some related discussions were given in \cite{Giveon:2003wn}. \item In the mini-superspace limit $k \to \infty$, the mass gap in the R-R sector vanishes.
Therefore, it is well-posed to ask about the radiation of massless R-R states off the R-R charge. From the boundary states given above, we observe that, assuming $p, \omega > 0$, there is no lightlike pole in the $R+$ state, while there is a pole at $p = + \omega$ in the $R-$ state. It is also interesting to note that, in the subleading contribution proportional to $e^{+ip\rho_0}$, the pole from the gamma function is cancelled by the zero of the $\sinh(\pi\frac{p-\omega}{2})$ factor. A possible interpretation is that, roughly speaking, the R-R charge is localized on the incoming light-cone $p = \omega$.\footnote {This is true only in the asymptotic region $\rho \to \infty$, since the distribution near $\rho = 0$ is further related to the basis of Ishibashi states used in the expansion. In the case of the `absorbed' basis, there is no contribution from the past horizon. In addition, because the reflection amplitude vanishes in the $R-$ sector, an observer at $\rho \to \infty$ does not detect any outgoing wave.} \end{itemize} \subsection{Back to the extremal NS5-brane background}\label{sec:8-5} By turning off the non-extremality, $\mu \rightarrow 0$, we are back to the extremal NS5-brane background. Roughly speaking, the extremal background is described by the free linear dilaton theory, but the crucial differences from the non-extremal counterpart studied in this work are the following: \begin{itemize} \item We have no reflection relation, and the $p>0$ and $p<0$ states should be treated as independent states.\footnote {In this sense, the arguments given in \cite{Nakayama:2004yx} are not completely precise, although the main physical results, say, the closed string radiation rates, are not altered. } \item The conformal field theory description is not effective in the entire space-time: the string coupling diverges at the location of the NS5-brane. We cannot completely trace the classical trajectory of the D0-brane \eqref{trajectory D0} without facing the strong coupling problem.
\end{itemize} We thus have to keep in mind that the validity of the conformal field theory description of the extremal NS5-brane is limited to the region of sufficiently weak string coupling. For the extremal NS5-brane, since the relevant conformal field theory involves a linear dilaton and hence is a free theory, we can introduce the basis of Ishibashi states as $\dket{p,\omega}$, $(p,\omega \in \mathbb{R})$, associated with the wave function $\psi^p_{\omega}(\rho,t)\propto e^{-\rho} e^{-ip \rho-i\omega t}$. Another non-trivial difference from the non-extremal case is the volume form of the space-time. Since we have the linear dilaton $\Phi = \mbox{const}-\rho$ and a flat metric $G_{ij}=\eta_{ij}$, the relevant volume form becomes \begin{align} \mbox{d} \mbox{Vol}= e^{-2\Phi}\sqrt{G}\mbox{d} \rho \mbox{d} t = e^{2\rho} \mbox{d} \rho \mbox{d} t~. \label{vol linear dilaton} \end{align} Now, the classical trajectory of the D0-brane in the extremal NS5-brane background is given by \cite{Kutasov:2004dj}: \begin{align} 2\cosh(t-t_0) e^{\rho} = e^{\rho_0}~. \label{trajectory 2} \end{align} The boundary state describing the D0-brane moving along \eqref{trajectory 2} ought to have the following form: \begin{align} \bra{B;\rho_0,t_0} = \int_{-\infty}^{\infty}\frac{\mbox{d} p}{2\pi}\, \int_{-\infty}^{\infty} \frac{\mbox{d} \omega}{2\pi}\, \Psi (\rho_0,t_0;p,\omega) \dbra{p,\omega}~. \label{symmetric D0 extremal} \end{align} The boundary wave function is evaluated as \begin{align} \Psi(\rho_0,t_0;p,\omega) &\sim \int \mbox{d} \mbox{Vol}\, \delta\Big( 2\cosh(t-t_0)e^{\rho}-e^{\rho_0} \Big)\, e^{-\rho-ip\rho-i\omega t} \nonumber\\ &= \int_{-\infty}^{\infty} \mbox{d} t \, e^{-ip\rho_0} e^{-i\omega t} \Big[2 \cosh(t-t_0)\Big]^{ip-1} \nonumber\\ & = \frac{1}{2}B\left(\frac{1}{2}-i\frac{p+\omega}{2}, \frac{1}{2}-i\frac{p-\omega}{2} \right) \, e^{-ip \rho_0-i \omega t_0}~. \nonumber\\ \end{align} In the last expression, we used the formula \eqref{formula 2}.
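As a sanity check (ours, not part of the original text), the Fourier-transform formula used in the last step can be verified numerically with SciPy's complex-valued Gamma function: setting $t_0=0$ and stripping the $e^{-ip\rho_0}$ prefactor, we check $\int_{-\infty}^{\infty}\mbox{d} t\, e^{-i\omega t}\left[2\cosh t\right]^{ip-1} = \frac{1}{2}B\left(\frac{1}{2}-i\frac{p+\omega}{2},\frac{1}{2}-i\frac{p-\omega}{2}\right)$ for a few sample values of $(p,\omega)$.

```python
# Numerical check of  I(p, w) = \int dt e^{-i w t} [2 cosh t]^{ip-1}
#                             = (1/2) B(1/2 - i(p+w)/2, 1/2 - i(p-w)/2)
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def beta(a, b):
    # Euler Beta function, valid for complex arguments
    return gamma(a) * gamma(b) / gamma(a + b)

def lhs(p, w, cut=40.0):
    # |[2 cosh t]^{ip-1}| ~ e^{-|t|}, so a finite cut-off suffices;
    # split the complex integrand into real and imaginary parts for quad
    f = lambda t: np.exp(-1j * w * t) * (2.0 * np.cosh(t) + 0j) ** (1j * p - 1.0)
    re, _ = quad(lambda t: f(t).real, -cut, cut, limit=400)
    im, _ = quad(lambda t: f(t).imag, -cut, cut, limit=400)
    return re + 1j * im

def rhs(p, w):
    return 0.5 * beta(0.5 - 0.5j * (p + w), 0.5 - 0.5j * (p - w))

for p, w in [(0.7, 1.3), (1.5, 0.4)]:
    assert abs(lhs(p, w) - rhs(p, w)) < 1e-6
```

The agreement to numerical precision confirms the Beta-function evaluation quoted above.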
This is essentially the calculation given in \cite{Nakayama:2004yx}. Finally, by restoring the important `world-sheet correction factor' $\Gamma\left(1+i\frac{p}{k}\right)$,\footnote {Since in this case we do not have the reflection relation, the inclusion of the factor $\Gamma\left(1+i\frac{p}{k}\right)$ may sound less firmly justified than in the non-extremal NS5-brane background. We argue that the procedure is actually justified by considering the limit from the non-extremal case.} we obtain the boundary wave function \begin{align} \Psi(\rho_0,t_0;p,\omega) = \frac{1}{2} B(\nu_+,\nu_-) \Gamma\left(1+i\frac{p}{k}\right)\, e^{-ip\rho_0-i\omega t_0}~, \qquad \mbox{where} \qquad \nu_{\pm} \equiv \frac{1}{2}- i\frac{p\pm \omega}{2}~.\nonumber \end{align} This is the extremal counterpart of the `symmetric D0-brane' in the non-extremal NS5-brane background \eqref{symmetric D0}. We can also consider the `half S-brane' counterparts by taking the Hartle-Hawking contours depicted in Figures \ref{HH-future} and \ref{HH-past}. Namely, for the `absorbed brane', we obtain \begin{align} {}_{\msc{absorb}}\bra{B;\rho_0,t_0} = \left( \int_0^{\infty}\frac{\mbox{d} p}{2\pi} \int_0^{\infty}\frac{\mbox{d} \omega}{2\pi} + \int_{-\infty}^0\frac{\mbox{d} p}{2\pi} \int_{-\infty}^0\frac{\mbox{d} \omega}{2\pi}\right)\, \Psi(\rho_0,t_0;p,\omega)\, \dbra{p,\omega} ~, \label{falling D0 extremal} \end{align} and for the `emitted brane', \begin{align} {}_{\msc{emitted}}\bra{B;\rho_0,t_0} = \left( \int_0^{\infty}\frac{\mbox{d} p}{2\pi} \int^0_{-\infty}\frac{\mbox{d} \omega}{2\pi} + \int_{-\infty}^0 \frac{\mbox{d} p}{2\pi} \int^{\infty}_0\frac{\mbox{d} \omega}{2\pi}\right)\, \Psi(\rho_0,t_0;p,\omega)\, \dbra{p,\omega} ~. \label{emitted D0 extremal} \end{align} They are regarded as the counterparts of \eqref{HH symm D0 1} and \eqref{HH symm D0 2}.
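As an aside (our own cross-check, not from the thesis), the modulus squared of this boundary wave function, which controls the radiation distribution, admits a simple closed form: using the standard identities $|\Gamma(\frac{1}{2}+ix)|^2=\pi/\cosh(\pi x)$ and $|\Gamma(1+ix)|^2=\pi x/\sinh(\pi x)$, one finds $|\Psi|^2 = \frac{\pi^2\, (p/k)\,\sinh(\pi p)}{4p\,\sinh(\pi p/k)\,\cosh\left(\pi\frac{p+\omega}{2}\right)\cosh\left(\pi\frac{p-\omega}{2}\right)}$, manifestly independent of $\rho_0$, $t_0$, which enter only as phases. A short numerical verification:

```python
# Cross-check |Psi|^2 for Psi = (1/2) B(nu+, nu-) Gamma(1 + ip/k) e^{-ip rho0 - i w t0}
# against the closed form derived from |Gamma(1/2 + ix)|^2 = pi/cosh(pi x)
# and |Gamma(1 + ix)|^2 = pi x / sinh(pi x).
import numpy as np
from scipy.special import gamma

def psi_sq_direct(p, w, k, rho0=0.3, t0=-0.7):
    # direct evaluation via complex Gamma functions; rho0, t0 drop out of |Psi|^2
    nup = 0.5 - 0.5j * (p + w)
    num = 0.5 - 0.5j * (p - w)
    B = gamma(nup) * gamma(num) / gamma(nup + num)
    psi = 0.5 * B * gamma(1.0 + 1j * p / k) * np.exp(-1j * p * rho0 - 1j * w * t0)
    return abs(psi) ** 2

def psi_sq_closed(p, w, k):
    # closed form in terms of elementary hyperbolic functions
    return (np.pi ** 2 * (p / k) * np.sinh(np.pi * p)
            / (4.0 * p * np.sinh(np.pi * p / k)
               * np.cosh(np.pi * (p + w) / 2) * np.cosh(np.pi * (p - w) / 2)))

for p, w, k in [(0.8, 1.1, 3.0), (1.6, 0.2, 2.0)]:
    assert np.isclose(psi_sq_direct(p, w, k), psi_sq_closed(p, w, k))
```

The elementary closed form makes the asymptotic (Hagedorn-like) behaviour of the radiation distribution easy to read off.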
The radiation rates were already evaluated in \cite{Nakayama:2004yx,Sahakyan:2004cq}.\footnote{ In this thesis, we scaled the energy and momentum differently from \cite{Nakayama:2004yx}. In light of the normalization in \eqref{on-shell super}, $\omega, p$ in this work should be read as $2 \sqrt{k}$ times $\omega, p$ in \cite{Nakayama:2004yx}.} The crucial differences from the non-extremal case are the following: we have only the `forward radiation' ({\em e.g.}, the incoming radiation for the absorbed D-brane \eqref{falling D0 extremal}) and no `backward radiation' ({\em e.g.}, the outgoing radiation for the absorbed D-brane). This is because there is no reflection relation in the extremal case. The forward radiation behaves in exactly the same way as in the non-extremal case (that is, as in a fermionic two-dimensional black hole with $k >1 $), again giving rise to the Hagedorn-like ultraviolet divergence. At fixed but large $M$, before integrating over $p$, the partial radiation number distribution again takes exactly the same asymptotic form as in \eqref{grey body}, except that now the coefficient $2 \pi \sqrt{2k}$ is {\sl not} interpretable as the inverse Hawking temperature of the black hole.\footnote {An obvious alternative interpretation could be that, even for the extremal background, the falling D0-brane excites the NS5-brane above extremality.} Again, this has to do with the peculiarity that the Hawking temperature of the two-dimensional black hole is set by the level $k$, not by the non-extremality $\mu$. On the other hand, the absence of the backward radiation matches the extremality of the background; there is no Hawking radiation. \subsection{More on physical interpretations: Hartle-Hawking states}\label{sec:8-6} We shall now revisit the boundary states we constructed in this work and elaborate further on their physical interpretations, with particular emphasis on the analogy with the rolling tachyon problem via the radion-tachyon correspondence.
We also elaborate on the fate of the R-R charge carried by the D0-brane. To be concrete, we shall focus on the cases $k \geq 2$, which admit an interpretation in terms of the near-horizon geometry of black NS5-branes. The boundary state \eqref{falling D0} describes the late-time dynamics ($t \gg t_0$) of the D0-brane rolling into the black NS5-branes. The relevant D0-brane has the initial condition $\rho=\rho_0$, $\frac{d\rho}{dt}=0$ at $t=t_0$ and starts to roll down toward the black hole. After a sufficiently long coordinate time has elapsed, the D0-brane gets close to the future horizon (${\cal H}^+$). As examined in section 4, almost all of the energy of the D0-brane is absorbed by the black hole in the form of incoming radiation. The incoming radiation is dominated by very massive, and hence highly non-relativistic, closed string excitations. Via the radion-tachyon correspondence, these states are identifiable with the `tachyon matter' in the rolling tachyon problem in flat space-time. On the other hand, we have seen that a small part of the energy escapes to spatial infinity (${\cal I}^+$) as outgoing radiation. We have seen that its spectral distribution is characterized by the Hawking temperature, and is necessarily dominated by light modes. This interpretation is quite natural from the viewpoint of the radion-tachyon correspondence for the extremal NS5-brane background \cite{Kutasov:2004dj}. Since we are now working with the non-extremal NS5-brane background, our analysis may be considered as evidence that the correspondence is valid even at finite temperature. What about the evolution in the far past, $t < t_0$? Here, we face a subtlety. Recall that the boundary condition defining \eqref{falling D0} does not allow contributions from the past horizon (${\cal H}^-$); namely, the basis of Ishibashi states $\dket{p,\omega}^U$ does not reproduce the past half of the classical trajectory \eqref{trajectory D0}.
Rather, the NS-NS sector of the D0-brane boundary wave function appears widely distributed in the space-time in the far past. This may be interpreted as radiation imploding toward $\rho = \rho_0$ from spatial infinity, but then it becomes subtle to trace the R-R charge carried by the D0-brane that is created out of the imploding radiation. Classically, the D0-brane charge density ought to be localized along the classical trajectory \eqref{trajectory D0} and hence emanates from the past horizon. Once stringy effects are taken into account, the charge appears to originate from asymptotic infinity along the light-cone coordinate. A complete understanding of this curious feature is highly desirable, but we shall relegate it to future study. Here, instead, we present a simple prescription for avoiding this subtlety: a version of the `Hartle-Hawking' boundary condition. We shall first focus on the absorbed D0-brane boundary state \eqref{falling D0}. Formally, by construction, we can regard the boundary wave function as specified by the time-integration over the `real contour' ${\cal C}= \mathbb{R}$ as in \eqref{Wick rotation 0}. Now, let us discuss what happens if we choose `Hartle-Hawking' type contours instead of the real contour, which connect the Euclidean time axis with the future or past half of the real time axis at $t=t_0$: \begin{align} {\cal C}_{\msc{future}}^{\pm} = \left(t_0+i\mathbb{R}_{\mp}\right) \cup \left(t_0+\mathbb{R}_{+}\right)~,~~~ {\cal C}_{\msc{past}}^{\pm} = \left(t_0+i\mathbb{R}_{\mp}\right) \cup \left(t_0+\mathbb{R}_{-}\right)~. \end{align} More precisely, we should suitably avoid the branch cuts on $t_0+i\mathbb{R}$ to render the integral convergent. See Figures \ref{HH-future} and \ref{HH-past} for details. The superscript $+$ $(-)$ is associated with the positive (negative) energy sector. Note that the phase factor $e^{-i\omega t}$ behaves well on the lower (upper) half of the complex $t$-plane if $\omega$ is positive (negative). Let us pick ${\cal C}_{\msc{future}}$.
Following the traditional interpretation of Hartle-Hawking type wave functions, we may suppose that both the D0-brane and the black NS5-brane are created from `nothing' at $t=t_0$, and then the D0-brane starts to fall down toward the future horizon along the classical trajectory \eqref{trajectory D0}. In this prescription, the subtlety we mentioned above is completely circumvented. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\linewidth,keepaspectratio,clip]{cont1.eps} \end{center} \caption{`future Hartle-Hawking contour': the red (green dashed) line is the contour ${\cal C}^+_{\msc{future}}$ for $\omega > 0$ (${\cal C}^-_{\msc{future}}$ for $\omega <0$). The `$L$' (`$R$') contour should be used when calculating the overlap with $L^p_{\omega}(\rho,t)$ ($R^p_{\omega}(\rho,t)$) to ensure convergence of the integral.} \label{HH-future} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.5\linewidth,keepaspectratio,clip]{cont2.eps} \end{center} \caption{`past Hartle-Hawking contour': the red (green dashed) line is the contour ${\cal C}^+_{\msc{past}}$ for $\omega > 0$ (${\cal C}^-_{\msc{past}}$ for $\omega <0$).} \label{HH-past} \end{figure} One may paraphrase the prescription as follows: choosing the Hartle-Hawking contour ${\cal C}_{\msc{future}}$, we explicitly obtain \begin{align} & \hspace{-5mm} {}_{HH +,\,\msc{absorb}}\!\bra{B;\rho_0,t_0} \cr &= \int_0^{\infty}\frac{\mbox{d} p}{2\pi}\, \left[ \int_{0}^{\infty}\frac{\mbox{d} \omega}{2\pi}\, \Psi_{\msc{symm}}(\rho_0,t_0;p,\omega) + \int_{-\infty}^{0}\frac{\mbox{d} \omega}{2\pi}\, {\cal R}(p,\omega)\Psi^*_{\msc{symm}}(\rho_0,-t_0;p,\omega) \right] \, {}^{\widehat{U}}\!\dbra{p,\omega}~, \cr \label{HH falling D0} \end{align} where $\Psi_{\msc{symm}}(\rho_0,t_0;p,\omega)$ is defined in \eqref{symmetric D0}. In fact, by taking ${\cal C}_{\msc{future}}$, we are only left with the $L^p_{\omega}$ ($R^p_{\omega}$) part of the one-point function for the $\omega>0$ ($\omega<0$) sector.
See figure \ref{HH-future}. This boundary wave function is formally regarded as the limit of \eqref{falling D0} under $t_0\,\rightarrow\, -\infty$, $\rho_0\,\rightarrow\,+\infty$ while keeping $|\rho_0|/|t_0|$ finite. Note that the second (first) term $\propto e^{ip\rho_0-i\omega t_0}$ ($\propto e^{-ip\rho_0-i\omega t_0}$) in \eqref{falling D0} oscillates very rapidly in this limit for $\omega >0$ ($\omega <0$) and hence drops off.\footnote {A more precise argument is as follows: the disk amplitude for a wave packet, {\em e.g.} $\int \frac{\mbox{d} p}{2\pi} \int \frac{\mbox{d} \omega}{2\pi}\, f(p,\omega) \ket{L^p_{\omega}}$, is evaluated as $ \lim_{\rho_0\,\rightarrow\,+\infty , \, t_0\,\rightarrow\,-\infty}\, \int \frac{\mbox{d} p}{2\pi} \int \frac{\mbox{d}\omega}{2\pi} f(p,\omega) \Psi(\rho_0,t_0;p,\omega)$. Then, the rapidly oscillating term in the boundary wave function $\Psi(\rho_0,t_0;p,\omega)$ cannot contribute for any $L^2$-normalizable wave packet $f(p,\omega)$, owing to the Riemann-Lebesgue lemma. } The limit simply means that the D0-brane moving along the trajectory \eqref{trajectory D0} comes in from past infinity $({\cal I}^-)$ and falls into the future horizon (${\cal H}^+$). Everything is supposed to be localized along the classical trajectory in this case. Adopting the past Hartle-Hawking contour ${\cal C}_{\msc{past}}$ for the boundary state of the emitted D0-brane \eqref{emitted D0} is completely parallel.
We take the time-reversal of the above: \begin{align} & {}_{HH -,\,\msc{emit}}\!\bra{B;\rho_0,t_0} \cr &= \int_0^{\infty}\frac{\mbox{d} p}{2\pi} \left[ \int_{0}^{\infty}\frac{\mbox{d} \omega}{2\pi}\, \Psi^*_{\msc{symm}}(\rho_0,-t_0;p,\omega) + \int_{-\infty}^{0}\frac{\mbox{d} \omega}{2\pi}\, {\cal R}^*(p,\omega)\Psi_{\msc{symm}}(\rho_0,t_0;p,\omega) \right] {}^{\widehat{V}}\!\dbra{p,\omega}~, \nonumber\\ \label{HH emitted D0} \end{align} which is regarded as the $t_0\,\rightarrow\,+\infty$, $\rho_0\,\rightarrow\,+\infty$ limit of \eqref{emitted D0}. It describes the trajectory of a D0-brane emitted from the past horizon ${\cal H}^-$ and escaping to the future infinity ${\cal I}^+$. Let us turn to the `symmetric' D0-brane \eqref{symmetric D0}. Naively, the prescription appears to be \begin{align} & {}_{HH +,\,\msc{symm}}\!\bra{B;\rho_0,t_0}' \cr &= \int_0^{\infty}\frac{\mbox{d} p}{2\pi} \left[ \int_{0}^{\infty}\frac{\mbox{d} \omega}{2\pi}\, 2 \Psi_{\msc{symm}}(\rho_0,t_0;p,\omega) \, {}^L\!\dbra{p,\omega} +\int_{-\infty}^{0}\frac{d\omega}{2\pi}\, 2 \Psi^*_{\msc{symm}}(\rho_0,-t_0;p,\omega) {}^R\!\dbra{p,\omega} \right] \nonumber\\ \label{HH symm D0 1} \end{align} for the future Hartle-Hawking contour ${\cal C}_{\msc{future}}$, and \begin{align} & {}_{HH -,\,\msc{symm}}\!\bra{B;\rho_0,t_0}' \cr &= \int_0^{\infty}\frac{\mbox{d} p}{2\pi} \left[ \int_{0}^{\infty}\frac{\mbox{d} \omega}{2\pi}\, 2 \Psi^*_{\msc{symm}}(\rho_0,-t_0;p,\omega) {}^R\!\dbra{p,\omega} +\int_{-\infty}^{0}\frac{d\omega}{2\pi} 2 \Psi_{\msc{symm}}(\rho_0,t_0;p,\omega) {}^L\!\dbra{p,\omega} \right] \nonumber\\ \label{HH symm D0 2} \end{align} for the past Hartle-Hawking contour ${\cal C}_{\msc{past}}$. However, this cannot be the whole story. The presence of the Euclidean part of the Hartle-Hawking path integral forces the boundary states to be expanded in a basis smoothly connected to the Euclidean one, while $\ket{L^p_{\omega}}$, $\ket{R^p_{\omega}}$ do not possess this property.
Consequently, to obtain the correct Hartle-Hawking states, we ought to further project onto ${\cal H}^U$ ($\widehat{{\cal H}^U}$) for the contour ${\cal C}_{\msc{future}}$, and onto ${\cal H}^V$ ($\widehat{{\cal H}^V}$) for ${\cal C}_{\msc{past}}$. We thus obtain as the correct Hartle-Hawking states: \begin{align} & {}_{HH +,\,\msc{symm}}\!\bra{B;\rho_0,t_0} = {}_{HH +,\,\msc{symm}}\!\bra{B;\rho_0,t_0}' \widehat{P_U} \equiv {}_{HH +,\,\msc{absorb}}\!\bra{B;\rho_0,t_0} ~, \nonumber\\ & {}_{HH -,\,\msc{symm}}\!\bra{B;\rho_0,t_0} = {}_{HH -,\,\msc{symm}}\!\bra{B;\rho_0,t_0}' \widehat{P_V} \equiv {}_{HH -,\,\msc{emitted}}\!\bra{B;\rho_0,t_0} ~, \label{relation HH} \end{align} where the right-hand sides are already given in \eqref{HH falling D0}, \eqref{HH emitted D0}. Remarkably, this feature closely resembles that of the S-branes discussed in \cite{Lambert:2003zr}. Namely, it was shown there that \begin{equation} \mbox{half S-brane} ~ \cong ~ \mbox{full S-brane with the Hartle-Hawking contour} ~. \label{half full S} \end{equation} In our case, \eqref{symmetric D0} corresponds to the full S-brane, while the Hartle-Hawking state \eqref{HH falling D0} (\eqref{HH emitted D0}) is identifiable as the analogue of the half S-brane describing the unstable D-brane decay (creation) process. The equalities \eqref{relation HH} suggest that we have a relation roughly identical to \eqref{half full S}. Notice that the parameters $\rho_0$, $t_0$ appear only as phase factors of the boundary wave functions in \eqref{HH falling D0}, \eqref{HH emitted D0}, in contrast to \eqref{falling D0}, \eqref{emitted D0}. Namely, the choice of the parameters $\rho_0$, $t_0$ does not cause any physical difference for the Hartle-Hawking type states: they can all be regarded as describing a D0-brane moving from ${\cal I}^-$ to ${\cal H}^+$ (from ${\cal H}^-$ to ${\cal I}^+$) for \eqref{HH falling D0} (for \eqref{HH emitted D0}), irrespective of $\rho_0$, $t_0$.
These two parameters merely parameterize a displacement of the trajectory in the two-dimensional black hole background. A similar feature arises for the full S-brane with the Hartle-Hawking contour as well: it is equivalent to the half S-brane, independently of any shift of the origin (the point connecting the real and imaginary times). Finally, we remark on one point from the viewpoint of boundary conformal field theory: in contrast to the original ones \eqref{falling D0}, \eqref{emitted D0} and \eqref{symmetric D0}, the Hartle-Hawking boundary states \eqref{relation HH} (or equivalently \eqref{HH falling D0}, \eqref{HH emitted D0}) are not compatible with the reflection relations. One may regard the boundary states \eqref{falling D0} and \eqref{emitted D0} as the `completions' of the Hartle-Hawking states \eqref{relation HH} such that they satisfy the reflection relations. \newpage \sectiono{Conclusion and Discussions}\label{sec:9} In this thesis, we have examined the exact boundary states describing the rolling D-brane in the two-dimensional black hole system. In this final section, we would like to summarize our main results and discuss their physical relevance. In the introduction, we asked three fundamental questions about the nature of quantum gravity, or string theory as a candidate for the theory of everything: \begin{itemize} \item Small charge black hole vs. large charge black hole. \item Analyticity vs. non-analyticity in physical amplitudes. \item Unitarity vs. open-closed duality. \end{itemize} It is natural to conclude this thesis by asking how far we can answer these questions after our studies of the rolling D-brane in the two-dimensional black hole system. To answer these three fundamental questions, in this thesis we have constructed the exact boundary states describing the rolling D-brane in the two-dimensional black hole system (section \ref{sec:7}) to probe the quantum geometry.
Our main results are \begin{itemize} \item The tachyon - radion correspondence is proved for $k>1$ by studying the closed string radiation rate from the $\alpha'$-exact rolling D-brane solution (section \ref{sec:8-1}). \item The black hole - string transition is observed at $k=1$ in the closed string radiation rate as a physical order parameter (section \ref{sec:8-3}). \item The consistency between unitarity and open-closed duality is shown to be recovered after a careful treatment of the Wick rotation (section \ref{sec:8-2}). \end{itemize} Although our model is rather a specific one, we can naturally extend our results to draw many universal features of quantum gravity. In the rest of this section, we recapitulate our arguments and present some discussions with possible future directions to pursue. First of all, we have shown in section \ref{sec:8} that the total emission rate of the rolling D-brane into the two-dimensional black hole system behaves exactly the same as that for the rolling tachyon in flat Minkowski space studied in section \ref{sec:5}. This result strongly suggests universal features of the physics associated with the D-brane decay. Universality is an important concept in any physical system. In our decaying D-brane system, we have shown that the closed string radiation rate is independent of the free parameter $k$ representing the level of the current algebra. Since the $1/k$ corrections govern the $\alpha'$ corrections to the geometry, the physical quantity observed in the decaying D-brane process is independent of the stringy corrections. The classical tachyon - radion correspondence thus still holds even after introducing the stringy corrections, independently of their strength (as long as $k>1$). Indeed, the universal behavior of the closed string radiation should hold on account of the following simple argument.
The D-brane energy that should be released during the decay is always proportional to $1/g_s$, so in the perturbative string computation, we expect a divergence in the radiation rate: otherwise we have to face a missing energy problem.\footnote{At first sight, this viewpoint contradicts our computation that the higher dimensional D-brane shows a power-like {\it finite} emission rate (energy), but this is an artefact of the one-particle decay.} Furthermore, the universality holds under almost every exactly solvable deformation of the model, such as the inclusion of a time-like linear dilaton, an electric field, etc. (see sections \ref{sec:5}, \ref{sec:8}). In \cite{Israel:2006ip}, a similar computation was performed in the $AdS_3$ space, supporting the universality of decaying D-brane systems in yet another solvable background. We would like to emphasize again that our results do not depend on the level $k$, which governs the strength of the world-sheet $\alpha'$ corrections, {\it as long as} $k>1$. This indeed provides strong support for the tachyon-radion correspondence even at the quantum level. At this point, it is worthwhile mentioning Sen's open-string completeness conjecture \cite{Sen:2003iv,Sen:2003mv,Sen:2004nf}: {\it There is a quantum open string theory (OSFT) that describes the full dynamics of an unstable Dp-brane without an explicit coupling to closed strings. Furthermore, the Ehrenfest theorem holds in the weakly coupled OSFT; the classical results correctly describe the time evolution of the quantum expectation values}. The tachyon-radion correspondence directly leads to the same conjecture for the rolling D-brane system. The smeared trajectory (or ``moss'' around the rolling D-brane) we observed in our rolling D-brane system with exact $\alpha'$ corrections is an interesting twist to this conjecture.
Secondly, we have shown that the interplay between the analyticity and non-analyticity of the physical amplitudes is crucial in discussing the black hole - string phase transition. The integration over the radial momenta, which at the same time is crucial to prove the tachyon - radion correspondence, introduces the non-analyticity in the physical observables, resulting in the phase transition. More precisely, we have directly observed the black hole - string phase transition from the exact boundary states for the probe rolling D-brane in the two-dimensional black hole background. The phase transition occurs exactly when the Hawking temperature of the two-dimensional black hole coincides with the Hagedorn temperature of the string background as we decrease the charge of the two-dimensional black hole (the level of the current algebra). Below the phase transition point, the physical interpretation of the $SL(2;\mathbb{R})/U(1)$ coset model as a black hole geometry breaks down and becomes obscure. Our results show that the tachyon - radion correspondence fails at the phase transition point, and the physics associated with the D-brane decay changes drastically. Indeed, as we have shown in section \ref{sec:8}, a drastic change occurs when we study the dynamics of rolling D-branes in the two-dimensional black hole with $k<1$. From the arguments given in section \ref{sec:4}, we expect a ``black hole - string transition''. This transition is subtle even from the exact CFT analysis, because in deriving all the formulae for the closed string scattering amplitudes, we assume analyticity in $k$. How can we probe the ``black hole - string transition'' with respect to $k$ when the amplitude is analytic in $k$? In the open string channel, the $k$-dependence of the amplitudes is analytic as well. However, if we compute physical quantities such as the closed string emission rate, the non-analyticity with respect to $k$ emerges.
In this way, we have succeeded in probing the ``black hole - string transition'', as an emergent phase transition, by studying the rolling D-brane dynamics in this background. It is interesting to note that not every D-brane can probe the ``black hole - string transition''. As we have seen in section \ref{sec:5-2-5}, the decay of the unstable D0-brane in the Euclidean two-dimensional black hole does {\it not} show any ``black hole - string transition'' at $k=1$. The decay rate of unstable D-branes shows a universal property irrespective of the value of $k$. We do not have a good physical explanation of this phenomenon at this moment, but it would be interesting to study further which objects can probe the phase transition. In the Euclidean signature target-space theory after the analytic continuation, the phase transition is induced by the non-perturbative $\alpha'$ corrections related to the winding-tachyon condensation. In the original Lorentzian signature target-space, one might understand it as a thermal (winding) tachyon condensation; in the real time picture, at the phase transition point we would encounter the associated (local) Hagedorn divergence of the black hole thermodynamics. It is natural to expect that our results on the string - black hole phase transition are rather robust and universal. Indeed, the transition is barely affected by various marginal deformations of the solvable model, such as the incorporation of a linear dilaton or an electric field. It would be interesting to extend our analysis to more realistic higher dimensional black hole systems realized in superstring theory.\footnote{Recently, the black hole - string transition has been studied in the context of the $AdS_5/CFT_4$ correspondence in \cite{Alvarez-Gaume:2006jg}.} Philosophically, the concept of a phase in quantum gravity is rather obscure.
We know that in lower dimensional field theories or in finite size theories, there is no notion of a phase transition. What happens, then, if the dimensionality or the size of the universe fluctuates, as is supposed to be the case in quantum gravity?\footnote{A good example is de Sitter space, where the quantum gravity is supposed to consist of finitely many degrees of freedom.} Our study only touches on a possible hidden nature of the phase transition as non-analytic behavior of physical quantities (not of the amplitudes themselves). It would be worth studying further the potential origins of the non-analyticity in physical quantities in more general situations. So far, we have restricted ourselves to the $g_s \to 0$ limit throughout this thesis, but finite $g_s$ effects cannot be neglected in any realistic string theory. It is natural to assume that the finite $g_s$ effect sets a cut-off on the emitted closed string energy, because the D-brane cannot emit energy greater than its own tension $\sim 1/g_s$. Therefore, the emission rate roughly behaves as \begin{eqnarray} \mathcal{N} \sim \int^{1/g_s} \mbox{d} M\, N(M) \sqrt{\rho^{(c)}(M)} \ \end{eqnarray} with an explicit cut-off at $1/g_s$. This also means that the radial momentum $p/\sqrt{k}$ should be less than $1/g_s$. Does this constraint smooth out the phase transition? The saddle point approximation becomes less accurate as $M$ becomes smaller, so we expect that the phase transition becomes smooth as we introduce $g_s$ corrections. This is consistent with the statement that the ``black hole - string transition'' is actually a ``black hole - string crossover''. We have constructed several boundary states for the rolling D-branes in the two-dimensional black hole system. The failure of uniqueness is physically relevant because, in time-dependent problems in string theory, the boundary conditions should always be imposed in accordance with the physics we are interested in.
Mathematically, the different choices of the contour integration give rise to different physics. The tachyon - radion correspondence beautifully connects the different solutions (contours) of the rolling tachyon with those for the rolling radion. For the absorbed D-brane boundary condition studied in section \ref{sec:7-2-2}, the dominant infalling closed string radiation (at the Hagedorn temperature) is accompanied by the outgoing closed string radiation (at the Hawking temperature). The existence of this anomalously small outgoing radiation originates from the boundary condition imposed at the horizon. This reminds us of the closed string Hawking radiation discussed in section \ref{sec:2-4}. Combining the discussion given in section \ref{sec:8-1-4}, we can see that the origin of the thermal-like behaviour of the rolling D-brane radiation is closely related to the boundary condition imposed on the wavefunction (Ishibashi states). It would be interesting to make more precise the relation between the boundary conditions imposed in string theory and the apparent anomaly in the form of Hawking-like radiation. Finally, we have discussed the consistency between unitarity and open-closed duality in the radiative processes of the decay of the unstable D-brane and the rolling of the accelerated D-brane in sections \ref{sec:5-2} and \ref{sec:8-2}, respectively. From the ``ab initio'' derivation in the open string channel, in both the Euclidean and Lorentzian world-sheet approaches, we have found a heretofore overlooked contribution to the spectral amplitudes and observables. The contribution is fortuitously absent for the decay of the unstable D-brane, but is present for the rolling of the accelerated D-brane. We have shown that this contribution is imperative for ensuring unitarity and the optical theorem. Our notion of unitarity is rather specific, so we have not discussed more fundamental questions, e.g., about the unitarity of quantum black hole systems associated with Hawking evaporation.
The information paradox of the black hole system should be resolved within the context of string theory if it is really a fundamental theory of everything. The three questions raised in this thesis are basic but profound ones that one might first come up with when discussing the fundamental properties of quantum gravity. We have attacked them through concrete examples of the exactly solvable string black hole background. At this moment, we do not have complete answers to these questions in the tremendously huge string landscape, but we believe that our little step in this small corner will ultimately lead to their final answers. \section*{Acknowledgements} The author would like to thank all his friends, his family, and his teachers for supporting him in writing up this thesis. In particular, the author would like to express his sincere thanks to his supervisor Tohru Eguchi for continuous encouragement and advice. He would also like to express his special thanks to Soo-Jong Rey and Yuji Sugawara for fruitful collaborations. Most of the main results in this thesis are based on the collaborations with them. He also acknowledges Sylvain Ribault and Yuji Tachikawa for stimulating discussions on the type 2 boundary states and a-theorem violation for generalized conifolds. This research is supported in part by JSPS Research Fellowships for Young Scientists. \newpage
\section{Introduction} The formation histories of early-type galaxies remain a controversial and often-debated topic in modern astrophysics. The bulk of the past effort on early-type systems has focussed, almost exclusively, on their \emph{optical} properties. The optical colours of the early-type population are predominantly red, implying that the bulk of the stellar mass in these systems forms at high redshift \citep[e.g.][]{BLE92}. Furthermore, high $\alpha$-enhancement ratios in their stellar spectra indicate that this star formation plausibly takes place over short ($<1$ Gyr) timescales \citep[e.g.][]{Thomas1999}. However, a drawback of optical data is its lack of sensitivity to moderate amounts of \emph{recent star formation} (RSF), within the last Gyr or so. The optical spectrum remains largely unaffected by the minority of stellar mass that forms in these systems at low and intermediate redshift ($z<1$), which makes it difficult to measure early-type star formation histories (SFHs) over the last 8 billion years {\color{black} with significant accuracy using optical colours alone.} A first step towards probing early-type SFHs over this period is to quantify their RSF at $z\sim0$. Rest-frame UV photometry provides an attractive \emph{photometric} indicator of RSF. While its impact on the optical spectrum is relatively weak (and virtually undetectable, given typical observational and theoretical uncertainties), a small mass fraction ($<3$\%) of young ($<1$ Gyr old) stars strongly affects the rest-frame UV spectrum shortward of 3000~\AA \citep{Kaviraj2007d}. Furthermore, the UV remains largely unaffected by the age-metallicity degeneracy \citep{Worthey1994} that typically plagues optical analyses (Kaviraj et al. 2007a), making it an ideal photometric indicator of RSF. {\color{black}In Figure \ref{fig:rsf_plots} we demonstrate the sensitivity of the UV to small mass fractions of young stellar populations. 
We construct a model in which an old (10 Gyr old) population contributes 99\% of the stellar mass (shown in red), with a 1\% contribution from stars that are $0.3$ Gyrs old (shown in blue). The combined spectral energy distribution (SED) is shown in black. The UV output of the combined SED comes \emph{purely} from the young population (blue) and the \emph{shape} of the optical spectrum - which determines the optical colours - changes only very slightly. We also compare the $(g-r)$ and $(NUV-r)$ colours of the combined population (black) and those of the purely old population (red). While the ($g-r$) colour changes by $\sim0.1$ mag from that of a purely old population, the ($NUV-r$) colour changes by $\sim2.5$ mags!} \begin{figure} \includegraphics[width=\columnwidth]{rsf_plots} \caption{The sensitivity of the UV to small mass fractions of young stellar populations. We construct a model in which an old (10 Gyr old) population contributes 99\% of the stellar mass (shown in red), with a 1\% contribution from stars that are $0.3$ Gyrs old (shown in blue). The combined spectral energy distribution (SED) is shown in black. The UV output of the combined SED comes \emph{purely} from the young population (blue), while the \emph{shape} of the optical spectrum - which determines the optical colours - changes only very slightly. We also compare the $(g-r)$ and $(NUV-r)$ colours of the combined population (black) with those of the purely old population (red). While the ($g-r$) colour changes by $\sim0.1$ mag from that of a purely old population, the ($NUV-r$) colour changes by $\sim2.5$ mags!} \label{fig:rsf_plots} \end{figure} A new generation of early-type studies that have exploited UV data from the GALEX space telescope (Martin et al. 
2005) have shown that, in contrast to their optical colour-magnitude relations (CMRs), nearby ($0<z<0.11$), {\color{black}luminous} ($M(V)<-21$) early-types show a large spread in their UV colour distribution of almost 5 mags {\color{black} - a direct consequence of the sensitivity of the UV to small amounts of recent star formation that is demonstrated in Figure \ref{fig:rsf_plots} above.} Following the early work of \citet{Yi2005}, Kaviraj et al. (2007b; K07 hereafter) have demonstrated that, while a negligible fraction of the early-type population has photometry consistent with no star formation within the last 2 Gyrs, \emph{at least} 30\% show unambiguous signs of RSF, with stellar mass fractions of 1-3\% forming within the last Gyr and luminosity-weighted ages of $\sim300-500$ Myrs. {\color{black} In the {\color{black}top} left-hand panel of Figure \ref{fig:nuvr_cmr} we show the ($NUV-r$) CMR of nearby ($0.05<z<0.06$) early-type galaxies drawn from the SDSS DR4. Note that the GALEX $NUV$ filter is centred at $\sim2300$ \AA. The large spread in the UV colours is in contrast to the small spread (a few tenths of a mag) in the optical $(g-r)$ colour (bottom left-hand panel). Following K07, early-type morphology is established by visually inspecting each individual object using its SDSS image. The galaxies shown in this figure are selected to have $r<16.8$, since visual inspection can be robustly performed using SDSS images down to this magnitude limit (see Section 2 in K07). Note that the luminosity of a typical early-type galaxy, in the SDSS $r$-band, lies around $M(r)^* \sim -21.15$ (see Figure 2 in Bernardi et al. 2003). In the right-hand panel of this figure we present the ($NUV-r$) colours of the SDSS early-types plotted against their \emph{stellar masses}, taken from \citet{Gallazzi2005}. Note that the Gallazzi et al. 
masses have been released through the public Garching SDSS catalog which can be found here: http://www.mpa-garching.mpg.de/SDSS/DR4/. The nominal stellar mass adopted for the spheroid in our simulations is $4\times 10^{10}$ M$_{\odot}$ (see also Section 2 below), which lies in the region where the scatter in the UV CMR becomes broad. This value is indicated using a {\color{black}dotted} line.} \begin{figure} \includegraphics[width=\columnwidth]{nuvr_Mr_mass} \caption{TOP LEFT: The ($NUV-r$) colour-magnitude relation of nearby ($0.05<z<0.06$) early-type galaxies drawn from the SDSS DR4. Note that the GALEX $NUV$ filter is centred at $\sim2300$ \AA. TOP RIGHT: The ($NUV-r$) colours of SDSS early-types plotted against their stellar masses, taken from \citet{Gallazzi2005}. Note that the Gallazzi et al. masses have been released through the public Garching SDSS catalog which can be found here: http://www.mpa-garching.mpg.de/SDSS/DR4/. The nominal stellar mass adopted for the spheroid in our simulations ($4\times 10^{10}$ M$_{\odot}$, see Section 2 below) is indicated using the solid line. BOTTOM LEFT: The ($g-r$) colour-magnitude relation of nearby ($0.05<z<0.06$) early-type galaxies drawn from the SDSS DR4.} \label{fig:nuvr_cmr} \end{figure} {\color{black} Note that, while the effects of RSF will be present in all diagnostics of star formation (including e.g. the optical $(g-r)$ colour), the sensitivity of a particular indicator depends on the proportional change due to RSF compared to the typical uncertainties in that indicator. These uncertainties come both from measurement errors and from uncertainties in stellar models that are used to convert spectro-photometric quantities (e.g. colours) into estimates of physical parameters such as the ages and mass fractions of young stars. Typical model and observational errors in optical filters are $\sim0.05$ mags (Sukyoung Yi priv. comm., see also Yi 2003) and at least 0.02 mags (including calibration uncertainties) respectively. 
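The error budget just quoted can be combined into colour uncertainties. The text does not state the combination rule, so the minimal sketch below assumes independent error terms added in quadrature:

```python
import math

def quad_sum(*errors):
    """Combine independent error terms in quadrature."""
    return math.sqrt(sum(e * e for e in errors))

# A colour involves two bands, so the per-band model (~0.05 mag) and
# observational (~0.02 mag) terms each enter twice for (g - r).
gr_err = quad_sum(0.05, 0.02, 0.05, 0.02)

# For (NUV - r), the ~0.2 mag model and observational NUV terms dominate
# and the small r-band terms are neglected here.
nuv_r_err = quad_sum(0.2, 0.2)

# Spread-to-uncertainty ratios, using the observed spreads quoted in the
# text (~0.3 mag in (g - r), almost 5 mags in (NUV - r)).
gr_ratio = 0.3 / gr_err
nuv_ratio = 5.0 / nuv_r_err
```

Under this assumption the quoted totals of $\sim0.08$ and $\sim0.3$ mags are recovered, and the spread-to-uncertainty ratio of the UV colour exceeds that of the optical colour several times over.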
The total resultant uncertainty in the optical $(g-r)$ colours is $\sim0.08$ mags (compared to a spread in this colour of $\sim0.3$ mags from Figure \ref{fig:nuvr_cmr}). In a similar vein, the typical model and observational errors in the $NUV$ magnitudes are both around 0.2 mags, yielding uncertainties in the $(NUV-r)$ colours of $\sim0.3$ mags (compared to a spread in this colour of almost 5 mags). The enhanced sensitivity of the $NUV$ to RSF stems from the fact that the spread in the $NUV$ colours is significantly (possibly an order of magnitude or more) larger than the uncertainties in the colours. While RSF does leave an imprint on the optical spectrum, the overall sensitivity of the $(NUV-r)$ colour is much greater, making it a much more stringent test of the minor merger hypothesis than can be performed using optical colours alone.} While the UV has revealed the unexpected presence of widespread low-level RSF in the local early-type population, past efforts have only measured the star formation activity without exploring the \emph{physical mechanism} for RSF in these galaxies. In this paper, we explore the potential role of minor mergers in reproducing the UV colour-magnitude relation of {\color{black}intermediate-mass} early-type galaxies at $z\sim0$. It is worth noting that, although low-level RSF can produce blue UV colours, only stars formed in the last $\sim$Gyr contribute significantly to the UV flux. Hence, an event like a minor merger - where one expects an \emph{age profile} in the recent star formation - may not automatically produce blue UV colours, even if the supply of cold gas is sufficiently high to produce the observed mass fraction of young stars. 
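The leverage of the UV can also be seen in a deliberately crude toy model: replace the old and young populations of Figure \ref{fig:rsf_plots} with single blackbodies. The effective temperatures and the neglect of mass-to-light corrections below are our assumptions, not part of the stellar models used in this paper:

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI physical constants

def planck_lambda(wav_m, temp_k):
    """Planck spectral radiance B_lambda, up to a constant prefactor."""
    x = H * C / (wav_m * KB * temp_k)
    return 1.0 / (wav_m ** 5 * (math.exp(x) - 1.0))

T_OLD, T_YOUNG = 4500.0, 15000.0   # hypothetical effective temperatures
F_OLD, F_YOUNG = 0.99, 0.01        # mass fractions from the text

# Fraction of the total flux contributed by the 1% young component at a
# NUV-like (2300 A) and an r-band-like (6200 A) wavelength.
young_frac = {}
for name, wav in [("NUV", 2300e-10), ("r", 6200e-10)]:
    young = F_YOUNG * planck_lambda(wav, T_YOUNG)
    old = F_OLD * planck_lambda(wav, T_OLD)
    young_frac[name] = young / (young + old)
```

Even with only a 1\% mass fraction, the hot component supplies essentially all of the NUV flux, while the old population still dominates the r-band; real SSP spectra sharpen this contrast further.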
In this study, we combine the results of numerical simulations of minor mergers with standard stellar models to probe the photometric properties of such events and test consistency with the observations, given reasonable assumptions for the properties of the merger progenitors and the predicted frequency of merging activity in the $\Lambda$CDM paradigm. \section{Simulations} {\color{black}In this section we describe the numerical methodology used to study minor mergers between a typical elliptical galaxy and a satellite, where the mass ratio of the system is between 1:4 and 1:10}. A more complete description will be provided in Peirani et al. (in prep). {\color{black}Note that the nominal stellar mass of the spheroid used to describe the photometric properties of the simulations in Section 3 matches the region of the $(NUV-r)$ vs. mass plot (see Figure \ref{fig:nuvr_cmr} above) where the scatter is broadest.} \subsection{Initial conditions} The elliptical is modelled using a spherical dark matter (DM) halo and a stellar component, with a total mass of $10^{12} M_{\odot}$. Following \citet{Jiang2007}, stars contribute 4\% of this value. The DM halo follows a Hernquist profile \citep{Hernquist1990}, with parameters chosen so that the inner part coincides with an NFW profile \citep{NFW1996} with a virial radius $r_{200}=206$ kpc and a concentration parameter $C=10$, as indicated by previous cosmological N-body simulations \citep{Dolag2004}. A Hernquist profile reproduces the `de Vaucouleurs' $R^{1/4}$ surface brightness profiles of typical elliptical galaxies. The effective radius of the projected brightness is $r_e=4.3$ kpc. The satellite is constructed using a spherical DM halo (with a Hernquist profile) which contains a disk of stars and gas but no bulge. The mass of the disk represents 5\% of the total mass, with a gas fraction of either 20\% or 40\%. The gas fractions are consistent with the observed values from the SDSS \citep{Kannappan2004}. 
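For concreteness, the halo set-up described above can be sketched numerically. The text does not give the Hernquist-NFW matching formula; the sketch below uses one common convention (equating the inner density profiles, as in Springel et al. 2005), which is our assumption:

```python
import math

M200 = 1.0e12   # total halo mass [Msun], from the text
R200 = 206.0    # virial radius [kpc]
C_NFW = 10.0    # NFW concentration parameter

# Matching the r -> 0 behaviour of the Hernquist and NFW profiles fixes
# the Hernquist scale length a in terms of the NFW scale radius r_s.
r_s = R200 / C_NFW
a = r_s * math.sqrt(2.0 * (math.log(1.0 + C_NFW) - C_NFW / (1.0 + C_NFW)))

def hernquist_density(r, M=M200, a=a):
    """Hernquist (1990) density profile [Msun / kpc^3]."""
    return M * a / (2.0 * math.pi * r * (r + a) ** 3)

def hernquist_mass(r, M=M200, a=a):
    """Mass enclosed within radius r [Msun]; M(a) = M/4 by construction."""
    return M * r * r / (r + a) ** 2
```

With the values quoted in the text this convention gives a Hernquist scale length of roughly 35 kpc; other matching conventions (e.g. equating total mass within $r_{200}$) would give a somewhat different value.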
Satellites are created following \citet{Springel2005a} and their rotation curves satisfy the baryonic Tully-Fisher relation. In all simulations satellites are put on prograde parabolic orbits \citep{Khochfar2006b}, with a pericentric distance $R_{p}=8$ kpc, and initial separations of $100$ kpc. \subsection{Numerical method} The simulations are performed using the public GADGET2 code \citep{Springel2005b} with added prescriptions for cooling, star formation and feedback from Type Ia and II supernovae (SN). Approximately 380,000 particles are used for each experiment and the particle masses and gravitational softening lengths ($\epsilon$) involved are summarized in the table below. \small \begin{center} \begin{tabular}{c|c|c|c|c} \hline & DM & gas & star (disk) & star (E)\\ \hline Mass ($10^{5} M_\odot$) & 30.3 & 4.5 & 4.5 & 13.5\\ \hline $\epsilon$ (kpc) & 0.4 & 0.5 & 0.5 & 0.4\\ \hline \end{tabular} \end{center} \normalsize The cooling and star formation (SF) recipes follow the prescriptions of \citet{Thomas1992} and \citet{Katz1996} respectively. Gas particles with $T>10^4$~K cool at constant density (with the assumption of solar metallicity) for the duration of each timestep. Gas particles with $T< 2\times 10^4$~K, number density $n > 0.1\,{\rm cm^{-3}}$, overdensity $\delta \rho_{gas}> 100$ and ${\bf \nabla}\cdot{\bf \upsilon} <0$ form stars according to the standard star formation prescription: $d\rho_*/dt = c_* \rho_{gas}/t_{dyn}$, where $\rho_*$ refers to the stellar density, $t_{dyn}$ is the dynamical timescale of the gas and $c_*$ is the SF efficiency. Instead of creating new (lighter) star particles, we implement the SF prescription in a probabilistic fashion. Assuming a constant dynamical time across the timestep, the fractional change in stellar density is $\Delta \rho_*/\rho_* = 1-\exp(-c_* \Delta t/t_{\rm dyn})$. For each gas particle, we draw a random number ($r$) between 0 and 1 and convert it to a star if $r<\Delta \rho_*/\rho_*$. 
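The probabilistic star formation prescription just described can be summarised in a few lines. The eligibility thresholds and symbols follow the text; the particle data layout is a hypothetical illustration, not the GADGET2 internals:

```python
import math
import random

def form_stars(gas_particles, c_star, dt, rng):
    """Stochastically convert eligible gas particles into stars. Over a
    timestep dt the conversion probability is p = 1 - exp(-c_star*dt/t_dyn),
    the fractional change in stellar density from the text."""
    stars = []
    for gp in gas_particles:
        # Eligibility cuts from the text: cool, dense, overdense,
        # converging-flow gas.
        eligible = (gp["T"] < 2.0e4 and gp["n"] > 0.1
                    and gp["overdensity"] > 100.0 and gp["div_v"] < 0.0)
        if not eligible:
            continue
        p = 1.0 - math.exp(-c_star * dt / gp["t_dyn"])
        if rng.random() < p:
            stars.append(gp)
    return stars

# Demonstration on a uniform population of eligible particles: the converted
# fraction approaches p = 1 - exp(-c_star * dt / t_dyn) ~ 0.005 here.
gas = [{"T": 1.0e4, "n": 1.0, "overdensity": 200.0, "div_v": -1.0,
        "t_dyn": 10.0} for _ in range(100000)]
frac = len(form_stars(gas, c_star=0.05, dt=1.0,
                      rng=random.Random(42))) / len(gas)
```

Converting whole particles with this probability reproduces the prescribed mean star formation rate without spawning lighter star particles, as stated in the text.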
The energy injection into the inter-stellar medium (ISM) from SN, which regulates the star formation rate (SFR), is modelled following the approach of Durier \& de Freitas Pacheco (2007, in prep.). Instead of assuming `instantaneous' energy injection, we include the effective lifetime of SN progenitors using the rate of energy injection $H_{SN}$. For this, we consider stellar lifetimes in the mass ranges $0.8\,M_\odot<m<8.0\,M_\odot$ and $8.0\,M_\odot<m<80.0\,M_\odot$ for Type Ia and Type II progenitors respectively. Using a Salpeter initial mass function for Type II SN gives: \begin{equation} H_{SN_{II}}=2.5\times10^{-18}\Big(\frac{m_*}{M_\odot}\Big)E_{SN}\Big(\frac{1300}{\tau(\textnormal{Myr})-3}\Big)^{0.24} \textnormal{erg\,s$^{-1}$} \end{equation} \noindent where $E_{SN}=10^{51}$ erg, $m_*$ is the mass of the star particle and $3.53 <\tau <29$ Myr. For Type Ia SN, the heating is delayed, since they appear $t_0=0.8-1.0$ Gyr after the onset of star formation. Following \citet{deFreitasPacheco1998}, the heating rate, which follows from the probability of one event in a timescale $\tau$ after the onset of star formation, is given by: \begin{equation} H_{SN_{I_a}}=4.8\times10^{-20}\Big(\frac{m_*}{M_\odot}\Big)E_{SN}\Big(\frac{t_0}{\tau}\Big)^{3/2} \textnormal{erg\,s$^{-1}$} \end{equation} Eqns (1) and (2) are used to compute the energy released ($E_i$) by SN derived from a star particle $i$, and a fraction $\gamma$ of this energy is deposited in the $j^{\rm th}$ neighbour gas particle by applying a radial kick to its velocity with a magnitude $\Delta v_j = \sqrt{2w_j\gamma E_i/m_j}$, where $w_j$ is the weighting based on the smoothing kernel and $m_j$ is the mass of gas particle $j$. We note that all gas neighbours are located in a sphere of radius $R_{SN}$, centred on the SN progenitor, to avoid spurious injection of energy outside the SN's region of influence. In the simulations presented in this study, we use $\gamma=0.1$, $R_{SN}=0.4$ kpc and vary $c_*$ in the range $0.01<c_*<0.1$. 
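Eqns (1) and (2) and the kick formula translate directly into code. Units follow the text ($\tau$ in Myr, energies in erg, CGS velocities); the kernel weight and particle mass in the example are illustrative assumptions:

```python
import math

E_SN = 1.0e51  # energy per supernova [erg], as in the text

def h_sn_ii(m_star_msun, tau_myr):
    """Type II SN heating rate [erg/s], Eq. (1); valid for 3.53 < tau < 29 Myr."""
    return 2.5e-18 * m_star_msun * E_SN * (1300.0 / (tau_myr - 3.0)) ** 0.24

def h_sn_ia(m_star_msun, tau_myr, t0_myr=900.0):
    """Type Ia SN heating rate [erg/s], Eq. (2); delayed by t0 ~ 0.8-1.0 Gyr."""
    return 4.8e-20 * m_star_msun * E_SN * (t0_myr / tau_myr) ** 1.5

def sn_kick(w_j, gamma, e_i_erg, m_j_g):
    """Radial velocity kick [cm/s] given to the j-th neighbour gas particle."""
    return math.sqrt(2.0 * w_j * gamma * e_i_erg / m_j_g)

# Example: gamma = 0.1 as in the text, an assumed kernel weight of 0.1, one
# SN of 1e51 erg deposited into a 4.5e5 Msun (~8.95e38 g) gas particle.
kick_cm_s = sn_kick(0.1, 0.1, 1.0e51, 8.95e38)
```

Both heating rates decline with time after the burst, so the feedback is spread over the progenitor lifetimes rather than injected instantaneously.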
When satellites are isolated, these parameters lead to a quasi-constant SFR between 0.2 and $3.0\,M_\odot$yr$^{-1}$, depending on the mass and initial gas fraction of the satellite. These SFRs are in good agreement with those for low-mass objects found in previous simulations of isolated galaxies \citep[e.g.][]{Stinson2006}. Finally, we have checked that increasing the resolution of the simulations (by a factor of 10) does not affect the star formation history or the derived colours, so that the conclusions presented in this work are robust. A typical example of the SFH is shown in Figure \ref{fig:sfr}. \begin{figure} \rotatebox{0}{\includegraphics[width=\columnwidth]{sfr.ps}} \caption{The star formation rate of a minor merger with mass ratio 1:6, where the gas fraction of the satellite is 20\% and $c_*=0.05$. The dashed line represents the same simulation with 10 times more particles. At the `final plunge' the satellite disappears into the parent elliptical, so that images (e.g. through an SDSS $r$-band filter) would indicate a single spheroidal object.} \label{fig:sfr} \end{figure} \section{The case for minor mergers} We begin by exploring the predicted photometry from an ensemble of minor merger simulations (described in Section 2) with the assumptions that (a) both merger progenitors have solar metallicity, (b) the elliptical and satellite have luminosity-weighted ages of 9 and 5 Gyrs respectively, and (c) the dust extinction due to the ISM in the remnant is given by $E_{B-V}^{ISM}\sim0.05$. {\color{black}We briefly describe the motivation for modelling the underlying population of the spheroid using a 9 Gyr old simple stellar population (SSP). It is well established that the bulk of the stellar populations in elliptical galaxies are uniformly old. This is demonstrated by the fact that (a) they show red optical colours with small scatter \citep[e.g.][]{BLE92,Stanford98} and (b) they exhibit high alpha-enhancement (e.g. 
[Mg/Fe]) ratios, which implies that the star formation took place on timescales shorter than the typical onset timescales of Type Ia supernovae, i.e. $<1$ Gyr \citep[see e.g.][]{Thomas1999}. Hence the underlying population can be approximated by an old SSP. The particular choice of 9 Gyrs is motivated by the fact that Bernardi et al. (2003), who recently studied the SDSS elliptical galaxy population in the nearby Universe, were able to fit their optical colour-magnitude relations with an SSP with an age of 9 Gyrs. It is worth noting that \emph{optical} colour evolution virtually stops after $\sim6$ Gyrs \citep{Yi2003}. This means that a 6 Gyr old SSP looks very similar to an older stellar population, e.g. one of 9 Gyrs. In other words, we could replace the 9 Gyr SSP with a 6 or 10 Gyr SSP and our results would not change. Note that similar techniques (i.e. using an old SSP to represent the underlying stellar population of ellipticals) have been frequently used by previous studies, e.g. \citet{Trager2000b} and \citet{Ferreras2000}. Finally, stellar populations that are older than $\sim2$ Gyrs do not contribute to the UV (see the bottom panel of Figure 7 in K07). As a result, the \emph{underlying} population of an elliptical galaxy will not contribute to the UV at all.} The estimate for the ISM extinction is an average value for local early-types, derived by K07 from parameter estimation using GALEX (UV) and SDSS (optical) photometry. Since the youngest stars are expected to reside in molecular clouds (MCs) which have short lifetimes of a few tens of Myrs \citep[e.g.][]{Blitz1980,Hartmann2001} and dust extinction several times larger than that due to the ISM alone \citep[e.g.][]{Charlot2000}, we explore MC lifetimes in the range 0-50 Myrs and MC extinctions in the range $0.05<E_{B-V}^{MC}<0.5$. Thus, when the synthetic photometry from the simulations is constructed, stars with ages less than the MC lifetime in question are subject to the prescribed MC extinction. 
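The extinction bookkeeping described above can be sketched as follows. Whether the MC extinction replaces or adds to the ISM term, and the value of the attenuation coefficient $k_\lambda$, are our assumptions rather than statements from the text:

```python
def e_bv_for_star(age_myr, t_mc_myr=30.0, e_bv_ism=0.05, e_bv_mc=0.3):
    """Colour excess applied to a star particle when building synthetic
    photometry: stars still embedded in their birth cloud (age below the
    MC lifetime) see the larger MC extinction; all others see the ISM term."""
    return e_bv_mc if age_myr < t_mc_myr else e_bv_ism

def attenuate(mag, e_bv, k_lambda):
    """Dim a magnitude by A_lambda = k_lambda * E(B-V). The value of
    k_lambda depends on the adopted attenuation curve (e.g. ~8 in the NUV
    for a Calzetti-like law -- an illustrative assumption)."""
    return mag + k_lambda * e_bv
```

Because $k_\lambda$ is large in the UV, even a modest difference in $E_{B-V}$ between cloud-embedded and exposed stars shifts the predicted $(NUV-r)$ colours appreciably, which is why the MC lifetime and extinction are varied as free parameters.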
The stellar models used in this study are described in \citet{Yi2003}. \begin{figure} \begin{center} \includegraphics[width=3.5in]{bcprops1_10} \caption{Predicted photometry from an ensemble of minor merger simulations with mass ratios of 1:10. Both progenitors are assumed to have solar metallicity with the elliptical and the satellite having ages of 9 and 5 Gyrs respectively. The ISM dust extinction in the remnant is given by $E_{B-V}^{ISM}\sim0.05$. Symbol types represent merger configurations - the star formation efficiency ($c_*$) and gas fraction ($G$) of the accreted satellites are shown in the top left legend. MC extinctions are shown using colours, while symbol sizes represent MC lifetimes. Grey dots represent the observed colours of luminous ($L^*$ or above) early-type galaxies in the redshift range $0.05<z<0.06$. Note that typical uncertainties in the observed colours are 0.2 mag and that the synthetic photometry has been redshifted to $z=0.065$ for a direct comparison.} \label{fig:bcprops1_10} \end{center} \end{figure} In Figure \ref{fig:bcprops1_10} we present the synthetic photometry from various configurations for mergers with mass ratios of 1:10. The remnant is `observed' at the point where the satellite finally disappears into the parent elliptical, so that images (e.g. through an SDSS $r$-band filter) would indicate a single spheroidal object. The colours shown are therefore the \emph{bluest} possible for each scenario where the system appears to be one object. Star formation declines after this `final plunge' (see Figure \ref{fig:sfr}) and the remnant reddens, in the $(NUV-r)$ colour, by $\sim0.8$ mag/Gyr for mass ratios of 1:4 and 1:6 and by $\sim0.5$ mag/Gyr for a mass ratio of 1:10. We find that, with reasonable MC properties - e.g. lifetimes $\geq30$ Myrs (red and green colours) and high dust extinctions (large symbols) - 1:10 mergers can reproduce most but not all of the scatter in the observed UV colours. 
In particular, the \emph{bluest} UV colours ($NUV-r\lesssim4$) cannot be accounted for by 1:10 mergers because the RSF induced is inadequate, even when the satellite has a high gas fraction ($\sim40$\%). The observed early-type UV colour-magnitude relation (CMR; shown using filled grey circles in Figure \ref{fig:bcprops1_10}) is restricted to the redshift range $0.05<z<0.06$ (to ensure robust morphological classification from SDSS images) and $r<16.2$ (which corresponds to galaxies more luminous than $L^*$ at this redshift). Recalling that Figure \ref{fig:bcprops1_10} assumes solar metallicity and a single age for the satellite (5 Gyrs), we now explore a wider parameter space where we vary the metallicity of the satellite in the range $0.2Z_{\odot}$ to $2Z_{\odot}$ and its luminosity-weighted age in the range 1-9 Gyrs. The analysis is restricted to realistic MC properties - lifetimes $\gtrsim$ 30 Myrs and extinction $\gtrsim$ 0.3. We present these results in Figure \ref{fig:age_met_spread}. While the predicted photometry falls in the centre of the locus of observed galaxies in Figure \ref{fig:bcprops1_10}, we find that a reasonable spread in the age and metallicity of the satellite can reproduce the `horizontal' spread in the $(u-r)$ colours. However, such a spread in age and metallicity cannot mimic low MC lifetimes/extinctions i.e. if we restrict our analysis to realistic MC properties only, low satellite ages and metallicities remain unable to reproduce the bluest UV colours. We also indicate, in Figure \ref{fig:age_met_spread}, the early-type galaxy fractions bluer than $(NUV-r)<3.8$ (the colour limit of the 1:10 mergers with realistic MC properties) and in the colour range $3.8<NUV-r\leq5.5$. 
K07 estimated that, within theoretical and observational uncertainties, galaxies bluer than $(NUV-r)\sim5.5$ are {\color{black}very} likely to have had some RSF (within the last $\sim$Gyr), while galaxies with $(NUV-r)\geq5.5$ can be consistent with both RSF and purely old ($>2$ Gyrs old) stellar populations. \begin{figure} \begin{center} $\begin{array}{c} \includegraphics[width=0.5\textwidth]{fiducial_bc} \end{array}$ \caption{TOP: Predicted photometry from an ensemble of minor mergers with mass ratios of 1:10. We assume a spread in the age (1-9 Gyrs; shown using symbol sizes) and metallicity ($0.2Z_{\odot}-2Z_{\odot}$; shown using colours) of the satellite. The observed colours of the local, {\color{black}intermediate-mass} ($L^*$ or above) early-type population are shown by the black dots. Large grey triangles indicate the positions of simple stellar populations with half-solar and solar metallicities that form at $z=3$.} \label{fig:age_met_spread} \end{center} \end{figure} We now repeat the analysis with mergers that have mass ratios of 1:6 and 1:4. Figure \ref{fig:bcprops1_6and1_4} indicates that the blue end of the early-type UV colours, which is inconsistent with 1:10 mergers, can indeed be reproduced by 1:6 and 1:4 mergers with realistic assumptions for MC properties. \begin{figure} \begin{center} $\begin{array}{c} \includegraphics[width=0.5\textwidth]{mergers1614}\\ \end{array}$ \caption{Predicted photometry from the same ensemble of minor merger simulations as in Figure \ref{fig:bcprops1_10} but with mass ratios of 1:6 (blue) and 1:4 (red). Both progenitors are assumed to have solar metallicity with the elliptical and the satellite having ages of 9 and 5 Gyrs respectively. The ISM dust extinction in the remnant is given by $E_{B-V}^{ISM}\sim0.05$. 
We only show configurations with realistic MC properties - lifetimes $\gtrsim$ 30 Myrs and extinction $\gtrsim$ 0.3.} \label{fig:bcprops1_6and1_4} \end{center} \end{figure} \section{Merger statistics in $\Lambda$CDM and reproduction of the observed early-type colours} While the UV colours of {\color{black}intermediate-mass} early-type galaxies appear consistent with minor mergers, the reproducibility of the entire UV CMR depends on the frequency of merging activity at low redshift. In Figure \ref{fig:merger_statistics} we present the average fraction of early-type galaxies {\color{black}(in dark matter halos of mass $10^{12}$ M$_{\odot}$ or above)} that are predicted to have had one (solid), two (dashed), three (dot-dashed) and four (triple dotted) 1:X mergers in a given look-back time in $\Lambda$CDM. The value of `X' is indicated in each panel. Merger trees are generated using the semi-analytical model of Khochfar and Burkert (2003, 2005) and \citet{Khochfar2006} and morphology is traced using stellar bulge:total ($B/T$) ratios - early-type galaxies are assumed to have $B/T>0.7$. {\color{black}The merger fractions are calculated by dividing the number of mergers that occurred in look-back time ($t$) bins in the histories of spheroids by the number of such spheroids at $z=0$. For example, the point at $t=0$ represents the merger fraction within look-back times of 0 to 1 Gyr, while the point at $t=1$ represents the merger fraction within look-back times of 1 to 2 Gyrs and so on. The merger fraction increases with look-back time because the merger rate in the Universe increases at higher redshift \citep[see also e.g.][]{Gottlober2001,LeFevre2000}. 
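The bookkeeping just described can be sketched as follows; the merger-time list is hypothetical toy data, not output of the semi-analytical model:

```python
from collections import Counter

def merger_fractions(merger_lookback_times_gyr, n_spheroids_z0, bin_gyr=1.0):
    """Fraction of z=0 spheroids with a merger in each look-back-time bin:
    count mergers per bin, then divide by the number of spheroids at z=0."""
    counts = Counter(int(t // bin_gyr) for t in merger_lookback_times_gyr)
    nbins = max(counts) + 1 if counts else 0
    return [counts.get(i, 0) / n_spheroids_z0 for i in range(nbins)]

# Hypothetical example: look-back times [Gyr] of 1:X mergers experienced
# by a population of 10 spheroids.
times = [0.3, 0.8, 1.2, 1.4, 2.5, 2.6, 2.9, 3.1]
fracs = merger_fractions(times, 10)  # one entry per 1-Gyr bin
```

Because each spheroid can appear in several bins, the per-bin fractions are merger fractions, not a normalised probability distribution.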
Note that the merger fraction increases monotonically until the merger rate in the Universe peaks and then drops off.} \begin{figure} \begin{center} \includegraphics[width=3.5in]{merger_statistics} \caption{{\color{black}The average fraction of spheroidal galaxies (in dark matter halos of mass $10^{12}$ M$_{\odot}$ or above) that are predicted to have had one (solid), two (dashed), three (dot-dashed) and four (triple dotted) 1:X mergers in a given look-back time in $\Lambda$CDM. The value of `X' is indicated in each panel and the dispersions in the fractions are $\sim20$\%. Fractions are calculated by dividing the number of mergers that occurred in look-back time ($t$) bins in the histories of spheroids by the number of such spheroids at $z=0$. For example, the point at $t=0$ represents the merger fraction within a look-back time of 0 to 1 Gyr, while the point at $t=1$ represents the merger fraction within a look-back time of 1 to 2 Gyrs and so on. The merger fraction increases with look-back time because the merger rate in the Universe increases at higher redshift \citep[see also e.g.][]{Gottlober2001,LeFevre2000}. Note that the merger fraction increases monotonically until the merger rate in the Universe peaks and then drops off.}} \label{fig:merger_statistics} \end{center} \end{figure} \begin{comment} We check first whether the predicted merging activity can reproduce the bluest ($NUV-r<3.8$) 6\% of the early-type population, which are not consistent with 1:10 mergers. Since the bluest observed early-types have ($NUV-r$)$\sim2.5$, and recalling that the simulated UV colours redden by $\sim0.8$ mag every Gyr, we compute the total fraction of early-types that have had mergers with mass ratios between 1:4 and 1:6 within the last $\sim1.5$ Gyrs. We find that $\sim$15\% of early-types satisfy this criterion, so that such mergers are sufficiently frequent to satisfy the early-type UV CMR where $(NUV-r)<3.8$. 
Early-type galaxies satisfying $3.8<NUV-r<5.5$ may have experienced either recent minor mergers with mass ratios $<$1:6 or mergers with higher mass ratios more than $\sim1.5$ Gyrs in the past. {\color{black} Note that while major mergers (which have mass ratios between 1:1 and 1:3) might also be expected to contribute to the scatter in the UV CMR, a study of the GALEX NUV photometry of a sample of \emph{ongoing} major mergers, identified by \citet{McIntosh2007} from the SDSS, indicates that major merger progenitors lie on the broad UV red sequence ($NUV-r>4.7$). While a caveat here is that the McIntosh et al. sample is small (compared to the SDSS galaxy population and especially after cross-matching with GALEX), it is reasonable to conclude that major mergers will contribute only to the broadness of UV `red sequence' ($NUV-r>4.7$), not to the full extent of the UV scatter and certainly not to the blue end of the UV CMR.} \end{comment} {\color{black}Before comparing the photometric predictions from our model machinery to the observations, we briefly note that, while major mergers (which have mass ratios between 1:1 and 1:3) might also be expected to contribute to the scatter in the UV CMR, a study of the GALEX $NUV$ photometry of a sample of \emph{ongoing} major mergers, identified by \citet{McIntosh2007} from the SDSS, indicates that major merger progenitors lie on the broad UV red sequence ($NUV-r>4.7$). While a caveat here is that the McIntosh et al. sample is small (compared to the SDSS galaxy population and especially after cross-matching with GALEX), it is reasonable to conclude that major mergers will contribute only to the broadness of UV `red sequence' ($NUV-r>4.7$), not to the full extent of the UV scatter and certainly not to the blue end of the UV CMR.} {\color{black}We now check if the predicted LCDM minor merger activity (i.e. 
events with mass ratios between 1:4 and 1:10), convolved with the predictions from the basic set of numerical simulations described above, is able to simultaneously reproduce the UV and optical CMRs of the low-redshift early-type population. Using the SDSS luminosity function of low-redshift early-type galaxies extracted in K07, we construct a large Monte-Carlo (MC) ensemble of 50,000 simulated objects and compare the synthetic CMRs from this ensemble with the observed early-type CMRs in both the $(NUV-r)$ and $(g-r)$ colours. We assume (a) realistic molecular cloud properties, i.e. random MC lifetimes between 30 and 50 Myrs and $0.3<E_{B-V}^{MC}<0.5$, (b) a Gaussian distribution in the SSP-weighted satellite ages, which peaks at 5 Gyrs with a width of 2 Gyrs, and (c) a uniform distribution in the satellite metallicities in the range $0.2Z_{\odot}-2Z_{\odot}$. We only consider merger events within the last 2.5 Gyrs and note that including larger time windows does not alter our results because the UV flux decays rapidly after 2 Gyrs anyway. In Figure \ref{fig:mcsims} we draw a random subset, equal to the number of observed galaxies in our comparison redshift range ($0.05<z<0.06$), from the parent MC ensemble of 50,000 objects and compare their $(NUV-r)$ and $(g-r)$ colours to those of the observed early-type population. A comparison between the colour histograms of the parent MC ensemble and the observed early-type population is also shown in the inset to each figure. Note that both histograms are normalised to 1. We find that, given the assumptions listed above, there is good quantitative agreement between the synthetic and observed CMRs and colour distributions in both UV and optical colours. In other words, the predicted minor merger activity in the standard model is able to reproduce the UV/optical properties of the early-type population (in particular the distribution scatter to blue UV colours) with a satisfactory level of accuracy. 
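The Monte-Carlo ensemble construction described above can be sketched as follows; the clipping of the Gaussian age distribution to the 1-9 Gyr range is our assumption, and the sampled quantities would then be fed through the simulation-based photometric predictions:

```python
import random

rng = random.Random(0)

def sample_configuration():
    """Draw one simulated object's merger configuration, using the
    distributions stated in the text."""
    age = min(9.0, max(1.0, rng.gauss(5.0, 2.0)))  # Gyr; clipping assumed
    z_met = rng.uniform(0.2, 2.0)                  # in units of Z_sun
    t_mc = rng.uniform(30.0, 50.0)                 # molecular cloud lifetime [Myr]
    e_bv_mc = rng.uniform(0.3, 0.5)                # molecular cloud E(B-V)
    return {"age": age, "Z": z_met, "t_mc": t_mc, "ebv_mc": e_bv_mc}

ensemble = [sample_configuration() for _ in range(50000)]
mean_age = sum(s["age"] for s in ensemble) / len(ensemble)
```

A random subset of this parent ensemble, matched in size to the observed sample, can then be compared directly with the observed colour distributions.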
This, in turn, provides a very strong plausibility argument for minor merging driving the recent star formation observed in early-type galaxies at low redshift.} \begin{figure} \begin{center} $\begin{array}{c} \includegraphics[width=0.5\textwidth]{mcnr}\\ \includegraphics[width=0.5\textwidth]{mcgr} \end{array}$ \caption{TOP: Comparison of the simulated $(NUV-r)$ colours, generated by convolving the photometric predictions from minor merger simulations with the predicted minor merger activity in LCDM (black), with the observed $(NUV-r)$ colours of the low-redshift early-type population. Colour histograms (normalised to 1) are shown in the inset. BOTTOM: The corresponding plot for the optical $(g-r)$ colour.} \label{fig:mcsims} \end{center} \end{figure} \begin{comment} Figure \ref{fig:merger_statistics} indicates that the predicted minor merger activity within the last few Gyrs is sufficient to reproduce the $\sim28$\% of the early-type population that occupies intermediate colours ($3.8<NUV-r<5.5$) on the UV CMR. For example, $\sim20$\% of early-types have had a minor mergers with mass ratios between 1:7 and 1:10 within the last $\sim1.5$ Gyrs. In addition, $\sim35$\% have had mergers with larger mass ratios (1:4 to 1:6) within the last $\sim4$ Gyrs, which is the time window within which such mergers would generate UV colours bluer than $(NUV-r)<5.5$, given a reddening rate of $\sim0.8$ mag per Gyr (see Section 3). 
In summary, we find that the minor merger activity predicted by $\Lambda$CDM exceeds that required to satisfy the scatter in the UV CMR of {\color{black}intermediate-mass} early-type galaxies at low redshift.} \end{comment} {\color{black}\subsection{A note about observational signatures of merging} It is worth noting here that the arguments presented above imply that an appreciable number of early-type systems in the nearby Universe should either be in a `closepair' or `pre-merger' system with a satellite or exhibit morphologies that are consistent with recent merging events. We should mention first that several observational studies have looked at closepairs from a variety of surveys across a range of redshifts \citep[e.g.][]{Patton2000,LeFevre2000}. While such studies are very useful, a potential complication is that the detection of closepairs relies on a range of criteria. \citet{Patton2000}, for example, tag pairs of galaxies as `pre-mergers' where the transverse separation on the sky is less than 20$h^{-1}$ kpc and the relative velocities are less than 500 km s$^{-1}$. The number of closepairs naturally depends on the criteria being used. For example, making the criteria stricter increases the likelihood that each candidate system is truly a `pre-merger'. However, it also results in fewer pre-mergers being found in total. An added problem is that computing relative velocities requires spectroscopic redshifts. As a result, smaller, fainter galaxies at the magnitude limit of typical spectroscopic surveys will not have redshifts and therefore minor mergers cannot be efficiently identified (since only the larger progenitor has a redshift). Hence the lack of large numbers of minor merger closepairs in typical observational studies is not a good indicator of the true frequency of such events in the nearby Universe.
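The selection criterion itself can be written down in a few lines, and tightening its thresholds illustrates the purity/completeness trade-off discussed above (the function, the default thresholds and the toy pair list are illustrative, not survey data):

```python
def is_pre_merger(r_proj_kpc, dv_kms, r_max_kpc=20.0, dv_max_kms=500.0):
    """Patton et al. (2000)-style 'pre-merger' tag: projected separation
    below ~20/h kpc and relative velocity below 500 km/s.  Note that the
    velocity cut needs spectroscopic redshifts for *both* galaxies, which
    is what makes minor-merger pairs hard to select in practice."""
    return r_proj_kpc < r_max_kpc and dv_kms < dv_max_kms

# toy pairs: (projected separation [kpc], relative velocity [km/s])
pairs = [(12.0, 250.0), (18.0, 650.0), (35.0, 100.0), (8.0, 250.0)]
n_loose = sum(is_pre_merger(r, v) for r, v in pairs)
# stricter cuts: purer sample, but fewer candidates survive
n_strict = sum(is_pre_merger(r, v, r_max_kpc=10.0, dv_max_kms=300.0)
               for r, v in pairs)
```

With these toy numbers the strict cut retains fewer pairs than the loose one, which is the point made in the text.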
\begin{comment} The SDSS spectroscopic survey, for example, has an $r$-band limit of 17.77 - at a redshift of $z\sim0.08$, this translates to $M(r)\sim -20$ and a stellar mass of $\sim 1.5 \times 10^{10}$ M$_{\odot}$. This implies that, if the larger progenitor has a mass of $\sim 10^{11}$ M$_{\odot}$ then minor mergers with mass ratios lower than 1:10 cannot be identified. \end{comment} A better alternative is to look for \emph{post-mergers}, i.e. objects that show tell-tale signatures of recent merger activity. An added advantage of studying post-merger systems is that one is sure that the merger has actually taken place, whereas with pre-merger systems it is just a guess (albeit a well-educated one!). While a large observational survey like the SDSS could be the perfect test-bed for performing such a study, the typical 50-second exposure SDSS images are not deep enough to detect faint morphological disturbances from minor mergers. However, van Dokkum (2005) has recently used very deep optical photometry to show that over 70\% of early-types on the optical \emph{red} sequence exhibit morphological signatures of merging. The morphological features (e.g. fans, tails, shells) are faint and red down to surface brightness limits of $\mu\sim28$ mag arcsec$^{-2}$ and consistent with minor mergers (where the induced star formation is at a very low level). These features do not appear in the corresponding SDSS images and an analysis of the UV magnitudes of these red mergers (using GALEX) indicates that their UV CMR is similarly broad to that found for the general SDSS early-type population in K07 (Kaviraj and van Dokkum, in prep).} \section{Discussion and summary} We have compared the UV colours of nearby ($0.05<z<0.06$) early-type galaxies with synthetic photometry derived from numerical simulations of minor mergers, with reasonable assumptions for the ages, metallicities and dust properties of the merger progenitors.
{\color{black}\emph{Observational} estimates for satellite gas fractions have been taken from \citet{Kannappan2004} and minor merger simulations have been performed using these gas fractions. We have then appealed to the merger statistics in the standard $\Lambda$CDM paradigm to check whether the minor merger activity could plausibly drive the scatter in the UV CMR at low redshift. We have found that the bluest end of the early-type UV CMR ($NUV-r<3.8$) is consistent with mergers that have mass ratios between 1:4 and 1:6 (and cannot be reproduced by events with mass ratios less than or equal to 1:10), assuming that the infalling satellites have gas fractions of $\sim$20\% or higher, {\color{black}which are consistent with the observationally constrained gas fractions from \citet{Kannappan2004}}. Early-types with intermediate UV colours ($3.8<NUV-r<5.5$) are consistent with either recent minor mergers with mass ratios less than 1:6 or mergers with higher mass ratios more than $\sim1$ Gyr in the past. {\color{black}Major mergers are likely only to contribute to the broadness of the UV red sequence and not to the blue scatter in the UV CMR.} {\color{black}Furthermore, we have demonstrated that the predicted minor merger activity in the standard model, convolved with photometric predictions from our fiducial set of numerical simulations, is able to simultaneously reproduce the UV and optical CMRs and colour distributions of the low-redshift early-type population, in particular the large scatter to blue colours. This, in turn, provides a strong plausibility argument for minor mergers being responsible for the RSF in early-type galaxies in the nearby Universe.} We note here that our study does not utilise a full-blown semi-analytical model for two important reasons. Firstly, the amount of gas available at late times in such a model depends on the baryonic recipes implemented within it. Different models can predict different gas fractions in satellites at late times.
Secondly, in current semi-analytical models the merger event is modelled `instantaneously', i.e. the full age profile of stars formed in the merger is not taken into account. The correct way to implement minor merger events in a semi-analytical model is to apply star formation histories from simulations, such as those presented in this paper, to merger events in the model. A full analysis with such an implementation will be provided in a forthcoming paper (Khochfar et al., in prep). In this paper we have appealed \emph{only} to the merger fractions predicted by LCDM, which are robust. Finally, the primary reason for using the Khochfar and Burkert model is that it uses the Extended Press Schechter (EPS) formalism to generate merger trees, resulting in effectively infinite mass resolution, which is important because we are specifically looking at minor mergers involving small (satellite) objects. {\color{black}We end by noting some possible caveats to the analysis presented here. Firstly, although it is reasonable to infer relatively high gas fractions ($>20$\%) for infalling satellites based on \citet{Kannappan2004}, it is worth noting that the gas fractions are derived from calibrations calculated by combining SDSS photometry and HI measurements. Confirmation of these gas fractions (especially in satellite galaxies) requires larger and deeper surveys that yield gas mass measurements, which are unavailable right now but may be possible using future instruments such as Herschel. Secondly, we note that although cooling from hot gas halos could potentially contribute to the supply of cold gas that then drives star formation, this channel is unlikely in galaxies in the mass range studied here.
\citet{Dekel2006} show that, at late epochs ($z<2$), the gas in halos above a critical shock-heating mass ($10^{12}$M$_{\odot}$; consistent with the galaxies considered in this study) is heated by a virial shock, leading to long cooling timescales that effectively shut down the gas supply and subsequent star formation. Furthermore, unchecked accretion from hot gas halos would result in early-types becoming too massive and too blue \citep{Benson2003}. Since there is no reason to believe that $z\sim0$ is a preferential epoch for gas cooling, evidence from the luminosity function of massive galaxies renders star formation from cooling flows very unlikely. This, combined with the abundance of minor merger features in the \citet{VD2005} study, strongly indicates that the recent star formation is driven not by gas cooling but by merging activity.} {\color{black}While this study provides a strong plausibility argument for minor mergers being the principal mechanism behind the large UV scatter and associated low-level recent star formation in early-type galaxies, similar studies are required at intermediate redshifts ($0.5<z<1$) to check whether the evolution of this scatter is consistent with the $\Lambda$CDM merger statistics. In addition, deep/high-resolution imaging, e.g. with the Hubble Space Telescope, is required to confirm the \emph{coincidence} of morphological signatures produced by merging with UV excess in these galaxies. Current galaxy surveys such as COSMOS (Scoville et al. 2007) make such analyses possible and results from such studies will be presented in a forthcoming paper.} \section*{Acknowledgements} We warmly thank the anonymous referee for an insightful review that considerably improved the quality of the original manuscript.
Sugata Kaviraj acknowledges a Leverhulme Early-Career Fellowship (until Oct 2008), a Research Fellowship from the Royal Commission for the Exhibition of 1851 (from Oct 2008), a Beecroft Fellowship from the BIPAC Institute and a Senior Research Fellowship from Worcester College, Oxford. S. Peirani acknowledges support from ANR. Finally, we thank J. A. de Freitas Pacheco, F. Durier, I. Ferreras, S. K. Yi and A. Pipino for stimulating conversations. \nocite{Kaviraj2006} \nocite{Martin2005} \nocite{Khochfar2003} \nocite{Khochfar2005} \nocite{Kaviraj2007a} \nocite{Kaviraj2007b} \nocite{Bernardi2003c} \nocite{Scoville2007} \bibliographystyle{mn2e}
\section{Introduction} The idea that the light and fragile elements Li, Be and B are produced by the interaction of the energetic nuclei of galactic cosmic rays (GCR) with the nuclei of the interstellar medium (ISM) was introduced 40 years ago (Reeves et al. 1970, Meneguzzi et al. 1971, hereafter MAR). In those early works it was shown that, taking into account the relevant cross-sections and with plausible assumptions about the GCR properties - source composition, intensity and spectrum - one may reproduce reasonably well the abundances of those light elements observed in GCR and in meteorites (pre-solar). Among the required ingredients for such a calculation, the relevant spallation cross sections of CNO nuclei are accurately measured in the laboratory. The source composition and the equilibrium energy spectrum of GCR are inferred from a combination of observations and models of GCR propagation in the Milky Way (e.g. in the framework of the so-called ``leaky box'' model). Once the equilibrium spectra of GCR in the ISM are established, the calculation of the resulting abundances of LiBeB is straightforward, at least to first order\footnote{The full calculation should include production by spallation of other primary and secondary nuclides, such as $^{13}$C; however, this has only second order effects.}. The production rate (s$^{-1}$) of the abundance $Y_L=N_L/N_H$ (by number) of LiBeB nuclei is given by \begin{equation} \frac{dY_L}{dt} \ = \ F^{GCR}_{p,a}\sigma_{pa+CNO}Y^{ISM}_{CNO} \ + F^{GCR}_{CNO}\sigma_{pa+CNO}Y^{ISM}_{p,a} P_L\ + F^{GCR}_{a}\sigma_{a+a}Y^{ISM}_{a} P_L \end{equation} where: $F$ (cm$^{-2}$ s$^{-1}$) is the average GCR flux of protons, alphas or CNO, $Y$ the abundances by number of those nuclei in the ISM, and $\sigma$ (cm$^2$) is the average (over the equilibrium energy spectrum of GCR) cross-section for the corresponding spallation reactions producing LiBeB.
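Before discussing the individual terms, a quick order-of-magnitude check: the first term alone, evaluated with the representative values quoted in this section, already reproduces the meteoritic Be abundance,

```latex
% Order-of-magnitude evaluation of the direct term of Eq. (1), with
% F_p ~ 10 cm^-2 s^-1, sigma ~ 1e-26 cm^2, Y_CNO ~ 1e-3, Dt ~ 1e10 yr:
\begin{equation*}
\frac{dY_{Be}}{dt} \simeq F^{GCR}_{p}\,\sigma\,Y^{ISM}_{CNO}
\simeq 10 \times 10^{-26} \times 10^{-3} = 10^{-28}\ {\rm s^{-1}},
\end{equation*}
\begin{equation*}
Y_{Be} \simeq 10^{-28}\ {\rm s^{-1}} \times 10^{10}\ {\rm yr}
\times 3.15\times10^{7}\ {\rm s\,yr^{-1}} \simeq 3\times10^{-11},
\end{equation*}
```

within a factor of $\sim$2 of the meteoritic value.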
The first term in the right hand member of this equation (fast protons and alphas hitting CNO nuclei of the ISM) is known as the ``direct'' term, the second one (fast CNO nuclei being fragmented on ISM protons and alphas) is the ``reverse'' term and the last one involves ``spallation-fusion'' reactions, concerning only the Li isotopes. $P_L$ is the probability that nuclide $L$ (produced at high energy) will be thermalized and remain in the ISM (see, e.g. Prantzos 2006). Obviously, the GCR flux term $F^{GCR}_{CNO} \propto Y^{GCR}_{CNO}$ is proportional to the abundances of CNO nuclei in GCR, a fact of paramount importance for the evolution of Be and B (see next sections). Substituting appropriate values for GCR fluxes ($F^{GCR}_p\sim$10 p cm$^{-2}$ s$^{-1}$ for protons and scaled values for other GCR nuclei), for the corresponding cross sections (averaged over the GCR equilibrium spectrum, $\sigma_{p,a+CNO\longrightarrow Be}\sim$10$^{-26}$ cm$^{2}$) and for ISM abundances $Y_{CNO}\sim$10$^{-3}$, and integrating for $\Delta t \sim$10$^{10}$ yr, one finds $Y_{Be}\sim 2\times10^{-11}$, i.e. approximately the meteoritic Be value. Satisfactory results are also obtained for $^6$Li and $^{10}$B. Two problems were identified with the GCR production, compared to meteoritic composition: the $^7$Li/$^6$Li ratio ($\sim$2 in GCR, but $\sim$12 in meteorites) and the $^{11}$B/$^{10}$B ratio ($\sim$2.5 in GCR, but $\sim$4 in meteorites). It was then suggested in MAR that supplementary sources are needed for $^7$Li and $^{11}$B. Modern solutions to those problems involve {\it stellar} production of $\sim$60\% of $^7$Li (in the hot envelopes of AGB stars and/or novae, see Sec. 7) and of $\sim$40\% of $^{11}$B (through $\nu$-induced spallation of $^{12}$C in SN, see Sec. 5). In both cases, however, uncertainties in the yields are such that observations are used to constrain the yields of the candidate sources rather than to confirm the validity of the scenario. \begin{figure}[t!]
\begin{center} \includegraphics[width=0.99\textwidth]{Prantzos_Be_OFe.eps} \caption[]{ Observations of Be vs. Fe ({\it left}) and vs. O ({\it right}). In both panels, dotted lines indicate slopes of 1 (primary) and 2 (secondary). Be clearly behaves as a primary vs. Fe, whereas there is more scatter in the data vs. O. } \label{eps1} \end{center} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[angle=-90,width=0.7\textwidth]{Prantzos_Energy.eps} \caption[]{Energy input required in the form of energetic particles accelerated by one CCSN in order to produce a given mass of Be, such as to have [Be/Fe]=0 (solar), assuming that a core collapse SN produces, on average, 0.1 M$_{\odot}$ of Fe. The {\it solid} curve corresponds to the case of a constant composition for GCR, the {\it dotted} curve to a time-variable composition, following that of the ISM. In the former case, the required energy is approximately equal to the energy imparted to energetic particles by supernovae, namely $\sim$0.1 of their kinetic energy of $\sim 1.5\times10^{51}$ ergs; in the latter case, the energy required to keep [Be/Fe]=0 becomes much larger than the total kinetic energy of a CCSN for metallicities [Fe/H]$\leq$-1.6.} \label{eps1b} \end{center} \end{figure} \section{Primary Be: the problem} Observations of halo stars in the 1990s revealed a linear relationship between Be/H and Fe/H (Gilmore et al. 1991, Ryan et al. 1992) as well as between B/H and Fe/H (Duncan et al. 1992). That was unexpected, since Be and B were thought to be produced as {\it secondaries}, by spallation of the increasingly abundant CNO nuclei. Indeed, the first two terms in Eq. 1.1 were thought to evolve in the same way with time (or metallicity), since the composition of GCR Y$_{CNO}^{GCR}$ was supposed to evolve in step with the one of the ISM Y$_{CNO}^{ISM}$. Only the Li isotopes, produced at low metallicities mostly by $\alpha+\alpha$ reactions, were thought to be produced as primaries (Steigman and Walker 1992).
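The slopes labelled primary and secondary in Fig. 1 follow from Eq. 1.1 in two lines, under the usual schematic assumptions that Fe/H counts the SN that have exploded and that the GCR flux scales with the SN rate (a sketch, not a full chemical evolution model):

```latex
% q_Be = Be yield per SN; Fe/H taken as a proxy for the number of SN.
\begin{align*}
\text{secondary (direct term):}\quad
 & q_{Be}\propto Y^{ISM}_{CNO}\propto {\rm Fe/H}
 \;\Rightarrow\; {\rm Be/H}\propto ({\rm Fe/H})^{2}
 \quad (\text{logarithmic slope } 2),\\
\text{primary (reverse term):}\quad
 & q_{Be}\simeq {\rm const}
 \;\Rightarrow\; {\rm Be/H}\propto {\rm Fe/H}
 \quad (\text{logarithmic slope } 1).
\end{align*}
```

The observed slope of $\sim$1 in Be vs. Fe is therefore the signature of a constant Be yield per SN.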
The only way to produce primary Be is by assuming that GCR have always the same CNO content, as suggested in Duncan et al. (1992). Other efforts to enhance the early production of Be, by e.g. invoking a better confinement - and thus, higher fluxes - of GCR in the early Galaxy (Prantzos et al. 1993) failed. The reason for that failure was clearly revealed by the ``energetics argument'' put forward by Ramaty et al. (1997): if SN are the main source of GCR energy, there is a limit to the amount of light elements produced per SN, which depends on GCR and ISM composition. If the metal content of {\it both} ISM and GCR is low, there is simply not enough energy in GCR to keep the Be yields constant (Fig. 2)\footnote{For reasons unknown to the author, the energetics argument was obviously not understood by many prolific researchers in the field in the late 1990s.}. Since the ISM metallicity certainly increases with time, the ``direct'' component in Eq. 1.1 produces only secondary LiBeB. The only possibility to have $\sim$constant LiBeB yields is by assuming that the ``reverse'' component is primary, i.e. that GCR have a $\sim$constant metallicity. This has profound implications for our understanding of the GCR origin. It should be noted that before those Be and B observations, no one would have thought to ask ``what was the GCR composition in the early Galaxy?''. \begin{figure}[t!] \begin{center} \includegraphics[width=0.9\textwidth]{Prantzos_CR_Origin.eps} \caption[]{ Scenarios for the origin of Galactic cosmic rays (GCR). {\bf {\it A}}: GCR originate from the interstellar medium (ISM) and are accelerated by the forward shock (FS) of supernovae (SN). {\bf {\it B}}: GCR originate from the interior of supernovae and are accelerated by the reverse shock (RS), propagating inwards.
{\bf {\it C}}: GCR originate from superbubble material (SBM), enriched by the metals ejected by supernovae and massive star winds; they are accelerated by the forward shocks of supernovae {\it and} stellar winds. {\bf {\it D}}: GCR originate from the wind material of massive {\it rotating} stars, {\it always rich in CNO} (but not in heavier nuclei); they are accelerated by the forward shock of the SN explosion. } \label{eps2} \end{center} \end{figure} \section{Origin of cosmic rays} For quite some time it was thought that GCR originate from the average ISM, where they are accelerated by the {\it forward shocks} of SN explosions (Fig. 3.A). However, this can only produce secondary Be. A $\sim$constant abundance of C and O in GCR can ``naturally'' be understood if SN accelerate their own ejecta, through their {\it reverse shock} (Ramaty et al. 1997, see Fig. 3.B). However, the absence of unstable $^{59}$Ni (decaying through e$^-$ capture within 10$^5$ yr) from observed GCR suggests that acceleration occurs $>$10$^5$ yr after the explosion (Wiedenbeck et al. 1999), when SN ejecta are presumably already diluted in the ISM. Furthermore, the reverse shock has only a small fraction of the SN kinetic energy, while observed GCR require a large fraction of it\footnote{The power of GCR is estimated at $\sim$10$^{41}$ erg s$^{-1}$ galaxywide, i.e. about 10\% of the kinetic energy of SN, which is $\sim$10$^{42}$ erg s$^{-1}$ (assuming 3 SN/century for the Milky Way, each one endowed with an average kinetic energy of $1.5\times10^{51}$ ergs).}. Higdon et al. (1998) suggested that GCR are accelerated out of {\it superbubble} (SB) material (Fig. 3.C), enriched by the ejecta of many SN so as to have a large and $\sim$constant metallicity. In this scenario, it is the forward shocks of SN that accelerate material ejected from other, previously exploded SN.
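The arithmetic behind the energetics footnote is worth spelling out (round numbers as quoted there):

```latex
% Galaxy-wide SN kinetic power and the implied GCR acceleration efficiency:
\begin{equation*}
L_{SN} \simeq \frac{3}{100\ {\rm yr}}\times 1.5\times10^{51}\ {\rm erg}
\simeq 4.5\times10^{49}\ {\rm erg\,yr^{-1}}
\simeq 1.4\times10^{42}\ {\rm erg\,s^{-1}},
\qquad
\epsilon \simeq \frac{L_{GCR}}{L_{SN}}
\simeq \frac{10^{41}}{1.4\times10^{42}} \sim 0.1 ,
\end{equation*}
```

i.e. a $\sim$10\% acceleration efficiency suffices if the full SN kinetic energy is tapped, whereas the reverse shock alone carries much less.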
Furthermore, it has been argued that in such an environment GCR could be accelerated to higher energies than in a single SN remnant (Parizot et al. 2004). That scenario has also been invoked in order to explain the present day source isotopic composition of GCR (Binns et al. 2005, Rauch et al. 2009). Notice that the main feature of that composition, namely a large $^{22}$Ne/$^{20}$Ne ratio, is explained as due to the contribution of winds from Wolf-Rayet (WR) stars (e.g. Prantzos et al. 1987), and the SB scenario offers a plausible (but not unique) framework for bringing together contributions from both SN and WR stars. \begin{figure}[t!] \begin{center} \includegraphics[angle=-90,width=0.95\textwidth]{Prantzos_Be_OFeNew.eps} \caption[]{{\it Left:} Evolution of the chemical composition (in corresponding solar abundances) of He-4 ({\it solid}), C-12 ({\it dotted}), N ({\it short dashed}) and O ({\it long dashed}) in: ISM ({\it top}), massive star winds ({\it middle}) and GCR ({\it bottom}). {\it Dots} in the lower panel indicate the estimated GCR source composition (from Ellison et al. 1997). {\it Right}: Evolution ({\it solid curves}) of O/Fe ({\it top}), Be/H ({\it middle}) and Be/Fe ({\it bottom}); {\it dotted lines} indicate solar values in the top and bottom panels, primary and secondary Be in the middle panel. } \label{eps3} \end{center} \end{figure} However, the SB scenario suffers from (at least) two problems. First, core collapse SN are observationally associated with HII regions (van Dyk et al. 1996) and it is well known that the metallicity of HII regions reflects that of the {\it ambient ISM} (i.e. it can be very low, as in IZw18) rather than that of the SN. Moreover, Higdon et al. (1998) evaluated the time interval $\Delta t$ between SN explosions in a SB at a comfortable $\Delta t \sim 3\times10^5$ yr, leaving enough time for $^{59}$Ni to decay before the next SN explosion and subsequent acceleration.
However, Prantzos (2005) noticed that SB are constantly powered not only by SN but also by the strong winds of massive stars (with integrated energy and acceleration efficiency similar to those of SN, e.g. Parizot et al. 2004), which should continuously accelerate $^{59}$Ni, as soon as it is ejected from SN explosions. Binns et al. (2008) argued that the problem may be alleviated by the fact that only the most massive (and thus, short-lived) stars of an OB association emit strong winds; during the late (and longest) fraction of the lifetime of the SB (a few 10$^7$ years) particles are accelerated episodically (by SN explosions only) and no longer continuously. Still, it is hard to imagine that superbubbles have always the same average metallicity, especially during the early Galaxy evolution, when metals were easily expelled out of the shallow potential wells of the small sub-units forming the Galactic halo (e.g. Prantzos 2008). \section{Cosmic rays from stellar winds and primary Be} In this work we propose a different explanation for the origin of GCR, which can also provide a satisfactory explanation for the primary nature of Be evolution. We first notice that there is now substantial evidence that GCR are indeed accelerated in SN remnants (e.g. Berezhko et al. 2009 and references therein). We then notice that, contrary to the case of non-rotating massive stars, which lose mass only at high metallicity, {\it rotating} massive stars display substantial mass loss down to very low (or even zero) metallicities (e.g. Meynet, this volume). The winds of those stars are enriched in CNO (products of H and He burning {\it within} the star itself) at all metallicities and at about the same level; it is precisely this enrichment of the WR winds at all metallicities that allows us to understand the observed primary behaviour of N down to the lowest metallicity halo stars (Chiappini et al. 2006).
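The qualitative effect of such a wind contribution can be illustrated with a toy mixture (the numerical values, the 50/50 weighting and the function are purely illustrative; the actual calculation uses the IMF-integrated Geneva wind yields):

```python
# Toy GCR source composition: equal-weight mix of massive-star wind
# material (CNO made in situ by H/He burning, hence roughly independent
# of the initial metallicity Z) and ambient ISM (CNO proportional to Z).
Y_CNO_WIND = 1.0e-3                       # assumed ~constant wind CNO abundance

def y_cno_gcr(z_over_zsun, y_cno_sun=1.0e-3, f_wind=0.5):
    y_cno_ism = y_cno_sun * z_over_zsun   # ISM CNO scales with metallicity
    return f_wind * Y_CNO_WIND + (1.0 - f_wind) * y_cno_ism

low_z = y_cno_gcr(1.0e-3)   # metallicity typical of an extreme halo star
solar = y_cno_gcr(1.0)
ratio = solar / low_z
# ratio ~ 2 rather than ~1000: the wind term puts a metallicity 'floor'
# on the GCR CNO content, so Be made by the reverse term is quasi-primary
```

Because of this floor, the reverse-term Be yield per SN stays within a factor of a few of its solar-metallicity value even at very low [Fe/H].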
This gives some confidence in using the same model results to predict the composition of GCR over the history of the Milky Way. We assume then that GCR are accelerated when the forward shocks of SN propagate into the previously ejected envelopes of rotating massive stars, which have been partially mixed with the surrounding ISM. The calculation of the resulting GCR composition $Y^{GCR}(M)$ is far from trivial: it will be mostly $Y^{Wind}(M)$ in the case of SN with initial mass $M>$20 M$_{\odot}$ (having lost a large fraction of their mass in the wind) and mostly $Y^{ISM}$ in the case of $M$=10-20 M$_{\odot}$ stars, which suffer little mass loss. For illustration purposes we adopt here, as a function of metallicity $Z$, $Y^{GCR}_{paCNO}(Z)$=0.5 [$Y^{Wind}_{paCNO}(Z)$+$Y^{ISM}_{paCNO}(Z)$], where $Y^{Wind}(Z)$ is provided by the Geneva models (G. Meynet, private communication) and is integrated over a stellar IMF, whereas $Y^{ISM}(Z)$ is provided by the chemical evolution model (left panels in Fig. 4). The calculation of the Be evolution is then straightforward and nicely fits the data (right panels in Fig. 4); it is the first time that such a calculation is performed {\it not by assuming} a given $Y^{GCR}_{paCNO}(Z)$ but by {\it calculating} it in a (hopefully) realistic way. \begin{figure}[t!] \begin{center} \includegraphics[angle=-90,width=0.85\textwidth]{Prantzos_BeB_evol.eps} \caption[]{{\it Left:} Evolution of B ({\it top}) and Be ({\it bottom}); in both panels, {\it dotted lines} indicate primary and secondary evolution and {\it solid} curves indicate model evolution, including appropriately normalised $\nu$-yields for $^{11}$B. {\it Right}: Evolution of B/Fe ({\it top}) and B/Be ({\it bottom}). In the latter case, data indicate a subsolar mean value of B/Be$\sim$14, compatible with exclusively GCR production of both elements, but the uncertainties (not shown here) are too large to allow conclusions.
} \label{eps4} \end{center} \end{figure} \section{Boron-11 from $\nu$-nucleosynthesis ?} As mentioned in Sec. 1, a supplementary source of $^{11}$B is required in order to obtain the meteoritic $^{11}$B/$^{10}$B=4 ratio. That source may be the $\nu$-process in SN, extensively studied in Woosley et al. (1990): a fraction of the most energetic among the $\sim$10$^{59}$ neutrinos of a SN explosion spallate $^{12}$C nuclei in the C-shell of the stellar envelope to provide $^{11}$B (but no other light nuclide). Soon after the HST observations of the primary behaviour of B (Duncan et al. 1992) it was realised that the $\nu$-process can provide just such a primary B (Olive et al. 1994). But, if Be is produced as primary by GCR (Sec. 4), then more than $\sim$50\% of B is also produced as primary, leaving a rather small role to the $\nu$-process. In fact, the large uncertainties in the $\nu$ yields of $^{11}$B do not allow an accurate evaluation of the B evolution: rather, the B evolution (resulting from both GCR and the $\nu$-process) has to be used in order to constrain the B yields of SN. The results of such an ``exercise'' appear in Fig. 5. In order to fit the observations, the $\nu$ yields of Woosley and Weaver (1995) had to be divided by a factor of $\sim$6, otherwise B/H and B/Fe would be overproduced. Notice that the model B/Be ratio is always $\sim$24 (i.e. solar), substantially higher than the observed, but {\it highly uncertain}, B/Be$\sim$14 ratio in halo stars (which is consistent with pure GCR production of both elements!). Clearly, future observations with HST are required to clarify that important issue. \begin{figure}[t!] \begin{center} \includegraphics[angle=-90,width=0.85\textwidth]{Prantzos_EarlyLi6.eps} \caption[]{Evolution of total Li ({\it upper} set of data points and {\it solid} curve for the model assuming high primordial $^7$Li), Be ({\it lower} set of points and {\it solid} curve) and $^6$Li ({\it intermediate} set of points and curves).
$^6$Li data are from Asplund et al. (2006, small filled circles with error bars) and Garcia-Perez et al. (2009, large open circles, with (large) error bars not displayed), while model curves are for a canonical (``low'') pre-galactic $^6$Li ({\it dotted}) and a ``high'' pre-galactic $^6$Li ({\it dashed}). In the latter case, a minimum amount of depletion within stars (equal to that of $^7$Li) has been conservatively assumed. } \label{eps4b} \end{center} \end{figure} \section{Early $^7$Li and $^6$Li: ``high'' or ``low'' ?} For a long time, the Li ``plateau'' in low metallicity halo stars (discovered by Spite and Spite 1982) was considered to reflect the primordial abundance of $^7$Li. However, the precise determination of the baryonic density through observations of the cosmic microwave background, combined with results of standard Big Bang nucleosynthesis (SBBN), suggests that the true value of primordial $^7$Li should be 2-3 times higher. It is not yet clear whether this discrepancy is due to some problems with SBBN, whether non-standard particle physics might cure it, or whether primordial $^7$Li is depleted in the surface convective zones of low metallicity stars with such an astonishing uniformity (see many contributions in this volume). Other suggestions, like e.g. astration by a pre-galactic Pop. III population of massive stars (Piau et al. 2006), face severe problems of metal overproduction (Prantzos 2006). This issue, one of the most important ones for our understanding of mixing in stellar interiors, has also important implications for the chemical evolution of Li, as we shall see below. \begin{figure}[t!]
\begin{center} \includegraphics[width=0.7\textwidth]{Prantzos_Li_contr.eps} \caption[]{Evolution of total Li ({\it top}) and percentages of its various components ({\it bottom}): Li-7 from GCR ({\it dot-dashed}), Li-6 from GCR ({\it dotted}), Li-7 from $\nu$-nucleosynthesis (NN, {\it dashed}) and Li-7 from a delayed stellar source (novae and/or AGB stars, {\it long dashed}). {\it Solid} curves indicate total Li ({\it upper} panel) and primordial $^7$Li ({\it lower} panel). } \label{eps4c} \end{center} \end{figure} The report of an ``upper envelope'' for $^6$Li/H in low metallicity halo stars (Asplund et al. 2006) gave a new twist to the LiBeB saga. The reported $^6$Li/H value at [Fe/H]=-2.7 is much larger (by a factor of 20-30) than expected if GCR are the only source of the observed $^6$Li/H in that star, assuming that GCR can account for the observed evolution of Be (see Fig. 6). But, if it turns out that the true primordial Li is the one corresponding to the WMAP+SBBN value, then the initial $^6$Li values in halo stars should be at least a factor of 3 higher than evaluated by Asplund et al. (2006, see Fig. 6). It should be noticed, however, that such high $^6$Li values are not obtained in other investigations (Cayrel et al. 200, Steffen et al. 2009). In the past few years, the possibility of important pre-galactic production of $^6$Li by non-standard processes has drawn considerable attention from theoreticians, who proposed several scenarios: 1) Primordial, non-standard, production during Big Bang Nucleosynthesis: the decay/annihilation of some massive particle (e.g. neutralino) releases energetic nucleons/photons which produce $^3$He or $^3$H by spallation/photodisintegration of $^4$He, while subsequent fusion reactions between $^4$He and $^3$He or $^3$H create $^6$Li (e.g. Jedamzik 2004, and this meeting). Observations of $^6$Li/H then constrain the masses/cross-sections/densities of the massive particle.
2) Pre-galactic, by fusion reactions of $^4$He nuclei, accelerated by the energy released by massive stars (Reeves 2005) or by shocks induced during structure formation (Suzuki and Inoue 2002). 3) In situ production by stellar flares, through $^3$He+$^4$He reactions involving large amounts of accelerated $^3$He (Tatischeff and Thibaud 2007). Prantzos (2006) showed that the energetics of $^6$Li production by accelerated particles severely constrain any scenario proposed in category (2) above, including jets accelerated by massive black holes [this holds also for the ``stellar flare" scenario, the parameters of which have to be pushed to their extreme values in order to obtain the ``upper envelope" of the Asplund et al. (2006) observations]. This difficulty is confirmed by Evoli et al. (2008), who calculated pre-galactic $^6$Li production by $\alpha+\alpha$ reactions with a semi-analytical model for the evolution of the early Milky Way; they found maximum values lower by factors $>$10 (and plausible values lower by 3 orders of magnitude) than the values reported by Asplund et al. (2006). \begin{figure}[t!] \begin{center} \includegraphics[angle=-90,width=0.7\textwidth]{Prantzos_Li67_ratio.eps} \caption[]{Evolution of the $^6$Li/$^7$Li ratio as a function of [Fe/H] ({\it left}) and of time ({\it right}). Data are from Asplund et al. (2006, As06), Garcia-Perez et al. (2009, GP09) and Kawanomoto et al. (2009, K09). {\it Solid} curves correspond to a ``high" pre-galactic $^6$Li and {\it dotted} curves to a standard (low) pre-galactic $^6$Li. } \label{eps5} \end{center} \end{figure} \section{Evolution of Li and $^6$Li/$^7$Li} Since GCR can only produce a $^7$Li/$^6$Li ratio of $\sim$2, instead of the meteoritic (pre-solar) value of $\sim$12, another source of $^7$Li had to be found. In the two decades following the original MAR paper, four such sources were identified: three possible stellar sources and the hot early Universe of the Big Bang.
The latter has certainly operated, as testified by the observed Li ``plateau" in low metallicity halo stars; depending on the true primordial value (see Fig. 7), it may contribute from 8 to 20\% of the solar $^7$Li. Among the stellar sources, observational evidence exists only for AGB stars, where high Li abundances have been detected in some cases. But the corresponding model yields (from $^3$He+$^4$He burning at the bottom of the convective envelope) are highly uncertain, and this is also the case for the other two candidate sources, novae (from explosive H-burning) and core collapse SN (from $\nu$-induced nucleosynthesis); notice that both novae and AGB stars enter the Galactic scene with some time delay (the ``slow" $^7$Li component), contrary to SN and GCR. $^7$Li is thus the only isotope having three distinctly different types of sources: stellar, BBN and GCR. {\it Assuming} that the $\nu$-yields of $^7$Li are well established (through the corresponding $^{11}$B yields, see Sec. 6), one may try to estimate the evolution of the remaining ``slow" stellar contribution to $^7$Li, from the combined action of novae and AGB stars, i.e. by removing from the observed evolutionary curve of Li/H vs Fe/H the BBN, GCR and $\nu$ contributions. The result is displayed in Fig. 7. The ``slow" stellar component contributes 50-65\% of the solar $^7$Li (depending on whether a high or low primordial $^7$Li is adopted); similar numbers are found in the analysis of Matteucci (this volume). Finally, Fig. 8 displays the evolution of the $^6$Li/$^7$Li ratio, compared to data for the early halo (highly uncertain, see previous section) and for the nearby Galactic disk (along three different lines of sight). Theoretical predictions depend on the adopted pre-galactic $^6$Li/$^7$Li ratio, but a generic feature is a late rise of $^6$Li/$^7$Li, due to the late secondary production of $^6$Li from GCR.
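The isotopic bookkeeping behind these component fractions can be made explicit with a few lines of arithmetic. The sketch below uses only the round numbers quoted in the text; the assumption that essentially all pre-solar $^6$Li is GCR-made, and the pairing of the primordial and ``slow'' shares, are illustrative simplifications, not a chemical evolution model:

```python
# Illustrative solar 7Li budget from the round numbers quoted in the text.
# Assumption (for illustration only): all pre-solar 6Li is of GCR origin.
li7_over_li6_gcr = 2.0     # isotopic ratio produced by GCR
li7_over_li6_solar = 12.0  # meteoritic (pre-solar) isotopic ratio

# If all solar 6Li comes from GCR, the GCR share of solar 7Li follows directly:
f_gcr = li7_over_li6_gcr / li7_over_li6_solar          # ~17%

# Primordial (BBN) share: 8-20% of solar 7Li, for the "low" (Spite plateau)
# or "high" (WMAP+SBBN) primordial value, respectively.  The "slow" stellar
# component (novae + AGB) is quoted as 50-65%, the larger share going with
# the lower primordial value.  The remainder is the nu-process share:
f_nu_low_primordial = 1.0 - f_gcr - 0.08 - 0.65
f_nu_high_primordial = 1.0 - f_gcr - 0.20 - 0.50

print(f_gcr, f_nu_low_primordial, f_nu_high_primordial)
```

Under these round numbers the $\nu$-process remainder is roughly 10-13\% of the solar $^7$Li.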
\section{Introduction} \label{Introduction} Variability in ultra-cool dwarfs (spectral types $>$ M7; \citealt{Kirkpatrick1997}) is caused by large-scale atmospheric structures, such as spots or longitudinal bands \citep{Artigau2009,Radigan2014a,Apai2017}. As inhomogeneities rotate in and out of view, they change the object's observed flux on the time scale of the rotation period \citep{Tinney1999, Bailer-Jones2002}. The largest and most sensitive ultra-cool dwarf monitoring surveys (e.g., \citealt{Radigan2014a, Radigan2014b, Buenzli2014, Metchev2015}) have found that variability is common across L and T dwarfs. \citet{Metchev2015} estimate that $53\%^{+16\%}_{-18\%}$ of L3--L9.5 and $36\%^{+26\%}_{-17\%}$ of T0--T8 dwarfs are variable at $> 0.4\%$. The rotational periods inferred for L and T dwarfs range over at least an order of magnitude: from 1.4~h \citep{Clarke2008} to likely longer than 20~h \citep{Metchev2015}. Spectroscopic observations have shown that many ultra-cool dwarfs have relatively large projected rotational velocities ($v\sin i \ge 10$\,km\,s$^{-1}$; \citealt{Mohanty2003,Basri2000,Zapatero2006,Reiners2008,Reiners2010,Blake2010,Konopacky2012}), and in some cases rotate at $\sim$30\,\% of their break-up speed (e.g., \citealt{Konopacky2012}). Ultra-cool dwarfs with halo kinematics also exhibit rapid rotation \citep{Reiners2006}, indicating that they maintain relatively large rotational velocities during their entire lifetimes. In this paper we present the discovery of three ultra-cool dwarfs with the shortest known photometric---and likely rotational---periods. In Section~\ref{sec:photometry} we present our photometric monitoring with the Spitzer Space Telescope (Spitzer) and the discovery of the short periodicities. In Section~\ref{sec:spectra} we present moderate-resolution infrared spectroscopy to confirm the rapid rotation of each target.
In Section~\ref{sec:SpectraAnalysis} we fit photospheric models to the spectra to determine the objects' projected rotational velocities and physical parameters, and find the highest $v\sin i$ value yet reported for ultra-cool dwarfs. We discuss the objects' rapid spins and oblateness in Section~\ref{sec:Discussion}. Our findings are summarized in Section \ref{Conclusions}. \section{Spitzer Photometry, Variability, and Periods} \label{sec:photometry} The photometric observations were obtained as part of the GO 11174 (PI: S.\ Metchev) Spitzer Exploration Science Program, ``A Paradigm Shift in Substellar Classification: Understanding the Apparent Diversity of Substellar Atmospheres through Viewing Geometry.'' The program targeted 25 of the brightest known L3--T8 dwarfs to complement our earlier sample of 44 photometrically monitored L and T dwarfs \citep{Metchev2015} and to investigate viewing geometry effects on photometric variability and brown dwarf colors. A full description of the program will be presented in a later publication. We focus on three variables from the GO 11174 Spitzer program with photometric periods shorter than the shortest previously known: the 1.41 $\pm 0.01$ h period of 2MASS J22282889$-$4310262 \citep{Clarke2008,Buenzli2012,Metchev2015}. Our targets are: the L3.5 dwarf 2MASS J04070752$+$1546457 (\citealt{Reid2008}; herein 2MASS J0407+1546), the L8 dwarf 2MASS J12195156$+$3128497 (\citealt{Chiu2006}; herein 2MASS J1219+3128), and the T7 dwarf 2MASS J03480772$-$6022270 (\citealt{Burgasser2003}; herein 2MASS J0348$-$6022). \subsection{Warm Spitzer Observations} \label{sec:spitzer_observations} We observed the three objects in staring mode with Spitzer's Infrared Array Camera's (IRAC; \citealt{Werner2004}, \citealt{Fazio2004}) channels 1 (3.6 $\mu$m, [3.6]) and 2 (4.5 $\mu$m, [4.5]). The dates of the observations are given in Table~\ref{table:fastperiods}. 
The observing sequence was a 10 h staring observation in channel 1, followed immediately by a 10 h staring observation in channel 2. All exposures were 12 s long, taken in full-array readout mode. At the beginning of each staring sequence an additional 0.5 h were used for pointing calibration with the Pointing Calibration and Reference Sensor (PCRS). The PCRS peak-up procedure is intended to correct telescope pointing over long staring observations. We used nearby bright stars for peak-up, as none of our targets were sufficiently bright to perform the peak-up on-target. The Spitzer IRAC detector is subject to intrapixel sensitivity variations, known as the ``pixel phase effect'' \citep{Reach2005}. Precise photometry requires correcting for an object's positioning to sub-pixel precision. The pixel phase effect is well characterized in a $0.5\times0.5$ pixel ($0\farcs6\times0\farcs6$) ``sweet spot'' \citep{Mighell2008} near a corner of the IRAC array, and flux correction routines are available at the Spitzer Science Center IRAC High Precision Photometry website.\footnote[13]{\url{https://irachpp.spitzer.caltech.edu}} We sought to acquire our targets as closely as possible to the center of the IRAC sweet spot. We used observation epoch-dependent positional corrections for proper and parallactic motions derived from a 2MASS-AllWISE cross-correlation. However, we were not entirely successful. The average centroid position for each of our three targets was up to a pixel away from the center of the sweet spot: i.e., twice its half-width. We therefore used our own custom pixel phase correction code \citep{Heinze2013} developed for the Spitzer Cycle 8 ``Weather on Other Worlds'' program \citep{Metchev2015} and summarized in Section~\ref{sec:spitzer_variability}.
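The essence of such a pixel phase correction is a least-squares fit of a smooth function of centroid position to the measured fluxes. The sketch below assumes a two-dimensional quadratic basis (cf.\ Sec.~\ref{sec:spitzer_variability}) with entirely synthetic centroids and coefficients; it is an illustration of the idea, not the \citet{Heinze2013} code:

```python
import random

def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def quad_basis(x, y):
    """Two-dimensional quadratic basis in the sub-pixel centroid offsets."""
    return [1.0, x, y, x * x, y * y, x * y]

def fit_pixel_phase(xc, yc, flux):
    """Least-squares fit of flux = sum_k a_k * basis_k(x, y) (normal equations)."""
    rows = [quad_basis(x, y) for x, y in zip(xc, yc)]
    n = 6
    BtB = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    Btf = [sum(r[i] * f for r, f in zip(rows, flux)) for i in range(n)]
    return solve_linear(BtB, Btf)

# Synthetic frames: a quadratic sensitivity ripple across sub-pixel offsets.
random.seed(7)
xc = [random.uniform(-0.25, 0.25) for _ in range(400)]   # pixel offsets (assumed)
yc = [random.uniform(-0.25, 0.25) for _ in range(400)]
true_coeffs = [1.0, 0.01, -0.008, -0.05, -0.04, 0.02]    # assumed, noiseless
flux = [sum(a * b for a, b in zip(true_coeffs, quad_basis(x, y)))
        for x, y in zip(xc, yc)]

coeffs = fit_pixel_phase(xc, yc, flux)
# Dividing by the fitted model flattens the pixel-phase ripple.
corrected = [f / sum(a * b for a, b in zip(coeffs, quad_basis(x, y)))
             for f, x, y in zip(flux, xc, yc)]
```

In the noiseless case the fit recovers the input coefficients and the corrected fluxes are flat to numerical precision.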
Experiments with archival Spitzer GO~13067 data on TRAPPIST-1, which was acquired on the sweet spot, confirmed that for point sources at least as bright as TRAPPIST-1 (WISE $W1=10.07$~mag), our pixel phase correction approach is at least as accurate as the set of sweet spot pixel phase corrections on the IRAC High Precision Photometry website. \subsection{Photometry and Initial Variability Assessment} \label{sec:spitzer_variability} We conducted a two-stage photometric and variability assessment, using the Spitzer Basic Calibrated Data images. We first performed approximate [3.6]- and [4.5]-band photometry in 1.5 pixel-radius apertures with the {\sc IDL} Astronomy User's Library\footnote[14]{\url{https://idlastro.gsfc.nasa.gov}} task {\sc aper}. We applied the corresponding aperture correction from the IRAC Instrument Handbook, and a custom pixel phase correction derived as a two-dimensional quadratic function of the centroid position on the detector. \begin{figure*} \centering \includegraphics[trim={1.9cm 6.8cm 3cm 12.2cm}, clip,width=0.98\textwidth]{fig1_pvalvsMagbothCh.pdf} \caption{ Results from the initial periodogram-based variability assessment on the three L and T dwarfs (red points) in the two Spitzer bands, compared to 469 other stars (black points) in the IRAC field of view for our full Spitzer sample. The horizontal dashed lines mark the $p$-value thresholds below which we claim variability, with 95\% of the comparison stars above this line. We separately compute the 95\% threshold for each IRAC channel for the brighter half of comparison stars (log($p$-value) $ = -2.8$ and $-1.9$ in [3.6] and [4.5], respectively) and the fainter half of comparison stars (log($p$-value) $ = -3.5$ and $-2.2$ in [3.6] and [4.5], respectively). } \label{fig:FAPvsmag} \end{figure*} We identified variable targets by Lomb-Scargle periodogram analysis \citep{Scargle1982}, sampling periods between 0.1 h and the full 20 h duration of our Spitzer observations.
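For concreteness, this period search can be reproduced in a few lines. The pure-Python sketch below implements the classical Lomb-Scargle power and the $Me^{-P}$ $p$-value of \citet{Scargle1982} on a synthetic light curve; the cadence, amplitude, noise level, and period grid are illustrative assumptions, not our actual pipeline settings:

```python
import math, random

def ls_power(t, y, omega):
    """Classical normalized Lomb-Scargle power at angular frequency omega."""
    n = len(y)
    ybar = sum(y) / n
    var = sum((yi - ybar) ** 2 for yi in y) / (n - 1)
    # The offset tau makes the power independent of the choice of time origin.
    tau = math.atan2(sum(math.sin(2.0 * omega * ti) for ti in t),
                     sum(math.cos(2.0 * omega * ti) for ti in t)) / (2.0 * omega)
    c = sum((yi - ybar) * math.cos(omega * (ti - tau)) for ti, yi in zip(t, y))
    s = sum((yi - ybar) * math.sin(omega * (ti - tau)) for ti, yi in zip(t, y))
    cc = sum(math.cos(omega * (ti - tau)) ** 2 for ti in t)
    ss = sum(math.sin(omega * (ti - tau)) ** 2 for ti in t)
    return (c * c / cc + s * s / ss) / (2.0 * var)

# Synthetic light curve: 120 s cadence over 10 h, a 1.15 h sinusoid with a
# 0.3% semi-amplitude, plus 0.1% Gaussian noise (all values assumed).
random.seed(42)
period_true = 1.15  # hours
t = [i * 120.0 / 3600.0 for i in range(300)]
y = [1.0 + 0.003 * math.sin(2.0 * math.pi * ti / period_true)
     + random.gauss(0.0, 0.001) for ti in t]

periods = [0.5 + 0.005 * k for k in range(900)]          # 0.5-5.0 h grid
powers = [ls_power(t, y, 2.0 * math.pi / p) for p in periods]
peak_power = max(powers)
best_period = periods[powers.index(peak_power)]

# p-value of the highest peak for M independent periods (Scargle 1982):
p_value = len(periods) * math.exp(-peak_power)
print(best_period, peak_power, p_value)
```

For a strong sinusoidal signal the recovered period lands on the injected one to within the grid resolution, with a vanishingly small $p$-value.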
We use the $p$-value, a measure of the likelihood that any variations are caused by random noise, to determine the significance of the periodogram peaks. The $p$-value is $Me^{-P}$, where $P$ is the periodogram power of the highest peak, and $M$ is the number of independent periods considered \citep{Scargle1982, Press1992}. At a given amplitude, a sinusoidal signal yields the lowest $p$-value, making the periodogram ideally suited to detecting quasi-sinusoidal, rotation-induced photometric variations. We determined a threshold to identify variables by calculating the $p$-value from pixel phase-corrected light curves of 469 field stars in the IRAC field of view for our full Spitzer sample, with obvious variables (e.g., eclipsing binaries) rejected by visual inspection. We split the field stars into two equal-sized groups based on their magnitudes. From the full Spitzer sample, we selected as candidate variables those L and T dwarfs for which the $p$-value was below that of 95\% of field star $p$-values. The $p$-values of the current three L and T dwarfs and of the field stars are shown in Figure~\ref{fig:FAPvsmag}. The relevant thresholds are shown as dashed horizontal lines. The most significant periodogram peaks above the $p$-value threshold for our three targets are in the 1.1--1.2 hour range (Fig.~\ref{fig:periodogram}). In all three cases, significant periodicity is detected in only one of the two Spitzer IRAC channels: at [3.6] for the L dwarfs 2MASS~J0407+1546 and 2MASS~J1219+3128 and at [4.5] for the T dwarf 2MASS~J0348--6022. \begin{figure*} \centering \includegraphics[trim={0.8cm 9.3cm 0.8cm 4.1cm},clip,width=0.99\textwidth]{fig2_Periodograms.pdf} \caption{Lomb-Scargle periodogram power distributions of the light curves of our three targets for both Spitzer channels after the preliminary pixel phase correction (Sec.~\ref{sec:spitzer_variability}).
We use the 95th-percentile $p$-value thresholds determined from field stars (Fig.~\ref{fig:FAPvsmag}) to identify significant periodogram peaks. The relevant thresholds (dotted lines) at [3.6] and [4.5] are at periodogram powers of $P_{[3.6]}=10.7$ and $P_{[4.5]}=7.9$. } \label{fig:periodogram} \end{figure*} At this stage of our analysis, the applied pixel phase correction is not in the final form presented in Section~\ref{sec:spitzer_periods}. The preliminary periodicities are potentially affected by Spitzer's known pointing `wobble.' The telescope's boresight follows a small sawtooth quasi-periodic oscillation with a mostly sub-hour time scale: the result of heater cycling to maintain adequate battery temperature \citep{Grillmair2012,Grillmair2014}. The amplitude of the pointing oscillation, up to 0.4~pix, can be sufficiently high to impact photometric measurements because of the pixel phase effect. During 2015, when our observations were taken, the mean pointing oscillation period was 49 minutes, with an interquartile range of 43 minutes to 54 minutes \citep{Krick2018}. However, a small fraction of year 2015 observations in the Spitzer archive have pointing oscillation periods up to 80 minutes, similar to the 1.1--1.2 hour-long periods identified in our periodograms (Fig.~\ref{fig:periodogram}). We do not believe that Spitzer's pointing wobble is responsible for the detected 1.1--1.2 hour periodicities for three reasons. First, we expect roughly similar pixel phase-induced behavior of all point sources in our target fields. Therefore, by setting a global 95\% $p$-value threshold in our preliminary analysis, we select for variability beyond what may be incurred by the pointing wobble. Second, in Section~\ref{sec:spitzer_periods} we describe a more sophisticated photometric analysis that includes an astrophysical variability and a pointing oscillation model, and we clearly identify the wobble separately from the astrophysical periods.
Finally, in Section~\ref{sec:SpectraAnalysis} we confirm that the rapid rotations implied by such short periods are expressed as wide Doppler line broadening in moderate-dispersion spectra of our three science targets. \subsection{Simultaneous Fitting for Pixel Phase and Astrophysical Variability} \label{sec:spitzer_periods} Having identified candidate variables among the science targets with approximate photometry, we iterated our variability assessment with higher-precision photometry. We determined the optimal aperture for each object by seeking the lowest root-mean-square scatter in the measured fluxes. The optimal apertures in the [3.6]- and [4.5]-band data respectively were 1.4 and 2.1 pixels for 2MASS~J0348--6022, 1.4 and 2.0 pixels for 2MASS~J1219+3128, and 1.5 and 2.1 pixels for 2MASS~J0407+1546. We binned the photometry in groups of 10 consecutive measurements to lower the random noise. Our binning interval of 120 s still ensures fast enough sampling to retain sensitivity to the hour-long timescales of interest. We incorporated the initial period estimates from Section~\ref{sec:spitzer_variability} in an iterative least-squares method to simultaneously fit an astrophysical model (a truncated Fourier series) and a correction for the pixel phase effect in both channels \citep{Heinze2013}. We show the raw and corrected Spitzer light curves for our three targets in the left panel of Figure~\ref{fig:lightcurves}. \begin{figure*} \centering \includegraphics[trim={0.7cm 5.5cm 0.9cm 4.2cm},clip,width=0.95\textwidth]{fig3_Spitzerlightcurves.pdf} \caption{{\it Left:} Spitzer [3.6]- (blue) and [4.5]-band (red) light curves. Each target is shown in a separate panel, with the raw data on top. The bottom sequences show the final light curves after correcting for the pixel phase effect. All light curves are normalized to unity, and the raw data are offset by a constant for clarity.
A combined astrophysical and pixel phase model fit (Sec.~\ref{sec:spitzer_periods}) to the raw data is shown in black. The astrophysical model fits to the corrected data are shown in blue for [3.6] and red for [4.5]. The models are shown with a solid line over the channel that exhibits significant variability and with a dashed line over the other channel that exhibits no significant variability. {\it Right:}~Period-folded light curves in the channels with significant variability after the pixel phase correction. The mean flux level is represented as a horizontal dashed line at unity flux. The astrophysical model fit is shown as a solid line. } \label{fig:lightcurves} \end{figure*} The raw photometry in Figure~\ref{fig:lightcurves} shows the sawtooth pointing oscillation of Spitzer in the light curves of two of the three science targets. The effect is present mostly throughout the [3.6]- and [4.5]-band staring observation of 2MASS~J1219+3128 (middle-left panel of Fig.~\ref{fig:lightcurves}), with a sawtooth-like pattern that repeats 10 times over 10 hours at [3.6]. The corresponding 60 minute time scale of the sawtooth pattern is distinct from the 68 minute astrophysical period seen in the corrected [3.6]-band light curve. The latter half of the [3.6]-band observation of 2MASS~J0348--6022 also shows sawtooth variations on a 60 minute time scale. However, no astrophysical variability is detected in 2MASS~J0348--6022 at [3.6]. This T7 dwarf is variable only at [4.5], where no effect of the sawtooth pattern is seen. We further verified that there is no residual periodicity effect from the pointing wobble by confirming that there is no correlation between the flux and centroid position on the detector after correcting our photometry for pixel phase (Fig.~\ref{fig:centroids}). We computed Pearson correlation coefficients of $|r| \leq 0.04$ between flux and centroid position for each object and Spitzer channel. 
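The correlation check itself is elementary; the following minimal sketch computes the Pearson coefficient on synthetic, uncorrelated centroid offsets and fluxes (all values assumed), mirroring the $|r| \leq 0.04$ test applied to our corrected photometry:

```python
import math, random

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# After a successful pixel phase correction, flux should be uncorrelated
# with centroid position; emulate that case with synthetic (assumed) values.
random.seed(1)
centroid_x = [random.gauss(0.0, 0.1) for _ in range(500)]    # pixel offsets
flux = [1.0 + random.gauss(0.0, 0.002) for _ in range(500)]  # normalized flux
r = pearson_r(centroid_x, flux)
print(r)
```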
We conclude that Spitzer's pointing oscillation is not the cause of the variability we observe. We adopt the results from the simultaneous astrophysical and pixel phase model as the true periods and peak-to-trough amplitudes of our variables, rather than the preliminary results from the periodogram fitting shown in Figure~\ref{fig:periodogram}. In all three cases the final and the preliminary periods agree to within 1\%, and the periodogram power of the significant peaks increased for the final, corrected data. From our best-fit astrophysical model, the two L dwarfs require only a single Fourier term for an adequate light-curve fit. The T7 dwarf 2MASS~J0348--6022 requires a two-term Fourier fit, and so both significant peaks seen in the [4.5]-periodogram (Fig.~\ref{fig:periodogram}) are astrophysical in nature. However, the higher-frequency oscillation is less significant, and is a harmonic at half the period: 0.54 h versus 1.08 h. It may indicate a two-spot configuration on opposite hemispheres of the T dwarf. We use a Markov chain Monte Carlo (MCMC) analysis, as described in Section 3.4 of \citet{Heinze2013}, to determine the uncertainties on the periods and the amplitudes. We fit the [3.6] and [4.5] photometry simultaneously, by requiring the same period but different amplitudes for the two channels. Since in all cases only one of the channels shows significant variability, we set 2$\sigma$ upper limits on the amplitude ratios of the ``non-variable'' channels to the variable channels. The periods of the T7, L3.5, and L8 dwarfs range between 1.08 h and 1.23 h: faster than any measured before (see Section~\ref{sec:Discussion}). We show the phase-folded light curves in the variable channel for each target in the right panel of Figure~\ref{fig:lightcurves}. The object names, spectral types, magnitudes, variable channels, and photometric periods, peak-to-trough amplitudes, and amplitude ratios are listed in Table~\ref{table:fastperiods}. 
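The period-folded light curves of Figure~\ref{fig:lightcurves} are obtained by mapping each time stamp to a rotational phase and averaging the flux in phase bins. A minimal sketch, using a noiseless synthetic sinusoid with an assumed 1.08 h period and 1.5\% peak-to-trough amplitude (the bin count is arbitrary):

```python
import math

def phase_fold(t, y, period, n_bins=20):
    """Fold a light curve at `period` and average the flux in phase bins."""
    bins = [[] for _ in range(n_bins)]
    for ti, yi in zip(t, y):
        phase = (ti / period) % 1.0
        bins[min(int(phase * n_bins), n_bins - 1)].append(yi)
    phases = [(i + 0.5) / n_bins for i in range(n_bins)]
    # Assumes every bin is populated, which holds for this dense cadence.
    means = [sum(b) / len(b) for b in bins]
    return phases, means

# Synthetic light curve: 120 s cadence over 10 h, 1.08 h period,
# 1.5% peak-to-trough (0.75% semi-amplitude) sinusoid, noiseless for clarity.
period = 1.08  # hours
t = [i * 120.0 / 3600.0 for i in range(300)]
y = [1.0 + 0.0075 * math.sin(2.0 * math.pi * ti / period) for ti in t]

phases, folded = phase_fold(t, y, period)
peak_to_trough = max(folded) - min(folded)
print(peak_to_trough)  # close to 0.015, slightly smeared by the binning
```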
\\ \begin{figure} \centering \includegraphics[trim={0.7cm 5.5cm 4.5cm 4.cm},clip,width=0.75\textwidth]{fig4_fluxvscentroid.pdf} \caption{Pixel phase-corrected flux at [3.6] (blue) and [4.5] (red) as a function of centroid position in both the $x$- and $y$-directions. The centroids are measured relative to the average centroids across all exposures. The Pearson correlation coefficients ($r$) are given in each panel and we find that there is no correlation between the flux and centroid positions on the detector. We conclude that there is no residual periodic effect on the photometry after correcting for Spitzer's pointing wobble. } \label{fig:centroids} \end{figure} \begin{deluxetable}{l c c c c c c c c} \tabletypesize{\scriptsize} \tablecolumns{9} \tablewidth{0pt} \tablecaption{Spitzer Photometry and Results from Markov Chain Monte Carlo Analysis of Periods and Peak-to-trough Amplitudes \label{table:fastperiods}} \tablehead{ \colhead{Object}\vspace{-0.2cm} & \colhead{Spectral} & \colhead{Date} & \colhead{[3.6]} & \colhead{[4.5]} & \colhead{Variable}& \colhead{Period\tablenotemark{c}}& \colhead{Amplitude} & \colhead{Amplitude} \\ \colhead{}\vspace{-0.2cm} & \colhead{Type\tablenotemark{a}} & \colhead{Observed } & \colhead{(mag)} & \colhead{(mag)} & \colhead{Channel\tablenotemark{b}} & \colhead{(h)} & \colhead{in Variable} & \colhead{Ratio\tablenotemark{d}} \\ \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{Channel\tablenotemark{c} (\%)} & \colhead{} } \startdata 2MASS J04070752$+$1546457 & L3.5 & 2015 Apr 26 & $12.83 \pm 0.01$ & $12.91 \pm 0.01$ & [3.6] & $1.23~[1.22, 1.24]$ & $0.36~[0.24, 0.46]$ & $<$1.2\\ 2MASS J12195156$+$3128497 & L8 & 2015 Sep 16 & $13.40 \pm 0.02$ & $13.26 \pm 0.02$ & [3.6] & $1.14~[1.13, 1.17]$ & $0.55~[0.42, 0.69]$ & $<$0.53\\ 2MASS J03480772$-$6022270 & T7 & 2015 Apr 22 & $14.36 \pm 0.03$ & $12.86 \pm 0.02$ & [4.5] & $1.080~[1.075, 1.084]$ & $1.5~[1.4, 1.7]$ & $<$0.58\\ \enddata
\tablenotetext{a}{Spectral type references, in row order: \citet{Reid2008}, \citet{Chiu2006}, \citet{Burgasser2003}.} \tablenotetext{b}{Each target varies in only one of the two Spitzer IRAC channels.} \tablenotetext{c}{Square brackets denote the 2$\sigma$ confidence intervals on the periods and amplitudes determined from our MCMC analysis (Sec.~\ref{sec:spitzer_periods}).} \tablenotetext{d}{This is the 2$\sigma$ upper limit on the amplitude ratio between the two channels. The ratio is of the non-variable channel to the variable channel.} \end{deluxetable} \subsection{Discussion of Photometric Variability: Periods and Mechanisms} \label{sec:SpitzerDiscussion} Two of our three targets have been previously reported as potential variables. For 2MASS~J1219+3128 (L8), \citet{Buenzli2014} find a lower limit of 2\% on the variability amplitude in a 1.12--1.20 \micron\ subset of their 1.1--1.7 \micron\ HST/WFC3 spectra, over a 36 minute sequence of nine spectroscopic exposures. However, the variability is not significant over any other part of their 1.1--1.7 \micron\ spectra, and they classify the detection as tentative. For 2MASS~J0348$-$6022 (T7), \citet{Wilson2014} report a $J$-band amplitude of $2.4\%\pm0.5\%$ in a three hour long observation. However, a re-analysis of their NTT/SofI observations by \citet{Radigan2014b} shows that the reported variability is likely spurious, and related to residual detector and sky-background systematics. \citet{Radigan2014b} revised the $J$-band variability in the \citet{Wilson2014} observations to a $< 1.1\%\pm0.4\%$ upper limit. Similarly, a 1\% upper limit for 2MASS J0348$-$6022 is deduced from a prior six hour $J$-band monitoring observation by \citet{Clarke2008}, also with NTT/SofI. No variability has been previously reported for 2MASS~J0407+1546 (L3.5).
The small periodogram $p$-values and large periodogram powers (Figs.~\ref{fig:FAPvsmag} and \ref{fig:periodogram}) of our Spitzer observations confidently establish that all three L and T dwarfs exhibit periodic variability. Each of our three targets varies in only one of the two IRAC channels within the photometric precision limits. The two L dwarfs vary only at [3.6], whereas the T7 dwarf varies only at [4.5]. Such behavior is consistent with prior observations of infrared variability trends with spectral type. \citet{Metchev2015} found that five of their 19 variable L3--T8 dwarfs varied only at [3.6] (two L3s, two L4s, and a T2), and one (T7) dwarf varied only at [4.5]. Single-band [3.6] variations in an L dwarf have also been reported by \citet{Gizis2015}, while [4.5]-only variations are seen in Y dwarfs \citep{Cushing2016, leggett2016}. Wavelength-dependent amplitude differences are explained by the dominant gas absorption species in the atmosphere. In wavelength regions of strong molecular gas opacity, clouds reside below the photosphere and so cloud heterogeneities are obscured. Cloud structures are detectable only in relatively transparent spectral regions, away from dominant molecular bands \citep[e.g.,][]{Ackerman2001}. With CO being a dominant source of upper-atmosphere gas opacity in L dwarfs, cloud condensate-induced variability will be suppressed around the 4.5~$\mu$m fundamental CO band (i.e., in IRAC channel 2). Conversely, variability around the 3.3~$\mu$m CH$_4$ fundamental band (within IRAC channel 1) will be suppressed in T dwarfs. Alternative variability mechanisms that do not require clouds have also been proposed. Such scenarios do not imply that clouds may not exist at all in the atmospheres of brown dwarfs, just that they are not responsible, or not entirely responsible, for the observed variability. 
For example, some variable brown dwarfs show radio emission that may be best explained as auroral in nature (e.g., \citealt{Antonova2008,Hallinan2015,Kao2018}). \citet{Richey-Yowell2020} correlate such auroral signatures with the presence of H$\alpha$ emission. One of our three variables, the L3.5 dwarf 2MASS~J0407+1546, is a strong H$\alpha$ emitter \citep[equivalent width of 60~\AA;][]{Reid2008}. While \citet{Miles2017b} find no correlation between H$\alpha$ emission and large-amplitude ($\gtrsim 1\%$) variations (a result also confirmed by \citealt{Richey-Yowell2020}), the more subdued 0.36\% variation in 2MASS~J0407+1546 could well be magnetic in origin. \citet{Robinson2014} propose atmospheric temperature fluctuations as a potential cause of photometric variability. They show that thermal perturbations occurring deep in the atmosphere can cause surface brightness fluctuations at infrared wavelengths. \citet{Tremblin2015,Tremblin2020} show that fingering convection in a cloudless atmosphere can also result in variability. Ultimately, the observations that we present are not decisive of the variability mechanism, and our focus is instead on the short rotation periods. The periodic regularity seen in the light curves of our three targets (Fig.~\ref{fig:lightcurves}) argues for one (2MASS~J0407+1546, 2MASS~J1219+3128) or two (2MASS~J0348$-$6022) dominant photospheric spots. An alternative interpretation of these data is that we are seeing a repeating spot pattern extended along a band on a more slowly rotating object, e.g., as in the case for Jupiter \citep{dePater2016}. Additionally, \citet{Apai2017} show that the variability of infrared brightness in T dwarfs can be dominated by planetary-scale waves. They find that the combined variability effect of multiple sets of planetary waves or spots may place the periodogram peak at half the true period, or that double peaks may occur near the true rotation period due to differential rotation.
To the sensitivity of our data, none of our objects show the kind of complex light modulations seen in the \citet{Apai2017} T dwarfs. However, Jupiter-like repeated spot patterns remain a possibility. In Sections~\ref{sec:spectra} and \ref{sec:SpectraAnalysis} we use near-infrared spectroscopy to measure the projected rotation velocities $v\sin i$ of our targets and confirm that all three rotate rapidly. \section{Spectroscopic Observations} \label{sec:spectra} If our objects are truly rapidly rotating, then their spectroscopic line profiles will be significantly Doppler-broadened, while more slowly rotating objects with repeating spot patterns will not show much line broadening. Herein we report $R=6000 - 12,000$ near-infrared spectroscopy which we use to confirm the rapid rotations and in Section~\ref{sec:SpectraAnalysis} to estimate the objects' fundamental parameters. We present a previously unpublished spectrum of the T7 dwarf 2MASS J0348$-$6022 and a new observation of the L8 dwarf 2MASS J1219$+$3128 at a resolution of 6000 over 0.91--2.41 $\mu$m with the Folded-port InfraRed Echellette (FIRE; \citealp{Simcoe_etal2008, Simcoe_etal2013}) at the Magellan Baade telescope. We also observed the L3.5 dwarf 2MASS J0407$+$1546 at a resolution of 12,000 over 2.275--2.332 $\mu$m with the Gemini Near-InfraRed Spectrograph (GNIRS; \citealt{Elias2006}) at the Gemini North Observatory. The spectroscopic observations are summarized in Table~\ref{table:observations}. \begin{deluxetable}{l c c c c c c c c c} \tablecolumns{10} \tablewidth{0pt} \tablecaption{Magellan/FIRE and Gemini North/GNIRS Spectroscopic Observations. 
\label{table:observations}} \tablehead{ \colhead{Target}\vspace{-0.2cm} & \colhead{$K_s$} & \colhead{Date} & \colhead{Instrument} & \colhead{Resolution} & \colhead{Exposure}& \colhead{S/N}& \colhead{Target} & \colhead{Telluric}& \colhead{Telluric} \\ \colhead{}\vspace{-0.2cm} & \colhead{(mag)} & \colhead{Observed} & \colhead{} & \colhead{} & \colhead{Time} & \colhead{} & \colhead{Airmass} & \colhead{Standard} & \colhead{Standard}\\ \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{(minutes)} & \colhead{} & \colhead{} & \colhead{} & \colhead{Airmass} } \startdata 2MASS J03480772$-$6022270 & 15.60 & 2012 Jan 3 & Magellan/FIRE & 6000 & 30.3 & 36 & 1.25--1.27 & HD 28667 & 1.28\\ 2MASS J12195156$+$3128497 & 14.31 & 2017 Feb 16 & Magellan/FIRE & 6000 & 26.6 & 46 & 2.12--2.31 & HD 96781 & 1.98\\ 2MASS J04070752$+$1546457 & 13.56 & 2017 Oct 10 & Gemini North/GNIRS & 12,000 & 80.0 & 29 & 1.07--1.28 & HD 17971 & 1.03\\ \enddata \tablecomments{$K_s$ magnitudes are from 2MASS \citep{Cutri2003}. The signal-to-noise ratio is the median around the $K$-band peaks of the FIRE spectra of 2MASS~J0348$-$6022 (between 2.05 and 2.15 $\mu$m) and 2MASS~J1219$+$3128 (between 2.1 and 2.2~$\mu$m), and the median over the full range of the GNIRS spectrum of 2MASS~J0407$+$1546. } \end{deluxetable} \subsection{Magellan/FIRE Spectroscopy: 2MASS J0348$-$6022 (T7) and 2MASS J1219$+$3128 (L8)} For our FIRE observations we used the cross-dispersed echelle mode with the 0$\farcs$6 (3.3 pixel) slit aligned to the parallactic angle to obtain $R\approx6000$ spectra over 0.91--2.41 $\mu$m. We observed 2MASS J0348$-$6022 on 2012 January 3 (UT) under clear skies with 0$\farcs$7 $J$-band seeing and airmass of 1.25--1.27. We obtained two 909~s exposures, dithered along the slit. We observed the A0~V star HD~28667 ($V = 6.87$ mag) in four 1~s dithered exposures following the 2MASS J0348$-$6022 observations at a similar airmass (1.28).
We observed 2MASS J1219$+$3128 on 2017 February 16 (UT) under clear skies with 1$\farcs$2--1$\farcs$4 $J$-band seeing and airmass of 2.12--2.31. We obtained four 400~s exposures dithered pair-wise along the slit. We observed the A0~V star HD~96781 ($V = 10.2$ mag) in six 1~s dithered exposures at a similar airmass (1.96--2.05). For both sets of observations we obtained ThAr emission lamp spectra after each target. We obtained dome and sky flat-field observations at the beginning of each night for pixel response and slit illumination calibration. \begin{figure} \centering \includegraphics[trim={1.65cm 5cm 0.25cm 2.5cm},clip,width=0.95\textwidth]{fig5_2M0348_withtemplates.pdf} \caption{Magellan/Folded-port InfraRed Echellette (FIRE) $z$, $J$, $H$, and $K$-band spectra (from top to bottom) of 2MASS J0348$-$6022 (black), compared to template T7 spectra where available. The uncertainty on the FIRE spectrum is shown in gray along the bottom of each panel. The template spectrum of 2MASS J07271824+1710012 is from the Brown Dwarf Spectroscopic Survey \citep{BDSS2003}. Major molecular features \citep{BDSS2003, cushing_etal06, Bochanski2011} are indicated. } \label{fig:0348templates} \end{figure} \begin{figure} \centering \includegraphics[trim={1.65cm 5cm 0.25cm 2.5cm}, clip,width=0.95\textwidth]{fig6_2M1219_withtemplates.pdf} \caption{Magellan/FIRE $z$-, $J$-, $H$-, and $K$-band spectra (from top to bottom) of 2MASS J1219$+$3128 (black), compared to template L8 spectra where available. The uncertainty on the FIRE spectrum is shown in gray along the bottom of each panel. The template spectrum of 2MASS J16322911$+$1904407 is from the Brown Dwarf Spectroscopic Survey \citep{BDSS2003}. Major molecular features \citep{BDSS2003, cushing_etal06} are indicated. 
} \label{fig:1219templates} \end{figure} The FIRE data were reduced using the Interactive Data Language ({\sc IDL}) pipeline FIREHOSE v2 \citep{Gagnezenodo}, which is based on the MASE \citep{Bochanski_etal2009} and SpeXtool \citep{Vacca_etal2003, Cushing_etal2004} packages.\footnote[15]{\url{https://github.com/jgagneastro/FireHose_v2/}} Details for standard reduction of point-source data with FIREHOSE are described in \citet{Bochanski2011}. The flat-field images were used to trace the spectral orders and to derive pixel-response and illumination corrections, which were applied to the science frames. A combination of OH sky emission lines in the science frames and ThAr emission lamp lines was used to determine the wavelength solution along the center of each order and the order tilt along the spatial direction, so as to construct a two-dimensional vacuum wavelength map. The typical uncertainty of the wavelength solution was 0.20 pixels, corresponding to a precision of 3.0 km\,s$^{-1}$. The sky background in each frame was fit with a two-dimensional sky model constructed using basis splines \citep{Kelson2003}, which was then subtracted from the frame. One-dimensional spectra were optimally extracted \citep{Horne1986} in each order onto a heliocentric wavelength frame. Correction for telluric absorption and overall flux calibration was determined from the A0~V star spectra using a modified version of \texttt{xtellcor} from SpeXtool \citep{Vacca_etal2003, Cushing_etal2004}. Spectra from the individual frames of the FIRE data were combined for each target after relative flux normalization, and the individual orders were merged into one-dimensional spectra. The resulting reduced spectra are shown in Figure~\ref{fig:0348templates} for 2MASS J0348$-$6022 and in Figure~\ref{fig:1219templates} for 2MASS J1219$+$3128, along with comparison spectra, where data at similar resolution of other objects of the same spectral types were available from the literature.
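As a quick consistency check on the wavelength-calibration precisions quoted in this and the following subsection, note that if one resolution element ($c/R$ in velocity units) spans the slit width in pixels, the velocity scale per pixel is $c/(R\,n_{\rm pix})$. The following sketch is illustrative only; the resolutions, slit samplings, and pixel uncertainties are the values stated in the text:

```python
# Map a wavelength-solution uncertainty in pixels to a velocity precision,
# assuming one resolution element (c/R in velocity units) spans the slit
# width in pixels (3.3 pixels for FIRE, 3.0 pixels for GNIRS).
C_KMS = 299792.458  # speed of light, km/s

def velocity_precision(R, n_pix_per_res_elem, pixel_uncertainty):
    """Velocity uncertainty (km/s) for a wavelength-solution
    uncertainty expressed in pixels."""
    v_per_pixel = C_KMS / R / n_pix_per_res_elem  # km/s per pixel
    return pixel_uncertainty * v_per_pixel

fire = velocity_precision(6000, 3.3, 0.20)    # ~3.0 km/s, as quoted for FIRE
gnirs = velocity_precision(12000, 3.0, 0.25)  # ~2.1 km/s, consistent with the
                                              # 2.0 km/s quoted for GNIRS
```

Both values reproduce the quoted precisions to within rounding.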
\subsection{Gemini/GNIRS Spectroscopy: 2MASS J0407$+$1546 (L3.5)} Our GNIRS observations of the L3.5 dwarf 2MASS J0407$+$1546 took place on 2017 October 10 (UT). We followed the same procedure and instrument settings as used by \citet{allers_etal16} for radial and rotation velocity measurements of an L dwarf. We used the 111 lines mm$^{-1}$ grating with the 0$\farcs$15 (3.0~pixels) slit aligned to the parallactic angle to obtain $R\approx 12,000$ 2.27--2.33 $\mu$m spectra at an airmass of 1.07--1.28. We obtained eight 600~s exposures, dithered between two positions on the slit. We observed the A0~V star HD~17971 ($V = 8.78$ mag) for telluric absorption correction in eight 60~s dithered exposures at a similar airmass (1.03) using the same instrument setup. ThAr emission lamp observations were obtained immediately after the 2MASS~J0407$+$1546 observations. The 2MASS~J0407$+$1546 data were reduced using a combination of general and Gemini-specific IRAF\footnote[16]{Image Reduction and Analysis Facility, distributed by the National Optical Astronomy Observatories.} routines. Data were prepared, sky-subtracted, and flat-fielded using the Gemini tasks \texttt{nsprepare}, \texttt{nsreduce}, and \texttt{nsflat}. Individual spectra were extracted using \texttt{apall}. The ThAr lamp spectrum was extracted once for each of the science spectra, using the science extraction traces as references, and \texttt{identify} and \texttt{dispcor} were used to identify calibration lines and generate a wavelength solution for each science spectrum. A Legendre polynomial, typically of second order, was used with \texttt{identify}. The typical uncertainty in the wavelength calibration was 0.25~pixels, which corresponds to a precision of 2.0~km~s$^{-1}$. The individual wavelength-calibrated spectra were median-combined, and the standard deviation was adopted as the uncertainty.
The same reduction steps were repeated for the standard star; the science spectrum was then divided by the standard spectrum to remove telluric lines and multiplied by a $T_{\rm eff}=9600$~K blackbody. The resulting spectrum is shown in Figure~\ref{fig:0407templates}, along with a comparison spectrum of another L3.5 dwarf. \begin{figure} \centering \includegraphics[trim={1.65cm 16cm 0.25cm 3.25cm},clip,width=0.9\textwidth]{fig7_2M0407_withtemplates.pdf} \caption{Gemini/Gemini Near-InfraRed Spectrograph (GNIRS) $K$-band spectrum of 2MASS J0407$+$1546 (black), compared to a template L3.5 spectrum \citep[2MASS J00361617$+$1821104 from IRTF/SpeX;][]{Rayner2009}. The uncertainty on the GNIRS spectrum is shown in gray along the bottom of the panel. Major molecular features \citep{cushing_etal06} are indicated. } \label{fig:0407templates} \end{figure} \section{Confirmation of Rapid Rotations and Determination of Physical Parameters} \label{sec:SpectraAnalysis} We compared our spectra to the photosphere models of \citet[SM08]{Saumon2008}, \citet[BT-Settl]{Allard2012}, \citet[Morley]{Morley2012}, and \citet[Sonora]{Marley_Sonora_spectra} to determine the physical properties of our objects. The SM08 and Morley models are based on the \citet{Ackerman2001} cloud model. The model photospheres are provided on fixed grids of effective temperature ($T_{\rm eff}$) and surface gravity ($\log g$). The SM08 and Morley models also have a sedimentation efficiency ($f_{\rm sed}$) parameter. The $T_{\rm eff}$ grids are in steps of 100~K for all except the BT-Settl models, which are in 50~K steps, and the $\log g$ grids are in steps of 0.5~dex for all except the Sonora models, which are in steps of 0.25~dex. We also implemented grids for radial velocity (RV) and $v \sin i$ in steps of 0.1 km\,s$^{-1}$. For the RV we applied a simple Doppler shift to the wavelength of the models.
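For illustration only (our actual fitting uses the {\sc IDL} tools described in the text), these two model-preparation steps, the RV shift just described and the rotational broadening described next, can be sketched in Python, assuming the model spectrum has been resampled onto a grid with uniform velocity spacing:

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def doppler_shift(wavelength, rv_kms):
    """Shift model wavelengths by a radial velocity (non-relativistic)."""
    return wavelength * (1.0 + rv_kms / C_KMS)

def gray_rotation_kernel(vsini_kms, dv_kms, epsilon=0.6):
    """Classical rotational broadening kernel (Gray 1992) sampled on a
    uniform velocity grid of spacing dv_kms, with linear limb-darkening
    coefficient epsilon (a simplified analog of IDL's lsf_rotate)."""
    n = int(np.ceil(vsini_kms / dv_kms))
    x = np.arange(-n, n + 1) * dv_kms / vsini_kms   # Delta v / (v sin i)
    x2 = np.clip(1.0 - x**2, 0.0, None)
    c1 = 2.0 * (1.0 - epsilon) / (np.pi * (1.0 - epsilon / 3.0))
    c2 = epsilon / (2.0 * (1.0 - epsilon / 3.0))
    kernel = c1 * np.sqrt(x2) + c2 * x2
    return kernel / kernel.sum()                     # normalize to unit area

def broaden(flux, vsini_kms, dv_kms):
    """Convolve a model spectrum (on a uniform velocity grid, e.g. uniform
    in log-wavelength) with the rotation kernel."""
    return np.convolve(flux, gray_rotation_kernel(vsini_kms, dv_kms),
                       mode='same')
```

A spectrum sampled uniformly in $\log\lambda$ has constant velocity spacing per pixel, which is why the broadening can be applied as a single convolution.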
For $v \sin i$ we simulated rotational broadening by convolving the model spectra with the standard rotation kernel from \citet{Gray1992} using the \texttt{lsf\_rotate} task in the {\sc IDL} Astronomy User's Library.\footnote[17]{\url{https://idlastro.gsfc.nasa.gov/ftp/pro/astro/lsf\_rotate.pro}} We first verified the spectral types of our targets by overlaying the spectra of other well-studied L and T dwarfs (Figs.~\ref{fig:0348templates}--\ref{fig:0407templates}). For each object we restricted the effective temperature grids to 300~K above and below the expected values for each spectral type based on \citet{Filippazzo2015}. We did not restrict the $\log g$ and $f_{\rm sed}$ (where available) grids. The quality of the model fits to the full-band FIRE spectra is dominated by the low-order continuum, which is mostly affected by the effective temperature and, when using the SM08 and Morley models, the sedimentation efficiency. Instrument systematics may also affect the continuum shape of our Magellan/FIRE spectra, which cover a wide (0.91--2.41~$\mu$m) wavelength range. Broadband model fits thus preclude us from obtaining accurate information about the RV and $v\sin i$, neither of which depends on the continuum; both depend entirely on the positions and profiles of spectral lines. Although surface gravity does affect the continuum of model ultra-cool photospheres, its pressure-broadening effect is also well reflected in the theoretical line profiles. To extract accurate estimates of RV, $v\sin i$, and $\log g$, we therefore fit models to selected narrow-wavelength sub-regions of the FIRE spectra that are dominated by dense sequences of H$_2$O, FeH, or CH$_4$ absorption lines, as marked in Figures \ref{fig:0348templates} to \ref{fig:0407templates}. It is likely that in doing so we may still be affected by wavelength systematics among the theoretical line lists for the different molecules.
In addition, the different wavelength sub-regions probe different atmospheric depths and pressures. Hence, a wavelength region where flux originates deeper in the atmosphere could exhibit greater pressure broadening compared to a region where the flux originates higher up. We account for these effects by selecting several different narrow-wavelength regions from the Magellan/FIRE spectra (see Table~\ref{table:modelfits}) and, as much as possible, different molecular absorbers. Overall, we find that the values for RV, $v\sin i$, and $\log g$ obtained from the narrow-wavelength regions are more self-consistent, with uncertainties 1.5--3 times smaller, than those from the full bands. Our approach was first to fit each of the narrow regions to determine RV, $v \sin i$, and $\log g$ and then to fit the full bands ($z$: $0.91-1.11~\mu$m, $J$: $1.14-1.345~\mu$m, $H$: $1.48-1.79~\mu$m, and $K$: $1.96-2.35~\mu$m) to determine $T_{\rm eff}$ and $f_{\rm sed}$. We adopt the RV, $v\sin i$, and $\log g$ values determined from the narrow regions of the FIRE spectra of 2MASS~J0348--6022 (T7) and 2MASS~J1219+3128 (L8) as the fiducial values for these objects (Table~\ref{table:modelfits}). While the narrow-band fits also produce estimates for $T_{\rm eff}$, the full-band spectra are likely more sensitive to it. Then, in re-applying the models to the full-band spectra, we allow RV and $v \sin i$ to vary only within 2$\sigma$ of the values adopted from the narrow regions. That is, we constrain RV and $v \sin i$ within a small range, as they should not affect the determinations of $T_{\rm eff}$ and (where applicable) $f_{\rm sed}$. We still allow $\log g$ to be a free parameter in the full-band fitting because of its stronger effect on the continuum. This mirrors our approach for fitting the narrow regions, where we allow $T_{\rm eff}$ to be a free parameter even though we ultimately adopt the $T_{\rm eff}$ results from the full-band fits.
In this manner we probe the full parameter space for both $\log g$ and $T_{\rm eff}$ in each case, and obtain a more reliable estimate for each. Ultimately, the two sets of determinations for $T_{\rm eff}$ are consistent with each other (Tables~\ref{table:modelfits}--\ref{table:inclinations}). Estimates for $\log g$ tend to be 0.5--1.0~dex higher based on the line profile fits compared to the continuum fits in all models. We favor the former, as they are closer to the fundamental radiative transfer calculations for each species. The latter involve additional considerations of convection and relative chemical abundances. The spectral range of our Gemini/GNIRS observation of 2MASS~J0407+1546 is much narrower, so we consider it only in its entirety. In terms of specific steps to fit models to the data, we started by normalizing the data to unity. In the narrow regions we divided by the median flux value, and for the full-band data, we divided by a constant such that the peak flux was unity. We shifted the model for radial velocity, broadened it for $v \sin i$, and then smoothed the model to the resolution of the data. We also determined a flux zero-point to be added to the data and a multiplicative factor to scale the model that minimized the $\chi^2$ statistic ($\chi^2 = \sum_{i=1}^{N} [(O_i - M_i)^2 / \sigma_i^2]$, where $O_i$ is the observed flux, $M_i$ is the flux of the model, and $\sigma_i$ is the uncertainty of the data). We computed the offset, multiplicative factor, and $\chi^2$ statistic for every model on the grid of $T_{\rm eff}$, $\log g$, RV, $v \sin i$, and $f_{\rm sed}$ (where available). The probability of a given $\chi^2$ value is $p \propto e^{-\chi^2/2}$. We computed the probabilities for every model on our grid, normalized the sum of the $p$-values to unity, then marginalized over each of the parameters to obtain the probability distributions for each parameter.
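The grid evaluation and marginalization described above can be sketched schematically as follows. This is a toy two-parameter grid standing in for the full $T_{\rm eff}$, $\log g$, RV, $v \sin i$, and $f_{\rm sed}$ grid, with the flux zero-point and scale factor omitted for brevity:

```python
import numpy as np

def chi2(obs, model, sigma):
    """Chi-squared statistic between data and a model spectrum."""
    return np.sum((obs - model)**2 / sigma**2)

def fit_grid(obs, sigma, model_grid):
    """Evaluate chi^2 for every model on a parameter grid and marginalize
    to per-parameter probability distributions.
    model_grid: dict mapping (param1, param2) -> model flux array."""
    params = sorted(model_grid)
    chisq = np.array([chi2(obs, model_grid[p], sigma) for p in params])
    # p \propto exp(-chi^2 / 2); subtract the minimum for numerical stability
    prob = np.exp(-0.5 * (chisq - chisq.min()))
    prob /= prob.sum()                      # normalize the sum to unity
    # marginalize: sum probabilities over all values of the other parameter
    p1_prob = {v: sum(pr for p, pr in zip(params, prob) if p[0] == v)
               for v in sorted({p[0] for p in params})}
    p2_prob = {v: sum(pr for p, pr in zip(params, prob) if p[1] == v)
               for v in sorted({p[1] for p in params})}
    return p1_prob, p2_prob
```

With a finely sampled grid the marginalized distributions can then be summarized by their means and widths, as done for RV and $v \sin i$ below.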
The distributions for RV and $v \sin i$ were Gaussian in shape, and we report the mean values and 1$\sigma$ error bars in Table~\ref{table:modelfits}. For the other parameters the model grid spacing was coarse, and the probability of values other than the values presented in Tables~\ref{table:modelfits} and~\ref{table:modelfitsbroad} is negligible. We report the results for these parameters with error bars corresponding to the grid spacing. Tables~\ref{table:modelfits} and~\ref{table:modelfitsbroad} give the most probable values for each family of models in each wavelength region. We find that the best-fit parameter values can vary significantly between model families and between different wavelength regions, while giving comparable reduced $\chi^2$ statistics. The subtle differences between the model families stem from the molecular line lists and opacities used to compute them; understanding those differences is beyond the scope of this paper. To determine the final values of the parameters, we take a weighted average of the values from each model family and region fit. For the FIRE data, we determine the RV, $v \sin i$, and $\log g$ from our narrow-region fits (Table~\ref{table:modelfits}), and the $T_{\rm eff}$ and $f_{\rm sed}$ from our full-band fits (Table~\ref{table:modelfitsbroad}). For the GNIRS data we determine all parameters from the full wavelength coverage available. We assign the weights in the weighted average as $e^{-\chi^2_{\rm reduced}}$, where the $\chi^2_{\rm reduced}$ for each best-fit model is given in Tables~\ref{table:modelfits} and~\ref{table:modelfitsbroad}. We report the final values from the weighted averages in Table~\ref{table:inclinations}, with the unbiased weighted sample standard deviation as our uncertainties. We describe the outcomes of our model fitting and $\chi^2$ analysis in detail for each target in Sections~\ref{sec:0348models}--\ref{sec:0407models}.
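The weighted averaging can be made concrete with a short sketch. Using the nine narrow-region $v\sin i$ values and reduced $\chi^2$ statistics for 2MASS J0348$-$6022 from Table~\ref{table:modelfits}, weights of $e^{-\chi^2_{\rm reduced}}$ and the unbiased weighted sample standard deviation reproduce the adopted $103.5 \pm 7.4$~km\,s$^{-1}$:

```python
import math

def weighted_stats(values, chi2_reduced):
    """Weighted average with weights w_i = exp(-chi^2_reduced,i) and the
    unbiased weighted sample standard deviation (reliability weights)."""
    w = [math.exp(-c) for c in chi2_reduced]
    v1 = sum(w)                    # sum of weights
    v2 = sum(wi * wi for wi in w)  # sum of squared weights
    mean = sum(wi * x for wi, x in zip(w, values)) / v1
    var = sum(wi * (x - mean)**2
              for wi, x in zip(w, values)) / (v1 - v2 / v1)
    return mean, math.sqrt(var)

# v sin i (km/s) and chi^2_reduced from the T7 dwarf's narrow-region fits
vsini = [102.4, 94.9, 115.4, 105.7, 114.3, 103.2, 96.5, 110.7, 99.6]
chi2r = [1.2, 0.9, 2.0, 1.6, 2.1, 2.1, 1.6, 1.1, 2.0]
mean, std = weighted_stats(vsini, chi2r)  # ~103.5 +/- 7.4 km/s
```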
\begin{deluxetable}{l l c c c c c c c } \tabletypesize{\footnotesize} \tablecolumns{9} \tablewidth{0pt} \tablecaption{Best-fit Photospheric Model Parameters for the Narrow-wavelength Regions \label{table:modelfits} } \tablehead{ \colhead{Model} & \colhead{Region} & \colhead{Wavelength} & \colhead{$T_{\rm eff}$} & \colhead{$f_{\rm sed}$} & \colhead{$\log g$} & \colhead{$v\sin i$} & \colhead{RV} & \colhead{$\chi^2_{\rm reduced}$} \\ \colhead{} & \colhead{} & \colhead{($\mu$m)} & \colhead{(K)} & \colhead{} & \colhead{(dex)} & \colhead{(km\,s$^{-1}$)} & \colhead{(km\,s$^{-1}$)} & \colhead{} } \startdata \multicolumn{9}{c}{2MASS J03480772$-$6022270 (T7, FIRE data)}\\ BT-Settl & J narrow & 1.260 -- 1.300 & $950 \pm 25$ & \nodata & $5.50 \pm 0.25$ & $102.4 \pm 3.9$ & $-11.8 \pm 0.8$ & 1.2 \\ BT-Settl & H narrow & 1.520 -- 1.562 & $950 \pm 25$ & \nodata & $5.00 \pm 0.25$ & $94.9 \pm 1.5$ & $-14.1 \pm 0.9$ & 0.9 \\ BT-Settl & K narrow & 2.110 -- 2.190 & $700 \pm 25$ & \nodata & $4.50 \pm 0.25$ & $115.4 \pm 2.2$ & $-17.1 \pm 1.3$ & 2.0 \\ Morley & J narrow & 1.260 -- 1.300 & $900 \pm 50$ & 4 & $5.50 \pm 0.25$ & $ 105.7 \pm 1.8 $ & $-15.1 \pm 0.9$ & 1.6 \\ Morley & H narrow & 1.520 -- 1.562 & $1000 \pm 50$ & 5 & $5.50 \pm 0.25$ & $ 114.3 \pm 2.2 $ & $-18.0 \pm 1.0$ & 2.1 \\ Morley & K narrow & 2.110 -- 2.190 & $800 \pm 50$ & 5 & $5.00 \pm 0.25$ & $ 103.2 \pm 1.9 $ & $-16.5 \pm 1.3$ & 2.1 \\ Sonora & J narrow & 1.260 -- 1.300 & $1000 \pm 50$ & \nodata & $5.00 \pm 0.13$ & $ 96.5 \pm 1.5$ & $-12.6 \pm 0.8$ & 1.6 \\ Sonora & H narrow & 1.520 -- 1.562 & $1000 \pm 50$ & \nodata & $5.00 \pm 0.13$ & $ 110.7 \pm 1.5$ & $-14.2 \pm 0.9$ & 1.1 \\ Sonora & K narrow & 2.110 -- 2.190 & $800 \pm 50$ & \nodata & $4.75 \pm 0.13$ & $ 99.6 \pm 2.6$ & $-11.2 \pm 1.4$ & 2.0 \\ \multicolumn{2}{l}{Adopted values} & \nodata & \nodata & \nodata & $5.1 \pm 0.3$ & $103.5 \pm 7.4$ & $-14.1 \pm 3.7$ & \nodata \\ \hline \multicolumn{9}{c}{2MASS J12195156$+$3128497 (L8, FIRE data)}\\ BT-Settl & H narrow 1 & 
1.500 -- 1.550 & $1250 \pm 25$ & \nodata & $5.00 \pm 0.25$ & $77.4 \pm 2.6$ & $-17.2 \pm 1.6$ & 1.3 \\ BT-Settl & H narrow 2 & 1.720 -- 1.780 & $1150 \pm 25$ & \nodata & $4.00 \pm 0.25$ & $85.7 \pm 1.4$ & $-19.0 \pm 0.9$ & 2.6 \\ BT-Settl & K narrow & 1.970 -- 2.055 & $1400 \pm 25$ & \nodata & $5.00 \pm 0.25$ & $76.8 \pm 1.4$ & $-16.6 \pm 1.1$ & 2.7 \\ SM08 & H narrow 1 & 1.500 -- 1.550 & $1400 \pm 50$ & 4 & $5.50 \pm 0.25$ & $78.1 \pm 2.4$ & $-19.6 \pm 1.4$ & 1.4 \\ SM08 & H narrow 2 & 1.720 -- 1.780 & $1500 \pm 50$ & 4 & $5.00 \pm 0.25$ & $84.3 \pm 1.3$ & $-25.9 \pm 0.9$ & 2.6 \\ SM08 & K narrow & 1.970 -- 2.055 & $1400 \pm 50$ & 2 & $5.50 \pm 0.25$ & $77.1 \pm 1.5$ & $-20.0 \pm 1.1$ & 2.7 \\ \multicolumn{2}{l}{Adopted values} & \nodata & \nodata & \nodata & $5.1 \pm 0.5$ & $79.0 \pm 3.4$ & $-19.0 \pm 4.2$ & \nodata \\ \hline \multicolumn{9}{c}{2MASS J04070752$+$1546457 (L3.5, GNIRS data)}\\ BT-Settl & K & 2.275 -- 2.332 & $1700 \pm 25$ & \nodata & $5.00 \pm 0.25$ & $82.7 \pm 0.9$ & $43.7 \pm 0.9$ & 1.0 \\ SM08 & K & 2.275 -- 2.332 & $2000 \pm 50$ & 4 & $5.50 \pm 0.25$ & $ 82.4 \pm 0.9$ & $43.1 \pm 0.8$ & 1.1 \\ \multicolumn{2}{l}{Adopted values} & \nodata & $1840 \pm 210$ & 4 & $5.2 \pm 0.4$ & $82.6 \pm 0.2$ & $43.4 \pm 2.1$ & \nodata \\ \enddata \tablecomments{Best-fit photospheric model parameters for our three L and T dwarfs over the narrow regions within each FIRE band, and over the entirety of the GNIRS spectrum. We fit each wavelength region independently. We adopt the $\log{g}$, $v \sin i$, and RV values determined from the narrow wavelength regions of the FIRE spectra of the T7 and L8 dwarfs. The $T_{\rm eff}$ and $f_{\rm sed}$ estimates are adopted from the full-band fits (Table \ref{table:modelfitsbroad}), although we include the $T_{\rm eff}$ and $f_{\rm sed}$ findings from the narrow-region fitting for completeness. For the GNIRS data of the L3.5 dwarf we adopt all parameters from the wavelength region shown here.
The $f_{\rm sed}$ parameter is only applicable to the SM08 and Morley models. The adopted values are the weighted averages for each object, where the weights are $e^{-\chi^2_{\rm reduced}}$, and the uncertainties are the unbiased weighted sample standard deviations (as described in Section~\ref{sec:SpectraAnalysis}). The adopted RVs include systematic uncertainties of $\pm$3.0~km~s$^{-1}$ (for the T7 and L8 dwarfs) or $\pm$2.0~km~s$^{-1}$ (for the L3.5 dwarf) added in quadrature to account for the wavelength calibration uncertainties of the FIRE and GNIRS spectra, respectively (Sec.~\ref{sec:spectra}). } \end{deluxetable} \begin{deluxetable}{l l c c c c c c c} \tablecolumns{9} \tablewidth{0pt} \tablecaption{Best-fit Photospheric Model Parameters for the Full Bands \label{table:modelfitsbroad} } \tablehead{ \colhead{Model} & \colhead{Band} & \colhead{Wavelength} & \colhead{$T_{\rm eff}$} & \colhead{$f_{\rm sed}$} & \colhead{$\log{g}$\tablenotemark{a}} & \colhead{$v\sin{i}$\tablenotemark{a}} & \colhead{RV\tablenotemark{a}} & \colhead{$\chi^2_{\rm reduced}$} \\ \colhead{} & \colhead{} & \colhead{($\mu$m)} & \colhead{(K)} & \colhead{} & \colhead{(dex)} & \colhead{(km\,s$^{-1}$)} & \colhead{(km\,s$^{-1}$)} & \colhead{} } \startdata \multicolumn{9}{c}{2MASS J03480772$-$6022270 (T7, FIRE data)}\\ BT-Settl & J & 1.140 -- 1.345 & $900 \pm 25$ & \nodata & $5.0 \pm 0.25$ & 118.3 & -14.2 & 7.4 \\ BT-Settl & H & 1.480 -- 1.790 & $750 \pm 25$ & \nodata & $4.5 \pm 0.25$ & 118.3 & -11.3 & 11 \\ BT-Settl & K & 1.960 -- 2.350 & $700 \pm 25 $ & \nodata & $4.0 \pm 0.25$ & 107.9 & -13.7 & 2.8 \\ Morley & J & 1.140 -- 1.345 & $800 \pm 50 $ & 5 & $4.0 \pm 0.25$ & 118.3 & -15.1 & 10 \\ Morley & H & 1.480 -- 1.790 & $700 \pm 50 $ & 5 & $4.0 \pm 0.25$ & 118.3 & -10.0 & 28 \\ Morley & K & 1.960 -- 2.350 & $900 \pm 50 $ & 5 & $4.0 \pm 0.25$ & 107.2 & -19.5 & 2.5 \\ Sonora & J & 1.140 -- 1.345 & $900 \pm 50 $ & \nodata & $5.25 \pm 0.13$ & 94.0 & -16.8 & 4.2 \\ Sonora & H & 1.480 -- 1.790 & 
$850 \pm 50 $ & \nodata & $4.25 \pm 0.13$ & 118.3 & -21.5 & 2.0 \\ Sonora & K & 1.960 -- 2.350 & $1000 \pm 50 $ & \nodata & $4.50 \pm 0.13$ & 104.5 & -15.5 & 2.3 \\ \multicolumn{2}{l}{Adopted values} & \nodata & $880 \pm 110$ & 5 & \nodata & \nodata & \nodata & \nodata \\ \hline \multicolumn{9}{c}{2MASS J12195156$+$3128497 (L8, FIRE data)}\\ BT-Settl & J & 1.140 -- 1.345 & $1400 \pm 25$ & \nodata & $5.50 \pm 0.25$ & 72.2 & -24.5 & 2.3 \\ BT-Settl & H & 1.480 -- 1.790 & $1200 \pm 25$ & \nodata & $4.50 \pm 0.25$ & 85.8 & -18.9 & 2.0 \\ BT-Settl & K & 1.960 -- 2.350 & $1250 \pm 25$ & \nodata & $4.50 \pm 0.25$ & 85.8 & -17.8 & 2.1 \\ SM08 & J & 1.140 -- 1.345 & $1200 \pm 50$ & 3 & $5.50 \pm 0.25$ & 72.2 & -14.6 & 2.0 \\ SM08 & H & 1.480 -- 1.790 & $1500 \pm 50$ & 3 & $5.00 \pm 0.25$ & 83.2 & -23.2 & 1.9 \\ SM08 & K & 1.960 -- 2.350 & $1500 \pm 50$ & 3 & $4.50 \pm 0.25$ & 85.8 & -18.6 & 2.4 \\ \multicolumn{2}{l}{Adopted values} & \nodata & $1330 \pm 140$ & 3 & \nodata & \nodata & \nodata & \nodata \\ \enddata \tablecomments{Best-fit photospheric model parameters for the FIRE data of the T7 and L8 dwarfs, fit over each of the full $J$, $H$, and $K$ bands. We use these fits to inform our final $T_{\rm eff}$ and $f_{\rm sed}$ determinations. The adopted values are the weighted averages (as described in Section~\ref{sec:SpectraAnalysis}). The GNIRS data for the L3.5 dwarf are not shown here as they only cover a narrow-wavelength region, and so the fitting results for that object are shown in their entirety in Table~\ref{table:modelfits}. \tablenotetext{a}{ We adopt the $\log g$, RV and $v \sin i$ values from the narrow-wavelength regions (Table~\ref{table:modelfits}). To better determine $T_{\rm eff}$ and $f_{\rm sed}$, we still allow $\log g$ to vary unconstrained in the full-band fitting, while RV and $v\sin i$ are allowed to probe within $2\sigma$ of the adopted values. 
In some cases the best-fitting RVs and $v\sin i$ values correspond to the extremes of their allowed range, so we do not include uncertainties for them here. } } \end{deluxetable} \subsection{2MASS J0348$-$6022 (T7)} \label{sec:0348models} \begin{figure} \centering \includegraphics[trim={0.5cm 6.2cm 1.cm 2.5cm},clip,width=0.95\textwidth]{fig8_2M0348_withmodels_JHKnarrow.pdf} \caption{Three narrow regions within the $J$ (top left), $H$ (top right), and $K$ (bottom) bands of our $R\approx6000$ FIRE spectra of 2MASS J0348$-$6022 (T7), with the best-fitting BT-Settl, Morley, and Sonora models. The parameters of the best-fit models shown here are listed in Table~\ref{table:modelfits}. These regions were selected for their density of H$_2$O and CH$_4$ absorption lines \citep{BDSS2003, cushing_etal06, Bochanski2011} to allow precise RV, $v\sin i$, and $\log g$ determinations. The reduced $\chi^2$ statistic is shown for each model ($\chi^2_R$). The residuals are shown in the lower section of each panel, where the colors match those of the corresponding models. } \label{fig:0348spectrum_narrow} \end{figure} \begin{figure} \centering \includegraphics[trim={0.5cm 6.2cm 1.cm 2.5cm},clip,width=0.95\textwidth]{fig9_2M0348_withmodels_JHKbroad.pdf} \caption{FIRE $R\approx6000$ $J$- (top left), $H$- (top right), and $K$-band (bottom) spectra of 2MASS J0348$-$6022 (T7) with the best-fitting BT-Settl, Morley, and Sonora models for each band. The parameters of the best-fit models shown here are listed in Table~\ref{table:modelfitsbroad}. The reduced $\chi^2$ statistic is shown for each model. The residuals are shown in the lower sections of each panel, where the colors match those of the corresponding models. } \label{fig:0348spectrum_full} \end{figure} Based on its T7 spectral type, 2MASS J0348$-$6022 is expected to have an effective temperature $T_{\rm eff} \lesssim 1000$~K (e.g., \citealt{Stephens2009, Filippazzo2015}). 
Its photosphere should be governed by gas opacity, with a cloud layer buried deeply \citep[$f_{\rm sed}\geq3$;][]{Ackerman2001, Marley2002} within the atmosphere. Thus we expect the atmosphere of this object to be relatively clear and cloudless, making the cloud-free Sonora models appropriate for fitting this object's spectra. A cloudless atmosphere does not imply a completely homogeneous surface, and it is possible that one of the alternative mechanisms presented in Section~\ref{sec:SpitzerDiscussion} (e.g., temperature variations; \citealt{Robinson2014}) is responsible for the observed variability. We also compared this target to the BT-Settl and Morley models. The effective temperature grid of the available SM08 models does not extend to the low temperatures expected for a T7 spectral type. We selected a grid of parameters ranging from $T_{\rm eff} = 700$ to 1000~K, $\log g = 4.0$ to 5.5~dex in steps of 0.5~dex (0.25 for the Sonora models), and, for the Morley models, condensate sedimentation efficiencies from $f_{\rm sed} = 2$ to 5 in unit steps. We selected our $\log g$ grid based on the range in surface gravities predicted by the SM08 evolutionary models for brown dwarfs. We selected the RV and $v \sin i$ grids by first testing a wide, coarse grid to determine approximate RV and $v \sin i$ values. We then narrowed it down to between $-$5\,km\,s$^{-1}$ and $-$30\,km\,s$^{-1}$ for RV and between 75\,km\,s$^{-1}$ and 115\,km\,s$^{-1}$ for $v \sin i$, both in steps of 0.1\,km\,s$^{-1}$. We find that a wide range of parameters fits the $z$ band equally well, and it is therefore not diagnostic for our study. We exclude the $z$ band from our analysis, and only consider the $J$-, $H$-, and $K$-band spectra for this target.
To reliably determine $\log g$, RV, and $v \sin i$, we selected narrow regions dominated by molecular lines within each band: the 1.26--1.30 $\mu$m $J$-band region dominated by H$_2$O and CH$_4$ (Fig.~\ref{fig:0348spectrum_narrow}, top left), the 1.520--1.562 $\mu$m $H$-band region dominated by H$_2$O (Fig.~\ref{fig:0348spectrum_narrow}, top right), and the 2.11--2.19 $\mu$m $K$-band region containing primarily CH$_4$ lines (Fig.~\ref{fig:0348spectrum_narrow}, bottom). The best-fit photospheric models for the narrow wavelength regions are shown in Figure~\ref{fig:0348spectrum_narrow} and for the full bands in Figure~\ref{fig:0348spectrum_full}. The high $f_{\rm sed}$ values of the \citet{Morley2012} models in all of the full-band fits indicate an optically thin, relatively cloudless atmosphere, as expected for a late-T type brown dwarf. We adopt the weighted average and unbiased weighted sample standard deviation (Sec.~\ref{sec:SpectraAnalysis}) of the values in Table~\ref{table:modelfits} as our estimates for $\log g$, $v \sin i$, and RV. For $v \sin i$ in particular, we find a very high degree of rotational broadening: $v \sin i = 103.5 \pm 7.4$\,km\,s$^{-1}$. This is consistent with the short rotational period (Sec.~\ref{sec:SpitzerDiscussion}), and will be discussed further in Section~\ref{sec:Discussion}. The adopted parameters from the spectroscopic fitting are shown in Table~\ref{table:inclinations}. \subsection{2MASS J1219+3128 (L8)} \label{sec:1219models} \begin{figure} \centering \includegraphics[trim={0.5cm 6.2cm 1.cm 2.5cm},clip,width=0.95\textwidth]{fig10_2M1219_withmodels_HKnarrow.pdf} \caption{Three narrow regions within the $H$ (top left and right) and $K$ (bottom) bands of our $R\approx6000$ FIRE spectrum of 2MASS J1219$+$3128 (L8), with the best-fitting BT-Settl and SM08 models. These regions were selected for their density of H$_2$O absorption lines \citep{BDSS2003, cushing_etal06} to allow precise $v\sin i$, RV, and $\log g$ determinations.
The parameters of the models shown here are listed in Table~\ref{table:modelfits}. The reduced $\chi^2$ statistic is shown for each model. The residuals are shown in the lower sections of each panel, where the colors match those of the corresponding models. A strong telluric feature has been masked between 2.00 and 2.02 $\mu$m in the $K$ band. } \label{fig:1219spectrum_narrow} \end{figure} \begin{figure} \centering \includegraphics[trim={0.5cm 6.2cm 1.cm 2.5cm},clip,width=0.95\textwidth]{fig11_2M1219_withmodels_JHKbroad.pdf} \caption{FIRE $R\approx6000$ $J$- (top left), $H$- (top right), and $K$-band (bottom) spectra of 2MASS J1219$+$3128 (L8) with the best-fitting BT-Settl and SM08 models for the entire bands. The parameters of the models shown here are listed in Table~\ref{table:modelfitsbroad}. The reduced $\chi^2$ statistic is shown for each model. The residuals are shown in the lower sections of each panel, where the colors match those of the corresponding models. The best-fitting SM08 model has $T_{\rm eff}=1500$~K and $\log g=5.0$ and shows a methane feature at 1.665~$\mu$m. Despite the presence of this feature in the model and its absence in the data, the shown photospheric model offers the best overall fit to the $H$-band spectrum. The appearance of the methane feature in the photospheric model may suggest that this L8 dwarf is close to transitioning to a T-type atmosphere. A strong telluric feature has been masked between 2.00 and 2.02 $\mu$m in the $K$ band. } \label{fig:1219spectrum_full} \end{figure} Based on its L8 spectral type, we expect 2MASS J1219+3128 to have an effective temperature of $T_{\rm eff} \sim 1400$~K (e.g., \citealt{Stephens2009, Filippazzo2015}). The Morley models are not suitable for this target, as that model grid extends to a maximum of $T_{\rm eff}$ = 1300 K. The Sonora models are also not appropriate, as they are cloud-free, while late-L dwarfs are very dusty and are expected to have thick, patchy clouds.
Therefore, we instead adopt only the SM08 and BT-Settl models, as they cover sufficiently high temperatures for this spectral type and include treatments of dust. We selected a grid of parameters ranging from $T_{\rm eff}$ = 1100~K to 1700~K, $\log g$ = 4.0 to 5.5~dex, and for the SM08 models, condensate sedimentation efficiencies from $f_{\rm sed}$ = 1 to 4 in unit steps. We selected the RV and $v \sin i$ grids using the same method as before (Sec.~\ref{sec:0348models}), first testing a large, coarse grid to determine the approximate RV and $v \sin i$ values. The final grid was between $-$5\,km\,s$^{-1}$ and $-$30\,km\,s$^{-1}$ for RV and between 70\,km\,s$^{-1}$ and 110\,km\,s$^{-1}$ for $v \sin i$, both in steps of 0.1\,km\,s$^{-1}$. As for 2MASS~J0348--6022 (Sec.~\ref{sec:0348models}), we find that a wide range of parameters fits the $z$ band equally well. It is not diagnostic for our study, and we exclude the $z$ band from our analysis. The $J$-band data had a fairly low signal-to-noise ratio and no regions with clearly defined lines from which we could measure $v \sin i$. We instead selected two narrow regions in the $H$ band, along with a narrow region in the $K$ band. In the $H$ band we selected 1.50--1.55~$\mu$m and 1.72--1.78~$\mu$m, where the first region is dominated by H$_2$O, and the second is dominated by FeH, H$_2$O, and potentially some CH$_4$. The best lines in our data set for measuring $v \sin i$ in the $K$ band are H$_2$O lines between 1.970~$\mu$m and 2.055~$\mu$m, located on either side of a major telluric feature where our data have very low quality. We opted to mask out this region (2.00--2.02~$\mu$m) before fitting the models. We show the narrow-band fits in Figure~\ref{fig:1219spectrum_narrow}, and the full-band fits in Figure~\ref{fig:1219spectrum_full}. We also find a high degree of rotational broadening for 2MASS J1219+3128, with $v \sin i = 79.0 \pm 3.4$ km\,s$^{-1}$ (Table~\ref{table:modelfits}).
This is consistent with the short photometric period for this object. The adopted parameters from the spectroscopic fitting are shown in Table~\ref{table:inclinations}. \subsection{2MASS J0407+1546 (L3.5)} \label{sec:0407models} \begin{figure} \centering \includegraphics[trim={1.7cm 12.5cm 0.5cm 2.9cm},clip,width=0.9\textwidth]{fig12_2M0407_withmodels.pdf} \caption{GNIRS 2.275--2.332 $\mu$m $R\approx12,000$ spectrum of 2MASS J0407$+$1546, with the best-fitting SM08 and BT-Settl models overlaid. The parameters of the models shown here are listed in Table~\ref{table:modelfits}. The reduced $\chi^2$ statistic is shown for each model. Major molecular features \citep{cushing_etal06} are indicated. The residuals are shown in the lower panel, where the colors match those of the corresponding models. } \label{fig:0407spectrum} \end{figure} Based on its L3.5 spectral type, 2MASS J0407+1546 is expected to have an effective temperature of $\sim$1800~K \citep[e.g.,][]{Stephens2009, Filippazzo2015}, with a fairly cloudy atmosphere. We therefore select the SM08 and BT-Settl models. We do not include the Sonora models as they are cloudless, or the Morley models as they are for temperatures below 1300~K. We selected the following parameter grid for fitting: $T_{\rm eff}$ = 1500~K to 2100~K in steps of 100~K, $\log g$ = 4.0 dex to 5.5~dex in steps of 0.5~dex, and condensate sedimentation efficiency $f_{\rm sed} = 1$ to 4 in unit steps. We selected 30\,km\,s$^{-1}$ to 60\,km\,s$^{-1}$ for RV and 75\,km\,s$^{-1}$ to 100\,km\,s$^{-1}$ for $v \sin i$, both in steps of 0.1\,km\,s$^{-1}$. Our GNIRS observations cover the narrow region from 2.275 $\mu$m to 2.332~$\mu$m, which contains primarily H$_2$O and CO lines. We show the best-fitting models in Figure~\ref{fig:0407spectrum}. The narrower-wavelength coverage of our GNIRS data means we have limited effective temperature and sedimentation efficiency information compared to the full-band spectra of the two other objects. 
Although we cannot place high confidence in the results for these two parameters, we find that the effective temperature is consistent with expectations for an L3.5 dwarf, with $T_{\rm eff} = 1840 \pm 210$~K. We find a high degree of rotational broadening, at $v \sin i = 82.6 \pm 0.2$~km\,s$^{-1}$, consistent with the short rotational period. Table~\ref{table:inclinations} lists all of the physical parameters determined from the spectroscopic fits. \section{Discussion} \label{sec:Discussion} \subsection{The Three Most Rapidly Rotating Ultra-cool Dwarfs: Possibility for Auroral Emissions} \begin{figure} \centering \includegraphics[trim={1.1cm 6.1cm 1.75cm 3.3cm},clip,width=0.7\textwidth]{fig13_PeriodvsSpectraltype.pdf} \caption{ Rotation period as a function of spectral type for all 78 periodically variable L0--Y0 dwarfs known as of this writing. The full list of rotation periods is given in the Appendix in Table~\ref{table:allperiods} with references. The ``ultra-fast rotators'' of this work are shown in red. Black circles are periods from \citet{Metchev2015}, where solid circles denote variables with well-determined periodicities and open circles denote variables with uncertainties of $\geq 50\%$. An upward-facing triangle denotes the 50~h lower limit on the periodicity of 2MASS 1753$-$6559. Open diamonds denote the possible period harmonics of WISEPC J1122+2550 \citep{Route2016}, while an open square denotes the revised rotational period for this target from \citet{Williams2017}. The points for WISEPC J1122+2550 are offset slightly to the left to avoid ambiguity with another T6 dwarf. Other previously published periods are denoted by the ``$\times$'' symbol. } \label{fig:periodicity} \end{figure} The ${1.080}^{+0.004}_{-0.005}$~h, ${1.14}^{+0.03}_{-0.01}$~h, and ${1.23}^{+0.01}_{-0.01}$~h photometric periods of our three L and T dwarfs are shorter than any others yet observed (Fig.~\ref{fig:periodicity}; Table~\ref{table:allperiods}).
The previously reported shortest photometric period for an ultra-cool dwarf was $1.41\pm0.01$ h for the T6 2MASS J22282889$-$4310262 \citep{Clarke2008, Buenzli2012, Metchev2015}. \citet{Route2016} have reported an even shorter possible period, 0.288 h for the T6 dwarf WISEPC J112254.73+255021.5, based on radio flare observations from the Arecibo Observatory radio telescope. However, they indicate that the 0.288 h period may be a harmonic of a longer period, or that the flares may in fact not have been periodic. They base their period on five flaring events with the first and last separated by $\sim$240 days. When analyzing the data with the flares removed, they do not find any indication of variability. A later study by \citet{Williams2017} using the Very Large Array confirmed the same object to have variable polarized emission, but with a longer period of 1.93 h. They also observed the target photometrically in the $z$ band using Gemini/GMOS-N, and did not find any indication of variability. We therefore do not consider WISEPC J112254.73+255021.5 as an ultra-fast rotator, leaving the three objects reported here as the fastest known L or T dwarf rotators. We find a high degree of rotational line broadening for all three targets, consistent with the short photometric periods. At projected rotation velocities of 103.5 km\,s$^{-1}$ for 2MASS~J0348$-$6022 (T7), 79.0 km\,s$^{-1}$ for 2MASS~J1219$+$3128 (L8), and 82.6 km\,s$^{-1}$ for 2MASS~J0407$+$1546 (L3.5), these are among the most rapidly spinning ultra-cool dwarfs known to date. The comprehensive compilation of ultra-cool dwarf rotation measurements by \citet{Crossfield2014} lists only two other ultra-cool dwarfs with $v\sin i > 80$ km\,s$^{-1}$: HD~130948C (86 km\,s$^{-1}$, L4) and LP~349--45B (83 km\,s$^{-1}$, M9), both from \citet{Konopacky2012}.
The rapid projected rotational velocities of our targets confirm that the $\sim 1$ h periodicities of their light curves correspond to their true rotation periods, and that they are not more slowly rotating brown dwarfs with multiple spots at semi-regular longitudinal intervals, as seen on Jupiter \citep{dePater2016}, or beat patterns arising from planetary-scale waves \citep{Apai2017}. \begin{deluxetable}{l c c c} \tablecolumns{4} \tablewidth{0pt} \tablecaption{Physical Parameters for the Three L and T Dwarfs \label{table:inclinations} } \tablehead{ \colhead{Parameter} & \colhead{2MASS J0348-6022} & \colhead{2MASS J1219+3128} & \colhead{2MASS J0407+1546} } \startdata Spectral Type & T7 & L8 & L3.5 \\ $P_{\rm rot}$ (h) & $1.080^{+0.004}_{-0.005}$ & $1.14^{+0.03}_{-0.01}$ & $1.23^{+0.01}_{-0.01}$ \\ $T_{\rm eff}$ (K) & $880 \pm 110$ & $1330 \pm 140$ & $1840 \pm 210$ \\ $\log{g}$ & $5.1 \pm 0.3$ & $5.1 \pm 0.5$ & $5.2 \pm 0.4$ \\ $v \sin i$ (km\,s$^{-1}$) & $103.5 \pm 7.4$ & $79.0 \pm 3.4$ & $82.6 \pm 0.2$ \\ RV (km\,s$^{-1}$) & $-14.1 \pm 3.7$ & $-19.0 \pm 4.2$ & $43.4 \pm 2.1$ \\ $R$ ($R_\odot$) & $0.093^{+0.016}_{-0.010}$ & $0.100^{+0.027}_{-0.013}$ & $0.100^{+0.024}_{-0.008}$ \\ $M$ ($M_\odot$) & $0.041^{+0.021}_{-0.017}$ & $0.047^{+0.022}_{-0.025}$ & $0.064^{+0.009}_{-0.027}$ \\ Age (Gyr) & $3.5^{+11.5}_{-2.9}$ & $0.9^{+12.8}_{-0.8}$ & $0.8^{+11.2}_{-0.65}$ \\ $v_{\rm eq}$ (km\,s$^{-1}$) & $105^{+18}_{-12}$ & $107^{+29}_{-15}$ & $99^{+24}_{-8}$ \\ Inclination (\degree) & $81^{+9}_{-27} $ & $47^{+9}_{-17}$ & $57^{+7}_{-21}$\\ Oblateness & 0.08 & 0.08 & 0.05\\ \enddata \tablecomments{$P_{\rm rot}$ is determined from our photometric data. $T_{\rm eff}$, $\log{g}$, $v \sin i$, and RV are determined from our spectra by comparing to model photospheres. $R$, $M$, and the ages are determined by interpolation of the $\log g$-$T_{\rm eff}$ grid in the evolutionary models of SM08. 
The equatorial velocities ($v_{\rm eq}$) and spin-axis inclinations ($i$) are computed using the aforementioned values.\\ The evolutionary model radii listed are assumed to be the equatorial radii. With oblateness factors between 0.05 and 0.08, the difference between the polar and equatorial radii is 5\%--8\%. In reality, the evolutionary models (which ignore rotation) likely produce ``mean'' radii that are in between the equatorial and the polar radii. Hence, any difference between the ``mean'' and the equatorial radii would be 3\%--4\%. This would revise our estimates for the equatorial velocities up by $\sim$3\%--4\%, but such systematic offsets would still be $\sim$3 times smaller than the quoted uncertainties. The effect on the inclination estimates would be negligible. } \end{deluxetable} The $v \sin i$ measurements give lower limits on the true rotational velocities and may thus be used to constrain the spin-axis inclinations of our targets. We assume that these brown dwarfs rotate as rigid spheres so that the equatorial rotation velocity is $v = 2 \pi R / P$, where $P$ is the photometric rotation period, and $R$ is the radius. We estimate the radii, masses, and ages by comparing our findings for surface gravities and effective temperatures to the $\log g$-$T_{\rm eff}$ grid in the evolutionary models of SM08. Oblateness due to the rapid rotation (see Section~\ref{sec:breakup}; notes in Table~\ref{table:inclinations}) and the corresponding increase in equatorial radius produce a second-order effect, which we have ignored in these calculations. Combining the radii ($R$), the photometric periods ($P_{\rm rot}$), and the spectroscopically determined projected rotational velocities ($v \sin i$), we calculate the inclinations ($i$) and the equatorial rotation velocities ($v_{\rm eq}$) of our targets (Table~\ref{table:inclinations}). All three L and T dwarfs have equatorial velocities~$\gtrsim$100~km~s$^{-1}$, and 2MASS~J0348--6022 (T7) is seen near equator-on.
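The tabulated equatorial velocities and inclinations follow from $v_{\rm eq} = 2\pi R/P_{\rm rot}$ and $\sin i = (v\sin i)/v_{\rm eq}$. As a rough check using central values only (no error propagation, unlike the quoted uncertainties; the solar-radius conversion constant is an assumed value):

```python
import math

R_SUN_KM = 6.957e5   # nominal solar radius in km (assumed constant)

# Central values from the table of physical parameters:
# rotation period [h], radius [R_sun], projected rotation velocity [km/s]
targets = {
    "2MASS J0348-6022": (1.080, 0.093, 103.5),
    "2MASS J1219+3128": (1.14, 0.100, 79.0),
    "2MASS J0407+1546": (1.23, 0.100, 82.6),
}

results = {}
for name, (P_h, R_sun, vsini) in targets.items():
    v_eq = 2 * math.pi * R_sun * R_SUN_KM / (P_h * 3600.0)  # equatorial speed, km/s
    incl = math.degrees(math.asin(min(vsini / v_eq, 1.0)))  # spin-axis inclination
    results[name] = (v_eq, incl)
    print(f"{name}: v_eq = {v_eq:.0f} km/s, i = {incl:.0f} deg")
```

The central values approximately reproduce the tabulated $v_{\rm eq}$ and $i$; small differences are expected because the table entries come from propagating the full uncertainty distributions.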
All three objects are also likely substellar. At a spectral type of L3.5, 2MASS~J0407+1546 is the warmest and potentially most massive among our three L and T dwarfs. Its evolutionary model-dependent mass estimate is 0.037--0.073~$M_\odot$ (Table~\ref{table:inclinations}). Optical spectroscopy from \citet{Reid2008} does not reveal lithium absorption, so its mass must be $>0.060 M_\odot$ \citep[e.g.,][]{burrows_etal97}. This still leaves the estimated 0.060--0.073~$M_\odot$ mass of 2MASS~J0407+1546 mostly in the substellar ($<0.072 M_\odot$) domain. The L3.5 dwarf 2MASS~J0407+1546 is also known to be chromospherically active based on the strong (60~\AA~equivalent width) H$\alpha$ emission reported by \citet{Reid2008}. Its rapid rotation and H$\alpha$ emission may well indicate the presence of an aurora. Based on radio detections of three L and T dwarfs with short (1.5~h--2.2~h) rotation periods, \citet{Kao2018} conclude that rapid rotation is key to powering auroral emissions via the electron cyclotron maser instability \citep{Hallinan2007, Hallinan2015}. \citet{Kao2016, Pineda2017} and \citet{Richey-Yowell2020} further demonstrate that brown dwarf H$\alpha$ emission, radio luminosity, and radio aurorae are correlated. It is possible that all three of our rapidly rotating brown dwarfs have strong dipole fields that power auroral emission \citep{Kao2018}. In particular, the near equator-on view of the T7 dwarf 2MASS~J0348--6022 makes it an excellent candidate for seeking pulses of circularly polarized electron cyclotron maser emission. Such pulses have already been detected from other rapidly rotating ultra-cool dwarfs viewed near equator-on \citep{Berger2001,Hallinan2007}. \subsection{Proximity to Rotational Break-up and Oblateness} \label{sec:breakup} An upper limit on the spin rate of brown dwarfs exists from simple arguments of rotational stability.
\citet{Konopacky2012} estimate that their two most rapidly rotating ultra-cool dwarfs, HD~130948C ($v\sin i=86$~km\,s$^{-1}$) and LP~349--45B ($v\sin i=83$~km\,s$^{-1}$), rotate at approximately 30\% of break-up speed. The break-up periods for typical $>$1~Gyr-aged field brown dwarfs are in the tens of minutes. For example, a massive 0.07~$M_\odot$, 0.09~$R_\odot$ brown dwarf has $P_{\rm breakup} = 2\pi(R^3/GM)^{1/2} = 17$~min, while a low-mass 0.02~$M_\odot$, 0.12~$R_\odot$ brown dwarf has $P_{\rm breakup} = 49$~min. These are approximately consistent with extrapolations from the shortest known (5~h) brown dwarf rotation period at 5 Myr \citep{Scholz_etal2015}, assuming that the fastest rotators at $\sim$5~Myr remain the fastest when they contract and age. Using the evolutionary models of (non-accreting) brown dwarfs from \citet{Baraffe_etal2015}, conservation of angular momentum dictates periods in the 10--70~min range by 3~Gyr \citep[e.g.,][]{Schneider_etal2018}. The 65--74~min periods of our three fast rotators are at the long end of this range. They would be near break-up only if they all had very low masses and large radii, i.e., were young brown dwarfs with low surface gravities. This is highly unlikely, given the wide range in spectral types (L3.5--T7) of our three rapid rotators, and the fact that their moderate-to-high surface gravities ($\log g\gtrsim 5.0$; Table~\ref{table:inclinations}) point to $>$0.1~Gyr ages. Using our measured RVs and precise proper motions and parallaxes determined from the Hawaii Infrared Parallax Program \citep{Liu2016, Best2020} or from Spitzer \citep{Kirkpatrick2019}, the BANYAN $\Sigma$ young moving group tool \citep{Gagne2018} reports that the space motions of all three L and T dwarfs are $\geq$99\% consistent with field-dwarf kinematics.
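The break-up estimates quoted above follow directly from $P_{\rm breakup} = 2\pi(R^3/GM)^{1/2}$. A quick numerical check (cgs constants assumed):

```python
import math

G = 6.674e-8                          # gravitational constant, cgs
M_SUN, R_SUN = 1.989e33, 6.957e10     # solar mass [g] and radius [cm]

def p_breakup_min(m_sun, r_sun):
    """Rigid-sphere break-up period, 2*pi*sqrt(R^3 / (G M)), in minutes."""
    M, R = m_sun * M_SUN, r_sun * R_SUN
    return 2 * math.pi * math.sqrt(R**3 / (G * M)) / 60.0

high = p_breakup_min(0.07, 0.09)   # massive field brown dwarf
low = p_breakup_min(0.02, 0.12)    # low-mass brown dwarf
print(round(high), round(low))     # 17 and 49 minutes, as quoted in the text
```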
Only for 2MASS~J0407+1546 (L3.5) is there a 1\% chance of membership in the 40--50~Myr Argus association \citep{zuckerman19}, and \citet{Gagne2015} independently discuss that this object may either be $\sim$200~Myr old or have peculiar metallicity, based on weaker FeH absorption and slightly narrower alkali lines. Thus, 2MASS~J0407+1546 may indeed be moderately young, even if it is not a member of any of the known young stellar moving groups. To assess the proximity to the break-up spin velocity, we consider the effect of the centrifugal acceleration on surface gravity. Rapid rotation decreases the surface gravity near the equator, and may make the object appear younger. We can determine the surface gravity decrement due to the centrifugal acceleration using the inferred radii and equatorial velocities (Table~\ref{table:inclinations}). For our potentially fastest and largest rotator, the L8 dwarf 2MASS J1219+3128, the centrifugal acceleration is $a_c = v^2 / R = 1.6 \times 10^4$ cm s$^{-2}$ ($\log{a_c} = 4.2$), where $v_{\rm eq}=107$ km\,s$^{-1}$ and $R = 0.100 R_{\odot}$. The centrifugal acceleration thus reduces the surface gravity at the equator by about 13\%, when compared to the $\log g = 5.1 \pm 0.5$ surface gravity inferred from the photospheric model fitting (Table~\ref{table:inclinations}). While this does indicate that the rotation speed amounts to a significant fraction (35\%) of the break-up speed, we note that this has only a minor effect on our ability to assess the surface gravity spectroscopically. The rotational stability limit for brown dwarfs may not necessarily be set by the centrifugal levitation argument above. The stability limit for the oblateness $f$ (fractional difference between polar and equatorial radii) of axisymmetric rotating polytropes for brown dwarf-like structures ($n\sim 1$ to 1.5) is about 0.4 \citep{James1964}.
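The centrifugal correction above can be reproduced with the central values for 2MASS J1219+3128 (cgs constants assumed):

```python
import math

R_SUN = 6.957e10                 # solar radius, cm
v_eq = 107e5                     # 107 km/s in cm/s (2MASS J1219+3128)
R = 0.100 * R_SUN

a_c = v_eq**2 / R                # centrifugal acceleration at the equator
g = 10**5.1                      # cm/s^2, from the spectroscopic log g
frac = a_c / g                   # fractional reduction of equatorial gravity
print(f"a_c = {a_c:.2g} cm/s^2 (log a_c = {math.log10(a_c):.1f}); "
      f"equatorial gravity reduced by {100 * frac:.0f}%")
```

This recovers the $\log a_c \approx 4.2$ and $\approx$13\% reduction quoted in the text, i.e., a $\lesssim$0.1~dex effect on $\log g$.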
The Darwin-Radau relationship (e.g., \citealt{Barnes2003}) connects the oblateness, mass, radius, rotation, and moment of inertia for objects with smoothly varying interiors. Using the central values from Table~\ref{table:inclinations}, we compute oblateness factors of 0.08, 0.08, and 0.05 for the most (2MASS~J0348--6022, 2MASS~J1219+3128) and least (2MASS~J0407$+$1546) oblate objects. This places the spin rates of both 2MASS~J0348--6022 and 2MASS~J1219+3128 at about 45\% of their rotational stability limits: closer to instability than indicated by the rigid-body rotation break-up velocity estimates. For comparison, Saturn, the most oblate planet in the solar system, has an oblateness of 0.1. The brown dwarfs have surface gravities about 100 times greater than Saturn's but rotation rates 10 times faster. Since oblateness scales as $\Omega^2/g$ (where $\Omega$ is the rotation rate), it is not surprising that the oblateness of these objects is comparable to that of Saturn. Finally, the preceding discussion ignores the effect of any magnetic dynamo from the metallic hydrogen interior, which may be an important contributor to the energy balance in such rapid rotators, and may further limit the maximum spin velocity. Thus, these three objects may be even closer to instability than indicated by estimates that ignore magnetic fields. From an observational standpoint, the three rapid rotators delineate a clear lower boundary to the envelope of all 78 L-, T-, and Y-dwarf rotation periods measured to date (Fig.~\ref{fig:periodicity}). This limit holds over a broad range of spectral types, for objects that presumably have different ages. Hence, $\sim$1~h may be close to a physical lower limit to the spin period of field-aged Jupiter-sized brown dwarfs. Because of their significant oblateness, the three rapid rotators are potentially good targets for searches for polarized thermal emission (e.g., \citealt{Marley2011, deKok2011, Stolker2017}).
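The order-of-magnitude comparison with Saturn can be checked numerically. The Saturn values below (equatorial gravity, rotation period, radius) are assumed textbook numbers, and since the radii are comparable, the dimensionless distortion $\Omega^2 R/g$ reduces to the $\Omega^2/g$ scaling quoted in the text:

```python
import math

def q_param(period_h, radius_cm, g_cgs):
    """Dimensionless rotational distortion parameter q = Omega^2 R / g."""
    omega = 2.0 * math.pi / (period_h * 3600.0)   # rotation rate, rad/s
    return omega**2 * radius_cm / g_cgs

q_saturn = q_param(10.6, 6.03e9, 1044.0)          # assumed Saturn values (cgs)
q_bd = q_param(1.1, 0.100 * 6.957e10, 10**5.1)    # representative target values

print(q_saturn, q_bd)   # both ~0.15: comparable rotational distortion
```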
Several surveys have been successful in detecting polarized thermal emission from brown dwarfs (e.g., \citealt{Menard2002, Zapatero2005, Miles2013, Miles2017a, Millar-Blanchaer2020}), which could be attributed to inhomogeneous cloud cover or oblateness. Intriguingly, \citet{Miles2013} find that ultra-cool dwarfs with the fastest rotation ($v \sin i \geq$~60~km~s$^{-1}$) are more likely to exhibit linear polarization, and to a larger degree, than slower rotators. \section{Conclusions} \label{Conclusions} We present a T7, an L8, and an L3.5 dwarf with the shortest photometric periods measured to date: 1.08~h, 1.14~h, and 1.23~h, respectively. We confirm these extremely short rotation periods with moderate-dispersion spectroscopy and comparisons to Doppler-broadened model photospheres. The inferred $v \sin i$ value of the T7 dwarf 2MASS~J0348--6022 is the highest known to date for an ultra-cool dwarf. Combining the projected rotation velocities of our targets with their photometric periods and photospheric model-dependent radii, we determine their equatorial velocities. All three L and T dwarfs spin at $\gtrsim$100~km~s$^{-1}$ at their equators, and are the most rapidly spinning field ultra-cool dwarfs known to date. As such, they are excellent candidates for seeking auroral radio emission, which has been linked to rapid rotation in ultra-cool dwarfs. We consider the role of the centrifugal acceleration on surface gravity, and find that, while the effect can be significant, at $\lesssim$0.1~dex in surface gravity it can be difficult to discern with current photospheric models. We find that the objects have oblateness factors between 5\% and 8\%, which ranks them among the best targets for seeking net optical or infrared polarization.
Given that the three rapid rotators presented in this paper appear to lie near a short-period limit of approximately 1~h across all brown dwarf spectral types, we consider it unlikely that rotation periods much shorter than 1~h exist for brown dwarfs. \acknowledgments { We would like to thank the anonymous referee for their considerate and constructive comments that helped us improve this paper. Support for this work was provided by NASA through an award issued by JPL/Caltech (RSA \#1533692), by an NSERC Discovery Grant and the NSERC Canada Research Chairs program, by the Canada Space Agency (grant \#18FAWESC13), and by an Ontario Graduate Scholarship. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. This paper includes data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile. IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. } \facilities{Spitzer (IRAC), Magellan:Baade (FIRE), Gemini:Gillett (GNIRS)} \software{PyRAF \citep{pyraf}, FIREHOSE v2 \citep{Gagnezenodo}, BANYAN $\Sigma$ \citep{Gagne2018}}
\section*{Introduction} \bigskip The behaviour of the classical cardinal invariants in the Cicho\'n diagram is very well described. See e.g.~the monograph \cite{BJ} for definitions and details. We also follow the terminology of \cite{BJ} throughout the paper. The invariant we are interested in is \emph{$\textrm{cov}(\iN)$}, that is the least cardinal $\kappa$ for which it is possible to cover $\RR$ by $\kappa$ many nullsets (sets of Lebesgue measure zero), and also some variants of $\textrm{cov}(\iN)$. There are two natural ways to modify this definition. (See \cite{BJ} Chapter 2.6 and 2.7.) First, \emph{$\textrm{cov}^*(\iN)$} is the least cardinal $\kappa$ for which it is possible to cover $\RR$ by $\kappa$ many translates of some nullset. In other words, $\textrm{cov}^*(\iN) = \min\{|A| \ | \ A\subset\RR, \exists N \in \iN, A+N=\RR \}$, where $|A|$ is the cardinality of $A$, $\iN$ is the $\sigma$-ideal of nullsets and $A+N=\{a+n\ |\ a\in A, n\in N\}$. The other possible modification is \emph{$\textrm{cov}(c\iN)$}, that is the least cardinal $\kappa$ for which it is possible to cover $\RR$ by $\kappa$ many compact nullsets. (At this point we depart from the terminology of \cite{BJ} as this notion is denoted by $\textrm{cov}(\iE)$ there.) It can be found in these two chapters of this monograph that both $\textrm{cov}^*(\iN)<2^\omega$ and $\textrm{cov}(c\iN)<2^\omega$ are consistent with $ZFC$. G. Gruenhage posed the natural question whether \emph{$\textrm{cov}^*(c\iN)<2^\omega$} is also consistent, that is, whether we can consistently cover $\RR$ by less than continuum many translates of a compact nullset. The main goal of this paper is to answer this question in the affirmative via an answer (in $ZFC$) to a question of U. B. Darji and T. Keleti that is also interesting in its own right. 
We remark here that under $CH$ (the Continuum Hypothesis, or more generally under $\textrm{cov}(\iN)=2^\omega$) the real line obviously cannot be covered by less than $2^\omega$ many nullsets. Therefore it is consistent that the type of covering we are looking for does not exist. So the interesting case is when the consistent inequality $\textrm{cov}^*(\iN)<2^\omega$ holds. The nullset in this statement can obviously be chosen to be $G_\delta$. So the content of Gruenhage's question actually is whether this can be an $F_\sigma$ or closed or compact nullset. We formulate the strongest version. \bq (Gruenhage) Is it consistent that there exists a compact set $C\subset \RR$ of Lebesgue measure zero and $A\subset \RR$ of cardinality less than $2^\omega$ such that $C+A=\RR$? \eq For example Gruenhage showed that no such covering is possible if $C$ is the usual ternary Cantor set (see \cite{DK} and for another motivation of this question see \cite{GL}). Working on this question Darji and Keleti \cite{DK} introduced the following notion. \bd (Darji - Keleti) Let $C\subset\RR$ be arbitrary. A set $P\subset\RR$ is called a \emph{witness for} $C$ if $P$ is perfect and for every translate $C+x$ of $C$ we have that $(C+x)\cap P$ is countable. \ed Obviously, if there is a witness $P$ for $C$ then less than $2^\om$ many translates of $C$ cannot cover $P$, so they cannot cover $\RR$. Motivated by a question of D. Mauldin, who asked what can be said if $C$ is of Hausdorff dimension strictly less than 1, Darji and Keleti proved the following. \bt (Darji - Keleti) If $C\subset\RR$ is a compact set of packing dimension $\dim_p(C)<1$ then there is a witness for $C$, and consequently less than $2^\om$ translates of $C$ cannot cover $\RR$. \et They posed the following question, an affirmative answer to which would also answer the original question of Gruenhage in the negative. \bq (Darji - Keleti) Is there a witness for every compact set $C\subset\RR$ of Lebesgue measure zero? 
\eq We will answer this question in the negative, which still leaves the original question of Gruenhage open. Then we will show that using the same ideas it is also possible to give an affirmative answer to Gruenhage's question. \section{Answer to the question of Darji and Keleti} The following set is fairly well-known in geometric measure theory, as it is probably the most natural example of a compact set of measure zero but of Hausdorff and packing dimension 1. It was investigated, for example, by Erd\H os and Kakutani \cite{EK}. \bd Denote \[ C_0 = \left\{ \left. \sum_{n=2}^{\infty} \frac{d_n}{n!} \ \right| \ d_n \in\{0,1,\dots, n-2\}\ \forall n \right\}. \] \ed Think of $d_n$ as digits with ``increasing base''; then all but countably many $x\in[0,1]$ have a unique expansion \[ x=\sum_{n=2}^\infty \frac{x_n}{n!}, \] where $x_n\in \{0,1,\dots, n-1\}$ for every $n=2,3,\dots$. The set of real numbers with prescribed first $n$ digits is a closed interval. Let us call these \emph{basic intervals of level $n$} and denote this collection by $\iB_n$. Let $\iB=\cup_{n=0}^\infty \iB_n$ be the set of all \emph{basic intervals}. So $\iB$ forms a tree under inclusion, the $n$th level of which is $\iB_n$, which consists of $n!$ nonoverlapping intervals of length $\frac{1}{n!}$. Using the above expansion it is easy to see that $C_0\subset\RR$ is a compact set of Lebesgue measure zero. \bigskip The following theorem answers the question of Darji and Keleti. \bt\lab{intersects} For every perfect set $P\subset\RR$ there exists a translate $C_0+x$ of the compact nullset $C_0$ such that $(C_0+x)\cap P$ is uncountable. \et \bp We will show that there exists a perfect set $Q\subset P$ and $y\in \RR$ such that $Q+y\subset C_0$. This is clearly sufficient, as then for $x=-y$ we get $Q\subset (C_0+x)\cap P$, so this intersection is uncountable. First we will construct $Q$ via a dyadic tree of basic intervals, then we will construct the ``digits'' of $y$.
By translating $P$ if necessary we can assume that $P$ intersects $(0,\frac{1}{5!})$. Instead of $P$ we may as well work with any perfect subset of it, for if we find the set $Q$ inside this subset then this $Q$ also works for $P$. So we may find a perfect subset of $P$ in $[0,\frac{1}{5!}]$ and therefore we can assume that $P$ itself is inside $[0,\frac{1}{5!}]$. Moreover, as $P$ is uncountable and the endpoints of the basic intervals form a countable set, we can find a perfect subset of $P$ that is disjoint from the set of endpoints (we used here twice the well known fact that every uncountable Borel set contains a perfect set). Therefore we can assume that $P$ itself is disjoint from the endpoints. Now we recursively pick an increasing sequence of levels $(l_k)_{k=0}^\infty$ and for every $k$ choose a set $\iI_{l_k}\subset\iB_{l_k}$ of size $2^k$ such that \begin{enumerate} \item for each $I\in \iI_{l_k}$ there are exactly two intervals in $\iI_{l_{k+1}}$ (at level $l_{k+1}$) that are contained in $I$, the so called \emph{successors} of $I$ \item\lab{intersect} for each $I\in \iI_{l_k}$ we have $I\cap P \neq \emptyset$ \item\lab{far} $l_k \geq 2^{k+2}+1$. \end{enumerate} The recursion is carried out as follows. Fix $p_0\in P$. Let $l_0=5$, and at level 5 we pick the (unique) basic interval $I$ containing $p_0$ in its interior. Let $\iI_{l_0}=\iI_5=\{I\}$. The recursion step is as follows. As $P$ is disjoint from the endpoints of the basic intervals, each interval $I\in\iI_{l_k}$ (at level $l_k$) contains some point $p_I\in P$ in its interior by condition \ref{intersect}. As $P$ is perfect, we can choose a distinct point $p'_I\in P$ in $I$. We can find a large enough $n$ such that the $2^{k+1}$ distinct points $p_I$ and $p'_I \ (I\in\iI_{l_k})$ are all separated by $\iB_n$. Define \[ l_{k+1}=\max\{n,2^{k+3}+1\}. \] Clearly, condition \ref{far} is also satisfied. 
Let $\iI_{l_{k+1}}$ be the subcollection of $\iB_{l_{k+1}}$ consisting of the $2^{k+1}$ basic intervals containing all the points $p_I$ and $p'_I\ (I\in\iI_{l_k})$. This recursion clearly provides a system of intervals satisfying the required properties. Now we can define \[ Q=\bigcap_{k=0}^\infty \bigcup \iI_{l_k}. \] Let us extend this tree of intervals to the intermediate levels in the natural way, that is, for every $I\in\iI_{l_k}$ and successor $J\in\iI_{l_{k+1}}$ and every $n\in(l_k,l_{k+1})$ let us add to the tree the unique basic interval of level $n$ that is contained in $I$ and contains $J$. For $n=2,3,4,5$ define $\iI_n=\{[0,\frac{1}{n!}]\}$. Hence we get $\iI_n$ for every $n=2,3,\dots$ so that \[ \bigcap_{n=2}^\infty \bigcup \iI_n=\bigcap_{k=0}^\infty \bigcup \iI_{l_k}=Q. \] Our next goal is to define $y=\sum_{n=2}^\infty \frac{y_n}{n!}$ so that $Q+y\subset C_0$. Define $y_2=y_3=y_4=y_5=0$. For every $n\geq 6$ there exists $k$ such that $l_k<n\leq l_{k+1}$. Clearly, the size of $\iI_n$ is $2^{k+1}$, and $Q\subset \cup \iI_n$. This means that there are at most $2^{k+1}$ possible values for $q_n$, where $q\in Q$ and $q=\sum_{n=2}^\infty \frac{q_n}{n!}$ (we do not have to worry about nonunique expansions, as $Q\subset P$ so $Q$ is disjoint from the endpoints of the basic intervals). For every such $q_n$ there are at most two values of $m$ such that $q_n+m\in\{n-2,n-1\}$. Hence altogether there are at most $2\cdot2^{k+1}$ such ``bad'' values, so if $n-1>2\cdot2^{k+1}$ then we can fix a $y_n\in\{0,1,\dots,n-2\}$ such that $q_n+y_n\notin\{n-2,n-1\}$ for every possible $q_n$. But our requirement on $n$ and $k$, namely $n-1>2\cdot2^{k+1}$, is clearly satisfied as $n>l_k\geq 2^{k+2}+1$ by condition \ref{far}. So we can define \[ y=\sum_{n=2}^\infty \frac{y_n}{n!} \] so that for every $n\geq 6$ we have $y_n\in\{0,1,\dots,n-2\}$ and that for every $q\in Q$ with $q=\sum_{n=2}^\infty \frac{q_n}{n!}$ we have $q_n+y_n\notin\{n-2,n-1\}$. 
We claim that $Q+y\subset C_0$, which will complete the proof. Fix $q\in Q$ with $q=\sum_{n=2}^\infty \frac{q_n}{n!}$, then \[ q+y = \sum_{n=2}^\infty \frac{q_n+y_n}{n!} = \sum_{n=2}^\infty \frac{q_n+y_n-n\varepsilon_n}{n!} + \sum_{n=2}^\infty \frac{n\varepsilon_n}{n!}, \] where $\e_n$ is the ``carried digit'', so \[ \e_n=\left\{ \begin{array}{ll} 0 & \textrm{if } q_n+y_n \leq n-1\\ 1 & \textrm{otherwise}. \end{array} \right. \] Continuing the above calculation we get \[ q+y = \sum_{n=2}^\infty \frac{q_n+y_n-n\varepsilon_n}{n!} + \sum_{n=2}^\infty \frac{\varepsilon_n}{(n-1)!} = \sum_{n=2}^\infty \frac{q_n+y_n-n\varepsilon_n}{n!} + \sum_{n=1}^\infty \frac{\varepsilon_{n+1}}{n!} = \] \[ = \e_2 + \sum_{n=2}^\infty\frac{q_n+y_n-n\varepsilon_n+\e_{n+1}}{n!} = \sum_{n=2}^\infty \frac{q_n+y_n-n\varepsilon_n+\e_{n+1}}{n!}, \] since $\e_2=0$ (indeed, $q_2=y_2=0$). We now check that for every $n\geq 2$ the numerator $q_n+y_n-n\varepsilon_n+\e_{n+1} \in \{0,1,\dots,n-2\}$, which shows that $q+y\in C_0$. For $n<6$ this is clear, as $y_n=0$ and also $q_n=0$ by the assumption $P\subset[0,\frac{1}{5!}]$. For $n\geq 6$ recall that $q_n\leq n-1$, $y_n\leq n-2$, so $q_n+y_n \leq 2n-3$ and also that $q_n+y_n\notin\{n-2,n-1\}$. We separate the cases $\e_n=0$ and $\e_n=1$. If $\e_n=0$, then $q_n+y_n \leq n-1$, but since $q_n+y_n\notin\{n-2,n-1\}$, in fact $q_n+y_n \leq n-3$. Therefore $q_n+y_n-n\varepsilon_n+\e_{n+1} = q_n+y_n+\e_{n+1} \leq n-2$, and we are done. On the other hand, if $\e_n=1$, then $q_n+y_n-n\varepsilon_n \leq n-3$, so $q_n+y_n-n\varepsilon_n+\e_{n+1} \leq n-2$, so this case is also done. This completes the proof. \ep \section{Answer to the question of Gruenhage} Now we answer the original question of Gruenhage. Recall that \emph{$\textrm{cof}(\iN)$} is the minimal cardinality of a family $\iF\subset\iN$ for which every nullset is contained in some member of $\iF$.
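As an aside, the factorial-base bookkeeping used above is easy to experiment with computationally. The following sketch (an illustration only, not part of any proof; the function names are ours, and exact rational arithmetic is used to avoid rounding) computes factorial-base digits, tests the digit condition defining $C_0$ up to a finite level, and confirms that the natural level-$n$ cover of $C_0$ — the $(n-1)!$ basic intervals of level $n$ with all digits $d_k \leq k-2$, each of length $1/n!$ — has total length $1/n \to 0$, so $C_0$ is indeed a Lebesgue-null set:

```python
from fractions import Fraction
from math import factorial

def factorial_digits(x, levels):
    """Digits (d_2, ..., d_levels) of x in 'factorial base':
    x = sum of d_n / n! with d_n in {0, ..., n-1}."""
    x = Fraction(x)
    digits = []
    for n in range(2, levels + 1):
        x *= n
        d = int(x)        # floor, since x >= 0
        digits.append(d)
        x -= d
    return digits

def in_C0_up_to(x, levels):
    """Necessary condition for x in C_0: every digit d_n <= n - 2
    (up to the given level; endpoints with two expansions are ignored)."""
    return all(d <= n - 2 for n, d in enumerate(factorial_digits(x, levels), 2))

def cover_length(n):
    """Total length of the level-n cover of C_0: (n-1)! intervals of
    length 1/n!, i.e. 1/n."""
    count = 1
    for k in range(2, n + 1):
        count *= (k - 1)          # admissible digit choices at level k
    return Fraction(count, factorial(n))

print(factorial_digits(Fraction(1, 2), 5))   # [1, 0, 0, 0]: 1/2 = 1/2!
print(in_C0_up_to(Fraction(1, 2), 5))        # False: d_2 = 1 > 0
print(in_C0_up_to(Fraction(5, 24), 20))      # True: 1/3! + 1/4! has small digits
print(cover_length(10))                      # 1/10
```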
\bt\lab{less} $\RR$ can be covered by $\textrm{cof}(\iN)$ many translates of $C_0$, consequently $\textrm{cov}^*(c\iN) \leq \textrm{cof}(\iN)$. \et \bp It is clearly sufficient to cover the unit interval. Fix $f:\omega\setminus\{0,1\}\rightarrow\omega\setminus\{0\}$. A set of the form $S=\Pi_{n=2}^\infty A_n$, where $A_n\subset \{0,1,\dots,n-1\}$ is of cardinality at most $f(n)$ for every $n$, is called an \emph{$f$-slalom}. Suppose $\lim_{n\to\infty}f(n) =\infty$, $f(2)=f(3)=f(4)=f(5)=1$ and also that $f(n)<\frac{n-1}{2}$ for every $n\geq 6$. Combining \cite{BJ} Thm 2.3.9 and \cite{GL} Thm 2.10 we obtain a cover of $\Pi_{n=2}^\infty \{0,1,\dots,n-1\}$ by $\textrm{cof}(\iN)$ many $f$-slaloms $\{S_\alpha \mid \alpha< \textrm{cof}(\iN) \}$. (We actually obtain a cover of $\omega^\omega$ first, which is a larger space, so it is trivial to restrict this cover to get a cover of $\Pi_{n=2}^\infty \{0,1,\dots,n-1\}$. Moreover, \cite{BJ} works with inclusion mod finite, but that makes no difference, as we can replace each slalom by countably many slaloms to get around this difficulty.) For a slalom $S$ define \[ S^* = \left\{ \left. \sum_{n=2}^\infty \frac{s_n}{n!} \right|\ \left(s_n\right)_{n=2}^\infty \in S \right\}. \] Clearly $\{S^*_\alpha \mid \alpha< \textrm{cof}(\iN) \}$ covers the unit interval. The following lemma will complete the proof of the theorem. \bl Let $f$ be as above and $S$ be an $f$-slalom. Then there exists $y\in\RR$ such that $S^*+y\subset C_0$. \el \bp The proof is based on the ideas used in Theorem \ref{intersects}. $S^*$ plays the role of $Q$. Our goal is to define \[ y=\sum_{n=2}^\infty \frac{y_n}{n!} \] with $y_n\in\{0,1,\dots,n-2\}$ so that for every $\left(s_n\right)_{n=2}^\infty \in S$ and for every $n \geq 6$ we have $s_n+y_n\notin\{n-2,n-1\}$.
But this is clearly possible by our assumptions on $f$: there are at most $f(n)<\frac{n-1}{2}$ possibilities for $s_n$, each of which excludes at most two values of $y_n$ (namely $n-2-s_n$ and $n-1-s_n$, when these are non-negative), so fewer than $n-1$ of the $n-1$ candidate values are excluded, and we can find a suitable $y_n\in\{0,1,\dots,n-2\}$. Then by the same calculation as in the last part of the proof of Theorem \ref{intersects} we check that \[ \sum_{n=2}^\infty \frac{s_n+y_n}{n!} \in C_0. \] This completes the proof of the lemma. \ep Hence for every $\alpha < \textrm{cof}(\iN)$ there exists $y_\alpha$ such that $S_\alpha^*+y_\alpha\subset C_0$, but then for $x_\alpha=-y_\alpha$ we have $S_\alpha^*\subset C_0+x_\alpha$, so we obtain a cover of the unit interval by $\textrm{cof}(\iN)$ many translates of $C_0$ and therefore the proof of Theorem \ref{less} is also complete. \ep \bcor\lab{covers} It is consistent that fewer than continuum many translates of a compact set of measure zero cover the real line, that is, $\textrm{cov}^*(c\iN) < 2^\omega$ is consistent. \ecor \bp $\textrm{cof}(\iN)$ is consistently less than the continuum \cite{BJ} p.~388. \ep \section{Remarks and open problems} There are alternative ways to prove Corollary \ref{covers}. Using Theorem \ref{intersects} one can directly use Sacks forcing to show that the translates of $C_0$ by the ground model reals cover $\RR$. Another method is to use the so-called $CPA$ axiom (see \cite{CP}). However, in all these cases $\textrm{cof}(\iN)$ is less than the continuum. This is not necessary, as in the Laver model $\textrm{cof}(\iN)=2^\omega=\omega_2$ but it can be shown that the above slalom argument still works, that is, using the above $f$ the space $\Pi_{n=2}^\infty \{0,1,\dots,n-1\}$ can be covered by $\omega_1$-many $f$-slaloms in the Laver model. (This can be derived from the so-called \emph{Laver property}, see \cite{BJ}.) Therefore $\textrm{cov}^*(c\iN)$ is not the same as $\textrm{cof}(\iN)$. \bq Is $\textrm{cov}^*(c\iN)$ equal to one of the known cardinal invariants? \eq Another natural question is the following.
In most cases the values of the classical cardinal invariants in the Cicho\'n diagram remain the same if we replace $\RR$ with an arbitrary (uncountable) Polish space. However, in the case of $\textrm{cov}^*(c\iN)$ the situation is not clear. The authors were able to reprove the results of this paper in the case of the Cantor group or more generally for countable products of finite discrete groups equipped with the Haar measure, but not in the general case. \bq Can it be shown (without resorting to extra set theoretic axioms) that there is an uncountable locally compact Polish group $G$ with Haar measure $\mu$ such that for every compact set $C\subset G$ with $\mu(C)=0$ and every $A\subset G$ of cardinality less than $2^\omega$ we have $C+A\neq G$? \eq As for the cardinal invariants, even the following is open. \bd If $G$ is an uncountable locally compact Polish group with Haar measure $\mu$ then \emph{$\textrm{cov}^*_G(c\iN)$} is the smallest cardinal $\kappa$ for which it is possible to find a compact set of $\mu$-measure zero and cover $G$ by $\kappa$ many translates of it. \ed \bq Is it true that $\textrm{cov}^*_{G_1}(c\iN) = \textrm{cov}^*_{G_2}(c\iN)$ for all uncountable locally compact Polish groups $G_1$ and $G_2$? \eq
\section{Introduction to replicable functions} We assume familiarity with the notation and contents of \cite{ACMS92}, \cite{ConNor79}, \cite{ForMcKNor94} and \cite{McK01}. Replicable functions are definable in terms of a generalized Hecke operator or by constraints on their coefficients; see \cite{Nor84}. Each such function $f$ is fixed by a group $G_f$ that is commensurable with the modular group, $\mathrm{PSL}(2,\mathbb{Z})$. There is a natural poset formed by these stabilizer groups, considered up to conjugation by $z\mapsto kz,\ k\in\mathbb{Z}^{>0}$. Replicable functions have a $q$-series (Fourier series) expansion at $i\infty$ of the form \[f(q) = \frac 1 q + \sum_{k\geq1} a_k\,q^k\ , \qquad q = e^{2\pi iz},\ \Im(z) > 0, \quad \forall\ k:\ \ a_k\in\mathbb{Z} .\] Cummins proves in \cite{Cum93} that if this series is finite then $f(q)=1/q + cq$ with $c\in\{0,1,-1\}$ -- the modular fictions, related to exp, cos and sin, which we shall hereafter ignore. Computations suggest that there are 616 other replicable functions of which 171 are monstrous moonshine functions, see \cite{ConMcKSeb04}. There is no satisfactory proof of the completeness of this list although it is compatible with several independent computational checks. The following remarkable result of Norton is fundamental, see \cite{Nor84}, \cite{Cum93}. \begin{theorem} A replicable function is determined by its coefficients in the Norton basis, $\{a_k\}\,,\ k\in B=\{1,2,3,4,5,7,8,9,11,17,19,23\}$. \end{theorem} \section{Algorithms for functional decomposition and computation of relations} \subsection{Functional decomposition} We sketch the theory of univariate rational decomposition and an algorithm for the computation of the poset of replicable functions with respect to rational relations. \begin{definition} In $T=\mathbb{Q}(t)\setminus\mathbb{Q}$ we define the binary operation of \textbf{composition} as \[g(t)\circ h(t)=g(h(t))=g(h)(t).\] $(T,\circ)$ is a semigroup with $t$ as neutral element.
If $f=g\circ h$, we call this a \textbf{decomposition} of $f$ and say that $g$ is a \textbf{left component} of $f$ and $h$ is a \textbf{right component} of $f$. A decomposition is \textbf{trivial} if $g$ or $h$ is a unit with respect to composition. Decompositions $f=g_1\circ h_1=(g_1\circ u)\circ(u^{-1}\circ h_1)$ are called \textbf{equivalent}, where $u$ is invertible with respect to composition and $u^{-1}$ is its functional inverse. Given a rational function $f\in T$, we call it \textbf{indecomposable} if it is not a unit and all its decompositions are trivial. A decomposition of $f\in\mathbb{Q}(t)$ of length $r$, $f=g_1\circ\cdots\circ g_r$, is called \textbf{refined} if each $g_i$ is indecomposable. \end{definition} The units with respect to composition are linear fractional transformations. The decomposition problem is: given $f\in\mathbb{Q}(t)$, compute all the decompositions of $f$, i.e., find a representative $(h_i,g_i)$ for each class of decompositions with respect to the equivalence relation above. Solving this problem leads to the computation of all refined decompositions. \begin{definition} For a non-constant rational function $f(t)=f_N(t)/f_D(t)$ with $f_N,f_D\in\mathbb{Q}[t]$ and $\gcd(f_N,f_D)=1$ we define the \textbf{degree} of $f$ as \[\deg f=\max\{\deg f_N,\ \deg f_D\}.\] We also define $\deg a=0$ for all non-zero $a\in\mathbb{Q}$. \end{definition} Because the solution to the problem may not be unique, most decomposition algorithms have two steps: first, we compute candidates for the right components, then check for their associated left components. \begin{remark} Given $f,h\in\mathbb{Q}(t)$, we can efficiently test if there is a $g\in\mathbb{Q}(t)$ with $f=g\circ h$. It is necessary that $\deg h$ divides $\deg f$. We then solve the equations resulting from the $q$-expansion of $f-(g\circ h)$. This is fast as the equations are linear.
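As an illustration of this remark (a sketch only: the helper below is hypothetical and treats the special case of a polynomial left component $g$, where the linearity of the system is immediate):

```python
import sympy as sp

t = sp.symbols('t')

def polynomial_left_component(f, h, r):
    """Return a polynomial g of degree <= r with f = g(h), or None.
    The coefficients of g enter the identity g(h) - f = 0 linearly."""
    a = sp.symbols(f'a0:{r + 1}')            # unknown coefficients of g
    g = sum(a[i] * t**i for i in range(r + 1))
    diff = sp.together(g.subs(t, h) - f)     # g(h) - f as a single fraction
    eqs = sp.Poly(sp.numer(diff), t).coeffs()   # each coefficient must vanish
    sol = sp.solve(eqs, a, dict=True)
    return sp.expand(g.subs(sol[0])) if sol else None

h = (t**2 + 1) / t                       # h(t) = t + 1/t
f = (t**4 + 1) / t**2                    # f(t) = t^2 + 1/t^2 = h(t)^2 - 2
g = polynomial_left_component(f, h, 2)   # recovers g = t^2 - 2
```

For a rational $g$ one first clears denominators; the resulting system is still linear in the coefficients of $g$.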
\end{remark} We introduce a useful notion that will be the starting point for the decomposition algorithm, see \cite{GutRubSev01} and \cite{Sev04}. \begin{definition} Let $f=f_N/f_D\in\mathbb{Q}(t)$ with $f_N,f_D\in\mathbb{Q}[t]\ ,\ \gcd(f_N,f_D)=1$. We say that $f$ is in \textbf{normal form} when $\deg f_N>\deg f_D$ and $f_N(0)=0$ (or simply, $f(\infty)=\infty$, $f(0)=0$). \end{definition} \begin{theorem} Let $f\in T$. \begin{itemize} \item[(i)] There exist units $u,v\in\mathbb{Q}(t)$ such that $u\circ f\circ v$ is in normal form, with both numerator and denominator monic. \item[(ii)] Let $f\in\mathbb{Q}(t)$ be in normal form. If $f=g\circ h$, there is a unit $u$ such that $g\circ u$ and $u^{-1}\circ h$ are in normal form. \end{itemize} \end{theorem} The following is the key to the decomposition algorithm. \begin{theorem} Let $f,g,h\in\mathbb{Q}(t)$ with $f=f_N/f_D$, $h=h_N/h_D$ where $f_N,f_D,h_N,h_D\in\mathbb{Q}[t]$, $\gcd(f_N,f_D)=1$ and $\gcd(h_N,h_D)=1$, and $f=g\circ h$. If $f,g,h$ are in normal form, then $h_N \mid f_N$ and $h_D \mid f_D$. \end{theorem} \begin{proof} Let \[g=\frac{t^r+c_{r-1}t^{r-1}+\cdots+c_1t}{d_{r-1}t^{r-1}+\cdots+d_0},\qquad d_0\neq 0,\] then \[f=\frac{h_N^r+c_{r-1}h_N^{r-1}h_D+\cdots+c_1h_Nh_D^{r-1}}{d_{r-1}h_N^{r-1}h_D+\cdots+d_0h_D^r}\] and, as the degree is multiplicative with respect to composition, there is no simplification in this expression. The result follows. \end{proof} We describe the algorithm now. \begin{algorithm}[Rational decomposition] $ $ \begin{description} \item [Input:] $f\in T$. \item [Output:] all non-trivial decompositions $(g,h)$ of $f$, if any exist. \end{description} \begin{description} \item[A] Compute $u$ and $v$ so that $\overline{f}=u\circ f \circ v$ is in normal form. Let $f_N,f_D$ be the monic numerator and denominator of $\overline{f}$. \item[B] Factor $f_N$ and $f_D$.
From this compute $D=\{(A_1,B_1),\ldots,(A_m,B_m)\}$, the set of pairs $(A,B)$ such that $A,B$ are monic polynomials dividing $f_N,f_D$ respectively. Set $i=1$. \item[C] Check if there exists $g\in\mathbb{Q}(t)$ with $\overline{f}=g(A_i/B_i)$; if it does, add $\left(u^{-1}(g),(A_i/B_i)(v^{-1})\right)$ to the list of decompositions of $f$. \item[D] If $i<m$, increase $i$ and go to \textbf{C}, otherwise return the list of decompositions. \end{description} \end{algorithm} \begin{analysis} The description above shows that the algorithm correctly computes at least one representative for each equivalence class of decompositions (an extra step would be needed to avoid having more than one representative for each decomposition class). The algorithm has exponential complexity due to the possibility of having an exponential number of candidates in the worst case. In practice, degree conditions reduce the number of candidates. In tests we have found that about 85\% of the time is spent on the factoring, and the number of candidates is small (a random polynomial is typically irreducible). Because of this, the algorithm is fast. \end{analysis} \subsection{Rational relations} To find relations between two $q$-series we follow a simple procedure. We use the fact that replicable functions correspond to groups acting on the upper half plane, and a rational function of degree $n$ is an $n:1$ map. \begin{algorithm}[Computation of rational relations] $ $ \begin{description} \item [Input:] two $q$-series $s_1,s_2$ as described in the introduction. \item [Output:] all rational relations of the form $s_1(q^k)=f(s_2(q))$, $k\geq1$. \end{description} \begin{description} \item [A] Compute the orders $e_1,\ldots,e_r$ of the generators $M_1,\ldots,M_r$ of the fundamental region of $s_1$. The hyperbolic area of the region is $A_1:=(r-2)\pi-\sum \pi/e_i$. Compute the area $A_2$ for $s_2$. \item [B] If $d:=A_2/A_1$ is not an integer, then there are no relations. Otherwise, put $k=1$. \item [C] Let \[f=\frac{t^d+a_{d-1}t^{d-1}+\cdots+a_0}{t^{d-k}+b_{d-k-1}t^{d-k-1}+\cdots+b_0}\] and solve for $a_i,b_j$ the linear system given by $f(s_2(q))-s_1(q^k)$. \item [D] If there is a non-trivial solution to the system, store the corresponding $f$ and $k$. If $k<d$, increase $k$ and go to \textbf{C}, otherwise return all relations found. \end{description} \end{algorithm} \begin{analysis} In each relation, the degree of the numerator is the ratio of the areas, and the difference $k$ between the degrees of numerator and denominator is the exponent of $q$ in the relation $s_1(q^k)=f(s_2(q))$. In step \textbf{A}, the orders of the non-identity elements are determined by the trace squared divided by the determinant. In step \textbf{C}, solving for the $a_i,b_j$ requires fewer than $2d$ coefficients. \end{analysis} \begin{remark} The values $k\geq1$ correspond to the conjugation $z\mapsto kz,\ k\in\mathbb{Z}^{>0}$, that is, $q\mapsto q^k$. \end{remark} \section{The computation} For each of the 616 replicable functions, we have: \begin{itemize} \item the coefficients $a_1,\ldots,a_{23}$, \item parabolic and elliptic generators for the fixing groups, i.e. generators of the stabilizers of the vertices of a fundamental region. \end{itemize} For each pair of series we determine whether there is a rational relation between them as in Section 3.2. We decompose any rational relations as described in Section 3.1 in order to refine the decompositions, repeating until we have all refined decompositions. In terms of the poset graph we use: \begin{algorithm} [Poset refinement] $ $ \begin{description} \item[A] Draw a vertex for each of the 616 functions. \item[B] For each pair of functions $s_1$ and $s_2$, compute all rational relations of the form $s_1(q^k)=f(s_2(q))$, if any. For each relation, draw a labelled directed edge \[s_1\ \stackrel{\deg f,\;k}{\longrightarrow}\ s_2 \quad\mbox{where}\quad s_1(q^k)=f(s_2(q)).\] \item[C] For each of these, compute all the decompositions of $f$.
For each decomposition $(g,h)$ compute $s_3(q^j):=h(s_2(q))$, $j\geq1$ and replace the edge with the two edges \[s_1\ \stackrel{\deg g,\;k\!/\!j}{\longrightarrow}\ s_3\ \stackrel{\deg h,\;j}{\longrightarrow}\ s_2\] \item[D] Repeat step \textbf{C} until all the rational functions are indecomposable. \end{description} \end{algorithm} The computations described in this section were performed on a Pentium-IV 2GHz machine using Maple 7. First, once the areas were known, it was seen that the maximum possible degree of a rational relation would be 96. The search for rational relations took about 20 hours; about 10\% of this time was spent precomputing 200 coefficients for each series from the initial 23. We found 2419 relations in this step; their degrees are shown below. \[\begin{array}{cccc} \textrm{degree} & \textrm{number} & \textrm{degree} & \textrm{number}\\ 2 & 698 & 16 & 52 \\ 3 & 243 & 18 & 60 \\ 4 & 422 & 20 & 2 \\ 5 & 26 & 24 & 71 \\ 6 & 333 & 28 & 2 \\ 8 & 178 & 30 & 8 \\ 9 & 40 & 32 & 4 \\ 10 & 14 & 36 & 40 \\ 12 & 209 & 48 & 5 \\ 14 & 4 & 72 & 2 \\ 15 & 6 \\ \end{array}\] In Step \textbf{C}, we decompose all the functions we found previously, remove the repeated relations, and continue until all functions are indecomposable. In this way, and since our decomposition algorithm outputs all possible decompositions up to units, we ensure that we find all missing functions, if any, from the lists available. The computation of all possible decompositions is fundamental since there exists no formal proof of the completeness of our initial data; our computation provides a further consistency check. The decomposition of all rational relations took around 30 hours overall. In the end, we obtained 1049 indecomposable rational relations. We summarize them in Table \ref{table1} in the appendix. We also list the connected components of the graph there. For each function we give its immediate predecessors and successors.
For each edge we give two numbers, the degrees of the numerator and denominator of the rational relation; the first is the degree of the relation, and the power of $q$ is given by the difference of the two numbers. For example, $(1A,2:0)$ in line $2a$ means that $j(q^2)$ is a degree two polynomial in the principal modulus $2a$. Notice that in some cases there are two edges between two given functions. \section{Remarks} It is noteworthy that for two series $s_1,s_2$ we may find more than one relation, i.e. $s_1(q^{k_1})=f_1(s_2(q))$ and $s_1(q^{k_2})=f_2(s_2(q))$ with $k_1\neq k_2$. By computation of a resultant, we can find a polynomial relation of the type $P(s_1(q),s_1(q^{k_1/k_2}))=0$. Computation reveals a remarkable fact about the relation between $j$ (labelled $f =1A$) and the principal modulus for $\Gamma(3)$, $t=s(z/3)$ where $s = (\eta(q)/\eta(q^9))^3+3$, labelled $9B$. Specifically, refined decomposition chains of different lengths exist for \begin{figure}[ht!] \[\xy (-80,18)*{f=\displaystyle\frac{t^3(t^3+6^3)^3}{(t^3-3^3)^3}\,;}; (-65,9)*{\mbox{namely} \quad\displaystyle f=t^3\circ\frac{t(t-12)}{t-3}\circ\frac{t(t+6)}{t-3}}; (-64,0)*{\mbox{and} \quad\displaystyle f=\frac{t^3(t+24)}{t-3}\circ\frac{t(t^2-6\,t+36)}{t^2+3\,t+9}\,.}; {\ar@{-}^3 (0,0)*+{9B}; (20,6)*+{3B};}; {\ar@{-}^4 (20,6)*+++{}; (0,18)*+{1A};}; {\ar@{-}^3 (0,18)*+++{}; (-20,12)*+{3C};}; {\ar@{-}^2 (-20,12)*+++{}; (-10,6)*+{9A};}; {\ar@{-}^2 (-10,6)*+++{}; (0,0)*+++{};}; \endxy\] \end{figure} We believe this is the first example of a rational function in $\mathbb{Q}(t)$ with refined decomposition chains of different lengths. This does not occur with polynomials, see \cite{GutSev06}. Norton points out that every component (other than the fictions) contains at least one function that is either monstrous or a translate of a monstrous function.
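The resultant computation mentioned above can be sketched as follows, in a toy case (the choice $f_1=u^2$, $f_2=u^3$ is purely illustrative and is not one of the actual moonshine relations):

```python
import sympy as sp

u, x, y = sp.symbols('u x y')

# Suppose s1(q^{k_1}) = f1(s2(q)) and s1(q^{k_2}) = f2(s2(q)).
# Eliminating the common value s2(q) with a resultant gives a
# polynomial relation P(x, y) = 0 between x = s1(q^{k_1}) and
# y = s1(q^{k_2}); here f1 = u**2 and f2 = u**3.
P = sp.resultant(u**2 - x, u**3 - y, u)

# P must vanish whenever x = c**2 and y = c**3 for a common value c.
assert all(P.subs({x: c**2, y: c**3}) == 0 for c in (2, 3, 5))
```

In this toy case the eliminant is $y^2-x^3$ (up to sign), the familiar relation between a square and a cube of a common quantity.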
\section*{Appendix} \input{latextable-allgraph2} The connected components of the graph are: \input{latextable-graphcomponents} \section*{Acknowledgement} We thank Simon P. Norton for his help and insight. \bibliographystyle{jcm}
\section{Introduction} Two sets of gauge-theoretic equations that we consider in this article originated in $\mathcal{N}=4$ super Yang--Mills theory in Theoretical Physics; they appear after {\it topological twists} of the theory. However, we introduce these equations as 4-dimensional analogues of Hitchin's equations on Riemann surfaces, since many ideas and techniques developed for the Hitchin equations carry over to the study of these equations. \paragraph{Hitchin's equations on compact Riemann surfaces.} Let $\Sigma$ be a compact Riemann surface with genus greater than one, and let $E$ be a holomorphic vector bundle on $\Sigma$. Hitchin \cite{Hi} introduced the following equations, seeking a pair $(A, \Phi)$ consisting of a connection $A$ on $E$ and a section $\Phi$ of $\text{End}(E) \otimes \Lambda^{1,0}$, where $\Lambda^{1,0} := ( T^* \Sigma \otimes \C)^{1,0}$. \begin{gather} \bar{\partial}_A \Phi =0, \\ F_A + [ \Phi, \Phi^* ] = 0. \end{gather} The study of the Hitchin equations has produced a wonderfully rich body of results, so it is natural to look for similar equations in higher dimensions, say, on a complex surface. In fact, the equations described in this article arise in exactly this way. In that case, since $\Lambda^{1,0}$ can be thought of as either the holomorphic cotangent bundle $\Omega^1_{\Sigma}$ or the canonical bundle $K_{\Sigma}$ on a Riemann surface, there are at least two natural analogues of the Hitchin equations for a complex surface $X$: one is obtained by looking at a section of $\text{End}(E) \otimes \Omega^1_X$; the other considers a section of $\text{End}(E) \otimes K_X$. The former gives the Kapustin--Witten equations, and the latter the Vafa--Witten equations, as described below. \paragraph{The Kapustin--Witten equations on closed four-manifolds.} Let $X$ be a compact, oriented, Riemannian four-manifold, and let $P \to X$ be a principal $SO(3)$ or $SU(2)$ bundle over $X$.
We denote by $\mathfrak{g}_{P}$ the adjoint bundle of $P$, and by $\mathcal{A}_{P}$ the space of connections on $P$. By using the Riemannian metric, we decompose the space of two-forms as $\Omega^2 (X) = \Omega^{+} (X) \oplus \Omega^{-} (X)$. We call elements in $\Omega^{+} (X)$ {\it self-dual two-forms}, and ones in $\Omega^{-}(X)$ {\it anti-self-dual two-forms}. We consider the following equations, the {\it Kapustin--Witten equations}, that ask a pair $(A, \mathfrak{a}) \in \mathcal{A}_{P} \times \Gamma ( X, \mathfrak{g}_{P} \otimes T^{*} X )$ to satisfy \begin{gather} d_{A}^{*} \mathfrak{a} =0 , \quad ( d_{A} \mathfrak{a} )^{-} =0 , \label{eq:KW1} \\ F_{A}^{+} - [\mathfrak{a} \wedge \mathfrak{a} ]^{+} =0, \label{eq:KW2} \end{gather} where $F_{A}$ is the curvature two-form of the connection $A$, and superscripts ``-'' and ``+'' indicate taking the anti-self-dual part or self-dual part respectively. These equations were introduced by Kapustin and Witten \cite{KW} (see also \cite{Wi}) in the context of the geometric Langlands programme from the viewpoint of $\mathcal{N}=4$ super Yang--Mills theory in four dimensions. See also a paper by Gagliardo and Uhlenbeck \cite{GU}. Note that the equation $d_{A}^{*} \mathfrak{a} =0$ makes \eqref{eq:KW1} and \eqref{eq:KW2} an elliptic system once a gauge-fixing equation is added. Note also that there are no solutions to the equations \eqref{eq:KW1} and \eqref{eq:KW2} in the case when $X$ is compact if the first Pontrjagin number of $\mathfrak{g}_{P}$ is positive (see \cite[\S 3.3]{KW}, \cite[\S 2.b)]{T2}). \paragraph{Taubes' analysis on $SL(2, \C)$ connections and the singular sets.} The analytic properties of solutions to the above equations were successfully revealed by Taubes. In \cite{T2}, Taubes studied the Uhlenbeck-style compactness problem for $SL(2,\C)$-connections, including solutions to the above equations, on four-manifolds (see also \cite{T1}, \cite{T3}).
One of the major difficulties to be overcome here was the lack of a priori boundedness of the ``extra fields'' $\mathfrak{a}$. However, Taubes introduces a real codimension two singular set, to be denoted $Z$, outside which a sequence of ``partially rescaled'' $SL(2,\C)$-connections converges after gauge transformations except bubbling out at a finite set of points (see also \cite{D}, \cite{H}, \cite{HW}, \cite{Mo}, \cite{MSWW}, \cite{MSWW2}, \cite{Tak} and \cite{Tan1} for related problems). We describe this briefly in Section \ref{sec:rtaubes} of this article. \paragraph{The Kapustin-Witten equations on compact K\"{a}hler surfaces and the singular sets.} Our interest in this article is the structure of the above singular set $Z$. In fact, Taubes further studied the structure of the singular set $Z$ in a general setting \cite{T3} (see also \cite{Tak}); here, however, we restrict ourselves to the K\"{a}hler surface case. To state our observation in this article, let $X$ be a compact K\"{a}hler surface with K\"{a}hler form $\omega$, and let $E$ be a Hermitian vector bundle on $X$ with Hermitian metric $h$. We denote by $\mathcal{A}_{(E,h)}$ the space of all connections on $E$ which preserve the metric $h$, and by $\mathfrak{u} (E) = \text{End}(E,h)$ the bundle of skew-Hermitian endomorphisms of $E$. Then the equations \eqref{eq:KW1} and \eqref{eq:KW2} take the following form for a pair $(A,\phi) \in \mathcal{A}_{(E,h)} \times \Omega^{1,0} (\mathfrak{u} (E)) $. \begin{gather} \bar{\partial}_{A} \phi = 0, \quad [ \phi \wedge \phi ] =0, \label{eq:S1}\\ F_{A}^{0,2} = 0, \quad \Lambda \left( F_{A}^{1,1} + 2 [ \phi \wedge \phi^{*}] \right) =0 . \label{eq:S2} \end{gather} For the deduction of these, see \cite[\S 6(iii)]{N} or Section \ref{sec:eqK} of this article. These are the equations studied by Simpson in \cite{Si}. (Simpson considered these in arbitrary dimensions.) We now assume the rank of $E$ to be two.
We consider a sequence $\{ (A_n , \phi_n )\}$ of solutions to Simpson's equations \eqref{eq:S1} and \eqref{eq:S2}, and apply the above Taubes analysis to it, putting $r_n := || \phi_n ||_{L^2}$ and assuming that $\{ r_n \}$ has no bounded subsequence. We then obtain a singular set, to be denoted by $Z$, outside which the sequence $\{ (A_n , \phi_n / r_n )\} $ has an $L^2_1$ convergent subsequence after a suitable choice of gauge transformations except bubbling out at a finite set of points in $X$. In this article, we prove the following. \begin{theorem}[Corollary \ref{cor:hs}] The singular set $Z$ has a structure of an analytic subvariety of $X$. \label{th:main} \end{theorem} This can be done by a simple observation that the singular set is identified with the zero set of a section of the holomorphic bundle $(\Omega_{X}^1)^{\otimes 2}$ on $X$ up to a finite set of points. \paragraph{The Vafa--Witten equations on closed four-manifolds.} We next consider another analogue of the Hitchin equations, called the {\it Vafa--Witten equations} \cite{VW}. The Vafa--Witten equations look for a triple $(A, B, \Gamma)$ consisting of a connection $A$ on a principal $G$-bundle $P$, with $G$ a compact Lie group, over a smooth oriented Riemannian four-manifold $X$; a section $B$ of the associated bundle $\mathfrak{g}_{P} \otimes \Lambda^{+}$; and a section $\Gamma$ of $\mathfrak{g}_{P}$, where $\mathfrak{g}_{P}$ is the adjoint bundle of $P$ and $\Lambda^+$ is the self-dual part of $\Lambda^2 T^*X$. The exact form of the equations is as follows. \begin{gather} d_A^* B + d_A \Gamma =0, \label{VW1} \\ F_A^+ + [B.B] + [B, \Gamma] =0, \label{VW2} \end{gather} where $F_A^+$ is the self-dual part of the curvature of the connection $A$, and $[B.B] \in \Gamma (X, \mathfrak{g}_{P} \otimes \Lambda^+)$ is defined through the Lie brackets of $\mathfrak{g}_{P}$ and $\Lambda^+$ (see \cite[\S A.1.6]{M} or \cite[\S 2]{Tan1} for details).
Taubes \cite{T4} proved a theorem analogous to the Kapustin--Witten case also for the Vafa--Witten equations. In addition, our observation on the singular sets holds for the Vafa--Witten equations case, namely, the singular set can be identified with the zero set of a section of the square of the canonical bundle of $X$, so it also has a structure of an analytic subvariety of $X$ (Corollary \ref{cor2}). \vspace{0.3cm} The organization of this article is as follows. In Section \ref{sec:KW}, we deal with the Kapustin-Witten equations. Section \ref{sec:rtaubes} is a brief review of the results by Taubes in \cite{T2}. In Section \ref{sec:eqK}, we deduce that the Kapustin--Witten equations on a compact K\"{a}hler surface are the same as Simpson's equations. Then Theorem \ref{th:main} is proved in Section \ref{sec:stsing}. We treat the Vafa--Witten equations in Section \ref{sec:VW}. In Section \ref{sec:rtaubesVW}, we review results by Taubes \cite{T4} on the Vafa--Witten equations on smooth four-manifolds. We then look into the equations on compact K\"{a}hler surfaces, and prove a similar statement to Theorem \ref{th:main} for the Vafa--Witten case in Section \ref{sec:singVW}. \paragraph{Acknowledgements.} I would like to thank Cliff Taubes for drawing my attention to the Kapustin--Witten equations and generously helpful suggestions about these. I have a huge debt to him for his insight and ingenious computations which had tremendously inspired this article. I am grateful to Mikio Furuta, Tomoyuki Hisamoto and Hiroshi Iritani for useful discussions and valuable comments around the subject. I would also like to thank Ben Mares for helpful comments on the computations in Section \ref{sec:eqK}. I am also grateful to Seoul National University, NCTS at National Taiwan University, Kyoto University, BICMR at Peking University and Institut des Hautes \'{E}tudes Scientifiques for support and hospitality, where part of this work was done during my visits in 2015--17.
This work was partially supported by JSPS Grant-in-Aid for Scientific Research Nos. 15H02054 and 16K05125. \section{The singular sets of solutions to the Kapustin--Witten equations} \label{sec:KW} \subsection{Results by Taubes} \label{sec:rtaubes} In this section, we briefly describe the work by Taubes on the Uhlenbeck style compactness for $SL(2, \C)$-connections on closed four-manifolds \cite{T2}. Let $X$ be a compact, oriented, Riemannian 4-manifold, and let $P \to X$ be a principal $G$-bundle over $X$. We take $G$ to be $SO(3)$ or $SU(2)$ below. We denote by $\mathfrak{g}_{P}$ the adjoint bundle of $P$. We consider a sequence $\{ (A_{n} , \mathfrak{a}_{n}) \}_{n \in \N}$ of solutions to the equations \eqref{eq:KW1} and \eqref{eq:KW2}. We put $r_{n} := || \mathfrak{a}_{n}||_{L^2}$, and take $a_{n} := \mathfrak{a}_{n} / r_{n}$ for each $n \in \N$. Note that the pairs $(A_{n} , a_{n})$ satisfy the following. \begin{gather} ( d_{A_n} a_n )^{-} =0 , \quad d_{A_n}^{*} a_n =0 , \label{eq:rKW1} \\ F_{A_n}^{+} - r_n^2 [ a_n \wedge a_n ] ^{+} =0, \label{eq:rKW2} \end{gather} where $r_n >0$ for each $n$. In the case that $\{ r_n \}$ has a bounded subsequence, the Uhlenbeck compactness theorem together with the Bochner--Weitzenb\"{o}ck formula yields the following.
\begin{proposition}[\cite{T2}] If $\{ r_{n} \}$ has a bounded subsequence, then there exist a principal $G$-bundle $P_{\Delta} \to X$ and a pair $(A_{\Delta} , \mathfrak{a}_{\Delta})$, where $A_{\Delta}$ is a connection on $P_{\Delta}$ and $\mathfrak{a}_{\Delta}$ is a section of $\mathfrak{g}_{P} \otimes T^{*} X$, which satisfies the equations \eqref{eq:KW1} and \eqref{eq:KW2}, and a finite set $\Theta \subset X$; a subsequence $\Xi \subset \N$; and a sequence $\{ g_{n} \}_{n \in \Xi}$ of automorphisms of $P_{\Delta}|_{X \setminus \Theta}$ such that the sequence $\{ (g_{n}^{*} A_{n} , g_{n}^{*} \mathfrak{a}_{n})\}_{ n \in \Xi}$ converges to $(A_{\Delta} , \mathfrak{a}_{\Delta})$ in the $C^{\infty}$-norm on compact subsets in $X \setminus \Theta$. \end{proposition} Analysis for the case that $\{ r_{n} \}$ has no bounded subsequence was the bulk of \cite{T2}. Firstly, a subsequence of $\{ | a_n | \}$ converges in $L^2_1$; we denote the limit suggestively by $|\hat{a}_{\diamond}|$. Moreover, Taubes proves the following. \begin{proposition}[\cite{T2}] There exists a finite set $\Theta \subset X$ such that the function $| \hat{a}_{\diamond} |$ is continuous on $X \setminus \Theta$, and it is smooth at points in $X \setminus \Theta$ where $| \hat{a}_{\diamond}|$ is positive. Furthermore, the sequence $\{ | a_{n} | \}_{n \in \Lambda \subset \N}$ converges to $| \hat{a}_{\diamond} |$ in the $C^{0}$-topology on compact subsets in $X \setminus \Theta$. \label{prop:C0conv} \end{proposition} In the above proposition, the finite set $\Theta$ corresponds to ``bubbling points'' of the connections. We denote by $Z$ the zero locus of the limit function $| \hat{a}_{\diamond}|$. Then Taubes proves that a subsequence of $\{ (A_{n} , a_{n} )\}$ converges in $L^2_1$ topology outside $\Theta \cup Z$. More precisely, Taubes proved the following.
\begin{theorem}[\cite{T2}] There exist a real line bundle $\mathcal{I}$ over $X \setminus \{ \Theta \cup Z \}$, a section $\nu$ of $\mathcal{I} \otimes T^*X |_{X \setminus \{ \Theta \cup Z \}}$ with $d \nu = d^* \nu =0$ and $| \nu | = |\hat{a}_{\diamond}|$, a principal $G$-bundle $P_{\Delta}$ over $X \setminus \{ \Theta \cup Z \} $ and a connection $A_{\Delta}$ of $P_{\Delta}$ with $d_{A_{\Delta}} * F_{A_{\Delta}}=0$, an $A_{\Delta}$-covariantly constant homomorphism $\sigma_{\Delta} : \mathcal{I} \to \mathfrak{g}_{P_{\Delta}}$ and a sequence of isomorphisms $\{ g_n \}$ from $P_{\Delta}$ to $P|_{X \setminus \{ \Theta \cup Z \} } $ such that $\{ g_n (A_n), g_n (a_n) \}$ converges in $L^2_1$-topology outside $\Theta \cup Z$. Furthermore, $\{ g_n (a_n) \}$ converges in $C^0$-topology outside $\Theta$. \end{theorem} As for the structure of the above $Z$, Taubes proved the following. \begin{proposition}[\cite{T3}] $Z$ has Hausdorff dimension at most $2$. \end{proposition} In Section \ref{sec:stsing}, we prove that the set $Z$ has the structure of an analytic subset of $X$ when the underlying manifold is a K\"{a}hler surface. \subsection{The equations on compact K\"{a}hler surfaces} \label{sec:eqK} From here, we take $X$ to be a compact K\"{a}hler surface with K\"{a}hler form $\omega$, and $E$ to be a Hermitian vector bundle with Hermitian metric $h$ on $X$. We assume that $c_1 (E) =0$. We denote by $\mathcal{A}_{(E,h)}$ the space of all connections on $E$ which preserve the metric $h$, and by $\mathfrak{u} (E) = \text{End}(E,h)$ the bundle of skew-Hermitian endomorphisms of $E$. In this setting, we have $d_{A} = \partial_{A} + \bar{\partial}_{A}$, $d_{A}^{*} = \partial_{A}^{*} + \bar{\partial}_{A}^{*}$ and $\mathfrak{a} = \phi - \phi^{*}$, where $\phi \in \Gamma (X, \mathfrak{u} (E) \otimes \Omega^{1}_{X} ) = \Omega^{1 ,0} ( \mathfrak{u} (E))$ with $\Omega^{1}_{X}$ being the holomorphic cotangent bundle of $X$.
In addition, the space of complexified two-forms is decomposed as $\Omega^{2} (X) \otimes \C = \Omega^{2,0} (X) \oplus \Omega^{1,1} (X) \oplus \Omega^{0,2} (X)$, and we have the following identification. \begin{gather*} \Omega^{+} (X) \cong \Omega^2 (X) \cap \left( \Omega^{2,0}(X) \oplus \Omega^{0,2} (X) \oplus \Omega^0 (X) \omega \right) , \\ \quad \Omega^{-} (X) \cong \Omega^2 (X) \cap ( \Omega_{0}^{1,1} (X)), \end{gather*} where $\Omega^{1,1}_{0} (X)$ denotes the orthogonal subspace in $\Omega^{1,1} (X)$ to $\Omega^{0} (X)\omega$. Thus, the equation \eqref{eq:KW2} has the following form on a compact K\"{a}hler surface. \begin{equation} F_{A}^{0,2} - [ \phi^{*} \wedge \phi^{*} ] = 0, \quad i \Lambda (F_{A}^{1,1} +2 [ \phi \wedge \phi^{*}] ) =0, \label{KWk1} \end{equation} where $\Lambda$ denotes the adjoint of $\wedge \omega$. Furthermore, we have the following. \begin{proposition} On a compact K\"{a}hler surface, the equations \eqref{eq:KW1} and \eqref{eq:KW2} have the following form that asks $(A, \phi) \in \mathcal{A}_{(E,h)} \times \Omega^{1,0 } (\mathfrak{u} (E))$ to satisfy \begin{gather} \bar{\partial}_{A} \phi = 0 , \quad [ \phi \wedge \phi ] =0, \label{KWcK1}\\ F_{A}^{0,2} = 0, \quad \Lambda \left( F_{A}^{1,1} + 2 [ \phi \wedge \phi^{*}] \right) =0 . \label{KWcK2} \end{gather} \label{prop:Simp} \end{proposition} \vspace{-0.8cm} Namely, the Kapustin--Witten equations and Simpson's equations in \cite{Si} are the same on a compact K\"{a}hler surface. Note that the above statement for the flat bundle case was proved by Simpson \cite{Si2} (in arbitrary dimensions). \vspace{0.3cm} \hspace{-0.8cm} {\it Proof of Proposition \ref{prop:Simp} \footnote{The proof presented here was partly inspired by some proficient calculations by Clifford Taubes \cite{Tp}.}.} The proof consists of the following three steps.
\vspace{0.2cm} \hspace{-0.6cm}\underline{Step 1}: As $d_{A}^{*} = \partial_{A}^{*} + \bar{\partial}_{A}^{*}$ and $\mathfrak{a} = \phi - \phi^{*}$ with $\phi \in \Omega^{1 ,0} ( \mathfrak{u} (E))$, from the first equation $d_{A}^{*} \mathfrak{a} = 0$ in \eqref{eq:KW1}, we have $ \partial_{A}^{*} \phi - \bar{\partial}_{A}^{*} \phi^{*} =0 $. From this with the K\"{a}hler identities: $i \partial_{A}^{*} = [ \bar{\partial}_{A} , \Lambda]$ and $i \bar{\partial}_{A}^{*} = - [ \partial_{A} , \Lambda]$, we obtain $i \Lambda ( \bar{\partial}_{A} \phi + \partial_{A} \phi^{*} ) =0$. Hence, the equations in \eqref{eq:KW1} take the following form when $X$ is a compact K\"{a}hler surface. \begin{equation} i \Lambda ( \bar{\partial}_{A} \phi + \partial_{A} \phi^{*} ) =0, \quad ( \bar{\partial}_{A} \phi - \partial_{A} \phi^{*} )^{-} = 0. \label{eq2} \end{equation} \vspace{0.2cm} \hspace{-0.6cm}\underline{Step 2}: We next prove that \begin{equation} \bar{\partial}_{A} \phi - \partial_{A} \phi^{*} = 0, \quad \Lambda \bar{\partial}_{A} \phi = \Lambda \partial_{A} \phi^{*} =0 . \label{eq:step2} \end{equation} \vspace{0.1cm} To prove \eqref{eq:step2}, we write the second equation in \eqref{eq2} in the following form. \begin{equation} (\bar{\partial}_{A} \phi - \partial_{A} \phi^{*} ) - (\Lambda \bar{\partial}_{A} \phi ) \wedge \omega + ( \Lambda \partial_{A} \phi^{*} ) \wedge \omega =0 \label{eq:asd2} \end{equation} Acting on the left hand side of this by $\partial_{A}$, we get the following. 
\begin{equation*} \begin{split} 0 &= \partial_{A} \left( (\bar{\partial}_{A} \phi - \partial_{A} \phi^{*} ) - (\Lambda \bar{\partial}_{A} \phi ) \wedge \omega + ( \Lambda \partial_{A} \phi^{*} ) \wedge \omega \right) \\ & = \partial_{A} \bar{\partial}_{A} \phi - \partial_{A} \partial_{A} \phi^{*} - \partial_{A} (\Lambda \bar{\partial}_{A} \phi ) \wedge \omega + \partial_{A} ( \Lambda \partial_{A} \phi^{*} ) \wedge \omega \\ & = \partial_{A} \bar{\partial}_{A} \phi - \partial_{A} \partial_{A} \phi^{*} -( \Lambda \partial_{A} \bar{\partial}_{A} \phi ) \wedge \omega \\ & \qquad + i ( \bar{\partial}_{A}^{*} \bar{\partial}_{A} \phi ) \wedge \omega + ( \Lambda \partial_{A} \partial_{A} \phi^{*} ) \wedge \omega - i ( \bar{\partial}_{A}^{*} \partial_{A} \phi^{*} ) \wedge \omega \\ & = i ( \bar{\partial}_{A}^{*} \bar{\partial}_{A} \phi ) \wedge \omega - i ( \bar{\partial}_{A}^{*} \partial_{A} \phi^{*} ) \wedge \omega . \end{split} \end{equation*} Here we used the K\"{a}hler identities at the third equality; at the last equality we used the fact that multiplication by $\omega$ is an isometry from 1-forms to 3-forms and that $\Lambda$ is its inverse. Then the $L^2$ inner product of this with $\phi$ yields \begin{equation} i \langle \phi, \bar{\partial}_{A}^{*} \bar{\partial}_{A} \phi \rangle_{L^2} - i \langle \phi, \bar{\partial}_{A}^{*} \partial_{A} \phi^{*} \rangle_{L^2} =0 .
\label{eq:l21} \end{equation} Similarly, acting on the left hand side of \eqref{eq:asd2} by $\bar{\partial}_{A}$, we obtain \begin{equation*} \begin{split} 0 &= \bar{\partial}_{A} \left( (\bar{\partial}_{A} \phi - \partial_{A} \phi^{*} ) - (\Lambda \bar{\partial}_{A} \phi ) \wedge \omega + ( \Lambda \partial_{A} \phi^{*} ) \wedge\omega \right) \\ & = \bar{\partial}_{A} \bar{\partial}_{A} \phi - \bar{\partial}_{A} \partial_{A} \phi^{*} - \bar{\partial}_{A} (\Lambda \bar{\partial}_{A} \phi ) \wedge \omega + \bar{\partial}_{A} ( \Lambda \partial_{A} \phi^{*} ) \wedge \omega \\ & = \bar{\partial}_{A} \bar{\partial}_{A} \phi - \bar{\partial}_{A} \partial_{A} \phi^{*} -( \Lambda \bar{\partial}_{A} \bar{\partial}_{A} \phi ) \wedge \omega \\ & \qquad - i ( \partial_{A}^{*} \bar{\partial}_{A} \phi ) \wedge \omega + ( \Lambda \bar{\partial}_{A} \partial_{A} \phi^{*} ) \wedge\omega + i ( \partial_{A}^{*} \partial_{A} \phi^{*} ) \wedge \omega \\ & = - i ( \partial_{A}^{*} \bar{\partial}_{A} \phi ) \wedge \omega + i ( \partial_{A}^{*} \partial_{A} \phi^{*} ) \wedge \omega \end{split} \end{equation*} Then from the $L^2$ inner product of this with $\phi^*$ we get \begin{equation} - i \langle \phi^*, \partial_{A}^{*} \bar{\partial}_{A} \phi \rangle_{L^2} + i \langle \phi^*, \partial_{A}^{*} \partial_{A} \phi^{*} \rangle_{L^2} =0 . \label{eq:l22} \end{equation} Then, adding \eqref{eq:l21} and \eqref{eq:l22}, we obtain \begin{equation*} || \bar{\partial}_{A} \phi - \partial_{A} \phi^{*} ||^{2}_{L^2} =0. \end{equation*} Thus $\bar{\partial}_{A} \phi - \partial_{A} \phi^{*} =0$. Substituting this into \eqref{eq:asd2}, we get $ (\Lambda \bar{\partial}_{A} \phi ) \wedge \omega - ( \Lambda \partial_{A} \phi^{*} ) \wedge \omega = 0$. Thus, from this and the first equation in \eqref{eq2}, we obtain $ \Lambda \bar{\partial}_{A} \phi = \Lambda \partial_{A} \phi^{*} = 0$.
\vspace{0.2cm} \hspace{-0.6cm}\underline{Step 3}: We now prove that $\bar{\partial}_{A} \phi =0$ and $[ \phi \wedge \phi ] = 0$; and consequently $F_{A}^{0,2}=0$. Acting on the first equation in \eqref{eq:step2} by $\bar{\partial}_{A}^*$, we get $$ \bar{\partial}_{A}^* \bar{\partial}_{A} \phi - \bar{\partial}_{A}^* \partial_{A} \phi^{*} = 0 . $$ Taking the $L^2$ inner product of this with $\phi$, we obtain \begin{equation} \langle \phi , \bar{\partial}_{A}^* \bar{\partial}_{A} \phi \rangle_{L^2} - \langle \phi, \bar{\partial}_{A}^* \partial_{A} \phi^{*} \rangle_{L^2} = 0. \label{eq:step31} \end{equation} Here, the second term of the above can be computed as follows. \begin{equation*} \begin{split} \langle \phi, \bar{\partial}_{A}^* \partial_{A} \phi^{*} \rangle_{L^2} &= i \langle \phi, \partial_{A} \Lambda \partial_{A} \phi^{*} \rangle_{L^2} - i \langle \phi, \Lambda \partial_{A} \partial_{A} \phi^{*} \rangle_{L^2} \\ &= - i \langle \phi, \Lambda \partial_{A} \partial_{A} \phi^{*} \rangle_{L^2} \\ &= - i \langle \omega \wedge \phi, [F_{A}^{2,0} \wedge \phi^{*} ] \rangle_{L^2} \\ &= - i \langle \omega \wedge \phi, [[\phi \wedge\phi] \wedge \phi^{*} ] \rangle_{L^2} \\ &= - \int_{X} \text{tr} ( [[\phi \wedge\phi] \wedge \phi^{*} ] \wedge \phi^{*} ) \\ &= - \int_{X} \text{tr} ( [\phi \wedge\phi] \wedge [\phi^{*} \wedge \phi^{*} ] ) \\ &= - || [ \phi \wedge \phi ] ||_{L^2}^{2} . \end{split} \end{equation*} In the above, we used the K\"{a}hler identity at the first equality; the fact that $\Lambda \partial_{A} \phi^* =0$ from \eqref{eq:step2} at the second equality; and the conjugate of the first equation in \eqref{KWk1} at the fourth equality. Thus \eqref{eq:step31} becomes $$ || \bar{\partial}_{A} \phi ||_{L^2}^{2} + || [ \phi \wedge \phi ] ||_{L^2}^{2} = 0. $$ Hence the assertion holds. \qed \begin{remark} Nakajima \cite[\S6(iii)]{N} obtained the complex form of the equations described above in an elegant way, even including the Vafa--Witten equations case.
\end{remark} \subsection{The structure of singular sets in the K\"{a}hler case} \label{sec:stsing} From here, we assume the rank of $E$ to be two. Let $\{ (A_n , \phi_n) \}_{n \in \N}$ be a sequence of solutions to the equations \eqref{KWcK1} and \eqref{KWcK2}, and put $r_{n} := || \phi_{n} ||_{L^2}$ and $\varphi_{n} := \phi_{n} / r_n$ for each $n \in \N$. Then the analysis by Taubes described in Section \ref{sec:rtaubes} of this article holds for solutions to the equations \eqref{KWcK1} and \eqref{KWcK2}. In particular, if we assume that $\{ r_n \}$ has no bounded subsequence, then there exist a finite set of points in $X$, denoted by $\Theta$; a closed nowhere dense subset $Z$; a real line bundle $\mathcal{J}$ on $X \setminus \{ \Theta \cup Z \}$; a section $\mu$ of the bundle $\mathcal{J} \otimes \Omega^{1}_{X}$; a Hermitian vector bundle $E_{\Delta}$ with Hermitian metric $h_{\Delta}$ on $X \setminus \{ \Theta \cup Z \}$; a connection $A_{\Delta}$ on $E_{\Delta}$; and an isometric bundle homomorphism $\tau_{\Delta} : \mathcal{J} \to \mathfrak{u} (E_{\Delta})$. In addition, there exist a subsequence $\Xi \subset \N$ and a sequence of isomorphisms $\{ g_{i} \}_{i \in \Xi}$ from $E_{\Delta}$ to $E|_{X \setminus \{ \Theta \cup Z\}} $ such that $\{ g_{i}^* \varphi_{i} \} $ converges to $\tau_{\Delta} \circ \mu$ in the $L^2_1$ topology on compact subsets of $X \setminus \{ \Theta \cup Z \} $ and in the $C^{0}$ topology on $X \setminus \Theta$, and $\{ g_{i} ^* A_{i} \} $ converges on compact subsets of $X \setminus \{ \Theta \cup Z \}$ to $A_{\Delta}$ in the $L^2_1$ topology. Here $Z$ is the zero set of the $L^2_1$ limit $|\hat{\varphi}_{\diamond}|$ of a subsequence of $\{ |\varphi_{i}| \}$, obtained in the same way as $|\hat{a}_{\diamond}|$ in Section \ref{sec:rtaubes}. As mentioned at the beginning of this section, the above singular set $Z$ has the structure of a holomorphic subvariety of $X$ in this case.
To see that, we consider\footnote{The idea of taking this section was pointed out to the author by Clifford Taubes; the proof of Proposition \ref{prop:ZT} below also uses his ideas throughout. } a section $\text{tr} ( \varphi_i \otimes \varphi_i)$ of $( \Omega_{X}^{1})^{\otimes 2} $ for each $\varphi_{i} \in \Gamma (\mathfrak{u} (E) \otimes \Omega_{X}^{1})$. Note that the connection on the bundle plays no role here, since we take the trace. We then have the following. \begin{proposition} Assume that $\{ r_n \}$ has no bounded subsequence. Then there exists a subsequence $\Lambda \subset \N$ such that $\{ \text{\rm tr} ( \varphi_i \otimes \varphi_i) \}_{i \in \Lambda}$ converges to a holomorphic section, which we denote by $\text{\rm tr} ( \hat{\varphi}_{\diamond} \otimes \hat{\varphi}_{\diamond})$, of the holomorphic bundle $ ( \Omega_{X}^1 )^{\otimes 2}$. \label{prop:det} \end{proposition} \begin{proof} From the definitions of $\varphi_{i}$ and $\text{tr} ( \varphi_i \otimes \varphi_i)$, we get $$| \text{tr} ( \varphi_i \otimes \varphi_i) | \leq | \varphi_{i} |^2 \leq C . $$ Thus, $\{ \text{tr} ( \varphi_i \otimes \varphi_i)\}_{i \in \N}$ has a convergent subsequence. The holomorphicity of the limit follows since $\bar{\partial} ( \text{tr} ( \varphi_i \otimes \varphi_i) ) = 0$, as $\bar{\partial}_{A_{i}} \varphi_{i} =0$ for each $i \in \N$. \end{proof} We denote by $T$ the zero set of the section $\text{\rm tr} ( \hat{\varphi}_{\diamond} \otimes \hat{\varphi}_{\diamond})$. We then prove the following. \begin{proposition} $Z = T \setminus \Theta'$, where $\Theta' \subset \Theta$. \label{prop:ZT} \end{proposition} \begin{proof} We obviously have $ Z \subset T$. In order to prove the opposite inclusion, we assume that there exists a point $p \in T \setminus \{ \Theta \cup Z\} $. We then recall the following inequality, which holds for $2 \times 2$ trace-free matrices $\Phi$. \begin{equation} |\Phi|^4 \leq 4 | \det \Phi |^2 + | [ \Phi , \Phi^{*}] |^2 .
\label{eq:ineqPhi} \end{equation} With the above inequality in mind, we prove that $p \in Z = | \hat{\varphi}_{\diamond}|^{-1} (0)$. We take a local orthonormal coframe $\{ \hat{e}^1 , \hat{e}^2 \}$ of $\Omega_{X}^{1}$ around $p$ to write $ \varphi_{i} = \mathfrak{c}_{i,1} \hat{e}^{1} + \mathfrak{c}_{i, 2} \hat{e}^2$. We view $\mathfrak{t}_{i} := \text{tr} ( \varphi_{i} \otimes \varphi_{i} ) $ as a symmetric matrix with components $\{ \mathfrak{t}_{i, \alpha \beta} \}_{\alpha, \beta = 1, 2}$. The $\mathfrak{t}_{i, 11}$ component is $\text{tr} ( \mathfrak{c}_{i,1}^{2})$. Since $\text{tr} ( \mathfrak{c}_{i, 1}) =0$, we have $\mathfrak{t}_{i ,11} = -2 \det (\mathfrak{c}_{i,1})$. As $p \in T \setminus \{Z \cup \Theta \}$, there exists a number $N_{1} \in \Lambda$ such that for $i \geq N_{1}$, we have $ | 2 \det (\mathfrak{c}_{i,1} )| (p) < \varepsilon$ for a given $\varepsilon >0$. Next, as described in Section \ref{sec:rtaubes}, $\{ \varphi_{i} \}$ converges in the $L^2_1$ topology on compact subsets of $X \setminus \{ \Theta \cup Z \}$ to $\tau_{\Delta} \circ \mu$ after gauge transformations. Furthermore, this convergence is in the $C^{0}$ topology on compact subsets of $X \setminus \Theta$. This implies in particular that $\Lambda [ \varphi_{i} , \varphi_{i}^{*}]$ converges to $0$ in the $C^{0}$ topology on compact subsets of $X \setminus \{ \Theta \cup Z \}$. We take a compact subset $B$ of $X \setminus \{ \Theta \cup Z \}$ which contains the point $p$. Then one concludes that the limit $ \Lambda [ \hat{\varphi}_{\diamond} , \hat{\varphi}_{\diamond}^{*} ] $ is identically zero on the whole of $B$. Thus, for a given $\varepsilon >0$, there exists a number $N_2 \in \Lambda$ such that $| \Lambda [ \varphi_{i} , \varphi_{i}^{*} ] | (p) = | [ \mathfrak{c}_{i,1} , \mathfrak{c}_{i,1}^{*} ] + [ \mathfrak{c}_{i,2} , \mathfrak{c}_{i,2}^{*} ] | (p) < \varepsilon$ for $i \geq N_2$.
On the other hand, from the second equation in \eqref{KWcK1} we have $[ \mathfrak{c}_{i,1} , \mathfrak{c}_{i,2} ] =0$; since both $\mathfrak{c}_{i,1}$ and $\mathfrak{c}_{i,2}$ are trace-free, we obtain that $\mathfrak{c}_{i,2} = \alpha \mathfrak{c}_{i,1}$ with $\alpha$ a complex number. Thus we have $ | ( 1 + |\alpha|^2) [ \mathfrak{c}_{i,1} , \mathfrak{c}_{i,1}^{*} ] | (p) < \varepsilon$ for $i \geq N_{2}$. Combining the above with the inequality \eqref{eq:ineqPhi}, for a given $\varepsilon >0$ we find an $N' \in \Lambda$ such that $|\mathfrak{c}_{i,1} | (p) < \varepsilon$ for all $i \geq N'$. A similar argument works for $\mathfrak{c}_{i ,2}$. Hence $p \in Z = |\hat{\varphi}_{\diamond}|^{-1} (0)$. Thus the assertion holds. \end{proof} Since $\text{\rm tr} ( \hat{\varphi}_{\diamond} \otimes \hat{\varphi}_{\diamond})$ is a holomorphic section of the holomorphic bundle $( \Omega^1_{X})^{\otimes 2}$ on $X$, $T$ has the structure of an analytic subvariety. We thus obtain the following. \begin{Corollary} $Z$ has the structure of an analytic subvariety of $X$. \label{cor:hs} \end{Corollary} \begin{example} We give simple examples of the above $T$ for the case that $X$ is the direct product of two Riemann surfaces. (i) When $X= \mathbb{P}^1 \times \mathbb{P}^1$, there are no non-trivial $\phi \in \Gamma ( \mathfrak{u} (E) \otimes \Omega^{1}_{X})$ satisfying the equations in the first place. (ii) When $X = \mathbb{P}^1 \times T^2$, then $\Omega^{1}_{X} \cong \Omega^{1}_{\mathbb{P}^1} \oplus \Omega^{1}_{T^2} \cong K_{\mathbb{P}^1} \oplus \underline{\C}$. Thus, the holomorphic section $\text{\rm tr} ( \hat{\varphi}_{\diamond} \otimes \hat{\varphi}_{\diamond})$ of $( \Omega^{1}_{X} )^{\otimes 2}$ has the form $( \underline{0} \oplus \underline{a} ) \otimes ( \underline{0} \oplus \underline{b} )$, where $a, b \in \C$ with either $a$ or $b$ non-zero. Since $a \neq 0$ or $b \neq 0$, we get $Z = \emptyset$.
(iii) If $X = \mathbb{P}^1 \times \Sigma_{g}$, then $\Omega_X^1 \cong \Omega_{\mathbb{P}^1} ^1 \oplus \Omega_{\Sigma_g}^1 \cong K_{\mathbb{P}^1} \oplus K_{\Sigma_g}$; and $\text{tr} ( \hat{\varphi}_{\diamond} \otimes \hat{\varphi}_{\diamond} )$ of $(\Omega_X^1)^{\otimes 2}$ has the form $(\underline{0} \oplus s ) \otimes (\underline{0} \oplus t )$, where $s,t \in \Gamma( K_{\Sigma_{g}})$. Hence $Z = ( \mathbb{P}^1 \times s^{-1} (0) ) \cap ( \mathbb{P}^1 \times t^{-1} (0))$. This is generically an empty set. (iv) For the case that $X = T^2 \times T^2$, $\text{\rm tr} ( \hat{\varphi}_{\diamond} \otimes \hat{\varphi}_{\diamond})$ has the form $( \underline{a} \oplus \underline{b}) \otimes ( \underline{c} \oplus \underline{d})$, where $a, b, c ,d \in \C$. Since at least one of $a, b, c, d$ is non-zero, $Z = \emptyset$. (v) When $X = \Sigma_{g} \times T^2$, where $\Sigma_{g}$ is a Riemann surface of genus $g > 1$, $\text{\rm tr} ( \hat{\varphi}_{\diamond} \otimes \hat{\varphi}_{\diamond})$ has the form $( s \oplus \underline{a} ) \otimes ( t \oplus \underline{b} )$, where $s, t \in \Gamma (K_{\Sigma_{g}})$ and $a, b \in \C$. Thus, if $a=0$ and $b=0$, then $Z = (s^{-1} (0) \times T^2 ) \cap (t^{-1} (0) \times T^2)$; otherwise $Z = \emptyset$. (vi) When $X = \Sigma_{g} \times \Sigma_{h}$ with $g, h >1$, $\text{\rm tr} ( \hat{\varphi}_{\diamond} \otimes \hat{\varphi}_{\diamond})$ has the form $(s \oplus t) \otimes (v \oplus w)$, where $s, v \in \Gamma (K_{\Sigma_{g}} )$ and $t , w \in \Gamma (K_{\Sigma_{h}})$. Then $Z$ is $(s^{-1} (0) \times t^{-1} (0) ) \cap ( v^{-1}(0) \times w^{-1}(0))$. For instance, if $s$ and $t$ are identically zero and one of $v$ or $w$ is identically zero, then $Z$ is either $\Sigma_g \times w^{-1} (0)$ or $v^{-1} (0) \times \Sigma_h $. Or, if $v$ and $w$ are identically zero and one of $s$ and $t$ is identically zero, then $Z$ is either $\Sigma_g \times t^{-1} (0)$ or $s^{-1} (0) \times \Sigma_h$.
\qed \end{example} \section{The singular sets of solutions to the Vafa--Witten equations} \label{sec:VW} \subsection{Results by Taubes} \label{sec:rtaubesVW} The Vafa--Witten equations look similar to the Kapustin--Witten equations, but one of the crucial differences for us is that there is no good control of the curvatures of the connections. However, Taubes managed to prove the convergence of the Higgs fields outside a singular set in a similar style to the Kapustin--Witten equations case. Here we briefly state some of his results. Firstly, in the same way as in the Kapustin--Witten equations case, Taubes obtained the following. \begin{proposition}[\cite{T4}] Let $\{ (A_n , B_n ) \}$ be a sequence of solutions to the Vafa--Witten equations. Assume that $|| B_n ||_{L^2}$ diverges as $n$ goes to infinity. We rescale $B_n$ by $|| B_n ||_{L^2}$, namely, put $\beta_n := B_n / || B_n||_{L^2}$ for each $n \in \mathbb{N}$. Then $\{ \beta_n \}$ has a converging subsequence in the $L^2_1$ and $C^0$ topologies on compact subsets of $X$. \end{proposition} We denote by $| \hat{\beta}_{\diamond}|$ the limit and define a closed subset $Z'$ of $X$ as the zero set of $| \hat{\beta}_{\diamond}|$. Despite the lack of any obvious control of the connections, Taubes proved the following. \begin{theorem}[\cite{T4}] The Hausdorff dimension of $Z'$ is at most two, and there exist a real line bundle $\mathcal{I}$ on $X \setminus Z'$; a section $\nu$ of $\mathcal{I} \otimes \Lambda^+$ with $d \nu =0$ and $|\nu| = |\hat{\beta}_{\diamond}|$; and a sequence of isometric homomorphisms $\{ \sigma_n \}$ from $\mathcal{I}$ to $\mathfrak{g}_P |_{X \setminus Z'}$ such that $\{ \beta_n - \sigma_n \circ \nu \}$ converges to zero in the $C^0$-topology on compact subsets of $X$.
\end{theorem} \subsection{The equation on compact K\"{a}hler surfaces and the singular sets} \label{sec:singVW} As in the case of the Kapustin--Witten equations, we have the complex form of the equations, when the underlying manifold is a compact K\"{a}hler surface (see \cite[Ch.7]{M} or \cite[\S 6(iii)]{N}). The exact form is as follows: Let $X$ be a compact K\"{a}hler surface, and let $E \to X$ be a Hermitian vector bundle of rank $r$ over $X$. Then the Vafa--Witten equations \eqref{VW1}, \eqref{VW2} become the following equations seeking for a pair $(A, \phi)$ consisting of a connection $A$ of $E$ and a section $\phi$ of $\mathfrak{u} (E) \otimes K_X$, where $K_X$ is the canonical bundle of $X$. \begin{gather} \bar{\partial}_{A} \phi =0 , \label{VWk1} \\ F_A^{1,1} \wedge \omega + [ \phi , \phi^*] = 0, \quad F_A^{0,2} =0 . \label{VWk2} \end{gather} Note that $[\phi \wedge \phi ]=0$ automatically holds as $K_X$ is a line bundle. Then almost the same argument works for the Vafa--Witten case as well, even in a simpler way. Let us consider a sequence of solutions $\{ (A_n , \phi_n) \}_{n \in \N}$ to the equations \eqref{VWk1}, \eqref{VWk2}. We also assume here that the rank of $E$ is two. We are interested in the case that $|| \phi ||_{L^2}$ diverges as $n$ goes to the infinity. So put $r_n := || \phi_n ||_{L^2}$ for each $n \in \mathbb{N}$ and suppose that $\{ r_n \}_{n \in \N}$ has no converging subsequence. We then put $\varphi_n := \phi_n / || \phi_n ||_{L^2}$ for each $n \in \N$ and consider, in this case, the determinant $\det \varphi_n$, which is a section of $K_{X}^{\otimes 2}$ for the rank two case. As in the case of the Kapustin--Witten equations, we firstly get the following. \begin{lemma} There exists $\Lambda \subset \N$ such that $\{ \det \varphi_i \}_{i \in \Lambda} $ converges to a holomorphic section $\det \varphi_{\diamond}$ of $K_X^{\otimes 2}$. \end{lemma} We denote by $D$ the zero set of $\det \varphi_{\diamond}$. Then we have the following. 
\begin{proposition} $Z' = D$. \end{proposition} \begin{proof} The proof goes in the same way as in the Kapustin--Witten equations case (Proposition \ref{prop:ZT}). Namely, we again use the inequality \begin{equation} |\Phi|^4 \leq 4 | \det \Phi |^2 + | [ \Phi , \Phi^{*}] |^2 \end{equation} for $2 \times 2$ trace-free matrices $\Phi$. In the Vafa--Witten case, we can use this inequality directly, since we are working with the $\det \varphi_n$'s themselves. We omit repeating the argument of the proof of Proposition \ref{prop:ZT} here. \end{proof} Hence we obtain the following. \begin{Corollary} $Z'$ has the structure of an analytic subvariety of $X$. \label{cor2} \end{Corollary}
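We end this section with a direct verification of the pointwise inequality \eqref{eq:ineqPhi} used in the proofs above; the following is a sketch. By the Schur decomposition, a trace-free $2 \times 2$ matrix $\Phi$ is unitarily equivalent to an upper triangular matrix
\begin{equation*}
\begin{pmatrix} \lambda & \tau \\ 0 & - \lambda \end{pmatrix} ,
\end{equation*}
and each of $|\Phi|$, $|\det \Phi|$ and $| [ \Phi , \Phi^{*} ] |$ is invariant under unitary conjugation. A direct computation then gives
\begin{equation*}
|\Phi|^{4} = \left( 2 |\lambda|^{2} + |\tau|^{2} \right)^{2}, \quad 4 | \det \Phi |^{2} = 4 |\lambda|^{4}, \quad | [ \Phi , \Phi^{*} ] |^{2} = 2 |\tau|^{4} + 8 |\lambda|^{2} |\tau|^{2},
\end{equation*}
so the right hand side of \eqref{eq:ineqPhi} exceeds the left hand side by $|\tau|^{4} + 4 |\lambda|^{2} |\tau|^{2} \geq 0$.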
\section{Introduction} In this work we consider a matrix model of strength distributions. Although the approach may seem somewhat abstract, we should note that the ideas came from very concrete calculations, especially calculations of magnetic dipole transitions. An early experiment by Bohle et al. {[}1{]}, which found such low lying magnetic dipole states in $^{156}$Gd, led to a flood of papers (including some by one of us), but for brevity we cite only the review article by Heyde et al. {[}2{]}. A more complete list is given in the work of Harper and Zamick {[}3{]}. In references {[}3, 4 and 5{]} our group also focused, for the most part, on these low lying excitations, but we also calculated the strength distributions at higher energies. The results were rather messy, but a crude examination seemed to indicate an exponential decrease of the strength with excitation energy. The ``mess'' was considerably reduced by a process called binning {[}5{]}. Since we plotted log(B(M1)) vs. excitation energy, we saw a linear behavior with a negative slope. We here wish to address the same problem but using matrix models. As is well known from the works of Heisenberg {[}6{]} and Born and Jordan {[}7{]}, Hamiltonians can be represented by matrices. In this work we will choose matrices with a somewhat simple structure but ones that can still display complex behavior. By dealing with matrices we can have better control and see how the distributions are affected by variations of the parameters. \section{The model} We represent a Hamiltonian by a matrix. On the diagonal we have what can be considered unperturbed energy levels. We take them to be equally spaced, $E_{n} = nE$. We introduce a constant coupling $v$ which, for a level $E_{n}$, connects only the nearest neighbors $E_{n-1}$ and $E_{n+1}$.
We consider a transition operator $T$ such that the matrix element is non-zero only if $n$ and $n'$ differ by one, and we take the non-vanishing matrix element to be a constant: \bigskip $\langle \langle n | T | (n+1) \rangle \rangle = \langle \langle (n+1) | T | n \rangle \rangle = 1$ for all $n$. All other matrix elements are taken to be zero. \section{The matrix} The matrix that we will diagonalize is shown below. \setcounter{MaxMatrixCols}{15} \begin{gather*} H=\begin{bmatrix}0 & v & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ v & E & v & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & v & 2E & v & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & v & 3E & v & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & v & 4E & v & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & v & 5E & v & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & v & 6E & v & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & v & 7E & v & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & v & 8E & v & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & v & 9E & v\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & v & 10E \end{bmatrix} \end{gather*} Note that along the diagonal we have equally spaced energies $E_{n} = nE$. There is only one coupling parameter, $v$, and the coupling is only to the nearest neighbors. We choose $E$ to be 1 MeV. We will consider four choices of the coupling: $v=0.5$ (weak), $v=1$, $v=2$, and finally $v=3$ (strong). (The relevant parameter is really $v/E$.) The ``wave functions'' are column vectors with 11 entries $(a_{1}, a_{2}, \ldots, a_{11})$. We have a ground state and 10 excited states. \section{The distributions} The strength matrix element $O$ between a state $\{a\}$ and a state $\{b\}$ is simply \begin{equation} O=\left[ (a_{1}b_{2}+\cdots+a_{10}b_{11}) + (b_{1}a_{2}+\cdots+b_{10}a_{11}) \right] \langle \langle n | T | (n+1) \rangle \rangle . \end{equation} As mentioned above, we take $\langle \langle n | T | (n+1) \rangle \rangle$ to be 1. The strength is $O^{2}$ and we plot $\ln(O^{2})$. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig1.png} \caption{Log of the Strength vs.
Excitation Energy} \label{fig1} \end{figure} \FloatBarrier \section{Special features of our model} There are some special features of our model. First of all, if we change the coupling $v$ to $-v$ we get the same results. This can be understood easily. Suppose we multiply every other basis state by a minus sign (second, fourth, sixth, etc.). The negative of a wave function is the same as the original wave function. What this does in effect is change the coupling $v$ to $-v$. But, since we are not really changing the basis wave functions, we expect the same results as before. One consequence of this is that the energy of the fifth excited state is 5 MeV no matter what the value of the coupling strength is. This is connected with the fact that the fifth excited state is the middle state, with five states pushing it up from below and five pushing it down from above. Another, perhaps more interesting, fact is that there is a simple relation between the eigenfunctions of the lowest state and the highest state. If the lowest eigenstate is of the form $(a_{1}, a_{2}, a_{3}, a_{4}, \ldots, a_{11})$ then the highest one has as an eigenfunction $(a_{11}, -a_{10}, a_{9}, -a_{8}, \ldots, -a_{2}, a_{1})$. There are similar relationships between the second lowest and second highest states, etc. This has the consequence that the strength matrix element between the lowest state and the highest state vanishes no matter what the coupling strength is. This is convenient to know, because quantities that should vanish exactly test the numerical precision of the calculations. Knowing this, we do not include the low-high transitions in our fits. \section{Discussion of results } In Figure 1 we show a log plot (base e) of the strength from the ground state to the 10 excited states. If the behavior is that of a decreasing exponential, then in a log plot we should see a straight line with a negative slope. We performed linear fits to $\ln(O^{2})$ vs. excitation energy, i.e.
$\ln (O^{2}) = a + bE^{*}$. As mentioned above, we do not include the transition from the lowest state to the highest state, because in that case $O$ must have a zero value. The linear fits for the various $v$'s are as follows: $v=0.5$: $7.38-5.72E^{*}$; $v=1$: $5.44-3.84E^{*}$; $v=2$: $3.60-2.20E^{*}$; $v=3$: $1.42-1.34E^{*}$. Indeed, in the log plots in Figure 1 corresponding to $v=0.5$, $v=1$, $v=2$ and $v=3$, the dominant behavior is a linear decrease, hence an exponential decrease in strength. We should, however, not expect exponential behavior in all cases. Consider the extreme situation where there is no coupling ($v=0$). In that case there would be only one transition--from the ground state to the first excited state. This would look like a spike in a plot of strength vs. excitation energy--very far from a decreasing exponential. However, even with the smallest coupling shown here, $v=0.5$, we get to a good approximation an exponential decrease. The standard deviation for $v=0.5$ is 0.2001. For $v=1$ it is 0.2089. For $v=2$ it is 0.5379 and for $v=3$ it is 0.5683. To a first approximation, the results for all these cases show an exponential fall-off. Note that the magnitude of the slope decreases with increasing $v$. This can be understood from the fact that when $v=0$ one only reaches the first excited state; there is no strength to higher states. As we increase $v$ we increase the mixing of the basis states, and so it is more favorable to reach states at higher energy. The values of $O^{2}$ for the transition from the ground state to the first excited state, for $v=0$, 0.5, 1, 2, and 3, are respectively 1, 0.6838, 0.3833, 0.2195, and 0.0983. \section{Degenerate energies} What happens if in the Hamiltonian we take $E$ to be zero, i.e. all the energies along the diagonal are the same? Then we only have the off-diagonal $v$'s. A very interesting phenomenon occurs. All transition rates, as defined by Eq. (1) in Section 4, from any state to any other state vanish.
This can be easily explained. When we have all zeros on the diagonal of our Hamiltonian, the Hamiltonian matrix becomes proportional to the transition operator, represented as a matrix. The Hamiltonian, acting on an eigenstate, gives back the same eigenstate, and hence there will be no transition. One thus verifies that all off-diagonal transition matrix elements are zero. One can, however, have non-vanishing diagonal matrix elements. \section{Other matrix models} Our contention that matrix models can be insightful is strengthened by the fact that there are examples in the literature where this is the case. We cite examples where the couplings in the matrices are different from ours and the problems that are addressed are also different. Consider an $n$ by $n$ matrix $M$ in which all elements are the same, say $c$. Then it is easy to show that {\centering $M^{2} = cnM$, or $M(M - cnI) = 0$, \par } where $I$ is the identity matrix. There are $n-1$ degenerate states with energy zero and one ``collective state'' with energy $nc$. For $c$ positive the collective state is at a high energy and can have an association with isovector modes like the giant dipole state. For $c$ negative the collective state is at a lower energy than the degenerate states. One common association in this case is with isoscalar octupole states. Note the difference with our model where, as we mention above, we get the same result when we change the sign of the coupling. Bohr and Mottelson {[}10{]} used a matrix model to present an alternate derivation of the Breit-Wigner formula {[}11{]} for the strength function of a resonance. In their words, ``We wish to describe how the amplitude for a particular channel a may be distributed over many stationary states of a complicated system.'' Their coupling is from a single-particle state $a$ to nearby complicated states, and in one example they have a constant coupling to all these states. Their coupling causes the sharp single-particle state to be broadened into a resonance state with a width $\varGamma$.
In the model considered here we only have couplings to nearest-neighbor states, and the exponential behavior we show in Fig.~1 is drastically different from the Breit-Wigner resonance shape. Still, it is nice to see that matrix models can be useful in a wide variety of problems. \section{Closing remarks} We have made a plausible case here that an exponential decrease of strength, although not universal, is widespread. Except when the coupling is very weak, we will get such a decrease. We find this to be the case with the simple matrix we have constructed, and also in realistic calculations of magnetic dipole transitions discussed in refs.~[4,5]. In the future we will consider other matrix models, as well as strength distributions for other operators.
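The all-elements-equal matrix quoted in the previous section is easy to check numerically. Below is a minimal sketch (Python with NumPy assumed; the size $n$ and coupling $c$ are arbitrary illustrative choices) verifying $M^2 = cnM$ and the spectrum of $n-1$ degenerate zero-energy states plus one collective state at $nc$:

```python
import numpy as np

n, c = 6, -0.5                # size and coupling; c < 0, as for the octupole case
M = c * np.ones((n, n))       # n x n matrix with every element equal to c

# M^2 = c n M, i.e. M (M - c n I) = 0
assert np.allclose(M @ M, c * n * M)

# Spectrum: n - 1 degenerate states at zero and one collective state at n*c
eigvals = np.sort(np.linalg.eigvalsh(M))
print(eigvals)                # one eigenvalue at n*c = -3, the rest at 0
```

With $c<0$ the collective state drops below the degenerate ones, as described above; flipping the sign of $c$ pushes it above them instead.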
\section{Introduction} \label{SectionI} Consider communication over a discrete-time memoryless channel modeled by a conditional point mass function (PMF) or probability density function (PDF) $p_{Y|X}(y|x)$, where $x\in X$ and $y\in Y$ are the input and output symbols, and $X$ and $Y$ are the input and output alphabets, respectively. Let $\mathcal{C}$ be the Shannon capacity. Fano showed in \cite{ref Fano61} that the minimum error probability $P_e$ of block channel codes of rate $R$ and length $N$ satisfies \begin{equation} \lim_{N\to \infty}-\frac {\log P_e}{N}\ge E(R), \end{equation} where $E(R)$ is a positive function of the channel transition probabilities, known as the error exponent. For finite input and output alphabets, without any coding complexity constraint, the maximum achievable $E(R)$ is given by Gallager in \cite{ref Gallager65}, \begin{equation}\label{GallagerE} E(R)= \max_{p_X}E_L(R,p_X), \end{equation} where $p_X$ is the input distribution, and $E_L(R,p_X)$ is given for different values of $R$ as follows, \begin{eqnarray}\label{GallagerE1} \begin{array}{ll} \max_{\rho \ge1}\left \{-\rho R+E_x(\rho, p_X)\right \}& 0 \le R \le R_x \\ -R+E_{0}(1,p_X) & R_x \le R \le R_{crit} \\ \max_{0\le\rho\le1}\left \{-\rho R+E_0(\rho, p_X)\right \} & R_{crit} \le R \le \mathcal{C}. \end{array} \end{eqnarray} The definitions of the other variables in (\ref{GallagerE1}) can be found in \cite{ref Forney66}. If we replace the PMF by a PDF, the summations by integrals, and the $\max$ operators by $\sup$ in (\ref{GallagerE}) and (\ref{GallagerE1}), the maximum achievable error exponent for continuous channels, i.e., channels whose input and/or output alphabets are the set of real numbers \cite{ref Gallager65}, is still given by (\ref{GallagerE}). In \cite{ref Forney66}, Forney proposed a one-level concatenated coding scheme, which can achieve the following error exponent, known as Forney's exponent, for any rate $R<\mathcal{C}$ with a complexity of $O(N^4)$:
\begin{equation} E_c(R)=\max_{r_o\in\left [\frac{R}{\mathcal{C}},1 \right ]}(1-r_o)E\left (\frac{R}{r_o}\right ), \label{EcR} \end{equation} where $r_o$ and $R$ are the outer and overall rates, respectively. Forney's coding scheme concatenates a maximum distance separable (MDS) outer error-correction code with well-performing inner channel codes. To achieve $E_c(R)$, the decoder is required to exploit reliability information from the inner codes using a generalized minimum distance (GMD) decoding algorithm \cite{ref Forney66}. Forney's GMD algorithm essentially carries out outer code decoding, under various conditions, $O(N)$ times. The overall decoding complexity of $O(N^4)$ is due to the fact that the outer code (which is a Reed-Solomon code) used in \cite{ref Forney66} has a decoding complexity of $O(N^3)$. Forney's concatenated codes were generalized to multi-level concatenated codes, also known as generalized concatenated codes, by Blokh and Zyablov in \cite{ref Blokh82}. As the order of concatenation goes to infinity, the error exponent approaches the following Blokh-Zyablov bound (or Blokh-Zyablov error exponent) \cite{ref Blokh82}\cite{ref Barg05}: \begin{equation}\label{BZBound} E^{(\infty)}(R)=\max_{p_X, r_o\in {\left[\frac{R}{\mathcal{C}},1\right ]}}\left (\frac{R}{r_o}-R\right)\left [\int^{\frac{R}{r_o}}_0\frac{dx}{E_L(x, p_X)}\right ]^{-1}. \end{equation} In \cite{ref Guruswami05}, Guruswami and Indyk proposed a family of linear-time encodable/decodable nearly MDS error-correction codes. By concatenating these codes (as outer codes) with {\it fixed-length} binary inner codes, together with Justesen's GMD algorithm \cite{ref Justesen72}, Forney's error exponent was shown to be achievable over binary symmetric channels (BSCs) with a complexity of $O(N)$ \cite{ref Guruswami05}, i.e., linear in the codeword length.
The number of outer code decodings required by Justesen's GMD algorithm is only a constant\footnote{Strictly speaking, the required number of outer code decodings is linear in the inner codeword length, which is fixed at a reasonably large constant.}, as opposed to $O(N)$ in Forney's case \cite{ref Forney66}. Since each outer code decoding has a complexity of $O(N)$, upper-bounding the number of outer code decodings by a constant is required for achieving the overall linear complexity. Because Justesen's GMD algorithm assumes binary channel outputs \cite{ref Justesen72}\cite{ref Guruswami01}, achievability of Forney's exponent was only proven for BSCs in \cite[Theorem 8]{ref Guruswami05}. In this paper, we show that Forney's GMD algorithm can be revised to carry out outer code decoding only a constant number of times\footnote{The revision can also be regarded as an extension of Justesen's GMD decoding given in \cite{ref Justesen72}.}. With the help of the revised GMD algorithm, by using Guruswami-Indyk's outer codes with fixed-length inner codes, one-level and multi-level concatenated codes can arbitrarily approach Forney's and Blokh-Zyablov exponents with linear complexity, over general discrete-time memoryless channels. \section{Revised GMD Algorithm and Its Impact on Concatenated Codes} \label{SectionII} Consider one-level concatenated coding schemes. Assume, for an arbitrarily small $\varepsilon_1>0$, we can construct a linear-time encodable/decodable outer error-correction code, with rate $r_o$ and length $N_o$, which can correct $t$ symbol errors and $d$ symbol erasures so long as $2t+d<N_o(1-r_o-\varepsilon_1)$. Note that this is possible for large $N_o$ as shown by Guruswami and Indyk in \cite{ref Guruswami05}. To simplify the notation, we assume $N_o(1-r_o-\varepsilon_1)$ is an integer. The outer code is concatenated with suitable inner codes with rate $R_i$ and fixed length $N_i$.
The rate and length of the concatenated code are $R=r_oR_i$ and $N=N_oN_i$, respectively. In Forney's GMD decoding, inner codes forward not only the estimates $\hat{\mbox{\boldmath $x$}}_m=[\hat{x}_1, \dots, \hat{x}_i, \dots, \hat{x}_{N_o}]$ but also a reliability vector $\mbox{\boldmath $\alpha$}=[\alpha_1, \dots, \alpha_i, \dots, \alpha_{N_o}]$ to the outer code, where $\hat{x}_i\in GF(q)$, $0\le \alpha_i \le 1$ and $1 \le i \le N_o$. Let \begin{equation} s(\hat{x},x)= \left \{ \begin{array}{ll}+1 & x=\hat{x}\\ -1 & x\neq \hat{x}\end{array} \right . . \end{equation} For any outer codeword $\mbox{\boldmath $x$}_m=[x_{m1},x_{m2},\dots,x_{mN_o}]$, define a dot product $\mbox{\boldmath $\alpha$}\cdot\mbox{\boldmath $x$}_m$ as follows \begin{equation} \mbox{\boldmath $\alpha$}\cdot\mbox{\boldmath $x$}_m = \sum_{i=1}^{N_o} \alpha_is(\hat{x}_i, x_{mi})=\sum_{i=1}^{N_o} \alpha_is_i. \end{equation} \begin{theorem}{\label{Theorem1}} There is at most one codeword $\mbox{\boldmath $x$}_m$ that satisfies \begin{equation} \mbox{\boldmath $\alpha$}\cdot\mbox{\boldmath $x$}_m>N_o(r_o+\varepsilon_1). \end{equation} \end{theorem} Theorem \ref{Theorem1} is implied by Theorem 3.1 in \cite{ref Forney66}. Rearrange the weights in ascending order of their values and let $i_1, \dots, i_j, \dots, i_{N_o}$ be the indices such that \begin{equation} \alpha_{i_1}\le \dots\le \alpha_{i_j}\le \dots\le \alpha_{i_{N_o}}. \end{equation} Define $\mbox{\boldmath$q$}_k=[q_k(\alpha_1),\dots, q_k(\alpha_j), \dots, q_k(\alpha_{N_o})]$, for $0 \le k < 1/\varepsilon_2$, where $\varepsilon_2>0$ is a positive constant with $1/\varepsilon_2$ being an integer, and $q_k(\alpha_{i_j})$ is given by \begin{eqnarray} q_k(\alpha_{i_j})= \left \{ \begin{array}{ll}0 & \mbox{if} \quad \alpha_{i_j} \le k\varepsilon_2\quad \\ & \mbox{and}\quad i_j\le N_o(1-r_o-\varepsilon_1)\\ 1 & \mbox{otherwise}\end{array} \right . . 
\end{eqnarray} Define the dot product $\mbox{\boldmath $q$}_k\cdot \mbox{\boldmath $x$}_m$ as \begin{equation} \mbox{\boldmath $q$}_k\cdot \mbox{\boldmath $x$}_m=\sum_{i=1}^{N_o} q_k(\alpha_{i})s(\hat{x}_i,x_{mi})=\sum_{i=1}^{N_o} q_k(\alpha_i)s_i. \end{equation} The following theorem gives the key result that enables the revision of Forney's GMD decoder. \begin{theorem}{\label{Theorem2}} If $\mbox{\boldmath$\alpha$}\cdot\mbox{\boldmath $x$}_m>N_o\left (\frac{\varepsilon_2}{2}+(r_o+\varepsilon_1)(1-\frac{\varepsilon_2}{2})\right)$, then for some $0 \le k < 1/\varepsilon_2$, $\mbox{\boldmath$q$}_k\cdot\mbox{\boldmath $x$}_m>N_o(r_o+\varepsilon_1)$. \end{theorem} \begin{proof} Define a set of values $c_j=(j-1/2)\varepsilon_2$ for $1 \le j \le 1/\varepsilon_2$ and an integer $p=\lceil \alpha_{i_{N_o(1-r_o-\varepsilon_1)}}/\varepsilon_2\rceil$, where $1\le p\le 1/\varepsilon_2$.\footnote{Note that the value of $p$ cannot be $0$: if $p=0$, i.e., $\alpha_{i_{N_o(1-r_o-\varepsilon_1)}}=0$, then there are at least $N_o(1-r_o-\varepsilon_1)$ zeros in the vector $\mbox{\boldmath $\alpha$}$. Consequently, $\mbox{\boldmath $\alpha$}\cdot\mbox{\boldmath $x$}_m\le N_o(r_o+\varepsilon_1)<N_o\left(\frac{\varepsilon_2}{2}+(r_o+\varepsilon_1)\left(1-\frac{\varepsilon_2}{2}\right)\right)$, which contradicts the assumption that $\mbox{\boldmath$\alpha$}\cdot\mbox{\boldmath $x$}_m>N_o\left (\frac{\varepsilon_2}{2}+(r_o+\varepsilon_1)(1-\frac{\varepsilon_2}{2})\right)$.} Let \begin{eqnarray} && \lambda_0 = c_1 \nonumber\\ && \lambda_k = c_{k+1}-c_k, 1\le k\le p-1 \nonumber\\ && \lambda_{p} = \alpha_{i_{N_o(1-r_o-\varepsilon_1)+1}}-c_p \nonumber\\ && \lambda_h = \alpha_{i_{h-p+N_o(1-r_o-\varepsilon_1)+1}}-\alpha_{i_{h-p+N_o(1-r_o-\varepsilon_1)}},\nonumber \\ && \qquad \mbox{if}\quad p< h<p+N_o(r_o+\varepsilon_1)\nonumber\\ && \lambda_{p+N_o(r_o+\varepsilon_1)}= 1-\alpha_{i_{N_o}}.
\end{eqnarray} We have \begin{equation} \sum_{k=0}^{j-1}\lambda_k= \left\{ \begin{array}{ll}c_j & 1 \le j \le p\\ \alpha_{i_{j-p+N_o(1-r_o-\varepsilon_1)}} & p < j \le p+N_o(r_o+\varepsilon_1)\end{array} \right . , \end{equation} and \begin{equation} \sum_{k=0}^{p+N_o(r_o+\varepsilon_1)}\lambda_k=1. \end{equation} Define a new weight vector $\tilde{\mbox{\boldmath $\alpha$}}=[\tilde{\alpha}_1, \dots, \tilde{\alpha}_i, \dots, \tilde{\alpha}_{N_o}]$ with \begin{equation} \tilde{\alpha}_i= \left\{ \begin{array}{ll} \mbox{argmin}_{c_j, 1\le j \le p}|c_j-\alpha_i| & \alpha_i\le \alpha_{i_{N_o(1-r_o-\varepsilon_1)}}\\ \alpha_i & \alpha_i>\alpha_{i_{N_o(1-r_o-\varepsilon_1)}} \end{array} \right . . \end{equation} Define $\mbox{\boldmath $p$}_k=[p_k(\alpha_1), \dots, p_k(\alpha_i), \dots, p_k(\alpha_{N_o}) ]$ with $1\le k\le p+N_o(r_o+\varepsilon_1)$ such that for $0\le k<p$ \begin{equation} \mbox{\boldmath $p$}_k = \mbox{\boldmath $q$}_k, \end{equation} and for $p\le k\le p+N_o(r_o+\varepsilon_1)$ \begin{equation} \mbox{\boldmath $p$}_k(\alpha_i) = \left\{ \begin{array}{ll}0 & \alpha_i\le \alpha_{i_{k-p+N_o(1-r_o-\varepsilon_1)}}\\ 1 & \alpha_i>\alpha_{i_{k-p+N_o(1-r_o-\varepsilon_1)}} \end{array} \right . . \end{equation} We have \begin{equation} \mbox{\boldmath $\tilde{\alpha}$}=\sum_{k=0}^{p+N_o(r_o+\varepsilon_1)}\lambda_k\mbox{\boldmath $p$}_k. \end{equation} Define a set of indices \begin{equation} \mathcal{U}=\{i_1, i_2, \ldots, i_{N_o(1-r_o-\varepsilon_1)}\}. \end{equation} According to the definition of $\tilde{\alpha}_i$, for $i \notin \mathcal{U}$, $\tilde{\alpha}_i=\alpha_i$. Hence \begin{equation} \tilde{\mbox{\boldmath $\alpha$}} \cdot \mbox{\boldmath $x$}_m=\mbox{\boldmath $\alpha$} \cdot \mbox{\boldmath $x$}_m+\sum_{i \in \mathcal{U}}\left (\tilde{\alpha}_i-\alpha_i\right )s_i. 
\end{equation} Since $|\tilde {\alpha}_i-\alpha_i| \le \varepsilon_2/2$, and $s_i=\pm 1$, we have \begin{equation} \sum_{i \in \mathcal{U}}\left (\tilde {\alpha}_i-\alpha_i\right )s_i \ge -N_o(1-r_o-\varepsilon_1)\frac{\varepsilon_2}{2}. \end{equation} Consequently, $\mbox{\boldmath $\alpha$} \cdot \mbox{\boldmath $x$}_m>N_o\left(\frac{\varepsilon_2}{2}+(r_o+\varepsilon_1)\left(1-\frac{\varepsilon_2}{2}\right)\right)$ implies \begin{equation}\label{AlphaX} \mbox{\boldmath $\tilde{\alpha}$} \cdot \mbox{\boldmath $x$}_m>N_o(r_o+\varepsilon_1). \end{equation} If $\mbox{\boldmath $p$}_k \cdot \mbox{\boldmath $x$}_m\le N_o(r_o+\varepsilon_1)$ for all $\mbox{\boldmath $p$}_k$'s, then \begin{eqnarray} \mbox{\boldmath $\tilde{\alpha}$} \cdot \mbox{\boldmath $x$}_m&=&\sum_{k=0}^{p+N_o(r_o+\varepsilon_1)}\lambda_k\mbox{\boldmath $p$}_k\cdot \mbox{\boldmath $x$}_m\nonumber\\ &\le& N_o(r_o+\varepsilon_1)\sum_{k=0}^{p+N_o(r_o+\varepsilon_1)}\lambda_k \nonumber\\ &=&N_o(r_o+\varepsilon_1), \end{eqnarray} which contradicts (\ref{AlphaX}). Therefore, there must be some $\mbox{\boldmath $p$}_k$ that satisfies \begin{equation}\label{PX} \mbox{\boldmath $p$}_k\cdot \mbox{\boldmath $x$}_m>N_o(r_o+\varepsilon_1). \end{equation} Since for $k\ge p$ the vector $\mbox{\boldmath $p$}_k$ has no more than $N_o(r_o+\varepsilon_1)$ ones, which implies $\mbox{\boldmath $p$}_k\cdot \mbox{\boldmath $x$}_m\le N_o(r_o+\varepsilon_1)$, a vector satisfying (\ref{PX}) must exist among the $\mbox{\boldmath $p$}_k$ with $1\le k<p$. In other words, for some $k$, $\mbox{\boldmath $q$}_k\cdot\mbox{\boldmath $x$}_m>N_o(r_o+\varepsilon_1)$.
\end{proof} Theorems \ref{Theorem1} and \ref{Theorem2} indicate that, if $\mbox{\boldmath $x$}_m$ is transmitted and $\mbox{\boldmath$\alpha$}\cdot\mbox{\boldmath $x$}_m>N_o\left (\frac{\varepsilon_2}{2}+(r_o+\varepsilon_1)(1-\frac{\varepsilon_2}{2})\right)$, then for some $0 \le k < 1/\varepsilon_2$, errors-and-erasures decoding specified by $\mbox{\boldmath $q$}_k$ (where symbols with $q_k(\alpha_i)=0$ are erased) will output $\mbox{\boldmath $x$}_m$. Since the total number of $\mbox{\boldmath $q$}_k$ vectors is upper bounded by the constant $1/\varepsilon_2$, the outer code carries out errors-and-erasures decoding only a constant number of times. Consequently, a GMD decoder that carries out errors-and-erasures decoding for all $\mbox{\boldmath $q$}_k$'s and compares their decoding outputs can recover $\mbox{\boldmath $x$}_m$ with a complexity of $O(N_o)$. Since the inner code length $N_i$ is fixed, the overall complexity is $O(N)$. The following theorem gives an error probability bound for one-level concatenated codes with the revised GMD decoder. \begin{theorem}{\label{Theorem3}} Assume inner codes achieve Gallager's error exponent given in (\ref{GallagerE}). Let the reliability vector $\mbox{\boldmath $\alpha$}$ be generated according to Forney's algorithm presented in \cite[Section 4.2]{ref Forney66}. Let $\mbox{\boldmath $x$}_m$ be the transmitted outer codeword.
For large enough $N$, the error probability of the one-level concatenated code is upper bounded by \begin{eqnarray} P_e &\le & P\left\{\mbox{\boldmath$\alpha$}\cdot \mbox{\boldmath $x$}_m\le N_o\left(\frac{\varepsilon_2}{2}+(r_o+\varepsilon_1)\left(1-\frac{\varepsilon_2}{2} \right)\right)\right\} \nonumber \\ &\le & \exp \left[-N\left(E_c(R)-\varepsilon\right)\right], \end{eqnarray} where $E_c(R)$ is Forney's error exponent given by (\ref{EcR}) and $\varepsilon$ is a function of $\varepsilon_1$ and $\varepsilon_2$ with $\varepsilon\rightarrow0$ if $\varepsilon_1, \varepsilon_2\rightarrow0$. \end{theorem} The proof of Theorem \ref{Theorem3} can be obtained by first replacing Theorem 3.2 in \cite{ref Forney66} with Theorem \ref{Theorem2}, and then following Forney's analysis presented in \cite[Section 4.2]{ref Forney66}. The difference between Forney's and the revised GMD decoding schemes lies in the definition of the errors-and-erasures decodable vectors $\mbox{\boldmath $q$}_k$, the number of which determines the decoding complexity. Forney's GMD decoding needs to carry out errors-and-erasures decoding a number of times linear in $N_o$, whereas ours does so only a constant number of times. Although the idea behind the revised GMD decoding is similar to Justesen's GMD algorithm \cite{ref Justesen72}, Justesen's work focused on error-correction codes where inner codes forward Hamming distance information (in the form of an $\mbox {\boldmath $\alpha$}$ vector) to the outer code. Applying the revised GMD algorithm to multi-level concatenated codes \cite{ref Blokh82}\cite{ref Barg05} is quite straightforward. The achievable error exponent of $m$-level concatenated codes is given in the following theorem.
\begin{theorem}{\label{Theorem4}} For a discrete-time memoryless channel with capacity $\mathcal{C}$, for any $\varepsilon>0$ and any integer $m>0$, one can construct a sequence of $m$-level concatenated codes whose encoding/decoding complexity is linear in $N$, and whose error probability is bounded by \begin{eqnarray}\label{MExponent} \lim_{N\rightarrow\infty}-\frac{\log P_e}{N}\ge E^{(m)}(R)-\varepsilon, \qquad \qquad \qquad \qquad \qquad \nonumber\\ E^{(m)}(R)=\max_{p_X,r_o\in {\left [\frac{R}{\mathcal{C}},1\right]}}\frac{\frac{R}{r_o}-R}{\frac{R}{r_om}\sum_{i=1}^m\left [E_L\left((\frac {i}{m})\frac{R}{r_o},p_X\right )\right ]^{-1}} \nonumber \\ \end{eqnarray} \end{theorem} The proof of Theorem \ref{Theorem4} can be obtained by combining Theorem \ref{Theorem3} and the derivation of $E^{(m)}(R)$ in \cite{ref Blokh82}\cite{ref Barg05}. Note that $\lim_{m\rightarrow \infty}E^{(m)}(R)=E^{(\infty)}(R)$, where $E^{(\infty)}(R)$ is the Blokh-Zyablov error exponent given in (\ref{BZBound}). Theorem \ref{Theorem4} implies that, for discrete-time memoryless channels, the Blokh-Zyablov error exponent can be arbitrarily approached with linear encoding/decoding complexity. \section{Conclusions} We proposed a revised GMD decoding algorithm for concatenated codes over general discrete-time memoryless channels. By combining the GMD algorithm with Guruswami-Indyk's error-correction codes, we showed that Forney's and the Blokh-Zyablov error exponents can be arbitrarily approached by one-level and multi-level concatenated coding schemes, respectively, with linear encoding/decoding complexity. \section*{Acknowledgment} The authors would like to thank Professor Alexander Barg for his help on multi-level concatenated codes.
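To make the constant-trial property of the revised GMD decoder concrete, the sketch below builds the $\mbox{\boldmath $q$}_k$ erasure patterns from a reliability vector. It is an illustrative Python sketch, not the decoder itself: the reliability values are made up, and the index condition in the definition of $q_k$ is read as restricting erasures to the $N_o(1-r_o-\varepsilon_1)$ least reliable positions.

```python
def gmd_trial_patterns(alpha, r_o, eps1, eps2):
    """Erasure patterns q_0, ..., q_{1/eps2 - 1} (0 = erase, 1 = keep).

    Position i is erased in pattern k iff alpha[i] <= k*eps2 and i is among
    the N_o*(1 - r_o - eps1) least reliable positions (an assumption about
    how the index condition in the definition is meant).
    """
    N_o = len(alpha)
    budget = int(N_o * (1 - r_o - eps1))
    least_reliable = set(sorted(range(N_o), key=lambda i: alpha[i])[:budget])
    return [[0 if (alpha[i] <= k * eps2 and i in least_reliable) else 1
             for i in range(N_o)]
            for k in range(int(1 / eps2))]

# The number of errors-and-erasures trials is 1/eps2, independent of N_o.
for N_o in (20, 200):
    alpha = [((7 * i) % N_o) / N_o for i in range(N_o)]  # made-up reliabilities
    trials = gmd_trial_patterns(alpha, r_o=0.5, eps1=0.05, eps2=0.25)
    print(N_o, "->", len(trials), "trials")              # 4 trials in both cases
```

Whatever $N_o$ is, the number of errors-and-erasures trials stays at $1/\varepsilon_2$; combined with a linear-time outer decoder, this is what yields the overall linear complexity.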
\section{Introduction} DNA-based data storage has been a hot topic in the information theory community. As deletions and insertions are common in DNA data storage \cite{Heckel20}, codes correcting such errors have attracted significant attention in recent years. It was proved in \cite{Levenshtein65} that the optimal redundancy of binary $t$-deletion correcting codes is asymptotically between $t\log n+o(\log n)$ and $2t\log n+o(\log n)$, where $n$ is the length of the code and the redundancy of a binary code $\mathcal C$ is defined as $n-\log|\mathcal C|$.\footnote{In this paper, for any positive real number $x$, $\log_qx$ is the logarithm of $x$ with base $q$, where $q\geq 2$ is a positive integer. If the base $q=2$, then for simplicity, we write $\log_2x=\log x$.} The well-known Varshamov-Tenengolts (VT) codes \cite{Varshamov65}, which are defined as \begin{align*}\text{VT}_a(n)\!=\!\left\{\!(c_1,\ldots,c_n) \!\in\!\{0,1\}^n\!: \!\sum_{i=1}^nic_i\!\equiv a ~\text{mod}\!~(n+1)\!\right\}\!,\end{align*} are a class of binary single-deletion correcting codes with asymptotically optimal redundancy. Constructions of multiple-deletion correcting codes with low redundancy were considered in \cite{Brakensiek18}$-\!\!$\cite{Wentu22}. By using higher-order VT syndromes and the syndrome compression technique \cite{Sima20-1}, Sima \emph{et al}. constructed a family of systematic $t$-deletion correcting codes with $4t\log n+o(\log n)$ bits of redundancy \cite{Sima20}. The method in \cite{Sima20} was improved in \cite{Wentu22} to give a construction of $t$-deletion correcting codes with redundancy $(4t-1)\log n+o(\log n)$, which is the best known redundancy. For the special case of $t=2$, an explicit construction of $2$-deletion correcting codes with redundancy $4\log n+o(\log n)$ was proposed by Guruswami and H\aa stad \cite{Gur2020}, which matches the existential upper bound on the redundancy of asymptotically optimal codes.
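As a concrete illustration of the VT construction above, the following brute-force Python sketch (not the linear-time decoder from the literature) recovers a codeword of $\text{VT}_a(n)$ from any single deletion by trying every re-insertion and keeping the candidates whose checksum is $a$ modulo $n+1$; single-deletion correctability guarantees that exactly one codeword survives:

```python
from itertools import product

def in_vt(c, a):
    """Membership test for VT_a(n): sum_{i=1}^{n} i*c_i = a (mod n+1)."""
    n = len(c)
    return sum(i * ci for i, ci in enumerate(c, start=1)) % (n + 1) == a

def vt_decode(y, n, a):
    """Recover the unique codeword of VT_a(n) from y, obtained by one deletion."""
    y = list(y)
    candidates = {tuple(y[:i] + [b] + y[i:]) for i in range(n) for b in (0, 1)}
    valid = [c for c in candidates if in_vt(c, a)]
    assert len(valid) == 1   # deletion balls of distinct codewords are disjoint
    return valid[0]

# Exhaustive check for n = 8, a = 0: every codeword survives every single deletion.
n, a = 8, 0
codewords = [c for c in product((0, 1), repeat=n) if in_vt(c, a)]
for c in codewords:
    for p in range(n):
        assert vt_decode(c[:p] + c[p + 1:], n, a) == c
print(len(codewords), "codewords, all recovered")
```

The brute force is only for transparency; the point of the VT construction is that decoding can in fact be done in linear time from the checksum deficiency.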
As a special case of deletion errors, a burst of $t$ deletions $($or a $t$-burst-deletion$)$ refers to $t$ deletions that occur at consecutive positions. It was proved in \cite{Schoeny2017} that the redundancy of a $t$-burst-deletion-correcting code is approximately lower bounded by $\log n+t-1$. Levenshtein \cite{Levenshtein67} constructed a class of binary codes that can correct a burst of at most two deletions with asymptotically optimal redundancy of $\log n+1$. Binary codes capable of correcting a burst of \emph{exactly} $t$ deletions for $t\geq 2$ were constructed in \cite{Schoeny2017}; these also have an asymptotically optimal redundancy of $\log n+(t-1)\log\log n+t-\log t$. In \cite{Lenz20}, binary codes capable of correcting a burst of \emph{at most} $t$ deletions are constructed, which also have an asymptotically optimal redundancy of $\log n+(t(t-1)/2)\log\log n+\gamma_t$, where $\gamma_t$ is a constant that depends only on $t$. Besides binary codes, nonbinary deletion correcting codes have also been investigated in the literature. In \cite{Levenshtein02}, it was shown that the optimal redundancy of a $q$-ary $t$-deletion correcting code is asymptotically lower bounded by $t\log n+t\log q+o(\log q\log n)$ and upper bounded by $2t\log n+t\log q+o(\log q\log n)$ in bits $(q\geq 2)$. A class of $q$-ary single-deletion correcting codes with redundancy close to the asymptotic optimum was constructed in \cite{Tenengolts84}. For $q$-ary $t$-deletion correcting codes, the best known construction is presented in \cite{Sima20-2}, which achieves optimal redundancy up to a constant factor. Quaternary codes capable of correcting a single edit error for DNA data storage were studied in \cite{Cai19}. In \cite{Wang21}, a $q$-ary code that can correct a burst of at most $2$ deletions with redundancy $\log n+O(\log q \log\log n)$ bits was constructed, where $q\geq 2$ is an even integer.
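The gap between $t$ arbitrary deletions and a burst of $t$ deletions, which is behind the much smaller redundancy achievable for burst-deletion codes, can be seen by enumerating both sets of outcomes for a small example (an illustrative Python sketch; the string and parameters are arbitrary choices):

```python
from itertools import combinations

def del_ball(x, t):
    """D_t(x): all length-(n - t) subsequences of x (t arbitrary deletions)."""
    n = len(x)
    return {tuple(s for j, s in enumerate(x) if j not in drop)
            for drop in map(set, combinations(range(n), t))}

def burst_ball(x, t):
    """B_t(x): subsequences of x obtained by deleting t consecutive symbols."""
    return {tuple(x[:i] + x[i + t:]) for i in range(len(x) - t + 1)}

x, t = [0, 1, 1, 0, 2, 1, 0, 0, 1, 2], 2   # a 3-ary string of length n = 10
B, D = burst_ball(x, t), del_ball(x, t)
assert B <= D                              # every burst is a pattern of t deletions
print(len(B), "burst outcomes vs.", len(D), "general ones")
```

There are at most $n-t+1$ burst outcomes, versus up to $\binom{n}{t}$ outcomes for $t$ arbitrary deletions, which is consistent with roughly $\log n$ bits of redundancy sufficing in the burst case.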
In this paper, we construct nonbinary two-deletion correcting codes and burst-deletion correcting codes. Our contributions include: \begin{itemize} \item[1)] We construct a class of systematic $q$-ary two-deletion correcting codes, with redundancy $5\log n+O(\log q\log\log n)$, where $q\geq 2$ is an even integer and $n$ is the length of the message sequences. \item[2)] We present a construction of binary codes with redundancy $\log n+9\log\log n+\gamma_t+o(\log\log n)$ bits $(\gamma_t$ is a constant that depends only on $t)$ and capable of correcting a burst of at most $t$ deletions, which improves on the Lenz-Polyanskii construction (ISIT 2020). \item[3)] We give a construction of $q$-ary codes with redundancy $\log n+(8\log q+9)\log\log n+o(\log q\log\log n)+\gamma_t$ bits and capable of correcting a burst of at most $t$ deletions, where $q\geq 2$ is an even integer. \end{itemize} Note that each symbol in $\mathbb Z_q$ can be viewed as a binary string of length $\lceil\log q\rceil$, so a binary code of length $\lceil\log q\rceil n$ and capable of correcting a burst of $\lceil\log q\rceil t$ deletions can also be viewed as a $q$-ary code of length $n$ and capable of correcting a burst of $t$ deletions. By this observation and by the construction in \cite{Lenz20}, we can obtain a $q$-ary code of length $n$ and capable of correcting a burst of $t$ deletions that has redundancy $$\log(n\log q)+\frac{t\log q (t\log q+1)}{2}\log\log(n\log q)+\gamma_t.$$ Our construction has lower redundancy than this naive construction. The rest of this paper is organized as follows. In Section \uppercase\expandafter{\romannumeral 2}, we introduce some basic concepts and notation for deletion correcting codes, and review some related constructions in the literature. In Section \uppercase\expandafter{\romannumeral 3}, we construct $q$-ary two-deletion correcting codes.
In Section \uppercase\expandafter{\romannumeral 4}, we present an improved construction of binary codes correcting a burst of at most $t$ deletions. In Section \uppercase\expandafter{\romannumeral 5}, we construct $q$-ary codes correcting a burst of at most $t$ deletions. The paper is concluded in Section \uppercase\expandafter{\romannumeral 6}. \section{Preliminaries} For any integers $m$ and $n$ such that $m\leq n$, we denote $[m,n]=\{m,m+1,\ldots,n\}$ and call it an \emph{interval}. If $m>n$, let $[m,n]=\emptyset$. For simplicity, denote $[n]=[1,n]$ for any positive integer $n$. For any positive real number $x$, $\log x$ is the logarithm of $x$ with base $2$, i.e., $\log x=\log_2x$. The size (cardinality) of any set $S$ is denoted by $|S|$. For any positive integer $q\geq 2$, denote $\mathbb Z_q=\{0,1,2,\cdots,q-1\}$, which will be used as the alphabet of $q$-ary codes. For any string (also called a sequence) $\bm{x}\in\mathbb Z_q^n$, $n$ is called the length of $\bm{x}$ and we denote $|\bm x|=n$. Unless otherwise specified, we use $x_i$ to denote the $i$th coordinate of $\bm{x}$, where $i\in[n]$. Usually, we denote $\bm{x}=(x_1,x_2,\ldots,x_n)$ or $\bm{x}=x_1x_2\cdots x_n$. For any $I=\{i_1, i_2, \ldots, i_d\}\subseteq [n]$ such that $i_1<i_2<\cdots<i_d$, denote $x_I=x_{i_1} x_{i_2} \cdots x_{i_d}$ and call $x_I$ a \emph{subsequence} of $\bm x$. If $I\subseteq[n]$ is an interval $($i.e., $I=[i,j]$ for some $i,j\in[1,n]$, $i\leq j)$, then $x_{I}=x_{[i,j]}=x_{i}x_{i+1}\cdots x_{j}$ is called a \emph{substring} of $\bm x$. In other words, a substring of $\bm{x}$ is a subsequence of $\bm{x}$ consisting of some consecutive symbols of $\bm{x}$. We say that $\bm x$ contains $\bm p~($or $\bm p$ is contained in $\bm x)$ if $\bm{p}$ is a substring of $\bm{x}$. For two substrings $x_{I}$ and $x_{I'}$ of $\bm x$, where $I,I'\subseteq[n]$ are two intervals, we say that $x_{I}$ and $x_{I'}$ are \emph{disjoint} if $I\cap I'=\emptyset$. Let $t\leq n$ be a nonnegative integer.
For any $\bm{x}\in\mathbb Z_q^n$, let $\mathcal D_t(\bm x)$ denote the set of subsequences of $\bm x$ of length $n-t$, and let $\mathcal B_{t}(\bm x)$ denote the set of subsequences $\bm y$ of $\bm x$ that can be obtained from $\bm x$ by a burst of $t$ deletions, that is $\bm y=x_{I}$ such that $I=[n]\backslash D$ for some interval $D\subseteq[n]$ of length $t~($i.e., $D=[i,i+t-1]$ for some $i\in[n-t+1])$. Moreover, let $\mathcal B_{\leq t}(\bm x)=\bigcup_{t'=0}^t\mathcal B_{t'}(\bm x)$ be the set of subsequences of $\bm x$ that can be obtained from $\bm x$ by a burst of at most $t$ deletions. Clearly, $\mathcal D_{ 1}(\bm x)=\mathcal B_{1}(\bm x)=\mathcal B_{\leq 1}(\bm x)$. However, $\mathcal B_{t}(\bm x)\subseteq \mathcal D_{t}(\bm x)\cap\mathcal B_{\leq t}(\bm x)$ for $t\geq 2$. A code $\mathcal C\subseteq\mathbb Z_q^n$ is said to be a $t$-\emph{deletion correcting code} if for any $\bm x\in\mathcal C$ and any $\bm y\in\mathcal D_t(\bm x)$, $\bm x$ can be uniquely recovered from $\bm y$; the code $\mathcal C\subseteq\mathbb Z_q^n$ is said to be capable of \emph{correcting a burst of at most $t$ deletions} if for any $\bm x\in\mathcal C$ and any $\bm y\in\mathcal B_{\leq t}(\bm x)$, $\bm x$ can be uniquely recovered from $\bm y$. \subsection{Some Constructions Related to Binary Single-deletion and Two-deletion Correcting Codes} From the VT construction, we can obtain the following lemma about single-deletion correcting codes. \begin{lem}\label{VT-Code-Skch} For any integer $n\geq 3$, there exists a function $\text{VT}: \{0,1\}^n\rightarrow\{0,1\}^{\log n}$, computable in linear time, such that for any $\bm{c}\in\{0,1\}^n$, given $\text{VT}(\bm c)$ and any $\bm b\in\mathcal D_1(\bm c)$, one can uniquely recover $\bm c$. \end{lem} The following lemma can be obtained from the results of \cite{Sima19-1}, and so its proof is omitted. 
\begin{lem}\label{Ind-Bnry-2del} For any integer $n\geq 3$, there exists a function $\xi: \{0,1\}^n\rightarrow\{0,1\}^{7\log n+o(\log n)}$, computable in linear time, such that for any $\bm{c}\in\{0,1\}^n$, given $\xi(\bm c)$ and any $\bm b\in\mathcal D_2(\bm c)$, one can uniquely recover $\bm c$. \end{lem} Lemma \ref{Ind-Bnry-2del} can be used to construct systematic binary two-deletion correcting codes with redundancy not greater than $7\log n+o(\log n)$. Another construction, which uses the so-called regular strings and has lower redundancy, was proposed in \cite{Gur2020}, but it is not systematic. \begin{defn}[Regularity]\label{Regu} A binary string $\bm{c}\in\{0,1\}^n$ is said to be \emph{regular} if each (contiguous) sub-string of $\bm c$ of length at least $d\log n$ contains both $00$ and $11$. \end{defn} In Definition \ref{Regu}, $d$ is a constant that can be chosen properly. In this paper, we will always choose $d=7$. The following two lemmas are from \cite{Gur2020}. \begin{lem}\cite[Lemma 11]{Gur2020}\label{Enc-Reg-Bnry} There exist an integer $M\geq 2^{n-1}$ and a one-to-one mapping $\text{RegEnc}: \{1,2,\cdots,M\}\rightarrow\{0,1\}^n$ such that its image is contained in the set of regular strings. Moreover, the function $\text{RegEnc}$ can be computed in near-linear time with a polynomial size lookup table. \end{lem} \begin{lem}\cite[Theorem 7]{Gur2020}\label{Reg-Bnry-2-Del} There is a function $\eta$, computable in linear time, that maps $n$ bits to $4\log n+10\log\log n+O(1)$ bits such that for any regular $\bm{c}\in\{0,1\}^n$, given $\eta(\bm c)$ and any $\bm b\in\mathcal D_2(\bm c)$, one can uniquely recover $\bm c$. \end{lem} \subsection{Some Constructions Related to Binary Burst-Deletion Correcting Codes} The following lemma can be obtained from the results in Section IV of \cite{Sima20-1}. \begin{lem}\label{lem-Bnry-burst-Sima} Suppose $t$ is a constant with respect to $n$. 
There is a function $\phi:\{0,1\}^n\rightarrow\{0,1\}^{4\log n+o(\log n)}$, computable in time $O(2^tn^3)$, such that for any $\bm c\in\{0,1\}^n$, given $\phi(\bm c)$ and any $\bm b\in\mathcal B_{\leq t}(\bm c)$, one can uniquely recover $\bm c$. \end{lem} Let $m\leq\delta\leq n$ be positive integers and $\bm p\in\{0,1\}^m$, where $\bm p$ is called a \emph{pattern}. A string $\bm c\in\{0,1\}^n$ is called $(\bm p, \delta)$-\emph{dense} if each substring of $\bm c$ of length $\delta$ contains at least one occurrence of the pattern $\bm p$. As in \cite{Lenz20}, in this paper, we take $$\delta=t2^{t+1}\log n\footnotemark{}$$ and $$\bm p=0^t1^t,$$ where $0^t$ is the string consisting of $t$ $0$s, and $1^t$ is the string consisting of $t$ $1$s. In other words, $\bm p=p_1p_2\cdots p_{2t}$ such that $p_1=p_2=\cdots=p_t=0$ and $p_{t+1}=p_{t+2}=\cdots=p_{2t}=1$. It was proven in \cite{Lenz20} that one bit of redundancy is sufficient to construct $(\bm p, \delta)$-dense strings. \footnotetext{In \cite{Lenz20}, $\delta$ is taken to be $t2^{t+1}\lceil\log n\rceil$. In this paper, for notational simplicity, we omit the ceiling function and write $\delta=t2^{t+1}\log n$.} \begin{lem}\cite[Lemma 1]{Lenz20}\label{lem-p-dense} For any $n\geq 5$, the number of $(\bm p, \delta)$-dense strings of length $n$ is at least $$2^n(1-n^{1-\log e})\geq 2^{n-1}.$$ \end{lem} The following lemma can be obtained from Construction 1 and Lemma 2 of \cite{Lenz20}, and so its proof is omitted. \begin{lem}\label{lem-Bnry-burst-Lenz} For any positive integer $n$, there is a function $\mu$, computable in linear time, that maps $n$ bits to $\log n+3$ bits such that for any $(\bm p, \delta)$-dense $\bm c\in\{0,1\}^n$, given $\mu(\bm c)$ and any $\bm b\in\mathcal B_{\leq t}(\bm c)$, one can find in time $O(n)$ an interval $L\subseteq[n]$ of length at most $\delta+t$ such that $\bm b=c_{[n]\backslash D}$ for some interval $D\subseteq L~($i.e., the deletions are located in the interval $L)$.
\end{lem} \subsection{Matrix Representation of $q$-ary Strings} In the rest of this paper, we always assume $q>2$ is a fixed even integer. As in \cite{Sima20-2}, each $q$-ary string $\bm{x}=x_1x_2\ldots x_n\in\mathbb Z_q^{n}$ can be represented by a $\lceil\log q\rceil\times n$ binary matrix \begin{align}\label{q-B-Repr} M_{\bm{x}}=(c_{i,j})=\left(\begin{array}{cccc} c_{1,1} & \cdots & c_{1,n} \\ \vdots & \ddots & \vdots \\ c_{\lceil\log q\rceil,1} & \cdots & c_{\lceil\log q\rceil,n} \\ \end{array}\right), \end{align} where $c_{i,j}\in\{0,1\}$, such that the $j$th column of $M_{\bm{x}}$ is the binary representation of $x_j$. Specifically, $x_j=\sum_{i=1}^{\lceil\log q\rceil}c_{i,j}2^{i-1}$. We call $M_{\bm{x}}$ the \emph{matrix representation} of $\bm x$. For any $i\in\{1,2,\cdots,\lceil\log q\rceil\}$ and any interval $J=[j_1,j_2]=\{j_1,j_1+1,\cdots,j_2\}\subseteq[n]$, where $1\leq j_1<j_2\leq n$, denote \begin{align}\label{1-row-Repr} c_{i,J}\triangleq c_{i,j_1}c_{i,j_1+1}\cdots c_{i,j_2},\end{align} which is a substring of the $i$th row of $M_{\bm x}$ consisting of $c_{i,j_1}$, $c_{i,j_1+1}$, $\cdots$, $c_{i,j_2}$. In particular, $c_{i,[n]}$ is the $i$th row of $M_{\bm x}$. Clearly, if $\bm y\in\mathbb Z_q^{n-t}$ is obtained from $\bm x$ by deleting $x_{j_1},\cdots,x_{j_t}$, then the matrix representation $M_{\bm{y}}$ of $\bm y$ can be obtained from $M_{\bm{x}}$ by deleting columns $j_1, \cdots,j_t$ of $M_{\bm{x}}$. Moreover, $\bm x$ can be recovered from $\bm y$ if and only if its matrix representation $M_{\bm{x}}$ can be recovered from $M_{\bm{y}}$. \begin{lem}\label{lem-En-B2Q} Suppose $\mathcal E_{0}:\{0,1\}^{n-1}\rightarrow\{0,1\}^{n}$ is a one-to-one mapping and $q>2$ is an even integer. 
Then there is a one-to-one mapping $\bar{\mathcal E}_{0}:\mathbb Z_q^{n-1}\rightarrow\mathbb Z_q^{n}$, with the same computing time as $\mathcal E_{0}$, such that for any $\bm u\in\mathbb Z_q^{n-1}$ and $\bm x=\bar{\mathcal E}_{0}(\bm u)$, if $M_{\bm{u}}=(b_{i,j})_{\lceil\log q\rceil\times(n-1)}$ and $M_{\bm{x}}=(c_{i,j})_{\lceil\log q\rceil\times n}$ are the matrix representations of $\bm u$ and $\bm x$, respectively, then $$c_{1,[n]}=\mathcal E_{0}(b_{1,[n-1]}).$$ \end{lem} \begin{proof} For each $\bm u\in\mathbb Z_q^{n-1}$ whose matrix representation is \begin{align*} M_{\bm{u}}=\left(\begin{array}{ccccc} b_{1,1} & \cdots & b_{1,n-1} \\ b_{2,1} & \cdots & b_{2,n-1} \\ \vdots & \ddots & \vdots \\ b_{\lceil\log q\rceil,1} & \cdots & b_{\lceil\log q\rceil,n-1} \\ \end{array}\right), \end{align*} denote $\mathcal E_{0}(b_{1,[n-1]})=\bm c=c_{1,1}\cdots c_{1,n-1}c_{1,n}$ and let \begin{align*} M=\left(\begin{array}{ccccc} c_{1,1} & \cdots & c_{1,n-1} & c_{1,n} \\ b_{2,1} & \cdots & b_{2,n-1} & 0 \\ \vdots & \ddots & \vdots & \vdots \\ b_{\lceil\log q\rceil,1} & \cdots & b_{\lceil\log q\rceil,n-1} & 0 \\ \end{array}\right). \end{align*} Specifically, $M=(c_{i,j})$ is a $\lceil\log q\rceil\times n$ binary matrix satisfying the following three properties: i) the first row of $M$ is equal to $\bm c$; ii) $c_{2,j}\cdots c_{\lceil\log q\rceil,j}=b_{2,j}\cdots b_{\lceil\log q\rceil,j}$ for each $j\in[n-1]$; iii) $c_{2,n}\cdots c_{\lceil\log q\rceil,n}=0^{\lceil\log q\rceil-1}$, where $0^{\lceil\log q\rceil-1}$ is the string consisting of $\lceil\log q\rceil-1$ symbol $0$s. Let $\bar{\mathcal E}_{0}(\bm u)=\bm x$ such that the matrix representation of $\bm x$ is $M_{\bm{x}}=M$. It is easy to see that $c_{1,[n]}=\mathcal E_{0}(b_{1,[n-1]})$ and the computing time of $\bar{\mathcal E}_{0}$ is the same as that of $\mathcal E_{0}$. Moreover, since $\mathcal E_{0}$ is a one-to-one mapping, it is also easy to see that $\bar{\mathcal E}_{0}$ is a one-to-one mapping.
It remains to prove that $\bm x\in\mathbb Z_q^{n}$, or equivalently, that each column of $M_{\bm{x}}$ is the binary representation of some integer in $\mathbb Z_q$. According to property iii) of the constructed matrix $M$, we have $c_{\lceil\log q\rceil,n}\cdots c_{2,n}c_{1,n}=0^{\lceil\log q\rceil-1}c_{1,n}$, so the last column of $M$ is the binary representation of $c_{1,n}\in\{0,1\}\subseteq\mathbb Z_q$. For each $j\in[n-1]$, according to property ii) of $M$, we have $c_{\lceil\log q\rceil,j}\cdots c_{2,j}c_{1,j}=b_{\lceil\log q\rceil,j}\cdots b_{2,j}c_{1,j}$, so $x_j=\sum_{i=1}^{\lceil\log q\rceil}c_{i,j}2^{i-1}=\sum_{i=2}^{\lceil\log q\rceil}b_{i,j}2^{i-1}+c_{1,j}=u_{j}-b_{1,j}+c_{1,j}$, where the last equality holds because according to the definition of the matrix representation, $b_{\lceil\log q\rceil,j}\cdots b_{2,j}b_{1,j}$ is the binary representation of $u_j$. If $b_{1,j}=1$, then $x_j=u_{j}-b_{1,j}+c_{1,j}\leq u_j\leq q-1$. If $b_{1,j}=0$, then $u_{j}$ is even; since $q$ is also even, this implies $u_{j}\leq q-2$, and hence $x_j=u_{j}-b_{1,j}+c_{1,j}=u_{j}+c_{1,j}\leq q-1$. In both cases, we have $x_j\in\mathbb Z_q$. Thus, each column of $M$ is the binary representation of some integer in $\mathbb Z_q$, and so $\bm x\in\mathbb Z_q^{n}$. \end{proof} \section{Nonbinary Two-deletion Correcting Codes} In this section, we consider $q$-ary two-deletion correcting codes. We assume that $q>2$ is an even integer and is a constant with respect to the code length $n$. Each binary sequence $\bm a$ will also be viewed as a non-negative integer whose binary representation is $\bm a$, and conversely, each non-negative integer $m$ will also be viewed as a binary sequence of length $\lceil\log (m+1)\rceil$, i.e., the binary representation of $m$. Therefore, summation and multiplication of binary strings and integers are performed in the set of integers. We need to introduce some concepts and notations for binary strings, which will be used in our construction.
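As an aside, the matrix representation \eqref{q-B-Repr} and the column-deletion correspondence used throughout this section can be illustrated by a small sketch (Python; the function names are ours, and rows are $0$-indexed so that row $i$ carries bit $2^i$, whereas the paper indexes rows from $1$):

```python
def matrix_repr(x, q):
    """M_x: one column per symbol; row i (0-indexed) holds bit 2**i of each symbol."""
    rows = (q - 1).bit_length()  # ceil(log2 q) rows suffice for symbols in Z_q
    return [[(s >> i) & 1 for s in x] for i in range(rows)]

def from_matrix(M):
    """Inverse of matrix_repr: read each column as a binary representation."""
    return [sum(M[i][j] << i for i in range(len(M))) for j in range(len(M[0]))]

def delete_columns(M, positions):
    """Delete the given (0-indexed) column positions from M."""
    keep = [j for j in range(len(M[0])) if j not in set(positions)]
    return [[row[j] for j in keep] for row in M]
```

Deleting symbols of $\bm x$ and deleting the corresponding columns of $M_{\bm x}$ produce the same matrix, which is exactly the fact the decoders below exploit.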
Let $\bm c\in\{0,1\}^n$ be a binary string of length $n$. A \emph{run} of $\bm c$ is a maximal substring of $\bm c$ consisting of identical symbols.\footnote{We say that a substring of $\bm c$ satisfying a certain property is maximal if it is contained in no other substring of $\bm c$ that satisfies the same property. Hence, a maximal run of the string $\bm c$ is not contained in any other run of $\bm c$.} A substring $c_{[i_1,i_2]}$ of $\bm c$, where $i_1<i_2$, is called an \emph{alternating substring} of $\bm c$ if $c_{i+1}\neq c_{i}$ for all $i\in[i_1,i_2-1]$. \begin{rem}\label{rem-Regu-length} From Definition \ref{Regu}, it is easy to see that if $\bm{c}\in\{0,1\}^n$ is regular, then no substring of $\bm c$ of length $d\log n$ can be a run or an alternating substring of $\bm c$, because it contains both $00$ and $11$. Equivalently, each run and each alternating substring of $\bm c$ has length at most $d\log n$. \end{rem} \begin{defn}\label{def-run-set} For each $\bm c\in\{0,1\}^n$, let $c_{I_i}$ be the $i$th run $($counting from the left$)$ of $\bm c$, where $I_i\subseteq[n]$ is the index set of $c_{I_i}$. Then we denote $\mathcal I_{\bm c}=\{c_{I_1},\cdots,c_{I_{n'}}\}$ and call it \emph{the set of runs} of $\bm c$, where $n'$ is the number of runs of $\bm c$. \end{defn} Let $\text{VT}$, $\xi$ and $\eta$ be the functions constructed by Lemma \ref{VT-Code-Skch}, Lemma \ref{Ind-Bnry-2del} and Lemma \ref{Reg-Bnry-2-Del}, respectively. Denote $$\rho=3d\log n$$ and let \begin{equation}\label{def-Ji-intvl} J_j=\!\left\{\!\begin{aligned} &[(j-1)\rho\!+1, (j+1)\rho], ~\text{for}~j\in\!\{1,\cdots, \left\lceil n/\rho\right\rceil-2\},\\ &[(j-1)\rho\!+1, n], ~~~~~~~~~\text{for}~j=\left\lceil n/\rho\right\rceil-1. \end{aligned}\right. \end{equation} Note that each interval $J_j~($except possibly the last$)$ has length $2\rho$ and the intersection of two successive intervals $J_{j}$ and $J_{j+1}$ is an interval of length $\rho$. It is easy to verify the following remark.
\begin{rem}\label{rem-sets-Ji} The intervals $J_j$, $j=1,\cdots,\left\lceil n/\rho\right\rceil-1$, satisfy the following: \begin{itemize} \item[1)] For any interval $J\subseteq[n]$ of length at most $\rho$, we can find a $j_0\in\{1,2,\cdots,\left\lceil n/\rho\right\rceil-1\}$ such that $J\subseteq J_{j_0}$. \item[2)] $J_j\cap J_{j'}=\emptyset$ for all $j,j'\in\{1,2,\cdots, \left\lceil n/\rho\right\rceil-1\}$ such that $|j-j'|\geq 2$. \end{itemize} \end{rem} For each $q$-ary string $\bm x\in\mathbb Z_q^n$, let $M_{\bm x}=(c_{i,j})$ be the matrix representation of $\bm x$ as defined by \eqref{q-B-Repr} and $c_{1,[n]}$ be the first row of $M_{\bm x}$. We construct a function $f$ as follows. \textbf{Construction 1}: For each $\bm x\in\mathbb Z_q^n$, let $\mathcal I_{\bm c}=\{c_{I_1},\cdots,c_{I_{n'}}\}$ be the set of runs of $\bm c=c_{1,[n]}$ as defined in Definition \ref{def-run-set}. For each $i\in[n']$, let $$g_i(\bm x)=\left(\text{VT}(c_{2,I_i}), \text{VT}(c_{3,I_i}), \cdots, \text{VT}(c_{\lceil\log q\rceil,I_i})\right),$$ and for each $\ell\in\{0,1\}$, let \begin{align}\label{def-2del-g-ell} g^{(\ell)}(\bm x)=\sum_{i=1}^{n'}i^\ell g_i(\bm x)~\text{mod}~2n^\ell N_1,\end{align} where $$N_1=q^{\log\log n+3}.$$ Moreover, for each $j\in\{1,\cdots,\left\lceil n/\rho\right\rceil-1\}$, let $$h_j(\bm x)=\left(\xi(c_{2,J_j}),\xi(c_{3,J_j}),\cdots,\xi(c_{\lceil\log q\rceil,J_j})\right),\footnotemark{}$$ and for each $\ell\in\{0,1\}$, let \begin{align}\label{Constr1-h-ell} h^{(\ell)}(\bm x)=\sum_{\substack{j\in\{1,\cdots,\left\lceil n/\rho\right\rceil-1\}:\\ j\!~\equiv\!~\ell~\text{mod}~2}} h_j(\bm x)~\text{mod}~N_2,\end{align} where $$N_2=q^{7\log\log n+o(\log\log n)}.$$ Finally, let $$f(\bm x)=\left(\eta(c_{1,[n]}),g^{(0)}(\bm x),g^{(1)}(\bm x), h^{(0)}(\bm x),h^{(1)}(\bm x)\right).$$ \footnotetext{Note that Lemma \ref{Ind-Bnry-2del} requires that each $|I_j|\geq 3$. If $|I_j|<3$, we can just let $\xi(c_{i,I_j})=c_{i,I_j}$. Then $c_{i,I_j}$ can also be recovered from $\xi(c_{i,I_j})$.
This is feasible because in our construction, we only need that each $\xi(c_{i,I_j})$ is a sequence of length not greater than $7\log\log n+o(\log\log n)$.} Let $\mathcal R_n$ denote the set of all $\bm x\in\mathbb Z_q^{n}$ such that $c_{1,[n]}$ is a regular string with $d=7~($according to Definition \ref{Regu}$)$. Then we have the following theorem. \begin{thm}\label{thm-2del-sketch} The function $f(\bm x)$ is computable in linear time and the length $|f(\bm x)|$ of $f(\bm x)$ satisfies $$|f(\bm x)|\leq 5\log n+O(\log q\log\log n).$$ Moreover, if $\bm{x}\in\mathcal R_n$, then $\bm x$ can be uniquely recovered from $f(\bm x)$ and any given $\bm y\in\mathcal D_2(\bm x)$. \end{thm} To prove Theorem \ref{thm-2del-sketch}, we need the following lemma. \begin{lem}\label{lem-2del-pstn} Suppose $\bm c\in\{0,1\}^n$ is regular and $\bm b\in\{0,1\}^{n-2}$ is such that $\bm b$ can be obtained from $\bm c$ by deleting two symbols of $\bm c$. Then exactly one of the following holds. \begin{itemize} \item[1)] There are two distinct runs $c_{I_{j_1}}$ and $c_{I_{j_2}}$ of $\bm c$, uniquely determined by $\bm b$ and $\bm c$, such that $\bm b$ can be obtained from $\bm c$ by deleting one symbol in $c_{I_{j_1}}$ and one symbol in $c_{I_{j_2}}$. \item[2)] There is an interval $J\subseteq[n]$ of length at most $\rho$ such that $\bm b$ can be obtained from $\bm c$ by deleting two symbols in $c_{J}$. \end{itemize} \end{lem} \begin{proof} This lemma is proved in Appendix A. \end{proof} Now, we can prove Theorem \ref{thm-2del-sketch}. \begin{proof}[Proof of Theorem \ref{thm-2del-sketch}] Note that by Lemma \ref{VT-Code-Skch}, Lemma \ref{Ind-Bnry-2del} and Lemma \ref{Reg-Bnry-2-Del}, the functions $\text{VT}$, $\xi$ and $\eta$ are all computable in linear time. By Construction 1, the functions $g^{(\ell)}(\bm x)$ and $h^{(\ell)}(\bm x)$, $\ell\in\{0,1\}$, are computable in linear time.
Hence, $f(\bm x)=\left(\eta(c_{1,[n]}),g^{(0)}(\bm x),g^{(1)}(\bm x), h^{(0)}(\bm x),h^{(1)}(\bm x)\right)$ is computable in linear time. For each $\bm x\in\mathbb Z_q^n$, by Construction 1, the length $|g^{(\ell)}(\bm x)|$ of $g^{(\ell)}(\bm x)$, $\ell\in\{0,1\}$, satisfies \begin{align*}|g^{(\ell)}(\bm x)|&\leq \log (2n^\ell N_1)\\&=\ell\log n+(\log q)(\log\log n+3)+1.\end{align*} Similarly, the length $|h^{(\ell)}(\bm x)|$ of $h^{(\ell)}(\bm x)$, $\ell\in\{0,1\}$, satisfies \begin{align*}|h^{(\ell)}(\bm x)|&\leq \log N_2\\ &=(\log q)(7\log\log n+o(\log\log n)).\end{align*} Moreover, by Lemma \ref{Reg-Bnry-2-Del}, the length of $\eta(c_{1,[n]})$ satisfies $$|\eta(c_{1,[n]})|\leq 4\log n+10\log\log n+O(1).$$ Thus, by Construction 1, the length of $f(\bm x)$ satisfies \begin{align*}|f(\bm x)|&=|\eta(c_{1,[n]})| +|g^{(0)}(\bm x)|+|g^{(1)}(\bm x)|\\&~~~~+|h^{(0)}(\bm x)|+|h^{(1)}(\bm x)|\\&\leq 5\log n+(16\log q+10)\log\log n\\&~~~~+o(\log q\log\log n)\\&=5\log n+O(\log q\log\log n).\end{align*} It remains to prove that for each $\bm{x}\in\mathcal R_n$, given $f(\bm x)$ and any $\bm y\in\mathcal D_2(\bm x)$, one can uniquely recover $\bm x$. To prove this, we first prove that $g_j(\bm x)<N_1$ for each $j\in[n']$ and $h_j(\bm x)<N_2$ for each $j\in\{1,2,\cdots,\left\lceil n/\rho\right\rceil-1\}$. Since $\bm{x}\in\mathcal R_n$, by Remark \ref{rem-Regu-length}, each run of $c_{1,[n]}$ has length at most $7\log n~($noticing that we take $d=7$ in this paper$)$, so for each $i\in[2,\left\lceil\log q\right\rceil]$ and $j\in[n']$, we have $c_{i,I_j}\in\{0,1\}^{\leq 7\log n}$.
By Lemma \ref{VT-Code-Skch}, for each $i\in[2,\left\lceil\log q\right\rceil]$ and $j\in[n']$, the length of $\text{VT}(c_{i,I_j})$ satisfies $|\text{VT}(c_{i,I_j})|\leq \log(7\log n)\leq\log\log n+3.$ By Construction 1, the length of $g_j(\bm x)$ satisfies \begin{align*}|g_j(\bm x)|&\leq (\left\lceil\log q\right\rceil-1)(\log\log n+3)\\&<(\log q)(\log\log n+3),\end{align*} so $$g_j(\bm x)<q^{\log\log n+3}=N_1.$$ Similarly, for each $i\in[2,\left\lceil\log q\right\rceil]$ and each $j\in\{1,2,\cdots,\left\lceil n/\rho\right\rceil-1\}$, by \eqref{def-Ji-intvl}, the length of the substring $c_{i,J_j}$ satisfies $|c_{i,J_j}|\leq 2\rho=42\log n$, so by Lemma \ref{Ind-Bnry-2del}, we have \begin{align*}|\xi(c_{i,J_j})|&\leq 7\log(42\log n)+o(\log(42\log n))\\&=7\log\log n+o(\log\log n).\end{align*} By Construction 1, \begin{align*}|h_j(\bm x)|&\leq(\left\lceil\log q\right\rceil-1)(7\log\log n+o(\log\log n))\\&<(\log q)(7\log\log n+o(\log\log n)).\end{align*} Hence, $$h_j(\bm x)<q^{7\log\log n+o(\log\log n)}=N_2.$$ Now, we prove that each $\bm{x}\in\mathcal R_n$ can be uniquely recovered from $f(\bm x)=\left(\eta(c_{1,[n]}),g^{(0)}(\bm x),g^{(1)}(\bm x), h^{(0)}(\bm x),h^{(1)}(\bm x)\right)$ and any given $\bm y\in\mathcal D_2(\bm x)$. Let $$M_{\bm y}=(d_{i,j})_{\left\lceil\log q\right\rceil\times (n-2)}$$ be the matrix representation of $\bm y$. Then $M_{\bm y}$ can be obtained from $M_{\bm x}$ by deleting two columns, so $d_{1,[n-2]}\in\mathcal D_2(c_{1,[n]})$, where $d_{1,[n-2]}$ is the first row of $M_{\bm y}$. Since $\bm x\in\mathcal R_n$, $c_{1,[n]}$ is regular. By Lemma \ref{Reg-Bnry-2-Del}, $\bm c\triangleq c_{1,[n]}$ can be correctly recovered from $\bm d\triangleq d_{1,[n-2]}$ and $\eta(c_{1,[n]})$.
Moreover, by Lemma \ref{lem-2del-pstn}, exactly one of the following two cases holds: \emph{Case 1}: There are two distinct runs $c_{1,I_{j_1}}$ and $c_{1,I_{j_2}}$ of $c_{1,[n]}$ such that $d_{1,[n-2]}$ is obtained from $c_{1,[n]}$ by deleting one symbol in $c_{1,I_{j_1}}$ and one symbol in $c_{1,I_{j_2}}$. Correspondingly, $M_{\bm y}$ can be obtained from $M_{\bm x}$ by deleting one column in $I_{j_1}$ and one column in $I_{j_2}$. Without loss of generality, assume $j_1<j_2$. Denoting $I_{j}=[p_{j-1}+1,p_j]$, where $p_0=0<p_1<p_2<\cdots <p_{n'}=n$, then by comparing the symbols of $M_{\bm x}$ and $M_{\bm y}$, we have the following observations: \begin{itemize} \item[i)] $c_{i,I_j}=d_{i,I_j}$ for $1\leq j<j_1$ and each $i\in[2,\left\lceil\log q\right\rceil]$; \item[ii)] $d_{i,[p_{j_1-1}+1,p_{j_1}-1]}\in \mathcal D_1(c_{i,I_{j_1}})$ for each $i\in[2,\left\lceil\log q\right\rceil]$; \item[iii)] $c_{i,I_j}=d_{i,I_j-1}$ for each $j_1<j<j_2$ and $i\in[2,\left\lceil\log q\right\rceil]$, where $I_j-1=\{\ell-1:\ell\in I_{j}\}=[p_{j-1},p_j-1]$; \item[iv)] $d_{i,[p_{j_2-1},p_{j_2}-2]}\in \mathcal D_1(c_{i,I_{j_2}})$ for each $i\in[2,\left\lceil\log q\right\rceil]$; \item[v)] $c_{i,I_j}=d_{i,I_j-2}$ for each $j_2<j\leq n'$ and $i\in[2,\left\lceil\log q\right\rceil]$, where $I_j-2=\{\ell-2:\ell\in I_{j}\}=[p_{j-1}-1,p_j-2]$. \end{itemize} Then $c_{i,[n]}$, $i\in[2,\left\lceil\log q\right\rceil]$, can be recovered by the following three steps. \textbf{Step 1}: By observations i), iii) and v), $c_{i,I_j}$ can be directly obtained from $M_{\bm y}$ for all $i\in[2,\left\lceil\log q\right\rceil]$ and $j\in[n']\backslash\{j_1,j_2\}$. \textbf{Step 2}: Compute $g_j(\bm x)$ from $c_{2,I_j}, \cdots, c_{\left\lceil\log q\right\rceil,I_j}$ for all $j\in[n']\backslash\{j_1,j_2\}$. This is possible because for all $i\in[2,\left\lceil\log q\right\rceil]$ and $j\in[n']\backslash\{j_1,j_2\}$, $c_{i,I_j}$ have been obtained from $M_{\bm y}$ in Step 1.
Then by taking $\ell=0$ in \eqref{def-2del-g-ell}, we can obtain \begin{align*}g_{j_1}(\bm x)+g_{j_2}(\bm x)\equiv \left(\!g^{(0)}(\bm x)-\!\sum_{j\in[n']\backslash\{j_1,j_2\}}g_j(\bm x)\!\right)\text{mod}~2N_1.\end{align*} Since $g_{j}(\bm x)<N_1$ for all $j\in[n']$, we in fact have \begin{align}\label{slv-g1} g_{j_1}(\bm x)+g_{j_2}(\bm x)= \left(\!g^{(0)}(\bm x)-\!\sum_{j\in[n']\backslash\{j_1,j_2\}}g_j(\bm x)\!\right)\text{mod}~2N_1.\end{align} Similarly, taking $\ell=1$ in \eqref{def-2del-g-ell} and noticing that $0\leq j_1g_{j_1}(\bm x)+j_2g_{j_2}(\bm x)<2nN_1$, we can obtain \begin{align}\label{slv-g2}&j_1g_{j_1}(\bm x)+j_2g_{j_2}(\bm x)\nonumber\\ &=\left(g^{(1)}(\bm x)-\sum_{j\in[n']\backslash\{j_1,j_2\}}jg_j(\bm x)\right)~\text{mod}~2nN_1. \end{align} So, $g_{j_1}(\bm x)$ and $g_{j_2}(\bm x)$ can be solved from \eqref{slv-g1} and \eqref{slv-g2}. By Construction 1, we have $$g_{j_1}(\bm x)=\left(\text{VT}(c_{2,I_{j_1}}), \text{VT}(c_{3,I_{j_1}}), \cdots, \text{VT}(c_{\lceil\log q\rceil,I_{j_1}})\right)$$ and $$g_{j_2}(\bm x)=\left(\text{VT}(c_{2,I_{j_2}}), \text{VT}(c_{3,I_{j_2}}), \cdots, \text{VT}(c_{\lceil\log q\rceil,I_{j_2}})\right).$$ \textbf{Step 3}: By observation ii), $d_{i,[p_{j_1-1}+1,p_{j_1}-1]}\in \mathcal D_1(c_{i,I_{j_1}})$ for each $i\in[2,\left\lceil\log q\right\rceil]$. Then by Lemma \ref{VT-Code-Skch}, $c_{i,I_{j_1}}$ can be recovered from $\text{VT}(c_{i,I_{j_1}})$ and $d_{i,[p_{j_1-1}+1,p_{j_1}-1]}$. Similarly, since by observation iv), $d_{i,[p_{j_2-1},p_{j_2}-2]}\in \mathcal D_1(c_{i,I_{j_2}})$ for each $i\in[2,\left\lceil\log q\right\rceil]$, by Lemma \ref{VT-Code-Skch}, $c_{i,I_{j_2}}$ can be recovered from $\text{VT}(c_{i,I_{j_2}})$ and $d_{i,[p_{j_2-1},p_{j_2}-2]}$. Thus, for Case 1, $c_{i,[n]}$, $i\in[2,\left\lceil\log q\right\rceil]$, can be recovered from $\eta(c_{1,[n]})$, $g^{(0)}(\bm x)$, $g^{(1)}(\bm x)$ and $\bm y$.
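The step of solving \eqref{slv-g1} and \eqref{slv-g2} is ordinary elimination in two unknowns; the following minimal numerical sketch (Python, with toy integer block values standing in for the actual VT sketches, positions $1$-indexed, function names ours) mimics it:

```python
def sketches(vals, N1):
    # g^(0) and g^(1) of Construction 1, with vals[j-1] standing in for g_j(x) < N1
    n = len(vals)
    g0 = sum(vals) % (2 * N1)
    g1 = sum(j * v for j, v in enumerate(vals, start=1)) % (2 * n * N1)
    return g0, g1

def recover_pair(known, j1, j2, g0, g1, N1, n):
    # known: position -> value for every position except j1 < j2
    s0 = (g0 - sum(known.values())) % (2 * N1)                       # g_{j1} + g_{j2}
    s1 = (g1 - sum(j * v for j, v in known.items())) % (2 * n * N1)  # j1*g_{j1} + j2*g_{j2}
    v2 = (s1 - j1 * s0) // (j2 - j1)                                 # eliminate the first unknown
    return s0 - v2, v2
```

The moduli $2N_1$ and $2nN_1$ can be removed exactly because both linear combinations are known a priori to lie below them.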
\emph{Case 2}: There is an interval $J\subseteq[n]$ of length at most $\rho=3d\log n$ such that $\bm d\triangleq d_{1,[n-2]}$ can be obtained from $\bm c\triangleq c_{1,[n]}$ by deleting two symbols in $c_{1,J}$. Correspondingly, $M_{\bm y}$ can be obtained from $M_{\bm x}$ by deleting two columns in $J$. By 1) of Remark \ref{rem-sets-Ji}, we can always find a $J_{j_0}$ for some $j_0\in\{1,2,\cdots,\left\lceil n/\rho\right\rceil-1\}$ such that $J\subseteq J_{j_0}$. Denoting $J_{j_0}=[\lambda, \lambda']$, then by comparing the symbols of $M_{\bm x}$ and $M_{\bm y}$, we obtain that for each $i\in[2,\left\lceil\log q\right\rceil]$, $$c_{i,[1,\lambda-1]}=d_{i,[1,\lambda-1]},$$ $$c_{i,[\lambda'+1,n]} =d_{i,[\lambda'-1,n-2]}$$ and $$d_{i,[\lambda,\lambda'-2]}\in\mathcal D_{\leq 2}(c_{i,[\lambda,\lambda']}).$$ Hence, $c_{i,[1,\lambda-1]}$ and $c_{i,[\lambda'+1,n]}$ can be directly obtained from $M_{\bm y}$. Moreover, each $c_{i,[\lambda,\lambda']}$ can be recovered from $d_{i,[\lambda,\lambda'-2]}$ and $h^{(\ell)}(\bm x)$, $\ell\in\{0,1\}$, as follows. By 2) of Remark \ref{rem-sets-Ji}, $J_j\subseteq[1,\lambda-1]$ for all $j\in\{1$, $2$, $\cdots$, $j_0-2\}$, so $c_{i,J_j}$ can be obtained from $d_{i,[1,\lambda-1]}=c_{i,[1,\lambda-1]}$. Similarly, $c_{i,J_j}$ can be obtained from $d_{i,[\lambda'-1,n-2]}=c_{i,[\lambda'+1,n]}$ for all $j\in\{j_0+2,\cdots,\left\lceil n/\rho\right\rceil-1\}$. Hence, we can compute $h_j(\bm x)=\left(\xi(c_{2,J_j}),\xi(c_{3,J_j}),\cdots,\xi(c_{\lceil\log q\rceil,J_j})\right)$ for each $j\in\{1,2,\cdots,\left\lceil n/\rho\right\rceil-1\}\backslash\{j_0\}$. Let $\ell\in\{0,1\}$ be such that $j_0\equiv\ell\mod 2$.
Then by \eqref{Constr1-h-ell}, and noticing that $h_{j_0}(\bm x)<N_2$, we can obtain \begin{align*} h_{j_0}(\bm x)=\left(h^{(\ell)}(\bm x)-\sum_{\substack{j\in\{1,2,\cdots,\left\lceil n/\rho\right\rceil-1\}\backslash\{j_0\}:\\ j\!~\equiv\!~\ell~\text{mod}~2}} h_j(\bm x)\right)\text{mod}~N_2.\end{align*} Note that $d_{i,[\lambda,\lambda'-2]}\in\mathcal D_{\leq 2}(c_{i,[\lambda,\lambda']})=\mathcal D_{\leq 2}(c_{i,J_{j_0}})$, and by Construction 1, $$h_{j_0}(\bm x)=\left(\xi(c_{2,J_{j_0}}),\xi(c_{3,J_{j_0}}),\cdots,\xi(c_{\lceil\log q\rceil,J_{j_0}})\right).$$ Then by Lemma \ref{Ind-Bnry-2del}, for each $i\in[2,\left\lceil\log q\right\rceil]$, $c_{i,[\lambda,\lambda']}=c_{i,J_{j_0}}$ can be recovered from $d_{i,[\lambda,\lambda'-2]}$ and $h_{j_0}(\bm x)$. Thus, for Case 2, $c_{i,[n]}$, $i\in[2,\left\lceil\log q\right\rceil]$, can be recovered from $\eta(c_{1,[n]})$, $h^{(0)}(\bm x)$, $h^{(1)}(\bm x)$ and $\bm y$. By the above discussions, we have proved that $M_{\bm x}~($and so $\bm x)$ can be uniquely recovered from $f(\bm x)$ and $\bm y$, which completes the proof. \end{proof} \vspace{0pt}By representing each binary string of length at most $\lfloor\log q\rfloor$ as an integer in $\mathbb Z_q$, each binary string $\bm a$ can be represented as a $q$-ary string of length $\left\lceil|\bm a|/\lfloor\log q\rfloor\right\rceil$. We denote this $q$-ary string by $\mathcal Q(\bm a)$ and call it the $q$-\emph{ary representation} of $\bm a$ for convenience. Specifically, divide $\bm a$ into $\left\lceil|\bm a|/\lfloor\log q\rfloor\right\rceil$ disjoint substrings, each having length $\lfloor\log q\rfloor$ except the last substring, which has length $|\bm a|-\left(\left\lceil|\bm a|/\lfloor\log q\rfloor\right\rceil-1\right)\lfloor\log q\rfloor$. Then by representing each of these substrings as an integer in $\mathbb Z_q$, we obtain a $q$-ary string $\mathcal Q(\bm a)$ of length $\left\lceil|\bm a|/\lfloor\log q\rfloor\right\rceil$.
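The map $\mathcal Q(\cdot)$ is simply fixed-width bit packing; a minimal sketch (Python; the function name is ours, and we read each block MSB-first, which is one of several equally valid conventions since the text does not fix one):

```python
def qary_repr(a, q):
    # Q(a): split the bit string a into blocks of floor(log2 q) bits
    # (the last block may be shorter) and read each block as an integer in Z_q.
    k = q.bit_length() - 1  # floor(log2 q); a k-bit block is always < 2**k <= q
    return [int(a[i:i + k], 2) for i in range(0, len(a), k)]
```

For example, with $q=8$ each block carries $3$ bits, so a $7$-bit string maps to $\lceil 7/3\rceil=3$ symbols of $\mathbb Z_8$.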
Let $\text{RegEnc}: \{1,2,\cdots,M\}\rightarrow\{0,1\}^n$ be the one-to-one mapping constructed in Lemma \ref{Enc-Reg-Bnry}. Since $M\geq 2^{n-1}$, $\text{RegEnc}$ can also be viewed as a mapping from $\{0,1\}^{n-1}$ to $\{0,1\}^n$. By Lemma \ref{lem-En-B2Q}, the mapping $\text{RegEnc}$ can be extended to a one-to-one mapping, denoted by $$\mathcal E_{\text{Reg}}:\mathbb Z_q^{n-1}\rightarrow\mathbb Z_q^{n},$$ such that for any $\bm u\in\mathbb Z_q^{n-1}$ and $\bm x=\mathcal E_{\text{Reg}}(\bm u)$, if $M_{\bm{u}}=(b_{i,j})_{\lceil\log q\rceil\times(n-1)}$ and $M_{\bm{x}}=(c_{i,j})_{\lceil\log q\rceil\times n}$ are the matrix representations of $\bm u$ and $\bm x$, respectively, then $$c_{1,[n]}=\text{RegEnc}(b_{1,[n-1]}).$$ By Lemma \ref{Enc-Reg-Bnry}, $c_{1,[n]}=\text{RegEnc}(b_{1,[n-1]})$ is regular, so for any $\bm u\in\mathbb Z_q^{n-1}$, we have $\bm x=\mathcal E_{\text{Reg}}(\bm u)\in\mathcal R_n$. Using the mapping $\mathcal E_{\text{Reg}}:\mathbb Z_q^{n-1}\rightarrow\mathcal R_n$ and the function $f$ constructed in Construction 1, we can give an encoding function of a $q$-ary two-deletion correcting code as follows. Let $\mathcal E$ be the function defined on $\mathbb Z_q^{n-1}$ of the form \begin{align}\label{2del-enc-fun} \mathcal E(\bm u)=(\bm v, \bm v', \bm v''), ~\forall \!~\bm u\in\mathbb Z_q^{n-1}, \end{align} such that $\bm v=\mathcal E_{\text{Reg}}(\bm u)$, $\bm v'=\mathcal E_{\text{Reg}}(\mathcal Q(f(\bm v)))$ and $\bm v''=\text{Rep}_3(\mathcal Q(f(\bm v')))$, where $\text{Rep}_{3}(\cdot)$ is the encoding function of the $3$-fold repetition code. \begin{thm}\label{thm-2del-enc} The code $\mathcal C=\{\mathcal E(\bm u): \bm u\in\mathbb Z_q^{n-1}\}$, where $\mathcal E$ is given by \eqref{2del-enc-fun}, is a $q$-ary two-deletion correcting code with redundancy $5\log n+O(\log q\log\log n)$ in bits. The encoding complexity of $\mathcal C$ is near-linear in $n$ with a polynomial size lookup table.
\end{thm} \begin{proof} Let $$\bm x=\mathcal E(\bm u)=(\bm v, \bm v', \bm v'')\in\mathcal C,$$ where $\bm u\in\mathbb Z_q^{n-1}$, $\bm v=\mathcal E_{\text{Reg}}(\bm u)$, $\bm v'=\mathcal E_{\text{Reg}}(\mathcal Q(f(\bm v)))$ and $\bm v''=\text{Rep}_3(\mathcal Q(f(\bm v')))$ as in \eqref{2del-enc-fun}. Given any $\bm y\in\mathcal D_2(\bm x)$, we have $y_{[1,m_1-2]}\in\mathcal D_2(\bm v)$, $y_{[m_1,m_2-2]}\in\mathcal D_2(\bm v')$ and $y_{[m_2,m_3-2]}\in\mathcal D_2(\bm v'')$, where $|\bm v|=m_1, |(\bm v,\bm v')|=m_2$ and $|\bm x|=|(\bm v,\bm v',\bm v'')|=m_3$. First, since $\bm v''=\text{Rep}_3(\mathcal Q(f(\bm v')))$ is a codeword of a two-deletion correcting code, $\mathcal Q(f(\bm v'))$ can be recovered from $y_{[m_2,m_3-2]}$, and hence $f(\bm v')$ can be recovered from $\mathcal Q(f(\bm v'))$. Then by Theorem \ref{thm-2del-sketch}, $\bm v'$ can be recovered from $y_{[m_1,m_2-2]}$ and $f(\bm v')$, and hence $f(\bm v)$ can be recovered from $\bm v'=\mathcal E_{\text{Reg}}(\mathcal Q(f(\bm v)))$. Finally, by Theorem \ref{thm-2del-sketch} again, $\bm v$ can be recovered from $y_{[1,m_1-2]}$ and $f(\bm v)$. Thus, $\bm x=(\bm v, \bm v', \bm v'')$ can be recovered from any $\bm y\in\mathcal D_2(\bm x)$, which proves that $\mathcal C$ is a two-deletion correcting code. Since $\bm u\in\mathbb Z_q^{n-1}$ and $\bm v=\mathcal E_{\text{Reg}}(\bm u)\in\mathbb Z_q^{n}$, $\bm v$ has $\log q$ bits of redundancy. Moreover, by Theorem \ref{thm-2del-sketch}, the length of $\bm v'$ is $$|\bm v'|=5\log n+O(\log q\log\log n)$$ bits and the length of $\bm v''$ is $$|\bm v''|=3(5\log |\bm v'|+O(\log q\log\log |\bm v'|))=O(\log\log n)$$ bits. So the total redundancy of $\bm x=\mathcal E(\bm u)$ is \begin{align*}\text{redundancy~of}~\mathcal C&=\log q+|\bm v'|+|\bm v''|\\&=5\log n+O(\log q\log\log n)\end{align*} in bits. By Lemma \ref{Enc-Reg-Bnry} and Lemma \ref{lem-En-B2Q}, the encoding complexity of $\bm v=\mathcal E_{\text{Reg}}(\bm u)$ is near-linear in $n$ with a polynomial size lookup table.
Moreover, by Theorem \ref{thm-2del-sketch}, the encoding complexity of $\bm v'=\mathcal E_{\text{Reg}}(\mathcal Q(f(\bm v)))$ and $\bm v''=\text{Rep}_3(\mathcal Q(f(\bm v')))$ is linear in $n$ and $\log n$, respectively. Therefore, the encoding complexity of $\mathcal E(\bm u)=(\bm v, \bm v', \bm v'')$ is near-linear in $n$ with a polynomial size lookup table, which completes the proof. \end{proof} \section{Binary Codes Correcting a Burst of at most $t$ Deletions} In this section, we present a construction of binary codes that are capable of correcting a burst of at most $t$ deletions, improving the Lenz-Polyanskii construction in \cite{Lenz20}. We assume that $t$ is a constant with respect to the code length $n$, and for notational simplicity, we use $\gamma_t$ to denote any constant that depends only on $t$. As in Section III, each binary sequence $\bm a$ is identified with the non-negative integer whose binary representation is $\bm a$, and summation and multiplication of binary strings and integers are performed in the set of integers. Recall that a string $\bm c\in\{0,1\}^n$ is called $(\bm p, \delta)$-dense if each substring of $\bm c$ of length $\delta$ contains at least one pattern $\bm p$. As stated in Section II, we take $$\delta=t2^{t+1}\log n$$ and $$\bm p=0^t1^t.$$ The basic idea of our construction is to replace the shifted VT code in the Lenz-Polyanskii construction with the function $\phi$ constructed in Lemma \ref{lem-Bnry-burst-Sima}. To apply the function $\phi$, we need to divide each binary string into substrings of length at most $2(\delta+t)$. Specifically, we denote $\delta'=\delta+t$ and let \begin{equation}\label{def-Li-intvl} L_i=\!\left\{\!\begin{aligned} &[(i-1)\delta'\!+1, (i+1)\delta'], ~\text{for}~i\in\!\{1,\cdots, \left\lceil n/\delta'\right\rceil-2\},\\ &[(i-1)\delta'\!+1, n], ~~~~~~~\!~~~\text{for}~i=\left\lceil n/\delta'\right\rceil-1, \end{aligned}\right.
\end{equation} where the intervals $L_i$, $i\in\{1,\cdots, \left\lceil n/\delta'\right\rceil-1\}$, are the index sets of the desired substrings. Then we can construct a function $\bar{f}^{\text{b}}$ over $\{0,1\}^n$ as follows, which is the main component of our construction of binary burst-deletion correcting codes. \textbf{Construction 2}: Let $\phi$ and $\mu$ be the functions constructed in Lemma \ref{lem-Bnry-burst-Sima} and Lemma \ref{lem-Bnry-burst-Lenz}, respectively. For each $\bm c\in\{0,1\}^n$, let \begin{align*} \bar{f}^{\text{b}}(\bm c)=\left(\mu(\bm c), \bar{g}^{(0)}(\bm c), \bar{g}^{(1)}(\bm c)\right),\end{align*} such that for each $\ell\in\{0,1\}$, \begin{align}\label{Constr2-g-b} \bar{g}^{(\ell)}(\bm c)=\sum_{\substack{i\in\{1,2,\cdots,\left\lceil n/\delta'\right\rceil-1\}:\\ i~\equiv~ \ell~\text{mod}~2}} \phi(c_{L_i})~\text{mod}~\overline{N}^{\text{b}},\end{align} where \begin{align*}\overline{N}^{\text{b}}\triangleq 2^{4\log(2\delta')+o(\log(2\delta'))}=2^{4\log\log n+\gamma_t+o(\log\log n)}.\footnotemark{}\end{align*} \footnotetext{Since $\delta'=\delta+t=t2^{t+1}(\log n+2^{-t-1})$, more accurately we would have $\overline{N}^{\text{b}}\triangleq 2^{4\log(2\delta')+o(\log(2\delta'))}=2^{4\log(\log n+2^{-t-1})+\gamma_t+o(\log\log n)}$. However, since $\overline{N}^{\text{b}}$ is an integer, for sufficiently large $n$ we can always write $\overline{N}^{\text{b}}\triangleq 2^{4\log(2\delta')+o(\log(2\delta'))}=2^{4\log\log n+\gamma_t+o(\log\log n)}$.} For Construction 2, we have the following theorem.
\begin{thm}\label{thm-b-burst-del-sketch} For each $\bm c\in\{0,1\}^n$, $\bar{f}^{\text{b}}(\bm c)$ is computable in linear time and the length $|\bar{f}^{\text{b}}(\bm c)|$ of $\bar{f}^{\text{b}}(\bm c)$ satisfies $$|\bar{f}^{\text{b}}(\bm c)|\leq\log n+8\log\log n+\gamma_t+o(\log\log n).$$ Moreover, if $\bm c$ is $(\bm p, \delta)$-dense, then given $\bar{f}^{\text{b}}(\bm c)$ and any $\bm b\in\mathcal B_{\leq t}(\bm c)$, one can uniquely recover $\bm c$. \end{thm} Before proving Theorem \ref{thm-b-burst-del-sketch}, we give some remarks on the properties of the sets $L_i$, $i=1,2,\cdots,\left\lceil n/\delta'\right\rceil-1$. \begin{rem}\label{rem-sets-Li} Similar to Remark \ref{rem-sets-Ji}, it is easy to see that \begin{itemize} \item[1)] For each interval $L\subseteq[n]$ of length at most $\delta'=\delta+t$, we can always find an $i_0\in\{1,2,\cdots,\left\lceil n/\delta'\right\rceil-1\}$ such that $L\subseteq L_{i_0}$. \item[2)] $L_i\cap L_{i'}=\emptyset$ for all $i,i'\in\{1,2,\cdots, \left\lceil n/\delta'\right\rceil-1\}$ such that $|i-i'|\geq 2$. \end{itemize} \end{rem} Now, we can prove Theorem \ref{thm-b-burst-del-sketch}. \begin{proof} Note that by Lemma \ref{lem-Bnry-burst-Lenz}, $\mu(\bm c)$ is computable in linear time. By Lemma \ref{lem-Bnry-burst-Sima}, each $\phi(c_{L_i})$ is computable in time $O(2^t(2\delta')^3)=O((\log n)^3)$, so $(\bar{g}^{(0)}(\bm c),\bar{g}^{(1)}(\bm c))$ is also computable in linear time. Hence, by Construction 2, $\bar{f}^{\text{b}}(\bm c)=\left(\mu(\bm c), \bar{g}^{(0)}(\bm c), \bar{g}^{(1)}(\bm c)\right)$ is computable in linear time.
Moreover, by Lemma \ref{lem-Bnry-burst-Lenz} and \eqref{Constr2-g-b}, the length $|\bar{f}^{\text{b}}(\bm c)|$ of $\bar{f}^{\text{b}}(\bm c)$ satisfies \begin{align*}|\bar{f}^{\text{b}}(\bm c)|&=|\mu(\bm c)|+ |\bar{g}^{(0)}(\bm c)|+|\bar{g}^{(1)}(\bm c)|\\&\leq\log n+3+2\big(4\log\log n+\gamma_t+o(\log\log n)\big)\\&=\log n+8\log\log n+\gamma_t+o(\log\log n).\end{align*} Suppose $\bm c$ is $(\bm p, \delta)$-dense and $\bm b\in\mathcal B_{\leq t}(\bm c)$. We need to prove that $\bm c$ can be uniquely recovered from $\bm b$ and $\bar{f}^{\text{b}}(\bm c)$. By Lemma \ref{lem-Bnry-burst-Lenz}, we can find an interval $L\subseteq[n]$ of length at most $\delta'=\delta+t$ such that $\bm b=c_{[n]\backslash D}$ for some interval $D\subseteq L$ of length $t'=|\bm c|-|\bm b|$. By 1) of Remark \ref{rem-sets-Li}, we can always find an $i_0\in\{1,2,\cdots,\left\lceil n/\delta'\right\rceil-1\}$ such that $L\subseteq L_{i_0}$. Denoting $L_{i_0}=[\lambda, \lambda']$, then we can obtain $$c_{[1,\lambda-1]}=b_{[1,\lambda-1]},$$ $$c_{[\lambda'+1,n]} =b_{[\lambda'-t'+1,n-t']}$$ and $$b_{[\lambda,\lambda'-t']}\in\mathcal B_{\leq t}(c_{[\lambda,\lambda']}).$$ Therefore, $c_{[1,\lambda-1]}$ and $c_{[\lambda'+1,n]}$ can be directly obtained from $\bm b$. In the following, we will show how to recover $c_{[\lambda,\lambda']}$ from $b_{[\lambda,\lambda'-t']}$ and $\bar{g}^{(\ell)}(\bm c)$ for some $\ell\in\{0,1\}$. By 2) of Remark \ref{rem-sets-Li}, for all $i\in\{1,2,\cdots,i_0-2\}$, we have $L_i\subseteq[1,\lambda-1]$, so $c_{L_i}$ can be obtained from $b_{[1,\lambda-1]}$ and hence $\phi(c_{L_i})$ can be computed. Similarly, for all $i\in\{i_0+2,\cdots,\left\lceil n/\delta'\right\rceil-1\}$, $c_{L_i}$ can be obtained from $b_{[\lambda'-t'+1,n-t']}$ and hence $\phi(c_{L_i})$ can be computed. Let $\ell_0\in\{0,1\}$ be such that $\ell_0~\equiv~ i_0~\text{mod}~2$.
By \eqref{Constr2-g-b}, we have \begin{align*} \phi(c_{L_{i_0}})\equiv\bar{g}^{(\ell_0)}(\bm c)-\sum_{\substack{i\in\{1,\cdots,\left\lceil n/\delta'\right\rceil-1\}:\\ i\!~\neq\!~ i_0\!~\text{and}\!~ i\!~\equiv\!~ \ell_0\!~\text{mod}\!~2}} \phi(c_{L_i})~\text{mod}~\overline{N}^{\text{b}}.\end{align*} By \eqref{def-Li-intvl}, $|L_{i_0}|\leq2\delta'=2(\delta+t)$, so by Lemma \ref{lem-Bnry-burst-Sima}, $\phi(c_{L_{i_0}})<2^{4\log(2\delta')+o(\log(2\delta'))}=\overline{N}^{\text{b}}$. Therefore, we actually have \begin{align*} \phi(c_{L_{i_0}})=\bar{g}^{(\ell_0)}(\bm c)-\sum_{\substack{i\in\{1,\cdots,\left\lceil n/\delta'\right\rceil-1\}:\\ i\!~\neq\!~ i_0\!~\text{and}\!~ i\!~\equiv\!~ \ell_0\!~\text{mod}\!~2}} \phi(c_{L_i})~\text{mod}~\overline{N}^{\text{b}}.\end{align*} Since $b_{[\lambda,\lambda'-t']}\in\mathcal B_{\leq t}(c_{[\lambda,\lambda']})$, again by Lemma \ref{lem-Bnry-burst-Sima}, we can recover $c_{[\lambda,\lambda']}$ from $b_{[\lambda,\lambda'-t']}$ and $\phi(c_{L_{i_0}})$. Note that we have obtained $c_{[1,\lambda-1]}=b_{[1,\lambda-1]}$ and $c_{[\lambda'+1,n]} =b_{[\lambda'-t'+1,n-t']}$, so $\bm c$ can be uniquely recovered, which completes the proof. \end{proof} Let $\mathcal S^{\text{b}}_n$ be the set of all $(\bm p, \delta)$-dense binary strings $\bm c\in\{0,1\}^{n}$, where $\bm p=0^t1^t$ and $\delta=t2^{t+1}\lceil\log n\rceil$. By Lemma \ref{lem-p-dense}, there is a one-to-one mapping that maps each binary string of length $n-1$ to a string in $\mathcal S^{\text{b}}_n$. For convenience, we denote this mapping by \begin{align}\label{E-B-Den}\mathcal E^{\text{b}}_{\text{Den}}:\{0,1\}^{n-1}\rightarrow\mathcal S^{\text{b}}_n.\end{align} Using the function $\bar{f}^{\text{b}}$ constructed in Construction 2, we can construct an encoding function of a binary code capable of correcting a burst of at most $t$ deletions.
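The modular trick at the heart of this recovery step is worth isolating: every term of the parity $\bar{g}^{(\ell_0)}(\bm c)$ except $\phi(c_{L_{i_0}})$ can be recomputed from the intact parts of $\bm b$, and since each hash value is smaller than the modulus, subtracting the known terms isolates the unknown one exactly. The following toy Python sketch illustrates only this mechanism; the stand-in hash is arbitrary and is not the function $\phi$ of Lemma \ref{lem-Bnry-burst-Sima}.

```python
# Toy version of the interleaved modular checksums of Construction 2.
# window_hash stands in for phi(c_{L_i}); all that matters for the
# recovery step is that every hash value is smaller than the modulus N.

def window_hash(bits):
    return int(bits, 2)

def parity_sums(windows, N):
    g = [0, 0]
    for i, w in enumerate(windows, start=1):   # windows are 1-indexed
        g[i % 2] = (g[i % 2] + window_hash(w)) % N
    return g                                   # [sum over even i, sum over odd i]

windows = ["1011", "0010", "1110", "0001", "0111", "1010"]
N = 16                                         # N exceeds every window hash
g0, g1 = parity_sums(windows, N)

i0, ell0 = 3, 3 % 2                            # suppose window 3 is unreadable
known = sum(window_hash(w) for i, w in enumerate(windows, start=1)
            if i != i0 and i % 2 == ell0)
recovered = ((g1 if ell0 == 1 else g0) - known) % N
assert recovered == window_hash(windows[i0 - 1])
```

The two parity classes are needed because the windows adjacent to $L_{i_0}$ overlap it and cannot be recomputed from $\bm b$, while windows of the same parity as $i_0$ are disjoint from $L_{i_0}$ by 2) of Remark \ref{rem-sets-Li}.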
Let $\bar{\mathcal E}^{\text{b}}$ be a function defined on $\{0,1\}^{n-1}$ of the form \begin{align}\label{BBtdel-enc-fun} \bar{\mathcal E}^{\text{b}}(\bm a)=(\bm b, \bm b', \bm b''), ~~\forall\!~\bm a\in\{0,1\}^{n-1}, \end{align} such that $\bm b=\mathcal E^{\text{b}}_{\text{Den}}(\bm a)$, $\bm b'=\mathcal E^{\text{b}}_{\text{Den}}(\bar{f}^{\text{b}}(\bm b))$ and $\bm b''=\text{Rep}_{t+1}(\bar{f}^{\text{b}}(\bm b'))$, where $\text{Rep}_{t+1}(\cdot)$ is the encoding function of the $(t+1)$-fold repetition code. \begin{thm}\label{thm-bbdel-enc} The code $\bar{\mathcal C}^{\text{b}}=\{\bar{\mathcal E}^{\text{b}}(\bm a): \bm a\in\{0,1\}^{n-1}\}$, where $\bar{\mathcal E}^{\text{b}}$ is given by \eqref{BBtdel-enc-fun}, is a binary code with redundancy $\log n+9\log\log n+\gamma_t+o(\log\log n)$ bits that is capable of correcting a burst of at most $t$ deletions. \end{thm} \begin{proof} Let $$\bm c=\bar{\mathcal E}^{\text{b}}(\bm a)=(\bm b, \bm b', \bm b'')\in\bar{\mathcal C}^{\text{b}},$$ where $\bm a\in\{0,1\}^{n-1}$, $\bm b=\mathcal E^{\text{b}}_{\text{Den}}(\bm a)$, $\bm b'=\mathcal E^{\text{b}}_{\text{Den}}(\bar{f}^{\text{b}}(\bm b))$ and $\bm b''=\text{Rep}_{t+1}(\bar{f}^{\text{b}}(\bm b'))$. Given any $\bm d\in\mathcal B_{\leq t}(\bm c)$ and denoting $t'=|\bm c|-|\bm d|$, we have $t'\leq t$, $d_{[1,m_1-t']}\in\mathcal B_{\leq t}(\bm b)$, $d_{[m_1,m_2-t']}\in\mathcal B_{\leq t}(\bm b')$ and $d_{[m_2,m_3-t']}\in\mathcal B_{\leq t}(\bm b'')$, where $m_1=|\bm b|, m_2=|(\bm b,\bm b')|$ and $m_3=|\bm c|=|(\bm b,\bm b',\bm b'')|$. First, since $\bm b''=\text{Rep}_{t+1}(\bar{f}^{\text{b}}(\bm b'))$ is a codeword of the $(t+1)$-fold repetition code, which can correct $t$ deletions, $\bar{f}^{\text{b}}(\bm b')$ can be recovered from $d_{[m_2,m_3-t']}$. Further, by Theorem \ref{thm-b-burst-del-sketch}, $\bm b'$ can be recovered from $d_{[m_1,m_2-t']}$ and $\bar{f}^{\text{b}}(\bm b')$, and so, since $\mathcal E^{\text{b}}_{\text{Den}}$ is one-to-one, $\bar{f}^{\text{b}}(\bm b)$ can be recovered from $\bm b'=\mathcal E^{\text{b}}_{\text{Den}}(\bar{f}^{\text{b}}(\bm b))$.
Finally, by Theorem \ref{thm-b-burst-del-sketch} again, $\bm b$ can be recovered from $d_{[1,m_1-t']}$ and $\bar{f}^{\text{b}}(\bm b)$. Thus, $\bm c=(\bm b, \bm b', \bm b'')$ can be recovered from any $\bm d\in\mathcal B_{\leq t}(\bm c)$, which proves that $\bar{\mathcal C}^{\text{b}}$ is capable of correcting a burst of at most $t$ deletions. Since $\bm a\in\{0,1\}^{n-1}$ and $\bm b=\mathcal E^{\text{b}}_{\text{Den}}(\bm a)\in\mathcal S^{\text{b}}_n\subseteq\{0,1\}^{n}$, $\bm b$ has one bit of redundancy. Moreover, by Theorem \ref{thm-b-burst-del-sketch}, the length of $\bm b'$ satisfies $$|\bm b'|\leq\log n+8\log\log n+\gamma_t+o(\log\log n)$$ bits and the length of $\bm b''$ satisfies \begin{align*} |\bm b''|&\leq\log |\bm b'|+8\log\log |\bm b'|+\gamma_t+o(\log\log |\bm b'|)\\&=\log\log n+\gamma_t+o(\log\log n)\end{align*} bits. So the total redundancy of $\bm c=\bar{\mathcal E}^{\text{b}}(\bm a)$ satisfies \begin{align*}\text{redundancy~of}~\bar{\mathcal C}^{\text{b}}&=1+|\bm b'|+|\bm b''|\\&\leq\log n+9\log\log n+\gamma_t+o(\log\log n)\end{align*} bits. \end{proof} \section{$q$-ary Codes Correcting a Burst of at most $t$ Deletions} In this section, we construct $q$-ary codes correcting a burst of at most $t$ deletions, where $q>2$ is an even integer. We assume that $q$ and $t$ are constant with respect to the code length $n$. As in Section III, we identify each binary string $\bm a$ with the positive integer whose binary representation is $\bm a$. As stated in Section II, we take $$\delta=t2^{t+1}\lceil\log n\rceil$$ and $$\bm p=0^t1^t.$$ A string $\bm c\in\{0,1\}^n$ is called $(\bm p, \delta)$-dense if each substring of $\bm c$ of length $\delta$ contains at least one occurrence of the pattern $\bm p$. For each $\bm x\in\mathbb Z_q^n$, let $M_{\bm{x}}=(c_{i,j})_{\lceil\log q\rceil\times n}$ be the matrix representation of $\bm x$ as defined by \eqref{q-B-Repr}. Then for each $t'\in[t]$, the deletion of $x_{i},x_{i+1},\cdots,x_{i+t'-1}$ results in the deletion of the columns $i,i+1,\cdots,i+t'-1$ of $M_{\bm{x}}$.
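As a concrete illustration of this column view, here is a minimal Python sketch; the bit ordering inside each column is our own assumption and need not match \eqref{q-B-Repr} exactly, since the point is only that a burst of symbol deletions in $\bm x$ induces the same burst of column deletions in every row of $M_{\bm x}$.

```python
import math

# Column view of a q-ary string: column i of M_x holds the bits of x_i
# (the bit order chosen here is an illustrative assumption), so a burst
# of t' symbol deletions in x deletes columns i, ..., i+t'-1 in every row.

def to_matrix(x, q):
    rows = math.ceil(math.log2(q))
    return [[(sym >> (rows - 1 - r)) & 1 for sym in x] for r in range(rows)]

def delete_burst(seq, i, t):
    # delete t consecutive entries starting at 0-indexed position i
    return seq[:i] + seq[i + t:]

q = 6                         # an even q > 2, as in this section
x = [5, 0, 3, 2, 4, 1, 5, 2]
y = delete_burst(x, 2, 3)     # a burst of three symbol deletions

M_x = to_matrix(x, q)
M_y = to_matrix(y, q)
assert all(M_y[r] == delete_burst(M_x[r], 2, 3) for r in range(len(M_x)))
```

This is why protecting the first row with a binary burst-deletion code, and the remaining rows with the window checksums below, suffices to recover the whole matrix.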
A basic idea is to protect the first row $\bm c=c_{1,[n]}$ by a burst-deletion correcting code. However, in general, even if $\bm c$ can be recovered from a given $\bm d\in\mathcal B_{\leq t}(\bm c)$, the location of the deleted symbols cannot be determined. For example, consider $\bm c=0111011011010010$ and $\bm d=0111011010010$. Then $\bm d$ can be obtained from $\bm c$ by deleting $c_3c_4c_5=110$, or deleting $c_4c_5c_6=101$. In fact, $\bm d$ can be obtained from $\bm c$ by deleting $c_ic_{i+1}c_{i+2}$ for all $i\in[3,10]$. To proceed, we need to consider periods of binary strings. Let $\ell$ and $m$ be two positive integers such that $\ell\leq m$. A string $\bm a\in\{0,1\}^m$ is said to have \emph{period} $\ell~($or $\bm a$ is called a period-$\ell$ string$)$ if $a_{i+\ell}=a_i$ for all $i\in[m-\ell]=\{1,2,\cdots,m-\ell\}$. Clearly, a run of $\bm c$ of length $m$ has period $\ell$ for any $\ell\in[m]$; a period-$2$ substring of $\bm c$ is either a run of $\bm c$ or an alternative substring of $\bm c$. \begin{lem}\label{lem-prd-idx-cap} Suppose $\bm c\in\{0,1\}^n$ is $(\bm p, \delta)$-dense. Given any $\bm d\in\mathcal B_{\leq t}(\bm c)$, it is possible to find an interval $K\subseteq[n]$ of length at most $\delta-1$ such that if $\bm d=c_{[n]\backslash D}$ and $D\subseteq[n]$ is an interval, then it always holds that $D\subseteq K$. \end{lem} \begin{proof} Since $\bm d\in\mathcal B_{\leq t}(\bm c)$, there is an interval $D'\subseteq[n]$ such that $\bm d=c_{[n]\backslash D'}$. Let $K\subseteq[n]$ be the interval such that $c_K$ is the maximal substring of $\bm c$ satisfying: 1) $c_K$ has period $t'=|\bm c|-|\bm d|$; 2) $c_K$ contains $c_{D'}$. We will prove that $D\subseteq K$ for any interval $D\subseteq[n]$ such that $\bm d=c_{[n]\backslash D}$. Suppose $D=[i_1, i_1+t'-1]$ and $D'=[i_2, i_2+t'-1]$. Without loss of generality, assume $i_1\leq i_2$.
Since $c_{[n]\backslash D}=\bm d=c_{[n]\backslash D'}$, we have \begin{align*} ~&c_1~\cdots ~c_{i_1-1}~c_{i_1+t'}~c_{i_1+t'+1}\!~\cdots~c_{i_2+t'-1} ~c_{i_2+t'}~\cdots ~c_n\\ =~&c_1~\cdots ~c_{i_1-1}~~c_{i_1}~~~~c_{i_1+1}~~~\!~\cdots ~~c_{i_2-1}~~~c_{i_2+t'}~\dots~c_n. \end{align*} By comparing the symbols of $c_{[n]\backslash D}$ and $c_{[n]\backslash D'}$ in each position, we can obtain $c_i=c_{i+t'}$ for each $i\in[i_1,i_2-1]$. So, $c_{[i_1,i_2+t'-1]}$ is a substring of $\bm c$ of period $t'$ and contains both $c_{D}$ and $c_{D'}$. As $c_K$ is the maximal substring of $\bm c$ of period $t'$ that contains $c_{D'}$, $c_{[i_1,i_2+t'-1]}$ is contained in $c_K$. Thus, $c_{D}$ is contained in $c_K$, which implies that $D\subseteq K$. Since $\bm c$ is $(\bm p, \delta)$-dense, where $\bm p=0^t1^t$, each substring of $\bm c$ of length $\delta$ contains at least one pattern $\bm p$. Note that for each $t'\in[t]$, we have $p_{t}=0\neq 1=p_{t+t'}$, so no substring of $\bm c$ of length $\delta$ can have period $t'$. In other words, the length of any period-$t'$ substring of $\bm c$ is at most $\delta-1$. Thus, the length of $c_K~($and the length of $K)$ is at most $\delta-1$. \end{proof} Let \begin{equation}\label{def-Ki-intvl} K_j=\!\left\{\!\begin{aligned} &[(j-1)\delta\!+1, (j+1)\delta], ~\text{for}~j\in\!\{1,\cdots, \left\lceil n/\delta\right\rceil-2\},\\ &[(j-1)\delta\!+1, n], ~~~~~~~\!~~\!~\text{for}~j=\left\lceil n/\delta\right\rceil-1.\vspace{12pt} \end{aligned}\right. \end{equation} \begin{rem}\label{rem-sets-Ki} Similar to Remark \ref{rem-sets-Ji}, it is easy to see that \begin{itemize} \item[1)] For any interval $K\subseteq[n]$ of length at most $\delta$, there is a $j_0\in\{1,2,\cdots,\left\lceil n/\delta\right\rceil-1\}$ such that $K\subseteq K_{j_0}$. \item[2)] $K_j\cap K_{j'}=\emptyset$ for all $j,j'\in\{1,2,\cdots, \left\lceil n/\delta\right\rceil-1\}$ such that $|j-j'|\geq 2$.
\end{itemize} \end{rem} Let $\phi$ be the function constructed by Lemma \ref{lem-Bnry-burst-Sima} and $\bar{f}^{\text{b}}$ be the function constructed in Construction 2. For each $\bm x\in\mathbb Z_q^n$, let $M_{\bm{x}}=(c_{i,j})_{\lceil\log q\rceil\times n}$ be the matrix representation of $\bm x$ as defined by \eqref{q-B-Repr}. We have the following construction. \textbf{Construction 3}: For each $\bm x\in\mathbb Z_q^n$ and each $j\in\{1,2,\cdots, \left\lceil n/\delta\right\rceil-1\}$, let $$\bar{h}_j(\bm x)=\left(\phi(c_{2,K_j}),\phi(c_{3,K_j}),\cdots,\phi(c_{\lceil\log q\rceil,K_j})\right)$$ and for each $\ell\in\{0,1\}$, let \begin{align}\label{Con3-h-ell} \bar{h}^{(\ell)}(\bm x)=\sum_{\substack{j\in\{1,2,\cdots, \left\lceil n/\delta\right\rceil-1\}:\\ j\!~\equiv\!~\ell~\text{mod}~2}} \bar{h}_j(\bm x)~\text{mod}~\overline{N},\end{align} where $$\overline{N}=q^{4\log\log n+o(\log\log n)+\gamma_t}.$$ Finally, let \begin{align}\label{Constr2-f} \bar{f}(\bm x)=\left(\bar{f}^{\text{b}}(c_{1,[n]}), \bar{h}^{(0)}(\bm x), \bar{h}^{(1)}(\bm x)\right).\end{align} We have the following theorem. \begin{thm}\label{thm-burst-del-sketch} For any $\bm x\in\mathbb Z_q^n$, $\bar{f}(\bm x)$ is computable in linear time, and when viewed as a binary string, the length $|\bar{f}(\bm x)|$ of $\bar{f}(\bm x)$ satisfies $$|\bar{f}(\bm x)|\leq\log n+8(\log q+1)\log\log n+o(\log q\log\log n)+\gamma_t,$$ where $\gamma_t$ is a constant depending only on $t$. Moreover, if $\bm c=c_{1,[n]}$ is $(\bm p, \delta)$-dense, then given $\bar{f}(\bm x)$ and any $\bm y\in\mathcal B_{\leq t}(\bm x)$, one can uniquely recover $\bm x$. \end{thm} \begin{proof} Note that by Theorem \ref{thm-b-burst-del-sketch}, $\bar{f}^{\text{b}}(c_{1,[n]})$ is computable in linear time. 
Moreover, by Lemma \ref{lem-Bnry-burst-Sima} and \eqref{def-Ki-intvl}, each $\phi(c_{i,K_j})$, $i\in[2,\lceil\log q\rceil]$, is computable in time $O(2^t(2\delta)^3)=O((\log n)^3)$, so by Construction 3, $\bar{h}^{(\ell)}(\bm x), \ell=0,1,$ are computable in linear time. Hence, $\bar{f}(\bm x)=\left(\bar{f}^{\text{b}}(c_{1,[n]}), \bar{h}^{(0)}(\bm x), \bar{h}^{(1)}(\bm x)\right)$ is computable in linear time. By Theorem \ref{thm-b-burst-del-sketch}, the length of $\bar{f}^{\text{b}}(c_{1,[n]})$ satisfies $$|\bar{f}^{\text{b}}(c_{1,[n]})|\leq\log n+8\log\log n+\gamma_t+o(\log\log n).$$ Moreover, by Construction 3, the lengths of $\bar{h}^{(\ell)}(\bm x), \ell=0,1,$ satisfy $$|\bar{h}^{(\ell)}(\bm x)|\leq\log\overline{N} =(4\log\log n+o(\log\log n)+\gamma_t)\log q.$$ Hence, the length of $\bar{f}(\bm x)$ satisfies \begin{align*} |\bar{f}(\bm x)|&=|\bar{f}^{\text{b}}(c_{1,[n]})|+|\bar{h}^{(0)}(\bm x)|+|\bar{h}^{(1)}(\bm x)|\\&\leq\log n+8(\log q+1)\log\log n+o(\log q\log\log n)\\&~~~+\gamma_t.\end{align*} It remains to prove that if $\bm c=c_{1,[n]}$ is $(\bm p, \delta)$-dense, then given $\bar{f}(\bm x)$ and any $\bm y\in\mathcal B_{\leq t}(\bm x)$, one can uniquely recover $\bm x$. To prove this, we first prove that $$\bar{h}_j(\bm x)<\overline{N}$$ for each $j\in\{1,2,\cdots, \left\lceil n/\delta\right\rceil-1\}$. In fact, by \eqref{def-Ki-intvl}, each $c_{i,K_j}$, $i\in[2,\lceil\log q\rceil]$, has length at most $2\delta=2t2^{t+1}\lceil\log n\rceil$, so by Lemma \ref{lem-Bnry-burst-Sima}, each $\phi(c_{i,K_j})$ has length at most $4\log(2\delta)+o(\log(2\delta))=4\log\log n+\gamma_t+o(\log\log n)$. Hence, by Construction 3, we have \begin{align*} |\bar{h}_j(\bm x)|&=|\left(\phi(c_{2,K_j}),\phi(c_{3,K_j}),\cdots,\phi(c_{\lceil\log q\rceil,K_j})\right)|\\&\leq(\lceil\log q\rceil-1)\left(4\log\log n+\gamma_t+o(\log\log n)\right)\\&<\log q\left(4\log\log n+\gamma_t+o(\log\log n)\right), \end{align*} which implies that $\bar{h}_j(\bm x)<q^{4\log\log n+\gamma_t+o(\log\log n)}=\overline{N}$.
Now, we prove that $\bm x$ can be uniquely recovered from $\bar{f}(\bm x)$ and any given $\bm y\in\mathcal B_{\leq t}(\bm x)$, provided that $\bm c=c_{1,[n]}$ is $(\bm p, \delta)$-dense. Let $$M_{\bm y}=(d_{i,j})_{\left\lceil\log q\right\rceil\times (n-t')}$$ be the matrix representation of $\bm y$. Since $\bm y\in\mathcal B_{\leq t}(\bm x)$ can be obtained from $\bm x$ by deleting $t'$ consecutive symbols from $\bm x$, where $t'=n-|\bm y|$ and $t'\in[t]$, $M_{\bm y}$ can be obtained from $M_{\bm x}$ by deleting $t'$ consecutive columns of $M_{\bm x}$. The process of recovering $\bm x$ from $\bar{f}(\bm x)=\left(\bar{f}^{\text{b}}(c_{1,[n]}), \bar{h}^{(0)}(\bm x), \bar{h}^{(1)}(\bm x)\right)$ and $\bm y$ consists of the following three steps. \textbf{Step 1}: Since $\bm c=c_{1,[n]}$ is $(\bm p, \delta)$-dense, by Theorem \ref{thm-b-burst-del-sketch}, $c_{1,[n]}$ can be recovered from $d_{1,[n-t']}$ and $\bar{f}^{\text{b}}(c_{1,[n]})$. \textbf{Step 2}: According to Lemma \ref{lem-prd-idx-cap}, there is an interval $K\subseteq[n]$ of length at most $\delta-1$ such that $d_{1,[n-t']}$ is obtained from $\bm c=c_{1,[n]}$ by deleting $t'$ consecutive symbols in $K$. Correspondingly, $M_{\bm y}$ is obtained from $M_{\bm x}$ by deleting $t'$ consecutive columns in $K$. By 1) of Remark \ref{rem-sets-Ki}, there is a $j_0\in\{1,2,\cdots,\left\lceil n/\delta\right\rceil-1\}$ such that $K\subseteq K_{j_0}$. Denote $K_{j_0}=[\lambda,\lambda']$. Then we have the following observations: \begin{itemize} \item[i)] $c_{i,[1,\lambda-1]}=d_{i,[1,\lambda-1]}$ for each $i\in[2,\lceil\log q\rceil]$. \item[ii)] $d_{i,[\lambda,\lambda'-t']}\in \mathcal B_{\leq t}(c_{i,[\lambda,\lambda']})$ for each $i\in[2,\lceil\log q\rceil]$. \item[iii)] $c_{i,[\lambda'+1,n]}=d_{i,[\lambda'-t'+1,n-t']}$ for each $i\in[2,\lceil\log q\rceil]$.
\end{itemize} By observations i) and iii), for each $i\in[2,\lceil\log q\rceil]$, $c_{i,[1,\lambda-1]}$ and $c_{i,[\lambda'+1,n]}$ can be directly obtained from $M_{\bm y}$. \textbf{Step 3}: For $j\in\{1,\cdots, j_0-2\}$, by 2) of Remark \ref{rem-sets-Ki}, we have $K_j\cap K_{j_0}=\emptyset$, so $K_j\subseteq[1,\lambda-1]$ and, by observation i), $\bar{h}_j(\bm x)=\left(\phi(c_{2,K_j}),\phi(c_{3,K_j}),\cdots,\phi(c_{\lceil\log q\rceil,K_j})\right)$ can be computed from $d_{i,[1,\lambda-1]}=c_{i,[1,\lambda-1]}$, $i=2,\cdots,\lceil\log q\rceil$. Similarly, for $j\in\{j_0+2,\cdots,\left\lceil n/\delta\right\rceil-1\}$, by 2) of Remark \ref{rem-sets-Ki}, we have $K_j\cap K_{j_0}=\emptyset$, so $K_j\subseteq[\lambda'+1,n]$ and, by observation iii), $\bar{h}_j(\bm x)=\left(\phi(c_{2,K_j}),\phi(c_{3,K_j}),\cdots,\phi(c_{\lceil\log q\rceil,K_j})\right)$ can be computed from $d_{i,[\lambda'-t'+1,n-t']}=c_{i,[\lambda'+1,n]}$, $i=2,\cdots,\lceil\log q\rceil$. Let $\ell_0\in\{0,1\}$ be such that $\ell_0\equiv j_0~\text{mod}~2$.
By \eqref{Con3-h-ell}, we can obtain \begin{align*}\bar{h}_{j_0}(\bm x)\equiv\bar{h}^{(\ell_0)}(\bm x)-\sum_{\substack{j\in\{1,2,\cdots, \left\lceil n/\delta\right\rceil-1\}\backslash\{j_0\}:\\ j\!~\equiv\!~\ell_0~\text{mod}~2}} \bar{h}_j(\bm x)~\text{mod}~\overline{N}.\end{align*} Note that we have proved that $\bar{h}_j(\bm x)<\overline{N}$ for each $j\in\{1,2,\cdots, \left\lceil n/\delta\right\rceil-1\}$, so we actually have \begin{align*}\bar{h}_{j_0}(\bm x)=\bar{h}^{(\ell_0)}(\bm x)-\sum_{\substack{j\in\{1,2,\cdots, \left\lceil n/\delta\right\rceil-1\}\backslash\{j_0\}:\\ j\!~\equiv\!~\ell_0~\text{mod}~2}} \bar{h}_j(\bm x)~\text{mod}~\overline{N},\end{align*} where by Construction 3, $$\bar{h}_{j_0}(\bm x)=\left(\phi(c_{2,K_{j_0}}),\phi(c_{3,K_{j_0}}),\cdots,\phi(c_{\lceil\log q\rceil,K_{j_0}})\right).$$ By observation ii) in Step 2, and by Lemma \ref{lem-Bnry-burst-Sima}, $c_{i,[\lambda,\lambda']}$, $i=2,\cdots,\lceil\log q\rceil$, can be recovered from $\bar{h}_{j_0}(\bm x)$ and $d_{i,[\lambda,\lambda'-t']}$, $i=2,\cdots,\lceil\log q\rceil$. By the above discussions, $M_{\bm x}~($and so $\bm x)$ can be uniquely recovered from $\bar{f}(\bm x)$ and any given $\bm y\in\mathcal B_{\leq t}(\bm x)$. \end{proof} Let $\mathcal S_n$ be the set of all $q$-ary strings $\bm x\in\mathbb Z_q^{n}$ such that the first row $c_{1,[n]}$ of $M_{\bm{x}}$ is $(\bm p, \delta)$-dense, where $M_{\bm{x}}=(c_{i,j})_{\lceil\log q\rceil\times n}$ is the matrix representation of $\bm x$. Let $$\mathcal E^{\text{b}}_{\text{Den}}:\{0,1\}^{n-1}\rightarrow\mathcal S^{\text{b}}_n$$ be the one-to-one mapping constructed as in \eqref{E-B-Den}, where $\mathcal S^{\text{b}}_n$ is the set of all $(\bm p, \delta)$-dense strings in $\{0,1\}^{n}$. 
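Checking membership in $\mathcal S^{\text{b}}_n$ is a straightforward sliding-window test. The Python sketch below uses toy parameters, far smaller than the paper's $\delta=t2^{t+1}\lceil\log n\rceil$, only to make the definition concrete.

```python
# Sliding-window test for (p, delta)-density: every window of length delta
# must contain the pattern p = 0^t 1^t.  Toy parameters for illustration.

def is_dense(c, t, delta):
    p = "0" * t + "1" * t
    return all(p in c[i:i + delta] for i in range(len(c) - delta + 1))

t, delta = 2, 8
assert is_dense("0011001100110011", t, delta)       # "0011" recurs often enough
assert not is_dense("0011111111110011", t, delta)   # the long 1-run has no "0011"
```

Density is what bounds the length of any periodic stretch of the first row, which is exactly what Lemma \ref{lem-prd-idx-cap} uses to localize the burst.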
By Lemma \ref{lem-En-B2Q}, the mapping $\mathcal E^{\text{b}}_{\text{Den}}$ can be extended to a one-to-one mapping, denoted by $$\mathcal E_{\text{Den}}:\mathbb Z_q^{n-1}\rightarrow\mathbb Z_q^{n},$$ such that for each $\bm u\in\mathbb Z_q^{n-1}$ and $\bm x=\mathcal E_{\text{Den}}(\bm u)$, if $M_{\bm{u}}=(b_{i,j})_{\lceil\log q\rceil\times(n-1)}$ and $M_{\bm{x}}=(c_{i,j})_{\lceil\log q\rceil\times n}$ are the matrix representations of $\bm u$ and $\bm x$, respectively, then $$c_{1,[n]}=\mathcal E^{\text{b}}_{\text{Den}}(b_{1,[n-1]}).$$ Since $\mathcal E^{\text{b}}_{\text{Den}}(b_{1,[n-1]})$ is $(\bm p, \delta)$-dense, for any $\bm u\in\mathbb Z_q^{n-1}$, we have $\bm x=\mathcal E_{\text{Den}}(\bm u)\in\mathcal S_n$. As in Section III, for each binary string $\bm a$, let $\mathcal Q(\bm a)$ be the $q$-ary representation of $\bm a$. Note that $\mathcal Q(\bm a)$ is a $q$-ary string of length $\left\lceil|\bm a|/\lfloor\log q\rfloor\right\rceil$. Using the function $\bar f$ constructed in Construction 3 and the mapping $\mathcal E_{\text{Den}}$, we can construct an encoding function of a $q$-ary code capable of correcting a burst of at most $t$ deletions. Let $\bar{\mathcal E}$ be a function defined on $\mathbb Z_q^{n-1}$ of the form \begin{align}\label{Btdel-enc-fun} \bar{\mathcal E}(\bm u)=(\bm v, \bm v', \bm v''), ~\forall\!~\bm u\in\mathbb Z_q^{n-1}, \end{align} where $\bm v=\mathcal E_{\text{Den}}(\bm u)$, $\bm v'=\mathcal E_{\text{Den}}(\mathcal Q(\bar f(\bm v)))$, $\bm v''=\text{Rep}_{t+1}(\mathcal Q(\bar f(\bm v')))$, and $\text{Rep}_{t+1}(\cdot)$ is the encoding function of the $(t+1)$-fold repetition code. \begin{thm}\label{thm-Btdel-enc} The code $\bar{\mathcal C}=\{\bar{\mathcal E}(\bm u): \bm u\in\mathbb Z_q^{n-1}\}$, where $\bar{\mathcal E}$ is given by \eqref{Btdel-enc-fun}, is a $q$-ary code capable of correcting a burst of at most $t$ deletions.
The redundancy of $\bar{\mathcal C}$ is at most $\log n+(8\log q+9)\log\log n+o(\log q\log\log n)+\gamma_t$ bits, where $\gamma_t$ is a constant depending only on $t$. \end{thm} \begin{proof} To prove that $\bar{\mathcal C}$ is capable of correcting a burst of at most $t$ deletions, we adopt a strategy similar to that in the proof of Theorem \ref{thm-2del-enc}. Specifically, let $$\bm x=\bar{\mathcal E}(\bm u)=(\bm v, \bm v', \bm v'')\in\bar{\mathcal C},$$ where $\bm v=\mathcal E_{\text{Den}}(\bm u)$, $\bm v'=\mathcal E_{\text{Den}}(\mathcal Q(\bar f(\bm v)))$ and $\bm v''=\text{Rep}_{t+1}(\mathcal Q(\bar f(\bm v')))$. Given any $\bm y\in\mathcal B_{\leq t}(\bm x)$ and denoting $t'=|\bm x|-|\bm y|$, we have $t'\leq t$, $y_{[1,m_1-t']}\in\mathcal B_{\leq t}(\bm v)$, $y_{[m_1,m_2-t']}\in\mathcal B_{\leq t}(\bm v')$ and $y_{[m_2,m_3-t']}\in\mathcal B_{\leq t}(\bm v'')$, where $m_1=|\bm v|, m_2=|(\bm v,\bm v')|$ and $m_3=|\bm x|=|(\bm v,\bm v',\bm v'')|$. First, since $\bm v''=\text{Rep}_{t+1}(\mathcal Q(\bar f(\bm v')))$ is a codeword of the $(t+1)$-fold repetition code, which can correct $t$ deletions, $\mathcal Q(\bar f(\bm v'))$, and so $\bar f(\bm v')$, can be recovered from $y_{[m_2,m_3-t']}$. Further, by Theorem \ref{thm-burst-del-sketch}, $\bm v'$ can be recovered from $y_{[m_1,m_2-t']}$ and $\bar{f}(\bm v')$, and so $\bar{f}(\bm v)$ can be recovered from $\bm v'=\mathcal E_{\text{Den}}(\mathcal Q(\bar f(\bm v)))$. Finally, by Theorem \ref{thm-burst-del-sketch} again, $\bm v$ can be recovered from $y_{[1,m_1-t']}$ and $\bar{f}(\bm v)$. Thus, $\bm x=(\bm v, \bm v', \bm v'')$ can be recovered from any $\bm y\in\mathcal B_{\leq t}(\bm x)$, which proves that $\bar{\mathcal C}$ is capable of correcting a burst of at most $t$ deletions. Since $\bm u\in\mathbb Z_q^{n-1}$ and $\bm v=\mathcal E_{\text{Den}}(\bm u)\in\mathcal S_n\subseteq\mathbb Z_q^{n}$, $\bm v$ has $\log q$ bits of redundancy.
Moreover, by Theorem \ref{thm-burst-del-sketch}, the length of $\bm v'$ satisfies $$|\bm v'|\leq\log n+8(\log q+1)\log\log n+o(\log q\log\log n)+\gamma_t$$ bits and the length of $\bm v''$ satisfies \begin{align*}|\bm v''|&\leq\log|\bm v'|+8(\log q+1)\log\log |\bm v'|+o(\log q\log\log |\bm v'|)\\&~~~+\gamma_t\\&=\log\log n+\gamma_t+o(\log q\log\log n)\end{align*} bits. So the total redundancy of $\bm x=\bar{\mathcal E}(\bm u)$ satisfies \begin{align*}\text{redundancy~of}~\bar{\mathcal C}&=\log q+|\bm v'|+|\bm v''|\\&\leq\log n+(8\log q+9)\log\log n\\&~~~+o(\log q\log\log n)+\gamma_t\end{align*} bits. \end{proof} \section{Conclusions} We constructed systematic $q$-ary two-deletion correcting codes and $q$-ary burst-deletion correcting codes, where $q\geq 2$ is an even integer. For $q$-ary two-deletion codes, the redundancy of our construction is $\log n$ bits higher than that of the best known explicit binary codes and lower than that of all existing explicit $q$-ary codes. For $q$-ary burst-deletion codes, our construction is scaling-optimal in redundancy. It is an interesting problem to generalize our constructions to odd $q$. Another interesting problem is to construct explicit $q$-ary $t$-deletion correcting codes that improve upon the state-of-the-art constructions in redundancy. \appendices \section{Proof of Lemma \ref{lem-2del-pstn}} In this appendix, we prove Lemma \ref{lem-2del-pstn}. We first need to prove the following lemma. \begin{lem}\label{lem-2del-yc} Suppose $\bm{c}\in\{0,1\}^n$ and $\{j_1,j_2\}, \{j_1',j_2'\}\subseteq[n]$ such that $j_1<j_2, j'_1<j'_2$ and $j_1\leq j_1'$. If $c_{[n]\backslash\{j_1,j_2\}} =c_{[n]\backslash\{j'_1,j'_2\}}$, then one of the following holds: \begin{itemize} \item[1)] $c_{j_1},c_{j'_1}$ are in the same run of $\bm{c}$, and $c_{j_2},c_{j'_2}$ are in the same run of $\bm{c}$.
\item[2)] There is an alternative substring $c_{[s_1,s_2]}$ of $\bm c$ of length $\geq 3$ such that $c_{j_1}$ and $c_{s_1}$ are in the same run of $\bm{c}$, $j_2=s_1+1$, $j_1'=s_2-1$ and $c_{j_2'}$ and $c_{s_2}$ are in the same run of $\bm{c}$. \end{itemize} \end{lem} \begin{proof} If $\{j_1,j_2\}=\{j'_1,j'_2\}$, then 1) holds. In the following, we assume that $\{j_1,j_2\}\neq\{j'_1,j'_2\}$. Since $c_{[n]\backslash\{j_1,j_2\}} =c_{[n]\backslash\{j'_1,j'_2\}}$, we can denote $\bm b=c_{[n]\backslash\{j_1,j_2\}} =c_{[n]\backslash\{j'_1,j'_2\}}.$ From $\bm b=c_{[n]\backslash\{j_1,j_2\}}$ we can obtain \begin{equation}\label{2-del-yc1} b_i=\!\left\{\!\begin{aligned} &c_i, ~~~~\text{for}~i\in[1,j_1-1],\\ &c_{i+1}, ~\text{for}~i\in[j_1,j_2-2],\\ &c_{i+2}, ~\text{for}~i\in[j_2-1,n-2]. \end{aligned}\right. \end{equation} Similarly, from $\bm b=c_{[n]\backslash\{j'_1,j'_2\}}$ we can obtain \begin{equation}\label{2-del-yc2} b_i=\!\left\{\!\begin{aligned} &c_i, ~~~~\text{for}~i\in[1,j'_1-1],\\ &c_{i+1}, ~\text{for}~i\in[j'_1,j'_2-2],\\ &c_{i+2}, ~\text{for}~i\in[j'_2-1,n-2]. \end{aligned}\right. \end{equation} Since $j_1<j_2$, $j'_1<j'_2$ and $j_1\leq j_1'$, we can divide the discussion into the following three cases: Case 1: $j_1<j_2\leq j'_1<j'_2$. Combining \eqref{2-del-yc1} and \eqref{2-del-yc2}, we can obtain \begin{align}\label{eq1-2del-cmpr} \bm b=~&c_1\cdots c_{j_1-1}c_{j_1+1}c_{j_1+2}\cdots c_{j_2-1}c_{j_2+1}c_{j_2+2}c_{j_2+3}c_{j_2+4}\cdots c_{j'_1-2}c_{j'_1-1}~c_{j'_1}~~c_{j'_1+1}c_{j'_1+2}c_{j'_1+3}\cdots c_{j'_2-1}~c_{j'_2}~~c_{j'_2+1}\cdots c_n\nonumber\\ =~&c_1\cdots c_{j_1-1}~c_{j_1}~~c_{j_1+1}\cdots c_{j_2-2}c_{j_2-1}~c_{j_2}~~c_{j_2+1}c_{j_2+2}\cdots c_{j'_1-4}c_{j'_1-3}c_{j'_1-2} c_{j'_1-1}c_{j'_1+1}c_{j'_1+2}\cdots c_{j'_2-2}c_{j'_2-1}c_{j'_2+1}\cdots c_n.\end{align} We further need to divide this case into the following two subcases. Case 1.1: $j_1'-j_2$ is odd.
By comparing the symbols of the corresponding positions in $c_{[n]\backslash\{j_1,j_2\}}$ and $c_{[n]\backslash\{j'_1,j'_2\}}$, we can obtain \begin{align*} c_{j_1}&=c_{j_1+1}=\cdots=c_{j_2-2}=c_{j_2-1}=c_{j_2+1}=c_{j_2+3}=\cdots\\& =c_{j'_1-4}=c_{j'_1-2}=c_{j'_1}\end{align*} and \begin{align*} c_{j_2}&=c_{j_2+2}=c_{j_2+4}=\cdots=c_{j_1'-3}=c_{j_1'-1}=c_{j_1'+1}\\ &=c_{j_1'+2}=c_{j_1'+3}=\cdots=c_{j'_2}.\end{align*} Case 1.2: $j_1'-j_2$ is even. By comparing the symbols of the corresponding positions in $c_{[n]\backslash\{j_1,j_2\}}$ and $c_{[n]\backslash\{j'_1,j'_2\}}$, we can obtain \begin{align*} c_{j_1}&=c_{j_1+1}=c_{j_1+2}=\cdots=c_{j_2-2}=c_{j_2-1}=c_{j_2+1}\\& =c_{j_2+3}=\cdots=c_{j'_1-3}=c_{j'_1-1}=c_{j'_1+1}=c_{j'_1+2}\\ &=c_{j'_1+3}=\cdots=c_{j'_2-1}=c_{j'_2}\end{align*} and \begin{align*} c_{j_2}&=c_{j_2+2}=c_{j_2+4}=\cdots=c_{j_1'-4}=c_{j_1'-2}=c_{j_1'}.\end{align*} In both subcases, we can see that \begin{itemize} \item If $c_{j_1}=c_{j_2}$, then we have $c_{j_1}=c_{j_1+1}=\cdots=c_{j_2'}$, so $c_{j_1},c_{j_2},c_{j'_1}$ and $c_{j'_2}$ are in the same run of $\bm c$, which implies that 1) of Lemma \ref{lem-2del-yc} holds. \item If $c_{j_1}\neq c_{j_2}$, then $c_{[j_2-1,j_1'+1]}$ is an alternative substring of $\bm c$ of length $\geq 3$. By letting $s_1=j_2-1$ and $s_2=j_1'+1$, we obtain 2) of Lemma \ref{lem-2del-yc}.\end{itemize} Example \ref{exam-2del-c1} illustrates this case. Case 2: $j_1\leq j'_1<j_2\leq j'_2$.
In this case, combining \eqref{2-del-yc1} and \eqref{2-del-yc2}, we can obtain \begin{align}\label{eq2-2del-cmpr} \bm b= &c_1\cdots c_{j_1-1}c_{j_1+1}c_{j_1+2}\cdots c_{j'_1-1}~c_{j'_1}~~c_{j'_1+1}c_{j'_1+2}\cdots c_{j_2-1}c_{j_2+1}c_{j_2+2}c_{j_2+3}\cdots c_{j'_2-1}~c_{j'_2}~~c_{j'_2+1}\cdots c_n\nonumber\\ =&c_1\cdots c_{j_1-1}~c_{j_1}~~c_{j_1+1}\cdots c_{j_1'-2}c_{j_1'-1}c_{j_1'+1}c_{j_1'+2}\cdots c_{j_2-1}~c_{j_2}~~c_{j_2+1}c_{j_2+2}\cdots c_{j'_2-2}c_{j'_2-1}c_{j'_2+1}\cdots c_n,\end{align} from which we see that $$c_{j_1}=c_{j_1+1}=\cdots=c_{j_1'}$$ and $$c_{j_2}=c_{j_2+1}=\cdots=c_{j'_2}.$$ Hence, 1) of Lemma \ref{lem-2del-yc} holds. Case 3: $j_1\leq j'_1<j'_2<j_2$. In this case, combining \eqref{2-del-yc1} and \eqref{2-del-yc2}, we can obtain \begin{align}\label{eq3-2del-cmpr} \bm b= &c_1\cdots c_{j_1-1}c_{j_1+1}c_{j_1+2}\cdots c_{j'_1-1}~c_{j'_1}~~c_{j'_1+1}c_{j'_1+2}\cdots c_{j_2'-1}~c_{j_2'}~~c_{j_2'+1}c_{j_2'+2}\cdots c_{j_2-1}c_{j_2+1}c_{j_2+2}\cdots c_n\nonumber\\ =&c_1\cdots c_{j_1-1}~c_{j_1}~~c_{j_1+1}\cdots c_{j_1'-2}c_{j_1'-1}c_{j_1'+1}c_{j_1'+2}\cdots c_{j_2'-1}c_{j_2'+1}c_{j_2'+2}c_{j_2'+3}\cdots ~c_{j_2}~~c_{j_2+1}c_{j_2+2}\cdots c_n,\end{align} from which we can see that $$c_{j_1}=c_{j_1+1}=\cdots=c_{j_1'}$$ and $$c_{j_2'}=c_{j_2'+1}=\cdots=c_{j_2}.$$ Hence, 1) of Lemma \ref{lem-2del-yc} holds. \end{proof} \begin{exam}\label{exam-2del-c1} To illustrate Case 1 in the proof of Lemma \ref{lem-2del-yc}, let us consider the following two examples. \begin{itemize} \item Consider $\bm c=01000101011110$. Let $j_1=3$, $j_2=6$, $j_1'=9$ and $j_2'=12$. Then $c_{[n]\backslash\{j_1,j_2\}} =c_{[n]\backslash\{j'_1,j'_2\}}=010001011110$ and $j_1'-j_2=9-6=3$ is odd. We can find that $c_{j_1}=\cdots=c_{j_2-1}=c_{j_2+1}=c_{j_2+3}=\cdots=c_{j'_1}=0$ and $c_{j_2}=c_{j_2+2}=\cdots=c_{j'_1-1}=c_{j'_1+1}=\cdots=c_{j'_2}=1$. \item Consider $\bm c=01000101010001$. Let $j_1=3$, $j_2=6$, $j_1'=10$ and $j_2'=12$.
Then $c_{[n]\backslash\{j_1,j_2\}} =c_{[n]\backslash\{j'_1,j'_2\}}=010001010001$ and $j_1'-j_2=10-6=4$ is even. We can find that $c_{j_1}=\cdots=c_{j_2-1}=c_{j_2+1}=c_{j_2+3}=\cdots=c_{j'_1-1} =c_{j'_1+1}=c_{j'_1+2}=\cdots=c_{j'_2}=0$ and $c_{j_2}=c_{j_2+2}=\cdots=c_{j'_1}=1$. \end{itemize} \end{exam} Now, we can prove Lemma \ref{lem-2del-pstn}. \begin{proof}[Proof of Lemma \ref{lem-2del-pstn}] Suppose $\bm c\in\{0,1\}^n$ is regular and $\bm b\in\{0,1\}^{n-2}$ such that $\bm b$ can be obtained from $\bm c$ by deleting two symbols of $\bm c$. Then we can always find two symbols of $\bm c$, say $c_{j_1}$ and $c_{j_2}~(j_1<j_2)$, such that $\bm b=c_{[n]\backslash\{j_1,j_2\}}$. First, suppose $c_{j_1}$ and $c_{j_2}$ are in the same run of $\bm c$. Then for any $\{j_1',j_2'\}\subseteq[n]$ such that $\bm b=c_{[n]\backslash\{j'_1,j'_2\}}$, it is easy to see that 2) of Lemma \ref{lem-2del-yc} cannot hold. $($Otherwise, there is an alternative substring $c_{[s_1,s_2]}$ of $\bm c$ of length $\geq 3$ such that $c_{j_1}$, $c_{s_1}$ are in the same run of $\bm{c}$ and $j_2=s_1+1$, which implies that $c_{j_1}=c_{s_1}\neq c_{s_1+1}=c_{j_2}$, which contradicts the assumption that $c_{j_1}$ and $c_{j_2}$ are in the same run of $\bm c$.$)$ Therefore, 1) of Lemma \ref{lem-2del-yc} must hold, which implies that there is a run $c_{J}$ of $\bm c$, where $J\subseteq[n]$ is an interval, such that $\bm b$ is obtained from $\bm c$ by deleting two symbols in $c_{J}$. Since $\bm c\in\{0,1\}^n$ is regular, by Remark \ref{rem-Regu-length}, the length of $c_{J}$ is at most $d\log n<\rho=3d\log n$. Thus, 2) of Lemma \ref{lem-2del-pstn} holds. In the following, we suppose that $c_{j_1}$ and $c_{j_2}~(j_1<j_2)$ are in two different runs of $\bm c$. Specifically, suppose $j_1\in J=[i_1,i_2]\subseteq[n]$ and $j_2\in J'=[i'_1,i'_2]\subseteq[n]$ such that $c_{J}$ and $c_{J'}$ are two different runs of $\bm c$.
Since $j_1<j_2$, we have $$i_1\leq i_2<i_1'\leq i_2'.$$ We need to consider the following two cases. Case 1: $i_1'>i_2+1$. Since $j_2\in J'=[i'_1,i'_2]$, we have $j_2\geq i_1'>i_2+1$. For any $\{j_1',j_2'\}\subseteq[n]$ such that $\bm b=c_{[n]\backslash\{j'_1,j'_2\}}$, it is easy to see that 2) of Lemma \ref{lem-2del-yc} cannot hold. $($Otherwise, there is an alternative substring $c_{[s_1,s_2]}$ of $\bm c$ of length $\geq 3$ such that $c_{j_1}$, $c_{s_1}$ are in the same run of $\bm{c}$ and $j_2=s_1+1$, which implies that $s_1=i_2$ and $j_2=s_1+1=i_2+1$, which contradicts the fact that $j_2\geq i_1'>i_2+1.)$ Therefore, 1) of Lemma \ref{lem-2del-yc} must hold, which implies that $j_1'\in J=[i_1,i_2]$ and $j_2'\in J'=[i'_1,i'_2]$. Thus, 1) of Lemma \ref{lem-2del-pstn} holds. Case 2: $i_1'=i_2+1$. We need to consider the following two subcases. Case 2.1: $|J|\geq 2$ and $|J'|\geq 2$. Then for any $\{j_1',j_2'\}\subseteq[n]$ such that $\bm b=c_{[n]\backslash\{j'_1,j'_2\}}$, it is easy to see that 2) of Lemma \ref{lem-2del-yc} cannot hold because no such alternative substring $c_{[s_1,s_2]}$ of $\bm c$ can be found. Therefore, 1) of Lemma \ref{lem-2del-yc} must hold, which implies that $j_1'\in J=[i_1,i_2]$ and $j_2'\in J'=[i'_1,i'_2]$. Thus, 1) of Lemma \ref{lem-2del-pstn} holds. Case 2.2: $|J|=1$ or $|J'|=1$. Without loss of generality, assume $|J|=1$. Then $i_1=i_2$ and $c_{i_1-1}c_{i_1}c_{i_1+1}$ is an alternative substring of $\bm c$. Let $c_{[\lambda_1,\lambda_2]}$ be the maximal alternative substring of $\bm c$ that contains $c_{i_1-1}c_{i_1}c_{i_1+1}$, where $[\lambda_1,\lambda_2]\subseteq[n]$ is an interval. Let $c_{[\lambda_0,\lambda_1]}~($if $\lambda_1>1)$ and $c_{[\lambda_2,\lambda_3]}~($if $\lambda_2<n)$ be two runs of $\bm c$. For any $\{j_1',j_2'\}\subseteq[n]$ such that $\bm b=c_{[n]\backslash\{j'_1,j'_2\}}$, by Lemma \ref{lem-2del-yc}, we have $\{j_1',j_2'\}\subseteq[\lambda_0,\lambda_3]$.
Since $\bm c\in\{0,1\}^n$ is regular, by Remark \ref{rem-Regu-length}, the length of the alternative substring $c_{[\lambda_1,\lambda_2]}$ of $\bm c$ is at most $d\log n$, and the lengths of the runs $c_{[\lambda_0,\lambda_1]}$, $c_{[\lambda_2,\lambda_3]}$ of $\bm c$ are both at most $d\log n$. Hence, the length of $c_{[\lambda_0,\lambda_3]}$ is at most $\rho=3d\log n$. Thus, 2) of Lemma \ref{lem-2del-pstn} holds. By the above discussion, we have proved that exactly one of the two claims of Lemma \ref{lem-2del-pstn} holds. \end{proof} As an example, suppose $\bm c=011000101011110100$. We consider the following cases. \begin{itemize} \item If $\bm b=0110101011110100$, then $\bm b$ can be obtained from $\bm c$ by deleting two symbols in the run $c_{[4,6]}=000$. \item If $\bm b=0110010011110100$, then $\bm b$ can be obtained from $\bm c$ by deleting one symbol in the run $c_{[4,6]}=000$ and one symbol in the run $c_{[9,9]}=1$. This case is an example of Case 1 in the proof of Lemma \ref{lem-2del-pstn}. \item If $\bm b=0100101011110100$, then $\bm b$ can be obtained from $\bm c$ by deleting one symbol in the run $c_{[2,3]}=11$ and one symbol in the run $c_{[4,6]}=000$. This case is an example of Case 2.1 in the proof of Lemma \ref{lem-2del-pstn}. \item If $\bm b=0110001011110100$, then $\bm b$ can be obtained from $\bm c$ by deleting two symbols in the substring $c_{[4,14]}=00010101111$, which may be any of the following cases: i) one symbol in the run $c_{[4,6]}=000$ and the symbol $c_{7}=1$; ii) the symbols $c_i,c_{i+1}$ for $i\in\{7,8,9\}$; iii) one symbol in the run $c_{[11,14]}=1111$ and the symbol $c_{10}=0$. This case is an example of Case 2.2 in the proof of Lemma \ref{lem-2del-pstn}. \end{itemize}
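The examples above can be checked by brute force. The following sketch (the helper names are ours, not from the text) enumerates all position pairs whose deletion yields a given subsequence, using 1-indexed positions as in the lemma:

```python
def delete_two(c, j1, j2):
    """Delete the symbols at 1-indexed positions j1 < j2 from c."""
    return "".join(ch for k, ch in enumerate(c, start=1) if k not in (j1, j2))

def deletion_pairs(c, b):
    """All pairs (j1, j2) such that deleting positions j1 and j2 from c gives b."""
    n = len(c)
    return [(j1, j2) for j1 in range(1, n) for j2 in range(j1 + 1, n + 1)
            if delete_two(c, j1, j2) == b]

c = "011000101011110100"

# Two deletions inside the single run c_[4,6] = 000 (claim 2 of the lemma:
# all valid deletion pairs are confined to a short window):
print(deletion_pairs(c, "0110101011110100"))  # [(4, 5), (4, 6), (5, 6)]

# One deletion in the run c_[2,3] = 11 and one in c_[4,6] = 000 (Case 2.1,
# claim 1: every valid pair has j1 in [2,3] and j2 in [4,6]):
print(deletion_pairs(c, "0100101011110100"))
```

Running this on the third example of the list confirms that all six pairs $(j_1',j_2')$ with $j_1'\in[2,3]$ and $j_2'\in[4,6]$ produce the same subsequence, exactly as claim 1 of the lemma predicts.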
\section{Introduction} The origin of mass is closely related to the mechanism of electroweak symmetry breaking (EWSB), which is believed to give masses to matter and gauge bosons. The simplest implementation of EWSB in the standard model (SM) is to introduce a Higgs doublet field \cite{Higgs:1964pj,Englert:1964et,Guralnik:1964eu}. A neutral scalar Higgs boson was then discovered in July 2012 \cite{ATLAS:2012yve,CMS:2012qbp}, which is believed to play the role of EWSB. With all the data accumulated up to 2018, the scalar boson is best described by the SM Higgs boson \cite{Cheung:2013kla,Cheung:2018ave}. However, the SM Higgs sector cannot be the complete theory because of the gauge hierarchy problem. There is no {\it a priori} reason why the EWSB sector should contain only one Higgs doublet field. Indeed, many extensions of the EWSB sector contain more Higgs fields. One of the best ways to probe the structure of the Higgs sector is to probe the Higgs self-couplings, because the self-couplings of the Higgs boson are very different among the SM, two-Higgs-doublet models (2HDMs), the MSSM, and composite Higgs models. One of the probes of the Higgs self-couplings is Higgs-pair production via gluon fusion at the LHC~\cite{Glover:1987nx,Dicus:1987ic,Plehn:1996wb,Djouadi:1999rca,Dawson:1998py,Baur:2002qd,Binoth:2006ym,Baur:2003gpa,Baglio:2012np,Grigo:2013rya,Barger:2013jfa}. There have been a large number of works in the literature on Higgs-pair production beyond the SM (see for example \cite{Lu:2015jza} and references therein). The predictions of various models are quite different, such that the production rates can give valuable information on the self-coupling $\lambda_{3H}$ or on the presence of heavier Higgs bosons, which can enhance the production rates for Higgs boson pairs. The Higgs-pair production process receives contributions from both the triangle and box diagrams, which interfere with each other.
In the 2HDM, the triangle diagram can involve the trilinear Higgs self-couplings $\lambda_{Hhh}$ and $\lambda_{hhh}$. In particular, the resonance effect of the heavier CP-even Higgs boson can substantially enhance the production rate of $hh$ pairs. The production rate largely depends on the parameters of the 2HDM, such as $\cos(\beta-\alpha)$, $\tan \beta$, and $m_{12}^2$, in addition to $M_H$ and $\Gamma_H$. In this work, we study the signal process $pp \to h h \to (b\bar b) (b\bar b)$ via gluon fusion against the SM multijet background. It is well known that the signal is overwhelmingly buried under the multijet background. Studies using the conventional cut-based approach did not achieve sufficient significance even at the High-Luminosity LHC (HL-LHC). We make use of the boosted nature of the final-state Higgs boson pair $hh$, which arises from the decay of the heavier CP-even Higgs boson. A specific classifier was developed in Ref.~\cite{Chung:2020ysf}, which can be employed to significantly enhance the signal-to-background ratio. We show that the Three-stream Convolutional Neural Network (3CNN) can substantially improve the significance of the signal compared to the Boosted Decision Tree (BDT) method and the conventional cut-based approach. At the end of the analysis, we show the 95\% sensitivity coverage of the parameter space of the 2HDM Type I, II, III, and IV. The current study focuses on the boosted region of the Higgs boson pair, for which the classifier that we employ is very effective in reducing the SM multijet background. In the literature, there are existing analyses probing the Higgs self-coupling in channels such as $hh \to b\bar b \gamma\gamma$ \cite{Chang:2018uwu,Chang:2019ncg}, $hh \to b\bar b W W^*$ \cite{Kim:2018cxf,Papaefstathiou:2012qe}, and $ hh \to b\bar b b\bar b$~\cite{Amacker:2020bmn}, as well as resonance searches \cite{Adhikary:2018ise}. The organization of this paper is as follows.
In the next section, we briefly describe the 2HDMs and the parameters relevant for Higgs-pair production. In Sec.~\ref{sec:Sample}, we describe the signal and background processes, including the sample generation and event selections. In Sec.~\ref{sec:classifiers}, we introduce the machine-learning approaches, including the BDT and the 3CNN. In Sec.~\ref{sec:Sensitivity}, we scan the parameter space of the 2HDMs for the sensitivity coverage at the HL-LHC, as well as the current restrictions on the parameter space from \textsc{HiggsSignals} and \textsc{HiggsBounds}. We conclude in Sec.~\ref{sec:conclusion}. \section{Two Higgs Doublet Models}\label{sec:THDM} The two-Higgs-doublet model is an extension of the SM obtained by adding another complex Higgs doublet field, so that the Higgs sector consists of $\Phi_1$ and $\Phi_2$~\cite{Branco:2011iw}: \begin{equation} \Phi_i = \left( \begin{array}{c} w_i^+ \\[3pt] \dfrac{v_i + h_i + i \eta_i }{ \sqrt{2}} \end{array} \right), \quad i=1,2, \end{equation} where $v_{1}$ and $v_2$ are the vacuum expectation values (VEVs) of $\Phi_1$ and $\Phi_2$, respectively. The ratio of these two VEVs is defined by $\tan \beta \equiv v_2/v_1$. Dangerous tree-level flavor-changing neutral currents (FCNCs) are avoided by imposing a discrete $Z_2$ symmetry, under which $\Phi_1 \to \Phi_1$ and $\Phi_2 \to -\Phi_2$~\cite{Glashow:1976nt,Paschos:1976ay}.
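As a quick numerical illustration of these definitions, the individual VEVs can be recovered from $\tan\beta$ and the electroweak VEV $v \simeq 246$~GeV, using the standard relation $v^2 = v_1^2 + v_2^2$ (not spelled out above); the value $\tan\beta = 5$ matches the benchmark used later in this paper:

```python
import math

v = 246.0        # electroweak VEV in GeV (standard value, not quoted in the text)
tan_beta = 5.0   # benchmark value used later in this paper

beta = math.atan(tan_beta)
v1, v2 = v * math.cos(beta), v * math.sin(beta)

# The two VEVs reproduce tan(beta) = v2/v1 and v^2 = v1^2 + v2^2.
print(round(v1, 2), round(v2, 2))  # 48.24 241.22
```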
The scalar potential with softly broken $Z_2$ and \textit{CP} invariance is \begin{eqnarray} \label{eq:VH} V = && m^2 _{11} \Phi^\dagger _1 \Phi_1 + m^2 _{22} \Phi^\dagger _2 \Phi_2 -m^2 _{12} ( \Phi^\dagger_1 \Phi_2 + {\rm H.c.}) \\ \nonumber && + \frac{1}{2}\lambda_1 (\Phi^\dagger _1 \Phi_1)^2 + \frac{1}{2}\lambda_2 (\Phi^\dagger _2 \Phi_2 )^2 + \lambda_3 (\Phi^\dagger _1 \Phi_1) (\Phi^\dagger _2 \Phi_2) + \lambda_4 (\Phi^\dagger_1 \Phi_2 ) (\Phi^\dagger _2 \Phi_1) \\ \nonumber && + \frac{1}{2} \lambda_5 \left[ (\Phi^\dagger _1 \Phi_2 )^2 + {\rm H.c.} \right], \end{eqnarray} where the $m^2 _{12}$ term softly breaks the $Z_2$ symmetry. The model has five physical Higgs bosons: a pair of CP-even scalar bosons $h$ and $H$, a CP-odd pseudoscalar $A$, and a pair of charged Higgs bosons $H^\pm$. The masses of the physical Higgs bosons are related to the $\lambda$'s in the scalar potential, the mixing angle $\alpha$ of the CP-even scalar bosons, and $\beta$, as given in Ref.~\cite{Song:2019aav}: \begin{eqnarray} \label{eq:quartic} \lambda_1 &=& \frac{1}{v^2 c_\beta^2} \left[ c_\alpha^2 M_H^2 + s_\alpha^2 m_h^2 - t_\beta m_{12}^2 \right], \\ \nonumber \lambda_2 &=& \frac{1}{v^2 s_\beta^2} \left[ s_\alpha^2 M_H^2+c_\alpha^2 m_h^2 - \frac{1}{t_\beta}m_{12}^2 \right], \\ \nonumber \lambda_3 &=& \frac{1}{v^2} \left[ 2 M_{H^\pm}^2 + \frac{s_{2\alpha}}{s_{2\beta} } (M_H^2-m_h^2) -\frac{m_{12}^2}{s_\beta c_\beta } \right], \\ \nonumber \lambda_4 &=& \frac{1}{v^2} \left[ M_A^2- 2 M_{H^\pm}^2 + \frac{m_{12}^2}{s_\beta c_\beta} \right], \\ \nonumber \lambda_5 &=& \frac{1}{v^2} \left[ \frac{m_{12}^2}{s_\beta c_\beta}- M_{A}^2 \right], \end{eqnarray} where $s_\alpha \equiv \sin\alpha$, $c_\alpha \equiv \cos \alpha$, $t_\beta \equiv \tan \beta$, etc. The six free parameters of the 2HDMs are \begin{equation} \{ m_h,\; M_H, \; M_A,\; M_{H^\pm},\; t_\beta,\; c_{\beta-\alpha} \}.
\end{equation} Note that we focus on the scenario in which the lighter CP-even scalar Higgs boson $h$ is the observed SM-like Higgs boson; in this scenario, $c_{\beta - \alpha}$ is constrained to be close to zero by the current Higgs boson data. There are numerous constraints on 2HDMs. We employ \textsc{HiggsSignals}-v2.6.2~\cite{Bechtle:2020uwn} for constraints on the Higgs signal strengths obtained at the LHC \cite{Aaboud:2018gay,Aaboud:2018jqu,Aaboud:2018pen,Aad:2020mkp,Sirunyan:2018mvw,Sirunyan:2018hbu,CMS:2019chr,CMS:2019kqw}, and \textsc{HiggsBounds}-v5.10.2~\cite{Bechtle:2020pkv} for consistency with direct searches at high-energy colliders. \begin{table}[t] \begin{center} \begin{tabular}{|c||c|c|c||c|c|c||c|c|c|} \hline & ~~$\xi^h_u$~~ & ~~$\xi^h_d$~~ & ~~$\xi^h_\ell$~~ & ~~$\xi^H_u$~~ & ~~$\xi^H_d$~~ & ~~$\xi^H_\ell$~~ & ~~$\xi^A_u$~~ & ~~$\xi^A_d$~~ & ~~$\xi^A_\ell$~~ \\ \hline ~~~type-I~~~ & $\frac{c_\alpha}{s_\beta}$ & $\frac{c_\alpha}{s_\beta} $ & $\frac{c_\alpha}{s_\beta} $ & $\frac{s_\alpha}{s_\beta} $ & $\frac{s_\alpha}{s_\beta} $ & $\frac{s_\alpha}{s_\beta} $ & $\frac{1}{t_\beta} $ & $-\frac{1}{t_\beta}$ & $-\frac{1}{t_\beta}$ \\ type-II & $\frac{c_\alpha}{s_\beta} $ & $-\frac{s_\alpha}{c_\beta}$ & $-\frac{s_\alpha}{c_\beta}$ & $\frac{s_\alpha}{s_\beta}$ & $\frac{c_\alpha}{c_\beta} $ & $\frac{c_\alpha}{c_\beta}$ & $\frac{1}{t_\beta} $ & $t_\beta$ & $t_\beta$ \\ type-III (lepton-specific) & $\frac{c_\alpha}{s_\beta}$ & $\frac{c_\alpha}{s_\beta}$ & $-\frac{s_\alpha}{c_\beta} $ & $\frac{s_\alpha}{s_\beta} $ & $\frac{s_\alpha}{s_\beta}$ & $\frac{c_\alpha}{c_\beta}$ & $\frac{1}{t_\beta} $ & $-\frac{1}{t_\beta}$ & $t_\beta$ \\ type-IV (flipped) & $\frac{c_\alpha}{s_\beta} $ & $-\frac{s_\alpha}{c_\beta}$ & $ \frac{c_\alpha}{s_\beta} $ & $\frac{s_\alpha}{s_\beta} $ & $\frac{c_\alpha}{c_\beta}$ & $\frac{s_\alpha}{s_\beta}$ & $\frac{1}{t_\beta} $ & $t_\beta$ & $-\frac{1}{t_\beta}$ \\ \hline \end{tabular} \end{center} \caption{The
Yukawa coupling modifiers in the four types of the 2HDM. }\label{table-yukawa} \end{table} Conventionally, there are four types of assignments of the $Z_2$ parity to the SM fermions, resulting in 2HDM Type I, II, III, and IV, which differ in the couplings of the Higgs bosons to fermions. The Yukawa couplings in the 2HDM can be parameterized as \begin{eqnarray} {\cal L}_{\rm Y} &=& - \sum_f \biggl [ \frac{m_f}{v} \xi^h_f \bar{f} f h + \frac{m_f}{v} \xi^H_f \bar{f} f H -i \frac{m_f}{v} \xi^A_f \bar{f} \gamma_5 f A \biggr ] \\ \nonumber && - \biggl[ \frac{\sqrt2 V_{ud} }{v } H^+ \overline{u} \left(m_u \xi^A_u {P}_L + m_d \xi^A_d {P}_R\right)d +\frac{\sqrt{2} m_\ell}{v}H^+ \xi^A_\ell \overline{\nu_L} \ell_R + {\rm H.c.} \biggr ], \end{eqnarray} where the modifiers $\xi^{h,H,A}_f$ are presented in Table~\ref{table-yukawa}. Since the parameter $\cos(\beta - \alpha)$ is constrained to be close to zero by the Higgs boson data, it is instructive to expand the relevant couplings in terms of $\cos(\beta - \alpha)$. We are considering the following gluon-fusion process \[ pp \to H \to h h \to (b\bar b) (b\bar b) \;. \] The relevant couplings include the trilinear coupling $\lambda_{hhH}$ and the Yukawa couplings $\lambda^{h,H}_{t,b}$ of $h$ and $H$. The trilinear coupling $\lambda_{hhH}$ is given by \begin{equation} \lambda_{hhH} = \frac{c_{\beta-\alpha} }{s_{2\beta}} \, \left [ s_{2\alpha} (2 m^2_h + M_H^2 ) - \frac{2 m_{12}^2}{ s_{2\beta}} (3 s_{2\alpha} - s_{2\beta} ) \right ] . \end{equation} The Yukawa couplings can be expanded similarly.
For example, in Type II they are given by \begin{eqnarray} \lambda^h_t &=& \frac{c_\alpha}{s_\beta} = 1 + \frac{c_{\beta-\alpha} } { t_\beta} - \frac{1}{2} c^2_{\beta-\alpha} + {\cal O}(c^3_{\beta-\alpha} )\\ \lambda^h_b &=& - \frac{s_\alpha}{c_\beta} = 1 - c_{\beta-\alpha} t_\beta - \frac{1}{2} c^2_{\beta-\alpha} + {\cal O}(c^3_{\beta-\alpha} )\\ \lambda^H_t &=& \frac{s_\alpha}{s_\beta} = - \frac{1}{t_\beta} + c_{\beta-\alpha} + \frac{c^2_{\beta-\alpha} }{2 t_\beta} + {\cal O}(c^3_{\beta-\alpha}) \\ \lambda^H_b &=& \frac{c_\alpha}{c_\beta} = t_\beta + c_{\beta-\alpha} - \frac{t_\beta}{2} c^2_{\beta-\alpha} + {\cal O}(c^3_{\beta-\alpha} ) \end{eqnarray} Other types can be expanded similarly. It is straightforward to see that the production cross section via the resonant CP-even $H$ scales as $\cos^2(\beta-\alpha)$. Two-Higgs-doublet models are highly constrained by the current Higgs signal-strength data from the LHC and by direct-search bounds on the heavy CP-even and CP-odd scalar bosons and the charged Higgs bosons. We employ the public codes $\texttt{HiggsBounds-5.10.2}$~\cite{Bechtle:2008jh,Bechtle:2011sb, Bechtle:2012lvg, Bechtle:2013wla, Bechtle:2015pma} and $\texttt{HiggsSignals-2.6.2}$~\cite{Stal:2013hwa,Bechtle:2013xfa,Bechtle:2014ewa,Bechtle:2020uwn} to test the validity of any parameter-space point. \section{Sample Generation and Event Selections}\label{sec:Sample} In this study, the signal is resonant Higgs boson pair production via gluon fusion in the 2HDM, as shown in Fig.~\ref{fig:feynman_diagram}. For the SM backgrounds, we consider the QCD multijet and top-quark pair production ($t\bar{t}$) processes to be the irreducible SM backgrounds.
Other background processes, such as nonresonant (continuum) SM $hh$ production \footnote{We have verified with a parton-level calculation that the resonance peak in the invariant-mass distribution $M_{hh}$ stands far above the continuum $hh$ production.} and electroweak diboson production, give negligible contributions to the event yields~\cite{ATLAS:2022hwc}. \begin{figure}[ht!] \centering \begin{subfigure}{0.43\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/ppHhh} \end{subfigure} \caption{A Feynman diagram for resonant Higgs boson pair production via gluon fusion in the two-Higgs-doublet model.} \label{fig:feynman_diagram} \end{figure} We consider the case in which the light Higgs boson $h$ ($m_h$ = 125 GeV) pair comes from the heavy $CP$-even scalar $H$ with mass $M_H$ = 1000 GeV. The other physical parameters at this benchmark point are $M_A=M_{H^{\pm}}$ = 1000 GeV, $M_{12}^2$ = 400,000 $\text{GeV}^2$, $\tan\beta$ = 5, and $\cos(\beta-\alpha)$ = 0.01. This benchmark point is still allowed under the current limits\footnote{We verified this with $\texttt{HiggsBounds-5.10.2}$ and $\texttt{HiggsSignals-2.6.2}$.} and is close to the alignment limit in Type II. \subsection{Monte Carlo Samples}\label{subsec:MC_sample} The program {\texttt{M{\footnotesize{AD}}G{\footnotesize{RAPH}}5\_{\footnotesize{A}}MC@NLO 2.7.2}}~\cite{Alwall:2014hca} models the signal and background processes in $pp$ collisions at $\sqrt{s}$ = 14 TeV. The hard-scattering events are passed to {\texttt{P{\footnotesize{YTHIA}} 8.244}}~\cite{Sjostrand:2007gs} to simulate the parton shower and hadronization, using the default settings. Following Ref.~\cite{ATLAS:2022hwc}, the {\texttt{NNPDF30\_nlo\_as\_0118}} \cite{NNPDF:2014otw} parton distribution function (PDF) is used for the signal in the next-to-leading-order calculation and {\texttt{NNPDF23\_lo\_as\_0130\_qed}}~\cite{Ball:2012cx} for the backgrounds in the leading-order calculation.
For the signal, the next-to-leading-order two-Higgs-doublet model~\cite{Degrande:2014vpa} is used in the event generation. The input parameters of the benchmark for the loop propagators and the $s$-channel Higgs boson widths are passed to {\texttt{M{\footnotesize{AD}}G{\footnotesize{RAPH}}5\_{\footnotesize{A}}MC@NLO 2.7.2}} through parameter cards in the standard setup. The latter are constructed with the public calculator 2HDMC~\cite{Eriksson:2009ws} with the $\texttt{HiggsBounds-5.10.2}$~\cite{Bechtle:2008jh,Bechtle:2011sb, Bechtle:2012lvg, Bechtle:2013wla, Bechtle:2015pma} and $\texttt{HiggsSignals-2.6.2}$~\cite{Stal:2013hwa,Bechtle:2013xfa,Bechtle:2014ewa,Bechtle:2020uwn} extensions. Moreover, the decay chain $H\to h h$ is implemented with $\texttt{MadSpin}$~\cite{Artoisenet:2012st}, and the light Higgs boson $h$ is set to decay 100\% into $b\bar{b}$ for the moment in order to obtain the selection efficiency. Later, we use the actual branching ratio for the event rates. The $t\bar{t}$ process is simulated at leading order with up to two additional jets, with a matching scale of 20 GeV via the MLM prescription~\cite{Mangano_2007,Alwall:2007fs}. The other background, the multijet process, is flavor-inclusive. For the multijet process, we set $\texttt{ihtmin}$ (the minimum inclusive $H_T$ of all partons) to 850 GeV in the $\texttt{run\_card}$ to enhance the simulation efficiency. The cross sections of the background samples are normalized based on the event yields given in the ATLAS analysis. \textsc{Pyjet}~\cite{noel_dawe_2021_4446849,Cacciari:2011ma} and the anti-$k_t$~\cite{Cacciari:2008gp} algorithm with radius parameter $R$ = 1.0 are used to define the boosted jets. After these boosted jets are formed, we require the transverse energy ($E_T$) of the leading large-$R$ jet to be $>$ 420 GeV and its invariant mass to be $>$ 35 GeV. This is then followed by a trimming procedure~\cite{Krohn:2009th}.
The constituents of the large-$R$ jets are reclustered into ``subjets'' using the $k_T$ algorithm~\cite{Ellis:1993tq} with $R$ = 0.2. The subjets carrying less than 5\% of the $p_T$ of the large-$R$ jet are then removed. After the trimming procedure is applied to the large-$R$ jets, we further set up the preselection for each event, following Ref.~\cite{ATLAS:2022hwc}. Each event is required to contain at least two large-$R$ jets with $p_T(J_1)$ $>$ 450 GeV and $p_T(J_2)$ $>$ 250 GeV, where $J_1$ and $J_2$ are the leading and subleading jets, respectively. These two large-$R$ jets are also required to have $|\eta(J)|$ $<$ 2 and $M(J)$ $>$ 50 GeV. After applying the preselection, there remain 600k events from the signal and the total background for training. In order to generate enough statistics for the analysis, there are 600k events and 2.3M events from the signal and the total background for testing, respectively. Furthermore, the Higgs jets are required to satisfy double $b$-tagging. Jets are declared double $b$-tagged if they have two or more ghost-associated~\cite{Cacciari:2008gn,Buckley:2015gua} $B$ hadrons. This approach is similar to the subjet $b$-tagging in ATLAS~\cite{Lin:2018cin,ATLAS:2018sgt}. We do not include pileup effects in this study. Reference~\cite{Chung:2020ysf} shows that the classification performance is relatively unchanged when neutral particles are not included, and charged particles are rather insensitive to pileup effects. Hence, when setting a precise limit, the result should be relatively stable against various experimental effects such as pileup. \subsection{High-level Features} In order to distinguish the signal from the SM backgrounds via Gradient Tree Boosting (BDT), the following fifteen commonly used high-level features are considered:\\ \quad 1. $M_{JJ}$ : invariant mass of the leading and subleading large-$R$ jets;\\ \quad 2. $M(J_1)$ and $M(J_2)$ : invariant mass of the leading jet and the subleading jet, respectively;\\ \quad 3.
$|\Delta\eta(JJ)| \equiv|\eta(J_1) - \eta(J_2)|$;\\ \quad 4. $X_{HH}$: $\sqrt{\left(\frac{M(J_1)-124\text{GeV}}{0.1\times M(J_1)}\right)^2 + \left(\frac{M(J_2)-115\text{GeV}}{0.1\times M(J_2)}\right)^2}$~\cite{ATLAS:2022hwc};\\ \quad 5. $\tau_{21}=\tau_2/\tau_1$ : $N$-subjettiness ratio of the leading jet and the subleading jet~\cite{Thaler:2010tr,Thaler:2011gf};\\ \quad 6. $D_2^{(\beta)}=e_3^{(\beta)}/(e_2^{(\beta)})^3$ with $\beta=1,2$ : energy correlation function ratios of the leading jet and the subleading jet~\cite{Larkoski:2014gra};\\ \quad 7. $C_2^{(\beta)}=e_3^{(\beta)}/(e_2^{(\beta)})^2$ with $\beta=1,2$ : energy correlation function ratios of the leading jet and the subleading jet~\cite{Larkoski:2013eya};\\ \noindent where $e_i$ is the normalized sum over doublets ($i=2$) or triplets ($i=3$) of constituents inside jets, weighted by the product of the constituent transverse momenta and pairwise angular distances. For this analysis, $\beta$ is taken to be 1 and 2. The distributions of these variables are shown in Fig.~\ref{fig:THDM_kinematic_features} and Fig.~\ref{fig:THDM_substructure_features}, which demonstrate the capability of each observable to discriminate between signal and background. The salient features of these histograms are described in the following. The dijet invariant mass distribution peaks near the heavy-resonance mass of 1000 GeV for the signal, while it is broad for the background. The resonant signal jets tend to be very central since they are produced through an $s$-channel process. In this case, $|\Delta\eta(JJ)|$ provides good discrimination power. $X_{HH}$ represents the distance of an event from the di-Higgs peak in the $M(J_1)$-$M(J_2)$ plane. In Fig.~\ref{fig:THDM_kinematic_features}, the peaks of the invariant masses of the leading jet and the subleading jet are around 124 GeV and 115 GeV for the signal, respectively. This implies that the signal can be distinguished in the small $X_{HH}$ region.
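The $X_{HH}$ definition above is simple to evaluate directly; a short sketch (the example jet masses are illustrative, not taken from the samples):

```python
import math

def x_hh(m1, m2):
    """Distance from the di-Higgs peak in the M(J1)-M(J2) plane (masses in GeV)."""
    return math.hypot((m1 - 124.0) / (0.1 * m1), (m2 - 115.0) / (0.1 * m2))

print(x_hh(124.0, 115.0))            # 0.0: exactly on the di-Higgs peak
print(round(x_hh(120.0, 110.0), 3))  # 0.564: signal-like, passes X_HH < 1.6
print(x_hh(80.0, 60.0) > 1.6)        # True: background-like, rejected
```

The 10\% denominators act as an approximate fractional jet-mass resolution, so $X_{HH}$ counts roughly how many "resolution units" the jet-mass pair sits away from (124, 115) GeV.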
The decay of a massive object into two hard QCD partons produces a two-prong structure, which makes the signal jets have low $\tau_{21}$, $D_2^{(\beta)}$, and $C_2^{(\beta)}$. \begin{figure}[ht!] \centering \begin{subfigure}{0.43\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/MJJ} \end{subfigure} \begin{subfigure}{0.43\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/MJ1} \end{subfigure} \begin{subfigure}{0.43\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/MJ2} \end{subfigure} \begin{subfigure}{0.43\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/delta_eta} \end{subfigure} \begin{subfigure}{0.43\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/Xhh} \end{subfigure} \caption{Distributions of the five kinematic variables used in the BDT. In all figures, $E_T(J_1)$ and $M(J_1)$ represent the transverse energy and invariant mass of the leading jet, respectively.} \label{fig:THDM_kinematic_features} \end{figure} \begin{figure}[ht!] \centering \begin{subfigure}{0.43\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/t211} \end{subfigure} \begin{subfigure}{0.43\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/D211} \end{subfigure} \begin{subfigure}{0.43\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/D221} \end{subfigure} \begin{subfigure}{0.43\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/C211} \end{subfigure} \begin{subfigure}{0.43\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/C221} \end{subfigure} \caption{Distributions of the five substructure variables used in the BDT. In all figures, $E_T(J_1)$ and $M(J_1)$ represent the transverse energy and invariant mass of the leading jet, respectively. Here, we only show the substructure variables of the leading jet.
The distributions of the substructure variables of the subleading jet are similar to those of the leading jet.} \label{fig:THDM_substructure_features} \end{figure} \subsection{Low-level Features} The low-level inputs to the three-stream convolutional neural network are full-event images and images of the leading and subleading jets \cite{Cogan:2014oua,deOliveira:2015xxd}. The resolution is 40$\times$40 pixels for both sets of images, and the jet images cover a 1R$\times$1R range~\cite{Lin:2018cin, Chung:2020ysf}. The images consist of three channels, analogous to the red-green-blue (RGB) channels of a color image. The pixel intensities of the three channels correspond to the sum of the charged-particle $p_T$, the sum of the neutral-particle $p_T$, and the number of charged particles in a given region of the image. There is no $p_T$ threshold for the contributions to the pixel intensity. The full-event image effectively covers the entire $\eta$-$\phi$ cylinder ($|\eta| < 5$). Moreover, the full-event images are rotated so that the leading jet is always located at $\phi = \pi/2$. Images are then flipped along the axis defined by $\eta$ = 0 to put the leading-jet centroid in the region of positive $\eta$. The jet images are rotated so that the two subjets are aligned along the same axis. The leading subjet is at the origin and the subleading subjet is directly below the leading subjet. If there is a third-leading subjet, the image is reflected. All images are normalized so that the intensities sum to unity. After normalization, the pixel intensities are standardized so that their distribution has zero mean and unit variance. These preprocessing procedures significantly improve the stability of the machine-learning training~\cite{deOliveira:2017pjk}. Figure~\ref{fig:THDM_lowlevel_features} shows the average images in the charged-$p_T$ channel. The patterns in the charged-$p_T$ channel are similar to those in the other two channels. \begin{figure}[ht!]
\centering \begin{subfigure}{1\textwidth} \centering \includegraphics[width=1\textwidth]{Figures/event_image_channel_0} \end{subfigure} \begin{subfigure}{1\textwidth} \centering \includegraphics[width=1\textwidth]{Figures/leadingjet_image_channel_0} \end{subfigure} \begin{subfigure}{1\textwidth} \centering \includegraphics[width=1\textwidth]{Figures/subleadingjet_image_channel_0} \end{subfigure} \caption{The average of 10000 rotated full-event images (top), leading-jet images (middle), and subleading-jet images (bottom) in the charged-$p_T$ channel. The coordinates $\phi'$ and $\eta'$ denote the new axes after the full-event images are rotated and flipped. $Q_1$ and $Q_2$ denote the new axes after the jet's axis is centralized and rotated. The intensity in each pixel is the sum of the charged-particle $p_T$. The total intensity in each image is normalized to unity. The resolution is 40$\times$40 pixels for each image.} \label{fig:THDM_lowlevel_features} \end{figure} \section{Baseline and Classifiers}\label{sec:classifiers} In this study, there are three selection methods. The first one is the conventional cut-based selection following Ref.~\cite{ATLAS:2022hwc}, which is called the baseline in this work. The second one is the Gradient Tree Boosting (BDT), in which the high-level features serve as inputs to the BDT. The third one is the three-stream convolutional neural network, inspired by Refs.~\cite{Lin:2018cin, Chung:2020ysf}, which will demonstrate its discriminating power in this work. \subsection{The Cut-based Method} We treat the similar boosted-channel analysis in Ref.~\cite{ATLAS:2022hwc} as the baseline in this study. After the preselection described in Sec.~\ref{subsec:MC_sample}, we further apply $|\Delta\eta(JJ)|$ $<$ 1.3 and $X_{HH}$ $<$ 1.6 to reduce the SM backgrounds. Then we require $M_{JJ}$ to be in the heavy-resonance mass window: 900 GeV $<$ $M_{JJ}$ $<$ 1100 GeV.
Finally, we require the leading and subleading jets to be Higgs jets; relative to the events passing the mass-window selection for the heavy resonance, this requirement retains roughly 90\% of the signal. \subsection{The Boosted Decision Tree} In this study, the BDT uses Gradient Tree Boosting. It uses a fixed number of estimators (2000) with a maximum depth of 5. The minimum fraction of samples required to split an internal node is fixed at 25\%, and that required at a leaf node is 5\%. The deviance loss function is used with a learning rate of 0.005. This BDT model is trained on the fifteen high-level features using the {\texttt{scikit-learn}} library \cite{scikit-learn}. \subsection{The Three-stream Convolutional Neural Networks (3CNN)} The 3CNN in this study is based on Refs.~\cite{Lin:2018cin, Chung:2020ysf}. One stream of the 3CNN is dedicated to the global full-event information. The other two streams are dedicated to processing the local information in the leading jet and the subleading jet. In addition, there are two outputs in the last layer for disentangling the signal and the SM backgrounds. The three-stream architecture is shown schematically in Fig.~\ref{fig:3CNN_architecture}. The details of the 3CNN are as follows. The convolution filters are 5$\times$5 in all three streams, the maximum pooling layers are 2$\times$2, and the stride length is 1. Rectified linear unit (ReLU) activation functions are used for all intermediate layers of the neural network (NN). The first convolution layer in each stream has 32 filters and the second convolution layer in each stream has 64 filters. There are 300 neurons in the dense layer at the end of each stream. The three dense layers from the streams are fully connected to two output neurons with the softmax activation function $e^{x_i}/\sum_{j=1}^{2} e^{x_j}$, which is the multidimensional generalization of the sigmoid. The AdaDelta optimizer~\cite{DBLP:journals/corr/abs-1212-5701} is used to select the network weights.
Between the last dense layer and the output layer, Dropout~\cite{JMLR:v15:srivastava14a} regularization is added to reduce overfitting, with a dropout rate of 0.1. The categorical cross-entropy loss function is optimized in the neural-network training. To effectively utilize the full information of the detector in the $\phi$ direction, a padding method is used to take the information in the bottom four rows of the input images and append it onto the top of the image. The {\texttt{Keras-2.4.0}} library is used to train the 3CNN model with the {\texttt{T{\footnotesize{ENSORFLOW}}-2.4.0-rc3}} \cite{tensorflow2015-whitepaper} backend, on an {\texttt{NVIDIA RTX A6000 48 GB}}. \begin{figure}[t!] \centering \begin{subfigure}{0.9\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/3CNN_architecture} \end{subfigure} \caption{Architecture of the 3CNN, based on Refs.~\cite{Lin:2018cin, Chung:2020ysf}. The first stream (top) is used to process the full-event images. The second stream (middle) uses the information from the leading jet. The third stream (bottom) uses the information from the subleading jet.} \label{fig:3CNN_architecture} \end{figure} \section{Results and Sensitivity Reach in 2HDM}\label{sec:Sensitivity} The selections for the signal and the SM backgrounds are shown in Table~\ref{table:total_selection}. Here we set the light Higgs boson mass $m_h$ = 125 GeV, the heavy $CP$-even scalar mass $M_H$ = 1000 GeV, $M_A=M_{H^{\pm}}$ = 1000 GeV, $M_{12}^2$ = 400,000 $\text{GeV}^2$, $\tan\beta$ = 5, and $\cos(\beta-\alpha)$ = 0.01 in Type II for the benchmark point. We list the two major SM backgrounds in this table: $t\bar{t}$ and multijet. The multijet background is the dominant one of the two before applying the selection. The label $\textbf{preselection}$ in Table~\ref{table:total_selection} refers to the cut flow described in Sec.~\ref{subsec:MC_sample}.
We apply an extra $B$-hadron tagging efficiency of 0.77~\cite{ATLAS:2022hwc} for the Higgs jets, requiring double $b$-tagging via the ghost-association method, to estimate the event yields. Moreover, in order to compare the background-discriminating power among the baseline, the BDT, and the 3CNN, we choose the BDT score cut and the 3CNN score cut such that the number of signal events is close to that in the baseline analysis. From Table~\ref{table:total_selection}, we find that the BDT analysis outperforms the baseline analysis based on the cut-based method. The number of signal events at this benchmark point is around 28, while the total background is around 1390 events in the baseline analysis. On the other hand, in the BDT analysis the efficiency of the signal is about the same, while the background rejection power improves by a factor of 10 over the cut-based method. It turns out that the number of signal events at this benchmark point is around 25 and the total background is around 140 events in the BDT analysis. After we introduce the 3CNN analysis, we can still maintain 25 signal events while the total background is reduced to 56 events. \begin{table}[h!]
\scriptsize \begin{center} \begin{tabular}{cccccc} \hline\hline \multicolumn{6}{c}{\textbf{Selection Flow Table}}\\ \hline \multicolumn{2}{c}{} &\textbf{$pp\to H\to h h \to b\bar{b}b\bar{b}$ (Type II)}&\textbf{$t\bar{t}$}&\textbf{Multijet}& \textbf{Total Backgrounds}\\ \hline \multicolumn{2}{c}{\textbf{preselection}} & $8.02\times10^{1}$ & $9.23\times10^{5}$ & $2.76\times10^{7}$ & $2.86\times10^{7}$\\ \multicolumn{2}{c}{\textbf{900 GeV $<$ $M_{JJ}$ $<$ 1100 GeV}} & $5.29\times10^{1}$ & $2.77\times10^{5}$ & $6.92\times10^{6}$ & $7.20\times10^{6}$\\ \multicolumn{2}{c}{\textbf{2 Higgs jets}} & $4.74\times10^{1}$ & $1.05\times10^{3}$ & $2.34\times10^{4}$ & $2.45\times10^{4}$\\ \hline\hline \multirow{2}{*}{\textbf{Baseline}} &\textbf{$|\Delta\eta(JJ)|$ $<$ 1.3} & $4.68\times10^{1}$ & $9.99\times10^{2}$ & $2.18\times10^{4}$ & $2.28\times10^{4}$\\ &\textbf{$X_{HH}$ $<$ 1.6}& $2.82\times10^{1}$ & $2.13\times10^{1}$ & $1.37\times10^{3}$ & $1.39\times10^{3}$\\ \hline\hline \multicolumn{2}{c}{\textbf{BDT score $>$ 0.964}} & $2.56\times10^{1}$ & $5.33$ & $1.37\times10^{2}$ & $1.42\times10^{2}$ \\ \hline\hline \multicolumn{2}{c}{\textbf{3CNN score $>$ 0.99}} & $2.56\times10^{1}$ & $2.93\times10^{1}$ & $2.74\times10^{1}$ & $5.67\times10^{1}$ \\ \hline\hline \end{tabular} \end{center} \caption{Cut flow and event yields for the signal process $pp \to H \to h h \to b \bar b b \bar b$ and the backgrounds at $\sqrt{s}$ = 14 TeV with an integrated luminosity $\mathcal{L}$ = 3000 fb$^{-1}$. The signal is the Type II 2HDM. A b-tagging efficiency of 0.77~\cite{ATLAS:2022hwc} is applied to calculate the event yields. The preselection is described in the main text.} \label{table:total_selection} \end{table} Results of these three analyses at $\sqrt{s}$ = 14 TeV with an integrated luminosity $\mathcal{L}$ = 3000 fb$^{-1}$ are interpreted in the ($\cos(\beta-\alpha)$, $\tan\beta$) parameter space in Fig.~\ref{fig:allowed_tb_cba}.
We fix $M_A=M_{H^{\pm}}$ = 1000 GeV and $M_{12}^2$ = 400,000 $\text{GeV}^2$ to find the allowed region at the 95\% CL in the ($\cos(\beta-\alpha)$, $\tan\beta$) plane. Note that the colored regions are those with significance $z \equiv \sqrt{2[(s+b)\ln(1+s/b)-s]} \le 2$, where $s$ and $b$ stand for the numbers of signal and background events, respectively. This means that if no excess of events is recorded at the HL-LHC, the colored regions would be the remaining allowed regions. We clearly see significant gains using the 3CNN analysis for all four types of 2HDMs. The 3CNN analysis has the potential to provide stronger constraints than the baseline method and the BDT. \begin{figure}[h!] \centering \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/Type_1_tb_cba_allowed} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/Type_2_tb_cba_allowed} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/Type_3_tb_cba_allowed} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/Type_4_tb_cba_allowed} \end{subfigure} \caption{Allowed regions in all four types of 2HDM for $M_A=M_{H^{\pm}}$ = 1000 GeV and $M_{12}^2$ = 400,000 $\text{GeV}^2$ in the ($\cos(\beta-\alpha)$, $\tan\beta$) plane if no excess is seen above the SM background at the HL-LHC. The blue, green and red regions are the allowed regions based on the baseline, BDT and 3CNN analyses, respectively. The allowed region is the area with significance $z=\sqrt{2[(s+b)\ln(1+s/b)-s]} \le 2$, where $s$ is the number of signal events and $b$ is the number of background events.} \label{fig:allowed_tb_cba} \end{figure} The 3CNN analysis shows stronger background-discrimination power and thus provides better coverage of the parameter space at the HL-LHC.
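The significance $z=\sqrt{2[(s+b)\ln(1+s/b)-s]}$ used throughout is simple to evaluate. A minimal Python sketch is given below; the event yields are the rounded benchmark numbers from Table~\ref{table:total_selection}, and the comparison is illustrative rather than part of the actual analysis chain:

```python
import math

def significance(s, b):
    """Median significance z = sqrt(2[(s+b)ln(1+s/b) - s])."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Rounded benchmark yields from the cut-flow table:
z_baseline = significance(28.2, 1390.0)   # cut-based baseline
z_bdt      = significance(25.6, 142.0)    # BDT score > 0.964
z_3cnn     = significance(25.6, 56.7)     # 3CNN score > 0.99
```

With these yields the 3CNN selection gives the largest significance of the three analyses, consistent with the gains discussed above.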
Therefore, in the following we focus on the 3CNN analysis and combine it with the current constraints from the Higgs-signal strengths obtained at the LHC and from direct searches at high-energy colliders. The current constraints are calculated with the public codes \textsc{HiggsBounds}-v5.10.2 and \textsc{HiggsSignals}-v2.6.2. \textsc{HiggsBounds}-v5.10.2 includes all processes at LEP, the Tevatron, and the LHC, determines the most sensitive channel, and decides whether a point is still allowed at the 95\% CL. \textsc{HiggsSignals}-v2.6.2 gives the $\chi^2$ output for 111 Higgs observables~\cite{ATLAS:2018jvf,ATLAS:2018xbv,ATLAS:2018ynr,ATLAS:2020rej,CMS:2018hnq,CMS:2018nak,CMS-PAS-HIG-19-001,CMS-PAS-HIG-19-002}. Since there are six model parameters, the number of degrees of freedom is 105. We require the $p$-value to be larger than 0.05, corresponding to 2$\sigma$. In Figs.~\ref{fig:sensitivity_tb_cba} and \ref{fig:sensitivity_m12s_cba}, we present the sensitivity region (red) with significance $z>2$ that is still allowed under the current constraints and can be covered by the 3CNN at the 14 TeV HL-LHC in the ($\cos(\beta-\alpha)$, $\tan\beta$) plane and the ($\cos(\beta-\alpha)$, $m_{12}^2$) plane, respectively. Note that the gray area is the region currently allowed by direct searches at colliders from \texttt{HiggsBounds} at the 95\% CL and the purple area is the region allowed by the SM-like Higgs-boson properties given by \texttt{HiggsSignals} at the 2$\sigma$ level. We can regard the overlapping regions of the gray and purple areas as the currently allowed parameter space. Note that the overlapping regions can be separated into (i) the neighbourhood of the alignment limit and (ii) the wrong-sign Yukawa region. In all four types of 2HDM, we clearly see that the 3CNN can cover a large area of the overlapping regions.
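The $p$-value requirement on the \textsc{HiggsSignals} $\chi^2$ with 105 degrees of freedom can be sketched as follows. This is an illustrative stand-in using the Wilson--Hilferty approximation to the $\chi^2$ survival function, not the internal routine of \textsc{HiggsSignals}:

```python
import math
from statistics import NormalDist

NDOF = 111 - 6  # 111 Higgs observables minus 6 model parameters

def chi2_pvalue(x, k=NDOF):
    """Wilson-Hilferty approximation to the chi-squared survival function."""
    z = ((x / k) ** (1.0 / 3.0) - (1.0 - 2.0 / (9.0 * k))) / math.sqrt(2.0 / (9.0 * k))
    return 1.0 - NormalDist().cdf(z)

def point_allowed(chi2_value):
    """A parameter point survives if its p-value exceeds 0.05 (about 2 sigma)."""
    return chi2_pvalue(chi2_value) > 0.05
```

A parameter point with $\chi^2$ close to the number of degrees of freedom passes comfortably, while a strongly disfavoured point does not.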
In the ($\cos(\beta-\alpha)$, $\tan\beta$) plane of Fig.~\ref{fig:sensitivity_tb_cba}, we fix $M_A=M_{H^{\pm}}$ = 1000 GeV and $M_{12}^2$ = 400,000 $\text{GeV}^2$. In all four types of 2HDM, the red region is the sensitive region where the significance is larger than 2. We can see that large areas (red) in the overlap of the gray and purple regions allowed by both \texttt{HiggsSignals} and \texttt{HiggsBounds} can be covered by the 3CNN, indicating that the process $pp \to H \to h h \to 4b$ can test a large portion of the parameter space at the HL-LHC. We notice that the sensitive regions lie close to the alignment limit, $\cos(\beta-\alpha)=0$, and in the wrong-sign Yukawa region in all four types. Around the alignment limit, \texttt{HiggsBounds} restricts $\tan\beta$ to be larger than 1, and the 3CNN analysis shows that the region with $\tan\beta \le 10$ is still sensitive. Around the wrong-sign Yukawa region, on the other hand, the most severe constraint comes from \texttt{HiggsSignals}; the 3CNN analysis indicates that it can also cover a sizable area there. Finally, in the ($\cos(\beta-\alpha)$, $m_{12}^2$) plane of Fig.~\ref{fig:sensitivity_m12s_cba}, we fix $M_A=M_{H^{\pm}}$ = 1000 GeV and $\tan\beta$ = 5. The 3CNN analysis indicates sensitivity over a range of $m_{12}^2$ around the alignment limit and along the wrong-sign Yukawa region. \begin{figure}[h!]
\centering \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/Type_1_tb_cba_sensitive} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/Type_2_tb_cba_sensitive} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/Type_3_tb_cba_sensitive} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/Type_4_tb_cba_sensitive} \end{subfigure} \caption{Sensitive regions in all four types of 2HDM for $M_A=M_{H^{\pm}}$ = 1000 GeV and $M_{12}^2$ = 400,000 $\text{GeV}^2$ in the ($\cos(\beta-\alpha)$, $\tan\beta$) plane. The gray area is the region currently allowed by direct searches at colliders from \texttt{HiggsBounds} at the 95\% CL. The purple area is allowed by the constraints from the SM-like Higgs-boson properties from \texttt{HiggsSignals} at the 2$\sigma$ level. The red region is the sensitive region of the 3CNN analysis (significance $z=\sqrt{2[(s+b)\ln(1+s/b)-s]}>2$, where $s$ is the number of signal events and $b$ is the number of background events) that is still allowed by the current collider constraints.} \label{fig:sensitivity_tb_cba} \end{figure} \begin{figure}[h!]
\centering \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/Type_1_m12s_cba_sensitive} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/Type_2_m12s_cba_sensitive} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/Type_3_m12s_cba_sensitive} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=1.\columnwidth]{Figures/Type_4_m12s_cba_sensitive} \end{subfigure} \caption{Sensitive regions in all four types of 2HDM for $M_A=M_{H^{\pm}}$ = 1000 GeV and $\tan\beta$ = 5 in the ($\cos(\beta-\alpha)$, $M_{12}^2$) plane. The gray area is the region currently allowed by direct searches at colliders from \texttt{HiggsBounds} at the 95\% CL. The purple area is allowed by the constraints from the SM-like Higgs-boson properties from \texttt{HiggsSignals} at the 2$\sigma$ level. The red region is the sensitive region of the 3CNN analysis (significance $z=\sqrt{2[(s+b)\ln(1+s/b)-s]}>2$, where $s$ is the number of signal events and $b$ is the number of background events) that is still allowed by the current collider constraints.} \label{fig:sensitivity_m12s_cba} \end{figure} \section{Conclusions}\label{sec:conclusion} In this study, we have employed a modern deep-learning approach to improve the search for Higgs-boson pair production arising from resonant heavy-Higgs enhancement in the $b\bar{b}b\bar{b}$ final state in the framework of two-Higgs-doublet models at the HL-LHC. The resonant production channel plays an important role in probing the structure of the EWSB sector. Using our approach, we have shown that the gluon-fusion process $pp \to H \to h h \to 4b$ at the HL-LHC can further probe the currently allowed parameter space in Types I to IV of the 2HDM. The 3CNN architecture in this work is built upon the proposal of Refs.~\cite{Lin:2018cin,Chung:2020ysf}.
This architecture has two-class outputs for signal and background, and contains one stream acting on global event information and two further streams acting on information from the leading and subleading jets. This approach is amenable to visualizations that can provide some insight into what the neural network uses for event classification. We interpret the signal-background discrimination based on our simulations at the 14 TeV HL-LHC within the framework of two-Higgs-doublet models. Figures \ref{fig:sensitivity_tb_cba} and \ref{fig:sensitivity_m12s_cba} illustrate our scan of the 2HDM parameter space for the sensitivity coverage at the HL-LHC, as well as the current restrictions on the parameter space due to \textsc{HiggsSignals} and \textsc{HiggsBounds}. We find that a sizeable portion of the sensitive parameter space is covered by the 3CNN analysis. In summary, we employ the 3CNN architecture to incorporate both local and global information for signal and background identification. Additionally, we have studied the conventional cut-based approach and a boosted decision tree. The conventional cut-based approach does not give a sufficient significance for the signal even at the HL-LHC. The BDT is effective but less potent than the neural network. We have shown that the 3CNN can significantly enhance the significance of the signal at the HL-LHC and allows us to probe sensitive parameter space within the currently allowed region. This approach can be flexibly implemented in other Higgs-pair production channels with hadronic or semi-hadronic final states and may enrich the sensitivity of such signals at the HL-LHC. \begin{acknowledgments} We thank SooJin Lee for help with \textsc{HiggsSignals} and \textsc{HiggsBounds}, and also thank Professor Benjamin Nachman and Professor Chih-Ting Lu for their valuable comments on the manuscript. K.C. and Y.C. were supported by MoST with grant no. MoST-110-2112-M-007-017-MY3.
S.C.H. was supported by the National Science Foundation under Grant No. 2110963. \end{acknowledgments} \clearpage \bibliographystyle{jhep}
\section{Introduction}\label{intro} Derivative pricing with Fourier transforms was first investigated by \cite{heston1993closed}. \cite{Carr1999} published the first method with both the characteristic function and the payoff in the Fourier domain. \cite{fang2008novel,Fang2009pricing} devised the COS method based on the Fourier-cosine expansion. The Hilbert transform \citep{King2009} has also been successfully employed: by \cite{Feng2008} to price barrier options using backward induction in the Fourier space and by \cite{marazzina2012pricing} and \cite{fusai2015} to compute the factorisations required by the Spitzer identities \citep{spitzer1956combinatorial,Kemperman1963} via the Plemelj-Sokhotsky relations. Feng and Linetsky \citep{Feng2008} showed that computing the Hilbert transform with the sinc expansion, as studied by \cite{Stenger1993,Stenger2011}, gives errors that decrease exponentially as the number of fast Fourier transform (FFT) grid points increases. Pricing derivatives, especially exotic options, is a challenging problem in the operations research literature. \cite{fusai2015} provide extensive references for this, as well as for many non-financial applications of the Hilbert transform and the related topics of Wiener-Hopf factorisation and Spitzer identities in insurance, queuing theory, physics, engineering, applied mathematics, etc. When working in the Fourier domain one has the advantage that there are many pre-existing robust implementations of the discrete Fourier transform (DFT), e.g.\ the FFTW library by \cite{frigo1998fftw}. However, the use of such numerical solutions means that one must manage issues arising from the approximation of an integral over an infinite domain with a finite sum. As long as the truncation limits in the log-price domain are selected judiciously, the main issue to contend with is the so-called Gibbs (or Gibbs-Wilbraham) phenomenon \citep{wilbraham1848certain,gibbs1898fourier,gibbs1899fourier}.
This is commonly observed as oscillations on either side of a discontinuity in the original domain and is caused by truncating the function in the Fourier domain. Importantly for the pricing methods considered in this article, it also describes how the shape of the function in the Fourier domain relates to the order of the discontinuities in the original domain. There have been many papers exploring possible solutions to the Gibbs phenomenon in a general setting, notably by \cite{hewitt1979gibbs}, \cite{vandeven1991family}, \cite{gottlieb1997gibbs}, \cite{tadmor2005adaptive} and \cite{tadmor2007filters}. More recently \cite{ruijter2015application} explored the use of spectral filtering techniques to solve the problem of slow polynomial error convergence seen when the COS method is used with non-smooth probability distributions. The application investigated in this paper is the improvement of methods based on the fast Hilbert transform for the pricing of discretely monitored barrier options with L\'evy processes. Recent papers have described significant progress in the valuation of these types of financial contracts. \cite{Feng2008} devised a method which gives exponential convergence for many L\'evy processes but is limited to polynomial convergence for the variance gamma (VG) process; this method has a computational time which increases linearly with the number of monitoring dates $N$. \cite{fusai2015} used the Spitzer identities to devise a method whose computation time is independent of $N$ and which achieves exponential convergence for single-barrier and lookback options, again with the exception of the VG process, but which is limited to polynomial convergence for double-barrier options. This paper explains the origin of this error performance and presents modified versions of the Fusai, Germano and Marazzina (FGM) and Feng and Linetsky (FL) methods with improved convergence.
In order to do this we make use of, and extend, the investigations into the error performance of the discrete Hilbert transform by \cite{Stenger1993,Stenger2011} and \cite{Feng2008}. We show that the error performance is related both to the shape of the characteristic function of the underlying process and to the Gibbs phenomenon. Finally, by making use of the filtering techniques suggested by \cite{gottlieb1997gibbs} and \cite{mckechan2010tapering}, we are able to achieve improved convergence. The methods we compare are FGM, filtered FGM (FGM-F), Feng and Linetsky (FL) and filtered Feng and Linetsky (FL-F). They are compared for single- and double-barrier options for the Kou, normal inverse Gaussian (NIG) and VG processes. The structure of this paper is as follows. In Section \ref{sec:Back} we briefly review Fourier, Hilbert and $z$-transforms, give a concise overview of the original pricing schemes and explain our modifications to improve convergence. Section \ref{sec:Errperf} includes a discussion of the performance of the pricing techniques and how it relates to the Gibbs phenomenon and the shape of the characteristic function of the underlying processes. Lastly, Section \ref{sec:results} shows numerical results, comparing the filtered algorithms with the original FGM and FL methods. \section{Background}\label{sec:Back} As our approach directly extends the FGM \citep{fusai2015} and FL \citep{Feng2008} pricing methods, we refer to the original papers for a comprehensive introduction. Aspects of the methods which are directly relevant to the error investigation are described here in order to provide a background to the changes that were made to improve convergence. \subsection{Fourier and Hilbert transforms}\label{sec:Back_fourhilb} In this paper we make extensive use of the Fourier transform \citep[see e.g.][]{Polyanin1998,Kreyszig2011}, an integral transform with many applications.
Historically, it has been widely used in spectroscopy and communications; therefore much of the literature refers to the function in the Fourier domain as its spectrum. According to the usual convention in the finance literature, the forward and inverse Fourier transforms are defined as \begin{align} & \widehat{f}(\xi)=\mathcal{F}_{x\rightarrow\xi} \left[f(x)\right]=\int^{+\infty}_{-\infty}f(x)e^{i\xi x}dx, \label{eq:FwdFourier}\\ & f(x)=\mathcal{F}^{-1}_{\xi\rightarrow x} \left[\widehat{f}(\xi)\right]=\frac{1}{2\pi}\int^{+\infty}_{-\infty}\widehat{f}(\xi)e^{-i\xi x}d\xi. \label{eq:RevFourier} \end{align} Let $S_t$ be the price of an underlying asset and $x_t = \log(S_t/S_0)$ its log-price. To find the price $v(x_t,t)$ of an option at time $t=0$ when the initial price of the underlying is $S_0$ and thus its log-price is $x_0=0$, we need to discount the expected value of the undamped option payoff $\phi(x_T)e^{-\alpha x_T}$ at maturity $t=T$ with respect to an appropriate risk-neutral probability distribution function (PDF) $p(x,T)$ whose initial condition is $p(x,0) = \delta(x)$. As shown by \cite{lewis2001simple}, this can be done using the Plancherel relation, \begin{align} v(0,0) & = e^{-rT}\mathrm{E}\left[\phi(x_T)e^{-\alpha x_T}|x_0=0\right]=e^{-rT}\int^{+\infty}_{-\infty}\phi(x)e^{-\alpha x}p(x,T)dx \nonumber\\ & = \frac{e^{-rT}}{2\pi}\int^{+\infty}_{-\infty}\widehat{\phi}(\xi)\widehat{p}\,^*(\xi+i\alpha,T)d\xi = e^{-rT}\mathcal{F}^{-1}_{\xi\rightarrow x}\left[\widehat{\phi}(\xi)\widehat{p}\,^*(\xi+i\alpha,T)\right](0). \label{eq:Planch} \end{align} Here, $\widehat{p}\,^*(\xi+i\alpha,T)$ is the complex conjugate of the Fourier transform of $e^{-\alpha x}p(x,T)$. To price options using this relation, we need the Fourier transforms of both the damped payoff and the PDF.
A double-barrier option has the damped payoff \begin{equation} \phi(x) = e^{\alpha x}S_0(\theta(e^x-e^k))^+\mathbf{1}_{[l,u]}(x), \end{equation} where $\alpha$ is the damping factor, $\theta = 1$ for a call, $\theta = -1$ for a put, $\mathbf{1}_A(x)$ is the indicator function of the set $A$, $k=\log(K/S_0)$ is the log-strike, $u=\log(U/S_0)$ is the upper log-barrier, $l=\log(L/S_0)$ is the lower log-barrier, $K$ is the strike price, $U$ is the upper barrier and $L$ is the lower barrier. The Fourier transform of the damped payoff $\phi(x)$ is available analytically, \begin{align} \label{eq:Payoff} &\widehat{\phi}(\xi)=S_0\left(\frac{e^{(1+i\xi+\alpha)a}-e^{(1+i\xi+\alpha)b}}{1+i\xi+\alpha}-\frac{e^{k+(i\xi+\alpha)a}-e^{k+(i\xi+\alpha)b}}{i\xi+\alpha}\right), \end{align} where for a call option $a = u$ and $b = \max(k,l)$, while for a put option $a=l$ and $b=\min(k,u)$. The Fourier transform of the PDF $p(x,t)$ of a stochastic process $X(t)$ is the characteristic function \begin{align} \label{eq:CharFun} & \Psi(\xi,t)=\mathrm{E}\left[e^{i\xi X(t)}\right]=\int^{+\infty}_{-\infty}p(x,t) e^{i\xi x}dx=\mathcal{F}_{x\rightarrow\xi}\left[p(x,t)\right]=\widehat{p}(\xi,t). \end{align} For a L\'evy process the characteristic function can be written as $\Psi(\xi,t)=e^{\psi(\xi)t}$, where the characteristic exponent $\psi(\xi)$ is given by the L\'evy-Khintchine formula as \begin{align} \label{eq:CharExp} & \psi(\xi)=ia\xi-\frac{1}{2}\sigma^2\xi^2+\int_{\mathbb{R}}(e^{i\xi\eta}-1-i\xi\eta\mathbf{1}_{[-1,1]}(\eta))\nu(d\eta). \end{align} The L\'evy-Khintchine triplet $(a,\sigma,\nu)$ uniquely defines the L\'evy process. The value of $a$ defines the linear drift of the process, $\sigma$ is the volatility of the diffusion part of the process, and the jump part of the process is specified so that $\nu(\eta)$ is the intensity of a Poisson process with jump size $\eta$.
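Eq.~(\ref{eq:Payoff}) is straightforward to implement. The Python sketch below evaluates $\widehat{\phi}(\xi)$; the parameter values used in any example are purely illustrative, and the function name is our own:

```python
import numpy as np

def payoff_hat(xi, S0, K, L, U, alpha, theta=1):
    """Fourier transform of the damped barrier payoff, Eq. (eq:Payoff).

    theta = +1 for a call, -1 for a put; alpha is the damping factor."""
    k, l, u = np.log(K / S0), np.log(L / S0), np.log(U / S0)
    # Integration limits: a = u, b = max(k, l) for a call; a = l, b = min(k, u) for a put
    a, b = (u, max(k, l)) if theta == 1 else (l, min(k, u))
    s = 1j * xi + alpha
    return S0 * ((np.exp((1 + s) * a) - np.exp((1 + s) * b)) / (1 + s)
                 - (np.exp(k + s * a) - np.exp(k + s * b)) / s)
```

The closed form can be checked against a direct quadrature of $\phi(x)e^{i\xi x}$ over the payoff support, which agrees to numerical precision.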
Under the risk-neutral measure the parameters of the triplet are linked by the equation \begin{equation} a = r-q-\frac{1}{2}\sigma^2-\int_\mathbb{R}(e^\eta-1-\eta\mathbf{1}_{[-1,1]}(\eta))\nu(d\eta), \end{equation} where $r$ is the risk-free interest rate and $q$ is the dividend rate. In general the characteristic function of a L\'evy process is available in closed form, for example for the Gaussian \citep{Schoutens2003}, NIG \citep{Barndorff1998}, CGMY \citep{Carr2002}, Kou double exponential \citep{kou2002jump}, Merton jump diffusion \citep{merton1976option}, L\'evy alpha stable \citep{Nolan2017}, VG \citep{Madan1990} and Meixner \citep{Schoutens2003} processes. Some pricing techniques based on the Fourier transform, e.g.\ FGM and FL, also use the Hilbert transform, which is an integral transform related to the Fourier transform. However, in contrast to the Fourier transform, the function under transformation remains in the same domain, rather than moving between the $x$ and $\xi$ domains. The Hilbert transform of a function in the Fourier domain is defined as \begin{align} \label{eq:HilbTrans} \mathcal{H}\big[\widehat{f}(\xi)\big]&=\,\mathrm{P.V.}\,\frac{1}{\pi}\int^{+\infty}_{-\infty}\frac{\widehat{f}(\xi')}{\xi-\xi'}d\xi'\nonumber\\ &=\lim_{\epsilon\rightarrow0^+}\frac{1}{\pi}\left(\int_{\xi-1/\epsilon}^{\xi-\epsilon} \frac{\widehat{f}(\xi')}{\xi-\xi'} d\xi'+\int_{\xi+\epsilon}^{\xi+1/\epsilon} \frac{\widehat{f}(\xi')}{\xi-\xi'} d\xi'\right), \end{align} where $\mathrm{P.V.}$ denotes the Cauchy principal value. Applying the Hilbert transform in the Fourier domain is equivalent to multiplying the function in the $x$ domain by $-i\,\mathrm{sgn}\,x$. \subsection{Applying barriers with Hilbert transforms}\label{sec:BarrHilb} The Hilbert transform can be used to obtain the Fourier transform of the part of a function above or below a barrier, without leaving the Fourier domain.
For example, with a barrier at $0$, the functions $\widehat{f_+}(\xi)=\mathcal{F}_{x\rightarrow\xi}\left[f(x)\mathbf{1}_{\mathbb{R}_+}(x)\right]$ and $\widehat{f_-}(\xi)=\mathcal{F}_{x\rightarrow\xi}\left[f(x)\mathbf{1}_{\mathbb{R}_-}(x)\right]$ can be calculated using the Plemelj-Sokhotsky relations, \begin{align} & \widehat{f_+}(\xi)=\frac{1}{2}\big\{\widehat{f}(\xi)+i\mathcal{H}\big[\widehat{f}(\xi)\big]\big\},\label{eq:PSRelpos}\\ & \widehat{f_-}(\xi)=\frac{1}{2}\big\{\widehat{f}(\xi)-i\mathcal{H}\big[\widehat{f}(\xi)\big]\big\}.\label{eq:PSRelneg} \end{align} The shift theorem $\mathcal{F}_{x\rightarrow\xi}[f(x+b)]=\widehat{f}(\xi)e^{-ib\xi}$ allows the Plemelj-Sokhotsky relations to be generalised to an arbitrary barrier $b$, \begin{align} & \widehat{f_{b+}}(\xi)=\frac{1}{2}\big\{\widehat{f}(\xi)+e^{ib\xi}i\mathcal{H}\big[e^{-ib\xi}\widehat{f}(\xi)\big]\big\},\label{eq:PSRelgenpos}\\ & \widehat{f_{b-}}(\xi)=\frac{1}{2}\big\{\widehat{f}(\xi)-e^{ib\xi}i\mathcal{H}\big[e^{-ib\xi}\widehat{f}(\xi)\big]\big\}.\label{eq:PSRelgenneg} \end{align} Eqs.~(\ref{eq:PSRelgenpos}) and (\ref{eq:PSRelgenneg}) can be combined to obtain the Fourier transform of the part of a function between two barriers, i.e.\ $\widehat{f_{lu}}(\xi)=\mathcal{F}_{x\rightarrow\xi}\left[f(x)\mathbf{1}_{[l,u]}(x)\right]$, \begin{align} & \widehat{f_{lu}}(\xi)=\frac{1}{2}\big\{e^{il\xi}i\mathcal{H}\big[e^{-il\xi}\widehat{f}(\xi)\big]-e^{iu\xi}i\mathcal{H}\big[e^{-iu\xi}\widehat{f}(\xi)\big]\big\}.\label{eq:PSRelgenbarr} \end{align} The Hilbert transform was used by \cite{Feng2008} to price discrete barrier options by exploiting the relationship between the price at two successive monitoring dates: \begin{align} v(x,t_{n-1}) &= \int^u_lv(x',t_n)p(x-x',\Delta t)dx'. \end{align} Here $v(x,t_N)=\phi(x)e^{-\alpha x}$, i.e.\ the payoff of the option, and $p(\cdot,\Delta t)$ denotes the transition density of the underlying process with step size $\Delta t$.
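The equivalence between the Hilbert transform in the Fourier domain and multiplication by $-i\,\mathrm{sgn}\,x$ in the $x$ domain makes the Plemelj--Sokhotsky split easy to prototype. The Python sketch below is a naive dense-matrix illustration of Eqs.~(\ref{eq:PSRelpos}) and (\ref{eq:PSRelneg}), not the sinc-based fast Hilbert transform of \cite{Feng2008}; the grid point on the barrier is given half weight, consistent with the Gibbs midpoint behaviour:

```python
import numpy as np

M, x_max = 256, 10.0
dx = 2 * x_max / M
x = (np.arange(M) - M // 2) * dx                 # x grid centred on zero
xi = (np.arange(M) - M // 2) * np.pi / x_max     # xi grid from the Nyquist relation
F = dx * np.exp(1j * np.outer(xi, x))            # forward DFT with e^{+i xi x}

f = np.exp(-0.5 * x**2)                          # smooth test function
f_hat = F @ f

# Hilbert transform in the Fourier domain = multiply by -i sgn(x) in the x domain:
H_f_hat = F @ (-1j * np.sign(x) * f)

# Plemelj-Sokhotsky: transform of the part of f above the barrier at 0
f_plus_hat = 0.5 * (f_hat + 1j * H_f_hat)
```

The result agrees with transforming $f(x)\mathbf{1}_{\mathbb{R}_+}(x)$ directly, up to the half-weight convention at $x=0$.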
Using the convolution theorem together with the Hilbert transform, Eqs.~(\ref{eq:PSRelgenpos})--(\ref{eq:PSRelgenbarr}) can be employed to express the relationship between the price at two successive dates as \begin{align} \label{eq:Hilbpricsing} \widehat{v}(\xi,t_{n-1})&=\frac{1}{2}\left\{\Psi(\xi+i\alpha,\Delta t)\widehat{v}(\xi,t_{n})+e^{il\xi}i\mathcal{H}\left[e^{-il\xi}\Psi(\xi+i\alpha,\Delta t)\widehat{v}(\xi,t_{n})\right]\right\} \end{align} for a single-barrier down-and-out option and \begin{align} \label{eq:Hilbpricdoub} \widehat{v}(\xi,t_{n-1})=&\frac{1}{2}\left\{e^{il\xi}i\mathcal{H}\left[e^{-il\xi}\Psi(\xi+i\alpha,\Delta t)\widehat{v}(\xi,t_{n})\right]-e^{iu\xi}i\mathcal{H}\left[e^{-iu\xi}\Psi(\xi+i\alpha,\Delta t)\widehat{v}(\xi,t_{n})\right]\right\} \end{align} for a double-barrier option. \subsection{Spitzer identities} If we wish to use Eq.~(\ref{eq:Planch}) to price barrier options, the required characteristic functions are more complicated than the closed-form expressions referred to in Section \ref{sec:Back_fourhilb}. For our example we require the characteristic function of the distribution of the value of a process $X(t)$ at time $t=T$, conditional on the process remaining inside upper and lower barriers at discrete monitoring dates $t_n$, $n=0,1,\dots,N$. Fortunately, for a single barrier we can use the identities by \cite{spitzer1956combinatorial}, and for double barriers their extension by \cite{Kemperman1963}. These provide the Fourier-$z$ transform of the required PDF: the Fourier transform is applied to the log-price $x$ and the $z$-transform is applied to the discrete monitoring times. The $z$-transform of a discrete function $f(t_n)$ with $n\in\mathbb{N}_0$ is defined as \begin{equation} \label{eq:ZFor} \widetilde{f}(q)=\sum_{n=0}^{\infty}f(t_n)q^n, \quad q\in\mathbb{C}. 
\end{equation} An important aspect in the calculation of the Spitzer identities is the decomposition of a function into $+$ and $-$ parts, $\widehat{f_+}(\xi)=\mathcal{F}_{x\rightarrow\xi}\left[f(x)\mathbf{1}_{\mathbb{R}_+}(x)\right]$ and $\widehat{f_-}(\xi)=\mathcal{F}_{x\rightarrow\xi}\left[f(x)\mathbf{1}_{\mathbb{R}_-}(x)\right]$. As explained in Section \ref{sec:BarrHilb}, this can be done directly in the Fourier domain using the Plemelj-Sokhotsky relations, Eqs.~(\ref{eq:PSRelpos}) and (\ref{eq:PSRelneg}). The calculation of the Spitzer identities also requires the factorisation of a function, i.e.\ obtaining $\widehat{g_+}(\xi)$ and $\widehat{g_-}(\xi)$ such that $\widehat{g}(\xi)=\widehat{g_+}(\xi)\widehat{g_-}(\xi)$. This is achieved through a log-decomposition, i.e.\ decomposing the logarithm $\widehat{h}(\xi)=\log\widehat{g}(\xi)$ and then exponentiating the results to obtain $\widehat{g_+}(\xi)=\exp\widehat{h_+}(\xi)$ and $\widehat{g_-}(\xi)=\exp\widehat{h_-}(\xi)$. \cite{fusai2015} and \cite{Green2010} go into detail regarding the Spitzer identities and describe methods for single-barrier, double-barrier and lookback options. In this paper we concentrate on the Spitzer identities used for double-barrier and single-barrier down-and-out options. The first step is to factorise $\Phi(\xi,q)=1-q\Psi(\xi,\Delta t)=\Phi_+(\xi,q)\Phi_-(\xi,q)$. In the case of a single-barrier down-and-out option, the $z$-transform of the required characteristic function is given by \begin{align} \widetilde{\widehat{p}}(\xi,q)&=\frac{P_+(\xi,q)}{\Phi_+(\xi,q)}, \end{align} where $P_+(\xi,q)$ is obtained from the decomposition of $P(\xi,q)=e^{il\xi}/\Phi_-(\xi,q)$.
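The log-decomposition step can be prototyped with the same machinery. The Python sketch below uses dense transform matrices for clarity rather than the FFT-based implementation used in practice, and factorises $\Phi(\xi,q)=1-q\Psi(\xi,\Delta t)$ for an illustrative Gaussian characteristic function; all parameter values are arbitrary:

```python
import numpy as np

M, x_max = 256, 20.0
dx = 2 * x_max / M
x = (np.arange(M) - M // 2) * dx
xi = (np.arange(M) - M // 2) * np.pi / x_max
F = dx * np.exp(1j * np.outer(xi, x))                               # forward transform
Finv = (np.pi / x_max) / (2 * np.pi) * np.exp(-1j * np.outer(x, xi))  # exact discrete inverse

def decompose(f_hat):
    """Plemelj-Sokhotsky split into (+, -) parts via sign multiplication in x."""
    f = Finv @ f_hat
    w = (np.sign(x) + 1.0) / 2.0          # half weight at x = 0
    return F @ (f * w), F @ (f * (1.0 - w))

def factorise(g_hat):
    """g = g_+ g_-: decompose log g, then exponentiate the two parts."""
    h_plus, h_minus = decompose(np.log(g_hat))
    return np.exp(h_plus), np.exp(h_minus)

Psi = np.exp((1j * 0.05 * xi - 0.5 * 0.04 * xi**2) * 0.1)  # Gaussian char. function
Phi = 1.0 - 0.9 * Psi                                       # q = 0.9
Phi_plus, Phi_minus = factorise(Phi)
```

Since $\mathrm{Re}\,\Phi \ge 1-q > 0$ here, the principal logarithm is well defined and the product $\Phi_+\Phi_-$ reproduces $\Phi$ to numerical precision.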
For a double-barrier option, the $z$-transform of the required characteristic function is \begin{align} & \widetilde{\widehat{p}}(\xi,q)=\frac{1}{\Phi(\xi,q)}-e^{il\xi}\frac{J_-(\xi,q)}{\Phi(\xi,q)}-e^{iu\xi}\frac{J_+(\xi,q)}{\Phi(\xi,q)}, \end{align} where $J_+(\xi,q)$ and $J_-(\xi,q)$ are the solutions of the coupled equations \begin{align} & \frac{J_-(\xi,q)}{ \Phi_-(\xi,q)}=\left[\frac{e^{-il\xi}-e^{i(u-l)\xi}J_+(\xi,q)}{\Phi_-(\xi,q)}\right]_-\label{eq:Jmin} \\ & \frac{J_+(\xi,q)}{ \Phi_+(\xi,q)}=\left[\frac{e^{-iu\xi}-e^{i(l-u)\xi}J_-(\xi,q)}{\Phi_+(\xi,q)}\right]_+.\label{eq:Jpos} \end{align} The Spitzer identity for a single-barrier option can be solved directly. However, so far only an iterative solution has been found for the coupled Eqs.~(\ref{eq:Jmin}) and (\ref{eq:Jpos}) \citep{fusai2015}. \subsection{Numerical methods}\label{sec:Back_num} The methods in the previous section are described analytically. However, as they involve some expressions which cannot be solved in closed form, their implementation requires the use of numerical approximation techniques which we discuss in the following. \subsubsection{Discrete Fourier transform and spectral filtering}\label{sec:Nummeth_DFT} The forward and inverse Fourier transforms in Eqs.~(\ref{eq:FwdFourier}) and (\ref{eq:RevFourier}) are integrals over an infinite domain and in order to compute them numerically one needs to approximate them with a discrete Fourier transform (DFT). Rather than being defined over an infinite and continuous range of $x$ and $\xi$ values, the DFT is defined on grids of size $M$ in the $x$ and $\xi$ domains. For our scheme both the $x$ and $\xi$ grids are centred around zero and are defined based on the maximum value $x_{\max}$ in the $x$ domain. The step size is $\Delta x=2x_{\max}/M$ and the $x$ domain grid is defined as \begin{align} \label{eq:xGrid} & x_j=j\Delta x,\quad j=-\frac{M}{2},-\frac{M}{2}+1,\dots,\frac{M}{2}-1.
\end{align} The points in the $\xi$ domain are then calculated according to the Nyquist relation by obtaining the step size $\Delta\xi=\pi/x_{\max}$ and range $\xi_{\max}=\pi/\Delta x$ to give the $\xi$ domain grid as \begin{align} \label{eq:xiGrid} & \xi_k=k\Delta\xi, \quad k=-\frac{M}{2},-\frac{M}{2}+1,\dots,\frac{M}{2}-1. \end{align} The discrete Fourier transform is then \begin{align} \widehat{f}_{M,\Delta x}(\xi_k)&=\Delta x \sum^{M/2-1}_{j=-M/2}f\left(x_j\right)e^{ix_j\xi_k} \label{eq:DFT1}\\ f_{M,\Delta \xi }(x_j)&=\frac{\Delta \xi}{2\pi}\sum^{M/2-1}_{k=-M/2}\widehat{f}\left(\xi_k\right)e^{-ix_j\xi_k}.\label{eq:revDFT1} \end{align} In practice, we perform this calculation using the built-in MATLAB FFT function based on the FFTW library by \cite{frigo1998fftw}. It can be seen in Eqs.~(\ref{eq:DFT1}) and (\ref{eq:revDFT1}) that the range over which we calculate the Fourier transform is truncated, so we must consider the effect of the Gibbs phenomenon on the error performance. The Gibbs phenomenon describes the way that the shape of the function $f_{M,\Delta \xi}(x)$ approximated by a truncated Fourier series, i.e.\ the finite sum in Eq.~(\ref{eq:revDFT1}), converges to the analytical function $f(x)$ corresponding to an infinite sum. \cite{hewitt1979gibbs} provided a comprehensive guide to this effect which was first observed by \cite{wilbraham1848certain} and later described by \cite{gibbs1898fourier,gibbs1899fourier}. An example of this can be seen in Figure \ref{fig:pulsegibbs} which shows how $f_{M,\Delta \xi}(x)$ for a rectangular pulse varies as the value of $M$ increases. The error peaks at the discontinuity $f(x_\mathrm{d})$ and oscillates away from it, with the amplitude decreasing as a function of distance from the discontinuity. 
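The truncation effects discussed above are easy to reproduce. The following Python sketch mirrors the experiment of Figure~\ref{fig:pulsegibbs}, recovering the rectangular pulse $\mathbf{1}_{[-1/2,1/2]}(x)$ from its transform $\mathrm{sinc}(\xi/2\pi)$ on the grids of Eqs.~(\ref{eq:xGrid}) and (\ref{eq:xiGrid}); the dense-matrix inverse stands in for the FFT call used in practice:

```python
import numpy as np

def recover_pulse(M, x_max=8.0):
    """Invert the truncated transform of the pulse 1_{[-1/2,1/2]} on an M-point grid."""
    dx = 2.0 * x_max / M
    x = (np.arange(M) - M // 2) * dx
    xi = (np.arange(M) - M // 2) * np.pi / x_max
    f_hat = np.sinc(xi / (2.0 * np.pi))       # np.sinc(t) = sin(pi t)/(pi t)
    dxi = np.pi / x_max
    f = dxi / (2.0 * np.pi) * np.exp(-1j * np.outer(x, xi)) @ f_hat
    return x, f.real
```

At the jump the recovered value sits near the midpoint $\frac{1}{2}$ for every $M$, while the interior and exterior errors shrink as $M$ grows, as described in the text.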
\begin{figure} \begin{center} \includegraphics[width=\textwidth]{pulsegibbs6.png} \caption{Illustration of the effect of the Gibbs phenomenon on a rectangular pulse recovered applying the inverse FFT with grid size $M$ to $\mathrm{sinc}(\xi/2\pi)$. The approximated function is shown on the left and the error with respect to an analytical rectangular pulse on the right. On increasing $M$, the peak error at the discontinuity remains the same, the error away from the discontinuity reduces and the frequency of the oscillations increases.} \label{fig:pulsegibbs} \end{center} \end{figure} The value of the recovered function at the discontinuity $f_{M,\Delta \xi}(x_\mathrm{d})$ will be the mean of the values immediately before and after the discontinuity, i.e.\ $f_{M,\Delta \xi}(x_{\mathrm{d}})=\frac{1}{2}[f(x_{\mathrm{d}}^+)+f(x_{\mathrm{d}}^-)]$, and thus stays the same even as the value of $M$ increases. In contrast, it can be observed from Figure \ref{fig:pulsegibbs} that the oscillations increase in frequency and decrease in amplitude as the value of $M$ increases. An important aspect of the Gibbs phenomenon is that, even for values of $x$ far away from a discontinuity, the speed of convergence of the recovered function is altered by the presence of the discontinuity. If $f(x)\in C^{\infty},\ x\in\mathbb{R}$, the discrete Fourier transform converges exponentially, i.e.\ $\max_j|f(x_j)-f_{M,\Delta \xi}(x_j)|<e^{-\alpha M}$, where $\alpha>0$ is some constant. However, in the case of a function with a jump we achieve $0^{\mathrm{th}}$ order convergence at the discontinuity and away from the discontinuity we only achieve first order polynomial convergence, i.e.\ for $x_j\neq x_{\mathrm{d}}$, $|f(x_j)-f_{M,\Delta \xi}(x_j)|\sim O(1/M)$ \citep{gottlieb1997gibbs}. In general, we say that the truncation error has $k^{\mathrm{th}}$ order (polynomial) convergence when $|f(x)-f_{M,\Delta \xi}(x)|\sim O(1/M^k)$.
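This behaviour is easy to reproduce numerically. The sketch below (our own illustrative parameters, using the same transform convention as Eqs.~(\ref{eq:DFT1}) and (\ref{eq:revDFT1})) recovers a unit pulse on $[-1/2,1/2]$ from samples of its transform $\sin(\xi/2)/(\xi/2)$: the value at the discontinuity sits near the midpoint $1/2$ independently of $M$, while the error well inside the pulse shrinks as $M$ grows:

```python
import math

def recover_pulse(M, x_eval, x_max=4.0):
    # Truncated inverse transform of hat f(xi) = sin(xi/2)/(xi/2), the
    # transform of a unit pulse on [-1/2, 1/2]; only the real (cosine)
    # part is kept since the target function is real and even.
    dxi = math.pi / x_max
    total = 0.0
    for k in range(-M // 2, M // 2):
        xi = k * dxi
        fh = 1.0 if xi == 0.0 else math.sin(xi / 2.0) / (xi / 2.0)
        total += fh * math.cos(x_eval * xi)
    return dxi / (2.0 * math.pi) * total

value_at_jump = recover_pulse(256, 0.5)        # discontinuity at x_d = 1/2
err_128 = abs(recover_pulse(128, 0.0) - 1.0)   # error well inside the pulse
err_512 = abs(recover_pulse(512, 0.0) - 1.0)
```

The slow, oscillatory decay of the interior error with $M$ is the polynomial convergence discussed above.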
More generally, from the ``integration by parts coefficient bound'' described by \cite{Boyd2001} \citep[see also][]{ruijter2015application}, if the function is smooth up to and including its $(k-2)^{\mathrm{th}}$ derivative, and its $k^{\mathrm{th}}$ derivative is integrable, then the Fourier coefficients decrease as $O(1/\xi^k)$. From \cite{Boyd2001} we also have the ``last coefficient error estimate'' which states that for polynomial convergence we can approximately bound the error performance of a function with a discontinuity in the $(k-1)^{\mathrm{th}}$ derivative as $O(1/M^{k-1})$. However, these are upper bounds; as observed by \cite{ruijter2015application}, it is often the case that an error convergence of $O(1/M^k)$ or even better is seen and that this may be due to the alternating behaviour of the Fourier coefficients. Investigating and overcoming the Gibbs phenomenon is a mature field with applications in many areas. As a result, there is a large body of literature proposing different solutions to the problem. Some of these are too computationally heavy to be useful for our application, such as adaptive filtering and mollifiers suggested by \cite{tadmor2005adaptive} and \cite{tadmor2007filters}. In this article we adopt the approach of \cite{ruijter2015application} by using simple spectral filtering techniques which are applied by a pointwise multiplication in the Fourier domain and therefore add very little computational load. In the papers by \cite{vandeven1991family} and \cite{gottlieb1997gibbs}, a filter of order $p$ is defined as a function $\sigma(\eta)$ supported on $\eta\in[-1,1]$ with the following properties: \begin{align} \label{eq:filtDef} & \text{a) }\sigma(0)=1\text{, } \sigma^{(l)}(0)=0,\ l=1,\dots,p-1 \nonumber\\ & \text{b) }\sigma(\eta)=0 \text{ for }|\eta|=1\nonumber\\ & \text{c) }\sigma(\eta)\in C^{p-1}. \end{align} The scaled variable $\eta$ is related to $\xi$ in our application as $\eta=\xi/\xi_{\max}$.
In this paper we investigate the use of two filters. The exponential filter, described by \cite{gottlieb1997gibbs}, has the form \begin{align} \label{eq:expFilt} & \sigma(\eta)=e^{-\vartheta\eta^p}, \end{align} where $p$ is even and positive. This does not strictly meet criterion b in Eq.~(\ref{eq:filtDef}) as it does not go exactly to zero when $|\eta|=1$. However, if we select $\vartheta\geq\varepsilon\ln 10$, where $10^{-\varepsilon}$ is machine precision, then the filter coefficients are within computational accuracy of the requirements. An advantage of the exponential filter is that it has a simple form and the order of the filter is equal to the parameter $p$ which is directly input to the filter equation. The other filter we study here is the Planck taper \citep{mckechan2010tapering}, which is defined piecewise as \begin{align} &\sigma(\eta)= \begin{cases} 0, &\eta\leq \eta_1,\ \ \qquad \eta_1=-1 \\ \frac{1}{e^{\;z(\eta)}+1},\ z(\eta)=\frac{\eta_2-\eta_1}{\eta-\eta_1}+\frac{\eta_2-\eta_1}{\eta-\eta_2}, & \eta_1<\eta<\eta_2,\ \, \eta_2=\epsilon-1\\ 1, & \eta_2 \leq\eta\leq\eta_3,\ \,\eta_3=1-\epsilon\\ \frac{1}{e^{\;z(\eta)}+1},\ z(\eta)=\frac{\eta_3-\eta_4}{\eta-\eta_3}+\frac{\eta_3-\eta_4}{\eta-\eta_4}, &\eta_3<\eta<\eta_4,\ \, \eta_4=1\\ 0, &\eta\geq\eta_4. \end{cases} \end{align} The value of $\epsilon$ gives the proportion of the range of $\eta$ which is used for the slope regions. Between the slope regions, the filter is completely flat with a value of $1$. This contrasts with the exponential filter which introduces some, albeit often very minor, distortion for any value of $\eta\neq0$. In addition the Planck taper has the notable property that for all values of $\epsilon>0$, $\sigma(\eta,\epsilon)\in C^{\infty}$ and therefore the order of the Planck taper is $\infty$. However, it is clear that different values of $\epsilon$ give a different filter shape, so the order of a filter alone cannot be taken as a predictor of performance.
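Both filters are only a few lines of code. The sketch below is our own minimal implementation (parameter defaults are illustrative; machine precision is taken as $2.2\times10^{-16}$, i.e.\ $\vartheta=-\log\varepsilon_{\mathrm{mach}}$ for the exponential filter):

```python
import math

EPS_MACH = 2.2e-16   # double-precision machine epsilon (illustrative)

def exp_filter(eta, p=12):
    # Exponential filter of (even) order p; theta is chosen so that
    # sigma(+-1) is at machine-precision level
    theta = -math.log(EPS_MACH)
    return math.exp(-theta * eta ** p)

def planck_taper(eta, eps=0.2):
    # Planck taper with slope regions of relative width eps at each end
    e1, e2, e3, e4 = -1.0, eps - 1.0, 1.0 - eps, 1.0
    if eta <= e1 or eta >= e4:
        return 0.0
    if e2 <= eta <= e3:
        return 1.0
    if eta < e2:    # rising edge
        z = (e2 - e1) / (eta - e1) + (e2 - e1) / (eta - e2)
    else:           # falling edge
        z = (e3 - e4) / (eta - e3) + (e3 - e4) / (eta - e4)
    if z > 700.0:   # avoid overflow in exp for points very close to the ends
        return 0.0
    return 1.0 / (math.exp(z) + 1.0)
```

At the midpoint of each slope region the two terms of $z(\eta)$ cancel, so the taper passes through exactly $1/2$ there; the parameters $p$ and $\epsilon$ control how sharply each filter rolls off.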
Examples of the two filters are shown in Figure \ref{fig:expptfilt}. \begin{figure} \begin{center} \includegraphics[width=5in]{expptfilt2.png} \caption{Shape of the exponential filter (left) and Planck taper (right) with different parameter values.} \label{fig:expptfilt} \end{center} \end{figure} \FloatBarrier \subsubsection{Hilbert transform}\label{Sec:Back_transf_Hilb} The calculation of the Hilbert transform of a function $\widehat{f}(\xi)$ can be realised with an inverse/forward Fourier transform pair and multiplication by the signum function, \begin{align} & \mathcal{H}\big[\widehat{f}(\xi)\big]=-i\,\mathcal{F}_{x\rightarrow\xi}\big[\mathrm{sgn}(x)\mathcal{F}^{-1}_{\xi\rightarrow x}\widehat{f}(\xi)\big]. \end{align} However, this gives an error performance which is polynomially decreasing with the number of grid points $M$. In order to obtain exponential error convergence, \cite{Feng2008} and \cite{fusai2015} have implemented the Hilbert transform using the sinc expansion techniques comprehensively studied by \cite{Stenger1993,Stenger2011}. Stenger showed that, given a function $\widehat{f}(\xi)$ which is analytic in the whole plane including the real axis, the function and its Hilbert transform can be expressed as \begin{align} & \widehat{f}(\xi)=\sum^{+\infty}_{k=-\infty}\widehat{f}(k\Delta\xi)\frac{\sin(\pi(\xi-k\Delta\xi)/\Delta\xi)}{\pi(\xi-k\Delta\xi)/\Delta\xi},\label{eq:SincApprox}\\ & \mathcal{H}\big[\widehat{f}(\xi)\big]=\sum^{+\infty}_{k=-\infty}\widehat{f}(k\Delta\xi)\frac{1-\cos(\pi(\xi-k\Delta\xi)/\Delta\xi)}{\pi(\xi-k\Delta\xi)/\Delta\xi}, \label{eq:HilbSincApprox} \end{align} where $\Delta\xi$ is the grid step size in the Fourier domain. \cite{Stenger1993} also showed that, when the function $\widehat{f}(\xi)$ is analytic in a strip of the complex plane including the real axis, the expressions in Eqs.~(\ref{eq:SincApprox}) and (\ref{eq:HilbSincApprox}) are approximations whose error decays exponentially as $\Delta\xi$ decreases. 
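Truncated to a finite window, the expansion in Eq.~(\ref{eq:HilbSincApprox}) is straightforward to implement. The pure-Python sketch below (our own illustrative grid parameters) applies it to a Gaussian; in the convention implied by the sgn-based definition above, $\mathcal{H}[f](\xi)=\frac{1}{\pi}\,\mathrm{p.v.}\!\int f(s)/(\xi-s)\,ds$, the Hilbert transform of $e^{-\xi^2}$ is the classical closed form $\frac{2}{\sqrt{\pi}}D(\xi)$, with $D$ the Dawson integral (computed here by Simpson quadrature):

```python
import math

def dawson(x, n=2000):
    # Dawson integral D(x) = exp(-x^2) * int_0^x exp(t^2) dt
    # computed with composite Simpson's rule (n even)
    h = x / n
    s = 1.0 + math.exp(x * x)          # endpoint contributions
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * math.exp((i * h) ** 2)
    return math.exp(-x * x) * s * h / 3.0

def hilbert_sinc(f, x, dxi=0.05, K=200):
    # Sinc expansion of the Hilbert transform, truncated to |k| <= K;
    # the kernel has a removable zero at xi = k*dxi
    total = 0.0
    for k in range(-K, K + 1):
        u = math.pi * (x - k * dxi) / dxi
        total += f(k * dxi) * (0.0 if u == 0.0 else (1.0 - math.cos(u)) / u)
    return total

approx = hilbert_sinc(lambda t: math.exp(-t * t), 1.0)
exact = 2.0 / math.sqrt(math.pi) * dawson(1.0)   # known closed form
```

For a Gaussian, which decays faster than exponentially, the agreement is far below the tolerances of practical interest, consistent with the exponential error bounds quoted above.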
In addition to discretisation, the infinite sum in Eq.~(\ref{eq:HilbSincApprox}) must also be truncated to the grid size $M$, so that the discrete approximation of the Hilbert transform becomes \begin{align} \label{eq:HilbSincApproxTrunc} &\mathcal{H}\big[\widehat{f}(\xi)\big]\approx\sum^{+M/2}_{k=-M/2}\widehat{f}(k\Delta\xi)\frac{1-\cos(\pi(\xi-k\Delta\xi)/\Delta\xi)}{\pi(\xi-k\Delta\xi)/\Delta\xi}. \end{align} \cite{Feng2008,Feng2009} showed that if $\widehat{f}(\xi)$ decays at least exponentially as $\xi\rightarrow\infty$, i.e.\ $\widehat{f}(\xi)\leq \kappa \exp(-c |\xi|^{\nu})$, then the error in the Hilbert transform and the Plemelj-Sokhotsky relations caused by truncating the infinite sum in Eq.~(\ref{eq:HilbSincApprox}) is also exponentially bounded. Furthermore, Feng and Linetsky showed that if $\widehat{f}(\xi)$ is polynomially bounded as $\xi\rightarrow\infty$, i.e.\ $\widehat{f}(\xi)\leq c |\xi|^{\nu}$, then the error caused by truncating the series is no longer exponentially bounded \citep{Feng2008,Feng2009}. \subsubsection{Pricing method: single-barrier options with the Spitzer identity}\label{sec:back_num_sing} Two of the pricing methods that we modify in order to reduce the errors from the discrete Hilbert transform were devised and explained in depth by \cite{fusai2015}. The first method which we examine is the pricing procedure for single-barrier options. Without loss of generality we consider only the down-and-out case; the modifications that we propose are equally applicable to other types of single-barrier options. This method is briefly described here in order to provide a backdrop to the changes that were made to improve convergence. \begin{enumerate} \item Set the number of dates to $N-2$ so that the characteristic function acts as a smoothing function for the first and last dates in the scheme. \item Compute the characteristic function $\Psi(\xi+i\alpha,\Delta t)$, where $\alpha$ is the damping factor used in Section \ref{sec:Back_fourhilb}.
\item Use the Plemelj-Sokhotsky relations with the sinc method to factorise \begin{align} \label{eq:Phifactsing} &\Phi(\xi,q):=1-q\Psi(\xi+i\alpha,\Delta t) =\Phi_+(\xi,q)\Phi_-(\xi,q) \end{align} with $q$ selected according to the criteria specified by \cite{Abate1992} for the inverse $z$-transform. \item Decompose \begin{align} \label{eq:Pdecompsing} & P(\xi,q):=\frac{e^{-il\xi}\Psi(\xi+i\alpha,\Delta t)}{\Phi_-(\xi,q)} = P_+(\xi,q)+P_-(\xi,q) \end{align} and calculate \begin{align} \label{eq:FinalfpCalc1} & F(\xi,q):=\widehat{\phi}\,^*(\xi)\Psi(\xi+i\alpha,\Delta t)e^{il\xi}\frac{P_+(\xi,q)}{\Phi_+(\xi,q)}. \end{align} \item Calculate the price \begin{align} \label{eq:FinalpriceCalc} & v(0,N):=e^{-rT}\mathcal{F}^{-1}_{\xi\rightarrow x=0}\mathcal{Z}^{-1}_{q\rightarrow n= N-2}[F(\xi,q)]. \end{align} \end{enumerate} The Spitzer identities give the $z$-transform of the characteristic function, so in order to obtain the price the inverse $z$-transform must be applied. The method used was devised by \cite{Abate1992_2,Abate1992} and approximates the inverse $z$-transform by \begin{align} \label{eq:AWInvZ} &f(t_n)\approx\frac{1}{2n\rho^n}\Bigg[\widetilde{f}(\rho)+2\sum^{n-1}_{j=1}(-1)^j\operatorname{Re}\widetilde{f}\Big(\rho e^{\frac{\pi j i}{n}}\Big)+(-1)^n\widetilde{f}(-\rho)\Bigg]. \end{align} The number of terms in the summation in Eq.~(\ref{eq:AWInvZ}) is determined by the number of monitoring dates. However, the Euler acceleration \citep[see e.g.][]{o1997euler} allows one to achieve excellent accuracy with a fixed number of terms in the summation which is independent of the number of dates. This is explained in detail in \cite{fusai2015}; the basic idea is to approximate the results by the binomial average, also called the Euler transform, of a smaller number of terms. 
First the partial sums \begin{align} \label{eq:AWInvZpartial} & b_k=\frac{1}{2}\widetilde{f}(\rho)+\sum^k_{j=1}(-1)^j\operatorname{Re}\widetilde{f}\Big(\rho e^{\frac{\pi j i}{n}}\Big) \end{align} are calculated for $k=n_\mathrm{E},\dots,n_\mathrm{E}+m_\mathrm{E}$ and then the binomial average of the values of $b_k$ is taken, i.e. \begin{align} & f(t_n)\approx\frac{1}{2^{m_\mathrm{E}}n\rho^n} \sum^{m_\mathrm{E}}_{j=0}\binom{m_\mathrm{E}}{j}b_{n_\mathrm{E}+j}. \end{align} The parameters $n_\mathrm{E}$ and $m_\mathrm{E}$ are chosen to be large enough to attain sufficient accuracy and small enough such that $n_\mathrm{E}+m_\mathrm{E}\ll n$. Tests suggest that a choice of $n_\mathrm{E}=12$ and $m_\mathrm{E}=20$ provides good accuracy. This gives $n_\mathrm{E}+m_\mathrm{E}=32$, which is much smaller than the number of dates specified in most option contracts. The parameter $\rho$ controls the accuracy of the inverse $z$-transform; in order to have an accuracy of $10^{-2\gamma}$, one must set $\rho=10^{-\gamma/n}$ \citep{Abate1992}. This can result in very small values of $\rho^n$ and so it has been found in practice that the best achievable performance is of the order of $10^{-12}$ with $\gamma=6$. However, this is more than sufficiently low for practical purposes and to show whether exponential convergence is achieved. \cite{fusai2015} showed that this method could achieve exponential convergence with a wide range of L\'evy processes. However, the performance of the method with the variance gamma process only achieved polynomial convergence. This is consistent with the error behaviour of the discrete Hilbert transform with the variance gamma process, as explained in Section \ref{Sec:Back_transf_Hilb} above. Section \ref{sec:Errperf} explains in more detail how the error performance is bounded when this process is used.
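The inversion in Eq.~(\ref{eq:AWInvZ}), with and without the Euler acceleration, can be sketched as follows. The function names and test case are ours: we invert the $z$-transform $\widetilde{f}(q)=1/(1-aq)$ of the geometric sequence $f(t_n)=a^n$, with all parameter values purely illustrative:

```python
import cmath
import math

def inv_z_plain(fz, n, gamma=6.0):
    # Direct Abate-Whitt inversion with rho = 10^(-gamma/n)
    rho = 10.0 ** (-gamma / n)
    s = fz(rho).real + (-1) ** n * fz(-rho).real
    s += 2.0 * sum((-1) ** j * fz(rho * cmath.exp(1j * math.pi * j / n)).real
                   for j in range(1, n))
    return s / (2.0 * n * rho ** n)

def inv_z_euler(fz, n, gamma=6.0, n_e=12, m_e=20):
    # Euler-accelerated version: binomial average of the partial sums b_k
    # for k = n_e, ..., n_e + m_e
    rho = 10.0 ** (-gamma / n)
    b, partial = [], 0.5 * fz(rho).real
    for j in range(1, n_e + m_e + 1):
        partial += (-1) ** j * fz(rho * cmath.exp(1j * math.pi * j / n)).real
        if j >= n_e:
            b.append(partial)
    avg = sum(math.comb(m_e, j) * b[j] for j in range(m_e + 1)) / 2.0 ** m_e
    return avg / (n * rho ** n)

plain = inv_z_plain(lambda q: 1.0 / (1.0 - 0.6 * q), 20)     # recovers 0.6^20
euler = inv_z_euler(lambda q: 1.0 / (1.0 - 0.99 * q), 252)   # recovers 0.99^252
```

Note how the accelerated version evaluates $\widetilde{f}$ at only $n_\mathrm{E}+m_\mathrm{E}+2$ points regardless of $n$, which is what makes the computational cost independent of the number of monitoring dates.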
In order to improve the result, we multiplied the characteristic function by a spectral filter $\sigma(\eta)$ so that the input to both the factorisation and decomposition steps decays exponentially. The expressions in Eqs.~(\ref{eq:Phifactsing}) and (\ref{eq:Pdecompsing}) are replaced by \begin{align} &\Phi(\xi,q):=1-q\Psi(\xi+i\alpha,\Delta t)\sigma\left(\frac{\xi}{\xi_{\max}}\right)=\Phi_+(\xi,q)\Phi_-(\xi,q), \label{eq:Phifactsingfilt}\\ & P(\xi,q):=\frac{e^{-il\xi}\Psi(\xi+i\alpha,\Delta t)\sigma\left(\frac{\xi}{\xi_{\max}}\right)}{\Phi_-(\xi,q)} = P_+(\xi,q)+P_-(\xi,q). \label{eq:Pdecompsingfilt} \end{align} Numerical results with the updated method for the single-barrier case are shown in Section \ref{sec:results}. \subsubsection{Pricing method: double-barrier options with the Spitzer identity}\label{sec:back_num_doub} The second method from \cite{fusai2015} which we examine in this paper is the pricing procedure for double-barrier options. This is very similar to the method for single-barrier options described in Section \ref{sec:back_num_sing}, in that it uses Wiener-Hopf factorisation and decomposition to compute the appropriate Spitzer identity. However, the major difference in this case is that the equations cannot be solved directly and so require the use of a fixed-point algorithm. The steps in the pricing procedure for double-barrier options are the same as those described for the single-barrier down-and-out option in Section \ref{sec:back_num_sing}, with the exception of Step 4, which is now replaced by the following fixed-point algorithm: \begin{enumerate} \item[4. (a)] Set $J_+(\xi,q)=J_-(\xi,q)=0$. \item[\phantom{4. }(b)] Decompose \begin{align} \label{eq:Pdecomp} & P(\xi,q):=\frac{e^{-il\xi}\Psi(\xi+i\alpha,\Delta t)}{\Phi_-(\xi,q)}-\frac{e^{i(u-l)\xi}J_+(\xi,q)}{\Phi_-(\xi,q)} = P_+(\xi,q)+P_-(\xi,q) \end{align} and calculate $J_-(\xi,q):=P_-(\xi,q)\Phi_-(\xi,q)$. \item[\phantom{4.
}(c)] Decompose \begin{align} \label{eq:Qdecomp} & Q(\xi,q):=\frac{e^{-iu\xi}\Psi(\xi+i\alpha,\Delta t)}{\Phi_+(\xi,q)}-\frac{e^{i(l-u)\xi}J_-(\xi,q)}{\Phi_+(\xi,q)} = Q_+(\xi,q)+Q_-(\xi,q) \end{align} and calculate $J_+(\xi,q):=Q_+(\xi,q)\Phi_+(\xi,q)$. \item[\phantom{4. }(d)] Calculate \begin{align} \label{eq:FinalfpCalc2} F(\xi,q):=\widehat{\phi}\,^*(\xi)\frac{\Psi(\xi+i\alpha,\Delta t)}{\Phi(\xi,q)}\left[\Psi(\xi+i\alpha,\Delta t)-e^{il\xi}J_-(\xi,q)-e^{iu\xi}J_+(\xi,q)\right]. \end{align} \item[\phantom{4. }(e)] If the difference between the new and the old value of $F(\xi,q)$ is less than a predefined tolerance or the number of iterations is greater than a certain value, e.g.~5, then calculate the price using Eq.~(\ref{eq:FinalpriceCalc}), otherwise return to step (b). \end{enumerate} Unlike the direct method for single-barrier options described in Section \ref{sec:back_num_sing}, this iterative method is limited to polynomial error convergence for all processes. In Section \ref{sec:Errperf} we show that this is due to the Gibbs phenomenon. In order to improve the error performance, we placed a filter $\sigma(\eta)$ on the input to each decomposition step in the fixed-point algorithm. The calculations for $P(\xi,q)$ and $Q(\xi,q)$ in Eqs.~(\ref{eq:Pdecomp}) and (\ref{eq:Qdecomp}) are replaced by \begin{align} P(\xi,q)&:=\sigma\left(\frac{\xi}{\xi_{\max}}\right)\left[\frac{e^{-il\xi}\Psi(\xi+i\alpha,\Delta t)}{\Phi_-(\xi,q)}-\frac{e^{i(u-l)\xi}J_+(\xi,q)}{\Phi_-(\xi,q)} \right], \\ Q(\xi,q)&:=\sigma\left(\frac{\xi}{\xi_{\max}}\right)\left[\frac{e^{-iu\xi}\Psi(\xi+i\alpha,\Delta t)}{\Phi_+(\xi,q)}-\frac{e^{i(l-u)\xi}J_-(\xi,q)}{\Phi_+(\xi,q)} \right]. \end{align} It must also be noted that this change is only designed to provide significant improvements to the double-barrier method with exponentially decaying characteristic functions.
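The decomposition operation at the heart of steps (b) and (c) can be illustrated in isolation: splitting a Fourier-domain function into parts supported on positive and negative $x$ amounts, via the Plemelj-Sokhotsky relations, to masking in the $x$ domain. A toy pure-Python sketch with our own illustrative grid and test function (plain $O(M^2)$ transforms instead of an FFT, for transparency):

```python
import cmath
import math

M, x_max = 64, 4.0
dx, dxi = 2.0 * x_max / M, math.pi / x_max
xs = [j * dx for j in range(-M // 2, M // 2)]
xis = [k * dxi for k in range(-M // 2, M // 2)]

def dft(vals):
    return [dx * sum(v * cmath.exp(1j * x * xi) for v, x in zip(vals, xs))
            for xi in xis]

def idft(vals):
    return [dxi / (2.0 * math.pi) * sum(v * cmath.exp(-1j * x * xi)
            for v, xi in zip(vals, xis)) for x in xs]

def half(x, sign):
    # indicator (1 + sign*sgn(x))/2, with the x = 0 sample split equally
    return 0.5 if x == 0.0 else (1.0 if math.copysign(1.0, x) == sign else 0.0)

p = [math.exp(-4.0 * (x - 0.3) ** 2) for x in xs]   # test function in x
P = dft(p)
pm = idft(P)                                        # back to the x domain
P_plus = dft([v * half(x, 1.0) for v, x in zip(pm, xs)])
P_minus = dft([v * half(x, -1.0) for v, x in zip(pm, xs)])

recon_err = max(abs(a + b - c) for a, b, c in zip(P_plus, P_minus, P))
p_minus = idft(P_minus)
support_err = max(abs(v) for v, x in zip(p_minus, xs) if x > 0.0)
```

The split is exact ($P_+ + P_- = P$, and $P_-$ has no support at positive $x$), but the masking introduces a jump at $x=0$, which is the source of the Gibbs-driven slow decay analysed in Section \ref{sec:Errperf}.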
In the case of a polynomially decaying characteristic function such as that of the variance gamma process, this method will also be subject to the same limitations on accuracy as described in Section \ref{sec:Errperf_Sing} for single-barrier options. Therefore, if we wish to use this scheme with the variance gamma process, we must also apply filtering to the factorisation step as shown in Eq.~(\ref{eq:Phifactsingfilt}). Numerical results with the updated method for the double-barrier case are shown in Section \ref{sec:results}. \subsubsection{Pricing method: Feng and Linetsky}\label{sec:back_num_fl} The third pricing method that we examine in order to illustrate the improvements obtained by the addition of spectral filtering to the sinc-based Hilbert transform is the recursive one published by \cite{Feng2008} and explained in Section \ref{sec:BarrHilb}. In general, the FL method achieves excellent results for both single and double-barrier options \citep{Feng2008,fusai2015}; the error converges exponentially with grid size and reaches machine accuracy for fairly small grid sizes. However, compared with the FGM method, it has the disadvantage that the computational time increases linearly with the number of monitoring dates. Similarly to the FGM method for single-barrier options, exponential error convergence is achieved only for processes where the characteristic function decays exponentially as $|\xi|\rightarrow\infty$. Therefore, poor error performance is achieved for the variance gamma process, whose characteristic function decays only polynomially as $|\xi|\rightarrow\infty$. \cite{Feng2008} explained this in some detail, showing how this is linked to the truncation error of the discrete Hilbert transform. In order to improve the results, we altered the FL method by placing a filter on the input to the Hilbert transform to ensure it decays exponentially.
We replaced Eqs.~(\ref{eq:Hilbpricsing}) and (\ref{eq:Hilbpricdoub}) by \begin{align} \widehat{v}(\xi,t_{n-1})=&\frac{1}{2}\left\{\sigma\left(\frac{\xi}{\xi_{\max}}\right)\Psi(\xi+i\alpha,\Delta t)\widehat{v}(\xi,t_{n})\right.\nonumber\\&\left.+e^{il\xi}i\mathcal{H}\left[e^{-il\xi}\sigma\left(\frac{\xi}{\xi_{\max}}\right)\Psi(\xi+i\alpha,\Delta t)\widehat{v}(\xi,t_{n})\right]\right\}\label{eq:Hilbpricsingfilt},\\ \widehat{v}(\xi,t_{n-1})=&\frac{1}{2}\left\{e^{il\xi}i\mathcal{H}\left[e^{-il\xi}\sigma\left(\frac{\xi}{\xi_{\max}}\right)\Psi(\xi+i\alpha,\Delta t)\widehat{v}(\xi,t_{n})\right]\right.\nonumber\\&\left.-e^{iu\xi}i\mathcal{H}\left[e^{-iu\xi}\sigma\left(\frac{\xi}{\xi_{\max}}\right)\Psi(\xi+i\alpha,\Delta t)\widehat{v}(\xi,t_{n})\right]\right\}.\label{eq:Hilbpricdoubfilt} \end{align} Numerical results with the updated method are shown in Section \ref{sec:results}. \section{Error performance of the pricing procedure}\label{sec:Errperf} In this section we examine the error performance of the different calculations that make up the original pricing procedures without filtering and show bounds for the individual steps. In doing this, the effect of each step in the procedure on the shape of the output function in the Fourier domain is examined, as this largely determines the error performance of the successive steps. In the FGM and FL pricing methods, the computation of the characteristic function is done directly in the Fourier domain so there are no numerical errors associated with this calculation. All the L\'evy processes that we are considering have characteristic functions that decay exponentially as $|\xi|\rightarrow\infty$, with the exception of the variance gamma process where the characteristic function decays polynomially and is bounded as $|\xi|^{-2\Delta t/\nu}$. The damping factor $\alpha$ is omitted from the calculations to make the notation more concise.
This is appropriate as the value of $i\alpha$ becomes insignificant as $|\xi|\rightarrow\infty$. \subsection{Pricing single-barrier options with the variance gamma process using the Spitzer identity}\label{sec:Errperf_Sing} Following the calculation of the characteristic function, the next step in the pricing procedure is the factorisation of $\Phi(\xi,q)=[1-q\Psi(\xi,\Delta t)]$, which means that we need to apply the discrete Hilbert transform to $\log\Phi(\xi,q)=\log[1-q\Psi(\xi,\Delta t)]$. With the exception of the variance gamma process, as $|\xi|\rightarrow\infty$, $q\Psi(\xi,\Delta t)\sim qe^{-\Delta t\xi^2}$ which quickly becomes very small. Thus we can say that as $|\xi|\rightarrow\infty$, $|\log[1-q\Psi(\xi,\Delta t)]|<ce^{-\kappa\Delta t\xi^2}$ with $c,\kappa$ positive constants. Therefore, from the error bounds for the sinc-based Hilbert transform proved by \cite{Stenger1993} and \cite{Feng2008}, the output of the decomposition of $\log\Phi(\xi,q)$ has exponential error performance for exponentially decaying characteristic functions. In the case of the variance gamma process, the characteristic function is \begin{align} \Psi(\xi,t) &=\left(1-i\nu\theta\xi+\frac{1}{2}\sigma^2\nu\xi^2\right)^{-t/\nu}. \end{align} When $|\xi|$ is very large, $\Psi(\xi,\Delta t)$ is dominated by $\xi^{-2\Delta t/\nu}$, so when $|\xi|\rightarrow\infty$, $|\log[1-q\Psi(\xi,\Delta t)]|<c\xi^{-2\Delta t/\nu}$. Therefore, we can bound the truncation error from the decomposition of $\log[1-q\Psi(\xi,\Delta t)]$. \cite{Feng2008} showed that the truncation error from applying the sinc-based Hilbert transform to a function which decays as $c|\xi|^{-2\Delta t/\nu}$ is bounded by $\frac{2c\nu}{2\Delta t-\nu}(M\Delta\xi)^{-(2\Delta t/\nu-1)}$, where there is a constraint on the process parameters of $\Delta t>\nu/2$.
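The polynomial tail of the variance gamma characteristic function is easy to check numerically. In the sketch below (our own illustrative parameter values, using the standard Madan-Carr-Chang parameterisation with drift terms omitted), doubling $\xi$ scales $|\Psi|$ by approximately $2^{-2\Delta t/\nu}$:

```python
def psi_vg(xi, t, sigma=0.2, nu=0.25, theta=-0.1):
    # Variance gamma characteristic function (drift terms omitted;
    # all parameter values are illustrative)
    return (1.0 - 1j * nu * theta * xi + 0.5 * sigma ** 2 * nu * xi ** 2) ** (-t / nu)

t, nu = 0.5, 0.25
ratio = abs(psi_vg(400.0, t)) / abs(psi_vg(200.0, t))
expected = 2.0 ** (-2.0 * t / nu)   # polynomial tail: doubling xi gives 2^(-2t/nu)
```

This contrasts with, e.g., a Gaussian component $e^{-\Delta t\xi^2}$, for which the same doubling would reduce the modulus by many orders of magnitude.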
We show that if we take into account the form of the discrete Hilbert transform and the similarity between the positive and negative tails of the characteristic function, a tighter bound can be defined and the constraints on the parameters can be relaxed. Defining $f_{\Delta\xi}(\xi)$ as the output of the infinite sum from Eq.~(\ref{eq:HilbSincApprox}) and $f_{\Delta\xi,M}(\xi)$ as the output of the truncated sum from Eq.~(\ref{eq:HilbSincApproxTrunc}), and pairing the terms at $\pm k$, which is possible because the two tails have the same polynomial decay, \begin{align} |f_{\Delta\xi}(\xi)-f_{\Delta\xi,M}(\xi)| &< c_1\Delta\xi\left|\sum_{k=M/2}^{\infty}\left[ \frac{(k\Delta \xi)^{-2\Delta t/\nu}}{\xi-k\Delta\xi}+ \frac{(k\Delta \xi)^{-2\Delta t/\nu}}{\xi+k\Delta\xi} \right]\right|\nonumber\\ & <c_2\Delta\xi\sum_{k=M/2}^{\infty}\left|\frac{(k\Delta \xi)^{-2\Delta t/\nu}}{\xi^2-(k\Delta\xi)^2}\right|\nonumber\\ & <c_3\Delta\xi\sum_{k=M/2}^{\infty}\frac{(k\Delta \xi)^{-2\Delta t/\nu}}{(k\Delta\xi)^2}\nonumber\\ & <c_4\int^{+\infty}_{M\Delta\xi/2}{\xi'}^{-\left(\frac{2\Delta t}{\nu}+2\right)}d\xi' <c_5(M\Delta\xi)^{-\left(\frac{2\Delta t}{\nu}+1\right)},\label{eq:vgfactbound1} \end{align} where $c_1,\dots,c_5$ are positive constants; the constants absorb the factor $2\xi$ arising when the paired fractions are combined and depend on the fixed evaluation point $\xi$. In this case, for the integral to converge we must have $2\Delta t/\nu+2>1$, which is the case for all possible process parameters. When the output of this decomposition is exponentiated to obtain the results of the factorisation, the error will be bounded by \begin{align} \left|\frac{e^{f_{\Delta\xi}(\xi)}-e^{f_{\Delta\xi,M}(\xi)}}{e^{f_{\Delta\xi}(\xi)}}\right| & < c_6\left[1-e^{c_5(M\Delta\xi)^{-\left(\frac{2\Delta t}{\nu}+1\right)}}\right], \end{align} where $c_5$ and $c_6$ are positive constants. For large $M$ this converges as $O\left(M^{-\left(2\Delta t/\nu+1\right)}\right)$, thus the factorisation error convergence is polynomial.
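The polynomial character of the truncation error can also be seen directly on a toy example: applying the truncated sinc expansion to the polynomially decaying function $1/(1+\xi^2)$, whose Hilbert transform in the convention used here is the classical pair $\xi/(1+\xi^2)$, and then doubling the truncation window. A pure-Python sketch with our own illustrative parameters:

```python
import math

def hilbert_sinc(f, x, dxi, K):
    # Sinc expansion of the Hilbert transform truncated to |k| <= K;
    # the kernel has a removable zero at x = k*dxi
    total = 0.0
    for k in range(-K, K + 1):
        u = math.pi * (x - k * dxi) / dxi
        total += f(k * dxi) * (0.0 if u == 0.0 else (1.0 - math.cos(u)) / u)
    return total

f = lambda t: 1.0 / (1.0 + t * t)     # decays only polynomially, as O(1/xi^2)
x, dxi = 2.0, 0.05
exact = x / (1.0 + x * x)             # known Hilbert pair of f
err_narrow = abs(hilbert_sinc(f, x, dxi, 200) - exact)   # window [-10, 10]
err_wide = abs(hilbert_sinc(f, x, dxi, 400) - exact)     # window [-20, 20]
```

Doubling the window shrinks the error only by a fixed polynomial factor, in contrast to the Gaussian example of the previous section, where the truncation error is negligible already for modest windows.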
In considering the error performance of the pricing method as a whole, we must also consider the shape of the output of the factorisation in the Fourier domain as this will influence the error performance of the subsequent step. Figure \ref{fig:fact_input_output} shows that the function flattens out at high values of $|\xi|$ and asymptotically approaches $1$. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{fact_input_output3.png} \end{center} \caption{Input and output functions for the factorisation of $\Phi(\xi,q)=1-q\Psi(\xi,\Delta t)$ with the Kou process for $q=\rho$. The plot shows how the decay of the function is changed by the decomposition. } \label{fig:fact_input_output} \end{figure} Therefore, if we were to input $\Phi_{\pm}(\xi,q)$ directly to the Hilbert transform in the decomposition step then we would not be able to bound the truncation error using Feng and Linetsky's error limit for exponentially bounded functions. However, the last date is taken out of the FGM pricing scheme. This means that we multiply the function to be decomposed by the characteristic function. In the case of exponentially decaying characteristic functions, this restores the exponential decay of the function for high values of $|\xi|$, which again means that the truncation error of the discrete Hilbert transform is exponentially bounded. However, if the variance gamma process is used then the input to the decomposition is only polynomially decaying and thus we again have polynomial error convergence for this stage. \subsection{Double-barrier options with the unfiltered Spitzer identity}\label{sec:Errperf_Doub} The original pricing procedure for double-barrier options shows polynomial convergence for all processes, even those whose characteristic function decays exponentially.
The main difference between the pricing procedure for single and double-barrier options is the presence of the fixed-point algorithm and in this section we show how this causes the polynomial error convergence. As shown in Section \ref{sec:Errperf_Sing}, with an exponentially decaying characteristic function the factorisation has exponential error convergence. In addition we multiply the input to the fixed-point algorithm by the characteristic function, which means that it is exponentially bounded as $|\xi|\rightarrow\infty$. Provided the input function to the first iteration of the fixed-point algorithm is exponentially bounded, the error on the output of the initial decomposition is exponentially bounded. However, the decomposition operation is equivalent to multiplying the function in the $x$ domain by either $\mathbf{1}_{\mathbb{R}_+}(x)$ or $\mathbf{1}_{\mathbb{R}_-}(x)$, which introduces a jump into the output functions. Due to the Gibbs phenomenon, this means that the output function from the decomposition decays as $O(1/\xi)$ as $\xi\rightarrow\infty$. The effect of this is that the input function to the second iteration of the fixed-point algorithm is no longer exponentially bounded and so, according to \cite{Stenger1993} and \cite{Feng2008}, the error from the truncation of the infinite sum in Eq.~(\ref{eq:HilbSincApprox}) to give Eq.~(\ref{eq:HilbSincApproxTrunc}) is no longer exponentially bounded. A bound for this error is \begin{align} |f_{\Delta\xi}-f_{\Delta\xi,M}| &< c_1\Delta\xi\left( \sum_{k=M/2}^{\infty}\left|\frac{1}{k\Delta\xi(\xi-k\Delta\xi)}\right|+ \sum_{k=-\infty}^{-M/2}\left|\frac{1}{k\Delta\xi(\xi-k\Delta\xi)}\right| \right)\nonumber\\ & <c_2\Delta\xi\sum_{k=M/2}^{\infty}\frac{1}{(k\Delta\xi)^2}\nonumber\\ & <c_3\int^{+\infty}_{M\Delta\xi/2}\frac{1}{{\xi'}^2}d\xi' <c_4\frac{1}{M\Delta\xi},\label{eq:decompbound} \end{align} where $c_1$, $c_2$, $c_3$ and $c_4$ are positive constants.
Therefore, using the fixed-point algorithm with more than one iteration means that the error is no longer exponentially bounded. The bound shown in Eq.~(\ref{eq:decompbound}) is $O(1/M)$. However, the error of the pricing procedure actually decays as $O(1/M^2)$; this better performance may be due to the alternating behaviour of the Fourier coefficients. \subsection{Feng and Linetsky pricing method with the variance gamma process}\label{sec:Errperf_FL} The FL method is described in Eqs.~(\ref{eq:Hilbpricsing}) and (\ref{eq:Hilbpricdoub}), which show how the Hilbert transform is applied for each monitoring date. As explained in Section \ref{sec:Errperf_Doub}, the application of the Hilbert transform introduces a discontinuity into the function in the log-price domain; therefore, the Fourier coefficients on the output of the Hilbert transform will decay as $O(1/\xi)$ as $\xi\rightarrow\infty$. However, before the Hilbert transform is applied for the next monitoring date, the Fourier domain function is multiplied by the characteristic function of the underlying process. Therefore, as explained by \cite{Feng2008}, if the characteristic function is exponentially decaying, this will result in an exponentially convergent error. However, with a polynomially decaying characteristic function, such as that of the variance gamma process, only polynomial error convergence is achieved. \subsection{Error performance with filtering on the sinc-based Hilbert transform}\label{sec:Errperf_filt} The multiplication by a filter with exponentially decaying coefficients as $|\xi|\rightarrow\infty$ gives an exponentially convergent truncation error for the sinc-based discrete Hilbert transform compared with the non-truncated version. However, filtering distorts the function somewhat.
The numerical results with the updated method are shown in Section \ref{sec:results}. The prices calculated with the filtered version have been compared with the price calculated using the unfiltered FL method with the maximum grid size, confirming that any distortion error is less significant than the improvement in error convergence. Due to the error being influenced by these two opposing effects, we have not attempted to devise a tight error bound which closely matches the improvement in performance achieved in practice. It is often seen in the literature on the Gibbs phenomenon that the empirical results outstrip the calculated error bounds. For example, \cite{ruijter2015application} suggest that the faster convergence they see may be due to the alternating nature of the Fourier coefficients. \section{Numerical results} \label{sec:results} We performed numerical tests using the pricing schemes updated to include filtering, as described in Section \ref{sec:Back}. The results for the FGM method for double-barrier options with exponentially decaying characteristic functions are presented in Section \ref{sec:Res_exp}. Section \ref{sec:Res_pol} contains results for all methods with the variance gamma process. Details of the contract and the model parameters are included in Table \ref{tab:Parasetup} in the Appendix. The numerical results were obtained using MATLAB R2016b running under OS X Yosemite on a 2015 Retina MacBook Pro with a 2.7~GHz Intel Core i5 processor and 8~GB of RAM. \subsection{Results with exponentially decaying characteristic functions}\label{sec:Res_exp} We present results for the FGM method for double-barrier options with filtering included in the fixed-point algorithm as described in Section \ref{sec:back_num_doub}. We examined the performance for both the Kou and NIG processes with $N=4$, 52 and 252. The values of 52 and 252 represent weekly and daily monitoring over 1 year.
Results with $N=4$ are presented in order to show the performance of the method with very few monitoring dates. Figure \ref{fig:N=all_U=1_15_L=0_85_KOU5} shows results for the Kou process and Figure \ref{fig:N=all_U=1_15_L=0_85_NIG} shows results for the NIG process. The original FL and FGM methods are labelled ``FL'' and ``FGM''. The FGM method with filtering is labelled ``FGM-E, $p$=order'' for results with the exponential filter and ``FGM-P, $\epsilon$=parameter'' with the Planck taper. Comparing the results for all methods, we see that the FL method gives the best error convergence versus grid size. This is due to the error of the FGM method being limited by the performance of the inverse $z$-transform. Comparing the filtered FGM methods, the exponential filter gives better results, but the Planck taper is less sensitive to variations in the filter shape. The best results were achieved with an exponential filter of order $p=12$. Tables \ref{tab:Kouiterations} and \ref{tab:NIGiterations} present the number of iterations and the computational time for a range of dates. The results demonstrate that, as the number of dates increases, the number of iterations and the computational time either do not increase or increase only minimally, confirming that the computational time is essentially independent of the number of monitoring dates. Figures \ref{fig:N=all_U=1_15_L=0_85_KOU5} and \ref{fig:N=all_U=1_15_L=0_85_NIG} show how the convergence of the numerical techniques changes with the grid size and Figures \ref{fig:N=all_U=1_15_L=0_85_Kou_CPU} and \ref{fig:N=all_U=1_15_L=0_85_NIG_CPU} show how the convergence behaviour corresponds to computational time with an exponential filter of order 12. The inclusion of a filter in the FGM method produces a large improvement compared to the unfiltered method. Despite this improvement, for low numbers of monitoring dates the FL method shows the best performance.
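For reference, the two filter families compared here can be sketched as follows. This is one common parametrization, with the exponential filter constant chosen so that the filter reaches machine epsilon at the edge of the band (an illustrative assumption; the experiments use the definitions given in Section \ref{sec:Back}). The toy square-wave example at the end reproduces the qualitative effect reported in this section: multiplying slowly decaying Fourier coefficients by an exponentially decaying filter restores fast convergence away from a discontinuity.

```python
import numpy as np

ALPHA = -np.log(np.finfo(float).eps)   # ~36.04, so the filter equals machine eps at eta = 1

def exponential_filter(eta, p):
    """Exponential filter of order p on the normalised frequency eta in [0, 1]."""
    return np.exp(-ALPHA * np.abs(eta) ** p)

def planck_taper(eta, eps):
    """Planck taper: identically 1 up to 1 - eps, C-infinity roll-off to 0 at eta = 1."""
    eta = np.abs(np.asarray(eta, dtype=float))
    u = (eta - (1.0 - eps)) / eps      # position inside the transition band
    out = np.ones_like(eta)
    out[u >= 1.0] = 0.0
    band = (u > 0.0) & (u < 1.0)
    out[band] = 1.0 / (1.0 + np.exp(1.0 / (1.0 - u[band]) - 1.0 / u[band]))
    return out

# Toy Gibbs demo: square wave sign(x) on (-pi, pi), sine coefficients 4/(pi k), k odd.
K = 64
k = np.arange(1.0, K, 2.0)
b = 4.0 / (np.pi * k)
x = np.linspace(0.5, np.pi - 0.5, 201)         # stay away from the jumps
S = np.sin(np.outer(x, k))
err_plain = np.max(np.abs(S @ b - 1.0))        # unfiltered: only O(1/K) accuracy
err_filt = np.max(np.abs(S @ (b * exponential_filter(k / K, 8)) - 1.0))
```

Both filters act as multipliers on the Fourier coefficients, $\hat f_k \mapsto \sigma(|k|/K)\,\hat f_k$.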
However, for 252 monitoring dates, the filtered FGM method performs on a par with the FL method for errors $>10^{-10}$, and for higher numbers of dates the filtered FGM method shows the best performance for errors $>10^{-10}$. Including the filter in the FL method produces a result with very slightly worse absolute error performance but which still retains exponential convergence. We can relate this to the error discussion in Section \ref{sec:Errperf_filt}: the filter causes a slight distortion which degrades the absolute error performance, but there is no improvement to be gained in the rate of convergence as the unfiltered method already achieves exponential convergence. \begin{figure}[h] \begin{center} \includegraphics[width=\textwidth]{N=all_U=1_15_L=0_85_Kou7.png} \end{center} \caption{Error vs.\ grid size $M$ for the Kou process and varying numbers of monitoring dates $N$. The filter improves the performance of the FGM method from polynomial to exponential. The best results are obtained with an exponential filter of order $p=12$.} \label{fig:N=all_U=1_15_L=0_85_KOU5} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=\textwidth]{N=all_U=1_15_L=0_85_NIG7.png} \end{center} \caption{Error vs.\ grid size $M$ with the NIG process and varying numbers of monitoring dates $N$. The filter improves the performance of the FGM method from polynomial to exponential.
The best results are obtained with an exponential filter of order $p=12$.} \label{fig:N=all_U=1_15_L=0_85_NIG} \end{figure} \begin{table}[h] \centering \begin{tabular}{rcccccc} \hline \hline Dates & Tolerance & $M$ & Average iterations & Price & Error & CPU time \\ \hline 4 & E-8 & 1024 & 2.000 & 0.00721968941 & 4.12E-14 & 5.63E-03 \\ 52 & E-8 & 1024 & 2.000 & 0.00518403635 & 3.07E-13 & 3.81E-02 \\ 104 & E-8 & 1024 & 2.000 & 0.00490517113 & 5.54E-13 & 3.99E-02 \\ 252 & E-8 & 1024 & 2.000 & 0.00465711572 & 4.29E-12 & 3.72E-02 \\ 504 & E-8 & 1024 & 2.000 & 0.00452396360 & 4.31E-09 & 3.80E-02 \\ \hline 4 & E-10 & 1024 & 2.000 & 0.00721968941 & 4.12E-14 & 1.82E-02 \\ 52 & E-10 & 1024 & 2.000 & 0.00518403635 & 3.07E-13 & 3.50E-02 \\ 104 & E-10 & 1024 & 2.091 & 0.00490517113 & 5.62E-13 & 3.88E-02 \\ 252 & E-10 & 1024 & 2.121 & 0.00465711572 & 4.31E-12 & 3.71E-02 \\ 504 & E-10 & 1024 & 2.152 & 0.00452396360 & 4.31E-09 & 3.90E-02 \\ \hline \hline \end{tabular} \caption{Results for the Kou process with the fixed-point algorithm tolerance set to $10^{-8}$ and $10^{-10}$.} \label{tab:Kouiterations} \end{table} \begin{table}[h] \centering \begin{tabular}{rcccccc} \hline \hline Dates & Tolerance & $M$ & Average iterations & Price & Error & CPU time \\ \hline 4 & E-8 & 1024 & 2.000 & 0.00545479385 & 2.38E-13 & 1.50E-02 \\ 52 & E-8 & 1024 & 2.000 & 0.00359559460 & 5.07E-13 & 8.57E-02 \\ 104 & E-8 & 1024 & 2.000 & 0.00341651334 & 5.92E-10 & 8.58E-02 \\ 252 & E-8 & 1024 & 2.091 & 0.00328484367 & 3.15E-07 & 9.63E-02 \\ 504 & E-8 & 1024 & 2.182 & 0.00322814330 & 6.84E-07 & 9.34E-02 \\ \hline 4 & E-10 & 4096 & 2.000 & 0.00545479385 & 7.17E-14 & 1.45E-02 \\ 52 & E-10 & 4096 & 2.242 & 0.00359559460 & 6.70E-13 & 2.20E-01 \\ 104 & E-10 & 4096 & 2.303 & 0.00341651275 & 3.80E-13 & 2.15E-01 \\ 252 & E-10 & 4096 & 2.364 & 0.00328453104 & 2.33E-09 & 2.08E-01 \\ 504 & E-10 & 4096 & 2.485 & 0.00322753427 & 7.53E-08 & 2.21E-01 \\ \hline \hline \end{tabular} \caption{Results for the NIG process 
with the fixed-point algorithm tolerance set to $10^{-8}$ and $10^{-10}$.} \label{tab:NIGiterations} \end{table} \begin{figure} \begin{center} \includegraphics[width=\textwidth]{N=all_U=1_15_L=0_85_Kou_CPU7.png} \end{center} \caption{Error vs.\ CPU time for a double-barrier option with the Kou process and varying numbers of monitoring dates $N$. The filter improves the FGM method for all $N$; FGM-F is the fastest method for an error of $10^{-8}$ with $N>252$.} \label{fig:N=all_U=1_15_L=0_85_Kou_CPU} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\textwidth]{N=all_U=1_15_L=0_85_NIG_CPU7.png} \end{center} \caption{Error vs.\ CPU time for a double-barrier option with the NIG process and varying numbers of monitoring dates $N$. The filter improves the FGM method for all $N$; FGM-F is the fastest method for an error of $10^{-8}$ with $N\geq504$.} \label{fig:N=all_U=1_15_L=0_85_NIG_CPU} \end{figure} \FloatBarrier \subsection{Polynomially decaying characteristic functions}\label{sec:Res_pol} We present results for the FL and FGM methods for a process with a polynomially decaying characteristic function, i.e.\ the variance gamma process. Figures \ref{fig:N=all_U=trunc_L=0_85_VG} and \ref{fig:N=all_U=1_15_L=0_85_VG} show the results of tests for single and double-barrier options where we have applied exponential filtering as described in Section \ref{sec:Back_num}. \begin{figure}[h] \begin{center} \includegraphics[width=\textwidth]{VG_FL_filtering_SB4.png} \end{center} \caption{Error vs.\ grid size $M$ for a single-barrier down-and-out option with the variance gamma process and varying numbers of monitoring dates $N$. The filter improves both the FGM and FL methods, with the FL-F method performing best at low numbers of dates.} \label{fig:N=all_U=trunc_L=0_85_VG} \end{figure} The performance for a low number of dates shows a good improvement with the addition of filtering for both the FGM and FL methods. 
This demonstrates that the performance of the sinc-based discrete Hilbert transform of polynomially decaying functions can be improved even when the polynomial decay is a true representation of the function shape and not simply an artefact of the fixed-point algorithm as was the case in Section \ref{sec:Res_exp}. For a higher number of dates, the error convergence vs.\ grid size for the FGM method is improved so that it is the same as that of the FL method with or without filtering. This is a significant improvement as the FGM method has the advantage over the FL method that its computation time beyond a small threshold is independent of the number of dates, unlike the linear increase of the FL method. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{VG_FL_filtering_DB3.png} \caption{Error vs.\ grid size $M$ for a double-barrier option with the variance gamma process and varying numbers of monitoring dates $N$. The filter improves the FGM method for all numbers of monitoring dates and the FL method for low numbers of dates.} \label{fig:N=all_U=1_15_L=0_85_VG} \end{center} \end{figure} This is demonstrated by the results shown in Figures \ref{fig:N=all_U=trunc_U_L=0_85_VG_CPU} and \ref{fig:N=all_U=1_15_L=0_85_VG_CPU}. The filtered methods show the best performance for all dates; filtered FL is the best performing method for low numbers of monitoring dates and filtered FGM is the best performing method for higher numbers of dates. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{N=all_U=trunc_U_L=0_85_VG_CPU8.png} \end{center} \caption{Error vs.\ CPU time for a single-barrier option with the variance gamma process and varying numbers of monitoring dates $N$.
The best performance of the new filtered methods, FL-F and FGM-F, either equals or exceeds the performance of the existing methods over all numbers of dates.} \label{fig:N=all_U=trunc_U_L=0_85_VG_CPU} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\textwidth]{N=all_U=1_15_L=0_85_VG_CPU6.png} \end{center} \caption{Error vs.\ CPU time for a double-barrier option with the variance gamma process and varying numbers of monitoring dates $N$. The best performance of the new filtered methods, FL-F and FGM-F, either equals or exceeds the performance of the existing methods over all numbers of dates.} \label{fig:N=all_U=1_15_L=0_85_VG_CPU} \end{figure} \FloatBarrier \subsection{Summary of results} Table \ref{tab:BestMeth} shows a summary of the best performing methods in terms of CPU time for different processes and types of options. \begin{table}[h] \begin{center} \begin{tabular}{r |c |c c c} \hline \hline & Single barrier &\multicolumn{3}{c}{Double barrier}\\ \hline Dates & VG & Kou & NIG & VG\\ \hline 4 & \color{green}FL-E & \color{red}FL& \color{red}FL & \color{green}FL-E\\ 52 & \color{green}FL-E & \color{red}FL & \color{red}FL &\color{green}FL-E\\ 104 & \color{green}FL-E, \color{green}FGM-E& \color{red}FL &\color{red}FL &\color{green}FL-E\\ 252 & \color{green}FL-E, FGM-E & \color{blue}FGM-E, FL &\color{blue}FGM-E, FL & \color{blue}FGM-E, FL-E, FL\\ 504 & \color{green}FGM-E$^*$ & \color{green}FGM-E& \color{green}FGM-E& \color{green}FGM-E$^*$\\ 1008 & \color{green}FGM-E$^*$ & \color{green}FGM-E& \color{green}FGM-E& \color{green}FGM-E$^*$\\ \hline \hline \end{tabular} \end{center} \caption{Quickest method for an error of $10^{-8}$. Due to the slower convergence of all methods with the variance gamma process, entries marked with an asterisk show the quickest method for an error of $10^{-5}$. Green: a filtered method provides the best performance. 
Blue: the performance of the filtered methods equals, but does not exceed, the best performance of an existing method. Red: the few cases where an existing method performs best.} \label{tab:BestMeth} \end{table} \section{Conclusions} In this article we showed that numerical methods for pricing derivatives based on the Hilbert transform computed with a sinc function expansion can be modified with the addition of spectral filters to improve their convergence. Furthermore, we expanded on the work by Stenger and by Feng and Linetsky, which showed how the shape of the function at the input of the Hilbert transform relates to the resultant error at its output. We showed that, due to the Gibbs phenomenon, an algorithm using successive Hilbert transforms will achieve only polynomial performance unless additional filtering is applied after the first Hilbert transform. Moreover, we demonstrated that simple spectral filters such as the exponential filter or the Planck taper are sufficient to improve performance so that exponential convergence can be achieved. In addition, we showed that the pricing schemes by Feng and Linetsky and Fusai et al., which have relatively poor performance with the variance gamma process, even for single-barrier options, can also be improved by spectral filters. This article directly concerns the pricing of barrier options, but the findings are relevant to any application which involves jump-diffusion processes in the presence of barriers and requires the solution of Wiener-Hopf or Fredholm equations. \begin{APPENDIX}{Parameters} Table \ref{tab:Parasetup} contains all the parameters used for the numerical experiments which produced the results presented in Section \ref{sec:results}.
\begin{table}[h] \begin{center} \begin{tabular}{llr} \hline\hline Description & Symbol & Value \\ \hline \noalign{\vskip 0.5mm} \multicolumn{3}{c}{Option parameters} \\ \hline Maturity & $T$& 1 year \\ Initial spot price &$S_0$ & 1\\ Strike &$K$ & 1.1\\ Upper barrier (down-and-out) & $U$ & $+\infty$ \\ Upper barrier (double-barrier) & $U$ & 1.15\\ Lower barrier & $L$ & 0.85\\ Risk-free rate &$r$ & 0.05\\ Dividend rate &$q$& 0.02\\ \hline \noalign{\vskip 1mm} \multicolumn{3}{c}{NIG process parameters, $\Psi(\xi,t)=e^{- \delta t\left(\sqrt{\alpha^2-(\beta+i\xi)^2}-\sqrt{\alpha^2-\beta^2}\right)}$}\\ \hline & $\alpha$ & 15\\ & $\beta$ & -5\\ & $\delta$ & 0.5\\ \hline \noalign{\vskip 1mm} \multicolumn{3}{c}{Kou process parameters, $\Psi(\xi,t)=e^{- t\left(\frac{\sigma^2\xi^2}{2}-\lambda\left(\frac{(1-p)\eta_2}{\eta_2+i\xi}+\frac{p\eta_1}{\eta_1-i\xi}-1\right)\right)}$}\\ \hline & $p$ & 0.3\\ & $\lambda$ & 3\\ & $\sigma$ & 0.1\\ & $\eta_1$ & 40\\ & $\eta_2$ & 12\\ \hline \noalign{\vskip 1mm} \multicolumn{3}{l}{Variance gamma parameters, $\Psi(\xi,t)=(1-i\nu\xi\theta+\nu\sigma^2\xi^2/2)^{- t/\nu}$}\\ \hline & $\theta$ & $\frac{1}{9}$\\ \noalign{\vskip 0.5mm} & $\sigma$ &$\frac{1}{3\sqrt{3}}$\\ \noalign{\vskip 0.5mm} & $\nu$ & 0.25 \\ \hline\hline \end{tabular} \end{center} \caption{Parameters for the numerical tests and processes used; $\Psi(\xi,t)$ is the characteristic function of the process that models the underlying asset.} \label{tab:Parasetup} \end{table} \end{APPENDIX} \section*{Acknowledgements} The support of the Economic and Social Research Council (ESRC) in funding the Systemic Risk Centre is gratefully acknowledged (Grant number ES/K002309/1). \bibliographystyle{informs2014}
\section{Introduction} \subsection{\bf Motivation of the work.} Since the 1970s, researchers in several fields have used social network analysis to investigate interpersonal social relationships, communication networks, scientific paper co-authorships and citations, patterns in protein interaction, and so on \cite{Goldenbergetal2009}. Nowadays, this issue is of particular relevance due to the presence of online networking communities such as Facebook and LinkedIn. An important aspect of social network analysis concerns the development of algorithms for estimating optimal parameters of a social network model, using data available from the network itself. This entails solving an optimization problem, such as maximum likelihood estimation. Unfortunately, applying algorithms such as (exact) gradient ascent to solve the optimization problem is often unfeasible, due to the high computational cost needed to evaluate the gradient exactly. For this reason, in the paper, a modified gradient ascent method, based on the so-called mean-field approach\footnote{Loosely speaking, mean-field theory investigates the behaviour of complex stochastic models composed of a large number of mutually interacting units, by reducing them to simpler models, in which the effect on any given unit of all the other units is approximated by a single averaged effect, and fluctuations around such an average effect are neglected. Mean-field approximations have been used in optimization (especially in combinatorial optimization \cite{orland85} and variational inference \cite{Xingetal2002}), e.g., in connection with optimization algorithms such as simulated annealing \cite{bilbroetal92}.}, is proposed for maximum likelihood estimation, for a particular social network model, known as the $p$-star model, focusing on its important case with three parameters. \subsection{\bf Preliminaries.}
The $p$-star model, also called exponential random graph model, is one of the best-known and most widely used statistical models for social networks \cite{Goldenbergetal2009,goodreau2007,lusher2012exponential,robins2007intro,snijders2011statistical,Snijdersetal2006,Handcock,Wainwright}. The model has been applied with success in applications related to fields such as communication, computer science, physics, psychology, and sociology \cite{ShuPal2010}. Compared with previous social network models, the main reason for its success is that the $p$-star model is able to represent interdependencies in a network. The term ``$p$-star'' was coined by Wasserman and Pattison \cite{WassermanPattison} in honor of the work \cite{HollandLeinhardt} by Holland and Leinhardt, which appears to be the first in the literature to have proposed a specific one-parameter instance of the model, called $p_1$ therein. A more general version of the model in \cite{HollandLeinhardt}, based on a Markovianity assumption, was proposed by Frank and Strauss in \cite{FrankStrauss1986}, and called Markov graph model therein\footnote{Such Markov $p$-star model can be extended to the case in which the Markovianity assumption does not hold \cite{PattisonRobins2002}. The class of $p$-star models includes both cases. Since the Markov case is more frequent in applications, in the paper we often omit the term ``Markov'' when referring to the Markov $p$-star model by Frank and Strauss.}. The sufficient statistics of this model (called $\mathbb{P}_{\boldsymbol{\theta}}$ in the following) are expressed in terms of subgraphs called $p$-stars and triangles. In the model, the parameters vector $\boldsymbol{\theta}$ contains the parameters associated with each among the subgraphs above, and provides information about the macroscopic properties of the social network.
In principle, the problem of estimating the parameters vector $\boldsymbol{\theta}$ of the $p$-star model $\mathbb{P}_{\boldsymbol{\theta}}$ is solved by applying iterative methods for the maximization of the log-likelihood function obtained from the real data. In doing this, one needs to use, in each iteration, exact values of the moments associated with the three subgraphs defining the model, namely: the mean numbers of edges, two-stars, and triangles. However, because of the high computational cost needed to compute the so-called partition function associated with the $p$-star model, an exact evaluation of such moments (and, as a consequence, the estimation problem itself) becomes intractable when the number of vertices $n$ is large. To circumvent this drawback, instead of working directly with the log-likelihood function, the log-pseudo-likelihood function \cite{Handcock} is also used in the literature. This function is defined in terms of vectors of differences of statistics $\Delta \bX^k_{ij}$, one for each sample $\bX^k$ and each edge $(i,j)$. Nevertheless, the computation of these vectors also has a high computational cost when the number of vertices $n$ of the network is large. The mean-field approach to estimate the moments of a $p$-star model was proposed originally by Park and Newman \cite{ParkNewman}, restricting to the case of a $2$-dimensional parameters vector, with parameters associated, respectively, with the edges and the triangles. After that work, Chatterjee and Diaconis \cite{Diaconis} solved the model asymptotically for the non-negative high-dimensional case, i.e., for a finite-dimensional parameters vector in which all the parameters (apart from the first one) are non-negative, and the number of vertices tends to $+\infty$. \subsection{\bf Contributions of the work.} In this paper, following the work \cite{ParkNewman} by Park and Newman, we derive a mean-field approximation to compute approximately the moments of a $3$-parameters $p$-star model.
In this model, the features are the numbers of edges, 2-stars, and triangles. Moreover, we apply the mean-field approximation to estimate the parameters vector of such a $p$-star model. Roughly speaking, denoting by $\mathbb{E}[E(\bX)]$, $\mathbb{E}[S_2(\bX)]$ and $\mathbb{E}[T(\bX)]$, respectively, the mean numbers of edges (i.e., 1-stars), 2-stars, and triangles in a network generated under a $p$-star model $\mathbb{P}_{\boldsymbol{\theta}}$, and by $p$, $q$ and $r$, respectively, the ``average probabilities'' that a given edge, $2$-star and triangle are present in the network, the mean-field approach consists in computing approximately \begin{eqnarray} \mathbb{E}[E(\bX)] &=& p \binom{n}{2}, \nonumber \\ \mathbb{E}[S_2(\bX)] &=& n\binom{n-1}{2} q, \nonumber \\ \mathbb{E}[T(\bX)] &=& \binom{n}{3} r, \nonumber \end{eqnarray} using suitable approximations of $p$, $q$, and $r$, where $\binom{n}{2}$, $n \binom{n-1}{2}$, and $\binom{n}{3}$ are the maximum possible numbers of edges, $2$-stars and triangles in a network with $n$ vertices. To find the mean-field approximations for $p$, $q$ and $r$, we derive appropriate equations linking $p$, $q$, and $r$, which we then solve numerically. The justification for using the mean-field approximation is that when the parameters vector $\boldsymbol{\theta}$ is in the so-called high-temperature phase \cite{Mixingtime}, the $p$-star model $\mathbb{P}_{\boldsymbol{\theta}}$ behaves like an independent Erd\H{o}s-R\'{e}nyi model $G(n, p^*)$ \cite{Bollobas}, for a given $p^*$. The proposed method is tested against maximum log-pseudo-likelihood estimation, confirming the computational advantage of the mean-field approximation, which requires only the solution of a nonlinear system of equations having practically constant computational cost per iteration (i.e., independent of the number of vertices $n$, which can be interpreted as a measure of the size of the problem).
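As a sanity check on the three moment formulas above, note that for an Erd\H{o}s-R\'{e}nyi model $G(n,p^*)$ with independent edges they hold exactly with $p=p^*$, $q=(p^*)^2$ and $r=(p^*)^3$. The brute-force sketch below verifies this by enumerating all $2^{6}$ graphs on a toy size of $n=4$ vertices (the value of $p^*$ is an illustrative choice):

```python
import numpy as np
from itertools import combinations
from math import comb

n, p = 4, 0.3
pairs = list(combinations(range(n), 2))        # the 6 potential edges
EE = ES2 = ET = 0.0
for mask in range(1 << len(pairs)):            # all 2^6 graphs on 4 vertices
    A = np.zeros((n, n), dtype=int)
    for bit, (i, j) in enumerate(pairs):
        if mask >> bit & 1:
            A[i, j] = A[j, i] = 1
    deg = A.sum(axis=1)
    E = int(A.sum()) // 2                      # edges
    S2 = int((deg * (deg - 1) // 2).sum())     # 2-stars
    T = int(round(np.trace(A @ A @ A) / 6))    # triangles
    w = p**E * (1 - p)**(len(pairs) - E)       # G(n, p) probability of this graph
    EE, ES2, ET = EE + w * E, ES2 + w * S2, ET + w * T
```

The accumulated expectations agree with $\binom{n}{2}p$, $n\binom{n-1}{2}p^2$ and $\binom{n}{3}p^3$ up to floating-point rounding; this is the exact Erd\H{o}s-R\'{e}nyi baseline, not the mean-field equations derived in Section \ref{sec:MF}.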
\subsection{\bf Organization of the work.} The paper is structured as follows. In Section \ref{pstar_model}, the $p$-star model $\mathbb{P}_{\boldsymbol{\theta}}$ for social networks is described, focusing on the case of a model with $3$ parameters. Sections \ref{sec:ML} and \ref{sec:MPL} deal, respectively, with maximum log-likelihood and maximum log-pseudo-likelihood estimation of the parameters vector in a $3$-parameters $p$-star model. In Section \ref{sec:MF}, the proposed mean-field approximation of some moments in the $3$-parameters $p$-star model is introduced, and applied to maximum log-likelihood estimation of the parameters vector. In Section \ref{sec:MH}, the Metropolis-Hastings sampler is described, for the generation of the samples used to define the log-likelihood and log-pseudo-likelihood functions. Section \ref{sec:examples} compares the proposed parameters estimation method with maximum log-pseudo-likelihood estimation, demonstrating its computational advantages on both a simulated and a real social network. Finally, Section \ref{sec:conclusions} concludes the paper with a discussion of our results and future research directions. Details about the implementation of Newton's method for the approximate solution of the system of nonlinear equations derived from the mean-field approximation are reported in the Appendix. \section{The $p$-star model $\mathbb{P}_{\boldsymbol{\theta}}$ for social networks}\label{pstar_model} In the present context, a social network of $n$ individuals is represented by a symmetric random matrix, in which each entry $X_{ij}$ is a random variable accounting for the type of relation between the individuals $i$ and $j$. In the simplest case, $X_{ij} \in \{0,1\}$ assumes only binary values (i.e., $\bX$ is an adjacency matrix), and $X_{ii}=0$ for each $i$.
Let $\bX:=(X_{ij})$ be an $n \times n$ symmetric zero-one matrix, representing a graph in which the presence or absence of an edge is a binary random variable $X_{ij}$ (self-loops are excluded). Under the so-called Markovianity assumption, i.e., assuming that $X_{ij}$ (with $i \neq j$) and $X_{kl}$ (with $k \neq l$) are conditionally independent given the rest of $\bX$ if and only if $\{i,j\} \cap \{k,l\}= \emptyset$, the probability distribution (Gibbs measure) over the set of all symmetric zero-one matrices with zero diagonal can be parametrized as follows \cite{FrankStrauss1986}: \begin{equation} \label{eq:gen} \mathbb{P}_{\boldsymbol{\theta}}(\bX=X):= \exp \left( \sum_{i,j,k} \theta_{ijk} \mathbf{1}_{T_{ijk}}(X)+ \sum^n_{k=1} \sum_{i_0, \dots , i_k} \theta_{i_0 \dots i_k} \mathbf{1}_{S_{i_0, \dots , i_k}}(X)- A(\boldsymbol{\theta}) \right), \end{equation} where $\mathbf{1}_{T_{ijk}}(X)$ and $\mathbf{1}_{S_{i_0, \dots , i_k}}(X)$ are indicator functions of triangles and $k$-stars in the network, and $A(\boldsymbol{\theta})$ is a normalizing factor, which makes the sum of the probabilities of all the possible realizations of the matrix be equal to 1. A triangle is defined as a set of three edges $T_{ijk}:=\{ (i,j), (j,k), (k,i) \}$ with $i$, $j$, $k$ pairwise distinct, and a $k$-star is a set of edges $S_{i_0, \dots , i_k}:=\{ (i_0, i_1), (i_0, i_2), \dots , (i_0, i_k) \}$ in which the vertices $i_0, i_1, \dots, i_k$ are pairwise distinct, and every edge contains the central vertex $i_0$. Notice that the $1$-stars are the edges. The proof of Equation \eqref{eq:gen} above is a consequence of the well-known Hammersley-Clifford theorem \cite{Besag,HammCliff,Lauritzen}. The model (\ref{eq:gen}) is referred to in the paper as the (Markov) $p$-star model (or $\mathbb{P}_{\boldsymbol{\theta}}$).
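The statistics used in the remainder of the paper (edges, $2$-stars and triangles) can be counted directly from the adjacency matrix. A minimal sketch, with numpy assumed:

```python
import numpy as np

def count_features(A):
    """Numbers of edges (1-stars), 2-stars and triangles in an undirected
    simple graph given by a symmetric 0/1 adjacency matrix A."""
    A = np.asarray(A)
    deg = A.sum(axis=1)
    E = int(A.sum()) // 2                    # each edge appears twice in A
    S2 = int((deg * (deg - 1) // 2).sum())   # choose 2 neighbours of each centre vertex
    T = int(round(np.trace(A @ A @ A) / 6))  # each triangle yields 6 closed 3-walks
    return E, S2, T
```

For the complete graph on 4 vertices this returns $(6, 12, 4)$, matching $\binom{4}{2}$, $4\binom{3}{2}$ and $\binom{4}{3}$.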
Under an additional homogeneity assumption, and truncating the expansion above to order two, we can rewrite Equation \eqref{eq:gen} as the following 3-parameters $p$-star model: \begin{equation}\label{eq:3params} \mathbb{P}_{\boldsymbol{\theta}}(\bX=X):=\exp \left( \theta_1 E(X)+ \theta_2 S_2(X)+ \theta_3 T(X)- A(\boldsymbol{\theta}) \right), \end{equation} where $E(X)$, $S_2(X)$, $T(X)$ are, respectively, the numbers of edges, 2-stars, and triangles in the network $X$, and $\boldsymbol{\theta}:=(\theta_1, \theta_2, \theta_3)$ is the parameters vector. Qualitative properties of the network are associated with the 3 parameters above as follows: \begin{enumerate} \item the edges parameter $\theta_1$ can be regarded as a measure of the density of the network; \item the $2$-stars parameter $\theta_2$ is a measure of the tendency of the network to clustering; \item the triangles parameter $\theta_3$ is a measure of transitivity. \end{enumerate} Equivalently, the $3$-parameters $p$-star model can be written as $\mathbb{P}_{\boldsymbol{\theta}}(\bX=X)=\exp \left( -H(X) - A(\boldsymbol{\theta}) \right)$, where $H(X)$ is the Hamiltonian, defined as follows: \begin{align}\label{eq:hamiltonian} H(\bX)&:=\theta \sum_{i <j} X_{ij}+ \sigma \sum_{i <j} \sum_{k \neq i,j} X_{ik} X_{jk}- \alpha \sum_{i <j <k} X_{ij} X_{jk} X_{ki} \nonumber \\ &=\theta E(\bX)+ \sigma S_2(\bX)-\alpha T(\bX), \end{align} where $A(\boldsymbol{\theta}):=\log \left( \sum_{X} e^{-H(X)} \right)$ is the normalizing factor (called log-partition function, since $Z:=\exp(A(\boldsymbol{\theta}))$ is the partition function), and the two parametrizations are related by $\boldsymbol{\theta}:=(\theta_1, \theta_2, \theta_3)=(-\theta, -\sigma, \alpha)$. \section{Maximum log-likelihood estimation}\label{sec:ML} We now address the problem of estimating $\boldsymbol{\theta}$ on the basis of observed data. Consider a $3$-parameters $p$-star model $\mathbb{P}_{\boldsymbol{\theta}}$ as in Equation \eqref{eq:3params}.
Let $\{ \bX^1, \dots , \bX^N \}$ be a collection of independent and identically distributed (i.i.d.) networks sampled from $\mathbb{P}_{\boldsymbol{\theta}}$. The aim is to maximize the log-likelihood function, defined as follows: \[ L(\boldsymbol{\theta}; \bX^1, \dots , \bX^N ):= \frac{1}{N} \sum^N_{i=1} \log{\mathbb{P}_{\boldsymbol{\theta}}}( \bX^i) .\] Let $\hat{\boldsymbol{\mu}}:=(\hat{\mu}_1, \hat{\mu}_2, \hat{\mu}_3)$ be the vector of empirical moments, i.e., the empirical mean numbers of edges, 2-stars, and triangles obtained from the data: \[ \hat{\mu}_1:=\frac{1}{N} \sum^N_{i=1} E(\bX^i) , \quad \hat{\mu}_2:=\frac{1}{N} \sum^N_{i=1} S_2(\bX^i) , \quad \hat{\mu}_3:=\frac{1}{N} \sum^N_{i=1} T(\bX^i), \] where $E(\bX^i)$, $S_2(\bX^i)$, $T(\bX^i)$ are the numbers of edges, 2-stars, and triangles in the sample $\bX^i$. The log-likelihood can also be written as: \begin{equation}\label{eq:male} L(\boldsymbol{\theta}; \bX^1, \dots , \bX^N )= \boldsymbol{\theta} \cdot \hat{ \boldsymbol{\mu} } - A(\boldsymbol{\theta}). \end{equation} The Maximum Log-Likelihood Estimate (MLLE) is the vector of parameters $ \boldsymbol{\theta}^*_{MLLE}$ maximizing the objective function \eqref{eq:male}. It can be shown \cite{Wainwright} that \eqref{eq:male}, as a function of $\boldsymbol{\theta} $, is a concave function, and that its maximum exists.
The gradient of $L(\boldsymbol{\theta}; \bX^1, \dots , \bX^N )$ is given by: \begin{equation}\nonumber \nabla_{\boldsymbol{\theta}} L(\boldsymbol{\theta})= \hat{ \boldsymbol{\mu} } - \boldsymbol{\mu}\,, \end{equation} where $\boldsymbol{\mu}:=(\mu_1, \mu_2, \mu_3)$ is the vector whose components are the expected numbers of edges, 2-stars, and triangles under the $3$-parameters $p$-star model $\mathbb{P}_{\boldsymbol{\theta}}$, which are defined as follows: \begin{equation}\label{eq:expectations} \mu_1:=\mathbb{E}[E(\bX)],\quad \mu_2:=\mathbb{E}[S_2(\bX)], \quad \mu_3:= \mathbb{E}[T(\bX)]. \end{equation} In the following, to find the maximum of the log-likelihood function, the gradient ascent method \cite{Bertsekas} is applied. The iterative algorithm reads as follows: \begin{algorithm}[H] \caption{ \textbf{ Maximum Log-Likelihood Estimation (MLLE) via gradient ascent} } \label{alg1} \begin{algorithmic} \State \text{Fix a stepsize $\gamma >0$, a number of iterations $N_{\rm it}$, and an initialization $\boldsymbol{\theta}^{(0)}$ for $\boldsymbol{\theta}$.} \State \text{Evaluate the empirical moments vector $\hat{\boldsymbol{\mu}}$.} \For {$k=0,1,\dots,N_{\rm it}-1$} \State Evaluate the moments vector: $$ \boldsymbol{\mu}^{(k)}=\boldsymbol{\mu}(\boldsymbol{\theta}^{(k)}). $$ \State \text{Update the parameters vector as}: \begin{align*} \boldsymbol{\theta}^{(k+1)}&=\boldsymbol{\theta}^{(k)}+ \gamma\left(\hat{\boldsymbol{\mu} }- \boldsymbol{\mu} ^{(k)} \right). \end{align*} \EndFor \end{algorithmic} \end{algorithm} Of course, variations of the algorithm above can also be used in principle, including, e.g., the case of a variable stepsize, the insertion of a different termination criterion, and the application of coordinate maximization and coordinate gradient ascent \cite{BecTet2013,LiuWright2015,Nesterov2012,Wright2015}.
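When the network is small enough that all $2^{\binom{n}{2}}$ configurations can be enumerated, the moments $\boldsymbol{\mu}(\boldsymbol{\theta}^{(k)})$ can be computed exactly, and the gradient ascent scheme can be tested end to end. In the sketch below ($n=4$; the step size, iteration budget and ``true'' parameters vector are illustrative assumptions), the empirical moments are set to the exact moments at a known parameters vector, which gradient ascent then recovers; such enumeration is, of course, feasible only for very small $n$:

```python
import numpy as np
from itertools import combinations

# Enumerate the feature vectors (E, S2, T) of all 2^6 graphs on n = 4 vertices.
n = 4
pairs = list(combinations(range(n), 2))
feats = []
for mask in range(1 << len(pairs)):
    A = np.zeros((n, n), dtype=int)
    for bit, (i, j) in enumerate(pairs):
        if mask >> bit & 1:
            A[i, j] = A[j, i] = 1
    deg = A.sum(axis=1)
    feats.append([int(A.sum()) // 2,
                  int((deg * (deg - 1) // 2).sum()),
                  int(round(np.trace(A @ A @ A) / 6))])
F = np.array(feats, dtype=float)               # shape (64, 3)

def moments(theta):
    """Exact moments vector mu(theta) = E_theta[(E, S2, T)] by enumeration."""
    w = np.exp(F @ theta)
    w /= w.sum()
    return F.T @ w

# Synthetic check: use the exact moments at a known theta* as "empirical" moments
# and verify that gradient ascent recovers theta*.
theta_true = np.array([-0.5, 0.1, 0.3])
mu_hat = moments(theta_true)

theta = np.zeros(3)
gamma = 0.01                                   # illustrative step size
for _ in range(50000):
    theta = theta + gamma * (mu_hat - moments(theta))
```

At $\boldsymbol{\theta}=\mathbf{0}$ the model is uniform over graphs, i.e., $G(4,1/2)$, so `moments(np.zeros(3))` equals $(3, 3, 0.5)$, which provides a quick correctness check of the enumeration.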
Even if the algorithm described above looks very appealing to solve the maximum log-likelihood estimation problem, there is a serious issue related to the computation of the moments vector $\boldsymbol{\mu}^{(k)}$ for each parameters vector $\boldsymbol{\theta}^{(k)}$ generated by the algorithm. Indeed, the exact evaluation of the moments rapidly becomes intractable as the number of vertices $n$ grows, due to the computational difficulties related, respectively, to the need of considering all the possible realizations of the matrix $\bX$ when computing the expected values in (\ref{eq:expectations}), and to the need of evaluating the log-partition function $A(\boldsymbol{\theta}^{(k)})$ (which is also defined in terms of all such possible realizations). In the literature, there exist iterative methods for computing the exact moments, such as the elimination algorithm and the junction tree algorithm \cite{Wainwright}, but they work well only for small $n$. In this paper, we propose a different approach based on the so-called mean-field approximation of the moments vector $\boldsymbol{\mu}$. Before introducing such an approach, in the next section we describe another strategy for estimating the parameters vector, which is based on the maximization of the log-pseudo-likelihood function \cite{Handcock}. Then, we compare the two approaches. \section{Maximum log-pseudo-likelihood estimation}\label{sec:MPL} In this section, the problem of maximizing the log-pseudo-likelihood function is addressed. This method is widely used in parameters estimation for general exponential random graph models, because iterative methods for maximum log-pseudo-likelihood estimation are typically computationally more tractable when compared to those for maximum log-likelihood estimation.
Under appropriate assumptions, it is known (see, e.g., \cite{Koller}) that, when the number of samples $N$ tends to $+\infty$, the Maximum Log-Pseudo-Likelihood Estimate (MLPLE) of the parameters vector converges to its maximum log-likelihood estimate. For a $p$-star model $\mathbb{P}_{\boldsymbol{\theta}}$ and an edge $(i,j)$, we set $p_{ij}:=\mathbb{P}_{\boldsymbol{\theta}}(X_{ij}=1 \vert \bX_{-ij})$, where $\bX_{-ij}$ is the collection of all the remaining edges. Let $\boldsymbol{\theta}:=(\theta_1, \theta_2, \theta_3)^T$ be the parameters vector, and $E(X), S_2(X), T(X)$ be the corresponding numbers of edges ($1$-stars), $2$-stars, and triangles for a network $X$. Then, for each edge $(i,j)$, the vector $\Delta \bX_{ij}$ of difference statistics is defined as follows: $$ \Delta \bX_{ij}:=( E(X^+_{ij})-E(X^-_{ij}), S_2(X^+_{ij})-S_2(X^-_{ij}), T(X^+_{ij})-T(X^-_{ij}) )^T, $$ where $X^{+}_{ij}$ is the network associated with the matrix with $X_{ij}=1$ and all the remaining entries equal to the corresponding entries of $X$, and $X^{-}_{ij}$ is the network associated with the matrix with $X_{ij}=0$ and all the remaining entries equal to the corresponding entries of $X$. The log-pseudo-likelihood function associated with the model $\mathbb{P}_{\boldsymbol{\theta}}$ and the samples $\bX^1, \dots , \bX^N$ is defined as follows: \begin{align*} PL(\boldsymbol{\theta}; \bX^1, \dots , \bX^N)&:= \dfrac{1}{N} \sum^{N}_{k=1} \sum_{i,j} \log \mathbb{P}_{\boldsymbol{\theta}}(X^k_{ij} \vert \bX^k_{-ij}) \\ &=\dfrac{1}{N} \sum^{N}_{k=1} \sum_{i,j} \left[ X^{k}_{ij} \log(p^k_{ij})+ (1- X^{k}_{ij}) \log(1-p^k_{ij}) \right]\\ &=\dfrac{1}{N} \sum^{N}_{k=1} \sum_{i,j} \left[ X^{k}_{ij} \boldsymbol{\theta} \cdot \Delta \bX^k_{ij} - \log(1+ \exp( \boldsymbol{\theta} \cdot \Delta \bX^k_{ij}) ) \right].
\end{align*} One can notice that the expression above is equivalent to the log-likelihood of a logistic regression model, in which each entry $X^1_{ij}, \dots, X^N_{ij}$ of the adjacency matrices is treated as an independent observation, with the corresponding row of the design matrix given by $\Delta \bX^1_{ij}, \dots, \Delta \bX^N_{ij}$. Since the function ${PL}(\boldsymbol{\theta}; \bX^1, \dots , \bX^N)$ is concave in the parameters vector $\boldsymbol{\theta}$, we apply the gradient ascent method to approximate the optimal $\boldsymbol{\theta}^*_{MLPLE}$, where the gradient of the objective function has the following expression: \begin{equation*} \nabla_{\boldsymbol{\theta}} PL(\boldsymbol{\theta})= \dfrac{1}{N} \sum^{N}_{k=1} \sum_{i,j} \left( X^{k}_{ij} - \dfrac{ \exp( \boldsymbol{\theta} \cdot \Delta \bX^k_{ij} ) }{1+ \exp( \boldsymbol{\theta} \cdot \Delta \bX^k_{ij} ) } \right) \Delta \bX^k_{ij}. \end{equation*} The gradient ascent method, applied to the maximization of the log-pseudo-likelihood function, reads as follows: \begin{algorithm}[H] \caption{ \textbf{ Maximum Log-Pseudo-Likelihood Estimation (MLPLE) via gradient ascent} } \label{alg2} \begin{algorithmic} \State \text{Fix a stepsize $\gamma >0$, a number of iterations $N_{\rm it}$, and an initialization $\boldsymbol{\theta}^{(0)}$ for $\boldsymbol{\theta}$.} \State \text{Evaluate the empirical moments vector $\hat{\boldsymbol{\mu}}$.} \For {$k=0,1,\dots,N_{\rm it}-1$} \State \text{Update the parameters vector as}: \begin{align*} \boldsymbol{\theta}^{(k+1)}&=\boldsymbol{\theta}^{(k)}+ \gamma \nabla_{\boldsymbol{\theta}} PL(\boldsymbol{\theta}^{(k)}). \end{align*} \EndFor \end{algorithmic} \end{algorithm} \section{Mean-field approximation of the moments}\label{sec:MF} In this section, we derive explicit formulas for the approximate computation of the moments of the features associated with a 3-parameters $p$-star model, using the mean-field approach.
The starting point of our analysis is the paper \cite{ParkNewman}, in which the problem was addressed for a 2-parameters $p$-star model. Consider all the terms in the Hamiltonian \eqref{eq:hamiltonian} involving the edge $X_{ij}$, for $i \neq j$: \begin{align*} \partial H(X_{ij}=x):= x \left[\theta+ \sigma \sum_{l \neq i,j} X_{il}+ \sigma \sum_{l \neq i,j} X_{jl}- \alpha \sum_{l \neq i,j} X_{jl} X_{li} \right]. \end{align*} Then, the edge probability $p:= \mathbb{E}[X_{ij}]$ satisfies: \begin{align*} \mathbb{E} [X_{ij}] &=\mathbb{E}\left[ \mathbb{P}(X_{ij}=1 \vert \bX_{-ij}) \right] =\mathbb{E}\left[ \dfrac{e^{-\partial H(X_{ij}=1)}}{e^{-\partial H(X_{ij}=1)}+e^{-\partial H(X_{ij}=0)}} \right]\\ &=\mathbb{E} \left[ \dfrac{1}{e^{\left( \theta +\sigma \sum_{l \neq i,j} X_{il}+\sigma \sum_{l \neq i,j} X_{jl} -\alpha \sum_{l \neq i,j}X_{jl} X_{li} \right) }+1} \right]. \end{align*} Let $q$ be the $2$-star probability under the $3$-parameters $p$-star model. Then, approximating in the previous expression all the terms of the form $X_{jl}X_{li}$ ($j \neq l \neq i$) with $q$, we obtain the following approximate expression for $p$: \begin{equation}\label{eq:p} p \approx \dfrac{1}{e^{\theta -\alpha(n-2)q+2\sigma (n-2)p}+1}. \end{equation} Now, let \begin{align*} \partial H(X_{ij}=x,X_{jk}=y):=&\theta x+ \theta y -\alpha x \sum_{l \neq i,j,k} X_{jl} X_{li} -\alpha y \sum_{l \neq i,j,k} X_{lj} X_{lk} -\alpha xy X_{ki} \\ &+\sigma x \sum_{l \neq i,j} X_{il} +\sigma y \sum_{l \neq j,k} X_{kl} +\sigma (x+ y) \sum_{l \neq i, j,k} X_{jl} +\sigma xy \end{align*} be the collection of all the terms in the Hamiltonian involving $X_{ij}$ and $X_{jk}$ ($i \neq j \neq k$).
Then, the $2$-star probability $q= \mathbb{E} [X_{ij}X_{jk}]$ is expressed as follows: \begin{align*} q &= \mathbb{E}\left[ \mathbb{P}(X_{ij}=1, X_{jk}=1 \vert \bX_{-\{ij,jk\}}) \right]\\ &=\mathbb{E}\left[ \dfrac{e^{-\partial H(X_{ij}=1, X_{jk}=1)}}{e^{-\partial H(X_{ij}=1, X_{jk}=1)} +e^{-\partial H(X_{ij}=0, X_{jk}=1)} +e^{-\partial H(X_{ij}=1, X_{jk}=0)} +e^{-\partial H(X_{ij}=0, X_{jk}=0)}} \right]\\ &=\mathbb{E} \left[ \dfrac{e^{(\alpha X_{ki}-\sigma)}}{D} \right], \end{align*} where $\bX_{-\{ij,jk\}}$ denotes the collection of all the edges other than $(i,j)$ and $(j,k)$, and \begin{align*} D:= &\left( e^{(\theta-\alpha \sum_{l \neq i,j,k} X_{jl} X_{li}+ \sigma \sum_{l \neq i,j} X_{il}+\sigma \sum_{l \neq i, j,k} X_{jl})}+1 \right) \\ & \times \left( e^{(\theta-\alpha \sum_{l \neq i,j,k} X_{lj} X_{lk}+ \sigma \sum_{l \neq j,k} X_{kl}+\sigma \sum_{l \neq i, j,k} X_{jl})} +1 \right) +(e^{(\alpha X_{ki}-\sigma)}-1). \end{align*} Now, because $e^{\alpha X_{ki}}=1 + (e^{\alpha} -1)X_{ki}$, passing to the mean-field approximation, we obtain the following approximate expression for $q$: \begin{equation}\label{eq:q} q \approx \dfrac{e^{- \sigma}(1+(e^{\alpha} -1)p)}{\left( e^{\theta-\alpha (n-3)q +\sigma (2n-5) p}+1 \right)^2+ e^{- \sigma}(1+(e^{\alpha} - 1)p)-1 }. \end{equation} Finally, for $i \neq j \neq k$, consider \begin{align*} &\partial H(X_{ij}=x,X_{jk}=y, X_{ki}=z)\nonumber \\ :=& -\alpha x \sum_{l \neq i,j,k} X_{il} X_{lj} -\alpha y \sum_{l \neq i,j,k} X_{jl} X_{lk} -\alpha z \sum_{l \neq i,j,k} X_{kl} X_{li} -\alpha xyz \\ & + \sigma (x+z) \sum_{l \neq i,j,k} X_{il} +\sigma (x+y) \sum_{l \neq i,j,k} X_{jl} +\sigma (y+z) \sum_{l \neq i,j,k} X_{kl} \\ & +\sigma xy +\sigma xz +\sigma yz+ \theta x+ \theta y + \theta z.
\end{align*} Then, the triangle probability $r=\mathbb{E}[X_{ij} X_{jk} X_{ki}]$ is expressed as follows: \begin{equation}\nonumber \mathbb{E}[X_{ij} X_{jk} X_{ki}]=\mathbb{E}\left[ \dfrac{e^{\alpha}}{\Delta+(e^{\alpha}-1)} \right], \end{equation} where: \begin{align*} \Delta:=& \left( e^{(\theta -\alpha \sum_{l \neq i,j,k } X_{kl} X_{li}+\sigma \sum_{l \neq i,j,k} X_{il}+ \sigma \sum_{l \neq i,j,k} X_{kl} + \sigma)} +1\right) \\ & \times \left( e^{(\theta -\alpha \sum_{l \neq i,j,k } X_{jl} X_{lk}+\sigma \sum_{l \neq i,j,k} X_{jl}+ \sigma \sum_{l \neq i,j,k} X_{kl} + \sigma)} +1\right) \\ & \times \left( e^{(\theta -\alpha \sum_{l \neq i,j,k } X_{jl} X_{li}+\sigma \sum_{l \neq i,j,k} X_{il}+ \sigma \sum_{l \neq i,j,k} X_{jl} + \sigma)} +1\right). \end{align*} Using again the mean-field approximation, this leads to the following approximate expression for $r$: \begin{equation}\label{eq:r} r \approx \dfrac{e^{\alpha}}{\left( e^{\theta - \alpha q(n-3)+2 \sigma (n-3)p+\sigma} +1\right)^3+ (e^{\alpha}-1)}. \end{equation} When replacing the approximation sign with the equality sign, Equations \eqref{eq:p} and \eqref{eq:q} form a nonlinear system of equations in the unknowns $p,q$, which we solve numerically using Newton's method \cite{bonnansetal2006} (details are reported in the Appendix). Once the values of $p$ and $q$ are determined, they are used in \eqref{eq:r} to find $r$. Then, the corresponding average numbers of $1$-stars, $2$-stars, and triangles are expressed as follows: \begin{align*} \mu_{1} \approx \binom{n}{2} p , \quad \mu_{2} \approx n \binom{n-1}{2} q, \quad \mu_3 \approx \binom{n}{3} r. \end{align*} The expressions above are approximations, since $p$, $q$ and $r$ have been computed by solving the nonlinear system derived from the mean-field approximation. For $\sigma=0$, we obtain the same result as in \cite{ParkNewman}.
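For illustration, the mean-field system \eqref{eq:p}--\eqref{eq:r} can also be solved by a simple damped fixed-point iteration instead of the Newton scheme detailed in the Appendix. The following is a minimal Python sketch under the assumption that the parameters are in a regime where the iteration converges (the high-temperature phase discussed next); \texttt{theta}, \texttt{sigma}, \texttt{alpha} are the quantities $\theta$, $\sigma$, $\alpha$ appearing in the Hamiltonian.

```python
from math import comb
import numpy as np

def mean_field_moments(theta, sigma, alpha, n, n_it=5000, damping=0.5):
    """Damped fixed-point iteration for the mean-field equations in (p, q),
    followed by the evaluation of r and of the approximate moments."""
    p, q = 0.5, 0.25
    for _ in range(n_it):
        p_new = 1.0 / (np.exp(theta - alpha*(n-2)*q + 2*sigma*(n-2)*p) + 1.0)
        num = np.exp(-sigma) * (1.0 + (np.exp(alpha) - 1.0) * p)
        q_new = num / ((np.exp(theta - alpha*(n-3)*q + sigma*(2*n-5)*p) + 1.0)**2
                       + num - 1.0)
        p = (1.0 - damping)*p + damping*p_new   # damped updates for stability
        q = (1.0 - damping)*q + damping*q_new
    r = np.exp(alpha) / ((np.exp(theta - alpha*(n-3)*q + 2*sigma*(n-3)*p + sigma)
                          + 1.0)**3 + (np.exp(alpha) - 1.0))
    # approximate expected numbers of edges, 2-stars, and triangles
    return comb(n, 2)*p, n*comb(n-1, 2)*q, comb(n, 3)*r, (p, q, r)
```

As a sanity check, for $\sigma=\alpha=0$ the edges become independent and the sketch reproduces $p=1/(e^{\theta}+1)$, $q=p^2$ and $r=p^3$.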
It is worth noticing that the mean-field approximation works well if the number of vertices $n$ is large, and the components $\theta_1=-\theta, \theta_2=-\sigma, \theta_3=\alpha$ of the parameters vector $\boldsymbol{\theta}$ are in a suitable range (called the high-temperature phase), as explained in detail in the next paragraphs. Concluding, the modified gradient ascent method for the maximization of the log-likelihood function using the mean-field approximation for the computation of the moments reads as follows: \begin{algorithm}[H] \caption{ \textbf{Maximum Log-Likelihood Estimation (MLLE) via modified gradient ascent based on the mean-field approximation of the moments} } \label{alg3} \begin{algorithmic} \State \text{Fix a stepsize $\gamma >0$, a number of iterations $N_{\rm it}$, and an initialization $\boldsymbol{\theta}^{(0)}$ for $\boldsymbol{\theta}$.} \State \text{Evaluate the empirical moments vector $\hat{\boldsymbol{\mu}}$.} \For {$k=0,1,\dots,N_{\rm it}-1$} \State Evaluate the mean-field approximation of the moments vector: $ \boldsymbol{\mu}^{(k)}_{\rm MF}. $ \State \text{Update the parameters vector as}: \begin{align*} \boldsymbol{\theta}^{(k+1)}&=\boldsymbol{\theta}^{(k)}+ \gamma\left(\hat{\boldsymbol{\mu} }- \boldsymbol{\mu} ^{(k)}_{\rm MF} \right). \end{align*} \EndFor \end{algorithmic} \end{algorithm} When is one allowed to use the mean-field approximation? In \cite{Mixingtime}, the authors make a distinction between a high- and a low-temperature phase for a $p$-star model with parameters vector $\boldsymbol{\theta}$. Considering a $3$-parameters $p$-star model of the form \eqref{eq:3params}, following \cite{Mixingtime}, we define: \begin{equation}\nonumber \varphi_{\boldsymbol{\theta}}(p):=\dfrac{1}{1+\exp\left( \Psi_{\boldsymbol{\theta}}(p) \right)}, \end{equation} where $\Psi_{\boldsymbol{\theta}}(p):=\theta+2 \sigma(n-2)p -\alpha(n-2)p^2$. \begin{itemize} \item {\bf High-temperature phase}.
We say that $\boldsymbol{\theta}$ is in the high-temperature phase if the fixed-point equation $\varphi_{\boldsymbol{\theta}}(p)=p$ has a unique solution $p^* \in (0,1)$, which satisfies $\varphi'_{\boldsymbol{\theta}}(p^*) <1$. \item {\bf Low-temperature phase}. We say that $\boldsymbol{\theta}$ is in the low-temperature phase if $\varphi_{\boldsymbol{\theta}}(p)=p$ has at least two fixed points $p^* \in (0,1)$ which satisfy $\varphi'_{\boldsymbol{\theta}}(p^*) <1$. \end{itemize} One can notice that the only difference between the expression $\varphi_{\boldsymbol{\theta}}(p)$ and the expression on the right-hand side of Equation \eqref{eq:p} is the presence of the term $p^2$ instead of $q$. This is not surprising because, if $p$ is the average edge probability in a $p$-star model, then, making a rough estimate, the average probabilities of having a 2-star and a triangle ($q$ and $r$, respectively) are approximately $q \approx p^2$ and $r \approx p^3$ \cite{Diaconis}. In \cite{Mixingtime}, it is shown that, when $n$ is large and for $\boldsymbol{\theta}$ in the high-temperature phase, the $p$-star model is not appreciably different from the classical Erd\H{o}s-R\'{e}nyi random graph model. Indeed, in such a case, most of the Gibbs measure of the model is concentrated on configurations which are essentially indistinguishable from the ones obtained through an Erd\H{o}s-R\'{e}nyi $G(n, p^* )$ random graph model, for a given $p^*$. In this model, the edges are chosen independently. This result justifies the use of the mean-field approximation \cite{KapWie2001} to compute the moments of the statistics defining the model, namely the expected numbers of edges, $2$-stars, and triangles, for a $p$-star model with parameters vector $\boldsymbol{\theta}$ in the high-temperature phase and with large $n$. Instead, in the low-temperature phase, the mean-field approximation does not work \cite{Mixingtime}.
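Whether a given parameters vector lies in the high- or in the low-temperature phase can be checked numerically by locating the fixed points of $\varphi_{\boldsymbol{\theta}}$. A minimal sketch (grid scan for sign changes of $\varphi_{\boldsymbol{\theta}}(p)-p$, refined by bisection; the grid size is an assumption of the sketch):

```python
import numpy as np

def fixed_points(theta, sigma, alpha, n, grid=100001):
    """Return the solutions of phi(p) = p on [0, 1], where
    phi(p) = 1/(1 + exp(theta + 2*sigma*(n-2)*p - alpha*(n-2)*p**2)).
    A single fixed point (with phi'(p*) < 1) indicates the
    high-temperature phase."""
    def g(p):
        return 1.0/(1.0 + np.exp(theta + 2*sigma*(n-2)*p
                                 - alpha*(n-2)*p**2)) - p
    p = np.linspace(0.0, 1.0, grid)
    vals = g(p)
    roots = []
    for i in np.flatnonzero(vals[:-1] * vals[1:] < 0):   # sign changes
        a, b = p[i], p[i+1]
        for _ in range(60):                              # bisection refinement
            m = 0.5*(a + b)
            if g(a)*g(m) <= 0:
                b = m
            else:
                a = m
        roots.append(0.5*(a + b))
    return roots
```

For $\sigma=\alpha=0$, $\varphi_{\boldsymbol{\theta}}$ is constant and the unique fixed point is $p^*=1/(1+e^{\theta})$, with $\varphi'_{\boldsymbol{\theta}}(p^*)=0<1$: the high-temperature phase.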
This is also justified by the fact that, when the parameters vector is in the low-temperature phase, the matrix in \eqref{eq:Newton} in the Appendix is typically close to being singular, as observed in some preliminary numerical tests. Figure \ref{fig.AR1} refers to two parameters vectors, in the high- and in the low-temperature phase, respectively. \begin{figure}[H] \centering \subfigure[High-temperature phase]{\includegraphics[width=.48\textwidth]{T_high.png}} \qquad \subfigure[Low-temperature phase]{\includegraphics[width=.48\textwidth]{T_low.png}} \caption{$(a)$ High-temperature phase, obtained by setting $\theta=-1.6, \sigma=-0.2/n, \alpha=2/n$ with $n=18$; $(b)$ Low-temperature phase, obtained by setting $\theta=-0.1, \sigma=-0.23, \alpha=0.97$ with $n=18$.} \label{fig.AR1} \end{figure} In the 3-parameters $p$-star model considered so far, the parameters $\theta_1$, $\theta_2$, and $\theta_3$ are responsible for the formation, respectively, of edges, 2-stars, and triangles. If the transitivity parameter $\theta_3$ is too large with respect to the other two parameters, then the model is likely to be in the low-temperature phase, as evidenced by preliminary numerical tests. Hence, a good rule of thumb for the applicability of the method at hand (i.e., for having the parameters vector in the high-temperature phase) is that $\theta_3$ is not too large with respect to the other two parameters. This suggests using the proposed method for estimating networks having a low number of triangles. \section{The Metropolis-Hastings sampler}\label{sec:MH} The Metropolis-Hastings algorithm is often used to obtain i.i.d. samples of graphs drawn from a $p$-star model $\mathbb{P}_{\boldsymbol{\theta}}$ \cite{Snijders2002}. In our context, this algorithm is useful to test the algorithms of the previous sections by a) fixing a parameters vector for the $p$-star model, b) generating $N$ i.i.d.
samples according to that choice of the parameters vector, then c) estimating the parameters vector itself using each of the algorithms, and d) assessing the quality of each estimate (see the next Subsection \ref{subsec:example} for such a test). For a rigorous and exhaustive analysis of the properties of the Metropolis-Hastings algorithm when applied to exponential random graph models such as the Markov $p$-star model, we refer to \cite{Mixingtime}. The algorithm consists of generating a Markov chain $( \bX^t )_{t \in \mathbb{N}}$ having stationary probability distribution equal to $\mathbb{P}_{\boldsymbol{\theta}}$. Let $\tau_{\rm mix}$ be the mixing time of the Markov chain. For $t \geq \tau_{\rm mix}$, the Markov chain can be assumed to be stationary, making the procedure able to produce $N$ samples $\bX^1, \dots , \bX^N$ of networks, which are essentially drawn from $\mathbb{P}_{\boldsymbol{\theta}}$, as described briefly in the following. Let $\boldsymbol{\theta}$ be a given vector of parameters for the $p$-star model $\mathbb{P}_{\boldsymbol{\theta}}$, $t_{ \rm burn} \geq \tau_{\rm mix}$ the burn-in time of the algorithm, $N$ the number of samples generated by the algorithm, and $m$ a given integer. The algorithm proceeds as follows: \begin{enumerate} \item A random graph $\bX^0$ is generated to initialize the algorithm. \item At each time $t \in \{0, 1, \dots , t_{\text{burn}}+ mN -1\}$: \begin{itemize} \item an edge $(i,j)$ in the current network $\bX^t$ is randomly chosen, and a new network $\bX$ is obtained from $\bX^t$ by switching $(i,j)$ (from $0$ to $1$, or vice versa). \item The acceptance probability $ \rho(\bX^t, \bX):= \text{min}\left\{ 1, \dfrac{\mathbb{P}_{\boldsymbol{\theta}}(\bX)}{\mathbb{P}_{\boldsymbol{\theta}}(\bX^t)} \right\} $ is evaluated. \item A value $U$ is generated from the uniform $(0,1)$ distribution. If $U \leq \rho ( \bX^t , \bX )$, then set $\bX^{t + 1} = \bX$, else set $\bX^{t + 1} = \bX^t$.
\end{itemize} \end{enumerate} The first $t_{\rm burn}$ networks are neglected (hence the name burn-in time for $t_{\rm burn}$), then a network is selected every $m$ samples, for a total of $N$ selected networks. For a $p$-star model with parameters vector $\boldsymbol{\theta}$ in the low-temperature phase, the Markov chain is known to take an exponentially long time to converge to its stationary probability distribution. In this case, sampling from the $p$-star model using the Metropolis-Hastings algorithm is highly inefficient. On the other hand, for a $p$-star model with parameters vector $\boldsymbol{\theta}$ in the high-temperature phase, the mixing time of the Markov chain is of order $\Theta(n^2 \log n)$, and the Metropolis-Hastings algorithm is known to produce samples representative of the underlying probability distribution in a reasonably small number of steps. \section{Numerical tests}\label{sec:examples} In what follows, we consider two examples, one with synthetic data and the other with real data taken from the specialized literature. \subsection{Example 1}\label{subsec:example} In order to assess the capability of the mean-field approximation when used in the modified gradient ascent method for maximum log-likelihood estimation, we perform some numerical tests, varying the number of vertices $n$ and comparing the computational cost of the proposed approach (Algorithm \ref{alg3}) with that of the gradient ascent method applied to the maximization of the log-pseudo-likelihood function (Algorithm \ref{alg2}). Algorithm \ref{alg1} is not considered, since it is only of theoretical interest, as explained in Section \ref{sec:ML}.
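The sampling procedure of Section \ref{sec:MH} can be sketched in Python as follows. This is a minimal sketch, assuming the Gibbs form $\mathbb{P}_{\boldsymbol{\theta}}(\bX)\propto \exp(\theta_1 E(\bX)+\theta_2 S_2(\bX)+\theta_3 T(\bX))$, so that the log-acceptance ratio of a single edge toggle reduces to $\pm\,\boldsymbol{\theta}\cdot\Delta\bX_{ij}$ with the difference statistics of Section \ref{sec:MPL}.

```python
import numpy as np

def delta_stats(X, i, j):
    """Change in (edges, 2-stars, triangles) when edge (i,j) goes 0 -> 1;
    degrees and common neighbours are computed with the edge (i,j) excluded."""
    di = X[i].sum() - X[i, j]
    dj = X[j].sum() - X[i, j]
    common = np.dot(X[i], X[j])       # common neighbours of i and j
    return np.array([1.0, di + dj, common])

def metropolis_hastings(theta, n, t_burn, m, n_samples, rng):
    """Single-edge-toggle Metropolis-Hastings chain, thinned every m steps."""
    X = np.zeros((n, n))
    samples = []
    for t in range(t_burn + m * n_samples):
        i, j = rng.choice(n, size=2, replace=False)
        sign = 1.0 if X[i, j] == 0 else -1.0
        log_ratio = sign * np.dot(theta, delta_stats(X, i, j))
        if rng.random() <= min(1.0, np.exp(log_ratio)):
            X[i, j] = X[j, i] = 1.0 - X[i, j]     # accept the toggle
        if t >= t_burn and (t - t_burn) % m == 0:
            samples.append(X.copy())
    return samples
```

For $\boldsymbol{\theta}=\mathbf{0}$ every toggle is accepted and the stationary distribution is uniform over graphs, so the average edge density of the samples is close to $1/2$, which provides a quick consistency check of the sketch.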
The tests are performed to show that the proposed mean-field approximation of the components of the gradient in the modified gradient ascent method for maximum log-likelihood estimation has the advantage of being computationally much faster than the gradient ascent method applied to the maximization of the log-pseudo-likelihood function. This is particularly useful when the number of vertices $n$ is large. \\ The tests are performed as follows. First, for each $n$, the 3 parameters of the vector $\boldsymbol{\theta}$ are fixed as follows: $$ \theta_1=2 \beta_1, \quad \theta_2=\dfrac{\beta_2}{n}, \quad \theta_3=\dfrac{\beta_3}{n}, $$ with $\beta_1=-0.8, \beta_2=-0.2, \beta_3=2$. Notice that the parameters for the 2-stars ($\theta_2$) and the triangles ($\theta_3$) are rescaled by the factor $n$ as in \cite{Diaconis}. This is also justified by the form of the function $\Psi_{\boldsymbol{\theta}}(p)$. Then, $N=950$ i.i.d. samples $\{ \bX^1, \dots, \bX^N \}$, drawn from $\mathbb{P}_{\boldsymbol{\theta}}$, are generated using the Metropolis-Hastings algorithm \cite{Snijders2002}, as described in the previous section. The number of samples $N$ is large enough to ensure that the log-pseudo-likelihood estimate practically coincides with the log-likelihood estimate. Then, the gradient ascent method applied to the log-pseudo-likelihood function (Algorithm \ref{alg2}) is compared to the modified gradient ascent method, based on the mean-field approximation of the moments, applied to the log-likelihood function (Algorithm \ref{alg3}), for an increasing number of vertices $n=10, 20, \dots, 80 $, using for each comparison the same data set $\{ \bX^1, \dots, \bX^N \}$. In Figure \ref{fig.CPU}, the CPU time per iteration is shown for the two cases $n=20, 40$.
It is clear that the proposed Algorithm \ref{alg3}, based on the mean-field approximation of the moments, requires a much lower cost per iteration than Algorithm \ref{alg2}, based on the log-pseudo-likelihood function, and that this cost remains practically constant as $n$ increases. \begin{figure}[H] \centering \includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, totalheight=0.35\textheight, angle=0]{Com_CPU.png} \caption{CPU time per iteration for Algorithms \ref{alg2} and \ref{alg3} applied to the numerical tests in Subsection \ref{subsec:example}, for the two cases $n=20, 40$.} \label{fig.CPU} \end{figure} In contrast, when the dimension $n$ of the problem increases, the computational cost per iteration of the gradient ascent method for maximum log-pseudo-likelihood estimation increases significantly, because the evaluation of the quantities $\Delta \bX^k_{ij}$ becomes more and more expensive. On the other hand, our proposed mean-field approximation of the moments, even if it requires more iterations for convergence to the ``true'' parameters vector (i.e., the one used to generate the samples), has the advantage of a very low cost per iteration, because only the solution of a nonlinear system of equations is required at each iteration, which is obtained via a nested Newton's iterative procedure. Figure \ref{fig.Conv} reports, for $n=20$ and $i=1,2,3$, the sequences $\{\theta^k_i\}^{N_{\rm it}}_{k=1}$ of estimates for each parameter, obtained, respectively, when maximizing the log-likelihood via the modified gradient ascent method based on the mean-field approximation (Algorithm \ref{alg3}) and when maximizing the log-pseudo-likelihood via gradient ascent (Algorithm \ref{alg2}).
One can notice from the figure that the sample size $N$ is large enough to ensure that the maximum log-pseudo-likelihood estimate practically coincides with the maximum log-likelihood estimate, because the two sequences converge to the same estimates $\theta^*_i$, $i=1,2,3$. For a fair comparison, the number of iterations $N_{\rm it}$ and the stepsize $\gamma$ of the gradient ascent method are the same for the two methods in all the simulations. It is clear that, although the sequences produced by Algorithm \ref{alg2} are usually more precise than the ones produced by Algorithm \ref{alg3} in the approximation of the parameters, they require a much larger computational cost per iteration. For this reason, for the same computational time (rather than the same number of iterations), the proposed Algorithm \ref{alg3} produces better estimates than Algorithm \ref{alg2}. \begin{figure}[H] \centering \subfigure[]{\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, totalheight=0.27\textheight, angle=0]{th1.png}}\\ \subfigure[]{\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, totalheight=0.27\textheight, angle=0]{th2.png}}\\ \subfigure[]{\includegraphics[trim=0cm 0cm 0cm 0cm, clip=true, totalheight=0.27\textheight, angle=0]{th3.png}} \caption{Sequences of estimates of the parameters obtained by Algorithms \ref{alg2} and \ref{alg3} for the numerical test in Subsection \ref{subsec:example} with $n=20$.} \label{fig.Conv} \end{figure} Finally, Table \ref{tab:1} reports, for each simulation and $i=1,2,3$, the final estimates $\theta^*_{i, {\rm MF}}$ obtained by the proposed Algorithm \ref{alg3}, together with the corresponding components of the ``true'' parameters vector which has been used to generate the samples through the Metropolis-Hastings algorithm. The table shows, for these cases, the effectiveness of the Metropolis-Hastings sampler for the generation of the samples, since the estimates above are quite close to the corresponding ``true'' parameters.
It is also worth mentioning that, in all cases, the final estimated parameters are in the high-temperature phase. \begin{table} \caption{For the numerical tests of Subsection \ref{subsec:example}: final estimates $\theta^*_{i, {\rm MF}}$ obtained by Algorithm \ref{alg3}, and corresponding components of the ``true'' parameters vector, used to generate the samples through the Metropolis-Hastings algorithm.} \centering \begin{tabular}{ccc ccc ccc} \hline \noalign{\smallskip} $n$ & $N_{\rm it} $ & $\gamma$ & $\theta_1=2\beta_1$ &$\theta^*_{1, {\rm MF}}$ & $\theta_2=\beta_2/n$ & $\theta^*_{2, {\rm MF}}$ & $\theta_3= \beta_3/n$ & $\theta^*_{3, \rm{MF}}$\\ \noalign{\smallskip} \hline\hline \noalign{\smallskip} 10 & 1000 & 1E-2 &-1.6 &-1.5377 & -0.02 & -0.0462 & 0.2 & 0.2105 \\ 20 & 2500 & 1E-3 &-1.6 &-1.5947 & -0.01 & -0.080 & 0.1 & 0.0891 \\ 30 & 50000 & 1E-4 &-1.6 &-1.5884 & -0.006667 & -0.0053 & 0.06667 & 0.0460 \\ 40 & 50000 & 1E-4 &-1.6 &-1.5877 & -0.005 & -0.0060 & 0.05 & 0.0518 \\ 50 & 150000 & 1E-5 &-1.6 &-1.5977 & -0.004 & -0.0036 & 0.04 & 0.0355 \\ 60 & 150000 & 1E-5 &-1.6 &-1.5873 & -0.003333 & -0.0036 & 0.03333 & 0.0296 \\ 70 & 150000 & 1E-5 &-1.6 &-1.5775 & -0.002857 & -0.0032 & 0.02857 & 0.0250 \\ 80 & 900000 & 1E-6 &-1.6 &-1.6054 & -0.0025 & -0.0025 & 0.0250 & 0.0266 \\ \noalign{\smallskip}\hline \end{tabular} \label{tab:1} \end{table} \subsection{Example 2: The trade network of Renaissance Florentine families} To test the accuracy of the mean-field approximation proposed in this work, we compare the estimates produced by the two methods at hand (Algorithms \ref{alg2} and \ref{alg3}) on a real example taken from the specialized literature. The network illustrated in Figure \ref{fig.adjacency} represents the business ties between $16$ Florentine families during the Renaissance, and is taken from \cite{Bre1986} (see also \url{http://moreno.ss.uci.edu/data.html}).
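The three statistics of a given network can be computed directly from its adjacency matrix. A minimal Python sketch (applied to the Florentine business network below, it returns the 20 edges, 47 2-stars, and 3 triangles quoted in the text):

```python
from math import comb
import numpy as np

def count_statistics(X):
    """Numbers of edges, 2-stars, and triangles of an undirected simple
    graph with symmetric, zero-diagonal adjacency matrix X."""
    X = np.asarray(X, dtype=float)
    edges = int(X.sum()) // 2                         # each edge counted twice
    deg = X.sum(axis=1).astype(int)
    two_stars = sum(comb(int(d), 2) for d in deg)     # pairs of incident edges
    triangles = int(round(np.trace(X @ X @ X))) // 6  # closed walks of length 3
    return edges, two_stars, triangles
```

Counting 2-stars as $\sum_i \binom{d_i}{2}$ over the degrees $d_i$, and triangles as $\mathrm{tr}(X^3)/6$, avoids any explicit enumeration of subgraphs.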
\begin{figure}[H] \centering {\includegraphics[trim=1cm 1cm 1cm 1cm, clip=true, totalheight=0.29\textheight, angle=0]{figure_pstar.png}} \caption{A graphical representation of the trade network of Renaissance Florentine families, which is considered in Example 2.} \label{fig.adjacency} \end{figure} The network contains $20$ edges, $47$ 2-stars and $3$ triangles, and is described by the following adjacency matrix: \begin{equation}\nonumber X= \left( \begin{array}{cccc cccc cccc cccc} 0 & 0 & 0 & 0& 0& 0& 0& 0& 1& 0 & 0& 0& 0 & 0& 0& 0 \\ 0 & 0& 0& 0& 0& 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0& 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0& 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ 0& 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 \\ 0& 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0& 1& 0& 0& 0& 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0& 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1& 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\ 0& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \end{array} \right)\,. \end{equation} Also in this case, the mean-field approximation of the moments in the gradient ascent method for maximum log-likelihood estimation is compared with the maximum log-pseudo-likelihood estimate. The proposed method (Algorithm \ref{alg3}) is run for $N_{\rm it}=100000$ iterations with $\gamma_{\rm MF}=1 \rm E-4$, and the final estimated parameters are: $$ \theta_{1, \rm MF}^*=-1.5553 , \quad \theta_{2, \rm MF}^*=-0.0293 , \quad \theta_{3, \rm MF}^*=0.2106.
$$ The time needed for convergence is $165.760$ seconds. The gradient ascent method for maximum log-pseudo-likelihood estimation (Algorithm \ref{alg2}) is run for $N_{ \rm it}=10000$ iterations with $\gamma_{PL}=1 \rm E-3$, and the final estimated parameters are: $$ \theta_{1, \rm PL}^*=-1.6231 , \quad \theta_{2, \rm PL}^*=-0.0188 , \quad \theta_{3, \rm PL}^*=0.2459. $$ The time needed for convergence is $188.379$ seconds. The results show that the estimated parameters are in good agreement. However, the proposed mean-field approximation has the advantage of being faster from the computational viewpoint. As in the previous subsection, the final estimated parameters are in the high-temperature phase. \section{Conclusions}\label{sec:conclusions} The computational advantages of maximum log-likelihood estimation of a 3-parameters $p$-star model via a modified gradient ascent method, based on the mean-field approximation of its 3 moments, have been shown by comparing it with maximum log-pseudo-likelihood estimation via gradient ascent. These advantages are evident because, in the first method, empirical quantities (i.e., the empirical moments) are computed only during its initialization. In contrast, gradient ascent applied to the maximization of the log-pseudo-likelihood function requires the computation of the empirical quantities $\Delta \bX^k_{ij}$ at each gradient step, which is computationally expensive for a large number of vertices $n$. The proposed algorithm is applicable whenever the parameters are in the high-temperature phase. This can be easily checked graphically at each iteration (see Figure \ref{fig.AR1}). A significant difference between this algorithm and other approaches for parameter estimation in exponential random graph models is its use, inside each iteration, of the mean-field approximation of the moments.
This is cheap from a computational point of view, since at each iteration one has to solve a nonlinear system with only two equations and two unknowns, even when the number $n$ of vertices in the graph is large. Other approaches, instead, apply at each iteration a more expensive Markov Chain Monte Carlo (MCMC) approximation (such as one based on the Metropolis-Hastings sampler)\footnote{It is known that MCMC estimation procedures do not converge for {\em near degenerate} exponential random graph models \cite{Robinsetal2006}, a condition similar to being in the {\em low-temperature phase} as defined in this paper. The paper \cite{Robinsetal2006} also provides a description of various software packages available for Monte Carlo maximum likelihood estimation, such as {\em SIENA}, {\em pnet}, and {\em statnet}.}. As a possible future extension, the proposed algorithm could be applied to more sophisticated and realistic exponential random graph models, such as the alternating $k$-stars and alternating $k$-triangles models \cite{Snijdersetal2006}, provided the mean-field approximation is still valid in such cases. \section*{Compliance with Ethical Standards} \section*{Funding} The authors acknowledge support from the Italian National Interest Project ``Crisis Lab'' (MIUR, PNR 2011-2013). \section*{Conflict of Interest} The authors declare that they have no conflict of interest. \section*{Appendix} Approximate values for $p$ and $q$ in the mean-field approximation are found by applying an iterative solver based on Newton's method to the system of equations obtained from \eqref{eq:p} and \eqref{eq:q} (with the approximation sign replaced by the equality sign, and with $F_p(p,q)$ and $F_q(p,q)$ denoting their right-hand sides): \begin{align*} G_p(p,q):=p- F_p(p,q)&=0, \\ G_q(p,q):=q-F_q(p,q)&=0.
\end{align*} Starting from an initial guess $(p^0,q^0)$, each iteration of Newton's method reads: \begin{equation}\label{eq:Newton} \left( \begin{array}{c} p^{k+1} \\ q^{k+1} \end{array} \right)= \left( \begin{array}{c} p^{k} \\ q^{k} \end{array} \right)- \left( \begin{array}{cc} \dfrac{\partial G_p }{\partial p} & \dfrac{\partial G_p }{\partial q} \\ \dfrac{\partial G_q }{\partial p} & \dfrac{\partial G_q }{\partial q} \end{array} \right)^{-1} \left( \begin{array}{c} G_p(p^{k},q^{k}) \\ G_q(p^{k},q^{k}) \end{array} \right), \end{equation} where the Jacobian matrix is evaluated at $(p^k,q^k)$, and the iteration is stopped as soon as $\Vert ( G_p(p^k,q^k), G_q(p^k,q^k) ) \Vert < {tol}$, for a given tolerance ${tol} >0$. \small
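The Newton iteration above can be sketched in Python with a finite-difference Jacobian (an assumption of this sketch; the analytic partial derivatives of $G_p$ and $G_q$ could of course be used instead). Here \texttt{theta}, \texttt{sigma}, \texttt{alpha} are the quantities $\theta$, $\sigma$, $\alpha$ of the Hamiltonian.

```python
import numpy as np

def newton_mean_field(theta, sigma, alpha, n, p0=0.5, q0=0.25,
                      tol=1e-12, h=1e-7, max_it=100):
    """Newton's method for G(p, q) = (p - F_p(p, q), q - F_q(p, q)) = 0,
    where F_p and F_q are the right-hand sides of the mean-field equations."""
    def G(v):
        p, q = v
        Fp = 1.0/(np.exp(theta - alpha*(n-2)*q + 2*sigma*(n-2)*p) + 1.0)
        num = np.exp(-sigma)*(1.0 + (np.exp(alpha) - 1.0)*p)
        Fq = num/((np.exp(theta - alpha*(n-3)*q + sigma*(2*n-5)*p) + 1.0)**2
                  + num - 1.0)
        return np.array([p - Fp, q - Fq])
    v = np.array([p0, q0])
    for _ in range(max_it):
        g = G(v)
        if np.linalg.norm(g) < tol:           # residual-based stopping rule
            break
        # finite-difference Jacobian, one column per unknown
        J = np.column_stack([(G(v + h*e) - g)/h for e in np.eye(2)])
        v = v - np.linalg.solve(J, g)         # Newton step
    return v  # (p, q)
```

For $\sigma=\alpha=0$ the system decouples and the sketch converges in a single step to $p=1/(e^{\theta}+1)$, $q=p^2$.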
\section{Methodology}\label{sec:Methodology} \subsection{Preliminaries} \subsubsection{Problem statement} The goal is to develop an RL-based approach that provides a high rate of successful UAV landings on a horizontally moving platform. A landing trial is considered successful if the UAV touches down on the surface of the moving platform. The approach should require less training time than the baseline method and generalize well enough to be deployed on real hardware. Furthermore, required hyperparameters should be determined in an interpretable way. \subsubsection{Base Notations} \label{sec:prelim_notations} \begin{figure}[thpb] \centering \includegraphics[scale=0.35]{./pics/multi_rotor_drawing_small.pdf} \caption{Coordinate frames (blue, $e$: earth frame, $mp$: body-fixed frame of moving platform, $mr$: body-fixed frame of multi-rotor UAV, $s$: stability frame), forces (red) and attitude angle (green) associated with 1D motion in longitudinal direction.} \label{fig:model} \end{figure} We first define a fly zone $\mathcal{F}=[-x_{max},x_{max}]\times [-y_{max},y_{max}] \times [0,z_{max}]\subset\mathbb{R}^3[\si{m}]$ in which the motion of the multi-rotor UAV $mr$ and the moving platform $mp$ is considered, as illustrated in Fig.~\ref{fig:model}. Furthermore, for the goal of landing on the moving platform, the RL agent controlling the multi-rotor vehicle has to consider only the relative translational and rotational motion. This is why we express the relative dynamics and kinematics in the stability frame $s$, to be able to formulate the RL problem from the UAV's and thus the agent's point of view. For this purpose, we first denote the Euler angles roll, pitch and yaw in the stability frame with $_s\boldsymbol\varphi= (\phi,\theta,\psi)^T$.
Each landing trial begins with the UAV in a hover state, leading to the following initial conditions for rotational movement $ {}_s\dot{\boldsymbol\varphi}_{mr,0} = {}_s\dot{\boldsymbol\varphi}_{mp,0} = \mathbf{0} \si{rad/s} $, $ {}_s\boldsymbol\varphi_{mr,0} = {}_s\boldsymbol\varphi_{mp,0} = \mathbf{0} \si{rad}$. The initial conditions for the multi-rotor vehicle's translational motion are expressed in the earth-fixed frame $e$: $_e\dot{\mathbf{r}}_{mr,0} =\mathbf{0}\si{m/s}$ and $_e\mathbf{r}_{mr,0} \in \mathcal{F}$. Transforming the translational initial conditions of both the multi-rotor vehicle and the moving platform into the stability frame allows us to fully express the relative motion as the second order differential equations \begin{small} \begin{equation} _{s}\ddot{\boldsymbol\varphi}_{rel} = \mathbf{0} - {}_s\ddot{\boldsymbol\varphi}_{mr} ~~~ \textrm{and} ~~~ {}_{s}\ddot{\mathbf{r}}_{rel} = {}_s\ddot{\mathbf{r}}_{mp} - {}_s\ddot{\mathbf{r}}_{mr}. \label{eq:non_linear_rel_motion} \end{equation} \end{small} \vspace{-11pt} For the remainder of this work, we specify the following values that will serve as continuous observations of the environment (see Sec.~\ref{sec:discrete_state_space} and Fig.~\ref{fig:sim_framework}) \vspace{-5pt} \begin{small} \begin{align} \mathbf{p}_c &= [p_{c,x},p_{c,y},p_{c,z}]^T = {}_s\mathbf{r}_{rel} \label{eq:cont_obs_p_cx}\\ \mathbf{v}_c &= [v_{c,x},v_{c,y},v_{c,z}]^T = {}_s\dot{\mathbf{r}}_{rel}\\ \mathbf{a}_c &= [a_{c,x},a_{c,y},a_{c,z}]^T = {}_s\ddot{\mathbf{r}}_{rel}\\ \boldsymbol\varphi_c &= [\phi_{rel},\theta_{rel},\psi_{rel}]^T = {}_{s}\boldsymbol\varphi_{rel}.\label{eq:cont_obs_phi_rel} \end{align} \end{small} \vspace{-11pt} $\mathbf{p}_c$ denotes the relative position, $\mathbf{v}_c$ the relative velocity, $\mathbf{a}_c$ the relative acceleration and $\boldsymbol\varphi_c$ the relative orientation between vehicle and platform, expressed in the stability frame.
\subsection{Motion of the Landing Platform} We introduce the rectilinear periodic movement (RPM) of the platform that is applied during training. With the initial conditions $_e\dot{\mathbf{r}}_{mp,0} = \mathbf{0}\si{m/s}$ and $_e\mathbf{r}_{mp,0} = \mathbf{0}\si{m}$, the translational motion of the platform is expressed as a second order differential equation in the earth-fixed frame \textit{e}. \begin{small} \begin{equation} \begin{split} _e\ddot{\mathbf{r}}_{mp} &=-\left(v_{mp}^2/r_{mp}\right)\left[\sin(\omega_{mp} t),0,0\right]^T , \text{ } \omega_{mp} = v_{mp}/r_{mp}\\ \end{split} \label{eq:platform_acceleration_rpm} \end{equation} \end{small} \vspace{-10pt} \noindent where $r_{mp}$ denotes the maximum amplitude of the trajectory, $v_{mp}$ the maximum platform velocity and $\omega_{mp}$ the angular frequency of the rectilinear movement. The platform is not subjected to any rotational movement. As a consequence, the maximum acceleration of the platform that is required as part of the hyperparameter determination in Sec.~\ref{sec:hyperparemter_estimation} is \begin{small} \begin{equation} a_{mp,max} = v_{mp}^2/r_{mp}. \label{eq:a_mpmax} \end{equation} \end{small} \vspace{-15pt} \subsection{Basis Learning Task} \label{sec:basis_learning_task} \subsubsection{Dynamic Decoupling} \label{sec:basis_learning_task_preliminaries} The nonlinear dynamics of a multi-rotor vehicle such as a quadcopter can be greatly simplified when the following assumptions are applied: i) linearization around hover flight, ii) small angles, iii) a rigid, symmetric vehicle body, iv) no aerodynamic effects \cite{Wang2016}. Under these conditions, the axes of motion are decoupled. Thus, four individual controllers can be used to independently control the movement in the longitudinal, lateral and vertical directions and around the vertical axis.
For this purpose, low-level controllers track a setpoint $\theta_{ref}$ for the pitch angle (longitudinal motion), $\phi_{ref}$ for the roll angle (lateral motion), $T_{ref}$ for the total thrust (vertical motion) and $\psi_{ref}$ for the yaw angle (rotation around vertical axis). We leverage this fact in our controller structure as further illustrated in Sec.~\ref{sec:Implementation}. Furthermore, this enables us to introduce a learning task for longitudinal 1D motion only. Its purpose is to learn a policy for the pitch angle setpoint $\theta_{ref}$ that induces a 1D movement in longitudinal direction, allowing the vehicle to center over the platform moving in longitudinal direction as well. After the training, we then apply a second instance of the trained agent for controlling the lateral motion, where it produces setpoint values for the roll angle $\phi_{ref}$. This way, full 2D motion capability is achieved. We use the basis learning task to compose a curriculum later in Sec.~\ref{sec:curriculum_discretization}. PID controllers ensure that $\phi_{rel} =0\si{rad},\psi_{rel} = 0\si{rad}$ and $ v_{z_{rel}} = \text{const} < 0\si{m/s}$ during training. \subsubsection{Markov Decision Process} For the RL task formulation we consider the finite, discrete Markov Decision Process \cite{Sutton2015}. The goal of the agent is to learn the optimal action value function $Q^*$ to obtain the optimal policy $\pi^* = \argmax_a Q^*(s,a)$, maximizing the sum of discounted rewards $\Sigma^{T-t-1}_{k = 0}\gamma^{k}r_{t+k+1}$. \subsubsection{Discrete Action Space} We choose a discrete action space comprising three actions, namely \textit{increasing pitch}, \textit{decreasing pitch} and \textit{do nothing}, denoted by \begin{small} \begin{align} \mathbb{A_d} = \left[ \Delta\theta^{+},\Delta\theta^{-}, -\right].
\label{eq:discrete_action_space} \end{align} \end{small} \vspace{-10pt} The pitch angle increment is defined by \begin{small} \begin{equation} \Delta\theta = \frac{\theta_{max}}{n_{\theta}}, n_{\theta}\in \mathbb{N}^+, \label{eq:normalized_pitch_set} \end{equation} \end{small} \noindent where $\theta_{max}$ denotes the maximum pitch angle and $n_{\theta}$ the number of intervals subdividing the range $[0,\theta_{max}]$. The set of possible pitch angles that can be taken by the multi-rotor vehicle is then \begin{small} \begin{align} \begin{split} \Theta &= \left\lbrace -\theta_{max} + i_{\theta}\Delta\theta\lvert i_{\theta} \in \left\lbrace 0,\ldots,2n_{\theta}\right\rbrace\right\rbrace, \end{split} \end{align} \end{small} \vspace{-10pt} \noindent where $i_{\theta}$ is used to select a specific element. \subsubsection{Discrete State Space} \label{sec:discrete_state_space} For the discrete state space we first scale and clip the continuous observations of the environment that are associated with motion in the longitudinal direction to a value range of $[-1,1]$. For this purpose, we use the function $\text{clip}(x,x_{min},x_{max})$ that clips the value of $x$ to the range of $[x_{min},x_{max}]$. \begin{small} \begin{align} p_x &= \text{clip}(p_{c,x}/p_{max},-1,1) ~~~ v_x = \text{clip}(v_{c,x}/v_{max},-1,1) \nonumber \\ a_x &= \text{clip}(a_{c,x}/a_{max},-1,1), \label{eq:xyz_c} \end{align} \end{small} \vspace{-10pt} \noindent where $p_{max},v_{max},a_{max}$ are the values used for the normalization of the observations. The reason for the clipping is that our state space discretization technique, a crucial part of the sequential curriculum, is derived assuming a worst case scenario for the platform movement in which the multi-rotor vehicle is hovering (see Sec.~\ref{sec:curriculum_discretization}) and where scaling an observation with its maximum values, $p_{max}, v_{max}$ and $a_{max}$, would constitute a normalization.
However, once the multi-rotor vehicle starts moving too, the scaled observation values could exceed the value range of $[-1,1]$. Clipping allows the application of the discretization technique also for a moving UAV. Next, we define a general discretization function $d(x,x_1,x_2)$ that can be used to map a continuous observation of the environment to a discrete state value. \begin{small} \begin{equation} d(x,x_1,x_2) = \begin{cases} & 0 \text{ if } x\in [-x_2,-x_1)\\ & 1 \text{ if } x\in [-x_1,x_1] \\ & 2 \text{ if } x\in (x_1,x_2] \end{cases} \label{eq:mapping} \end{equation} \end{small} We apply \eqref{eq:mapping} to the normalized observations \eqref{eq:xyz_c} to determine the discrete state $s =(p_d, v_d,a_d,i_{\theta}) \in \mathbb{S}$, where $i_{\theta} \in \left\lbrace 0,\ldots,2n_{\theta}\right\rbrace$, $\mathbb{S}= \mathbb{N}_0^{3\times 3\times 3\times (2n_{\theta}+1)}$ and \begin{small} \begin{align} p_d &= d\left(p_x,p_{goal},p_{lim}\right), ~~~~ v_d = d\left(v_x,v_{goal},v_{lim}\right) \label{eq:p_d}\\ \textrm{and} ~~~ a_d &= d\left(a_x,a_{goal},a_{lim}\right).\label{eq:a_d} \end{align} \end{small} \vspace{-10pt} In \eqref{eq:p_d}-\eqref{eq:a_d}, the normalized values $\pm p_{goal}, \pm v_{goal}, \pm a_{goal}$ define the boundaries of the discrete states the agent should learn to reach, whereas the normalized values $\pm p_{lim}, \pm v_{lim}, \pm a_{lim}$ denote the limits in the observations the agent should learn not to exceed when controlling the multi-rotor vehicle. \subsubsection{Goal State} We define the goal state $s^* \in \mathbb{S}$ as \begin{small} \begin{equation} s^* = \left\lbrace 1,1,1,*\right\rbrace. \label{eq:goal_state} \end{equation} \end{small} \vspace{-10pt} This means the goal state is reached if $-p_{goal}\leq p_x\leq p_{goal}$, $-v_{goal}\leq v_x\leq v_{goal}$ and $-a_{goal}\leq a_x\leq a_{goal}$ regardless of the value of $i_{\theta}$.
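As a minimal sketch (function names are ours, not taken from any released code), the clipping \eqref{eq:xyz_c} and the mapping \eqref{eq:mapping} that together produce the discrete state can be written as:

```python
def clip(x, x_min, x_max):
    """Clip x to the range [x_min, x_max], as used in the normalization."""
    return max(x_min, min(x, x_max))

def d(x, x1, x2):
    """Discretization function: equivalent to Eq. (mapping) for x in [-x2, x2];
    values outside that range fall into the adjacent outer state 0 or 2."""
    if x < -x1:
        return 0
    if x <= x1:
        return 1
    return 2

def discrete_state(p_c_x, v_c_x, a_c_x, p_max, v_max, a_max,
                   p_goal, p_lim, v_goal, v_lim, a_goal, a_lim, i_theta):
    """Normalize, clip and discretize the longitudinal observations into
    the discrete state s = (p_d, v_d, a_d, i_theta)."""
    p_x = clip(p_c_x / p_max, -1.0, 1.0)
    v_x = clip(v_c_x / v_max, -1.0, 1.0)
    a_x = clip(a_c_x / a_max, -1.0, 1.0)
    return (d(p_x, p_goal, p_lim), d(v_x, v_goal, v_lim),
            d(a_x, a_goal, a_lim), i_theta)
```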
\subsubsection{Reward Function} Our reward function $r_t$, inspired by a shaping approach \cite{Rodriguez-Ramos2018} is given as \begin{small} \begin{align} r_t = r_p + r_v + r_{\theta} + r_{dur} + r_{term}, \label{eq:reward} \end{align} \end{small} \vspace{-10pt} \noindent where $r_{term} = r_{suc}$ if $s=s^*$ and $r_{term} = r_{fail}$ if $|p_x|>p_{lim}$. In all other cases, $r_{term} = 0$. $r_{suc}$ and $r_{fail}$ will be defined at the end of this section. Positive rewards $r_p$, $r_v$ and $r_{\theta}$ are given for a reduction of the relative position, relative velocity and the pitch angle, respectively. The negative reward $r_{dur}$ gives the agent an incentive to start moving. Considering the negative weights $w_{p},w_{v},w_{\theta}, w_{dur}$ and the agent frequency $f_{ag}$, derived in Sec.~\ref{sec:hyperparemter_estimation}, with which actions are drawn from $\mathbb{A}$, we define one time step as $\Delta t=1/f_{ag}$ and $r_p,r_v,r_{\theta},r_{dur}$ as \begin{small} \begin{align} &r_p =\text{clip}(w_p(|p_{x,t}|-|p_{x,t-1}|),-r_{p,max},r_{p,max}) \label{eq:clip_r_p}\\ &r_v = \text{clip}(w_v(|v_{x,t}|-|v_{x,t-1}|),-r_{v,max},r_{v,max})\label{eq:clip_r_v}\\ &r_ {\theta} = w_{\theta}( |\theta_{d,t}|-|\theta_{d,t-1}|)/\theta_{max} v_{lim}\label{eq:scale_r_theta}\\ &r_{dur} = w_{dur} v_{lim}\Delta t.\label{eq:scale_r_dur} \end{align} \end{small} \vspace{-10pt} The weights are negative so that decreasing the relative position, relative velocity or the pitch angle yields positive reward values. Clipping $r_p$ and $r_v$ to $\pm r_{p,max}$ and $\pm r_{v,max}$, as well as scaling $r_{\theta}$ and $r_{dur}$ with $v_{lim}$ is necessary due to their influence on the value of $r_{max}$. $r_{max}$ denotes the maximum achievable reward in a non-terminal timestep if $v_x \leq v_{lim}$ and $a_x \leq a_{lim}$ and thus complies with the limits set by the motion scenario described in Sec.~\ref{sec:curriculum_discretization} during the derivation of the sequential curriculum. 
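A minimal sketch of the non-terminal part of \eqref{eq:reward}, i.e., the shaping terms \eqref{eq:clip_r_p}--\eqref{eq:scale_r_dur}, is given below; the argument names are ours, and any weight values passed in would be placeholders rather than the tuned values:

```python
def clip(x, lo, hi):
    return max(lo, min(x, hi))

def shaping_reward(p_t, p_prev, v_t, v_prev, theta_t, theta_prev,
                   w_p, w_v, w_theta, w_dur, theta_max, v_lim, a_lim, dt):
    """Non-terminal reward r_p + r_v + r_theta + r_dur.
    The weights are negative, so reducing |p|, |v| or |theta| is rewarded;
    r_p and r_v are clipped to the maximum per-step magnitudes."""
    r_p_max = abs(w_p) * v_lim * dt
    r_v_max = abs(w_v) * a_lim * dt
    r_p = clip(w_p * (abs(p_t) - abs(p_prev)), -r_p_max, r_p_max)
    r_v = clip(w_v * (abs(v_t) - abs(v_prev)), -r_v_max, r_v_max)
    r_theta = w_theta * (abs(theta_t) - abs(theta_prev)) / theta_max * v_lim
    r_dur = w_dur * v_lim * dt
    return r_p + r_v + r_theta + r_dur
```

For instance, a large reduction of the relative position in one time step saturates $r_p$ at $r_{p,max}$.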
The clipping applied in \eqref{eq:clip_r_p} and \eqref{eq:clip_r_v} ensures that the curriculum remains applicable also in situations in which this compliance is not given. $r_{max}$ plays a role in the scaling of Q-values that is performed as part of the knowledge transfer during different curriculum steps. It is defined as follows \begin{small} \begin{align} r_{p,max} &= |w_p| v_{lim} \Delta t ~~~~~~~~~~~~~ r_{v,max} = |w_v| a_{lim} \Delta t\\ r_{\theta,max} &= |w_{\theta}| v_{lim} \Delta\theta/\theta_{max} ~~~~ r_{dur,max} = w_{dur} v_{lim}\Delta t\\ r_{max} &= r_{p,max}+r_{v,max}+r_{\theta,max}+r_{dur,max}. \end{align} \end{small} \vspace{-12pt} With $r_{max}$ derived and the weights $w_{suc}$ and $w_{fail}$, we can finally define the success and failure rewards as \begin{small} \begin{equation} r_{suc} = w_{suc} r_{max} ~~ \textrm{and}~~~ r_{fail} = w_{fail} r_{max}.\label{eq:r_suc_r_fail} \end{equation} \end{small} \vspace{-12pt} All weights of the reward function were determined experimentally. \subsubsection{Double Q-Learning} We use the Double Q-Learning algorithm \cite{Hasselt2010} to address the problem of overestimating action values and to increase training stability. Inspired by \cite{Even-Dar2003}, we apply a state-action-pair-specific learning rate that decreases the more often the state-action pair has been visited. \begin{small} \begin{align} \alpha(s_t,a_t) = \max\left\lbrace \left( n_c(s_t,a_t)+1\right)^{-\omega},\alpha_{min}\right\rbrace, \end{align} \end{small} \vspace{-12pt} \noindent where $ n_c(s_t,a_t)$ denotes the number of visits the state-action pair $\left(s_t,a_t \right)$ has received up to timestep $t$ and $\omega$ is a decay factor. To keep a certain minimal learning ability, the learning rate is constant once the value of $\alpha_{min}$ has been reached. Sufficient exploration of the state space can be ensured by applying an $\epsilon$-greedy policy for the action selection during training.
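The visit-count-dependent learning rate above can be sketched as follows; the default values for $\omega$ and $\alpha_{min}$ are taken from Tab.~\ref{tab:common_training_params}:

```python
def learning_rate(n_visits, omega=0.51, alpha_min=0.02949):
    """State-action-pair-specific learning rate alpha(s_t, a_t): decays with
    the number of previous visits n_visits and is floored at alpha_min."""
    return max((n_visits + 1) ** (-omega), alpha_min)
```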
The exploration rate $\epsilon$ is varied according to a schedule for the episode number. \subsection{Curriculum and Discretization}\label{sec:curriculum_discretization} Our curriculum and discretization approach builds on \cite{Lampton2009}, where a multiresolution method for state space discretization is presented. There, starting with a coarse discretization, a region around a pre-known learning goal is defined. Once the agent has learned to reach that region, it is further refined by discretizing it with additional states. Then, training starts anew but only in the refined region; this is repeated until a pre-defined resolution around the learning goal has been reached. Consequently, a complex task is reduced to a series of smaller, quickly learnable subtasks. \begin{figure} \centering \includegraphics[scale = 0.35]{./pics/curriculum.pdf} \caption{Illustration of the sequential curriculum for $n_{cs}+1$ curriculum steps. Each curriculum step consists of an instance of the basis learning task.} \label{fig:seq_cur} \end{figure} In our novel approach, we introduce a similar multiresolution technique combined with transfer learning as part of a sequential curriculum \cite{Narvekar2020} to accelerate and stabilize training even further. Each new round of training in the multiresolution setting constitutes a different curriculum step. A curriculum step is an instance of the basis learning task introduced in Sec.~\ref{sec:basis_learning_task}. This means in particular that each curriculum step has its own Q-table. Since the size of the state and action space is the same throughout all steps, knowledge can be easily transferred to the subsequent curriculum step by copying the Q-table, which then serves as a starting point for its training. Figure~\ref{fig:seq_cur} illustrates the procedure of the sequential curriculum.\\ However, throughout the curriculum the goal regions become smaller, leading to less expansive maneuvers of the multi-rotor.
Thus, the achievable reward, which depends on relative change in position and velocity, is reduced. In order to achieve a consistent adaptability of the Q-values over all curriculum steps, the initial Q-values need to be scaled to match the maximum achievable reward per timestep. Therefore, we define \begin{small} \begin{equation} Q_{init, i+1} = r_{max,i+1}/r_{max,i} Q_{result,i}, \end{equation} \end{small} \vspace{-10pt} \noindent where $i\geq 1$ is the current curriculum step. Furthermore, adding more curriculum steps with smaller goal regions possibly leads to previously unconsidered effects, such as overshooting the goal state. This is why in our work, training is not only restricted to the latest curriculum step but performed also on the states of the previous curriculum steps when they are visited by the agent. \begin{figure} \centering \includegraphics[scale = 0.3]{./pics/discrete.pdf} \caption{Illustration of the mapping of the normalized observations $p_x$ and $v_x$ to the discrete states (red - $0$, yellow - $1$, blue - $2$) for a curriculum with three steps $(i=0,i=1,i=2)$. The yellow chequered regions illustrate the size of the goal state when the curriculum step is the last of the sequence.} \label{fig:discretization} \end{figure} Furthermore, the discretization introduced in Sec.~\ref{sec:discrete_state_space}, which is now associated with a curriculum step, covers only a small part of the continuous observation space with few discrete states. In order to completely capture the predominant environment characteristics of the subspace in which the agent is expected to learn the task, it is necessary for it to have the knowledge of the state space topology \cite{Anderson1994}. Against the background of the multiresolution approach, this can be rephrased as the question of how to define suitable goal regions, i.e., the size of the goal states. 
To this end, we again leverage the insight that accurate knowledge about the relative position and velocity is only required when landing is imminent. We begin by choosing an exponential contraction of the size of the unnormalized goal state $p_{c,x,goal,i}$ associated with the $i^{th}$ curriculum step such that \begin{small} \begin{align} &p_{c,x,goal,0} = \sigma^2 x_{max} , ~~~ p_{c,x,goal,i} = \sigma^{2(i+1)}x_{max},\\ &p_{c,x,goal,n_{cs}} = l_{mp}/2 = \sigma^{2(n_{cs}+1)}x_{max} \label{eq:exp_contraction} \end{align} \end{small} \vspace{-12pt} \noindent where $0<\sigma<1$ is a contraction factor, $l_{mp} \in \mathbb{R}[\si{m}]$ is the edge length of the squared moving platform, $x_{max}$ is the boundary of the fly zone $\mathcal{F}$ (see Sec.~\ref{sec:prelim_notations}) and $n_{cs}$ is the number of curriculum steps following the initial step. The contraction factor $\sigma$ plays a role in how easy it is for the agent to reach the goal state. If it is set too low, the agent will receive success rewards only rarely, thus leading to an increased training time. \eqref{eq:exp_contraction} can be solved for $n_{cs}$ because $\sigma$, $l_{mp}$ and $x_{max}$ are known parameters. To determine the relationship between the contraction of the positional goal state and the contraction of the velocity goal state, we need a kinematics model that relates these variables. For this purpose, we envision the following worst case scenario for the relative motion \eqref{eq:non_linear_rel_motion}. Assume that the vehicle hovers right above the platform which is located on the ground in the center of the fly zone $\mathcal{F}$, with $v_x = 0\si{m/s}$ and $\psi_{rel} = 0$. Now, the platform starts to constantly accelerate with the maximum possible acceleration $a_{mp,max}$ defined by \eqref{eq:a_mpmax} until it has reached $x_{max}$ after a time $t_{0} = \sqrt{2 x_{max}/a_{mp,max}}$. 
This is considered to be the worst case since the platform never slows down, unlike the rectilinear periodic movement \eqref{eq:platform_acceleration_rpm} used for training. The evolution of the continuous observations over time is then expressed by \begin{small} \begin{align} a_{x,c,wc}(t) = a_{mp,max}, &~~~~ v_{x,c,wc}(t) =a_{mp,max} t \label{eq:a_p_s_rel_max}\\ \textrm{and} ~~ p_{x,c,wc}(t) &=0.5 a_{mp,max}t^2,\label{eq:a_v_s_rel_max} \end{align} \end{small} \vspace{-12pt} \noindent which constitute a simple kinematics model that relates the relative position and relative velocity via the time $t$. Next, we consider $p_{max}=x_{max}$, $v_{max}= a_{mp,max}t_{0}$ and $a_{max} = a_{mp,max}$ for the normalization \eqref{eq:xyz_c} and discretize \eqref{eq:a_p_s_rel_max} and \eqref{eq:a_v_s_rel_max} via the time $t_i = \sigma^i t_0$. This allows us to determine the values of $p_{lim,i}$, $v_{lim,i}$, $a_{lim,i}$ and $p_{goal,i}$, $v_{goal,i}$, $a_{goal,i}$. They are required for the basis learning task presented in Sec.~\ref{sec:basis_learning_task} that now constitutes the $i^{th}$ curriculum step. For this purpose, we define with $i \in \left\lbrace 0,1,\ldots,n_{cs}\right\rbrace$ \begin{small} \begin{align} p_{lim,i}&= \sigma^{2i}= 0.5 a_{mp,max} t_i^2/p_{max}, \\ v_{lim,i}&=\sigma^{i}= a_{mp,max} t_i /v_{max}, \\ a_{lim,i} &= 1 , \\ p_{goal,i}&=\beta_p \sigma^{2i}=\beta_p p_{lim,i}, \\ v_{goal,i}&=\beta_v\sigma^i=\beta_v v_{lim,i}, \\ a_{goal,i} &= \beta_{a}\sigma_{a}=\beta_{a}\sigma_{a} a_{lim,i} , \end{align} \end{small} \vspace{-12pt} \noindent where $\beta_p=\beta_v =\beta_{a} = 1/3$ if the $i$th curriculum step is the curriculum step that was most recently added to the curriculum sequence, and $\beta_p=\sigma^2,\beta_v = \sigma,\beta_{a}=1$ otherwise. 
Thus, for the latest curriculum step the goal values result from scaling the associated limit value with a factor of $1/3$, a value that has been found empirically, and for all previous steps from the discretized time applied to \eqref{eq:a_p_s_rel_max} and \eqref{eq:a_v_s_rel_max}. The entire discretization procedure is illustrated by Fig.~\ref{fig:discretization}. Introducing a different scaling value only for the latest curriculum step has been empirically found to improve the agent's ability to follow the moving platform. The discretization of the acceleration is the same for all curriculum steps and is defined by a contraction factor $\sigma_a$ which has been empirically set. Finding a suitable value for $\sigma_a$ has been guided by the notion that if this value is chosen too high, the agent will have difficulties reacting to changes in the relative acceleration. This is due to the fact that the goal state would cover an excessively large range in the discretization of the relative acceleration. However, if it is chosen too low, the opposite is the case. The agent would only rarely be able to visit the discrete goal state $a_{goal,i}$, thus unnecessarily foregoing one of only three discrete states. \\ During the training of the different curriculum steps, the following episodic terminal criteria have been applied. On the one hand, the episode terminates with success if the goal state $s^*$ of the latest curriculum step is reached and the agent has remained within that curriculum step's discrete states for at least one second without interruption. On the other hand, the episode is terminated with a failure if the UAV leaves the fly zone $\mathcal{F}$ or if the success terminal criterion has not been met after the maximum episode duration $t_{max}$. The reward received by the agent in the terminal time step for successful or failing termination of the episode is defined by \eqref{eq:r_suc_r_fail}.
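To make the construction concrete, the following sketch (our own illustrative code, not released implementation code) computes the number of curriculum steps from \eqref{eq:exp_contraction}, the per-step limit and goal values, and the Q-table rescaling between steps:

```python
import math

def num_curriculum_steps(sigma, l_mp, x_max):
    """Solve l_mp/2 = sigma^(2*(n_cs+1)) * x_max for n_cs and round up,
    so the final goal region is at most half the platform edge length."""
    n = math.log(l_mp / (2.0 * x_max)) / (2.0 * math.log(sigma)) - 1.0
    return math.ceil(n)

def step_bounds(i, sigma, latest, sigma_a=0.416):
    """Normalized limit and goal values of curriculum step i."""
    p_lim, v_lim, a_lim = sigma ** (2 * i), sigma ** i, 1.0
    if latest:  # empirical 1/3 scaling for the most recently added step
        beta_p = beta_v = beta_a = 1.0 / 3.0
    else:
        beta_p, beta_v, beta_a = sigma ** 2, sigma, 1.0
    return {"p_lim": p_lim, "v_lim": v_lim, "a_lim": a_lim,
            "p_goal": beta_p * p_lim, "v_goal": beta_v * v_lim,
            "a_goal": beta_a * sigma_a * a_lim}

def rescale_q(q_table, r_max_old, r_max_new):
    """Transfer a Q-table to the next curriculum step by rescaling it to
    the new maximum achievable per-step reward."""
    s = r_max_new / r_max_old
    return [[s * v for v in row] for row in q_table]
```

With $\sigma=0.8$, the step count reproduces the values used later in the implementation: $n_{cs}=4$ for a $9\times 9\,\si{m}$ fly zone ($x_{max}=4.5\,\si{m}$) with a $1\,\si{m}$ platform, and $n_{cs}=3$ for a $2\times 2\,\si{m}$ fly zone with a $0.5\,\si{m}$ platform.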
Once the trained agent is deployed, the agent selects the actions based on the Q-table of the latest curriculum step to which the continuous observations can be mapped (see Fig.~\ref{fig:seq_cur}). \subsection{Hyperparameter Determination} \label{sec:hyperparemter_estimation} We leverage the discrete action space \eqref{eq:discrete_action_space} to determine the hyperparameters agent frequency $f_{ag}$ and maximum pitch angle $\theta_{max}$ in an interpretable way. The purpose is to ensure sufficient maneuverability of the UAV to enable it to follow the platform. For sufficient maneuverability, the UAV needs to possess two core abilities. i) produce a maximum acceleration bigger than the one the platform is capable of and ii) change direction of acceleration quicker than the moving platform. Against the background of the assumptions explained in Sec.~\ref{sec:basis_learning_task_preliminaries} complemented with thrust compensation, we consider the first aspect by the maximum pitch angle $\theta_{max}$ \begin{small} \begin{equation} \theta_{max} = \tan^{-1}\left( \frac{k_a a_{mp,max}}{g}\right).\label{eq:theta_max} \end{equation} \end{small} \vspace{-10pt} $k_{a}$ denotes the multiple of the platform's maximum acceleration of which the UAV needs to be capable. For the second aspect, we leverage the platform's known frequency of the rectilinear periodic movement. According to \eqref{eq:platform_acceleration_rpm}, the moving platform requires the duration of one period to entirely traverse the available range of acceleration. Leveraging equation \eqref{eq:normalized_pitch_set}, we can calculate the time required by the copter to do the same as $4n_{\theta}\Delta t$, where $\Delta t = 1/f_{ag}$. Next, we introduce a factor $k_{man}$ which specifies how many times faster the UAV should be able to roam through the entire range of acceleration than the moving platform. 
Against this background, we obtain the agent frequency as \begin{small} \begin{align} f_{ag} = 4 n_{\theta} k_{man} \frac{\omega_{mp}}{2\pi}= 2n_{\theta} k_{man} \frac{\omega_{mp}}{\pi}. \label{eq:f_ag} \end{align} \end{small} Both hyperparameters $\theta_{max}$ and $f_{ag}$ are ultimately based on the maximum acceleration of the moving platform $a_{mp,max}$, as is the discretization of the state space. Therefore, we argue that these hyperparameters form a matching set of values that is suitable to prevent excessive jittering in the agent's actions. \section{Implementation}\label{sec:Implementation} \subsection{General} We set up the experiments to showcase the following. \begin{itemize}[leftmargin=*] \item Empirically show that our method is able to outperform the approach presented in \cite{Rodriguez-Ramos2019} with regard to the rate of successful landings while requiring a shorter training time. \item Empirically show that our method is able to perform successful landings for more complex platform trajectories such as an 8-shape. \item Demonstrate our method on real hardware. \end{itemize} \subsection{Simulation Environment} The environment is built within the physics simulator Gazebo 11, in which the moving platform and the UAV are realized. We use the RotorS simulator \cite{Furrer2016} to simulate the latter. Furthermore, all required tasks such as computing the observations of the environment, the state space discretization as well as the Double Q-learning algorithm are implemented as nodes in the ROS Noetic framework using Python 3.\\ The setup and data flow are illustrated by Fig.~\ref{fig:sim_framework}. The associated parameters are given in Tab.~\ref{tab:environment_params}. We obtain $\mathbf{a}_c$ by taking the first order derivative of $\mathbf{v}_c$ and applying a first order Butterworth filter to it. The cut-off frequency is set to $0.3 \si{hz}$.
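As a cross-check of the hyperparameter determination in Sec.~\ref{sec:hyperparemter_estimation}, the sketch below evaluates \eqref{eq:a_mpmax}, \eqref{eq:theta_max} and \eqref{eq:f_ag}; the default values $n_{\theta}=3$, $k_a=3$, $k_{man}=15$ are taken from Tab.~\ref{tab:common_training_params}, and the function name is ours:

```python
import math

def agent_hyperparameters(v_mp, r_mp, n_theta=3, k_a=3.0, k_man=15.0, g=9.81):
    """Maximum pitch angle and agent frequency derived from the platform's
    maximum acceleration and angular frequency."""
    a_mp_max = v_mp ** 2 / r_mp                      # Eq. (a_mpmax)
    theta_max = math.atan(k_a * a_mp_max / g)        # Eq. (theta_max)
    omega_mp = v_mp / r_mp
    f_ag = 2.0 * n_theta * k_man * omega_mp / math.pi  # Eq. (f_ag)
    return theta_max, f_ag, a_mp_max
```

With these defaults, the computed agent frequencies match those listed for the training cases in Tab.~\ref{tab:training_differences}, e.g. $f_{ag}\approx 11.46\,\si{hz}$ for $v_{mp}=0.8\,\si{m/s}$, $r_{mp}=2\,\si{m}$.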
\begin{figure}[thpb] \centering \includegraphics[scale=0.7]{./pics/framework.pdf} \caption{Overview of the framework's structure. Red are components relying on Gazebo. Green are components designed for converting and extracting / merging data. Yellow are components dealing with the control of the UAV. } \label{fig:sim_framework} \end{figure} \begin{table}[] \centering \fontsize{7}{10}\selectfont \begin{tabular}{@{}ccccccccc@{}} \toprule Group & \multicolumn{2}{c}{Gazebo} & \thead{Rel. St.} & \thead{Observ.} & \thead{1D RL } & \multicolumn{2}{c}{\thead{PID }} \\ \midrule Name & \thead{Max. step \\size\\ $[\si{s}]$} & \thead{Real time \\ factor\\$[-]$} & \thead{{ }\\Freq. \\$[\si{hz}]$} & \thead{{ }\\Freq.\\ $[\si{hz}]$ } &\thead{Pitch / Roll\\ Freq. \\$[\si{hz}]$} & \thead{Yaw\\Freq. $[\si{hz}]$ \\ $k_p,k_i,k_d$} & \thead{$v_z$\\Freq. $[\si{hz}]$\\ $k_p,k_i,k_d$} \\ \midrule Value &$ 0.002$& $1$& $100$ & $100$ &\thead{see\\ \eqref{eq:f_ag}} &\thead{$\sim 110$\\ $8,1,0$} & \thead{$\sim 110$\\ $5,10,0$} \\ \bottomrule \end{tabular} \caption{Parameters of the training environment.} \label{tab:environment_params} \end{table} \subsection{Initialization} In \cite{Kooi2021}, it is indicated that training can be accelerated when the agent is initialized close to the goal state at the beginning of each episode. For this reason, we use the following normal distribution to determine the UAV's initial position within the fly zone $\mathcal{F}$ during the first curriculum step. \begin{small} \begin{equation} (x_{init},y_{init}) = \left(\text{clip}\left(N(\mu,\sigma_{\mathcal{F}}),-x_{max},x_{max},\right),0\right) \end{equation} \end{small} We set $\sigma_{\mathcal{F}} = p_{max}/3$, which will ensure that the UAV is initialized close to the center of the flyzone and thus in proximity to the moving platform more frequently. All subsequent curriculum steps as well as the testing of a fully trained agent are then conducted using a uniform distribution over the entire fly zone. 
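A sketch of this clipped-normal initialization (function name ours):

```python
import random

def initial_xy(x_max, p_max, mu=0.0):
    """Sample the UAV's initial (x, y) for the first curriculum step from a
    normal distribution with sigma_F = p_max / 3, clipped to the fly zone."""
    sigma_F = p_max / 3.0
    x = max(-x_max, min(random.gauss(mu, sigma_F), x_max))
    return (x, 0.0)
```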
\subsection{Training Hardware} Each training is run on a desktop computer with the following specifications: Ubuntu 20, AMD Ryzen Threadripper 3960X 24-core processor, 128 GB RAM, 6 TB SSD. This allows us to run up to four individual trainings in parallel. Note that, being a tabular method, the Double Q-learning algorithm does not require a powerful GPU for training, since it does not use a neural network. \subsection{Training} We design two training cases, \textit{simulation} and \textit{hardware}. Case \textit{simulation} is similar to the training conditions in the baseline method \cite{Rodriguez-Ramos2019} so that we can compare our approach in simulation. Case \textit{hardware} is created to match the spatial limitations of our real flying environment, so that we can evaluate our approach on real hardware. For both cases, we apply the rectilinear periodic movement (RPM) of the platform specified by \eqref{eq:platform_acceleration_rpm} during training. We consider different scenarios for training regarding the maximum velocity of the platform, which are denoted by ``RPM $v_{mp,train}$''. For each velocity $v_{mp,train}$, we train four agents using the same parameters. The purpose is to provide evidence of the reproducibility of the training results rather than only presenting a manually selected best result. We choose the same UAV (Hummingbird) of the RotorS package that is also used in the baseline method. Other notable differences between the baseline and the training cases \textit{simulation} and \textit{hardware} are summarized in Tab.~\ref{tab:training_differences}.
\begin{table} \fontsize{7}{10}\selectfont \centering \begin{tabular}{ccccccc} \toprule Method & \thead{Fly zone\\size $[m]$} & \thead{Platform\\size $[m]$} & \makecell{$r_{mp}$\\$[\si{m}]$} & \thead{$v_{mp}$\\$[\si{m/s}]$} & \thead{$a_{mp,max}$\\$[\si{m/s^2}]$} & \thead{$f_{ag}$\\$[hz]$}\\ \hline Baseline & $3\times 6$ & $1\times1\times\sim 0.3$ & $\sim 2.5$ & $1$ &$\sim 0.4$ & $20$ \\ \hline \multirow{3}{*}{Case \textit{simulation}} &\multirow{3}{*}{$9\times9$}&\multirow{3}{*}{$1\times1\times0.3$}& \multirow{3}{*}{2} & $0.8$ & $0.32$ & $11.46$ \\ & & & & $1.2$ & $0.72$ & $17.19$ \\ & & & & $1.6$ & $1.28$ & $22.92$ \\\hline Case \textit{hardware}&$2\times2$ &$0.5\times0.5\times0.3$& $0.5$& $0.4$& $0.32$ & $22.92$\\ \bottomrule \end{tabular} \caption{Differences between the training parameters of our method and the baseline \cite{Rodriguez-Ramos2019}. } \label{tab:training_differences} \end{table} Note that some of our training cases involve a higher maximum acceleration of the moving platform than the training case used for the baseline method and are therefore more challenging. For all training runs, we use an initial altitude of the UAV of $z_{init} = 4\si{m}$ and a vertical velocity of $v_z = -0.1\si{m/s}$, so that the UAV descends during the entire episode. The values used for these variables in the baseline \cite{Rodriguez-Ramos2019} cannot be inferred from the paper. For the first curriculum step, the exploration rate schedule is empirically set to $\varepsilon= 1$ (episodes $0$--$800$) before it is linearly reduced to $\varepsilon=0.01$ (episodes $800$--$2000$). For all later curriculum steps, it is $\varepsilon=0$. To choose these values, we exploit the discrete state-action space, in which the number of visits to each state-action pair can be tracked. The exploration rate schedule presented above is chosen such that most state-action pairs receive at least one visit.
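The exploration schedule described above can be written as a small helper (a sketch; the function name is ours, the episode thresholds are those stated in the text):

```python
def exploration_rate(episode, curriculum_step):
    """Piecewise-linear epsilon schedule for the first curriculum step.

    Episodes 0-800: full exploration (eps = 1). Episodes 800-2000:
    linear decay to 0.01.  All later curriculum steps exploit only
    previously acquired knowledge (eps = 0).
    """
    if curriculum_step > 0:
        return 0.0
    if episode < 800:
        return 1.0
    if episode >= 2000:
        return 0.01
    # linear interpolation between (800, 1.0) and (2000, 0.01)
    return 1.0 + (0.01 - 1.0) * (episode - 800) / (2000 - 800)
```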
Other training parameters are presented in Tab.~\ref{tab:common_training_params}. Training ends as soon as the agent reaches the goal state \eqref{eq:goal_state} associated with the latest step in the sequential curriculum in $96\%$ of the last $100$ episodes. This threshold was set empirically. For all training runs, we use noiseless data to compute the observations fed into the agent. However, we test selected agents for their robustness against noisy observations as part of the evaluation. \begin{table}[] \fontsize{7}{10}\selectfont \centering \begin{tabular}{@{}ccccccccc@{}} \toprule Group & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Double Q-Learning\end{tabular}} & \multicolumn{3}{c}{Reward function} & \multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}Discretization\end{tabular}} \\ \midrule Name & \begin{tabular}[c]{@{}c@{}}$\gamma$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$\alpha_{min}$ \\$\omega$\end{tabular} &\begin{tabular}[c]{@{}c@{}}$w_p$\\ $w_v$ \\ $w_{\theta}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$w_{dur}$\\ $w_{suc}$ \\ $w_{fail}$\end{tabular} &$t_{max}$& \begin{tabular}[c]{@{}c@{}}$\sigma_{a}$\\$\sigma $\end{tabular} & \begin{tabular}[c]{@{}c@{}}$n_{\theta}$\\$n_{cs,sim.}$\\$n_{cs,hardw.}$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$k_{a}$\\$k_{man}$\end{tabular} \\ \midrule Value & $0.99$ & \begin{tabular}[c]{@{}c@{}} $0.02949$ \\$0.51$\end{tabular} &\begin{tabular}[c]{@{}c@{}}$-100$\\ $-10$ \\ $-1.55$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$-6$\\$ 2.6$\\ $-2.6$\end{tabular} &$20\si{s}$ &\begin{tabular}[c]{@{}c@{}}$0.416$\\$0.8 $\end{tabular} & \begin{tabular}[c]{@{}c@{}}$3$\\$4$\\$3$\end{tabular} & \begin{tabular}[c]{@{}c@{}}$3$\\$15$\end{tabular} \\ \bottomrule \end{tabular} \caption{Other training parameters of our approach.} \label{tab:common_training_params} \end{table} \vspace{-10pt} \subsection{Initiation of Motion} \label{sec:state_lock} During training, the yaw controller ensures $\psi_{rel} = 0$.
However, during evaluation in simulation, the agent commanding the lateral motion of the UAV was occasionally unable to leave its initial state (see Fig.~\ref{fig:state_lock}). We hypothesize that the reason for this behavior is that the agent is trained on a platform that \textit{moves} while considering \textit{relative} motion in its observations. As a consequence, the agent learns a policy that, in certain states, relies exclusively on the platform movement to achieve a desired state change. However, when evaluating with $\psi_{rel} = 0$, the agent controlling the lateral motion observes no relative movement if the UAV is hovering and the platform is following a rectilinear trajectory in the longitudinal direction. The issue can be addressed by setting an initial $\psi_{rel} \neq 0$. In this case, the platform's movement produces a component in the values ${p}_{c,y},{v}_{c,y}$ and ${a}_{c,y}$ that are used as observations for the lateral motion's agent. A change in the states is therefore much more likely, allowing the agent to enter states in which the policy selects an action other than ``do nothing''. For this reason, we apply $\psi_{rel} = \pi/4$ for all experiments. \begin{figure} \centering \includegraphics[scale = 0.25]{./pics/relative_trajectories_combined.pdf} \caption{Illustration of the problem of motion initiation. After initialization, commanding a yaw angle of $\psi_{rel} = \pi/4$ allows the lateral agent to enter a state associated with an action other than ``do nothing'' due to the state change induced by the longitudinal platform movement.
The platform movement is now reflected in $p_y,v_y,a_y$, which are the observations fed into the lateral agent.} \label{fig:state_lock} \end{figure} \section{Results}\label{sec:Experiments} \subsubsection{Evaluation in Simulation without Noise} In this scenario, all agents trained for the cases \textit{simulation} and \textit{hardware} are evaluated in simulation using noiseless observations of the environment, just as in training. Besides a static platform, we use two types of platform trajectories. The first is the rectilinear periodic movement (RPM) specified by \eqref{eq:platform_acceleration_rpm} and the second is an eight-shaped trajectory defined by \begin{small} \begin{equation} _e\mathbf{r}_{mp} =r_{mp}\left[\sin(\omega_{mp} t),\sin(0.5\omega_{mp} t),0\right]^T, ~ \omega_{mp} = v_{mp}/r_{mp}. \label{eq:platform_acceleration} \end{equation} \end{small} \vspace{-10pt} For all landing attempts, we specify an initial altitude of $z_{init} = 2.5\si{m}$. A landing attempt ends once the UAV touches the surface of the moving platform or reaches an altitude lower than the platform surface, i.e., it misses the platform. If the center of the UAV is located above the moving platform at the moment of touchdown, the landing trial is considered successful. The value of $z_{init}$ leads to a landing-attempt duration that roughly corresponds to the time $t_{max}$ used as the maximum episode length during training, see reward function \eqref{eq:reward}. The training durations of the agents are summarized in Tab.~\ref{tab:duration_results} for case \textit{simulation} and case \textit{hardware}. \\ The training durations of the different curriculum steps suggest that the majority of the required knowledge is learned during the first curriculum step. The later curriculum steps required significantly fewer episodes to reach the end condition. This is because the exploration rate is $\varepsilon=0$.
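The eight-shaped platform trajectory of \eqref{eq:platform_acceleration} can be sampled as follows (a sketch; function and variable names are ours):

```python
import numpy as np

def eight_trajectory(t, v_mp, r_mp):
    """Position of the platform on the eight-shaped (Lissajous) path.

    r(t) = r_mp * [sin(w t), sin(0.5 w t), 0], with w = v_mp / r_mp.
    """
    w = v_mp / r_mp
    return r_mp * np.array([np.sin(w * t), np.sin(0.5 * w * t), 0.0])
```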
Thus, the agent is only exploiting previously acquired knowledge, which is also supported by the accumulated sum of rewards (Fig.~\ref{fig:rewards}). This also implies that the decomposition of the landing procedure into several similar subtasks is a suitable approach to solving the problem. The achieved success rates in simulation with noiseless observations of the environment are presented in Fig.~\ref{fig:success_rates} for training case \textit{simulation} and in Tab.~\ref{tab:success_hardware_case} for training case \textit{hardware}. \aboverulesep = 0.0mm \belowrulesep = 0.0mm \begin{table}[] \fontsize{7}{10}\selectfont \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \thead{Case $\rightarrow$} & \multicolumn{3}{c|}{Case \textit{simulation}} & Case \textit{hardware} \\ \cmidrule{2-5} \thead{Curricul.\\ step$\downarrow$} & RPM 0.8 & RPM 1.2 & RPM 1.6 & RPM 0.4 \\ \hline 0 & $112,109,112,119$ & $102,87,87,91$ & $92,83,88,90$ & $ 89,92,107,110$\\ \hline 1 & $7,6,7,6$ & $6,5,5,5$ & $5,4,6,6$ & $5,4,5,6$ \\ \hline 2 & $7,6,7,6$ & $7,5,6,5$ & $6,5,9,6 $ & $5,6,5,7 $ \\ \hline 3 & $9,7,8,8$ & $8,6,7,7$ & $7,6,8,8$ & $6,7,6,8$ \\ \hline 4 & $8,8,10,9$ & $10,8,8,9$ & $8,8,8,9$ &N.A. \\ \hline \multirow{2}{*}{Total} &$143,136$ & $133,111$ & $118,106$ &$105,109$ \\ &$144,148$ & $113,117$ & $119,119$ &$124,133$ \\ \hline \end{tabular} \caption{Training time in minutes required for the different curriculum steps by four agents trained with identical parameters for the different training cases. The values are rounded to the nearest integer.
The training scenarios are identified by the platform's rectilinear periodic movement, denoted RPM $v_{mp,train}$.} \label{tab:duration_results} \end{table} \begin{figure}[thpb] \centering \includegraphics[width=8.5cm]{./pics/success_plot_v5.pdf} \caption{Success rates of four agents trained with the same parameters on a platform's rectilinear periodic movement with $v_{mp,train} = 0.8\si{m/s}$ (RPM 0.8 - red), $v_{mp,train} = 1.2\si{m/s}$ (RPM 1.2 - blue) and $v_{mp,train} = 1.6\si{m/s}$ (RPM 1.6 - green). The agents are evaluated on different types of platform movement, indicated by the values on the abscissa. Error bars illustrating the mean success rate and standard deviation are depicted in grey. The success rates have been determined over 150 landing trials for each evaluation scenario. } \label{fig:success_rates} \end{figure} The success rates were determined over a larger set of RPMs than in the baseline method and indicate a good performance of the approach. Success rates are higher when the platform velocity during evaluation is lower than the one applied during training. This is to be expected, since the equations presented in Sec.~\ref{sec:hyperparemter_estimation} for hyperparameter determination ensure sufficient maneuverability up to the velocity of the rectilinear periodic movement used in training. For the training case RPM 0.8, the fourth agent shows comparatively poor performance. For the static platform, this agent occasionally suffers from the problem of motion initiation described in Sec.~\ref{sec:state_lock}, despite setting $\psi_{rel} = \pi/4\,\si{rad}$. For the evaluation case in which the platform moves with RPM 0.4, the reason is an oscillating movement, which causes the agent to occasionally overshoot the platform. A similar problem arises for agent four of training case \textit{hardware}, which occasionally expects a maneuver of the platform during the landing procedure.
As a consequence, this agent achieves a success rate of only $82\%$ for a static platform. However, for higher platform velocities, the success rate improves significantly. \subsubsection{Selection of Agents for further Evaluation} We select the first agent of training case \textit{simulation}, which performs best across all evaluation scenarios and was trained on a rectilinear periodic movement with a platform velocity of $v_{mp}=1.6\si{m/s}$. It is denoted RPM 1.6/1. Its reward curve is depicted in Fig.~\ref{fig:rewards}. \begin{figure}[thpb] \centering \includegraphics[scale=0.25]{./pics/reward_rpm_1_6_1_rpm_0_4_3.pdf} \caption{Accumulated reward achieved by agent RPM 1.6/1 of case \textit{simulation} (top) and agent RPM 0.4/3 of case \textit{hardware} (bottom) during training. Red lines indicate the end of a curriculum step. } \label{fig:rewards} \end{figure} We compare its results with the baseline method \cite{Rodriguez-Ramos2019} in Tab.~\ref{tab:comparison}. The comparison shows that our approach outperforms the baseline method. For the RPM $0.4$ evaluation scenario with noiseless observations, we achieve a success rate of $99\%$, which is $+8\%$ better than the baseline. For the RPM $1.2$ evaluation scenario, our method is successful in $99\%$ of the landing trials, increasing the baseline's success rate by $+26\%$. Note that the maximum radius of the rectilinear periodic movement is $r_{mp} \approx 2.5\si{m}$ in the baseline and $r_{mp}=2\si{m}$ in our approach. However, the value used in our approach poses a more difficult challenge since, for the same maximum platform velocity, the acceleration acting on the platform is higher, see Tab.~\ref{tab:training_differences}. Furthermore, our method requires $\sim 80\%$ less time to train and $53\%$ fewer episodes. We select agent 3 of training case \textit{hardware} for further evaluation, denoted RPM 0.4/3. Its reward curve is also depicted in Fig.~\ref{fig:rewards}.
It achieves a success rate of $99\%$ for the evaluation scenario with a static platform, $100\%$ for a platform movement of RPM $0.2$, $99\%$ for RPM $0.4$, and $97\%$ for the eight-shaped trajectory of the platform. Training required $2343$ episodes and took $123\si{min}$. \aboverulesep = 0.0mm \belowrulesep = 0.0mm \begin{table}[thpb] \fontsize{7}{10}\selectfont \centering \begin{tabular}{|c|c|c|c|c|c|} \hline Sim. eval. $\rightarrow$ & Static $[\%]$ & RPM 0.2 $[\%]$ & RPM 0.4 $[\%]$ & 8-shape $[\%]$ \\ \hline \thead{Case \\ \textit{hardware}} & $100,100,99,82$ & $99,100,100,90$ & $98,99,99,97$ & $ 99,96,97,94$\\ \hline \end{tabular} \caption{Success rates in percent achieved in simulation with noiseless observations by four agents trained with the same parameters for case \textit{hardware}. They are evaluated in four different scenarios of platform movement, as indicated by the column titles. } \label{tab:success_hardware_case} \end{table} \aboverulesep = 0.0mm \belowrulesep = 0.0mm \begin{table}[] \fontsize{7}{10}\selectfont \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \thead{Category$\rightarrow$\\ Method$\downarrow$ } & \thead{Fly zone \\size} & \thead{Training\\ duration} & \thead{Success rate\\RPM 0.4} & \thead{Success rate\\RPM 1.2} \\ \hline \thead{Baseline \cite{Rodriguez-Ramos2019}} & $5\si{m}\times9\si{m}$ & \thead{$\sim600\si{min}$\\$4500\si{ep}.$} & $91\%$ & $73\%$\\ \hline \thead{Our method } & $9\si{m}\times9\si{m}$ & \thead{$118\si{min}$\\$2113\si{ep}.$} & $99\%$ & $99\%$\\ \hline \thead{Difference } & $-$ &\thead{$\sim -80\%$\\$-53\%$} & $+8\%$ & $ +26\%$\\ \hline \end{tabular} \caption{Comparison of our approach with the baseline method. The agent selected for the comparison is agent RPM 1.6/1. Our method significantly improves the success rate in the two reference evaluation scenarios while requiring significantly less time and fewer episodes to train. Furthermore, the fly zone used in our approach covers a larger area.
} \label{tab:comparison} \end{table} \subsubsection{Evaluation in Simulation with Noise} We evaluate the selected agents of cases \textit{simulation} and \textit{hardware} for robustness against noise. For this purpose, we define a set of values $\sigma_{noise}= \left\lbrace \sigma_{p_x},\sigma_{p_y}, \sigma_{p_z},\sigma_{v_x},\sigma_{v_y},\sigma_{v_z}\right\rbrace$ specifying a level of zero-mean Gaussian noise that is added to the noiseless observations in simulation. The noise level corresponds to the noise present in an EKF-based pose estimate of a multi-rotor UAV recorded during real flight experiments. \begin{small} \begin{align} \sigma_{noise} &= \left\lbrace 0.1\si{m},0.1\si{m},0.1\si{m},0.25\si{m/s},0.25\si{m/s},0.25\si{m/s}\right\rbrace \label{eq:noise_level} \end{align} \end{small} \vspace{-10pt} We evaluate the landing performance again for static, periodic, and eight-shaped trajectories of the landing platform in Tab.~\ref{tab:noise_evaluation_case_simulation} for training case \textit{simulation} and in Tab.~\ref{tab:noise_evaluation_case_hardware} for training case \textit{hardware}. For the selected agent of training case \textit{simulation}, adding the realistic noise $\sigma_{noise}$ leads to a slightly reduced performance. However, the achieved success rates are still higher than the agent's performance in the baseline without noise. For the evaluation scenario with RPM $0.4$, our success rate is $+4\%$ higher; with RPM $1.2$, it is $+20\%$ higher. For the selected agent of training case \textit{hardware}, the drop in performance is slightly more pronounced. The reason is that the fly zone and platform specified for this training case are significantly smaller than for case \textit{simulation}. As a consequence, the size of the discrete states is also significantly reduced, whereas the noise level stays the same.
Thus, noise affects the UAV more strongly, since the agent is more likely to take a suboptimal action based on an observation biased by noise. \aboverulesep = 0.0mm \belowrulesep = 0.0mm \begin{table}[] \fontsize{7}{10}\selectfont \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline Simulation eval. $\rightarrow$ & Static & RPM 0.4 & RPM 0.8 &RPM 1.2& RPM 1.6 &8-shape \\ \hline \thead{Case \textit{simulation} \\Agent RPM 1.6/1} & $97\%$ & $95\%$ & $98\%$ & $93\%$ & $85\%$ &$85\%$\\ \hline \end{tabular} \caption{Success rates in percent achieved in simulation with noisy observations by the best-performing agent of training case \textit{simulation}. It has been evaluated over 150 landing trials for six different scenarios of platform movement, as indicated by the column titles. } \label{tab:noise_evaluation_case_simulation} \end{table} \aboverulesep = 0.0mm \belowrulesep = 0.0mm \begin{table}[] \fontsize{7}{10}\selectfont \centering \begin{tabular}{|c|c|c|c|c|} \hline Simulation eval. $\rightarrow$ & Static & RPM 0.2 & RPM 0.4 &8-shape \\ \hline \thead{Case \textit{hardware} \\Agent RPM 0.4/3} & $95\%$ &$91\%$ &$85\%$ & $85\%$\\ \hline \end{tabular} \caption{Success rates in percent achieved in simulation with noisy observations by the selected agent of training case \textit{hardware}. It has been evaluated over 150 landing trials for four different scenarios of platform movement, as indicated by the column titles. } \label{tab:noise_evaluation_case_hardware} \end{table} \subsubsection{Evaluation in Real Flight Experiment} Unlike the baseline method, we do not evaluate our approach on real hardware in single flights only. Instead, we provide statistics on the agent's performance for different evaluation scenarios to illustrate the sim-to-real gap.
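The noise injection used for these robustness tests amounts to independently perturbing each observation component with the standard deviations of \eqref{eq:noise_level} (a sketch; function and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng()

# Standard deviations for [p_x, p_y, p_z, v_x, v_y, v_z], cf. eq. (noise_level)
SIGMA_NOISE = np.array([0.1, 0.1, 0.1, 0.25, 0.25, 0.25])

def add_observation_noise(obs):
    """Add zero-mean Gaussian noise to a noiseless [p, v] observation,
    emulating an EKF-based state estimate on a real UAV."""
    obs = np.asarray(obs, dtype=float)
    return obs + rng.normal(0.0, SIGMA_NOISE, size=obs.shape)
```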
For this purpose, the selected agent of case \textit{hardware} was deployed on a quadcopter, see Figs.~\ref{fig:title_image} and \ref{fig:real_flights_equipment}, that has a mass of $m_{uav} = 0.72 \si{kg}$ and a diameter of $d_{uav}=0.28 \si{m}$, deviating from the UAV used in simulation by $6\%$ in mass and $18\%$ in diameter. \begin{figure} \centering \includegraphics[width=8.5cm]{./pics/copter_platform.png} \caption{Multi-rotor vehicle and autonomous platform moving on rails that were used for the flight experiments in the real world. The square platform has an edge length of $0.5\si{m}$.} \label{fig:real_flights_equipment} \end{figure} The quadcopter is equipped with a Raspberry Pi 4B providing a ROS interface and a LibrePilot Revolution flight controller enabling the tracking of the attitude angles commanded by the agents via ROS. A motion capture system (Vicon) provides almost noiseless values of the position and velocity of the moving platform and the UAV. However, we do not use this information directly to compute the observations \eqref{eq:cont_obs_p_cx}-\eqref{eq:cont_obs_phi_rel}. The reason is that the motion capture system occasionally misdetects markers. This can result in abrupt jumps in the orientation estimate of the UAV. Since the flight controller would react immediately, this could lead to a dangerous condition in our restricted indoor environment. To avoid any dangerous flight condition, we obtain the position and velocity of the UAV from an EKF-based state estimation method running on the flight controller. For this purpose, we provide the flight controller with a fake GPS signal (using Vicon) at a frequency of $10\si{hz}$. It is then fused with other noisy sensor data (accelerometer, gyroscope, magnetometer, barometer). The EKF is robust to short periods of wrong orientation estimates. Our algorithm runs off-board, and the setpoint values for the attitude angles generated by the agents are sent to the flight controller.
Due to hardware limitations regarding the moving platform, we only evaluate the agent for a static platform and the rectilinear periodic movement. Generally, the Vicon system allows for a state estimate of the copter with a lower noise level than the one specified by \eqref{eq:noise_level}. However, the velocity of the platform is determined purely by means of the Vicon system. The rough surface of the ground caused vibrations of the platform that induced an unrealistically high level of noise in the Vicon system's readings of the platform velocity. For this reason, the platform velocity is filtered using a low-pass filter with a cut-off frequency of $10\si{hz}$. \\ Figure \ref{fig:landing_trajectories_real} shows the trajectory of the UAV and the moving platform for a landing trial in which the platform was executing a rectilinear periodic movement with a maximum velocity of $0.4\si{m/s}$. The depicted $x$ and $y$ components of the multi-rotor vehicle's trajectory are based on the state estimate calculated by the EKF. All other presented trajectory values are based on the readings of the Vicon system. Table~\ref{tab:real_flights_evaluation_with_ground_effect} contains the success rates achieved in the experiments with real hardware. The starting positions of the UAV were selected manually and as uniformly distributed as possible, at an altitude of about $2.5\si{m}$. \begin{figure} \centering \includegraphics[scale=0.25]{./pics/vicon_trajectory.pdf} \caption{Example trajectory of the multi-rotor vehicle and moving platform during the real flight experiment. The platform's position is determined using the Vicon system; the multi-rotor vehicle's position results from the state estimate of the flight controller. } \label{fig:landing_trajectories_real} \end{figure} \aboverulesep = 0.0mm \belowrulesep = 0.0mm \begin{table}[] \fontsize{7}{10}\selectfont \centering \begin{tabular}{|c|c|c|c|} \hline Real flights eval.
$\rightarrow$ & Static & RPM 0.2 & RPM 0.4 \\ \hline \thead{Case \textit{hardware} \\Agent RPM 0.4/3} & \thead{$96\%$\\$23$ trials} &\thead{$72\%$\\$25$ trials} &\thead{$82\%$\\$28$ trials} \\ \hline \end{tabular} \caption{Success rates in percent achieved in real flights with ground effect by the selected agent of training case \textit{hardware}. It has been evaluated over at least 20 landing trials for each of three different scenarios of platform movement, as indicated by the column titles. } \label{tab:real_flights_evaluation_with_ground_effect} \end{table} Whereas a success rate of $96\%$ could be reached for the static platform, there is a noticeable drop in performance for the evaluation scenarios in which the platform performs the rectilinear periodic movement. We argue that this can be attributed to the following main effects. \begin{enumerate} \item The deviations in size and mass of the UAV play a role. The different mass is especially important since it changes the dynamics of the multi-rotor vehicle. Since the platform is only slightly larger than the copter, the effect of this deviation could be decisive in situations in which the copter overshoots the platform shortly before touchdown. \item A reason for the drop in performance could also be a slightly different behaviour of the low-level controllers than in simulation. They have been tuned manually. A possible approach to reduce the sim-to-real gap here could be to vary the controller gains within a small range during training. This should help the agent generalize better. \item Trials in which occasional glitches in the state estimate occurred were not treated specially, although they could result in extra disturbances the RL controllers had to compensate for.
Furthermore, we also counted small violations of the boundary conditions as failures, such as leaving the fly zone by a marginal distance, even if the agent was able to complete the landing successfully afterwards. \item The ground effect can play an important role for multi-rotor vehicles \cite{Sanches-Cuevas2017}. Since it was not considered in the Gazebo-based simulation environment used for training, it contributes to the sim-to-real gap. \end{enumerate} Furthermore, no significant jittering in the agent's actions could be observed during the real flight experiments. This substantiates the approach of calculating values for the maximum pitch angle and agent frequency presented in Sec.~\ref{sec:hyperparemter_estimation}. \section{Conclusion and Future Work} In this work, we presented an RL-based method for autonomous multi-rotor landing. Our method splits the overall task into simpler 1-D tasks, then formulates each of them as a sequential curriculum in combination with a knowledge transfer between curriculum steps. Through rigorous experiments, we demonstrate significantly shorter training times ($\sim -80\%$) and higher success rates (up to $+26\%$) than the DRL-based actor-critic method presented in \cite{Rodriguez-Ramos2019}. We present statistics of the performance of our approach on real hardware and show interpretable ways to set hyperparameters. In future work, we plan to extend the approach to control the vertical movement and yaw angle of the UAV, and to address more complex landing problems, such as landing on an inclined or a flying platform. \section{Introduction} Classical control approaches suffer from the fact that accurate models of the plant and the environment are required for the controller design. These models are necessary to account for non-linearities of the plant behavior. However, they are subject to limitations, e.g., with regard to disturbance rejection \cite{Rodriguez-Ramos2019} and parametric uncertainties \cite{Mo2018}.
To overcome these problems, reinforcement learning (RL) has been studied for robot control in recent decades, including in the context of UAV control \cite{Mo2018}. In model-free, (action-)value-based RL methods, an approximation of a value function is learned exclusively through interaction with the environment. On this basis, a policy then selects an optimal action in a given state \cite{Sutton2015}. Classical RL approaches such as Q-Learning \cite{Sutton2015} store a tabular representation of that value function approximation. Deep Reinforcement Learning (DRL) methods \cite{Mnih2013}, on the other hand, leverage a deep neural network (DNN) to learn an approximate value function over a continuous state and action space, making them a powerful approach for many complex applications. Nevertheless, in both RL and DRL, the training itself is in general sensitive to the formulation of the learning task and in particular to the selection of hyperparameters. This can result in long training times, the necessity of a time-consuming hyperparameter search, and unstable convergence behavior. Furthermore, especially in DRL, the neural network acts like a black box, i.e., it is difficult to trace back which training experience caused certain aspects of the learned behaviour. \begin{figure}[!t] \centering \includegraphics[width=8.5cm, trim={0 0 0 0cm},clip]{./pics/title_image.png} \caption{Our multi-rotor vehicle landing on a platform moving on rails.} \label{fig:title_image} \end{figure} The purpose of this work is to address the aforementioned issues for the task of landing a multi-rotor UAV on a moving platform. Our RL-based method aims at i) achieving a high rate of successful landing attempts, ii) requiring a short training time, and iii) providing interpretable ways to compute the hyperparameters necessary for training.
To achieve these aims, we leverage the tabular off-policy algorithm Double Q-Learning \cite{Hasselt2010}, due to its few required hyperparameters, namely the discount factor $\gamma$ and the learning rate $\alpha$. Using a tabular method does not require defining a DNN architecture or finding values for additional hyperparameters such as minibatch and buffer size or the number of gradient steps. Furthermore, unlike DNN-based deep learning algorithms, it does not require a powerful GPU for training, making it suitable for less powerful computers and reducing power consumption. However, tabular methods only provide discrete state and action spaces and therefore suffer from the ``curse of dimensionality'' \cite{Lampton2009}. Furthermore, the training performance and control performance are influenced by the sampling rate. When the sampling and discretization do not match, the performance of the agent can be low, e.g., due to jittering, where the agent rapidly alternates between discrete states. To solve these problems, our method's \textbf{novel aspects} are outlined below. The first two allow us to address the ``curse of dimensionality'' by reducing the complexity of the learning task. The third addresses the problem of finding a matching discretization and sampling rate. \begin{enumerate}[leftmargin=*] \item Under the assumption of a symmetric UAV and decoupled dynamics, common in the literature \cite{Wang2016}, our method controls the vehicle's motion in the longitudinal and lateral directions with two instances of the \emph{same} RL agent. Thus, the learning task is simplified to 1D movement only. The vertical and yaw movements are controlled by PID controllers. The concept of using independent controllers for different directions of motion is often used, e.g., when PID controllers are applied \cite{Wenzel2011}, \cite{Araar2017}.
\item We introduce a novel state-space discretization approach, motivated by the insight that precise knowledge of the relative position and relative velocity between the UAV and the moving platform is required only when touchdown is imminent. To this end, we leverage a multiresolution technique \cite{Lampton2009} and augment it with information about the state space topology, derived from a simple kinematics model of the moving platform. This allows us to restructure the learning task of the 1D movement as a sequence of even simpler problems by means of a sequential curriculum, in which learned action values are transferred to the subsequent curriculum step. Furthermore, the discrete state space allows us to accurately track how often a state has been visited during training. \item We leverage the discrete action space to ensure sufficient maneuverability of the UAV to follow the platform. To this end, we derive equations that compute the values of hyperparameters, such as the agent frequency and the maximum value of the roll/pitch angle of the UAV. The intention of these equations is twofold. First, they link the derived values to the maneuverability of the UAV in an interpretable way. Second, they ensure that the discretization of the state space matches the agent frequency. The aim is to reduce unwanted side effects resulting from the application of a discrete state space, such as jittering. \end{enumerate} Section~\ref{sec:related_work} presents related work, followed by Sec.~\ref{sec:Methodology} describing the proposed approach in detail. Section~\ref{sec:Implementation} presents the implementation and experimental setup. The results are discussed in Sec.~\ref{sec:Experiments}, with comments on future work. \section{Related Work} \label{sec:related_work} \subsection{Classical Control Approaches} So far, the problem of landing a multi-rotor aerial vehicle has been tackled for different levels of complexity regarding platform movement. 
One-dimensional platform movement is considered in \cite{Hu2015} in the context of maritime applications, where a platform oscillates vertically. Two-dimensional platform movement is treated in \cite{Wenzel2011}, \cite{Ling2014}, \cite{Gautam2015}, \cite{Vlantis2015}, \cite{Araar2017}, \cite{Borowczyk2017} and \cite{Falanga2017}. Three-dimensional translational movement of the landing platform is covered for docking scenarios involving multi-rotor vehicles in \cite{Zhong2016} and \cite{Miyazaki2018}. Various control techniques have been applied to enable a multi-rotor UAV to land in one of these scenarios. In \cite{Hu2015}, an adaptive robust controller steers the vehicle during a descent maneuver onto a vertically oscillating platform, accounting for the ground effect while tracking a reference trajectory. The authors of \cite{Gautam2015} apply guidance laws based on missile control principles. \cite{Vlantis2015} uses a model predictive controller to land on an inclined moving platform. \cite{Falanga2017} relies on a non-linear control approach involving LQR controllers. However, the most commonly used controller type is the PID controller. Reference \cite{Wenzel2011} uses four independent PID controllers to follow a platform that is identified with visual tracking methods. In \cite{Ling2014}, the landing maneuver onto a maritime vessel is structured into different phases: rendezvous with the vessel, acquisition of a visual fiducial marker, and descent onto the ship. During all phases, PID controllers are used to control the UAV. Perception and relative pose estimation methods are the focus of \cite{Araar2017}, where again four PID controllers provide the UAV with the required autonomous motion capability. A PID controller is also applied in \cite{Borowczyk2017} to handle the final touchdown on a ground vehicle moving at up to \SI{50}{km/h}.
Landing on a platform moving in three dimensions can also be achieved with PID controllers \cite{Zhong2016}, \cite{Miyazaki2018}. Although automatic methods for tuning the gains of a PID controller exist, tuning is often a manual, time-consuming procedure. Learning-based control methods, in contrast, can obtain a suitable control policy purely from data through interaction with the environment, making them an attractive approach for controlling a UAV landing on a moving platform. Furthermore, methods such as Q-Learning approximate an optimal action-value function and thus lead to a (near-)optimal action selection policy \cite{Sutton2015}. \subsection{Learning-based Control Approaches} The landing problem has been approached with (Deep) Reinforcement Learning for static platforms \cite{Kooi2021}, \cite{Shi2019}, \cite{Polvara2018}, \cite{Polvara2019} and for moving platforms \cite{Rodriguez-Ramos2019}, \cite{Rodriguez-Ramos2018}, \cite{Lee2018}. The authors of \cite{Kooi2021} accelerate training by means of a curriculum, where a policy is learned using Proximal Policy Optimization (PPO) to land on an inclined, static platform. Their curriculum is hand-tailored and involves several hyperparameters, whereas in this work we present a structured approach for deriving a curriculum for different scenarios. In \cite{Polvara2018} and \cite{Polvara2019}, the authors assign different control tasks to different agents. A DQN agent aligns a quadcopter with a marker on the ground, whereas a second agent commands a descent maneuver before a closed-loop controller handles the touchdown. Our approach also leverages separate RL agents, but for controlling the longitudinal and lateral motion. The authors of \cite{Shi2019} present a deep-learning-based robust non-linear controller to increase the precision of landing and of trajectory tracking close to the ground. A nominal dynamics model is combined with a DNN that learns the ground effect on the aerodynamics and UAV dynamics.
\cite{Lee2018} presents an actor-critic approach for landing that uses continuous state and action spaces to control the roll and pitch angles of the drone, but provides no statistics on its performance. In \cite{Rodriguez-Ramos2019}, a DRL framework is introduced for training in simulation and evaluation in the real world. The Deep Deterministic Policy Gradient (DDPG) algorithm, an actor-critic approach, is used to command the movement of a multi-rotor UAV in the longitudinal and lateral directions with continuous states and actions. Detailed data about the agent's landing success rate in simulation are provided. We use this work as a baseline method and show that we outperform it, additionally providing statistics about our agent's performance in the real world. However, the baseline method does not provide any systematic and explainable way of deriving the hyperparameters used for the learning problem. For our method, we present equations that link hyperparameter values to problem properties, such as the state space discretization and the maneuverability of the UAV, in an intuitively understandable way.
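For reference, the tabular Double Q-Learning update \cite{Hasselt2010} at the core of our method can be sketched in a few lines. This is an illustrative implementation only; the hashable state encoding and the action set are hypothetical placeholders, not our flight code or state space:

```python
import random
from collections import defaultdict

def make_double_q_agent(actions, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Minimal tabular Double Q-Learning (van Hasselt, 2010).

    `actions` is a discrete action set; states are assumed to be
    hashable discretized tuples (a hypothetical encoding).
    """
    qa = defaultdict(float)  # table Q^A(s, a)
    qb = defaultdict(float)  # table Q^B(s, a)

    def policy(state):
        # epsilon-greedy on the sum of both tables
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: qa[(state, a)] + qb[(state, a)])

    def update(state, action, reward, next_state, done):
        # Randomly pick the table to update; the greedy action is
        # selected with one table and evaluated with the other,
        # which reduces the maximization bias of plain Q-Learning.
        q_sel, q_eval = (qa, qb) if random.random() < 0.5 else (qb, qa)
        if done:
            target = reward
        else:
            best = max(actions, key=lambda a: q_sel[(next_state, a)])
            target = reward + gamma * q_eval[(next_state, best)]
        q_sel[(state, action)] += alpha * (target - q_sel[(state, action)])

    return policy, update
```

In our setting, two instances of such an agent (one per horizontal axis) would be trained on the discretized 1D relative motion, with $\gamma$ and $\alpha$ the only algorithmic hyperparameters.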
\section{Introduction} The detection of galaxies at large redshifts that are forming stars for the first time, the so--called primeval galaxies, remains a very important astrophysical challenge. Bearing in mind that galaxy formation may not be assigned to any preferential cosmological epoch but instead is probably a continuous process, one might find left--over pristine gas pockets that are forming young galaxies at the present epoch. For this reason there may be star--forming galaxies in our local universe that look very much like distant primeval ones. It has long been hoped that Ly$\alpha$\ emission could be a signature of star formation recognizable up to very large redshifts; hence there have been numerous studies of the Ly$\alpha$\ emission from distant and local starbursts. Early IUE observations were performed on more than a dozen nearby starburst galaxies with the SWP low-resolution mode (Meier \& Terlevich 1981; Hartmann et al. 1984; Deharveng et al. 1986; Hartmann et al. 1988; Terlevich et al. 1993). Galaxies were selected with redshifts large enough that their Ly$\alpha$\ emission is separated from the geocoronal line. It was realized from the very beginning that the Ly$\alpha$/H$\beta$ ratio and the Ly$\alpha$\ equivalent width are much smaller, by at least an order of magnitude, than expected from recombination theory. These early works also revealed a possible anticorrelation between the Ly$\alpha$/H$\beta$ ratio and the H\,{\sc II} galaxy metallicity (actually the O/H abundance, as measured in the ionized gas). \begin{figure}[t] \begin{center}\mbox{\epsfxsize=9cm \epsfbox{h0749f2.eps}}\end{center} \caption[]{ Detail of the O\,{\sc I} and Si\,{\sc II} region for the galaxies showing Ly$\alpha$\ emission. The vertical bars indicate the wavelengths at which the O\,{\sc I} and Si\,{\sc II} absorption lines should be located, according to the redshift derived from optical emission lines. Some Galactic absorption lines have been marked.
Note that the metallic lines appear systematically blueshifted in these galaxies with respect to the systemic velocity. In some cases there is no significant absorption at all at zero velocity. } \label{fig:oila} \end{figure} \begin{figure}[t] \begin{center}\mbox{\epsfxsize=9cm \epsfbox{h0749f3.eps}}\end{center} \caption[]{ O\,{\sc I} and Si\,{\sc II} region for the galaxies showing damped Ly$\alpha$\ absorptions. Details as in Fig.~\ref{fig:oila}. Note that in these galaxies the metallic lines are essentially at the same redshift as the ionized gas, indicating the presence of static clouds of neutral gas, as discussed in the text. } \label{fig:oiab} \end{figure} These results, the lack of ``primeval galaxies'' at large redshift in blank sky searches for redshifted Ly$\alpha$\ emission and the few tentative detections of Ly$\alpha$\ emission from the damped Ly$\alpha$\ systems have been attributed to the effects of dust absorption that preferentially destroys Ly$\alpha$\ photons (Charlot \& Fall 1993, and references therein). The process behind this is that the transfer of Ly$\alpha$\ radiation is strongly affected by resonant scattering from neutral interstellar hydrogen atoms. By increasing enormously their optical path length, Ly$\alpha$\ photons become more vulnerable to dust absorption, even in small amounts (Neufeld 1991; Charlot \& Fall 1991; Chen \& Neufeld 1994). This process was believed (even in the early paper of Meier \& Terlevich 1981) to be able to account for the anticorrelation between the Ly$\alpha$\ emission line visibility and the dust abundance in these galaxies, as parameterized by the metallicity. Alternatively, such an anticorrelation has been attributed to a metallicity--dependent extinction law at the wavelength of Ly$\alpha$\ (Calzetti \& Kinney 1992, Valls-Gabaud 1993). However, this conclusion seems very unlikely in view of the anticorrelation between the Ly$\alpha$\ equivalent width and the gas-phase abundance of oxygen O/H.
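The recombination-theory expectation against which these observed ratios fall short can be made concrete with a short calculation; the following sketch adopts standard case-B line ratios for $T_e \sim 10^4$~K (textbook values, assumed here for illustration, not measurements from this sample):

```python
# Dust-free case-B expectation for the intrinsic Ly-alpha/H-beta ratio.
# The adopted ratios are standard values for T_e ~ 10^4 K and are
# assumptions for illustration, not results from this paper.

LYA_OVER_HA = 8.7   # case B: roughly 2/3 of recombinations yield Ly-alpha
HA_OVER_HB = 2.86   # case-B Balmer decrement

lya_over_hb = LYA_OVER_HA * HA_OVER_HB
# lya_over_hb is about 25; the Ly-alpha/H-beta ratios measured in the
# IUE starburst samples lie at least an order of magnitude below this.
```

The order-of-magnitude shortfall quoted above is measured against this dust-free benchmark.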
Charlot \& Fall (1993) have emphasized the advantages of using the Ly$\alpha$\ equivalent widths rather than the Ly$\alpha$/H$\beta$ ratios, because the former are independent of the extinction curve of the dust and can be measured with a single observational device. Their discussion of the anticorrelation between the Ly$\alpha$\ line equivalent widths and the O/H abundances in a sample of nearby star-forming galaxies has examined several factors that will affect the observed Ly$\alpha$\ emission from galaxies, among them contributions from supernova remnants and active galactic nuclei, the orientation of the galaxy, and absorption by dust. They finally suggest that the structure of the interstellar medium (porosity and multi-phase structure of the medium) is most probably the dominant factor. \begin{table*}[t] \caption{Adopted properties of observed H\,{\sc II} galaxies. Systemic velocities have been taken from the NASA Extragalactic Database, except for Haro~2 (Legrand et al. 1997). } \label{tab:galaxies} \begin{tabular}{lcccc}\hline Galaxies & m(V or B) & v(km\thinspace s$^{-1}$) & 12+log(O/H) & E(B-V) \\ \hline ESO 350-IG038 & 14.27V & 6156 & ?? & 0.16\\ SBS0335-052 & 16.65V & 4043 & 7.36 & 0.18 \\ IRAS 08339+6517& 14.16V & 5730 & ?? & 0.55 \\ IZw 18 & 15.6B & 740 & 7.17 & $<$0.10 \\ Haro 2 & 13.4V & 1465 & 8.40 & 0.12 \\ Mkn 36 & 15.5V & 646 & 7.86 & $<$0.10 \\ IIZw 70 & 14.83V & 1215 & 8.33 & 0.15 \\ ESO 400-G043 & 14.22B & 5900 & 8.0 & 0.20 \\ \hline \end{tabular} \end{table*} Our new HST observations indicate that velocity structure in the interstellar medium plays a key role in the transfer and escape of Ly$\alpha$\ photons. First, Ly$\alpha$\ was observed only in absorption in the starburst dwarf galaxy IZw~18 by Kunth et al. (1994). Since IZw~18 at $Z=1/50 ~Z_{\odot}$ is the most metal--poor starburst galaxy known at present, it had previously been considered a good candidate to show Ly$\alpha$\ in emission.
To add to the confusion, a positive Ly$\alpha$\ emission showing a complicated profile, but a clear P~Cygni component, has been detected in Haro~2, a rather dusty star-forming galaxy at $Z=1/3 ~Z_{\odot}$ (Lequeux et al. 1995). Giavalisco et al. (1996) have strengthened the suggestion that the transport of the Ly$\alpha$\ photons is primarily controlled by the ISM geometry rather than by the amount of dust, so that the Ly$\alpha$\ emission line would be detected only if there are holes (regions with low column density of neutral gas) along the line of sight, a factor which in principle is independent of the dust and metal content of the gas. As we show hereafter, other factors can certainly be more important in accounting for variations in the Ly$\alpha$\ emission strength, at least in some cases. The detection of a P~Cygni profile in the Ly$\alpha$\ emission line of Haro~2 led us to postulate that the line was visible because the absorbing neutral gas was velocity--shifted with respect to the ionized gas. This was confirmed by the analysis of the UV O\,{\sc I} and Si\,{\sc II} absorption lines, which were blue--shifted by 200~km\thinspace s$^{-1}$\ with respect to the optical emission lines, and also of the profile of the H$\alpha$ line (Legrand et al. 1997). These new facts and the capability of the HST to analyze in detail for the first time Ly$\alpha$\ line profiles in nearby galaxies led us to embark on a longer--term project using the GHRS, aimed at studying the processes controlling the detectability of the Ly$\alpha$\ emission line in star-forming galaxies. These studies have also aimed at measuring abundances in the neutral gas of gas--rich dwarf galaxies with spectra dominated by recent star formation episodes. Indeed, in objects such as these, the H\,{\sc I} clouds largely extend beyond the optical images, suggesting that a substantial fraction of this gas might still be chemically unevolved or even pristine (Roy \& Kunth 1995).
At a spectral resolution of 20 000 it became possible to disentangle nebular from stellar absorption lines and to give crude estimates of the metal abundances in the interstellar medium. The study of the IZw~18 data by Kunth et al. (1994) and the preliminary analysis of the rest of the sample (Kunth et al. 1997) have yielded extremely low values of the O\,{\sc I}/H\,{\sc I} ratios (log~N(O\,{\sc I})/N(H\,{\sc I}) $< -7$) in some galaxies of the sample. The complete analysis of the interstellar abundances will be presented in a forthcoming paper. In Sect.~2 we present new HST data on a sample of 8 H\,{\sc II} galaxies. Spectra are described in Sect.~3 and the results are discussed in Sect.~4. The conclusions are finally summarized in Sect.~5. \section{The HST observations} Eight galaxies have been observed so far, and their properties are listed in Table~\ref{tab:galaxies}. They were selected by the following procedure: \begin{itemize} \item The H\,{\sc II} galaxies IZw~18, Mkn~36, IIZw~70 and Haro~2 were first chosen (Cycle~1 and Cycle~4) because they span a wide range of metallicity. The original aim was to investigate a possible relationship between the composition of their H\,{\sc II} regions and that of the H\,{\sc I} gas using the O\,{\sc I} and Si\,{\sc II} lines. \item Results obtained with IZw~18 and Haro~2 prompted us to investigate the Ly$\alpha$\ emission profiles per se. Therefore three starburst galaxies were selected in the IUE-ULDA from the a priori knowledge that they were Ly$\alpha$\ emitters: IRAS~08339+6517, ESO~350-IG038 and ESO~400-G043. Their redshifts are necessarily larger than those of the above galaxies because their Ly$\alpha$\ emission on IUE spectra had to be separated from the geocoronal line. \item In addition the SBS~0335-052 spectra, observed by Thuan et al. (1997) with the same setup, were retrieved from the HST archives. \end{itemize} Observations were made using the same settings as in Kunth et al.
(1994) and Lequeux et al. (1995) using the Goddard High Resolution Spectrograph (GHRS) onboard the Hubble Space Telescope (HST). The journal of observations is given in Table~\ref{tab:journal}. The Large Science Aperture (LSA) (2 $.\llap"0 \times$ 2 $.\llap"0$) was chosen to ensure a sufficient flux level. The grating angle was selected according to the redshift of the objects, so as to cover the Ly$\alpha$\ and the O\,{\sc I} 1302.2~\AA\ regions respectively. The spectral resolution achieved with this setup at around 1300~\AA\ is close to 0.08~\AA. The Ly$\alpha$\ range was chosen to investigate both emission and absorption features so that the H\,{\sc I} column density could be estimated. The O\,{\sc I} 1302~\AA\ and Si\,{\sc II} 1304~\AA\ region was selected to crudely estimate the chemical composition of the gas and to measure with reasonable accuracy the mean velocity at which the absorbing material lies with respect to the star-forming region of a given galaxy. The spectrum and internal background were moved on the diode array by steps of one fourth of a diode (GHRS substep pattern 5). In most cases, the photocathode granularity was averaged out using the GHRS FP--SPLIT = 4 procedure, breaking each exposure into four parts between which the grating is moved by about 5 diodes. We have subsequently extracted those scans to align and combine them using the standard STSDAS software and form final spectra with four samples per diode. Wavelength calibrations were achieved using the platinum-neon lamp onboard the spacecraft, resulting in an expected accuracy of the wavelength scale of about 0.08~\AA. However, after correcting for the heliocentric orbital motion of the Earth, we noticed on the IZw 18 spectrum a systematic shift of about 0.24~\AA\ between tabulated vacuum wavelengths and measured ones. We have thus applied a further shift so as to match the geocoronal Ly$\alpha$\ line and the observed O\,{\sc I} and Si\,{\sc II} lines originating from Galactic clouds.
We were later informed that the reduction package had applied the wrong sign to the heliocentric correction. The centroid of the Galactic H\,{\sc I} profile in the direction of Haro 2 is at -27.5~km\thinspace s$^{-1}$\ LSR, or -23.3 km\thinspace s$^{-1}$\ heliocentric (Hartmann \& Burton 1995). Therefore we have checked the scale on the Galactic O\,{\sc I} 1302~\AA\ and Si\,{\sc II} 1304.4~\AA\ absorption lines, for which we measured heliocentric radial velocities of -27 and -23 km\thinspace s$^{-1}$, respectively. Similar checks with Galactic lines have been performed with the other galaxies in the sample. Thus the wavelength scale we used should be correct to a few km\thinspace s$^{-1}$. \begin{table*} \caption{Journal of observations. All spectra obtained through the GHRS Large Science Aperture, using the G160M grating, and a 5-fold substep pattern. } \vspace*{0.2truecm} \begin{tabular}{lcclccr} \hline Name & \multicolumn{2}{c} {Slit position (2000)} & Mode & Date & $\lambda$-range (\AA) & Exposure \\ & RA & Dec & & & & time (s) \\ \hline ESO~350-IG038 & 00 36 52.3 & -33 33 18.2 & FP-SPLIT = NO & 16/01/96 & 1222 - 1258 & 7018 \\ & & & & 16/01/96 & 1312 - 1347 & 4678 \\ SBS~0335-052 & 03 37 44.0 & -05 02 39.0 & FP-SPLIT = DS 4 & 03/01/95 & 1211 - 1247 & 7181 \\ & & & & 04/01/95 & 1299 - 1335 & 7181 \\ IRAS~08339+6517& 08 38 23.2 & 65 07 15.0 & FP-SPLIT = NO & 24/02/96 & 1221 - 1257 & 7997 \\ & & & & 25/02/96 & 1309 - 1344 & 4787 \\ IZw 18 & 09 34 02.0 & 55 14 27.4 & FP-SPLIT = DS 4 & 23/04/92 & 1195 - 1231 & 9216 \\ & & & & 22/04/92 & 1283 - 1319 & 10137 \\ Haro 2 & 10 32 31.8 & 54 24 03.5 & FP-SPLIT = 4 & 29/04/94 & 1205 - 1241 & 7181 \\ & & & & 30/04/94 & 1286 - 1321 & 5222 \\ Mkn 36 & 11 04 58.4 & 29 08 15.2 & FP-SPLIT = 4 & 19/04/95 & 1205 - 1241 & 5984 \\ & & & & 20/04/95 & 1281 - 1317 & 5984 \\ IIZw 70 & 14 50 56.5 & 35 34 17.8 & FP-SPLIT = 4 & 08/04/95 & 1204 - 1240 & 3590 \\ & & & & 08/04/95 & 1286 - 1321 & 7181 \\ ESO~400-G043 & 20 37 41.9 & -35 29 06.4 & FP-SPLIT =
NO & 16/04/96 & 1221 - 1258 & 7181 \\ & & & & 16/04/96 & 1312 - 1347 & 4787 \\ \hline \end{tabular} \label{tab:journal} \end{table*} \section{Description of individual spectra} The individual spectra of all the galaxies in our sample are shown in Fig.~\ref{fig:total}. \subsection{Galaxies with damped Ly$\alpha$\ absorption} \begin{itemize} \item IZw 18: The HST spectrum shows a damped Ly$\alpha$\ absorption with no sign of emission at the redshift of the galaxy (740 km\thinspace s$^{-1}$). This absorption is a blend of the intrinsic IZw~18 and the Galactic components. A multi--component fit yields an H\,{\sc I} column density in front of the northwest (NW) emission patch of log~N(H\,{\sc I}) = 21.06 cm$^{-2}$, with a Galactic component of log~N(H\,{\sc I}) = 20.3 cm$^{-2}$. The multicomponent fit requires a third contribution blueshifted with respect to the Galactic absorption. This component seems to be an observational artifact due to the poor signal-to-noise ratio in the region. Absorption lines due to O\,{\sc I} $\lambda$1302~\AA\ and Si\,{\sc II} $\lambda$1304~\AA\ were also detected at the redshift of the NW H\,{\sc II} region. Unfortunately the O\,{\sc I} line is saturated, casting some doubt on attempts to derive a reliable O/H abundance for the H\,{\sc I} region (see discussion). O\,{\sc I} and Si\,{\sc II} absorptions at a velocity of --160 km\thinspace s$^{-1}$\ due to a Galactic high velocity cloud (No. 117 of Hulsbosch \& Wakker 1988) were detected, indicating that the high velocity clouds are not composed of primordial material. \medskip \item Mkn 36: A broad Ly$\alpha$\ line is observed in absorption. The observed profile can be reproduced by assuming two components: one is due to neutral gas in Mkn 36 with H\,{\sc I} column density of log~N(H\,{\sc I}) = 20.07 cm$^{-2}$ and the second is a Galactic component with log~N(H\,{\sc I}) = 19.7.
The O\,{\sc I} region clearly shows O\,{\sc I} and Si\,{\sc II} absorptions that are in good agreement with the systemic velocity of the galaxy. An unidentified emission line or glitch is seen at 1287.92~\AA. We note that the standard photometric calibration of the Ly$\alpha$\ region spectrum had to be corrected by an offset of +2.5$\cdot 10^{-15}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$, apparently due to poor background subtraction. \medskip \item IIZw 70: From the damped Ly$\alpha$\ absorption line we derive log~N(H\,{\sc I}) = 20.8 cm$^{-2}$, together with a Galactic component with log~N(H\,{\sc I}) = 19.3 cm$^{-2}$. Both the Galactic and the intrinsic O\,{\sc I} and Si\,{\sc II} lines are well detected, with the latter at the systemic velocity of the galaxy. We had to offset the Ly$\alpha$\ region spectrum as well by -5.0$\cdot 10^{-15}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$ in order to have the core of the absorption profile at zero level. \medskip \item SBS~0335-052: The GHRS spectrum of this galaxy has been discussed by Thuan et al. (1997). A broad absorption is observed but the intensity of the central part of the Ly$\alpha$\ profile does not go to zero. Thuan et al. (1997a) attribute this broad residual emission to resonant scattering of the Lyman photons that would be re-directed into the line of sight. We disagree with this interpretation for the following reason: we noticed that the continuum level is weak (i.e. $<$ 2.0$\cdot 10^{-15}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$), hence the GHRS extraction procedure corrects for an instrumental background level that is at least 10 times higher than the signal. Since the signal from this object is the lowest of the sample, the result is consequently very sensitive to this difficult subtraction and we suspect the Ly$\alpha$\ region to be spoiled by the extraction procedure.
Similar effects have been discussed above for Mkn~36 (zero level below zero) and IIZw~70 (zero level above zero), on profiles which otherwise are very well reproduced by theoretical ones. In this respect it is interesting to note that the blue wing of the line, which should be dominated by the Galactic absorption profile, does not reach the zero level either, although the H\,{\sc I} column density along this line of sight is around log~N(H\,{\sc I}) = 20.68~cm$^{-2}$\ (Dickey \& Lockman 1990). This supports our interpretation of an instrumental artifact due to the low signal-to-noise ratio. A more recent lower resolution GHRS spectrum obtained by Thuan \& Izotov (1997) indeed shows no significant contamination at the core of the absorption line. In any case, the red wing of the profile can be extended using the available IUE spectrum. By combining both spectra, we have fitted the whole profile up to 1300~\AA, obtaining log~N(H\,{\sc I}) = 21.5 ($\pm 0.2$) cm$^{-2}$, as shown in Fig.~\ref{fig:sbs}. Thuan \& Izotov (1997) obtain log~N(H\,{\sc I}) = 21.8 cm$^{-2}$\ from their lower resolution spectra. Weak O\,{\sc I} at 1319.66~\AA\ (v = 4041 km\thinspace s$^{-1}$) and Si\,{\sc II} at 1321.94~\AA\ are also detected. \end{itemize} \begin{figure*} \begin{center}\mbox{ \epsfig{file=h0749f4.eps,rheight=9cm,height=11cm,angle=-90}}\end{center} \caption[]{ IUE spectrum of SBS~0335-052 superposed on the GHRS one. Although the GHRS Ly$\alpha$\ absorption profile is very noisy, its red wing extends clearly into the IUE range. The Ly$\alpha$\ absorption profile fitted to this red wing is also shown. } \label{fig:sbs} \end{figure*} \subsection{Galaxies with Ly$\alpha$\ emission} \begin{itemize} \item Haro 2: For a full discussion of this GHRS spectrum we refer the reader to Lequeux et al. (1995). The spectrum around Ly$\alpha$\ is complex.
We find a deep absorption line at the blue edge, below 1207~\AA, which is probably the redshifted N\,{\sc I} triplet (1199.6 - 1200.7 \AA) from the interstellar medium in front of the H\,{\sc II} region of Haro 2. A deep, broad line at 1211.7 \AA\ is the Si\,{\sc III} line at 1206.51 \AA\ from Haro 2, probably interstellar and redshifted by 1260 km s$^{-1}$ (heliocentric). Some Galactic Ly$\alpha$\ interstellar absorption is detected at zero velocity, with log~N(H\,{\sc I}) = 19.9 cm$^{-2}$. A broad absorption around 1221 \AA, produced by the gas in front of the star cluster of Haro 2, is attributed to Ly$\alpha$\ absorption. A strong asymmetric emission line around 1222.1 \AA\ is Ly$\alpha$\ redshifted by 1580 km s$^{-1}$. The existence of this line came as a surprise. The spectrum around 1305 \AA\ shows several absorption lines. Most of the fainter absorption lines are presumably produced in the stars of Haro~2. The four strong lines are the Galactic interstellar lines of O\,{\sc I} at 1302.2 \AA\ and Si\,{\sc II} at 1304.4 \AA, and the same lines from Haro~2 redshifted by about 1260 km s$^{-1}$. Hence the heliocentric velocities of the absorption lines are about 200 km s$^{-1}$ lower than the velocity of the H\,{\sc II} region as measured from the H$\alpha$ emission (Legrand et al. 1997). Lequeux et al. (1995) interpreted these profiles as being produced by a neutral (partially ionized) medium outflowing from the central star cluster at a projected velocity around 200 km\thinspace s$^{-1}$, as we will discuss later. \medskip \item IRAS 08339+6517: This galaxy has a redshift of 5730~km\thinspace s$^{-1}$. The Ly$\alpha$\ emission measured at 1339.5 \AA\ (flux of 5.6 $\cdot 10^{-14}$ erg s$^{-1}$ cm$^{-2}$ and EW of 34 \AA) is narrow and exhibits a clear P~Cygni profile. Remarkably, a clear secondary emission component is situated at 1237.76~\AA\ (at -200 km\thinspace s$^{-1}$\ from the main line component).
The intensity of this secondary peak is 10 times smaller than that of the main component (with 5$\cdot 10^{-14}$ erg s$^{-1}$ cm$^{-2}$). The absorption component extends over 1500 km\thinspace s$^{-1}$\ on the blue side of the line emission. The presence of a secondary emission peak reveals the chaotic structure of the interstellar medium in this case. Unfortunately no O\,{\sc I} and/or Si\,{\sc II} absorption is detected that could provide more detailed information about the kinematics of the absorbing gas. \medskip \item ESO~400-G043: This galaxy is at a redshift of 5900 km\thinspace s$^{-1}$. An asymmetric Ly$\alpha$\ emission is measured at 1339.5~\AA\ with a flux of 3.1$\cdot 10^{-14}$ erg s$^{-1}$ cm$^{-2}$ and EW of 20~\AA, showing a P~Cygni shape. Metallic lines are blueshifted by around -225 km\thinspace s$^{-1}$, with the O\,{\sc I} line at 1326.68 \AA\ (-252 km\thinspace s$^{-1}$) and the Si\,{\sc II} at 1329.18 \AA\ (-194 km\thinspace s$^{-1}$). The absorption profile is best fitted assuming an absorption with log~N(H\,{\sc I}) of 19.7 cm$^{-2}$, slightly shifted with respect to the metallic lines by -70~km\thinspace s$^{-1}$. There might be an additional secondary emission peak at around -300~km\thinspace s$^{-1}$, with a flux of 1.1$\cdot 10^{-15}$ erg s$^{-1}$ cm$^{-2}$, but the low signal-to-noise ratio of this spectrum does not allow us to draw any firm conclusion. \medskip \item ESO 350-IG038: This galaxy has a redshift of 6156 km\thinspace s$^{-1}$. At this redshift the O\,{\sc I} region falls close to the C\,{\sc II} 1334~\AA\ Galactic line, which can be used as a reference for the wavelength scale. We indeed find that the O\,{\sc I} 1302.2 \AA\ and Si\,{\sc II} 1304.4 \AA\ lines are at 1328.63 \AA\ and 1330.89~\AA, respectively, corresponding to a mean velocity of 6097 km s$^{-1}$ or -58 km s$^{-1}$ from the recession velocity of the ionized regions.
In fact, both interstellar lines are very broad, indicative of multiple components along the line of sight, spanning roughly 200 km s$^{-1}$ in velocity. A careful inspection of the underlying Ly$\alpha$\ absorption shows that it extends over more than 1500 km s$^{-1}$ to the blue side of the emission. The Ly$\alpha$\ emission peaks at 1241.79 \AA\ (its flux is 1.8$\cdot 10^{-14}$ erg s$^{-1}$ cm$^{-2}$ and EW is 37 \AA) but does not exhibit a clear P~Cygni profile. On the contrary, the blue wing of the line does not sharply drop at zero velocity and, moreover, the underlying absorption extends further to the red. This agrees with the finding that the metallic lines are shifted by -58 km s$^{-1}$ with respect to Ly$\alpha$\ but extend to 100 km s$^{-1}$ on both sides. The Ly$\alpha$\ absorption is best fitted with three components at -26, -197 and -330 km\thinspace s$^{-1}$\ and log~N(H\,{\sc I}) of 18.81, 19.93 and 20.26 cm$^{-2}$, respectively. The covering gas is therefore made up of numerous components. \end{itemize} \begin{table*} \caption{Measured values for the metallic absorption lines. For each object the first line gives the centroid wavelength of the different lines. The second line gives the systemic velocity, measured from the optical emission lines, and the corresponding mean velocity offset of the metallic lines $\delta$v. All wavelengths are given in Angstr\"oms and all velocities in km\thinspace s$^{-1}$.
} \label{tab:absorptions} \vspace*{0.2truecm} \begin{tabular}{lcclccc} \hline Name & O\,{\sc I}~1302 & Si\,{\sc II}~1304 & &Galactic &Galactic &Galactic \\ v(H\,{\sc II}) km s$^{-1}$ & v(O\,{\sc I}) & v(Si\,{\sc II}) &$\delta$v & O\,{\sc I} & Si\,{\sc II} & C\,{\sc II} \\ \hline ESO~350-IG038 & 1328.63 & 1330.89 & & -- & -- & 1334.46 \\ 6156$\pm$31 & 6096.0 & 6099.0 & -58 & & & \\ &&&&&&\\ SBS~0335-052 & 1319.7 & nd & & -- & 1304.29 & -- \\ 4043$\pm$10 & 4030.0 & -- & -13 & & & \\ &&&&&&\\ IRAS~08339+6517& nd & nd & & -- & -- & 1334.15 \\ 5730$\pm$80 & -- & -- & -- & & & \\ &&&&&&\\ IZw 18 & 1305.3 & 1307.45 & & 1301.88 & 1304.06 & -- \\ 740$\pm$5 & 721.4 & 708.3 & -25 & & & \\ &&&&&&\\ Haro 2 & 1307.76 & 1310.02 & & 1302.15 & 1304.34 & -- \\ 1465$\pm$10& 1288.2 & 1299.4 &-171 & & & \\ &&&&&&\\ Mkn 36 & 1305.3 & 1307.25 & & 1301.50 & 1304.20 & -- \\ 646$\pm$5 & 714.5 & 662.3 & +40 & & & \\ &&&&&&\\ IIZw 70 & 1307.3 & 1309.5 & & 1301.88 & 1304.17 & -- \\ 1215$\pm$23 & 1182.2 & 1184.4 & -32 & & & \\ &&&&&&\\ ESO~400-G043 & 1326.7 & 1329.2 & & -- & -- & 1334.97 \\ 5900$\pm$8 & 5647.0 & 5706.0 &-225 & & & \\ \hline \end{tabular} \end{table*} We have fitted all absorption profiles interactively by using the Xvoigt code (Xvoigt, Copyright 1994, David Mar). In the case of damped Ly$\alpha$\ lines blended with the Galactic line, special weight has been given to the red wing. On the other hand, when there is an emission feature on the red side, the blue side of the profile and its terminal velocity have allowed us to determine precisely the required H\,{\sc I} column density. In most cases the Ly$\alpha$\ fitting procedure is insensitive to the $b$ value due to the strong saturation of the profile, so that only upper limits have been estimated. The line measurements are listed in Table~\ref{tab:absorptions} (metallic lines) and Table~\ref{tab:lyman} (Ly$\alpha$\ line). We have also included the absorption lines attributed to Galactic clouds.
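The velocities and offsets $\delta$v in Table~\ref{tab:absorptions} follow from the measured centroids via the non-relativistic Doppler formula. A minimal sketch is given below; it assumes standard vacuum rest wavelengths, so the results agree with the tabulated values only to within a few km\thinspace s$^{-1}$, depending on the exact rest wavelengths adopted:

```python
# Convert measured absorption-line centroids to heliocentric
# velocities and offsets from the systemic velocity. Rest
# wavelengths are standard vacuum values (an assumption; the
# paper does not state which values were adopted).

C_KMS = 299792.458                            # speed of light, km/s
REST = {"OI": 1302.1685, "SiII": 1304.3702}   # Angstrom

def velocity(lambda_obs, lambda_rest):
    """Non-relativistic Doppler velocity in km/s."""
    return C_KMS * (lambda_obs - lambda_rest) / lambda_rest

# Example: ESO 350-IG038 (systemic velocity 6156 km/s)
v_oi = velocity(1328.63, REST["OI"])    # close to 6090 km/s
v_si = velocity(1330.89, REST["SiII"])  # close to 6095 km/s
dv = 0.5 * (v_oi + v_si) - 6156.0       # mean blueshift, close to -60 km/s
```

The same conversion, applied to each row of the table, reproduces the quoted blueshifts of the metallic lines with respect to the H\,{\sc II} region velocities.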
We have checked that the centroid of the Galactic 21~cm line is in good agreement with the velocity derived from these lines, supporting our velocity calibration. In Table~\ref{tab:lyman} we have included the measured log~N(H\,{\sc I}) as well as the flux of the emission component and its peak wavelength, if any. In Fig.~\ref{fig:lalfav} we show the Ly$\alpha$\ region at rest wavelength with the fitted H\,{\sc I} absorption profiles superposed on the observed spectra. \begin{figure}[t] \begin{center}\mbox{\epsfxsize=10cm \epsfbox{h0749f5.eps}}\end{center} \caption[]{ Ly$\alpha$\ line profiles plotted in velocity scale. The zero point corresponds to the systemic velocity as derived from the optical emission lines. The Galactic Ly$\alpha$\ absorption profile has also been included for Haro~2. The geocoronal Ly$\alpha$\ profile has been truncated in this case. The sharp edge on the blue side of the emission profile is evident in most cases. Note the secondary Ly$\alpha$\ emission peak in IRAS~08339+6517, and possibly also in ESO~400-G043. } \label{fig:lalfav} \end{figure} \section{Interpretation} Ly$\alpha$\ photons are produced by recombinations in H\,{\sc II} regions at about 2/3 of the ionization rate (the exact yield depends weakly on the density and temperature of the gas). They are subsequently absorbed and reemitted by H atoms, both in the H\,{\sc II} regions in which they were produced and in the surrounding H\,{\sc I} regions, if present. This process -- resonant scattering -- changes both the frequency and direction of the Ly$\alpha$\ photons but not their number, so that in the absence of dust all the Ly$\alpha$\ photons produced within a galaxy would eventually escape from it, in one direction or another. This scattering process increases enormously the mean free path of the trapped photons, so that if some dust is present, the probability of absorption around the Ly$\alpha$\ wavelength also increases by a significant factor with respect to the standard UV extinction.
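The damped Ly$\alpha$\ profiles fitted here are dominated by the Lorentzian damping wings, whose optical depth can be sketched as follows. This is an illustrative approximation with standard Ly$\alpha$\ atomic data, not the Xvoigt fitting code actually used:

```python
import math

# Damping-wing (Lorentzian) approximation to the optical depth of a
# damped Ly-alpha line, adequate away from the Doppler core for the
# log N(HI) ~ 20-21.5 columns fitted in this sample. Atomic data are
# standard Ly-alpha values (an illustrative sketch only).

C_CGS = 2.99792458e10      # speed of light, cm/s
LAM0 = 1215.67e-8          # Ly-alpha rest wavelength, cm
NU0 = C_CGS / LAM0         # line-centre frequency, Hz
F_LU = 0.4164              # oscillator strength
GAMMA = 6.265e8            # damping constant, 1/s
SIGMA_CL = 0.02654         # pi e^2 / (m_e c), cm^2 Hz

def tau_lya(lambda_ang, log_nhi):
    """Optical depth at wavelength lambda_ang [Angstrom] for a
    column density log N(HI) [cm^-2]."""
    nu = C_CGS / (lambda_ang * 1.0e-8)
    lorentz = (GAMMA / (4.0 * math.pi**2)) / (
        (nu - NU0)**2 + (GAMMA / (4.0 * math.pi))**2)
    return 10.0**log_nhi * SIGMA_CL * F_LU * lorentz

# For log N(HI) = 21.06 (IZw 18) the wings remain optically thick
# out to several Angstroms from line centre, which is why the damped
# absorption troughs in Fig. 5 are so broad.
```

The width of the region where $\tau > 1$ grows as the square root of the column density, which is what makes the fitted log~N(H\,{\sc I}) values in Table~\ref{tab:lyman} relatively insensitive to the Doppler parameter $b$.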
As a consequence, absorption is potentially important whenever the dust-to-gas ratio exceeds about one percent of the Galactic value (see, e.g., equation (3) of Charlot \& Fall (1993)). If the neutral gas surrounding the star-forming regions is not static with respect to the ionized gas, but is outflowing from these regions towards the observer, the resonant scattering would affect photons at wavelengths shorter than that of the Ly$\alpha$\ emission line, i.e., the photons resonantly trapped, and potentially destroyed by dust, would be mostly stellar continuum photons emitted at wavelengths below 1216~\AA. For a galaxy in which the source of ionizing radiation is a stellar population with a normal initial mass function, the angle-averaged equivalent width of the Ly$\alpha$\ emission line is about 100~\AA\ in the dust-free case (Charlot \& Fall 1993). This depends only weakly on the star formation rate in the galaxy provided it is reasonably continuous (and nonzero over the past few $\times10^7$~yr). This value can be somewhat higher if instead the star formation episode is ``instantaneous'', i.e. lasts less than a few $\times10^6$~yr, as seems to be the case in most compact star-forming galaxies. Nevertheless, since the Ly$\alpha$\ photons would diffuse (in the dust-free case) through the external surface of the neutral clouds (which are rather large in these compact star-forming galaxies, extending far beyond the optical regions), the resulting surface brightness would be very low. Therefore, even in a dust-free case, we would expect to detect an absorption line around the Ly$\alpha$\ wavelength if the aperture subtended by the slit is small compared to the spatial extent of the neutral cloud. This absorption will be centered at the wavelength corresponding to the mean velocity of this neutral gas, i.e., it will be blueshifted with respect to the Ly$\alpha$\ emission line if the neutral gas is moving towards the observer.
This scenario naturally explains most of the observational properties of our sample. Among the eight galaxies observed with the GHRS, four show no Ly$\alpha$\ emission at all. Instead, a strong damped Ly$\alpha$\ absorption at the systemic velocity (as derived from the optical emission lines) is observed, with O\,{\sc I} and Si\,{\sc II} appearing in absorption. We can infer from Table~\ref{tab:absorptions} and from the above description that these lines occur without any significant velocity shift with respect to the H\,{\sc II} regions. This indicates that the neutral gas in which they mostly originate is static with respect to the star-forming region. Therefore, since these galaxies have a low dust content (see Table~\ref{tab:galaxies}; IZw~18 shows weak signs of reddening and its dust-to-gas ratio is at least 50 times smaller than the Galactic value -- Kunth et al. 1994), this suggests that multiple resonant scattering from the neutral gas alone can observationally weaken Ly$\alpha$, and even produce an absorption feature. If this is the case, the H\,{\sc I} cloud surrounding these galaxies should be leaking Ly$\alpha$\ photons through its external surface. The Ly$\alpha$\ line would then become very hard to detect because of its low surface brightness. This extended emission could be detected with deep, large-area observations around these galaxies. Nevertheless, it might be that even the small amounts of dust present in these galaxies are enough to efficiently destroy a significant fraction of the Ly$\alpha$\ photons, especially if the clouds' extent is very large. In fact this may be the most inescapable explanation for the lack of extended Ly$\alpha$\ emission seen at large redshift in blank searches, since high--z galaxies are expected to have much smaller angular sizes. \begin{table*} \caption{ Measured values in the Ly$\alpha$\ spectral region.
The second line gives the estimated error bars for logN(H\,{\sc I}), except for ESO~350-IG038, for which the column densities of the three main absorbing components are indicated. } \label{tab:lyman} \vspace*{0.2truecm}
\begin{tabular}{lccccccc}
\hline
Name & $\lambda$(abs) & logN(H\,{\sc I}) & $b$ & $\lambda$(em.peak)& Flux & EW & logN(H\,{\sc I}) \\
& \AA & cm$^{-2}$ & km\thinspace s$^{-1}$\ & \AA & erg s$^{-1}$ cm$^{-2}$ & \AA & Galactic \\
\hline
ESO~350-IG038 & 1240.37 & 20.4 & $<$140 & 1241.9 & 1.8(-14) & 37 & -- \\
& & 18.8--19.9--20.3 &&&&&\\
SBS~0335-052 & 1232.4 & 21.5 & -- & no em. & -- & -- & -- \\
& & 21.4--21.7&&&&&\\
IRAS~08339+6517& 1237.7 & 19.9 & 90 & 1239.5 & 5.6(-14) & 34 & -- \\
& & 19.7--20.0& & & &&\\
IZw 18 & 1218.6 & 21.1 & -- & no em. & -- & -- &20.3 \\
& & 21.0--21.5&&&&&\\
Haro 2 & 1220.9 & 19.9 & -- & 1222.1 & 6.0(-14) & 13 &19.8 \\
& & 19.6--20.5 &&&&&\\
Mkn 36 & 1218.4 & 20.1 &-- & no em. & -- & -- &19.7 \\
& & 19.9--20.3&&&&&\\
IIZw 70 & 1220.46 & 20.8 & $<$200 & no em. & -- & -- & 19.3 \\
& & 20.6--21.0&&&&& \\
ESO~400-G043 & 1238.6 & 19.7 & 70 & 1240.0 & 3.1(-14) & 20 & -- \\
& & 19.6--19.8&&&&&\\
\hline
\end{tabular}
\end{table*}
On the other hand, the Ly$\alpha$\ emission in Haro~2 is accompanied by a broad absorption in the blue wing of that line, with the general appearance of a typical P~Cygni profile. The amount of neutral gas that produces the blue absorption trough at Ly$\alpha$\ is rather modest, of the order of N~(H\,{\sc I}) = 7.7$\cdot 10^{19}$ atoms cm$^{-2}$. The crucial point here is that the neutral gas responsible for the absorption in this galaxy is not at the velocity at which the Ly$\alpha$\ photons were emitted. Moreover, it seems that all the neutral gas along the line of sight is being pushed by an expanding envelope around the H\,{\sc II} region, outflowing at velocities close to 200 km\thinspace s$^{-1}$.
This interpretation is of course strengthened by the presence of other detected absorptions of O\,{\sc I}, Si\,{\sc II} and Si\,{\sc III} due to outflowing gas in front of the ionizing hot stars of the central H\,{\sc II} region. The heliocentric velocities of all these absorptions are lower by about 200 km s$^{-1}$ than that of the bulk of the galaxy, as measured from the 21-cm line and the optical emission lines. To confirm this hypothesis, Legrand et al. (1997) obtained high resolution spectroscopic observations of H$\alpha$ with the William Herschel telescope at La Palma, finding evidence of an expanding shell not participating in the rotation of the galaxy. Comparison of the Ly$\alpha$\ and H$\alpha$ profiles shows that the Ly$\alpha$\ line is significantly broader than H$\alpha$, also suggesting scattering of photons from the back side of the expanding neutral cloud. The data on the other three H\,{\sc II} galaxies with detected Ly$\alpha$\ emission confirm that Haro 2 is not an isolated case. All spectra show Ly$\alpha$\ emission with a broad absorption on their blue side, except for ESO 350-IG038, in which the emission is seen atop a broad structure requiring several components. When the metallic lines are detected, they are always blueshifted with respect to the ionized gas, further supporting the interpretation. In the case of ESO 350-IG038 the velocity structure seems to be more complicated, and several components at different velocities are identified in the metallic lines. The Ly$\alpha$\ absorption profile fitting requires one or several components (in addition to a Galactic component if the redshift is small). We find relatively little scatter in the derived column densities (see Table~\ref{tab:lyman}). Most clouds have a column density log~N(H\,{\sc I}) in the range 19.7 to 21.1. The static clouds tend to have larger column densities than the moving ones, which are also split into several components, as expected in a dynamical medium.
The main conclusion drawn from this set of data is that complex velocity structures determine the detectability of the Ly$\alpha$\ emission line, showing the strong energetic impact of the star-forming regions on their surrounding ISM. This velocity structure is indeed the driving factor for the Ly$\alpha$\ line visibility in the objects of our sample. We want to stress again that if the absorbing gas is not static with respect to the ionized region, the Ly$\alpha$\ emission line would be detected, almost independently of the dust and metal abundance of the gas. It would of course be affected by the same extinction as the UV continuum, but this extinction would not be enhanced by resonant scattering effects. If the neutral gas is static with respect to the H\,{\sc II} region, the covering factor of these neutral clouds would probably become the key factor determining the visibility of the line. Thuan \& Izotov (1997) have indeed detected strong Ly$\alpha$\ emission in T1214-277, with no evidence of blueshifted Ly$\alpha$\ absorption. In this case the detection of the line requires that a significant fraction of the area covered by the slit along the line of sight is essentially free from neutral gas, suggesting a patchy or filamentary structure of the neutral clouds. Such a geometry would be possible only in galaxies not surrounded by enormous H\,{\sc I} clouds, as seems to be the case in IZw~18 and similar objects. The effect of neutral gas flows helps to understand why luminous high-redshift objects have so far been found only with linewidths larger than 1000 km s$^{-1}$. High--redshift galaxies with very strong (EWs $>$ 500~\AA) extended Ly$\alpha$\ emission are characterized by strong velocity shears and turbulence (v $>$ 1000 km s$^{-1}$); this suggests AGN activity, in the sense that ISM energising mechanisms other than photoionization by young stars may be operating. However, Steidel et al.
(1996) have recently discovered a substantial population of star--forming galaxies at 3.0$<$z$<$3.5 that were selected not from their emission--line properties but from the presence of a very blue far-UV continuum and a break below 912~\AA\ in the rest frame. Similarly to our local starbursts, they find that 50\% of their objects show no Ly$\alpha$\ emission, whereas the rest do, but with weak EWs no larger than 20~\AA\ at rest. The Ly$\alpha$\ profiles of this population look indeed very similar to those of our local starburst galaxies (Franx et al. 1997; Pettini et al. 1997). We can conclude from the preceding discussion that the use of Ly$\alpha$\ as a star formation indicator underestimates the comoving star formation density at high redshift (e.g. Cowie \& Hu, 1998). \section{Conclusions} We have analyzed HST UV spectroscopic data of eight H\,{\sc II} galaxies, aiming to characterize the detectability of the Ly$\alpha$\ emission line in this kind of object. We obtain the following results: \begin{itemize} \item Ly$\alpha$\ emission has been observed in four of the eight H\,{\sc II} galaxies. In all four of these galaxies we have found clear evidence of a wide velocity field, through the presence of deep absorption troughs on the blue side of the Ly$\alpha$\ profiles. Moreover, absorption lines of metallic elements (O\,{\sc I}, Si\,{\sc II}) are also significantly blueshifted with respect to the H\,{\sc II} gas velocity. \item The determining factor for the detectability of the Ly$\alpha$\ emission line in these galaxies is therefore the velocity structure of the neutral gas along the line of sight, rather than the abundance of dust particles alone. If most of the neutral gas is outflowing from the ionized region, the Ly$\alpha$\ emission line would escape (partially) unaffected, independently of the metal abundance and dust content of this neutral gas.
This outflowing material, apparently powered by massive stellar winds and/or supernovae, may eventually leave the galaxy. We thus may be witnessing galactic winds resulting from intense star formation activity. In the case of Haro~2, Lequeux et al. (1995) suggested that 10$^{7}$ M$_{\odot}$ are expanding at 200 km\thinspace s$^{-1}$. \item Broad Ly$\alpha$\ absorption is detected in all H\,{\sc II} galaxies. The derived N(H\,{\sc I}) column densities lie unexpectedly within a relatively small range, with 6 of the 8 H\,{\sc II} galaxies having logarithmic column densities logN(H\,{\sc I}) between 19.9 and 21.1~cm$^{-2}$\ (the extreme values are 19.7 and 21.5). We stress again that the Ly$\alpha$\ photons emitted by the H\,{\sc II} region are absorbed or redistributed by the H\,{\sc I} gas only if its velocity is the same as that of the H\,{\sc II} region. Otherwise, the photons that are resonantly trapped were emitted in the stellar continuum close to Ly$\alpha$. \item The dependence of Ly$\alpha$\ emission detectability on the presence/absence of neutral static/outflowing gas along the line of sight (and within the field of view covered by the slit) helps to explain the apparently contradictory detection of Ly$\alpha$\ emission in metal- and dust-rich galaxies (like Haro~2), while it may be absent in metal- and dust-deficient objects, of which IZw~18 is the prototype. \item A partial covering factor of the H\,{\sc II} region by neutral gas, with low H\,{\sc I} column densities, would be required to allow the detection of the Ly$\alpha$\ emission line if the neutral gas is static with respect to the ionized regions. \item The generally weak or absent Ly$\alpha$\ emission from ``primeval'' and other galaxies at high redshifts can only be explained by velocity-structure effects combined with absorption of the Ly$\alpha$\ photons by dust grains.
The relatively small angular extent of these sources implies that if photons were leaking through the surface of the neutral gas clouds after multiple scattering without being destroyed, the equivalent widths of the lines measured from Earth should be significantly higher than observed. \item The present study invalidates attempts to measure the comoving star--formation rate density at high redshift on the basis of Ly$\alpha$\ emission surveys. \end{itemize} Future work should address the several effects discussed in this work in order to understand the factors that govern the presence/absence of Ly$\alpha$\ line emission and absorption: the strength and age of the burst, the metallicity of the gas (controlling the cooling, hence the wind evolution), the gravitational potential of the parent galaxy and its morphology, and the H\,{\sc I} and dust distributions will all play a role. The challenge is to determine their relative importance in affecting the Ly$\alpha$\ emission and absorption processes. Clearly the way forward is to realistically model the hydrodynamical evolution of the ISM in gas-rich dwarf galaxies under the influence of starbursts of different fractional masses. Particular attention should be paid to the time evolution of the kinematical and structural parameters of the neutral gas. \section*{Acknowledgments} We wish to thank the staff at STScI in Baltimore for their continuous support during this project. J.M. Mas-Hesse acknowledges support from Spanish CICYT through grant ESP95-0389-C02-02. Support for this work was provided by NASA through grant number GO-05833.01-94A from the Space Telescope Science Institute. ET and RT acknowledge support from an EC -- ANTARES -- grant and CNRS, respectively, during visits to IAP and LAEFF where part of this work was accomplished.
\section{Background} Researchers and government officials are often interested in characteristics of human populations for which there are no practicable sampling frames for direct sampling. In some such hidden populations, members are connected through social networks. A common approach is to collect a sample using a variant of {\it link-tracing} \citep{Thompson2006, Thompson2006a}, such as a {\it snowball sample} \citep{goodman1961}, where subsequent sample members are selected based on their relations with previously sampled individuals. When the initial sample is not a probability sample, this approach does not result in a probability sample. However, most available alternatives \citep{muhib01,peterson_etal2008, targetedsampling1989} also fail to produce a probability sample of the population. Respondent-Driven Sampling (RDS, introduced by Heckathorn 1997, 2002, see also Salganik and Heckathorn, 2004, Volz and Heckathorn 2008\nocite{heck97, Heckathorn2002, salgheck04, volzheck08}) is a recently introduced variant of link-tracing sampling which increases the ease of sampling and produces samples which, it is argued, approach probability samples as sampling progresses. The lack of satisfactory alternatives has spurred a strong demand for RDS \citep{johnston08, lanskycdcrds07}. RDS presents two main innovations: a sampling design and a corresponding approach to estimation. The former is highly effective in many settings \citep{lanskycdcrds07}; the {\it respondent-driven} design relies on the respondents at each wave to select the next wave by distributing uniquely identified coupons to others in the target population, who can choose to return the coupons to enroll in the study. Thus, the sampling exploits the network of social relations while also avoiding the confidentiality concerns associated with recording the names of contacts.
The key innovation for estimation is that through many waves of sampling, the dependence of the final sample on the initial sample is reduced, allowing researchers more confidence in making approximate probability statements about the resulting samples. This insight allows for statistical inference in settings where the initial sample is typically selected by a convenience mechanism. Although current inference is likely superior to the alternative non-probability methods, existing methods are sensitive to deviations from many assumptions \citep{goel2007, gilehanSM09, neely09}. This paper offers a modification of the existing theoretical formulation of respondent-driven sampling, and corresponding inference, to address what we find to be a serious conceptual weakness of existing work: the known inaccuracy of the with-replacement approximation to the sampling process. In the next section, we begin by introducing current RDS estimation, particularly the estimator introduced by \cite{volzheck08}, and illustrate the sensitivity of this estimator to the with-replacement sampling assumption in cases of substantial sample fractions. In Sections \ref{sssection} and \ref{ssestsection}, we then introduce a new model for RDS sampling based on successive sampling, and a new estimator based on that model. In Section \ref{mhsim}, we use a simulation study to illustrate the superior performance of the new estimator. We also include sensitivity analyses concerning inaccurate estimation of the size of the target population and the characteristics of the initial sample. In Section \ref{sec:apply}, we apply our estimator to data collected in three populations of drug users and men who have sex with men. We conclude with a broader discussion of the method and its limitations. \section{Previous Approaches to Estimation} \label{sec:rdsest} The basic ideas underlying estimation from RDS data are clever and important.
They allow for something like valid statistical inference in a sampling setting where the target population cannot be effectively reached using a traditional sampling frame. The original article \citep{heck97} made very strong assumptions about the sampling procedure so as to assume that the sample proportions were representative of the population proportions. \cite{salgheck04} introduced a Markov chain argument for population mixing, and proposed an estimator based on equating the number of cross-relations between pairs of sub-populations of interest, based on the referral patterns of each group. This estimator is currently in wide use, and is implemented in the standard RDS analysis software \citep{rdsat}. \cite{volzheck08} connect RDS estimation to mainstream survey sampling through the use of a generalized Horvitz-Thompson estimator form. This estimator relies on the estimation of the inclusion probabilities of the sampled units, $\pi_i$. Based on an argument for treating the sample as independent draws from the stationary distribution of a random walk on the nodes of an undirected graph, \cite{volzheck08} approximate the sampling probabilities as proportional to nodal degree, $d_i$, or the number of incident edges in the graph. This estimator avoids the problem of potentially unknown population size $N$ by using the generalized Horvitz-Thompson form, normalizing by an estimator of $N$ as follows: \begin{eqnarray} \hat{\mu}_{\rm VH} = \frac{\sum_{i=1}^N\frac{{\bf S}_i {\bf z}_i}{{\bf d}_i}}{\sum_{i=1}^N\frac{{\bf S}_i}{{\bf d}_i}}, \label{volzheckest} \end{eqnarray} where ${\bf S}_i=1$ indicates that the $i^{th}$ unit has been selected for sampling, and ${\bf S}_i=0$ indicates it has not been selected, and ${\bf z}_i$ represents the variable of interest measured on the $i^{th}$ unit. We refer to this approach as the Volz-Heckathorn (VH) estimator.
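The estimator in (\ref{volzheckest}) can be computed from the sampled outcomes and self-reported degrees alone. The following is a minimal sketch (the function name and the toy data are our own illustrations, not part of any RDS software):

```python
import numpy as np

def vh_estimate(z, d):
    """Generalized Horvitz-Thompson (Volz-Heckathorn) estimate of the
    population mean: each sampled unit is weighted by 1/d_i, since under
    the with-replacement random-walk model the sampling probability is
    taken to be proportional to the nodal degree d_i."""
    z = np.asarray(z, dtype=float)
    d = np.asarray(d, dtype=float)
    w = 1.0 / d                       # unnormalized sampling weights
    return np.sum(w * z) / np.sum(w)  # self-normalizing: N need not be known

# toy RDS sample: binary outcome z and self-reported degrees d
print(vh_estimate([1, 1, 0, 0, 1], [10, 8, 2, 2, 5]))
```

Note that when all degrees are equal the weights cancel and the estimate reduces to the sample proportion; unequal degrees down-weight high-degree units, which the random-walk model expects to be over-represented.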
\cite{gilehanSM09} illustrate that this estimator consistently outperforms the Salganik-Heckathorn estimator, and we believe it is the most principled estimator currently available for RDS. For these reasons, we use the VH estimator as the standard for comparison in this paper. Many critical assumptions required to justify this estimator are explored in \cite{gilehanSM09} and listed in Table \ref{tab:assmh}. In this paper, we focus on the reliance on a with-replacement sampling model. We present an estimator based on an alternative sampling model reflecting the without-replacement nature of the sampling process. \section{Successive Sampling for RDS}\label{sssection} The Volz-Heckathorn estimator requires many waves of sampling to justify its reliance on a stationary distribution. In practice the number of waves is small (almost always fewer than 20, and often 5 or fewer). Also, we wish to consider a without-replacement process for which stationarity does not apply. It is therefore instructive to consider a special case in which stationarity is not necessary. Consider a graph formed in the following manner: Begin with $N$ vertices, designated by indices $1:N$. Assign to each vertex a number of edge-ends according to an arbitrary fixed degree distribution $\bf N=\bf N_1, \bf N_2, \ldots \bf N_K$, where $\bf N_j$ is the number of nodes of degree $j$ and $1$ and $K$ are the minimum and maximum degrees, respectively, subject to the constraint that the total number of edge-ends, $2E=\sum_{k \in 1 \ldots K}k \bf N_k$, is even. Now select pairs of edge-ends completely at random and assign an edge connecting each pair. This procedure results in a variant of the so-called {\it configuration model} for networks, a popular null model for networks, especially in the physics literature \citep{molloyreed1995}.
Note that this formulation does allow {\it loops}, or links to oneself, as well as multiple edges between the same pair of vertices, although the rate of these events decreases with increasing population size for fixed maximum degree $K$, such that several authors have suggested they are negligible for $K < (\bar{{\bf d}}N)^{1/2}$ or $K < N^{1/2}$, where $\bar{{\bf d}}$ is the mean degree of the network \citep{chunglu2002, burdakrzy2003, bogunapv2004, catanzarobp2005, fosterfgp2007}. Now consider a random walk ${\bf G}^*$ on a set of vertices with degrees given by ${\bf d}_1, {\bf d}_2, \ldots, {\bf d}_N$, such that $\sum_i \mathbb{I}({\bf d}_i=k)=\bf N_k$, where $\mathbb{I}(A)$ is the indicator function on $A$. \cite{volzheck08} consider the stationary distribution of this random walk for a fixed graph. Instead, we consider the transition probabilities of the corresponding walk over the distribution of all networks of fixed degree distribution constructed as above, in which the $j^{th}$ node visited, ${\bf G}^*_j$, is selected from the distribution of possible edges from node ${\bf G}^*_{j-1}$. The transition probabilities are then given by: \begin{eqnarray} P({\bf G}^*_j={g}^*_j | {\bf G}^*_1,{\bf G}^*_2, \ldots {\bf G}^*_{j-1} = {g}^*_1,{g}^*_2, \ldots {g}^*_{j-1}) = \left\{ \begin{array}{cl} \frac{{\bf d}_{{g}^*_j}}{2E-1} & {g}^*_j \neq {g}^*_{j-1} \\\\ \frac{{\bf d}_{{g}^*_j}-1}{2E-1} & {g}^*_j = {g}^*_{j-1}, \end{array} \right. \end{eqnarray} where this probability is taken over the space of all possible configuration model graphs of given degree distribution, as well as over the steps of the random walk. This procedure results in a stationary distribution proportional to ${\bf d}_i$, and, in fact, in selection probabilities at each step very nearly proportional to ${\bf d}_i$.
Thus, for a network structure given by such a configuration model, the Volz-Heckathorn estimator constitutes a \cite{HansenHurwitz1943} estimator, without the further requirement of sufficient waves for convergence. Now consider the corresponding self-avoiding random walk ${\bf G}$. In this procedure, \begin{eqnarray} P({\bf G}_j={g}_j | {\bf G}_1, {\bf G}_2, \ldots {\bf G}_{j-1} = {g}_1, {g}_2, \ldots {g}_{j-1}) = \left\{ \begin{array}{cl} \frac{{\bf d}_{{g}_j}}{2E-\sum_{i = 1}^{j-1}{\bf d}_{{g}_i}} & {g}_j \notin {g}_1 \ldots {g}_{j-1} \\ 0 & {g}_j \in {g}_1 \ldots {g}_{j-1}. \end{array} \right. \label{ppsworeqn} \end{eqnarray} This sampling procedure is mathematically equivalent to {\it successive sampling} (SS) or {\it probability proportional to size without replacement sampling} (PPSWOR), dating back to \cite{yatesgrundy53}, and typically defined by the following sampling process: \begin{itemize} \item Begin with a population of $N$ units, denoted by indices $1\ldots N$ with varying sizes represented by ${\bf d}_1, {\bf d}_2, \ldots {\bf d}_N$, with $\sum_{i=1}^N {\bf d}_i = 2E$, for total edges $E$. \item Sample the first unit ${\bf G}_1$ from the full population $\{1 \ldots N\}$ with probability proportional to size ${\bf d}_i$. \item Select each subsequent unit with probability proportional to size {\it from among the remaining units}, such that conditional sampling probabilities are given by (\ref{ppsworeqn}). \end{itemize} In the survey sampling literature, mostly based on the work of \cite{raj56} and \cite{murthy57}, this sampling design is referred to as {\it probability proportional to size without replacement} (PPSWOR), and is typically used in instances where the desired {\it probability proportional to size} (PPS) design is not feasible. In such cases, the sizes of population units are all known, the sampling design is implemented, and the main interest is in estimating the population total or mean of a variable measured on sampled units.
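The three-step process above translates directly into a simulation routine. The following is a minimal sketch (the function name is ours, not from the paper or any published package):

```python
import random

def successive_sample(d, n, rng=random):
    """Successive sampling (PPSWOR): draw n of the len(d) units without
    replacement, at each step with probability proportional to the sizes
    d[i] of the units not yet selected."""
    remaining = list(range(len(d)))
    sample = []
    for _ in range(n):
        total = sum(d[i] for i in remaining)   # size mass still in play
        r = rng.uniform(0, total)
        acc = 0.0
        for pos, i in enumerate(remaining):
            acc += d[i]
            if r <= acc:                       # unit i is hit
                sample.append(i)
                del remaining[pos]
                break
    return sample

# units with sizes (degrees) 1..5; larger units tend to appear earlier
print(successive_sample([1, 2, 3, 4, 5], 3, random.Random(1)))
```

Renormalizing over the remaining units at each step is exactly the conditional probability in (\ref{ppsworeqn}).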
The analytical properties of this procedure are difficult to derive, as suggested by more recent work including \cite{raosensin91} and \cite{kockor01}. In fact, even the marginal unit sampling probabilities are not available in closed form. In the geological discovery literature, successive sampling is not a purposive design, but an approximation to a non-designed sampling process. Important work in this area includes \cite{andkau86}, \cite{nairwang89}, and \cite{bnw92}. These authors address the case of oil field discovery, in which successive fields are discovered with probabilities proportional to some measure of their size, typically taken to be their volume. This literature does not assume the full population of sizes to be known, and typically takes some function of the sizes, such as the sum of the sizes of undiscovered reserves, as the object of inference. Our application of successive sampling also assumes the unobserved sizes to be unknown; however, as in \cite{raj56} and \cite{murthy57}, our object is to use these sizes to estimate the population characteristics on another variable measured on all sampled units. To do so, we both develop a novel algorithm and leverage more recent work by \cite{fattorini06}. \section{Estimation of population means from RDS samples based on successive sampling} \label{ssestsection} Under successive sampling, for a population of sizes given by $\bf N$ and sample size $n$, there is a function $f_{{\mbox{\boldmath$\pi$}}}(k;n,{\bf N})$ mapping the size $k$ of a unit to its sampling probability ${\mbox{\boldmath$\pi$}}_i$. Our proposed estimator is based on estimated sampling weights, which are based on the ${\mbox{\boldmath$\pi$}}$ given by the successive sampling procedure, applied to nodal sampling units and ``sizes'' given by nodal degrees. There are three key challenges for this approach.
The mapping depends, first, on the population size $N$ and, second, on the degree distribution $\bf N$, neither of which is known in the general RDS case. Finally, given $\bf N$ and the sample size $n$, the mapping is not explicit. The lack of an explicit mapping has been addressed by \cite{fattorini06}, who suggests estimating the mapping by simulation. For known $\bf N$ and given $n$, he simulates the successive sampling procedure, then estimates the inclusion probability ${\mbox{\boldmath$\pi$}}_i$ associated with unit $i$ by: \begin{eqnarray} \tilde{{\mbox{\boldmath$\pi$}}}_i = \frac{{\bf U}_i + 1}{M+1}, \label{fattorinip} \end{eqnarray} where ${\bf U}_i$ is the number of times unit $i$ is sampled in the $M$ trials. He proposes using these estimated probabilities in the standard Horvitz-Thompson estimator: \begin{eqnarray} \sum_{j: {\bf S}_j = 1}\frac{{\bf z}_j}{\tilde{{\mbox{\boldmath$\pi$}}}_j}. \label{fattoriniT} \end{eqnarray} In most of this paper, we assume the population size, $N$, is known. We evaluate the sensitivity of our results to that assumption in Section \ref{sec:sizesens}, and in the examples in Section \ref{sec:apply}. Given the known population size, we present a novel approach to estimating the degree distribution $\bf N$ jointly with inclusion probabilities when degrees are only observed for sampled nodes. This procedure can be applied beyond the RDS setting whenever the population distribution of unit sizes corresponding to a sample collected through successive sampling is unknown.
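Fattorini's simulation estimator (\ref{fattorinip}) can be sketched as follows, with a helper performing one successive-sampling draw (the function names and parameters are illustrative choices of ours, not published code):

```python
import numpy as np

def ss_draw(sizes, n, rng):
    """One successive-sampling draw: n indices, each step with probability
    proportional to size among the units not yet selected."""
    d = np.array(sizes, dtype=float)
    out = []
    for _ in range(n):
        i = rng.choice(len(d), p=d / d.sum())
        out.append(i)
        d[i] = 0.0                  # remove the unit from later steps
    return out

def fattorini_pi(sizes, n, M=2000, seed=0):
    """Estimate unit inclusion probabilities pi_i = (U_i + 1) / (M + 1)
    from M simulated successive-sampling draws of size n."""
    rng = np.random.default_rng(seed)
    U = np.zeros(len(sizes))
    for _ in range(M):
        U[ss_draw(sizes, n, rng)] += 1.0
    return (U + 1) / (M + 1)
```

The resulting probabilities can then be plugged into the Horvitz-Thompson form (\ref{fattoriniT}).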
Our approach iteratively estimates the population distribution $\bf N$ and the mapping $f_{{\mbox{\boldmath$\pi$}}}(k;n,{\bf N}):k \to {\mbox{\boldmath$\pi$}}$. This approach relies on two key points: \begin{itemize} \item For a known population of degrees $\bf N$, the mapping $f_{{\mbox{\boldmath$\pi$}}}(k;n,{\bf N})$ can be estimated by simulation in a form similar to (\ref{fattorinip}). \item For a known mapping $f_{{\mbox{\boldmath$\pi$}}}(k;n,{\bf N})$, the number (or proportion) of the population of degree $k$ can be estimated using a form similar to (\ref{fattoriniT}). \end{itemize} We leverage these two points to propose the following procedure for the estimation of the population mean $\mu$ in the case of nodal degrees observed only for sampled units. Let $\mathop{\rm E\,}\nolimits[\cdot ; n, \bf N]$ denote expectation with respect to a sample of size $n$ sampled by successive sampling from a population with degree counts $\bf N=\{\bf N_1, \bf N_2, \ldots, \bf N_K\}.$ Then: \begin{eqnarray} \mathop{\rm E\,}\nolimits[{\bf V}_{k} ; n, \bf N] = \bf N_{k} f_{{\mbox{\boldmath$\pi$}}}(k;n,{\bf N}) ~~~~~~~~~k=1, \ldots, K \end{eqnarray} where ${\bf V}_k$ is the random variable representing the number of sample units with degree $k$, $f_{{\mbox{\boldmath$\pi$}}}(k;n,{\bf N})=\mathop{\rm E\,}\nolimits[{\bf S}_{j}:{\bf d}_{j}=k; n, \bf N]$ is the (common) inclusion probability of a node of degree $k$, and $K$ is the maximum degree, $K<N$. This suggests first-order moment equations for the unknown true $\bf N:$ \begin{eqnarray} \mathop{\rm E\,}\nolimits[{\bf V}_{k} ; n, \bf N] = {\bf v}_{k} ~~~~~~~~~k=1, \ldots, K \label{mmeqn} \end{eqnarray} where ${\bf v}_k$ is the observed number of sample units with degree $k$.
The algorithm is then: \begin{enumerate} \item Initial estimate: \begin{eqnarray} f_{\vpi}(k;n,\N^0) = \frac{k}{N} \sum_{l =1}^K \frac{v_l}{l}, \end{eqnarray} that is, $f_{\vpi}(k;n,\N^0)$ is proportional to $k$. \item For $i=1, \ldots, r$ iterate the following steps: \begin{enumerate} \item Estimate the population distribution of degrees: \begin{eqnarray} {\bf N_{k}^{i} = N \cdot \frac{\frac{{\bf v}_k}{f_{\vpi}(k;n,\N^{i-1})}}{\sum_{l =1}^K \frac{{\bf v}_l}{f_{\vpi}(l;n,\N^{i-1})}} }~~~~~~~~~~~k=1, \ldots, K \end{eqnarray} \item Estimate inclusion probabilities: \begin{enumerate} \item Simulate $M$ successive sampling samples of size $n$ from a population with composition $\bf N^{i}.$ \item Estimate the inclusion probabilities: \begin{eqnarray} f_{\vpi}(k;n,\N^i) = \frac{\mathop{\rm E\,}\nolimits[{\bf V}_{k} ; n, \bf N^{i}]}{\bf N_{k}^{i}} \approx \frac{{\bf U}_k + 1}{M \cdot \bf N_{k}^{i} + 1}, \label{estq2a} \end{eqnarray} {\small where ${\bf U}_k$ is the total number of observed units of degree $k$ in the $M$ simulations.} \end{enumerate} \end{enumerate} \item Estimate the population distribution of degrees and the corresponding inclusion probabilities via, respectively: ${\hat{\bf N}} = \bf N^r$ and ${\hat{\pi}}(\cdot) = f_{\vpi}(\cdot;n,\N^r).$ \item Use the resulting mapping ${\hat{\pi}}$ to estimate $\mu$ via the generalized Horvitz-Thompson estimator: \begin{eqnarray} \hat{\mu}_{SS} = \frac{\sum_{j = 1}^N \frac{{\bf S}_j {\bf z}_j}{{\hat{\pi}}({\bf d}_j)}}{\sum_{j= 1}^N\frac{{\bf
S}_j}{{\hat{\pi}}({\bf d}_j)}}. \label{estmu2} \end{eqnarray} \end{enumerate} For computational efficiency, most simulations in this paper were conducted with $M=500$ and $r=3$, with good results. In general, we recommend at least $M=2000$ and $r=3$, and we have used these parameters for the simulations of the standard error procedure in the supplemental materials, in the application to real data, and in the extension to $N>1000$ in the discussion. Estimation time scales with sample size, population size, and $M$. In our simulations, with $N=1000$ and $M=500$, estimates require about $1.5$ seconds on a personal computer, increasing to about $6$ seconds when $M=2000$. In practice, these parameters can be adjusted for the desired precision in the solution to (\ref{mmeqn}). Higher values of $M$ are particularly helpful for more dispersed degree distributions and larger population sizes. Simulations have shown the results of this procedure to be at least as good as those provided by the first-order asymptotic approximation of \cite{andkau86}. This approach is novel and its theoretical properties are not well understood. It is appealing to consider the algorithm as a variant of an EM algorithm, such as an ECM algorithm \citep{ECM}, with the unobserved part of the degree sequence, $d_{n+1}, \ldots, d_N$, as the latent variable. Unfortunately, in the design-based framework, these values also fully determine the unknown parameters $N_k = \sum_{i=1}^N {\mathbb I}(d_i=k)$, resulting in a degenerate likelihood form. \subsection{Between Infinite Population and Full Population Sample} Consider the limiting case of the moment equation (\ref{mmeqn}) where $N=n$. In this case, the equation is only satisfied by the degree distribution $\hat{\bf N}_k = {\bf v}_k$, resulting in ${\hat{\pi}}({\bf d}_j) = 1 ~\forall ~ j$, and $\hat{\mu}_{SS} = \hat{\mu}$, the sample mean. Now consider the limit as $N \to \infty$, for fixed $n$ and fixed maximum degree.
Then the step-wise selection probabilities for an unsampled node approach values proportional to degree: \begin{eqnarray} \frac{{\bf d}_i}{\sum_j {\bf d}_j - \sum_{j: {\bf S}_j=1} {\bf d}_j} - {\bf p}_i \to 0, \end{eqnarray} where ${\bf p}_i = \frac{{\bf d}_i}{\sum_j {\bf d}_j}$. Then $P(\mathrm{Binom}(n, {\bf p}_i)>1) \to 0$, such that \begin{eqnarray} \frac{{\mbox{\boldmath$\pi$}}_i}{{\mbox{\boldmath$\pi$}}_j} \to \frac{{\bf p}_i}{{\bf p}_j} = \frac{{\bf d}_i}{{\bf d}_j}, \end{eqnarray} for overall inclusion probabilities ${\mbox{\boldmath$\pi$}}$, and step-wise selection probabilities ${\bf p}$. Therefore, $\hat{\mu}_{SS} \to \hat{\mu}_{\rm VH}$. In either limit, $N \to n$ or $N \to \infty$, $\hat{\mu}_{SS}$ approaches an existing estimator. Thus, it inherits the professed limiting properties of $\hat{\mu}_{\rm VH}$, such as robustness to bias based on the initial sample, while retaining the favorable finite population characteristics of the sample mean in the case of a large sample fraction. In the next section, we use simulation studies to argue that for $n < N < \infty$, the proposed estimator appropriately mediates between these two, and therefore out-performs both. \section{Comparing the New and Existing Estimators:\\ A Simulation Study} \label{mhsim} Our simulation study is designed to highlight the treatment of without-replacement sampling in the case of a large sample fraction. To increase the realism of the study, we chose parameters to match the characteristics of the pilot data from the CDC surveillance program \citep{aqcdc06} wherever possible. The general procedure was as follows: \begin{itemize} \item 1000 networks are simulated under each test condition. \item An RDS sample is simulated from each simulated network. \item RDS estimators are computed from each sample.
\end{itemize} Because the CDC's surveillance system aims for a sample size of 500, and many RDS studies approach exhaustion of their populations of interest \citep{johnston09exhaust, cdc08exhaust}, we fix all sample sizes at 500. We also consider a mean degree of 7, close to the mean of the pilot data from the CDC study \citep{aqcdc06}. We assign a discoverable class to each member of the simulated population. In reference to studies designed to estimate the prevalence of infectious disease, we refer to this characteristic as ``infection status,'' assigning the ``infected'' status (${\bf z}_i=1$) to 20\% of simulated population members in each simulation. Note that $\hat{\mu}_{SS}$ could also be applied to a continuous or categorical variable. We consider networks sampled from models in the {\it Exponential-family Random Graph Model} (ERGM) class \citep{sprh06}. Here the relations ${\bf y}$ are represented as a realization of the random variable ${\bf Y}$ with distribution: \begin{eqnarray} P_{\eta}({\bf Y}={\bf y} | {\bf x}) = \exp\{{\mbox{\boldmath$\eta$}}{\cdot}{\bf g}({\bf y},{\bf x})-\kappa({\mbox{\boldmath$\eta$}},{\bf x})\}\quad \quad {\bf y}\in {\cal Y}, \label{ergm} \end{eqnarray} where ${\bf x}$ are nodal or dyadic covariates, ${\bf g}({\bf y},{\bf x})$ is a $p$-vector of network statistics, ${\mbox{\boldmath$\eta$}}\in \mathbb{R}^p$ is the parameter vector, ${\cal Y}$ is the set of all possible undirected graphs, and $\exp\{\kappa({\mbox{\boldmath$\eta$}},{\bf x})\} = \sum_{{\bf u}\in{\cal Y}}\exp\{{\mbox{\boldmath$\eta$}}{\cdot}{\bf g}({\bf u},{\bf x})\} $ is the normalizing constant \citep{bar78}.
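For intuition about (\ref{ergm}), the normalizing constant can be computed by brute force on a very small graph. The sketch below (Python; our own toy example, not the paper's machinery, which uses the {\tt statnet} R package) takes a single edge-count statistic, for which the ERGM reduces to independent Bernoulli edges with probability $e^{\eta}/(1+e^{\eta})$.

```python
import itertools
import math

def ergm_edge_count_probs(n_nodes, eta):
    """Enumerate all undirected graphs on n_nodes and normalize
    exp(eta * #edges); feasible only for tiny n_nodes."""
    dyads = list(itertools.combinations(range(n_nodes), 2))
    weights = {y: math.exp(eta * sum(y))
               for y in itertools.product([0, 1], repeat=len(dyads))}
    kappa = sum(weights.values())      # the normalizing constant exp{kappa(eta)}
    return {y: w / kappa for y, w in weights.items()}

probs = ergm_edge_count_probs(4, eta=0.5)
# marginal probability that the first dyad is tied
p_first = sum(p for y, p in probs.items() if y[0] == 1)
p_bernoulli = math.exp(0.5) / (1 + math.exp(0.5))   # logistic(eta)
```

Richer statistics in ${\bf g}({\bf y},{\bf x})$, such as the mixing-matrix counts used in this study, break this independence and require Markov chain Monte Carlo sampling in practice.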
The structure of the networks represented is determined by the choice of ${\bf g}({\bf y},{\bf x})$. In this study, we vary the structure of the networks in three ways: First, we consider populations of sizes (i.e. numbers of nodes) 1000, 835, 715, 625, 555, and 525, such that samples of size 500 constitute about 50\%, 60\%, 70\%, 80\%, 90\%, and 95\% of the target population. We focus on this range of sample fractions because it highlights settings in which we might expect to see finite population biases such as those $\hat{\mu}_{SS}$ is intended to address. Another critical feature is the {\it activity ratio} $w$, equal to the ratio of the mean degree of infected nodes to the mean degree of uninfected nodes, \begin{eqnarray} w=\frac{\sum_{i=1}^N d_i z_i }{\sum_{i=1}^N z_i } \frac{\sum_{i=1}^N (1-z_i)}{\sum_{i=1}^N d_i (1-z_i)}. \end{eqnarray} This measures the differential tendency for the groups to be socially connected in the population. We consider $w \in \{0.5, 0.8, 1, 1.1, 1.4, 1.8, 2.5, 3\}$. We also induce homophily on infection status in these simulations, parameterized as the relative probability of an edge between two infected nodes, and an edge between an infected and an uninfected node. This is an intuitive parameterization, also used in \cite{gilehanSM09}, but differs from that used in other analyses of RDS. Except in Section \ref{sec:senshomoph}, the edge probability between the two infected nodes is fixed at five times that of the mixed dyad. For $w=1$, this, along with the 20\% infected, implies that an edge between two uninfected nodes is twice as likely as an edge in a mixed dyad. 
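The activity ratio defined above is simple to compute from the degrees and infection indicators; a minimal sketch (our own, equivalent to the ratio-of-sums formula in the text):

```python
import numpy as np

def activity_ratio(d, z):
    """w = (mean degree of infected) / (mean degree of uninfected)."""
    d, z = np.asarray(d, dtype=float), np.asarray(z)
    return d[z == 1].mean() / d[z == 0].mean()

# toy check: infected nodes twice as connected as uninfected ones
w = activity_ratio([4, 4, 2, 2, 2, 2], [1, 1, 0, 0, 0, 0])   # -> 2.0
```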
We consider several other levels of homophily. In Section \ref{sec:senshomoph} we show that this feature has important implications for the bias induced by biased initial samples. For initial samples selected at random with respect to ${\bf z}$, however, as in most of the simulations presented here, bias was not affected by the level of homophily, although increasing homophily does increase the variance of both $\hat{\mu}_{SS}$ and $\hat{\mu}_{\rm VH}$, and is therefore important to consider in standard error estimation. These features are represented in the ERGM by choosing network statistics to represent the mean degree, the activity ratio $w$, and homophily (based on using the ``infected'' status as a nodal covariate). These three values were specified using the ``{\tt nodemix}'' {\tt statnet} model term, which includes three parameters corresponding to the three cells of the mixing matrix on infection. The parameter ${\mbox{\boldmath$\eta$}}$ was chosen so the expected values of the statistics were equal to the values given above \citep{vanduijngilehan09}. Samples from the resulting models were taken using the {\tt statnet} R package \citep{statnet,ergmjss}. The RDS sampling mechanism is again designed to mimic that of the CDC's pilot study. Ten initial sample nodes were chosen for each sample, selected sequentially with probability proportional to degree (i.e., by successive sampling). In the sensitivity analysis, we also consider initial sample selection regimes dependent on ${\bf z}$. Subsequent sample waves were selected without replacement by sampling up to two nodes at random from among the unsampled alters of each sampled node. Exactly two alters were sampled whenever possible.
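The recruitment mechanism just described can be sketched as follows (Python; a simplified stand-in for the {\tt statnet}-based machinery actually used, with our own function and variable names):

```python
import random

def simulate_rds(adj, n, n_seeds=10, max_recruits=2, seed=0):
    """Without-replacement RDS: seeds drawn sequentially with probability
    proportional to degree, then each respondent recruits up to
    max_recruits not-yet-sampled neighbors, wave by wave, until n."""
    rng = random.Random(seed)
    # seeds by successive sampling (PPS by degree, without replacement)
    sampled = []
    while len(sampled) < min(n_seeds, n):
        pool = [v for v in adj if v not in sampled]
        weights = [len(adj[v]) for v in pool]
        sampled.append(rng.choices(pool, weights=weights, k=1)[0])
    wave = list(sampled)
    while wave and len(sampled) < n:
        nxt = []
        for v in wave:
            frontier = [u for u in adj[v] if u not in sampled and u not in nxt]
            rng.shuffle(frontier)
            nxt.extend(frontier[:max_recruits])
        nxt = nxt[: n - len(sampled)]   # stop once the target size is reached
        sampled.extend(nxt)
        wave = nxt
    return sampled

# toy network: a ring of 40 nodes with a chord from every 4th node
adj = {i: {(i - 1) % 40, (i + 1) % 40} for i in range(40)}
for i in range(0, 40, 4):
    adj[i].add((i + 7) % 40)
    adj[(i + 7) % 40].add(i)
sample = simulate_rds(adj, n=20)
```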
This process typically resulted in the sampling of four complete waves and part of a fifth wave, stopping when a sample size of 500 was attained. We augment our basic results with two sub-studies evaluating the sensitivity of the estimator $\hat{\mu}_{SS}$ to assumptions not required for $\hat{\mu}_{\rm VH}$: the accuracy of the assumed population size $\hat{N}$ and the dependence on the initial sample. \subsection{Results}\label{sec:basic} The Volz-Heckathorn estimator exhibits substantial bias in cases of non-unity activity ratio, and more so for larger sample fractions. This result was noted in \cite{gilehanSM09}, and is illustrated in Figure \ref{fig:biasbars}. The bias can be understood as follows: in the case of higher mean degree among infected nodes $(w>1)$, and large sample fraction, the higher-degree infected samples will be down-weighted proportional to their degrees. The true without-replacement sampling probabilities are closer to uniform than would be suggested by the proportional-to-degree estimates, such that these higher-degree nodes are excessively down-weighted, leading to negative bias in the estimated proportion infected. The corresponding mappings from degree to sampling weight are illustrated in Figure \ref{fig:curvesboth}. \begin{figure}[h] \begin{center} \subfigure[Bias of $\hat{\mu}_{VH}$] { \label{fig:biasbars} \includegraphics[width=7.2cm]{vhbiascircles.pdf} } \hspace{-.3cm} \subfigure[Bias of $\hat{\mu}_{SS}$] { \label{fig:sppsbiasbars} \includegraphics[width=7.2cm]{ssbiascircles.pdf} }\end{center} \caption{Bias of the Volz-Heckathorn and Successive Sampling estimators from samples of size 500 constituting about 50\%, 60\%, 70\%, 80\%, 90\%, and 95\% of the population, for varying activity ratio ($w$). The same samples were used for both estimators.
}\label{bothbars} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=3in]{curves0725bigbothbw.pdf} \end{center} \caption{Estimated mappings from nodal degree to inclusion probability for successive sampling samples of size 500 constituting about 50\%, 60\%, 70\%, 80\%, 90\%, 95\%, and 100\% of the population, for simulated network degree distributions with $w=1$, along with the proportional mapping assumed by the Volz-Heckathorn estimator (for the 50\% sample), indicated by {\it $\propto$ degree}. Note that given the simulated degree distribution, the proportional mapping requires probabilities greater than 1 to attain the desired sample size.} \label{fig:curvesboth} \end{figure} The SS estimator, however, is not subject to this type of bias, as illustrated in Figure \ref{fig:sppsbiasbars}. This is the main contribution of the proposed estimator. In this plot, the bias is negligible, with the exception of the case in which $w=0.5$. At least two factors, related to the strong homophily and the smaller size of the infected group, contribute to this exception. First, the homophily-induced dependence implies the initial sample has greater influence on the final sample than in standard successive sampling. As this sample is selected first, its selection probabilities are closer to proportional to degree than the final successive sampling inclusion probabilities, contributing to a resulting sample slightly over-representing high-degree uninfected nodes. Furthermore, because the infected nodes have low degrees and high homophily, they more often fail to produce both possible recruits. The relative group sizes contribute to the difference in the magnitude of this effect for $w<1$ and $w>1$.
The variance of $\hat{\mu}_{SS}$ is also consistently lower than that of $\hat{\mu}_{\rm VH}$, combining with the lower bias to yield substantially lower mean squared error. Indeed, the mean squared error of $\hat{\mu}_{SS}$ is less than that of $\hat{\mu}_{\rm VH}$ over the full parameter space of Figure \ref{bothbars}. The combination of bias and variance effects is visible in Figures \ref{small151} and \ref{small152}. \subsection{Sensitivity to Population Size Estimate}\label{sec:sizesens} It is important to note that the performance of $\hat{\mu}_{SS}$ in Figure \ref{fig:sppsbiasbars} is dependent on knowledge of the true population size $N$. This is often unrealistic in practice. Therefore, we evaluate the performance of $\hat{\mu}_{SS}$ in the case of over- and under-estimation of $N$. In particular, we consider small ($\hat{N}_s$) and large ($\hat{N}_l$) estimates of $N$ given by: \begin{eqnarray} \hat{N}_s = N - \frac{N-n}{2}, ~~~ \hat{N}_l = N + \frac{N-n}{2}. \label{nhat} \end{eqnarray} Figure \ref{small15} depicts the results of these simulations for the case of $w=1.4$. Each sub-plot gives the distribution of the estimator over 1000 samples from each of the 6 population proportions. The four sub-plots represent $\hat{\mu}_{\rm VH}$ (top left), $\hat{\mu}_{SS}(N)$ (top right), $\hat{\mu}_{SS}(\hat{N}_s)$ (bottom left) and $\hat{\mu}_{SS}(\hat{N}_l)$ (bottom right).
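Equation (\ref{nhat}) places the erroneous guesses halfway between the sample size and the truth, and the mirror image above it; as a worked example (ours), for $N=1000$ and $n=500$:

```python
def size_guesses(N, n):
    """Under- and over-estimates of N from the sensitivity study:
    halfway toward n from N, and equally far above N."""
    half_gap = (N - n) / 2
    return N - half_gap, N + half_gap

size_guesses(1000, 500)   # -> (750.0, 1250.0)
```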
\begin{figure}[h] \begin{center} \subfigure[$\hat{\mu}_{VH}$] { \label{small151} \includegraphics[width=2.1in,height=2in]{small5_BL_1pt5vh} } \hspace{.25cm} \subfigure[$\hat{\mu}_{SS}$] { \label{small152} \includegraphics[width=2.1in,height=2in]{small5_BL_1pt5spps} }\vspace{.25cm} \subfigure[$\hat{\mu}_{SS}(\hat{N}_s < N)$] { \label{small153} \includegraphics[width=2.1in,height=2in]{small5_BL_1pt5sppssmall} }\hspace{.25cm} \subfigure[$\hat{\mu}_{SS}(\hat{N}_l > N)$] { \label{small154} \includegraphics[width=2.1in,height=2in]{small5_BL_1pt5sppsbig} }\end{center} \caption{$\hat{\mu}_{\rm VH}$ and $\hat{\mu}_{SS}$ from samples of size 500 constituting about 50\%, 60\%, 70\%, 80\%, 90\%, and 95\% of the population. Initial samples selected independent of infection status ${\bf z}$. Activity ratio ($w$) fixed at 1.4. The first two plots depict $\hat{\mu}_{\rm VH}$ and $\hat{\mu}_{SS}$. The last two depict $\hat{\mu}_{SS}$ when the number of nodes is under- and over-estimated, respectively. The true value is $0.20$. The same samples were used for each estimator. } \label{small15} \end{figure} In this case, the underestimation of $N$ results in a small positive bias, due to the over-estimation of the curvature of $\f$. A small negative bias is also present in the case of $\hat{N}_l$, due to the under-estimation of the curvature of $\f$. In both cases, the bias induced by inaccurate $\hat{N}$ is less than the bias of $\hat{\mu}_{\rm VH}$, and the resulting new estimators clearly out-perform the existing estimator. Figure \ref{fig:longguess} presents mean point estimates for other values of the activity ratio $w$. The middle set of plots, $w=0.8$, corresponds to a case of moderately lower mean degree among infected nodes. In this case, the biases introduced by under- and over-estimation of $N$ change sign, but remain small in magnitude. The remaining plots illustrate the greater bias possible for extreme values of $w$.
While the biases become larger, they are generally smaller than those exhibited by $\hat{\mu}_{\rm VH}$, with the exception of the two cases in which the negative bias of $\hat{\mu}_{SS}$ due to small $w$, described in Section \ref{sec:basic}, is compounded by negative bias due to $\hat{N} < N$ ($w=0.5, 0.8$, $50\%$ sample). The mean squared error of $\hat{\mu}_{SS}$ is still smaller than that of $\hat{\mu}_{\rm VH}$ in these cases. Although the performance of $\hat{\mu}_{SS}$ is less robust to $\hat{N}$ in cases of extreme $w$, $\hat{\mu}_{\rm VH}$ also performs more poorly in these cases. \begin{figure}[h] \begin{center} \includegraphics[width=6in]{small5_BLlongguess_.pdf} \end{center} \caption{Mean prevalence estimates of four types: $\hat{\mu}_{\rm VH}$ (solid circles), $\hat{\mu}_{SS}$ (circles), $\hat{\mu}_{SS}(\hat{N}_s < N)$ (down-triangles), and $\hat{\mu}_{SS}(\hat{N}_l > N)$ (up-triangles), considered for 50\%, 70\%, and 90\% samples, and differential activity $w = 0.5, 0.8, 2.5$. The true value is $0.20$.} \label{fig:longguess} \end{figure} The behavior we see in these plots is typical of other simulations not shown here. With the more extreme values of the activity ratio $w$, the bias induced by the inaccurate estimation of $N$ is increased, but typically not so much as to cause performance of $\hat{\mu}_{SS}$ worse than that of $\hat{\mu}_{\rm VH}$. \subsection{Sensitivity to Initial Sample and Homophily}\label{sec:senshomoph} The estimator $\hat{\mu}_{\rm VH}$ requires sufficiently many sample waves to overcome the effect of an initial convenience sample and step-wise sampling probabilities not proportional to degree due to the particular network structure. The estimator $\hat{\mu}_{SS}$, not based on a stationary distribution, does not allow for such an argument.
We therefore argue only that it is not worse than $\hat{\mu}_{\rm VH}$ in this respect. In Section \ref{sec:basic}, we illustrate that in the simulated networks, bias induced by deviations from the configuration model is no worse for $\hat{\mu}_{SS}$ than for $\hat{\mu}_{\rm VH}$. We focus here on bias induced by the initial sample. \cite{gilehanSM09} show that $\hat{\mu}_{\rm VH}$ exhibits considerable bias in the case of a biased initial sample and network homophily. We expect $\hat{\mu}_{SS}$ to exhibit similar sensitivity, and here compare its performance to that of $\hat{\mu}_{\rm VH}$. We consider three regimes for the selection of the initial sample: all uninfected, random with respect to infection, and all infected. We also consider five levels of homophily, measured by the ratio $R$ defined by: \begin{eqnarray} R = \frac{\textrm{Probability of an ``infected-infected'' tie}}{\textrm{Probability of an ``infected-uninfected'' tie}}, ~~~~ R=1,2,3,5,13. \end{eqnarray} The standard level of homophily used in this paper corresponds to $R=5$. To present a comparison most favorable to $\hat{\mu}_{\rm VH}$, we treat a population size of 1000 (a $50\%$ sample), and activity ratio $w=1$. We consider 1000 samples from each homophily and sampling scenario, and summarize performance in terms of absolute bias, variance, and mean squared error, the last of which is depicted in Table \ref{tab:mse2}. We find that the variance of $\hat{\mu}_{\rm VH}$ always exceeds that of $\hat{\mu}_{SS}$, as does the bias in all but a few cases of small differences. The MSE of $\hat{\mu}_{\rm VH}$ always exceeds that of $\hat{\mu}_{SS}$. Table \ref{tab:mse2} illustrates this relation for the case of an all-infected initial sample. The patterns for the other two sampling conditions are similar.
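Empirically, $R$ compares realized tie fractions across dyad types; a minimal sketch (our own construction, on an edge list with 0-indexed nodes):

```python
import numpy as np

def homophily_ratio(edges, z):
    """R = Pr(infected-infected tie) / Pr(infected-uninfected tie),
    each probability being the realized fraction of possible dyads."""
    z = np.asarray(z)
    n1 = int(z.sum())
    n0 = len(z) - n1
    ii = sum(1 for a, b in edges if z[a] + z[b] == 2)   # infected-infected ties
    io = sum(1 for a, b in edges if z[a] != z[b])       # mixed ties
    return (ii / (n1 * (n1 - 1) / 2)) / (io / (n1 * n0))

# infected triangle {0,1,2} plus one mixed tie -> R = 1 / (1/9) = 9
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
R = homophily_ratio(edges, [1, 1, 1, 0, 0, 0])
```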
\begin{table}[h]\caption{Comparison of mean squared error for $\hat{\mu}_{\rm VH}$ and $\hat{\mu}_{SS}$ with all infected initial sample and five levels of homophily. $MSE(\hat{\mu}_{SS})<MSE(\hat{\mu}_{\rm VH})$ for all conditions.} \begin{center} \begin{tabular}{l||ccccc} Homophily ($R$) & 1 & 2 & 3 & 5 & 13 \\ \hline \hline MSE $\hat{\mu}_{\rm VH}$ & 0.00029 & 0.00036 & 0.00054 & 0.00150 & 0.01525 \\ MSE $\hat{\mu}_{SS}$ & 0.00026 & 0.00032 & 0.00052 & 0.00140 & 0.01449 \\ \hline Efficiency: MSE $\hat{\mu}_{\rm VH}$/MSE $\hat{\mu}_{SS}$ & 1.12 & 1.12 & 1.04 & 1.07 & 1.05 \\ \end{tabular} \end{center}\label{tab:mse2} \end{table} \section{Application to HIV Prevalence in High-Risk \\Populations}\label{sec:apply} \subsection{Background} The United Nations requires countries to measure and monitor key indicators related to their HIV epidemics \citep{UNAIDS2007, UNAIDS2008a}. In particular, countries with epidemics concentrated in high-risk groups are required to report on several features of key populations such as (injecting) drug users ((I)DU), men who have sex with men (MSM), and sex workers (SW). Such features include HIV prevalence, risk behaviors, and population sizes. Because these populations are typically hard-to-reach, many countries rely on respondent-driven sampling to estimate HIV prevalence and risk behaviors. In this section, we evaluate data collected on IDU and MSM in cities from two different countries in 2007 and 2008.
\begin{figure}[h] \begin{center} \subfigure[East Europe IDU] { \label{fig:eeidu} \includegraphics[width=7.5cm]{lutskidu} } \hspace{-.5cm} \subfigure[Caribbean DU] { \label{fig:cdu} \includegraphics[width=7.5cm]{higduhiv} }\hspace{-.5cm} \subfigure[Caribbean MSM] { \label{fig:cmsm} \includegraphics[width=7.5cm]{higmsmhiv} }\end{center} \caption{Estimated HIV prevalence in three populations according to $\hat{\mu}_{SS}$, $\hat{\mu}_{\rm VH}$, and the sample mean, for various population size estimates. Dotted lines represent 1 standard error above and below $\hat{\mu}_{SS}$, according to the estimator in the supplemental materials. Vertical bars on (a) represent the exogenously estimated population size and on (c) represent the rule-of-thumb 1\% to 3\% MSM in the population. }\label{fig:examples} \end{figure} \subsection{Injecting Drug Users in an East European City} The first example is based on a 2007 survey of IDU in major cities in a former Soviet bloc country. The HIV epidemic in this country is largely driven by IDU, where prevalence levels are over 50\% in many cities, and RDS is used to study this population, as well as MSM and SW in several larger cities. This country also invests heavily in producing estimates of the size of the hidden population through scale-up and multiplier methods \citep{Kruglov08, UNAIDS2003}. We focus here on the results for one city. Because in very large populations we expect $\hat{\mu}_{SS}$ to be nearly identical to $\hat{\mu}_{\rm VH}$, we consider a city with one of the smaller estimates for the size of the hidden population. In this city, the number of IDU is estimated at 1200, with confidence interval 1100-1400. The RDS sample began with 6 initial samples, and distributed 3 coupons per respondent. The sample size was 175, all with full degree and HIV data. The two longest sample chains ended at wave 9, with 4 total respondents from wave 9.
We estimate the HIV prevalence using $\hat{\mu}_{SS}$ for several population sizes including 1100, 1200, and 1400, and also report the corresponding estimates of $\hat{\mu}_{\rm VH}$ and the sample mean. These results are summarized in Figure \ref{fig:eeidu}. We find that the prevalence estimate based on $\hat{\mu}_{SS}$ is very close to that based on $\hat{\mu}_{\rm VH}$ for population sizes in this range, and well within the uncertainty of the estimate, according to the estimator in the supplemental materials. This pattern is consistent across the cities we have considered in this country. This is partly due to the policy in this country of using RDS only in cities with larger estimated population sizes, opting for strategies of institutional sampling or attempted complete enumeration in areas with smaller populations. \subsection{Drug Users in a Caribbean City} The second example is taken from a small Caribbean country, which has conducted RDS studies of drug users (DU), MSM, and SW in four main cities in 2008 (note that this study did not limit participation to {\it injecting} drug users). We focus here on the study of DU in one of the cities with smaller total population. In this study, there were 7 initial samples, resulting in a sample of size 301, of which we include here only the 285 with full degree and HIV information. Again three coupons were distributed to most respondents, although this number was reduced as the sample size approached the target of 300. The two longest sample chains reached wave 11, with 6 respondents from this wave. In this city, the number of DU is unknown. Therefore, we use the successive sampling estimator to produce a sensitivity analysis for the effect of population size on the HIV prevalence estimate. The results of this analysis are shown in Figure \ref{fig:cdu}. Here the point estimate varies from 6.2\% for a population size of 301 to 9.6\% for a population size of 6000, with $\hat{\mu}_{\rm VH} = 9.7\%$. 
In absolute and relative terms, this is more variation than in the previous example (3.4\%, or about 35\% of $\hat{\mu}_{\rm VH}$). Although these differences are still well within the uncertainty of the estimator, in many cases RDS point estimates are important in themselves. It is also possible that a sensitivity analysis such as this one will be of interest with respect to a particular prevalence threshold, such as the $5\%$ threshold on which UNAIDS bases national epidemic classifications. In this example, the point estimate of prevalence is above $5\%$ for all population sizes, while a nominal 90\% confidence interval includes 5\% for all population sizes. \subsection{Men who have Sex with Men in a Caribbean City} Finally, we consider a population of MSM in the same Caribbean city. This study design was very similar to the corresponding DU study, with 7 initial samples, and a maximum of 3 coupons distributed. Here, the sample size was 270, with complete information for 269. One chain in this sample reached wave 12, with two respondents in that wave. Figure \ref{fig:cmsm} illustrates the sensitivity of the prevalence estimate to the number of MSM in this city. For comparison, the horizontal axis corresponds to the same population sizes as in Figure \ref{fig:cdu}, although here it is labeled in proportions. The prevalence estimate changes by about 1.6 percentage points over this range of possible population sizes (9.5\% for 301, or 0.2\% of the population being MSM, to 7.9\% for 6000, or 4.2\%). There are no additional studies available to provide population size estimates; however, this problem is less severe for populations of MSM, as a rule-of-thumb is often applied, estimating the number of MSM at about 1\%-3\% of the general population, corresponding to the two vertical bars on Figure \ref{fig:cmsm}. In this case, however, further information is available.
This sample did not reach its desired sample size (300), because no more coupons were returned, and the research team was not able to find additional MSM to sample. Thus the sample neared exhaustion of the portion of the population available for sampling, suggesting that although the population of MSM might be 1-3\% of the city population, a smaller portion of those may be connected to the giant component of the social network of MSM and willing to participate in such a study. Therefore, if we restrict our inference to the reachable portion of the target population, the relevant population size is likely much closer to the sample size, suggesting a value of the $\hat{\mu}_{SS}$ estimate over 1.5 percentage points higher than $\hat{\mu}_{\rm VH} = 7.8\%$. Note that among the 12 RDS studies in this country, three samples exhibited this near-exhaustion behavior. It is also of interest to note that in this example, unlike the previous two, $\hat{\mu}_{SS}$ is consistently higher than $\hat{\mu}_{\rm VH}$. This result is consistent with the expectation that $\hat{\mu}_{SS}$ is typically between $\hat{\mu}_{\rm VH}$ and the sample mean. In this case, the degree-based weights of $\hat{\mu}_{\rm VH}$ reduced the overall weight given to infected nodes, and the more moderate weights used in $\hat{\mu}_{SS}$ reduced this effect. \section{Discussion}\label{sec:discussion} The key insight of this paper is the recognition that the true mapping $\f$ from nodal degree ${\bf d}_i$ to inclusion probability ${\mbox{\boldmath$\pi$}}_i$ under Respondent-Driven Sampling is better approximated by successive sampling than by the linear mapping assumed by \cite{volzheck08}. In addition, we introduce a novel approach to estimating the unit size (or degree) distribution in the population, based on the population size and the sizes of observed units under successive sampling.
Combining this insight and estimation strategy, we present a new estimator for population proportions based on an RDS sample. The contribution of this new estimator is illustrated in Figure \ref{fig:long}. In cases with no correlation between degree and the quantity of interest ($w=1$), and an initial sample selected at random with respect to infection status, the naive sample mean $\hat{\mu}$ is an unbiased estimator of the population mean, as are existing estimators such as $\hat{\mu}_{\rm VH}$ and the proposed estimator $\hat{\mu}_{SS}$. This is illustrated in the first three columns of the figure. However, when the variable of interest is related to nodal degree, nodes in the higher-degree class are over-represented in the sample, resulting in the positive bias of the sample mean $\hat{\mu}$ in the second three columns of Figure \ref{fig:long}. This bias shrinks as the sample fraction increases. The existing estimator $\hat{\mu}_{\rm VH}$ adjusts reasonably well for this effect when the sample fraction is small; however, for larger sample fractions, this estimator over-compensates for the effect of the degree distribution, resulting in bias opposite that of $\hat{\mu}$. The contribution of the proposed estimator, $\hat{\mu}_{SS}$, is that it correctly adjusts for the joint effects of varying degree distributions and large sample fractions. The last three columns of Figure \ref{fig:long} illustrate a shortcoming of all three of these estimators. In this case, all initial samples, or {\it seeds}, are selected from among the infected nodes. This results in increased positive bias in all three of these estimators. All three estimators are also subject to other sources of bias discussed in \cite{gilehanSM09}, including bias induced by the systematically biased passing of RDS coupons.
\begin{figure}[h] \begin{center} \includegraphics[width=6in]{small5_BLlong_.pdf} \end{center} \caption{Mean prevalence estimates of three types: $\hat{\mu}_{\rm VH}$ (solid circles), $\hat{\mu}_{SS}$ (open circles), and the sample mean $\hat{\mu}$ (crosses), considered for 50\%, 70\%, and 90\% samples, activity ratios $w = 1, 1.8$, and an initial sample (seeds) selected either at random with respect to infection status or entirely from among the infected. None of the estimators exhibit bias in the face of homophily with $w=1$. When $w \neq 1$, $\hat{\mu}_{\rm VH}$ and $\hat{\mu}$ exhibit bias, with magnitude sensitive to the sample fraction. All estimators exhibit bias in the case of a biased initial sample.} \label{fig:long} \end{figure} Because RDS is in wide usage, and often in cases where the sample fraction is large, this new estimator may improve estimation in many contexts. Its applicability is limited, however, by the requirement that the population size $N$ be known. Figures \ref{small15} and \ref{fig:longguess} illustrate that inaccurate estimates of $N$ can introduce bias into $\hat{\mu}_{SS}$, although this new estimator still outperforms $\hat{\mu}_{\rm VH}$ at the level of inaccuracy considered in this study. While most of our simulation study has focused on sample fractions consistent with the finite population effects that $\hat{\mu}_{SS}$ is intended to address, it is of interest to note that $\hat{\mu}_{SS}$ is nearly identical to $\hat{\mu}_{\rm VH}$ for small sample fractions. We have replicated much of the simulation study here with population size $N=10,000$. We find no significant differences between $\hat{\mu}_{SS}$ and $\hat{\mu}_{\rm VH}$ in these simulations, over a full range of values of $w$. When using inaccurate estimates of $N$ as in (\ref{nhat}), we found only one case of significant difference between $\hat{\mu}_{\rm VH}$ and $\hat{\mu}_{SS}$, corresponding to a dramatic under-estimate of population size, $\hat{N}_s=5250$ with $N=10,000$, and a high activity ratio, $w=3$.
In this case, $\hat{\mu}_{SS}$ had bias $0.0139$ and $\hat{\mu}_{\rm VH}$ had bias $0.0105$. The difference in mean squared error was not significant. Thus, the proposed estimator is helpful in correcting for finite population biases when they exist, and sensitive to the estimated population size in these cases. In cases where finite population effects are not present, the proposed estimator nearly coincides with the existing estimator $\hat{\mu}_{\rm VH}$ and, unless the inaccurate population size estimate is small enough to transition into the region of finite population sensitivity, $\hat{\mu}_{SS}$ is insensitive to the population size estimate. Known population size $N$ and random mixing are key assumptions of the estimator $\hat{\mu}_{SS}$. Additional assumptions required by $\hat{\mu}_{SS}$ are listed in Table \ref{tab:assmh}. The assumptions with \st{strikethrough} are necessary for $\hat{\mu}_{\rm VH}$ but not for $\hat{\mu}_{SS}$, and the {\it italic text} indicates additional assumptions required by $\hat{\mu}_{SS}$ but not $\hat{\mu}_{\rm VH}$. In these terms, the key contribution of $\hat{\mu}_{SS}$ is to remove the dependence on a random walk model known to be inaccurate for estimating sampling probabilities. In return, $\hat{\mu}_{SS}$ sacrifices the theoretical robustness to the initial sample promised by the Markov chain model, although this robustness was not truly present in $\hat{\mu}_{\rm VH}$ either, and the proposed estimator performs no worse than $\hat{\mu}_{\rm VH}$ in this respect (Table \ref{tab:mse2}). More critically, $\hat{\mu}_{SS}$ relies on the assumption of known population size. The estimator $\hat{\mu}_{SS}$ is also sensitive to the degree of non-random mixing, or homophily, in the network, although no more so than $\hat{\mu}_{\rm VH}$ (Table \ref{tab:mse2}), as well as to other assumptions such as accurate self-reported degree and an undirected network.
\cite{gilehanSM09} illustrate the sensitivity of $\hat{\mu}_{\rm VH}$ to deviations from some other assumptions in Table \ref{tab:assmh}. We do not expect $\hat{\mu}_{SS}$ to be more robust to such deviations than $\hat{\mu}_{\rm VH}$. \begin{table}\caption{Assumptions of $\hat{\mu}_{SS}$. Assumptions with \st{strikethrough} apply to $\hat{\mu}_{\rm VH}$ but not $\hat{\mu}_{SS}$. Assumptions in {\it italic} apply to $\hat{\mu}_{SS}$ but not $\hat{\mu}_{\rm VH}$.} \begin{center} \begin{tabular}{l||c|c} & Network Structure & Sampling Assumptions\\ & Assumptions & \\ \hline \hline Random Walk & \st{Population size large ($N >> n$)} & \st{Sampling with replacement} \\ Model & & \st{Single non-branching chain} \\ \hline Remove Initial & Homophily weak enough & Sufficiently many sample waves \\ Sample & Connected graph & \it Initial sample unbiased\\ Dependence & & \\ \hline To Estimate & All ties reciprocated & Degree accurately measured \\ Probabilities & \it Known population size $N$ & Random referral \\ \end{tabular} \label{tab:assmh} \end{center} \end{table} Our application to HIV prevalence estimation in Section \ref{sec:apply} illustrates several uses for $\hat{\mu}_{SS}$. First, when the population size is known, $\hat{\mu}_{SS}$ provides a better estimate of population proportions than the other available methods. This can also be done when only a range of population sizes is known; in particular, in the case of MSM, officials are often willing to assume that the population ranges between 1\% and 3\% of the total population. When no information is available on population size, $\hat{\mu}_{SS}$ can be used to perform a sensitivity analysis. Finally, other information from the sampling process, such as the exhaustion of the population available for sampling, may suggest that the sample fraction is large.
Additional information may also be gathered by asking respondents about their exposure to others who have been sampled. Intuition suggests that in the case of a large sample fraction, an RDS estimator should negotiate between the infinite-population assumption of the Volz-Heckathorn estimator and the full-population sample assumption of the naive sample mean. In this paper, we provide an estimator based on a successive sampling model that appropriately negotiates between these two extremes. Whenever the hidden population size is known, it will be preferable to use this estimator. When the hidden population size is not known, this estimator still provides a helpful diagnostic check on the other available estimators. Beyond the estimator itself, this paper contributes a new theoretical framework for understanding the sampling process in respondent-driven sampling. Previous understandings have relied on a with-replacement or infinite population assumption, an assumption known to be inaccurate, and critically so in some populations. The introduction of a sampling model appropriately accommodating the true without-replacement nature of the sampling process opens new possibilities for future research on respondent-driven sampling. We intend to make code available for these procedures in the R package {\tt RDS} on CRAN. \singlespacing \subsection{Standard Error Estimation} \label{bootstrap} The key to a successful bootstrap procedure is a good representation of the underlying population and sampling process. We propose an estimator of standard error based on a population bootstrap procedure in the case of a binary nodal covariate ${\bf z}$, such as infection status. We begin by using the weights given by ${\hat{\pi}}({\bf d}_j)$ to create a simulated population of size $N$, using the equivalence classes defined jointly by degree ${\bf d}$ and infection status ${\bf z}$.
This results in a simulated population given by a $2 \times K$ random matrix ${\bf N^*}$, where ${\bf N^*}_{0,k}$ and ${\bf N^*}_{1,k}$ represent the population counts of uninfected and infected nodes of degree $k$, respectively. We estimate $\bf N^*$ by $\hat{\bf N}^*$, where \begin{eqnarray} \hat{\bf N}^*_{i,k}=\frac{1}{{\hat{\pi}}_{{\bf d}_k}}\sum_{j:{\bf S}_j=1}\mathbb{I}\{{\bf d}_j=k,{\bf z}_j=i\}. \end{eqnarray} The simplest bootstrap procedure in this case would select samples from $\hat{\bf N}^*$ under successive sampling with sizes given by degrees. This approach is severely anti-conservative, however, whenever there is {\it homophily} on ${\bf z}$, that is, a tendency for individuals to preferentially form relations with others of the same ${\bf z}$ value. For this reason, we introduce a first-order correction for homophily on the binary variable ${\bf z}$, based on a mixing-matrix approximation to the population homophily. Let $c_{i,j}$ represent the number of observed referrals from recruiters with ${\bf z}=i$ to recruits with ${\bf z}=j$. Then let \begin{eqnarray} r_1 = \frac{c_{1,1}}{c_{1,1}+c_{1,0}}, ~~~~~~ r_0 = \frac{c_{0,1}}{c_{0,1}+c_{0,0}} \label{r0r1} \end{eqnarray} represent the observed rates at which recruiters with ${\bf z}=1$ and ${\bf z}=0$, respectively, refer recruits with ${\bf z}=1$. In the equilibrium distribution of a with-replacement random walk process, all links are followed with equal probability, making (\ref{r0r1}) a plausible estimator of the proportion of ties to nodes of status ${\bf z}=1$. We can then approximate the population homophily using a {\it mixing matrix} ${\bf H}$ partitioning the edges of the graph into cells according to the ${\bf z}$ values of their incident nodes, such that ${\bf H}(i,j)$ represents the estimated number of edges between nodes with ${\bf z}=i$ and nodes with ${\bf z}=j$, $(i,j) \in \{(0,0), (1,1), (1,0), (0,1)\}$.
We estimate ${\bf H}$ by: \begin{align} {\bf H}(1,1)& = \bar{{\bf d}}_1 \sum_k\hat{\bf N}^*_{1,k}r_1 \label{h11}\\ {\bf H}(0,0)& = \bar{{\bf d}}_0 \sum_k\hat{\bf N}^*_{0,k}(1-r_0) \label{h00}\\ {\bf H}(1,0) ={\bf H}(0,1)& = \frac{1}{2}\left( \bar{{\bf d}}_1 \sum_k\hat{\bf N}^*_{1,k}(1-r_1) + \bar{{\bf d}}_0 \sum_k\hat{\bf N}^*_{0,k}r_0 \label{h10} \right) \end{align} where $\bar{{\bf d}}_1$ and $\bar{{\bf d}}_0$ are the estimated mean degrees of nodes with ${\bf z}_i=1$ and ${\bf z}_i=0$: \[ \bar{{\bf d}}_i = \frac{\sum_k \hat{\bf N}^*_{i,k}k}{\sum_k \hat{\bf N}^*_{i,k}}. \] We update this matrix with each sampled node in a given bootstrap sample, and use the resulting matrix to determine the proportion of links to unsampled nodes of each class, and therefore the distribution of classes of the next sampled node. Within a class, each sampled node is chosen with probability proportional to degree. Each replicate of the bootstrap procedure is as follows: \begin{itemize} \item Stochastically re-distribute $\hat{\bf N}^*$ to get $\hat{\hat{\bf N}}^*$, so that each cell has an integer value with expected counts given by $\hat{\bf N}^*$: first truncate all cell counts to their integer parts, then distribute the remaining population units with probabilities proportional to the truncated (fractional) pieces. For convenience, we also define $N$-vectors $d^*$ and ${\bf z}^*$ representing the ordered set of degrees and infection statuses of each member of the simulated population, such that $\sum_i \mathbb{I}(z_i^* =l, d_i^*=k)=\hat{\hat{\bf N}}^*_{l,k}$. Also, set all elements of an $N$-vector ${\bf S}$ to $0$ to track sampling. \item Estimate the initial mixing matrix ${\bf H}_0$ as in (\ref{h11}), (\ref{h00}), and (\ref{h10}). \item Select $n_0$ nodes with probability proportional to degree, where $n_0$ is the observed number of initial samples.
Assign the indices of these sampled nodes to the first $n_0$ elements of the vector ${\bf W}$, and set the corresponding elements of ${\bf S}$ to $1$. \item Set $n_n=n_0$ to index the number of nodes sampled. \item Update ${\bf H}$ as follows: \[ {\bf H}(i,j)={\bf H}_0(i,j)\frac{\sum_{l \in 1 \ldots N} {\bf d}_l^* \mathbb{I}({\bf z}_l^*=j,{\bf S}_l=0)}{\sum_{l \in 1 \ldots N} {\bf d}_l^* \mathbb{I}({\bf z}_l^*=j)}, ~~ i,j \in \{0,1\}. \] \item Set $i=1$ to index the active referring node. \item While $n_n < n$, where $n$ is the observed sample size, \begin{itemize} \item Select a number of referrals $m$ from the empirical distribution of the number of observed referrals per observed node. If $m > n-n_n$ then let $m= n-n_n$. \item Repeat $m$ times: \begin{itemize} \item Select a class $j$ according to a Bernoulli distribution with parameter \[ \frac{{\bf H}({\bf z}_{{\bf W}_i}^*,1)}{{\bf H}({\bf z}_{{\bf W}_i}^*,1)+{\bf H}({\bf z}_{{\bf W}_i}^*,0)}. \] \item Select a node $l$ from $\{k:{\bf z}_k^*=j,{\bf S}_k=0\}$ with probability proportional to ${\bf d}_k$. Let ${\bf W}_{n_n+1}=l$, $n_n=n_n+1$, ${\bf S}_l=1$. \item Let ${\bf H}(\cdot,j) = {\bf H}_0(\cdot,j)\frac{\sum_k {\bf d}_k^* \mathbb{I}({\bf z}_k^*=j,{\bf S}_k=0)}{\sum_k {\bf d}_k^* \mathbb{I}({\bf z}_k^*=j)}$. \end{itemize} \item Let $i=i+1$. \end{itemize} \item Compute $\hat{\mu}_{SS}^*$ based on the sample consisting of ${\bf d}_j^*, {\bf z}_j^*: {\bf S}_j=1$. \end{itemize} We use the standard deviation of the resulting population of $T$ bootstrap estimates as an estimate of the standard error of $\hat{\mu}_{SS}$. We have used $T=1000$ bootstrapped samples. Note that this procedure reproduces a homophily structure similar to that in the original network, but does not reproduce the potential biases resulting from the selection of the initial sample. We illustrate the performance of this standard error estimator by comparing three critical cases.
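The referral rates in (\ref{r0r1}) and the mixing-matrix estimate in (\ref{h11})--(\ref{h10}) can be transcribed in a few lines. The sketch below is illustrative only, with hypothetical input values; degree classes are indexed $k = 1, \ldots, K$.

```python
def referral_rates(c):
    """Observed referral rates; c[i][j] counts referrals from
    recruiters with z=i to recruits with z=j."""
    r1 = c[1][1] / (c[1][1] + c[1][0])
    r0 = c[0][1] / (c[0][1] + c[0][0])
    return r0, r1

def mixing_matrix(Nstar, r0, r1):
    """Mixing-matrix approximation to population homophily.
    Nstar[i][k-1] is the estimated count of nodes with z=i and degree k."""
    K = len(Nstar[0])
    size = [sum(Nstar[i]) for i in (0, 1)]            # class sizes
    dbar = [sum(Nstar[i][k] * (k + 1) for k in range(K)) / size[i]
            for i in (0, 1)]                          # estimated mean degrees
    H = [[0.0, 0.0], [0.0, 0.0]]
    H[1][1] = dbar[1] * size[1] * r1
    H[0][0] = dbar[0] * size[0] * (1 - r0)
    H[1][0] = H[0][1] = 0.5 * (dbar[1] * size[1] * (1 - r1)
                               + dbar[0] * size[0] * r0)
    return H

# hypothetical referral counts and population estimates
c = [[40, 10], [15, 35]]                            # strong homophily on z
Nstar = [[30.0, 20.0, 10.0], [5.0, 10.0, 25.0]]     # degrees k = 1, 2, 3
r0, r1 = referral_rates(c)
H = mixing_matrix(Nstar, r0, r1)
```

By construction the off-diagonal cells are symmetrized, so the estimated edge counts between classes agree in both directions.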
As with the point estimate, we illustrate both cases in which we expect the estimator to perform reasonably well, and a case in which we expect the estimator to perform poorly. Each set of simulations involved 1000 bootstrapped re-samples for each of 1000 simulated RDS samples. The parameters of the samples, average estimated standard errors, and coverage rates of nominal $95\%$ and $90\%$ confidence intervals are given in Table \ref{tab:boot}. \begin{table}[h]\caption{Observed (simulation) standard errors of estimates, and average bootstrap standard error estimates, along with coverage rates of nominal 95\% and 90\% confidence intervals for the procedure given in Section \ref{bootstrap} and the Appendix, for varying sample proportion and activity ratio $w$, and for the initial sample selected either independent of infection (``No'' bias) or all from within the infected subgroup (``Yes'' bias). Observed standard errors are based on 1000 samples. Bootstrap standard errors are the average bootstrap standard error estimates over the same 1000 samples. Nominal confidence intervals are based on quantiles of the Gaussian distribution.} \begin{center} \begin{tabular}{cccc|cc|cc} \% & homoph. & & initial sample & SE & SE & coverage & coverage \\ sample & R & $w$ & bias & observed & bootstrap & 95\% & 90\%\\ \hline \hline 50\% & 5 & 1 & No & 0.0212 & 0.0218 & 94.3\% & 89.8\% \\ 70\% & 5 & 1.8 & No & 0.0087 & 0.0090 & 95.9\% & 90.6\% \\ 50\% & 5 & 1 & Yes & 0.0211 & 0.0224 & 75.9\% & 63.7\% \\ \end{tabular} \end{center}\label{tab:boot} \end{table} The magnitudes of the average bootstrap standard error estimates are quite close to the observed values in all three cases, and the coverage rates in the cases without biased initial sample selection are very close to their nominal values. The last row of Table \ref{tab:boot} illustrates the poor performance of the estimator in the case of extreme initial sample bias.
In this case, the estimator $\hat{\mu}_{SS}$ has substantial bias, leading to poor coverage rates of the nominal intervals. \end{document}
\section{Program summary} \noindent \textbf{Program Title:} GFCCLib \noindent \textbf{Program Files doi:} 10.24433/CO.5131827.v1 \noindent \textbf{Licensing provisions:} MIT License \noindent \textbf{Programming language:} C++ \noindent \textbf{Nature of problem:} The application of coupled cluster Green's function methods to large scale molecular electronic structure problems suffers from expensive higher dimensional tensor contractions in the complex space, expensive inter-process communication, and severe load imbalance. Tackling these issues is a key step in building a high-performance coupled cluster Green's function library for routine use in large scale molecular science. \noindent \textbf{Solution method:} We have developed a C++ library for large scale molecular GFCC calculations on high-performance computing clusters. We provide implementations of high dimensional tensor algebra for many-body methods (TAMM), Cholesky decomposition of high dimensional electron repulsion integral tensors, and a process group technique for mitigating load imbalance. The source code, tutorials and documentation are provided online. A continuous integration mechanism is set up to automatically run a series of regression tests and check code coverage when the codebase is updated. \section{Introduction} Faster and more accurate description of the electronic structure of large quantum systems has continually stimulated new developments in \textit{ab initio} electron correlation theory \cite{march67, fetter2012quantum, linderberg2004propagators, paldus75_105, cederbaum77_205, joergensen2012second, szabo2012modern, oddershede87propagator, mattuck2012guide, harris2020algebraic, bartlett07_291, shavitt2009many}. For example, larger fullerene molecules (e.g. C60 and C70) and their derivatives have been widely applied in molecular electronics and nanostructured devices, especially in organic photovoltaics (OPVs) \cite{Yu95_1789, brabec01_374, tang86_183}.
In comparison with traditional silicon-based photovoltaic materials, fullerene-based OPVs have exhibited promising photoelectric properties, and their power conversion efficiency has gradually improved over the past years (from 1\% to over 9\%) \cite{mayer07_1369,Deibel10_096401}. However, the fundamental understanding of the correlation between the photoelectric properties and the structure of these fullerene-based materials is still incomplete, in particular how the open-circuit voltage is affected by the electronic structure associated with particular molecular structures. Previously, only single particle pictures (for example density-functional theory, DFT) of the electronic structure of fullerene and its derivatives have been reported \cite{tiago08_084311, akaike08_023710, zhang08_19158, tiago09_195410, blase11_115103}, which show only qualitative agreement with experiment. Of course, one can invoke different density functionals in the DFT calculation to test the agreement with experiment. Unfortunately, there is no systematic way to improve single particle DFT results, and separate calculations for different states would be required. On the other hand, one can directly compute the one-particle many-body Green's function (MBGF) to capture the key electronic properties of the ionization and attachment processes. Typical approaches include the {\it GW} method,\cite{hedin65_a796,faleev04_126406, schilfgaarde06_226402, louie06_216405, louie11_186404,setten13_232} the outer-valence Green's function (OVGF) method,\cite{cederbaum75_290, cederbaum84_57, ortiz97, ortiz13_123} and the algebraic-diagrammatic construction (ADC)\cite{schirmer82_2395, cederbaum84_57, dreuw15_82} approximation scheme.
Both the {\it GW} method and the OVGF method rely on a finite many-body perturbative expansion of the self-energy via the Dyson equation, and numerous studies of weakly and moderately correlated molecular systems have shown that they provide accurate single particle properties. However, when many-body effects become crucial, as often signaled by the satellite states in the ionization process out of the inner valence band, where poles appear in the analytical structure of the self-energy,\cite{cederbaum77_L549, cederbaum75_2160, cederbaum80_481} one cannot properly describe the Green's function and/or self-energy by a finite expansion. In contrast, infinite partial summations of the self-energy perturbation series, for example the ADC approximation scheme (especially the third-order ADC method, ADC(3)), have been shown to provide a qualitative description of the many-body poles in the analytical structure of the self-energy. Alternatively, an improved description of many-body effects in Green's function calculations can be achieved systematically by combining the one-particle MBGF with correlated wave function expansions.
For example, the possibility of utilizing systematically-improvable, highly-correlated wave function methodologies as impurity solvers to describe the local Green's function or corresponding self-energies in dynamical mean-field theory (DMFT) has drawn considerable interest recently.\cite{kotliar96_13, kotliar06_865, vollhardt12_1, millis06_155107, millis06_076405, zgid11_094115, zgid12_165128, zhu2019_115154, zgid19_6010} Among these systematically-improvable many-body approaches, the Green's function coupled cluster (GFCC) methodology has attracted much attention in recent years for molecular and material quantum chemical calculations \cite{nooijen92_55, nooijen93_15, nooijen95_1681, kowalski14_094102, kowalski16_144101, kowalski16_062512, chan16_235139, hirata17_044108, kowalski18_561,kowalski18_4335, kowalski18_214102, matsushita18_034106, kowalski18_3, berkelbach18_4224, peng19_3185, zgid19_6010, peng20_011101, matsushita20_012330, bauman20}, and could be one of the ideal many-body tools to treat a complex molecular system like C60. Inheriting the merits of many-body Green's function methods and coupled cluster methods, the GFCC method is able to describe electron propagation in a systematically improvable many-body way for molecular and material quantum systems. However, despite these features, the GFCC method scales polynomially with system size: if $N$ is the number of basis functions representing the problem size, the cost scales as $N^x$, where the value of $x$ depends on the approximation adopted in the calculation. A cruder approximation with smaller $x$ makes the calculations cheaper but typically also leads to larger errors. Thus, similar to the underlying coupled cluster theory \cite{cizek66_4256,paldus72_50,purvis82_1910,paldus07,bartlett07_291}, the GFCC method forms a hierarchy in terms of computational efficiency and accuracy.
As a rule of thumb, for small and medium quantum systems described by $<$500 basis functions, one may comfortably use GFCC with singles and doubles (GFCCSD, an $N^6$ method) to compute the corresponding many-body electronic structures. For relatively large quantum systems described by $>$500 basis functions, GFCC calculations on conventional computing clusters can only compute a small number of ionized states, which would make the method less predictive if important states were not computed. For example, the GFCC matrix used to be obtained by diagonalizing the non-Hermitian equation-of-motion coupled cluster (EOM-CC) Hamiltonian matrix in the ($N \pm 1$)-particle space to construct the sum-over-states representation. The performance of solving such an eigen-problem, if the EOM-CC Hamiltonian is a dense matrix, deteriorates significantly once the dimension of the Hamiltonian grows beyond $10^{10}$. Moreover, only a limited number of states can be obtained with the help of iterative methods. To reduce the computational cost of solving the eigen-problem in the EOM step, as well as the cost of the preceding ground state CC step, early attempts mainly focused on perturbative truncation of the similarity transformed Hamiltonian to achieve an $\mathcal{O}(N^5)$ scaling \cite{stanton95_1064}. More recently, various local descriptions of the correlated wave function have emerged to facilitate the development of reduced-scaling coupled-cluster methods. In particular, the pair natural orbitals (PNOs) introduced almost half a century ago \cite{meyer73_1017,edmiston66_1833,edmiston68_192,ahlrichs75_275} have been resurrected and further developed by Neese and co-workers as domain-based local pair natural orbitals (DLPNO) \cite{neese13_034106,neese16_024109}, combined with the EOM-CC method to reduce the size of the space for the diagonalization \cite{neese16_034102,neese18_244101,neese19_164123}.
Alternatively, the GFCC matrix can be obtained directly by solving a set of shifted linear systems involving the EOM-CC Hamiltonian at each frequency of interest. Due to this algebraic structure, the systems of linear equations can be solved simultaneously over multiple frequencies, which is well suited for massively distributed computing architectures. Using similar methods, pilot calculations have previously been reported for the uniform electron gas \cite{chan16_235139}, light atoms \cite{matsushita18_034106}, heavy metal atoms \cite{matsushita18_224103}, and simple 1-D periodic systems \cite{matsushita18_204109}. More recently, the GFCC method has been applied as an impurity solver within embedding frameworks to compute the electronic structure of complex materials \cite{zhu2019_115154,zgid19_6010}. However, large GFCC calculations (with $N>$ 1000) were not attempted until recently, when the valence band electronic structures of a series of large DNA fragments were computed using the GFCC approach, demonstrating the possibility and potential of this approach in large scale applications \cite{peng20_011101}. Remarkably, the algebraic structure of the GFCC equations enables highly scalable implementations utilizing multiple levels of parallelism. To further optimize the GFCC infrastructure, three bottlenecks need to be properly addressed: expensive high dimensional tensor contractions in the complex space (i.e. a mathematical space based upon complex numbers), expensive inter-processor communication, and load imbalance. These issues cannot be addressed simply by utilizing more processors for the calculations; rather, they require the development of new computational strategies, new numerical solvers, and new computational infrastructures that integrate novel computer science techniques with highly scalable computing resources.
In this work, we present our effort to develop a numerical library that tackles these issues to achieve a highly scalable and efficient GFCC approach. We first briefly review the GFCC theory and methodology, and give an overview of the entire workflow. After pointing out the specific bottlenecks in the workflow, we discuss how we tackle these bottlenecks in the implementation details. We also present a profiling analysis of our implementation of the optimized GFCC approach and its parallel performance on a large computing facility~\cite{olcfsummit}. Finally, to highlight the capability of our GFCC implementation, we perform, for the first time, GFCCSD calculations employing over 800 basis functions to compute the many-body electronic structure of the fullerene C60 molecule, covering a near-valence spectral region of up to $\sim$25 eV. \section{Theory} For a review of the GFCC method employed in this work, we refer the readers to Refs. \cite{nooijen92_55, nooijen93_15, nooijen95_1681,meissner93_67,kowalski14_094102, kowalski16_144101,kowalski16_062512, kowalski18_561,kowalski18_4335, kowalski18_214102}.
Briefly, the matrix element of the analytical frequency dependent Green's function of an $N$-electron system at the frequency $\omega$ can be expressed as \begin{widetext} \begin{eqnarray} G_{pq}(\omega) = \langle \Psi | a_q^\dagger (\omega + ( H - E_0 ) - i \eta)^{-1} a_p | \Psi \rangle + \langle \Psi | a_p (\omega - ( H - E_0 ) + i \eta)^{-1} a_q^\dagger | \Psi \rangle \label{gfxn0} \end{eqnarray} \end{widetext} Here $H$ is the electronic Hamiltonian of the $N$-electron system, $| \Psi \rangle$ is the normalized ground-state wave function of the system, $E_0$ is the ground state energy, $\eta$ is the broadening factor introduced numerically to provide the width of the computed spectral bands, and the $a_p$ ($a_p^\dagger$) operator is the annihilation (creation) operator for an electron in the $p$-th spin-orbital (we use $p,q,r,s,\ldots$ for the general spin-orbital indices, $i,j,k,l,\ldots$ for the occupied spin-orbital indices, and $a,b,c,d,\ldots$ for the virtual spin-orbital indices). Integrating the Green's function formulation into the bi-orthogonal coupled cluster (CC) formalism, the resulting GFCC formulation can be expressed as \begin{widetext} \begin{eqnarray} G_{pq}(\omega) = \langle\Phi|(1+\Lambda) \overline{a_q^{\dagger}} (\omega+\bar{H}_N - \text{i} \eta)^{-1} \overline{a_p} |\Phi\rangle + \langle\Phi|(1+\Lambda) \overline{a_p} (\omega-\bar{H}_N + \text{i} \eta)^{-1} \overline{a_q^{\dagger}} |\Phi\rangle \label{gfxn1} \end{eqnarray} \end{widetext} with $|\Phi\rangle$ being the reference function, and the normal product form of the similarity transformed Hamiltonian $\bar{H}_N$ being defined as $\bar{H} - E_0$. Here, the similarity transformed operators $\bar{A}$ ($A = H, a_p, a_q^{\dagger}$) are defined as $\bar{A} = e^{-T} A ~e^{T}$, and the cluster operator $T$ and the de-excitation operator $\Lambda$ are obtained from solving the conventional CC equations.
By defining the $\omega$-dependent many-body operators $X_p(\omega)$ and $Y_q(\omega)$ as \begin{widetext} \begin{eqnarray} X_p(\omega) &=& X_{p,1}(\omega)+X_{p,2}(\omega) + \ldots = \sum_{i} x^i(\omega)_p a_i + \sum_{i<j,a} x^{ij}_a(\omega)_p a_a^{\dagger} a_j a_i +\ldots , \label{xp} \\ Y_q(\omega) &=& Y_{q,1}(\omega)+Y_{q,2}(\omega) + \ldots = \sum_{a} y^a(\omega)_q a_a^\dagger + \sum_{i,a<b} y^{i}_{a,b}(\omega)_q a_a^{\dagger} a_b^{\dagger} a_i +\ldots , \label{yq} \end{eqnarray} \end{widetext} (the dependence of $G$, $X$, and $Y$ on $\omega$ will not be explicitly mentioned henceforth) to satisfy \begin{eqnarray} (\omega+\bar{H}_N - \text{i} \eta )X_p|\Phi\rangle &=& \overline{a_p} |\Phi\rangle, \label{eq:xplin} \\ (\omega-\bar{H}_N + \text{i} \eta )Y_q|\Phi\rangle &=& \overline{a_q^\dagger} |\Phi\rangle, \label{eq:yqlin} \end{eqnarray} Eq. (\ref{gfxn1}) can be re-expressed by the compact expression \begin{eqnarray} G_{pq} = \langle\Phi|(1+\Lambda) \overline{a_q^{\dagger}} X_p |\Phi\rangle + \langle\Phi|(1+\Lambda) \overline{a_p} Y_q |\Phi\rangle. \label{gfxn2} \end{eqnarray} By truncating the many-body expansion of the cluster and mapping amplitudes (i.e. $T$, $\Lambda$, $X$, and $Y$) at the two-body level, the resulting approximate form of Eq. (\ref{gfxn2}) can be written as \begin{widetext} \begin{eqnarray} G_{pq} = \langle\Phi|(1+\Lambda_1+\Lambda_2) \overline{a_q^{\dagger}} (X_{p,1}+X_{p,2}) |\Phi\rangle + \langle\Phi|(1+\Lambda_1+\Lambda_2) \overline{a_p} (Y_{q,1}+Y_{q,2}) |\Phi\rangle \label{gfxn3}, \end{eqnarray} \end{widetext} which is the so-called GFCCSD approximation (GFCC with singles and doubles), with $X_{p,1}$ ($Y_{q,1}$ or $\Lambda_1$) and $X_{p,2}$ ($Y_{q,2}$ or $\Lambda_2$) being the one- and two-body components of the $X_{p}$ ($Y_{q}$ or $\Lambda$) operator, respectively.
The electronic structure of the system is then captured by the spectral function computed from the GFCCSD approximation, given by the trace of the imaginary part of the retarded GFCCSD matrix, \begin{equation} A = - \frac {1} {\pi} \text{Tr} \left[ \Im\left({\bf G}^{\text{R}} \right) \right] = - \frac {1} {\pi} \sum_{p} \Im\left(G_{pp}^{\text{R}} \right)~. \label{specfxn} \end{equation} Eqs. (\ref{gfxn0})$-$(\ref{specfxn}) describe the working equations to obtain the GFCC matrix elements and spectral function at a specific frequency. For a broad frequency regime with high frequency resolution, GFCC calculations at tens or hundreds of frequencies would be required, which constitutes a sizeable prefactor for the calculation. The working equations for GFCC formulations can also be derived using various diagrammatic techniques. The corresponding diagrams (typical examples of diagrams defining equations for the $X_p$ operator and matrix elements of the CC Green's function are shown in Fig.~\ref{diagrams}) can be translated into the form of tensor contractions involving tensors describing interactions included in the Hamiltonian operator, ground-state CC amplitudes, and amplitudes defining the $X_p$ operators. Efficient implementations of these complicated expressions require specialized tensor libraries that provide tools for distributing multidimensional tensors across the network, performing tensor contractions in parallel, and utilizing computational resources offered by existing GPU architectures.
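For intuition, the structure of Eqs. (\ref{eq:xplin}) and (\ref{specfxn}) can be mimicked on a toy model: for each frequency, solve the shifted complex linear system and accumulate the imaginary part of the resulting matrix element. The model Hamiltonian, vectors, and parameter values below are hypothetical, and the signs follow the equations exactly as written; a production GFCC code would instead use distributed complex tensor contractions.

```python
import math

def solve_complex(A, b):
    """Gaussian elimination with partial pivoting for a small complex system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def spectral_function(Hbar, b, c, omegas, eta=0.1):
    """A(w) = -(1/pi) Im[c^T (w + Hbar - i*eta)^{-1} b], one orbital channel."""
    n = len(Hbar)
    out = []
    for w in omegas:
        # shifted system matrix: (w - i*eta) on the diagonal plus Hbar
        M = [[Hbar[i][j] + ((w - 1j * eta) if i == j else 0.0)
              for j in range(n)] for i in range(n)]
        x = solve_complex(M, [complex(v) for v in b])
        out.append(-sum(ci * xi for ci, xi in zip(c, x)).imag / math.pi)
    return out

# toy 1x1 "Hamiltonian": the shifted system becomes singular near w = 3,
# so the spectral weight peaks there
Hbar = [[-3.0]]
b = c = [1.0]
omegas = [2.0, 2.5, 3.0, 3.5, 4.0]
A = spectral_function(Hbar, b, c, omegas)
```

Each frequency is independent of the others, which is exactly the property that lets the real GFCC implementation distribute frequencies (and orbitals $p$) across process groups.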
\begin{figure}[htbp] \centerline{\includegraphics[width=0.4\textwidth]{Figure1.pdf}} \caption{Examples of diagrams and corresponding tensor contractions contributing to Eqs.~(\ref{eq:xplin}) (inset (a)) and (\ref{gfxn2}) (inset (b)).} \label{diagrams} \end{figure} In order to reduce the formal computational cost of the GFCC method over a broad frequency regime, a further approximation has been made in our calculation through the model-order-reduction (MOR) technique\cite{peng19_3185}. Specifically, one can construct an effective Hamiltonian based on the orthonormal subspace $\mathbf{S}=\{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_m\}$ (with its dimension $m$ being much smaller than the dimension of the original Hamiltonian) such that the conventional GFCC linear system over a broad frequency regime can be approximated through a more easily solvable model linear system over the same frequency regime, \begin{widetext} \begin{eqnarray} \left\{ \begin{array}{ccl} (\omega - \text{i} \eta + \hat{\bar{\textbf{H}}}_N) \hat{\textbf{X}} & = & \hat{\textbf{b}}, \\ \hat{\mathbf{G}}^{\text{R}} & = & \hat{\mathbf{c}}^{\text{T}} \hat{\textbf{X}}, \end{array}\right. ~~~\text{and}~~~ \left\{ \begin{array}{ccl} (\omega + \text{i} \eta - \hat{\bar{\textbf{H}}}_N) \hat{\textbf{Y}} & = & \hat{\textbf{b}}, \\ \hat{\mathbf{G}}^{\text{A}} & = & \hat{\mathbf{c}}^{\text{T}} \hat{\textbf{Y}}, \end{array}\right.
\label{model} \end{eqnarray} \end{widetext} Here $\hat{\bar{\textbf{H}}}_N = \textbf{S}^{\text{T}} \bar{\textbf{H}}_N \textbf{S}$, $\hat{\textbf{X}} = \textbf{S}^{\text{T}} \textbf{X}$, $\hat{\textbf{Y}} = \textbf{S}^{\text{T}} \textbf{Y}$, $\hat{\textbf{b}} = \textbf{S}^{\text{T}} \textbf{b}$, and $\hat{\mathbf{c}}^{\text{T}} = \mathbf{c}^{\text{T}} \textbf{S}$, with the columns of $\textbf{b}$ corresponding to $\overline{a_p} |\Phi\rangle$ or $\overline{a_q^\dagger} |\Phi\rangle$, and the columns of $\textbf{c}$ corresponding to $\langle \Phi | (1+\Lambda) \overline{a^\dagger_q}$ or $\langle \Phi | (1+\Lambda) \overline{a_p}$, respectively. \section{GFCC Workflow and Bottlenecks} A typical GFCC workflow for calculating the many-body electronic structure of a quantum system is shown in Fig. \ref{workflow}. As shown, prior to the practical calculation of the GFCC matrix, conventional Hartree-Fock (HF) and coupled cluster (CC) calculations need to be performed to obtain the converged reference wave function and the cluster amplitudes $T$ and $\Lambda$. The computational cost of the conventional HF calculation scales as $\mathcal{O}(N^3)$, while the computational cost of the conventional CC calculation depends on the truncation level in the many-body expansion of the cluster amplitudes. For a coupled cluster singles and doubles (CCSD) calculation, where only one- and two-body terms are included in $T$ and $\Lambda$, the computational cost scales as $\mathcal{O}(O^2V^4)$ ($O$ denotes the number of occupied orbitals, $V$ denotes the number of virtual orbitals, and $O+V=N$). To construct the retarded GFCC matrix (and similarly for the advanced GFCC matrix), the key step is to use an iterative linear solver to solve Eq. (\ref{eq:xplin}) for $X_p$ (the inner cycle of Fig. \ref{workflow}). For a given orbital $p$ and frequency $\omega$, the computational cost of each iteration at the GFCCSD level scales approximately as $\mathcal{O}(O^3V^2)$.
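The MOR construction of Eq. (\ref{model}) can be sketched numerically on a small dense stand-in problem. In the sketch below, a random non-symmetric matrix plays the role of $\bar{H}_N$, and $b$ and $c$ are random stand-ins for the right-hand-side and projection vectors; none of this is the actual GFCC implementation, and the sizes, shifts, and sampling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, eta = 40, 0.01
Hbar = np.diag(np.linspace(1.5, 3.5, n)) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

def solve_full(w):
    # Full retarded-type system (w - i*eta + Hbar) X = b at one frequency.
    return np.linalg.solve((w - 1j * eta) * np.eye(n) + Hbar, b.astype(complex))

# Step 1: sample a few frequencies and orthonormalize the solutions into S
# (real and imaginary parts are both kept so the basis stays real).
samples = np.linspace(-0.8, -0.4, 8)
basis = []
for w in samples:
    X = solve_full(w)
    basis += [X.real, X.imag]
S, _ = np.linalg.qr(np.array(basis).T)          # orthonormal columns, m = 16

# Step 2: project: H_hat = S^T Hbar S, b_hat = S^T b, c_hat^T = c^T S.
H_hat, b_hat, c_hat = S.T @ Hbar @ S, S.T @ b, S.T @ c
m = S.shape[1]

def G_full(w):
    return c @ solve_full(w)

def G_mor(w):
    # Reduced m x m solve, reusable over the whole frequency regime.
    Xh = np.linalg.solve((w - 1j * eta) * np.eye(m) + H_hat, b_hat.astype(complex))
    return c_hat @ Xh
```

At a sampled frequency the reduced model is exact (the solution lies in the span of $S$), and in between the sample points it interpolates the Green's function at $\mathcal{O}(m^3)$ cost per frequency instead of a full $n$-dimensional solve.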
The MOR step is performed after solving the linear equations (as marked by the red dashed frame in Fig. \ref{workflow}), and involves two steps: (i) a Gram-Schmidt (GS) orthogonalization of the newly obtained $X_p$'s with respect to the previous orthonormal vectors (if any) to generate/expand an orthonormal subspace, and (ii) a projection of the original similarity-transformed Hamiltonian onto the aforementioned subspace. The cost of the GS step scales as $\mathcal{O}(m^2O^2V)$, while the cost of the projection scales as $\mathcal{O}(mO^3V^2)$. Finally, after performing MOR, one then needs to solve the projected GFCC linear system and compute the spectral function, whose cost scales approximately as $\mathcal{O}(N_{\omega}m^3)$, with $N_{\omega}$ being the total number of frequencies. It is worth mentioning that, as discussed in Ref. \citenum{peng19_3185}, the rank of the subspace (i.e. the dimension $m$) depends on the required accuracy, the interpolated frequency regime, and the frequency interval. Typically, for a designated frequency regime being interpolated, the number of levels in the GFCC loop is usually less than five, and the rank of the subspace is $(5\sim33)\times N$, where $N$ is the number of MOs. \begin{figure}[htbp] \centerline{\includegraphics[width=0.6\textwidth]{Figure2.pdf}} \caption{The GFCC workflow for the many-body electronic structure calculation of a quantum system.} \label{workflow} \end{figure} From the above workflow and scaling analysis, the bottleneck of a GFCC calculation mainly comes from three parts: (i) the CC calculations to obtain the cluster amplitudes, (ii) the GFCC iterations, and (iii) the projection of the GFCC linear system onto the constructed orthonormal subspace. Based on our experience, both CC and GFCC calculations are bound by (a) intensive computation and communication and (b) the performance of the iterative solver.
Regarding (a), there are many similar tensors and tensor contractions shared between the CC and GFCC calculations, which indicates that a universal tensor algebra library may be applied. Our effort in designing and applying such a tensor algebra library specific to many-body methods will be detailed in the following section. For (b), it is worth mentioning the fundamental mathematical difference between the CC equations and the GFCC equations. The CC equations used to determine the cluster amplitudes are a set of coupled, energy-independent, non-linear algebraic equations, while the GFCC equations are linear and $\omega$-dependent. The difficulty of efficiently solving these linear and non-linear equations in the GFCC workflow comes from the fact that there is no single solver able to deal efficiently with both linear and non-linear systems; multiple solvers need to be included in the workflow to avoid severe load imbalance (caused by failures to converge in some scenarios when using certain solvers), and to strike a balance between efficiency and stability. According to previous studies\cite{pulay80_393,pulay82_556}, especially the early self-consistent field and ground-state coupled cluster studies, the direct inversion in the iterative subspace (DIIS) solver usually exhibits fast or even super-linear convergence for non-linear equations. In our pilot GFCC calculations, we have also applied DIIS to solve the GFCC linear equations for small molecular systems such as CO and N$_2$, and found that the DIIS solver can be faster than the Lanczos iterative method in computing the valence spectral function. The difficulty of the biconjugate gradient (BiCG) method (built on the unsymmetric Lanczos iteration) in converging to the solution of the $\omega$-dependent GFCC linear equations has also been reported in the spectral studies of single atoms \cite{matsushita18_034106}.
Indeed, in the BiCG method the residual does not decrease monotonically, and convergence of the linear equation is not guaranteed. However, this does not mean that DIIS is superior to the Lanczos iterative method; the difference often comes down to how the initial guess and preconditioner are chosen for each solver. From our own experience, the difficulty of applying DIIS to the $\omega$-dependent GFCC linear equations often emerges when computing the shake-up states in the core regime, where the higher-order elements in the $X_p$ amplitude become significant while the quality of the initial guess generated from low-order perturbation theory is poor and the conventional preconditioner (the inverse of the diagonal of the Hamiltonian) is close to singular. Alternatively, one can forgo preconditioning altogether. A popular choice is the generalized minimal residual (GMRes) method \cite{saad86_856}, where the original linear equation is projected onto the orthonormal Krylov subspace generated from power iterations to obtain an approximate solution with minimal residual. Employing GMRes, the residual decreases monotonically, and in principle GMRes will converge to the exact solution after at most $\mathcal{O}(O^2V)$ steps. According to our observations, GMRes is quite stable for solving the GFCC linear equations. Even though GMRes might be slower than other solvers in some scenarios, we have not encountered any situation so far where GMRes is unable to converge within a reasonable number of iterations, in particular when converging some shake-up states in the core regime. Unlike DIIS, whose per-iteration cost is roughly constant at $\mathcal{O}(O^2V)$, the cost of GMRes grows as $\mathcal{O}(n^2O^2V)$ (with $n$ being the iteration number). In comparison with the other linear solvers considered here, GMRes is the only Krylov method that works for general matrices (note that the similarity-transformed Hamiltonian in GFCC is non-symmetric).
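For illustration, a textbook unrestarted GMRes (an Arnoldi basis plus a small least-squares problem per step) applied to a shifted non-symmetric system of the form $(\omega - \mathrm{i}\eta + \bar{H})x = b$; the matrix and vectors below are small synthetic stand-ins, not GFCC quantities.

```python
import numpy as np

def gmres_minimal(A, b, tol=1e-10, maxiter=None):
    """Unrestarted GMRes: at step k, minimize ||b - A x|| over the Krylov
    space span{b, Ab, ..., A^k b}. Works for any complex non-symmetric A,
    and the residual norm is non-increasing by construction."""
    n = len(b)
    maxiter = maxiter or n
    beta = np.linalg.norm(b)
    Q = np.zeros((n, maxiter + 1), dtype=complex)
    H = np.zeros((maxiter + 1, maxiter), dtype=complex)
    Q[:, 0] = b / beta
    residuals = []
    for k in range(maxiter):
        w = A @ Q[:, k]
        for j in range(k + 1):                 # Arnoldi (modified Gram-Schmidt)
            H[j, k] = np.vdot(Q[:, j], w)
            w = w - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(w)
        # Small least-squares problem: min || beta*e1 - H_k y ||
        e1 = np.zeros(k + 2, dtype=complex); e1[0] = beta
        y = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        residuals.append(np.linalg.norm(e1 - H[:k + 2, :k + 1] @ y))
        if residuals[-1] < tol * beta or H[k + 1, k] < 1e-14:
            break
        Q[:, k + 1] = w / H[k + 1, k]
    return Q[:, :len(y)] @ y, residuals

rng = np.random.default_rng(2)
n, omega, eta = 30, -0.4, 0.01
Hbar = np.diag(np.linspace(1.0, 3.0, n)) + 0.05 * rng.standard_normal((n, n))
A = (omega - 1j * eta) * np.eye(n) + Hbar       # shifted non-symmetric system
b = rng.standard_normal(n).astype(complex)
x, residuals = gmres_minimal(A, b)
```

The quadratic growth of the GMRes cost with the iteration count mentioned above comes from the inner orthogonalization loop, whose length grows with $k$.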
\section{Tensor Algebra for Many-body Methods} \begin{figure}[htbp] \centerline{\includegraphics[width=0.45\textwidth]{Figure3.png}} \caption{The TAMM architecture.} \label{TAMMworkflow} \end{figure} As mentioned above, the most expensive parts of the GFCC approach are associated with high-dimensional tensor contractions. For example, the most expensive tensor contraction in the GFCCSD approach can be expressed as \begin{equation} A(i,j,k,l) += B(i,j,m,n)\times C(m,n,k,l) \;, \label{cont1} \end{equation} in which the Einstein summation convention over repeated indices is invoked, and the involved tensors ($A$, $B$, and $C$) have four dimensions with the length of each dimension being $\sim\mathcal{O}(N)$. These multi-dimensional tensor contractions are both compute and communication intensive. Previously, to ease these demands for large many-body calculations and new many-body theory developments, several specialized parallel tensor algebra systems have been developed \cite{hirata039887, hirata2006symbolic, deumens2011software, deumens2011super, solomonik2013cyclops, solomonik2014massively, calvin2015scalable, peng2019coupled}. Progress has been made in automated code generation, memory reduction, and real-space many-body calculations. To further manipulate tensor contractions over the complex space, as required by the new GFCC approach, a more efficient tensor algebra library designed for generalized many-body calculations is needed. The Tensor Algebra for Many-body Methods~(TAMM) library \cite{mutlu2019toward} provides such an infrastructure (see Fig. \ref{TAMMworkflow}) to achieve a scalable, performance-portable implementation of key many-body methods on exascale supercomputing platforms.
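As a library-agnostic illustration (a NumPy sketch, not TAMM code), the contraction in Eq. (\ref{cont1}) becomes a single matrix-matrix multiply once the paired indices are flattened, which is how such kernels are typically mapped onto optimized BLAS; the tensor sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 6
B = rng.standard_normal((N, N, N, N))
C = rng.standard_normal((N, N, N, N))

# A(i,j,k,l) += B(i,j,m,n) * C(m,n,k,l): sum over the repeated pair (m,n).
A = np.einsum('ijmn,mnkl->ijkl', B, C)

# Equivalent GEMM after flattening the index pairs (i,j), (m,n), (k,l):
A_mm = (B.reshape(N * N, N * N) @ C.reshape(N * N, N * N)).reshape(N, N, N, N)
```

Both forms perform $\mathcal{O}(N^6)$ floating-point work on $\mathcal{O}(N^4)$ data, which is why distributed versions of such contractions are simultaneously compute and communication intensive.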
Briefly speaking, the TAMM infrastructure is flexible in allowing the user to specify and manipulate tensor distribution, memory management, and the scheduling of tensor operations, and it supports full complex and mixed complex-real tensor operations, a capability mainly driven by the GFCC developments (the imaginary broadening in Eq. \ref{gfxn1} makes the GFCC approach a many-body theory over the full complex space, in contrast to other excited-state coupled cluster approaches). The TAMM infrastructure is implemented using Global Arrays (GA) \cite{ga1994,ga1995,ga1996} and MPI for scalable parallelization on distributed memory platforms, and uses optimized libraries for efficient intra-node execution of tensor operation kernels on CPUs and accelerators. TAMM takes high-level expressions describing computations on block-sparse tensors and decomposes them into a set of dependent operations that are then passed to a backend for scheduling and execution. High performance is obtained in the backend by focusing on a small number (several tens) of kernels that are extensively optimized by vendor libraries, by code generation plus auto-tuning, or by hand tuning. The TAMM API exposes both these high-level expressions and the distributed, block-sparse tensor abstraction. Execution of operations using the TAMM library involves multi-granular dependence analysis and task-based execution. To leverage parallelism across a set of tensor operations, a dependency analysis scheme (called the levelizer) in the TAMM scheduler analyzes the data dependences between the tensor operations and splits them into independent execution levels. Operations in the same level can be executed concurrently, with synchronizations between levels. This allows multiple operations to be executed at the same time, exposing more parallelism and improving processor utilization.
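The levelization idea can be sketched as a longest-path computation on the operation DAG; the operation names and dependences below are hypothetical and only illustrate the scheme, not TAMM's actual API.

```python
from collections import defaultdict

def levelize(ops, deps):
    """Assign each operation a level one greater than the deepest operation
    it depends on; operations within a level are mutually independent."""
    level = {}
    def resolve(op):
        if op not in level:
            level[op] = 1 + max((resolve(d) for d in deps.get(op, ())), default=-1)
        return level[op]
    for op in ops:
        resolve(op)
    groups = defaultdict(list)
    for op in ops:
        groups[level[op]].append(op)
    return [groups[k] for k in sorted(groups)]

# Hypothetical mini-DAG: i1 and i2 are independent intermediates; i3 needs i1,
# and the result r needs both i2 and i3.
ops = ['i1', 'i2', 'i3', 'r']
deps = {'i3': ['i1'], 'r': ['i2', 'i3']}
levels = levelize(ops, deps)
# → [['i1', 'i2'], ['i3'], ['r']]
```

Operations in each returned group can then be dispatched concurrently, with a synchronization between groups, as described above.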
The operations that can be scheduled in parallel are executed in a single program multiple data (SPMD) fashion. The execution is compatible with MPI, and the operations are collectively executed on a given MPI communicator. Each operation is further partitioned into tasks, which are produced using task iterators. Each task computes a portion of the operation, typically a contribution to a block of data in the output tensor. Until it begins execution, a task is migratable and can be scheduled for execution on any compute node or processor core. Once execution of a task begins, the data required by the task are transferred to its location. At this point, the task is bound to the process in which it is executing and cannot be migrated. TAMM uses a GPU execution scheme (with TALSH \cite{TALSH} as the underlying GPU tensor algebra engine) in which we make use of localized summation loops to limit the transfer of output blocks from GPUs to CPUs. By keeping the output block that is being updated by multiple input tensor blocks on the GPU until all updates are finished, we are able to reduce the data transfer between CPUs and GPUs. TAMM also uses a non-blocking GPU kernel launch scheme to enable data transfer/compute overlap between successive summation loop iterations. The productivity and performance benefits of TAMM are presented in the upcoming evaluation section. \section{Implementation Details} In our current implementation of the GFCC approach, to ease the memory/storage demands and increase data locality (to reduce communication), a well-controlled, on-the-fly pivoted Cholesky decomposition (CD) is performed on the four-dimensional atomic-orbital-based electron repulsion integral (ERI) tensors right after the relatively cheap Hartree-Fock calculation.
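The pivoting logic of such a CD can be sketched on a small positive semidefinite stand-in matrix; the random low-rank matrix below is only a surrogate for the two-index-flattened ERI tensor, and the threshold and sizes are arbitrary assumptions.

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-8):
    """On-the-fly pivoted Cholesky of a symmetric PSD matrix M: at each step
    pick the largest remaining diagonal as the pivot, and stop once it falls
    below tol, so the number of vectors adapts to the effective rank of M."""
    d = np.diag(M).astype(float).copy()   # residual diagonal
    vecs = []
    while d.max() > tol:
        p = int(np.argmax(d))
        L = M[:, p] - sum(v[p] * v for v in vecs)
        L = L / np.sqrt(L[p])
        vecs.append(L)
        d -= L * L
    return np.array(vecs)                 # shape: (rank, n)

rng = np.random.default_rng(4)
B = rng.standard_normal((50, 7))
M = B @ B.T                               # rank-7 PSD stand-in for the ERI matrix
L = pivoted_cholesky(M)
```

For this rank-7 example, 7 vectors of length 50 (350 numbers) reproduce all $50^2$ matrix elements to the requested threshold, mirroring the $\mathcal{O}(V^4)\rightarrow\mathcal{O}(V^2N)$ storage reduction discussed next.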
According to a previous study \cite{kowalski17_4179}, by using the generated three-index Cholesky vectors instead of four-index ERI tensors, one can replace the $\mathcal{O}(N^5)$ tensor transformation from atomic-orbital to molecular-orbital space with two $\mathcal{O}(N^4)$ steps, and significantly reduce the storage requirement from $\mathcal{O}(V^4)$ for the four-index ERIs to $\mathcal{O}(V^2N)$ for the Cholesky vectors in the ground- and excited-state CC calculations. Before entering the main GFCC loop, the major contractions have been designed to maximize the number of constant intermediate tensors of small and medium size (for the GFCCSD method, intermediate tensor sizes need to be $\le\mathcal{O}(O^2V^2)$). These intermediate tensors are pre-computed to ease the operation demands in the main GFCC loop. Our main GFCC loop is shown in Algorithm~\ref{alg:gfcc_pseudo}, where we focus on the GFCCSD method. At the high level, there is an outer loop that checks for the convergence of the entire GFCCSD calculation. We refer to iterations of this loop as $levels$. Each $level$ consists of two loops: one over the frequencies ($\omega$'s) of the given $level$, and a second over all the orbitals ($p$'s). In the first loop, the $\omega$'s are sampled following the adaptive midpoint refinement strategy described in Ref. \cite{vanbeeumen17_4950}. In the second loop, the GFCCSD singles and doubles equations need to be solved for all ($\omega$,$p$) pairs in the given level. Here we focus on Lines 4$-$15 of Algorithm~\ref{alg:gfcc_pseudo}, which constitute the most expensive tensor contractions and communication, as well as the most severe load imbalance, of each $level$ in the overall iterative GFCCSD calculation. The upper bound on the maximum number of $level$s ($L_{max}$) in a GFCC calculation is determined as follows.
Given a frequency regime $\left[\omega_\text{min}, \omega_\text{max}\right]$ and a desired frequency resolution $\Delta\omega$, the total number of available frequencies, $N_{\text{tot},\omega}$, in the regime is $N_{\text{tot},\omega} = \frac{\omega_\text{max}-\omega_\text{min}}{\Delta\omega}+1$, and the maximum number of frequencies at $level$ $L$, $N_{\text{max},\omega}(L)$, is given by \begin{equation} N_{\text{max},\omega}(L) = \left\{ \begin{array}{lcl} 3 & \mbox{for} & L = 1,\\ 2^{L-1} & \mbox{for} & L > 1. \end{array}\right. \end{equation} Since $\sum_{L=1}^{L_\text{max}} N_{\text{max},\omega}(L) \le N_{\text{tot},\omega}$, the upper bound of $L_\text{max}$ is \begin{equation} L_\text{max} \le \log_2 \left( \frac{\omega_\text{max}-\omega_\text{min}}{\Delta\omega} \right). \label{Lmax} \end{equation} According to Eq. (\ref{Lmax}), taking a frequency regime of $[-0.8,-0.4]$ a.u. as an example, if $\Delta\omega=0.01$ a.u., then $L_\text{max}\le 5$. \begin{algorithm*}[htbp] \small \SetKw{To} {\textbf{to}} \SetKw{Step} {\textbf{step}} \SetKwInOut{Input} {input} \SetKwInOut{Output} {output} \SetKwRepeat{Do}{do}{while}% int $level$ = 0; \\ Setup process groups based on resources provided to the application run \\ \While{not converged} { \For {$\omega$ = 0 \To $n\_freq$} { task\_list = ($\omega$,p) for p = 0 \To $n\_orbitals$ \\ divide all tasks in task\_list across process groups \\ \For {p = 0 \To $n\_orbitals$} { each process group executes a task ($\omega$,p) \\ determine task ($\omega$,p) for a process group ${\rm PG}_i$ \\ setup intermediate and output tensors \\ compute initial guess \\ \While{not converged} { solve GFCC singles equations $\mathcal{O}(O^3V)$ \\ solve GFCC doubles equations $\mathcal{O}(O^3V^2)$ \\ GMRES micro loop $ngmres * \mathcal{O}(O^3V^2)$ } write output tensors for ($\omega$,p) to disk \\ } synchronize across all process groups \\ } Gram-Schmidt ($\mathcal{O}(m^2O^2V)$) \\ project GFCC linear systems onto the subspace constructed for all
$(\omega,p)$'s in the current $level$ ($\mathcal{O}(O^3V^2)$) \\ compute spectral function ($\mathcal{O}(N_{\omega}m^3)$) \\ determine convergence (OR) list of frequencies to be processed in the next $level$ \\ $level$++; } \caption{Retarded GFCCSD Approach} \label{alg:gfcc_pseudo} \end{algorithm*} We use the TAMM library to implement the GFCCSD approach shown in Algorithm~\ref{alg:gfcc_pseudo}. The task list in a given $level$ is the list of all ($\omega$,$p$) pairs that need to be processed in that $level$. Since the computations of all tasks in a given $level$ are independent, we divide all the $p$'s for a given $\omega$ across process groups that are set up at the beginning of the calculation. All process groups are of the same size. The size of a process group for computing each task is determined automatically for a given problem size and the resources provided for that run. The size of each process group can also be provided as an input parameter for a given GFCCSD calculation. We present a detailed analysis of using different-sized process groups for each task in the upcoming evaluation section. It is worth mentioning that, due to differences between the orbitals, the convergence behavior differs across ($\omega$,$p$) pairs, which can lead to severe load imbalance. To see this, note that for a given $\omega$ the spectral function is the trace of the GFCC matrix (the sum of its diagonal elements); therefore, if $\omega$ lies in the valence regime, the diagonal contributions from the core orbitals will be negligible due to the relatively weak coupling between the core and valence orbitals. In terms of convergence, the corresponding GFCC calculations will usually converge within one or two iterations. Fast convergence can also be observed, for the same reason, when computing the valence contributions to the core spectral function.
However, computing the contributions from orbitals lying close to $\omega$ requires more iterations, in particular for shake-up states, where double excitations dominate the ionized wave function. \begin{table}[] \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|c|} \hline Nodes & Basis set & Basis functions & NWChem & TAMM \\ \hline 100 & 6-31G & 424 & 2.8 & 0.22 \\ 220 & cc-pVDZ & 737 & 13 & 0.58 \\ 256 & aug-cc-pVDZ & 1243 & 74 & 2.5 \\ \hline \end{tabular} } \caption{CCSD performance compared to NWChem for the ubiquitin DGRTL fragment on OLCF Summit. Time per CCSD iteration is given in minutes. NWChem has a CPU-only implementation; the TAMM-based Cholesky-CCSD uses GPUs.} \label{tab:ccsd} \end{table} \begin{table}[] \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|l|} \hline Impl. & \begin{tabular}[c]{@{}c@{}}NWChem\\ CCSD\end{tabular} & \begin{tabular}[c]{@{}c@{}}TAMM\\ Cholesky-CCSD\end{tabular} & \multicolumn{2}{c|}{TAMM-GFCCSD-IP} \\ \cline{4-5} & & & \multicolumn{1}{l|}{Closed-shell} & Open-shell \\ \hline SLOC & 11314 & 236 & 1700 & \multicolumn{1}{c|}{2700} \\ \hline \end{tabular} } \caption{SLOC (source lines of code) counts for different CCSD and GFCC implementations. For the TAMM-GFCCSD-IP implementations, the closed-shell case deals only with the alpha electrons in the system, while the open-shell case deals with both alpha and beta electrons.} \label{tab:sloccount} \end{table} \begin{figure} \centering \includegraphics[clip,angle=0, width=0.45\textwidth]{Figure4.png} \caption{Number of iterations (y-axis) needed to converge the linear equations for $X_p$ operators corresponding to different orbitals of guanine for $\omega$=-0.4 a.u.} \label{guan} \end{figure} \begin{figure} \centering \includegraphics[clip,angle=0, width=0.45\textwidth]{Figure5.png} \caption{Number of iterations (y-axis) needed to converge the linear equations for $X_p$ operators corresponding to different orbitals of cytosine for $\omega$=-0.4 a.u.} \label{cyto} \end{figure} To further illustrate the load imbalance originating from the different orbital contributions to the spectral function, Figs. \ref{guan} and \ref{cyto} show the number of iterations needed to converge the GFCC linear equations for two small molecules, guanine and cytosine, at a valence frequency for all the occupied molecular orbitals. As can be seen, for the probing frequency $\omega$ = -0.4 a.u., which is very close to the valence band edges of the guanine and cytosine base molecules, there is a clear distinction between the core region and the valence region in terms of the number of iterations. Due to the aforementioned weak coupling between the core and valence molecular orbitals, the contribution of the core orbitals to the trace of the GFCCSD matrix is negligible; thus only one or two iterations are usually needed to solve the GFCCSD linear equations, and the solving process is insensitive to the choice of linear solver. Note that in Figs. \ref{guan} and \ref{cyto} there are also some valence orbitals requiring only a low number of iterations. This is because the spectral amplitude at the probing frequency ($\omega=-0.4$ a.u.) has not reached its maximum for these orbitals.
When the probing frequency moves towards the ionization potentials to which the molecular orbitals contribute the most, the number of iterations generally rises again. For example, only five iterations are needed to converge the GFCCSD linear equation for molecular orbital \#38 of the guanine base at the probing frequency $-0.4$ a.u. If we change the probing frequency to $-0.25$ a.u., which is closer to the first IP of the guanine base, the number of iterations goes up to $\sim$40 (molecular orbital \#38 contributes the most to the first IP of the guanine base). To mitigate load imbalance between process groups computing different orbitals, a process group that finishes an orbital early immediately picks another orbital from the list remaining to be computed for that frequency; it does not wait for other process groups to finish before proceeding to the next set of orbitals. However, a few process groups can still be idle while the last set of orbitals for a given frequency is being processed, as indicated by the barrier in Line 15 of Algorithm~\ref{alg:gfcc_pseudo}. In practice, we do not face this load imbalance for the last set of orbitals processed for a given $\omega$. This is because, for a GFCC calculation aimed at a real science problem, the number of orbitals is large enough that computing even a fraction of the orbitals for a single frequency keeps an entire modern supercomputer such as Summit busy. We finish the entire GFCC calculation through a series of application restarts, since no LCF machine provides the job running time required to finish the GFCC calculation for even a single frequency in one attempt of the application run; the last set of orbitals can simply be processed in a separate job.
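The benefit of this first-come-first-served orbital assignment can be illustrated with a toy list-scheduling simulation; the per-orbital iteration costs and the group count below are made up for illustration and do not come from an actual run.

```python
import heapq

def greedy_makespan(costs, n_groups):
    """Process groups pull the next remaining orbital as soon as they finish
    (the scheme described above); returns the simulated wall-clock makespan."""
    finish = [0.0] * n_groups
    heapq.heapify(finish)
    for c in costs:
        t = heapq.heappop(finish)      # earliest-free group takes the task
        heapq.heappush(finish, t + c)
    return max(finish)

def static_makespan(costs, n_groups):
    """Pre-assign orbitals round-robin and wait at a barrier for all groups."""
    loads = [0.0] * n_groups
    for i, c in enumerate(costs):
        loads[i % n_groups] += c
    return max(loads)

# Hypothetical per-orbital costs: a few expensive near-edge valence orbitals
# interleaved with cheap core orbitals that converge in one or two iterations.
costs = [30, 1, 1, 1, 35, 1, 1, 1, 40, 1, 1, 1]
g, s = greedy_makespan(costs, 4), static_makespan(costs, 4)
```

With four groups, the greedy scheme finishes in 42 time units, while a round-robin assignment that happens to place all three expensive orbitals on the same group takes 105; dynamic assignment bounds the idle time by a single task's duration.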
After all orbitals for a given frequency are computed, the calculation proceeds to the next frequency in that $level$. Another optimization that is straightforward to implement in our current infrastructure is to process all orbitals for all the frequencies in a given $level$ simultaneously using process groups. In the current situation, however, the benefit of such an implementation would be greatly compromised by the relatively short job walltimes on LCF machines. The current version of GFCCLib can be obtained at https://github.com/spec-org/gfcc/. \section{Evaluation} We evaluate the performance of the GFCC application on OLCF Summit~\cite{olcfsummit}. Each Summit node has two 22-core POWER9 CPUs and 512GB of CPU memory. Each node is also equipped with 6 NVIDIA Volta GPUs with a total GPU memory of 96GB. The GFCC application was compiled using the GCC 8.1 compiler, CUDA 10.1 toolkit, IBM Spectrum MPI 10.3, IBM ESSL 6.1, and BLIS 0.6. Table~\ref{tab:ccsd} shows the performance of the TAMM-based Cholesky CCSD in comparison with the state-of-the-art CCSD in the open-source quantum chemistry platform NWChem\cite{valiev2010nwchem}. The NWChem run was performed using all the CPU cores available on each node, while the TAMM-based implementation used 6 CPUs mapped to 6 GPUs on each node. As can be seen, the TAMM-based Cholesky CCSD is up to $\sim$13 times faster than NWChem for the closed-shell calculation of the large ubiquitin DGRTL fragment~\cite{ubiquitin}. The promising speed-up of the TAMM-based Cholesky CCSD with respect to the NWChem CCSD module can be attributed to many factors, including GPU acceleration, dependency analysis and scheduling of independent contractions in parallel to enable barrier-minimized scheduling across tensor operations, better data/tensor distribution enabled by Cholesky vectors, and locality-aware parallelization methodologies.
Note that Cholesky vectors are not used in the NWChem CCSD module (and there is no GFCC module in NWChem). Generally speaking, the utilization of Cholesky vectors can greatly reduce the memory/storage demands of the calculation, and therefore the communication demands \cite{kowalski17_4179}. In particular, the storage requirement associated with the two-electron integral tensors in the molecular orbital basis ($\mathcal{O}(N^4)$) can be avoided; only $\mathcal{O}(N^3)$ storage is needed for the Cholesky vectors. Therefore, the $\mathcal{O}(N^5)$ atomic orbital (AO) to molecular orbital (MO) two-electron integral tensor transformation is bypassed and replaced by the $\mathcal{O}(N^4)$ AO-to-MO Cholesky vector transformation. It is worth mentioning that CD can help refactor and simplify a portion of the CCSD tensor contractions to yield lower computational scaling, but it is unable to refactor the most expensive CCSD tensor contraction, which scales as $\mathcal{O}(O^2V^4)$. Due to the large number of basis functions (and therefore high storage demand) and the complexity of the conventional CCSD tensor contractions, we are currently unable to perform the conventional TAMM-CCSD calculation for C60 to isolate the performance advantage coming solely from the Cholesky vectors. On the other hand, the overall performance improvement associated with the utilization of Cholesky vectors in the CCSD approach depends on many factors, such as the algorithm design, the studied system, and the decomposition threshold. In a previous CD-CCSD study of the benzene trimer, utilizing Cholesky vectors (with a 10$^{-4}$ threshold) resulted in a modest speedup of 13\% \cite{sherrill13_2687}. To minimize load imbalance overheads, the CCSD execution is performed by grouping all operations in a single iteration, analyzing their dependencies, and organizing them into levels of independent operations. This scheduling of operations applies to all methods implemented using TAMM.
Our GFCCSD implementation also benefits from all the optimizations mentioned above. Here, we want to emphasize that, even though NWChem calculations may also benefit from GPU acceleration, such speedups have been limited to non-iterative higher-order many-body methods (note that the GPU implementation in the official releases of NWChem is only available for the perturbative part of the CCSD(T) formalism \cite{kowalski11_1316}; see Ref. \cite{kowalski13_1949} for a more recent example), and it is practically infeasible to make NWChem CCSD benefit from GPUs due to the large volume of code ($\sim$11,000 lines, see Table~\ref{tab:sloccount}) that would need to be rewritten. On the other hand, since TAMM provides a high-level abstraction to compose sequences of tensor operations, methods implemented using TAMM can simply specify the hardware resources on which the operations are to be executed; this keeps the application code at a high level while the implementation details of how the operations are executed are handled internally by TAMM. It also allows the application to be ported to newer architectures as TAMM adds support for them, without rewriting the application code itself.
\begin{table}[] \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{} & \multicolumn{7}{c|}{\#Parallel Tasks} \\ \hline \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{1} & \multicolumn{1}{c|}{2} & \multicolumn{1}{c|}{3} & \multicolumn{1}{c|}{4} & \multicolumn{1}{c|}{10} & \multicolumn{1}{c|}{20} & \multicolumn{1}{c|}{40} \\ \hline \multicolumn{1}{|c|}{Nodes, GPUs per task} & \multicolumn{7}{c|}{Time per iter per task} \\ \hline 20,120 & X & X & X & X & 485 & 493 & 522 \\ 40,240 & X & X & 252 & 250 & 257 & 264 & 358 \\ 60,360 & X & 176 & 176 & 178 & 180 & 190 & X \\ 75,450 & 153 & 147 & 150 & 148 & 154 & 163 & X \\ 100,600 & 119 & 120 & 120 & 119 & 126 & 146 & X \\ 150,900 & 91 & 92 & 95 & 99 & 105 & X & X \\ 200,1200 & 77 & 80 & 81 & 85 & 88 & X & X \\ 250,1500 & 70 & 68 & 69 & 71 & X & X & X \\ \hline \end{tabular} } \caption{The operation time per task (in seconds) for one GFCCSD iteration with different settings of the number of nodes, number of GPUs, and number of tasks (orbitals) processed in parallel on OLCF Summit. The test system is the C60 molecule with the aug-cc-pVDZ basis set, with a total of 1380 basis functions and 130 linear dependencies.} \label{gfcc_scaling_data} \end{table} \begin{table}[] \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|} \hline Nodes, GPUs & get & compute & add & misc. & Total \\ \hline 75, 450 & 4.4 & 120 & 0.35 & 28.3 & 153 \\ 100, 600 & 2.5 & 89 & 0.2 & 27.3 & 119 \\ 150, 900 & 3 & 60 & 0.3 & 31.7 & 95 \\ 200, 1200 & 2.3 & 44.6 & 0.2 & 33 & 80 \\ 250, 1500 & 2 & 36 & 0.13 & 30 & 68 \\ \hline \end{tabular} } \caption{Performance analysis of a single iteration for the task ($\omega$,p) = (-0.4,160). The table shows the time (in seconds) spent in getting input tensors, computing, and putting (or adding) to output tensors. The total time per iteration and the time not spent in communication or computation (``misc.'' time, dominated by load imbalance at the barrier) are also shown. We observe that all tasks exhibit nearly identical runtime behavior.} \label{tab:taskprofile} \end{table} To study how the number of parallel tasks affects the operation time of the GFCCSD calculation, we performed a series of tests with different numbers of nodes (and GPUs), recording the time for one GFCCSD iteration; the results are shown in Table \ref{gfcc_scaling_data}. We chose the C60 molecule (with $C_1$ symmetry) as our test system. Employing the aug-cc-pVDZ basis set, the GFCCSD tests were performed with over 1,000 basis functions on OLCF Summit. Here, a guideline is provided on choosing the number of nodes and tasks for a large GFCCSD calculation. In Table~\ref{gfcc_scaling_data}, each row corresponds to a particular configuration. For example, row 1 shows the performance when using 20 nodes per task for various numbers of parallel tasks. For the configuration using 20 nodes per task and 40 tasks (the last entry in row 1), the calculation was run on 800 nodes (20$\times$40), where 40 process groups were executing the 40 different tasks for a given frequency simultaneously. This table also provides insights into the weak and strong scaling behavior of the implementation. It is worth mentioning that weak and strong scaling behaviors characterize the performance of a parallel application on supercomputers. Strong scaling\cite{strongscaling} allows us to measure the parallel efficiency of an application using the speedup computed by increasing the number of computing resources for a fixed problem size. Weak scaling\cite{weakscaling}, on the other hand, describes application behavior when both the number of computing resources and the problem size are increased. Going down a column of Table~\ref{gfcc_scaling_data}, the same number of tasks is performed on an increasing number of total nodes, effectively strong scaling the computation. Along a row, more tasks are performed in parallel, with each task performed on the same number of nodes.
This corresponds to weak scaling the computation. In general, GFCC requires good weak scaling behavior so that multiple GFCCSD calculations can be performed efficiently in parallel. However, improved strong scaling enables the use of fewer parallel tasks, reducing the impact of load imbalance. We observe that weak scaling leads to a small increase in total time. For example, the per-iteration time increases from 77 seconds to 88 seconds as we go from executing 1 task to 10 tasks in parallel, with each task running on 200 compute nodes. Importantly, while not scaling perfectly, the implementation scales under both the weak and the strong scaling regimes. Combining both scaling modalities allows the GFCCSD implementation to run on very large node counts without significant performance degradation. To gain further insight into the implementation's strong-scaling behavior, we profiled the execution of a single GFCCSD iteration for varied node counts. Table~\ref{tab:taskprofile} shows the time spent getting data from input tensors (labeled ``get''), performing the local computation (``compute,'' much of it spent on tensor contractions), putting or adding the result to the output tensors (labeled ``add''), and the total per-iteration time. Time not accounted for by data communication or computation is shown under ``misc.''; it is dominated by load imbalance at each iteration. We observe that compute time improves nearly linearly with node count. Communication time, though initially small, forms an increasingly large fraction of the total time, and the miscellaneous time within each iteration grows to dominate the remaining work. In general, as the per-iteration time approaches a minute, non-compute time becomes significant. By combining weak and strong scaling, we mitigate this effect at large node counts. 
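As a concrete illustration of the strong-scaling behavior, the speedup and parallel efficiency implied by the single-task column of Table~\ref{gfcc_scaling_data} can be computed with a short script (a sketch; the per-iteration times below are taken directly from the table):

```python
def strong_scaling(times_by_nodes):
    """Speedup and parallel efficiency relative to the smallest node count.

    `times_by_nodes` is a list of (nodes, seconds-per-iteration) pairs,
    sorted by increasing node count.
    """
    base_nodes, base_time = times_by_nodes[0]
    rows = []
    for nodes, t in times_by_nodes:
        speedup = base_time / t            # measured speedup over the baseline
        ideal = nodes / base_nodes         # ideal (linear) speedup
        rows.append((nodes, round(speedup, 2), round(speedup / ideal, 2)))
    return rows

# Per-iteration times for a single task, from the scaling table above.
table = [(75, 153), (100, 119), (150, 91), (200, 77), (250, 70)]
print(strong_scaling(table))
```

At 250 nodes the measured speedup over the 75-node baseline is about $2.19\times$ against an ideal $3.33\times$, i.e., roughly $66\%$ parallel efficiency, consistent with the growing share of non-compute time discussed above.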
From the table, we observe that at least 20 nodes are needed for such a large GFCCSD calculation and that, for a fixed number of nodes and GPUs per task, the time per iteration per task remains consistent when executing up to 20 orbitals in parallel, indicating good scalability. Beyond 20 parallel tasks, we see a significant performance hit in the time per iteration per task. This is because the MPI processes across all process groups communicate to fetch from the same input (cluster amplitude) and intermediate tensors, leading to network congestion. Though the tensors involved in the computation of a given task are created and destroyed within the process group computing that task, all process groups share the input and some intermediate tensors that were created on the world process group. One way to address this problem is to replicate the input and intermediate tensors where possible. If the process group size used to compute each task is large enough for a given system size, we can identify and replicate the global cluster amplitude tensors and the GFCCSD intermediate tensors that are frequently accessed by all process groups. Note that these tensors are usually of small or medium size ($<\mathcal{O}(O^2V^2)$). The implementation of this replication scheme is still under active development and will be discussed in future work. \begin{figure}[htbp] \centerline{\includegraphics[width=0.45\textwidth]{Figure6.pdf}} \caption{The computed spectral function of the C60 molecule at the GFCCSD level, compared with the single-particle $GW_0$ (PBE) and experimental spectra. The $GW_0$ spectrum was adapted from Ref. \citenum{qian15_245105}, and the experimental photoelectron spectrum of C60 from Ref. \citenum{benning92_6899}. 
In the GFCCSD calculation of the C60 molecule, the total number of frequency points used to construct the subspace is nine, and the dimension of the subspace $m$ is $\sim$10,800.} \label{specfxn} \end{figure} \section{Testing on the C60 molecule} To test the capability of the developed GFCC library, we chose the fullerene C60 molecule and computed its spectral function in a broad near-valence regime, for the first time, with the many-body GFCCSD method. Since we focus on the post-Hartree--Fock calculation in the valence region, Dunning's correlation-consistent polarized valence double-zeta basis set, cc-pVDZ,\cite{dunning89_1007} was used in the calculation. Note that larger Dunning basis sets can be used to converge the GFCCSD calculations systematically to the complete basis set limit. Our computed spectrum is shown in Fig.~\ref{specfxn}, where we also compare our results with the $GW_0$ (PBE) and experimental results. The lowest ionization potential (IP) computed from GFCCSD/cc-pVDZ is about $-7.58$ eV, almost identical to the experimental value of $-7.6\pm0.2$ eV\cite{benning92_6899}; the deviation is within chemical accuracy ($\sim$1 kcal/mol, or 0.043 eV). The GFCCSD many-body results are superior to the single-particle $GW_0$ results; the latter depend on the density functional used in the calculation\cite{qian15_245105}. Using the local-density approximation (LDA), the reported IP values range from $-7.28$ eV to $-8.22$ eV, and using the generalized gradient approximation (GGA,\cite{langreth83_1809,becke88_3098,perdew92_6671} e.g. the Perdew--Burke--Ernzerhof (PBE) functional\cite{PBE96_3865}), the IP was reported to be $-7.37$ eV. For higher IPs, in particular in the $[-17.5,-5.0]$ eV regime, the $GW_0$ single-particle results red-shift the entire spectrum by $\sim$0.5 eV with respect to the experimental spectrum, while the GFCCSD spectrum shows excellent agreement with experiment. 
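The quoted agreement can be checked with a few lines of arithmetic (the IP values are taken from the text above; the conversion 1 kcal/mol $\approx 0.043$ eV is the standard chemical-accuracy threshold):

```python
CHEM_ACC_EV = 0.043      # chemical accuracy, ~1 kcal/mol expressed in eV

ip_gfccsd = -7.58        # lowest IP from GFCCSD/cc-pVDZ, eV
ip_exp = -7.6            # experimental IP, eV (+/- 0.2 eV)

deviation = abs(ip_gfccsd - ip_exp)      # 0.02 eV
assert deviation <= CHEM_ACC_EV          # well within chemical accuracy
```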
It is worth mentioning that the GFCCSD results can be further improved by including higher-order amplitudes in the GFCC approach and/or by using a larger basis set. Regarding higher-order GFCC approaches (e.g. GFCC with singles, doubles, and triples, GFCCSDT), sometimes even a portion of the higher-order terms can greatly improve the description of the many-body states\cite{kowalski18_214102}. Regarding the basis set effect, employing a more diffuse basis set is not expected to significantly change the IPs of C60: according to a previous EOM-CC study (at the same theoretical level as GFCC)\cite{kowalski14_074304}, the lowest IP of C60 is red-shifted by only $\sim$0.08 eV when switching from the cc-pVDZ basis to the aug-cc-pVDZ basis. By and large, the GFCCSD approach (shown in Fig.~\ref{specfxn}) has produced an accurate description of the electronic structure of the C60 molecule over $\sim$20 eV in the near-valence energy regime. Accurate and efficient routine many-body GFCC calculations for other fullerene molecules and their derivatives can be anticipated in the near future. An accurate description of the electronic structure (in terms of peak positions and amplitudes) of these molecular systems will not only support a clear explanation of static structure--property correlations, but also provide a better understanding of the electron dynamics (through Fermi's golden rule) governing the charge and energy flow when these systems are employed in nanostructured devices. \section{Comparison with Prior Work} The TAMM implementation of the GFCCSD formalism discussed here is the only reported implementation of the GFCC formalism capable of carrying out calculations for hundreds of correlated electrons with $>$1,000 basis functions. As such, it provides a unique capability that can be used not only in simulations of spectral functions but also as an integral component of complex quantum embedding workflows. 
To answer the scientific questions of (i) how the ionizations of DNA fragments and their features change as the system grows and (ii) how the many-body coupled-cluster description differs from the single-particle picture and leads to a more general near-valence ionization picture of longer DNA sequences, we expect the TAMM GFCCSD implementation to be further applied to simulations of the spectral function of very large DNA fragments in complex environments with the aid of quantum embedding schemes. The anticipated effort will go beyond the largest GFCCSD simulation to date (as mentioned in the introduction, the latter was performed for a gas-phase DNA hexamer without the perturbation from the solution environment \cite{peng20_011101}). Compared to prior implementations of correlated Green's function formulations, such as various $GW$ and ADC($n$) approximations, the present GFCC implementation provides a significant advance in (i) simulating larger and more complex quantum systems and (ii) understanding the ionization process in support of various photoelectron spectroscopy studies. Additionally, a unique footprint of the present implementation is its flexibility in employing various representations of the two-electron integrals (such as the Cholesky decomposition) and various parallel models. Furthermore, the present GFCC implementation paves the way for applications of the GFCC approach to realistic systems and to arbitrary frequency regimes. Also, controlling the accuracy of the Dyson equation (the Green's function and self-energy matrices are complementary quantities in the sense of this equation) at the CC level is much easier at the level of the Green's function than at the level of the self-energy operator. Recent studies\cite{hirata17_044108} of perturbative self-energy expansions also demonstrated that these expansions may, in some situations, be characterized by a slow convergence rate. 
Additionally, the MOR methods adapted for the GFCC approach had previously been validated only for small-dimensionality problems.\cite{peng19_3185} In the present GFCC implementation, these techniques were successfully extended to high-dimensionality problems, allowing for the identification not only of the main peaks (which can also be identified by lower-order methods such as $GW$ formalisms) but also of satellite states that are much harder to capture with lower-order methods. To the best of our knowledge, there are no other GFCC implementations that enable routine calculations with $>$1,000 basis functions for arbitrary frequency regimes. From the technical viewpoint, the novel interdisciplinary engineering of the tensor contraction library (TAMM), the MOR algorithm, efficient compression algorithms for the two-electron integrals, and GPUs into a many-body theoretical framework enables us, even with the existing implementation, to perform simulations for even bigger systems described by 2,000--3,000 orbitals. \begin{figure}[tp] \centerline{\includegraphics[width=0.45\textwidth]{Figure7.pdf}} \caption{Schematic representation of development threads and application areas for the scalable TAMM implementations of Green's function coupled cluster.} \label{fig_out} \end{figure} \section{Conclusions and Future Work} In this paper we discussed implementation details of the GFCC approach, which utilizes the capabilities of the novel parallel tensor contraction library TAMM. The discussed effort integrates recent advances in many-body quantum mechanics (GFCC theory), applied mathematics (the MOR algorithm), and high-performance computing (the TAMM library) to reduce the time-to-solution associated with the construction of the Green's function matrix on leadership-class GPU architectures such as OLCF Summit. 
The key elements of our design are parallel algorithms for multi-dimensional tensors, flexible solvers to handle large numbers of linear equations, and compression algorithms (based on the Cholesky decomposition) for the largest class of four-dimensional tensors, the two-electron integrals. A special role in this effort was played by the successful implementation of the MOR algorithm in the GFCC context, which was instrumental in enabling realistic applications of GFCC to simulate the electronic structure of large molecular systems in arbitrary frequency regimes. In the future, the TAMM infrastructure will enable quick deployment of a hierarchy of GFCC approximations accounting for higher-rank excitations, which play an important role in identifying challenging satellite states. Local orbital techniques, in particular (DL)PNO, which have been utilized in the EOM-CC framework with significantly reduced scaling \cite{neese16_034102,neese18_244101,neese19_164123}, will also be implemented in the GFCC infrastructure, targeting larger and more complex systems. These unique features make the GFCC formalism competitive with existing Green's function approaches, including the family of $GW$ and ADC($n$) formalisms. We believe that these formulations can coexist and be used to construct more accurate forms of quantum embedding formalisms that utilize GFCC and, through the Dyson equation, CC self-energies $\Sigma_{\rm CC}(\omega)$ as building blocks (see Fig.~\ref{fig_out}). Natural extensions of GFCC are core-level spectroscopy and transport theory. \section{Acknowledgements} This work was supported by the Center for Scalable, Predictive methods for Excitation and Correlated phenomena (SPEC), which is funded by the U.S. Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences, the Division of Chemical Sciences, Geosciences, and Biosciences. SPEC is located at Pacific Northwest National Laboratory (PNNL), operated for the U.S. 
Department of Energy by the Battelle Memorial Institute under Contract DE-AC06-76RLO-1830. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. \section{CRediT author statement} \textbf{Bo Peng}: Investigation, Supervision, Conceptualization, Methodology, Software, Validation, Data curation, Visualization, Writing-Original draft preparation, Reviewing and Editing; \textbf{Ajay Panyala}: Investigation, Conceptualization, Methodology, Software, Validation, Data curation, Visualization, Writing-Original draft preparation, Reviewing and Editing; \textbf{Karol Kowalski}: Investigation, Supervision, Conceptualization, Methodology, Writing-Original draft preparation, Reviewing and Editing; \textbf{Sriram Krishnamoorthy}: Investigation, Supervision, Conceptualization, Methodology, Writing-Original draft preparation, Reviewing and Editing
\section{Invariant verification} \label{sec:reachalgo} A subproblem for invariant verification is to compute $\reachtube{\H}$, or more specifically, the reachtubes for the set of trajectories $\TL$ in a given mode, up to a time bound. This is a difficult problem, even when $\TL$ is generated by white-box models. The algorithms in~\cite{donze2010breach,DMV:EMSOFT2013,FanMitra:2015df} approximate reachtubes using simulations and sensitivity analysis of ODE models generating $\TL$. Here, we begin with a probabilistic method for estimating sensitivity from black-box simulators. \subsection{Discrepancy functions} \label{sec:disc} Sensitivity of trajectories is formalized by the notion of discrepancy functions \cite{DMV:EMSOFT2013}. For a set $\TL$, a {\em discrepancy function\/} is a uniformly continuous function $\beta: \reals^n \times \reals^n \times \nnreals \rightarrow \nnreals$, such that for any pair of identically labeled trajectories $\langle \tau_1,\ell \rangle, \langle \tau_2, \ell \rangle \in \TL$, and any $t \in \tau_1.{\mathit dom} \cap \tau_2.{\mathit dom}$: \begin{inparaenum}[(a)] \item $\beta$ upper-bounds the distance between the trajectories, i.e., \begin{align} |\tau_1(t) - \tau_2(t)| \leq \beta(\tau_1.\mathop{\mathsf {fstate}},\tau_2.\mathop{\mathsf {fstate}},t), \label{eq:discrepancy} \end{align} and \item $\beta$ converges to $0$ as the initial states converge, i.e., for any trajectory $\tau$ and $t \in \tau.{\mathit dom}$, if a sequence of trajectories $\tau_1,\ldots, \tau_k, \ldots$ has $\tau_k.\mathop{\mathsf {fstate}} \rightarrow \tau.\mathop{\mathsf {fstate}}$, then $\beta(\tau_k.\mathop{\mathsf {fstate}},$ $\tau.\mathop{\mathsf {fstate}},t)$ $\rightarrow 0$. \end{inparaenum} In~\cite{DMV:EMSOFT2013} it is shown how, given a $\beta$, condition~(a) can be used to over-approximate reachtubes from simulations, and condition~(b) can be used to make these approximations arbitrarily precise. 
Techniques for computing $\beta$ from ODE models are developed in~\cite{FanMitra:2015df,FanM:EMSOFT2016,HFMMK:CAV2014}, but these are not applicable here in the absence of such models. Instead, we present a simple method for discovering discrepancy functions that uses only simulations. Our method is based on classical results on PAC learning of linear separators~\cite{KearnsVazirani}. We recall these before applying them to find discrepancy functions. \vspace{-10pt} \subsubsection{Learning linear separators.} \label{Sec: discrepancy} For $\Gamma \subseteq \reals\times\reals$, a \emph{linear separator} is a pair $(a,b) \in \reals^2$ such that \begin{align} \forall (x,y) \in \Gamma.\ x \leq ay + b. \label{eq:separator} \end{align} Let us fix a subset $\Gamma$ that has an (unknown) linear separator $(a_*,b_*)$. Our goal is to discover some $(a,b)$ that is a linear separator for $\Gamma$ by sampling points in $\Gamma$.\footnote{We prefer to present the learning question in this form, as opposed to one where we learn a Boolean concept, because it is closer to the task at hand.} The assumption is that elements of $\Gamma$ can be drawn according to some (unknown) distribution ${\cal D}$. The \emph{error} of a pair $(a,b)$ with respect to ${\cal D}$, measuring its failure to satisfy Equation~\ref{eq:separator}, is defined as $\mathsf{err}_{{\cal D}}(a,b) = {\cal D}(\{(x,y) \in \Gamma\: |\: x > ay+b\})$, where ${\cal D}(X)$ is the measure of set $X$ under distribution ${\cal D}$. Thus, the error is the measure of points (w.r.t. ${\cal D}$) for which $(a,b)$ is not a linear separator. There is a very simple (probabilistic) algorithm that finds a pair $(a,b)$ that is a linear separator for a large fraction of points in $\Gamma$, as follows. \begin{enumerate} \itemsep0em \item\label{alg:linsep1} Draw $k$ pairs $(x_1,y_1), \ldots (x_k,y_k)$ from $\Gamma$ according to ${\cal D}$; the value of $k$ will be fixed later. 
\item\label{alg:linsep2} Find $(a,b) \in \reals^2$ such that $x_i \leq ay_i + b$ for all $i \in \{1,\ldots k\}$. \end{enumerate} Step~\ref{alg:linsep2} involves checking the feasibility of a linear program, and so can be done efficiently. This algorithm, with high probability, finds a linear separator for a large fraction of points. \begin{proposition} \proplabel{linear-sep-learn} Let $\epsilon, \delta \in \plreals$. If $k \geq \frac{1}{\epsilon}\ln\frac{1}{\delta}$ then, with probability $\geq 1-\delta$, the above algorithm finds $(a,b)$ such that $\mathsf{err}_{{\cal D}}(a,b) < \epsilon$. \end{proposition} \begin{proof} The result follows from the PAC-learnability of concepts with low VC-dimension~\cite{KearnsVazirani}. However, since the proof is very simple in this case, we reproduce it here for completeness. Let $k$ be as in the statement of the proposition, and suppose the pair $(a,b)$ identified by the algorithm has error $> \epsilon$. We will bound the probability of this happening. Let $B = \{(x,y)\: |\: x > ay+b\}$. We know that ${\cal D}(B) > \epsilon$. The algorithm chose $(a,b)$ only because no element from $B$ was sampled in Step~\ref{alg:linsep1}. The probability that this happens is $\leq (1-\epsilon)^k$. Observing that $(1-s) \leq e^{-s}$ for any $s$, we get $(1-\epsilon)^k \leq e^{-\epsilon k} \leq e^{-\ln \frac{1}{\delta}} = \delta$. This gives us the desired result. \end{proof} \subsubsection{Learning discrepancy functions} Discrepancy functions are computed from simulation data independently for each mode. Let us fix a mode $\ell \in \L$ and a domain $[0,T]$ for each trajectory. The discrepancy functions that we learn from simulation data take one of two forms, which we discuss in turn. \vspace{-10pt} \paragraph{Global exponential discrepancy (GED)} is a function of the form \[ \beta(x_1,x_2,t) = |x_1 - x_2| Ke^{\gamma t}, \] where $K$ and $\gamma$ are constants. 
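The sampling algorithm and the sample-size bound of \propref{linear-sep-learn} above can be illustrated with a short sketch (plain Python; note that Step~\ref{alg:linsep2} is only a feasibility problem, so for any fixed slope $a$, taking $b = \max_i (x_i - a y_i)$ yields a separator for the drawn samples):

```python
import math

def sample_size(eps, delta):
    """Smallest k satisfying k >= (1/eps) * ln(1/delta), as in the
    proposition: with probability >= 1 - delta the learned separator
    then has error < eps."""
    return math.ceil(math.log(1.0 / delta) / eps)

def fit_separator(samples, a=0.0):
    """Step 2 as a feasibility problem: for a fixed slope `a`, choosing
    b = max(x - a*y) over the samples guarantees x_i <= a*y_i + b for
    all drawn points (x_i, y_i)."""
    b = max(x - a * y for x, y in samples)
    return a, b

# err < 0.05 with probability >= 0.95 already needs only 60 samples:
print(sample_size(0.05, 0.05))  # 60
```

In practice one would pick $(a,b)$ by solving the linear program (e.g. minimizing an objective over the feasible region, as done below for discrepancy functions) rather than fixing the slope in advance; the sketch only shows that feasibility itself is cheap.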
Thus, for any pair of trajectories $\tau_1$ and $\tau_2$ (for mode $\ell$), we have \[ \forall t \in [0,T].\ |\tau_1(t) - \tau_2(t)| \leq |\tau_1.\mathop{\mathsf {fstate}} - \tau_2.\mathop{\mathsf {fstate}}| Ke^{\gamma t}. \] Taking logs on both sides and rearranging terms, we have \[ \forall t.\ \ln \frac{|\tau_1(t) - \tau_2(t)|}{|\tau_1.\mathop{\mathsf {fstate}} - \tau_2.\mathop{\mathsf {fstate}}|} \leq \gamma t + \ln K. \] It is easy to see that a global exponential discrepancy is nothing but a linear separator for the set $\Gamma$ consisting of the pairs $(\ln \frac{|\tau_1(t) - \tau_2(t)|}{|\tau_1.\mathop{\mathsf {fstate}} - \tau_2.\mathop{\mathsf {fstate}}|}, t)$ for all pairs of trajectories $\tau_1,\tau_2$ and times $t$. Using the sampling-based algorithm described before, we can construct a GED for a mode $\ell \in \L$, where sampling from $\Gamma$ reduces to using the simulator to generate traces from different states in $\TL_{\sf init, \ell}$. \propref{linear-sep-learn} guarantees the correctness, with high probability, of any separator discovered by the algorithm. However, for our reachability algorithm not to be too conservative, we need $K$ and $\gamma$ to be small. Thus, when solving the linear program in Step~\ref{alg:linsep2} of the algorithm, we search for a solution minimizing $\gamma T + \ln K$. \vspace{-10pt} \paragraph{Piece-wise exponential discrepancy (PED).} The second form of discrepancy function we consider depends on dividing the time domain $[0,T]$ into smaller intervals and finding a global exponential discrepancy for each interval. Let $0 = t_0,t_1,\ldots t_N = T$ be an increasing sequence of time points. Let $K, \gamma_1, \gamma_2, \ldots \gamma_N$ be such that for every pair of trajectories $\tau_1,\tau_2$ (of mode $\ell$), for every $i \in \{1,\ldots, N\}$, and $t \in [t_{i-1},t_i]$, $|\tau_1(t) - \tau_2(t)| \leq |\tau_1(t_{i-1}) - \tau_2(t_{i-1})| Ke^{\gamma_i (t - t_{i-1})}$. 
Under such circumstances, the discrepancy function itself can be seen to be given as \[ \beta(x_1,x_2,t) = |x_1 - x_2| Ke^{\sum_{j=1}^{i-1}\gamma_j(t_j - t_{j-1}) + \gamma_i (t-t_{i-1})} \qquad \mbox{for } t \in [t_{i-1},t_i]. \] If the time points $0 = t_0,t_1,\ldots t_N = T$ are fixed, then the constants $K, \gamma_1, \gamma_2, \ldots \gamma_N$ can be discovered using the learning approach described for GED; here, to discover $\gamma_i$, we take $\Gamma_i$ to be the pairs obtained by restricting the trajectories to be between times $t_{i-1}$ and $t_i$. The sequence of time points $t_i$ is also constructed dynamically by our algorithm, based on the following approach. Our experience suggests that a value of $\gamma \geq 2$ results in very conservative reachtube computation. Therefore, the time points $t_i$ are constructed inductively to be as large as possible while ensuring that $\gamma_i < 2$. \vspace{-10pt} \subsubsection{Experiments on learning discrepancy} We used the above algorithm to learn discrepancy functions for dozens of modes with complex, nonlinear trajectories. Our experiments suggest that around 10--20 simulation traces are adequate for computing both global and piece-wise discrepancy functions. For each mode we use a set $S_{\sf train}$ of simulation traces that start from independently drawn random initial states in $\TL_{\sf init,\ell}$ to learn a discrepancy function. Each trace may have $100$--$10000$ time points, depending on the relevant time horizon and sample times. We then draw another set $S_{\sf test}$ of $1000$ simulation traces for validating the computed discrepancy. For every pair of traces in $S_{\sf test}$ and for every time point, we check whether the computed discrepancy satisfies Equation~\ref{eq:discrepancy}. 
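The GED learning and validation procedure described above can be sketched as follows. This is a minimal sketch: each pair of traces is assumed to have been reduced to samples $(t, r)$, with $r$ the logarithm of the distance ratio, and instead of an LP solver the two-variable program in $(\gamma, \ln K)$ is reduced to a one-dimensional convex minimization over $\gamma \geq 0$ solved by ternary search:

```python
import math

def learn_ged(points, T, lo=0.0, hi=10.0, iters=200):
    """Fit K, gamma such that r <= gamma*t + ln K for all samples,
    minimizing gamma*T + ln K (gamma restricted to [lo, hi]).

    `points` is a list of (t, r) with
    r = ln(|tau1(t) - tau2(t)| / |tau1.fstate - tau2.fstate|).
    For fixed gamma the best ln K is max(r - gamma*t), so the objective
    f(gamma) = gamma*T + max_i (r_i - gamma*t_i) is convex in gamma.
    """
    def f(g):
        return g * T + max(r - g * t for t, r in points)

    for _ in range(iters):                 # ternary search on convex f
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) <= f(m2):
            hi = m2
        else:
            lo = m1
    gamma = (lo + hi) / 2
    ln_k = max(r - gamma * t for t, r in points)
    return math.exp(ln_k), gamma           # K, gamma

def check_discrepancy(K, gamma, points, tol=1e-9):
    """Validation step: the bound must hold at every sampled time point."""
    return all(r <= gamma * t + math.log(K) + tol for t, r in points)
```

In the validation pass, `check_discrepancy` plays the role of the per-time-point test of Equation~\ref{eq:discrepancy} over the pairs of traces in $S_{\sf test}$.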
We observe that for $|S_{\sf train}| > 10$ the computed discrepancy function is correct for $96\%$ of the points in $S_{\sf test}$, and for $|S_{\sf train}| > 20$ it is correct for more than $99.9\%$, across all experiments. \subsection{Verification algorithm} \label{sec:verfication_algorithm} In this section, we present algorithms to solve the bounded verification problem for hybrid systems using the learned exponential discrepancy functions. We first introduce an algorithm $\mathit{GraphReach}$ (Algorithm \ref{alg:ComputeRT}) which takes as input a hybrid system $\H = \langle \L, \Theta, G, \TL \rangle$ and returns a set of reachtubes---one for each vertex of $G$---such that their union over-approximates $\reachtube{\H}$. $\mathit{GraphReach}$ maintains two data structures: \begin{inparaenum}[(a)] \item $RS$ accumulates pairs of the form $\langle RT, v\rangle$, where $v \in \V$ and $RT$ is its corresponding reachtube; \item $\mathit{VerInit}$ accumulates pairs of the form $\langle S, v \rangle$, where $v \in \V$ and $S\subset \reals^n$ is the set of states from which the reachtube in $v$ is to be computed. \end{inparaenum} Each $v$ can appear in multiple such pairs in $RS$ and $\mathit{VerInit}$. Initially, $RS = \emptyset$ and $\mathit{VerInit} = \{\langle \Theta, v_{\sf init}\rangle\}$. $\mathit{LearnDiscrepancy(S_{\sf init},d,\ell)}$ computes the discrepancy function for mode $\ell$, from initial set $S_{\sf init}$ and up to time $d$, using the algorithm of Section \ref{sec:disc}. $\mathit{ReachComp}(S_{\sf init},d,\beta)$ first generates finite simulation traces from $S_{\sf init}$ and then bloats the traces to compute a reachtube using the discrepancy function $\beta$. This step is similar to the algorithm for dynamical systems given in~\cite{DMV:EMSOFT2013}. The $\mathit{GraphReach}$ algorithm proceeds as follows: first, a topologically sorted array of the vertices of the DAG $G$ is computed in $\mathit{Order}$ (\lnref{ln: init2}). 
The pointer $ptr$ iterates over $\mathit{Order}$, and for each vertex $\mathit{curv}$ the following is computed. The variable $\mathit{dt}$ is set to the maximum transition time to other vertices from $\mathit{curv}$ (\lnref{ln: dwt}). For each possible initial set $S_{\sf init}$ corresponding to $\mathit{curv}$ in $\mathit{VerInit}$, the algorithm computes a discrepancy function (\lnref{ln: disc}) and uses it to compute a reachtube from $S_{\sf init}$ up to time $\mathit{dt}$ (\lnref{ln: reachtube}). For each successor $\mathit{nextv}$ of $\mathit{curv}$, the restriction of the computed reachtube $RT$ to the corresponding transition time interval ${\sc elab}((\mathit{curv,nextv}))$ is set as an initial set for $\mathit{nextv}$ (\lnsref{ln: forloopbegin}{ln: forloopend}). \begin{algorithm}[h!] \caption{$\mathit{GraphReach}(\H)$ computes bounded time reachtubes for each vertex of the transition graph $G$ of hybrid system $\H$.} \label{alg:ComputeRT} \SetKwInOut{Input}{input} \SetKwInOut{Initially}{initially} {$RS \gets \emptyset; \mathit{VerInit} \gets \{\langle \Theta, v_{\sf init} \rangle\}; \mathit{Order} \gets \mathit{TopSort}(G)$\;} \lnlabel{ln: init2} {\bf for} {$ptr = 0: len(Order)-1$} { {$\mathit{curv} \gets \mathit{Order}[ptr]$ \;} \lnlabel{ln: currentV} {$\ell \gets {\sc vlab}(\mathit{curv})$\;} \lnlabel{ln: currentL} $\mathit{dt} \gets \textrm{max} \{t' \in \nnreals \:| \:\exists vs \in \V, (\mathit{curv}, vs) \in \E, (t,t') \gets {\sc elab} \left( (\mathit{curv}, vs) \right) \}$\; \lnlabel{ln: dwt} {\bf for} {$S_{\sf init} \in \{S~|~ \langle S,\mathit{curv} \rangle \in \mathit{VerInit}\}$} { {$\beta \gets \mathit{LearnDiscrepancy}(S_{\sf init},\mathit{dt},\ell)$\;} \lnlabel{ln: disc} {$ RT \gets \mathit{ReachComp}(S_{\sf init},\mathit{dt},\beta)$\;} \lnlabel{ln: reachtube} {$RS \gets RS \cup \langle RT, \mathit{curv} \rangle $\;} {\bf for} {$\mathit{nextv} \in \mathit{curv}.succ$} { $(t,t') \gets {\sc elab} \left( (\mathit{curv}, nextv) \right)$\; 
\lnlabel{ln: forloopbegin} $\mathit{VerInit} \gets \mathit{VerInit} \cup \langle \mathit{Restr}(RT,(t,t')), nextv\rangle$\; \lnlabel{ln: forloopend} } } } \Return $RS$ \; \end{algorithm} The invariant verification algorithm $\mathit{VerifySafety}$ decides the safety of $\H$ with respect to a given unsafe set $\U$ and uses $\mathit{GraphReach}$. Detailed pseudocode appears in Appendix~\ref{appendix:safetyveri}. The algorithm proceeds in a way similar to the simulation-based verification algorithms for dynamical and hybrid systems~\cite{DMV:EMSOFT2013,fan2016automatic}. Given the initial set $\Theta$ and transition graph $G$ of $\H$, the algorithm partitions $\Theta$ into several subsets, and then for each subset $S$ it checks whether the computed over-approximate reachtube $RS$ from $S$ intersects $\U$: \begin{inparaenum}[(a)] \item If $RS$ is disjoint from $\U$, the system is safe starting from $S$; \item if a certain part of a reachtube $RT$ is contained in $\U$, the system is declared unsafe, and $RT$ together with the corresponding path of the graph is returned as a counter-example witness; \item if neither of the above conditions holds, the algorithm performs refinement to obtain a more precise over-approximation of $RS$. \end{inparaenum} Several refinement strategies are implemented in {{\sc DryVR}} to accomplish the last step. Broadly, these strategies rely on splitting the initial set $S$ into smaller sets (which gives tighter discrepancy in the subsequent vertices) and splitting the edge labels of $G$ into smaller intervals (which gives smaller initial sets in the vertices). The above description focuses on invariant properties, but the algorithm and our implementation in {{\sc DryVR}} can verify a useful class of temporal properties: those in which the time constraints refer only to the time since the last mode transition. 
For example, for the $\auto{Powertrn}$ benchmark the tool verifies requirements like ``after $4$s in $\Lmode{normal}$ mode, the air-fuel ratio should be contained in $[14.6,14.8]$, and after $4$s in $\Lmode{powerup}$ it should be in $[12.4,12.6]$''. \vspace{-10pt} \paragraph{Correctness} Given a correct discrepancy function for each mode, we can prove the soundness and relative completeness of Algorithm~\ref{alg:safetyveri}. This analysis closely follows the proofs of Theorem~19 and Theorem~21 in~\cite{duggirala2015dynamic}. Combining this with the probabilistic correctness of $\mathit{LearnDiscrepancy}$, we obtain the following probabilistic soundness guarantee. \begin{theorem} If the $\beta$'s returned by $\mathit{LearnDiscrepancy}$ are always discrepancy functions for the corresponding modes, then $\mathit{VerifySafety}(\H,\U)$ (Algorithm~\ref{alg:safetyveri}) is sound. That is, if it outputs ``SAFE'', then $\H$ is safe with respect to $\U$, and if it outputs ``UNSAFE'', then there exists an execution of $\H$ that enters $\U$. \end{theorem} \section{Appendix} \label{app:A} \subsection{ADAS and autonomous vehicle benchmarks} \label{app:adas} We provide more details on the different scenarios used for testing ADAS and autonomous driving control systems. Recall that each vehicle model in {Simulink\textsuperscript{\textregistered}}\ has several continuous variables, including the $x, y$-coordinates of the vehicle on the road, its velocity, heading, steering angle, etc. The vehicle can be controlled by two input signals, namely the throttle (acceleration or brake) and the steering speed. By choosing appropriate values of these input signals, we have defined the following modes for each vehicle: \begin{inparaenum}[(a)] \item \Lmode{cruise}: move forward at constant speed, \Lmode{speedup}: constant acceleration, \Lmode{brake}: constant (slow) deceleration, \Lmode{em\_brake}: constant (hard) deceleration. 
\end{inparaenum} We have designed lane switching modes \Lmode{ch\_left} and \Lmode{ch\_right} in which the acceleration and steering are controlled in such a manner that the vehicle switches to its left (resp. right) lane in a certain amount of time. For each vehicle, we mainly analyze four variables: absolute position ($sx$) and velocity ($vx$) orthogonal to the road direction ($x$-axis), and absolute position ($sy$) and velocity ($vy$) along the road direction ($y$-axis). The throttle and steering information can be expressed using the four variables. We will use subscripts to distinguish between different vehicles. The following scenarios are constructed by defining appropriate sets of initial states and transition graphs labeled by the modes of two or more vehicles. \begin{description} \item[$\auto{MergeBehind}$:] Initial condition: Vehicle A is in the left lane and vehicle B is in the right lane; initial positions and speeds are in some range; A is in \Lmode{cruise} mode, and B is in \Lmode{cruise} or \Lmode{speedup}. % Transition graph: Vehicle A goes through the mode sequence \Lmode{speedup}, \Lmode{ch\_right}, \Lmode{cruise} with specified intervals of time to transition from one mode to another. % Requirement: A merges behind B within a time bound and maintains at least a given safe separation. \item[$\auto{MergeAhead}$:] Initial condition: Same as $\auto{MergeBehind}$ except that B is in \Lmode{cruise} or \Lmode{brake} mode. Transition graph: Same structure as $\auto{MergeBehind}$ with different timing parameters. % Requirement: A merges ahead of B and maintains at least a given safe separation. \item[$\auto{AutoPassing}$:] Initial condition: Vehicle A behind B in the same lane, with A in \Lmode{speedup} and B in \Lmode{cruise}; initial positions and speeds are in some range.
Transition graph: A goes through the mode sequence \Lmode{ch\_left}, \Lmode{speedup}, \Lmode{brake}, and \Lmode{ch\_right}, \Lmode{cruise} with specified time intervals in each mode to complete the overtake maneuver. If B switches to \Lmode{speedup} before A enters \Lmode{speedup} then A aborts and changes back to the right lane. If B switches to \Lmode{brake} before A enters \Lmode{ch\_left}, then A should adjust the time to switch to \Lmode{ch\_left} to avoid collision. % Requirement: Vehicle A overtakes B while maintaining minimal safe separation. \item[$\auto{AEB}$:] (Emergency brakes) Initial condition: Vehicle A behind B in the same lane with A in \Lmode{cruise}, B is stopped (in \Lmode{cruise} mode with velocity $0$); initial positions and speeds are in some range. Transition graph: A transits from \Lmode{cruise} to \Lmode{em\_brake} over a given interval of time or several disjoint intervals of time. Requirement: Vehicle A stops behind B and maintains at least a given safe separation. \item[$\auto{MergeBetween}$:] Initial condition: Vehicles A, B, and C are all in the same lane, with A behind B and B behind C, all in \Lmode{cruise} mode; initial positions and speeds are in some range. Transition graph: A goes through the mode sequence \Lmode{ch\_left}, \Lmode{speedup}, \Lmode{brake}, and \Lmode{ch\_right}, \Lmode{cruise} with specified time intervals in each mode to overtake B. C transits from \Lmode{cruise} to \Lmode{speedup} and then back to \Lmode{cruise}, so C is always ahead of A. Requirement: Vehicle A merges between B and C, and any two vehicles maintain at least a given safe separation. \end{description} \subsection{Automatic transmission control} \label{ssec:gear} We provide some details about the automatic transmission control benchmark that we have modeled as a hybrid system combining white-box and black-box components and that we have verified using {\sc DryVR}'s safety verification algorithm.
This is a slightly modified version of the Automatic Transmission model provided by {Mathworks\textsuperscript{\textregistered}}\ as a {Simulink\textsuperscript{\textregistered}}\ demo \cite{Matlab_trans}. It is a model of an automatic transmission controller that exhibits both continuous and discrete behavior. The model has been previously used by S-TaLiRo~\cite{S-Taliro} for falsifying certain requirements. We are not aware of any verification results for this system. For our experiments, we made some minor modifications to the {Simulink\textsuperscript{\textregistered}}\ model to create the hybrid system $\auto{ATS}$. This allows us to simulate the vehicle from any one of the four modes, namely, \Lmode{gear1}, \Lmode{gear2}, \Lmode{gear3} and \Lmode{gear4}. Although the system has many variables, we are primarily interested in the car speed ($v$), engine RPM (Erpm), impeller torque ($T_i$), output torque ($T_o$), and transmission RPM (Trpm), and therefore, use simulations that record these. The transition graph of $\auto{ATS}$ encodes transition sequences and intervals for shifting from \Lmode{gear1} through to \Lmode{gear4}. The requirement of interest is that the engine RPM stays below a specified maximum value, which in turn is important for limiting the thermal and mechanical stresses on the cylinders and camshafts. A typical unsafe set $\U_t$ could be Erpm $>4000$. \subsection{Safety verification algorithm} \label{appendix:safetyveri} The safety verification algorithm is shown in Algorithm~\ref{alg:safetyveri}. It proceeds along the lines of the simulation-based verification algorithms presented in~\cite{DMV:EMSOFT2013, FanMitra:2015df, DuggiralaMV:2015c2e2}.
\begin{algorithm} \caption{$\mathit{VerifySafety}(\H,\U)$ verifies safety of hybrid system $\H$ with respect to unsafe set $\U$.} \label{alg:safetyveri} \SetKwInOut{Input}{input} \SetKwInOut{Initially}{initially} {\bf initially}{$Q.push(Partition(\Theta))$} {\bf while} {$Q \neq \emptyset$} { $S \gets Q.pop()$\; $RS \gets \mathit{GraphReach}(\H, S)$ \; \uIf {$RS \cap \U = \emptyset$} { continue\;} \uElseIf {$\exists (x,l,t) \in RT$ s.t. $\langle RT,v \rangle \in RS$ and $(x,l,t) \subseteq \U $} {\Return UNSAFE, $\langle RT,v \rangle$} {\bf else} { {$Q.push(Partition(S))$ \;} {Or, $G \gets RefineGraph(G)$ \;} } } \Return SAFE \end{algorithm} \section{Conclusions} \label{sec:cs} The work presented in this paper takes an alternative view, namely that complete mathematical models of hybrid systems are unavailable. Instead, the available system description combines a black-box simulator and a white-box transition graph. Starting from this point of view, we have developed a semantic framework, a probabilistic verification algorithm, and results on simulation relations and sequential composition for reasoning about complex hybrid systems over long switching sequences. Through modeling and analysis of a number of automotive control systems using implementations of the proposed approach, we hope to have demonstrated their promise. One direction for further exploration in this vein is to consider more general timed and hybrid automata models of the white-box, and to develop the necessary algorithms and reasoning techniques. \section{Other examples} \subsection{ADAS and autonomous vehicle benchmarks} \label{ssec:adas} This is a suite of benchmarks we have created representing various common scenarios used for testing ADAS and Autonomous driving control systems. The hybrid system for a scenario is constructed by putting together several individual vehicles.
The higher-level decisions (paths) followed by the vehicles are captured by transition graphs while the detailed dynamics of each vehicle comes from a black-box {Simulink\textsuperscript{\textregistered}} \ simulator from {Mathworks\textsuperscript{\textregistered}}~\cite{simulinkcar}. Each vehicle has several continuous variables including the $x, y$-coordinates of the vehicle on the road, its velocity, heading, and steering angle. The vehicle can be controlled by two input signals, namely the throttle (acceleration or brake) and the steering speed. By choosing appropriate values for these input signals, we have defined the following modes for each vehicle --- \Lmode{cruise}: move forward at constant speed, \Lmode{speedup}: constant acceleration, \Lmode{brake}: constant (slow) deceleration, \Lmode{em\_brake}: constant (hard) deceleration. In addition, we have designed lane switching modes \Lmode{ch\_left} and \Lmode{ch\_right} in which the acceleration and steering are controlled in such a manner that the vehicle switches to its left (resp. right) lane in a certain amount of time. For each vehicle, we mainly analyze four variables: absolute position ($sx$) and velocity ($vx$) orthogonal to the road direction ($x$-axis), and absolute position ($sy$) and velocity ($vy$) along the road direction ($y$-axis). The throttle and steering are captured using the four variables. We will use subscripts to distinguish between different vehicles. The following scenarios are constructed by defining appropriate sets of initial states and transition graphs labeled by the modes of two or more vehicles. In all of these scenarios a primary safety requirement is that the vehicles maintain safe separation. See Appendix~\ref{app:adas} for more details on initial states and transition graphs of each scenario. % \begin{description} \itemsep0em \item[$\auto{Merge}$:] Vehicle A in the left lane is behind vehicle B in the right lane.
A switches through modes \Lmode{cruise}, \Lmode{speedup}, \Lmode{ch\_right}, and \Lmode{cruise} over specified intervals to merge behind B. Variants of this scenario involve $B$ also switching to \Lmode{speedup} or \Lmode{brake}. \item[$\auto{AutoPassing}$:] Vehicle A starts behind B in the same lane, and goes through a sequence of modes to overtake B. If B switches to \Lmode{speedup} before A enters \Lmode{speedup} then A aborts and changes back to the right lane. \item[$\auto{Merge3}$:] Same as $\auto{AutoPassing}$ with a third car C always ahead of $B$. \item[$\auto{AEB}$:] Vehicle A cruises behind B and B stops. A transits from \Lmode{cruise} to \Lmode{em\_brake} possibly over several different time intervals as governed by different sensors and reaction times. \end{description} \subsection{Experiments on safety verification} \label{sec:reachexp} \begin{figure} \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth,trim={1.0cm 0.8cm 1.5cm 0.8cm},clip]{Car_reachTube.pdf} \caption{Safe reachtube. } \label{fig:AutoPassingA} \end{subfigure} ~ ~ \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth,trim={1.0cm 0.8cm 1.5cm 0.8cm},clip]{Car_simulation.pdf} \caption{Unsafe execution. } \label{fig:AutoPassingB} \end{subfigure} \vspace{-8pt} \caption{\small $\auto{AutoPassing}$ verification. Vehicle A's (red) modes are shown above each subplot. Vehicle B (green) is in \Lmode{cruise}. Top: $sx_A,sx_B$. Bottom: $sy_A, sy_B$. } \label{fig:AutoPassing} \end{figure} The algorithms have been implemented in {{\sc DryVR}} and have been used to automatically verify the benchmarks from Section~\ref{sec:prelims} and an Automatic Transmission System (Appendix~\ref{ssec:gear}). The transition graph, the initial set, and the unsafe set are given in a text file. {{\sc DryVR}} uses simulators for modes, and outputs either ``Safe'' or ``Unsafe''. Reachtubes or counter-examples computed during the analysis are also stored in text files.
The implementation is in Python using MATLAB's Python API for accessing the {Simulink\textsuperscript{\textregistered}} ~simulators. Py-GLPK~\cite{python-glpk-package} is used to find the parameters of discrepancy functions; either global (GED) or piece-wise (PED) discrepancy can be selected by the user. Z3~\cite{de2008z3} is used for reachtube operations. At this stage, all the benchmarks we are working on rely heavily on {{Mathworks\textsuperscript{\textregistered}}} {{Simulink\textsuperscript{\textregistered}}}. We do not have a public {{Mathworks\textsuperscript{\textregistered}}} license to release the tool, and it is complicated for users to build a connection between {{\sc DryVR}} and their own {{Simulink\textsuperscript{\textregistered}}} models. We will release {{\sc DryVR}} soon after we move the black-box benchmarks to different open-source software. Figure~\ref{fig:AutoPassing} shows example plots of computed safe reachtubes and counter-examples for a simplified $\auto{AutoPassing}$ in which vehicle B stays in \Lmode{cruise} throughout. As before, vehicle A goes through a sequence of modes to overtake B. Initially, for both $i \in \{A,B\}$, $sx_i = vx_i = 0$ and $vy_i = 1$, i.e., both are cruising at constant speed at the center of the right lane; initial positions along the lane are $sy_A\in [0,2], sy_B \in [15,17]$. Figure~\ref{fig:AutoPassingA} shows the lateral positions ($sx_A$ in red and $sx_B$ in green, in the top subplot), and the positions along the lane ($sy_A$ in red and $sy_B$ in green, in the bottom plot). Vehicle A moves to the left lane ($sx$ decreases) and then back to the right, while B remains in the right lane, as A overtakes B (bottom plot). The unsafe set $(|sx_A-sx_B|<2 \ \&\ |sy_A-sy_B|<2)$ is proved to be disjoint from the computed reachtube. With a different initial set, $sy_B \in [30,40]$, {\sc DryVR}\ finds a counter-example (Figure~\ref{fig:AutoPassingB}).
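As a rough illustration of how discrepancy parameters can be fit from data: {\sc DryVR}\ poses this as a linear program over log-distances (solved with Py-GLPK); the sketch below instead uses a least-squares slope plus a max-offset shift, which also yields a bound $K e^{\gamma t}$ dominating every sample point. The trace format and function name are our own assumptions, not {\sc DryVR}'s API:

```python
import math
from typing import Sequence, Tuple

def learn_global_discrepancy(
        times: Sequence[float],
        trace_a: Sequence[Sequence[float]],
        trace_b: Sequence[Sequence[float]]) -> Tuple[float, float]:
    """Fit (K, gamma) so that |a(t)-b(t)| <= |a(0)-b(0)| * K * exp(gamma*t)
    holds on the given samples.  Assumes the two traces never coincide and
    start from distinct initial states (so the logs are defined)."""
    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    d0 = dist(trace_a[0], trace_b[0])
    # points (t_i, y_i) in R^2 with y_i = ln(d(t_i)/d(0))
    pts = [(t, math.log(dist(p, q) / d0))
           for t, p, q in zip(times, trace_a, trace_b)]
    n = len(pts)
    mean_t = sum(t for t, _ in pts) / n
    mean_y = sum(y for _, y in pts) / n
    gamma = (sum((t - mean_t) * (y - mean_y) for t, y in pts) /
             sum((t - mean_t) ** 2 for t, _ in pts))
    b = max(y - gamma * t for t, y in pts)  # lift the line above all points
    return math.exp(b), gamma
```

For the contracting dynamics $\dot{x} = -x$, for example, this recovers $K \approx 1$ and $\gamma \approx -1$ from just a pair of sampled traces.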
\begin{table} \begin{center} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{|c|rlcccr|} \hline Model & TH & Initial set & $\U$ & Ref & Safe & Runtime \\ \hline \makecell[c]{$\auto{Powertrn}$ \\ (5 vers, 6 edges)} & 80 & \makecell[l]{$\lambda \in [14.6,14.8]$} & $\U_{p}$ & 2 & \cmark & 217.4s \\ \cline{1-7} {$\auto{AutoPassing}$} & 50 & $sy_A \in [-1,1]$ $sy_B \in [14,16]$ & $\U_{c}$ & 4 & \cmark & 208.4s \\ \cline{2-7} (12 vers, 13 edges) & 50 & $sy_A \in [-1,1]$ $sy_B \in [4,6.5]$ & $\U_{c}$ & 5 & \xmark & 152.5s \\ \hline {$\auto{Merge}$ } & 50 & $sx_A \in [-5,5]$ $sy_B\in [-2,2]$ & $\U_{c}$ & 0 & \cmark & 55.0s \\ \cline{2-7} (7 vers, 7 edges) & 50 & $sx_A \in [-5,5]$ $sy_B \in [2,10]$ & $\U_{c}$ & - & \xmark & 38.7s \\ \hline {$\auto{Merge3}$} & 50 & \makecell[l]{$sy_A \in [-3,3]$ $sy_B \in [14, 23]$\\$sy_C \in [36, 45]$} & $\U_{c}$ & 4 & \cmark & 197.6s \\ \cline{2-7} (6 vers, 5 edges) & 50 & \makecell[l]{$sy_A \in [-3,3]$ $sy_B \in [14,15]$ \\ $sy_C \in [16, 20]$} & $\U_{c}$ & - & \xmark & 21.3s \\ \hline \makecell[c]{ {$\auto{ATS}$ }\\ (4 vers, 3 edges)} & 50 & Erpm $\in [900,1000]$& $\U_{t}$ & 2 & \cmark & 109.2s \\ \cline{2-7} \hline \end{tabular}% } \vspace{-14pt} \caption{\small Safety verification results. Numbers below benchmark names: \# vertices and edges of $G$, TH: duration of shortest path in $G$, Ref: \# refinements performed; Runtime: overall running time.} \label{table:results} \end{center} \end{table}% Table~\ref{table:results} summarizes some of the verification results obtained using {\sc DryVR}. $\auto{ATS}$ is an automatic transmission control system (see Appendix~\ref{ssec:gear} for more details). These experiments were performed on a laptop with Intel Core i7-6600U CPU and 16 GB RAM. Only the initial ranges of the salient continuous variables are shown in the table. The unsafe sets are discussed with the model description. For example, $\U_c$ means two vehicles are too close.
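The ``too close'' unsafe set $\U_c$ from the car benchmarks (e.g.\ $|sx_A-sx_B|<2$ and $|sy_A-sy_B|<2$) can be checked against per-time-step interval bounds of a reachtube with elementary interval arithmetic. The tube representation below is an assumption for illustration, not {\sc DryVR}'s internal format:

```python
from typing import List, Tuple

Interval = Tuple[float, float]

def min_abs_diff(x: Interval, y: Interval) -> float:
    """Smallest possible |a - b| for a in x and b in y; zero if they overlap."""
    return max(0.0, x[0] - y[1], y[0] - x[1])

def tube_avoids_Uc(tube: List[dict], margin: float = 2.0) -> bool:
    """tube: per-time-step interval bounds for sxA, sxB, syA, syB.
    U_c requires BOTH |sxA-sxB| < margin and |syA-syB| < margin, so it
    suffices that at every step one of the two distances stays >= margin."""
    return all(min_abs_diff(step["sxA"], step["sxB"]) >= margin or
               min_abs_diff(step["syA"], step["syB"]) >= margin
               for step in tube)
```

With the $\auto{AutoPassing}$ initial set above ($sy_A \in [0,2]$, $sy_B \in [15,17]$), the along-lane gap is at least $13$ at time zero, so the first tube segment is immediately disjoint from $\U_c$.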
For all the benchmarks, the algorithm terminated in a few minutes, which includes the time to simulate, learn discrepancy functions, generate reachtubes, and check the safety of the reachtubes, over all refinements. For the results presented in Table~\ref{table:results}, we used GED. The reachtube generated by PED for $\auto{Powertrn}$ is more precise, but for the rest, the reachtubes and the verification times using both GED and PED were comparable. In addition to the $\mathit{VerifySafety}$ algorithm, {\sc DryVR}\ also looks for counter-examples by quickly generating random executions of the hybrid system. If any of these executions is found to be unsafe, {\sc DryVR}\ will return ``Unsafe'' without starting the $\mathit{VerifySafety}$ algorithm. \section{Introduction} \label{sec:intro} The starting point of existing hybrid system verification approaches is the availability of nice mathematical models describing the transitions and trajectories. This central conceit severely restricts the applicability of the resulting approaches. Real-world control system ``models'' are typically a heterogeneous mix of simulation code, differential equations, block diagrams, and hand-crafted look-up tables. Extracting clean mathematical models from these descriptions is usually infeasible. At the same time, rapid developments in Advanced Driving Assist Systems (ADAS), autonomous vehicles, robotics, and drones now make the need for effective and sound verification algorithms stronger than ever before. The {\sc DryVR}\ framework presented in this paper aims to narrow the gap between sound and practical verification for control systems. \vspace{-10pt} \paragraph{Model assumptions} Consider an ADAS feature like an automatic emergency braking system (AEB). The high-level logic deciding when and for how long the brakes are engaged after an obstacle is detected by sensors is implemented in a relatively clean piece of code, and this logical module can be seen as a {\em white-box\/}.
In contrast, the dynamics of the vehicle itself, with hundreds of parameters, is more naturally viewed as a {\em black-box\/}. That is, it can be simulated or tested with different initial conditions and inputs, but it is nearly impossible to write down a nice mathematical model. % The empirical observation motivating this work is that many control systems, and especially automotive systems, share this combination of white and black boxes (see other examples in Sections~\ref{ex:powertrain}, \ref{ssec:adas}, and \ref{ssec:gear}). In this paper, we view hybrid systems as a combination of a white-box that specifies the mode switches and a black-box that can simulate the continuous evolution in each mode. Suppose the system has a set of modes $\L$ and $n$ continuous variables. The mode switches are defined by a {\em transition graph} $G$ which is a directed acyclic graph (DAG) whose vertices and edges define the allowed mode switches and the switching times. The black-box is a set of trajectories $\TL$ in $\reals^n$ for each mode in $\L$. We do not have a closed-form description of $\TL$; instead, we have a {\em simulator\/} that can generate sampled data points on individual trajectories for a given initial state and mode. Combining a transition graph $G$, a set of trajectories $\TL$, and a set of initial states in $\reals^n$, we obtain a hybrid system for which executions, reachability, and trace containment can be defined naturally. We have studied a suite of automotive systems such as powertrain control~\cite{jin2014powertrain}, automatic transmission control~\cite{Matlab_trans}, and ADAS features like automatic emergency braking (AEB), lane-change, and auto-passing, that are naturally represented in the above style. In verifying a lane change or merge controller, once the maneuver is activated, the mode transitions occur within certain time intervals.
In testing a powertrain control system, the mode transitions are brought about by the driver and it is standard to describe typical driver classes using time-triggered signals. Similar observations hold in other examples. \vspace{-10pt} \paragraph{Safety verification algorithm} With black-box modules in our hybrid systems, we address the challenge of providing guaranteed verification. Our approach is based on the idea of simulation-driven reachability analysis~\cite{fan2016automatic,DMV:EMSOFT2013,DuggiralaMV:2015c2e2}. For a given mode $\ell \in \L$, finitely many simulations of the trajectories of $\ell$ and a {\em discrepancy function\/} bounding the sensitivity of these trajectories are used to over-approximate the reachable states. For the key step of computing discrepancy for modes that are now represented by black-boxes, we introduce a probabilistic algorithm that learns the parameters of exponential discrepancy functions from simulation data. The algorithm transforms the problem of learning the parameters of the discrepancy function to the problem of learning a linear separator for a set of points in $\reals^2$ that are obtained by transforming the simulation data. A classical result in PAC learning ensures that any such discrepancy function works with high probability for all trajectories. We performed dozens of experiments with a variety of black-box simulators and observed that 15-20 simulation traces typically give a discrepancy function that works for nearly 100\% of all simulations. The reachability algorithm for the hybrid system proceeds along the vertices of the transition graph in a topologically sorted order, and this gives a sound bounded-time verification algorithm, provided the learned discrepancy function is correct. \vspace{-10pt} \paragraph{Reasoning} White-box transition graphs in our modelling identify the switching sequences under which the black-box modules are exercised.
Complex systems have involved transition graphs that describe subtle sequences in which the black-box modules are executed. To enable the analysis of such systems, we identify reasoning principles that establish the safety of a system under a complex transition graph based on its safety under a simpler transition graph. We define a notion of forward simulation between transition graphs that provides a sufficient condition for when one transition graph ``subsumes'' another --- if $G_1$ is simulated by $G_2$ then the reachable states of a hybrid system under $G_1$ are contained in the reachable states of the system under $G_2$. Thus the safety of the system under $G_2$ implies its safety under $G_1$. Moreover, we give a simple polynomial time algorithm that can check if one transition graph is simulated by another. Our transition graphs are acyclic with transitions having bounded switching times. Therefore, the executions of the systems we analyze are over a bounded time, and have a bounded number of mode switches. An important question to investigate is whether establishing safety for a bounded time enables one to conclude safety of the system for an arbitrarily long time and for arbitrarily many mode switches. With this in mind, we define a notion of sequential composition of transition graphs $G_1$ and $G_2$, such that the switching sequences allowed by the composed graph are the concatenations of the sequences allowed by $G_1$ with those allowed by $G_2$. Then we prove a sufficient condition on a transition graph $G$ such that safety of a system under $G$ implies the safety of the system under arbitrarily many compositions of $G$ with itself. \vspace{-10pt} \paragraph{Automotive applications} We have implemented these ideas to create the {\bf D}ata-d{\bf r}iven S{\bf y}stem for {\bf V}erification and {\bf R}easoning ({\sc DryVR}). The tool is able to automatically verify or find counter-examples in a few minutes, for all the benchmark scenarios mentioned above.
Reachability analysis combined with compositional reasoning enabled us to infer safety of systems with respect to arbitrary transitions and durations. \vspace{-10pt} \paragraph{Related work} Most automated verification tools for hybrid systems rely on analyzing a white-box mathematical model of the system. They include tools based on decidability results~\cite{doty95,hh95,ck99,adm02,Dutertre04timedsystems,fre05}, semi-decision procedures that over-approximate the reachable set of states through symbolic computation~\cite{gm99,mt00,bt00,kv00,st00,Frehse:cav11,ariadne,flow}, using abstractions~\cite{adi03,efhkost03-1,efhkost03-2,Henzinger_refinement,seg07,07-HSolver,dkl07,JKWB:HSCC:2007,HareFMSD,14-sttt,14-AGAR,15-PLC-CEGAR,HareTACAS16}, and using approximate decision procedures for fragments of first-order logic~\cite{dreach}. More recently, there has been interest in developing simulation-based verification tools~\cite{Julius:2007:RTG:1760804.1760833,SensitivityDM,donze2010breach,Kanade09,staliro-tool-paper,Fainekos:2009:RTL:1609208.1609591,DRJqest13,DuggiralaMV:2015c2e2}. Even though these are simulation-based tools, they often rely on being able to analyze a mathematical model of the system. The types of analysis that they rely on include instrumentation to extract a symbolic trace from a simulation~\cite{Kanade09}, stochastic optimization to search for counter-examples~\cite{staliro-tool-paper,Fainekos:2009:RTL:1609208.1609591}, and sensitivity analysis~\cite{SensitivityDM,donze2010breach,DRJqest13,DuggiralaMV:2015c2e2}. Some of the simulation-based techniques only work for systems with linear dynamics~\cite{Sim2Veri,ILABS}. Recent work on the APEX tool~\cite{o2016apex} for verifying trajectory planning and tracking in autonomous vehicles is related to our approach in that it targets the same application domain.
\section{Modeling/semantic framework} \label{sec:prelims} We introduce a powertrain control system from~\cite{jin2014powertrain} as a running example to illustrate the elements of our hybrid system modeling framework. \subsection{Powertrain control system} \label{ex:powertrain} This system ($\auto{Powertrn}$) models a highly nonlinear engine control system. The relevant state variables of the model are the intake manifold pressure ($p$), air-fuel ratio ($\lambda$), estimated manifold pressure ($pe$) and integrator state ($i$). The overall system can be in one of four modes $\Lmode{startup}$, $\Lmode{normal}$, $\Lmode{powerup}$, $\Lmode{sensorfail}$. A {Simulink\textsuperscript{\textregistered}}\ diagram describes the continuous evolution of the above variables. In this paper, we mainly work on the {\em Hybrid I/O Automaton Model} in the suite of powertrain control models. The {Simulink\textsuperscript{\textregistered}}\ model consists of continuous variables describing the dynamics of the powertrain plant and sample-and-hold variables as the controller. One of the key requirements to verify is that the engine maintains the air-fuel ratio within a desired range in different modes for a given set of driver behaviors. This requirement has implications on fuel economy and emissions. For testing purposes, the control system designers work with sets of driver profiles that essentially define families of switching signals across the different modes. Previous verification results on this problem have been reported in~\cite{DFMV:CAV2015, FDMV:ARCH2015} on a simplified version of the powertrain control model. \vspace{-10pt} \subsection{Transition graphs} \label{ssec:modes} We will use $\L$ to denote a finite set of {\em modes\/} or locations of the system under consideration. The discrete behavior or mode transitions are specified by what we call a transition graph over $\L$.
\begin{definition} A {\em transition graph\/} is a labeled, directed acyclic graph $G = \langle \L, \V, \E, {\sc vlab}, {\sc elab} \rangle$, where \begin{inparaenum}[(a)] \item $\L$ is the set of vertex labels also called the set of {\em modes\/}, \item $\V$ is the set of vertices, \item $\E\subseteq \V \times \V$ is the set of edges, \item ${\sc vlab}: \V \rightarrow \L$ is a vertex labeling function that labels each vertex with a mode, and \item ${\sc elab}: \E\rightarrow \nnreals \times \nnreals$ is an edge labeling function that labels each edge with a nonempty, closed, bounded interval defined by a pair of non-negative reals. \end{inparaenum} \end{definition} Since $G$ is a DAG, there is a nonempty subset $\V_{\sf init} \subseteq \V$ of vertices with no incoming edges and a nonempty subset $\V_{\sf term} \subseteq \V$ of vertices with no outgoing edges. We define the set of initial locations of $G$ as $\L_{\sf init} = \{ \ell \ | \ \exists \ v \in \V_{\sf init}, {\sc vlab}(v) = \ell \}$. A (maximal) {\em path\/} of the graph $G$ is a sequence $\pi = v_1, t_1, v_2, t_2, \ldots, v_k$ such that, \begin{inparaenum}[(a)] \item $v_1 \in \V_{\sf init}$, \item $v_k \in \V_{\sf term}$, and \item for each subsequence $(v_i,t_i,v_{i+1})$, we have $(v_i, v_{i+1}) \in \E$ and $t_i \in {\sc elab}((v_i,v_{i+1}))$. \end{inparaenum} $\paths{G}$ is the set of all possible paths of $G$. For a given path $\pi = v_1, t_1, v_2, t_2, \ldots,$ $v_k$ its {\em trace\/}, denoted by ${\sc vlab}(\pi)$, is the sequence ${\sc vlab}(v_1), t_1, {\sc vlab}(v_2), t_2, \ldots,$ ${\sc vlab}(v_k)$. Since $G$ is a DAG, a trace of $G$ can visit the same mode only finitely many times. ${\trace{}}{G}$ is the set of all traces of $G$. An example transition graph for the $\auto{Powertrn}$ system of Section~\ref{ex:powertrain} is shown in Figure~\ref{fig:power_graph}. The set of vertices is $\V = \{0,\ldots, 4\}$ and the ${\sc vlab}$'s and ${\sc elab}$'s appear adjacent to the vertices and edges.
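The definitions above translate directly into code. The sketch below encodes a small $\auto{Powertrn}$-style graph (the vertex and edge labels are illustrative, not the ones in Figure~\ref{fig:power_graph}) and enumerates traces ${\sc vlab}(v_1), t_1, {\sc vlab}(v_2), \ldots$; since a path allows any $t_i$ in an edge's interval, we sample each interval only at finitely many points:

```python
from typing import Dict, List, Tuple

class TransitionGraph:
    """DAG with mode-labeled vertices (vlab) and interval-labeled edges (elab)."""
    def __init__(self,
                 vlab: Dict[int, str],
                 elab: Dict[Tuple[int, int], Tuple[float, float]]):
        self.vlab, self.elab = vlab, elab
        self.succ: Dict[int, List[int]] = {v: [] for v in vlab}
        for (u, v) in elab:
            self.succ[u].append(v)
        has_in = {v for (_, v) in elab}
        self.v_init = [v for v in vlab if v not in has_in]  # no incoming edges

    def traces(self, samples_per_edge: int = 2) -> List[list]:
        """Enumerate traces, sampling each edge interval at evenly spaced
        points (endpoints when samples_per_edge == 2)."""
        out: List[list] = []
        def extend(v, acc):
            if not self.succ[v]:          # terminal vertex: maximal path
                out.append(acc)
                return
            for u in self.succ[v]:
                lo, hi = self.elab[(v, u)]
                for k in range(samples_per_edge):
                    t = lo + (hi - lo) * k / max(1, samples_per_edge - 1)
                    extend(u, acc + [t, self.vlab[u]])
        for v in self.v_init:
            extend(v, [self.vlab[v]])
        return out
```

A three-mode chain \texttt{startup} $\to$ \texttt{normal} $\to$ \texttt{powerup} with edge labels $[1,2]$ and $[2,3]$ then yields four sampled traces.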
\begin{figure}[h] \includegraphics[scale=0.4]{power_graph1.pdf} \vspace{-10pt} \caption{\small A sample transition graph for the $\auto{Powertrn}$ system.} \label{fig:power_graph} \end{figure} \vspace{-8pt} \subsubsection{Trace containment} We will develop reasoning techniques based on reachability, abstraction, composition, and substitutivity. To this end, we will need to establish containment relations between the behaviors of systems. Here we define containment of transition graph traces. Consider transition graphs $G_1, G_2,$ with modes $\L_1,\L_2,$ and a mode map ${\sc lmap}: \L_1 \rightarrow \L_2$. For a trace $\sigma = \ell_1, t_1, \ell_2, t_2, \ldots, \ell_k \in {\trace{}}{G_1}$, simplifying notation, we denote by ${\sc lmap}(\sigma)$ the sequence ${\sc lmap}(\ell_1), t_1, {\sc lmap}(\ell_2), t_2,$ $\ldots, {\sc lmap}(\ell_k)$. We write $G_1 \preceq_{{\sc lmap}} G_2$ iff for every trace $\sigma \in {\trace{}}{G_1}$, there is a trace $\sigma' \in {\trace{}}{G_2}$ such that ${\sc lmap}(\sigma)$ is a prefix of $\sigma'$. \begin{definition} Given graphs $G_1, G_2$ and a mode map ${\sc lmap}: \L_1 \rightarrow \L_2$, a relation $R \subseteq \V_1 \times \V_2$ is a {\em forward simulation relation from $G_1$ to $G_2$\/} iff \begin{enumerate}[(a)] \itemsep0em \item for each $v \in \V_{1 {\sf init}}$, there is $u \in \V_{2 {\sf init}}$ such that $(v,u) \in R$, \item for every $(v,u) \in R$, ${\sc lmap}({\sc vlab}_1(v)) = {\sc vlab}_2(u)$, and \item for every $(v,v') \in \E_1$ and $(v,u)\in R$, there exists a finite set $u_1, \ldots, u_k$ such that: \begin{inparaenum}[(i)] \item for each $u_j$, $(v',u_j) \in R$, and \item ${\sc elab}_1((v,v')) \subseteq \cup_j {\sc elab}_2((u,u_j))$. \end{inparaenum} \end{enumerate} \end{definition} \begin{proposition} \label{prop:graphsim} If there exists a forward simulation relation from $G_1$ to $G_2$ with ${\sc lmap}$ then $G_1 \preceq_{{\sc lmap}} G_2$.
\end{proposition} \vspace{-10pt} \subsubsection{Sequential composition of graphs} We will find it convenient to define the \emph{sequential composition} of two transition graphs. Intuitively, the traces of the composition of $G_1$ and $G_2$ will be those that can be obtained by concatenating a trace of $G_1$ with a trace of $G_2$. To keep the definitions and notations simple, we will assume (when taking sequential compositions) $|\V_{\sf init}| = |\V_{\sf term}| = 1$; this is true of the examples we analyze. It is easy to generalize to the case when this does not hold. Under this assumption, the unique vertex in $\V_{\sf init}$ will be denoted as $v_{\sf init}$ and the unique vertex in $\V_{\sf term}$ will be denoted as $v_{\sf term}$. \begin{definition} \label{def:seq-comp} Given graphs $G_1 = \langle \L, \V_1, \E_1, {\sc vlab}_1, {\sc elab}_1 \rangle$ and $G_2 = \langle \L, \V_2, \E_2,$ ${\sc vlab}_2, {\sc elab}_2 \rangle$ such that ${\sc vlab}_1(v_{1 {\sf term}}) = {\sc vlab}_2(v_{2 {\sf init}})$, the \emph{sequential composition} of $G_1$ and $G_2$ is the graph $G_1\seqcomp G_2 = \langle \L, \V, \E, {\sc vlab}, {\sc elab} \rangle$ where \begin{enumerate}[(a)] \itemsep0em \item $\V = (\V_1 \cup \V_2) \setminus \{v_{2 {\sf init}}\}$, \item $\E = \E_1 \cup \{(v_{1 {\sf term}},u)\: |\: (v_{2 {\sf init}},u) \in \E_2\} \cup \{(v,u) \in \E_2\: |\: v \neq v_{2 {\sf init}}\}$, \item ${\sc vlab}(v) = {\sc vlab}_1(v)$ if $v \in \V_1$ and ${\sc vlab}(v) = {\sc vlab}_2(v)$ if $v \in \V_2$, \item For edge $(v,u) \in \E$, ${\sc elab}((v,u))$ equals \begin{inparaenum}[(i)] \item ${\sc elab}_1((v,u))$, if $u \in \V_1$, \item ${\sc elab}_2((v_{2 {\sf init}},u))$, if $v = v_{1 {\sf term}}$, \item ${\sc elab}_2((v,u))$, otherwise. \end{inparaenum} \end{enumerate} \end{definition} Given our definition of trace containment between graphs, we can prove a very simple property about sequential composition.
\begin{proposition} \label{prop:seqcomp-containment} Let $G_1$ and $G_2$ be two graphs with modes $\L$ that can be sequentially composed. Then $G_1 \preceq_{{\sf id}} G_1\seqcomp G_2$, where ${\sf id}$ is the identity map on $\L$. \end{proposition} The proposition follows from the fact that every path of $G_1$ is a prefix of a path of $G_1\seqcomp G_2$. We will see examples of sequential composition later, in Section~\ref{sec: sub_exp}. \subsection{Trajectories} The evolution of the system's continuous state variables is formally described by continuous functions of time called {\em trajectories\/}. Let $n$ be the number of continuous variables in the underlying hybrid model. A {\em trajectory\/} for an $n$-dimensional system is a continuous function of the form $\tau: [0,T] \rightarrow \reals^n$, where $T \geq 0$. The interval $[0,T]$ is called the {\em domain\/} of $\tau$ and is denoted by $\tau.{\mathit dom}$. The first state $\tau(0)$ is denoted by $\tau.\mathop{\mathsf {fstate}}$, the last state by $\tau.\mathop{\mathsf {lstate}} = \tau(T)$, and the duration by $\tau.\mathop{\mathsf {ltime}} = T$. For a hybrid system with $\L$ modes, each trajectory is labeled by a mode in $\L$. A {\em trajectory labeled by $\L$\/} is a pair $\langle \tau, \ell \rangle$ where $\tau$ is a trajectory and $\ell \in \L$. A {\em $T_1$-prefix\/} of $\langle \tau, \ell\rangle$, for any $T_1 \in \tau.{\mathit dom}$, is the labeled trajectory $\langle \tau_1, \ell\rangle$ with $\tau_1:[0,T_1] \rightarrow \reals^n$, such that for all $t \in [0, T_1]$, $\tau_1(t) = \tau(t)$. A set of labeled trajectories $\TL$ is prefix-closed if for any $\langle \tau,\ell \rangle \in \TL$, all of its prefixes are also in $\TL$. A set $\TL$ is {\em deterministic\/} if for any pair $\langle \tau_1, \ell_1 \rangle, \langle \tau_2,\ell_2 \rangle \in \TL$, if $\tau_1.\mathop{\mathsf {fstate}} = \tau_2.\mathop{\mathsf {fstate}}$ and $\ell_1 = \ell_2$ then one is a prefix of the other. 
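On sampled data (the form in which {\sc DryVR} actually sees trajectories), the prefix and determinism conditions above reduce to simple list checks. The following Python sketch is illustrative only: it represents a labeled trajectory as a pair of a sample list and a mode, which is an assumption of this sketch rather than a fixed data format.

```python
# Illustrative sketch: prefix and determinism checks on sampled trajectories.
# A labeled trajectory is (samples, mode), where samples is a list of
# (time, state) pairs with increasing time.

def is_prefix(lt1, lt2):
    """True iff lt1 is a T1-prefix of lt2: same mode label, and lt2's
    samples extend lt1's sample list."""
    (s1, l1), (s2, l2) = lt1, lt2
    return l1 == l2 and len(s1) <= len(s2) and s2[:len(s1)] == s1

def is_deterministic(tl):
    """Check the determinism condition on a set of labeled trajectories:
    equal first sample and equal mode imply one is a prefix of the other."""
    for a in tl:
        for b in tl:
            if a[0][0] == b[0][0] and a[1] == b[1]:
                if not (is_prefix(a, b) or is_prefix(b, a)):
                    return False
    return True
```

For instance, two samplings of the same mode-\texttt{m} trajectory with one extending the other pass the check, while a second, diverging trajectory from the same initial state in the same mode violates determinism.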
A deterministic, prefix-closed set of labeled trajectories $\TL$ describes the behavior of the continuous variables in modes $\L$. We denote by $\TL_{{\sf init},\ell} = \{ \tau.\mathop{\mathsf {fstate}} \ | \ \langle \tau,\ell \rangle \in \TL \}$ the set of initial states of trajectories in mode $\ell$. Without loss of generality, we assume that $\TL_{{\sf init},\ell}$ is a connected, compact subset of $\reals^n$. We assume that trajectories are defined for unbounded time, that is, for each $\ell \in \L$, $T > 0$, and $x \in \TL_{{\sf init},\ell}$, there exists a $\langle \tau, \ell \rangle \in \TL$ with $\tau.\mathop{\mathsf {fstate}} = x$ and $\tau.\mathop{\mathsf {ltime}} = T$. In the control theory and hybrid systems literature, trajectories are assumed to be generated from models such as ordinary differential equations (ODEs) and differential algebraic equations (DAEs). Here, we avoid an over-reliance on the models generating trajectories and on closed-form expressions. Instead, {{\sc DryVR}} works with sampled data of $\tau(\cdot)$ generated from simulations or tests. \begin{definition} \label{def:sims} A {\em simulator\/} for a (deterministic and prefix-closed) set $\TL$ of trajectories labeled by $\L$ is a function (or a program) ${\sc sim}$ that takes as input a mode label $\ell \in \L$, an initial state $x_0 \in \TL_{{\sf init},\ell}$, and a finite sequence of time points $t_1, \ldots, t_k$, and returns a sequence of states ${\sc sim}(x_0,\ell,t_1), \ldots, {\sc sim}(x_0,\ell, t_k)$ such that there exists $\langle\tau,\ell\rangle \in \TL$ with $\tau.\mathop{\mathsf {fstate}} = x_0$ and for each $i\in \{1,\ldots, k\}$, ${\sc sim}(x_0,\ell,t_i) = \tau(t_i)$. \end{definition} The trajectories of the $\auto{Powertrn}$ system are described by a {Simulink\textsuperscript{\textregistered}}\ diagram. 
The diagram has several switch blocks and input signals that can be set appropriately to generate simulation data using the {Simulink\textsuperscript{\textregistered}}\ ODE solver. For simplicity, we assume that the simulations are perfect (as in the last equality of Definition~\ref{def:sims}). The formal soundness guarantees of {{\sc DryVR}} are not compromised if we use \emph{validated simulations} instead. \vspace{-10pt} \paragraph{Trajectory containment} Consider sets of trajectories, $\TL_1$ labeled by $\L_1$ and $\TL_2$ labeled by $\L_2$, and a mode map ${\sc lmap}: \L_1 \rightarrow \L_2$. For a labeled trajectory $\langle \tau,\ell \rangle \in \TL_1$, we denote by ${\sc lmap}(\langle \tau,\ell \rangle)$ the labeled trajectory $\langle \tau,{\sc lmap}(\ell) \rangle$. We write $\TL_1 \preceq_{{\sc lmap}} \TL_2$ iff for every labeled trajectory $\langle \tau,\ell \rangle \in \TL_1$, ${\sc lmap}(\langle \tau,\ell \rangle) \in \TL_2$. \subsection{Hybrid systems} \begin{definition} An $n$-dimensional {\em hybrid system\/} $\H$ is a 4-tuple $\langle \L, \Theta, G, \TL \rangle$, where \begin{inparaenum}[(a)] \item $\L$ is a finite set of modes, \item $\Theta \subseteq \reals^n$ is a compact set of initial states, \item $G = \langle \L, \V, \E, {\sc vlab}, {\sc elab} \rangle$ is a transition graph with set of modes $\L$, and \item $\TL$ is a set of deterministic, prefix-closed trajectories labeled by $\L$. \end{inparaenum} \end{definition} A {\em state\/} of the hybrid system $\H$ is a point in $\reals^n \times \L$. The set of initial states is $\Theta \times \L_{\sf init}$. The semantics of $\H$ is given in terms of executions, which are sequences of trajectories consistent with the modes defined by the transition graph. 
An {\em execution\/} of $\H$ is a sequence of labeled trajectories $\alpha = \langle \tau_1, \ell_1\rangle\ldots, \langle \tau_{k-1}, \ell_{k-1}\rangle, \ell_k$ in $\TL$, such that \begin{inparaenum}[(a)] \item $\tau_1.\mathop{\mathsf {fstate}} \in \Theta$ and $\ell_1 \in \L_{\sf init}$, \item the sequence $\pathof{\alpha}$ defined as $\ell_1, \tau_1.\mathop{\mathsf {ltime}}, \ell_2, \ldots \ell_k$ is in ${\trace{}}{G}$, and \item for each pair of consecutive trajectories, $\tau_{i+1}.\mathop{\mathsf {fstate}} = \tau_i.\mathop{\mathsf {lstate}}$. \end{inparaenum} The set of all executions of $\H$ is denoted by ${\exec{}}{\H}$. The first and last states and modes of an execution $\alpha = \langle \tau_1, \ell_1\rangle\ldots, \langle \tau_{k-1}, \ell_{k-1}\rangle, \ell_k$ are $\alpha.\mathop{\mathsf {fstate}} = \tau_1.\mathop{\mathsf {fstate}}$, $\alpha.\mathop{\mathsf {lstate}} = \tau_{k-1}.\mathop{\mathsf {lstate}}$, $\alpha.\mathit{fmode} = \ell_1$, and $\alpha.\mathit{lmode} = \ell_k$. A state $\langle x, \ell \rangle$ is {\em reachable\/} at time $t$ and vertex $v$ (of graph $G$) if there exists an execution $\alpha = \langle \tau_1, \ell_1\rangle\ldots, \langle \tau_{k-1}, \ell_{k-1}\rangle, \ell_k \in {\exec{}}{\H}$, a path $\pi = v_1, t_1, \ldots v_k$ in $\paths{G}$, $i \in \{1,\ldots, k\}$, and $t' \in \tau_i.{\mathit dom}$ such that ${\sc vlab}(\pi) = \pathof{\alpha}$, $v = v_i$, $\ell = \ell_i$, $x = \tau_i(t')$, and $t = t' + \sum_{j=1}^{i-1} t_j$. The set of reachable states, reach tube, and states reachable at a vertex $v$ are defined as follows. 
\begin{itemize}[\null] \itemsep0pt \item $\reachtube{\H} = \{\langle x,\ell,t \rangle\: |\: \mbox{for some } v,\ \langle x, \ell \rangle \mbox{ is reachable at time $t$ and vertex $v$}\}$ \item $\reach{\H} = \{\langle x,\ell \rangle\: |\: \mbox{for some } v,t,\ \langle x, \ell \rangle \mbox{ is reachable at time $t$ and vertex $v$}\}$ \item $\reach{\H}^v = \{\langle x,\ell \rangle\: |\: \mbox{for some } t,\ \langle x, \ell \rangle \mbox{ is reachable at time $t$ and vertex $v$}\}$ \end{itemize} Given a set of (unsafe) states $\U \subseteq \reals^n \times \L$, the {\em bounded safety verification problem\/} is to decide whether $\reach{\H} \cap \U = \emptyset$. In Section~\ref{sec:reachalgo} we will present {\sc DryVR}'s algorithm for solving this decision problem. \begin{remark} Defining paths in a graph $G$ to be maximal (i.e., to end in a vertex in $\V_{{\sf term}}$), coupled with the definition of executions of $\H$ above, ensures that for a vertex $v$ with outgoing edges in $G$, the execution must leave the mode ${\sc vlab}(v)$ within time bounded by the largest time in the labels of the outgoing edges from $v$. \end{remark} An instance of the bounded safety verification problem is defined by (a) the hybrid system for $\auto{Powertrn}$, which is itself defined by the transition graph of Figure~\ref{fig:power_graph} and the trajectories defined by the {Simulink\textsuperscript{\textregistered}}\ model, and (b) the unsafe set ($\U_p$): in the \Lmode{powerup} mode, $t>4\wedge \lambda \notin [12.4,12.6]$; in the \Lmode{normal} mode, $t>4 \wedge \lambda \notin [14.6,14.8]$. Containment between graphs and trajectories can be leveraged to conclude containment of the sets of reachable states of two hybrid systems. \begin{proposition} \label{prop:hybrid-contain} Consider a pair of hybrid systems $\H_i = \langle \L_i, \Theta_i, G_i, \TL_i \rangle$, $i \in \{1,2\}$ and a mode map ${\sc lmap}: \L_1 \to \L_2$. 
If $\Theta_1 \subseteq \Theta_2$, $G_1 \preceq_{{\sc lmap}} G_2$, and $\TL_1 \preceq_{{\sc lmap}} \TL_2$, then $\reach{\H_1} \subseteq \reach{\H_2}$. \end{proposition} 
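To make the simulator interface of Definition~\ref{def:sims} concrete, the following hedged Python sketch implements it for hypothetical per-mode dynamics, using fixed-step Euler integration in place of the {Simulink\textsuperscript{\textregistered}}\ ODE solver. The mode names and vector fields here are illustrative assumptions, not the actual $\auto{Powertrn}$ dynamics.

```python
# Illustrative simulator in the shape of Definition def:sims:
# sim(x0, mode, t_1..t_k) returns the state of the unique mode-'mode'
# trajectory from x0 at each requested time point.

MODES = {                      # hypothetical per-mode vector fields dx/dt = f(x)
    'normal':  lambda x: -0.5 * x,
    'powerup': lambda x: 1.0 - x,
}

def sim(x0, mode, times, dt=1e-3):
    """Sample the mode-'mode' trajectory starting at x0 at the given
    (increasing) time points, via forward-Euler integration."""
    f, out = MODES[mode], []
    t, x = 0.0, x0
    for target in times:
        while t < target:       # advance the single trajectory to 'target'
            x += dt * f(x)
            t += dt
        out.append(x)
    return out
```

Because each call extends one underlying trajectory, the returned samples are consistent with a single $\langle\tau,\ell\rangle \in \TL$, matching the determinism and prefix-closure assumptions; e.g. \texttt{sim(1.0, 'normal', [0.0, 1.0])} starts at state $1.0$ and decays toward $0$.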
{\\gets}{{$\gets\ $}}1 {\\cup}{{$\cup\ $}}1 {\\cap}{{$\cap\ $}}1 {\\langle}{{$\langle$}}1 {\\rangle}{{$\rangle$}}1 {\\exists}{{$\exists\ $}}1 {\\bot}{{$\bot$}}1 {\\rip}{{$\rip$}}1 {\\emptyset}{{$\emptyset$}}1 {\\notin}{{$\notin\ $}}1 {\\not\\exists}{{$\not\exists\ $}}1 {\\ne}{{$\ne\ $}}1 {\\to}{{$\to\ $}}1 {\\implies}{{$\implies\ $}}1 {<}{{$<\ $}}1 {>}{{$>\ $}}1 {=}{{$=\ $}}1 {~}{{$\neg\ $}}1 {|}{{$\mid$}}1 {'}{{$^\prime$}}1 {\{\cal A}}{{$\forall\ $}}1 {\\E}{{$\exists\ $}}1 {\\/}{{$\vee\,$}}1 {\\vee}{{$\vee\,$}}1 {/\\}{{$\wedge\,$}}1 {\\wedge}{{$\wedge\,$}}1 {=>}{{$\Rightarrow\ $}}1 {->}{{$\rightarrow\ $}}1 {<=}{{$\Leftarrow\ $}}1 {<-}{{$\leftarrow\ $}}1 {~=}{{$\neq\ $}}1 {\\U}{{$\cup\ $}}1 {\{\ensuremath{\cap}}}{{$\cap\ $}}1 {|-}{{$\vdash\ $}}1 {-|}{{$\dashv\ $}}1 {<<}{{$\ll\ $}}2 {>>}{{$\gg\ $}}2 {||}{{$\|$}}1 {[}{{$[$}}1 {]}{{$\,]$}}1 {[[}{{$\langle$}}1 {]]]}{{$]\rangle$}}1 {]]}{{$\rangle$}}1 {<=>}{{$\Leftrightarrow\ $}}2 {<->}{{$\leftrightarrow\ $}}2 {(+)}{{$\oplus\ $}}1 {(-)}{{$\ominus\ $}}1 {_i}{{$_{i}$}}1 {_j}{{$_{j}$}}1 {_{i,j}}{{$_{i,j}$}}3 {_{j,i}}{{$_{j,i}$}}3 {_0}{{$_0$}}1 {_1}{{$_1$}}1 {_2}{{$_2$}}1 {_n}{{$_n$}}1 {_p}{{$_p$}}1 {_k}{{$_n$}}1 {-}{{$\ms{-}$}}1 {@}{{}}0 {\\delta}{{$\delta$}}1 {\\R}{{$\R$}}1 {\\Rplus}{{$\Rplus$}}1 {\{\cal N}}{{${\cal N}$}}1 {\\times}{{$\times\ $}}1 {\\tau}{{$\tau$}}1 {\\alpha}{{$\alpha$}}1 {\\beta}{{$\beta$}}1 {\\gamma}{{$\gamma$}}1 {\\ell}{{$\ell\ $}}1 {--}{{$-\ $}}1 {\\TT}{{\hspace{1.5em}}}3 } \newcommand{\abs}[1]{\left\lvert#1\right\rvert} \newcommand{\pair}[1]{\left\langle#1\right\rangle} \newcommand{\floor}[1]{\left\lfloor#1\right\rfloor} \newcommand{\ceil}[1]{\left\lceil#1\right\rceil} \newcommand{\norm}[1]{\left\lvert\left\lvert#1\right\rvert\right\rvert} \newcommand{\argmin}[2]{\underset{#2}{\operatorname{argmin}} #1} \newcommand{\argmax}[2]{\underset{#2}{\operatorname{argmax}} #1} \newcommand{\maxel}[2]{\underset{#2}{\operatorname{max}} #1} \newcommand{\minel}[2]{\underset{#2}{\operatorname{min}} #1} 
\newcommand{\supel}[2]{\underset{#2}{\operatorname{sup}} #1} \newcommand{\infel}[2]{\underset{#2}{\operatorname{inf}} #1} \newcommand{\sgn}[1]{\operatorname{sgn} \left( #1 \right)} \newcommand{\lima}[2]{\underset{#2}{\operatorname{lim}} #1} \newcommand{\ds}[1]{\left\llbracket#1\right\rrbracket} \newcommand{\mathrel{\stackrel{\scriptscriptstyle\Delta}{=}}}{\mathrel{\stackrel{\scriptscriptstyle\Delta}{=}}} \newenvironment{proofof}[1]{\trivlist\item[]\emph{Proof of #1}:}{\unskip\nobreak\hskip 1em plus 1fil\nobreak\qed\parfillskip=0pt\endtrivlist} \def\sim{\sim} \def\hat{\nu}{\hat{\nu}} \def\vec{V}_I{\vec{V}_I} \def\vec{V}_T{\vec{V}_T} \def\dot{\nu}_{2min}{\dot{\nu}_{2min}} \def\dot{\nu}_{2max}{\dot{\nu}_{2max}} \newcommand{\reacht}[1]{\relax\ifmmode {\sf Reach}_{#1}(t) \else ${\sf Reach}_{#1}(t)$\fi} \newcommand{\reachi}[2]{\relax\ifmmode {\sf Reach}_{#1}(#2) \else ${\sf Reach}_{#1}(#2)$\fi} \newcommand{\breach}[2]{\relax\ifmmode {\sf Reach}^{#2}_{#1} \else ${\sf Reach}^{#2}_{#1}$\fi} \newcommand{\breachi}[3]{\relax\ifmmode {\sf Reach}^{#2}_{#1}(#3) \else ${\sf Reach}^{#2}_{#1}(#3)$\fi} \newcommand{\breacht}[2]{\relax\ifmmode {\sf Reach}^{#2}_{#1}(t) \else ${\sf Reach}^{#2}_{#1}(t)$\fi} \newcommand{\INDSTATE}[1][1]{\STATE\hspace{#1\algorithmicindent}} \defC{C} \def\sigma{\sigma} \def\Omega{\Omega} \defT{T} \defY{Y} \defy{y} \def\Gamma{\Gamma} \def\I{{\ensuremath{\cap}}} \newcommand{\comp}[1]{ {#1}' } \defe{e} \newcommand{\varvec}[1]{ \mathbf{#1} } \def\overrightarrow{\Delta V}{\overrightarrow{\Delta V}} \def\textbf{(Right)}\xspace{\textbf{(Right)}\xspace} \def\textbf{(Left)}\xspace{\textbf{(Left)}\xspace} \def\circ{\circ} \subsection{Experiments on trace containment reasoning} \label{sec: sub_exp} \paragraph{Graph simulation} Consider the $\auto{AEB}$ system of Section~\ref{ssec:adas} with the scenario where Vehicle B is stopped ahead of vehicle A, and A transits from \Lmode{cruise} to \Lmode{em\_brake} to avoid colliding with B. 
In the actual system ($G_2$ of Figure~\ref{fig: em_brake_graphs}), two different sensor systems trigger the obstacle detection and emergency braking at time intervals $[1,2]$ and $[2.5,3.5]$ and take the system from vertex $0$ (\Lmode{cruise}) to two different vertices labeled with \Lmode{em\_brake}. \begin{figure}[ht] \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{brake_2_modes.png} \caption{\small Transition graph $G_1$.} \label{fig: em_brake_graphsA} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{brake_3_modes.png} \caption{Transition graph $G_2$.} \label{fig: em_brake_graphsB} \end{subfigure} \begin{subfigure}[b]{0.35\textwidth} \includegraphics[width=\textwidth]{Abstraction_reachTube.pdf} \caption{$\auto{AEB}$ Reachtubes.} \label{fig:abstraction} \end{subfigure} \caption{\small Graphs and reachtubes for the Automatic Emergency Braking $\auto{AEB}$ system.} \label{fig: em_brake_graphs} \end{figure} To illustrate trace containment reasoning, consider a simpler graph $G_1$ that allows a single transition of A from \Lmode{cruise} to \Lmode{em\_brake} over the larger interval $[0.5, 4.5]$. Using Proposition~\ref{prop:hybrid-contain} and checking that graph $G_2 \preceq_{\sf id} G_1$, it follows that verifying the safety of $\auto{AEB}$ with $G_1$ is adequate to infer the safety with $G_2$. Figure~\ref{fig:abstraction} shows that the safe reachtubes returned by the algorithm for $G_1$ (in red) indeed contain the reachtubes for $G_2$ (in blue and gray). \vspace{-10pt} \paragraph{Sequential composition} We revisit the $\auto{Powertrn}$ example of Section~\ref{ex:powertrain}. The initial set $\Theta$ and unsafe set are the same as in Table~\ref{table:results}.
Let $G_A$ be the graph ($v_0$,\Lmode{startup}) $\xrightarrow{[5,10]}$ ($v_1$,\Lmode{normal}) $\xrightarrow{[10,15]}$ ($v_2$,\Lmode{powerup}), and $G_B$ be the graph ($v_0$,\Lmode{powerup}) $\xrightarrow{[5,10]}$ ($v_1$,\Lmode{normal}) $\xrightarrow{[10,15]}$ ($v_2$,\Lmode{powerup}). The graph $G_1 =$ ($v_0$,\Lmode{startup}) $\xrightarrow{[5,10]}$ ($v_1$,\Lmode{normal}) $\xrightarrow{[10,15]}$ ($v_2$,\Lmode{powerup}) $\xrightarrow{[5,10]}$ ($v_3$,\Lmode{normal}) $\xrightarrow{[10,15]}$ ($v_4$,\Lmode{powerup}) can be expressed as the composition $G_1 = G_A \seqcomp G_B$. Consider the two hybrid systems $\H_i = \langle \L, \Theta_i, G_i, \TL \rangle$, $i \in \{A,B\}$ with $\Theta_A = \Theta$ and $\Theta_B = \reach{\H_A}^{v_2}$. {{\sc DryVR}}'s estimate of $\Theta_{B}$ had $\lambda$ in the range from $14.68$ to $14.71$. The reachset $\reach{\H_B}^{v_2}$ computed by {{\sc DryVR}} had $\lambda$ from $14.69$ to $14.70$. The remaining variables were also observed to satisfy the containment condition. Therefore, $\reach{\H_B}^{v_2} \subseteq \Theta_{B}$. Consider the two hybrid systems $\H_i = \langle \L, \Theta, G_i, \TL \rangle$, $i \in \{1,2\}$, where $G_1$ is $G_A \seqcomp G_B$ (defined above), and $G_2 = G_A \seqcomp G_B \seqcomp G_B \seqcomp G_B$. Using Theorem~\ref{thm:seqcomp-mainresult} it suffices to analyze $\H_1$ to verify $\H_2$. $\H_1$ was proved to be safe by {\sc DryVR}\ without any refinement. As a sanity check, we also verified the safety of $\H_2$. {{\sc DryVR}} proved $\H_2$ safe without any refinement as well. \section{Reasoning principles for trace containment} \label{sec:trace-containment} For a fixed unsafe set $\U$ and two hybrid systems $\H_1$ and $\H_2$, proving $\reach{\H_1} \subseteq \reach{\H_2}$ and the safety of $\H_2$ allows us to conclude the safety of $\H_1$.
Proposition~\ref{prop:hybrid-contain} establishes that proving containment of traces, trajectories, and initial sets of two hybrid systems ensures the containment of their respective reach sets. These two observations together give us a method of concluding the safety of one system from the safety of another, provided we can check trace containment of two graphs, and trajectory containment of two trajectory sets. In our examples, the set of modes $\L$ and the set of trajectories $\TL$ are often the same between the hybrid systems we care about. So in this section we present different reasoning principles to check trace containment between two graphs. Semantically, a transition graph $G$ can be viewed as a one-clock timed automaton, i.e., one can construct a timed automaton $T$ with a single clock variable such that the timed traces of $T$ are exactly the traces of $G$. This observation, coupled with the fact that checking the timed language containment of one-clock timed automata~\cite{ow-lics} is decidable, allows one to conclude that checking if $G_1 \preceq_{{\sc lmap}} G_2$ is decidable. However, the algorithm in~\cite{ow-lics} has non-elementary complexity. Our next observation establishes that forward simulation between graphs can be checked in polynomial time. Combined with Proposition~\ref{prop:graphsim}, this gives a simple sufficient condition for trace containment that can be efficiently checked. \begin{proposition} \proplabel{prop:check-sim} Given graphs $G_1$ and $G_2$, and mode map ${\sc lmap}$, checking if there is a forward simulation from $G_1$ to $G_2$ can be done in polynomial time. \end{proposition} \begin{proof} The result can be seen to follow from the algorithm for checking timed simulations between timed automata~\cite{cerans} and the correspondence between transition graphs and one-clock timed automata; the fact that the automata have only one clock ensures that the region construction is poly-sized as opposed to exponential-sized.
However, in the special case of transition graphs there is a more direct algorithm, which we describe here, that does not involve region construction. Observe that if $\{R_i\}_{i\in I}$ is a family of forward simulations between $G_1$ and $G_2$ then $\cup_{i \in I} R_i$ is also a forward simulation. Thus, as for classical simulations, there is a unique largest forward simulation between two graphs, which is the greatest fixpoint of a functional on relations over the vertices of the transition graphs. Therefore, starting from the relation $\V_1 \times \V_2$, one can progressively remove pairs $(v,u)$ such that $v$ is not simulated by $u$, until a fixpoint is reached. Moreover, in this case, since $G_1$ is a DAG, one can guarantee that the fixpoint will be reached in at most $|\V_1|$ iterations. \end{proof} Executions of hybrid systems are of bounded duration and have a bounded number of mode switches. This is because our transition graphs are acyclic and the labels on edges are bounded intervals. Sequential composition of graphs allows one to consider switching sequences that are longer and of longer duration. We now present observations that will allow us to conclude the safety of a hybrid system with long switching sequences based on the safety of the system under short switching sequences. To do this we begin by observing simple properties about sequential composition of graphs. In what follows, all hybrid systems we consider will be over a fixed set of modes $\L$ and trajectory set $\TL$. Also, ${\sf id}$ will be the identity function on $\L$. Our first observation is that trace containment is consistent with sequential composition. \begin{proposition} \label{prop:cong-seqcomp} Let $G_i, G_i'$, $i \in \{1,2\}$, be four transition graphs over $\L$ such that $G_1\seqcomp G_2$ and $G_1'\seqcomp G_2'$ are defined, and $G_i \preceq_{{\sf id}} G_i'$ for $i \in \{1,2\}$. Then $G_1\seqcomp G_2 \preceq_{{\sf id}} G_1'\seqcomp G_2'$.
\end{proposition} Next we observe that sequential composition of graphs satisfies the ``semi-group property''. \begin{proposition} \label{prop:semi-group} Let $G_1,G_2$ be graphs over $\L$ for which $G_1\seqcomp G_2$ is defined. Let $v_{1 {\sf term}}$ be the unique terminal vertex of $G_1$. Consider the following hybrid systems: $\H = \langle \L, \Theta, G_1\seqcomp G_2, \TL\rangle$, $\H_1 = \langle\L, \Theta, G_1, \TL\rangle$, and $\H_2 = \langle\L, \reach{\H_1}^{v_{1 {\sf term}}}, $$ G_2,$ $ \TL\rangle$. Then $\reach{\H} = \reach{\H_1} \cup \reach{\H_2}$. \end{proposition} Consider a graph $G$ such that $G\seqcomp G$ is defined. Let $\H$ be the hybrid system with transition graph $G$, and $\H'$ be the hybrid system with transition graph $G\seqcomp G$; the modes, trajectories, and initial set for $\H$ and $\H'$ are the same. Now by Proposition~\ref{prop:seqcomp-containment} and~\ref{prop:hybrid-contain}, we can conclude that $\reach{\H} \subseteq \reach{\H'}$. Our main result of this section is that under some conditions, the converse also holds. This is useful because it allows us to conclude the safety of $\H'$ from the safety of $\H$. In other words, we can conclude the safety of a hybrid system for long, possibly unbounded, switching sequences (namely $\H'$) from the safety of the system under short switching sequences (namely $\H$). \begin{theorem} \label{thm:seqcomp-mainresult} Suppose $G$ is such that $G\seqcomp G$ is defined. Let $v_{{\sf term}}$ be the unique terminal vertex of $G$. For natural number $i \geq 1$, define $\H_i = \langle \L, \Theta, G^i, \TL\rangle$, where $G^i$ is the $i$-fold sequential composition of $G$ with itself. In particular, $\H_1 = \langle \L, \Theta, G, \TL\rangle$. If $ \reach{\H_1}^{v_{{\sf term}}} \subseteq \Theta $ then for all $i$, $\reach{\H_i} \subseteq \reach{\H_1}$. \end{theorem} \begin{proof} Let $\Theta_1 = \reach{\H_1}^{v_{{\sf term}}}$. From the condition in the theorem, we know that $\Theta_1 \subseteq \Theta$. 
Let us define $\H_i' = \langle \L, \Theta_1, G^i, \TL\rangle$. Observe that from Proposition~\ref{prop:hybrid-contain}, we have $\reach{\H_i'} \subseteq \reach{\H_i}$. The theorem is proved by induction on $i$. The base case (for $i = 1$) trivially holds. For the induction step, assume that $\reach{\H_i} \subseteq \reach{\H_1}$. Since $\seqcomp$ is associative, using Proposition~\ref{prop:semi-group} and the induction hypothesis, we have $ \reach{\H_{i+1}} = \reach{\H_1} \cup \reach{\H_i'} \subseteq \reach{\H_1} \cup \reach{\H_i} = \reach{\H_1}. $ \end{proof} Theorem~\ref{thm:seqcomp-mainresult} allows one to determine the set of reachable states of a set of modes $\L$ with respect to the graph $G^i$, provided $G$ satisfies the conditions in the statement. This observation can be generalized. If a graph $G_2$ satisfies conditions similar to those in Theorem~\ref{thm:seqcomp-mainresult}, then using Proposition~\ref{prop:semi-group}, we can conclude that the reachable set with respect to the graph $G_1\seqcomp G_2^i\seqcomp G_3$ is contained in the reachable set with respect to the graph $G_1\seqcomp G_2\seqcomp G_3$. The formal statement of this observation and its proof are omitted in the interest of space, but we will use it in our experiments. \input{substitutivity_exp}
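The direct fixpoint algorithm in the proof of Proposition~\ref{prop:check-sim} can be sketched in a few lines of Python. The graph representation, the interval-containment matching condition on edge labels, and all identifiers below are illustrative assumptions, not an implementation from this work:

```python
def largest_forward_simulation(G1, G2, lmap):
    """Largest forward simulation of G1 by G2, as the greatest fixpoint:
    start from all label-compatible vertex pairs and repeatedly discard
    pairs that fail the one-step matching condition."""
    V1, E1, label1 = G1          # E maps a vertex to [(target, lo, hi), ...]
    V2, E2, label2 = G2
    R = {(v, u) for v in V1 for u in V2 if lmap[label1[v]] == label2[u]}
    changed = True
    while changed:
        changed = False
        for (v, u) in list(R):
            for (v2, lo, hi) in E1.get(v, []):
                # every edge v --[lo,hi]--> v2 must be matched by an edge
                # u --[lo2,hi2]--> u2 with [lo,hi] inside [lo2,hi2], (v2,u2) in R
                if not any(lo2 <= lo and hi <= hi2 and (v2, u2) in R
                           for (u2, lo2, hi2) in E2.get(u, [])):
                    R.discard((v, u))
                    changed = True
                    break
    return R
```

Run on the graphs of the $\auto{AEB}$ example, the pair of initial vertices survives the fixpoint, witnessing $G_2 \preceq_{\sf id} G_1$; since the simulated graph is a DAG, the loop stabilizes after at most $|\V_1|$ sweeps.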
\section{Introduction}\label{intro} In the search for near-conformal composite Higgs theories, non-perturbative lattice determinations of the beta function of the renormalized gauge coupling have played a crucial role. The recent development of the gradient flow~\cite{Luscher:2010iy,Narayanan:2006rf,Lohmayer:2011si,Luscher:2011bx} has added a new level of precision, allowing for very accurate measurement of renormalized quantities. However, lattice simulations of Beyond Standard Model (BSM) theories with an increased number of fermion flavors, like $N_f = 12$, or fermion representations other than the fundamental can lead to a significant increase in computational effort compared to simulations of QCD. To determine if a given model is infrared conformal or not, one has to know the behavior in the chiral limit. For beta function studies, that typically leads to working directly at zero fermion mass, with a particular choice of boundary conditions. There are also complementary studies of the particle spectrum of such theories, where the fermion mass is varied to see if e.g.~chiral symmetry appears to be spontaneously broken in the massless limit generating a set of Goldstone bosons, or if a light composite scalar particle, perhaps a Higgs impostor, exists in such a model. Given the large computational resources each such study requires, a beta function measurement which can take advantage of pre-existing particle-spectrum-type gauge ensembles would be very valuable, since {\bf (a)} it would involve negligible additional computational cost, {\bf (b)} the beta function would be measured at renormalized gauge couplings strong enough to see if chiral symmetry could be spontaneously broken in the chiral limit, and {\bf (c)} it would complement independent beta function measurements from simulations directly at zero fermion mass. In this report we describe such a technique.
We apply it here in the context of near-conformal gauge theories, but the method can just as well be applied to other gauge theories such as QCD. \section{Gradient flow and step-scaling in finite volume}\label{gradient} The gradient flow $d A_\mu/dt = D_\nu F_{\nu \mu}$ defines the gauge field $A_\mu(t)$ at flow time $t$. Perturbatively, the action density $E = (F_{\mu \nu}^a)^2/4$ has an expectation value \be \langle E \rangle = \frac{3(N^2-1) g^2}{128 \pi^2 t^2} \left\{ 1 + \overline{c_1} g^2 + {\cal O}(g^4) \right\} \label{eq1} \ee in the $\overline{\mathrm {MS}}$ scheme for ${\mathrm {SU(N)}}$ gauge theory, where the renormalized coupling $g$ is defined at the renormalization group scale $\mu = 1/\sqrt{8t}$. This motivates a non-perturbative definition of the renormalized coupling \be g^2(t) \equiv \frac{1}{\cal N} \left( \frac{128 \pi^2}{3(N^2-1)} \right) t^2 \langle E \rangle_{\mathrm {latt}}, \label{eq2} \ee where the expectation value of the action density at flow time $t$ is measured via lattice simulations and the normalization factor ${\cal {N}}$ depends on the choice of boundary conditions. As the action density is a bulk quantity, the observable $\langle E \rangle$ can be measured non-perturbatively very precisely. One way to measure the beta function in finite volume is via step-scaling: in a physical volume $L^4$, the flow is adjusted holding $c = \sqrt{8t}/L$ fixed, each choice of $c$ corresponding to a particular renormalization group (RG) scheme. The RG scale $\mu$ is now set by the only remaining scale $L$. For a given lattice volume $(L/a)^4$ the bare gauge coupling (and hence the lattice spacing) is adjusted such that the renormalized coupling has a chosen fixed value e.g.~$g^2_c(L/a)=6$.
Keeping the lattice spacing $a$ fixed, a second simulation on a larger volume e.g.~$(sL/a)^4$ with $s=2$ gives the discrete step $\beta(g^2_c) = \{g^2_c(sL/a) - g^2_c(L/a)\}/\log(s^2)$ i.e.~the response of the gauge coupling as the RG scale is changed by a finite amount. In this context {\it discrete} has nothing to do with the lattice discretization. However the beta function will contain lattice artifacts which must be removed. To take the continuum limit, the procedure is repeated for a sequence of lattice volumes e.g.~$L/a = 16, 18, 20, 24, 28$ on each of which $g^2_c(L/a) = 6$ is tuned via the bare coupling and larger volumes e.g.~$2L/a = 32, 36, 40, 48, 56$ from which the discrete step is measured and the limit $a/L \rightarrow 0$ is obtained. The final result is the continuum finite-step beta function in finite volume. This approach, widely used in QCD, has already been applied in the context of near-conformal gauge theories~\cite{Fodor:2012td,Hasenfratz:2014rna,Fodor:2015baa,Lin:2015zpa,Appelquist:2009ty,Hietanen:2009az,Hayakawa:2013yfa}. \section{Beta function in infinite volume}\label{beta} The main message of this report is to describe an alternative approach. Since the gradient flow defines a renormalized coupling $g^2(t)$ at any flow time $t$, one can also directly measure on the same ensemble of gauge configurations the derivative $t \cdot dg^2/dt = - \mu^2 \cdot dg^2/d \mu^2$ i.e.~the usual beta function with an infinitesimal change in the RG scale at any particular $g^2$ value. Note that asymptotic freedom corresponds to $t \cdot dg^2/dt > 0$. In comparison to the approach at fixed $c$ in Section~\ref{gradient}, the flow time $t$ is not held fixed relative to the lattice size $L/a$ in the new method as described in what follows. 
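For concreteness, the coupling definition and the discrete step of Section~\ref{gradient} amount to a few lines of arithmetic. The sketch below assumes ${\mathrm{SU(3)}}$, a trivial normalization ${\cal N}=1$, and made-up input numbers; it illustrates the formulas and is not analysis code from this work:

```python
import math

def g_squared(t2_E, N=3, norm=1.0):
    """Renormalized coupling from a measured t^2 <E>, per the gradient
    flow definition g^2(t) = (1/N_norm) (128 pi^2 / (3 (N^2-1))) t^2 <E>."""
    return (1.0 / norm) * (128.0 * math.pi**2 / (3.0 * (N**2 - 1))) * t2_E

def discrete_beta(g2_L, g2_sL, s=2):
    """Discrete step-scaling beta function (g^2(sL) - g^2(L)) / log(s^2)."""
    return (g2_sL - g2_L) / math.log(s**2)
```

For example, a hypothetical step from $g^2_c(L/a)=6$ to $g^2_c(2L/a)=6.5$ gives a discrete beta function of about $0.36$.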
From a sequence of ensembles with various lattice volumes, fermion masses and lattice spacings, a sequence of limits can be taken to reach the continuum infinitesimal-step beta function in infinite volume in the chiral limit. We have previously generated a large set of such ensembles in our study of the particle spectrum of two flavor sextet ${\mathrm {SU(3)}}$ gauge theory. We use staggered fermions with stout link improvement and the Symanzik gauge action in generating the gauge configurations as described in~\cite{Fodor:2012ty}. Our previous lattice studies of the model found a set of massless Goldstone bosons in the chiral limit separated from massive vector, axial vector and baryonic states, with an emergent light scalar, as well as strong evidence that the chiral condensate is non-zero at zero fermion mass \cite{Wong:lat17beta,Fodor:2016pls,Fodor:2012ty}. These p-regime gauge ensembles, already strongly indicative of near-conformal behavior, provide the basis for this beta function computation. \begin{figure}[thb] \centering \sidecaption \includegraphics[width=5cm,clip]{L56_g2fit} \includegraphics[width=5cm,clip]{L56_BetaFit} \caption{(left) The gradient flow renormalized coupling $g^2$ and (right) its associated beta function on a lattice volume $56^3 \times 96$ at a Goldstone boson mass of $m_{\pi}\cdot a \approx 0.08$.} \label{fig-1} \end{figure} In Figure~\ref{fig-1} we show the renormalized coupling $g^2$ and its corresponding derivative $t \cdot dg^2/dt$ for one ensemble, a lattice volume $56^3 \times 96$ at the bare gauge coupling $6/g_0^2 = 3.20$ and fermion mass $ma = 0.001$, corresponding to a Goldstone boson mass $m_{\pi}\cdot a \approx 0.08$. The derivative is approximated by $\{ -F(t+2\ep) + 8 F(t+\ep) - 8 F(t - \ep) + F(t - 2\ep) \}/(12 \ep) = dF/dt + {\cal O}(\ep^4)$.
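The quoted five-point stencil is the standard ${\cal O}(\ep^4)$ central difference; a quick numerical check (on an arbitrary smooth test function, not flow data) confirms the scaling of the truncation error:

```python
import math

def stencil_derivative(F, t, eps):
    """Five-point central difference: dF/dt + O(eps^4)."""
    return (-F(t + 2*eps) + 8*F(t + eps) - 8*F(t - eps)
            + F(t - 2*eps)) / (12*eps)

# truncation error for F = sin at t = 1 shrinks ~16x when eps is halved
err1 = abs(stencil_derivative(math.sin, 1.0, 1e-2) - math.cos(1.0))
err2 = abs(stencil_derivative(math.sin, 1.0, 5e-3) - math.cos(1.0))
```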
As opposed to step-scaling, where the flow time $t$ is set by the choice of $c = \sqrt{8t}/L$, in this method the value of the renormalized coupling $g^2$ is chosen and the flow time where this value is reached is measured. We show the choice $g^2(t_0) = 6.7$, which for this ensemble occurs at $t_0/a^2 = 5.487 \pm 0.077$. (Note that this does not correspond to the choice of $t_0$ set by $t^2 \cdot \langle E \rangle_{t_0} = 0.3$ in the original investigation of~\cite{Luscher:2010iy}.) A larger choice of $g^2$ gives a larger statistical error on $t_0$; however, too small a value of $g^2$ gives a beta function distorted by large cutoff effects, as seen on the right of Figure~\ref{fig-1} for $t < 2$. These and other constraints we describe later influence which fixed value of $g^2(t_0)$ we choose to target. \begin{figure}[thb] \centering \sidecaption \includegraphics[width=5.2cm,clip]{jMpiFSS} \includegraphics[width=5cm,clip]{t0FSS} \caption{Infinite volume extrapolations of (left) the Goldstone boson mass and (right) the scale $t_0$ at which $g^2(t_0) = 6.7$, at fixed fermion mass and bare coupling.} \label{fig-2} \end{figure} Since the goal is the infinite volume beta function, it is necessary to correct for finite volume dependence. We use an ansatz with an infinite sum $g_1$ of Bessel functions, dependent on the aspect ratio $L_t/L_s$ of the lattice volume, to account for Goldstone bosons wrapping around the finite volume~\cite{Gasser:1986vb}, e.g.~$M_\pi(L) = M_\pi + c_M g_1(M_\pi L)$ where the complicated sum $g_1$ is evaluated numerically. At 1-loop in chiral perturbation theory $c_M = M_\pi^2/(64 \pi^2 F_\pi^2)$; here we leave the prefactor $c_M$ of the $g_1$ function as a free parameter to be fitted. In Figures~\ref{fig-2} and~\ref{fig-3} we show examples of such infinite volume extrapolations for the Goldstone boson mass, the scale $t_0$ and the corresponding beta function.
These figures are typical: the volume effect is relatively small but visible and is well described by the ansatz. Note that the infinite volume mass $M_\pi$ is first determined by the Goldstone boson volume fit and is then used as one of the inputs for the $t_0$ and beta function volume fits. \begin{figure}[thb] \centering \sidecaption \includegraphics[width=5cm,clip]{betaFSS} \includegraphics[width=5cm,clip]{t0ChiralFit_LIN320} \caption{(left) Infinite volume extrapolation of the beta function at the renormalized coupling $g^2(t_0) = 6.7$. (right) Chiral extrapolation of the scale $t_0$ as a function of $M_\pi^2$. The cyan data points are not included in the fit.} \label{fig-3} \end{figure} The next natural step is the extrapolation to zero fermion mass at fixed bare coupling. From~\cite{Bar:2013ora}, if the smearing radius $\sqrt{8t}$ is small compared to the Goldstone boson Compton wavelength, a chiral expansion gives \be t_0 = t_{\rm 0,ch} \left( 1 + k_1 \frac{M_\pi^2}{(4 \pi f)^2} + k_2 \frac{M_\pi^4}{(4 \pi f)^4} \log \left( \frac{M_\pi^2}{\mu^2} \right) + k_3 \frac{M_\pi^4}{(4 \pi f)^4} \right) \label{eq3} \ee where $f$ is the Goldstone boson decay constant in the chiral limit. We show in Figure~\ref{fig-3} an example of such a chiral fit of the infinite-volume $t_0$ data. We do not have sufficient data at all lattice spacings for a quadratic fit in $M_\pi^2$ or to fit the chiral logarithm, hence we use a linear fit in $M_\pi^2$ for the data at the lighter masses. At this leading order, linear dependence in $M_\pi^2$ is equivalent to linear dependence in the fermion mass $m$ itself; extrapolating in either variable to the chiral limit should give consistent results. We show in Figure~\ref{fig-4} the results of linear fits in the mass $m$ at the same bare coupling, which are indeed consistent with extrapolating in $M_\pi^2$.
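At this leading order the chiral fit is ordinary least squares in $M_\pi^2$, with the intercept giving the chiral-limit value. A minimal sketch (the synthetic data points below are invented for illustration, not our measurements):

```python
def linear_fit(xs, ys):
    """Least-squares fit ys = intercept + slope * xs; the intercept is
    the extrapolated value at xs = 0 (the chiral limit for xs = M_pi^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# invented (M_pi^2, t0/a^2) points lying on an exact line
msq = [0.0064, 0.0100, 0.0144]
t0 = [6.2 - 20.0 * x for x in msq]
slope, t0_chiral = linear_fit(msq, t0)
```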
The determination of the scale in the chiral limit is $t_0/a^2 = 6.20 \pm 0.14$ at this bare coupling $6/g_0^2 = 3.20$, which corresponds to our coarsest lattice spacing. \begin{figure}[thb] \centering \includegraphics[width=6cm,clip]{t0ChiralFit_mLIN320} \includegraphics[width=6cm,clip]{betaChiralFit_mLIN320} \caption{Chiral extrapolations of (left) the scale $t_0$ and (right) the beta function in the fermion mass $m$.} \label{fig-4} \end{figure} The entire procedure is repeated for two other sets of ensembles: $6/g_0^2 = 3.25$ corresponding to our intermediate lattice spacing, and $6/g_0^2 = 3.30$, our finest lattice spacing. We hold the renormalized coupling $g^2(t_0) = 6.7$ fixed, find the corresponding $t_0/a^2$ and beta function values for a variety of lattice volumes and fermion masses, fit their finite-volume dependence at fixed mass and then extrapolate to the chiral limit. The final step is shown in Figures~\ref{fig-5} and~\ref{fig-6}. We see that estimates of the chiral limit scale $t_0/a^2$ are $10.48 \pm 0.23$ and $15.85 \pm 0.46$ for the intermediate and fine lattice spacings respectively, giving an overall change of $\approx 1.6$ in lattice spacing from coarsest to finest ensembles. The chiral limit of the beta function shows modest cutoff effects on the order of 10\%, which makes the continuum extrapolation mild. Note that a larger choice of the renormalized coupling to define the scale, e.g.~$g^2(t_0) = 8$, would give a larger value of $t_0/a^2$, which might not be possible to accommodate at the finest lattice spacing while still removing the finite-volume dependence. On the other hand, too small a value of $g^2(t_0)$ would give much larger lattice artifacts, hence the choice $g^2(t_0) = 6.7$ balances these two considerations.
\begin{figure}[thb] \centering \includegraphics[width=6cm,clip]{t0ChiralFit_mLIN325} \includegraphics[width=6cm,clip]{betaChiralFit_mLIN325_v2} \caption{Similar to Figure~\ref{fig-4}, chiral extrapolations at $6/g_0^2 = 3.25$, our intermediate lattice spacing.} \label{fig-5} \end{figure} We show the last step, the continuum extrapolation of the beta function, in Figure~\ref{fig-7}. In the chiral limit we expect the leading cutoff effect to be ${\cal O}(a^2)$, hence we fit the data linearly in $a^2/t_0$; with only three data points a more extended fitting form is not possible. Because the fitting variable $t_0$ has its own error, this effect is included in the fit as described in~\cite{0957-0233-18-11-025}, with the $\chi^2$ function being generalized to include the error in both $x$ and $y$ coordinates \be \chi^2 = \sum_{k=1}^n \left[ \frac{(X_k - x_k)^2}{\sigma_{x,k}^2} + \frac{(Y_k - y_k)^2}{\sigma_{y,k}^2} \right], \label{eq-chi2} \ee where $x_k$ and $y_k$ are the data pairs with their respective errors $\sigma_{x,k}$ and $\sigma_{y,k}$, and $Y_k = c \cdot X_k + d$ is the fitting form with $c$ and $d$ as the parameters to be determined. Using this form, our result for the infinite-volume infinitesimal beta function at $g^2 = 6.7$ is $\beta(g^2) = 0.548 \pm 0.047$. Any physical target, like the beta function in this work, requires appropriate orders of the chiral and continuum limits as noted in~\cite{Bernard:2004ab}. An alternative to the approach presented here would take the chiral and continuum limits simultaneously in terms of $\sqrt{t_0} \cdot m$ and $a^2/t_0$, similar to~\cite{Wong:lat17beta}. This method is being investigated for the beta function.
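Minimizing the generalized $\chi^2$ over the $X_k$ reduces it to the standard ``effective variance'' form $\sum_k (y_k - c\, x_k - d)^2 / (\sigma_{y,k}^2 + c^2 \sigma_{x,k}^2)$, which can be solved by iterating a weighted least-squares fit. A minimal sketch (a simple fixed-point iteration with invented data; not the fitting code used for Figure~\ref{fig-7}):

```python
def fit_xy_errors(x, y, sx, sy, iters=50):
    """Straight-line fit y = c*x + d with errors on both coordinates,
    iterating weighted least squares with weights 1/(sy^2 + c^2 sx^2)."""
    c = 0.0
    for _ in range(iters):
        w = [1.0 / (syk**2 + c**2 * sxk**2) for sxk, syk in zip(sx, sy)]
        W = sum(w)
        mx = sum(wk * xk for wk, xk in zip(w, x)) / W
        my = sum(wk * yk for wk, yk in zip(w, y)) / W
        c = sum(wk * (xk - mx) * (yk - my) for wk, xk, yk in zip(w, x, y)) \
            / sum(wk * (xk - mx) ** 2 for wk, xk in zip(w, x))
    return c, my - c * mx
```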
\begin{figure}[thb] \centering \includegraphics[width=6cm,clip]{t0ChiralFit_mLIN330} \includegraphics[width=6cm,clip]{betaChiralFit_mLIN330} \caption{Similar to Figures~\ref{fig-4} and~\ref{fig-5}, chiral extrapolations at $6/g_0^2 = 3.30$, our finest lattice spacing.} \label{fig-6} \end{figure} \section{Comparison and conclusion}\label{conclude} The infinite-volume beta function we determine is in a different scheme than the finite-volume beta function measured via step-scaling, which in turn has its own dependence on the choice of $c$, the ratio of flow time to lattice volume. It is still instructive to compare these different results for the sextet model, as shown in Figure~\ref{fig-7}, where the finite-volume beta function is taken from our own work in~\cite{Fodor:2015zna}. We see that the two calculations are in good agreement -- the beta function is small but non-zero in the range of renormalized couplings which, from our independent studies of the particle spectrum, are strong enough that chiral symmetry is spontaneously broken in the chiral limit. Our recent extended study of the beta function of the twelve-flavor ${\mathrm {SU(3)} }$ model with fundamental representation fermions~\cite{Fodor:2017gtj} shows that at small values of $c$ there is little volume dependence in the method of Section~\ref{gradient}. This may explain the good agreement between our infinite- and finite-volume beta functions at $g^2 = 6.7$ in the sextet model, since the new beta function in some sense might be viewed as the $c \rightarrow 0$ limit. The finite-volume beta function, calculated directly at zero mass, starts in the perturbative regime and moves to stronger coupling as the physical volume grows. If no infrared fixed point (IRFP) is found, i.e.~a non-trivial zero of the beta function, one could argue that this is simply because strong enough couplings and large enough physical volumes have not yet been reached. 
However, the gauge ensembles where the finite-volume beta function at $g^2 = 6.7$ could be attained are matched by p-regime gauge configurations at the same coupling for the targeted scale, but with massless fermions in the infinite-volume limit and spontaneous chiral symmetry breaking. This is demonstrated by the particle spectrum and the eigenvalues of the Dirac operator. In this phase the theory has sufficiently strong coupling to generate a p-regime with massive states separated from the massless Goldstone bosons; there is no room left at stronger coupling for the theory to have a conformal spectrum of massless states whose mass deformation would be governed by a universal anomalous dimension. This bridges the gap between the weak and strong coupling regimes and obviates any need to continue exploring even stronger coupling with the finite-volume beta function in the hunt for an IRFP. \begin{figure}[thb] \centering \includegraphics[width=6cm,clip]{xyBetaFunctionFit} \includegraphics[width=7cm,clip]{xCompositeBeta} \caption{(left) Continuum extrapolation of the beta function at $g^2(t_0) = 6.7$, yielding $\beta = 0.548 \pm 0.047$ as the continuum result. (right) Comparison of this calculation with previous finite-volume beta function measurements. In the gradient flow scheme in infinite volume, the 3-loop beta function~\cite{Harlander:2016vzb} has an infrared fixed point at $g^2 \approx 6.8$; in the $\overline{\mathrm {MS}}$ scheme the corresponding 3-loop beta function has a zero at $g^2 \approx 6.3$.} \label{fig-7} \end{figure} Our beta function calculations, consistent with one another, contradict other lattice studies of the finite-volume beta function for the sextet model~\cite{Shamir:2008pb,Hasenfratz:2015ssa}. We believe this is because of lattice artifacts whose effects were not fully removed in those works. The range of lattice volumes we employ is larger than in either of those studies, which allows us to push further towards the continuum. 
This is mostly an issue of systematic errors, not a question of underestimated statistical errors, and should be accounted for without any speculation about differing universality classes for different fermion discretizations, contrary to the claims made in~\cite{Hasenfratz:2017mdh}. Our beta function determinations are also consistent with our large-volume non-perturbative study of the particle spectrum, which shows that chiral symmetry is spontaneously broken in the massless fermion limit, with associated Goldstone bosons and a spectrum of massive states~\cite{Wong:lat17beta,Fodor:2016pls,Fodor:2012ty}. This is inconsistent with other studies of the sextet model using the Wilson fermion discretization, which interpret the sextet model as being infrared conformal~\cite{Hansen:2017ejh}. In comparison to ${\rm SU(3)}$ gauge theory with $N_f$ massless fermion flavors in the fundamental representation, the sextet model appears to have near-conformal behavior, with a lighter composite scalar than in the $N_f=4$ and 8 theories. Our first investigations of the anomalous mass dimension, measured via the Dirac operator eigenvalues, indicate that it could be sufficiently large to be phenomenologically viable~\cite{Fodor:2016hke}. If this first sign holds, and is combined with the other properties of the sextet model, the theory continues to be a relevant and interesting candidate for an explicit realization of the composite Higgs paradigm. However, the entangled dynamics of the light scalar and the light Goldstone pions, which calls for a generalized framework in chiral perturbation theory, remains an unsolved problem. This is under active investigation, as addressed in~\cite{Kuti:lat17dila}, with potential implications for the beta function analysis presented here. 
\section*{Acknowledgments} We acknowledge support by the DOE under grant DE-SC0009919, by the NSF under grants 1318220 and 1620845, by OTKA under the grant OTKA-NF-104034, and by the Deutsche Forschungsgemeinschaft grant SFB-TR 55. Computational resources were provided by the DOE INCITE program on the ALCF BG/Q platform, by USQCD at Fermilab, by the University of Wuppertal, by Juelich Supercomputing Center on Juqueen and by the Institute for Theoretical Physics, Eotvos University. We are grateful to Szabolcs Borsanyi for his code development for the BG/Q platform. We are also grateful to Sandor Katz and Kalman Szabo for their CUDA code development.
\section{Introduction} In the study of symmetric tensors $T$ of type $(n+1)\times\dots\times (n+1)$ ($d$ times), which can be identified with polynomial forms of degree $d$ in $n+1$ variables, one of the main aspects concerns their Waring decompositions, i.e. decompositions of type $T=L_1^d+\dots+L_r^d$, where each $L_i$ is a linear form. The (Waring) rank of $T$, which is the minimum $r$ for which the decomposition exists, turns out to be a good measure for the complexity of $T$. Many recent results are devoted to the computation of the (Waring) rank of forms. Effective methods for the computation of decompositions are available, though their practical implementation is often out of reach (see e.g. \cite{HauensteinOedOttSomm}, \cite{Ange}). An important, open question, however, is related to the uniqueness of a decomposition with $r$ minimal, i.e. the uniqueness of a decomposition which computes the rank. The existence of a unique decomposition that computes the rank of $T$ is a property of $T$ denoted as {\it identifiability}. Uniqueness is important for the applications, at least for two reasons. First, the computation of an effective decomposition is often faster, when the target is uniquely determined. Second, if we are looking for a specific decomposition, the uniqueness ensures us that if we get close to one of them, then that one is exactly what we were looking for. Here, with {\it uniqueness} of the decomposition we mean uniqueness up to trivialities, like permutation of the summands or their rescaling. We mention the papers \cite{AllmanMatiasRhodes09}, \cite{AnandkumarGeHsuKakadeTelgarsky14}, \cite{AppellofDavidson81} for examples of applications. When $T$ is a sufficiently general form of rank $r$, and $rn+r+n+1<\binom{d+n}n$ (technically, when we are in the range of {\it subgeneric} ranks), then the identifiability of $T$ is known to hold, with the exclusion of a short list of cases for $(n,d,r)$, which are completely described (see \cite{COttVan17a}). 
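The numerical condition delimiting the subgeneric range is easy to evaluate mechanically. A minimal Python sketch (the function name is ours, and the short exceptional list of \cite{COttVan17a} is not encoded):

```python
from math import comb

def in_subgeneric_range(n, d, r):
    """Check the inequality r*n + r + n + 1 < C(d+n, n) under which,
    for a general form of degree d in n+1 variables and rank r,
    identifiability holds (outside the known short list of
    exceptional triples (n, d, r))."""
    return r * n + r + n + 1 < comb(d + n, n)
```

For instance, for ternary septics ($n=2$, $d=7$) the inequality as stated holds for $r\leq 10$, since $\binom{9}{2}=36$.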
On the other hand, for specific forms $T$, the question of the computation of the rank and the identifiability of $T$ is still widely open, except for small values of the rank. In more detail, given a form $T$, which arises either from theoretical reasons or from experimental data, it is often possible to determine a specific decomposition $T=L_1^d+\dots+L_r^d$, but it remains unclear whether the decomposition computes the rank or whether it is unique, i.e. it is unclear if we can exclude the existence of a second decomposition with $r'\leq r$ summands. For specific forms, of which we know a decomposition $T=L_1^d+\dots+L_r^d$, the most famous criterion to exclude the existence of a second decomposition with $r'\leq r$ summands goes back to the 70's, to the celebrated paper by Kruskal \cite{Kruskal77}. The criterion, which is valid even for non-symmetric tensors but holds only for small values of $r$, is based on the notion of {\it Kruskal's ranks} of a decomposition (see Definition \ref{krur} below): if $r$ satisfies an inequality in terms of the Kruskal's ranks, then $r$ is the rank of $T$ and the decomposition is unique. Derksen \cite{Derksen13} proved that the inequality is sharp: one cannot hope to improve Kruskal's criterion by using just the Kruskal's ranks of the decomposition. In the specific case of symmetric tensors, however, other methods have been introduced to study the uniqueness of a decomposition. One of them, based on the study of the kernels of catalecticant maps, can be found in \cite{MassaMellaStagliano18}. Another method, based on the Kruskal's original criterion, is described in \cite{COttVan17b}; it is based on the computation of the Kruskal's ranks of a {\it reshaping} of a decomposition, i.e. the Kruskal's ranks of the points obtained by raising the $L_i$'s to some partial powers $d_1,d_2,d_3$, with $d_1+d_2+d_3=d$. Both methods usually work in a range wider than the range in which the original Kruskal's criterion can be applied. 
Thus, e.g., the determination of the reshaped Kruskal's rank can exclude Derksen's examples in the symmetric case (see \cite{BallC12}, \cite{BallC13} for other methods). Yet, all the previous methods will give no answer when $r$ grows enough. For instance, for the case of ternary septics, i.e. the case $n=2$, $d=7$, subgeneric ranks (for which identifiability holds generically) are ranks $r=1,\dots, 11$, but the original Kruskal's criterion will give no answer for $r>7$, while both the catalecticant method or the reshaped Kruskal's method will work only for $r\leq 10$. The aim of this paper is an extension of a method, introduced in \cite{COttVan17b} and in \cite{AngeCVan18} for the case of forms of degree $4$, which can determine that $r$ is the rank of $T$ and the decomposition is unique, even in a range in which both the catalecticant method or the reshaped Kruskal's method do not work. The method is still based on the computation of the reshaped Kruskal's ranks of the decomposition, but the conclusion is obtained with advanced tools of algebraic geometry: the analysis of the Hilbert function of the set of projective points associated with a decomposition. We apply the method to ternary forms of degree $7$, and we find that if $T$ has a decomposition $T=L_1^d+\dots+L_r^d$ whose reshaped Kruskal's ranks are general, then $T$ has rank $r$ and the decomposition is unique (up to trivialities). We obtain thus an effective criterion for the identifiability of specific symmetric tensors, which definitively improves our previous knowledge. The proof of the criterion is based on results for the analysis of the postulation of finite sets in projective spaces (e.g. on the Cayley-Bacharach property) which, we believe, have an independent theoretical interest for investigators in the field. 
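The bound $r\leq 10$ quoted above for the reshaped Kruskal's method can be checked mechanically, by maximizing the right-hand side of the criterion $r\leq (k_a+k_b+k_c-2)/2$ over all partitions $d=a+b+c$, under the assumption that all reshaped Kruskal's ranks attain their maximal values $k_e=\min\{r,\binom{e+n}{n}\}$. A Python sketch (illustrative only; the function name is ours):

```python
from math import comb

def max_certified_rank(n, d):
    """Largest rank r certified by the reshaped Kruskal criterion
    r <= (k_a + k_b + k_c - 2)/2 over all partitions d = a+b+c,
    assuming maximal Kruskal ranks k_e = min(r, C(e+n, n))."""
    partitions = [(a, b, d - a - b)
                  for a in range(1, d - 1)
                  for b in range(a, d - a)
                  if d - a - b >= b]            # a <= b <= c, a+b+c = d
    best = 0
    for r in range(1, comb(d + n, n) + 1):      # r cannot exceed C(d+n, n)
        for (a, b, c) in partitions:
            ks = [min(r, comb(e + n, n)) for e in (a, b, c)]
            if 2 * r <= sum(ks) - 2:
                best = max(best, r)
    return best
```

For $n=2$, $d=7$ the maximum is attained by the partition $7=1+3+3$ and gives $10$, in agreement with the range stated above for ternary septics.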
In particular, our result also proves that, for a general choice of the linear forms $L_i$'s, the span of $L_1^d,\dots,L_r^d$, which is a subset of the $r$-th secant variety $S$ of the Veronese image of degree $7$ of $\mathbb{P}^2=\mathbb{P}(\mathbb C^3)$, does not contain singular points of $S$, outside those generated by a proper subset of the powers $L_i^d$. In section \ref{cub} we consider forms of any degree, with a decomposition in which the Kruskal's ranks are not maximal, but are maximal modulo the fact that the points representing the decomposition lie in a plane cubic curve. We show also in this case that, with the same methods, one can prove the identifiability of the form $T$. We end by noticing that ternary septics $T$ are the last numerical case in which the analysis of the Kruskal's rank of a decomposition is sufficient to conclude, outside a (Zariski closed) set of measure $0$, that a given decomposition is the unique one that computes the rank of $T$. In a forthcoming series of papers we will prove that, already for ternary octics, the linear span of every general decomposition contains both identifiable and not identifiable points. \vskip.3cm \paragraph{\textbf{Acknowledgements.}} The first two authors are members of the Italian GNSAGA-INDAM and are supported by the Italian PRIN 2015 - Geometry of Algebraic Varieties (B16J15002000005). \section{Notation}\label{nota} We work over the complex field $\mathbb C$. For any finite subset $A$ of the projective space $\mathbb{P}^n=\mathbb{P}(\mathbb{C}^{n+1})$, we denote by $\ell(A)$ the cardinality of $A$. We call $v_d:\mathbb{P}^n\to \mathbb{P}^N$, $N = \binom{n+d}d - 1 $, the Veronese embedding of degree $d$. The space $\mathbb{P}^N$ can be identified with $\mathbb{P}(Sym^d(\mathbb{C}^{n+1}))$. So, the points of $\mathbb{P}^N$ can be identified, up to scalar multiplication, with forms $T$ of degree $d$ in $n+1$ variables. 
By abuse, we will denote by $T$ both the form in $Sym^d(\mathbb{C}^{n+1})$ and the point in $\mathbb{P}^N$ which represents $T$. The form $T$ belongs to the \emph{Veronese variety} $v_d(\mathbb{P}^n)$ if and only if $T=L^d$, for some linear form $L$. \smallskip With the above notations we give the following definitions. \begin{definition} Let $A \subset \mathbb{P}^n$ be a finite set, $A=\{P_1,\dots, P_r\}$. $A$ is a \emph{decomposition} of $T\in \mathbb{P}(Sym^d(\mathbb{C}^{n+1}))$ if $ T\in \langle v_d(A) \rangle$, the linear space spanned by the points of $v_d(A)$. In other words, for a choice of scalars $a_i$'s, $$T = a_1v_d(P_1)+\dots + a_rv_d(P_r). $$ The number $\ell(A)$ is the \emph{length} of the decomposition. The decomposition $A$ is \emph{minimal} or \emph{non-redundant} if $T$ is not contained in the span of $v_d(A')$, for any proper subset $A'\subset A$. In particular, if $v_d(A)$ is not linearly independent, then $A$ cannot be minimal. \end{definition} If we identify points $P\in\mathbb{P}^n=\mathbb{P}(\mathbb{C}^{n+1})$ with linear forms $L$ in $n+1$ variables, then the image $v_d(L)$ corresponds to the power $L^d$, so that a decomposition of $T\in \mathbb{P}(Sym^d(\mathbb{C}^{n+1}))$ corresponds to a Waring decomposition of a form of degree $d$. \begin{definition}\label{krur} For a finite set $A \subset \mathbb{P}^n$, the \emph{Kruskal's rank} $k(A)$ of $A$ is the maximum $k$ for which any subset of cardinality $\leq k$ of $A$ is linearly independent. The Kruskal's rank $k(A)$ is bounded above by $n+1$ and $\ell(A)$. If $A$ is general enough, then $k(A)=\min\{n+1, \ell(A)\}$. \end{definition} For instance, if $A\subset \mathbb{P}^3$ is a set of cardinality $5$, with a subset of $4$ points on a plane and no three points aligned, then $k(A)=3$. \begin{remark} It is straightforward that $k(A)$ attains the maximum $\min\{n+1, \ell(A)\}$ when all subsets of $A$ of cardinality at most $n+1$ are linearly independent. 
In this case, for any subset $A'\subset A$ one has $k(A')=\min\{n+1, \ell(A')\}$. Notice also that for any subset $A'\subset A$, we have $k(A')\geq \min\{\ell(A'), k(A)\}$. \end{remark} We can use the Veronese maps to define the higher Kruskal's ranks of a finite set $A$. \begin{definition} For a finite set $A \subset \mathbb{P}^n$, the \emph{$d$-th Kruskal's rank} $k_d(A)$ of $A$ is the Kruskal's rank of the image of $A$ in the Veronese map $v_d$. Thus, the Kruskal's rank $k(A)$ coincides with the first Kruskal's rank $k_1(A)$. \end{definition} The $d$-th Kruskal's rank $k_d(A)$ is thus bounded by $\min\{\ell(A), \binom{n+d}n\}$. \begin{remark}\label{ksub} Since projective spaces are irreducible, any subset of a general finite set $A$ is general. Thus one can prove that, for a sufficiently general finite subset $A\subset\mathbb{P}^n$, all the Kruskal's ranks $k_d(A)$ are maximal. \end{remark} \section{Preliminary results} We collect in this section the main technical tools for the investigation of the identifiability of symmetric tensors. For the proofs of most of them, we refer to Sect. 2 of \cite{C} and Sect. 2.2 of \cite{AngeCVan18}. \begin{definition} Let $Y\subset \mathbb{C}^{n+1} $ be an ordered, finite set of cardinality $\ell $ of vectors. Fix an integer $ d \in \mathbb{N} $. The \emph{evaluation map of degree $d$ on $Y$} is the linear map $$ ev_{Y}(d): Sym^d(\mathbb{C}^{n+1}) \to \mathbb{C}^\ell $$ which sends $ F \in Sym^d(\mathbb{C}^{n+1}) $ to the evaluation of $ F$ at the vectors of $Y$. Let $Z \subset \mathbb{P}^n $ be a finite set. Choose a set of homogeneous coordinates for the points of $Z$. We get an ordered set of vectors $Y\subset \mathbb{C}^{n+1} $, for which the evaluation map $ev_{Y}(d)$ is defined for every $d$. If we change the choice of the homogeneous coordinates for the points of the fixed set $Z$, the evaluation map changes, but all the evaluation maps have the same rank. 
So, we can define the \emph{Hilbert function} of $Z$ as the map: $$ h_Z : \mathbb{Z} \to \mathbb{N} \qquad h_Z(d) = \operatorname{rank}(ev_{Y}(d)) .$$ We point out that the Hilbert function does not depend on the choice of the homogeneous coordinates, and it does not vary under a change of coordinates in $\mathbb{P}^n$. We define the \emph{first difference of the Hilbert function} $Dh_Z$ of $Z$ as: $$ Dh_Z(j) = h_Z(j)-h_Z(j-1),\quad j \in \mathbb{Z} .$$ \end{definition} Let $v_d: \mathbb{P}^n\to \mathbb{P}^N$ be the $d$-th Veronese embedding of $\mathbb{P}^n$. For any finite set $Z\subset \mathbb{P}^n$, and for any $d \geq 0 $, the value $ h_Z(d) $ determines the dimension of the span of $v_d(Z)$. Indeed we have the following straightforward fact: \begin{proposition}\label{dimspan} Let $v_d: \mathbb{P}^n\to \mathbb{P}^N$ be the $d$-th Veronese embedding of $\mathbb{P}^n$. Then, for any finite set $Z\subset\mathbb{P}^n$, $$ h_Z(d) = \dim (\langle v_d(Z) \rangle) +1. $$ \end{proposition} From the previous proposition it follows that the Kruskal's rank $k_d(Z)$ is maximal if and only if every subset of cardinality at most $N+1=\binom{n+d}n$ of $v_d(Z)$ is linearly independent. \smallskip Several properties of the Hilbert functions and their differences are well known in Algebraic Geometry. We will need in particular the following facts (for the proofs, see Sect. 2 of \cite{C}). \begin{proposition} \label{inclu} If $Z' \subset Z$, then for every $d$ we have $h_{Z'}(d) \leq h_Z(d) $ and $Dh_{Z'}(d) \leq Dh_Z(d).$ \end{proposition} \begin{proposition}\label{nonincr} Assume that for some $j>0$ we have $Dh_Z(j) \leq j$. Then: $$ Dh_Z(j) \geq Dh_Z(j+1). $$ In particular, if for some $j>0$, $Dh_Z(j)=0 $, then $Dh_Z(i)=0$ for all $i\geq j$. \end{proposition} We introduce the following notation, which comes from a cohomological interpretation of the Hilbert function of $Z$. We will use it as a mere symbolism. 
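Numerically, both $h_Z(d)$ and the higher Kruskal's ranks can be computed from the rank of evaluation (Veronese) matrices. The following Python sketch (floating-point ranks; the function names are ours, for illustration only) implements the definitions above:

```python
import numpy as np
from itertools import combinations, combinations_with_replacement

def veronese(points, d):
    """Degree-d Veronese coordinates: one row per point, one column
    per monomial of degree d in the homogeneous coordinates."""
    pts = np.asarray(points, dtype=float)
    monos = list(combinations_with_replacement(range(pts.shape[1]), d))
    return np.array([[np.prod(p[list(mo)]) for mo in monos] for p in pts])

def hilbert_function(points, d):
    """h_Z(d) = rank of the degree-d evaluation map on Z."""
    return 0 if d < 0 else int(np.linalg.matrix_rank(veronese(points, d)))

def kruskal_rank(points, d=1):
    """k_d(A): the largest k such that every subset of at most k points
    has linearly independent degree-d Veronese images."""
    V = veronese(points, d)
    k = 0
    for size in range(1, len(points) + 1):
        if all(np.linalg.matrix_rank(V[list(s)]) == size
               for s in combinations(range(len(points)), size)):
            k = size
        else:
            break
    return k
```

For example, for five points of $\mathbb{P}^2$ in general position one finds $h_Z(1)=3$, $h_Z(2)=5$, $k_1=3$ and $k_2=5$, as predicted by the bounds above. Exact-arithmetic rank computations would of course be preferable for borderline configurations.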
\begin{definition} For every finite set $Z$ and for every degree $d$, define: $$h^1_Z(d)= \ell(Z)-h_Z(d) = \sum_{j=d+1}^\infty Dh_Z(j).$$ In particular, from Proposition \ref{dimspan} we get: \begin{equation}\label{h1} \dim (\langle v_d(Z)\rangle) = \ell(Z)-1- h^1_Z(d). \end{equation} \end{definition} Often, we will use the properties of the Hilbert function of the union of two different decompositions of a form. Thus, for our application the following property, which is a consequence of a straightforward application of the Grassmann formula, has particular interest. \begin{proposition}\label{cap} Let $A,B\subset \mathbb{P}^n$ be \emph{disjoint} finite sets and set $Z=A\cup B$. Then for any $d$, the spans of $v_d(A)$ and $v_d(B)$ satisfy the following formula: $$\dim(\langle v_d(A)\rangle\cap \langle v_d(B)\rangle)+ 1 = h^1_Z(d).$$ \end{proposition} \begin{proof} See \cite{AngeCVan18}, Sect. 6. \end{proof} Next, we define the {\it Cayley-Bacharach} property for a finite set in a projective space. \begin{definition}\label{CB} A finite set $Z\subset \mathbb{P}^n$ satisfies the \emph{Cayley-Bacharach property in degree $d$}, abbreviated as $CB(d)$, if, for any $P \in Z$, every form of degree $d$ vanishing at $ Z\setminus\{ P\}$ also vanishes at $P$. \end{definition} \begin{example} Let $Z$ be a set of $6$ points in $\mathbb{P}^2$. If the $6$ points are general, then $Dh_Z(i)=i+1$ for $i\in\{0,1,2\}$. Moreover $Z$ satisfies $CB(1)$. Since $h_Z(2)=6$, $Z$ does not satisfy $CB(2)$. If $Z$ is a general subset of an irreducible conic, then $Dh_Z(2)=2$ and $Dh_Z(3)=1$, moreover $Z$ satisfies $CB(2)$, hence it satisfies $CB(1)$. If $Z$ has $5$ points on a line plus one point off the line, then $Dh_Z$ is given by the following table: \begin{center}\begin{tabular}{c|ccccccc} $d$ & 0 & 1 & 2 & 3 & 4 & 5 & \dots \\ \hline $Dh_Z(d)$ & 1 & 2 & 1 & 1 & 1 & 0 & \dots \end{tabular} \end{center} Moreover $Z$ does not satisfy $CB(1)$. 
\end{example} \begin{remark}\label{CBprop} If $Z$ satisfies $CB(d)$, then it satisfies $CB(d-1)$ too. Otherwise, one could find $P \in Z$ and a hypersurface $F \subset \mathbb{P}^n$ of degree $d-1$ such that $Z \setminus \{P\} \subset F$ and $P \notin F$. Therefore, if $H\subset \mathbb{P}^n$ is a hyperplane missing $P$, then $F \cup H$ contains $Z \setminus \{P\}$ and misses $P$, contradicting the $CB(d)$ hypothesis. \end{remark} The main reason to introduce the Cayley-Bacharach properties lies in the following observation. \begin{proposition} \label{CBonHilb} $Z$ satisfies $CB(d)$ if and only if for all $P\in Z$ and for all $j\leq d$ we have $h_Z(j)=h_{Z\setminus\{P\}}(j)$. I.e. $Z$ satisfies $CB(d)$ if and only if $$Dh_Z(j)=Dh_{Z\setminus\{P\}}(j) \quad \forall j\leq d.$$ \end{proposition} \begin{proof} Fix coordinates for the points of $Z$ and consider the corresponding evaluation map $\mathbb{C}[x_0,\dots,x_n]\to \mathbb{C}^{\ell(Z)}$ in degree $j$. The kernel is the space of forms of degree $j$ vanishing on $Z$. Since $Z$ satisfies $CB(j)$ for all $j\leq d$, then, for all $P\in Z$, every form of degree $j\leq d$ that vanishes on $Z\setminus\{P\}$ also vanishes on $Z$. Thus, in any degree $j\leq d$ the kernel of the evaluation map does not change if we forget any point $P\in Z$. Hence $h_Z(j)=h_{Z\setminus\{P\}}(j)$. \end{proof} \begin{remark}\label{CBcons} We will use the previous proposition as follows. Assume that $Z$ does not satisfy $CB(d)$. Then there exists a point $P\in Z$ and an integer $j_0\leq d$ such that $Dh_Z(j_0)>Dh_{Z\setminus\{P\}}(j_0)$. Since $\sum Dh_Z(i)=\ell(Z) =\ell(Z\setminus\{P\})+1 = \sum Dh_{Z\setminus\{P\}}(i)$, it follows that $Dh_Z(i) = Dh_{Z\setminus\{P\}}(i)$ for all $i\neq j_0$. In particular, the equality holds for $i\geq d+1$. 
Thus there exists $P\in Z$ such that $$h^1_Z(d) = \sum_{i=d+1}^\infty Dh_Z(i) = \sum_{i=d+1}^\infty Dh_{Z\setminus\{P\}}(i)= h^1_{Z\setminus\{P\}}(d).$$ \end{remark} The following proposition, which gives a strong constraint on the Hilbert function of sets with a Cayley-Bacharach property, is a refinement of a result due to Geramita, Kreuzer, and Robbiano (see Corollary 3.7 part (b) and (c) of \cite{GerKreuzerRobbiano93}). \begin{theorem}\label{GKRext} If a finite set $ Z \subset \mathbb{P}^{n} $ satisfies $\mathit{CB}(i)$, then for any $ j $ such that $ 0 \leq j \leq i+1 $ we have $$ Dh_{Z}(0)+Dh_{Z}(1)+\cdots + Dh_{Z}(j) \leq Dh_{Z}(i+1-j)+\cdots +Dh_{Z}(i+1).$$ \end{theorem} \begin{proof} See Theorem 4.9 of \cite{AngeCVan18}. \end{proof} \section{Kruskal's criterion for symmetric tensors} The symmetric version of the Kruskal's criterion for the identifiability of tensors can be found in \cite{COttVan17b}. We recall it for the reader's convenience. \begin{theorem} \label{K} ({\bf Kruskal's criterion}) Let $T$ be a form of degree $d$ and let $A$ be a minimal decomposition of $T$, with $\ell(A)=r$. Fix a partition $a,b,c$ of $d$ and call $k_a,k_b,k_c$ the Kruskal's ranks of $v_a(A),v_b(A), v_c(A)$ respectively. If: $$ r\leq \frac {k_a+k_b+k_c-2}2,$$ then $T$ has rank $r$ and it is identifiable. \end{theorem} \section{The extension} In this section, we prove the main result, i.e. an extension of the Kruskal's criterion for the case of septics in $3$ variables. Let $T$ be a septic and consider a minimal decomposition of $T$ with $11$ elements $A=\{P_1,\dots,P_{11}\}$. We assume that the points of $A$ are general, and more precisely: \begin{itemize} \item the Kruskal's rank of $A$ is $3$, i.e. no three points of $A$ are aligned; \item the Kruskal's rank of $v_3(A)$ is $10$, i.e. no $10$ points of $A$ are contained in a cubic or, equivalently, for any subset $A'\subset A$ with $\ell(A')\leq 10$, the set $v_3(A')$ is linearly independent. 
\end{itemize} With these assumptions, we get: \begin{theorem} $T$ has rank $11$ and it is identifiable. \end{theorem} Notice that the original Kruskal's criterion \ref{K} does not cover this case. Namely if we take a partition $7=3+3+1$, then under our hypothesis the given decomposition $A$ of $T$ satisfies $k_3=10$, $k_1=3$, but: $$ 11 \not\leq \frac {10+10+3-2}2.$$ A similar computation holds for any other partition of $7$. Indeed in section 4 of \cite{COttVan17b} it is pointed out that the partition $7=3+3+1$ is the one that covers the widest range for the application of Kruskal's criterion. We will prove the extension of the Kruskal's criterion by contradiction. So, from now on consider the set $Z=A\cup B$, where $B$ is a second \emph{minimal} decomposition of $T$ with at most $11$ points. Since $Z$ contains $A$, the difference $Dh_Z$ of the Hilbert function of $Z$ is: \begin{center}\begin{tabular}{c|cccccccccc} $j$ & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ &\dots \\ \hline $Dh_Z(j)$ & $1$ & $2$ & $3$ & $4$ & $a_4$ & $a_5$ & $a_6$ & $a_7$ & $a_8$ & $\dots$ \end{tabular}\end{center} with $a_8>0$, since $T\in \langle v_7(A)\rangle\cap \langle v_7(B)\rangle$ and $A,B$ are minimal, thus the points of $v_7(Z)$ cannot be linearly independent. It follows that $h^1_Z(7)>0$, as in Proposition \ref{cap}. Hence also $a_4, a_5, a_6, a_7>0$, by Proposition \ref{nonincr}. \begin{proposition}\label{CB7} $ Z $ cannot satisfy $ CB(7) $. \end{proposition} \begin{proof} Assume that $Z$ satisfies $ CB(7) $. Then, by Theorem \ref{GKRext}, $a_5+a_6+a_7+a_8\geq 1+2+3+4=10$. Since $\ell(Z)\leq 22$, this implies $a_4\leq 2$. But then, by Proposition \ref{nonincr}, $ a_5+a_6+a_7+a_8\leq 2+2+2+2=8$, a contradiction. \end{proof} Next, we need a result which is the application of the argument of Remark \ref{CBcons} to the existence of two decompositions of a tensor $T$. Since we will use it in several contexts, we prove it in a general setting. 
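Proposition \ref{CBonHilb} also gives a direct numerical test for $CB(d)$: compare the Hilbert functions of $Z$ and of $Z\setminus\{P\}$ in every degree $j\leq d$. A self-contained Python sketch (floating-point ranks; the function names are ours), which reproduces the behaviour of six points on an irreducible conic versus six general points discussed earlier:

```python
import numpy as np
from itertools import combinations_with_replacement

def hilbert_function(points, d):
    """h_Z(d) as the rank of the degree-d evaluation (Veronese) matrix."""
    pts = np.asarray(points, dtype=float)
    monos = list(combinations_with_replacement(range(pts.shape[1]), d))
    V = np.array([[np.prod(p[list(mo)]) for mo in monos] for p in pts])
    return int(np.linalg.matrix_rank(V))

def satisfies_CB(points, d):
    """CB(d) holds iff h_Z(j) = h_{Z minus P}(j) for every P in Z and
    every j <= d (the characterization of Proposition CBonHilb)."""
    Z = list(points)
    return all(hilbert_function(Z, j) == hilbert_function(Z[:i] + Z[i + 1:], j)
               for i in range(len(Z)) for j in range(d + 1))

# six points on the irreducible conic x*z = y^2, and six points in
# general position (no conic through all six, no three aligned)
conic6 = [(1, t, t * t) for t in (-2, -1, 0, 1, 2, 3)]
general6 = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1), (1, 2, 3), (1, -1, 2)]
```

As expected, the six points on the conic satisfy $CB(2)$, while the six general points satisfy $CB(1)$ but not $CB(2)$.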
\begin{lemma}\label{CBdis} Let $U$ be a symmetric tensor and consider two \emph{minimal} decompositions $A_1,A_2$ of $U$. Set $W= A_1 \cup A_2$. If $A_1\cap A_2 = \emptyset$, then $W$ has the Cayley-Bacharach property $CB(d)$. \end{lemma} \begin{proof} Suppose that $CB(d)$ does not hold for $W$. Hence there is a point $P \in W$ and a hypersurface $F\subset\mathbb{P}^n$ of degree $d$ such that $F$ contains $W\setminus\{P\}$ but misses $P$. Just as in Remark \ref{CBcons}, we get that: $$h^1_W(d)=h^1_{W\setminus\{P\}}(d).$$ We do not know in principle if $P$ belongs either to $A_1$ or to $A_2$. In any case, by Proposition \ref{cap}: \begin{multline*} \dim \langle v_d(A_1)\rangle\cap \langle v_d(A_2)\rangle = h^1_W(d)-1 \\=h^1_{W\setminus\{P\}}(d) -1 = \dim \langle v_d(A_1\setminus\{P\})\rangle\cap \langle v_d(A_2\setminus\{P\})\rangle.\end{multline*} Thus $U$ belongs to both $\langle v_d(A_1\setminus\{P\})\rangle$ and $\langle v_d(A_2\setminus\{P\})\rangle$. If $P\in A_1$, we get a contradiction with the minimality of $A_1$. Similarly, if $P\in A_2$, we get a contradiction with the minimality of $A_2$. \end{proof} As a consequence of Lemma \ref{CBdis} and Proposition \ref{CB7}, applied to the decompositions $A,B$ of $T$, in our setting we find: \begin{proposition} \label{capcap} The intersection $A\cap B$ is non-empty. \end{proposition} So we must have $\ell(A\cap B)=i>0$. After rearranging the points of $A,B$, we may assume $$ A=\{P_1,\dots,P_i,P_{i+1},\dots,P_{11} \}\qquad B=\{P_1,\dots,P_i,P'_{i+1},\dots,P'_q\},$$ where $q=\ell(B)\leq 11$, $i<q$, and the set $B_0=\{P'_{i+1},\dots,P'_q\}$ is disjoint from $A$, i.e. $B_0=B\setminus A$. Then we know that $Z=A\cup B_0$ does not satisfy the property $CB(7)$. For a choice of the scalars, we can write: \begin{multline*} a_1v_7(P_1)+\dots + a_{11}v_7(P_{11}) = T \\ = b_1v_7(P_1)+\dots +b_iv_7(P_i)+b_{i+1}v_7(P'_{i+1})+\dots +b_qv_7(P'_q). \end{multline*} The minimality of $A,B$ implies that none of the coefficients $a_i,b_i$ is $0$. 
Write: \begin{gather*} T_0= (a_1-b_1)v_7(P_1)+\dots + (a_i-b_i)v_7(P_i)+a_{i+1}v_7(P_{i+1})+\dots + a_{11}v_7(P_{11})\\ = b_{i+1}v_7(P'_{i+1})+\dots +b_qv_7(P'_q) \end{gather*} so that $T=T_0+b_1v_7(P_1)+\dots +b_iv_7(P_i)$. We are now ready to prove that: \begin{proposition} The existence of $B$ yields a contradiction. \end{proposition} \begin{proof} Since $Z=A\cup B_0$ does not satisfy $CB(7)$, as in the proof of Lemma \ref{CBdis}, we know that there exists a point $P\in Z$ such that $$\langle v_7(A\setminus\{P\})\rangle\cap \langle v_7(B_0\setminus\{P\})\rangle = \langle v_7(A)\rangle\cap \langle v_7(B_0)\rangle,$$ hence $T_0$ belongs to $\langle v_7(A\setminus\{P\})\rangle\cap \langle v_7(B_0\setminus\{P\})\rangle$. If $P$ belongs to $B_0$, then $T$ is spanned by $v_7(B\setminus \{P\})$, which contradicts the minimality of $B$. Similarly, if $P=P_j$ with $j>i$, then $T$ is spanned by $v_7(A\setminus \{P\})$, which contradicts the minimality of $A$. Thus $P$ is a point in $A\cap B$, and, after rearranging, we may assume $P=P_1$. Thus $T_0=c_2v_7(P_2)+\dots + c_{11}v_7(P_{11})$. If $a_1-b_1\neq 0$, this implies that $v_7(A)$ is linearly dependent, which contradicts the minimality of $A$. Thus $a_1=b_1$, so that $T_0$ is spanned by $v_7(P_2),\dots, v_7(P_{11})$. It may happen that some other coefficient $(a_i-b_i)$ is $0$. Nevertheless $T_0$ has a minimal decomposition $A'\subset A$ whose length $\ell(A')$ satisfies $11-i\leq\ell(A')\leq 10$, namely: $$T_0= (a_2-b_2)v_7(P_2)+\dots + (a_i-b_i)v_7(P_i)+a_{i+1}v_7(P_{i+1})+\dots + a_{11}v_7(P_{11}).$$ Moreover $T_0$ has a decomposition, $B_0$, of length $\ell(B_0)=q-i\leq 11-i\leq \ell(A')$. Since $A'$ consists of at most $10$ points of $A$, $v_3(A')$ is linearly independent. Thus, the Kruskal's rank $k'_3$ of $v_3(A')$ is $\ell(A')$, while the Kruskal's rank $k'_1$ of $A'$ is $\min\{\ell(A'),3\}$. 
Moreover $\ell(A')>1$, otherwise $i=10$, $q=11$ and $T_0=a_{11}v_7(P_{11})=b_{11}v_7(P'_{11})$, which means that, as projective points, $P_{11}=P'_{11}$, a contradiction. Thus one computes: $$ \ell(A')\leq \frac {k'_3+k'_3+k'_1-2}2.$$ It follows that the Kruskal's criterion contradicts the existence of the two different decompositions of $T_0$. \end{proof} \begin{example} To give an example, consider the linear forms $L_1,\dots,L_{11}$ in $3$ variables associated to the points \begin{gather*} (1,0,0),\ (0,1,0),\ (0,0,1),\, (1,1,1),\ (1,-1,2),\ (1,3,-1),\\ (1,2,3),\ (2,-1,1),\ (-2,-1,3),\ (-1,3,4), \ (3,-1,4). \end{gather*} One computes easily that the set $A$ of the points has maximal Kruskal's ranks $k_1=3$ and $k_3=10$. It turns out that any linear combination of the 7th powers of the forms $L_i$'s, with no zero coefficients, has rank $11$ and it is identifiable. For instance, this holds for the sum $T=L_1^7+\dots +L_{11}^7$, i.e. {\small \begin{gather*} T=2191x^7-849x^6y+249x^5y^2-51x^4y^3+45x^3y^4+501x^2y^5+69xy^6+4500y^7+\\3181x^6z -918x^5yz +430x^4y^2z-204x^3y^3z+346x^2y^4z-1128xy^5z+2390y^6z+3631x^5z^2\\-1390x^4yz^2+274x^3y^2z^2 +344x^2y^3z^2 -1034xy^4z^2+4390y^5z^2+5731x^4z^3-1668x^3yz^3\\+1372x^2y^2z^3 -1686xy^3z^3 +5636y^4z^3+6115x^3z^4 -1714x^2yz^4-1346xy^2z^4\\ +7234y^3z^4+11491x^2z^5-5208xyz^5 +11480y^2z^5+7531xz^6+8860yz^6+37272z^7. \end{gather*}} \end{example} \section{Decompositions on cubics}\label{cub} In this section we will prove results on the identifiability of a particular set of tensors. From now on, consider symmetric tensors $T$ of degree $7+2q$ in three variables, which satisfy the following assumptions: \begin{itemize} \item Fix the integer $q\geq 0$ and let $T$ be a symmetric tensor of degree $d=7+2q$ in $3$ variables, with a decomposition $A=\{P_1,\dots,P_r\}\subset\mathbb{P}^2$, of length $r=\ell(A)\leq 10+3q$ such that $A$ is contained in a plane cubic curve $C$. 
\item Assume that the Kruskal's rank of $A$ is $k_1=\min\{3,r\}$ and the $(q+3)$-th Kruskal's rank $k_{q+3}$ of $A$ is equal to $\min\{r,3q+9\}$. \end{itemize} The second assumption implies that no $3$ points of $A$ are aligned. \begin{remark} Assume that $r=\ell(A)$ is smaller than $3q+10$. Then the second assumption implies that the $(q+3)$-th Kruskal's rank of $A$ is $r$. Take the partition $d=2q+7=(q+3)+(q+3)+1$. Then $r$ satisfies: $$r \leq \frac {k_{q+3}+k_{q+3}+k_1-2}2,$$ thus by Kruskal's Theorem \ref{K}, we get that $T$ has rank $r$ and $A$ is the unique decomposition of $T$ (up to trivialities). \end{remark} Thus, from now on we will assume that $A$ has cardinality $3q+10$. Notice that, when $r=3q+10$, the Kruskal's criterion gives no information (e.g. for the partition $d=2q+7=(q+3)+(q+3)+1$, we have $3q+10> ((3q+9)+(3q+9)+3-2)/2$). Hence we need to apply another strategy. \begin{remark} Assume $r=3q+10$. Since $A$ is contained in a cubic curve, the difference of the Hilbert function of $A$ satisfies $Dh_A(0)=1$, $Dh_A(1)\leq 2$, $Dh_A(2)\leq 3$ and also $Dh_A(3)\leq 3$. Thus $Dh_A(i)\leq 3$ for $i>3$, by Proposition \ref{nonincr}. Assume that $Dh_A(1)\leq 1$. Then $h_A(1)<3$, which contradicts the assumption $k_1=3$. Thus $Dh_A(1)=2$. Assume that $Dh_A(i)<3$ for some $i$ in the range $2\leq i\leq q+3$. Then: $$h_A(q+3)=\sum_{j=0}^{q+3}Dh_A(j)\leq 1+2+3(q+1)+2<3q+9,$$ which contradicts the assumption $k_{q+3}=3q+9$. Thus $Dh_A(i)=3$ for $i=2,\dots, q+3$. It follows that $\sum_{j=0}^{q+3}Dh_A(j)= 3q+9$. Moreover, $Dh_A(q+4)$ cannot be $0$, otherwise $Dh_A(j)=0$ for $j\geq q+4$, by Proposition \ref{nonincr}, hence $\sum_{j=0}^{\infty}Dh_A(j)<3q+10$, a contradiction. It follows that $Dh_A(q+4)=1$ and $Dh_A(j)=0$ for $j>q+4$. In particular, we get $h_A(2)=6$ and $h_A(3)=9$, which means that $A$ is contained in no conic and in exactly one cubic curve. The function $Dh_A$ is thus the one displayed below.
\begin{center}\begin{tabular}{c|cccccccccc} $i$ & $0$ & $1$ & $2$ & $3$ & \dots & $q+2$ & $q+3$ &$q+4$ &$q+5$ &\dots \\ \hline $Dh_A(i)$ & $1$ & $2$ & $3$ & $3$ & \dots & $3$ & $3$ & $1$ & $0$ &\dots \end{tabular}\end{center} \end{remark} \smallskip We want to exclude the existence of a second decomposition $B$ of $T$ of length $\ell(B)\leq 3q+10$. So assume, by contradiction, that $B$ exists, and assume, as above, that $B$ is minimal. Define as above $Z=A\cup B$, so that $\ell(Z)\leq 6q+20$. Using Lemma \ref{CBdis}, we can prove that the intersection $A\cap B$ cannot be empty. \begin{proposition}\label{CubDis} The Cayley-Bacharach property $CB(d)$ cannot hold for $Z=A\cup B$. Thus $A\cap B\neq \emptyset$. \end{proposition} \begin{proof} Assume that $CB(d)$ holds for $Z=A\cup B$. Thus, by Theorem \ref{GKRext} we have: $$\sum_{j=0}^{q+3}Dh_Z(j) \leq \sum_{j=q+5}^{d+1}Dh_Z(j).$$ Since $\sum_{j=0}^{q+3}Dh_Z(j)\geq \sum_{j=0}^{q+3}Dh_A(j)=3q+9$, we get: \begin{multline*} 6q+20\geq \ell(Z)\geq \sum_{j=0}^{2q+8}Dh_Z(j) =\sum_{j=0}^{q+3}Dh_Z(j)+Dh_Z(q+4)+\sum_{j=q+5}^{2q+8}Dh_Z(j) \\ \geq 3q+9 + Dh_Z(q+4) + 3q+9,\end{multline*} thus $Dh_Z(q+4)\leq 2$. But then, by Proposition \ref{nonincr}, $Dh_Z(j)\leq 2$ for $j\geq q+4$, thus: $$3q+9 \leq \sum_{j=q+5}^{2q+8}Dh_Z(j)\leq 2(q+4),$$ a contradiction. The second claim follows from Lemma \ref{CBdis} applied to $A,B$ and $T$. \end{proof} Now we can prove the main results of this section. \begin{proposition} The existence of $B$ yields a contradiction. \end{proposition} \begin{proof} Suppose there is another decomposition $B$ of $T$, of cardinality $\ell(B)=k\leq 10+3q$. We know from Proposition \ref{CubDis} that $A\cap B\neq \emptyset$. Thus we can write, without loss of generality, $B=\{ P_1,\dots, P_i,P'_{i+1},\dots,P'_k\}$, i.e. we may assume that $A\cap B=\{ P_1,\dots, P_i\}$, $i>0$.
Then there are coefficients $a_1,\dots,a_{3q+10},b_1,\dots,b_k$ such that: \begin{multline*}T=a_1v_d( P_1)+\dots+a_iv_d(P_i)+a_{i+1}v_d(P_{i+1})+\dots+a_{3q+10} v_d(P_{3q+10}) = \\ b_1 v_d(P_1)+\dots+b_i v_d(P_i)+ b_{i+1}v_d(P'_{i+1})+\dots+b_k v_d(P'_k).\end{multline*} Consider the tensor $$T_0 = (a_1-b_1)v_d( P_1)+\dots+(a_i-b_i)v_d( P_i)+a_{i+1}v_d(P_{i+1})+\dots+a_{3q+10} v_d(P_{3q+10}),$$ which is also equal to $ b_{i+1}v_d(P'_{i+1})+\dots+b_k v_d(P'_k)$. Thus $T_0$ has the two decompositions $A$ and $B'=\{P'_{i+1},\dots,P'_k\}$, which are disjoint. Thus, if $A$ and $B'$ are both minimal, as decompositions of $T_0$, then by Lemma \ref{CBdis} applied to $A,B'$ and $T_0$, we get that $A\cup B'$ satisfies $CB(d)$. Since $A\cup B'=A\cup B=Z$, and we know by Proposition \ref{CubDis} that $Z$ does not satisfy $CB(d)$, we find that either $A$ or $B'$ is not minimal. Assume that $B'$ is not minimal. Then we can find a point of $B'$, say $P'_k$, such that $T_0$ belongs to the span of $v_d(B'\setminus\{P'_k\})$. Since $T=T_0+b_1v_d(P_1)+\dots+b_iv_d(P_i)$, this would mean that $T$ belongs to the span of $v_d(B\setminus\{P'_k\})$, which contradicts the minimality of $B$. Assume that $A$ is not minimal, and $T_0$ belongs to the span of $v_d(A\setminus\{P_j\})$, for some $j>i$. As above, since $T=T_0+b_1v_d(P_1)+\dots+b_iv_d(P_i)$, this would mean that $T$ belongs to the span of $v_d(A\setminus\{P_j\})$, which contradicts the minimality of $A$. Assume that $A$ is not minimal, and $T_0$ belongs to the span of $v_d(A\setminus\{P_j\})$, for some $j\leq i$, say $j=1$. Then $T_0=\gamma_2v_d(P_2)+\dots+\gamma_{3q+10}v_d(P_{3q+10})$, for some choice of the coefficients $\gamma_j$. Since $v_d(A)$ is linearly independent, because $A$ is minimal for $T$, this is only possible if $a_1-b_1=0$. So there exists a proper subset $A'\subset A$ which provides a minimal decomposition of $T_0$, together with $B'$. Moreover, since only the coefficients $(a_j-b_j)$ with $j\leq i$ can vanish, $\ell(A')\geq 3q+10-i\geq k-i=\ell(B')$.
Since $A$ has $(q+3)$-th Kruskal's rank $3q+9\geq \ell(A')$, by Remark \ref{ksub} the $(q+3)$-th Kruskal's rank $k'_{q+3}$ of $A'$ is $\ell(A')$. Similarly, the Kruskal's rank $k'_1$ of $A'$ is $\min\{3,\ell(A')\}$. Moreover $\ell(A')>1$, otherwise $i=3q+9$, $k=3q+10$ and $T_0=a_{3q+10}v_d(P_{3q+10})=b_{3q+10}v_d(P'_{3q+10})$, which means that, as projective points, $P_{3q+10}=P'_{3q+10}$, a contradiction. Thus one computes: $$ \ell(A')\leq \frac {k'_{q+3}+k'_{q+3}+k'_1-2}2,$$ hence the existence of a second decomposition $B'$ of $T_0$, with $\ell(A')\geq \ell(B')$, contradicts Kruskal's Theorem \ref{K}. \end{proof} \begin{remark} The previous procedure applies verbatim even in the case $q=-1$. We get ternary forms of degree $7+2q=5$ and rank $r\leq 10+3q=7$. In particular, notice that $7$ is the rank of a generic quintic form, by \cite{AlexHir95}. Notice also that, for $q=-1$, the assumption that $A$ is contained in a cubic is unnecessary, since any set of cardinality $\leq 7$ in $\mathbb{P}^2$ lies on a cubic. Thus our procedure recovers the classically well-known fact that a general ternary form $T$ of degree $5$ has a unique decomposition as a sum of $7$ powers of linear forms. Indeed, we can give a more precise notion of {\it generality} for $T$: the uniqueness holds if a decomposition $A$ of $T$ has Kruskal's ranks $k_1=3$ and $k_2=6$. \end{remark} \bibliographystyle{amsplain}
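The Kruskal ranks quoted in the example above can be verified numerically. The following sketch (an illustration added for convenience, not part of the original argument) checks the ranks by brute force over all subsets, using NumPy:

```python
import numpy as np
from itertools import combinations, combinations_with_replacement

# The 11 points of the example.
pts = np.array([
    (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1), (1, -1, 2), (1, 3, -1),
    (1, 2, 3), (2, -1, 1), (-2, -1, 3), (-1, 3, 4), (3, -1, 4),
], dtype=float)

def veronese(p, d):
    """Degree-d Veronese image of a point: the vector of all degree-d monomials."""
    return np.array([np.prod([p[i] for i in mono])
                     for mono in combinations_with_replacement(range(3), d)])

def kruskal_rank(vectors):
    """Largest k such that every k of the given vectors are linearly independent."""
    n, dim = vectors.shape
    k = 0
    for size in range(1, min(n, dim) + 1):
        if all(np.linalg.matrix_rank(vectors[list(s)]) == size
               for s in combinations(range(n), size)):
            k = size
        else:
            break
    return k

k1 = kruskal_rank(pts)                                       # rank of the coordinates
k3 = kruskal_rank(np.array([veronese(p, 3) for p in pts]))   # rank of the cubic images
print(k1, k3)  # the example states k_1 = 3 and k_3 = 10
```

Both ranks are maximal, $k_1=\min\{11,3\}=3$ and $k_3=\min\{11,10\}=10$, as claimed in the example.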
\section{Introduction} \label{sec:intro} We present a method to estimate the mass of a dark matter halo from the dynamical status of its satellite galaxies. In the framework of hierarchical structure formation based on the concordance cold dark matter ($\Lambda\mathrm{CDM}$) cosmology, the mass of a dark matter halo is closely related to its many other properties such as structure, dynamics, and formation history. In the case of the Milky Way (MW), a number of theoretical predictions or interpretations of observations, for example, the baryon fraction \citep[e.g.][]{Zaritsky2017} and the problem of missing massive satellites \citep[e.g.][]{Boylan-Kolchin2011, Wang2012b, Cautun2014}, depend on the MW halo mass. Various methods have been proposed to measure this important quantity (see \citealt{Courteau2014,Bland-Hawthorn2016a} for reviews and \citealt{Wang2015b} for a comparison of recent measurements). Though these measurements are roughly consistent, the estimated MW halo mass still differs by a factor of $\sim 3$ among them. The scatter might be even larger if systematic uncertainties are included \citep{Han2016a, Wang2016b}. Clearly, there is a need for more accurate methods to determine the MW halo mass. The MW halo mass can be constrained by the abundances of certain constituents, such as the baryon fraction \citep{Zaritsky2017}, the total stellar mass \citep{Guo2011}, and the number of satellite galaxies above a specific threshold \citep[e.g.][]{Starkenburg2013, Rodriguez-Puebla2013, Cautun2014}. The timing argument is another widely used mass estimator, based on modeling the expansion of the Local Group galaxies \citep{Kahn1959,Li2008,Penarrubia2016,Banik2016}. Perhaps the most powerful and direct method to estimate the MW halo mass is to use dynamical tracers. In this regard, the mass distribution within $\sim 100$ kpc is reasonably well constrained by the kinematics of stars \citep[e.g.][]{Xue2008, Huang2016} or a stellar stream \citep[e.g.][]{Gibbons2014}.
However, due to the limited spatial distribution of these tracers, extrapolation is needed to obtain the total halo mass, which often depends on the assumed parametric form for the overall density profile. The outer region of the MW can be investigated more directly by using its satellite galaxies, which lie far beyond the other tracers. However, this approach was limited for a long time by the small sample size, large uncertainties in distance estimates, and the lack of proper-motion measurements. Fortunately, both the sample size and precision of distance measurement have increased greatly over the past decade (see \citealt{McConnachie2012} for a recent compilation of observations\footnote{An updated compilation can be downloaded from \url{http://www.astro.uvic.ca/\~alan/Nearby_Dwarf_Database.html}}). In addition, with the unprecedented precision of the HST and the new generation of ground-based telescopes, proper motions of bright satellites have been measured (e.g. \citealt{Piatek2002, Piatek2003, Piatek2005}, also see Table 1 of \citealt{Pawlowski2013a} for a summary of currently available measurements). Consequently, such satellites are fully characterized in the 6D phase space of position and velocity and their orbits can be computed assuming a potential. Currently, proper motions are available for 12 of the 13 satellite galaxies (the exception being Canes Venatici I) that are more luminous than $10^5L_{\odot}$ and within 300 kpc from the MW center. However, unlike stellar tracers, satellite galaxies are too few to allow direct calculation of the velocity dispersion profile or rotation curve. Instead, analytical models of the dynamical status or comparisons with numerical simulations are required.
Previous studies using satellite galaxies considered the orbital energy of Leo I \citep{Boylan-Kolchin2013}, velocity moments \citep{Watkins2010a}, orbital ellipticity distribution \citep{Barber2014}, and probability distribution of orbital parameters \citep{Eadie2015,Eadie2016}. These approaches encounter several difficulties. When the density and velocity anisotropy profiles of the tracer population are assumed, the inferred mass distribution depends sensitively on the assumptions \citep[e.g.][]{Watkins2010a,Eadie2016}, which requires further systematic study. In addition, analytical methods assume that all satellites are bound in a steady state with random orbital phases, which may not hold for all halos. The influence of deviations from a steady state and halo-to-halo scatter has yet to be taken into full consideration. Another difficulty is how to treat observational errors properly, as the measurement uncertainty differs substantially from satellite to satellite. In this paper we develop a new method that either avoids or addresses the above issues in using satellite galaxies to estimate the MW halo mass. We base our method on dark-matter simulations of MW-like halos and associate satellite galaxies with subhalos of a simulated halo. We construct the distribution of subhalos in the phase space of orbital binding energy and angular momentum directly from the simulations without assuming a steady state or any particular form of velocity anisotropy. We also take into account observational uncertainties of satellite galaxies. We estimate the halo mass by maximizing the likelihood in comparing the observed orbital parameters of satellite galaxies with the phase-space distribution derived from simulations. We test the validity of this method and investigate its systematics using mock samples taken from simulations. We also study the dependence of this halo mass estimator on observational uncertainties, the number of satellites used, and halo-to-halo scatter. 
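The estimation pipeline just outlined can be sketched compactly. The Python fragment below is a toy illustration only: the point-mass potential, the Gaussian mock tracer population, and SciPy's default kernel-density bandwidth are stand-ins for the simulation-based machinery described in the following sections.

```python
import numpy as np
from scipy.stats import gaussian_kde

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def energy_L(r, v_r, v_t, M):
    """Binding energy and angular momentum per unit mass for a toy
    point-mass potential Phi = -G M / r (the paper integrates M_Delta(r))."""
    E = G * M / r - 0.5 * (v_r**2 + v_t**2)
    return E, r * v_t

rng = np.random.default_rng(1)
M_true = 1.5e12
# Mock "subhalo" tracers: random radii and sub-circular velocities (assumption).
r = rng.uniform(40, 300, 500)
vc = np.sqrt(G * M_true / r)
v_r = rng.normal(0.0, 0.5, 500) * vc
v_t = np.abs(rng.normal(0.7, 0.3, 500)) * vc

def log_like(M_template, sats):
    """Scale the template to mass M_template and evaluate the likelihood."""
    s = (M_template / M_true) ** (1.0 / 3.0)          # self-similar scaling
    E, L = energy_L(r * s, v_r * s, v_t * s, M_template)
    kde = gaussian_kde(np.vstack([E, L]))             # continuous p(E, L | M)
    Es, Ls = energy_L(*sats, M_template)              # satellites re-evaluated
    return np.sum(np.log(kde(np.vstack([Es, Ls]))))   # in the scaled potential

sats = (r[:9], v_r[:9], v_t[:9])                      # pretend 9 satellites
grid = np.linspace(0.5e12, 3e12, 26)
M_est = grid[np.argmax([log_like(M, sats) for M in grid])]
```

Here the mock satellites are drawn from the template itself, so the recovered mass lands near the input value; in a real application the template subhalos and the observed satellites come from different halos.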
While our method is motivated by improving the estimate of the MW halo mass, it can be extended to other MW-like halos as well. The plan of this paper is as follows. We outline our method in \refsec{method} and show how to construct the phase-space distribution of subhalos from simulations in \refsec{template}. We discuss systematic tests using mock samples in \refsec{test} and give conclusions in \refsec{conclusion}. \section{Method} \label{sec:method} Our basic assumption is that for a present-day halo of mass $M_{\mathrm{h}}$, its substructures have a characteristic distribution $p(E,L|M_{\mathrm{h}})$ in the phase space of orbital binding energy $E$ and angular momentum $L$ (see below for definition). Then the unknown mass of a halo can be inferred by comparing the observed orbital parameters of its substructure tracers with the phase-space distributions derived from simulations for different $M_{\mathrm{h}}$. In practice, dwarf satellite galaxies are the outermost tracers for the MW. To develop the halo mass estimator, we consider such satellites as a subset of the surviving subhalos for a halo in terms of kinematics\footnote{ We assume that the orbits of satellites are not subject to significant selection effects. Satellite samples are usually selected by some luminosity threshold. Using the data from \citet{McConnachie2012}, we have checked that both the space and radial velocity distributions of the most luminous ($>10^5 L_{\odot}$) 13 MW satellites agree with those of the nearly complete sample of $\sim 25$ fainter ($>10^4 L_{\odot}$) satellites. This result is consistent with our assumption. }. Hereafter, the simulated halo that provides the calculated phase-space distribution is referred to as the \textit{template halo}. The halo whose mass is to be determined is referred to as the \textit{test halo}. We characterize the orbit in the potential of a halo by the corresponding binding energy $E$ and angular momentum $L$ per unit mass.
Specifically, \begin{equation} \begin{split} &E = - \Phi (r) - \frac{1}{2} (\vr^2 + v_{\mathrm{t}}^2), \\ &L = rv_{\mathrm{t}}, \end{split} \end{equation} where $r$, $\vr$, and $v_{\mathrm{t}}$ are the distance and the radial and tangential velocities relative to the center of the host halo, respectively, and \begin{equation} \Phi (r) = - \int_r^{r_0} \frac{G M_{\Delta} (r')}{r'^2} \mathrm{d} r' \end{equation} is the gravitational potential. In the above equation, $r_0$ corresponds to the zero point of the potential, $G$ is the gravitational constant, and \begin{equation} M_{\Delta} (r) = \int_0^r 4 \pi [\rho (r') - \bar{\rho}] r'^2 \mathrm{d} r' \end{equation} represents the mass exceeding the mean cosmic background, where $\rho(r)$ is the dark matter density profile of the host halo and $\bar{\rho}$ is the mean cosmic density. We adopt $r_0 = 1 h^{-1} \, \mathrm{Mpc}$ and have checked that using $r_0=3 h^{-1} \, \mathrm{Mpc}$ instead makes little difference in the results. There are two reasons why we do not use the observable parameters $r$, $\vr$, and $v_{\mathrm{t}}$ directly, although this alternative seems to provide more information. First, the $E$ and $L$ of a subhalo are approximately conserved after its infall into the host halo. They are less mixed in phase space over time and also less sensitive to individual merger events that produce halo-to-halo scatter. Second, due to the finite number of subhalos in the simulations, the constructed distribution in the space of $r$, $\vr$, and $v_{\mathrm{t}}$ is sparser and therefore less continuous. This problem is mitigated by using the phase space of the corresponding $E$ and $L$ instead. Hereafter, ``phase space'' means $E$-$L$ space. For a template halo of mass $M_{\mathrm{h}}$, we construct the phase-space distribution $p(E,L|M_{\mathrm{h}})$ directly from the simulations.
Specifically, \begin{equation} p(E, L|M_{\mathrm{h}}) = \frac{1}{n_{\mathrm{sub}}} \sum_{i = 1}^{n_{\mathrm{sub}}} \tilde p (E, L| \mathrm{sub}_i), \label{eq:p(E,L)} \end{equation} where $n_{\mathrm{sub}}$ is the number of selected subhalos, and $\tilde p (E, L| \mathrm{sub}_i)$ represents the probability density for the $i$-th subhalo to be ``observed'' at $(E,L)$. As described in detail in \refsec{template}, $\tilde p (E, L| \mathrm{sub}_i)$ serves as the kernel function in the kernel density estimation to transform the discrete distribution of subhalos in phase space into a continuous one. The utility of template halos is greatly extended by the scaling technique. For the mass range of interest for the Milky Way halo, dark-matter halos are built up approximately in a self-similar manner; thus we can scale a halo to a different mass while keeping the formation history and relaxation status unchanged. Specifically, the distribution $p(E,L|M_{\mathrm{h}}')$ for a halo of mass $M_{\mathrm{h}}'$ can be obtained from $p(E,L|M_{\mathrm{h}})$ by using \begin{equation} \label{eq:scaling} \begin{split} r' = (M_{\mathrm{h}}' / M_{\mathrm{h}})^{1 / 3} r ,\\ v_{\mathrm{r(t)}}' = (M_{\mathrm{h}}' / M_{\mathrm{h}})^{1 / 3} v_{\mathrm{r(t)}}, \\ \Phi' = (M_{\mathrm{h}}' / M_{\mathrm{h}})^{2 / 3} \Phi, \end{split} \end{equation} for each of the subhalos in the halo of mass $M_{\mathrm{h}}$. The subhalo mass is scaled as $m'=(M_{\mathrm{h}}' / M_{\mathrm{h}}) m$. In this way, we can construct a family of distributions $p(E,L|M_{\mathrm{h}}')$ for a range of halo mass $M_{\mathrm{h}}'$ from a single template halo.
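The scaling relations of eq. (\ref{eq:scaling}) hold exactly for any density profile whose shape is preserved by the rescaling. As a quick check (illustrative only: the NFW form and the fixed concentration are assumptions, whereas in our method $\Phi$ is computed from the simulated $M_{\Delta}(r)$), the snippet below verifies $\Phi'(r') = (M_{\mathrm{h}}'/M_{\mathrm{h}})^{2/3}\,\Phi(r)$ at $r' = (M_{\mathrm{h}}'/M_{\mathrm{h}})^{1/3}\,r$ for an NFW halo:

```python
import numpy as np

G = 4.30091e-6           # gravitational constant in kpc (km/s)^2 / Msun
rho_200 = 200 * 136.0    # 200 x critical density in Msun / kpc^3 (h ~ 0.7; assumption)
c = 10.0                 # NFW concentration, held fixed under scaling (assumption)

def nfw_phi(r, M200):
    """NFW potential for a halo of mass M200 inside the radius of mean
    density 200 rho_crit, with fixed concentration c."""
    R200 = (3 * M200 / (4 * np.pi * rho_200)) ** (1.0 / 3.0)
    rs = R200 / c
    mu = np.log(1 + c) - c / (1 + c)
    rho_s = M200 / (4 * np.pi * rs**3 * mu)
    return -4 * np.pi * G * rho_s * rs**3 * np.log(1 + r / rs) / r

M, Mp = 1.5e12, 0.9e12
s = (Mp / M) ** (1.0 / 3.0)
r = np.array([50.0, 100.0, 250.0])        # kpc
ratio = nfw_phi(s * r, Mp) / nfw_phi(r, M)
print(ratio)  # each entry equals (Mp/M)**(2/3)
```

Since $E$ is a sum of $-\Phi$ and quadratic velocity terms, and $L=rv_{\mathrm{t}}$, both orbital parameters scale as $(M_{\mathrm{h}}'/M_{\mathrm{h}})^{2/3}$ under eq. (\ref{eq:scaling}).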
To infer the unknown mass of a test halo hosting a set of satellites with observed $(r,v_r,v_t)$, we calculate $(E,L)$ for each satellite using the potential of a scaled template halo of mass $M_{\mathrm{h}}'$ and further compute the likelihood \begin{equation} \mathcal{L} (\mathrm{obs} |M_{\mathrm{h}}') = \prod^{N_{\mathrm{sat}}}_{k = 1} p(E_k, L_k |M_{\mathrm{h}}'), \label{eq:likelihood} \end{equation} where $N_{\mathrm{sat}}$ is the number of observed satellites, and $(E_k, L_k)$ correspond to the $k$-th satellite. The likelihood $\mathcal{L} (\mathrm{obs} |M_{\mathrm{h}}')$ can be calculated for a range of template halos scaled to different $M_{\mathrm{h}}'$. Assuming that the test halo has the same formation history and relaxation status as the template halos, we can infer the unknown mass of the test halo by maximizing $\mathcal{L} (\mathrm{obs} |M_{\mathrm{h}}')$, which gives the Maximum Likelihood Estimator (MLE) for the mass \begin{equation} M_{\mathrm{esti}} = \arg \max_{M_{\mathrm{h}}'} \mathcal{L} (\mathrm{obs} |M_{\mathrm{h}}'). \end{equation} The above method is illustrated by \reffig{scaling}, which shows how $\mathcal{L} (\mathrm{obs} |M_{\mathrm{h}}')$ changes with $M_{\mathrm{h}}'$. Template halo A1 in our simulations, which has a true mass of $M_{\mathrm{h}}\approx 1.6\times 10^{12}M_{\odot}$, is also used as a test halo and its most massive 9 subhalos are chosen as satellites, for which mock observations of $(r,v_r,v_t)$ are made with the fiducial measurement precision (see \refsec{observation} and \refsec{mock}). The colored contours in \reffig{scaling} represent the phase-space distribution constructed from template halo A1 by scaling it to $M_{\mathrm{h}}'=0.5\times 10^{12}$, $1.5\times 10^{12}$, and $2.5\times 10^{12}M_{\odot}$, respectively. The symbols stand for the mock data on $(E,L)$ for the satellites. Note that the ``observed'' $(r,v_r,v_t)$ for each satellite do not change during scaling.
Therefore, the mock data on $L$ remain the same but those on $E$ change with the potential of the scaled template halo. The observation points become more bound as $M_{\mathrm{h}}'$ increases. It can be seen from \reffig{scaling} that among the three $M_{\mathrm{h}}'$ values, the likelihood of the observations is the largest for the middle one, which is also closest to the true value $M_{\mathrm{h}}$. \begin{figure*}[ht!] \epsscale{1.2} \plotone{./scaling_max9_A1.pdf} \caption{Comparison of mock observations with the phase-space distributions constructed from simulations of template halo A1 scaled to different halo masses. Template halo A1 is also used as a test halo and its most massive 9 subhalos are chosen as satellites for mock observations. Symbols with black border and colored contours represent mock data and constructed phase-space distributions, respectively. The scaled halo mass and the corresponding likelihood of mock observations are shown in each panel. It can be seen that among the three cases, the likelihood is the largest for the middle one, whose halo mass is also closest to the true value of $\approx 1.6\times 10^{12}M_{\odot}$.} \label{fig:scaling} \end{figure*} Because the binding energy $E$ of a satellite depends on the template halo mass $M_{\mathrm{h}}'$, the likelihood $\mathcal{L} (\mathrm{obs} |M_{\mathrm{h}}')$ cannot be converted in a straightforward manner into the probability distribution of the true halo mass even when the prior distribution of $M_{\mathrm{h}}'$ is known. Nevertheless, we will show that the MLE $M_{\mathrm{esti}}$ is indeed a good, though biased, indicator for the true halo mass. Using Monte Carlo realizations of mock samples, we find that the bias is approximately constant and define an average bias $\eta = \avg{M_{\mathrm{esti}}/M_{\mathrm{true}}}$ over the mock samples. Consequently, we obtain the bias-corrected estimator for the halo mass \begin{equation} \hat{M}_{\mathrm{esti}} = M_{\mathrm{esti}} / \eta.
\label{eq:correction} \end{equation} The above discussion assumes that the test and template halos have the same formation history and relaxation status. However, such information about the test halo is not readily available in practice. The lack of such information then introduces an intrinsic uncertainty into our method. We assess this uncertainty using 9 simulated halos with a wide range of formation history in \refsec{test}. \section{Construction of Subhalo Phase-Space Distribution} \label{sec:template} Central to our method is the phase-space distribution $p(E,L|M_{\mathrm{h}})$ of subhalos for a template halo of mass $M_{\mathrm{h}}$. This distribution is constructed directly from our simulations, taking into account realistic observational uncertainties. The detailed procedure is described in this section. \subsection{Simulations} \label{sec:simu} In order to have enough substructures within a halo and resolve them in reasonable detail, we use the cosmological N-body simulation of \cite{Jing2002} to select nine template halos for high-resolution resimulations. Each of these halos is required to be relatively isolated at redshift $z=0$\footnote{ The selection of relatively isolated halos is required by the zoom-in technique, because a close neighbor of low resolution may bring unpredictable numerical effects in re-simulations, while one of high resolution will consume too much computational time. We confirm in the Appendix that the selection of relatively isolated halos does not affect our method.}, in the sense that its distance to any more massive halo exceeds three times the sum of the virial radii of the two halos. In addition, each template halo is required to have a mass of approximately $1.5\times 10^{12}M_{\odot}$, similar to that of the MW. The simulation was performed in a box of $100 h^{-1} \, \mathrm{Mpc}$ on each side with a parallel particle-particle-particle-mesh $\rm{P^3M}$ code using $512^3$ particles.
A $\Lambda\mathrm{CDM}$ cosmology was adopted with the density parameter $\Omega_m = 0.3$, the cosmological constant $\Omega_{\Lambda} = 0.7$, the Hubble constant $h = 0.67$ in units of $100\ \mathrm{km \, s}^{-1} \mathrm{Mpc}^{-1}$, and the slope $n_s=1$ and amplitude $\sigma_8 = 0.9$ of the primordial power spectrum. While these parameters are not up to date, they are close to the most recent results from the Planck mission. The differences in the cosmological parameters have little effect on the conclusions of this study because our main concern is to develop a method of estimating halo masses and test its validity. For each template halo, we use the multiple-mass method to generate the initial conditions for zooming \citep{Jing2000} and carry out zoom-in resimulations using the public code Gadget2 \citep{Springel2005}. In the high-resolution region enclosing a template halo, these simulations have a particle mass of $\sim 10^5 M_{\odot}$ (Table \ref{tab:simu}) and a softening length of 0.15 $h^{-1} \, \mathrm{kpc}$. We find halos using the standard Friends-of-Friends (FoF) algorithm with a linking length $b$ equal to 0.2 times the mean separation of high-resolution particles. For ease of comparison with results in the literature, we define $M_{\mathrm{h}}$ and $R_{\mathrm{h}}$ as the mass and radius, respectively, of a spherical region with a mean density equal to 200 times the critical density of the universe. The 9 template halos have $M_{\mathrm{h}} \sim (1.3$--$1.6) \times 10^{12} M_{\odot}$ (Table \ref{tab:simu}) within $R_{\mathrm{h}} \sim 230$~kpc at $z = 0$. As shown in \reffig{growth} and Table \ref{tab:simu}, these halos have very different histories of mass growth, and therefore, cover a wide range of possible assembly history for an MW-like halo. Lacking the formation history of a test halo, we must resort to exploring a wide range of template halos to investigate the uncertainty from halo-to-halo scatter in our method of halo mass determination. 
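As a quick consistency check on these numbers (a back-of-the-envelope computation, not taken from the simulations), the definition of $M_{\mathrm{h}}$ gives $R_{\mathrm{h}} = [3 M_{\mathrm{h}} / (800\pi\rho_{\mathrm{crit}})]^{1/3}$, which for the quoted mass range yields $R_{\mathrm{h}} \approx 230$--$250$ kpc:

```python
import numpy as np

h = 0.67
rho_crit = 277.5 * h**2   # critical density in Msun / kpc^3 for this h

def r200(M):
    """Radius (kpc) enclosing a mean density of 200 rho_crit for mass M (Msun)."""
    return (3 * M / (800 * np.pi * rho_crit)) ** (1.0 / 3.0)

for M in (1.3e12, 1.5e12, 1.6e12):   # the range of template halo masses
    print(f"M = {M:.1e} Msun -> R200 = {r200(M):.0f} kpc")
```

The exact value depends mildly on the adopted critical density, consistent with the approximate $R_{\mathrm{h}} \sim 230$ kpc quoted above.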
We use the Hierarchical Bound-Tracing (HBT) algorithm of \citet{Han2012} to identify subhalos and build merger trees through time in the simulations. HBT traces the merger hierarchy of halos and subhalos with a physically-motivated unbinding algorithm, and thus has robust performance even in the dense inner region of a host halo, which fits our needs very well. The mass $m$ of a subhalo is defined to be its self-bound mass. A subhalo can be identified if it contains at least 10 bound particles ($\sim 10^6 M_{\odot}$). The positions and velocities of subhalos are essential inputs to the construction of their phase-space distribution. These quantities are defined in HBT by the center of mass and bulk velocity of the most bound 25\% of the particles in each subhalo (see \citealt{Han2012} for details). The center of the largest subhalo is taken as the center of its host halo. We have checked the completeness of subhalo samples in the high-resolution region of the zoom-in simulations. As expected \citep{Han2016}, within $2 R_h$, the number density profile of subhalos (including disrupted ones) in any given infall-mass bin coincides very well with the dark-matter density profile of the host halo. Therefore, the subhalo sample within $2 R_h$ is complete and not affected by the low-resolution particles.
\begin{deluxetable}{ccccc} \tablecaption{Properties of Template Halos\label{tab:simu}} \tablecolumns{5} \tablewidth{0pt} \tablehead{ \colhead{Halo} & \colhead{$m_p$} & \colhead{$M_h$} & \colhead{$t_{0.5}$} & \colhead{$t_{0.8}$}\\ \colhead{} & \colhead{$(10^5 M_{\odot})$} & \colhead{$(10^{12} M_{\odot})$} & \colhead{$(\mathrm{Gyr})$} & \colhead{$(\mathrm{Gyr})$} } \startdata A1 & 0.99 & 1.58 & 3.14 & 1.79 \\ A2 & 1.11 & 1.46 & 6.95 & 3.58 \\ A3 & 0.96 & 1.61 & 9.21 & 5.06 \\ A4 & 0.93 & 1.60 & 10.20 & 3.96 \\ A5 & 0.96 & 1.60 & 10.55 & 6.65 \\ A6 & 1.37 & 1.54 & 6.93 & 6.68 \\ A7 & 1.02 & 1.55 & 1.42 & 1.09 \\ A8 & 1.05 & 1.64 & 9.54 & 9.13 \\ A9 & 0.92 & 1.38 & 2.71 & 2.46 \\ \enddata \tablecomments{The columns are the particle mass $m_p$ in the high-resolution region, the present ($z = 0$) mass $M_h$ of the template halo, and the lookback times $t_{0.5}$ and $t_{0.8}$ when the halo first reached 50\% and 80\% of its present mass, respectively. } \end{deluxetable} \begin{figure}[htb!] \epsscale{1.2} \plotone{./growth_hist.pdf} \caption{ Growth history of template halos. The solid curves color-coded A1--A9 show the fraction of the present halo mass as a function of the lookback time $t$ for the corresponding halos. For reference, the dashed curve shows the median growth history for halos of $1.5 \times 10^{12}M_{\odot}$ in the model of \cite{Zhao2009}. } \label{fig:growth} \end{figure} \subsection{Subhalo sample selection} \label{sec:subhalo_sample} As satellite galaxies are intended as the subhalo tracers, we adopt the following criteria to mimic these tracers in selecting the subhalos to construct the phase-space distribution for a template halo. \begin{itemize} \item Maximum binding mass in history: $m_{\max} > 2 \times 10^{- 5} M_h$ ($\gtrsim 300$ particles). Subhalos containing only $\sim 10$ particles are vulnerable to numerical instability.
As shown by \citet{Han2016}, at infall a subhalo should be at least $\sim 30$ times more massive than the smallest resolved subhalo to alleviate artificial disruptions. On the other hand, we would like to keep enough subhalos to have good statistics. The above relatively low mass threshold is adopted as a reasonable compromise. We note that subhalos hosting the bright MW dwarf galaxies were probably $\sim 100$ times larger than this limit at their infall. We will show that our method is not very sensitive to this mass selection. \item Mass at $z = 0$: $m_0 \gtrsim 10^6 M_{\odot}$ ($> 10$ particles). This is a safe lower bound, as MW dwarf galaxies have high mass-to-light ratios \citep[e.g.][]{Wolf2010}, with the mass enclosed within 300 pc being $\sim 10^7M_{\odot}$ for most of these satellites \citep{Strigari2008} and that within the half-light radius being $\gtrsim 5\times 10^6 M_{\odot}$ for the classical dwarf galaxies \citep{McConnachie2012}. We have checked that doubling the lower bounds on $m_{\max}$ and $m_0$ changes the results by only $\lesssim 5\%$. \item Distance to host halo center: $40\ \mathrm{kpc} < r < 300\ \mathrm{kpc}$. This range covers the 9 MW satellites with adequate kinematic data and excludes satellites experiencing strong tidal disruption due to extreme proximity to the Galactic Center (GC) (see \refsec{observation}). As the absolute position of a subhalo changes with the scaled halo mass, this criterion makes the selected subhalo sample dependent on the scaling of a template halo. Consequently, to ensure completeness of the sample, a scaled template halo must satisfy $2 R_h>300\ \mathrm{kpc}$, which limits the scaled halo mass to $M_h > 0.35\times 10^{12} M_{\odot}$. This lower bound is below the expected MW halo mass and does not pose any limitation in practice.
\end{itemize} The dots in \reffig{phasespace} show the discrete phase-space distribution of subhalos selected according to the above criteria for each of the 9 template halos in our simulations. These distributions share broad similarity, but significant differences exist. The similarity points to a basic dependence on the halo mass while the differences reflect the halo-to-halo scatter that must be taken into account in our method of halo mass determination. There appears to be a crude relation between the dynamical status of subhalos and the formation history of the host halo: the phase space is more extended and there are more unbound subhalos in late-formed halos such as A7 and A9 (Table \ref{tab:simu}). \begin{figure*}[hbt!] \epsscale{1.2} \plotone{./show_phase.pdf} \caption{Discrete phase-space distribution of subhalos for each of the template halos A1--A9. The units of $E$ and $L$ are $E_h=G M_h/R_h$ and $L_h = \sqrt{G M_h R_h}$\,, respectively. The number $n$ of subhalos in the selected sample is indicated for each halo. Each dot represents a subhalo and is colored according to the local number density in the phase space with red indicating higher density and the same color normalization for all halos. The curve is the equidensity contour enclosing half of the subhalos. The distributions show broad similarity but also clear differences.} \label{fig:phasespace} \end{figure*} \subsection{Observational Guidance} \label{sec:observation} A practical application of the method presented in this paper is to estimate the mass of the MW halo. We use the current observations of the MW and its satellite galaxies as a guide in developing the method. There are 13 satellite galaxies more luminous than $10^5L_{\odot}$ within 300 kpc of the GC. Proper motions are available for 12 of these with the exception being Canes Venatici I \citep{Pawlowski2013a}. We exclude Sextans due to the very large uncertainty in its proper motion. 
Canis Major and Sagittarius are also excluded because they are so close to the GC that they are experiencing strong tidal disruption. Consequently, 9 satellite galaxies of the MW can be used as subhalo tracers at present. Among these, the Large Magellanic Cloud (LMC) at $50 \pm 2\ \mathrm{kpc}$ is the closest to the GC while Leo I at $258 \pm 15\ \mathrm{kpc}$ is the farthermost \citep{McConnachie2012}. For developing our method to estimate the mass of a test halo based on comparison of the kinematic properties of its subhalos with the phase-space distributions of template halos, the most pertinent guidance provided by current observations of MW satellite galaxies is the number of these tracers with sufficiently accurate kinematic data and the typical uncertainties in such data. We adopt the following fiducial values characteristic of current observations. \begin{enumerate} \item The number of tracers with adequate kinematic data is \begin{equation} N = 9. \end{equation} \item The relative uncertainty in the distance to the Sun in the Heliocentric Standard of Rest (HSR) is \begin{equation} (\sigma_r / r)_{\rm HSR} = 0.06. \label{eq:err_r}\end{equation} \item The measurement error of the radial velocity with respect to HSR is \begin{equation} (\sigma_{v_{\mathrm{los}}})_{\rm HSR} = 1\ \mathrm{km \, s}^{-1}. \label{eq:err_vlos} \end{equation} \item The precision for the proper motion components with respect to HSR is \begin{equation} (\sigma_{\mu_{\alpha}})_{\rm HSR} = (\sigma_{\mu_{\delta}})_{\rm HSR} = 0.08\ \mathrm{mas \, yr}^{-1}. \label{eq:err_pm}\end{equation} \end{enumerate} The uncertainties $(\sigma_r / r,\sigma_{v_{\mathrm{los}}}, \sigma_{\mu_{\alpha}},\sigma_{\mu_{\delta}})_{\rm HSR}$ adopted above are approximately the root-mean-square values of current precisions for luminous MW satellite galaxies \citep{Pawlowski2013a}. 
Note that the measurement errors of different observables are independent as they are determined by separate methods and that the error in proper motion dominates. We will also check how our method is affected when different values from the fiducial ones are used for the number of tracers and the measurement errors. The measurement errors will be taken into account in constructing the phase-space distribution of subhalos for a template halo. As will be described in detail in \refsec{mock}, this is done by making mock observations of kinematic properties of subhalos in a frame equivalent to HSR and then transforming the results into those with respect to the halo center acting as the GC. To make this transformation, we need the position and motion of the Sun in the Galactocentric Standard of Rest (GSR). We adopt the following distance and velocity of the Sun relative to the GC \citep{Bland-Hawthorn2016a}: \begin{equation} \begin{split} r_{\odot} &= 8.2 \pm 0.1\ \mathrm{kpc}, \\ (U_{\odot}, V_{\odot}', W_{\odot})_{\mathrm{GSR}} &= (10, 248, 7) \pm (1, 3, 0.5)\ \mathrm{km \, s}^{-1}, \end{split} \label{eq:sun_gsr} \end{equation} where $U_{\odot}$ is the velocity towards the GC, $V_{\odot}'$ is positive in the direction of Galactic rotation, and $W_{\odot}$ is positive towards the North Galactic Pole. Note that $V_{\odot}'$ is the net rotation velocity of the Sun around the GC. For simplicity, we drop the subscripts ``HSR'' and ``GSR'' below and note that $(\sigma_r / r,\sigma_{v_{\mathrm{los}}}, \sigma_{\mu_{\alpha}},\sigma_{\mu_{\delta}})$ always refer to HSR and $(U_{\odot}, V_{\odot}', W_{\odot})$ to GSR. 
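To see concretely why the proper-motion error dominates the velocity error budget, consider the transverse-velocity uncertainty it implies at typical satellite distances. The following back-of-the-envelope sketch (ours, not part of the paper's pipeline) uses the standard conversion $v_t\,[\mathrm{km\,s^{-1}}] \simeq 4.74\,\mu\,[\mathrm{mas\,yr^{-1}}]\,d\,[\mathrm{kpc}]$ together with the fiducial errors quoted above:

```python
# km/s per (mas/yr * kpc): 1 AU per Julian year expressed in km/s
K = 4.740470

sigma_mu = 0.08    # mas/yr, fiducial proper-motion error per component
sigma_vlos = 1.0   # km/s, fiducial line-of-sight velocity error

# Transverse-velocity error implied by sigma_mu at representative distances
for d in (50.0, 150.0, 258.0):  # kpc: roughly LMC, mid-range, Leo I
    sigma_vt = K * sigma_mu * d
    print(f"d = {d:5.1f} kpc: sigma_vt ~ {sigma_vt:5.1f} km/s "
          f"(vs sigma_vlos = {sigma_vlos:.1f} km/s)")
```

At the distance of Leo I the fiducial proper-motion error corresponds to a transverse-velocity uncertainty of order $100\ \mathrm{km\,s^{-1}}$, roughly two orders of magnitude above the line-of-sight error, consistent with the statement that the proper-motion error dominates.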
\subsection{Mock observations}\label{sec:mock} While simulations yield precise values of $(r,v_r,v_t)$ for each subhalo, we must take observational errors into account when comparing the corresponding phase-space distributions for template halos with the kinematic data on the observed satellite tracers to estimate the unknown mass of a test halo (see \refsec{method}). For developing the method, we assume that all observed satellites have the same measurement errors $(\sigma_r / r, \sigma_{v_{\mathrm{los}}}, \sigma_{\mu_{\alpha}},\sigma_{\mu_{\delta}})$. We then make mock observations with these uncertainties in a frame equivalent to HSR for all the selected subhalos of a template halo (see \refsec{subhalo_sample}). We also include the uncertainties in the position and velocity of the Sun when transforming the mock data in HSR into those with respect to the center of the template halo that serves as the GC. The above procedure results in a smoothed phase-space distribution of subhalos for the template halo and accounts for the measurement errors at the same time. We produce the mock data as follows. \begin{itemize} \item Define ``GSR'' We first apply a random rotation \citep{Arvo1992} to the simulations and require the center of a template halo to rest at the ``GC''. Perhaps a better practice is to adopt the orientation of the angular momentum of the inner halo as the ``Galactic North'' \citep[e.g.][]{Xue2008} instead of applying a random rotation. However, the difference would be very small as the satellite tracers are far from the GC. \item Define ``HSR'' We set the ``Sun'' at the point $(r_{\odot}, 0, 0)$ with velocities $(U_{\odot}, V_{\odot}', W_{\odot})$ in ``GSR''. To account for the uncertainties, we sample $r_{\odot}$ and $(U_{\odot}, V_{\odot}', W_{\odot})$ from Gaussian distributions with means and standard deviations given in \refeq{sun_gsr}. 
\item Observe subhalos in ``HSR'' We ``observe'' in ``HSR'' the distance, radial velocity, and proper motion of subhalos according to Gaussian distributions with standard deviations given in Equations (\ref{eq:err_r}), (\ref{eq:err_vlos}), and (\ref{eq:err_pm}), respectively. The measurement errors are taken to be independent of each other as they correspond to separate methods in real observations. \item Transform data from ``HSR'' to ``GSR'' We convert the mock data in ``HSR'' into $(r,v_r,v_t)$ in ``GSR'' for each subhalo, which are then used to calculate the corresponding $(E,L)$. Note that in this step we adopt the central values of the ``solar'' position and velocities in ``GSR''. \end{itemize} Following the above procedure, we make 2000 mock observations of the $i$-th subhalo of a template halo and obtain the probability density $\tilde p (E, L| \mathrm{sub}_i)$ for this subhalo to be ``observed'' at $(E,L)$. The quantity $\tilde p (E, L| \mathrm{sub}_i)$ can be approximated by a 2D Gaussian distribution \begin{equation} \tilde p (E, L| \mathrm{sub}_i) = \frac{1}{2 \pi S \sqrt{1 - \rho^2}} \exp \left( - \frac{x^2 + y^2 - 2 \rho xy}{2 (1 - \rho^2)} \right), \label{eq:obs_err} \end{equation} where \begin{gather} x = \frac{E - \mathrm{avg} (E_{i j})}{\sqrt{\mathrm{var} (E_{i j})}},\\ y = \frac{L - \mathrm{avg} (L_{i j})}{\sqrt{\mathrm{var} (L_{i j})}},\\ \rho = \frac{\mathrm{cov} (E_{i j}, L_{i j})}{\sqrt{\mathrm{var} (E_{i j}) \mathrm{var} (L_{i j})}},\\ S = \sqrt{\mathrm{var}(E_{i j}) \mathrm{var} (L_{i j})}, \end{gather} and where $E_{i j}$ and $L_{i j}$ are the results from the $j$-th mock observation of the $i$-th subhalo. In the above equations, the average, variance, and covariance refer to operations on $j$ only. Note that due to the relatively large uncertainties in proper motion measurement, the variances cannot be estimated reliably by analytical error propagation. 
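The construction of $\tilde p (E, L| \mathrm{sub}_i)$ from mock samples can be sketched as follows. This is a minimal illustration (ours), assuming the mock results $E_{ij}$ and $L_{ij}$ for one subhalo are already available as arrays; the statistics are taken over the mock index $j$, exactly because analytic error propagation is unreliable for the large proper-motion errors:

```python
import numpy as np

def fit_mock_gaussian(E, L):
    """Estimate the parameters of the 2D Gaussian in Eq. (obs_err) from
    mock-observation samples (E_j, L_j) of a single subhalo; all
    statistics are taken over the mock index j."""
    E, L = np.asarray(E, float), np.asarray(L, float)
    muE, muL = E.mean(), L.mean()
    varE, varL = E.var(), L.var()
    rho = np.mean((E - muE) * (L - muL)) / np.sqrt(varE * varL)
    S = np.sqrt(varE * varL)

    def p(e, l):
        # Evaluate the bivariate Gaussian of Eq. (obs_err) at (e, l)
        x = (e - muE) / np.sqrt(varE)
        y = (l - muL) / np.sqrt(varL)
        norm = 2.0 * np.pi * S * np.sqrt(1.0 - rho**2)
        return np.exp(-(x**2 + y**2 - 2.0*rho*x*y) / (2.0*(1.0 - rho**2))) / norm

    return p
```

In practice the arrays would hold the 2000 mock realizations described above, one fitted density per subhalo.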
\refeq{obs_err} shows that accounting for measurement errors through mock observations turns a discrete point representing the $i$-th subhalo into a smooth distribution $\tilde p (E, L| \mathrm{sub}_i)$ in the phase space. In this procedure, measurement errors also shift the mean position of the subhalo in the phase space from that given by the simulations. \subsection{Constructed subhalo phase-space distribution} \label{sec:phase_space} Repeating the procedure for obtaining the probability density $\tilde p (E, L| \mathrm{sub}_i)$ for all the selected subhalos in a template halo of mass $M_{\mathrm{h}}$, we construct the corresponding phase-space distribution $p (E, L| M_{\mathrm{h}})$ as an average of these probability densities [see \refeq{p(E,L)}]. In general, because the subhalo sample has a limited size, $p (E, L|M_{\mathrm{h}})$ may be discontinuous even after measurement errors are taken into account. The discontinuity would be even more prominent were measurement errors to decrease significantly in the future. Our method of halo mass determination requires a smooth $p (E, L|M_{\mathrm{h}})$, which can be obtained conveniently by replacing $\mathrm{var} (E_{i j})$ and $\mathrm{var} (L_{i j})$ with $\widetilde{\mathrm{var}} (E_{i j}) = \mathrm{var} (E_{i j}) +S_E^2$ and $\widetilde{\mathrm{var}} (L_{i j}) = \mathrm{var} (L_{i j}) +S_L^2$, respectively, in \refeq{obs_err}. We take the smoothing terms to be $S_E=\alpha E_h$ and $S_L=\alpha L_h$, where $E_h=GM_h / R_h$ and $L_h = \sqrt{GM_h R_h}$ are the characteristic energy and angular momentum per unit mass. We add the smoothing adaptively, by choosing $\alpha$ for each subhalo such that the ellipse with semiaxes of $S_E$ and $S_L$ covers just the nearest 40 neighbors in the phase space. We have checked that the result is not sensitive to the detailed choice of $\alpha$. Using a fixed $\alpha = 0.1$--0.2 for all subhalos produces almost the same result. 
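The adaptive choice of $\alpha$ can be sketched as follows (our illustration). In the scaled units $E/E_h$ and $L/L_h$, the smoothing ellipse with semiaxes $S_E=\alpha E_h$ and $S_L=\alpha L_h$ becomes a circle of radius $\alpha$, so $\alpha_i$ is simply the distance from subhalo $i$ to its 40th nearest neighbor in those units:

```python
import numpy as np

def adaptive_alpha(E, L, E_h, L_h, k=40):
    """Per-subhalo smoothing scale alpha_i such that the ellipse with
    semiaxes S_E = alpha_i * E_h and S_L = alpha_i * L_h centred on
    subhalo i just encloses its k nearest neighbours in phase space."""
    x = np.asarray(E, float) / E_h   # work in scaled units, where the
    y = np.asarray(L, float) / L_h   # ellipse becomes a circle of radius alpha
    d = np.hypot(x[:, None] - x[None, :], y[:, None] - y[None, :])
    d.sort(axis=1)                   # d[i, 0] = 0 is the self-distance
    k = min(k, d.shape[1] - 1)
    return d[:, k]                   # distance to the k-th neighbour
```

The pairwise-distance matrix makes this $O(n^2)$, which is unproblematic for the sample sizes here (a few hundred subhalos per halo).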
\reffig{smooth} shows the phase-space distribution for template halo A1. Compared with the discrete distribution taken directly from the simulations (left panel), the smooth distribution including the fiducial measurement errors (middle panel) is more extended. The effects of the measurement errors are illustrated by the comparison of the middle and right panels. For the latter, $\sigma_{\mu_\alpha}=\sigma_{\mu_\delta}=0.01\ \mathrm{mas \, yr}^{-1}$ are used instead of the fiducial values of $0.08\ \mathrm{mas \, yr}^{-1}$ while all other errors remain the same as for the middle panel. The much smaller $\sigma_{\mu_\alpha}$ and $\sigma_{\mu_\delta}$ not only shrink the distribution, but also shift the point of the highest probability density (marked as the cross). \begin{figure*}[ht!] \epsscale{1.2} \plotone{./error_smoothing.pdf} \caption{Phase-space distribution of subhalos for template halo A1. The discrete distribution in the left panel is taken directly from the simulations. The smooth distribution in the middle panel includes the fiducial measurement errors. The right panel assumes $\sigma_{\mu_\alpha}=\sigma_{\mu_\delta}=0.01\ \mathrm{mas \, yr}^{-1}$ instead of the fiducial values of $0.08\ \mathrm{mas \, yr}^{-1}$ while all other errors remain the same as for the middle panel. Color indicates the probability density and the cross marks the point of the highest density. The dashed curves enclose the densest 68\% of the region. } \label{fig:smooth} \end{figure*} \section{Tests with Mock Samples}\label{sec:test} In our method of halo mass determination, we scale a template halo of mass $M_{\mathrm{h}}$ to different masses and obtain a family of subhalo phase-space distributions $p(E,L|M_{\mathrm{h}}')$ following the procedure presented in \refsec{template}. 
We then use these distributions and the $(E,L)$ data on the observed satellite tracers of a test halo to obtain the likelihood ${\cal L}({\rm obs}|M_{\mathrm{h}}')$ as a function of $M_{\mathrm{h}}'$ [see \refeq{likelihood}]. This gives the MLE $M_{\mathrm{esti}}$ for the test halo mass based on a specific set of scaled template halos. If the test halo is the MW, we will use the actual kinematic data on its dwarf satellite galaxies. For each satellite, the actual measurement errors will be used to make mock observations to obtain the corresponding $p(E,L|M_{\mathrm{h}}')$ while the central values of the kinematic data in HSR will be used (along with the central values of the solar position and velocities relative to the GC) to obtain the corresponding $(E,L)$ in GSR. To test the validity and accuracy of our method, we choose a subset from the subhalos of a template halo to serve as the ``observed'' satellite tracers. These tracers are referred to as the mock sample and their $(E,L)$ data are obtained by making a single mock observation of each tracer as described in \refsec{mock}. Below we present a series of tests of our method using these mock samples. \subsection{Bias in the MLE for a specific template halo} \label{sec:test_one} We first check how well the true mass $M_{\mathrm{true}}$ of a halo can be recovered by the MLE $M_{\mathrm{esti}}$ in our method. As an example, we use template halo A1 as both the test halo to generate mock samples and the template to estimate the mass of the test halo. We randomly pick 9 of its subhalos to make a mock sample and apply our method to obtain the corresponding $M_{\mathrm{esti}}$. We repeat this with 5000 random mock samples to obtain a distribution of $M_{\mathrm{esti}}/M_{\mathrm{true}}$ at $M_{\mathrm{true}}=1.58\times 10^{12}M_{\odot}$ for template halo A1. 
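The maximum-likelihood step described above can be sketched as a simple grid search. This is a schematic of ours, not the paper's pipeline: `energy_and_L` and `phase_density` are hypothetical stand-ins for the conversion of a tracer's $(r, v_r, v_t)$ into $(E, L)$ in the potential of the scaled halo (note that $E$ itself depends on the trial mass) and for the constructed distribution $p(E,L|M_{\mathrm{h}}')$:

```python
import numpy as np

def mass_mle(trial_masses, tracers_rv, energy_and_L, phase_density):
    """Grid search for the MLE halo mass.

    trial_masses : array of scaled template-halo masses M'.
    tracers_rv   : list of (r, v_r, v_t) tuples for the observed tracers.
    energy_and_L : hypothetical callable (r, v_r, v_t, M) -> (E, L);
                   needed per trial mass because E depends on M'.
    phase_density: hypothetical callable (E, L, M) -> p(E, L | M).
    """
    log_like = []
    for M in trial_masses:
        lnL = 0.0
        for r, vr, vt in tracers_rv:
            E, L = energy_and_L(r, vr, vt, M)
            lnL += np.log(phase_density(E, L, M))
        log_like.append(lnL)
    log_like = np.asarray(log_like)
    return trial_masses[np.argmax(log_like)], log_like
```

A grid is sufficient here since the likelihood is one-dimensional in $M'$ and each evaluation is cheap.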
We then use halos scaled from template halo A1 as test halos and obtain distributions of $M_{\mathrm{esti}}/M_{\mathrm{true}}$ for $M_{\mathrm{true}}=(0.5$--$3.0)\times 10^{12}M_{\odot}$. We find that to very good approximation, all these distributions can be fitted to a single Gaussian $\mathcal{N}(0.83, 0.26^2)$ with a mean of 0.83 and a standard deviation of 0.26. This is illustrated by the excellent agreement between the histograms showing the distributions for $M_{\mathrm{true}}=(0.5,1,2)\times 10^{12}M_{\odot}$ and the dashed curve for the Gaussian fit in the right panel of \reffig{test1}. In addition, the left panel of this figure shows the median value (solid curve) and the 68\% ($1\sigma$, dashed curves) and 95\% ($2\sigma$, dot-dashed curves) intervals for $M_{\mathrm{esti}}/M_{\mathrm{true}}$ as functions of $M_{\mathrm{true}}$. These again agree very well with the Gaussian fit. \begin{figure}[htb!] \epsscale{1.2} \plotone{./test1_A1.pdf} \caption{ Distribution of $M_{\mathrm{esti}}/M_{\mathrm{true}}$ as a function of $M_{\mathrm{true}}$ for test halos scaled from template halo A1. Left panel: The solid, dashed, dot-dashed curves show the median value and the 68\% ($1\sigma$) and 95\% ($2\sigma$) intervals for $M_{\mathrm{esti}}/M_{\mathrm{true}}$ as functions of $M_{\mathrm{true}}$, which agree very well with the corresponding characteristics of the Gaussian distribution $\mathcal{N}(0.83, 0.26^2)$ (thin dotted line and shaded regions). Right panel: The histograms show the distributions of $M_{\mathrm{esti}}/M_{\mathrm{true}}$ for $M_{\mathrm{true}}=(0.5, 1, 2)\times 10^{12}M_{\odot}$, which are in excellent agreement with the dashed curve showing the Gaussian distribution. } \label{fig:test1} \end{figure} As \reffig{test1} shows, the MLE $M_{\mathrm{esti}}$ tends to underestimate the halo mass $M_{\mathrm{true}}$ with a bias that is nearly independent of $M_{\mathrm{true}}$. 
Recall that the likelihood is constructed from the phase-space distribution $p(E,L|M_{\mathrm{h}}')$ as a function of $M_{\mathrm{h}}'$. Because $E$ also depends on $M_{\mathrm{h}}'$, the likelihood is non-Bayesian and gives a biased MLE. As shown in \reffig{scaling}, $p(E,L|M_{\mathrm{h}}')$ is denser for a lower $M_{\mathrm{h}}'$. Thus the likelihood tends to favor a smaller halo mass than the true value. This bias is intrinsic to our method, but as shown below, it is insensitive to the number of tracers used, the measurement errors, or the formation history of the halo. Therefore, we can use $\hat{M}_{\mathrm{esti}}=M_{\mathrm{esti}}/\eta$ with $\eta=\langle M_{\mathrm{esti}}/M_{\mathrm{true}}\rangle=0.83$ as an essentially unbiased estimator for the halo mass. However, the relative uncertainty of $\hat{M}_{\mathrm{esti}}$ depends on the number of tracers used and the measurement errors, being $\sim 30\%$ for 9 tracers with the fiducial measurement errors. Using template halo A1 as the test halo, we show in \reffig{test1_multi} the distributions of $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}$ for different numbers ($N$) of tracers and observational errors. We take $N=9$, 50, and 400. As proper-motion measurements dominate the observational uncertainties, we take $\sigma_{\mu_\alpha}=\sigma_{\mu_\delta}=0.08$, 0.03, and 0.01 $\mathrm{mas \, yr}^{-1}$, but keep the other measurement errors at their fiducial values. For each choice of $(N,\sigma_{\mu_{\alpha,\delta}})$, the distribution of $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}$ (histogram) can be well fitted by a Gaussian (red curve) centered at $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}=1$ with the standard deviation (``std'') indicated in the corresponding panel of \reffig{test1_multi}. This demonstrates that the fixed bias-correction $\eta=0.83$ works quite well for very different numbers of tracers and measurement errors. 
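As a quick numerical illustration (ours) of why a single multiplicative correction suffices, rescaling draws from the fitted $\mathcal{N}(0.83, 0.26^2)$ by $\eta = 0.83$ recentres the estimator on unity while leaving its relative width at $\approx 0.26/0.83 \approx 0.31$:

```python
import numpy as np

rng = np.random.default_rng(42)
eta = 0.83                                # calibrated bias <M_esti / M_true>

# Raw MLE-to-true mass ratios drawn from the Gaussian fit of Fig. test1
ratio_raw = rng.normal(loc=0.83, scale=0.26, size=5000)
ratio_hat = ratio_raw / eta               # debiased estimator M_hat / M_true

print(f"mean    = {ratio_hat.mean():.3f}")  # ~ 1.00
print(f"scatter = {ratio_hat.std():.3f}")   # ~ 0.26 / 0.83 ~ 0.31
```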
As shown in \reffig{test1_multi}, the standard deviation of $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}$ decreases when more tracers and more precise observations are used. For fixed measurement errors, this quantity exhibits the expected $1/\sqrt{N}$ dependence. \begin{figure*}[htb!] \epsscale{1.2} \plotone{./test1_multi.pdf} \caption{ Distributions of $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}$ for different numbers ($N$) of tracers and observational errors ($\sigma_{\mu_{\alpha,\delta}}$) when template halo A1 is used as the test halo. The upper, middle, and lower rows assume $N=9$, 50, and 400, respectively. The left, middle, and right columns assume $\sigma_{\mu_{\alpha,\delta}}=0.08$, 0.03, and 0.01 $\mathrm{mas \, yr}^{-1}$, respectively. In each case, the histogram showing the distribution can be well described by the red curve showing a Gaussian centered at $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}=1$ with the standard deviation (``std'') indicated in the corresponding panel. } \label{fig:test1_multi} \end{figure*} We have also checked that the corrected halo-mass estimator $\hat{M}_{\mathrm{esti}}=M_{\mathrm{esti}}/\eta$ with $\eta=0.83$ works consistently when each of the 9 template halos is used as both the test halo and the template to estimate the test halo mass. \reffig{test2} shows the distribution of $\hat{M}_{\mathrm{esti}} / M_{\mathrm{true}}$ in each case when the mock samples are observed with the fiducial measurement errors. All the distributions can again be well fitted by a Gaussian $\mathcal{N}(1, 0.3^2)$. While the median value of $\hat{M}_{\mathrm{esti}} / M_{\mathrm{true}}$ fluctuates slightly around unity, this fluctuation is $< 5\%$, well within the standard deviation. Therefore, $\hat{M}_{\mathrm{esti}}$ recovers the true halo mass within $\sim 30\%$ for all the 9 template halos used as test halos. 
Because these halos have very different formation histories and dynamical states, this demonstrates the validity and robustness of our method, at least when the formation history of a test halo is known. \begin{figure}[htb!] \epsscale{1.2} \plotone{./test1_all.pdf} \caption{ Distribution of $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}$ when each of the 9 template halos is used as the test halo. Left panel: The filled squares and error bars show the median value and the 68\% ($1\sigma$) and 95\% ($2\sigma$) intervals for $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}$. These compare very well with the thin dotted line and the shaded regions showing the corresponding characteristics of the Gaussian distribution $\mathcal{N}(1, 0.3^2)$. Right panel: The histograms showing the distributions of $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}$ are compared with the dashed curve showing the Gaussian distribution. } \label{fig:test2} \end{figure} \subsection{Influence of subhalo mass} \label{sec:test_msub} Our method implicitly assumes that the phase-space distribution of subhalos is independent of their masses. This is supported by recent studies \citep[e.g.][]{Han2016}, which showed that small and massive subhalos have very similar dynamics. This can be understood because dynamical friction, which has a strong mass dependence, is important only for major mergers. Nevertheless, because the intended tracers for the MW halo are its satellite galaxies, the more luminous of which tend to inhabit massive subhalos, we carry out further tests to check any possible influence of subhalo mass on our method. In the first test, we scale each of the 9 template halos to a mass of $M_{\mathrm{true}}=1.5\times10^{12}M_{\odot}$ and take the 9 most massive (at infall) subhalos in each case as the mock sample with the fiducial measurement errors. \reffig{test2_max9} shows $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}$ (filled squares) for these test halos. 
These results are fully consistent with those in \reffig{test2}, which are obtained from mock samples each having 9 randomly-selected subhalos. Specifically, when the results in \reffig{test2_max9} are compared with the Gaussian distribution $\mathcal{N}(1, 0.3^2)$, the $\chi^2$ test gives $P(>\chi^2)=0.46$, which indicates no significant deviation. This insensitivity to the mock sample is also confirmed when we scale the template halos to other masses and use those as test halos. \begin{figure}[htb!] \epsscale{1.2} \plotone{./test2_max9.pdf} \caption{ Results obtained from mock samples each having the 9 most massive subhalos of a halo. The filled squares give $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}$ when each of the template halos is scaled to $M_{\mathrm{true}}=1.5\times10^{12}M_{\odot}$ and used as the test halo. The thin dotted line and the shaded regions show the median and the $1\sigma$ and $2\sigma$ intervals for the Gaussian distribution $\mathcal{N}(1,0.3^2)$. The p-value of the $\chi^2$ test of the filled squares against the Gaussian distribution is $P(>\chi^2)=0.46$. } \label{fig:test2_max9} \end{figure} In the second test, mock samples are created by randomly selecting 9 subhalos from the 100 most massive subhalos in each of the 9 template halos used as test halos. We show in \reffig{test2_max100} the distribution of $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}$ obtained from 5000 mock samples for each test halo. These results are almost the same as those in \reffig{test2}. Based on the above tests and in view of the $\sim 30\%$ uncertainty in our halo mass estimate, which is mostly due to the relatively small number of tracers used and the rather significant errors in proper motion measurement, we conclude that any influence of subhalo mass can be safely ignored. 
In any case, as the number of MW satellite galaxies with precise kinematic data increases, this sample will extend to tracers of lower luminosity associated with less massive subhalos, and will therefore become closer to a sample of randomly-selected subhalo tracers best suited for our method. \begin{figure}[htb!] \epsscale{1.2} \plotone{./test2_max100.pdf} \caption{ Same as the left panel of \reffig{test2}, but the mock samples are randomly drawn from the 100 most massive subhalos in each test halo. } \label{fig:test2_max100} \end{figure} \subsection{Halo-to-halo scatter} \label{sec:test_sys} So far we have shown that if the formation history of a test halo is known, the halo mass can be determined reliably by our method. However, halo formation history is not readily available in practice. Without such information, we must resort to comparing the kinematic data on the observed tracers of a test halo with the subhalo phase-space distributions for a number of template halos with a wide range of formation history. For example, when we use template halo A1 as the test halo, we estimate its mass using all the 9 template halos. For clarity, we refer to template halo A1 in this case as test halo A1. \reffig{test3a1} shows the distributions of $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}$ for test halo A1 obtained by comparing fiducial mock samples (9 tracers with fiducial measurement errors) from this halo with the phase-space distribution from each of the 9 template halos. The influence of halo formation history on $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}$ can be seen clearly. \begin{figure}[htb!] \epsscale{1.2} \plotone{./test3_A1.pdf} \caption{ Distribution of $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}$ obtained for test halo A1 by comparing mock samples from this halo with each of the 9 template halos. Left panel: The filled squares and error bars show the median value and the 68\% and 95\% intervals of $\hat{M}_{\mathrm{esti}}/M_{\mathrm{true}}$. 
The thin dotted line and the shaded regions indicate a Gaussian distribution centered at unity with a standard deviation of 0.3. The rightmost filled square with error bars shows $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$ obtained by averaging the results from all 9 template halos. Right panel: The distribution of $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$ is compared with a lognormal $\ln \mathcal{N}(0, 0.35^2)$. } \label{fig:test3a1} \end{figure} To address the lack of knowledge of the formation history of a test halo, we take the average of the $\hat{M}_{\mathrm{esti}}$ obtained for this halo using the subhalo phase-space distribution for each of the template halos, \begin{equation} \hat{\overline{M}}_{\mathrm{esti}}=\frac{1}{N_{\rm temp}}\sum_{i=1}^{N_{\rm temp}} \hat{M}_{\mathrm{esti},i}, \label{eq:mavg} \end{equation} and check whether this average (for $N_{\rm temp}=9$) gives a better estimate. We show the distribution of $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$ for test halo A1 in \reffig{test3a1}. The rightmost filled square in this figure shows that the median $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$ is unity. On the other hand, the corresponding 68\% and 95\% intervals are asymmetric and favor higher values. As shown in the right panel of \reffig{test3a1}, the distribution of $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$ is well described by a lognormal $\ln \mathcal{N}(0, 0.35^2)$. Using other template halos as test halos, we show in \reffig{test3} the corresponding distributions of $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$. The filled squares with error bars in the left panel show the median value and the 68\% and 95\% intervals of $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$ for each test halo. 
The median $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$ scatters around unity within $\sim 20\%$ ($\sim 30\%$ for a few cases), reflecting the difference in formation history between the test and template halos. We have also checked that the bias of $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$ does not depend on the number of tracers used or the measurement errors (not shown). Note that the relative uncertainty in $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$ is $\sim 30\%$ for all 9 test halos. \begin{figure}[htb!] \epsscale{1.2} \plotone{./test3_all.pdf} \caption{ Distribution of $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$ for each test halo. Left panel: The filled squares and error bars show the median value and the 68\% and 95\% intervals for $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$. The thin dotted line and the shaded regions show the median and the $1\sigma$ and $2\sigma$ intervals for the lognormal distribution $\ln \mathcal{N}(0,0.38^2)$. The rightmost filled square with error bars shows the result for the mixed mock samples randomly drawn from the 9 test halos. Right panel: The distribution of $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$ for the mixed mock samples is compared with the lognormal distribution. } \label{fig:test3} \end{figure} As a final test, we make mock samples by randomly picking a test halo and then randomly selecting 9 subhalos from this halo. Based on 5000 such mixed mock samples, we show the distribution of $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$ as the rightmost filled square with error bars (left panel) and the histogram (right panel) in \reffig{test3}. This distribution best characterizes the halo mass estimate given by our method in practice, and is well described by a lognormal $\ln \mathcal{N}(0, 0.38^2)$, whose $1\sigma$ interval corresponds to the interval $[0.68, 1.46]$ for $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$. 
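The quoted $1\sigma$ interval follows directly from the lognormal width $\sigma = 0.38$; a quick check (ours):

```python
import math

sigma = 0.38  # lognormal width of M_esti_bar / M_true for mixed mock samples
lo, hi = math.exp(-sigma), math.exp(sigma)
print(f"1-sigma interval: [{lo:.2f}, {hi:.2f}]")  # [0.68, 1.46]
```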
The uncertainty is slightly larger than in the case of mock samples from a single test halo because a discrepancy of $\sim 20\%$ due to halo-to-halo scatter is also included in addition to the statistical uncertainty. More precisely, the uncertainty due to halo-to-halo scatter is $19_{-4}^{+6}\%$ based on the variance of the 9 data points in \reffig{test3}. While this uncertainty is irreducible without additional information, it can be estimated better with more test halos. In addition, the limited number (9) of template halos also introduces an uncertainty of $\sim 20\%/\sqrt{9}\approx 7\%$ in $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$. However, this uncertainty is relatively small compared to that from halo-to-halo scatter. In principle, knowledge of the formation history of a test halo can reduce the uncertainty in its mass estimate due to halo-to-halo scatter. Of particular importance is information on the growth of the halo potential as well as the accretion and disruption of substructures. However, it is difficult to find a simple indicator to characterize the influence of the halo assembly history on the kinematics of surviving substructures. We intend to study this problem in the future. \subsection{Prospects and limitations} \label{sec:test_err} Based on the preceding discussion, there are two main sources of uncertainties in our method of halo mass determination: one is statistical and due to the limited number of tracers and measurement errors, while the other is intrinsic and due to the lack of knowledge about the formation history of a test halo. Below we quantify these uncertainties using mixed mock samples created by randomly picking one of the test halos and then randomly selecting a subset of its subhalos. We vary the sample size (the number $N$ of tracers) and the error $\sigma_{\mu}= \sigma_{\mu_{\alpha}} = \sigma_{\mu_{\delta}}$ in proper motion measurement, which dominates the observational uncertainties. 
The other measurement errors are kept at their fiducial values. Because $\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}$ follows a lognormal distribution, we define the uncertainty of our method as \begin{equation} \sigma = \sqrt{\mathrm{var}\left[\ln\left(\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}\right)\right]}\,. \label{eq:sigma} \end{equation} Note that $\mathrm{var}(X) \simeq \mathrm{var}(\ln X)$ when $X$ follows $\ln \mathcal{N}(0, \sigma^2)$ and $\sigma$ is small. In Figure \ref{fig:error}, we show $\sigma$ as a function of the number $N$ of tracers for $\sigma_{\mu}=0.01$, 0.03, 0.06, and $0.08\ \mathrm{mas \, yr}^{-1}$. As shown by the dashed curves, these results can be well described by \begin{equation} \sigma^2 = \sigma_{\mathrm{stat}}^2 + \sigma_{\mathrm{hist}}^2 = \frac{A^2 \sigma_{\mu}^2 + \sigma_{\mathrm{other}}^2}{N} + \sigma_{\mathrm{hist}}^2, \end{equation} where $\sigma_{\mathrm{stat}}$ is the statistical term that decreases with increasing $N$, and $\sigma_{\mathrm{hist}}$ is the intrinsic term due to the lack of knowledge about the halo formation history. The statistical term is specified by $A\sigma_{\mu}$, with $A$ a constant, and by $\sigma_{\mathrm{other}}$, which captures the other observational uncertainties and the errors in constructing the subhalo phase-space distribution. A fit to the data gives $A=8.75$, $\sigma_{\mathrm{other}}=0.80$, and $\sigma_{\mathrm{hist}}=0.17$. Note that $\sigma_{\mathrm{hist}}$ depends on the template halos used and may be estimated better with more halos in addition to the 9 used here. \begin{figure}[htb!] \epsscale{1.2} \plotone{./err_anal.pdf} \caption{Uncertainty $\sigma$ in $\ln\left(\hat{\overline{M}}_{\mathrm{esti}}/M_{\mathrm{true}}\right)$ as a function of the number $N$ of tracers for different values of the error $\sigma_\mu$ in proper motion measurement. Filled squares are data obtained using mixed mock samples randomly drawn from the 9 test halos. 
A good fit to the data is provided by the dashed curves for $\sigma^2=(8.75^2\sigma_\mu^2+0.80^2)/N+0.17^2$.} \label{fig:error} \end{figure} For the fiducial number of tracers ($N=9$) with the fiducial measurement errors ($\sigma_\mu=0.08\ \mathrm{mas \, yr}^{-1}$), $\sigma$ is $\sim 40\%$. If $N$ increases to 30, $\sigma$ decreases to $\sim$25\% for the fiducial measurement precision. However, as $N$ increases further, $\sigma$ becomes dominated by $\sigma_{\mathrm{hist}}$. This sets a limiting number of tracers at $N\sim 50$, beyond which there is no significant gain in the accuracy of our halo mass estimate. This is similar to the result of \citet{Wang2016b}, who gave a systematic uncertainty of 25--40\% for the MW mass estimate using dynamical tracers under the steady-state assumption. We emphasize that the ultimate improvement of our method requires detailed knowledge of the formation history of a test halo. \section{Conclusions}\label{sec:conclusion} We have presented a method to estimate the mass of a dark matter halo using the kinematic data of its subhalo tracers, which in practice are satellite galaxies. The halo mass is inferred by comparing these data with the distribution in the phase space of binding energy and angular momentum for subhalos in each of the template halos obtained from cosmological simulations. We have tested the validity and accuracy of this method with mock samples and found that the halo mass can be recovered to within $\sim 40\%$ by using 9 tracers with the current observational precision. The uncertainty can be reduced to $\sim$25\% if the number of tracers with sufficiently accurate proper motion measurements increases to 30 in the future. However, the subhalo phase-space distribution depends on the halo formation history, and the lack of this knowledge results in an intrinsic uncertainty of $\sim 20$\% in our halo mass estimate, which cannot be reduced by increasing the number of tracers.
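These scalings follow directly from the error model fitted in Section \ref{sec:test_err}; a minimal Python sketch (function name ours) evaluating $\sigma(N,\sigma_\mu)$ with the fitted constants $A=8.75$, $\sigma_{\mathrm{other}}=0.80$, and $\sigma_{\mathrm{hist}}=0.17$:

```python
import math

# Fitted error-model constants (dashed curves in the figure above)
A, SIG_OTHER, SIG_HIST = 8.75, 0.80, 0.17

def sigma_lnM(n_tracers, sigma_mu):
    """Uncertainty in ln(M_esti/M_true) for N tracers with proper-motion
    error sigma_mu (mas/yr): the statistical term shrinks as 1/sqrt(N),
    while the formation-history term is an irreducible floor."""
    stat2 = (A**2 * sigma_mu**2 + SIG_OTHER**2) / n_tracers
    return math.sqrt(stat2 + SIG_HIST**2)

print(sigma_lnM(9, 0.08))     # fiducial case: ~0.39, i.e. ~40%
print(sigma_lnM(30, 0.08))    # ~0.26, i.e. ~25%
print(sigma_lnM(1000, 0.08))  # approaches the sigma_hist floor of 0.17
```

The floor set by $\sigma_{\mathrm{hist}}$ is why increasing $N$ beyond $\sim 50$ yields no significant gain.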
Further studies on the assembly history of a halo and how this history affects the kinematics of its substructures are essential to an accurate determination of its mass. A direct application of our method is to estimate the mass of the MW halo. Using the data on its 9 dwarf satellite galaxies, we obtain a mass of $1.3\times 10^{12}M_{\odot}$ with uncertainties comparable to the expected value of $\sim 40\%$. This preliminary result is consistent with various estimates in the literature. A detailed report will be given elsewhere. Although they do not seem to affect our current results, several issues regarding our approach merit discussion. We have found that the phase-space distribution is nearly independent of subhalo mass. Because satellite galaxies are the intended subhalo tracers, it is desirable to confirm this with further tests using satellite samples from semi-analytical or hydrodynamic simulations. We have simulated 9 template halos with a wide range of formation history. It is valuable to have more template halos to check if this range is sufficiently representative. Because high-resolution zoom-in simulations are required to provide well-resolved substructures for constructing the phase-space distributions, it is computationally intensive to study many template halos. Another issue is the influence of massive neighbors such as M31 in the case of the MW. Our 9 template halos are chosen to be relatively isolated to exclude such neighbors. Using a larger halo sample, we have checked that the presence of a massive neighbor will not affect our method when the distance to the neighbor exceeds three times its virial radius as in the case of M31 and the MW (see Appendix). Finally, in our mock observations, we set the origin of the ``GSR'' to rest at the center of a template halo. However, theoretical and observational studies suggest that central galaxies do not necessarily rest at the centers of their host halos \citep[e.g.][]{Berlind2003,Yoshikawa2003}. 
A recent study by \citet{Guo2015a} reported that a central galaxy tends to move around the host halo center with a dispersion of $0.2 \sigma_{v, \mathrm{DM}}$ ($\sim 30\ \mathrm{km \, s}^{-1}$ for the MW) for each velocity component. In addition, if the LMC exceeds 10\% of the MW mass, then the MW is moving relative to their barycenter at a velocity of $\sim 30\ \mathrm{km \, s}^{-1}$ ($v_{\mathrm{LMC}} \simeq 300\ \mathrm{km \, s}^{-1}$ relative to the GC). In principle, the unknown velocity offset between the GC and the MW halo center introduces an extra uncertainty into the MW halo mass estimate. However, in practice, we find with Monte Carlo experiments that adding an extra velocity of 30 $\mathrm{km \, s}^{-1}$ to the ``GC'' in mock observations only changes the results at the $\lesssim3\%$ level. So this effect might become significant only when the intrinsic uncertainty in our method is reduced with information on the halo formation history. Currently, our method is still limited by the number of tracers and measurement errors. Proper motions are only available for 12 of the 13 MW satellite galaxies (the exception being Canes Venatici I) that are more luminous than $10^5L_{\odot}$ and within 300 kpc of the GC. The best of these proper motion measurements were made with HST. The Gaia mission will reduce uncertainties in the proper motions of nearby classical satellites \citep[e.g.][]{vanderMarel2016} and make new measurements for fainter objects within $\sim$100 kpc of the Sun \citep{Wilkinson1999}. Proper motions of more distant satellite galaxies could be measured by a multi-year HST program with followup by the James Webb Space Telescope (JWST) or the Wide-Field Infrared Survey Telescope (WFIRST) \citep{Kallivayalil2015}.
In addition, ongoing deep, wide-field sky surveys, such as the Dark Energy Survey (DES), PanSTARRS 1 (PS1), and VST ATLAS, have doubled the number of known MW satellites over the past two years \citep[e.g.][]{Bechtol2015,Koposov2015,Drlica-Wagner2015,Laevens2015b,Torrealba2016a}. The number of satellites brighter than the faintest known dwarf galaxies might eventually reach 300--600 and possibly as high as $\sim$1000 \citep{Tollerud2008}. This exciting observational progress will undoubtedly enable us to determine the MW halo mass with increasing accuracy. \acknowledgments We thank Hui-Yuan Wang and You-Cai Zhang for their help in carrying out the N-body simulations, and Jia-Xin Han and Yang Wang for their help with the identification of subhalos. We also thank the anonymous referee for constructive criticisms and helpful suggestions. ZZL is grateful to Zheng-Yi Shao, Lu Li, Ying Zu, and Jia-Xin Han for helpful discussions of statistical methods. This work was supported in part by the NSFC (11222325, 11320101002, 11533006, \& 11621303), the Knowledge Innovation Program of CAS (KJCX2-EW-J01), 973 Program No. 2015CB857003, Shanghai Key Laboratory Grant No. 11DZ2260700, Shanghai talent development funding No. 2011069, and the US DOE (DE-FG02-87ER40328). This work made use of the High Performance Computing Resource in the Core Facility for Advanced Research Computing at Shanghai Astronomical Observatory. \software{Astropy \citep{AstropyCollaboration2013}, Gadget-2 \citep{Springel2005}}
\section{Introduction}\label{sec:intro} There are many solid state materials which are one-dimensional and metallic at room temperature. Due to the coupling of the electrons to the underlying lattice, the metallic state in these materials is usually not stable, leading to a phase transition into a charge modulated state at low temperatures. The low temperature ground state of the coupled electron-phonon system is characterized by a gap in the single-particle excitation spectrum, by collective modes formed by the electron-hole pairs, and by a deformation of the lattice. Many of these compounds are inorganic (K$_{0.3}$MoO$_{3}$, NbSe$_{3}$, TaS$_{3}$) \cite{dum83,fle79,gru88}, but these properties are found in organic compounds as well ((2.5(OCH$_{3}$)$_{2}$DCNQI)$_{2}$Li, TTF-TCNQ) \cite{pin99,wan03}. The best known type of this class of phase transitions is the Peierls transition \cite{pei55}. In this case the material develops a charge density modulated state at low temperature, with a simultaneous deformation of the lattice and the opening of a gap in the quasiparticle excitation spectrum. As long as the charge density modulation is pinned by the lattice or by impurities, these materials can be described as narrow-band-gap semiconductors or even insulators. An exception to this rule is NbSe$_{3}$, which remains a semimetal at low temperature \cite{ong77}. One of the more intriguing features of low dimensional charge ordered materials is their nonlinear transport properties. In the charge density modulation ground state they exhibit ohmic conductivity only below a certain threshold applied electric field $E_T$. Below this field, the charge density modulation is pinned by the lattice, impurities, defects, and grain boundaries, and the conductivity is solely due to strongly temperature dependent quasiparticle transport.
For fields above $E_T$, the conductivity becomes strongly enhanced and nonlinear due to the contribution of the now moving, depinned charge density modulation \cite{aya99,bra04,loo02,zaw00}. In addition to the nonlinear behavior, charge density modulated materials often show an alternating current response to a static applied field. The latter may be due either to so-called ratcheting of the charge density modulation phase, as observed for instance in NbSe$_3$ \cite{fle79} and generally referred to as narrow band noise, or to macroscopic polarization oscillations, as observed for instance in blue bronze at low temperatures \cite{tes87}. The recently revived interest in the vanadium bronze $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$\ has been triggered by the observation of a charge ordering transition \cite{yam99}, and sparked once more by the observation of pressure induced superconductivity \cite{yam02} in this electronically low dimensional material. The vanadium bronze $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$\ has been the subject of various structural studies during the last 40 years \cite{sie65,yas82}. At room temperature $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$\ is a highly anisotropic metal with a site occupancy disorder of the sodium atoms. Around $T_{Na}$ = 240~K a second-order phase transition occurs, leading to an ordering of the sodium atoms and a doubling of the primitive cell along the b direction \cite{ued01}. A charge ordering transition, which is thought to be driven by the electron-phonon coupling, occurs at $T_{MI}$ = 136~K and is accompanied by a further tripling of the unit cell along the b direction \cite{ued01,nag05}, leading to a commensurate charge modulated state with a period of 6b. Temperature dependent measurements of the magnetic susceptibility revealed a magnetic transition from a paramagnetic to an antiferromagnetic state at $T_{AF}=22$~K \cite{yam99}.
Finally, as mentioned above, $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$\ becomes a superconductor for pressures above 8~GPa and low temperatures (8~K), as revealed by recent pressure dependent resistivity measurements \cite{yam02}. Optical conductivity data suggest that this system may be understood in terms of a small polaron model, where the charge ordering in fact corresponds to an ordering of the polarons \cite{pre03,pres03}. Since previous experiments strongly suggest that $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$\ is a low dimensional conductor with an electron-phonon interaction induced charge ordering, this material should also show the usual nonlinear transport properties discussed above. Indeed, the first detailed measurements of the field dependence of the conduction in the sodium bronze $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$, presented in this paper, are fully consistent with this picture. The observed nonlinear transport properties are well described using a classical domain model. The observed charge density modulation conductivity increases with increasing temperature, suggesting a screening of the charge density modulation pinning by the thermally excited carriers. \section{Temperature dependent transport}\label{sec:tdep} Single crystal samples were prepared as described elsewhere \cite{ued01}. Platelets with typical dimensions $4$ $\times$ $2$ $\times$ $0.2$ mm$^{3}$ were mounted on the cold finger of a flow cryostat and contacted using $50$~$\mu$m diameter platinum wires. Four wires were fixed on the sample surface using silver paste, spaced 1 mm apart. Measurements were performed along the b axis in a four probe configuration using a Keithley 236 source-meter in a current driven mode. The resistivity was calculated from the measured resistance and the geometry of the sample. First we turn to the temperature dependence of the low-field ohmic resistance, of which a typical example is shown in Fig.~\ref{fig:tdep} \cite{samples}.
The strong increase of the resistivity below $T_{MI}$ = 136~K clearly signals the charge ordering transition, consistent with results published in the literature. Note that the sodium ordering transition at $T_{Na} = 240$~K also leads to a change in the temperature dependence of the resistivity (see also the inset of Fig.~\ref{fig:tdep}). The small enhancement of the conductivity below $T_{Na}$\ can be understood in terms of the decreased amplitude of the spatial potential variations on the vanadium sites due to the ordering of the sodium subsystem \cite{pre03}. Although the resistivity at high temperatures ($T>T_{MI}$) is fairly small, as expected for a bad metal, it does not exhibit the metallic behavior observed previously \cite{ued01}. The reason for this could be that the samples are slightly misaligned. In this case the measured resistivity is a combination of the resistivity of the metallic b-axis and the insulating perpendicular axes, leading to the observed temperature dependence. X-ray diffraction experiments on the samples, however, have ruled out this option by showing that, to within a degree, the samples are indeed b-oriented. A more likely reason is that the sodium stoichiometry of the sample deviates slightly from $x = 0.33$, which is known to lead to a rapid loss of metallic behavior and eventually to the disappearance of the metal-insulator transition \cite{yam99}. Such deviations lead to additional disorder in the sodium site occupancy, which in turn leads to a more disordered potential on the vanadium sites which make up the one-dimensional conduction chains \cite{pre03}. Since low dimensional systems in particular are very susceptible to disorder, this may lead to a small disorder induced gap and to the observed non-metallic behavior. \begin{figure} \resizebox{0.5\textwidth}{!}{% \includegraphics{fig1} } \caption{ Resistivity as a function of temperature in $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$.
Inset: change in conductivity near the sodium ordering transition at $T_{Na}\simeq 240$~K.} \label{fig:tdep} \end{figure} An estimate for the transport gaps above and below the charge ordering transition temperature is obtained by fitting a simple activated behavior, $\rho = \rho_{0}\exp(\Delta/kT)$, to the data. Well within the charge ordered phase (30~K~$\leq T\leq$~80~K) we find $\rho_0=630$~m$\Omega$\,cm and a gap $\Delta_{LT} = 548$~K. At high temperatures (150~K~$\leq T\leq$~300~K) we find $\rho_0=75$~m$\Omega$\,cm and a gap $\Delta_{HT} = 472$~K. Surprisingly, the transport gap in the charge ordered phase is only about 15\% larger than the one found for the high temperature phase. The major change is found in the prefactors, which differ by an order of magnitude. This is consistent with the expectation that the majority of the charge carriers is frozen out by the charge ordering at $T_{MI}$. In a similar analysis Yamada {\em et al.} \cite{yam99} found an activation energy of 538~K, in good agreement with the present value. In previous work we found an optical gap of $2\Delta$=2450~K \cite{pre03}. Most likely, this optical gap corresponds to the energy needed to free a quasiparticle from the charge ordered state. Clearly then, the transport gap must have a different origin. The origin of this difference could be that the observed transport gap is in fact a sodium disorder induced pseudogap in the charge carrier spectrum, where the charge carriers themselves are present due to incomplete charge ordering, rather than due to thermal excitation as is usually the case. This would be consistent with the small difference in the transport gap values below and above the charge ordering transition, as well as with the observed finite low frequency conductivity in the optical data \cite{pre03}. \begin{figure}[hbt] \resizebox{0.5\textwidth}{!}{% \includegraphics{fig2} } \caption{ Temperature dependent conductivity (symbols, same data as in Fig.~1)
together with a fit to an activated two-particle behavior, Eq. (1) (dark line). The gray line shows a fit of thermally activated quasiparticle transport to the high temperature part of the data.} \label{fig:sol} \end{figure} However, closer inspection of the low temperature conductivity shows that it does not strictly follow an activated behavior. In particular, below 60~K there seems to be an enhancement of the conductivity, which presumably is due to the presence of an additional conductivity channel. In a purely one-dimensional model, such an enhancement may originate from the presence of mid-gap bound states of amplitude $\pi$-solitons and quasiparticles, as described by Brazovskii \cite{bra80}. Indeed, the low temperature conductivity is better described by the empirical form \begin{equation}\label{eq:actsol} \sigma (T)= \sigma^{0}_{QP}\exp(-\Delta_{QP}/kT)+\sigma^{0}_{S}\exp(-\Delta_{S}/kT). \end{equation} Here the first term on the right hand side accounts for the contribution of the quasiparticles, whereas the second term takes the midgap state conductivity into account. In the {\em one}-dimensional description, the midgap states are located midway in the quasiparticle gap, {\em i.e.} $\Delta_{S}\approx \Delta_{QP}/2$. Fitting Eq.~(\ref{eq:actsol}) to the data yields $\Delta_{QP}$ = 995~K; $\Delta_{S}$ = 487~K, and a ratio $\sigma^{0}_{QP}/\sigma^{0}_{S}\sim 0.4\times10^3$. Note that the quasiparticle gap obtained in this way becomes comparable to the gap found in optical experiments \cite{pre03}. The conductivity ratio shows that just below the phase transition quasiparticle transport dominates the conductivity, whereas the 'midgap' contribution becomes important only at lower temperatures, despite its smaller energy gap. Finally, we note that a similar analysis of blue bronze (K$_{0.3}$MoO$_{3}$) data leads to similar conclusions. For both these cases, one can question whether a one-dimensional model is applicable at all.
Indeed, the midgap states described by Brazovskii require the existence of topological $\pi$-solitons, which do not exist in three dimensions. Therefore, the above analysis merely shows the presence of additional excitations midway in the gap. Though in the sodium vanadate these might originate from the sodium disorder, one would not expect similar disorder in blue bronze. Therefore, the precise nature of the midgap states remains unresolved at present. One interesting thought is that they might result from excitations inside domain walls which separate the charge density modulation ordered regions in the samples. \section{Field dependent transport}\label{sec:edep} The charge density modulation in low dimensional systems is usually pinned by the underlying lattice, by impurities, or by structural defects. When the charge density modulation is incommensurate with the underlying lattice, the main pinning centers are the impurities and other defects. In this case the pinning may be considered relatively weak, and the charge density modulation can fairly easily move through the material upon application of an external electric field, leading to the typical nonlinear charge density wave conduction. When the modulation is commensurate with the lattice, the pinning is considered to be strong, usually much stronger than the impurity pinning, and the large pinning energy prevents conduction by the charge density modulation. The charge modulation in $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$\ is commensurate with the underlying lattice. Since the modulation period is, however, rather large (6b), the commensurability pinning in this system is expected to be relatively weak, opening the possibility of nonlinear charge density wave conduction in the charge ordered phase. To study the nonlinear transport properties of single crystal $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$, current driven field dependence measurements \cite{field} were performed in the temperature range 65~K - 300~K.
The obtained transport properties are very similar to those observed in CDW systems. A typical example of the conductivity measured along the b-axis at 65~K is displayed in Fig.~\ref{fig:nlc}. \begin{figure}[hbt] \resizebox{0.5\textwidth}{!}{% \includegraphics{fig3} } \caption{Nonlinear field dependent conductivity in $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$\ measured along the b axis at 65~K, displaying three charge density modulation transport regimes. } \label{fig:nlc} \end{figure} The behavior of the conductivity shows three regimes. Below 0.06~mV/cm (the first threshold field) the conductivity is field-independent and mainly due to quasiparticle transport. At about 0.06~mV/cm the conductivity shows a nonlinear increase resulting from an incoherent contribution of the charge density wave to the conductivity. This behavior saturates around 4~mV/cm. Above the second threshold field at 30~mV/cm a steep increase of the conductivity takes place, signaling the onset of the coherent charge density modulation regime. Despite the strong increase of the conductivity, it never reaches values comparable to the metallic state. The values for the threshold fields are smaller than those found, for example, in blue bronze. Zawilski {\em et al.} \cite{zaw00} reported 40~mV/cm for the first threshold field and around 2000~mV/cm for the second threshold field in K$_{0.3}$MoO$_{3}$\ at 60~K, while Mihaly {\em et al.} \cite{mih88} found 40~mV/cm in K$_{0.3-x}$Na$_{x}$MoO$_{3}$ around the same temperature. Fleming {\em et al.} \cite{fle86} found the first threshold field around 500~mV/cm in TaS$_{3}$ and 90~mV/cm in K$_{0.3}$MoO$_{3}$, both measured at 60~K. In K$_{0.3-x}$Na$_{x}$MoO$_{3}$ (x = 0, 0.02, 0.05, 0.1), at 77~K, Wang {\em et al.} \cite{wan99} reported values of (300, 670, 750, 950) mV/cm for the first threshold field at 180~K. Beauchene {\em et al.} \cite{bea86} reported similar values in Rb$_{0.3}$MoO$_{3}$.
K\"{u}ntscher {\em et al.} \cite{kun05} found in K$_{0.3}$MoO$_{3}$\ and Rb$_{0.3}$MoO$_{3}$\ values of 150~mV/cm for the first threshold field at 60~K, while van Loosdrecht {\em et al.} \cite{loo02} found 200~mV/cm at the same temperature. Although the values of the first threshold field for K$_{0.3}$MoO$_{3}$\ show some variation, they are typically in the 50-200~mV/cm range, {\it i.e.} substantially higher than the present values observed for $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$. This is consistent with the notion that the temperature dependence of the low field transport observed in $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$\ is due to incomplete charge ordering resulting from disorder in the sodium sublattice. The presence of charge carriers, even at low temperatures, leads to screening of the pinning potential, and hence to a lowering of the threshold field. In contrast, the presence of charge carriers in blue bronze is almost entirely due to thermal quasiparticle excitations. \begin{figure}[hbt] \resizebox{0.5\textwidth}{!}{% \includegraphics{fig4} } \caption{ Nonlinear conductivity along the b axis as a function of electric field in $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$\ measured from $65$~K to $300$~K (in $5$~K steps between 65~K - 125~K, 140~K - 230~K, and 250~K - 300~K; in 2~K steps in the transition temperature regions: 128~K - 146~K, and 236~K - 246~K).} \label{fig:nlct} \end{figure} Figure \ref{fig:nlct} displays the electric field dependence of the conductivity along the b-axis for a variety of temperatures between 65~K and 300~K. The nonlinear behavior is most pronounced at low temperatures, and slowly decreases upon increasing the temperature towards the transition temperature $T_{MI}$. The first threshold field $E_T$, {\it i.e.} the field required to induce charge density modulation conductivity, is observed only below $T_{MI}$. As the temperature is reduced, the threshold fields increase, evidencing a strengthening of the charge density modulation pinning.
This is due to the reduction of the free carrier concentration at lower temperatures, leading to a less effective screening of pinning centers. The second transport regime, the incoherent moving regime, shows an increasing conductivity followed by saturation upon increasing field. Going up in temperature, the field at which this saturation is reached increases, until around 90~K it merges with the second threshold field.\\ Even above the charge ordering transition temperature, the nonlinear behavior has not entirely disappeared, although there are no clear threshold fields anymore. Nonlinear conduction is still observed up to the sodium ordering temperature $T_{Na}$ = 240~K, consistent with the disorder induced non-metallic behavior observed above $T_{MI}$. Finally, above 240~K a nearly field independent conductivity is observed. \begin{figure}[hbt] \resizebox{0.5\textwidth}{!}{% \includegraphics{fig5} } \caption{Temperature dependence of the threshold fields for $T<T_{MI}$, obtained from the conductivity data. Open symbols (left scale): first threshold field; filled symbols (right scale): second threshold field. } \label{fig:tres} \end{figure} The temperature dependences of the first and second threshold fields are shown in Fig.~\ref{fig:tres}. The open symbols display the first threshold field, $E_T$, obtained from the conductivity curves by taking the values where the conductivity starts to increase. The filled symbols display the second threshold field, $E_{T}^{*}$, obtained in a similar manner. Although there is a factor of 10$^3$ difference between the two threshold fields, the displayed behavior is qualitatively the same: the threshold field strongly decreases with increasing temperature. With some exceptions (for example K$_{0.3}$MoO$_{3}$\ or TaS$_{3}$ \cite{sch89}), this type of behavior, in particular for the first threshold field, is common for many of the known charge density wave systems.
At low temperature, when the number of quasiparticles is reduced, large electric fields may build up around pinning centers, and the value of $E_{T}^{*}$ is essentially determined by the amount of disorder in the system. Going up in temperature, the thermally excited quasiparticles tend to homogenize the electric field inside the sample. This effect is a natural source for the redistribution of the driving fields inside the sample. A variety of models have been proposed to describe the nonlinear conductivity observed in charge density modulation materials, mostly based on the original suggestion of Fr\"{o}hlich \cite{fro54} that the conductivity is dominated by sliding CDW transport. Among these, there are two main approaches to describing the CDW conductivity \cite{gru88,bra04,oga05}. The first treats the CDW as a classical particle moving in a periodic potential, with the period determined by that of the CDW \cite{mon82,gru81}. This model gives a sharp threshold field $E_{T}$ for the onset of nonlinearity and a saturation of the conductivity at high electric fields. The second model, proposed by Bardeen and referred to as the tunneling model \cite{bar79}, assumes that the nonlinear transport occurs as a result of coherent tunneling of the CDW over macroscopic distances. Besides the threshold field $E_{T}$, this model gives another characteristic field $E_{0}$, which can be interpreted as a tunneling barrier. Here too, the onset of the conductivity shows a sharp threshold field, and at high electric fields the conductivity saturates. Both models rely on a $T = 0$ treatment of the problem, though nonzero temperature models based on thermally assisted flux creep have been discussed as well \cite{lem99,oga05}. The models described above fail to account for the charge density modulation conductivity observed in $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$.
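To illustrate the first of these approaches, the overdamped classical-particle ("washboard") equation of motion can be integrated directly: the particle is pinned below $E_T$, and above threshold its time-averaged drift velocity approaches $\sqrt{E^2-E_T^2}$. The sketch below is ours, in dimensionless units and with illustrative function names; it is not the model adopted later in this paper.

```python
import math

def drift_velocity(E, E_T, dt=1e-3, t_max=500.0):
    """Overdamped classical particle in a tilted washboard potential,
    dx/dt = E - E_T*sin(x): pinned for E <= E_T, sliding above
    threshold with mean velocity sqrt(E**2 - E_T**2)."""
    if E <= E_T:
        return 0.0  # the particle settles into a potential minimum
    x = 0.0
    for _ in range(int(t_max / dt)):
        x += (E - E_T * math.sin(x)) * dt  # forward-Euler step
    return x / t_max  # time-averaged drift velocity

# above threshold, the numerical average approaches the analytic result
print(drift_velocity(2.0, 1.0), math.sqrt(2.0**2 - 1.0**2))
```

The sharp onset at $E_T$ and the approach to ohmic behavior at high fields are exactly the features that, for a single coherent degree of freedom, are too abrupt to describe the gradual onset observed here.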
Fukuyama, Lee, and Rice \cite{fuk78,lee79} discussed the effect of pinning centers on charge density wave dynamics. They focused on phase fluctuations and showed that, when the pinning is weak, the system can be thought of as breaking up into domains. Here, we integrate this notion into a simple phenomenological model describing the observed nonlinear transport in $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$, by treating the sample as a collection of interconnected domains. Each domain is characterized by its own conductivity and threshold field, so that the sample can be considered as a collection of parallel and series nonlinear conduction paths. At low applied fields (smaller than $E_T$), excited quasiparticles dominate the conductivity of a single domain, leading to the strongly temperature dependent ohmic behavior. All charge density modulation domains are pinned by the lattice and by defects, and will not start moving until the applied field exceeds a certain critical field. At higher applied fields, some domains start to become depinned, leading to an additional charge density modulation contribution to the conductivity. The charge density modulation still cannot move as a whole due to the domain structure resulting from grain boundaries and strong pinning centers. The sample is now in the incoherent moving regime. Finally, for applied fields exceeding a second critical field, the charge density modulation may move as a whole (or at least a percolation path exists between the contacts), and the conductivity becomes completely dominated by the moving charge density modulation transport. Within this model, the second critical field is then a percolation threshold, and the transport regime above this field is dubbed the coherent moving regime. The model sketched above can be made more quantitative by specifying a model for the charge density modulation conductivity itself.
Here we take one of the simplest approaches, and describe the charge density modulation within a domain as a charged particle moving due to an applied field $E$ in a viscous medium \cite{gru88}. The charge density modulation conductivity is then given by \begin{equation}\label{eq:bare} \sigma_{cdm}(E) = {\sigma_{c}\ \frac{E-E_{T}}{E}\ \theta(E-E_{T})}\ , \end{equation} where the Heaviside function $\theta(E-E_{T})$ ensures that the equation is also valid for $E<E_T$. In the incoherent moving regime the conductivity can then be modeled as a collection of interconnected charge density modulation domains, shunted by the free carrier conductivity. In general, this is still a complex system whose solution would require detailed knowledge of the domain properties and their connectivity. Here we will take a more qualitative, statistical approach, and model the charge density modulation system as a statistically large number of interconnected domains. The conductivity of this network of domains, together with the free carrier conductivity, then yields the total conductivity. The network can be considered as a collection of parallel and series conduction paths. For the series connected domains of a single conduction path, the conductivity has the same form as Eq. (\ref{eq:bare}), but with an effective threshold field $\hat{E}_T=\sum_i{E_T^i}$ and effective conductivity $\hat{\sigma}_c=\left(\sum_i{(\sigma_c^i)^{-1}}\right)^{-1}$. The total charge density modulation conductivity is then given by \begin{equation}\label{eq:full} \sigma_{cdm}(E) = \sum_{j}{\hat{\sigma}_{c}^{j}\ \frac{E-\hat{E}_{T}^{j}}{E}\ \theta(E-\hat{E}_{T}^{j})}\ , \end{equation} where $j$ runs over the parallel conduction pathways, each having its own effective threshold field $\hat{E}_{T}^{j}$ and effective conductivity $\hat{\sigma}_{c}^{j}$.
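The discrete network of Eq. (\ref{eq:full}) is straightforward to evaluate numerically. The Python sketch below (all domain parameters drawn at random, purely for illustration) shows how a spread of effective thresholds smears the sharp single-domain onset into a gradual rise:

```python
import random

def path_sigma(E, E_T_eff, sigma_c_eff):
    # single series path, Eq. (bare): conducts only above its threshold
    return sigma_c_eff * (E - E_T_eff) / E if E > E_T_eff else 0.0

def cdm_sigma(E, paths):
    # Eq. (full): parallel paths, conductivities add
    return sum(path_sigma(E, et, sc) for et, sc in paths)

def make_path(n_domains=20, rng=random):
    # series-connected domains: thresholds add, inverse conductivities add
    e_ts = [rng.uniform(0.0, 2.0) for _ in range(n_domains)]
    sigmas = [rng.uniform(0.5, 1.5) for _ in range(n_domains)]
    return sum(e_ts), 1.0 / sum(1.0 / s for s in sigmas)

random.seed(1)
paths = [make_path() for _ in range(1000)]
# the total CDM conductivity rises gradually with field, instead of
# switching on sharply at a single threshold
for E in (5.0, 20.0, 40.0, 80.0):
    print(E, cdm_sigma(E, paths))
```

Each path depins at its own effective threshold, so the ensemble response is smooth even though every individual term in Eq. (\ref{eq:full}) has a sharp onset.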
If there is a large number of conduction paths, the above summation can be replaced by an integration, from zero to infinity, weighted by the statistical distribution of threshold fields and conductivities. For simplicity, we assume a Lorentzian distribution of width $\gamma$, centered on $E_T^0$, for the threshold fields, and a constant effective conductivity $\hat{\sigma}_{c}^{j}=\hat{\sigma}_c$ for each path. The total conductivity, including the free carrier contribution, $\sigma_{0}$, can then be written as \begin{equation}\label{eq:shunt} \sigma(\epsilon) = \sigma_{0}+\hat{\sigma}_{c}\frac{2\epsilon(\arctan(\epsilon)+\arctan(\epsilon_0))-\ln( \frac{1+\epsilon^{2}}{1+\epsilon_0^{2}})}{(\epsilon+\epsilon_0)(\pi+2\arctan(\epsilon_0))} , \end{equation} with $\epsilon_0=\hat{E}_T^0/\gamma$ and $\epsilon=(E-\hat{E}_T^0)/\gamma$. In the limit $\gamma/\hat{E}_T^0 \gg 1$, which is valid for the present experiment, the total conductivity becomes \begin{equation}\label{eq:sim} \sigma(\epsilon) = \sigma_{0}+\hat{\sigma}_{c}\frac{2\epsilon\arctan(\epsilon)-\ln( 1+\epsilon^{2})}{\epsilon\pi} , \end{equation} where $\epsilon=E/\gamma$. In the derivation of Eq. (\ref{eq:sim}) quite a few assumptions have been made. The most important one is that domains are formed in the charge density modulation material, within which the charge density modulation transport is coherent. This is mainly based on the observation of the slow onset of the moving conductivity in $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$ (see Fig. \ref{fig:nlc} and Fig. \ref{fig:nlct}), as well as in, for instance, K$_{0.3}$MoO$_{3}$\ \cite{loo02}. The assumption of parallel paths of series resistors is fairly robust; allowing for connectivity between these paths would only lead to additional parallel pathways for conduction. Of course, nothing is known about the statistical distribution of domain properties. Taking a symmetric Lorentzian (or Gaussian) distribution simply makes the integration over domains tractable. 
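As a consistency check of our own (not part of the original analysis), the closed form of Eq. (\ref{eq:sim}) can be compared against a direct numerical integration of the single-path response of Eq. (\ref{eq:bare}) over a half-Lorentzian threshold distribution, in units where $\gamma=1$ and $\hat{\sigma}_c=1$:

```python
import math

def sigma_sim(eps):
    """Closed-form incoherent term of Eq. (sim), in units of sigma_c (eps = E/gamma)."""
    return (2.0 * eps * math.atan(eps) - math.log(1.0 + eps * eps)) / (eps * math.pi)

def sigma_numeric(eps, n=100000):
    """Midpoint-rule integration of (E - t)/E over the normalized half-Lorentzian
    threshold distribution (2/pi)/(1 + t^2) on t in [0, E] (the theta function
    cuts off thresholds above E)."""
    h = eps / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += (eps - t) / eps * (2.0 / math.pi) / (1.0 + t * t) * h
    return total
```

At large fields the bracket tends to unity, so this term saturates at $\hat{\sigma}_c$, consistent with the saturation regime discussed below.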
Taking an asymmetric distribution would arguably be more physical. Also, typical domain sizes are not known. X-ray diffraction measurements \cite{yama02}, however, have shown that the width of the superstructure peaks originating from charge ordering is comparable to that of the fundamental peaks, thereby setting a lower limit on the domain sizes of $\sim$100 nm. Finally, we did not allow for a distribution of effective conductivities for the conduction paths. Numerical simulations taking Lorentzian distributions of $\hat{\sigma}^{j}_{c}$\ have shown, however, that the shape of the field dependent conductivity does not depend strongly on this. We now return to the data in Fig. \ref{fig:nlct}. We have fitted Eq. (\ref{eq:sim}) to the field dependent conductivities, measured at different temperatures between 65~K and 136~K. Note that the only free parameters in the fits are the width of the distribution, $\gamma$, and the charge density modulation conductivity, $\hat{\sigma}_{c}$, since the normal conductivity, $\sigma_{0}$, can be obtained directly from the low field data. At low temperatures, a good approximation of $\hat{\sigma}_{c}$ can be obtained from the field dependent conductivity data by taking the conductivity difference between the ohmic regime and the saturation regime. The only remaining fit parameter in this situation is the width of the Lorentzian distribution, $\gamma$. The fits generally show good agreement with the data, and some typical results are shown in Fig. \ref{fig:fit1}. \begin{figure}[hbt] \resizebox{0.5\textwidth}{!}{% \includegraphics{fig6} } \caption{Fits of the domain model, Eq. (\ref{eq:sim}) (solid lines), to the nonlinear transport data (open symbols) at 65~K, 70~K and 75~K.} \label{fig:fit1} \end{figure} The temperature dependence of the threshold field distribution width, $\gamma$, obtained from the fits is displayed in Fig. \ref{fig:gam}. 
\begin{figure}[hbt] \resizebox{0.5\textwidth}{!}{% \includegraphics{fig7} } \caption{ Temperature dependence of the width $\gamma$ of the Lorentzian threshold field distribution obtained from fits of the data in Fig. \ref{fig:nlct} to the domain model, Eq. (\ref{eq:sim}).} \label{fig:gam} \end{figure} At low temperatures, the width increases with increasing temperature. It shows a pronounced peak at 80~K, after which it starts to decrease, becoming very small in the region of the MI transition. The decrease with increasing temperature observed above 80~K is what one might intuitively expect: the enhanced screening will decrease the typical domain threshold fields, thereby decreasing $\gamma$. Apart from the decrease of the average pinning potential, there will also be changes in the statistical distribution as the temperature is lowered. At low temperatures one expects a larger number of domains with a relatively small pinning potential, reducing the distribution width. We therefore believe that the increase of the width observed at low temperatures results from a competition between enhanced screening and the formation of larger, more strongly pinned, domains through the coalescence of small, weakly pinned, domains as the temperature increases. Figure \ref{fig:con} shows the temperature dependence of the effective charge density modulation conductivity, $\hat{\sigma}_{c}$, and the low field ohmic conductivity, $\sigma_0$. The moving charge density modulation conductivity is found to be almost an order of magnitude larger than the quasiparticle contribution. The small kink in the charge density modulation contribution around 90~K probably again results from the competition between screening and coalescence. 
\begin{figure}[hbt] \resizebox{0.5\textwidth}{!}{% \includegraphics{fig8} } \caption{ Temperature dependence of the effective charge density modulation conductivity, $\hat{\sigma}_{c}$ (open symbols), and the low field ohmic conductivity, $\sigma_0$ (filled symbols), obtained from fits of the data in Fig. \ref{fig:nlct} to the domain model, Eq. (\ref{eq:sim}) (see text).} \label{fig:con} \end{figure} The temperature dependence of both contributions shows a similar activated behavior. For the quasiparticles this has already been discussed in section \ref{sec:tdep}. The usual interpretation of the activated behavior of the moving density modulation contribution is in terms of thermally activated flux creep \cite{lem99}, similar to the flux creep of the Abrikosov flux lattice in superconductors \cite{and64}. From fits to an activated behavior we estimate the low temperature ($T<90$~K) activation energies for the quasiparticle and the moving density modulation contributions to be 740~K and 805~K, respectively. For the ohmic contribution, this is in good agreement with the earlier results (Sec. \ref{sec:tdep}). The model discussed above gives a phenomenological understanding of the ohmic and incoherent transport regimes. It does not, however, describe the coherent transport regime, where a sharp rise in conductivity takes place. The appearance of this second threshold field can be understood as follows. As the field increases, more and more local pinning potentials will be overcome, increasing the number of domains which contribute to the conductivity. Eventually this will lead to the formation of a percolation path between the contacts. We thus propose that the second threshold is in fact a percolation threshold. For fields larger than the second threshold field, $E_{T}^{*}$, the CDW will then coherently move along such percolation paths, leading to the sharp rise in conductivity. This leads to an additional contribution to the conductivity of the form of Eq. 
(\ref{eq:shunt}), so that the total conductivity now becomes \begin{equation}\label{eq:all} \begin{split} \sigma(\epsilon) = \sigma_{0}+\hat{\sigma}_{c}\frac{2\epsilon\arctan(\epsilon)-\ln( 1+\epsilon^{2})}{\epsilon\pi}+ \\% split here \hat{\sigma}_{p}\frac{2\epsilon_{p}(\arctan(\epsilon_{p})+\arctan(\epsilon_{0p}))-\ln( \frac{1+\epsilon_{p}^{2}}{1+\epsilon_{0p}^{2}})}{(\epsilon_{p}+\epsilon_{0p}) (\pi+2\arctan(\epsilon_{0p}))} \end{split} \end{equation} where $\epsilon_{p} = (E-E_{T}^{*})/\gamma_{p}$ and $\epsilon_{0p} = E_{T}^{*}/\gamma_{p}$. Here $\gamma_{p}$ and $\hat{\sigma}_{p}$ are the width of the distribution and the percolation charge density modulation conductivity, respectively. We have fitted this last equation to the low temperature ($T<90$~K) field dependent conductivities, and some typical results of the fitting are shown in Fig. \ref{fig:fit2}. \begin{figure}[hbt] \resizebox{0.5\textwidth}{!}{% \includegraphics{fig9} } \caption{ Fits of the domain-percolation model, Eq. (\ref{eq:all}) (solid lines), to the nonlinear transport data (open symbols) at 65~K, 70~K and 75~K. The inset shows the behavior of the percolation threshold with temperature.} \label{fig:fit2} \end{figure} Clearly the data follow Eq. (\ref{eq:all}) quite well. The temperature dependence of the upper threshold field (see inset of Fig. \ref{fig:fit2}) closely follows the results presented in Fig. \ref{fig:tres}. The reason for the sharp decrease of the percolation threshold field upon increasing temperature is the same as for the incoherent moving threshold, as discussed in section \ref{sec:tdep}, namely the increased screening of impurities upon increasing temperature. Finally, we note that the fitting is fairly insensitive to the distribution width as long as $\gamma_p\ll E_T^{\ast}$, as expected for a percolation threshold. 
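For completeness, the full field dependence of Eq. (\ref{eq:all}) can be evaluated as in the following sketch; the parameter values used in it are arbitrary illustrations of our own, not the fitted values:

```python
import math

def shunt_term(eps, eps0):
    """Bracket of Eq. (shunt): normalized contribution of a truncated Lorentzian
    threshold distribution; eps = (E - E0)/gamma, eps0 = E0/gamma."""
    num = (2.0 * eps * (math.atan(eps) + math.atan(eps0))
           - math.log((1.0 + eps * eps) / (1.0 + eps0 * eps0)))
    return num / ((eps + eps0) * (math.pi + 2.0 * math.atan(eps0)))

def sigma_total(E, s0, sc, gamma, sp, gamma_p, Etp):
    """Eq. (all): ohmic + incoherent (Eq. sim) + percolative (Eq. shunt) terms."""
    eps = E / gamma
    incoherent = sc * (2.0 * eps * math.atan(eps)
                       - math.log(1.0 + eps * eps)) / (eps * math.pi)
    percolative = sp * shunt_term((E - Etp) / gamma_p, Etp / gamma_p)
    return s0 + incoherent + percolative
```

Each term saturates at its prefactor, so $\sigma \to \sigma_0 + \hat{\sigma}_c + \hat{\sigma}_p$ at large fields, while for $\gamma_p \ll E_T^{*}$ the percolative term is negligible below $E_T^{*}$.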
\section{Conclusions}\label{sec:con} We presented detailed nonlinear transport experiments on $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$\ in the temperature range 30~K - 300~K. The low field data in the charge ordered phase show that two types of excitations contribute to the transport. The charge density modulation quasiparticle gap is found to be about 700--800~K, and likely depends on the sodium stoichiometry. Evidence for a second type of excitation, with a gap of $\sim 500$~K, has been presented, although the exact origin of this excitation remains unclear at present. It might be either a bound state of collective charge density modulation excitations, like the phason, or possibly an excitation within the domain walls between ordered charge density modulation domains. A competing model for the thermally activated low field charge transport in a low dimensional system in the presence of disorder is the variable range hopping (VRH) model \cite{mot69,mot79}. Analyzing the low temperature data in terms of VRH conductivity indeed leads to a reasonable agreement with this model as well, again with deviations at temperatures below 50~K. Since the present data cannot distinguish between these models, we adopted the most widely used model here as well. The field dependent data clearly show the charge density modulation nature of the insulating phase, and are very similar to the transport in other well known charge density modulation materials like K$_{0.3}$MoO$_{3}$. The phenomenological domain model for nonlinear transport in charge density modulation materials presented here is found to be in good agreement with the experiments, and we believe that this model should be applicable to other semiconducting charge ordered materials as well. The model could be improved by taking a more realistic model for the statistical distribution of domain properties and the single domain transport, and by allowing for field dependent domain properties. 
In particular, the pinning fields are expected to be field dependent, since at high sliding velocities the moving charge density modulation may excite quasiparticles, resulting in charge density modulation friction. Finally, we believe that in low free carrier density ({\em i.e.} semiconducting) compounds like $\beta-$Na$_{0.33}$V$_{2}$O$_{5}$\ and K$_{0.3}$MoO$_{3}$\ much of the temperature dependence of the transport properties, including the observed decrease of threshold fields upon raising the temperature, can be understood in terms of enhanced screening of pinning centers by thermally excited charge carriers. Acknowledgements -- This project is supported by the Netherlands Foundation for Fundamental Research on Matter with financial aid from the Nederlandse Organisatie voor Wetenschappelijk Onderzoek.
\section{Introduction} \setcounter{theorem}{0} \numberwithin{theorem}{section} Recently Chudnovsky, Kim, Oum, and Seymour established that any sufficiently large prime graph contains one of a short list of induced prime subgraphs \cite{CKOS}. A module of a graph $G=(V,E)$ is a set of vertices $X\subseteq V$ such that any vertex $v\in V\setminus X$ is adjacent either to all vertices in $X$ or to none of them. Prime graphs are graphs which contain no non-trivial modules. The interest in prime graphs arises from questions around so-called modular decompositions of graphs, as well as the fact that the celebrated Erd\H{o}s-Hajnal conjecture reduces to the case where the omitted graph is prime. In the present paper we re-prove the main theorem of \cite{CKOS}, making use of model-theoretic ingredients, in a way that improves the bounds and offers a different structural perspective on the graphs in question. Our background aim is to exemplify the usefulness of model-theoretic ideas in proofs in finite combinatorics. This approach complements that of \cite{MS}, where certain indicators of complexity which had been identified by people working in combinatorics coincided with model-theoretic dividing lines, and so could be characterized by means of model theory. The model-theoretic contribution of the present argument may be described as follows. The proof of \cite{CKOS} proceeds by means of several cases, sketched in section 2 below, and applies Ramsey's theorem as a main tool. In \cite{MS} it was shown that Ramsey's theorem works much better when the graph is so-called stable, a finitization of an important structural property identified by model theory (for history, see the introduction to \cite{MS} or the original source \cite{classification}). 
Our approach in the present paper, then, is essentially to reconfigure the proof of \cite{CKOS} so that the procedure for extracting the given configurations is different depending on the degree of stability of the graph, and can take advantage of this additional structural information. We believe this approach raises some interesting questions about model theory's potential contribution to calibrating arguments about finite objects. We have not tried to construct examples showing the bound we obtain is optimal, in part because we believe that a further development of what might be called `model-theoretic Ramsey theory' in the spirit of \cite{MS} may, in general, allow for even finer calibrations in the finite setting. At the same time, it is important to add that model theory works here to amplify the combinatorial analysis rather than to replace it. Already in the present argument, the contribution of combinatorics in e.g. identifying definitions such as `module' (which is much stronger than, if in some sense analogous to, the model-theoretic notion of an indiscernible sequence) and in isolating the original collection of induced configurations appears essential. It is the interaction of these ideas and perspectives which to us seems most interesting. Complementing this approach, the paper concludes with the proof of an infinite analogue of Theorem \ref{mainthm} which implies the finite version, but without explicit bounds. \tableofcontents \section{Definitions and notation} In this section we state relevant definitions and notation, most of which, but not all, is from \cite{CKOS}. Given a set $X$, let ${X\choose 2}=\{Y\subseteq X: |Y|=2\}$. A graph is a pair $(V,E)$ where $V$ is a set of vertices and $E\subseteq {V\choose 2}$ is a set of edges. Unless otherwise stated, all of the following definitions and notation apply to both infinite and finite graphs. Given a graph $G$, we write $xy$ as shorthand for the edge $\{x,y\}$. 
We will often write $V(G)=V$ and $E(G)=E$. A set of vertices $X$ inside a graph is called a \emph{module} if every vertex outside of $X$ is adjacent to every vertex in $X$ or non-adjacent to every vertex in $X$. A module $X$ of a graph $G$ is called \emph{trivial} if $|X|=1$ or $X=V(G)$. A graph $G$ is called \emph{prime} if it has no non-trivial modules. We say a set of vertices $X$ is \emph{independent} if every pair of vertices in $X$ is non-adjacent, and we say $X$ is \emph{complete} if every pair of vertices in $X$ is adjacent. We say a vertex $v$ is \emph{mixed} on a subset $X\subseteq V$ if there are $x,y\in X$ such that $vx\in E$ and $vy\notin E$. Given a graph $G=(V,E)$, the \emph{complement of $G$}, denoted $\overline{G}$, is the graph with vertex set $V$ and edge set ${V\choose 2}\setminus E$. Given two graphs $G$ and $H$, we will say $G$ ``contains a copy of $H$" to mean there is an induced subgraph of $G$ which is isomorphic to $H$. We now introduce important structural configurations which will appear throughout the paper. Fix an integer $n\geq 1$. \begin{enumerate}[$\bullet$] \item A \emph{half-graph} of height $n$ is a graph with $2n$ vertices $a_1,\ldots, a_n, b_1,\ldots, b_n$ such that $a_i$ is adjacent to $b_j$ if and only if $i\leq j$. \item The \emph{bipartite half-graph} of height $n$, $H_n$, is a graph with $2n$ vertices $a_1,\ldots, a_n, b_1,\ldots, b_n$ such that $a_i$ is adjacent to $b_j$ if and only if $i\leq j$ and such that $\{a_1,\ldots, a_n\}$ and $\{b_1,\ldots, b_n\}$ are independent sets. \item The \emph{half split graph} of height $n$, $H'_n$, is a graph with $2n$ vertices $a_1,\ldots, a_n, b_1,\ldots, b_n$ such that $a_i$ is adjacent to $b_j$ if and only if $i\leq j$ and such that $\{a_1,\ldots, a_n\}$ is an independent set and $\{b_1,\ldots, b_n\}$ is a complete set (a graph is a \emph{split graph} if its vertices can be partitioned into a complete set and an independent set). 
\item Let $H_{n,I}'$ be the graph obtained from $H'_n$ by adding a new vertex adjacent to $a_1,\ldots, a_n$ (and no others). Let $H_n^*$ be the graph obtained from $H'_n$ by adding a new vertex adjacent to $a_1$ (and no others). \item The \emph{thin spider} with $n$ legs is a graph with $2n$ vertices $a_1,\ldots, a_n, b_1,\ldots, b_n$ such that $\{a_1,\ldots, a_n\}$ is an independent set, $\{b_1,\ldots, b_n\}$ is a complete set, and $a_i$ is adjacent to $b_j$ if and only if $i=j$. The \emph{thick spider} with $n$ legs is the complement of the thin spider with $n$ legs. In particular, it is a graph with $2n$ vertices $a_1,\ldots, a_n, b_1,\ldots, b_n$ such that $\{a_1,\ldots, a_n\}$ is an independent set, $\{b_1,\ldots, b_n\}$ is a complete set, and $a_i$ is adjacent to $b_j$ if and only if $i\neq j$. A \emph{spider} is a thin spider or a thick spider. \item A sequence of distinct vertices $v_0,\ldots, v_m$ in a graph $G$ is called a \emph{chain} from a set $I\subseteq V(G)$ to $v_m$ if $m\geq 2$ is an integer, $v_0, v_1\in I$, $v_2,\ldots, v_m\notin I$, and for all $i>0$, $v_{i-1}$ is either the unique neighbor or the unique non-neighbor of $v_i$ in $\{v_0,\ldots, v_{i-1}\}$. The \emph{length} of a chain $v_0,\ldots, v_m$ is $m$. \end{enumerate} Given an integer $m\geq 1$, $K_m$ denotes the complete graph on $m$ vertices. Given integers $m,n$, $K_{m,n}$ denotes the complete bipartite graph with parts of sizes $m$ and $n$, that is, the graph with $m+n$ vertices $\{a_1,\ldots, a_m, b_1,\ldots, b_n\}$ such that $\{a_1,\ldots, a_m\}$ and $\{b_1,\ldots, b_n\}$ are independent and $a_i$ is adjacent to $b_j$ for each $1\leq i\leq m$ and $1\leq j\leq n$. Given a graph $G=(V,E)$, the \emph{line graph} of $G$ is the graph $G'$ which has vertex set $V(G')=E(G)$ and edge set consisting of pairs of elements $e_1\neq e_2\in E(G)$ such that $e_1\cap e_2\neq \emptyset$. 
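The definitions of module and primality above can be made concrete with a brute-force check on small graphs (a sketch of our own, exponential in the number of vertices and intended only for illustration):

```python
from itertools import combinations

def is_module(adj, X):
    """X is a module: every vertex outside X is adjacent to all of X or to none of X."""
    X = set(X)
    return all(len({x in adj[v] for x in X}) == 1 for v in set(adj) - X)

def is_prime(adj):
    """Prime: the only modules are the singletons and the whole vertex set."""
    V = list(adj)
    return all(not is_module(adj, X)
               for k in range(2, len(V))
               for X in combinations(V, k))
```

For example, the path on four vertices is prime, whereas in the path on three vertices the two endpoints form a non-trivial module.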
Given an integer $m$, a \emph{path} of length $m$ is a set of vertices $v_0,\ldots, v_m$ such that $v_i$ is adjacent to $v_j$ if and only if $j=i+1$ or $i=j+1$. The \emph{$m$-subdivision} of a graph $G$ is the graph obtained from $G$ by replacing every edge in $G$ with an induced path of length $m+1$. A perfect matching of height $n$ is the disjoint union of $n$ edges, that is, a graph with $2n$ vertices $\{a_1,\ldots, a_n, b_1,\ldots, b_n\}$ such that $\{a_1,\ldots, a_n\}$ and $\{b_1,\ldots, b_n\}$ are independent and $a_i$ is adjacent to $b_j$ if and only if $i=j$. Note that in all of these definitions except that of a chain and of an $m$-subdivision, it makes sense to replace $m$ and $n$ by any cardinals $\lambda$ and $\mu$. In section 6, we will wish to discuss versions of some of these configurations where $m$ or $n$ is replaced by an infinite cardinal. In those cases, we will use the same notation as laid out in this section. \section{Outline of proof of main theorem from \cite{CKOS}}\label{oldproofoutline} In this section we give an outline of the proof of Theorem \ref{mainthm} presented in \cite{CKOS}. We do this to allow for comparison to the proofs we present in sections \ref{finitary} and \ref{infinitary}. Our outline consists of the statements of the propositions from \cite{CKOS} which form the main steps in their proof, followed by a flow chart illustrating the structure of the proof. We think this outline is sufficient for understanding the global structure of the proof. For more details we direct the reader to the original paper \cite{CKOS}. Throughout, $R(n_1,\ldots, n_k)$ denotes the smallest integer $m$ such that for any coloring of the edges of $K_m$ with $k$ colors, there is a complete graph on $n_i$ vertices in color $i$ for some $1\leq i\leq k$. 
\begin{theorem}[Theorem 1.2 of \cite{CKOS}]\label{mainthm} For every integer $n\geq 3$ there is $N$ such that every prime graph with at least $N$ vertices contains one of the following graphs or their complements as an induced subgraph. \begin{enumerate}[(1)] \item The $1$-subdivision of $K_{1,n}$ (denoted by $K_{1,n}^{(1)}$). \item The line graph of $K_{2,n}$. \item The thin spider with $n$ legs. \item The bipartite half-graph of height $n$. \item The graph $H'_{n,I}$. \item The graph $H_n^*$. \item A prime graph induced by a chain of length $n$. \end{enumerate} \end{theorem} \noindent We will use the following fact from \cite{CKOS}. \begin{proposition}[Corollary 2.3 from \cite{CKOS}]\label{cor2.3} Let $t>3$. Every chain of length $t$ contains a chain of length $t-1$ inducing a prime subgraph. \end{proposition} \noindent The following are the propositions which form the main steps of the proof of Theorem \ref{mainthm} in \cite{CKOS}. \begin{proposition}[Proposition 3.1 from \cite{CKOS}]\label{3.1} For all integers $n, n_1, n_2>0$, there is $N=f(n,n_1,n_2)$ such that every prime graph with an $N$-vertex independent set contains an induced subgraph isomorphic to \begin{enumerate}[(1)] \item a spider with $n$ legs, \item $\overline{L(K_{2,n})}$, \item the bipartite half-graph of height $n$, \item the disjoint union of $n_1$ copies of $K_2$, denoted $n_1K_2$ (i.e. an induced matching of size $n_1$), or \item the half split graph of height $n_2$. \end{enumerate} Specifically, $f(n,n_1,n_2)=2^{M+1}$ where $M=R(n_1+n,2n-1,n+n_2, n+n_2-1)$. \end{proposition} \begin{proposition}[Proposition 4.1 from \cite{CKOS}]\label{4.1} Let $t\geq 2$ and let $n,n'$ be positive integers. Let $h(n,n',2)=n$ and $$ h(n,n',i)=(n-1)R(n,n,n,n,n,n,n,n',n',h(n,n',i-1))+1 $$ for an integer $i>2$. Let $v$ be a vertex of a graph $G$ and let $M$ be an induced matching of $G$ consisting of $h(n,n',t)$ edges not incident with $v$. 
If for each edge $e=xy$ in $M$, there is a chain of length at most $t$ from $\{x,y\}$ to $v$, then $G$ has an induced subgraph isomorphic to one of the following: \begin{enumerate}[(1)] \item $K_{1,n}^{(1)}$, \item the bipartite half-graph of height $n$, \item $\overline{L(K_{2,n})}$, \item a spider with $n$ legs, or \item the half split graph of height $n'$. \end{enumerate} \end{proposition} \begin{proposition}[Proposition 5.1 of \cite{CKOS}]\label{5.1} For every positive integer $n$, there exists $$ N=g(n)=4^{n-2}(n+1)+2(n-2)+1 $$ such that every prime graph having a half split graph of height at least $N$ as an induced subgraph contains a chain of length $n+1$ or an induced subgraph isomorphic to one of $H_{n,I}'$, $H_n^*$, $\overline{H^*_n}$. \end{proposition} In the flow chart below, the bold boxes denote steps which involve Ramsey's theorem. A box with no descendants indicates that the conclusion of the theorem is satisfied in that case. In this chart, the functions $f$, $h$, and $g$ are from Propositions \ref{3.1}, \ref{4.1}, and \ref{5.1} respectively. 
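As a concrete reference for the configurations named in these propositions, here is a small sketch of our own (vertex labels are ours) building adjacency lists for the bipartite half-graph $H_n$ and the thin spider with $n$ legs:

```python
def bipartite_half_graph(n):
    """H_n: a_1..a_n and b_1..b_n independent; a_i adjacent to b_j iff i <= j."""
    adj = {('a', i): set() for i in range(1, n + 1)}
    adj.update({('b', j): set() for j in range(1, n + 1)})
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            adj[('a', i)].add(('b', j))
            adj[('b', j)].add(('a', i))
    return adj

def thin_spider(n):
    """{a_i} independent, {b_j} complete, a_i adjacent to b_j iff i = j."""
    adj = {('a', i): {('b', i)} for i in range(1, n + 1)}
    for j in range(1, n + 1):
        adj[('b', j)] = {('b', k) for k in range(1, n + 1) if k != j} | {('a', j)}
    return adj
```

In $H_n$ the degree of $a_i$ is $n-i+1$ and the degree of $b_j$ is $j$, which makes the half-graph ordering visible directly in the degree sequence.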
\newpage \tikzstyle{decision} = [diamond, draw, fill=white!20, text width=4.5em, text badly centered, node distance=3cm, inner sep=0pt] \tikzstyle{block} = [rectangle, draw, fill=blue!20, text width=5em, text centered, rounded corners, minimum height=4em] \tikzstyle{line} = [draw, -latex'] \tikzstyle{cloud} = [draw, ellipse,fill=red!20, node distance=3cm, minimum height=2em] \tikzstyle{box} = [rectangle, draw, fill=white!20, text width=12em, text centered, minimum height=1em] \tikzstyle{thickbox} = [rectangle, draw, ultra thick, fill=white!20, text width=12em, text centered, minimum height=1em] \tikzstyle{widebox} = [rectangle, draw, fill=white!20, text width=20em, text centered, minimum height=1em] \tikzstyle{widethickbox} = [rectangle, draw, ultra thick, fill=white!20, text width=20em, text centered, minimum height=1em] \tikzstyle{widishbox} = [rectangle, draw, fill=white!20, text width=13em, text centered, minimum height=1em] \tikzstyle{widishthickbox} = [rectangle, draw, ultra thick, fill=white!20, text width=13em, text centered, minimum height=1em] \tikzstyle{narrowbox} = [rectangle, draw, fill=white!20, text width=5em, text centered, minimum height=1em] \tikzstyle{asterisk} = [circle, fill=white!20, text width=.05em, text centered, minimum height=.05em] \begin{tikzpicture}[node distance = 2cm, auto] \node[narrowbox](init){Start}; \node [box, below right of=init,node distance=4cm] (yes chain) {There is a chain of length $n+1$.}; \node [box, below right of=yes chain,node distance=3cm] (cor) {By Proposition \ref{cor2.3}, there is a chain of length $n$ inducing a prime subgraph.}; \node [box, below left of=init,node distance=4cm] (no chain) {There is no chain of length $n+1$.}; \node [widethickbox, below of=no chain, node distance=3cm] (ramsey) {Set $m=f(n,h(n,g(n),n),g(n))$, $N=R(m,m)$ and assume $G$ is a prime graph of size $N$. 
By Ramsey's theorem, we may assume there is an independent set of size $m$ (else work with the dual).}; \node [box, below right of=ramsey,node distance=4cm] (no half split) {There is no half split graph of height $g(n)$.}; \node [box, below left of=ramsey,node distance=4cm] (yes half split) {There is a half split graph of height $g(n)$.}; \node [box, below of=yes half split] (five point one) {Apply Proposition \ref{5.1}.}; \node [widishthickbox, below of=no half split] (three point one) {Apply Proposition \ref{3.1} with $n=n$, $n_1=h(n,g(n),n)$ and $n_2=g(n)$.}; \node [widebox, below right of=three point one,node distance=5cm] (four) {Outcome (4) of Proposition \ref{3.1}. $G$ has an induced matching with $h(n,g(n),n)$ edges. Since $G$ is prime, for every pair of points $\{x,y\}$ and every vertex $v$, there is a chain from $\{x,y\}$ to $v$. Since $G$ has no chains of length $n+1$, all such chains have length at most $n$. Therefore $G$ satisfies the hypotheses of Proposition \ref{4.1} with $n=n$, $n'=g(n)$, and $t=n$. }; \node [box, below left of=three point one,node distance=5cm] (one two or three) {Outcome (1), (2), or (3) of Proposition \ref{3.1}.}; \node [thickbox, below of=four, node distance=4cm] (four point one) {Apply Proposition \ref{4.1}.}; \path [line] (init) -- (no chain); \path [line] (yes chain) -- (cor); \path [line] (init) -- (yes chain); \path [line] (no chain) -- (ramsey); \path [line] (ramsey) -- (no half split); \path [line] (ramsey) -- (yes half split); \path [line] (no half split) -- (three point one); \path [line] (yes half split) -- (five point one); \path [line] (three point one) -- (four); \path [line] (three point one) -- (one two or three); \path [line] (four) -- (four point one); \end{tikzpicture} For the rest of the paper, given $n\geq 2$, let $N_{\ref{mainthm}}=N_{\ref{mainthm}}(n)$ be the bound obtained for Theorem \ref{mainthm} in \cite{CKOS}, that is, $N_{\ref{mainthm}}(n)=R(m,m)$ where $m=f(n,h(n,g(n),n),g(n))$. 
\begin{remark}\label{reduction} Note this proof shows the following: a prime graph $G$ with an independent set of size $m$ and no chain of length $n+1$ satisfies the conclusion of the theorem. \end{remark} \section{Tree Lemma} \setcounter{theorem}{0} \numberwithin{theorem}{section} In this section we prove a key lemma, Theorem \ref{3.5}, which allows us to improve the bounds in Theorem \ref{mainthm}. This lemma is Theorem 3.5 of \cite{MS} tailored to the specific setting of graphs. Theorem 3.5 of \cite{MS} handles arbitrary finite sets of formulas, and uses model-theoretic tools such as types and $R$-rank. The bounds there are computed in terms of several associated constants, including the VC-dimension, which was used to bound the branching of the trees. For the purposes of the present argument, we give here a streamlined proof for the special case of graphs, written with graph theorists in mind. Corollary \ref{treecormain} gives the bound in this case. We now state relevant versions of definitions and lemmas from \cite{MS}. Recall that a \emph{tree} is a partial order $(P, \trianglelefteq)$ such that for each $p\in P$, the set $\{q\in P: q\triangleleft p\}$ is well-ordered under $\trianglelefteq$. Given an integer $n\geq 2$, define \begin{align*} 2^{<n}=\bigcup_{i=0}^{n-1} \{0,1\}^i, \end{align*} where $\{0,1\}^0=\langle$ $\rangle$ is the \emph{empty string}, and for $i>0$, $\{0,1\}^i$ is the usual cartesian product. This set has a natural tree structure given by $\eta \trianglelefteq \eta'$ if and only if $\eta=\langle$ $\rangle$ or $\eta$ is an initial segment of $\eta'$. We will write $\eta \triangleleft \eta'$ to denote that $\eta \trianglelefteq \eta'$ and $\eta \neq \eta'$. Given $\eta \in \{0,1\}^i$, let $|\eta|=i$ denote the \emph{length} of $\eta$ (the length of the empty string $\langle$ $\rangle$ is $0$). A main idea in the proof of Theorem \ref{3.5} is to take a graph $G=(V,E)$, and arrange $G$ into a tree by indexing its vertex set with elements of $2^{<n}$. 
Suppose $G=(V,E)$ is a graph, and we have an indexing $V=\{a_{\eta}: \eta \in X\}$ of the vertices of $G$ by some $X\subseteq 2^{<n}$. Given $\eta \in X$, we will say the \emph{height} of $a_{\eta}$, denoted $ht(a_{\eta})$, is $|\eta|$. A \emph{branch} is a set of the form $\{a_{\eta}: \eta \in Y\}$ where $Y$ is a maximal collection of pairwise comparable elements in $X$. The \emph{length} of a branch is its cardinality. Given $\eta, \eta' \in 2^{<n}$ and elements $a_{\eta}, a_{\eta'}$ indexed by $\eta$ and $\eta'$, we say $a_{\eta}$ and $a_{\eta'}$ \emph{lie along the same branch} if $\eta \trianglelefteq \eta'$ or $\eta'\trianglelefteq \eta$. If $\eta\triangleleft \eta'$, we say $a_{\eta}$ \emph{precedes} $a_{\eta'}$. Given $\eta=\langle \eta_1,\ldots, \eta_i\rangle \in \{0,1\}^i$, set $\eta \wedge 0= \langle \eta_1,\ldots, \eta_i,0\rangle $ and $\eta \wedge 1= \langle \eta_1,\ldots, \eta_i,1\rangle $. If $x=a_{\eta\wedge 0}$ or $x=a_{\eta \wedge 1}$, then we say $a_{\eta}$ is the \emph{immediate predecessor} of $x$ and write $pred(x)=a_{\eta}$. We will also write $a_{\eta}\wedge i$ to mean $a_{\eta\wedge i}$. Given $j\in \{0,1\}$ and $i\geq 1$, let $j^i$ denote the element of $\{0,1\}^i$ which has every coordinate equal to $j$. \begin{definition} Given a graph $G=(V,E)$ on $n$ vertices and $A\subseteq 2^{<n}$, we say that an indexing $V=\{a_{\eta}: \eta \in A\}$ of $V$ by the elements of $A$ is a \emph{type tree} if for each $\eta \in A$ the following holds. \begin{itemize} \item If $\eta \wedge 0 \in A$, then $a_{\eta \wedge 0}$ is non-adjacent to $a_{\eta}$. If $\eta\wedge 1\in A$, then $a_{\eta \wedge 1}$ is adjacent to $a_{\eta}$. \item If $\eta \wedge 0$ and $\eta \wedge 1$ are both in $A$, then for all $\eta'\triangleleft \eta$, $a_{\eta \wedge 1}$ is adjacent to $a_{\eta'}$ if and only if $a_{\eta \wedge 0}$ is adjacent to $a_{\eta'}$. \end{itemize} \end{definition} This notion of type tree is a special case of the model-theoretic notion of a type tree. 
We believe for the purposes of this paper it is better to deal only with this special version for graphs. For the general definition, see \cite{classification}. \begin{lemma}\label{index} Every finite graph $G=(V,E)$ can be arranged into a type tree. \end{lemma} \begin{proof} Suppose $|V|=n$. We arrange the vertices of $G$ into a type tree indexed by a subset of $2^{<n}$. \begin{itemize} \item Stage 1: Choose any element of $G$ to be $a_{\langle \rangle}$, and set $A_0=\{a_{\langle \rangle}\}$. Set $X_1=N(a_{\langle \rangle})$ and $X_0=V\setminus (\{a_{\langle \rangle}\}\cup N(a_{\langle \rangle}))$. Note $X_1, X_0$ partition $V\setminus A_0$. \item Stage $m+1$: Suppose we've defined elements in the tree up to height $m\geq 0$ and for each $0\leq i\leq m$, $A_i$ is the set of vertices of height $i$. Suppose further that we have a collection of sets of vertices $\{X_{\eta \wedge i}: \eta \in A_m, i\in \{0,1\}\}$ which partition $V\setminus \bigcup_{i=0}^{m}A_i$ and such that for each $\eta \in A_m$, $X_{\eta\wedge 1}\subseteq N(a_{\eta})$ and $X_{\eta\wedge 0}\subseteq V\setminus (N(a_{\eta})\cup \{a_{\eta}\})$. Then for each $\eta \in A_m$ and $i\in \{0,1\}$, if $X_{\eta \wedge i}\neq \emptyset$, choose $a_{\eta\wedge i}$ to be any element of $X_{\eta \wedge i}$. Define $A_{m+1}$ to be the set of these $a_{\eta\wedge i}$. Now for each $a_{\nu}\in A_{m+1}$ and $i\in \{0,1\}$, set \begin{align*} X_{\nu \wedge 1}&=N(a_{\nu})\cap X_{\nu}\text{ and }\\ X_{\nu \wedge 0}&=(V\setminus (N(a_{\nu})\cup \{a_{\nu}\}))\cap X_{\nu}. \end{align*} By assumption, $\{X_{\nu}: \nu \in A_{m+1}\}$ is a partition of $V\setminus \bigcup_{i=0}^{m}A_i$, and by construction, for each $\nu \in A_{m+1}$, $\{X_{\nu\wedge 1}, X_{\nu\wedge 0}\}$ is a partition of $X_{\nu}\setminus A_{m+1}$. Therefore, $\{X_{\nu\wedge i}: \nu \in A_{m+1}, i\in \{0,1\}\}$ is a partition of $V\setminus \bigcup_{i=0}^{m+1}A_i$. \end{itemize} All elements of $V$ will be chosen after at most $n$ steps.
So we obtain an indexing of $V$ by a subset of $2^{<n}$ which is a type tree by construction. \end{proof} \begin{definition}\label{rankandheight} Suppose $G=(V,E)$ is a finite graph. \begin{enumerate} \item The \emph{tree rank} of $G$, denoted $t(G)$, is the largest integer $t$ such that there is a subset $V'\subseteq V$ and an indexing $V'=\{a_{\eta}:\eta \in 2^{<t}\}$ which is a type tree (i.e. $V'$ is a full binary type tree of height $t$). \item The \emph{tree height} of $G$, denoted $h(G)$, is the smallest integer $h$ such that every indexing of $V$ which is a type tree has a branch of length $h$. \end{enumerate} \end{definition} \begin{lemma}\label{indset} Suppose $t,h$ are integers, and $G=(V,E)$ is a finite graph with tree rank $t$ and tree height $h$. Then $G$ contains a complete or independent set of size at least $\max\{t, h/2\}$. \end{lemma} \begin{proof} By definition of tree rank, there is $V'\subseteq V$ and an indexing $V'=\{a_{\eta}: \eta \in 2^{<t}\}$ which is a type tree. Then by definition of a type tree, $I_1=\{a_{<>}, a_0, \ldots, a_{0^{t-1}}\}$ is an independent set of size $t$. On the other hand, by definition of tree height and Lemma \ref{index}, there is an indexing $V=\{a_{\eta}: \eta \in B\}$ of $V$ by a subset $B\subseteq 2^{<n}$ which is a type tree and which contains a branch $J$ with length $h$. Let $a_{\tau}$ be the last element of $J$ and note $ht(a_{\tau})=h-1$. If $|N(a_{\tau})\cap J|\geq \frac{|J|}{2}$, set $I_2 = N(a_{\tau})\cap J$. Otherwise set $I_2 = (V\setminus N(a_{\tau}))\cap J$. In either case, $|I_2|\geq |J|/2=h/2$. We now show that $I_2$ is complete or independent. Suppose $x$ and $y$ are elements of $I_2$. By definition of $I_2$, $a_{\tau}$ is adjacent to $x$ if and only if $a_{\tau}$ is adjacent to $y$. Note $x$ and $y$ lie along the same branch, so without loss of generality we may assume $x$ precedes $y$. By construction, $a_{\tau}$ is adjacent to $x$ if and only if $y$ is adjacent to $x$.
So if $I_2=N(a_{\tau})\cap J$, $I_2$ must be a complete set, and if $I_2=(V\setminus N(a_{\tau}))\cap J$, $I_2$ must be an independent set. We've now shown $G$ contains a complete or independent set of size $\max\{|I_1|,|I_2|\}\geq \max\{t, h/2\}$. \end{proof} \begin{definition} Suppose $G=(V,E)$ is a graph, $A\subseteq 2^{<n}$, and $V=\{a_{\eta}: \eta \in A\}$ is a type tree. \begin{enumerate} \item Given an element $a_{\eta}\in V$, we say there is a \emph{full binary tree of height $k$ below $a_{\eta}$} if the following holds. There is a set $V'\subseteq \{a_{\sigma}: \eta \trianglelefteq \sigma\}$ and a bijection $f: V'\rightarrow 2^{<k}$ with the property that $a_{\sigma}$ precedes $a_{\sigma'}$ in $V'$ if and only if $f(a_{\sigma})\triangleleft f(a_{\sigma'})$ in $2^{<k}$. \item The \emph{tree rank} of an element $a_{\eta}\in V$, denoted $t(a_{\eta})$, is the largest $k$ such that there is a full binary tree of height $k$ below $a_{\eta}$. \end{enumerate} \end{definition} \begin{theorem}\label{3.5} Suppose $n\geq 2$ is an integer and $G=(V,E)$ is a graph of size $n$. Then $$ h(G)\geq \frac{(n/t(G))^{\frac{1}{t(G)+1}}}{2}. $$ \end{theorem} \begin{proof} Suppose $A\subseteq 2^{<n}$ and $V=\{a_{\eta}: \eta \in A\}$ is a type tree. Let $h$ be the length of the longest branch in this tree, and let $t=\max\{t(a_{\eta}): \eta \in A\}$. Note $t\leq t(G)$. Given a fixed $\ell$ and $s$, set \begin{align*} Z_{\ell}^s&=\{a_{\eta}\in V: t(a_{\eta})=s, ht(a_{\eta})=\ell\}\\ X_{\ell}^s&=\{a_{\eta}\in Z_{\ell}^s: t(pred(a_{\eta}))=s\},\text{ and }\\ Y_{\ell}^s&=\{a_{\eta}\in Z_{\ell}^s: t(pred(a_{\eta}))=s+1\}. \end{align*} Let $N_{\ell}^s=|Z_{\ell}^s|$, $x_{\ell}^s=|X_{\ell}^s|$ and $y_{\ell}^s=|Y_{\ell}^s|$. Then note that for each $s$ and $\ell$, $N^s_{\ell}=x_{\ell}^s+y_{\ell}^s$, and $n=\sum_{\ell=0}^h \sum_{s=0}^{t} N_{\ell}^s$. We claim the following facts hold. \begin{enumerate}[(i)] \item For all $s\leq t$ and $\ell$, $x_{\ell+1}^s\leq N_{\ell}^s$.
\item For all $s<t$ and all $\ell$, $y^s_{\ell+1}\leq 2N_{\ell}^{s+1}$. \item For all $s<t$ and all $\ell$, $N_{\ell+1}^s\leq N_{\ell}^s+2N_{\ell}^{s+1}$. \item For all $1\leq s\leq t$, $N_0^{t-s}=0$. \item For all $\ell$, $N^t_{\ell}\leq 1$. \item For all $0\leq s\leq t$, $N_1^{t-s}\leq 2$. \end{enumerate} Item (i) holds by definition. Item (ii) follows because every element has at most $2$ successors. Item (iii) follows directly from (i), (ii) and the fact that for all $s$ and $\ell$, $N^s_{\ell}=x_{\ell}^s+y_{\ell}^s$. Item (iv) follows from the fact that the only element of height $0$ is $a_{<>}$, which has tree rank $t$. Item (v) follows from the fact that if for some $\ell$, $N_{\ell}^t \geq 2$, then we would have $t(a_{\langle \rangle})\geq t+1$. Item (vi) holds because the tree is binary, so the level at height $1$ can have at most two elements. We now show that for each $0\leq s\leq t$ and $0\leq \ell <h$, $N_{\ell+1}^{t-s}\leq (2(\ell+1))^{s}$. If $s=0$ this follows immediately from (v). Case $s=1$: We want to show for all $0\leq \ell <h$, $N_{\ell+1}^{t-1}\leq 2(\ell+1)$. The case where $\ell =0$ is done by (vi). Let $\ell> 0$ and suppose by induction $N_{\ell}^{t-1}\leq 2 \ell$. By (iii), (v) and our induction hypothesis, $$ N_{\ell+1}^{t-1}\leq N_{\ell}^{t-1}+2N_{\ell}^t \leq 2\ell+2=2(\ell+1). $$ Case $s>1$: Suppose by induction that for all $0\leq s'< s$, the following holds: for all $0\leq \ell<h$, $N_{\ell+1}^{t-s'}\leq (2(\ell+1))^{s'}$. We want to show that for all $0\leq \ell <h$, $N_{\ell+1}^{t-s}\leq (2(\ell+1))^{s}$. The case $\ell=0$ is done by (vi). Let $\ell >0$ and suppose by induction that for all $0\leq \ell'<\ell$, $N_{\ell'+1}^{t-s}\leq (2(\ell'+1))^{s}$. Then by (iii) and our induction hypothesis, $$ N_{\ell+1}^{t-s}\leq N_{\ell}^{t-s}+2N_{\ell}^{t-s+1} \leq (2\ell)^{s}+2(2\ell)^{s-1}=(2\ell)^s\Big(\frac{\ell+1}{\ell}\Big)\leq (2(\ell+1))^{s}.
$$ Therefore, for all $0\leq \ell <h$, $$ N_{\ell+1}\leq \sum_{0\leq s\leq t}N_{\ell+1}^s\leq \sum_{0\leq s\leq t}(2(\ell+1))^{s} \leq t(2(\ell+1))^{t}\leq t(2h)^t. $$ This implies that $$ n=N_0+\sum_{0\leq \ell<h}N_{\ell+1} \leq 1+\sum_{0\leq \ell<h}t(2h)^t \leq t(2h)^{t+1}. $$ Rearranging this we obtain that $$ \frac{(n/t)^{\frac{1}{t+1}}}{2}\leq h. $$ Since $t\leq t(G)$ this implies $\frac{(n/t(G))^{\frac{1}{t(G)+1}}}{2}\leq h$. This finishes the proof. \end{proof} \noindent Combining Theorem \ref{3.5} and Lemma \ref{indset} immediately implies the following. \begin{corollary}\label{treecormain} Suppose $G=(V,E)$ is a graph with tree rank $t$ and $n$ vertices. Then $G$ contains a complete or independent set of size at least $\frac{(n/t)^{\frac{1}{t+1}}}{4}$. \end{corollary} \section{Finitary proof leveraging Theorem \ref{3.5}}\label{finitary} The following is an adaptation of Proposition 3.1 of \cite{CKOS}. \begin{proposition}\label{new3.1} Suppose $G=(V,E)$ has tree rank $t\geq R(n_1, n,n,n_2)$, witnessed by $T\subseteq V$ and the indexing $T=\{a_{\eta}: \eta \in 2^{<t}\}$ which is a type tree. Then $G[T]$ contains one of the following as an induced subgraph. \begin{enumerate}[(i)] \item a thin spider with $n$ legs, \item the bipartite half-graph of height $n$, \item the disjoint union of $n_1$ copies of $K_2$, denoted by $n_1K_2$, or \item the half split graph of height $n_2$. \end{enumerate} \end{proposition} \begin{proof} Consider the sets $A=\{a_{<>}, a_0,\ldots, a_{0^{t-1}}\}$ and $B=\{a_1, a_{01}, \ldots, a_{0^{t-1}\wedge 1} \}$. Rename the elements of $A$ and $B$ so that $\langle a_{<>}, a_0,\ldots, a_{0^{t-1}}\rangle = \langle x_1,x_2,\ldots, x_t\rangle$ and $\langle a_1, a_{01}, \ldots, a_{0^{t-1}\wedge 1}\rangle =\langle y_1,y_2,\ldots, y_t\rangle$. Note that by definition of a type tree and our choice of $A$, we have the following. \begin{itemize} \item $A$ is an independent set. \item For each $i\in [t]$, $x_iy_i\in E$.
\item For each $i<j$, $x_iy_j\notin E$. \end{itemize} We now define a coloring of the edges of the complete graph with vertex set $[t]$ with colors $(a,b)\in \{0,1\}^2$. Given $i<j\in [t]$, define the color $(a,b)$ of the edge $ij$ as follows. Set $a=1$ if and only if $x_jy_i\in E$ and $b=1$ if and only if $y_iy_j\in E$. By Ramsey's theorem, there is a subset $I\subseteq [t]$ such that all edges between elements of $I$ have the same color $(a,b)$ and the following holds. \[ |I|=\begin{cases} n_1 & \text{ if }(a,b)=(0,0)\\ n & \text{ if }(a,b)=(0,1)\\ n & \text{ if }(a,b)=(1,0)\\ n_2 & \text{ if }(a,b)=(1,1) \end{cases} \] Set $Z=\{x_i: i\in I\}\cup \{y_i: i\in I\}$. Then if $(a,b)=(0,0)$, $G[Z]$ forms an induced copy of $n_1K_2$. If $(a,b)=(0,1)$, then $G[Z]$ forms an induced copy of a thin spider with $n$ legs. If $(a,b)=(1,0)$, then $G[Z]$ forms an induced copy of a bipartite half-graph of height $n$. Finally if $(a,b)=(1,1)$, then $G[Z]$ forms an induced copy of the half split graph of height $n_2$. \end{proof} \begin{remark} \vspace{.1mm} \begin{enumerate} \item In the proof of Proposition \ref{new3.1}, we could also have built our configuration over a complete set by instead taking $A=\{a_{<>}, a_1, a_{11}, \ldots, a_{1^{t-1}}\}$ and $B=\{a_0, a_{10}, \ldots, a_{1^{t-1}\wedge 0}\}$. \item If we don't care whether we build over complete or independent sets, then what Proposition \ref{new3.1} uses is the length of the longest ``straight path'' through the tree consisting of nodes with two children, which is at least the tree rank. \end{enumerate} \end{remark} \begin{corollary}\label{treecor} Suppose $G$ is a prime graph with tree rank $t\geq R(h(n,g(n),n), n,n,g(n))$. Then $G$ contains one of the following or the complement of one of the following as an induced subgraph. \begin{enumerate}[(1)] \item The $1$-subdivision of $K_{1,n}$ (denoted by $K_{1,n}^{(1)}$). \item The line graph of $K_{2,n}$ (denoted by $L(K_{2,n})$). \item The thin spider with $n$ legs.
\item The bipartite half-graph of height $n$. \item The graph $H'_{n,I}$. \item The graph $H_n^*$. \item A prime graph induced by a chain of length $n$. \end{enumerate} \end{corollary} \begin{proof} If $G$ contains a chain of length $n+1$, we are done. So assume this is not the case. Apply Proposition \ref{new3.1} with $n_1=h(n, g(n), n)$ and $n_2=g(n)$. In outcomes \ref{new3.1}.(i) and \ref{new3.1}.(ii), we are done. If $G$ contains a half split graph of height $g(n)$, apply Proposition \ref{5.1} to obtain $H'_{n,I}$ or $H_n^*$. So assume now $G$ contains no half split graph of height $g(n)$. The only possible outcome left is \ref{new3.1}.(iii), i.e., that $G$ contains an induced matching with $n_1=h(n,g(n),n)$ edges. Combining this with our assumptions that $G$ is prime, contains no chains of length $n+1$, and contains no half split graph of height $g(n)$, we have that Proposition \ref{4.1} implies $G$ contains a copy of $K^{(1)}_{1,n}$, the bipartite half-graph of height $n$, $\overline{L(K_{2,n})}$, or a spider with $n$ legs. This finishes the proof. \end{proof} \noindent We now prove Theorem \ref{mainthm} with a value for $N$ which is asymptotically much smaller than $N_{\ref{mainthm}}$. \begin{theorem} Let $n\geq 2$ and recall $$ m=f(n, h(n,g(n), n), g(n)) = 2^{R(n+h(n,g(n),n), 2n-1,n+g(n), n+g(n)-1)+1}. $$ Suppose $$ N= R(h(n,g(n),n), n,n,g(n))(5m)^{R(h(n,g(n),n), n,n,g(n))+1}, $$ and $G$ is a prime graph with at least $N$ vertices. Then the conclusion of Theorem \ref{mainthm} holds. Moreover, for large $n$, $$ N\ll R(m,m)=N_{\ref{mainthm}}. $$ \end{theorem} \begin{proof} Suppose $G$ is a prime graph with at least $N$ vertices. Suppose first that the tree rank $t=t(G)$ is at least $R(h(n,g(n),n), n,n,g(n))$. Then Corollary \ref{treecor} implies $G$ contains one of the desired configurations, so we are done. Assume now that $t\leq R(h(n,g(n),n), n,n,g(n))$.
Remark \ref{reduction} and Proposition \ref{cor2.3} imply that if $G$ contains a complete or independent set of size $m$ then the conclusion of Theorem \ref{mainthm} holds. We show $G$ contains a complete or independent set of size $m$. By Corollary \ref{treecormain}, $G$ contains a complete or independent set $I$ such that $|I|\geq \frac{(N/t)^{\frac{1}{t+1}}}{4}$, so it suffices to show that $\frac{(N/t)^{\frac{1}{t+1}}}{4}\geq m$. By definition of $N$ and our assumption on $t$, $N\geq t(5m)^{t+1}$. This implies $\frac{(N/t)^{\frac{1}{t+1}}}{4}\geq \frac{5m}{4}\geq m$. This finishes the proof that the conclusion of Theorem \ref{mainthm} holds. It remains to show that $N\ll N_{\ref{mainthm}}$. Let $x=R(h(n,g(n),n), n, n, g(n))$. Then we want to show that for large $n$, $x (5m)^{x+1}\ll R(m,m)$. Note that $x\leq \log_2 m$ and recall that by \cite{spencer1}, as long as $m\geq 2$, $R(m,m)\geq (\sqrt{2})^m$. Combining these facts, we have that the following holds for large $m$ (equivalently, for large $n$). \begin{align*} x(5m)^{x+1} \leq (\log_2m) (5m)^{2\log_2 m+1}\ll(\sqrt{2})^m\leq R(m,m). \end{align*} \end{proof} \begin{remark} The theorem uses the fact that any graph $G$ contains a complete or independent set of size $\max\{t(G), h(G)/2\}$, the inverse relationship between $t(G)$ and $h(G)$ from Theorem \ref{3.5}, and the fact that a binary type tree contains the building blocks of the desired configurations. These ingredients, i.e., Proposition \ref{new3.1}, Lemma \ref{indset}, and Theorem \ref{3.5}, hold for arbitrary graphs. \end{remark} \section{An infinitary proof}\label{infinitary} In this section we prove an analogue of Theorem \ref{mainthm} in the infinite setting, and show it implies the finite version, although without the explicit bounds. Throughout this section we work in the first-order language of graphs, $\mathcal{L}=\{E(x,y)\}$, and employ standard model-theoretic notation.
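Before continuing, here is a quick numeric illustration (our own sanity check, not part of the argument) of the comparison $x(5m)^{x+1}\ll R(m,m)$ established at the end of the previous section: using $x\leq \log_2 m$ and the lower bound $R(m,m)\geq (\sqrt{2})^m$, we compare base-$2$ logarithms to avoid astronomically large numbers.

```python
import math

# Numeric sanity check (illustrative only) of x*(5m)^(x+1) << R(m,m):
# with x <= log2(m), the quantity is bounded by (log2 m)*(5m)^(2*log2(m)+1),
# while R(m,m) >= sqrt(2)^m, whose base-2 logarithm is m/2.

def log2_upper(m):
    """log2 of (log2 m)*(5m)^(2*log2(m)+1), which bounds x*(5m)^(x+1)."""
    lg = math.log2(m)
    return math.log2(lg) + (2 * lg + 1) * math.log2(5 * m)

def log2_ramsey_lower(m):
    """log2 of sqrt(2)^m."""
    return m / 2

for m in (2 ** 10, 2 ** 16, 2 ** 20):
    assert log2_upper(m) < log2_ramsey_lower(m)
```

Already at $m=2^{10}$ the left-hand side has base-$2$ logarithm below $300$ while the right-hand side has base-$2$ logarithm $512$, and the gap widens rapidly as $m$ grows.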
Given sets $A$ and $B$, we will write $AB$ as shorthand for $A\cup B$, and given a tuple of elements $\bar{a}$, we will often write $\bar{a}$ to mean the set of elements in the tuple. The following proposition is proved in \cite{CKOS} in the setting of finite graphs, but the proof presented there also holds in the setting of infinite graphs. Given an integer $n$, we will write $R(n)$ to mean $R(n,n)$. \begin{proposition}[Proposition 2.1 in \cite{CKOS}]\label{chainsandmodules} Suppose $G$ is a graph and $I\subseteq V(G)$ is a set with at least two vertices, and suppose $v\in V(G)\setminus I$. Then $G$ has a chain from $I$ to $v$ if and only if all modules containing $I$ as a subset contain $v$. \end{proposition} \noindent A useful and straightforward corollary of this is the following. \begin{corollary}\label{chainsandprimeness} A graph $G=(V,E)$ is prime if and only if for every set of pairwise distinct vertices $\{x_1,x_2, x_3\}\subseteq V$, there is a chain from $\{x_1,x_2\}$ to $x_3$ in $G$. \end{corollary} \begin{proof} Suppose $G=(V,E)$ is a prime graph and $x_1,x_2, x_3\in V$ are pairwise distinct vertices. Suppose there is no chain from $\{x_1,x_2\}$ to $x_3$. Then by Proposition \ref{chainsandmodules}, there is a module $I$ containing $\{x_1,x_2\}$ as a subset and not containing $x_3$. But now $I$ is a nontrivial module, contradicting that $G$ is prime. Conversely, suppose for every set $\{x_1,x_2, x_3\}\subseteq V$ of pairwise distinct vertices, there is a chain from $\{x_1,x_2\}$ to $x_3$ in $G$. We show that any module $I$ in $G$ is either a singleton or all of $V$. Suppose by contradiction $I$ is a module which is neither a singleton, nor all of $V$. Then there are $x_1\neq x_2\in I$ and $x_3\in V\setminus I$. By assumption there is a chain from $\{x_1,x_2\}$ to $x_3$, so Proposition \ref{chainsandmodules} implies that every module containing $\{x_1,x_2\}$ also contains $x_3$. In particular, $x_3\in I$, a contradiction.
\end{proof} \begin{definition} Fix an integer $n\geq 1$. \begin{enumerate}[(1)] \item Let $\phi_n(x,y,z)$ be the formula saying that there exists a chain of length at most $n$ from $\{x,y\}$ to $z$. \item Let $\psi_n$ be the sentence saying that for any pairwise distinct $x_1,x_2,x_3$, there is a chain of length at most $n$ from $\{x_1,x_2\}$ to $x_3$, i.e. the sentence $$ \forall x_1x_2x_3 \Bigg(\Bigg(\bigwedge_{1\leq i\neq j\leq 3}x_i\neq x_j \Bigg)\rightarrow \phi_n(x_1,x_2,x_3)\Bigg). $$ \item Let $\sigma_n$ be the sentence saying that there exists a copy of $H_n$ or a copy of $\overline{H_n}$ as an induced subgraph. \item Let $\theta_n$ be the sentence saying there exists a copy of $H_{n,I}'$, $H_n^*$ or $\overline{H_n^*}$. \item Let $\rho_n$ be the sentence which says that one of the following or the complement of one of the following appears as an induced subgraph: $K^{(1)}_{1,n}$, $L(K_{2,n})$, a spider with $n$ legs. \end{enumerate} \end{definition} Given $k\geq 1$, we will call a graph $G$ $k$-edge-stable if $G$ omits all half-graphs of height $k$. We will call $G$ edge-stable when it is $k$-edge-stable for some $k$ (equivalently, when its edge relation is a stable formula). Call a subset $I$ of $G$ \emph{edge indiscernible} if it is indiscernible with respect to the edge relation. We remark that Proposition \ref{5.1} applies in the case of an infinite prime graph as well as a finite one, via exactly the same proof as in \cite{CKOS}. Given a formula $\phi$, we let $\phi^1=\phi$ and $\phi^0=\neg \phi$. We now recall a definition and claim from \cite{MS}. \begin{definition} Given $\ell\geq 2$, let $\Delta_{\ell}=\{E(x_0,x_1)\}\cup \{\phi^i_{\ell, m}: m\leq \ell, i\in \{0,1\}\}$, where $$ \phi^i_{\ell, m}=\phi_{\ell, m}^i(x_0,\ldots, x_{\ell-1})=\exists y\Bigg(\bigwedge_{j<m}E(x_j,y)^{i} \wedge \bigwedge_{m\leq j< \ell}E(x_j,y)^{1-i}\Bigg).
$$ \end{definition} \begin{claim}[Claim 3.2 of \cite{MS}]\label{MS3.2} Suppose $G$ is an $\ell$-edge-stable graph. Suppose $\alpha\geq 4\ell$ and $\langle a_i: i<\alpha \rangle$ is a $\Delta_{\ell}$-indiscernible sequence in $G$, and $b\in G$. Then either $|\{i: E(a_i, b)\}|<2\ell$ or $|\{i: \neg E(a_i, b)\}|<2\ell$. \end{claim} \begin{proposition}\label{infinite3.1} For any integer $n\geq 1$, any infinite graph satisfying $\psi_n\wedge \neg \sigma_n\wedge \neg \theta_{n}$ is prime, edge-stable, and contains one of the following or the complement of one of the following as an induced subgraph. \begin{enumerate}[(1)] \item A spider with $\omega$ many legs, \item $L(K_{2,\omega})$, \item A perfect matching of length $\omega$. \end{enumerate} \end{proposition} \begin{proof} Since $G\models \psi_n$, Corollary \ref{chainsandprimeness} implies $G$ is prime. Set $\ell=R(R(g(n)))$. We show $G$ is $\ell$-edge-stable. Suppose by contradiction $G$ contains a half-graph $a_1b_1,\ldots, a_{\ell}b_{\ell}$ so that $E(a_i,b_j)$ if and only if $i\leq j$. By Ramsey's theorem, there is a complete or independent set $A\subseteq \{a_1,\ldots, a_{\ell}\}$ such that $|A|=R(g(n))$. By reindexing, assume $A=\{a_1,\ldots, a_{R(g(n))}\}$. Applying Ramsey's theorem again, we have that there is a complete or independent set $B'\subseteq\{b_1,\ldots, b_{R(g(n))}\}$ such that $|B'|=g(n)$. By reindexing, assume $B'=\{b_1,\ldots, b_{g(n)}\}$. Then $a_1b_1,\ldots,a_{g(n)}b_{g(n)}$ forms an induced copy of $H_{g(n)}$, $\overline{H_{g(n)}}$, or a half split graph of height $g(n)$. Since $G\models \neg \sigma_n$, it must contain a half split graph of height $g(n)$. By Proposition \ref{5.1}, $G$ contains an induced copy of $H_{n,I}'$, $H_n^*$, or $\overline{H_n^*}$, contradicting that $G\models \neg \theta_n$. Therefore $G$ is $\ell$-edge-stable. By Ramsey's theorem there is an infinite $\Delta_{\ell}$-indiscernible sequence $I=\{c_i: i<\omega\}$ in $G$. Note $I$ is a complete or independent set.
Without loss of generality, assume it is independent (otherwise we obtain the complements of everything that follows). Claim \ref{MS3.2} implies that for all $b\notin I$, either $|\{c_i: E(b,c_i)\}|\leq 2\ell$ or $|\{c_i: \neg E(b,c_i)\}|\leq 2\ell$. Given $b\notin I$, set \[ f(b)=\begin{cases} 1&\text{ if }|\{c_i: E(b,c_i)\}|\leq 2\ell \\ 0&\text{ if }|\{c_i: \neg E(b,c_i)\}|\leq 2\ell, \end{cases} \] and set $S_b=\{c_i: E(b,c_i)^{f(b)}\}$. We construct two sequences $J_1=\{a_i: i<\omega\}$ and $J_2=\{b_i: i<\omega\}$ along with a sequence of sets $\{A_i: i<\omega\}$ with the following properties. \begin{enumerate}[$\bullet$] \item For each $k<\omega$, $b_k\notin Ib_1\ldots b_{k-1}$ and $a_k\in S_{b_k}$, \item for each $i,j< \omega$, $E(b_i,a_j)^{f(b_i)} \Leftrightarrow i= j$, \item $I\supseteq A_1\supseteq A_2 \supseteq \ldots$ and for each $k<\omega$, $|A_k|=\omega$, \item For each $j\leq k<\omega$, $A_k\cap S_{b_j}=\emptyset$. \end{enumerate} Step $0$: Since $I$ is not a module, there is a vertex $b_1$ which is mixed on $I$. Note that since $I$ is edge-indiscernible, we must have that $b_1\notin I$. Choose $a_1\in S_{b_1}$ and set $A_1=I\setminus a_1S_{b_1}$. Note that since $|I|=\omega$ and $a_1S_{b_1}$ is finite, $|A_1|=\omega$. Step $k$: Suppose now we've constructed $b_1a_1,\ldots, b_{k-1}a_{k-1}$, and $A_1,\ldots, A_{k-1}$ satisfying the desired hypotheses. Since $A_{k-1}$ is not a module, there is $b_k$ which is mixed on $A_{k-1}$. In other words, $A_{k-1}\cap S_{b_k}\neq \emptyset$. Since $I$ is edge-indiscernible, $b_k$ is not in $I$. For each $j<k$, $A_{k-1}\cap S_{b_j}=\emptyset$ implies $b_j$ is not mixed on $A_{k-1}$. Therefore $b_k \notin \{b_1,\ldots, b_{k-1}\}$. Choose $a_k\in S_{b_k}\cap A_{k-1}$ and set $A_k= A_{k-1}\setminus a_kS_{b_k}$. Note that by our induction hypothesis, $|A_{k-1}|=\omega$ and by definition $a_kS_{b_k}$ is finite, so $|A_k|=\omega$. This completes the construction.
By Ramsey's theorem, there are infinite subsequences $I_1=(a'_i)_{i<\omega} \subseteq (a_i)_{i<\omega}$ and $I_2=(b'_i)_{i<\omega} \subseteq (b_i)_{i<\omega}$ such that $I_1I_2=(a'_ib'_i)_{i<\omega}$ is edge-indiscernible. If $I_2$ is a complete set and $f(b_1')=0$, then $I_1I_2$ is a thick spider with $\omega$ many legs. If $I_2$ is a complete set and $f(b_1')=1$, then $I_1I_2$ is a thin spider with $\omega$ many legs. If $I_2$ is an independent set and $f(b'_1)=0$, then $I_1I_2$ forms a copy of $\overline{L(K_{2,\omega})}$. Therefore we are left with the case when $I_2$ is an independent set and $f(b_1')=1$. In this case $I_1I_2$ forms a perfect matching of length $\omega$. \end{proof} \noindent The following argument is an infinitary version of the argument used to prove Proposition \ref{4.1} in \cite{CKOS}. \begin{proposition}\label{infinite4.1} Suppose $G$ is an infinite, prime, edge-stable graph satisfying $\psi_n$ and suppose $M$ is an infinite perfect matching in $G$. Then $G$ contains one of the following or the complement of one of the following as an induced subgraph. \begin{enumerate}[(1)] \item $K^{(1)}_{1,\omega}$, \item $L(K_{2,\omega})$, \item A spider with $\omega$-many legs. \end{enumerate} \end{proposition} \begin{proof} Suppose $G$ is an infinite, prime, edge-stable graph satisfying $\psi_n$ and suppose $M$ is an infinite perfect matching in $G$. Since $M$ is not prime and $G$ is, $V(G)\setminus V(M)\neq \emptyset$. Since $G$ is prime and satisfies $\psi_n$, Corollary \ref{chainsandprimeness} implies that for every $v\in V(G)\setminus V(M)$ there is an integer $t(v)\leq n$ such that there is a chain of length less than or equal to $t(v)$ from $e$ to $v$ for infinitely many $e\in M$. Set $t=t(M)=\min \{t(v): v\in V(G)\setminus V(M)\}$. We show by induction on $2\leq t\leq n$ that the conclusion of the proposition is true.
Fix $v\in V$ such that $t(v)=t$ and an infinite $M'\subseteq M$ such that there is a chain of length at most $t$ from $e$ to $v$ for every $e\in M'$. Suppose first that $t=2$. Then $vM'$ is isomorphic to $K^{(1)}_{1,\omega}$ and we are done. Assume now $2<t\leq n$ and suppose by induction that for all $2\leq t'<t$, if $G$ contains an infinite perfect matching $M''$ with $t(M'')=t'$, then the conclusion of the proposition holds. Enumerate $M'=\{x_iy_i: i<\omega\}$ and delete the edges $e\in M'$ on which $v$ is mixed. Since $t>2$, we have deleted only finitely many elements of $M'$. For each $i<\omega$ choose a chain $C_{x_iy_i}=\{v_0,v_1,\ldots, v_n\}$ from $x_iy_i$ to $v$ (so $\{x_i,y_i\}=\{v_0,v_1\}$). Set $z_i=v_2$. Note by assumption, $v$ is not mixed on any $x_iy_i$, so $z_i\neq v$, and since $M'$ is a matching, $z_i\notin V(M')$. By Ramsey's theorem, the sequence $(x_iy_iz_i)_{i<\omega}$ contains an infinite indiscernible sequence $(x_i'y_i'z_i')_{i<\omega}$. Since $t>2$, we must have that for each $i<\omega$, $z_i'$ is not mixed on $x_j'y_j'$ for all $j\neq i$, so in particular, $E(z_1', x_2')\equiv E(z_1', y_2')$. Since $G$ is edge-stable, we have that $E(z_1',x_2') \equiv E(z_2',x_1')$ and $E(z_1',y_2') \equiv E(z_2',y_1')$. Combining all of this, we have $$ E(z_2',x_1')\equiv E(z_1',x_2')\equiv E(z_1',y_2') \equiv E(z_2',y_1'). $$ By relabeling if necessary, we may assume $E(z_1',y_1')$ and $\neg E(z_1',x_1')$. By indiscernibility and our assumptions, the type of $(x_i'y_i'z_i')_{i<\omega}$ depends only on $E(z'_1,x_2')$ and $E(z'_1,z_2')$. Suppose first that $E(z_1',z_2')$, so $(z_i')_{i<\omega}$ is a complete set. If $E(z_1',x_2')$, then $(z'_i,x_i')_{i<\omega}$ is a thick spider with $\omega$ many legs. If $\neg E(z_1',x_2')$, then $(z'_i,y'_i)_{i<\omega}$ is a thin spider with $\omega$ many legs. Suppose now that $\neg E(z_1',z_2')$, so $(z_i')_{i<\omega}$ is an independent set.
If $E(z_1',x_2')$, then $(z_i', x_i')_{i<\omega}$ is a copy of $\overline{L(K_{2,\omega})}$. If $\neg E(z_1',x_2')$, then $M'':=(z_i',y_i')_{i<\omega}$ is an infinite perfect matching. In this case, we now have that for each $i<\omega$, $C_{x'_iy_i'}\setminus \{x_i'\}$ is a chain of length at most $t-1$ from $\{z_i',y_i'\}$ to $v$, that is, $t(M'')\leq t-1$. By our induction hypothesis, $G$ satisfies the conclusion of the proposition. \end{proof} \noindent We now prove a version of Theorem \ref{mainthm} for infinite graphs, then use it to prove Theorem \ref{mainthm}. \begin{theorem} An infinite prime graph $G$ contains one of the following. \begin{enumerate} \item Copies of $H_n$, $\overline{H_n}$, $H_n^*$, $\overline{H_n^*}$, $H'_{n,I}$, or $\overline{H'_{n,I}}$ for arbitrarily large finite $n$, \item Prime graphs induced by arbitrarily long finite chains, \item $K^{(1)}_{1,\omega}$ or its complement, \item $L(K_{2,\omega})$ or its complement, \item A spider with $\omega$ many legs. \end{enumerate} \end{theorem} \begin{proof} Suppose $G$ is an infinite prime graph which fails 1 and 2. Since $G$ is prime but fails 2, Proposition \ref{cor2.3} implies $G$ does not contain arbitrarily long finite chains. Thus there is $n_1\in \mathbb{N}$ such that $G\models \psi_{n_1}$. Since $G$ fails 1, there is $n_2$ such that $G$ contains no copy of $H_{n_2}$, $\overline{H_{n_2}}$, $H_{n_2}^*$, $\overline{H_{n_2}^*}$, or $H'_{n_2,I}$. Let $n_3=\max\{n_1,n_2\}$; then $G$ is prime and satisfies $\psi_{n_3}\wedge \neg \sigma_{n_3} \wedge \neg \theta_{n_3}$. Applying Proposition \ref{infinite3.1}, we have that either $G$ satisfies 4 or 5, or $G$ contains an induced perfect matching of length $\omega$. If $G$ contains an induced perfect matching of length $\omega$, Proposition \ref{infinite4.1} implies $G$ satisfies 3, 4, or 5. \end{proof} {\bf Proof of Theorem \ref{mainthm}.} Fix $n\geq 1$. By definition, any finite prime graph $G$ satisfying $\sigma_n$ or $\theta_{n}$ contains one of the desired configurations.
If a finite prime graph $G$ of size at least $3$ satisfies $\neg \psi_n$, then $G$ contains three distinct points $x,y,z$ such that there is no chain of length less than or equal to $n$ from $\{x,y\}$ to $z$. Corollary \ref{chainsandprimeness} implies that there is some chain from $\{x,y\}$ to $z$. Therefore there is a chain $v_0,\ldots, v_t$ of length $t\geq n+1$ from $\{x,y\}$ to $z$. Since initial segments of chains are chains, $v_0,\ldots, v_{n+1}$ is a chain of length $n+1$. By Proposition \ref{cor2.3}, $G$ contains a chain of length $n$ inducing a prime subgraph. So if $G$ has size at least $3$ and satisfies $\sigma_n\vee \theta_n\vee \neg \psi_n$, we are done. We now show there is $N$ such that any finite prime graph of size at least $N$ satisfying $\neg \sigma_n\wedge \neg \theta_{n}\wedge \psi_n$ must also satisfy $\rho_n$. This combined with the above finishes the proof. Suppose by contradiction that no such $N$ exists. Then there are arbitrarily large finite graphs which satisfy $\neg \sigma_n\wedge \neg \theta_{n}\wedge \psi_n\wedge \neg \rho_n$, so by compactness there is an infinite graph $G$ satisfying $\neg \sigma_n\wedge \neg \theta_{n}\wedge \psi_n\wedge \neg \rho_n$. By Proposition \ref{infinite3.1}, $G$ is edge-stable and contains, up to complementation, a spider with $\omega$ many legs, $L(K_{2,\omega})$, or an infinite perfect matching. In the first two cases $G\models \rho_n$ immediately, and in the last case Proposition \ref{infinite4.1} implies $G\models \rho_n$. Either way, this contradicts $G\models \neg \rho_n$. \qed
\section{Introduction} \label{sec:Introduction} Fashion plays an important role in society. People use fashion as a way of expressing individuality, style, culture, wealth, and status \cite{chao2009framework}. The e-commerce fashion industry is expected to grow worldwide from a \$481 billion USD revenue market in 2018 to \$712 billion USD by 2022\footnote{http://www.shopify.com/enterprise/ecommerce-fashion-industry, (accessed on 2018-07-17)}. This shows the increasing demand for online apparel shopping and motivates businesses to build more advanced recommendation systems. Many online retailers have also started incorporating advanced recommendation systems to tackle the sophisticated fashion recommendation problem, such as StitchFix\footnote{http://www.stitchfix.com/}, asos\footnote{http://www.asos.com/} and Amazon Fashion\footnote{http://www.amazon.com/amazon-fashion}. This enormous e-commerce market has attracted researchers' attention in the artificial intelligence, computer vision, multimedia, and recommendation system communities \cite{mckinsey}. Much research has been done using computational techniques to solve problems in fashion, e-commerce in particular. One of the most common lines of research focuses on recommending single fashion items to consumers based on their purchase or browsing history.
The most notable work is by Kang et al., who develop a neural network that learns users' preferences toward fashion products based on visual information through Bayesian Personalized Ranking \cite{kang2017visually}.\footnote{http://wwd.com/business-news/business-features/jill-standish-think-tank-1202941433/} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{fig/outfits_v2.pdf} \caption{Examples of compatible and incompatible outfits.} \label{compatible} \end{figure} However, fashion recommendation is unique compared to other domains, not only due to its heavily visual nature, but also because the concept of compatibility is more crucial than in other types of products. People are often interested in purchasing items that match well together and compose a stylish outfit. Traditionally, fashion recommendation systems rely on co-purchase and co-click histories and recommend items based on similarity and user reviews. Modeling outfit compatibility, however, requires going beyond retrieving similar items to developing a model that understands the notion of ``compatibility'' \cite{DBLP:conf/cikm/WanWLBM18}. Modeling compatibility is challenging because the semantics that determine what is compatible and stylish are extremely complex, and many factors such as color, cut, pattern, texture, style, culture, and personal taste play a role in what people perceive as compatible and fashionable. Developing an artificial intelligence algorithm that can learn the compatibility of items would considerably improve the quality of many fashion recommendation systems. Such systems will help customers decide what to wear every day, alleviating a tedious task for non-experts in fashion. Nonetheless, recommending compatible items that form a fashion outfit involves several challenges. First of all, item co-occurrence relationships are extremely sparse, so a collaborative approach is difficult given the nature of such data; leveraging the content of items effectively is preferred.
Secondly, compatibility is a very different concept from \emph{similarity}. Simply retrieving items that are similar to each other is not enough to form fashion outfits. Often, two items fit perfectly together in a fashion outfit even though, individually, they are visually very different. Thirdly, the number of items in fashion outfits varies. Most models assume a fixed number of objects or a fixed input dimensionality; a model that can handle a varying number of items as input is desired. While visual features are commonly used in fashion recommendation, existing works on compatibility prediction have three main limitations: \begin{enumerate} \item They can only determine the compatibility of a pair of items and fail to work on outfits with an arbitrary number of items. Example methods with this limitation are \cite{vasileva2018learning,song2017neurostylist,veit2015learning}. \item They need category labels (e.g., shirt, shoes) and rich attributes (e.g., floral, casual) in order to determine compatibility and will not work if such information is not available \cite{hsiao2017creating,li2017mining,han2017learning}. \item They require a fixed order or fixed number of items to determine the compatibility of an outfit. For example, Han et al. \cite{han2017learning} proposed a method for compatibility learning that requires items in all outfits to be carefully ordered from top to bottom and then accessories. \end{enumerate} These limitations narrow the application of current methods. For example, many online retailers may lack detailed descriptions or have noisy labels for fashion items. In addition, for items showcased at brick-and-mortar stores, detailed descriptions are often not printed on item tags. To address the above challenges and limitations, in this paper we use the visual information of items to model fashion compatibility and optimize content-based learning.
Based on the concept of Relational Networks \cite{santoro2017simple}, we then build two neural networks, FashionRN and FashionRN-VSE, that learn the relation between every pair of items and accept a varying number of items as input. Both FashionRN and FashionRN-VSE learn visual/textual relations between the items of an outfit and use these relations to determine the compatibility of the outfit. The intuition behind using Relational Networks is that we can consider an outfit as a scene and the items of the outfit as objects in the scene. We are interested in learning a certain type of relation, namely compatibility. To show the effectiveness of the proposed FashionRN and FashionRN-VSE, we evaluate our models on a collected Polyvore dataset consisting of 49,740 unique fashion outfits. Through empirical experiments, we show that our proposed models perform well both quantitatively and qualitatively. \underline{Quantitative results.} We design two evaluation tasks, fashion outfit compatibility prediction and fill-in-the-blank, and compare the performance of FashionRN and FashionRN-VSE with other state-of-the-art models. We show that FashionRN-VSE achieves an Area Under Curve (AUC) of 88\% in the compatibility prediction task, compared to 72\% for the second-best method, Bi-LSTM with VSE. Furthermore, FashionRN-VSE achieves an accuracy of 58\% in the fill-in-the-blank task, compared to 35\% for the second best, SiameseNet. \underline{Qualitative results.} Besides learning the compatibility of a given outfit, FashionRN and FashionRN-VSE can also generate item embeddings from the hidden layer. Through visualization of the learned item embeddings, we show that items that naturally go together in an outfit are closer to each other in the FashionRN embedding space.
Comparing with the visualization of the same items in the embedding generated by a current state-of-the-art CNN model, DenseNet, we see that the DenseNet embedding places items that are visually similar (e.g., in colors and shapes) but not necessarily compatible close to each other in the embedding space. This shows that the embedding learned by FashionRN captures the underlying compatibility in addition to visual similarity. Our contributions are: \begin{enumerate} \item We developed FashionRN and FashionRN-VSE, a new compatibility learning framework based on Relational Networks \cite{santoro2017simple}. Our approach is independent of the number of items and the order of items, and does not need semantic information or category labels. \item We compared FashionRN and FashionRN-VSE to other state-of-the-art methods in the compatibility prediction and Fill In The Blank (FITB) tasks. We show that FashionRN reaches 112.5\% and 148.5\% of the second-best performance in the two tasks, respectively, while FashionRN-VSE reaches 122.2\% and 165.7\%, respectively. \item Through visualization, we find that the item embedding learned by FashionRN captures the underlying compatibility among fashion items well, when compared to CNN models such as DenseNet that focus on visual similarity. \end{enumerate} The remainder of this paper is organized as follows. Section \ref{relatedwork} reviews the related work. Section \ref{sec:methodology} describes our methodology. Section \ref{sec:experimental} presents our quantitative experimental results, followed by our qualitative results in Section \ref{sec:qualitative}. We finally conclude this work in Section \ref{sec:conclusion}. \section{Related Work} \label{relatedwork} In this section, we review the literature related to this work, namely fashion recommendation and relational networks. \subsection{Fashion Recommendation} There is a growing body of literature on fashion recommendation.
Most of the available fashion recommendation systems use keyword search \cite{vaccaro2016elements}, purchase histories \cite{wang2011utilizing}, and user ratings \cite{kang2017visually,qian2014personalized} to recommend items. These methods do not consider the visual appearance of items, which is a key feature in fashion. To address this limitation, several research groups have worked on incorporating visual information into fashion recommendation systems, mainly with the purpose of recommending similar items to an image query \cite{jing2015visual, he2016fashionista,chao2009framework,tautkute2018deepstyle,DBLP:conf/iccv/RenSLMF17}, and recommending aesthetics based on personal preferences \cite{DBLP:conf/cikm/DengCFNY17,DBLP:conf/cikm/SkopalPKGL17}. Similarity-based fashion recommendation systems are useful for finding substitutes for an item (e.g., finding a shirt with the same style but a different brand or price) or matching street images to online products \cite{chao2009framework,hadi2015buy}. However, users are often interested in searching for items from different categories that are compatible and in harmony. This requires going beyond similarity-based methods and modeling more complex concepts such as compatibility and aesthetics. Most humans are adept at detecting whether an outfit looks compatible or whether something is "off" simply by looking at it. For example, even though compatibility is a subjective concept, most people would agree that the outfits shown in Figure \ref{compatible} are all well composed and stylish. Research has shown that computer vision and artificial intelligence algorithms are also able, to some extent, to learn the notion of compatibility \cite{song2017neurostylist,veit2015learning,han2017learning,he2018fashionnet}. For example, Iwata et al. used a topic model to find matching tops (e.g., shirts) for bottoms (e.g., jeans) using a small human-annotated dataset collected from magazines \cite{iwata2011fashion}. Veit et al.
\cite{veit2015learning} used images of co-purchased items from an Amazon dataset to train a Siamese neural network \cite{hadsell2006dimensionality} for predicting compatibility between pairs of items. Song et al. showed that integrating visual and contextual information can improve compatibility prediction \cite{song2017neurostylist}. To exploit the pair-wise compatibility between tops and bottoms, they learned a latent compatibility space by employing a dual autoencoder network \cite{ngiam2011multimodal} and a Bayesian Personalized Ranking (BPR) framework \cite{rendle2009bpr}. Lin et al. developed a model that is not only capable of matching tops with bottoms, but is also able to generate a sentence for each recommendation to explain why they match \cite{lin2018explainable}. Instead of a dual autoencoder network, they used a mutual attention mechanism to model compatibility and a cross-modality attention module to learn the transformation between the visual and textual space for generating a sentence as a comment. Vasileva et al. \cite{vasileva2018learning} extended the state of the art in compatibility learning by answering novel queries such as finding a set of tops that can substitute for a particular top in an outfit (high compatibility) while being very different from it (low similarity). To do this, they jointly learned two embedding spaces, one for item similarity and the other for item compatibility. All of the aforementioned methods are pair-wise and focus on learning compatibility between "tops" and "bottoms". These methods fail to consider an entire outfit with an arbitrary number of items. To address this limitation, Han et al. \cite{han2017learning} and Jiang et al. \cite{jiang2018ask} considered an outfit as a sequence (from top to bottom and then accessories) and each item in the outfit as a time step \cite{han2017learning}.
They trained a bidirectional LSTM (Bi-LSTM) model to sequentially predict the next item conditioned on the previous ones, learning their compatibility. They used attribute and category information as a regularization for training their model. Treating outfits as a sequence and using an LSTM-based model does not respect the fact that sets are order invariant. Consequently, it requires careful sorting of the items in all outfits in a consistent order based on their category labels. Otherwise, a compatible top-bottom pair may be detected as incompatible if one changes the order to bottom-top. Li et al. developed a model that considers outfits as order-less sets. Given a collection of fashion items, their method can predict the popularity of a set by incorporating images, titles, and category labels \cite{li2017mining}. In a recent work, Hsiao and Grauman \cite{hsiao2017creating} proposed an unsupervised compatibility learning framework which uses the textual attributes of items. The researchers employed a Correlated Topic Model (CTM) \cite{blei2006correlated} from text analysis to learn compatibility. They considered an outfit as a document, visual attributes (e.g., floral, chiffon) as words, and style as a topic. Their model learns the composition of attributes that characterizes a style. For example, a formal blazer is more likely to be combined with a pair of jeans than with a floral legging. While a fair number of studies are available on compatibility prediction, existing methods are mostly pair-wise, and the few studies that consider an entire outfit \cite{hsiao2017creating,han2017learning} are either not order invariant with respect to the items in an outfit \cite{han2017learning}, or require rich contextual data including explicit category labels, whether extracted from item descriptions or human annotated \cite{hsiao2017creating}.
Hence, in our work, we explore a new visual compatibility learning framework that considers an entire outfit with an arbitrary number of items in an arbitrary order. Our model can work without category labels or semantic attributes. \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{fig/fashionRN.pdf} \caption{Model design of FashionRN.} \label{diagram} \end{figure*} \subsection{Relational Networks} Many factors such as style, texture, material, and color contribute to compatibility, and the relation between these factors is non-linear. In this work, we develop FashionRN by modifying a Relational Network (RN) to learn a non-linear space that can predict the compatibility of an outfit. Previous findings suggest that relational reasoning is "baked" into RNs, similar to the way learning sequential dependencies is built into recurrent neural networks \cite{santoro2017simple}. Different variations of RNs have been successfully applied to answering semantic questions about dynamic physical systems. For example, Santoro et al. modified an RN architecture and showed that, given an image of a scene, an RN combined with an LSTM can answer relational questions such as "\textit{Are there any rubber things with the same size as the yellow cylinder?}" The input to an RN is a set of objects, but the definition of an object is flexible and not specified. For example, Santoro et al. used a CNN to convert images of physical systems into $k$ feature maps of size $d \times d$ \cite{santoro2017simple}. They then considered each row of the feature map as an object. Therefore, in their work, an object could be a part of the background, a particular physical object, or a texture. The object-object relation in their work was question dependent. Thus, their RN architecture was conditioned on a question embedded by an LSTM. Each pair of objects was concatenated with the question embedding before going into the RN.
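In its general form, an RN applies a shared relation function to every pair of objects and aggregates the results. Following Santoro et al. \cite{santoro2017simple}, for a set of objects $O = \lbrace o_1, \ldots, o_n \rbrace$, \begin{align} \mathrm{RN}(O) = f_{\phi} \Big( \sum_{i,j} g_{\theta}(o_i, o_j) \Big), \end{align} \noindent where $g_{\theta}$ computes the relation between a pair of objects, $f_{\phi}$ maps the aggregated relations to the final output, and both are typically implemented as MLPs.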
The intuition behind our approach is that humans do not need to know the textual descriptions of the items in an outfit (see Figure \ref{compatible}) or their category labels in order to know if it looks compatible. Humans can detect compatibility in a visual scene by looking at it. In fact, many of the textual attributes (e.g., floral, shirt, casual) can be implicitly learned from visual information. Moreover, sets are order invariant. For example, humans do not need to see the items of an outfit in a specific order (e.g., always seeing pants before seeing shirts) in order to detect their compatibility. Therefore, in this work we model similar intelligence by developing a compatibility learning method that is based on visual information and does not require labeling clothing attributes or feeding items in a fixed order. Our network, FashionRN, is based on Relational Networks (RNs), which are designed for relational reasoning \cite{raposo2017discovering}. Santoro et al. successfully applied RNs to text-based question answering about scenes \cite{santoro2017simple}. We consider compatibility as a particular type of relation and develop an RN-inspired architecture that can learn the compatibility between the items in an outfit. \section{Compatibility Learning with Relational Network} \label{sec:methodology} In this section, we propose our model, FashionRN, which learns the compatibility among the fashion items in an outfit. We also propose its variant, FashionRN-VSE. For ease of understanding, we summarize the symbols used in this paper in Table \ref{table:symbol_definition}. \begin{table}[htb!]
\caption{Symbol definition.} \label{table:symbol_definition} \begin{tabular}{@{}cl@{}} \toprule \textbf{Symbol} & \textbf{Definition} \\ \midrule $\mathcal{D}$ & Dataset \\ $\mathcal{I}$ & Item set \\ $\Phi$ & CNN model \\ $x$ & High-dimensional visual features \\ $v$ & Low-dimensional visual features \\ $d$ & Textual embedding \\ $h$ & Relation embedding \\ $f, g$ & Fully-connected layers \\ \bottomrule \end{tabular} \end{table} \subsection{Problem Formulation and Model Intuition} We assume the compatibility of a fashion outfit to be based on the relation among all of the items included in the outfit. To learn the compatibility of fashion outfits, we formulate our problem as a binary classification problem. Let $S = \lbrace i_1, i_2, \ldots, i_n \rbrace$ be a fashion outfit, where each $i \in \mathcal{I}$ is an item in this set, and let the dataset be $\mathcal{D} = \lbrace S \rbrace$. Given an outfit $S$, the task is to predict whether it is a compatible fashion outfit or not. The learning of fashion outfit compatibility can be thought of as follows. For a fashion outfit, we measure the compatibility of each pair of items in the outfit, and eventually aggregate all of the pairs' compatibility scores to obtain the overall outfit compatibility score. To achieve this, we propose two models, FashionRN and FashionRN-VSE, which we describe in detail in the following. \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{fig/fashionRN_VSE.pdf} \caption{Model design of FashionRN-VSE.} \label{joint} \end{figure*} \subsection{FashionRN} \label{sec:compatibility learning} We design FashionRN based on the concept of the relational network architecture. In our model, an outfit is treated like a scene and its items are treated like the objects in the scene. Therefore, as opposed to Santoro et al., who consider rows of a CNN feature map extracted from the entire scene as objects, we consider the images of the items in an outfit as objects and use a DenseNet to transform them into feature vectors.
Additionally, we are interested in learning one specific type of visual relation, compatibility, which is not question dependent, and therefore we do not need an LSTM model. Our FashionRN consists of two parts, as shown in Figure \ref{diagram}. The first part, \emph{relation construction}, learns the non-linear relation between each pair of items in an outfit, and the second part, \emph{compatibility scoring}, combines all the pair-wise relations to learn the compatibility of the entire outfit. \subsubsection{Relation Construction} First, the images of items are passed through a pre-trained CNN model of choice $\Phi$ (e.g., DenseNet) to produce high-dimensional visual feature vectors $x$. Each $x$ is then passed through a fully connected (FC) layer, which serves two purposes: it downsizes the feature vectors and learns a latent space whose dimensions correspond to fashion styles and contribute to compatibility. The reduced-dimensional features are denoted as $v$. After generating the lower-dimensional visual features $v$, the relation between each pair of items in $S$ is constructed as follows. For each pair of items $(i,j) \in S$, we concatenate their visual features and pass them through an FC layer $g$ to generate the relation embedding $h$: \begin{align} h_{(i,j)} = g([ v_i || v_j ]) \end{align} \subsubsection{Compatibility Scoring} After the relation construction, we model the compatibility among all the pairs of items in $S$ as follows: \begin{align}\label{eq:compatibility_score} m_s = f \Big( \frac{1}{\binom{n}{2}}\sum_{i,j}h_{(i,j)} \Big) \end{align} \noindent where $m_s$ is the compatibility score of outfit $S$. Both $f$ and $g$ are composed of multiple non-linear functions with parameters $\theta_f$ and $\theta_g$. In our work, $f_{\theta_f}$ and $g_{\theta_g}$ are multi-layer perceptrons (MLPs), and we want to learn the parameters $\theta = \lbrace \theta_f, \theta_g \rbrace$ such that they can predict the compatibility between fashion items.
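For concreteness, the relation construction and compatibility scoring above can be sketched as follows. This is a minimal NumPy illustration, not our actual implementation: the MLPs $f$ and $g$ are replaced by single randomly initialized layers, the dimensions are illustrative, and a sigmoid stands in for the final softmax classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Illustrative dimensions (not the ones used in our experiments).
D_V, D_H = 8, 16                              # item feature dim, relation dim

W_g = rng.normal(size=(2 * D_V, D_H)) * 0.1   # single-layer stand-in for MLP g
W_f = rng.normal(size=D_H) * 0.1              # single-layer stand-in for MLP f

def compatibility_score(V):
    """V: (n, D_V) array of item features v_i; returns scalar score m_s."""
    n = V.shape[0]
    # Relation construction: h_(i,j) = g([v_i || v_j]) for each pair i < j.
    h = [relu(np.concatenate([V[i], V[j]]) @ W_g)
         for i in range(n) for j in range(i + 1, n)]
    # Compatibility scoring: average over the C(n, 2) relations, then apply f.
    pooled = np.mean(h, axis=0)
    logit = pooled @ W_f
    return 1.0 / (1.0 + np.exp(-logit))       # sigmoid -> score in (0, 1)

outfit = rng.normal(size=(5, D_V))            # a toy outfit with 5 items
print(compatibility_score(outfit))
```

Note that the same weights handle outfits of any size, since the number of parameters does not depend on the number of pairs.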
The output of $g_\theta$ is the "relation" \cite{santoro2017simple}. Thus, $g_\theta$ learns the relation between the visual appearances of $v_i$ and $v_j$. \subsection{FashionRN-VSE} While some studies learn compatibility using visual information \cite{vasileva2018learning,veit2015learning}, others have suggested that combining textual data with visual data can improve the performance of compatibility prediction \cite{han2017learning,li2017mining,hsiao2017creating,song2017neurostylist}. We hence propose a variant of FashionRN that incorporates the concept of Visual Semantic Embedding (VSE) proposed by Han et al. \cite{han2017learning}. We name this model FashionRN-VSE. The diagram of this method is presented in Figure \ref{joint}. VSE produces an image embedding ($v_i$) and a description embedding ($d_i$) for an item $i$. $v_i$ is produced by passing the item image through a CNN model of choice $\Phi$, as in FashionRN, while $d_i$ is produced by encoding each word in the item description as a one-hot encoding. $v_i$ and $d_i$ for each item in an outfit are concatenated and fed into FashionRN-VSE. The compatibility scoring stays the same as in Eq. (\ref{eq:compatibility_score}), while the relation embedding for FashionRN-VSE is reformulated as follows: \begin{align} h_{(i,j)} = g \big( (v_i || d_i) || (v_j || d_j) \big) \end{align} With the consideration of textual information, FashionRN-VSE not only considers the visuals of fashion items, but also more detailed information beyond what can be observed from the images, such as brand, texture, material, and even price point. We believe that by capturing this information, FashionRN-VSE can better learn the compatibility of the fashion items in an outfit. \subsection{Design Options and Time Complexity} Our proposed models enable various design and usage options.
First of all, depending on the richness of one's data, one can choose FashionRN if only visual information is available, and FashionRN-VSE if both visual and textual information are available. Secondly, RNs are order invariant. Therefore, it does not matter in which order the outfit items are passed to the network. Although to detect the compatibility of an outfit we consider the relations between all of its items, using RNs gives the flexibility to consider only some of the item pairs, or to put greater weights on some of them. For example, if an item (e.g., a handbag) is the centerpiece of an outfit and one would like to compose an outfit that highlights this piece, we can put greater weights on the relations that involve this item. Besides this flexibility, our approach is also efficient in terms of time complexity. Our compatibility learning framework can be applied to outfits with an arbitrary number of items. The time complexity of calculating the compatibility of an outfit with $n$ items is $O(\binom{n}{2})$ with respect to the number of items in the outfit\footnote{We empirically found that the order of objects in each pair does not impact the accuracy, and thus our time complexity is $O(\binom{n}{2})$ and not $O(n^2)$.}. However, considering that outfits contain a limited number of items (fewer than 12 in our dataset), this cost remains small in practice. Also, developing a compatibility framework based on RNs eliminates the need to pass item category labels as input to the network, as the network itself is able to implicitly learn such information.
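The order invariance can be checked concretely: because the pooling in Eq. (\ref{eq:compatibility_score}) is a symmetric sum over item pairs, permuting the items of an outfit leaves the score unchanged. Below is a toy NumPy check, with random linear maps standing in for $f$ and $g$, and the sum taken over all ordered pairs so that the pooling is exactly permutation invariant.

```python
import numpy as np

rng = np.random.default_rng(1)
D_V, D_H = 8, 16                        # illustrative dimensions
W_g = rng.normal(size=(2 * D_V, D_H))   # stand-in for g
w_f = rng.normal(size=D_H)              # stand-in for f

def score(V):
    """Pool g([v_i || v_j]) over all ordered pairs i != j, then apply f."""
    n = V.shape[0]
    pooled = sum(np.concatenate([V[i], V[j]]) @ W_g
                 for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
    return pooled @ w_f

outfit = rng.normal(size=(5, D_V))
shuffled = outfit[rng.permutation(5)]
print(np.isclose(score(outfit), score(shuffled)))   # True: order invariant
```

The same pooled sum is computed no matter how the items are listed, which is why no fixed item ordering (e.g., top to bottom) is needed.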
\subsection{Parameter Learning} \label{sec:Implementation} The parameters $\theta_g$ and $\theta_f$ are learned through back propagation using a cross-entropy loss function as follows: \begin{align} \label{eq:30} \mathcal{L} (\theta_g, \theta_f) = -\sum_{i=1}^{|\mathcal{B}|} (y_i \log(p_i)+(1-y_i) \log(1-p_i)) \end{align} \noindent where $\mathcal{B}$ is one training batch, $y_i$ is the ground-truth label, and $p_i$ is the predicted score ($m_s$) of the $i$\textsuperscript{th} outfit. To learn the parameters, all of the outfits in the dataset are treated as positive samples with labels $y = 1$. To create negative samples, we randomly select items to compose artificial outfits and set their labels to $y = 0$. \section{Evaluation} \label{sec:experimental} To examine the effectiveness of FashionRN and FashionRN-VSE, in this section we empirically test their performance on two prediction tasks, compatibility prediction and a fill-in-the-blank test, on a large fashion outfit dataset, and compare them with other state-of-the-art methods. \subsection{Dataset} \label{sec:Dataset} Learning the compatibility of fashion outfits requires a rich source of data, which can be collected from online fashion communities such as Polyvore, Chictopia\footnote{http://www.chictopia.com}, and Shoplook\footnote{https://www.shoplook.io}. On these websites, users can create stylish outfits and look at millions of outfits created by others. Such rich fashion data can be used to train neural networks to learn different fashion concepts and automatically create stylish outfits. Polyvore is a particularly good source of data for our work because it has images of items with clean backgrounds and accompanying descriptions. Researchers have used data from Polyvore for various studies \cite{vaccaro2016elements,li2017mining,lee2017style2vec,vasileva2018learning}.
However, some of their datasets are not open source (e.g., \cite{li2017mining}) or are small (e.g., \cite{han2017learning,song2017neurostylist}). Thus, we collected our own dataset from Polyvore. To ensure quality, we collected outfits from users who are highly popular on Polyvore and have at least 100K followers. For each item, we saved a $150 \times 150$ image and the item description. We cleaned the dataset by excluding items that are not clothing (e.g., furniture) using their metadata. Then, we removed any outfit that was left with only one item. The remaining dataset had 49,740 outfits and 256,004 items. The collected outfits have an arbitrary number of items, ranging from 2 to 12, but on average each outfit has five items. We used 70\% of our data for training (34,818 sets), 15\% for validation (7,461 sets) and 15\% for testing (7,461 sets). \underline{Negative Sample Creation.} The data collected from Polyvore includes compatible outfits (positive class). Following the methodology of \cite{han2017learning}, we created our negative class by randomly picking items from different outfits. While these outfits are not guaranteed to be incompatible, they have a lower probability of compatibility than outfits created by fashion experts on Polyvore, and therefore our network should assign lower compatibility scores to these randomly composed outfits. We created one incompatible outfit for each positive outfit. This resulted in 69,636 sets in total for training (positive and negative), 14,922 sets for validation and 14,922 sets for testing. \subsection{Experiment Setting} We choose DenseNet as our CNN model $\Phi$ since, at the time of writing, it is the state of the art. DenseNet generates image features $x$ of dimension 94,080. We design the downsizing FC layer to output 1000-dimensional features, so that $v \in \mathbb{R}^{1000}$. In our work, $f$ and $g$ are both multi-layer perceptrons (MLPs).
$g$ has four layers of size 512, 512, 256, 256 and $f$ has three layers of size 128, 128, 32. Therefore, $\Theta_g \in \mathbb{R}^{2000 \times 256}$ and $\Theta_f \in \mathbb{R}^{256 \times 32}$. At the end, we use a softmax layer for classification. We use layer normalization and ReLU activation for all the layers of $f$ and $g$. We set the dropout rate to 0.35 for all the layers except the last layer of $f$. We set the learning rate to 0.001 and the batch size to 64; each mini-batch therefore includes 64 fashion sets. Finally, we trained our model until the validation loss stabilized, which took 19 epochs. Our model is implemented using TensorFlow, and the Adam optimizer is used to learn the parameters. All our experiments are run on a Tesla P100-PCIE-16GB GPU. \subsection{Prediction Tasks} An effective compatibility model, given an unseen fashion outfit, should accurately score the outfit based on how well the included items match with each other. Moreover, given an incomplete fashion outfit, it should also be able to suggest a fashion item to fill in. With such objectives in mind, we design two prediction tasks to evaluate the effectiveness of FashionRN and FashionRN-VSE. We evaluated our method using the large dataset we collected from Polyvore (Section \ref{sec:Dataset}). We performed two tests: \begin{itemize} \item \textbf{Compatibility prediction test}: predict the compatibility score of a given fashion outfit. This test is a binary classification task, where the model should answer true if the given outfit is compatible, and false otherwise. \item \textbf{Fill in the blank (FITB) test}: given an outfit and a number of candidate items, find the item that matches best with the existing items in the outfit. This test is a retrieval task, where, given an incomplete fashion outfit and a list of candidate fashion items, the model aims to score all of the candidate items and return the item with the highest compatibility score with the incomplete fashion outfit.
\end{itemize} These two tests are commonly used in the fashion recommendation literature for evaluating compatibility learning methods \cite{han2017learning,hsiao2017creating,vasileva2018learning}. \subsection{Comparing Methods} To demonstrate the effectiveness of our proposed method, we compared our results with the following approaches and report the results in Table \ref{table1} and Table \ref{table2}. We evaluated these methods on the dataset described in Section \ref{sec:Dataset}. For each method, we used the authors' code and their reported set of parameters. We have considered compatibility prediction as a binary classification task and have calculated the Area Under Curve (AUC) score to compare these methods. \begin{itemize} \item\textbf{Bi-LSTM + VSE} \cite{han2017learning}: A fashion outfit is considered as a sequence from top to bottom, and a Bi-LSTM model is jointly trained with a visual-semantic embedding (VSE) model to learn compatibility. \item \textbf{SiameseNet} \cite{veit2015learning}: SiameseNet uses a Siamese CNN to transform images into an embedding space in which compatible items are close to each other and far away from incompatible items. After training the network using a contrastive loss, the distance between item embeddings is used to estimate their compatibility. To compare with this network, we created compatible pairs by selecting items from the same outfit. Incompatible pairs were created by selecting items from different outfits. To measure the compatibility of an outfit using SiameseNet, we averaged the compatibility scores of all of the item pairs in that outfit. \item \textbf{BPR-DAE} \cite{song2017neurostylist}: A latent compatibility space is learned by employing a dual autoencoder (DAE) network and a Bayesian Personalized Ranking (BPR) framework. We trained BPR-DAE similarly to SiameseNet and considered the average compatibility score of all the item pairs in an outfit as its compatibility score.
\item \textbf{RAW-V}: The compatibility score of an outfit $S$ is measured based on the raw visual features of its items as: \begin{align} \label{eq4} m_s = \frac{1}{\binom{n}{2}}\sum_{i,j} d(v_i, v_j) \end{align} $v_i$ and $v_j$ are the visual feature representations of items $i$ and $j$, extracted from a fine-tuned DenseNet \cite{huang2017densely}, and $d(v_i, v_j)$ is the cosine similarity between items $i$ and $j$. The compatibility of an outfit is obtained by averaging the pair-wise compatibilities of all the pairs in the outfit. \item \textbf{VSE}: We learned the joint visual-semantic embedding proposed by Han et al. \cite{han2017learning} and measured compatibility similarly to RAW-V. \item \textbf{FashionRN}: Our proposed model, which considers a fashion outfit as a scene and the items in the outfit as objects in the scene. It then learns the compatibility of an outfit with an arbitrary number of items using a Relational Network. \item \textbf{FashionRN-VSE}: Our proposed model that builds on top of FashionRN and adds the VSE component. \end{itemize} The first four methods are popular in the literature for learning compatibility, and the rest are included to understand how different components contribute to compatibility. As our method is mainly based on visual information, we did not compare with approaches that rely only on semantic information for learning compatibility \cite{hsiao2017creating}. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{fig/compatibility_test.jpg} \caption{Example test outfits in our compatibility prediction task and their scores.} \label{compat} \end{figure} \subsection{Compatibility Prediction} \label{sec:compatibility prediction} In this task, a number of items are given as input and we aim to find their compatibility score. For items that are compatible with each other, the model should answer true, and false otherwise.
This enables a recommendation system to recommend items based on their compatibility with a query or with items in a shopping cart. In addition, users can create their own outfits and learn their compatibility. \begin{table}[htb!] \caption{Performance of different approaches on the compatibility prediction test.} \label{table1} \centering {\begin{tabular}{ l | c } \hline \hline Approaches & AUC \\ \hline Bi-LSTM + VSE \cite{han2017learning} & 0.72 \\ SiameseNet \cite{veit2015learning} & 0.48 \\ BPR-DAE \cite{song2017neurostylist} & 0.53\\ RAW-V & 0.61 \\ VSE & 0.45 \\ Fashion RN & \textbf{0.81} \\ Fashion RN + VSE & \textbf{0.88} \\ \hline \end{tabular} } \end{table} Table \ref{table1} shows the performance comparison among different approaches for the compatibility prediction task. This table shows that both of our models, FashionRN and FashionRN-VSE, achieve the best performance among the compared methods, including the Bi-LSTM method which requires both visual and semantic information. This is because our Relational Network based model is inherently able to learn a variety of relations between items, including their categories, without requiring access to explicit semantic attributes and category labels. This is especially useful when semantic information is not available or is very noisy. We also observe that the Bi-LSTM performance decreases on our dataset. This is likely due to the size of our test dataset (14,922 outfits), which is much larger than the test dataset (3,076 outfits) used by the authors \cite{han2017learning}. Table \ref{table1} shows that our method performs better than the two pair-wise methods (SiameseNet and BPR-DAE). This finding suggests that pair-wise methods fail to work well on learning the compatibility of outfits with more than two items. This is because a linear combination of pair-wise compatibility scores (e.g., averaging all the pair-wise scores) fails to capture the compatibility of an entire outfit.
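The pair-wise averaging just discussed (the RAW-V baseline, and the way we score outfits with SiameseNet and BPR-DAE) can be sketched in a few lines; the toy feature vectors below are placeholders for the DenseNet features used in the paper:

```python
from itertools import combinations
from math import sqrt

def cosine(u, v):
    # cosine similarity between two feature vectors
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def pairwise_average_compatibility(features):
    # RAW-V-style outfit score: mean similarity over all C(n, 2) item pairs
    pairs = list(combinations(features, 2))
    return sum(cosine(u, v) for u, v in pairs) / len(pairs)

# three toy "items": the first two are similar, the third is orthogonal
outfit = [(1.0, 0.0), (0.8, 0.6), (0.0, 1.0)]
print(round(pairwise_average_compatibility(outfit), 3))  # → 0.467
```

As the text argues, this linear combination treats every pair independently, which is exactly what limits pair-wise baselines on outfits with more than two items.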
In our work, although we start by learning the relation between item pairs, we combine the pair-wise relations and pass them through multiple nonlinear layers to learn more powerful feature representations of an entire outfit. This can determine the compatibility of an outfit more accurately than simply averaging all the pair-wise compatibility scores. \begin{table}[t] \caption{Performance of different approaches on the FITB test.} \label{table2} \centering {\begin{tabular}{ l | c } \hline\hline Methods & Accuracy\\ \hline Bi-LSTM + VSE \cite{han2017learning} & 0.34 \\ SiameseNet \cite{veit2015learning} & 0.35 \\ BPR-DAE \cite{song2017neurostylist} & 0.20\\ RAW-V & 0.35 \\ VSE & 0.33 \\ Fashion RN & \textbf{0.52} \\ Fashion RN + VSE & \textbf{0.58} \\ \hline \end{tabular}} \end{table} Figure \ref{compat} shows qualitative results of our model for compatibility prediction. Compatible outfits have two or more non-redundant items that have well-matching colors and share a similar style. From Figure \ref{compat}, we can observe that our method can effectively predict whether a set of items makes a compatible outfit. For example, items in the first row are all black/green and share a casual/sportive style and therefore have a high compatibility score; items in the second row have a chic/formal style and are all cream or dark blue, which together create a stylish contrast, and therefore have a high compatibility score. Items of incompatible outfits may have inconsistent styles or colors. Incompatible outfits may also have redundant items such as two shirts. We can observe that our method is able to capture such concepts from visual information. In contrast to the Bi-LSTM model, we do not need to feed any category labels or attributes (e.g., men, women, shirt, shoes) to our model to explicitly teach it that, for example, a men's shirt is incompatible with a woman's skirt, or that an outfit with redundant items is incompatible. Our model is able to implicitly learn such information.
For example, items in the fourth row do not have compatible colors/patterns and therefore have received a low compatibility score; items in the fifth row have compatible colors, but a man's shirt and a pair of men's jeans do not match with women's heels. Thus, this outfit has also received a low score; finally, the outfit in the last row has two bottoms (skirt and leggings) and our network has given a low compatibility score to this outfit. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{fig/fitb.jpg} \caption{Example results from the FITB task using the Fashion RN model. Items in each row are ranked based on their output scores and held-out items are highlighted in rectangles.} \label{fitb} \end{figure} \begin{figure*}[t!] \centering \subfigure[DenseNet]{ \includegraphics[width=.9\linewidth]{fig/outfitnet_item_visualization_densenet_20190501.png} \label{fig:visualization_densenet} } \subfigure[FashionRN]{ \includegraphics[width=.9\linewidth]{fig/outfitnet_item_visualization_outfitnet_20190501.png} \label{fig:visualization_fashionrn} } \caption{Visualization of the same 1000 fashion items with embeddings learned by different models.} \label{fig:visualization} \end{figure*} \subsection{Fill In The Blank (FITB) Test} \label{sec:fitb} In this task an outfit and a number of candidate items are given and the goal is to find the item that best matches the existing items in the outfit. This is useful when a user has a set of items (e.g., a shirt and a pair of shoes) and wishes to find another item (e.g., a handbag) that best matches the rest of the outfit. To run this test, we created a FITB dataset using our positive test set from Section \ref{sec:Dataset}. In each test outfit, we randomly held out one item. We then randomly selected three items from other outfits that have the same category as the held-out item. For example, if the held-out item was a shirt, all three randomly selected items were shirts.
This is to ensure that the network cannot easily filter out items that already exist in the outfit without needing to understand compatibility. We then found the item among the four candidates that maximizes the compatibility score of the entire outfit. Table \ref{table2} shows the results of the FITB test for all the compared methods. Similar to the compatibility prediction task, we observed that our model outperforms all the baselines. The performance on this task is also improved by utilizing joint visual-semantic embeddings in our model (Fashion RN + VSE). The reason for this improvement in the FITB test is that in many cases there is more than one compatible item among the candidates. While the Fashion RN model is able to rely on visual information to find items that are compatible with the given outfit, adding semantic information can improve the ranking among the compatible candidates. For example, in the last row of Figure \ref{fitb} the held-out item is correctly detected as compatible (score = 0.98) by the Fashion RN model. However, the first shirt is also compatible with the outfit and has received a higher score. We observed that adding the semantic information (Fashion RN + VSE) improved the ranking of the compatible candidates in this example and resulted in choosing the right shirt in the last row of Figure \ref{fitb}. Similar to the compatibility prediction task, we observe that the Bi-LSTM method performs poorly on our dataset. As others have noted \cite{hsiao2017creating}, this is probably because the FITB test set provided by Han et al. \cite{han2017learning} contains poor choices of negatives. In their FITB dataset, negative items may not be of the same type as the held-out item. For example, the test outfit may have a shirt, a pair of jeans, and be missing a handbag. If some of the candidates are shirts, a category that already exists in the outfit, the network can easily eliminate them based on their category without needing to infer compatibility.
Thus, to force the model to reason based on compatibility, we ensured that all the candidates are of the same type as the missing item. Figure \ref{fitb} shows successful and unsuccessful examples of this test using our model. In most of the test outfits the held-out item is among the top two items in the ranked list and shows a high compatibility score. \section{Fashion Compatibility Embedding} \label{sec:qualitative} Besides predicting compatibility given a complete fashion outfit and filling in the blank given an incomplete one, FashionRN and FashionRN-VSE are also able to learn item and outfit embeddings through their hidden layers. More specifically, $v$ learned in FashionRN can be viewed as the items' \emph{compatibility features}. As discussed previously, the concept of compatibility is fundamentally different from similarity, since items that are visually similar to each other are not necessarily compatible in fashion outfits, and vice versa. To demonstrate the learned compatibility embedding of fashion items, we took the learned embeddings and transformed them into two dimensions using the t-SNE algorithm \cite{maaten2008visualizing}. To show FashionRN's capability of learning compatibility beyond visual similarity, we compare the visualization on the same set of 1000 randomly chosen fashion items with DenseNet features. The results are shown in Figure \ref{fig:visualization}. As shown in Figure \ref{fig:visualization}, at first glance, the scatter plots of the same 1000 fashion items created by the DenseNet embedding and the FashionRN embedding are markedly different. On closer inspection, one can see that items with similar colors and shapes are closer to each other in the DenseNet embedding space, while items that make sense to go together in an outfit are closer to each other in the FashionRN embedding space. This shows that FashionRN captures the underlying item compatibility in addition to visual similarity.
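The FITB selection rule from the previous section reduces to an argmax over candidate completions. A minimal sketch follows; `toy_compatibility` is a hypothetical stand-in for the trained FashionRN scorer, not the actual network:

```python
def fill_in_the_blank(partial_outfit, candidates, compatibility):
    # Score each completed outfit and return the best candidate,
    # mirroring the four-way choice in the FITB test.
    return max(candidates, key=lambda c: compatibility(partial_outfit + [c]))

# Hypothetical stand-in scorer: outfits whose items have similar
# "style" values score higher; a real system would plug in the model.
def toy_compatibility(outfit):
    return -(max(outfit) - min(outfit))

print(fill_in_the_blank([0.2, 0.3], [0.9, 0.25, 0.7], toy_compatibility))  # → 0.25
```

Because the rule only ranks completed outfits, any compatibility model — pair-wise or holistic — can be dropped in as the `compatibility` argument.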
\section{Conclusion} \label{sec:conclusion} In this paper, we proposed a method for learning fashion compatibility. We considered an outfit as a scene and its items as objects in the scene and developed FashionRN and FashionRN-VSE, two RN-based models, to learn the visual relations between items and determine their compatibility. We collected a large dataset from Polyvore and conducted different experiments to demonstrate the effectiveness of our method. In addition to addressing some of the limitations of existing models, our model showed state-of-the-art performance in both the compatibility prediction task and the fill-in-the-blank test. Besides their capability in the above prediction tasks, FashionRN and FashionRN-VSE are also able to learn item and outfit embeddings that carry the underlying compatibility. To showcase such results, we visualized the learned embeddings of the same items using both DenseNet and FashionRN. Through visualization, we find that FashionRN better captures the compatibility among items compared to DenseNet. \bibliographystyle{ACM-Reference-Format}
\section{\bf Introduction} In this article we study a model which is a special instance of a more general class of random interfaces. Random interfaces are fields $\phi=(\phi_{x})_{x\in\Z^{d}}$, whose distribution is specified by a probability measure on $\mathbb{R}^{\mathbb{Z}^{d}}$, $d\ge1$. The density is given in terms of an energy function $H$ called Hamiltonian and has the form \begin{align}\label{eq:measure_def} \mathbf{P}_{\Lambda}(\mathrm{d}\phi):=\frac{\mathrm{e}^{-H(\phi)}}{Z_\Lambda}\prod_{x\in{\Lambda}}\mathrm{d}\phi_{x}\prod_{x\in\mathbb{Z}^{d}\setminus \Lambda} \delta_{0}(\mathrm{d}\phi_{x}), \end{align} where $\Lambda\Subset\mathbb{Z}^{d}$ is a finite subset, $\mathrm{d}\phi_{x}$ is the Lebesgue measure on $\mathbb{R}$, $\delta_{0}$ is the Dirac measure at $0,$ and $Z_{\Lambda}$ is a normalizing constant. We are imposing zero boundary conditions: almost surely $\phi_{x}=0$ for all $x\in\mathbb{Z}^{d}\setminus{\Lambda}$, but the definition holds for more general boundary conditions. A special case is when the Hamiltonian is given by \begin{equation}\label{eq:Ham_def}H(\varphi)=\sum_{x\in\Z^d}\left(\kappa_1\|\nabla\varphi_x\|^2+\kappa_2(\Delta\varphi_x)^2\right)\end{equation} where $\nabla$ is the discrete gradient and $\Delta$ is the discrete Laplacian defined by $$\nabla f(x)=(f(x+e_i)-f(x))_{i=1}^d$$ $$\Delta f(x)= \frac1{2d}\sum_{i=1}^d \left( f(x+e_i) +f(x-e_i) -2f(x) \right)$$ for any $x\in \Z^d$, $f:\Z^d\to\R$, and $\kappa_1,\,\kappa_2$ are two non-negative parameters. In the physics literature, the above Hamiltonian is considered to be the energy of a semiflexible membrane (or semiflexible polymer if $d=1$) where the parameters $\kappa_1$ and $\kappa_2$ are the {\it lateral tension} and the {\it bending rigidity}, respectively (\cite{Leibler:2006, ruiz2005phase, lipowsky1995generic}). When $\kappa_2=0$, the model is the purely gradient model and it is known as the discrete Gaussian free field. 
In this case the Hamiltonian is governed by the surface area of the interface. When $\kappa_1=0$, the model is called the membrane, or Bilaplacian, model. In this case the Hamiltonian is governed by the curvature of the interface. More generally the Hamiltonian is governed by an interplay of the surface area and the curvature, hence one considers the model with both gradient and Laplacian interaction. The main aim of this article is to show how the dependence of $\kappa_1$ and $\kappa_2$ on {the size of the set $\Lambda$} affects the scaling limit of $\mathbf{P}_{\Lambda}$. When $\kappa_1=0$ or $\kappa_2=0$, the scaling limit of the model is well-understood. The literature on the discrete Gaussian free field is huge due to its connection to various other probabilistic objects and we refer the interested reader to the lecture notes and survey articles~\cite{bere, biskup2017extrema, sheffield2007gaussian}. We refer to \cite{mm_scaling, CarJDScaling,HryVel} for the scaling limit of the membrane model in $d\ge 1$. The literature on the case when $\kappa_1>0, \kappa_2>0$ is limited and has been considered in the works of \cite{sakagawa2018,borecki2010, CarBor,mixed_scaling}. \cite{borecki2010} and \cite{CarBor} introduced this model as the $(\nabla+\Delta)$-model (we will also refer to it as ``mixed model'') with constant $\kappa_1,\,\kappa_2$. They studied in $d=1$ the influence of pinning in order to understand the localization behavior of the polymer. The results were extended to higher dimensions, together with further properties of the free energy, in~\cite{sakagawa2018}. In \cite{mixed_scaling} the scaling limit of the $(\nabla+\Delta)$-model is studied. There it is shown that if one lets the lattice size go to zero, under a suitable scaling the Laplacian term is dominated by the gradient and the limit becomes the Gaussian free field.
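To make the conventions in~\eqref{eq:Ham_def} concrete, the $d=1$ energy with zero boundary conditions can be evaluated directly. This is an illustrative sketch of ours (not part of the analysis below), using the paper's normalizations $\nabla f(x)=f(x+1)-f(x)$ and $\Delta f(x)=(f(x+1)+f(x-1)-2f(x))/2$:

```python
def hamiltonian_1d(phi, kappa1, kappa2):
    """H(phi) = sum_x kappa1*(grad phi_x)^2 + kappa2*(Lap phi_x)^2 on Z,
    with phi = 0 outside the given window (zero boundary conditions)."""
    # pad with zeros so the discrete operators see the boundary condition;
    # one zero per side suffices, two is safely generous
    f = [0.0, 0.0] + list(phi) + [0.0, 0.0]
    grad2 = sum((f[x + 1] - f[x]) ** 2 for x in range(len(f) - 1))
    lap2 = sum(((f[x + 1] + f[x - 1] - 2 * f[x]) / 2) ** 2
               for x in range(1, len(f) - 1))
    return kappa1 * grad2 + kappa2 * lap2

# a single spike of height 1: grad part contributes 2, Laplacian part 1.5
print(hamiltonian_1d([1.0], 0.25, 0.5))  # → 1.25
```

For a fixed configuration one can read off directly how increasing $\kappa_2$ penalizes curvature relative to surface area.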
A very natural question, which we aim at investigating in this paper, is whether one can interpolate between the continuum Gaussian free field and the membrane model by tuning $\kappa_2/\kappa_1$ suitably. To the best of our knowledge, the influence of the length on the shape of the polymer through $\kappa_1$ and $\kappa_2$ has not been systematically addressed in the literature. In~\cite{ruiz2005phase} a phase transition on the surface tension for mixed polymers has been investigated according to a suitable rescaling of $\sqrt{\kappa_2/\kappa_1}$ depending on the lattice size. However the model studied in~\cite{ruiz2005phase} is integer-valued, so it differs from the one studied in the present paper. We now briefly describe the phase transition picture which appears in the scaling limit. We restrict our focus to $d=1$ for heuristic explanations. Let us consider the Hamiltonian described in~\eqref{eq:Ham_def}. We take {$\Lambda = \{1,\, \ldots,\, N-1\}$ for $N\in \N, \,\kappa_1=1/4$ and $\kappa_2= \kappa(N)/2$}. In $d=1$ in the DGFF case ($\kappa_2=0$) it is well-known that the finite volume measure can be given by a random walk bridge and in the membrane case ($\kappa_1=0$) by an integrated random walk bridge~(\cite{CaravennaDeuschel_pin}). Therefore the scaling limit for the DGFF and membrane turns out to be Brownian bridge and the integrated Brownian bridge, respectively. In $d=1$, a representation for the $(\nabla+\Delta)$-model using random walks was obtained in~\cite{borecki2010}. The details of the representation are recalled in Appendix~\ref{appendix:C}. Let $\gamma$ and $\sigma$ be as in \eqref{eq:gamma} and \eqref{eq:sigma}, respectively. Let $(\widetilde \varepsilon_i)_{i\in \Z^{+}}$ be i.i.d. normal random variables with mean zero and variance $\sigma^2/(1-\gamma)^2$. 
{For $n\ge 1$}, let $W_n= S_n-U_n$, where $S_n=\sum_{k=1}^n \widetilde \varepsilon_k$ and $U_n= \gamma^n \widetilde \varepsilon_1+\gamma^{n-1} \widetilde \varepsilon_2+\cdots+ \gamma \widetilde \varepsilon_n$. From~\citet[Proposition 1.10]{borecki2010} it is known that the finite volume measure of the model is given by the joint distribution of {$(W_n)_{1 \le n\le N-1}$} conditioned on $W_{N}=W_{N+1}=0$. We look at the unconditional process and see how the parameter $\kappa(N)$ changes the variance. It follows from~\eqref{eq:gamma} and \eqref{eq:sigma} that { $$\sigma^2\approx \frac{1}{\kappa(N)}\; \text{ and } \;(1-\gamma) \approx \frac{1}{\sqrt{\kappa(N)}}.$$} So for the case when $\kappa(N)\ll N^2$ we have { $$\mathbf{Var}(S_{N-1})\approx N ,\; \mathbf{Var}(U_{N-1})\approx \sqrt{\kappa(N)}\, \text{ and } \,\mathbf{Cov}(S_{N-1}, U_{N-1})\approx \sqrt{\kappa(N)}$$ which together imply that $\mathbf{Var}(W_{N-1}) \approx N$}, thus the random walk dominates with its scaling $\sqrt{N}$. When $\kappa(N)\gg N^2$ the situation is a bit more complicated and one can compute that (see Appendix~\ref{appendix:C}) $$\mathbf{Var}(W_{N-1})\approx \frac{N^3}{\kappa(N)}.$$ It turns out that the Laplacian part dominates under this scaling. When $\kappa(N)\sim N^2$, the contribution from $S_{N-1}$ and $U_{N-1}$ is similar and hence both the gradient and the Laplacian interactions come into the picture. The reader can see a simulation of the free boundary case, that is, the trajectories of {$(W_n)_{1 \le n\le N}$}, in Figure~\ref{fig1} and Figure~\ref{fig2}. We plotted the two cases $\kappa\ll N^2$ and $\kappa\gg N^2$ in different pictures as the height scalings are different. \begin{figure}[ht!]
\includegraphics[scale=0.7]{fig_gff}\caption{Simulation of some trajectories of {$(W_n)_{1 \le n\le N}$} with $N=10^4$ and {\color{blue}$\kappa=0$}, {\color{green}$\kappa=2\times 10^2$}, {\color{black}$\kappa=2\times 10^4$}, {\color{red}$\kappa=2\times 10^6$}.} \label{fig1} \end{figure} \begin{figure}[ht!] \includegraphics[scale=0.7]{fig_mm}\caption{Simulation of some trajectories of {$(W_n)_{1 \le n\le N}$} with $N=10^3$ and {\color{blue}$\kappa=2\times 10^{6.5}$}, {\color{green}$\kappa=2\times 10^7$}, {\color{red}$\kappa=2\times 10^8$}.} \label{fig2} \end{figure} We stress that in the above description we did not consider boundary effects which can cause considerable difficulty in understanding these processes explicitly. In Appendix~\ref{appendix:C} we have pointed out the conditional representation of $W_{N-1}$. One can see that it is not easy to determine whether the above transition can be pushed to the conditional processes and hence the finite volume measure. The aim of this article is to go beyond such representations and show the above transition holds true in general dimensions and get the explicit limits in each of the cases. In this respect, we also record that the integrated random walk representations of $d=1$ cannot be extended to $d> 1$. We mainly use finite difference methods in the proof of the main results. In a recent work, the authors of the present article introduced a finite difference method to approximate solutions of PDEs to successfully obtain the scaling limit of the membrane model and the $(\nabla+\Delta)$-model with fixed coefficients (see~\cite{mm_scaling,mixed_scaling}). The idea was inspired by the work~\cite{thomee}. Finite difference methods were also employed in the works~\cite{Mueller:Sch:2017, schweiger:2019} to obtain important estimates on the discrete Green's function of the membrane model. The main results of the article are as follows. 
We consider the model on $\Lambda_N \Subset \Z^d $ for a suitable $\Lambda_N$ defined later in Section \ref{section:main results}. Also, we assume $\kappa_1=1/(4d),\,\kappa_2=\kappa(N)/2$ and distinguish three regimes for $\kappa = \kappa(N)$. \begin{itemize} \item[(a)] Let $\kappa\gg N^2$. In $d\ge 1$, we show that the appropriately rescaled field converges to the continuum membrane model. The continuum membrane model is roughly a centered Gaussian process whose covariance is given by the Green's function of the Bilaplacian Dirichlet problem. For $d\ge 4$, in Theorem~\ref{thm:main} we show the convergence takes place in a distributional space (more precisely a negatively-indexed Sobolev space). In $d=1,2$ and $3$ we show in Theorem~\ref{thm:main2} that the limiting Gaussian process has continuous paths. \item[(b)] Let $\kappa\sim 2dN^2$. In $d\ge 4$ we show (Theorem~\ref{thm:main}) that the rescaled field converges to a random distribution in an appropriate Sobolev space and the covariance of the limiting Gaussian field is given by the Dirichlet problem involving the elliptic operator $-\Delta_c +\Delta_c^2$. In $d=1,2$ and $3$, again we show (in Theorem~\ref{thm:main2}) the convergence takes place in the space of continuous functions. \item[(c)] Let $\kappa\ll N^{2}$. In $d\ge 2$ we show (in Theorem~\ref{thm:main}) that the rescaled field converges in distribution to the Gaussian free field. Again, since the Gaussian free field is a random distribution the convergence takes place in a negatively-indexed Sobolev space. In $d=1$, we show (in Theorem~\ref{thm:main2}) that the limiting process is the Brownian bridge, confirming the heuristics presented above. \end{itemize} To derive the above results, the main technique we use is the approximation of the solution of a continuum Dirichlet problem with its discrete counterpart. 
Using Sobolev estimates it can be shown that the closeness of the solutions is related to the approximation of the discrete elliptic operator to the continuum one. This idea has already been employed in~\cite{mm_scaling} and~\cite{mixed_scaling}. But in the present scenario, the discrete elliptic operators have coefficients which depend on $N$ and hence the estimates of~\cite{thomee} are not applicable directly. In addition, the rough behavior around the boundary in the case of constant coefficients was dealt with by considering a truncation of the discrete elliptic operator. The operators were rescaled around the boundary and this helped in controlling their behavior. The same technique becomes a bit more involved in the present case. This helps us to tackle the cases $\kappa\gg N^2$ and $\kappa\sim 2d N^2$ but the method falls short when $\kappa\ll N^2$. In this case an anonymous referee pointed out to the authors the idea of dealing with the boundary effects and discretization separately, adjusting the boundary values with an appropriate cut-off function. We deal with these technical issues in Section~\ref{main_ingredient}. Let us mention in passing that we believe that the result in Section~\ref{main_ingredient} is of independent interest and can be applied to discrete elliptic operators where coefficients depend on the scaling of the lattice. \paragraph{{\em Structure of the article}} In Section~\ref{section:main results} we state our main results precisely. Furthermore, in its Subsection~\ref{main_ingredient} we discuss the approximation technique and the norm estimates in detail, while in Subsection~\ref{open_problems} we mention some open problems. In Section~\ref{sec:main_proofs} we derive the proof of Theorem~\ref{thm:main} and in Section~\ref{sec:main2_proofs} we deal with the lower dimensional case (Theorem~\ref{thm:main2}).
In Section~\ref{proof_approx_result} we provide a proof of the approximation results stated in~Subsection~\ref{main_ingredient}. These are mainly improvements of the results of~\cite{thomee}. \paragraph{{\em Notation}} {For real-valued functions $f(\cdot),\,g(\cdot)$ we write $f\gg g,\, f\sim g,\, f\approx g,\,f\ll g$ when $\lim_{n\to \infty} \frac{f(n)}{g(n)}$ equals $\infty,\,1,\,c$ and $0$, respectively, where $c$ is a nonzero constant (possibly equal to $1$)}. Also we write $f\asymp g$ if there exist two positive constants $c_\ell, c_r$ such that $c_\ell g(n) \le f(n) \le c_r g(n)$ for all $n$. We denote by $C$ a universal constant that may change from line to line within the same equation. In what follows, we shall use $\Delta$ and $\Delta_c$ to denote the discrete and continuous Laplacian, respectively. Also, $\partial_j$ (respectively $\frac{\partial}{\partial x_j}$) denotes the discrete (respectively continuous) derivative in the $j$-th coordinate. \section{\bf Set-up and main results} \label{section:main results} Let $\Lambda$ be a finite subset of $\Z^d$, $d\ge 1$, and $\mathbf{P}_{\Lambda}$ and $H(\varphi)$ be as in~\eqref{eq:measure_def} and \eqref{eq:Ham_def} respectively. It follows from Lemma 1.2.2 of~\cite{Kurt_thesis} that the Gibbs measure~\eqref{eq:measure_def} on $\R^{\Lambda}$ with Hamiltonian~\eqref{eq:Ham_def} exists. Note that~\eqref{eq:Ham_def} can be written as \begin{equation}\label{eq:ham:mixed} H(\varphi)=\frac{1}{2}\langle\varphi,(-4d\kappa_1\Delta+2\kappa_2\Delta^{2})\varphi\rangle_{\ell^{2}(\mathbb{Z}^{d})}. \end{equation} Let $d\ge 1$. Let $D$ be a bounded domain in $\R^d$. For $N\in \mathbb N$, let $D_N= N\overline D \cap \mathbb{Z}^d$. Let us denote by $\Lambda_N$ the set of points $x$ in $D_N$ such that, for every direction $i, j,$ also the points $\,x\pm e_i ,\,x\pm (e_i\pm e_j)$ are all in $D_N$.
In other words, $\Lambda_N\subset N\overline D\cap \Z^d$ is the largest set satisfying $\partial_2\Lambda_N\subset N\overline D\cap \Z^d$ where $\partial_2\Lambda_N:=\{y\in\Z^d\setminus\Lambda_N:\mathrm{dist}(y,\,\Lambda_N)\le 2\}$ is the double (outer) boundary of $\Lambda_N$ of points at $\ell^1$ distance at most $2$ from it. We consider the model with $\Lambda = \Lambda_N,\,\kappa_1=1/(4d),\,\kappa_2=\kappa(N)/2$ and want to study what happens when we suitably tune the parameter $\kappa(N)$ as $N$ tends to infinity. We assume $\kappa_1$ to be constant as it is easy to state the results in this format. Also for simplicity we write $\kappa$ for $\kappa(N)$. We just note here that if we write $G_{\Lambda_N}(x,\,y):=\mathbf{E}_{\Lambda_N}(\varphi_x\varphi_y)$, it follows from Lemma 1.2.2 of \cite{Kurt_thesis} that $G_{\Lambda_N}$ solves the following discrete boundary value problem: for $x\in \Lambda_N$ \begin{equation}\label{eq:cov} \left\{\begin{array}{lr} (-\Delta+ \kappa\Delta^2) G_{\Lambda_N}(x,y) = \delta_x(y)& y\in \Lambda_N\\ G_{\Lambda_N}(x,y) = 0 & y\notin \Lambda_N\end{array}.\right. \end{equation} To describe the main results we need some elliptic operators. We first introduce them and the corresponding Dirichlet problem. Let $L$ denote one of the following three elliptic operators: \begin{equation}\label{def:L} L=\begin{cases}-\Delta_c,\\ \Delta_c^2,\\ -\Delta_c +\Delta_c^2,\end{cases}\end{equation} where $\Delta_c$ is the Laplace operator defined by $\Delta_c=\sum_{i=1}^d\frac{\partial^2}{\partial x^2_i}$. We consider the following continuum Dirichlet problem: \begin{equation}\label{eqa:continuum} \begin{cases} Lu(x) = f(x)& x\in D\\ D^\alpha u(x)=0& \lvert\alpha\rvert\leq m-1,\,x\in \partial D.
\end{cases} \end{equation} where $\alpha=(\alpha_1,\ldots,\alpha_d)$ is a multi-index with $\alpha_i$'s being non-negative integers, $|\alpha|:=\sum_{i=1}^d \alpha_i$, $D^\alpha$ is defined in~\eqref{eq:def_D^alpha}, $m=1$ if $L=-\Delta_c$ and $m=2$ in the other cases. \subsection{Lower dimensional results} We first present the results in lower dimensions where we show that convergence takes place in the space of continuous functions. In this case we consider $D=(0,1)^d$. Also here, according to the behavior of $\kappa$ as $N\to\infty$ we have three different limits. To verify the convergence in the space of continuous functions we shall need to continuously interpolate the discrete model. In $d=1$ the linear interpolation gives a continuous process but for higher dimensions there might be many ways. We stick to the following natural way. We will need this interpolation in $d=2$ and $3$ when $\kappa\gg N^2$ or $\kappa\sim 2dN^2$. We define the continuous interpolation $\{\Psi_N\}_{N\in\N}$ in the following fashion: \begin{itemize} \item For $d=1$ and $t\in\overline D$ \begin{align} \Psi_N(t)=\mathbf{c}_N(1)\left[\varphi_{\floor{Nt}} + (Nt-\floor{Nt})(\varphi_{\floor{Nt}+1}-\varphi_{\floor{Nt}})\right].\label{eq:intd=1} \end{align} \item For $d=2$ and $t=(t_1,t_2)\in \overline D$ \begin{align} \Psi_{N}(t)&=\mathbf{c}_N(2)\left[\varphi_{\floor {Nt}}+\{Nt_i\}\left(\varphi_{\floor {Nt}+e_i}-\varphi_{\floor {Nt}}\right)\right. \nonumber\\ &+\left.\{Nt_j\}\left(\varphi_{\floor {Nt}+e_i+e_j}-\varphi_{\floor {Nt}+e_i}\right)\right] ,\,\quad\text{if}\,\{Nt_i\}\ge\{Nt_j\}\label{eq:intd=2} \end{align} where $i,\,j\in\{1,\,2\}$, $i\neq j$.
\item For $d=3$ and $t=(t_1,t_2,t_3)\in \overline D$ \begin{align} \Psi_{N}(t)&=\mathbf{c}_N(3)\left[\varphi_{\floor {Nt}}+\{Nt_i\}\left(\varphi_{\floor {Nt}+e_i}-\varphi_{\floor {Nt}}\right)\right.\nonumber \\ &+\{Nt_j\}\left(\varphi_{\floor {Nt}+e_i+e_j}-\varphi_{\floor {Nt}+e_i}\right)\nonumber\\ &+\left.\{Nt_k\}\left(\varphi_{\floor {Nt}+e_i+e_j+e_k}-\varphi_{\floor {Nt}+e_i+e_j}\right)\right],\quad\,\text{ if }\{Nt_i\}\ge\{Nt_j\}\ge\{Nt_k\} \label{eq:intd=3} \end{align} where $i,\,j,\,k\in \{1,\,2,\,3\}$ are pairwise different. Here $(e_i)_{i=1}^d$ denotes the standard basis for $\R^d$ and $\mathbf{c}_N(d),\,d=1,\,2,\,3$, are scaling factors which are specified in the following result. \end{itemize} \begin{theorem}\label{thm:main2} We have the following convergence results. \begin{enumerate}[ref=(\arabic*)] \item\label{thm:MM2} $\kappa \gg N^2$. Let $1 \le d \le 3$. Define a continuously interpolated field $\Psi_N$ as in \eqref{eq:intd=1}, \eqref{eq:intd=2} and \eqref{eq:intd=3} with $$\mathbf{c}_N(d)= (2d)^{-1}\sqrt{\kappa}N^{\frac{d-4}{2}}.$$ Then we have, as $N\to\infty$, that the field $\Psi_N$ converges in distribution to $\Psi^{\Delta^2}$ in the space of continuous functions on $\overline D$, where $\Psi^{\Delta^2}$ is defined to be the centered continuous Gaussian process on $\overline D$ with covariance $G_D(\cdot,\,\cdot)$, the Green's function for the following biharmonic Dirichlet problem: \begin{align} \begin{cases} \Delta_c^2 u(x) = f(x), & x\in D\\ D^\alpha u(x)=0, &\forall \,|\alpha|\le 1,\, x\in\partial D. \label{eq:mm_continuum} \end{cases} \end{align} \item \label{thm:MIXED2} $\kappa \sim 2d N^2$. Let $1 \le d \le 3$.
Define a continuously interpolated field $\Psi_N$ as in \eqref{eq:intd=1}, \eqref{eq:intd=2} and \eqref{eq:intd=3} with $$\mathbf{c}_N(d)= (2d)^{-1}\sqrt{\kappa}N^{\frac{d-4}{2}}.$$ Define $\Psi^{-\Delta + \Delta^2}$ to be the continuous Gaussian process in $\overline D$ with covariance $G_D(\cdot,\,\cdot)$, where $G_D$ is the Green's function for the problem \begin{align*} \begin{cases} (-\Delta_c + \Delta_c^2) u(x) = f(x), & x\in D\\ D^\alpha u(x)=0, &\forall \,|\alpha|\le 1,\, x\in\partial D. \label{eq:mixed_continuum} \end{cases} \end{align*} Then $\Psi_N$ converges in distribution to the field $\Psi^{-\Delta + \Delta^2}$ in the space of continuous functions on $\overline D$. \item \label{thm:GFF2} $\kappa \ll N^{2}$. Let $d=1$. Define the continuously interpolated field $\Psi_N$ as in \eqref{eq:intd=1} with $$\mathbf{c}_N(1)=(2d)^{-\frac12}N^{-\frac12}.$$ Then as $N\to\infty$, $\Psi_N$ converges in distribution to the Brownian bridge, $\Psi^{-\Delta}$, in the space of continuous functions on $\overline D$. \end{enumerate} \end{theorem} \begin{remark} When $\kappa_1=0$ and $\kappa_2=1$ in \eqref{eq:Ham_def} the $d=1$ case was first studied in~\cite{CarJDScaling}, where they showed that the limiting distribution is given by an integrated Brownian bridge (for a more precise definition see Theorem 1.2 of~\cite{CarJDScaling}). The higher dimensional case was studied in~\cite{mm_scaling}. It was shown in~\cite{mm_scaling} that for $d=2,\,3$ the discrete membrane model converges to a Gaussian process with continuous paths and the methods in that article can be seen to be valid in $d=1$ also. By uniqueness of the limit in $C[0,1]$ it follows that the limiting Gaussian process in $d=1$ for the case $\kappa\gg N^2$ (Theorem~\ref{thm:main2}~\ref{thm:MM2}) can be described using the integrated Brownian bridge, the limit matching that of~\cite{CarJDScaling}. 
\end{remark} \subsection{Higher dimensional results} We now present the results in higher dimensions, where we show convergence in the space of distributions. In order to make our statements precise, we need to introduce three (negative order) Sobolev spaces denoted respectively as $\mathcal H^{-s}_{\Delta^2}(D)$, $\mathcal H^{-s}_{-\Delta+\Delta^2}(D)$ and $\mathcal H^{-s}_{-\Delta}(D)$\footnote{We shall use $\Delta$ in the subscript of the spaces and the norms instead of $\Delta_c$ to ease notation.}. We now recall some basic notation for Sobolev spaces, together with some facts about the eigenvalues of the elliptic operators involved in our problem. \subsubsection{Basics of Sobolev spaces} Let us first describe the standard Sobolev space. Let $C_c^\infty(D)$ denote the space of infinitely differentiable functions $u: D\to \R$ with compact support inside $D$. For $\alpha= (\alpha_1, \,\ldots,\, \alpha_d)$ a multi-index define \begin{equation}\label{eq:def_D^alpha} D^\alpha u= \frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}}\cdots \frac{\partial^{\alpha_d}}{\partial x_d^{\alpha_d}} u. \end{equation} Suppose $f, \,g\in L^1_{loc}(D)$. We say that $g$ is the $\alpha$-th weak partial derivative of $f$ (written $D^\alpha f= g$) if $$\int_{D} f D^\alpha u \De x= (-1)^{|\alpha|} \int_{D} g u \De x \quad\forall\, u \in C_c^\infty(D).$$ The Sobolev space $W^{k,p}$ is defined in the usual way as $$W^{k,p}= \{ f\in L^1_{loc}(D) : \,D^\alpha f\in L^p(D), \, |\alpha|\le k\}.$$ Denote by $H^k(D):= W^{k,2}(D)$, $k=0,\,1,\,\ldots$, which is a Hilbert space with norm $$\|f\|_{H^k(D)} = \left( \sum_{|\alpha|\le k}\int_{D} |D^\alpha f|^2\De x\right)^{1/2}.$$ Note that if $a>b$ then $H^a(D)\subset H^b(D)$. Let us define another Hilbert space, $$H^k_0(D):= \overline{ C_c^{\infty}(D)}^{\|\cdot\|_{H^k(D)}}$$ and let $H^{-k}(D)= [ H^k_0(D)]^*$ be its dual.
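As an aside, the piecewise linear interpolation \eqref{eq:intd=3} is linear interpolation over the Kuhn triangulation of the unit lattice cells: starting from $\floor{Nt}$ one moves along the coordinate directions in decreasing order of the fractional parts $\{Nt_i\}$. A minimal Python sketch (ours, for illustration only, with the scaling factor $\mathbf{c}_N(3)$ set to $1$) may clarify the bookkeeping; a convenient consistency check is that affine fields are reproduced exactly:

```python
import math

# Sketch (ours) of the simplex interpolation (eq:intd=3), with the scaling
# factor c_N(3) set to 1; `phi` is any function on Z^3.
def interp_d3(phi, t, N):
    base = [math.floor(N * ti) for ti in t]         # floor(Nt)
    frac = [N * ti - b for ti, b in zip(t, base)]   # fractional parts {Nt_i}
    # Visiting coordinates in decreasing order of the fractional parts
    # selects the simplex of the triangulation containing Nt; this is the
    # case distinction {Nt_i} >= {Nt_j} >= {Nt_k} of (eq:intd=3).
    order = sorted(range(3), key=lambda i: frac[i], reverse=True)
    v, val = list(base), phi(tuple(base))
    for i in order:
        w = list(v)
        w[i] += 1
        val += frac[i] * (phi(tuple(w)) - phi(tuple(v)))
        v = w
    return val

# Consistency check: affine fields are interpolated exactly.
a, b = (2.0, -1.0, 3.0), 0.5
phi = lambda x: a[0] * x[0] + a[1] * x[1] + a[2] * x[2] + b
t, N = (0.37, 0.82, 0.55), 10
exact = a[0] * N * t[0] + a[1] * N * t[1] + a[2] * N * t[2] + b
assert abs(interp_d3(phi, t, N) - exact) < 1e-9
```

The sorting step is exactly what makes the interpolated field continuous across the faces shared by adjacent simplices.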
\subsubsection{Continuum membrane model} We briefly give the definition of the Sobolev space $\mathcal H^{-s}_{\Delta^2}(D)$ and the continuum membrane model. For a more detailed discussion see \cite{mm_scaling}. By the spectral theorem for compact self-adjoint operators and elliptic regularity one can show that there exist smooth eigenfunctions $\{u_j\}_{j\in \N}$ of $\Delta_c^2$ corresponding to the eigenvalues $0<\lambda_1\le\lambda_2\le \cdots \to\infty$ such that $\{u_j\}_{j\in \N}$ is an orthonormal basis for $L^2(D)$. Now for any $s>0$ we define the following inner product on $C_c^\infty(D)$: $$\left\langle f\,,\,g\right\rangle_{s,\,\Delta^2}:=\sum_{j\in \N}\lambda_j^{s/2}\left\langle f\,,\,u_j\right\rangle_{L^2}\left\langle u_j\,,\,g\right\rangle_{L^2}.$$ Then $\mathcal H^{s}_{\Delta^2, 0}(D)$ is defined to be the Hilbert space completion of $C_c^\infty(D)$ with respect to this inner product. We define $\mathcal H^{-s}_{\Delta^2}(D)$ to be its dual and the dual norm is denoted by $\| \cdot \|_{-s,\,\Delta^2}$. The following definition is from \citet[Proposition~3.9]{mm_scaling} and provides a description of the continuum membrane model $\Psi^{\Delta^2}$. \begin{definition}\label{prop:series_rep_h} Let $(\xi_j)_{j\in \N}$ be a collection of i.i.d. standard Gaussian random variables. Set \[ \Psi^{\Delta^2}:=\sum_{j\in \N}\lambda_j^{-1/2}\xi_j u_j. \] Then $\Psi^{\Delta^2}\in \mathcal H^{-s}_{\Delta^2}(D)$ a.s. for all $s>({d-4})/2$ and is called the continuum membrane model. \end{definition} \subsubsection{Continuum mixed model} We define the space $\mathcal H^{-s}_{-\Delta+\Delta^2}(D)$ analogously to $\mathcal H^{-s}_{\Delta^2}(D)$. One can find smooth eigenfunctions $\{v_j\}_{j\in\mathbb N}$ of $-\Delta_c+\Delta^2_c$ corresponding to eigenvalues $0<\mu_1\le \mu_2\le \cdots\to \infty$ such that $\{v_j\}_{j\in \mathbb N}$ is an orthonormal basis of $L^2(D)$. 
One can define, for $s>0$, the following inner product for functions from $C_c^\infty(D)$: $$\left\langle f, g\right\rangle_{s,\,-\Delta + \Delta^2}:=\sum_{j\in \mathbb N} \mu_j^{s/2}\left\langle f, v_j\right\rangle_{L^2}\left\langle v_j, g\right\rangle_{L^2}.$$ Let $\mathcal H^{s}_{-\Delta+\Delta^2, 0}(D)$ be the completion of $C_c^\infty(D)$ with the above inner product and $\mathcal H^{-s}_{-\Delta+\Delta^2}(D)$ be its dual. The dual norm is denoted by $\|\cdot\|_{-s,\,-\Delta + \Delta^2}$. We describe the details of this space in Appendix~\ref{appendix:B}. The statement underlying the following definition is proved as Proposition~\ref{prop:mixed_series_rep_h} in Appendix~\ref{appendix:B}. \begin{definition} Let $(\xi_j)_{j\in \N}$ be a collection of i.i.d. standard Gaussian random variables. Set \[ \Psi^{-\Delta+\Delta^2}:=\sum_{j\in \N}\mu_j^{-1/2}\xi_j v_j. \] Then $\Psi^{-\Delta+\Delta^2}\in \mathcal H^{-s}_{-\Delta+\Delta^2}(D)$ a.s. for all $s>({d-4})/2$ and is called the continuum mixed model. \end{definition} \subsubsection{Gaussian free field} Here too we briefly give the definition of the Sobolev space $\mathcal H^{-s}_{-\Delta}(D)$ and the Gaussian free field. For a detailed discussion see \cite{mixed_scaling}. By the spectral theorem for compact self-adjoint operators and elliptic regularity we know that there exist smooth eigenfunctions $(w_j)_{j\in\N}$ of $-\Delta_c$ corresponding to the eigenvalues $0<\nu_1\le\nu_2\le\cdots \to\infty$ such that $(w_j)_{j\ge 1}$ is an orthonormal basis of $L^2(D)$. Now for any $s>0$ we define the following inner product on $C_c^\infty(D)$: $$\langle f\,,\,g \rangle_{s,\,-\Delta}:=\sum_{j\in\N} \nu_j^s\langle f\,,\,w_j\rangle_{L^2}\langle w_j\,,\,g\rangle_{L^2} .$$ Then $\mathcal H^s_{-\Delta, 0}(D)$ can be defined to be the completion of $C_c^\infty(D)$ with respect to this inner product. We define $\mathcal H^{-s}_{-\Delta}(D)$ to be its dual and the dual norm is denoted by $\| \cdot \|_{-s,\,-\Delta}$.
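For orientation, in $d=1$ with $D=(0,1)$ the spectral data of $-\Delta_c$ are completely explicit: $w_j(x)=\sqrt{2}\sin(j\pi x)$ and $\nu_j=(j\pi)^2$, in line with the growth $\nu_j\sim Cj^{2/d}$ used in the tightness arguments below. The same sine vectors are exact eigenvectors of the discrete Dirichlet Laplacian, with eigenvalues $(4/h^2)\sin^2(j\pi h/2)\to(j\pi)^2$ as $h\to 0$. A short Python check (all identifiers are ours):

```python
import math

# d = 1, D = (0,1): eigenfunctions of -Delta_c with Dirichlet boundary data
# are w_j(x) = sqrt(2) sin(j pi x), with eigenvalues nu_j = (j pi)^2.
# The sine vectors are exact eigenvectors of the discrete Laplacian -Delta_h
# on the grid {h, ..., 1-h}, h = 1/n, with eigenvalues (4/h^2) sin^2(j pi h/2).

def discrete_eigenvalue(j, n):
    h = 1.0 / n
    return (4.0 / h**2) * math.sin(j * math.pi * h / 2.0) ** 2

def eigen_residual(j, n):
    # max_k | (-Delta_h v)_k - lambda_j v_k |  for  v_k = sin(j pi k h)
    h = 1.0 / n
    v = [math.sin(j * math.pi * k * h) for k in range(n + 1)]  # v_0 = v_n = 0
    lam = discrete_eigenvalue(j, n)
    return max(abs((2 * v[k] - v[k - 1] - v[k + 1]) / h**2 - lam * v[k])
               for k in range(1, n))

for j in (1, 2, 5):
    assert eigen_residual(j, 200) < 1e-7              # exact eigenvectors
    nu = (j * math.pi) ** 2
    assert abs(discrete_eigenvalue(j, 200) - nu) / nu < 1e-3  # -> nu_j as h -> 0
```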
We give the definition of the Gaussian free field next. \begin{definition}[{\citet[Proposition~10]{mixed_scaling}}]\label{prop:series_rep_gff} Let $(\xi_j)_{j\in \N}$ be a collection of i.i.d. standard Gaussian random variables. Set \[ \Psi^{-\Delta}:=\sum_{j\in \N}\nu_j^{-1/2}\xi_j w_j. \] Then $\Psi^{-\Delta}\in \mathcal H^{-s}_{-\Delta}(D)$ a.s. for all $s>{d}/2 -1$ and is called the Gaussian free field. \end{definition} \begin{remark} We define different spaces with respect to different eigenfunctions of the operators. It is not clear to us if these spaces coincide for a general domain. We are not aware of a result which gives the norm equivalence between the spaces $\mathcal H^s_{\Delta^2, 0}(D)$, $\mathcal H^s_{-\Delta+\Delta^2, 0}(D)$ and $\mathcal H^s_{-\Delta, 0}(D)$. In this article we are not pursuing this line of research; what is important for us are the specific norms that determine the limiting variance of the discrete fields. \end{remark} \begin{remark} Note that we have used the same notation for the fields in higher as well as in lower dimensions, although they do not live in the same spaces. The relation between the fields comes through the Dirichlet problem. For $f\in C_c^\infty(D)$, one can easily show that $$\mathbf{E}[(\Psi^{L}, f)^2]=\iint_{D\times D} G_L(x,y) f(x)f(y)\De x\De y$$ where $\Psi^{L}$ is one of the three fields associated to the elliptic operator $L$ as in \eqref{def:L} and $G_L$ is the Green's function of the Dirichlet problem~\eqref{eqa:continuum}. \end{remark} We are now ready to state our main results in the higher dimensional case. \begin{theorem}\label{thm:main} Assume that $D$ has smooth boundary. Depending on the behavior of $\kappa$ as $N\to\infty$ we have the following three convergence results. \begin{enumerate}[ref=(\arabic*)] \item\label{thm:MM} $\kappa \gg N^2$. Let $d\ge 4$.
Define $\Psi_N$ by \begin{align}(\Psi_N,\,f):=(2d)^{-1}\sqrt{\kappa}N^{-\frac{d+4}2}\sum_{x\in \frac1N\Lambda_N }\varphi_{Nx}f(x) ,\quad f\in \mathcal H^s_{\Delta^2, 0}(D).\label{eq:mm_psi_N} \end{align} Then we have, as $N\to \infty$, that the field $\Psi_N$ converges in distribution to the continuum membrane model $\Psi^{\Delta^2}$ in the topology of $\mathcal H^{-s}_{\Delta^2}(D)$ for $s>s_d$, where \begin{equation}\label{eq:sd} s_d:=\frac{d}{2} + 2\left(\ceil*{ \frac{1}{4}\left(\floor*{\frac{d}{2}}+1\right)} + \ceil*{ \frac{1}{4}\left(\floor*{\frac{d}{2}}+6\right)} -1\right). \end{equation} \item \label{thm:MIXED} $\kappa \sim 2d N^2$. Let $d\ge 4$. Define $\Psi_N$ by \begin{align}(\Psi_N,\,f):=(2d)^{-1}\sqrt{\kappa}N^{-\frac{d+4}2}\sum_{x\in \frac1N\Lambda_N }\varphi_{Nx}f(x) ,\quad f\in \mathcal H^s_{-\Delta+\Delta^2, 0}(D).\label{eq:mixed_psi_N} \end{align} Then, as $N\to \infty$, the field $\Psi_N$ converges in distribution to $\Psi^{-\Delta+\Delta^2}$ in the topology of $\mathcal H^{-s}_{-\Delta+\Delta^2}(D)$ for $s>s_d$ where $s_d$ is as in~\eqref{eq:sd}. \item \label{thm:GFF} $\kappa \ll N^{2}$. Let $d\ge 2$. Define $\Psi_N$ by \begin{align}(\Psi_N,\,f):=(2d)^{-\frac12}N^{-\frac{d+2}2}\sum_{x\in \frac1N\Lambda_N }\varphi_{Nx}f(x) ,\quad f\in \mathcal H^s_{-\Delta, 0}(D).\label{eq:gff_psi_N} \end{align} Then, as $N\to \infty$, the field $\Psi_N$ converges in distribution to the Gaussian free field $\Psi^{-\Delta}$ in the topology of $\mathcal H^{-s}_{-\Delta}(D)$ for $s>{d}/2 + \floor{{d}/2}+2$. \end{enumerate} \end{theorem} \begin{remark} Note that the convergence takes place in a larger Sobolev space than where the field is defined. The appearance of $s_d$ in~\eqref{eq:sd} is due to the tightness proof. We believe that sharp results on convergence, in particular on the index $s_d$, could be obtained with other methods. However we do not pursue optimality results in the present article. 
\end{remark} \subsection{Main ingredients in the proofs}\label{main_ingredient} We prove both Theorem~\ref{thm:main2} and Theorem~\ref{thm:main} by first showing finite-dimensional convergence and then tightness. As the measures are Gaussian with mean zero, the finite-dimensional convergence follows from the convergence of the covariance. However, the covariance of the model is not known explicitly, so we resort to finite difference schemes to achieve both goals. The key fact which allows us to employ PDE techniques is that the covariance satisfies the discrete boundary value problem \eqref{eq:cov}. For the proof of our main theorems we will compute in Theorem~\ref{approx_result} the magnitude of the error one commits in approximating the solution of the Dirichlet problem~\eqref{eqa:continuum} by its discrete counterpart. In the present section we only state the error estimate, leaving the proof to Section~\ref{proof_approx_result}. Let $D$ be any bounded domain in $\mathbb{R}^d$ satisfying the uniform exterior ball condition (UEBC), which states that there exists $r>0$ such that for any $z\in\partial D$ there is a ball $B_r(c)$ of radius $r$ with center at some point $c$ satisfying $\overline{B_r(c)}\cap \overline D = \{z\}$. We mention here that any domain with $C^2$ boundary satisfies the UEBC. Let $h>0$. We will call the points in $h\mathbb{Z}^d$ the grid points in $\mathbb{R}^d$. We consider $L_h$ to be a discrete approximation of $L$ given by \begin{align}\label{discrete_op} L_hu=\left\lbrace \begin{array}{l l l} (-\Delta_h+\rho_1(h)\Delta^2_h)u& \quad \text{if $L=-\Delta_c$}\\ (-\rho_2(h)\Delta_h+\Delta^2_h)u& \quad \text{if $L=\Delta_c^2$}\\ (-\Delta_h+ \rho_3(h)\Delta^2_h)u& \quad \text{if $L=-\Delta_c +\Delta_c^2$} \end{array} \right.
\end{align} where $\Delta_h$ is defined by $$\Delta_hu(x):=\frac1{h^2}\sum_{i=1}^d (u(x+he_i)+u(x-he_i)-2u(x)),$$ $u$ is any function on $h\mathbb{Z}^d$ (called a grid function) and $\rho_i(h)$ are positive functions of $h$ such that $$\lim_{h\to 0}\rho_i(h)=\begin{cases} 0 & i=1,\,2\\ 1& i=3\end{cases}.$$ Let $D_h$ be the set of grid points in $\overline D$, i.e. $D_h=\overline D\cap h\mathbb{Z}^d$. For any grid point $x$ we define the points $x\pm he_i,\,x\pm h(e_i\pm e_j)$ with $1\le i,j\le d$ to be its neighbors. We say that $x$ is an interior grid point in $D_h$ if all its neighbors are in $D_h$. Let $R_h$ be the set of interior grid points in $D_h$ and $B_h:= D_h\setminus R_h $ be the set of grid points near the boundary. We divide $R_h$ further into $R^*_h$ and $B^*_h$, where $R^*_h$ is the set of $x$ in $R_h$ such that all its neighbors are in $R_h$ and $B^*_h$ is the set of remaining points in $R_h$. Thus we have $$D_h=B_h\cup R_h=B_h\cup B^*_h\cup R^*_h.$$ Denote by $\mathcal{D}_h$ the set of grid functions vanishing outside $R_h$. For a grid function $f$ we define $R_hf\in\mathcal{D}_h$ (with a slight abuse of notation we use the symbol $R_h$ also for this restriction operator) by \begin{equation} \label{eq:R_h} R_hf(x)=\begin{cases} f(x) & x\in R_h\\ 0 & x\notin R_h \end{cases}. \end{equation} Define for grid-functions vanishing outside a finite set \begin{align*} & \left\langle u\,,\,v \right\rangle_{h,\,grid} := h^d \sum_{x\in h\Z^d}u(x)v(x),\\ & \|u\|_{h,\,grid}:= \left\langle u\,,\,u \right\rangle_{h,\,grid}^{1/2}. \end{align*} We now define the finite difference analogue of the Dirichlet problem \eqref{eqa:continuum}. For given $h$, we look for a function $u_h(\cdot)$ defined on $D_h$ such that \begin{align} L_hu_h(x)=f(x), \quad x\in R_h \label{eq:discrete} \end{align} and \begin{align} u_h(x)=0 ,\quad x\in B_h. \label{eq:discrete boundary} \end{align} The uniqueness of the solution of \eqref{eq:discrete} and \eqref{eq:discrete boundary} is shown in Lemma~\ref{fact:unique_discrete}.
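To fix ideas, in $d=1$ the stencils behind $L_h$ are $(-1,2,-1)/h^2$ for $-\Delta_h$ and $(1,-4,6,-4,1)/h^4$ for $\Delta_h^2$; the former is exact on quadratics and the latter on quartics, and the leading consistency error of $\Delta_h$ is of order $h^2$, which is the elementary source of the $M_5^2h^2$-type terms in the error bounds below. A quick numerical sketch (in Python; the identifiers are ours):

```python
# One-dimensional stencils behind L_h (identifiers are ours):
# -Delta_h has stencil (-1, 2, -1)/h^2, Delta_h^2 has (1, -4, 6, -4, 1)/h^4.
def delta_h(u, x, h):
    return (u(x + h) + u(x - h) - 2.0 * u(x)) / h**2

def bilap_h(u, x, h):
    return (u(x - 2*h) - 4*u(x - h) + 6*u(x) - 4*u(x + h) + u(x + 2*h)) / h**4

h, x = 0.1, 0.3
# Delta_h is exact on quadratics: Delta_h x^2 = 2 = (x^2)''
assert abs(delta_h(lambda y: y * y, x, h) - 2.0) < 1e-10
# Delta_h^2 is exact on quartics: Delta_h^2 x^4 = 24 = (x^4)''''
assert abs(bilap_h(lambda y: y**4, x, h) - 24.0) < 1e-8
# Delta_h on x^4 carries the O(h^2) consistency error: 12 x^2 + 2 h^2
assert abs(delta_h(lambda y: y**4, x, h) - (12 * x**2 + 2 * h**2)) < 1e-10
```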
We are now ready to state the error estimate result which forms the core result of this article. \begin{theorem}\label{approx_result} Depending on $L$ we have the following error bounds. \begin{enumerate}[ref=(\arabic*)] \item\label{thm:mm_one}$L=\Delta_c^2$. Let $u\in {C}^5(\overline{D})$ be the solution of the Dirichlet problem \eqref{eqa:continuum}. If $e_h:=u-u_h$ then we have for all sufficiently small $h$ $$\lVert R_he_h\rVert_{h,\,grid}^2\le C\left[M_5^2h^2 + M_2^2(\rho_2(h))^2 + M_2^2 h\right].$$ \item\label{thm:mixed_one} $L=-\Delta_c + \Delta_c^2$. Let $u\in {C}^5(\overline{D})$ be the solution of the Dirichlet problem~\eqref{eqa:continuum}. If $e_h:=u-u_h$ then we have for all sufficiently small $h$ \begin{align*}\lVert R_he_h\rVert_{h,\,grid}^2 \le C\left[M_5^2h^2 + M_4^2(\rho_3(h)-1)^2 + M_4^2 h^4 + M_2^2 h \right].\end{align*} \item\label{thm:gff_one}$L= -\Delta_c$. Let $u\in {C}^4(\overline{D})$ be a solution of the Dirichlet problem~\eqref{eqa:continuum}. If $e_h:=u-u_h$ then for sufficiently small $h$ we have $$\|R_he_h\|_{h,\,grid}^2 \le C \left[M_4^2 \delta^4 + M_2^2 \rho_1(h) \delta + M_1^2 \delta \right],$$ where $\delta:= \max \{ h, \sqrt{\rho_1(h)}\}$. \end{enumerate} In all the cases $M_k:=\sum_{\lvert\alpha\rvert\le k}\sup_{x\in D}\lvert D^\alpha u(x)\rvert$. \end{theorem} \subsection{Open problems and discussions}\label{open_problems} In this subsection we list some open problems. \begin{enumerate} \item Let $\varepsilon\ge 0$ and consider the following pinned measure on $\R^{V_N}$, with $V_N$ being a box of side length $N$: $$\mathbf{P}_{\varepsilon,N}=\frac{1}{Z_{\varepsilon, N}}\mathrm{e}^{-H(\phi)}\prod_{x\in{V_N}}(\varepsilon \delta_0(\mathrm{d}\phi_x)+\mathrm{d}\phi_{x})\prod_{x\in\mathbb{Z}^{d}\setminus V_N}\delta_{0}(\mathrm{d}\phi_{x})$$ Here $H(\phi)$ is as in \eqref{eq:ham:mixed}. 
Let $F(\varepsilon)$ be the free energy of the above system, namely, $$F(\varepsilon)= \lim_{N\to\infty} \frac{1}{N} \log\frac{Z_{\varepsilon, N}}{Z_{0,N}}.$$ If $F(\varepsilon)>0$ then the above pinned measure is said to be {\it localized}, otherwise it is {\it delocalized}. We call $\varepsilon_c$ the supremum of all $\varepsilon$ for which the measure is delocalized. It would be interesting to see if the above model with $\kappa_1$ and $\kappa_2$ depending on $N$ shows a phase transition with respect to localization. The case when $\kappa_1$ and $\kappa_2$ do not depend on $N$ was studied in \cite{CarBor}. The case of $\kappa_1=0$ and $d=1$ was extensively studied in the literature, see~\cite{CaravennaDeuschel_pin, CarJDScaling}. \item Extremes of interface models also remain to be investigated. From Theorem~\ref{thm:main2} it follows that the maximum of the $(\nabla+\Delta)$-model with varying coefficients converges after appropriate rescaling to the supremum of a Gaussian process. We summarise the cases in which we are able to identify the limiting rescaled maximum: \begin{itemize} \item $\kappa\ll N^{2}$ and $d=1$; \item $\kappa\sim 2d N^2$ and $d=1,\,2,\,3$; \item $\kappa\gg N^2$ and $d=1,\,2,\,3$; \end{itemize} The remaining cases are open, and it would be interesting to see if the existing methods can be pushed to cover other dimensions. The challenge in this problem arises because the behavior of the Green's function is hard to determine. A similar situation was recently handled by \cite{schweiger:2019} to determine the extremes of the four-dimensional membrane model. He derived estimates for the Green's function and applied the methods of \cite{ding:roy:ofer} to show that the limit of the maximum is a shifted Gumbel distribution. \end{enumerate} \section{\bf Proof of Theorem~\ref{thm:main}}\label{sec:main_proofs} We now give the proof of each of the three parts of Theorem~\ref{thm:main}.
\subsection{Proof of finite dimensional convergence} We first show that for $f\in C_c^\infty(D)$ \begin{equation} \label{eq:mm_toshowfdm} ( \Psi_N, f)\overset{d}\longrightarrow \begin{cases} (\Psi^{\Delta^2}\,,\,f) & \kappa \gg N^2\\ (\Psi^{-\Delta+\Delta^2}, f) & \kappa \sim 2d N^2\\ (\Psi^{-\Delta}, f) & \kappa \ll N^{2} \end{cases}.\end{equation} We begin by noting that $(\Psi_N, f)$ is a centered Gaussian random variable. Hence to show the above convergence it is enough to show that $\mathbf{Var}(\Psi_N, f)$ converges to the variance of the Gaussian on the right hand side of~\eqref{eq:mm_toshowfdm}. We denote $G_{\frac1N}(x,y):=\mathbf{E}_{\Lambda_N}[ \varphi_{Nx}\varphi_{Ny}]$. Note that by ~\eqref{eq:cov}, we have for all $x\in \frac1N\Lambda_N$, \begin{equation} \kappa\gg N^2:\qquad \begin{cases} \left(-\frac{2dN^2}{\kappa}\Delta_{\frac1N}+\Delta_{\frac1N}^2\right) G_{\frac1N}(x,y) =\frac{4d^2N^4}{\kappa}\delta_{x}(y), & y\in \frac1N\Lambda_N \\ G_{\frac1N}(x,y) = 0 & y\notin \frac1N\Lambda_N\label{eq:mm_G_N} \end{cases} \end{equation} \begin{align} \kappa\sim 2d N^2: \qquad \begin{cases} \left(-\Delta_{\frac1N}+\frac{\kappa}{2dN^2}\Delta_{\frac1N}^2\right) G_{\frac1N}(x,y) =2dN^2\delta_{x}(y), & y\in \frac1N\Lambda_N \\ G_{\frac1N}(x,y) = 0 & y\notin \frac1N\Lambda_N\label{eq:mixed_G_N} \end{cases} \end{align} \begin{equation} \kappa\ll N^{2}:\qquad \begin{cases} \left(-\Delta_{\frac1N}+\frac{\kappa}{2dN^2}\Delta_{\frac1N}^2\right) G_{\frac1N}(x,y) =2dN^2\delta_{x}(y), & y\in \frac1N\Lambda_N \\ G_{\frac1N}(x,y) = 0 & y\notin \frac1N\Lambda_N. 
\end{cases}\label{eq:gff_G_N} \end{equation} Considering all three cases we can rewrite the variance as $$\mathbf{Var}[(\Psi_N, f)]=N^{-d}\sum_{x\in \frac1N\Lambda_N} H_N(x) f(x)$$ where for $x\in \frac1N D_N$, $$H_N(x)=\begin{cases} (2d)^{-2}\kappa N^{-4}\sum_{y\in \frac1N\Lambda_N}G_{\frac1N}(x,y) f(y)& \kappa\gg N^2\\ (2d)^{-2}\kappa N^{-4}\sum_{y\in \frac1N\Lambda_N}G_{\frac1N}(x,y) f(y) & \kappa\sim 2d N^2\\ (2d)^{-1}N^{-2}\sum_{y\in \frac1N\Lambda_N}G_{\frac1N}(x,y) f(y) & \kappa\ll N^{2}. \end{cases}$$ It is immediate from \eqref{eq:mm_G_N}, \eqref{eq:mixed_G_N}, \eqref{eq:gff_G_N} that $H_N$ is the solution of the following Dirichlet problem: \begin{align} \kappa\gg N^2: \qquad \begin{cases} \left(-\frac{2dN^2}{\kappa}\Delta_{\frac1N}+\Delta_{\frac1N}^2\right) H_N(x) = f(x), & x\in \frac1N\Lambda_N\\ H_N(x)= 0, & x\notin \frac1N\Lambda_N\label{mm_H_N} \end{cases} \end{align} \begin{align} \kappa\sim 2dN^2: \qquad \begin{cases} \left(-\Delta_{\frac1N}+ \frac{\kappa}{2dN^2}\Delta_{\frac1N}^2\right) H_N(x) = f(x) & x\in \frac1N\Lambda_N\\ H_N(x)= 0 & x\notin \frac1N\Lambda_N\label{mixed_H_N} \end{cases} \end{align} \begin{align} \kappa\ll N^{2}: \qquad \begin{cases} \left(-\Delta_{\frac1N}+\frac{\kappa}{2dN^2}\Delta_{\frac1N}^2\right)H_N(x) = f(x), & x\in \frac1N\Lambda_N\\ H_N(x)= 0, & x\notin \frac1N\Lambda_N.\label{gff_H_N} \end{cases} \end{align} Observe that we get the discrete Dirichlet problem involving the operator $L_h$ defined in \eqref{discrete_op} with $h=1/N$ and $$\rho_1(h):=\frac{\kappa h^2}{2d},\quad \rho_2(h):= \frac{2d}{\kappa h^2}, \quad \rho_3(h):=\frac{\kappa h^2}{2d}.$$ We now recall the continuum Dirichlet problem~\eqref{eqa:continuum} with the elliptic operator $L$ as in~\eqref{def:L}: \begin{equation*} \begin{cases} Lu(x) = f(x)& x\in D\\ D^\alpha u(x)=0& \lvert\alpha\rvert\leq m-1,\,x\in \partial D, \end{cases} \end{equation*} where $m=1$ if $L=-\Delta_c$ and $m=2$ in the other two cases.
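As a concrete sanity check on \eqref{gff_H_N} in $d=1$: if one drops the $\Delta^2_{\frac1N}$-term altogether (the formal limit $\rho_1\to 0$) and normalises the right-hand side as $h^{-1}\mathbf 1_{\{x=y\}}$, the discrete Green's function of $-\Delta_h$ with zero boundary data equals $\min\{x,y\}-xy$, the Brownian bridge covariance, exactly at the grid points and not merely in the limit. A Python sketch (the solver and all names are ours):

```python
def solve_tridiag(sub, diag, sup, rhs):
    # Thomas algorithm for a tridiagonal system (no pivoting, which is
    # fine here since the matrix is symmetric positive definite).
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / denom if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def bridge_green(n, m):
    # Solve (-Delta_h g)(kh) = (1/h) 1_{k = m},  g(0) = g(1) = 0,  h = 1/n.
    h = 1.0 / n
    sub = [-1.0 / h**2] * (n - 1)
    diag = [2.0 / h**2] * (n - 1)
    sup = [-1.0 / h**2] * (n - 1)
    rhs = [0.0] * (n - 1)
    rhs[m - 1] = 1.0 / h
    return solve_tridiag(sub, diag, sup, rhs)

n, m = 50, 17
g = bridge_green(n, m)
y = m / n
for k in range(1, n):
    x = k / n
    # exact agreement with the Brownian bridge covariance min(x,y) - xy
    assert abs(g[k - 1] - (min(x, y) - x * y)) < 1e-10
```

This exactness is special to $d=1$ and to the pure second-difference operator; for the full operator in \eqref{gff_H_N} one only has the error bounds of Theorem~\ref{approx_result}.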
We set $L:=\Delta_c^2$ when $\kappa\gg N^2$, $L:=-\Delta_c$ when $\kappa\ll N^{2}$ and $L:=-\Delta_c+\Delta_c^2$ when $\kappa\sim 2dN^2$. Define $e_N(x)= H_N(x)-u(x)$ for $x\in \frac1N D_N$. Then from Theorem~\ref{approx_result} we have \begin{equation}\label{eq:mm_thomee} N^{-d}\sum_{x\in \frac1N\Lambda_N} e_N(x)^2\le \begin{cases} C\left( \frac1{N^2} +\frac{4d^2N^4}{\kappa^2} + \frac1N\right) & \kappa\gg N^2\\ C\left(\frac1N + \left(\frac{\kappa}{2dN^2}-1 \right)^2\right) & \kappa\sim 2d N^2\\ C \max\{\frac{1}{N},\frac{\sqrt{\kappa}}{\sqrt{2d}N}\} & \kappa\ll N^{2}. \end{cases} \end{equation} Hence we get that \begin{equation}\label{eq:varexp} \mathbf{Var}[(\Psi_N, f)]= \sum_{x\in \frac1N\Lambda_N} e_N(x) f(x)N^{-d} + \sum_{x\in \frac1N\Lambda_N} u(x) f(x) N^{-d}. \end{equation} Note that by the Cauchy--Schwarz inequality and~\eqref{eq:mm_thomee} the first term goes to zero as $N\to \infty$. For the second term we have \begin{equation}\label{eq:mm_limit} \sum_{x\in \frac1N\Lambda_N} u(x) f(x) N^{-d}\to_{N\to \infty} \int_{D} u(x) f(x)\De x. \end{equation} Notice that by integration by parts we have $$\int_{D} u(x) f(x)\De x= \begin{cases} \|u\|_{2,\,\Delta^2}^2=\|f\|_{-2,\,\Delta^2}^2 & L= \Delta_c^2\\ \|u\|^2_{2,\,-\Delta + \Delta^2} = \|f\|_{-2,\,-\Delta + \Delta^2}^2 & L= -\Delta_c+\Delta_c^2\\ \|u\|_{1, -\Delta}^2=\|f\|_{-1, -\Delta}^2 & L=-\Delta_c. \end{cases} $$ On the other hand from the definition it follows that \begin{align*} \mathbf{Var}[(\Psi^{\Delta^2}\,,\,f)]&=\sum_{j\in \N}\lambda_j^{-1} \left\langle u_j\,,\,f\right\rangle_{L^2}^2=\|f\|_{-2,\,\Delta^2}^2\\ \mathbf{Var}[(\Psi^{-\Delta+\Delta^2}\,,\,f)]&=\sum_{j\in \N}\mu_j^{-1} \left\langle v_j\,,\,f\right\rangle_{L^2}^2=\|f\|_{-2,\,-\Delta + \Delta^2}^2\\ \mathbf{Var}[(\Psi^{-\Delta}\,,\,f)]&=\sum_{j\in \N}\nu_j^{-1} \left\langle w_j\,,\,f\right\rangle_{L^2}^2=\|f\|_{-1,-\Delta}^2. \end{align*} Consequently we obtain~\eqref{eq:mm_toshowfdm}.
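The identity $\int_D u f\,\De x=\|f\|^2_{-1,-\Delta}$ used in the last step can be checked by hand in $d=1$, $D=(0,1)$: taking $f\equiv 1$ (chosen for computability; compact support is immaterial for the identity), the solution of $-u''=f$ with zero boundary data is $u(x)=x(1-x)/2$, so $\int_D uf\,\De x=1/12$, while the spectral series $\sum_j \nu_j^{-1}\langle f, w_j\rangle_{L^2}^2$ with $w_j(x)=\sqrt2\sin(j\pi x)$ and $\nu_j=(j\pi)^2$ sums to the same value. A numerical sketch (ours):

```python
import math

# d = 1, D = (0,1): sanity check of
#   Var[(Psi^{-Delta}, f)] = ||f||_{-1,-Delta}^2 = int_D u f dx,  -u'' = f,
# for f = 1: then u(x) = x(1-x)/2 and int_0^1 u dx = 1/12, while
# <f, w_j> = sqrt(2)(1 - cos(j pi))/(j pi) for w_j(x) = sqrt(2) sin(j pi x).

def spectral_sum(J):
    # partial sum of sum_j nu_j^{-1} <f, w_j>_{L^2}^2 with f = 1
    total = 0.0
    for j in range(1, J + 1):
        coeff = math.sqrt(2.0) * (1.0 - math.cos(j * math.pi)) / (j * math.pi)
        total += coeff**2 / (j * math.pi) ** 2
    return total

assert abs(spectral_sum(2000) - 1.0 / 12.0) < 1e-8
```

Only the odd-$j$ modes contribute, and the tail decays like $j^{-4}$, so the partial sums converge quickly.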
\subsection{Tightness} To show tightness we shall need the following bounds on the eigenfunctions $(u_j)_{j\in \mathbb N}$, $(v_j)_{j\in \mathbb N}$ and $(w_j)_{j\in \mathbb N}$ of $\Delta_c^2$, $-\Delta_c+\Delta_c^2$ and $-\Delta_c$ respectively. They can be obtained from the general Sobolev inequality (\citet[Chapter~5, Theorem~6 (ii)]{Evans}) and a repeated application of \citet[Corollary~2.21]{GGS}. \begin{lemma} Let $$l_k:=\ceil*{ \frac{1}{4}\left(\floor*{\frac d2}+k+1\right)},\quad k\ge 0. $$ \begin{enumerate} \item For the eigenfunctions $(u_j)_{j\in \mathbb N}$ of $\Delta_c^2$ in Problem~\eqref{eqa:continuum} there exists a constant $C>0$ such that for $k\ge 0$ \begin{equation}\label{bound0:MM} \sum_{\lvert\alpha\rvert\le k}\sup_{x\in D}\lvert D^\alpha u_j(x)\rvert \le C\lambda_j^{l_k}. \end{equation} \item For the eigenfunctions $(v_j)_{j\in \mathbb N}$ of $-\Delta_c+\Delta_c^2$ in Problem~\eqref{eqa:continuum} there exists a constant $C>0$ such that for $k\ge 0$ \begin{equation}\label{bound0:mixed} \sum_{\lvert\alpha\rvert\le k}\sup_{x\in D}\lvert D^\alpha v_j(x)\rvert \le C\mu_j^{l_k}. \end{equation} \item For the eigenfunctions $(w_j)_{j\in \mathbb N}$ of $-\Delta_c$ in Problem~\eqref{eqa:continuum} there exists a constant $C>0$ such that for $k\ge 0$ \begin{equation}\label{bound0:GFF} \sum_{\lvert\alpha\rvert\le k}\sup_{x\in D}\lvert D^\alpha w_j(x)\rvert \le C\nu_j^{\frac{\floor{\frac{d}2}+k+1}2}. \end{equation} \end{enumerate} In each instance, the constant $C$ may depend on $k$. \end{lemma} We can now begin to show tightness.\newline {\bf Case 1: $\kappa\gg N^2$.} Our goal is to show that the sequence $(\Psi_N)_{N\in \mathbb{N}}$ is tight in $\mathcal H^{-s}_{\Delta^2}(D)$ for all $s>s_d$. It is enough to show that \begin{equation}\label{mm_limsup_psi} \limsup_{N\to \infty}\mathbf{E}_{\Lambda_N}[\|\Psi_N\|_{-s,\,\Delta^2}^2]<\infty \quad \forall \, s>s_d.
\end{equation} The tightness of $(\Psi_N)_{N\in \mathbb{N}}$ would then follow immediately from \eqref{mm_limsup_psi} and the fact that, for $0\le s_1<s_2$, $\mathcal H^{-s_1}_{\Delta^2}(D)$ is compactly embedded in $\mathcal H^{-s_2}_{\Delta^2}(D)$ (for a proof of this fact see \citet[Theorem~3.15]{mm_scaling}). From the definition of dual norm it is immediate that we have $$\mathbf{E}_{\Lambda_N}\left[\|\Psi_N\|_{-s,\,\Delta^2}^2\right] \le \sum_{j\in \N}\lambda_j^{-s/2}\mathbf{E}_{\Lambda_N}[(\Psi_N\,,\,u_j)^2].$$ Note that $u=\lambda_j^{-1}u_j$ is the unique solution of \eqref{eqa:continuum} with $L=\Delta_c^2$ for $f:=u_j$. Define $e_{N,j}$ to be the error between the solution of the discrete Dirichlet problem~\eqref{mm_H_N} and the continuum one~\eqref{eqa:continuum} with input datum $f:=u_j$. Now as in~\eqref{eq:varexp} we have \begin{align} \mathbf{E}_{\Lambda_N}[(\Psi_N\,,\,u_j)^2]&= \sum_{x\in \frac1N\Lambda_N} e_{N,j}(x) u_j(x)N^{-d} + \sum_{x\in \frac1N\Lambda_N} \lambda_j^{-1}u_j(x) u_j(x) N^{-d}\nonumber\\ & \le C\sup_{x\in D}|u_j(x)| \left({N^{-d} \sum_{x\in \frac1N\Lambda_N} e_{N,j}(x)^2}\right)^{1/2}+ C\lambda_j^{-1}\left(\sup_{x\in D}|u_j(x)|\right)^2.\label{eq:varbound} \end{align} Using Theorem \ref{approx_result} \ref{thm:mm_one} along with the bounds~\eqref{bound0:MM} we obtain \begin{align*} \mathbf{E}_{\Lambda_N}[(\Psi_N\,,\,u_j)^2]&\le C \lambda_j^{l_0} \left[\lambda_j^{2l_5-2}N^{-2} + \lambda_j^{2l_2-2} 4d^2N^4\kappa^{-2}+\lambda_j^{2l_2-2} N^{-1}\right]^{\frac12} + C\lambda_j^{2l_0-1}\\ &\le C \lambda_j^{l_0 + l_5-1}. \end{align*} Therefore we have \begin{align*} \mathbf{E}_{\Lambda_N}\left[\|\Psi_N\|_{-s,\,\Delta^2}^2\right]\le C \sum_{j\in \N}\lambda_j^{-\frac s2}\lambda_j^{l_0 + l_5-1}. 
\end{align*} Thus $$\limsup_{N\to \infty}\mathbf{E}_{\Lambda_N}[\|\Psi_N\|_{-s,\,\Delta^2}^2]<\infty \qquad \text{ if } \qquad \sum_{j\in \N}\lambda_j^{-\frac s2+l_0 + l_5-1}<\infty.$$ Now using $\lambda_j\sim c(d) j^{4/d}$ (see Proposition~3.8 of \cite{mm_scaling}) we obtain that $\sum_{j\in \N}\lambda_j^{-\frac s2+l_0 + l_5-1}$ is finite whenever $s>s_d$. Thus we have proved \eqref{mm_limsup_psi}. {\bf Case 2: $\kappa\sim 2d N^2$.} Due to the compact embedding of the spaces $\mathcal H^{-s}_{-\Delta+\Delta^2}(D)$, to show that the sequence $(\Psi_N)_{N\in \mathbb{N}}$ is tight in $\mathcal H^{-s}_{-\Delta+\Delta^2}(D)$ for all $s>s_d$, it is enough to show that \begin{equation}\label{mixed_limsup_psi} \limsup_{N\to \infty}\mathbf{E}_{\Lambda_N}[\|\Psi_N\|_{-s,\,-\Delta + \Delta^2}^2]<\infty \quad \forall \, s>s_d. \end{equation} As in the previous case, by the definition of the dual norm we have $$\mathbf{E}_{\Lambda_N}\left[\|\Psi_N\|_{-s, \,-\Delta + \Delta^2}^2\right] \le \sum_{j\in \N}\mu_j^{-s/2}\mathbf{E}_{\Lambda_N}[(\Psi_N\,,\,v_j)^2].$$ Note that $u=\mu_j^{-1}v_j$ is the unique solution of \eqref{eqa:continuum} with $L=-\Delta_c+\Delta_c^2$ for $f:=v_j$. Define $e_{N,j}$ to be the error between the solution of the discrete Dirichlet problem~\eqref{mixed_H_N} and the continuum one~\eqref{eqa:continuum} with $f:=v_j$. Now as in~\eqref{eq:varbound} we have $$\mathbf{E}_{\Lambda_N}[(\Psi_N\,,\,v_j)^2]\le C\sup_{x\in D}|v_j(x)| \left({N^{-d} \sum_{x\in \frac1N\Lambda_N} e_{N,j}(x)^2}\right)^{1/2}+ C\mu_j^{-1}\left(\sup_{x\in D}|v_j(x)|\right)^2.$$ Using Theorem~\ref{approx_result}~\ref{thm:mixed_one} along with the bounds~\eqref{bound0:mixed} we obtain \begin{align*} \mathbf{E}_{\Lambda_N}[(\Psi_N\,,\,v_j)^2]&\le C \mu_j^{l_0} \left[\mu_j^{2l_5-2}N^{-2} + \mu_j^{2l_5-2}\left(\frac{\kappa}{2dN^2}-1\right)^2 +\mu_j^{2l_2-2} N^{-1}\right]^{\frac12} + C\mu_j^{2l_0-1}\\ &\le C \mu_j^{l_0 + l_5-1}.
\end{align*} Therefore we have \begin{align*} \mathbf{E}_{\Lambda_N}\left[\|\Psi_N\|_{-s, \,-\Delta + \Delta^2}^2\right]\le C \sum_{j\in \N}\mu_j^{-\frac s2}\mu_j^{l_0 + l_5-1}. \end{align*} Thus $$\limsup_{N\to \infty}\mathbf{E}_{\Lambda_N}[\|\Psi_N\|_{-s, \,-\Delta + \Delta^2}^2]<\infty \qquad \text{ if } \qquad \sum_{j\in \N}\mu_j^{-\frac s2+l_0 + l_5-1}<\infty.$$ From Proposition \ref{prop:mixed_Weyl} we obtain that $\sum_{j\in \N}\mu_j^{-\frac s2+l_0 + l_5-1}<\infty$ whenever $s>s_d$. Thus we have proved \eqref{mixed_limsup_psi}. {\bf Case 3: $\kappa \ll N^{2}$.} The arguments are similar to the previous two cases and hence we just indicate the required bounds. To show tightness in $\mathcal H^{-s}_{-\Delta}(D)$ it is enough to show \begin{equation}\label{gff_limsup_psi} \limsup_{N\to \infty}\mathbf{E}_{\Lambda_N}[\|\Psi_N\|_{-s,\,-\Delta}^2]\le \sum_{j\in \N}\nu_j^{-s}\mathbf{E}_{\Lambda_N}[(\Psi_N\,,\,w_j)^2] <\infty \quad \forall \, s>{d}/2 + \floor{{d}/2}+2. \end{equation} Setting $e_{N,j}$ to be the error between the solution of the discrete Dirichlet problem~\eqref{gff_H_N} and the continuum one \eqref{eqa:continuum} with $f:=w_j$ we obtain $$ \mathbf{E}_{\Lambda_N}[(\Psi_N\,,\,w_j)^2]\le C\sup_{x\in D}|w_j(x)| \left({N^{-d} \sum_{x\in \frac1N\Lambda_N} e_{N,j}(x)^2}\right)^{1/2}+ C\nu_j^{-1}\left(\sup_{x\in D}|w_j(x)|\right)^2. $$ Using Theorem~\ref{approx_result}~\ref{thm:gff_one} along with the bounds~\eqref{bound0:GFF} we can conclude the following upper bound for $\mathbf{E}_{\Lambda_N}[(\Psi_N\,,\,w_j)^2]$: \begin{align*} &C \sup_{x\in D}|w_j(x)| \left[ \left(\nu_j^{-1}\nu_j^{\frac{\floor{\frac{d}2}+5}2}\right)^2 \delta^4 + \left(\nu_j^{-1}\nu_j^{\frac{\floor{\frac{d}2}+3}2} \right)^2 \frac{\delta\kappa}{2dN^2} + \left(\nu_j^{-1}\nu_j^{\frac{\floor{\frac{d}2}+2}2}\right)^2 \delta\right]^{\frac12}+C\nu_j^{-1}\left(\sup_{x\in D}|w_j(x)|\right)^2 , \end{align*} where $\delta = \max\{\frac{1}{N},\frac{\sqrt{\kappa}}{\sqrt{2d}N}\}$. 
Now a consequence of the above and \eqref{bound0:GFF} is that \begin{align}\label{eq:second_equation} \mathbf{E}_{\Lambda_N}[(\Psi_N\,,\,w_j)^2] \le C\nu_j^{\floor{\frac{d}2}+2}. \end{align} Therefore we have \begin{align*} \mathbf{E}_{\Lambda_N}\left[\|\Psi_N\|_{-s,\,-\Delta}^2\right]\le C \sum_{j\in \N}\nu_j^{-s}\nu_j^{\floor{\frac{d}2}+2}. \end{align*} Thus $$\limsup_{N\to \infty}\mathbf{E}_{\Lambda_N}[\|\Psi_N\|_{-s,\,-\Delta}^2]<\infty \qquad \text{ if } \qquad\sum_{j\in \N}\nu_j^{-s+\floor{\frac{d}2}+2}<\infty.$$ But $\nu_j\sim Cj^{\frac2d}$ and $\sum_{j\in \N}j^{\frac2d(-s+\floor{\frac{d}2}+2)}<\infty$ whenever $s>{d}/2 + \floor{{d}/2}+2$. Thus we have proved \eqref{gff_limsup_psi}. For all the cases we now have the tightness and the convergence of $(\Psi_N, f)$ for all $f\in C_c^\infty(D)$. A standard uniqueness argument completes the proof of Theorem~\ref{thm:main}, using the fact that $C_c^\infty(D)$ is dense in $\mathcal H_{\Delta^2,0}^s(D)$, $\mathcal H_{-\Delta+\Delta^2, 0}^s(D)$ and $\mathcal H_{-\Delta, 0}^s(D)$ respectively. \section{\bf Proof of Theorem~\ref{thm:main2}}\label{sec:main2_proofs} In this section we prove Theorem~\ref{thm:main2} by showing finite dimensional convergence and tightness. The proof is similar to the proofs of the lower dimensional results in \cite{mm_scaling} and \cite{mixed_scaling} and hence we shall only state the important bounds needed for the proof. One can show tightness of the sequence using Theorem 14.9 of \cite{kallenberg:foundations} (see also Theorem 2.5 of \cite{mm_scaling}). To use this result one mainly needs bounds on the increments of the following type: $$\mathbf{E}_{\Lambda_N}\left[ \left| \Psi_N(t)- \Psi_N(s)\right|^2\right]\le C \|t-s\|^{1+b}, \quad t,s \in \overline{D},\,b\ge 0.$$ Such bounds can be obtained using the Brascamp-Lieb inequality and the following Lemma which is proved by using the estimates for the membrane model and the discrete Gaussian free field. 
\begin{lemma}~\label{DGF_bound0} Let $\mathbf{P}_{\Lambda_N}^{MM}$ and $\mathbf{P}_{\Lambda_N}^{GFF}$ denote respectively, the law of the membrane model and the discrete Gaussian free field on $\Lambda_N$ with zero boundary conditions outside $\Lambda_N$. \begin{enumerate}[leftmargin=*,label=(\Roman*),ref=(\Roman*)] \item\label{DGF_bound}Let $\kappa\gg N^2$ or $\kappa \sim 2dN^2$ and $1\le d\le 3$. Then for all $x\in\Z^d$ \begin{enumerate}[label=(\arabic*),ref=(\arabic*)] \item\label{1.DGF_bound} $$G_{\Lambda_N}(x,x)\le \kappa^{-1}\mathbf{E}_{\Lambda_N}^{MM}(\varphi_x^2) \le C\kappa^{-1}N^{4-d}.$$ \item\label{2.DGF_bound0} $$\mathbf{E}_{\Lambda_N}\left[\left( \varphi_{x+e_i}-\varphi_x\right)^2 \right] \le \kappa^{-1}\mathbf{E}^{MM}_{\Lambda_N}\left[\left(\varphi_{x+e_i} - \varphi_x\right)^2\right]\le \left\lbrace \begin{array}{l l l} C\kappa^{-1}N & d=1\\ C\kappa^{-1}\log N & d=2\\ C \kappa^{-1}& d=3 \end{array} .\right.$$ \end{enumerate} \item Let $\kappa\ll N^{2}$ and $d=1$. Then for all $x\in\Z^d$ \begin{enumerate}[label=(\arabic*),ref=(\arabic*)] \item $$G_{\Lambda_N}(x,\,x)\le \mathbf{E}_{\Lambda_N}^{GFF}(\varphi_x^2) \le C N. $$ \item \begin{align*} \mathbf{E}_{\Lambda_N}[(\varphi_{x +e_i} - \varphi_x)^2] \le \mathbf{E}_{\Lambda_N}^{GFF}[(\varphi_{x+e_i} - \varphi_x)^2] \le C. \end{align*} \end{enumerate} \end{enumerate} \end{lemma} \begin{proof}~ \begin{enumerate}[leftmargin=*,label=(\Roman*),ref=(\Roman*)] \item To show the first bound one can first show using Theorem 5.1 of \cite{brascamp_lieb} that \begin{align*} G_{\Lambda_N}(x,\,x) \le \kappa^{-1}\mathbf{E}_{\Lambda_N}^{MM}(\varphi_x^2). \end{align*} The bound for the $d=1$ case can be obtained using the random walk representation of the model used in Lemma~\ref{lem:one_mm}. For $d=2,\,3$ we obtain the bound from Theorem 1.1 of \cite{Mueller:Sch:2017}. 
For the second part the Brascamp-Lieb inequality yields \begin{align*} \mathbf{E}_{\Lambda_N}[(\varphi_{x+e_i} - \varphi_x)^2]\le \kappa^{-1}\mathbf{E}^{MM}_{\Lambda_N}[(\varphi_{x+e_i} - \varphi_x)^2]. \end{align*} The bound now follows from Lemma~\ref{lem:one_mm} (for $d=1$) and Theorem 1.1 of \cite{Mueller:Sch:2017} (for $d=2,3$). \item The arguments in this case are similar to those in the previous case. The bounds are obtained using the Brascamp-Lieb inequality and an argument similar to the proof of Lemma 12 of \cite{mixed_scaling}.\qedhere \end{enumerate} \end{proof} The detailed argument for tightness is similar to that in \citet[Section 2]{mm_scaling} and is hence skipped. To conclude the finite dimensional convergence we first show the convergence of the covariance. We shall discuss the argument for the cases $\kappa\gg N^2$ and $\kappa\sim 2d N^2$, which is the same for both. In the remaining case, that is $\kappa\ll N^{2}$, we can argue similarly using the following additional piece of information: the covariance function $$G_D(x,y)=\min\{x,y\} -xy,\quad x,y\in\overline D$$ of the Brownian bridge is nothing but the Green's function of the problem \begin{align*} \begin{cases} -\frac{\De^2 u}{\De x^2}(x) = f(x) & x\in D\\ u(x)=0 & x\in\partial D. \end{cases} \end{align*} Suppose now $\kappa\gg N^2$ or $\kappa\sim 2d N^2$. For $x,y\in \overline D\cap N^{-1}\Z^d$ we define $$G_{\frac1N}(x,y):=(2d)^{-2}\kappa N^{d-4}G_{\Lambda_N}(Nx,Ny).$$ We now interpolate $G_{\frac1N}$ in a piecewise constant fashion on small squares of $\overline D \times \overline D$ to get a new function $G_{\frac1N}^I$. We show that $G_{\frac1N}^I$ converges uniformly to $G_D$ on $\overline D\times\overline D$. Indeed, let $F_N:=G_{\frac1N}^I - G_D$.
Similarly as in the proof of the finite dimensional convergence in Theorem~\ref{thm:main}~\ref{thm:MM} or Theorem~\ref{thm:main}~\ref{thm:MIXED} it follows that, for any $f,\,g\in C_c^\infty(D)$, \begin{align*} \lim_{N\to\infty}\sum_{x, y\in \frac1N D_N} N^{-2d} G^I_{\frac1N}(x,y) f(x) g(y)= \iint_{D\times D} G_D(x,y)f(x)g(y)\De x \De y. \end{align*} Again from Riemann sum convergence we have \begin{align*} \lim_{N\to\infty}\sum_{x, y\in \frac1ND_N} N^{-2d} G_D(x,y) f(x) g(y)=\iint_{D\times D} G_D(x,y)f(x)g(y)\De x \De y. \end{align*} Thus we get \begin{align} \lim_{N\to\infty}\sum_{x, y\in \frac1N D_N} N^{-2d} F_N(x,y) f(x) g(y)=0\label{eq:unique_zero_m}. \end{align} Note that $G_D$ is bounded and $$\sup_{x,y\in\frac1N D_N}|G_{\Lambda_N}(Nx,Ny)| \le C\kappa^{-1}N^{4-d}.$$ These imply that $$\sup_{x,y\in\overline D} |F_N(x,y)|\le C.$$ Thus $F_N$ has a subsequence converging uniformly to some function $F$ which is bounded by $C$. With an abuse of notation we denote this subsequence by $F_N$. We then have \begin{align*} \lim_{N\to\infty}\sum_{x, y\in \frac1N D_N} N^{-2d} F_N(x,y) f(x) g(y)= \iint_{D\times D} F(x,y)f(x)g(y)\De x \De y. \end{align*} Uniqueness of the limit gives $$\iint_{D\times D} F(x,y)f(x)g(y)\De x \De y = 0$$ by~\eqref{eq:unique_zero_m}. From this we obtain that $F(x,y)=0$ for almost every $x$ and almost every $y$. The definition by interpolation of $G^I_{\frac1N}$ ensures that $F$ is pointwise equal to zero. Finally, the fact that the original sequence $F_N$ converges uniformly to zero follows using the subsequence argument. We now show the finite dimensional convergence. First let $t\in\overline D$. We write $$\Psi_N(t)=\Psi_{N,1}(t) + \Psi_{N,2}(t)$$ where $\Psi_{N,1}(t):=(2d)^{-1}\sqrt{\kappa}N^{\frac{d-4}2}\varphi_{\floor{Nt}}$ and $\Psi_{N,2}(t):=\Psi_N(t)-\Psi_{N,1}(t)$. From Lemma~\ref{DGF_bound0}\ref{DGF_bound}\ref{2.DGF_bound0} it follows that $\mathbf{E}_{\Lambda_N}[\Psi_{N,2}(t)^2]$ goes to zero as $N$ tends to infinity.
Therefore to show that $\Psi_N(t)$ converges in distribution it is enough to show that $\mathbf{Var}[\Psi_{N,1}(t)]\to G_D(t,t)$. But we have \begin{align*} \mathbf{Var}[\Psi_{N,1}(t)]=(2d)^{-2} \kappa N^{d-4}G_{\Lambda_N} \left(\floor{Nt},\,\floor{Nt}\right)= G^I_{\frac1N}(t,t)\to G_D(t,t) \end{align*} since the sequence $F_N$ converges to zero uniformly. Since the variables under consideration are Gaussian, one can show the finite dimensional convergence using the convergence of the Green's functions. This completes the proof of Theorem~\ref{thm:main2}. \qed \section{\bf Proof of Theorem \ref{approx_result}}\label{proof_approx_result} This section is devoted to the proof of the error estimation result in Theorem~\ref{approx_result}. To estimate the error we need to develop some Sobolev inequalities in a general setting which involve consistency between discrete and continuous operators. The content of this section can be of independent interest and can possibly be applied to general interface models. We would like to stress that although we follow the ideas involved in~\cite{thomee}, we cannot quote the results from there {\em verbatim} as the coefficients of the discrete operators do not depend on the scaling of the lattice. Another important remark is that the discrete Dirichlet problem involving the operators $L_h$ introduced in \eqref{discrete_op} requires two boundary conditions, whereas the definition of the limiting operator $-\Delta_c$ involves only one boundary condition. The ideas from \cite{thomee} work well when $L=\Delta_c^2$ or $L=-\Delta_c+\Delta_c^2$. In the case when $L=-\Delta_c$, we introduce a cut-off which helps in controlling the error around the boundary. The proof of Theorem~\ref{approx_result}~\ref{thm:gff_one} should be applicable to many other models.
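Before turning to the Sobolev machinery, the type of Green's function convergence used in the previous section can be illustrated numerically. The following toy computation (illustrative only, not part of the proof; it ignores the normalising constants involving $\kappa$ and $(2d)^{-2}$) checks in $d=1$ that the Green's function of the discrete Dirichlet Laplacian, rescaled by $1/N$, reproduces the Brownian bridge covariance $G_D(x,y)=\min\{x,y\}-xy$; in $d=1$ the identity is in fact exact at the grid points.

```python
import numpy as np

# Toy d = 1 check (illustrative only, constants simplified): the Green's
# function of the discrete Dirichlet Laplacian on {1, ..., N-1}, rescaled
# by 1/N, matches the Brownian bridge covariance min(x, y) - x*y on (0, 1).
N = 64
T = 2 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)  # -Delta_h, zero BC
G = np.linalg.inv(T)                                              # discrete Green's function

def G_D(x, y):
    return min(x, y) - x * y

err = max(abs(G[i - 1, j - 1] / N - G_D(i / N, j / N))
          for i in range(1, N) for j in range(1, N))
print(err)  # in d = 1 the identity is exact at grid points, up to rounding
```

In higher dimensions the discrete Green's function is no longer an exact interpolation, which is why the uniform-convergence argument via $F_N$ above is needed.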
\subsection{Sobolev-type norm inequalities} The main aim of this Subsection is to have an estimate on the $\ell^2$ norm of a function on the grid in terms of the operator $L_h$ (and its truncated version). Later this turns out to be useful as we use the convergence of $L_h$ to $L$. We continue with all the definitions and notations from Section~\ref{main_ingredient}. The notion of discrete forward and backward derivatives will be essential in the following arguments. \begin{align*} & \partial_ju(x) := \frac1h(u(x+he_j)-u(x)),\\ & \bar\partial_ju(x) := \frac1h(u(x)-u(x-he_j)),\\ & \partial^\alpha:=\partial_1^{\alpha_1}\cdots\partial_d^{\alpha_d},\\ & \bar\partial^\alpha:=\bar\partial_1^{\alpha_1}\cdots\bar\partial_d^{\alpha_d}, \end{align*} where $\alpha=(\alpha_1,\ldots,\alpha_d)$ is a multi-index. It is easy to see that \begin{align*} \left\langle \partial_j u\,,\,v \right\rangle_{h,\,grid} = \left\langle u\,,\,\bar\partial_jv \right\rangle_{h,\,grid} \end{align*} for grid-functions vanishing outside a finite set. We now define $$\|u\|_{h,m}:=\left( \sum_{|\alpha|\le m} \|\partial^\alpha u \|_{h,\,grid}^2\right)^{\frac12}$$ and obtain the following Lemma. \begin{lemma}[{\citet[Lemma~3.1]{thomee}}]\label{lem:3.1} There are constants $C=C_j$ independent of $u$ and $h$ such that \begin{equation}\label{eq:3.1.1} \|u\|_{h,\,grid}\le C\|\partial_j u\|_{h,\,grid}, \quad u\in\mathcal D_h,\,\,j=1,\ldots, d, \end{equation} and for fixed $m\ge 1$, \begin{equation}\label{eq:3.1.2} \|u\|_{h,\,grid}\le C\| u\|_{h,m}, \quad u\in\mathcal D_h. 
\end{equation} \end{lemma} We will need the following norm which rescales the function near the boundary: $$|||u|||_{h,m}:= \left( h^d\left(\sum_{x\in R_h^*} u(x)^2 + \sum_{x\in B_h^*} (h^{-m} u(x))^2\right)\right)^{\frac12}, \quad u\in\mathcal D_h.$$ We can relate the weighted Sobolev norm $|||\cdot|||_{h,m}$ to $\|\cdot\|_{h,m}$ with the following bound. \begin{lemma}[{\citet[Lemma~3.4]{thomee}}]\label{lem:3.4} There is a constant $C$ independent of $u$ and $h$ such that \begin{align*} |||u|||_{h,m} \le C \|u\|_{h,m}, \quad u\in\mathcal D_h. \end{align*} \end{lemma} We rewrite $L_h$ in \eqref{discrete_op} as \begin{equation}\label{eq:newL_h}L_h u(x) = h^{-2m}\sum_{\eta} c_\eta u(x+\eta h),\end{equation} where $\eta=(\eta_1,\ldots,\eta_d)$ with the $\eta_j$'s being integers and the $c_\eta$'s being real numbers which may depend on $h$. We now define the characteristic polynomial of $L_h$ by \begin{align}\label{char_poly} p(\theta):= \sum_{\eta} c_\eta \mathrm{e}^{\iota \left\langle \eta\,,\, \theta \right\rangle}, \end{align} where $\theta=(\theta_1,\ldots,\theta_d)$ and $\left\langle \eta\,,\, \theta \right\rangle = \sum_{j=1}^d \eta_j \theta_j$. We have the following Lemma: \begin{lemma}\label{lem3.2} \begin{align*} \left\langle L_hu\,,\,u\right\rangle_{h,\,grid} = h^{d-2m}(2\pi)^{-d}\int_S p(\theta) |\hat{u}(\theta)|^2\mathrm{d}\theta,\quad u\in\mathcal D_h, \end{align*} where $$\hat{u}(\theta) = \sum_{\xi\in\Z^d} u(\xi h)\mathrm{e}^{-\iota \left\langle \xi\,,\, \theta \right\rangle}$$ and $S= \{\theta: |\theta_j|\le \pi,\, j=1,\ldots,d\}.$ \end{lemma} \begin{proof}We expand \begin{align*} \left\langle L_hu\,,\,u\right\rangle_{h,\,grid} &= h^d \sum_{x\in h\Z^d} L_hu(x)u(x)\\ & \stackrel{\eqref{eq:newL_h}}{=} h^{d-2m}\sum_{x\in h\Z^d}\sum_{\eta\in\Z^d} c_\eta u(x+\eta h) u(x)\\ & = h^{d-2m}\sum_{x,\,\xi\in h\Z^d} c_{\frac{\xi-x}h} u(\xi) u(x).
\end{align*} By inverting \eqref{char_poly} we have $$c_\eta = (2\pi)^{-d} \int_S p(\theta) \mathrm{e}^{-\iota \left\langle \eta,\,\theta \right\rangle} \mathrm{d}\theta.$$ Thus \begin{align*} \left\langle L_hu\,,\,u\right\rangle_{h,\,grid} &= h^{d-2m}\sum_{x,\,\xi\in h\Z^d} (2\pi)^{-d} \int_S p(\theta) \mathrm{e}^{-\iota \left\langle \frac{\xi-x}h,\,\theta \right\rangle} \mathrm{d}\theta u(\xi) u(x)\\ & = h^{d-2m}(2\pi)^{-d}\int_S p(\theta) |\hat{u}(\theta)|^2\mathrm{d}\theta.\qedhere \end{align*} \end{proof} We will also need \begin{lemma}[{\citet[Lemma~3.3]{thomee}}]\label{lem:3.3} There is a constant $C$ independent of $u$ and $h$ such that \begin{align*} \|u\|_{h,m}^2 \le C \sum_{j=1}^d \|\partial_j^m u\|_{h,\,grid}^2,\quad u\in\mathcal D_h. \end{align*} \end{lemma} \begin{proof} We first prove that if $\alpha$ is a multi-index with $|\alpha|=m$ then \begin{equation}\label{lab1} \left\langle \bar\partial^\alpha\partial^\alpha u,\,u \right\rangle_{h,\,grid} \le \left\langle Q_h u,\,u \right\rangle_{h,\,grid},\quad u\in\mathcal D_h, \end{equation} where $Q_h$ is the difference operator \begin{equation}Q_h u := \sum_{j=1}^d \bar\partial_j^m \partial_j^m u. \label{Q_h}\end{equation} Similarly to \eqref{char_poly} one can show that the characteristic polynomials of $\bar\partial^\alpha\partial^\alpha$ and $Q_h$ are, respectively, $$q_1(\theta) = 2^m \prod_{j=1}^d (1-\cos \theta_j)^{\alpha_j}$$ and $$q_2(\theta) = 2^m \sum_{j=1}^d (1-\cos \theta_j)^m.$$ Now by the inequality between the arithmetic and geometric means we have $$q_1(\theta) \le 2^m \sum_{j=1}^d m^{-1}\alpha_j(1-\cos \theta_j)^m \le q_2(\theta).$$ Using Lemma \ref{lem3.2} we obtain \eqref{lab1}, which implies $$\|\partial^\alpha u\|_{h,\,grid}^2 \le \sum_{j=1}^d \|\partial_j^m u\|_{h,\,grid}^2,\quad u\in\mathcal D_h.$$ For $|\alpha|<m$, one can show using Lemma \ref{lem:3.1} that $$\|\partial^\alpha u\|_{h,\,grid}^2 \le C \sum_{j=1}^d \|\partial_j^m u\|_{h,\,grid}^2,\quad u\in\mathcal D_h.$$ Hence the proof is complete.
\end{proof} \subsection{Errors in the Dirichlet problem} So far we have established some discrete Sobolev inequalities. We now relate these directly to our discrete operators, dealing with each of the operators separately. Before doing so, let us show the existence and uniqueness of the solution of the discrete boundary value problem \eqref{eq:discrete}-\eqref{eq:discrete boundary}. \begin{lemma}\label{fact:unique_discrete} The finite difference Dirichlet problem \eqref{eq:discrete}-\eqref{eq:discrete boundary} has exactly one solution for arbitrary $f$. \end{lemma} \begin{proof} We first show the following: there exists a constant $C>0$ independent of $u$ and $h$ such that \begin{equation}\label{eq:revised_00} \|u\|_{h,\,grid}\le C\|L_{h}u\|_{h,\,grid},\quad u\in\mathcal D_h. \end{equation} In case $L= \Delta_c^2$ or $-\Delta_c + \Delta_c^2$,~\eqref{eq:revised_00} follows from Lemma~\ref{lem:3.1} and from the proofs of Lemmas~\ref{thm:mm_4.2} and \ref{thm:mixed_4.2}, respectively. For $L=-\Delta_c$ the argument is similar once we observe that \begin{align*} p(\theta)&=-\sum_{i=1}^d (2\cos{\theta_i}-2)+\frac{\rho_1(h)}{h^2}\sum_{i,\,j=1}^d[ 2\cos{(\theta_i + \theta_j)} + 2\cos{(\theta_i -\theta_j)}-4\cos{\theta_i}-4\cos{\theta_j}+4]\\ &=\sum_{i=1}^d (2-2\cos{\theta_i})+\frac{\rho_1(h)}{h^2}\sum_{i,\,j=1}^d[4(1-\cos{\theta_i})(1-\cos{\theta_j})]\\ &\ge 2\sum_{i=1}^d (1-\cos{\theta_i}). \end{align*} Now since $u\equiv 0$ in $B_h$, Equation~\eqref{eq:discrete} can be considered as a linear system with as many equations as unknowns (the number of points in $R_h$). Therefore it is sufficient to prove that the corresponding homogeneous system has only the trivial solution, i.e. $u\equiv 0$ in $R_h$. This follows from~\eqref{eq:revised_00}. \end{proof} \subsubsection{Bilaplacian case: proof of Theorem \ref{approx_result}~\ref{thm:mm_one}} In this subsection we consider $L:=\Delta_c^2$.
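The regrouping of the cross terms used in the display above rests on the exact identity $2\cos(\theta_i+\theta_j)+2\cos(\theta_i-\theta_j)-4\cos\theta_i-4\cos\theta_j+4 = 4(1-\cos\theta_i)(1-\cos\theta_j)$, a consequence of $\cos(a+b)+\cos(a-b)=2\cos a\cos b$. A minimal numerical sanity check (illustrative only, not part of the proof):

```python
import numpy as np

# Check the exact trigonometric regrouping used for the characteristic
# polynomial, and the resulting nonnegativity of the cross terms.
rng = np.random.default_rng(0)
ti, tj = rng.uniform(-np.pi, np.pi, size=(2, 1000))

lhs = 2*np.cos(ti + tj) + 2*np.cos(ti - tj) - 4*np.cos(ti) - 4*np.cos(tj) + 4
rhs = 4*(1 - np.cos(ti))*(1 - np.cos(tj))

max_dev = np.max(np.abs(lhs - rhs))
print(max_dev)  # zero up to floating-point rounding
```

Nonnegativity of the cross terms is what makes the Fourier-side lower bounds on $p(\theta)$ work in all three cases.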
Recall $\rho_2(h)\to 0$ and we have for $x\in h\mathbb{Z}^d$, \begin{align*} L_hu(x)&=\frac1{h^4}\left[-h^{2}\rho_2(h)\sum_{i=1}^d (u(x+he_i)+u(x-he_i)-2u(x))\right.\\ &\left.+\sum_{i,\,j=1}^d\left\lbrace u(x+h(e_i+e_j))+u(x-h(e_i+e_j))+u(x+h(e_i-e_j))+u(x-h(e_i-e_j))\right.\right.\\ &\left.\left.-2u(x+he_i)-2u(x-he_i)-2u(x+he_j)-2u(x-he_j) +4u(x)\right\rbrace\right] . \end{align*} We define the operator $L_{h,2}$ as follows: \begin{equation}\label{eq:Lh2}L_{h,\,2} f(x) = \begin{cases} L_h f(x) & x\in R_h^\ast\\ h^2 L_h f(x) & x\in B_h^\ast\\ 0 & x\notin R_h. \end{cases}\end{equation} Then we have the following Lemma involving $L_{h,\,2}$. \begin{lemma}\label{thm:mm_4.2} There exists a constant $C>0$ independent of $u$ and $h$ such that \[ \|u\|_{h,\,2}\le C\|L_{h,\,2}u\|_{h,\,grid},\quad u\in\mathcal D_h. \] \end{lemma} \begin{proof} We consider the characteristic polynomial of $L_h$ and observe that \begin{align*} p(\theta)&=-h^{2}\rho_2(h)\sum_{i=1}^d (2\cos{\theta_i}-2)\\ &+\sum_{i,\,j=1}^d[ 2\cos{(\theta_i + \theta_j)} + 2\cos{(\theta_i -\theta_j)}-4\cos{\theta_i}-4\cos{\theta_j}+4]\\ &=h^{2}\rho_2(h)\sum_{i=1}^d (2-2\cos{\theta_i})+\sum_{i,\,j=1}^d[4(1-\cos{\theta_i})(1-\cos{\theta_j})]\\ &\ge 4 \sum_{i=1}^d (1-\cos{\theta_i})^2. \end{align*} Hence by Lemmas~\ref{lem:3.3} and~\ref{lem3.2} we obtain for $u\in\mathcal D_h$ \begin{align*} \|u\|_{h,2}^2 \le C \sum_{j=1}^d \|\partial_j^2 u\|_{h,\,grid}^2 = C \left\langle Q_h u,\,u \right\rangle_{h,\,grid} \le C \left\langle L_hu,\,u\right\rangle_{h,\,grid}, \end{align*} where $Q_h$ is the difference operator defined in \eqref{Q_h} with $m=2$.
Again we have $$\left\langle L_hu,\, u\right\rangle_{h,\,grid} = h^d \left[ \sum_{x\in B_h^*} L_{h,\,2} u(x)\left(h^{-2}u(x)\right) + \sum_{x\in R_h^*} L_{h,\,2} u(x) u(x) \right].$$ Therefore by the Cauchy-Schwarz inequality we have $$|\left\langle L_hu,\, u\right\rangle_{h,\,grid}| \le C \|L_{h,\,2}u\|_{h,\,grid} |||u|||_{h,\,2}.$$ Thus from Lemma \ref{lem:3.4} we have \begin{align*} \|u\|^2_{h,\,2} \le C \|L_{h,\,2}u\|_{h,\,grid} \, |||u|||_{h,\,2} \le C \|L_{h,\,2}u\|_{h,\,grid} \, \|u\|_{h,\,2}. \end{align*} This completes the proof. \end{proof} We now have all the ingredients to show Theorem~\ref{approx_result}~\ref{thm:mm_one}. \begin{proof}[Proof of Theorem~\ref{approx_result}~\ref{thm:mm_one}] We denote all constants by $C$; they do not depend on $u$ and $f$. Using Taylor expansion we have for all $x\in R_h$ and for small $h$ $$L_hu(x)=h^{-2}\rho_2(h)\mathcal R_2(x) + Lu(x)+h^{-4}\mathcal R_5(x)$$ where $\lvert \mathcal R_2(x)\rvert\leq CM_2h^2$ and $\lvert \mathcal R_5(x)\rvert\leq CM_5h^5$. We thus obtain, for $x\in R_h$, \begin{align} L_he_h(x)&=L_hu(x)-L_hu_h(x)\nonumber\\ &= h^{-2}\rho_2(h)\mathcal R_2(x) + h^{-4}\mathcal R_5(x).\label{eq:sokhi_bhabona} \end{align} For $x\in R^*_h$ we have \begin{align*} L_{h,2}R_he_h(x)&= L_hR_he_h(x)=L_he_h(x)=h^{-2}\rho_2(h)\mathcal R_2(x) + h^{-4}\mathcal R_5(x). \end{align*} For $x\in B^*_h$ at least one among $x\pm h(e_i\pm e_j),\,x\pm he_i$ is in $B_h\setminus \partial D$. For any $y\in B_h\setminus\partial D$ we consider a point $b(y)$ on $\partial D$ of minimal distance to $y$. Note that this distance is at most $2h$. Now using Taylor expansion and the fact that the value of $u$ and all its first order derivatives are zero at $b(y)$ one sees that $$u(y)=u_h(y)+\mathcal R^{'}_2(y)$$ where $\lvert \mathcal R^{'}_2(y)\rvert\leq CM_2h^2$. For $x\in B_h^\ast$ denote by $S(x)$ the neighbors of $x$ which are in $B_h\setminus \partial D$, i.e.
$$S(x)= \{ y\in B_h\setminus \partial D:\, y\in \{ x\pm h e_i,\, x\pm h(e_i\pm e_j): 1\le i,j \le d\}\}.$$ Therefore, for $x\in B_h^\ast$, \begin{align} L_{h,2} R_he_h(x)&= h^2 L_h R_he_h(x)\nonumber\\ &=h^2 \left\{L_he_h(x)- h^{-4} \sum_{y\in S(x)}\left(h^2\rho_2(h)C(y)e_h(y) + C^{'}(y) e_h(y)\right)\right\}\nonumber\\ &\stackrel{\eqref{eq:sokhi_bhabona}}{=} h^2\{h^{-2}\rho_2(h)\mathcal R_2(x) + h^{-4}\mathcal R_5(x)\} + (C \rho_2(h) + C^{'}h^{-2}) \mathcal R^{''}_2(x)\label{eq:long_one} \end{align} where $\lvert \mathcal R^{''}_2(x)\rvert\leq CM_2h^2$. Hence \begin{align*} \|L_{h,2} R_he_h\|_{h,\,grid}^2 & \stackrel{\eqref{eq:long_one}}{=} h^d \left[ \sum_{x\in R_h^\ast} \left(h^{-2}\rho_2(h)\mathcal R_2(x) + h^{-4}\mathcal R_5(x)\right)^2 \right.\\ &\left. +\sum_{x\in B_h^\ast} \left(\rho_2(h)\mathcal R_2(x) + h^{-2}\mathcal R_5(x) + (C \rho_2(h) + C^{'}h^{-2}) \mathcal R^{''}_2(x) \right)^2\right]\\ &\le C h^d \left[ \sum_{x\in R_h^\ast} \left(M_2^2(\rho_2(h))^2 + M_5^2h^2\right) +\sum_{x\in B_h^\ast} \left(M_2^2h^{4}(\rho_2(h))^2 + M_5^2h^6+ M_2^2\right)\right]\\ &\le C\left[\left(M_2^2(\rho_2(h))^2 + M_5^2h^2\right) + h \left(M_2^2h^{4} (\rho_2(h))^2 + M_5^2h^6+ M_2^2\right)\right] \end{align*} where the last inequality holds as the number of points in $B_h^\ast$ is $O(h^{-(d-1)})$. Finally to complete our proof we obtain \begin{align*}\label{eq4N:errorbound} \|R_he_h\|_{h,\,grid}^2 &\le C\left[M_2^2(\rho_2(h))^2 + M_5^2h^2 + M_2^2h^{5} (\rho_2(h))^2 + M_5^2h^7+ M_2^2 h\right]\\ &\le C\left[M_5^2h^2 + M_2^2(\rho_2(h))^2 + M_2^2 h\right] \end{align*} using Lemmas \ref{lem:3.1} and \ref{thm:mm_4.2}. \end{proof} \subsubsection{Laplacian + Bilaplacian case: proof of Theorem~\ref{approx_result}~\ref{thm:mixed_one}} In this subsection we consider $L=-\Delta_c + \Delta_c^2$.
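Both this case and the previous one rely on the Taylor consistency of the discrete bilaplacian stencil. In $d=1$ the stencil reduces to the classical five-point biharmonic difference $u(x+2h)-4u(x+h)+6u(x)-4u(x-h)+u(x-2h)=h^4u''''(x)+O(h^6)$, which the following short numerical sketch (illustrative only, not part of the proof) confirms:

```python
import numpy as np

# Consistency of the d = 1 bilaplacian stencil: dividing by h^4 should
# recover u'''' with an O(h^2) error.
def stencil(u, x, h):
    return (u(x + 2*h) - 4*u(x + h) + 6*u(x) - 4*u(x - h) + u(x - 2*h)) / h**4

u = np.sin                     # u'''' = sin, so stencil(u, x, h) ~ sin(x)
x = 0.7
errs = [abs(stencil(u, x, h) - np.sin(x)) for h in (0.1, 0.05, 0.025)]
print(errs)                    # shrinks by a factor ~4 per halving of h
```

The $O(h^2)$ relative error after dividing by $h^4$ is exactly the source of the $h^{-4}\mathcal R_5$ remainders in the Taylor expansions of $L_h u$.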
Recall $\rho_3(h) \to 1$ and we have for $x\in h\mathbb{Z}^d$, \begin{align*} L_hu(x)&=\frac1{h^4}\left[-h^{2}\sum_{i=1}^d (u(x+he_i)+u(x-he_i)-2u(x))\right.\\ &\left.+ \rho_3(h)\sum_{i,\,j=1}^d\left\lbrace u(x+h(e_i+e_j))+u(x-h(e_i+e_j))+u(x+h(e_i-e_j))+u(x-h(e_i-e_j))\right.\right.\\ &\left.\left.-2u(x+he_i)-2u(x-he_i)-2u(x+he_j)-2u(x-he_j) +4u(x)\right\rbrace\right]. \end{align*} We define the operator $L_{h,2}$ as in~\eqref{eq:Lh2} and obtain \begin{lemma}\label{thm:mixed_4.2} There exists a constant $C>0$ independent of $u$ and $h$ such that \[ \|u\|_{h,\,2}\le C \|L_{h,\,2}u\|_{h,\,grid},\quad u\in\mathcal D_h. \] \end{lemma} \begin{proof} We observe that \begin{align*} p(\theta)&=-h^{2}\sum_{i=1}^d (2\cos{\theta_i}-2)+ \rho_3(h)\sum_{i,\,j=1}^d[ 2\cos{(\theta_i + \theta_j)} + 2\cos{(\theta_i -\theta_j)}-4\cos{\theta_i}-4\cos{\theta_j}+4]\\ &=h^{2}\sum_{i=1}^d (2-2\cos{\theta_i})+ \rho_3(h)\sum_{i,\,j=1}^d[4(1-\cos{\theta_i})(1-\cos{\theta_j})]\\ &\ge 4\rho_3(h)\sum_{i=1}^d (1-\cos{\theta_i})^2. \end{align*} Hence by Lemmas \ref{lem:3.3} and \ref{lem3.2} we obtain for $u\in\mathcal D_h$ \begin{align*} \|u\|_{h,2}^2 \le C \sum_{j=1}^d \|\partial_j^2 u\|_{h,\,grid}^2 = C \left\langle Q_h u,\,u \right\rangle_{h,\,grid} \le C (\rho_3(h))^{-1} \left\langle L_hu\,,\,u\right\rangle_{h,\,grid} \le C\left\langle L_hu\,,\,u\right\rangle_{h,\,grid}, \end{align*} where $Q_h$ is the difference operator defined in \eqref{Q_h} with $m=2$. The rest of the proof is similar to Lemma \ref{thm:mm_4.2} and hence omitted. \end{proof} We now prove the approximation result in this case. \begin{proof}[Proof of Theorem~\ref{approx_result}~\ref{thm:mixed_one}] As before the constant $C$ does not depend on $u$ and $f$. Using Taylor expansion we have for all $x\in R_h$ and for small $h$ $$L_hu(x)=Lu(x)+ (\rho_3(h)-1)\Delta_c^2 u(x) + h^{-2} \mathcal R_4(x)+\rho_3(h)h^{-4}\mathcal R_5(x)$$ where $\lvert \mathcal R_4(x)\rvert\leq CM_4h^4,\,\lvert \mathcal R_5(x)\rvert\leq CM_5h^5$.
We obtain for $x\in R_h$ \begin{align*} L_he_h(x)&=L_hu(x)-L_hu_h(x)\\ &=Lu(x)+ (\rho_3(h)-1)\Delta_c^2 u(x) + h^{-2} \mathcal R_4(x)+\rho_3(h)h^{-4}\mathcal R_5(x)-L_hu_h(x)\\ &=(\rho_3(h)-1)\Delta_c^2 u(x) + h^{-2} \mathcal R_4(x)+\rho_3(h)h^{-4}\mathcal R_5(x). \end{align*} For $x\in R^*_h$ we have \begin{align}\label{eq:looong} L_{h,2}R_he_h(x)&= L_hR_he_h(x)=L_he_h(x)= (\rho_3(h)-1)\Delta_c^2 u(x) + h^{-2} \mathcal R_4(x)+\rho_3(h)h^{-4}\mathcal R_5(x). \end{align} As in the case of $\Delta_c^2$ we have for any $y \in B_h \setminus\partial D$ $$u(y)=u_h(y)+\mathcal R_2(y)$$ where $\lvert \mathcal R_2(y)\rvert\leq C M_2h^2$. Therefore, for $x\in B_h^\ast$, \begin{align} L_{h,2} R_he_h(x)&= h^2 L_h R_he_h(x)\nonumber\\ &=h^2 \left\{L_he_h(x)- h^{-4} \sum_{y\in S(x)} \left(h^2 C(y) e_h(y) + \rho_3(h)C^{'}(y) e_h(y)\right)\right\}\nonumber\\ &\stackrel{\eqref{eq:looong}}{=} h^2(\rho_3(h)-1)\Delta_c^2 u(x) + \mathcal R_4(x) +\rho_3(h)h^{-2}\mathcal R_5(x) \nonumber\\ &\qquad \qquad +C\mathcal R^{'}_2(x) + C h^{-2} \rho_3(h)\mathcal R^{''}_2(x)\label{eq:long} \end{align} where $S(x)$ is defined as in the $\Delta_c^2$ case, $C(y),\, C^{'}(y)$ are constants depending on $y$, and $\lvert \mathcal R^{'}_2(x)\rvert\leq CM_2h^2,\,\lvert \mathcal R^{''}_2(x)\rvert\leq CM_2h^2$.
We have \begin{align*} \|L_{h,2} R_he_h\|_{h,grid}^2 &= h^d\sum_{x\in R_h} (L_{h,2} R_he_h(x))^2\\ &= h^d \left[ \sum_{x\in R_h^\ast} (L_{h,2} R_he_h(x))^2+ \sum_{x\in B_h^\ast} (L_{h,2} R_he_h(x))^2 \right] \end{align*} which, using the bounds~\eqref{eq:looong}-\eqref{eq:long}, turns into \begin{align*} \|L_{h,2} R_he_h\|_{h,grid}^2 &\le C h^d \sum_{x\in R_h^\ast} \left( (\rho_3(h)-1)^2M_4^2 + M_4^2 h^4 + (\rho_3(h))^2M_5^2h^2 \right)\\ & + Ch^d\sum_{x\in B_h^\ast} \left(h^4(\rho_3(h)-1)^2M_4^2 +M_4^2 h^8 + (\rho_3(h))^2 M_5^2h^6+ M_2^2 + M_2^2 (\rho_3(h))^2h^4 \right)\\ &\le C[(\rho_3(h)-1)^2M_4^2 + M_4^2 h^4 + (\rho_3(h))^2M_5^2h^2 + h^5(\rho_3(h)-1)^2M_4^2 \\ &+M_4^2 h^9 + (\rho_3(h))^2 M_5^2h^7+ M_2^2 h+ M_2^2 (\rho_3(h))^2h^5] \end{align*} where in the last inequality we have used that the number of points in $B_h^\ast$ is $O(h^{-(d-1)})$. Finally, to complete our proof we obtain using Lemma~\ref{lem:3.1} and Lemma~\ref{thm:mixed_4.2} \begin{align*}\label{eq:mixed_errorbound} \|R_he_h\|_{h,\,grid}^2 &\le C[(\rho_3(h)-1)^2 M_4^2 + M_4^2 h^4 + (\rho_3(h))^2M_5^2h^2 + h^5(\rho_3(h)-1)^2 M_4^2\\ &+M_4^2 h^9 + (\rho_3(h))^2 M_5^2h^7+ M_2^2 h+ M_2^2 (\rho_3(h))^2h^5]\\ & \le C\left[M_5^2h^2 + M_4^2(\rho_3(h)-1)^2 + M_4^2 h^4 + M_2^2 h \right].\qedhere \end{align*} \end{proof} \subsubsection{Laplacian case: proof of Theorem~\ref{approx_result}~\ref{thm:gff_one}} In this subsection we consider $L=-\Delta_c$. The continuum problem~\eqref{eqa:continuum} is defined with one boundary condition, whereas the discrete Dirichlet problem involving $L_h$ requires two boundary conditions. The contribution of $\Delta_h^2$ is negligible in the limit, but for non-zero $h$ it is not; it is the effect of $\rho_1(h)$ which makes $\Delta_h^2$ vanish in the limit. However, if we simply applied the same proof as for Theorem~\ref{approx_result}~\ref{thm:mm_one}-\ref{thm:mixed_one}, the method would fail to estimate the error whenever $\rho_1(h)$ does not decay faster than $h$.
This is due to the fact that one would treat the boundary layer effect and the discretization effect simultaneously. To take care of the different scales at which these effects are seen, we use a suitable cutoff function instead of truncating the discrete operator $L_h$ near the boundary. Using the cutoff we define a function $g$ which is equal to $u$ near the boundary of $D$ and has nice bounds on its derivatives. With the help of $g$ we first take care of the boundary effect. Then we let the discretization parameter $h$ go to zero and estimate the error. Let us first define the cutoff function. Recall that $\delta := \max\{h, \sqrt{\rho_1(h)}\}$. We define \begin{align*} D^{\ell\delta} := \{x\in\R^d: \mathrm{dist}(x, \partial D) < \ell\delta\},\quad \ell=1,2,\ldots \end{align*} where $\mathrm{dist}(x, \partial D) = \inf \{\|x-y\|: y\in\partial D \} $. Then we have the following Lemma, which follows from Theorem 1.4.1 and equation (1.4.2) of~\cite{hormander2015}. \begin{lemma} One can find $\phi \in C_c^\infty\left(\overline{D^{7\delta}}\right)$ with $0\le \phi \le 1$ so that $\phi =1$ on $\overline{D^{5\delta}}$ and \begin{equation}\label{eq:cutoff_derivative} \sup_{x\in\R^d} |D^\alpha\phi(x)| \le C_\alpha \delta^{-|\alpha|}, \end{equation} where $C_\alpha$ depends on $\alpha$ and $d$. \end{lemma} We now define a function $g:\overline D \to \R$ by $g= \tilde{\phi} u$, where $\tilde{\phi} $ is the restriction of $\phi$ to $\overline D$. We will use the following bounds on $g$ and its derivatives. \begin{lemma}\label{lem:bounds_g} We have \begin{enumerate} \item $\displaystyle \sup_{x\in D}|g(x)| \le CM_1 \delta,$ \item $\displaystyle \sum_{|\alpha| \le 1}\sup_{x\in D}|D^\alpha g(x)| \le CM_1, $ \item $\displaystyle\sum_{|\alpha| \le 2}\sup_{x\in D}|D^\alpha g(x)| \le C(M_1 \delta^{-1} + M_2) .$ \end{enumerate} Here we recall that $M_k= \sum_{|\alpha| \le k}\sup_{x\in D}|D^\alpha u(x)|$.
\end{lemma} \begin{proof} We first observe that $g=0$ on $D\setminus \overline {D^{7\delta}}$. For any $x$ in $D\cap \overline{D^{7\delta}}$ we use Taylor series expansion and the fact that $u=0$ on $\partial D$ to obtain $|u(x)| \le CM_1 \delta$. The bounds now follow from the definition of $g$ and~\eqref{eq:cutoff_derivative}. \end{proof} We are now ready to prove Theorem~\ref{approx_result}~\ref{thm:gff_one}. \begin{proof}[Proof of Theorem~\ref{approx_result}~\ref{thm:gff_one}] For our convenience we denote by $\|\cdot\|_{\ell^2(A)}$ the $\|\cdot\|_{h,\,grid}$ norm of the projection of any grid-function onto the finite subset $A$ of $h\Z^d$. More precisely, for any finite subset $A$ of $h\Z^d$ and function $v:h\Z^d\to \R$ we define \begin{align} \|v\|_{\ell^2(A)}^2:= h^d\sum_{x\in A} v(x)^2. \end{align} We extend $u$ and $g$ to $\R^d$ by defining their values to be zero outside $\overline D$. Also let us extend $u_h$ by defining it to be zero on $h\Z^d\setminus D_h$. Note that $B_h \subset \overline D \cap \overline{D^{5\delta}}$. Thus by definition we have $e_h =u=g$ on $B_h$. Therefore from Lemma~\ref{lem:3.1} we have \begin{align}\label{eq:revised_0} \|R_he_h\|_{h,\,grid}^2 &\le 2 \|e_h-g\|_{\ell^2(R_h)}^2 + 2\|g\|_{\ell^2(R_h)}^2 \nonumber\\ &\le C \|\nabla_h (e_h-g)\|_{\ell^2(R_h\cup \partial R_h)}^2 + 2\|g\|_{\ell^2(R_h)}^2 \end{align} where $$\nabla_h v(x):= (\partial_j v(x))_{j=1}^d,$$ $$\|\nabla_h v\|_{\ell^2(A)}^2:= \sum_{j=1}^d \|\partial_jv\|_{\ell^2(A)}^2,$$ and $\partial R_h :=\{x\in h\Z^d\setminus R_h: \mathrm{dist}_{h\Z^d}(x,\,R_h)=1\}$ with $\mathrm{dist}_{h\Z^d}$ being the graph distance in the lattice $h\Z^d$. We have for $x\in R_h$ \begin{align*} L_h(e_h-g)(x)= L_hu(x)-f(x) -L_hg(x). \end{align*} Thus \begin{equation}\label{eq:revised_1} \left\langle L_h(e_h-g), e_h-g\right\rangle_{h,\,grid}= \left\langle L_hu-f, e_h-g\right\rangle_{h,\,grid} + \left\langle -L_hg, e_h-g\right\rangle_{h,\,grid}.
\end{equation} Using integration by parts we obtain \begin{equation}\label{eq:revised_2} \left\langle L_h(e_h-g), e_h-g\right\rangle_{h,\,grid} = \|\nabla_h(e_h-g)\|_{\ell^2(R_h\cup \partial R_h)}^2 + \rho_1(h) \|\Delta_h(e_h-g)\|_{\ell^2(R_h\cup \partial R_h)}^2. \end{equation} For the first term in equation~\eqref{eq:revised_1} we have, using Lemma~\ref{lem:3.1}, \begin{align}\label{eq:revised_3} |\left\langle L_hu-f, e_h-g\right\rangle_{h,\,grid}|& \le \|L_hu-f\|_{\ell^2(R_h)} \|e_h-g\|_{\ell^2(R_h)} \nonumber\\ &\le C \|L_hu-f\|_{\ell^2(R_h)} \|\nabla_h(e_h-g)\|_{\ell^2(R_h\cup \partial R_h)} \nonumber\\ &\le C \|L_hu-f\|^2_{\ell^2(R_h)} + \frac14 \|\nabla_h(e_h-g)\|_{\ell^2(R_h\cup \partial R_h)}^2. \end{align} For the second term of equation~\eqref{eq:revised_1} we obtain using integration by parts \begin{align}\label{eq:revised_4} |\left\langle -L_hg, e_h-g\right\rangle_{h,\,grid}| &\le |\left\langle -\Delta_hg, e_h-g\right\rangle_{h,\,grid}| + \rho_1(h)|\left\langle \Delta_h^2g, e_h-g\right\rangle_{h,\,grid}| \nonumber\\ & \le |\left\langle \nabla_hg, \nabla_h(e_h-g)\right\rangle_{h,\,grid}| + \rho_1(h)|\left\langle \Delta_hg, \Delta_h(e_h-g)\right\rangle_{h,\,grid}|\nonumber\\ & \le \|\nabla_hg\|^2_{\ell^2(R_h\cup \partial R_h)} + \frac14 \|\nabla_h(e_h-g)\|^2_{\ell^2(R_h\cup \partial R_h)} + \rho_1(h)\|\Delta_hg\|^2_{\ell^2(R_h\cup \partial R_h)}\nonumber\\& + \rho_1(h)\|\Delta_h(e_h-g)\|^2_{\ell^2(R_h\cup \partial R_h)}. \end{align} Combining~\eqref{eq:revised_1},~\eqref{eq:revised_2},~\eqref{eq:revised_3} and~\eqref{eq:revised_4} we get \begin{align*} \|\nabla_h(e_h-g)\|^2_{\ell^2(R_h\cup \partial R_h)} \le C \|L_hu-f\|^2_{\ell^2(R_h)} + C \|\nabla_hg\|^2_{\ell^2(R_h\cup \partial R_h)} + C\rho_1(h)\|\Delta_hg\|^2_{\ell^2(R_h\cup \partial R_h)}. 
\end{align*} Substituting this in~\eqref{eq:revised_0} we obtain \begin{align}\label{eq:revised_main} \|R_he_h\|_{h,\,grid}^2 &\le C \|L_hu-f\|^2_{\ell^2(R_h)} + C \|\nabla_hg\|^2_{\ell^2(R_h\cup \partial R_h)}\nonumber\\ & + C\rho_1(h)\|\Delta_hg\|^2_{\ell^2(R_h\cup \partial R_h)} + 2\|g\|_{\ell^2(R_h)}^2. \end{align} We now bound each of the terms on the right-hand side of the inequality~\eqref{eq:revised_main}. Using Taylor series expansion we have for all $x\in R_h$ $$L_hu(x)=Lu(x)+ h^{-2}\mathcal R_4(x) + h^{-4}\rho_1(h) \mathcal R_4^{'}(x)$$ where $\lvert \mathcal R_4(x)\rvert\leq CM_4h^4$ and $\lvert \mathcal R_4^{'}(x)\rvert\leq CM_4h^4$. Now \begin{align*} \|L_hu-f\|^2_{\ell^2(R_h)} & \le h^d \sum_{x\in R_h} (M_4^2 h^4 + M_4^2 \rho_1(h)^2 ) \le C M_4^2 \delta^4. \end{align*} For the second term of~\eqref{eq:revised_main} we have the bound \begin{align*} \|\nabla_hg\|^2_{\ell^2(R_h\cup \partial R_h)} &= h^d\sum_{x\in (R_h \cup\partial R_h)\cap \overline{D^{8\delta}}} h^{-2}\sum_{i=1}^d (g(x+he_i)-g(x))^2\\ &\le C h^d \sum_{x\in (R_h \cup\partial R_h)\cap \overline{D^{8\delta}}} M_1^2 \le CM_1^2 \delta \end{align*} where in the first inequality we used Taylor expansion and Lemma~\ref{lem:bounds_g}, and in the last inequality we used the fact that the number of points in $(R_h \cup\partial R_h)\cap \overline{D^{8\delta}}$ is $O(\delta h^{-d})$. Similarly, for the third term, using Taylor expansion, Lemma~\ref{lem:bounds_g} and the fact that the number of points in $(R_h \cup\partial R_h)\cap \overline{D^{8\delta}}$ is $O(\delta h^{-d})$, we have \begin{align*} \rho_1(h)\|\Delta_hg\|^2_{\ell^2(R_h\cup \partial R_h)}&= \rho_1(h)h^d\sum_{x\in (R_h \cup\partial R_h)\cap \overline{D^{8\delta}}} (\Delta_hg(x))^2\\ &\le C \rho_1(h)h^d\delta h^{-d}(M_1 \delta^{-1} + M_2)^2\\ &\le C\left( M_1^2 \sqrt{\rho_1(h)} + M_2^2 \rho_1(h)\delta \right).
\end{align*} Finally we obtain \begin{align*} \|g\|_{\ell^2(R_h)}^2 & = h^d\sum_{x\in R_h\cap \overline{D^{7\delta}}} g(x)^2 \\ & \le C h^d\sum_{x\in R_h\cap \overline{D^{7\delta}}} M_1^2 \delta^2\\ & \le C M_1^2 \delta^3. \end{align*} Here in the first inequality we used Lemma~\ref{lem:bounds_g}, and in the last inequality we used the fact that the number of points in $R_h \cap \overline{D^{7\delta}}$ is $O(\delta h^{-d})$. Combining all these bounds we obtain from~\eqref{eq:revised_main} \begin{align*} \|R_he_h\|_{h,\,grid}^2 &\le C\left(M_4^2 \delta^4 + M_1^2 \delta + M_1^2 \sqrt{\rho_1(h)} + M_2^2 \rho_1(h)\delta + M_1^2 \delta^3\right) \\ &\le C\left(M_4^2 \delta^4 + M_2^2 \rho_1(h)\delta + M_1^2 \delta \right).\qedhere \end{align*} \end{proof}
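The Taylor-expansion step above, $L_hu = Lu + h^{-2}\mathcal R_4 + h^{-4}\rho_1(h)\mathcal R_4'$ with $\lvert\mathcal R_4\rvert \le CM_4h^4$, rests on the standard second-order accuracy of the central-difference Laplacian. A minimal numerical sketch (illustrative only; the test function $u=\sin$ and evaluation point are our own choices, not from the text) confirms the $O(h^2)$ truncation order in one dimension:

```python
import numpy as np

def laplacian_truncation_error(h, x=0.7):
    """Truncation error of the second-order central difference
    approximation of u''(x) for the smooth test function u = sin."""
    u = np.sin
    discrete = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2
    exact = -np.sin(x)  # u''(x) for u = sin
    return abs(discrete - exact)

# Halving h should reduce the error by a factor of ~4 (second order).
e_h = laplacian_truncation_error(1e-2)
e_h2 = laplacian_truncation_error(5e-3)
ratio = e_h / e_h2
```

The observed error ratio of approximately 4 under grid halving is exactly the $O(h^2)$ behavior that the remainder bound $\lvert\mathcal R_4\rvert \le CM_4h^4$ (after division by $h^2$) encodes.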
\section*{Introduction} Ultrafast electronic dynamics of solid-state materials, particularly under light excitation, are of great interest both fundamentally and practically due to the wide applications of optoelectronic devices, such as transistors, photovoltaics, or photodetectors. While conventional semiconductor-based optoelectronic devices are operated based on the light intensity, lightwave-based petahertz electronics describes the manipulation of charge carrier dynamics by the electromagnetic field of light owing to the precisely controlled carrier-envelope phase (CEP) in few-cycle laser pulses at few- and sub-femtosecond time scales~\cite{krausz2014attosecondmetrology_signal_processing, Goulielmakis2007}. The driven ultrafast electronic dynamics, induced by intense few-cycle pulses, can occur at time scales of 10--1000 attoseconds ($\SI{1}{as}=\SI{e-18}{s}$)~\cite{Krausz2009ReviewAttosecondPhysicsReview, Nisoli2017,Wolf2017}. For instance, attosecond light pulses can be created in gases as a result of the highly nonlinear high-harmonic generation (HHG) process, where electron photoionization of gas atoms is restricted to a time window much shorter than a half-cycle of the oscillation of the driving laser light field, typically on the order of sub-100\,as for optical frequencies~\cite{Goulielmakis2008, Zhao2012}. While the lightwave-induced strong-field processes in gas-phase atoms and molecules have been under intensive investigation, leading to the birth of attosecond physics (see e.g.~\cite{Corkum_3stepmodel,Kulander1993,Lewenstein1994_LewModel,Salieres1999HHGcoherence,Brabec2008strong_field,Krausz2009ReviewAttosecondPhysicsReview, calegari2016advancesAttosecondReview,Symphony2019}), the exploration of lightwave-driven petahertz electronics in condensed matter at conditions of extreme nonlinearity is still in its early phase and has become an emerging field of research. Lightwave electronics in solids, i.e.
dielectrics, semiconductors and metals, involves characteristically different and richer electronic dynamical processes due to their complex band structure~\cite{Ghimire2014}. Current research monitors light-field-driven electron motion in solids mostly through either optical signal detection, e.g. HHG~\cite{chin2001HHGsolids_but_explained_perturbatively, Reis2011HHGsolids_first_observation,Schubert2014HHG_midIR, luu2015HHGsolids_halfcycle_pulses,vampa2015linking,Reis2018solidHHGreview} or pump-probe experiments~\cite{Schultze2014, Keller2018}; detection of the emitted electrons~\cite{Kling2017}; or light-induced current sampling~\cite{schiffrin2013optical, schultze2013controlling, paasch2014solid, paasch2016multiphotontunneling}. Moreover, there are even proposals to detect topological order in topological insulators~\cite{Silva2018topological,Chacon2018topological,Bauer2019SuSchriefferChains} or properties of strongly correlated electrons in solids \cite{Silva2018HHGstronglycorrelated} using HHG. With the ability to control electronic dynamics in solids on attosecond time scales, the development of lightwave electronics holds promise for realizing ultrafast signal processing devices at frequencies up to the petahertz regime ($\SI{1}{PHz}= \SI{e15}{Hz}$)~\cite{krausz2014attosecondmetrology_signal_processing}. \begin{figure}[b] \centering\includegraphics[width=3.25in]{Figure1.pdf} \caption{ Lightwave-induced attosecond electron dynamics at nanostructures: An incident laser pulse induces enhanced nanoscale near-fields around the apices of the nanostructures, which can trigger ultrafast currents as well as high-harmonic generation. Ultrafast currents can originate from photoemission either directly from the nanostructures or from the valence band of the embedding medium. Moreover, a nonlinear polarization of the medium can also induce a current between the nanostructures.
High-harmonic generation in the medium occurs when the electrons are transferred to the conduction band and strongly accelerated by the driving laser pulse, either when they experience the non-parabolicity of the conduction band or by subsequent recombination with the parent hole. Both processes promise interesting applications and are used as a tool in attosecond metrology on the nanoscale.} \label{FieldEnhancement} \end{figure} The development of light-driven petahertz electronics is inherently connected to solid-state nanophysics in two ways. First, the natural length scale of electron motion on the few-attosecond time scale is on the order of one nanometer. Second, for the development of petahertz integrated circuits, the devices have to be both on nanometer length scales and be based on non-resistive processes, such as ballistic electron transport. Nanomaterials or nanostructured solids provide an excellent basis for the development of lightwave-driven electronics. Specifically, nanomaterials with tailored structure at extremely small scales possess unique electronic properties that can hardly be seen in bulk materials; for example, the strong quantum confinement effects and the greatly enhanced non-trivial quantum properties of semiconducting nanowires~\cite{Mourik2012,Lutchyn2018_Majorana,vanZanten}, quantum dots and semiconductor arrays~\cite{Hanson2007,White2017,Delteil2017_cqs}, two-dimensional materials~\cite{Scahibley2016, Cao2018}, topological insulators~\cite{Kane2010,Xu2018}, etc. Meanwhile, the strongly enhanced local electric field and its spatial inhomogeneity, through plasmonic effects or scattering, in the presence of artificial nanostructures dramatically modify the behavior of light-matter interactions, resulting in peculiar field-driven electronic dynamics at nanometer spatial scales (for recent reviews see Refs.~\cite{hommelhoff2015attosecond,ciappina2017attosecond}).
As schematically shown in Fig.~\ref{FieldEnhancement}, the enhanced local field at the nanostructures is strong enough to induce nonperturbative nonlinear processes in the material, such as HHG in solids or electron emission either from the nanostructures or from surrounding atoms, which can subsequently be driven by the lightwave and measured as an electric current in the nano-circuit or collected by a separate electron detector or spectrometer. The exploration of the interaction between extremely short laser pulses with down to attosecond durations and nanomaterials or nanostructured solids, which has been termed attosecond nanophysics, has strong implications not only for the development of petahertz electronics, but also for achieving extreme spatial and temporal resolution in microscopy; for instance, sampling and reconstruction of nanoscale near-field distributions on attosecond timescales has been demonstrated~\cite{foerg2016streaking_nanotips}. Attosecond nanophysics poses a tremendous theoretical challenge in terms of modeling electronic dynamics as a result of quantum confinement effects in the material and the associated strong-field processes induced by the near-field. In particular, the treatment of spatially inhomogeneous field-driven processes needs to take into account the higher multipole orders of the photon-electron coupling terms~\cite{praati1,praati2,prlhhg,ciappina2017attosecond}. The field of structured light beams has been developing rapidly in recent years. Such laser beams have topological properties themselves, involving the orbital angular momentum and polarization of light (cf.~\cite{Oreg, Emilio}). HHG with structured beams has very special properties \cite{Oreg1,PRLEmilio,Selftorque,Smirnova-chiral} that open a plethora of possible applications. Combining structured light beams with atto-nanophysics is thus especially challenging.
Studies of attosecond physics on the nanoscale were initiated more than a decade ago with the demonstration of strong-field effects on nanostructures, such as laser-triggered field emission from nanotips~\cite{Hommelhoff2006_FieldEmissionNanotip, Hommelhoff2006_UltrafastElectronPulses_CEPdiscussion, RopersLienau2007_localized_electronMicroscope, Barwick_2007_different_emission_regimes} and the demonstration of strong-field photoemission induced by laser irradiation~\cite{stockman2007attoPEEM,Hommelhoff2010_TransitionToTunneling,Ropers2010_TransitionToTunneling}. It has been shown that the highly nonlinear tunnelling photoemission process of metallic nanotips or nanostructures, and thus the emitted individual electron bursts, can be finely tuned on the attosecond time scale by changing the CEP of the incident few-cycle laser pulse~\cite{zherebtsov2011controlled,Hommelhoff2011CEPnanotip,Lienau2014CEPnanotip_nonadiabaticRegime}. Several applications have therefore been derived from these studies, including ultrafast microscopy \cite{Ropers2014_ultrafastULED_graphene_single_electrons, Ernstdorfer2014pointprojection_nanocurrents, Ehberger2015coherent_tungsten, Lienau2018_nanofocusingholography}, ultrafast light-driven electronics such as light-driven diodes~\cite{Hommelhoff2015lighttriggeredDiode}, and metrology for the reconstruction of propagating light properties, including the CEP of few-cycle pulses~\cite{Hommelhoff2017CEPbeamReconstruction}. Recent research has provided new opportunities for the development of petahertz electronics and attosecond nanoscopy. For example, HHG can be seen in emerging two-dimensional materials~\cite{Reis2017HHG_monolayer, Yoshikawa2017} and in different artificial nanostructures~\cite{Kim2016HHG_nanotaper_plus_solid,Ropers2017tailored_HHG}; ultrafast light-field-driven currents, as seen in bulk dielectrics or wide-bandgap materials, have also been measured in monolayer graphene~\cite{ Hommelhoff2017CEPgraphene}.
In addition, the development of HHG by MHz-repetition-rate lasers makes attosecond photoelectron emission microscopy (Atto-PEEM) possible, a technique which combines the advantages of attosecond time resolution from the attosecond streaking spectroscopy technique~\cite{itatani2002attosecond_streaking_basics,kienberger2004attosecond_streaking_exp} with nanometer-scale spatial resolution from photoelectron microscopy~\cite{stockman2007attoPEEM}. Finally, ultrafast electron pulses, which in principle offer sub-nanometer resolution, are another frontier in attosecond metrology applications~\cite{Baum2016electronMicroscopyEMwaveforms, Ropers2017attosecondElectronPulseTrains}. This perspective is organized in two major parts: first, the recent development and potential of attosecond metrology for petahertz electronics applications is reviewed, which includes HHG using nanostructures as potential ultracompact XUV sources (subsection A) and ultrafast light-field-induced currents in nanostructures (subsection B); second, the progress, challenges, and potential of attosecond nanoscopy based on photoemission streaking spectroscopy (subsection A) and on ultrafast electron sources (subsection B) are discussed. \begin{figure*}[htpb!] \centering\includegraphics[width=\textwidth]{Figure2.pdf} \caption{ \label{Fig:HHGnano} High-harmonic generation in solids with nanostructures: (a) and (b) Localized surface plasmon resonance enhanced HHG with a nanoantenna array on a silicon surface [from \citer{Corkum2017plasmon_enhanced_HHG}]: (a) Scanning electron microscopy (SEM) image of the nanoantenna array with the antenna major axis parallel to the Si [110] direction. The nanoantenna array is designed with its plasmonic resonance at the center wavelength of the laser excitation for HHG.
(b) The measured harmonic signal, up to 9th order, is enhanced when nanoantennas are illuminated resonantly with the laser polarization parallel to the antenna major axis (red), compared to the signal when they are illuminated off-resonantly with the laser polarization perpendicular to the antenna major axis (green) and the signal from a bare Si surface (black). (c)-(e) Surface plasmon polaritons (SPPs) enhanced HHG in a metal-sapphire nanostructure [from~\citer{Kim2016HHG_nanotaper_plus_solid}]: (c) Schematic overview of the plasmonic enhanced HHG scheme: SPPs are generated and propagate along the metal-sapphire interface, and induce field enhancement close to the top of the cone, where HHG is emitted. (d) SEM image (I) and cross-section image (II) of a single cone-shape structure. (e) Measured harmonic signal at different input intensities. The inset shows the power scaling of the 7th and 9th order harmonic peaks. (f) and (g) Harmonic self-focusing from a Fresnel zone plate (FZP) pattern fabricated by gallium implantation in a silicon surface [from~\citer{Ropers2017tailored_HHG}]: (f) Image of the FZP collected from its third harmonic emission at the sample position. The inset shows a SEM image of the FZP. (g) Spatial characterization of the 5th harmonic emission from the FZP, which shows three focus orders.} \end{figure*} \section*{Petahertz electronics} \subsection*{High-order harmonic generation with nanostructured solids and applications} Nanostructures were initially utilized for enhancing HHG in gases, which was expected to provide the potential for realizing HHG at MHz-repetition rates~\cite{kim2008HHG_better_atomic_line, kim2011HHG_tapered_atomic_line,Ropers2012nanostructure_atomic_line_statement, Ropers2013atomic_line_also_in_conventional_HHG}. 
However, it has later been shown that the observed MHz-rate XUV light generated in the vicinity of nanostructures originates from incoherent atomic line emission rather than actual HHG~\cite{Ropers2012nanostructure_atomic_line_statement, Ropers2013atomic_line_also_in_conventional_HHG,Kim2016HHG_nanotaper_plus_solid}. The major problem here is that the effective volume in which field enhancement takes place for HHG ($\approx 10^{-15} \:\rm mm^3$) is much smaller than that in conventional HHG ($\approx10^{-2} \:\rm mm^3$). Therefore, far fewer gas atoms can effectively contribute to the signal, which cannot be compensated by the increase in the repetition rate~\cite{raschke2013HHG_plasmonics}. HHG in solids, which contain a much higher atomic density, could circumvent this problem. In recent years, HHG in solids has been studied intensively~\cite{chin2001HHGsolids_but_explained_perturbatively, Schubert2014HHG_midIR, luu2015HHGsolids_halfcycle_pulses, vampa2015linking, Reis2011HHGsolids_first_observation}. Different from gas-based HHG, which can be well described by the classical three-step model~\cite{Corkum_3stepmodel}, the generation mechanism of HHG in solids is much more complex, involving inter- and intra-band electronic dynamics~\cite{vampa2017HHGsolid_TheoreyReview}. Therefore, the classical three-step model is no longer well-suited, since both the electron and hole dynamics within the entire Brillouin zone of the crystal need to be considered~\cite{Reis2018solidHHGreview}. Practically, HHG in solids exhibits distinct features compared to that in gases. For instance, the HHG spectrum is sensitive to the crystal symmetry and band structure, which allows the appearance of even-order harmonic emission and permits the reconstruction of the band structure from the measured results. Moreover, since the bandgap of solids is much smaller than the ionization potential of noble gas atoms, HHG can be observed at lower, sub-$\mu$J pulse energies.
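The effective-volume argument above can be made quantitative with a rough back-of-envelope estimate; the two volumes are those quoted in the text, while the kHz-to-GHz repetition-rate gain is our own illustrative upper bound:

```python
# Effective field-enhancement volume vs. conventional HHG focal volume
V_nano_mm3 = 1e-15   # near a nanostructure (value quoted in the text)
V_conv_mm3 = 1e-2    # conventional gas-jet HHG (value quoted in the text)

# At equal gas density, the number of emitters scales with the volume.
emitter_deficit = V_conv_mm3 / V_nano_mm3           # ~1e13 fewer atoms

# Even a generous kHz -> GHz increase in repetition rate
# recovers only a factor of ~1e6 in average signal.
rep_rate_gain = 1e9 / 1e3

remaining_shortfall = emitter_deficit / rep_rate_gain  # ~1e7
```

Even under these generous assumptions, a shortfall of roughly seven orders of magnitude remains, which is why the far higher atomic density of solids is attractive.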
On the other hand, since the emitted XUV radiation can be re-absorbed shortly after its creation, depending on the type of crystal and the emitted photon energies, in many cases only the last few nanometers of a bulk sample contribute to the far-field radiation. Nevertheless, considerable HHG flux can be achieved due to the extremely high atomic density in solids~\cite{kruchinin2018colloquium,vampa2017HHGsolid_TheoreyReview,huttner2017HHGsolid_review}. High-harmonic generation in nanostructured solids has recently been demonstrated by several groups~\cite{Kim2016HHG_nanotaper_plus_solid, Corkum2017plasmon_enhanced_HHG, Reis2017HHG_monolayer, Yoshikawa2017,Guo2018}. In the case of solids combined with plasmonic nanostructures, thanks to the resonantly enhanced local electric field, a significant increase of HHG emission can be seen, see e.g. Fig.~\ref{Fig:HHGnano}(a) and~(b)~\cite{Corkum2017plasmon_enhanced_HHG}. A further demonstration of HHG intensity enhancement has been shown with tapered nano-waveguides, see Fig.~\ref{Fig:HHGnano}(c)--(e)~\cite{Kim2016HHG_nanotaper_plus_solid}. Here, the field enhancement at the tapered nanocone is induced by surface plasmon polaritons (SPPs) propagating at the interface between the outer gold layer and the inner sapphire (see panel~(c)). In a different approach~\cite{Ropers2017tailored_HHG}, the surface of semiconductors was tailored in different ways, either to manipulate the divergence properties of the high-order harmonic radiation, e.g.~via Fresnel zone plates, as shown in Fig.~\ref{Fig:HHGnano}(f) and~(g), or to use the field enhancement for HHG at lower input intensities. The focal-spot size generated by the Fresnel zone plates was almost diffraction-limited, and integrating the generation and focusing steps could allow for very compact devices. Finally, HHG emission from an all-dielectric metasurface was recently demonstrated~\cite{Reis2018sHHGmetasurface}.
There, the interplay between a bright (radiating) and a dark (non-radiating) mode led to drastically increased HHG emission and characteristic resonance effects. Since the field is directly enhanced in the structure itself, a larger volume can contribute to the HHG signal, an advantage that is also increasingly being used in nanostructure-enhanced perturbative (second or third) harmonic generation~\cite{timofeeva2018anapoles}. The ability to tailor the solid surface opens the door to designing and tuning the generated harmonic emission beams with desired properties, such as certain polarization states~\cite{fleischer2014spin, azoury2019interferometric} or different orbital angular momentum (OAM) content of the harmonic radiation~\cite{gauthier2017tunable, dorney2019controlling,gauthier2019OptLett}, by simple laser excitations and more elaborate beam-shaping schemes with structured light for short-wavelength sources, which is currently still unavailable. One of the main limitations, however, is that the enhanced electric field might damage both the structure itself and the solid substrate at the laser intensities required for HHG~\cite{Corkum2017plasmon_enhanced_HHG, pfullmann2013damagesBowtie}. Therefore, the enhancement effects on HHG can be gradually undermined; for example, it has been reported that damage of the plasmonic nanostructure reduces the HHG yield over time by one order of magnitude~\cite{Kim2016HHG_nanotaper_plus_solid}. The major damage mechanism at intensities below direct laser ablation, as shown by several studies, is the heating induced by the electric field inside the nanostructure and the less efficient heat conduction within and away from the interaction region~\cite{pfullmann2013damagesBowtie,Lei2013DamageAgNanowires,summers2014opticalDamageThreshold}. Therefore, careful design of the nanostructures for HHG is required, taking into account the optically induced heating damage.
To this end, the all-dielectric metasurface with low losses at optical frequencies is, in principle, promising, though optical damage could still occur~\cite{Reis2018sHHGmetasurface}. The development of waveform-controlled few-cycle sources at longer wavelengths~\cite{Homann2012, fattahi2014thirdgeneration_FSlasers, Liang2017, Neuhaus2018, Lu2018} will also help reduce optical damage for most materials, due to the lower excitation photon energies and consequently suppressed multiphoton excitation and ionization processes~\cite{Reis2018solidHHGreview}. The development of HHG in solids, particularly with nanostructures, is expected to lead to the realization of the next generation of ultracompact XUV light sources, which could be applied to, e.g., scanning attosecond microscopy, with up to MHz repetition rates obtained directly from the unamplified femtosecond pulses of laser oscillators. \subsection*{Ultrafast currents from nanostructures and applications} \begin{figure}[htbp!] \centering\includegraphics[width=3.5in]{Figure3.pdf} \caption{ \label{Fig:NanoCurrents} Electric-field-sensitive ultrafast currents from nanostructures: (a) and (b) CEP-controlled tunneling current through a single bow-tie nanogap [from~\citer{Leitensdorfer2016CEPcurrents}]: (a) Average current (red line) measured through a bow-tie nanostructure as a function of the CEP of the few-cycle laser pulse. The waveform of the laser pulse at three different CEPs is also shown. (b) SEM image of the bow-tie structure. (c) and (d) CEP-dependent photoemission current from a nanostructure array [from~\citer{kaertner2017CEPcurrents}]: (c) Schematic experimental setup. A few-cycle laser pulse is focused onto a nanostructure array. Photoemitted electrons are collected by the ITO collector across a micrometer-size gap under a positive bias voltage. (d) Stepwise changes in the source's CEP result in stepwise changes in the phase of the current detected by a lock-in amplifier.
The upper inset shows an SEM image of the structure. The lower inset shows the deviation of the measured phase of the electron current with respect to the CEP set value of the laser pulse. } \end{figure} Light-field-induced currents can be measured in strong-field photoemission from metallic nanostructures and in solid-state materials, particularly dielectric materials, and it is the combination of both that will form the foundation of light-driven petahertz electronics. Strong-field photoemission from nanostructures, such as nanotips~\cite{Hommelhoff2006_FieldEmissionNanotip, Hommelhoff2006_UltrafastElectronPulses_CEPdiscussion, Barwick_2007_different_emission_regimes, RopersLienau2007_localized_electronMicroscope} and nanofilms~\cite{irvine2005electron_plasmon_maybetunneling, Dombi2010_strongfield_plasmons, Dombi2011strongfield_noCEP}, has been studied for over a decade. The control of electron emission on attosecond time scales by tuning the CEP of few-cycle pulses was first discussed in 2007 and has been studied intensively ever since~\cite{stockman2007attoPEEM, zherebtsov2011controlled, Hommelhoff2011CEPnanotip, Lienau2014CEPnanotip_nonadiabaticRegime}. It has been shown that the energy-resolved spectra of photoemitted electrons in the regime close to the cutoff energy are strongly modulated by the CEP, which can be used to retrieve the position-resolved absolute phase of a laser beam~\cite{Hommelhoff2017CEPbeamReconstruction}. More recently, experiments have shown that the total electron yield from a single junction~\cite{Leitensdorfer2016CEPcurrents} or from an array of nanostructures~\cite{kaertner2017CEPcurrents} is also CEP-dependent, see Fig.~\ref{Fig:NanoCurrents}. The control of the current flowing across the junctions by the laser pulse's CEP carries great potential for ultrafast current switching for the realization of petahertz electronics.
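The origin of this CEP dependence can be illustrated with a simple toy model (our own sketch, not the model used in the cited experiments): for a highly nonlinear emission process, the net directional current is roughly proportional to the time integral of an odd power of the instantaneous field. A cosine-like pulse ($\varphi_{\rm CEP}=0$) then yields a maximal net current, a sine-like pulse ($\varphi_{\rm CEP}=\pi/2$) yields essentially none by symmetry, and shifting the CEP by $\pi$ reverses the sign. The pulse parameters and nonlinearity order below are illustrative assumptions:

```python
import numpy as np

def net_current(phi_cep, wavelength=750e-9, fwhm=5e-15, order=5):
    """Toy model: net directional current ~ integral of E(t)**n for an
    odd nonlinearity order n (so that sign(E)|E|**n == E**n)."""
    c = 2.998e8                                # speed of light, m/s
    omega = 2.0 * np.pi * c / wavelength       # carrier angular frequency
    t = np.linspace(-20e-15, 20e-15, 40001)    # symmetric time grid, s
    dt = t[1] - t[0]
    envelope = np.exp(-2.0 * np.log(2.0) * (t / fwhm) ** 2)  # Gaussian
    field = envelope * np.cos(omega * t + phi_cep)
    return np.sum(field ** order) * dt

j_cos = net_current(0.0)           # cosine pulse: maximal asymmetry
j_sin = net_current(np.pi / 2.0)   # sine pulse: odd integrand, ~zero
j_pi = net_current(np.pi)          # CEP shift by pi flips the sign
```

The model also makes clear why few-cycle pulses are essential: for longer pulses the envelope varies slowly over a cycle, the cycle-averaged asymmetry washes out, and the CEP contrast vanishes.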
Moreover, the fact that these experiments were performed under ambient conditions without any vacuum apparatus opens up more general applications. The discovery of controlled ultrafast currents in dielectrics has evoked substantial interest in recent years~\cite{schiffrin2013optical,schultze2013controlling,paasch2014solid,paasch2016multiphotontunneling}. The attosecond response of dielectrics and wide-bandgap semiconductors, even without resonant photo-excitation of interband transitions, primarily results from light-induced strong-field effects, which permit unprecedentedly fast electronic switching at optical frequencies (PHz) with low heat dissipation~\cite{krausz2014attosecondmetrology_signal_processing}. The studies of petahertz electronics and related devices are still in an early phase. Light-driven electronic devices fabricated from dielectric materials, such as fused silica (SiO$_2$)~\cite{schiffrin2013optical}, quartz, sapphire~\cite{Kim2016_2}, and calcium fluoride (CaF$_2$)~\cite{Kim2016}, and wide-bandgap semiconductors, such as gallium nitride (GaN)~\cite{paasch2016multiphotontunneling}, exhibit a CEP-dependent photocurrent induced by few-cycle laser pulses. As one of the applications, a solid-state light phase detector that is able to measure the absolute CEP of few-cycle laser fields has been demonstrated~\cite{paasch2014solid}. Other types of devices, for instance the petahertz diode~\cite{Park2016} and optical memory devices~\cite{Kim2018}, have been theoretically proposed but not yet realized. In the meantime, attosecond spectroscopy measurements on these materials have revealed strong light-field-induced electronic dynamics at frequencies up to multi-PHz~\cite{schultze2013controlling,Goulielmakis2016,Sommer2016,Gotoh2016}. There are several challenges that need to be overcome.
First, the field-dependent signals in currents from nanostructures that have been demonstrated so far are rather weak, on the order of $10^{-4}$ of the total signal~\cite{kaertner2017CEPcurrents}, and single-shot field-dependent measurements have not yet been realized, while the potential limitations of space-charge effects are still not well understood. Furthermore, miniaturization of ultrafast current switches down to the few-nanometer scale, which means short distances between gates, seems to be not only beneficial in terms of integrated devices, but is also necessary to fully exploit the speed-up in switching rate by avoiding communication delays. Such down-scaling ultimately requires overcoming the diffraction limit and is still to be demonstrated. Moreover, for an implementation of PHz electronics, an increase of the clock rate beyond the currently demonstrated switches employing kHz sources is required. However, controlled few-cycle light waveforms are, so far, only available at MHz rates, and a further increase in repetition rate would demand corresponding developments in femtosecond laser technology. Finally, similar to nanostructure-enhanced HHG, optically induced damage, mainly due to heat deposition, is expected to be one of the major obstacles. Therefore, the creation of strong electric fields with low-energy input pulses is necessary. In order to down-scale the devices and to couple the exciting light into the nanoscale devices, nanoplasmonic systems, which confine electromagnetic energy down to sub-wavelength length scales, seem to be very promising. Near-field enhancement moreover effectively reduces the required input laser pulse energies.
Adiabatic nanofocusing of few-cycle light pulses~\cite{Stockman2004nanofocusing} is one of the potential building blocks, where a few-cycle pulse can be efficiently coupled to the SPP of a sharp metallic nanotip, allowing electromagnetic fields to be confined down to the nanometer scale with giant near-field intensities~\cite{Raschke2010superfocusing, Raschke2010adiabatic, Fabrizio2011hydrophobic, Lienau2012adiabatic_nanofocusing}. With the same nanotip geometry, the lightwave-controllable ultrafast current source from strong-field photoemission of nanostructures, as discussed above~\cite{Leitensdorfer2016CEPcurrents, kaertner2017CEPcurrents}, may also play a role, so that a combination of these two elements can be designed to scale the entire circuitry down to the nanometer scale. One possible route towards THz-repetition-rate few-cycle light waveforms would be the use of mode-locked quantum cascade lasers~\cite{hugi2012frequencyCombQCL,Dhillon2015QCL_ultrafast11ps,Barbieri2017QCL5ps} or solitons in microresonators~\cite{Gorodetsky2018SolitonsMicroresonators,pasquazi2018microReview,microresonators}, which currently provide repetition rates from GHz up to several THz. Their development is also fueled by possible applications in frequency comb spectroscopy~\cite{Picque2019frequencyCombs,picque2019frequency}. A laser source with such a high repetition rate ($f_{\rm rep}$) requires only a small resonator length ($L=c/f_{\rm rep}$, where $c$ is the speed of light), and integrated on-chip high-repetition-rate laser sources have already been demonstrated~\cite{Lipson2018BatteryoperatedIF}. We think that all building blocks for PHz integrated electronics are thus available, and we believe that bringing them together has the potential to revolutionize electronics. \section*{Attosecond nanoscopy} \subsection*{Attosecond photoemission electron microscopy} \begin{figure*}[t!]
\centering \includegraphics[width=6.5in]{Figure4.pdf} \caption{ Atto-PEEM and HHG in enhancement cavities: (a) Scheme of the Atto-PEEM principle~\cite{hommelhoff2015attosecond}. An XUV laser pulse is focused onto a nanostructure and photoemits electrons, which are subsequently accelerated by the nanoplasmonic field excited by a synchronized NIR laser pulse. The final electron energy and initial position are measured using a photoelectron emission microscope. (b) Scheme of an enhancement cavity with an intracavity HHG gas jet, outcoupling of the XUV radiation through a pierced mirror (OC), and a time-of-flight photoemission electron microscopy (TOF-PEEM) apparatus. (c) Example of space-charge effects limiting the photoemission microscopy imaging quality of plasmonic nanostructures [from~\citer{chew2012TOF_PEEEM}]. Here, XUV pulses are generated using single-pass HHG in a neon gas target with a kHz-repetition-rate laser, and images are collected at the same exposure time. At high XUV photon flux, i.e. gas pressures above 70 mbar, space-charge effects become observable and blur the image. (d) Comparison of HHG via single-pass Ti:Sa- and Yb-based laser amplifier systems with enhancement cavities in regard to photoemission spectroscopy [from~\citers{pupeza2019ultrafastRABBITT, Chiang2015boosting, Frietsch2013RevSciIn, chew2012TOF_PEEEM, LHuillier2009PEEM_XUVtrains, Dakovski2010ufastXUVsource, Corder2018UVwoSC, mills2015MHzXuvSourcePE}]. The color shading of the background illustrates the expected space-charge broadening for a fixed spot size. The shape and color of the symbols indicate the laser system and XUV energy range, respectively. (e) Demonstration of attosecond-resolved photoemission sideband modulation on a tungsten surface from an enhancement cavity [from~\citer{pupeza2019ultrafastRABBITT}], which allows RABBITT experiments at MHz repetition rate.
The red dashed lines outline the phase errors of the fitting results, corresponding to a time precision of the measurement of 36\,as. The inset shows the photoelectron kinetic energy spectrum for two different delays between the XUV and NIR laser pulses and the energy range used to determine the sideband oscillation (grey shaded region).} \label{Fig:HHGcavity} \end{figure*} The principle of the attosecond-resolved photoemission electron microscope (Atto-PEEM) is shown in Fig.~\ref{Fig:HHGcavity}(a). A laser pulse with a properly selected central wavelength (optical light field in the range from UV to NIR) is used to photoexcite collective oscillations in nanoplasmonic structures; a co-propagating isolated attosecond XUV pulse hits the same nanostructures with a certain time delay $\Delta t$. Photoemitted electrons induced by the XUV pulse are spatially imaged by the energy-resolved time-of-flight photoemission electron microscope, allowing nanometer spatial resolution. Since the recorded electron energy depends on the localized near-field around the nanostructure at the time of photoemission, the delay-dependent electron energy spectrum reveals the dynamics of the local plasmonic field oscillations driven by the incident optical light field~\cite{stockman2007attoPEEM, stockman2018roadmap}. In this way, both attosecond temporal resolution and nanometer spatial resolution can be achieved and attosecond nanoscopy realized. Theoretical studies have shown that, since the photoemitted electrons from nanostructures can escape the near-field within an optical cycle, the recorded electron energy distribution in this case directly probes the electric near-field (the `instantaneous regime')~\cite{stockman2007attoPEEM,skopalova2011numerical_nanostreaking, sussmann2011numerical_nanostreaking, borisov2012numerical_nanostreaking, kelkensberg2012numerical_nanostreaking, prell2013numerical_nanostreaking, Scrinzi2014numerical_nanostreaking,ThummPRL}.
This is in contrast to the conventional attosecond streaking spectroscopy technique, where photoionized electrons from gases experience a quasi-homogeneous field of the NIR pulse and probe the vector potential of the optical light field (the `ponderomotive regime')~\cite{goulielmakis2004direct}. These two regimes can be characterized by an adiabaticity parameter $\delta=T_{\mathrm{esc}}/T_{\mathrm{opt}}$, where $T_{\mathrm{opt}}$ is the optical period and $T_{\mathrm{esc}}$ is the escape time of the electron~\cite{schoetz2017reconstruction_streaking}. Therefore, the interpretation of streaking traces of nanoscopic near-fields requires some prior knowledge of the spatial scale of the near-field itself. Experimentally, only one study has so far demonstrated the reconstruction of nanoscale near-fields of a tapered gold nanotip~\cite{foerg2016streaking_nanotips}, which, however, lacked direct spatial resolution. Therefore, extensive numerical simulations of the electric field distributions around the nanostructure were still necessary to interpret the results. Moreover, since the focal spot of the XUV is usually much larger ($\sim\mu$m$^2$) than the nanoscopic region of interest, typically just the hotspot of the nanostructure ($\sim$nm$^2$), only a small portion of the total electron yield is expected to contribute to the signal. The Atto-PEEM technique, in principle, inherits the limitations of contemporary photoemission electron microscopy (PEEM) using XUV attosecond pulse trains from HHG in gases by kHz laser sources~\cite{chew2012TOF_PEEEM, LHuillier2009PEEM_XUVtrains}. Specifically, space-charge repulsion effects at the sample surface as well as in the detector limit the admissible number of emitted electrons per pulse and thus the achievable spatial resolution. Fig.~\ref{Fig:HHGcavity}(c) shows an example image of gold plasmonic nanostructures collected by time-of-flight photoemission electron microscopy (TOF-PEEM).
The XUV pulses are produced by HHG in a neon gas target using a kHz repetition rate laser system. Increasing the gas pressure leads to an enhanced photon flux. For the same exposure time, the image becomes blurred at 70 mbar due to space-charge effects at the sample surface and within the electron optics of the detector. Additionally, at 90 and 100 mbar, multiple hits on the detector create artifacts, which further reduce the image quality. Therefore, the kHz repetition rate of the laser sources used for HHG has become one of the major obstacles that hinders the successful implementation of Atto-PEEM: with these sources, one would need to reduce the HHG flux to levels so low that extremely long data-acquisition times (on the order of several days to a week) would be necessary to reach sufficient statistics. This makes experiments impossible, since laser stability and possible surface contamination limit the achievable acquisition time to typically just several hours. With the development of HHG sources with MHz repetition rates, which are fueled by similar needs in conventional solid-state photoemission (e.g.~\cite{tao2016direct, eich2014trARPES}) and XUV frequency comb spectroscopy~\cite{Ye2012frequencyCombXUV}, we expect that the space-charge limitation can soon be overcome. Current techniques are based on enhancement cavities~\cite{pupeza2013MHzXUVsource, mills2015MHzXuvSourcePE, corder2017MHzXUVPE,Ye2012frequencyCombXUV,Kobayashi2015XUV_enhancementCavity,Ye2018phasematchedXUV} or single-pass systems either using optical parametric amplification (OPA)~\cite{vernaleken2011single, Limpert2016singlepass_HHG_review} or nonlinear compression of laser pulses~\cite{Limpert2017highAveragePower}, all of which require few-cycle laser pulses with high average powers in the generation target.
This is necessary since, firstly, HHG in atoms is an extremely nonlinear process which requires at least several tens of \si{\micro\joule} of pulse energy and, secondly, conversion efficiencies from the laser pulse to the HHG radiation are small, typically in the range of $10^{-5}$-$10^{-6}$~\cite{Limpert2016singlepass_HHG_review}. Furthermore, in order to control the HHG process and generate isolated attosecond pulses, it is essential to have tight waveform control of the laser pulses, particularly in terms of the CEP. Enhancement cavities allow up to several kW of average power at MHz repetition rate for the generation of high-order harmonics inside the cavity. This is possible because the laser pulse is effectively reused for subsequent roundtrips. Outcoupling of the HHG radiation can be done using either pierced mirrors~\cite{pupeza2019ultrafastRABBITT}, gratings~\cite{mills2015MHzXuvSourcePE} or Brewster plates~\cite{corder2017MHzXUVPE}. A schematic illustration of HHG in enhancement cavities is shown in Fig.~\ref{Fig:HHGcavity}(b), together with the TOF-PEEM setup. Recently, high-flux XUV pulse trains with MHz repetition rate generated from an enhancement cavity have been reported, and their application to attosecond-resolved experiments on solid surfaces demonstrates the advantage of reducing integration times from days to minutes, see Fig.~\ref{Fig:HHGcavity}(e)~\cite{pupeza2019ultrafastRABBITT}. However, a central challenge still remains in this technique: so far it only delivers attosecond pulse trains, since traditional gating techniques are difficult to apply. On the one hand, the circulating laser pulses inside the cavity are usually longer than few-cycle pulses because of the limited bandwidth of the cavities~\cite{Pupeza2017FewCycleCavities}; on the other hand, other gating schemes designed to work with longer pulses, such as polarization gating, are hard to implement in cavity-based schemes.
Nevertheless, concepts are being developed towards the generation of isolated attosecond pulses~\cite{hogner2017generationIAP}. Single-pass systems can provide few-cycle pulses at only several tens to hundreds of watts in average power~\cite{Limpert2016singlepass_HHG_review,Limpert2017highAveragePower}. Here, Yb-based systems have largely replaced Ti:Sapphire lasers, since they can deliver much higher average powers (albeit with lower bandwidths, which makes spectral broadening and amplification schemes necessary). In the OPA approach, a spectrally broad pulse, either from a broadband source or obtained from pulse broadening in a bulk crystal, is amplified by an optical parametric process. Since the parametric process only couples to virtual energy levels in the gain material, the amplification bandwidth can be tuned by the thickness and phase mismatch of the crystals. Moreover, the amplification process does not intrinsically require the absorption of any fraction of the photon energy in the crystal~\cite{fattahi2014thirdgeneration_FSlasers}; therefore, the heating load can be minimized, allowing for much higher average powers than e.g. with Ti:Sa systems. Nevertheless, there is still some (regular) absorption of the interacting waves, which leads to residual heating and thermal stress and ultimately limits the achievable average power~\cite{Limpert2013thermal}. In the nonlinear-compression approach, a high-energy laser pulse is broadened via self-phase modulation in a nonlinear medium, typically a hollow-core fibre filled with a noble gas, which is already a well-established technique in attosecond physics. Although cooling of the fibres can be necessary~\cite{Limpert2016scalability}, this concept seems to allow considerably higher average powers than OPA~\cite{Limpert2017highAveragePower}.
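The photon budget behind these source developments can be illustrated with a rough order-of-magnitude estimate. All numbers in the sketch below (photon energy, pulse energy, conversion efficiency, and a space-charge limit of roughly one usable photoelectron per pulse) are illustrative assumptions consistent with the ranges quoted above, not measured values of any particular system:

```python
# Back-of-the-envelope XUV photon / photoelectron budget.
# All parameters are illustrative assumptions, not measured values.

E_PHOTON_EV = 25.0           # assumed XUV photon energy (eV)
EFFICIENCY  = 1e-6           # assumed HHG conversion efficiency (1e-5..1e-6)
E_PULSE_J   = 50e-6          # assumed driving pulse energy (~tens of uJ)
ELECTRONS_PER_PULSE = 1      # assumed space-charge limit for imaging

def photoelectron_rate(rep_rate_hz):
    """Space-charge-limited photoelectron rate: ~one usable electron per pulse."""
    return ELECTRONS_PER_PULSE * rep_rate_hz

def xuv_photons_per_second(rep_rate_hz):
    """XUV photon flux for the assumed pulse energy and conversion efficiency."""
    photons_per_pulse = EFFICIENCY * E_PULSE_J / (E_PHOTON_EV * 1.602e-19)
    return photons_per_pulse * rep_rate_hz

for rate, label in [(1e3, "kHz amplifier"), (60e6, "MHz cavity")]:
    print(f"{label}: {photoelectron_rate(rate):.1e} e-/s, "
          f"{xuv_photons_per_second(rate):.1e} XUV photons/s")
```

Under these assumptions, a kHz amplifier caps the usable photoelectron rate at $\sim 10^{3}$ electrons per second, whereas a multi-MHz cavity gains four to five orders of magnitude at the same electrons-per-pulse limit, which is the core of the repetition-rate argument made in the text.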
Both of these single-pass concepts allow the generation of few-cycle pulses, and they are not restricted with respect to gating schemes; indeed, the generation of HHG radiation supporting single isolated pulses at 0.6\,MHz based on an optical parametric chirped-pulse amplification (OPCPA) system has been reported~\cite{Limpert2013towardsIAPMHz}. However, higher average powers and repetition rates are still desirable for the application to Atto-PEEM. A comparison of the different HHG sources applied to photoemission in terms of emitted photoelectrons per second is shown in Fig.~\ref{Fig:HHGcavity}(d). At present, enhancement cavities deliver the highest space-charge-limited photoelectron flux among all XUV sources thanks to their high repetition rate, while single-pass systems have the potential to catch up in the future. It will also be interesting to see if solid-state HHG might become a suitable alternative. A successful implementation of these new HHG sources will push forward the development of Atto-PEEM for attosecond nanoscopy. The new metrology techniques with the ability to measure electromagnetic fields on the attosecond temporal and nanometer spatial scale could not only allow the probing of new phenomena on the nanoscale, but ultimately also benefit the development of petahertz electronics, in particular with respect to ultrafast switches or plasmonic circuitry. \subsection*{Ultrafast electron pulses and applications} \begin{figure*}[htb!] \centering\includegraphics[width=\textwidth]{Figure5.pdf} \caption{ \label{Fig:ElectronMicroscopy} Applications of ultrafast electron sources: Panels (a)-(c) show the generation and subsequent characterization of attosecond electron pulse trains [from~\citer{Ropers2017attosecondElectronPulseTrains}]: (a) Scheme of the experimental setup for attosecond electron pulse train generation and characterization, with graphite nanofoils on the upper and lower grids, respectively.
Two phase-locked laser pulses of the same frequency, together with the electron beam, are used to prepare and probe the attosecond electron pulse trains. (b) Measured electron kinetic energy shift (in units of the photon energy) depending on the phase delay between the preparation and probe laser pulses. (c) Reconstructed electron pulse duration. (d)-(g) Sub-cycle/sub-wavelength-resolved space-time reconstruction of a THz electromagnetic field inside a microresonator [from~\citer{Baum2016electronMicroscopyEMwaveforms}]. (d) Scheme of the experimental setup: the same laser drives electron pulse generation, compression and sample excitation, leading to exceptional timing stability. (e) Reconstructed electric fields inside the microresonator at a fixed delay time. (f) Visualization of the measured time-dependent electron signal obtained from the image sequence from which the electromagnetic fields are reconstructed. (g) Time evolution of the electric field components (left) and polarizations (right) at positions 1 and 2 inside the microresonator indicated in panel (e).} \end{figure*} The studies of strong-field electron emission from nanotips, which comprise an important pillar of attosecond nanophysics, were initially motivated by the search for new ultrafast electron sources~\cite{Hommelhoff2006_FieldEmissionNanotip}, and nanotip-based electron sources are now being used in several applications, including transmission electron microscopy (TEM), ultrafast low-energy electron diffraction (ULEED) and point-projection microscopy~\cite{Ernstdorfer2014pointprojection_nanocurrents, Lienau2018observingSpaceChargeSeparation, Ropers2018ElectronMicroscopy_Review}. In addition, other schemes that use conventional photocathodes have also demonstrated ultrafast electron pulses~\cite{Baum2016allOptical}.
Compared to laser pulses, ultrafast electron pulses not only provide high temporal resolution but also possess high momentum at relatively low energies, which, in principle, allows for {\aa}ngstrom spatial resolution. However, two disadvantages result from the use of electrons: first, Coulomb repulsion effects can be significant when short electron pulses are focused to small spots; second, vacuum propagation is dispersive for electrons, which leads to an increase of the electron pulse duration as electrons with different kinetic energies travel at different speeds. Therefore, the chirped electron pulses need to be simultaneously focused in space and time, which requires special and stringent compression schemes. In practice, in order to achieve a short electron pulse duration at the sample, two different approaches have been adopted. The first approach is to minimize the distance between the source~-- in this case, nanotips~-- and the sample, so that there is a minimal increase in the electron pulse duration from the generation target to the sample; source-to-sample distances as short as 2.7 $\mu$m have been reported~\cite{Lienau2018observingSpaceChargeSeparation}. In this scheme, it is important to avoid sample excitation caused by the generating laser pulse, and this can be done by nanofocusing of propagating plasmons at the tip apex, as discussed above, with the laser driver coupled to the nanotip via a grating at the shank. The other approach is to employ an additional electron pulse compression stage. The current state of the art is to use nanofoils and THz laser pulses with the electrons incident at an oblique angle, such that the electrons, depending on the timing between them and a given half-cycle of the laser field, are either accelerated or decelerated~\cite{Baum2018nanofoils}. This can be used to compress or stretch the electron pulses.
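The dispersive broadening that motivates these two approaches can be estimated from non-relativistic kinematics: for a flight distance $L$, mean kinetic energy $E$ and energy spread $\Delta E$, the flight time $t = L/v$ spreads by $\Delta t \approx L\,\Delta E/(2Ev)$. A minimal sketch, where the 1\,keV energy and 1\,eV spread are illustrative values (only the 2.7\,$\mu$m gap is taken from the text):

```python
import math

M_E = 9.109e-31   # electron mass (kg)
EV  = 1.602e-19   # J per eV

def broadening(L_m, E_eV, dE_eV):
    """Dispersive stretching (s) of an electron pulse with energy spread dE_eV
    after free flight over L_m, in the non-relativistic limit."""
    E = E_eV * EV
    v = math.sqrt(2.0 * E / M_E)       # electron speed at the mean energy
    # t = L / v  =>  |dt| = L * dE / (2 * E * v)
    return L_m * dE_eV * EV / (2.0 * E * v)

print(broadening(2.7e-6, 1000, 1.0))   # ~7e-17 s: sub-fs over a 2.7 um gap
print(broadening(0.3,    1000, 1.0))   # ~8e-12 s: several ps over a 30 cm column
```

The estimate makes the trade-off explicit: over a micron-scale tip-sample gap the stretching stays in the attosecond range, while over a typical electron-optical column it reaches picoseconds, which is why either very short distances or active recompression is required.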
If compression schemes are used, a method is needed to measure the electron pulse length and determine the optimal compression parameters. An arrangement similar to that used for nanofoil compression can be used to either change the electron kinetic energy~\cite{Baum2014laserStreaking} or the propagation direction~\cite{Baum2016allOptical} depending on the delay between the electron and laser pulses, which allows attosecond temporal resolution. Indeed, attosecond electron pulse trains have been generated and characterized with this method~\cite{Ropers2017attosecondElectronPulseTrains, Baum2018AttoPulseTrains}, as shown in Fig.~\ref{Fig:ElectronMicroscopy}(a)-(c). Attosecond electron pulse trains are generated at a nanofoil by modulating the electrons' phase via a laser pulse, which leads to sidebands in the kinetic energy spaced by multiples of the photon energy. At a second nanofoil, the electrons interact with a delayed replica of the first laser pulse and, depending on the relative phase of the two laser pulses, the sidebands are either enhanced or reduced (see panel (b)). From this, the quantum state of the electron pulse, and especially its temporal duration, can be reconstructed, as shown in panel (c). In a proof-of-principle experiment, attosecond electron pulse trains have been applied to measure the relative delay of the deflectograms of Bragg diffraction spots of silicon, finding a delay of around 10\,as for certain spots~\cite{Baum2018AttoPulseTrains}. Numerical simulations have shown that, with single isolated attosecond pulses in similar experiments, the whole charge dynamics of e.g.~graphene could potentially be reconstructed~\cite{BaumYakovlev2015grapheneDiffraction}. Electronic dynamics in nanostructured samples can be probed by using an ultrafast laser-pump/electron-pulse-probe experimental scheme, where the photoinduced electron dynamics is recorded with extremely high spatial resolution.
It has been shown that using a THz pulse derived from the same laser for electron generation and sample excitation generally leads to extremely low timing jitter~\cite{Baum2016allOptical}. In point-projection microscopy experiments, spatial and temporal resolutions down to tens of nanometers and 25\,fs could be demonstrated~\cite{Lienau2018observingSpaceChargeSeparation}, which allowed the measurement of photo-induced ultrafast electric currents in semiconductor nanowires~\cite{Ernstdorfer2014pointprojection_nanocurrents} or the space-charge separation on a plasmonic nanoantenna after photoemission~\cite{Lienau2018observingSpaceChargeSeparation}. Here, since no electron pulse compression is employed, the temporal resolution is limited by the dispersion of the electron pulse from the source to the sample. By using electron diffraction, the atomic lattice dynamics can be probed. Using a modified transmission electron microscope, without additional pulse compression, structural phase transitions associated with charge density waves~\cite{Ropers2018ICP_NCP} or the dynamics of acoustic waves in graphite films~\cite{Ropers2018strain} could be measured with a temporal resolution of only about 1\,ps. With electron pulse compression schemes that generate attosecond pulse trains, the measurement of Bragg diffraction spots with a resolution on the order of ten attoseconds is possible~\cite{Baum2018AttoPulseTrains}. Since electrons are sensitive to electromagnetic fields, ultrafast electron pulses have been used to reconstruct the local plasmonic field induced by THz light on a single microresonator with sub-$\SI{100}{fs}$ and few-micron resolution~\cite{Baum2016electronMicroscopyEMwaveforms}. The setup for this experiment is shown in Fig.~\ref{Fig:ElectronMicroscopy}(d), consisting of the above-mentioned elements for electron pulse generation, compression and the intrinsically synchronized pump-probe scheme.
The reconstructed field distribution inside the microresonator at a certain time delay is shown in panel~(e), together with the measured transmitted electron signal after the sample in panel~(f), as well as the dynamics of the electric field components at two selected points inside the microresonator in panel~(g). So far, electron pulse durations in electron microscope setups are on the order of a few tens of femtoseconds. The attosecond pulse trains, as shown above, can be used to probe periodic dynamics. However, in order to probe more complex electronic dynamics or to fully reconstruct electromagnetic fields with frequencies up to the PHz scale, isolated attosecond electron pulses are needed. To achieve this, the use of compression schemes seems unavoidable. In this case, stronger THz fields would be beneficial, as well as a cascade of different compression stages with different laser frequencies. We believe that electron-pulse-based techniques, thanks to their excellent spatial resolution and the well-established methodology in static modes of operation, offer substantial potential for both fundamental research and attosecond nanoscopy applications in material and surface analysis. Since they can measure both electric near-fields and charge dynamics, they hold the promise to also become an exceptional analytical tool in the development of light-driven petahertz electronics. \section*{Conclusions} Thanks to the tremendous advances that have been made in the field of attosecond nanophysics over the past few years, the field is now in a position to pursue several promising technological applications. In this perspective, we have discussed metrology for petahertz electronics applications, including HHG and ultrafast currents in nanostructured solids, as well as the development of attosecond metrology for nanoscopy, including Atto-PEEM and ultrafast electron pulses.
We have pointed out the major challenges of each topic, and their avenues for further development. \acknowledgements J. S., Z. W., and M. F. K. acknowledge support by the Max Planck Society, the DFG via the cluster of excellence ``Munich Centre for Advanced Photonics'' and from the PETACom project financed by FET Open H2020 program of the European Union. M. F. C. acknowledges the project Advanced research using high intensity laser produced photons and particles (CZ.02.1.01/0.0/0.0/16\_019/0000789) from European Regional Development Fund (ADONIS). E.P. and M.L. acknowledge the Spanish Ministry MINECO (National Plan 15 Grant: FISICATEAMO No.\ FIS2016-79508-P, SEVERO OCHOA No.\ SEV-2015-0522, FPI), European Social Fund, Fundaci\'o Cellex, Generalitat de Catalunya (AGAUR Grant No.\ 2017 SGR 1341 and CERCA/Program), ERC AdG OSYRIS and NOQIA, and the National Science Centre, Poland-Symfonia Grant No.\ 2016/20/W/ST4/00314. \bibliographystyle{arthur}
\section{Introduction}
The plasticity of body-centered cubic (BCC) metals has attracted and will continue to attract a lot of attention for both technological and scientific reasons. Technologically, BCC metals are ubiquitous among structural materials due to their high yield strength and toughness \cite{Ashby2018}. For example, mild steels have a BCC matrix of $\alpha$-Fe hardened by various types of alloying elements, precipitates and second-phase particles.
Another example is tungsten, which is foreseen to be used in fusion reactors due to its high density and elevated melting point \cite{Rieth2013}. Plasticity in BCC metals is mainly due to the glide of screw dislocations with a $1/2\,\langle 111 \rangle$ Burgers vector \cite{Hirsch1960}. These dislocations have proved to exhibit unusual properties, which translate directly into surprising features of BCC plasticity at the macroscopic scale \cite{Christian83}. Based on atomistic simulations \cite{Vitek1970} and \textit{in situ}{} transmission electron microscopy \cite{Louchet1979,Caillard2010}, we know that screw dislocations feel a strong lattice resistance and glide through a thermally-activated process, which involves the nucleation and propagation of kink-pairs along the dislocation line. Different glide planes, $\{ 110\}$, $\{ 121\}$ \cite{Argon1966,Spitzig1970}, even $\{ 123\}$ \cite{Caillard2018}, have been observed. At low temperature in most metals, $\{ 110\}$ planes dominate, but it remains unclear to date whether glide in different planes results from different glide mechanisms or from a combination of elementary glide events in different $\{ 110\}$ planes. Very surprising glide sequences in different $\{ 121\}$ planes have also been recently observed in tungsten by \textit{in situ}{} TEM \cite{Caillard2018}. The thermal activation of the screw dislocation mobility implies a rapid increase of the yield stress at low temperatures \cite{Caillard2003}. However, a still much debated observation is that the best prediction of the 0\,K limit of the yield stress, the so-called Peierls stress, based on atomistic models is two to three times larger than experimental extrapolations \cite{Groger2007,Proville2012,Freitas2018}.
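The rapid low-temperature rise of the yield stress follows from the kink-pair mechanism: glide becomes possible when the stress-dependent activation enthalpy $\Delta H(\tau)$ can be supplied thermally at the imposed strain rate. A minimal Kocks-type sketch of this argument is given below; the Peierls stress, activation enthalpy, and exponents are illustrative placeholders, not values fitted to tungsten:

```python
K_B = 8.617e-5   # Boltzmann constant (eV/K)

# Illustrative parameters only (not fitted to any BCC metal):
TAU_P   = 2.0e9      # Peierls stress (Pa), the 0 K limit of the yield stress
DH0     = 1.7        # kink-pair formation enthalpy at zero stress (eV)
P, Q    = 0.8, 1.4   # phenomenological exponents of the activation law
LN_RATE = 25.0       # ln(rate prefactor / strain rate), quasi-static tests

def yield_stress(T):
    """Kocks-type yield stress tau(T): invert the activation enthalpy
    dH(tau) = DH0 * (1 - (tau/TAU_P)**P)**Q at dH = k_B * T * LN_RATE."""
    x = K_B * T * LN_RATE / DH0
    if x >= 1.0:
        return 0.0   # athermal regime: lattice friction fully overcome
    return TAU_P * (1.0 - x**(1.0 / Q))**(1.0 / P)

for T in (0, 100, 300, 600):
    print(T, "K :", f"{yield_stress(T)/1e6:.0f} MPa")
```

With any parameters of this form, the criterion reproduces the qualitative experimental trend discussed above: the yield stress equals the Peierls stress at 0\,K and drops steeply as the temperature increases, vanishing at the athermal limit.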
Another surprising feature of the screw dislocations is that they do not obey the classical Schmid law \cite{Schmid1924}, which states that dislocation motion is driven only by the resolved shear stress, \textit{i.e.}{} the part of the stress tensor which produces a Peach-Koehler force in the glide plane of the dislocation. Screw dislocations do not follow the Schmid law for two reasons \cite{Duesbery1998}. First, due to the asymmetry of the BCC lattice, positive and negative shear stresses on planes other than $\{110\}$ are not equivalent, resulting in the so-called twinning/antitwinning (T/AT) asymmetry. Second, the screw dislocations are affected not only by the resolved shear stress but also by components of the stress tensor which do not produce a Peach-Koehler force, inducing so-called non-glide effects. The lattice resistance as well as non-Schmid effects result from the structure of the BCC lattice and the core structure of the $1/2\,\langle 111 \rangle$ screw dislocation. Both effects can only be accounted for through atomistic simulations. Empirical potentials have been used since the 1970s \cite{Vitek1970} and have led to valuable yet sometimes inconsistent results between potentials or even with experimental evidence. In particular, most interatomic potentials predict a three-fold degenerate core, which glides in $\{ 121\}$ planes, while, as mentioned above, slip occurs experimentally at low temperature mostly in $\{ 110\}$ planes. However, with the increase of computing power, it has become possible at the turn of the 21st century \cite{Ismail2000} to model screw dislocations using \textit{ab initio}{} density functional theory (DFT) calculations. These calculations have led to a wealth of new results, showing that in all pure BCC transition metals, the dislocation core is neither degenerate nor asymmetrically extended, but rather non-degenerate and compact \cite{Ismail2000,Woodward2002,Frederiksen2003,Ventelon2007}.
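For reference, the Schmid law itself reduces to simple geometry: under uniaxial loading, the resolved shear stress on a slip system is $\tau = m\sigma$, with the Schmid factor $m = \cos\varphi\,\cos\lambda$ built from the angles between the loading axis, the glide-plane normal, and the slip direction. The departures discussed above are precisely deviations from this proportionality. A minimal sketch of the geometric part (no T/AT or non-glide terms):

```python
import numpy as np

def schmid_factor(load_axis, plane_normal, slip_dir):
    """Schmid factor m = cos(phi) * cos(lambda) for uniaxial loading;
    the resolved shear stress on the slip system is tau = m * sigma."""
    l = np.asarray(load_axis, float);    l /= np.linalg.norm(l)
    n = np.asarray(plane_normal, float); n /= np.linalg.norm(n)
    d = np.asarray(slip_dir, float);     d /= np.linalg.norm(d)
    assert abs(np.dot(n, d)) < 1e-10, "slip direction must lie in the plane"
    return float(np.dot(l, n) * np.dot(l, d))

# 1/2<111> slip on a {110} plane under [001] tension:
m = schmid_factor([0, 0, 1], [1, 0, 1], [-1, 1, 1])
print(m)   # 1/sqrt(6) ~ 0.41, the textbook maximum for this geometry
```

A yield criterion obeying the Schmid law would rank slip systems by this factor alone; the non-Schmid criterion developed later in the paper adds stress components that this purely geometric picture ignores.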
DFT has also made it possible to study the T/AT asymmetry \cite{Dezerald2016} and non-glide effects \cite{Kraych2019} on perfectly straight screw dislocations, as will be detailed below. At finite temperatures, dislocation glide involves kink pairs, which extend over several tens of Burgers vectors along the dislocations, requiring simulation cells too large for \textit{ab initio}~calculations. In this case, classical molecular dynamics simulations with carefully tested interatomic potentials remain highly valuable \cite{Domain2005,Chaussidon2006,Gilbert2011,Po2016}. Another approach is to introduce a coarse-grained model adjusted on DFT data, for instance a line tension model \cite{Itakura2012,Proville2013,Dezerald2015,He2019}, as described below. To study collective effects, one then needs to go one scale up and use for instance dislocation dynamics \cite{Chaussidon2008,Po2016}. However, in BCC metals at low temperature, collective effects are not dominant, in contrast with face-centered cubic (FCC) metals, and the yield strength can be directly predicted from the behavior of an isolated screw dislocation \cite{Groger08b}. In the past 15 years, important results on the mobility of screw dislocations in BCC metals have been obtained at different scales, but in an uncoordinated way, using data from different sources and obtained with different codes. In the present paper, we would like to show, using the specific example of tungsten, how data obtained from \textit{ab initio}{} calculations can be used in a consistent way to develop a yield criterion including non-Schmid effects, able to predict the plastic strength as a function of both the temperature and the direction and sign of the deformation axis. \section{Dislocations and ab initio calculations} \label{sec:methods} \textit{Ab initio}{} calculations of dislocations rely on the density functional theory (DFT) \cite{Hohenberg1964,Kohn1965}.
Such an approach is computationally demanding and can handle only small systems, typically a few hundred atoms, at most a few thousand on supercomputers. As a consequence, \textit{ab initio}{} calculations have until now mainly been restricted to modeling infinite straight dislocations, in order to minimize the length of the simulation cell in the direction of the dislocation line. In the other directions, it is necessary to carefully handle the elastic strain field created by the dislocation, as this elastic field leads to a displacement which varies as the logarithm of the distance to the dislocation line and therefore does not vanish at long range. In addition, it is not possible, for topological reasons, to model a single dislocation in a supercell with full periodic boundary conditions, since the displacement discontinuity created by the dislocation needs to be closed by another defect. Several approaches specific to dislocations have been developed. We give below only a brief overview of these approaches, which can be divided into two categories, and refer to recent review articles \cite{Woodward2005,Rodney2017,Clouet2018h} for technical details. \begin{figure}[btp] \begin{center} \includegraphics[width=0.9\linewidth]{figure1} \end{center} \caption{Typical simulation cell used to model $1/2\,\hkl<111>$ screw dislocations in BCC metals. The cell contains a dislocation dipole, which forms a quadrupolar arrangement of dislocations of opposite Burgers vectors through the periodic boundary conditions applied in all directions. The dipole is defined by its Burgers vector $\vec{b}$, the dipole vector $\vec{d}$ joining both dislocation lines, and the cut vector $\vec{A}$, with the corresponding discontinuity surface indicated by a double black line. $\vec{U}_1$ and $\vec{U}_2$ are the periodicity vectors of the simulation cell perpendicular to the dislocation line. The atomic structure of the dislocations is shown with differential displacement and Nye tensor maps.
In this projection perpendicular to the dislocation line, atoms are shown as circles with a color depending on their \hkl(111) plane in the original perfect lattice. Arrows between atomic columns are proportional to the differential displacement created by the dislocation in the direction of the Burgers vector. The color map shows the dislocation density $\rho_b$ normalized by the lattice parameter $a$. (Figure adapted from \cite{Clouet2018h}) } \label{fig:PBC_sketch} \end{figure} \paragraph{Cluster approach:} one can embed a single dislocation in an infinite cylinder with its axis along the line defect. Atoms in the outer surface of the cylinder are either kept fixed at their positions predicted by elasticity theory or relaxed according to the harmonic response of the crystal \cite{Sinclair1978,Woodward2002,Woodward2005}. Only the atoms inside the cylinder, \textit{i.e.}{} the atoms close to the dislocation cores, are relaxed according to the Hellmann-Feynman forces calculated \textit{ab initio}{}. With such boundary conditions, one truly models a single dislocation in an infinite crystal. The main drawback of the approach is the difficulty of isolating the dislocation contribution to the calculated excess energy: \textit{ab initio}{} calculations lack a rigorous local projection of the energy which would allow one to separate the excess part coming from the dislocation from the one caused by the boundary, \textit{i.e.}{} the external surface of the cylinder. \paragraph{Dipole approach:} to keep full periodic boundary conditions, and thus avoid the need for an external surface, one has to introduce a dislocation dipole in the simulation cell, with two dislocations of opposite Burgers vectors. One then models an infinite periodic array of dislocations. The excess energy is then due to the core energy of the dislocations as well as their mutual elastic interactions, which involve the dislocations inside the supercell as well as their periodic images.
The elastic energy can be evaluated quantitatively using anisotropic elasticity \cite{Clouet2018h}, yielding, after subtraction, the energy of a single dislocation, \textit{i.e.}{} its core energy and its variation with the dislocation position in the atomic lattice. Among the different periodic arrangements proposed to model dislocation dipoles, quadrupolar arrangements are to be preferred, as they minimize the elastic interaction between the dislocations of the dipole and with their periodic images. This arrangement leads to well-converged dislocation properties for small simulation cells compatible with \textit{ab initio}{} calculations, as long as the dislocation core is compact \cite{Clouet2009}. In BCC metals for instance, it is sufficient to use a supercell containing 135 atoms per $b$, where $b$ is the norm of the Burgers vector, to model $1/2\,\hkl<111>$ screw dislocations. Such a supercell containing a dislocation dipole is shown in Fig. \ref{fig:PBC_sketch}. Another advantage of full periodic boundary conditions is that one can relate the stress variations observed in the supercell to the relative positions of the dislocations. Denoting by $\vec{d}$ the vector joining the center of the $+\vec{b}$ dislocation to the $-\vec{b}$ dislocation, one defines the cut vector of the dipole as $\vec{A}=\vec{l}\wedge\vec{d}$, where $\vec{l}$ is the dislocation line vector (see Fig. \ref{fig:PBC_sketch} for an example).
According to linear elasticity theory, the energy variation caused by a homogeneous strain $\barT{\varepsilon}$ is\footnote{Here, and in the following equations, we use the Einstein summation convention on repeated indices.} \begin{equation} \Delta E(\barT{\varepsilon}) = \frac{1}{2} S \, C_{ijkl} \, \varepsilon_{ij} \, \varepsilon_{kl} + C_{ijkl} \, b_i \, A_j \, \varepsilon_{kl}, \label{eq:dE_epsi} \end{equation} where $C_{ijkl}$ are the elastic constants of the perfect crystal and $S$ is the area of the supercell perpendicular to the dislocation line vector $\vec{l}$. This energy is defined per unit length of the supercell in the $\vec{l}$ direction. The stress is then simply obtained by computing the derivative of the energy, leading to \begin{equation} \sigma_{ij}(\barT{\varepsilon}) = \frac{1}{S} \frac{\partial \Delta E}{\partial \varepsilon_{ij}} = C_{ijkl} \left( \varepsilon_{kl} - \varepsilon_{kl}^0 \right), \label{eq:stress_epsi} \end{equation} with the plastic strain defined as \begin{equation} \varepsilon_{kl}^0 = - \frac{b_k \, A_l + b_l \, A_k}{2S}. \label{eq:plastic_strain} \end{equation} According to Eq. \ref{eq:stress_epsi}, a homogeneous strain $\barT{\varepsilon}$, equal to the plastic strain $\barT{\varepsilon}^0$ introduced in the supercell when creating the dislocation dipole, needs to be applied to the simulation cell to maintain a zero stress \cite{Cai2003,Daw2006}. A different homogeneous strain can be applied to reach another target stress. Eqs. \ref{eq:stress_epsi} and \ref{eq:plastic_strain} also show that if the relative positions of both dislocations vary, \textit{i.e.}{} if the cut vector $\vec{A}$ varies, a stress will build up in the supercell. The stress variation obtained in \textit{ab initio}{} calculations through generalized Hellmann-Feynman forces can thus be used to compute the relative positions of the dislocations.
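Eqs. \ref{eq:stress_epsi} and \ref{eq:plastic_strain} are straightforward to verify numerically. The following sketch builds an isotropic elastic tensor and an arbitrary dipole geometry (the elastic constants, cell area, and vectors are illustrative placeholders, not the tungsten DFT values) and checks that applying the plastic strain $\barT{\varepsilon}^0$ cancels the homogeneous stress in the supercell:

```python
import numpy as np

# Illustrative (not tungsten-specific) isotropic elastic constants, in GPa
lam, mu = 100.0, 160.0
delta = np.eye(3)
# C_ijkl = lam * d_ij d_kl + mu * (d_ik d_jl + d_il d_jk)
C = (lam * np.einsum('ij,kl->ijkl', delta, delta)
     + mu * (np.einsum('ik,jl->ijkl', delta, delta)
             + np.einsum('il,jk->ijkl', delta, delta)))

# Dipole geometry (arbitrary units): screw dislocation, line and Burgers vector along z
l_vec = np.array([0.0, 0.0, 1.0])
b_vec = np.array([0.0, 0.0, 2.74])   # Burgers vector parallel to the line
d_vec = np.array([10.0, 5.0, 0.0])   # dipole vector joining the two dislocations
A_vec = np.cross(l_vec, d_vec)       # cut vector A = l ^ d
S = 400.0                            # cell area perpendicular to the line

# Plastic strain introduced when creating the dipole, Eq. (plastic_strain)
eps0 = -(np.outer(b_vec, A_vec) + np.outer(A_vec, b_vec)) / (2.0 * S)

def stress(eps):
    """Homogeneous stress in the supercell, Eq. (stress_epsi)."""
    return np.einsum('ijkl,kl->ij', C, eps - eps0)

# Applying a homogeneous strain equal to eps0 maintains a zero stress
assert np.allclose(stress(eps0), 0.0)
```

Only the shear components coupling the line direction to the cut vector are non-zero in $\barT{\varepsilon}^0$, as expected for a screw dipole.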
In this way, one can extract dislocation trajectories from \textit{ab initio}{} energy barrier calculations \cite{Chaari2014,Dezerald2016}, as done in section \ref{sec:PeierlsPotential}. All results described in this article for the $1/2\,\hkl<111>$ screw dislocation in tungsten were obtained with the quadrupolar setup of Fig. \ref{fig:PBC_sketch} in a periodic supercell containing 135 atoms. \textit{Ab initio}{} calculations were performed with either the \textsc{Pwscf}{} or \textsc{Vasp}{} codes, as reported in the original works \cite{Dezerald2014,Dezerald2015,Dezerald2016,Kraych2019} along with the corresponding DFT parameters. Unpublished results appearing below were obtained with the calculation parameters given in Appendix \ref{app:abinitio}. \section{Dislocation core structures and energy landscape} \label{sec:landscape} As first realized by Edagawa \emph{et al.} \cite{Edagawa97a,Edagawa97b}, the $1/2\,\hkl<111>$ screw dislocation can in principle take any position in the $\{111\}$ plane perpendicular to its line. Its core energy depends on this position, yielding a two-dimensional (2D) energy landscape, which shows low-energy regions near the stable positions, separated by the energy barriers that dominate the low-temperature glide of the dislocation, as well as higher-energy regions. \begin{figure}[tbh] \begin{center} \includegraphics[width=0.9\linewidth]{figure2} \end{center} \caption{Core structures of a screw dislocation in BCC tungsten \cite{Dezerald2014,Dezerald2016}: (a) easy, (b) hard, (c) split cores, and (d) saddle configuration. The centers of the dislocation are indicated by colored symbols and are sketched in the right panel. Differential displacements and Nye tensors are shown as in Fig. \ref{fig:PBC_sketch}.
} \label{fig:core} \end{figure} In order to compute the 2D energy landscape from DFT, we start by considering the high-symmetry positions where the dislocation is located at the center of a triangle of $\langle 111 \rangle$ atomic columns. As illustrated in Fig. \ref{fig:core}, atoms in these columns have different heights and form clockwise helices in upward triangles and anticlockwise helices in downward triangles. Inserting a screw dislocation at the center of such a triangle adds a helical displacement field, which either inverts the chirality of the helix or brings all three columns to the same height. The first case, shown in Fig. \ref{fig:core}(a), results in the lowest-energy core configuration of the screw dislocation, called an easy core. The second case, shown in Fig. \ref{fig:core}(b), is by symmetry an energy maximum, called a hard core. We see from the differential displacements and Nye tensor in Fig. \ref{fig:core}(a) that the easy core is compact and symmetrical, as observed in other pure BCC metals \cite{Ismail2000,Woodward2002,Frederiksen2003,Weinberger2013,Dezerald2014} and in contrast with the predictions of many empirical interatomic potentials. A third high-symmetry position occurs when the dislocation core is located in the immediate vicinity of an atomic column. This configuration, shown in Fig. \ref{fig:core}(c), is called a split core \cite{Takeuchi1979}. It does not preserve the 3-fold symmetry and thus has three variants, depending on the region from which the dislocation approaches the atomic column. \begin{figure}[tbh] \begin{center} \includegraphics[width=0.9\linewidth]{figure3} \end{center} \caption{Energy variation of the dislocation along different paths in BCC tungsten \cite{Dezerald2014}: (a) Peierls barrier, \textit{i.e.}{} minimum energy path between easy cores; (b) Straight path between split and hard cores. The saddle configuration in (a) is also indicated by a diamond in (b).
The insets show the path directions with the corresponding reaction coordinates, $\zeta_x$ and $\zeta_y$.} \label{fig:PeierlsCuts} \end{figure} At rest, the screw dislocation is in its low-energy, stable, easy core configuration and glides through a thermally-activated jump to a nearby easy core. As shown in Fig. \ref{fig:PeierlsCuts}(a), we computed the corresponding energy barrier \cite{Dezerald2014}, the so-called Peierls barrier, using the nudged elastic band (NEB) method \cite{Henkelman2000}. In the energy barrier calculation, both dislocations composing the dipole are displaced in the same direction to keep their distance constant. As a consequence, the elastic energy along the migration path is almost constant: only a small variation arises from a deviation of the dislocation trajectory from a straight path, this deviation being in opposite directions for the $+\vec{b}$ and $-\vec{b}$ dislocations. The very small elastic correction is illustrated in Fig. \ref{fig:PeierlsCuts}(a) by the difference between the open symbols obtained directly from the DFT calculation and the full symbols calculated after subtracting the elastic energy. We see in Fig. \ref{fig:PeierlsCuts}(a) that the energy barrier has a single hump and that the dislocation passes through a saddle configuration midway between easy cores. The corresponding atomic configuration is shown in Fig. \ref{fig:core}(d). In order to obtain 2D information, we computed in the same cell the energy path between the split and hard cores \cite{Dezerald2014}, \textit{i.e.}{} along the ridge which separates the basins of attraction of two successive easy cores. The result is shown in Fig. \ref{fig:PeierlsCuts}(b), where we see a slightly larger, but still rather small, elastic correction. For these calculations, we constrained the difference in altitude between the two atomic columns on either side of the ridge (the white and black columns in the inset of Fig.
\ref{fig:PeierlsCuts}(b)) to fix the core position and prevent the dislocation from relaxing to an easy core configuration during minimization. We find, as expected, that the hard core is an energy maximum. Less expected, at least from empirical interatomic potential calculations, is that the split core is also an energy maximum, with an energy even higher than the hard core. In between lies a minimum, which corresponds to the saddle configuration between easy cores. \begin{figure}[tb] \begin{center} \includegraphics[width=0.50\linewidth]{figure4a.png} \includegraphics[width=0.48\linewidth]{figure4b} \end{center} \caption{2D Peierls potential of the $1/2\,\hkl<111>$ screw dislocation in BCC tungsten \cite{Dezerald2014}. In the projection on the right, the dislocation trajectory corresponding to the minimum energy path between two easy cores (Fig. \ref{fig:PeierlsCuts}a) is shown with a green solid line, while the dashed green line links the split and the hard cores (energy variation shown on Fig. \ref{fig:PeierlsCuts}b). The green diamond corresponds to the saddle configuration. } \label{fig:Peierls2D} \end{figure} We constructed a continuous 2D energy landscape, $V_{\rm P}^{\rm 2D}(x,y)$, shown in Fig. \ref{fig:Peierls2D}, from the energy along both the Peierls barrier and the split-to-hard line, using a Fourier decomposition satisfying the three-fold symmetry of the lattice \cite{Dezerald2014}. For this, the position of the dislocation along the paths was determined by fitting the relaxed atomic positions given by the \textit{ab initio}{} calculations to the positions predicted by anisotropic elasticity theory, using the dislocation position as the fitting parameter. Similar dislocation trajectories are obtained using the stress variation along the paths mentioned in section \ref{sec:methods}. We see in Fig. \ref{fig:Peierls2D}(b) that the path between easy cores is not straight, but deviates towards the split core.
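The structure of such a Fourier decomposition can be illustrated with a minimal sketch: keeping only a first star of three reciprocal vectors at $120^{\circ}$ from each other already enforces the three-fold symmetry of the landscape (the wave number and amplitude below are arbitrary, not the coefficients fitted to the DFT data):

```python
import numpy as np

# Peierls-valley spacing (arbitrary units); the easy cores sit on a triangular lattice
lam_P = 1.0
# Three wave vectors at 120 degrees from each other: the lowest Fourier star
# compatible with the three-fold symmetry of the {111} plane
k = 2.0 * np.pi / (1.5 * lam_P)          # illustrative wave number
angles = np.deg2rad([90.0, 210.0, 330.0])
K = np.array([[k * np.cos(a), k * np.sin(a)] for a in angles])

def V2D(x, y, V0=0.1):
    """Illustrative first-harmonic 2D Peierls potential, three-fold symmetric."""
    r = np.array([x, y])
    return V0 * np.sum(np.cos(K @ r))

# The potential is invariant under a 120-degree rotation about the origin,
# because the set of wave vectors is mapped onto itself
theta = 2.0 * np.pi / 3.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
p = np.array([0.21, 0.07])
q = R @ p
assert np.isclose(V2D(*p), V2D(*q))
```

The actual fit of \cite{Dezerald2014} uses more harmonics, constrained by the computed energies along both the Peierls barrier and the split-to-hard line.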
We will link this deviation to the T/AT asymmetry in section \ref{sec:trajectory}. For now, we note that the 2D landscape highlights that the $\{111\}$ plane is made of energy minima corresponding to the easy core positions, primary maxima at the split cores, and secondary maxima at the hard cores. Similar 2D energy landscapes were obtained in other BCC transition metals, with the relative heights of the maxima depending on the element \cite{Dezerald2014}. Only Fe differs from the general behavior, with an almost constant split-to-hard energy profile near the hard core position. Instead of a local maximum, the hard core in Fe is a saddle point which links three different ground states. \section{From the energy landscape to the Peierls stress} \label{sec:PeierlsPotential} \subsection{Peierls enthalpy barrier and Peierls stress} \label{sec:PeierlsStress} A central quantity to characterize the mobility of a dislocation is its Peierls stress $\tau_{\rm P}$, \textit{i.e.}{} the critical resolved shear stress that must be applied to induce motion of the dislocation at 0\,K. The Peierls stress can be determined using quasi-static calculations, where an increasing shear stress is applied to the simulation cell in increments, followed by energy minimizations, until the dislocation starts to move. This method however requires a deep relaxation of the interatomic forces, which is computationally expensive with \textit{ab initio}{} calculations. Another method to determine the Peierls stress is to compute the Peierls barrier discussed above under an applied stress. The Peierls stress then corresponds to the applied stress at which the maximum of the Peierls barrier disappears. This method is more accurate than quasi-static calculations because it relies on the convergence of the energy, which is more easily achieved than that of the forces in \textit{ab initio}{} calculations. We will use this method in the following.
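The extrapolation underlying this method can be sketched numerically. The enthalpy maxima below are synthetic values generated from an assumed Kocks-type power law, not the DFT data; they only illustrate how the Peierls stress is recovered as the stress at which the barrier vanishes:

```python
import numpy as np

# Hypothetical enthalpy maxima dH(tau) (eV/b) at a few applied stresses (MPa),
# generated from an assumed Kocks-type law dH = dH0 * (1 - tau/tau_P)**p.
# These are NOT the DFT values of the paper, only an illustration of the procedure.
tau_P_true, dH0, p = 2000.0, 0.09, 1.5
tau = np.array([0.0, 400.0, 800.0, 1200.0, 1600.0])
dH = dH0 * (1.0 - tau / tau_P_true) ** p

# Linearize: dH**(1/p) is linear in tau and vanishes at tau = tau_P
slope, intercept = np.polyfit(tau, dH ** (1.0 / p), 1)
tau_P_fit = -intercept / slope
print(f"extrapolated Peierls stress: {tau_P_fit:.0f} MPa")
assert abs(tau_P_fit - tau_P_true) < 1.0
```

In practice the exponent of the power law is itself a fitting parameter, and the reliability of the extrapolation rests on computing barriers at stresses close to the expected $\tau_{\rm P}$.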
\begin{figure}[!bt] \begin{center} \includegraphics[clip, width=0.9\linewidth]{figure5} \end{center} \caption{ (a) Peierls enthalpy barriers calculated by DFT in BCC tungsten for different applied resolved shear stresses $\tau$ as a function of the dislocation position, $x$, between Peierls valleys ($\lambda_P$ is the distance between Peierls valleys). The inset shows the corrected barriers $\Delta H_{\rm P}^{\rm 1D}(x)+W(x)$, where $W(x)=\tau\,b\,x$ is the work of the applied stress, for the different stresses (symbols) and the interpolation obtained without stress (line). (b) Maximum Peierls enthalpy as a function of applied resolved shear stress, with a power-law fit. } \label{fig:PeierlsEnthalpy} \end{figure} Peierls barriers under stress were computed by applying a strain tensor to the simulation cell to produce the targeted shear stress through Eq. \ref{eq:stress_epsi}, followed by a NEB calculation~\cite{Dezerald2016}. Examples for a shear stress resolved in the $(\bar{1}01)$ glide plane and along the $[111]$ Burgers vector of a screw dislocation in tungsten are presented in Fig.~\ref{fig:PeierlsEnthalpy}~(a). We note that, to reduce the computational cost, we performed the NEB calculations only on the first half of the paths. We see that the maximum of the energy barrier decreases under increasing shear stress, as expected. Although the calculations are performed at constant strain, the plastic strain generated along the path is small enough for the stress to be considered constant to first order, so that the energies obtained here correspond to enthalpies under the targeted stress. To evaluate the Peierls stress, we extracted from Fig.~\ref{fig:PeierlsEnthalpy}~(a) the Peierls enthalpies $\Delta H_{\rm P}^{\rm act}(\tau)$, \textit{i.e.}{} the maxima of the enthalpy profiles, which we plotted as a function of the applied shear stress in Fig.~\ref{fig:PeierlsEnthalpy}~(b).
The value of $\tau_{\rm P}$ is then determined by extrapolating a power-law fit to find the resolved shear stress $\tau_{\rm P}$ satisfying $\Delta H_{\rm P}^{\rm act}(\tau_{\rm P})=0$. The fit, displayed as a solid black line in Fig.~\ref{fig:PeierlsEnthalpy}~(b), yields $\tau_{\rm P} = 1970$~MPa. We note that this theoretical value of the Peierls stress is significantly larger than the value extrapolated from experiments (900 MPa \cite{Dezerald2015}). Similar discrepancies between theoretical and experimental values of the Peierls stress have been reported in all BCC metals using different atomistic models, from simple pair potentials \cite{Suzuki1970,Basinski1971}, to more advanced embedded atom potentials \cite{Chaussidon2006}, to \textit{ab initio}{} calculations as performed here \cite{Woodward2001,Woodward2002,Romaner2010,Ventelon2013,Dezerald2014,Dezerald2016}. Potential collective \cite{Bulatov2002,Groger2007} and quantum \cite{Suzuki1970,Proville2012,Barvinschi2014} origins have been proposed but are still a matter of debate \cite{Freitas2018}. In the following section, we model the effect of the stress on the 2D energy landscape of the dislocation by assuming that the stress only tilts the potential through the plastic work ($\tau b x$ per unit length of the dislocation if $\tau$ is the $\hkl<111> \hkl(-101)$ resolved shear stress), but does not affect the potential itself. In the inset of Fig. \ref{fig:PeierlsEnthalpy}(a), we show the corrected energy $\Delta H_{\rm P}^{\rm 1D}(x) + \tau b x$, which is indeed independent of the applied stress. We note that the measured variation of the Peierls potential under stress is highly sensitive to the definition used for the dislocation position $x$: the corrected energy is found to be independent of the applied stress only when the position is derived from the stress variation observed along the path (\textit{cf}{} methods in section \ref{sec:methods}).
Similar independence was observed in other BCC metals \cite{Dezerald2016,Kraych2019}, contrasting with an earlier observation on a specific dislocation in aluminum modeled with an interatomic potential \cite{Rodney2009}. \subsection{Deviation to the Schmid law and dislocation trajectory} \label{sec:trajectory} \begin{figure}[!b] \begin{center} \includegraphics[width=0.49\linewidth]{figure6a} \hfill \includegraphics[width=0.49\linewidth]{figure6b} \end{center} \caption{Trajectory of a $1/2\,\hkl[111]$ screw dislocation between two neighboring easy core positions in BCC tungsten, with corresponding Peierls potential\textsuperscript{a}. The dislocation position was deduced from the \textit{ab initio}{} stress variation along the minimum energy path (Eqs. \ref{eq:stress_epsi} and \ref{eq:plastic_strain}). The angle $\alpha$ defines the orientation of the tangent to the trajectory in the first half of the path, while the maximum resolved shear stress plane (MRSSP) is defined by the angle $\chi$. Downward and upward triangles indicate easy and hard core positions, respectively. \\ \footnotesize \textsuperscript{a}The Peierls potential shown here and further used in the article slightly differs from the one published in Ref. \cite{Dezerald2016} and shown in Fig. \ref{fig:PeierlsEnthalpy} because of small differences in the \textit{ab initio}{} parameters and dislocation setup (\textit{cf}{} appendix \ref{app:abinitio} for a description of the parameters used in the present work). } \label{fig:trajectory} \end{figure} The calculations above were performed with an applied shear stress resolved in the $(\bar{1}01)$ glide plane of the screw dislocation. With the notations of Fig. \ref{fig:trajectory}, this corresponds to a $\chi=0$ maximum resolved shear stress plane (MRSSP). Similar calculations can be done with $\chi \neq 0$ by adjusting the applied strain tensor. 
If Schmid's law applies, \textit{i.e.}{} if dislocation glide is activated when the shear stress resolved in the glide plane reaches the Peierls stress in this plane, we expect the yield stress to follow the relation \begin{equation} \tau_{\rm P}(\chi) = \frac{\tau_{\rm P}^0}{\cos(\chi)}, \end{equation} where $\tau_{\rm P}^0=\tau_{\rm P}(\chi=0)$ is the Peierls stress in the $(\bar{1}01)$ glide plane computed above. The above yield criterion is symmetrical between $\chi >0$ and $\chi < 0$, a characteristic of Schmid's law. However, as mentioned in the Introduction, a hallmark of low-temperature plasticity in BCC metals is that they do not obey Schmid's law. In particular, the yield stress is lower when the MRSSP is on one side of the glide plane, the twinning region, which corresponds to $\chi <0$ with the notations of Fig. \ref{fig:trajectory}, than on the other side, the antitwinning region with $\chi >0$. This twinning/antitwinning (T/AT) asymmetry is a consequence of the lack of symmetry of the BCC lattice with respect to $\hkl{110}$ planes. This lack of symmetry is also at the origin of the deviation of the dislocation trajectory, which was visible in Fig. \ref{fig:Peierls2D} and is reproduced in more detail in Fig. \ref{fig:trajectory}, using now the dislocation position defined from the stress variation along the path (Eqs. \ref{eq:stress_epsi} and \ref{eq:plastic_strain}). We see in this figure that the trajectory deviates towards the split core (\textit{i.e.}{} the atomic column in gray), which crystallographically always lies in the twinning region, and away from the hard core (upward red triangle), which necessarily lies in the antitwinning region. This is a direct consequence of the position of the saddle configuration along the split-to-hard line in Fig. \ref{fig:PeierlsCuts}, which is closer to the split core than to the hard core.
Below we show that we can quantitatively relate the amplitude of the deviation of the trajectory to the amplitude of the T/AT asymmetry~\cite{Dezerald2016}. As mentioned above, we assume that the effect of an applied stress on the 2D energy landscape discussed in section \ref{sec:landscape} is to add a linear contribution corresponding to the plastic work, yielding an enthalpy variation \begin{equation} \Delta H^{\rm 2D}_{\rm P}(x,y) = V_{\rm P}^{\rm 2D}(x,y) - \tau_{yz}\,b\,x + \tau_{xz}\,b\,y, \label{eq:Peierls2D} \end{equation} where $x$ and $y$ are the coordinates of the dislocation in the $\hkl[-12-1]$ glide direction and the perpendicular $\hkl[-101]$ direction, respectively, and $\tau_{xz}$ and $\tau_{yz}$ are the two components of the stress tensor which produce a Peach-Koehler force on the dislocation. Considering a resolved shear stress $\tau$ applied in a MRSSP making an angle $\chi$ with respect to the $\hkl(-101)$ glide plane, the two components are $\tau_{xz}=-\tau\sin{(\chi)}$ and $\tau_{yz}=\tau\cos{(\chi)}$, and the enthalpy variation becomes \begin{equation*} \Delta H^{\rm 2D}_{\rm P}(x,y) = V_{\rm P}^{\rm 2D}(x,y) - \tau\,b \left[ x \cos{(\chi)} + y \sin{(\chi)} \right]. \end{equation*} The saddle-point value of this 2D function between two easy cores is the enthalpy barrier $\Delta H_{\rm P}^{\rm act}(\tau,\chi)$ opposing dislocation glide, which was shown in Fig. \ref{fig:PeierlsEnthalpy}~(b) for $\chi=0$. The Peierls stress $\tau_{\rm P}(\chi)$ is then defined as the minimal applied stress $\tau$ for which this enthalpy barrier vanishes. \textit{Ab initio}{} calculations have shown that the trajectory between easy cores (Fig. \ref{fig:Peierls2D}) is sensitive neither to the applied shear stress $\tau$ nor to the other stress components \cite{Dezerald2016,Kraych2019}.
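The tilting of the landscape by the plastic work in Eq. \ref{eq:Peierls2D} can be explored with a toy model. The sketch below uses a sinusoidal potential in reduced units and an illustrative straight trajectory at an angle $\alpha$ (both assumptions are for illustration only, not the fitted tungsten potential): the barrier decreases under stress and, for the same $|\chi|$, is lower on the twinning side.

```python
import numpy as np

# Reduced units: lengths in units of the Peierls-valley spacing, energies in units
# of the zero-stress barrier, stresses in (barrier height)/(b * lam_P).
alpha = np.deg2rad(-15.0)            # illustrative trajectory angle (twinning side)
x = np.linspace(0.0, 1.0, 2001)      # dislocation position between two easy cores
V1D = 0.5 * (1.0 - np.cos(2.0 * np.pi * x))   # sinusoidal model: V(0)=V(1)=0, max 1

def barrier(tau, chi):
    """Enthalpy barrier for a straight trajectory y = x tan(alpha), Eq. (Peierls2D)."""
    y = x * np.tan(alpha)
    dH = V1D - tau * (x * np.cos(chi) + y * np.sin(chi))
    return dH.max() - dH[0]

# The barrier decreases with the applied resolved shear stress...
assert barrier(1.0, 0.0) < barrier(0.5, 0.0) < barrier(0.0, 0.0)
# ...and, for the same |chi|, it is lower on the twinning side (chi < 0, same sign
# as alpha) than on the antitwinning side (chi > 0)
chi = np.deg2rad(15.0)
assert barrier(1.0, -chi) < barrier(1.0, chi)
```

The second assertion is the T/AT asymmetry in embryo: the effective driving stress is proportional to $\cos(\chi)+\tan(\alpha)\sin(\chi)$, which is larger when $\chi$ has the same sign as $\alpha$.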
Assuming that this trajectory does not vary, one can define a 1D functional for the enthalpy barrier between Peierls valleys \begin{equation} \begin{split} \Delta H^{\rm 1D}_{\rm P}(x) &= \Delta H^{\rm 2D}_{\rm P}(x,\bar{y}(x)) \\ &= V_{\rm P}^{\rm 1D}(x) - \tau\,b \left[ x \cos{(\chi)} + \bar{y}(x) \sin{(\chi)} \right], \end{split} \label{eq:Peierls1D} \end{equation} where $\bar{y}(x)$ is the dislocation trajectory and $V_{\rm P}^{\rm 1D}(x) = V_{\rm P}^{\rm 2D}[x,\bar{y}(x)]$ is the energy barrier, shown in Fig. \ref{fig:PeierlsEnthalpy} for $\tau=0$. At the Peierls stress, there exists an unstable position $x^*$ where the first and second derivatives of the enthalpy vanish, leading to \begin{align} \left. \frac{\partial V_{\rm P}^{\rm 1D}}{\partial x} \right|_{x^*} - b \, \tau_{\rm P}(\chi) \left[ \cos{(\chi)} + \left. \frac{\partial \bar{y}}{\partial x} \right|_{x^*} \sin{(\chi)} \right] & = 0, \label{eq:Peierls1D_deriv} \\ \left. \frac{\partial^2 V_{\rm P}^{\rm 1D}}{\partial x^2} \right|_{x^*} - b \, \tau_{\rm P}(\chi) \left. \frac{\partial^2 \bar{y}}{\partial x^2} \right|_{x^*} \sin{(\chi)} & = 0 . \label{eq:Peierls1D_deriv2} \end{align} This pair of equations defines the unstable position $x^*$ and the yield stress $\tau_{\rm P}(\chi)$. When the MRSSP coincides with the \hkl(-101) glide plane, \textit{i.e.}{} for $\chi=0$, Eq. \ref{eq:Peierls1D_deriv2} shows that the unstable position is the inflexion point of the Peierls potential. The Peierls stress is the corresponding slope, \textit{i.e.}{} the maximum derivative of the Peierls potential, divided by $b$. For any other MRSSP, the unstable position in principle also depends on the curvature of the dislocation trajectory. However, \textit{ab initio}{} calculations have shown that this trajectory is smooth and not far from a straight line in its first part, where the unstable position $x^*$ is located \cite{Dezerald2016}. As illustrated in Fig.
\ref{fig:trajectory}, the first part of the trajectory can thus be approximated by a straight line as $\bar{y}(x)=x\tan{(\alpha)}$, where $\alpha$ is the angle between the approximated straight trajectory and the \hkl(-101) glide plane\footnote{We define $\alpha$ here as the orientation of the tangent to the trajectory at the initial position, rather than the orientation of the straight line linking the initial position to the saddle point as in Refs. \cite{Dezerald2016,Kraych2019}, because a better agreement is obtained with this definition between the modified Schmid law (Eq. \ref{eq:PeierlsStress}) and the numerical solution of Eqs. \ref{eq:Peierls1D_deriv} and \ref{eq:Peierls1D_deriv2}.}. With this approximation, $\partial^2 \bar{y}/\partial x^2=0$ and from Eq. \ref{eq:Peierls1D_deriv2}, $x^*$ remains at the Peierls potential inflexion point and does not depend on $\chi$. Eq. \ref{eq:Peierls1D_deriv} then leads to the yield stress \begin{align} \tau_{\rm P}(\chi) &= \frac{1}{b \left[ \cos{(\chi)} + \tan{(\alpha)} \sin{(\chi)} \right]} \left. \frac{\partial V_{\rm P}^{\rm 1D}}{\partial x} \right|_{x^*} \nonumber \\ &= \frac{\cos{(\alpha)}}{\cos{(\chi-\alpha)}}\tau_{\rm P}^0 , \label{eq:PeierlsStress} \end{align} where the Peierls stress for $\chi=0$ is given by \begin{equation*} \tau_{\rm P}^0 = \frac{1}{b} \left. \frac{\partial V_{\rm P}^{\rm 1D}}{\partial x} \right|_{x^*} . \end{equation*} \begin{figure}[!b] \begin{center} \includegraphics[width=0.7\linewidth]{figure7} \end{center} \caption{Resolved shear stress for plastic yield in BCC tungsten as a function of the angle $\chi$ between the MRSSP and the \hkl{110} glide plane. Theoretical values correspond to 0\,K \textit{ab initio}{} calculations using either Schmid law (dashed green line) or including the T/AT asymmetry (solid green line) based on Eq. \ref{eq:PeierlsStress} with $\alpha=-15^{\circ}$. 
Experimental values \cite{Rose1962,Beardmore1965,Argon1966} were measured at 77\,K for a tensile axis between \hkl[001] and \hkl[011] (filled symbols) and between \hkl[001] and \hkl[-111] (open symbols). A scaling factor of 6 is used between theoretical (left coordinate axis) and experimental values (right axis).} \label{fig:tauP_chi} \end{figure} When the dislocation trajectory is straight and coincides with the macroscopic \hkl(-101) glide plane, \textit{i.e.}{} when $\alpha=0$, one recovers the classical $1/\cos(\chi)$ Schmid law. However, when the trajectory deviates as in Fig. \ref{fig:trajectory}, $\alpha\neq0$ and Eq. \ref{eq:PeierlsStress} shows that the shear stress which needs to be considered in the yield criterion is not the shear stress resolved in the \hkl(-101) glide plane, $\tau\cos{(\chi)}$, but rather the shear stress resolved in the plane tangent to the dislocation trajectory, $\tau \cos{(\chi-\alpha)}/\cos{(\alpha)}$. The lowest Peierls stress is thus obtained when the MRSSP coincides with the tangent plane (\textit{i.e.}{} when $\chi=\alpha$), which lies in the twinning region in all BCC transition metals, as discussed above. We thus recover the T/AT asymmetry. Moreover, we find that the amplitude of the asymmetry increases with the magnitude of $\alpha$, and thus with the deviation of the dislocation trajectory from a straight line. We compare in Fig. \ref{fig:tauP_chi} the yield stress predicted in tungsten from the modified Schmid law (Eq. \ref{eq:PeierlsStress}), using the angle $\alpha=-15^{\circ}$ measured on the dislocation trajectory (Fig. \ref{fig:trajectory}), with experimental values obtained on single crystals at 77\,K for different orientations of the tensile axis \cite{Rose1962,Beardmore1965,Argon1966} (see appendix \ref{app:chi_exp} for a derivation of the corresponding $\chi$ angles).
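The modified yield criterion of Eq. \ref{eq:PeierlsStress} is easy to evaluate numerically. The sketch below contrasts it with the classical $1/\cos(\chi)$ Schmid law, using $\alpha=-15^{\circ}$ and yield stresses normalized by $\tau_{\rm P}^0$:

```python
import numpy as np

alpha = np.deg2rad(-15.0)                       # trajectory angle from Fig. trajectory
chi = np.deg2rad(np.linspace(-30.0, 30.0, 121)) # MRSSP angles

# Yield stress normalized by tau_P^0: classical Schmid law vs Eq. (PeierlsStress)
schmid = 1.0 / np.cos(chi)
modified = np.cos(alpha) / np.cos(chi - alpha)

# The modified law is minimal when the MRSSP coincides with the trajectory plane
chi_min = chi[np.argmin(modified)]
assert np.isclose(chi_min, alpha)
# T/AT asymmetry: the twinning side (chi < 0) is softer than the antitwinning side
assert modified[0] < modified[-1]               # tau_P(-30 deg) < tau_P(+30 deg)
# Schmid's law, in contrast, is symmetric in chi
assert np.isclose(schmid[0], schmid[-1])
```

The two laws coincide at $\chi=\alpha$ up to the factor $\cos(\alpha)$ and differ most at the edges of the $\pm 30^{\circ}$ range where the MRSSP is usually sampled.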
Because of the discrepancy between theoretical and experimental Peierls stresses, and also because of the different temperatures (0\,K for \textit{ab initio}{} and 77\,K for experiments), theoretical and experimental data are shown with different scales. As already noted, the modified Schmid law correctly predicts that the twinning region with $\chi<0$ is easier to shear than the antitwinning region, in qualitative agreement with the experimental data. But experiments also show that the yield stress does not depend only on $\chi$: the T/AT asymmetry is very strong for a tensile axis between \hkl[001] and \hkl[011] (filled symbols in Fig. \ref{fig:tauP_chi}), whereas the yield stress is almost constant between \hkl[001] and \hkl[-111] (open symbols in Fig. \ref{fig:tauP_chi}). The modified Schmid law given by Eq. \ref{eq:PeierlsStress} cannot account for such variations between different orientations corresponding to the same MRSSP. We will show in the following that these variations partly arise from the second source of deviation from Schmid's law, non-glide stresses. \subsection{Relaxation volume and tension-compression asymmetry} \label{sec:relaxation_volume} \begin{figure}[!b] \begin{center} \includegraphics[width=0.4\linewidth]{figure8} \end{center} \caption{Atomic displacements in the \hkl(111) plane perpendicular to the dislocation line induced in BCC tungsten by the core dilatation of a screw dislocation in its easy core configuration. Displacement vectors have been magnified by a factor of 50. The dislocation position is indicated by a downward triangle. } \label{fig:Uedge} \end{figure} \textit{Ab initio}{} calculations have shown that screw dislocations in BCC transition metals induce a short-range dilatation elastic field in addition to the elastic field given by the Volterra solution \cite{Clouet2009}. This can be seen in the atomic displacements in Fig.
\ref{fig:Uedge}, which have a component perpendicular to the dislocation line, \textit{i.e.}{} an edge component, in the vicinity of the core. Part of this edge component could be a consequence of elastic anisotropy on the Volterra displacements, but tungsten is nearly elastically isotropic. The edge displacements visible in Fig. \ref{fig:Uedge} are thus due to a 2D expansion centered at the dislocation core. The elastic field induced by this dilatation is short ranged compared to the Volterra elastic field, with a displacement varying as $1/r$ with $r$ the distance to the dislocation line instead of $\ln{(r)}$. This dilatation partly arises from anharmonicity in the crystal response and also from the atomic structure of the dislocation core. The core dilatation is responsible for the dislocation formation volume, which manifests itself experimentally through an increase of the average lattice parameter with the dislocation density \cite{Crussard1949}. This core field can be modeled within elasticity using either line-force dipoles \cite{Gehlen1972,Hirth1973,Clouet2009,Clouet2011a,Clouet2011b} or a 2D Eshelby inclusion \cite{Eshelby1957,Eshelby1959,Kraych2019}, both models being equivalent \cite{Clouet2018}. Using the latter picture, a cylindrical inclusion of cross-section $S_0$ and eigenstrain tensor $\barT{\varepsilon}^*$ is associated with the dislocation core. For the elastic field far from the dislocation and its coupling with an applied stress, it is actually sufficient to consider the relaxation volume tensor $\barT{\Omega} = S_0 \, \barT{\varepsilon}^*$ defined per unit length of dislocation. When the dislocation is in its ground state, the 3-fold symmetry around the \hkl[111] axis of the easy core imposes that $\barT{\Omega}$ is diagonal with only two independent components: $\barT{\Omega}=\diag{(\Omega_{11},\Omega_{11},\Omega_{33})}$.
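To fix orders of magnitude, a quick unit conversion (an added sketch, using the tungsten values $\Omega_{11}=9$ and $\Omega_{33}=-4$\,\AA$^3$ reported below and an illustrative 1\,GPa hydrostatic stress) shows the energy scale of the coupling $-\Omega_{ij}\sigma_{ij}$ between this relaxation volume and an applied stress:

```python
# Unit check: 1 GPa * A^3 = 1e9 Pa * 1e-30 m^3 = 1e-21 J ~ 6.24 meV
E_CHARGE = 1.602176634e-19          # J per eV
GPA_A3_IN_MEV = 1e9 * 1e-30 / E_CHARGE * 1e3

# Tungsten relaxation volume tensor (diagonal, A^3 per Burgers vector)
omega = (9.0, 9.0, -4.0)            # (Omega_11, Omega_11, Omega_33)

sigma_hydro = 1.0                   # GPa, illustrative hydrostatic tension
# E_inter = -Omega_ij sigma_ij reduces to -(tr Omega) * sigma for a
# hydrostatic load; ~ -87 meV per Burgers vector here
E_inter_meV = -sum(omega) * sigma_hydro * GPA_A3_IN_MEV
```

A coupling of a few tens of meV per Burgers vector is small but well within \textit{ab initio}{} resolution, which is why this tensor can be measured from supercell stresses as described next.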
Values of this tensor can be directly obtained from \textit{ab initio}{} calculations because, due to the relaxation volume, the dislocation energy varies when a stress tensor $\barT{\sigma}$ is applied, with a corresponding interaction energy $E^{\rm inter} = -\Omega_{ij} \sigma_{ij}$. In the case of \textit{ab initio}{} modeling of a dislocation dipole with periodic boundary conditions, Eq. \ref{eq:dE_epsi}, which gives the energy variation of the supercell caused by a homogeneous applied strain $\barT{\varepsilon}$, needs to be generalized to \begin{equation} \Delta E(\barT{\varepsilon}) = \frac{1}{2} S \, C_{ijkl}\varepsilon_{ij}\varepsilon_{kl} + C_{ijkl}(b_i A_j - 2\,\Omega_{ij}) \varepsilon_{kl}, \label{eq:dE_epsi_relax} \end{equation} leading to a stress in the supercell \begin{equation} \sigma_{ij}(\barT{\varepsilon}) = C_{ijkl} \left( \varepsilon_{kl} + \frac{b_k A_l - 2\,\Omega_{kl}}{S} \right). \label{eq:stress_epsi_relax} \end{equation} Thanks to this expression, the relaxation volume can be directly deduced from the stress in the supercell after energy relaxation. For tungsten, we obtained $\Omega_{11}=9$ and $\Omega_{33}=-4$\,\AA$^3$ per Burgers vector of screw dislocation. A similar dilatation perpendicular to the dislocation line and contraction along the line were obtained in other BCC metals \cite{Dezerald2014}. \begin{figure}[!bth] \begin{center} \includegraphics[width=0.8\linewidth]{figure9} \end{center} \caption{Variation of the relaxation volume tensor along the minimum energy path between easy core configurations. The different tensor components are shown as a function of the dislocation position $x$ along the glide direction normalized by the distance $\lambda_{\rm P}$ between Peierls valleys. The corresponding shape of the 2D Eshelby inclusion along this path is sketched in the upper part.
} \label{fig:coreVol} \end{figure} The variation of the dislocation relaxation volume along the minimum energy path between easy cores, $\Delta\barT{\Omega}$, is actually more important for plasticity than the absolute value of the tensor. Using the stress variation measured during \textit{ab initio}{} NEB calculations of the Peierls energy barrier (\textit{cf}{} section \ref{sec:PeierlsStress}), one obtains the variation of the relaxation volume tensor shown in Fig. \ref{fig:coreVol}. Except in the initial and final positions, the 3-fold symmetry is not obeyed along the path and the relaxation volume tensor now has the following form in the Cartesian basis associated with the dislocation (see Fig. \ref{fig:core} and the sketch on the right-hand side of Fig. \ref{fig:coreVol}): \begin{equation*} \barT{\Omega} = \begin{pmatrix} \Omega_{11} & \Omega_{12} & 0 \\ \Omega_{12} & \Omega_{22} & 0 \\ 0 & 0 & \Omega_{33} \end{pmatrix}. \end{equation*} There is no symmetry argument imposing that the components $\Omega_{13}$ and $\Omega_{23}$ should vanish, but \textit{ab initio}{} calculations show that this is the case for tungsten \cite{Kraych2019} and other BCC transition metals. As described in appendix \ref{app:elastic}, a slight adjustment of the elastic constants is necessary to impose that the initial and final positions along the path, both corresponding to the same ground state, have the same relaxation volume. This correction is reasonable considering the precision of the elastic constants from \textit{ab initio}{} calculations and is also compatible with the variations of the elastic constants in the dislocated crystal. Because of the variations of the relaxation volume and its coupling with the applied stress tensor, the enthalpy of the dislocation as a function of its position in the \hkl(111) plane in Eq.
\ref{eq:Peierls2D} becomes \begin{equation} \Delta H^{\rm 2D}_{\rm P}(x,y) = V_{\rm P}^{\rm 2D}(x,y) - \sigma_{ij} \, \Delta\Omega_{ij}^{\rm 2D}(x,y) - \tau_{yz}\,b\,x + \tau_{xz}\,b\,y. \label{eq:Peierls2D_relax} \end{equation} Since the dislocation trajectory is not sensitive to the applied stress tensor (\textit{cf}{} section \ref{sec:trajectory}), one can recover a 1D functional for the enthalpy barrier between Peierls valleys \begin{equation} \Delta H^{\rm 1D}_{\rm P}(x) = V_{\rm P}^{\rm 1D}(x) - \sigma_{ij} \, \Delta\Omega_{ij}^{\rm 1D}(x) - \tau\,b \left[ x \cos{(\chi)} + \bar{y}(x) \sin{(\chi)} \right], \label{eq:Peierls1D_relax} \end{equation} where $\Delta\barT{\Omega}^{\rm 1D}(x) = \Delta\barT{\Omega}^{\rm 2D}(x,\bar{y}(x))$ is the variation of the relaxation volume along the minimum energy path. Finally, keeping our approximation of a straight dislocation trajectory defined by an angle $\alpha$, one obtains the yield stress \begin{equation} \tau_{\rm P}(\chi,\barT{\sigma}) = \frac{1}{b \left[ \cos{(\chi)} + \tan{(\alpha)} \sin{(\chi)} \right]} \left( \left. \frac{\partial V_{\rm P}^{\rm 1D}}{\partial x} \right|_{x^*(\barT{\sigma})} - \left. \sigma_{ij}\frac{\partial \Delta\Omega_{ij}^{\rm 1D}}{\partial x} \right|_{x^*(\barT{\sigma})} \right) \label{eq:PeierlsStress_relax} \end{equation} where $x^*(\barT{\sigma})$ is the inflexion point on the generalized Peierls potential $V_{\rm P}^{\rm 1D}(x) - \sigma_{ij} \, \Delta\Omega_{ij}^{\rm 1D}(x)$, which depends \textit{a priori} on the non-glide stress $\barT{\sigma}$. Using a first-order expansion in $\barT{\sigma}$ of the two instability conditions defining the Peierls stress, $\partial \Delta H_{\rm P}^{\rm 1D}/\partial x=0$ and $\partial^2 \Delta H_{\rm P}^{\rm 1D}/\partial x^2=0$, one shows \cite{Kraych2019} that the variation of the inflexion point position and its impact on the derivatives appearing in Eq.
\ref{eq:PeierlsStress_relax} can be neglected, finally leading to \begin{equation} \tau_{\rm P}(\chi,\barT{\sigma}) = \frac{\cos{(\alpha)}}{\cos{(\chi-\alpha)}} \left ( \tau_{\rm P}^0 - \sigma_{ij} \, \Delta\Omega_{ij}' \right), \end{equation} with $\Delta\barT{\Omega}'$ the derivative of the relaxation volume tensor calculated at the inflexion point of the Peierls potential $V_{\rm P}^{\rm 1D}(x)$. \begin{figure}[!bth] \begin{center} \includegraphics[width=0.49\linewidth]{figure10a} \hfill \includegraphics[width=0.49\linewidth]{figure10b} \end{center} \caption{Dependence of the Peierls enthalpy barrier (left) and yield stress (right) on a non-glide stress $\sigma$ perpendicular to the MRSSP (\textit{cf}{} Eq. \ref{eq:tauSigmaStess} for the stress tensor definition). $\chi$ is the angle between the MRSSP and the $\hkl{110}$ glide plane and is zero in the left figure. } \label{fig:Peierls_relax} \end{figure} To illustrate the effect of the relaxation volume on the Peierls stress, we consider a mechanical loading composed of a shear stress $\tau$ in a MRSSP defined by its angle $\chi$, as in the previous section, combined with a traction/compression stress $\sigma$ perpendicular to the dislocation line, leading to the following stress tensor in the dislocation frame \begin{equation} \barT{\sigma} = \begin{pmatrix} -\sigma \cos{(2\chi)} & -\sigma \sin{(2\chi)} & \tau \sin{(\chi)} \\ -\sigma \sin{(2\chi)} & \sigma \cos{(2\chi)} & -\tau \cos{(\chi)} \\ \tau \sin{(\chi)} & -\tau \cos{(\chi)} & 0 \end{pmatrix}.
\label{eq:tauSigmaStess} \end{equation} For this loading, the enthalpy variation and associated yield stress are given by \begin{multline} \Delta H^{\rm 1D}_{\rm P}(x) = V_{\rm P}^{\rm 1D}(x) + \sigma \cos{(2\chi)} \left[ \Delta\Omega_{11}(x) - \Delta\Omega_{22}(x) \right] + 2 \sigma \sin{(2\chi)} \Delta \Omega_{12}(x) \\ - \tau\,b \left[ x \cos{(\chi)} + \bar{y}(x) \sin{(\chi)} \right], \label{eq:Peierls1D_relax_traction} \end{multline} and \begin{equation} \tau_{\rm P}(\chi,\sigma) = \frac{\cos{(\alpha)}}{\cos{(\chi-\alpha)}} \left ( \tau_{\rm P}^0 + \sigma \cos{(2\chi)} \left[ \Delta\Omega_{11}' - \Delta\Omega_{22}' \right] + 2 \sigma \sin{(2\chi)} \Delta \Omega_{12}' \right). \end{equation} The yield stress is now sensitive to the anisotropy of the dislocation core dilatation, or more precisely to its variation along the dislocation migration path. When the MRSSP is in tension ($\sigma>0$), both the enthalpy barrier and the yield stress decrease (Fig. \ref{fig:Peierls_relax}) as the dislocation core tends to expand in the direction perpendicular to the glide plane ($\Delta\Omega_{22}>0$) and to contract in the glide direction ($\Delta\Omega_{11}<0$) when transitioning between Peierls valleys (see Fig. \ref{fig:coreVol}). The variation of the dislocation relaxation volume therefore explains the observed decrease of the Peierls stress when the MRSSP is in tension, a general feature of the departure from Schmid's law observed in BCC metals modeled with various atomistic models \cite{Vitek2004,Groger2008,Chen2013,Groger2014,Hale2015}. \subsection{Generalized yield criterion} \label{sec:generalized_yield} \begin{figure}[bth] \begin{center} \includegraphics[width=0.70\linewidth]{figure11} \end{center} \caption{(a) Sketch of a tensile mechanical test showing the \hkl(-101) glide plane and the maximum resolved shear stress plane (MRSSP). 
(b) Angles $\zeta$ and $\chi$ defining the orientation of the tensile axis $\vec{t}$ in the standard stereographic projection of the minimum irreducible zone of the \hkl[111]\hkl(-101) slip system with $\zeta \in{[0^\circ;\,90^\circ]}$ and $\chi \in{[-30^\circ;\,+30^\circ]}$. The thick black triangle delimited by \hkl[001], \hkl[011] and \hkl[-111] is the standard stereographic triangle where the \hkl[111]\hkl(-101) slip system has the highest Schmid factor. } \label{fig:SchemaAngles} \end{figure} We now study how the coupling of the applied stress with both the trajectory and the relaxation volume of the $1/2\hkl<111>$ screw dislocation impacts the yield stress for the simplest mechanical loading: a traction or compression test on a single crystal. Under a uniaxial loading of magnitude $\sigma$ along an axis $\vec{t}$, the stress tensor applied to the crystal is: \begin{equation} \barT{\Sigma}=\sigma (\vec{t}{\otimes}\vec{t}). \label{eq:uniaxial} \end{equation} Using spherical coordinates to project the tensile axis $\vec{t}$ in the frame of the gliding dislocation, the stress tensor $\barT{\Sigma}$ is expressed as: \begin{equation} \barT{\Sigma} = \sigma \begin{bmatrix} \sin^2{(\zeta)}\sin^2{(\chi)} & \sin^2{(\zeta)}\sin{(2\chi)}/2 & \sin{(2\zeta)}\sin{(\chi)}/2 \\ & \sin^2{(\zeta)}\cos^2{(\chi)} & \sin{(2\zeta)}\cos{(\chi)}/2 \\ & & \cos^2{(\zeta)} \end{bmatrix}, \label{eq:stress_tensor} \end{equation} where $\zeta$ is the angle between the slip direction, \textit{i.e.}{} the Burgers vector $\vec{b}$, and the tensile axis $\vec{t}$, and $\chi$ the angle between the glide plane and the MRSSP\footnote{Following Duesbery \cite{Duesbery1984}, $\chi$ can also be defined as the angle between the normal $\vec{n}$ to the glide plane and the projection $\vec{p}$ of the tensile axis on the plane orthogonal to the slip direction.} (Fig. \ref{fig:SchemaAngles}).
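As a consistency check (an added sketch), Eq. \ref{eq:stress_tensor} can be recovered by building $\sigma\,\vec{t}\otimes\vec{t}$ from the tensile-axis components $\vec{t}=(\sin{\zeta}\sin{\chi},\,\sin{\zeta}\cos{\chi},\,\cos{\zeta})$ in the dislocation frame, one consistent spherical-coordinate choice; the $yz$ component then reproduces the Schmid factor $\sin{\zeta}\cos{\zeta}\cos{\chi}$:

```python
import numpy as np

def stress_tensor(sigma, zeta, chi):
    """Uniaxial stress tensor in the dislocation frame, Eq. (stress_tensor).

    zeta, chi in radians; only the upper triangle is written in the text,
    the tensor being symmetric.
    """
    s2z, s2c = np.sin(2 * zeta), np.sin(2 * chi)
    sz2 = np.sin(zeta) ** 2
    return sigma * np.array([
        [sz2 * np.sin(chi) ** 2, sz2 * s2c / 2,          s2z * np.sin(chi) / 2],
        [sz2 * s2c / 2,          sz2 * np.cos(chi) ** 2, s2z * np.cos(chi) / 2],
        [s2z * np.sin(chi) / 2,  s2z * np.cos(chi) / 2,  np.cos(zeta) ** 2],
    ])

# Arbitrary orientation of the tensile axis
zeta, chi = np.radians(50.0), np.radians(-12.0)
t = np.array([np.sin(zeta) * np.sin(chi),
              np.sin(zeta) * np.cos(chi),
              np.cos(zeta)])
S = stress_tensor(1.0, zeta, chi)   # should equal outer(t, t) for sigma = 1
```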
The enthalpy is now expressed as \begin{equation} \begin{aligned} \Delta H_{\rm P}^{\rm 1D}(x)={} & V_{\rm P}^{\rm 1D}(x) -\frac{1}{2} \sigma \sin{(2\zeta)} \, b \left[ x\cos{(\chi)} + \bar{y}(x)\sin{(\chi)} \right] \\ & - \frac{1}{2} \sigma \sin^2{(\zeta)} \left\{ [\Delta \Omega_{22}(x) - \Delta \Omega_{11}(x)] \cos{(2\chi)} + 2 \Delta \Omega_{12}(x)\sin{(2\chi)} \right\} \\ & - \frac{1}{2} \sigma \sin^2{(\zeta)} [\Delta \Omega_{11}(x) + \Delta \Omega_{22}(x) + \Delta \Omega_{33}(x)] + \frac{1}{2} \sigma [1 - 3 \cos^2{(\zeta)}] \Delta \Omega_{33}(x) . \label{eq:non_glide} \end{aligned} \end{equation} The tensile yield stress at 0\,K, $\sigma^0_Y$, is again found at a position $x^*$ satisfying the instability conditions $\partial \Delta H_{\rm P}^{\rm 1D}/\partial x=0$ and $\partial^2 \Delta H_{\rm P}^{\rm 1D}/\partial x^2=0$, which need to be solved numerically for each orientation of the loading axis defined by $\zeta$ and $\chi$, thus leading to $\sigma^0_{\rm Y}(\zeta,\chi)$. \begin{figure}[bth] \begin{center} \includegraphics[width=0.8\linewidth]{figure12} \end{center} \caption{Stereographic projection of the BCC lattice showing in colors the regions where each individual \hkl<111>\hkl{110} slip system has the highest Schmid factor. The thick black triangle is the minimum irreducible zone of the \hkl[111]\hkl(-101) slip system with $\zeta \in \left[ 0\,,90^\circ \right]$ and $\chi \in \left[ -30^\circ\,,30^\circ \right]$. } \label{fig:StereographicCircleSchmid} \end{figure} With non-Schmid effects taken into account, it becomes necessary to consider all possible \hkl<111>\hkl{110} slip systems to evaluate the minimum tensile yield stress as a function of the orientation of the loading axis. The twelve \hkl<111>\hkl{110} slip systems of the BCC lattice, defined by 4 different \hkl<111> slip directions and 3 different \hkl{110} glide planes, are presented in the stereographic projection in Fig. \ref{fig:StereographicCircleSchmid}. 
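The numerical search for these two instability conditions can be sketched in the simplest limit where the relaxation-volume and trajectory terms are dropped ($\Delta\barT{\Omega}=0$, $\bar{y}=0$) and the Peierls potential is taken sinusoidal (an assumed model potential, not the \textit{ab initio}{} one); the yield stress then reduces to Schmid's law, which provides a sanity check of the procedure:

```python
import numpy as np

# Illustrative (not ab initio) inputs: Burgers vector b and Peierls valley
# spacing lam in angstrom, barrier height V0 in eV/angstrom.
b, lam, V0 = 2.74, 2.58, 0.04
x = np.linspace(0.0, lam, 200_001)
V = 0.5 * V0 * (1.0 - np.cos(2.0 * np.pi * x / lam))
dV = np.gradient(V, x)

def has_stationary_point(sigma, zeta, chi):
    """True while dH/dx = dV/dx - f changes sign, f the glide force per length."""
    f = 0.5 * sigma * b * np.sin(2.0 * zeta) * np.cos(chi)
    return (dV - f).max() >= 0.0 >= (dV - f).min()

def sigma_yield(zeta, chi, hi=10.0, tol=1e-10):
    """Bisect on sigma for the loss of the stationary point (0 K instability)."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if has_stationary_point(mid, zeta, chi) else (lo, mid)
    return 0.5 * (lo + hi)

tau_P0 = np.pi * V0 / (lam * b)         # analytic Peierls stress of the sinusoid
zeta, chi = np.radians(45.0), np.radians(0.0)
sY = sigma_yield(zeta, chi)             # Schmid factor 1/2, so expect 2 tau_P0
```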
In any region of the stereographic projection delimited by \hkl<100>, \hkl<110>, and \hkl<111> orientations, a single slip system has the highest Schmid factor. This defines the standard stereographic triangle for this slip system. Among the equivalent slip systems at each corner of a stereographic triangle, all make the same angle $\zeta$ with the tensile axis, but half of them have a positive $\chi$ angle, and the other half a negative $\chi$ angle. Hence only half are sheared in the twinning sense, and the other half in the anti-twinning sense. With the $1/\cos{(\chi-\alpha)}$ variation of the yield stress predicted by the modified Schmid law, slip systems with positive or negative $\chi$ angles are no longer equivalent and systems with a tensile axis oriented towards negative $\chi$ (respectively positive $\chi$) are easier to activate in tension (respectively in compression). Hence, there is a splitting of the slip systems into twinned and anti-twinned groups. As a consequence, a full description of the yield stress variation with the tensile axis orientation cannot be restricted to a single stereographic triangle, but two adjacent triangles are needed. \begin{figure}[bth] \begin{center} \includegraphics[trim = 36mm 28mm 2mm 24mm, clip, width=1\linewidth]{figure13.png} \end{center} \caption{Primary \hkl<111>\hkl{110} slip system (first row) and corresponding yield stress (second row) activated in tungsten single crystals under uniaxial loading at 0\,K as a function of the orientation of the deformation axis and predicted using the Schmid law (a), and the yield criterion developed in this work in tension (b) and compression (c). The color coding of the different slip systems and the stress range are indicated in the legend on the left.} \label{fig:PrimarySystemsLevel} \end{figure} The yield stress of each slip system was evaluated numerically at 0\,K as a function of the orientation of the loading axis $\vec{t}$. We present in the first row of Fig.
\ref{fig:PrimarySystemsLevel} a color map showing the distribution of primary \hkl<111>\hkl{110} slip systems, {\textit{i.e.}} the systems having the lowest yield stress, as a function of the tensile axis orientation. The results are shown according to Schmid's law, and using the yield criterion including non-Schmid effects in tension and compression. The corresponding minimum yield stress is presented in the second row of Fig. \ref{fig:PrimarySystemsLevel}. With Schmid's law, only one primary slip system exists in each stereographic triangle and the yield stress directly follows the distribution of Schmid factors, with easier glide in the regions near the $\hkl[001]-\hkl[011]$ edge and the $\chi=0$ line, and a maximum yield stress near the \hkl[-111] corner. When non-Schmid effects are taken into account, several primary slip systems appear inside the $\hkl[001]-\hkl[011]-\hkl[-111]$ triangle, with different distributions in tension and compression. The region of the stereographic projection where the \hkl[111]\hkl(-101) system is activated (orange region in Fig. \ref{fig:PrimarySystemsLevel}) is shifted towards $\chi<0$ in tension, and towards $\chi>0$ in compression. This shift of the primary slip system is also responsible for the emergence of neighboring primary systems close to the edges of the stereographic triangle. Turning now to the yield stresses (lower row of Fig. \ref{fig:PrimarySystemsLevel}), a clear tension/compression asymmetry appears. With a larger blue region corresponding to the lowest values, the average yield stress necessary to activate plasticity is lower in tension than in compression. This asymmetry is a direct consequence of the dislocation relaxation volume and is mainly driven by the sign of the difference $\Delta\Omega_{22}-\Delta\Omega_{11}$ as analyzed in the previous section.
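The sign of this asymmetry can be made concrete with a small sketch of the yield-stress expression derived in the previous section, using hypothetical values for the derivatives $\Delta\Omega_{ij}'$ (dimensionless here, with only their signs taken from the expansion/contraction pattern of Fig. \ref{fig:coreVol}):

```python
import math

# Hypothetical inputs for illustration only: reference Peierls stress tau0
# (arbitrary units), trajectory angle alpha, and relaxation-volume derivatives
# with the signs of Fig. 9 (contraction along the glide direction, dW11 < 0,
# expansion normal to the glide plane, dW22 > 0).
tau0, alpha = 1.0, math.radians(-15.0)
dW11, dW22, dW12 = -0.02, 0.03, 0.005

def tau_yield(chi_deg, sigma):
    """tau_P(chi, sigma) with a non-glide stress sigma normal to the MRSSP."""
    chi = math.radians(chi_deg)
    non_glide = (sigma * math.cos(2 * chi) * (dW11 - dW22)
                 + 2 * sigma * math.sin(2 * chi) * dW12)
    return math.cos(alpha) / math.cos(chi - alpha) * (tau0 + non_glide)

# Tension on the MRSSP (sigma > 0) softens glide, compression hardens it
soft, ref, hard = tau_yield(0, +0.5), tau_yield(0, 0.0), tau_yield(0, -0.5)
```

Because $\Delta\Omega_{11}'-\Delta\Omega_{22}'<0$, any $\sigma>0$ lowers the yield stress at $\chi=0$, which is the tension/compression asymmetry discussed above.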
The generalized yield criterion, which considers the coupling of the dislocation relaxation volume with the applied stress, thus offers a physical explanation for the experimental observation that the yield stress for a given angle $\zeta$ is generally higher in compression than in tension, regardless of the orientation of the loading axis \cite{Byron1967,Liu1972,Takeuchi1972,Nawaz1975}. \begin{figure}[bth] \begin{center} \includegraphics[width=1\linewidth]{figure14.pdf} \end{center} \caption{Yield stress of the different slip systems in tungsten under uniaxial loading at 0\,K as a function of the angle $\chi$ for $\zeta=45^\circ,\,50^\circ,\,55^\circ$: (a) according to the Schmid law and including non-Schmid effects in (b) tension and (c) compression. } \label{fig:ThetaLines} \end{figure} To better visualize the consequences of the modified Schmid law on the competition between different slip systems, profiles of the yield stress at 0\,K as a function of the orientation of the loading axis are presented in Fig. \ref{fig:ThetaLines} for three different constant $\zeta$ cuts of the stereographic projection. For each slip system, a clear T/AT asymmetry is visible. It also appears that the yield stress is lower in tension than in compression, regardless of the orientation of the loading axis, and that the activated primary slip system usually differs between tension and compression for the same orientation. \begin{figure}[bth] \begin{center} \includegraphics[width=1\linewidth]{figure15} \end{center} \caption{Variation of the minimum yield stress (dashed black lines) among all \hkl<111>\hkl{110} slip systems at 0\,K according to (a) the Schmid law and (b) the yield criterion developed in this work, as a function of the orientation of the tensile axis along the $\hkl[001]-\hkl[011]-\hkl[-111]$ edges of the standard stereographic triangle. The path is sketched in the inset of (a) with the definition of the angles $\phi$ and $\psi$.
(c) Experimental data at 77\,K in tension from Beardmore and Hull ($\bullet$) \cite{Beardmore1965}, Argon and Maloof ($\blacktriangle$) \cite{Argon1966}, and Rose \textit{et al}. ($\blacksquare$) \cite{Rose1962}.} \label{fig:EdgesTriangle_b111} \end{figure} We compare in Fig. \ref{fig:EdgesTriangle_b111} the predicted yield stress at 0\,K for tungsten with various experimental data \cite{Beardmore1965,Rose1962,Argon1966} obtained in tension at 77\,K for single crystals oriented along the edges of the standard stereographic triangle. As will be shown in the next section, the 77\,K temperature of experiments is sufficiently low compared to the tungsten melting temperature to have a minor impact on the yield stress orientation dependence. Acknowledging the discrepancy between theoretical and experimental Peierls stresses discussed in section \ref{sec:PeierlsStress}, we only compare relative variations. While the theoretical yield criterion correctly predicts a lower yield stress for \hkl[001] than for \hkl[-111] orientations, a feature actually already present in Schmid's law, it fails to predict the strong increase of the yield stress close to the \hkl[011] axis. In particular along the $\hkl[001] - \hkl[011]$ edge, the theoretical criterion predicts an almost flat variation of the yield stress because of the competition between two primary slip systems, instead of a steep increase when approaching \hkl[011]. Although the modified Schmid law makes it possible to rationalize the T/AT and tension/compression asymmetries, it apparently still misses some ingredients to fully account for the yield stress variations in all regions of the stereographic projection, in particular in the vicinity of the \hkl[011] direction. In this region, glide of $1/2\,\hkl<111>$ dislocations on \hkl{112} instead of \hkl{110} planes has been demonstrated experimentally, both from slip trace analysis on strained single crystals \cite{Argon1966} and \textit{in situ}{} TEM straining experiments \cite{Caillard2018}.
Although no precise atomistic mechanism has been proposed until now to explain glide in \hkl{112} planes, the inclusion of \hkl<111>\hkl{112} slip systems in the yield criterion appears as a necessary next step to fully describe the yield surface of BCC metals, in particular tungsten. As inclusion of additional slip systems could only lower the theoretical yield stress, one should probably also invoke a locking of the active $1/2\,\hkl<111>\hkl{110}$ slip systems to rationalize the steep increase of the yield stress experimentally observed close to the \hkl[011] direction. \section{Thermal activation} We now describe how the effect of temperature on the yield stress can be modeled using \textit{ab initio}{} calculations. At low temperature, glide of screw dislocations in BCC metals is thermally activated and operates through the nucleation of kink pairs across the Peierls barrier and their subsequent propagation along the dislocation line, as sketched in Fig. \ref{fig:KinkPropagation}(a). In pure BCC metals, kinks glide along a dislocation line with a negligible lattice friction and the motion of the screw dislocation is controlled by kink nucleation. Modeling kinked dislocations in pure BCC metals like tungsten requires supercells too large for \textit{ab initio}{} calculations. We therefore employ a multiscale approach based on a line tension model adjusted on \textit{ab initio}{} calculations \cite{Proville2013}. \begin{figure}[bth] \begin{center} \includegraphics[width=0.95\linewidth]{figure16} \end{center} \caption{(a) Sketch of a dislocation line (blue) crossing a Peierls barrier through the nucleation and propagation of a kink pair. 
(b) Projection of the $1/2\,\hkl[111]$ screw dislocation trajectory gliding in the \hkl(-101) plane with the three most displaced \hkl[111] atomic columns represented in blue, red and green.} \label{fig:KinkPropagation} \end{figure} \subsection{Line tension model for kink-pair formation} Within the line tension (LT) approximation, the dislocation is modeled as an elastic line, whose resistance to bending is characterized by its line tension. The shape of the dislocation is represented by a function $x(z)$ defining the position of the dislocation in its glide plane as a function of the coordinate $z$ along the dislocation line (see Fig. \ref{fig:KinkPropagation}). The bow-out of the dislocation line under an applied stress tensor $\barT{\Sigma}$ necessary to form a kink pair causes a change in the dislocation enthalpy given by \cite{Proville2013} \begin{equation} H_{\rm LT}[x(z),\barT{\Sigma}] = \int{ \mathrm{d}{z} \left\{ \Delta H_{\rm P}^{\rm 1D}[x(z),\barT{\Sigma}] + \dfrac{\Gamma}{2} \left( \dfrac{\partial x}{\partial z} \right)^2 \right\} }, \label{eq:LT} \end{equation} where $\Delta H_{\rm P}^{\rm 1D}$ is the enthalpy variation of the straight dislocation per unit length under the applied stress $\barT{\Sigma}$ (Eq. \ref{eq:Peierls1D_relax}) and $\Gamma$ the line tension, which is assumed isotropic and independent of the applied stress. The line tension $\Gamma$ can be extracted from \textit{ab initio}{} calculations following the approach of Proville {\normalfont\itshape et al.}{} \cite{Proville2013,Dezerald2015}. We consider a $2b$-wide supercell constructed by stacking two one-$b$ slabs, each containing a relaxed screw dislocation dipole in its easy core configuration. As the dislocation crosses the Peierls barrier in a \hkl{110} plane, the three \hkl<111> atomic columns defining the dislocation core and represented with colors in Fig. \ref{fig:KinkPropagation}(b) move parallel to the dislocation line along the \hkl[111] direction.
To emulate the bow-out of the line in the $2b$ simulation cell, the position of the dislocation in the lower $1b$-slab is kept fixed in its Peierls valley by freezing the displacement of the three core \hkl<111> columns, while a constrained displacement is imposed on these columns in the upper slab to mimic the beginning of kink nucleation. The change in energy resulting from the bow-out of the dislocation line under zero applied stress is then fitted to a discretized version of Eq. \ref{eq:LT} to extract the line tension $\Gamma$. Calculations in different BCC metals \cite{Dezerald2015} show only small metal-to-metal variations, in contrast with the line tension calculated from elasticity theory. The line tension defining the bow-out of a kinked screw dislocation corresponds to a localized energy variation, which cannot be modeled with elasticity and for which an atomic description is needed. For BCC tungsten, we find $\Gamma = 3.41$\,eV/{\AA}. \begin{figure}[bth] \begin{center} \hspace*{-2mm} \includegraphics[width=1\linewidth]{figure17} \end{center} \caption{Line tension model of dislocation mobility adjusted on \textit{ab initio}{} calculations: (a) line profile of a screw dislocation with a critical kink-pair for different applied resolved shear stresses, (b) kink-pair nucleation enthalpy as a function of applied resolved shear stress (the solid line is a fit to Kocks' law and the colored squares refer to the stresses in (a)), (c) yield stress as a function of temperature with a strain rate $\dot{\varepsilon}=8.5\times10^{-4}$\,s$^{-1}$, a dislocation length $l_{\rm D}=1/\sqrt{\rho_{\rm D}}$ and dislocation densities ranging from $10^{7}$ to $10^{16}$\,m$^{-2}$. No activation entropy ($T_{\rm m}=\infty$ in Eq. \ref{eq:yield_temperature}) is considered for the yield stress.
} \label{fig:ThermalActivation} \end{figure} Having computed \textit{ab initio}{} the two material parameters entering the line tension model, \textit{i.e.}{} the Peierls enthalpy $\Delta H_{\rm P}^{\rm 1D}(x)$ and the line tension $\Gamma$, the kink-pair nucleation enthalpy is obtained by searching for the dislocation profile $x(z)$ crossing one Peierls valley, which corresponds to a saddle point of the functional in Eq. \ref{eq:LT}. Profiles obtained for different amplitudes of an applied $\hkl<111>\hkl{110}$ resolved shear stress $\tau$ are shown in Fig. \ref{fig:ThermalActivation}(a), with the corresponding activation enthalpies $\Delta H_{\rm kp}$ in Fig. \ref{fig:ThermalActivation}(b). Here we used a simple Peierls potential without non-Schmid effects ($\Delta \barT{\Omega}=0$ and $\bar{y}(x)=0$ in Eq. \ref{eq:Peierls1D_relax}). The nucleation enthalpy can be fitted using Kocks' law \cite{Kocks1975}: \begin{equation} \Delta H_{\rm kp}(\tau) = \Delta E_{\rm kp} \left[ 1 - \left( \frac{\tau}{\tau^0_{\rm P}} \right)^p \right]^q, \label{eq:Kocks0} \end{equation} where $\Delta E_{\rm kp}$ is the kink-pair formation energy, $\tau^0_{\rm P}$ is the Peierls stress in the \hkl{110} plane, and $p$ and $q$ are adjustable parameters. For BCC W, we obtain a formation energy $\Delta E_{\rm kp}=1.70$\,eV for two isolated kinks. As demonstrated in Ref. \cite{Dezerald2015}, this formation enthalpy can be well approximated by $\Delta E_{\rm kp} = 4\sqrt{2}/\pi \times \lambda_{\rm P}\sqrt{\Gamma \, V_{\rm P}^{\rm act}}$, with $\lambda_{\rm P}=a\sqrt{6}/3$ the distance between Peierls valleys and $V_{\rm P}^{\rm act}$ the height of the Peierls potential. This shows that the kink formation enthalpy is equally sensitive to the Peierls potential and the line tension. Non-Schmid effects can be readily included in the line tension approximation by using in Eq. \ref{eq:LT} the full expression of the Peierls enthalpy $\Delta H_{\rm P}^{\rm 1D}$ given by Eq.
\ref{eq:Peierls1D_relax}. The kink-pair nucleation enthalpy can still be described by Kocks' law, but with parameters that now depend on the mechanical loading. In the case of a uniaxial tensile test as discussed above, we have: \begin{equation} \Delta H_{\rm kp}(\sigma,\zeta,\chi) = \Delta E_{\rm kp} \left[ 1 - \left( \frac{\sigma}{\sigma^0_{\rm Y}(\zeta,\chi)} \right)^{p(\zeta,\chi)} \right]^{q(\zeta,\chi)}. \label{eq:Kocks} \end{equation} \subsection{Dislocation velocity and yield stress} Assuming that dislocation glide is controlled by kink nucleation, the dislocation velocity $v_{\rm gl}$ is given by: \begin{equation} v_{\rm gl} = \nu_{\rm D} \dfrac{l_{\rm D}}{b} \lambda_{\rm P} \exp{\left( -\dfrac{\Delta G_{\rm kp}}{k_{\rm B} T} \right)}. \label{eq:velocity} \end{equation} $\nu_{\rm D}$ is an attempt frequency for the nucleation event and is taken equal to the Debye frequency ($\nu_{\rm D}=52$\,THz for BCC W \cite{Kittel1966}). The ratio $l_{\rm D}/b$ is an estimate of the number of potential kink-pair nucleation sites, with $l_{\rm D}$ the length of the dislocation line. A good estimate of this dislocation length is $l_{\rm D}=1/\sqrt{\rho_{\rm D}}$, with $\rho_{\rm D}$ the dislocation density. The kink-pair formation free enthalpy $\Delta G_{\rm kp} =\Delta H_{\rm kp} - T \Delta S_{\rm kp}$ is composed of the formation enthalpy (Eq. \ref{eq:Kocks}) and of the formation entropy, which is unknown. Evaluating this entropic contribution using either a harmonic approximation \cite{Proville2012} or thermodynamic integration \cite{Swinburne2018} is still out of reach of \textit{ab initio}{} calculations.
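Leaving entropy aside for a moment, the enthalpic ingredients can be cross-checked: for a sinusoidal Peierls potential, the line tension functional of Eq. \ref{eq:LT} admits an exact kink solution whose energy reproduces the relation $\Delta E_{\rm kp} = 4\sqrt{2}/\pi \times \lambda_{\rm P}\sqrt{\Gamma \, V_{\rm P}^{\rm act}}$ quoted above. The sketch below verifies this numerically, with the \textit{ab initio}{} line tension $\Gamma=3.41$\,eV/\AA{} but an assumed lattice parameter ($a \simeq 3.165$\,\AA) and an assumed barrier height $V_0$, not the \textit{ab initio}{} potential:

```python
import numpy as np

# Line-tension inputs: Gamma (eV/A) from the ab initio fit; lam_P (A) from an
# assumed lattice parameter a ~ 3.165 A; V0 (eV/A) is illustrative only.
gamma = 3.41
lam = 3.165 * np.sqrt(6.0) / 3.0
V0 = 0.04

# Exact kink profile of the sine-Gordon line-tension model with
# V_P(x) = V0 sin^2(pi x / lam):
#   x(z) = (2 lam / pi) atan(exp(kappa z)),  kappa = (pi / lam) sqrt(2 V0 / Gamma)
kappa = np.pi * np.sqrt(2.0 * V0 / gamma) / lam
z = np.linspace(-80.0, 80.0, 400_001)
xk = (2.0 * lam / np.pi) * np.arctan(np.exp(kappa * z))
dxdz = (lam * kappa / np.pi) / np.cosh(kappa * z)     # analytic derivative

# Energy of a kink pair from the functional of Eq. (LT), trapezoidal rule
integrand = V0 * np.sin(np.pi * xk / lam) ** 2 + 0.5 * gamma * dxdz ** 2
dz = z[1] - z[0]
E_pair = 2.0 * dz * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

# Closed-form expression quoted in the text
E_formula = 4.0 * np.sqrt(2.0) / np.pi * lam * np.sqrt(gamma * V0)
```

With $V_0=0.04$\,eV/\AA{} the kink-pair energy comes out close to the reported $\Delta E_{\rm kp}=1.70$\,eV, but this agreement only reflects the choice of $V_0$.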
Here, in order to illustrate the potential impact of entropy, we will use a simple approximation, the Meyer-Neldel compensation rule \cite{Meyer1937}, which assumes that the activation entropy is proportional to the activation enthalpy, $\Delta S_{\rm kp} = \Delta H_{\rm kp}/T_{\rm m}$, with the parameter $T_{\rm m}$ having the dimension of a temperature and expected to be close to the melting temperature ($T_{\rm m}=3695$\,K for W \cite{Kittel1966}). We will first neglect entropic contributions by setting $T_{\rm m}=\infty$, before discussing their potential impact by choosing finite values for $T_{\rm m}$. Altogether, the activation free enthalpy for kink-pair nucleation during a uniaxial tensile test is given by \begin{equation} \Delta G_{\rm kp}(\sigma,\zeta,\chi,T) = \Delta E_{\rm kp} \left[ 1-\left( \dfrac{\sigma}{\sigma^0_{\rm Y}(\zeta,\chi)} \right)^{p(\zeta,\chi)} \right]^{q(\zeta,\chi)} \left( 1 - \dfrac{T}{T_{\rm m}} \right). \label{eq:deltaG} \end{equation} Knowing the dislocation velocity, one can deduce the rate of plastic deformation $\dot{\varepsilon}$ for a given density of mobile dislocations $\rho_{\rm D}$ from Orowan's law, \begin{equation} \dot{\varepsilon} = \rho_{\rm D} \, b \, v_{\rm gl}. \label{eq:Orowan} \end{equation} Using Eq. \ref{eq:velocity} for the dislocation velocity and Eq. \ref{eq:deltaG} for the activation free enthalpy, Orowan's law can be inverted to obtain an expression of the yield stress $\sigma_Y$ of a given slip system during a tensile test at constant temperature and strain rate: \begin{equation} \sigma_{\rm Y}(T,\zeta,\chi) = \sigma^0_{\rm Y}(\zeta,\chi) \ \left\{ 1 - \left[ \dfrac{k_{\rm B} T}{\Delta E_{\rm kp}} \dfrac{T_{\rm m}}{T-T_{\rm m}} \ln{ \left( \dfrac{\dot{\varepsilon}}{\rho_{\rm D} \, \nu_{\rm D} \, l_{\rm D} \, \lambda_{\rm P} } \right) } \right]^{1/q(\zeta,\chi)} \right\}^{1/p(\zeta,\chi)} .
\label{eq:yield_temperature} \end{equation} Neglecting the entropic contribution ($T_{\rm m}=\infty$), the critical temperature $T_{\rm c}$ at which the yield stress of \hkl<111>\hkl{110} slip systems vanishes is given by: \begin{equation} T_{\rm c}^0=\frac{\Delta E_{\rm kp} } { k_{\rm B} \ln{\left( {\rho_{\rm D} \, \nu_{\rm D} \, l_{\rm D} \, \lambda_{\rm P} }\,/\, {\dot{\varepsilon}}\right) }} . \label{eq:critical_temperature} \end{equation} We note that this critical temperature depends neither on the relative orientation of the tensile axis nor on the considered slip system. It defines the athermal limit above which the plastic deformation is no longer thermally activated. The variation of the \hkl<111>\hkl{110} yield stress with temperature in tungsten and the resulting critical temperatures are presented in Fig. \ref{fig:ThermalActivation}(c), when non-Schmid effects are neglected. As illustrated in this figure, the critical temperature is sensitive to the dislocation density $\rho_{\rm D}$, both directly through Orowan's law (Eq. \ref{eq:Orowan}) and indirectly through the dislocation length $l_{\rm D}=1/\sqrt{\rho_{\rm D}}$. It varies from about 560\,K to 800\,K when the dislocation density varies from very high ($10^{16}$ m$^{-2}$) to very low ($10^{7}$ m$^{-2}$) for a fixed strain rate $\dot{\varepsilon}=8.5\times10^{-4}$\,s$^{-1}$. When entropy contributions are included (finite $T_{\rm m}$), lower energy barriers for kink nucleation are obtained, allowing for easier dislocation glide at high temperatures and a lower critical temperature $T_{\rm c} = T_{\rm c}^0 \,/\, (1 + T_{\rm c}^0/T_{\rm m})$. For a dislocation density $\rho_{\rm D}=10^9$\,m$^{-2}$ and a strain rate $\dot{\varepsilon}=8.5\times10^{-4}$\,s$^{-1}$, the critical temperature goes from 732\,K without entropy to 611\,K when the Meyer-Neldel approximation is used with $T_{\rm m}=3695$\,K.
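The numbers quoted above can be reproduced from Eqs. \ref{eq:yield_temperature} and \ref{eq:critical_temperature} with a short script. The inputs $\Delta E_{\rm kp} = 1.7$\,eV and $\lambda_{\rm P} = 2.6$\,\AA{} below are assumptions of the order of the \textit{ab initio}{} results for W; with these values the sketch recovers $T_{\rm c} \approx 732$\,K ($T_{\rm m}=\infty$) and $\approx 611$\,K ($T_{\rm m}=3695$\,K) for $\rho_{\rm D}=10^{9}$\,m$^{-2}$.

```python
import math

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def _log_ratio(rho_D, eps_dot, nu_D, lam_P):
    """ln(rho_D nu_D l_D lambda_P / eps_dot), with l_D = 1/sqrt(rho_D)."""
    l_D = 1.0 / math.sqrt(rho_D)
    return math.log(rho_D * nu_D * l_D * lam_P / eps_dot)

def critical_temperature(dE_kp, rho_D, eps_dot,
                         nu_D=5.2e13, lam_P=2.6e-10, T_m=math.inf):
    """Eq. (critical_temperature), with the Meyer-Neldel correction
    T_c = T_c^0 / (1 + T_c^0/T_m)."""
    T_c0 = dE_kp / (K_B * _log_ratio(rho_D, eps_dot, nu_D, lam_P))
    return T_c0 / (1.0 + T_c0 / T_m)

def yield_stress(T, sigma_Y0, p, q, dE_kp, rho_D, eps_dot,
                 nu_D=5.2e13, lam_P=2.6e-10, T_m=math.inf):
    """Eq. (yield_temperature); returns 0 above the critical temperature."""
    x = K_B * T / dE_kp * _log_ratio(rho_D, eps_dot, nu_D, lam_P)
    if not math.isinf(T_m):
        # T_m/(T - T_m) ln(eps_dot/attempt) = T_m/(T_m - T) ln(attempt/eps_dot)
        x *= T_m / (T_m - T)
    if x >= 1.0:
        return 0.0
    return sigma_Y0 * (1.0 - x**(1.0 / q))**(1.0 / p)
```

The yield stress equals $\sigma^0_{\rm Y}$ at $T=0$ and vanishes at $T_{\rm c}$, and the critical temperature decreases with increasing dislocation density, as in Fig. \ref{fig:ThermalActivation}(c).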
This sensitivity to the parameter $T_{\rm m}$ of the Meyer-Neldel approximation shows the impact of vibrational contributions to the yield stress at high temperatures. It therefore appears necessary to account for the activation entropy associated with atomic vibrations in the kink-pair nucleation free enthalpy, not only at low temperatures, to resolve the discrepancy between experimental and theoretical yield stresses \cite{Proville2012,Barvinschi2014,Proville2018b} (\textit{cf}{} section \ref{sec:PeierlsStress}), but also at high temperatures. Moreover, anharmonicity may become important at high temperatures, requiring more precise approaches relying on thermodynamic integration \cite{Gilbert2013,Swinburne2018,Sato2021}. These approaches are still too computationally expensive for \textit{ab initio}{} calculations, particularly in the case of a kinked dislocation. Below we will neglect entropic contributions, thus using $T_{\rm m}=\infty$. \subsection{Temperature-dependence of the yield stress in tungsten} \begin{figure}[bht!] \begin{center} \includegraphics[width=1\linewidth]{figure18} \end{center} \caption{Yield stress of the primary \hkl<111>\hkl{110} slip systems predicted from Eq. \ref{eq:yield_temperature} as a function of temperature for a tensile axis along \hkl[001] (first row), \hkl[011] (second row), \hkl[-111] (third row), and \hkl[-149] (last row), according to Schmid's law (a) and including non-Schmid effects in tension (b) and compression (c). The dislocation density is $\rho_{\rm D}=10^{12}$\,m$^{-2}$, with $l_{\rm D}=1/\sqrt{\rho_{\rm D}}$, a strain rate $\dot{\varepsilon}=8.5 \times 10^{-4}$\,s$^{-1}$, and no entropy contribution ($T_{\rm m}=\infty$). } \label{fig:TemperatureOrientations} \end{figure} \begin{figure}[bth] \begin{center} \includegraphics[width=1\linewidth]{figure19} \end{center} \caption{Yield stress of the \hkl[111]\hkl(-101) slip system predicted from Eq.
\ref{eq:yield_temperature} at different temperatures as a function of the angle $\chi$ for $\zeta=45^\circ$: according to Schmid's law (a) and including non-Schmid effects in tension (b) and compression (c). The angle $\chi$ leading to a minimum yield stress is indicated by a square symbol. Parameters as in Fig. \ref{fig:TemperatureOrientations}. } \label{fig:ThetaLineTemperature} \end{figure} The model in Eq. \ref{eq:yield_temperature} is used in Fig. \ref{fig:TemperatureOrientations} to predict the temperature dependence of the yield stress of all slip systems for uniaxial tensile tests along the \hkl[001], \hkl[011] and \hkl[-111] corners of the standard stereographic triangle and along the more central \hkl[-149] orientation. For the corner orientations, the slip systems that are equivalent according to Schmid's law (Fig. \ref{fig:TemperatureOrientations}(a)) are split into two groups due to non-Schmid effects in tension (Fig. \ref{fig:TemperatureOrientations}(b)) and compression (Fig. \ref{fig:TemperatureOrientations}(c)), with the relative ease of activating one group reversed when the sign of the applied stress is reversed. A notable feature is that the deviations from Schmid's law, in terms of both the T/AT and tension/compression asymmetries, become less pronounced with increasing temperature. This is better visualized in Fig. \ref{fig:ThetaLineTemperature} where the yield stress of the \hkl[111]\hkl(-101) system as a function of the angle $\chi$ at $\zeta=45^\circ$ is plotted at different temperatures ranging from 0\,K to the critical athermal temperature. At low temperatures, the deviations from Schmid's law are strong, as reported at 0\,K in section \ref{sec:generalized_yield}. But with increasing temperature, the T/AT and tension/compression asymmetries become less pronounced and vanish close to the athermal temperature.
This recovery of Schmid's law at high temperature has been reported experimentally in BCC transition metals \cite{Liu1972,Nawaz1975}, and was also accounted for using a model 2D Peierls potential coupled with an LT model in the work of Edagawa \textit{et al.} \cite{Edagawa1997}. \begin{figure}[bth] \begin{center} \hspace*{2mm} \includegraphics[width=0.9\linewidth]{figure20} \end{center} \caption{Yield stress for \hkl[111]\hkl(-101) (orange) and \hkl[-111]\hkl(101) (purple) slip systems predicted from Eq. \ref{eq:yield_temperature} (left axis) and experimental data from Brunner and Glebovsky \cite{Brunner2000} (right axis). In both cases, the tensile axis is along $\hkl[-149]$ ($\zeta=50^\circ$ and $\chi=0^\circ$) with a strain rate $\dot{\varepsilon}=8.5 \times 10^{-4}$\,s$^{-1}$. The central bold lines correspond to the estimated experimental dislocation density $\rho_{\rm D}=5.5 \times 10^9$\,m$^{-2}$. } \label{fig:BrunnerTemperature} \end{figure} We finally compare the predicted temperature dependence of the yield stress with experimental data from Brunner and Glebovsky \cite{Brunner2000} in Fig. \ref{fig:BrunnerTemperature}, obtained in tension along an axis with $\zeta=50^\circ$ and $\chi=0^\circ$, close to $\hkl[-149]$. For this orientation, the generalized yield criterion in tension at 0\,K predicts that the \hkl[111]\hkl(-101) and \hkl[-111]\hkl(101) slip systems are the easiest to activate (see Fig. \ref{fig:ThetaLines}(b)). The variation of the yield stress for both slip systems is plotted as a function of temperature in Fig. \ref{fig:BrunnerTemperature} for different dislocation densities $\rho_{\rm D}$ ranging from $10^{7}$ to $10^{16}$\,m$^{-2}$. In the experiments, the dislocation density is unknown.
But in a later study on the temperature-dependent tensile properties of tungsten single crystals with a similar orientation close to \hkl[-149] ($\zeta=45.6^\circ$ and $\chi=0$) \cite{Brunner2010}, the authors reported a dislocation density of $5.5 \times 10^{9}$\,m$^{-2}$. Using this value as a reference, we highlight the corresponding theoretical prediction in Fig. \ref{fig:BrunnerTemperature}. Note that as before, experimental data are shown with a different scale. As expected, both theoretical and experimental yield stresses decrease with temperature and follow similar, slightly convex curves. Moreover, the predicted and experimental curves at the estimated experimental density are rather close, keeping in mind that the data are not shown with the same stress scale. Temperatures are, however, shown with the same scale, and we see that the predicted athermal temperature, 710\,K, is close to the experimental value, about 800\,K. This is particularly satisfactory keeping in mind that we have neglected entropic effects and that the theoretical prediction does not use any fitting parameter. This agreement is an indication that the kink-pair formation energy $\Delta E_{\rm kp}$, to which the critical temperature is directly proportional (Eq. \ref{eq:critical_temperature}), is correctly estimated by our modeling approach. \section{Conclusions} \textit{Ab initio}{} calculations reveal that many specific features of BCC metal plasticity can be rationalized by core properties of the $1/2\,\hkl<111>$ screw dislocation. The compact core of this dislocation results in elementary glide events in \hkl{110} planes, but, because of the lack of inversion symmetry of these \hkl{110} planes, the trajectory of the gliding dislocation deviates from a straight path between stable positions.
This leads to the observed T/AT asymmetry of the yield criterion: the shear stress necessary to activate plasticity is minimal when the MRSSP is tangent to the dislocation trajectory and not when it corresponds to the macroscopic \hkl{110} glide plane. \textit{Ab initio}{} calculations also show that dislocations have a non-zero relaxation volume which couples with non-glide components of the stress tensor. Variations of the relaxation volume along the dislocation glide path lead to variations of the energy barrier opposing dislocation glide, and thus of the yield stress. All these ingredients can then be incorporated into a line tension model to describe kink-pair nucleation and thus predict dislocation velocity as a function of temperature and mechanical loading, either analytically using simple expressions based on classical nucleation theory as in the present work, or using kinetic Monte Carlo simulations \cite{Stukowski2015}. Starting from \textit{ab initio}{} calculations, one thus obtains a full theoretical description of single crystal plastic yield below the critical athermal temperature where plasticity is controlled by the mobility of $1/2\,\hkl<111>$ screw dislocations. Mobility laws for dislocation glide derived from this \textit{ab initio}{} description can be directly implemented in dislocation dynamics simulations \cite{Po2016} or crystal plasticity models \cite{Cereceda2016}. Such a multiscale approach allows accounting for collective effects, with the dislocation mobility depending not only on the external mechanical loading but also on internal stresses \cite{Srivastava2020}. Detailed comparison of \textit{ab initio}{} predictions with experimental data shows that this modeling approach is still not fully quantitative. The most striking disagreement concerns the prediction of the Peierls stress at 0\,K, with the theoretical value being two or three times larger than in experiments across BCC metals.
To resolve this discrepancy, at least in part, it appears necessary to consider not only the energy barrier opposing dislocation glide but also the associated activation entropy arising from atomic vibrations. Atomic simulations relying on empirical interatomic potentials have shown that variations of the zero-point energy coming from the quantization of the vibrational modes are responsible for a lowering of the theoretical value of the Peierls stress at low temperatures \cite{Proville2012,Proville2018b}. Vibrations are also important at higher temperatures where they can significantly modify the Peierls potential \cite{Gilbert2013} and the activation energy barrier for kink-pair nucleation \cite{Swinburne2018}. Determination of these entropy contributions with \textit{ab initio}{} calculations is still out of reach. It therefore appears necessary to rely on empirical potentials to obtain such entropy contributions. Although simple EAM potentials have proven versatile enough to reproduce key properties of dislocations \cite{Mendelev2003,Marinica2013}, such central-force potentials do not contain the ingredients necessary to account for the angular dependence of interatomic bonding in BCC transition metals. More sophisticated descriptions of atomic interactions are therefore highly desirable to fill the gap between \textit{ab initio}{} calculations and simple empirical potentials: semi-empirical approaches relying on tight binding approximations, like bond order potentials \cite{Mrovec2011}, or fully phenomenological approaches relying on machine learning, such as Gaussian approximation potentials \cite{Maresca2018} or neural network atomic potentials \cite{Mori2020}, are two promising possibilities. A clear understanding of dislocation glide in \hkl{112} planes is also missing.
Experiments \cite{Argon1966,Caillard2018} have unambiguously shown that dislocations with a $1/2\,\hkl<111>$ Burgers vector can glide not only in \hkl{110} planes but also in \hkl{112} planes in some BCC metals, for instance in tungsten at low temperature. No elementary glide mechanism compatible with the compact core structure of the $1/2\,\hkl<111>$ screw dislocation predicted by \textit{ab initio}{} calculations has been proposed so far to rationalize glide in \hkl{112} planes. Magnetism is another challenge for the \textit{ab initio}{} modeling of dislocations. While ferromagnetic BCC metals like Fe do not introduce any additional technical difficulty compared to non-magnetic elements \cite{Ventelon2013,Proville2013,Dezerald2014,Dezerald2016}, the same is not true for paramagnetic and antiferromagnetic elements. Fe and Cr are two BCC metals which become paramagnetic above their Curie (1043\,K) and Néel (311\,K) temperatures, respectively. Modeling of dislocations in these paramagnetic states requires accounting for the disordering of the atomic magnetic moments and performing statistical averages on magnetic configurations. Such an \textit{ab initio}{} modeling approach in BCC Fe has shown that screw dislocations have similar properties in the ferromagnetic and paramagnetic phases \cite{CasillasTrujillo2020}. On the other hand, magnetic order in an antiferromagnetic phase, like BCC Cr at low temperature\footnote{The real magnetic state of Cr at low temperature corresponds to a spin density wave for which the antiferromagnetic phase is a good approximation.}, is not compatible with a $1/2\,\hkl<111>$ Burgers vector: dislocations with such a Burgers vector necessarily introduce a magnetic fault in the crystal. It has been proposed that these dislocations coexist pairwise to bound the magnetic fault, thus leading to super-dislocations with a \hkl<111> Burgers vector \cite{Bienvenu2020}.
The understanding of alloying effects on BCC plasticity also benefits from the development of \textit{ab initio}{} calculations. Modeling of dislocation interactions with solute atoms sheds new light on the mechanisms responsible for hardening or softening in dilute solid solutions \cite{Trinkle2005,Itakura2013,Tsuru2020} and in more concentrated solid solutions like high entropy alloys \cite{Yin2020}. In addition, some substitutional solute atoms have been shown to induce a change of the core structure of the screw dislocation through the variation of the electronic density \cite{Romaner2010,Li2012,Romaner2014,Samolyuk2013}, with the dislocation going from a symmetric compact to a degenerate polarized core. \textit{Ab initio}{} calculations have also evidenced core reconstructions of the screw dislocation induced by interstitial solutes, with H stabilizing the split configuration \cite{Grigorev2020} and larger interstitial solutes such as carbon stabilizing the hard core \cite{Ventelon2015,Luthi2017,Luthi2018,Luthi2019,Bakaev2019,Hachet2020}. Integrating the elementary interaction mechanisms revealed by these \textit{ab initio}{} calculations into higher-scale models makes it possible to tackle more complex phenomena, such as the Portevin--Le Chatelier effect \cite{Zhao2020} or the reappearance of a Peierls regime and dynamic strain ageing at temperatures where solute diffusion is activated \cite{Caillard2015,Caillard2016}. \subsubsection*{Acknowledgments} Antoine Kraych, Lisa Ventelon, and François Willaime are acknowledged for their contributions to the works presented here. Part of this work has been performed using HPC resources from GENCI-CINES and -TGCC (Grants 2020-096847 and -0910156). The authors also acknowledge PRACE for access to the Juwels system hosted by Jülich Supercomputing Centre, Germany (project DIMAB).
LD acknowledges support from the French State through the program “Investment in the future” operated by the National Research Agency (ANR) and referenced by ANR-11-LABX-0008-01 (LabEx DAMAS).
\section{Introduction} The study of interacting bosons under rotation is a fundamental and rich problem: for weak rotation and weak interactions, one finds a wide array of vortex phases, whereas at strong rotation and strong interactions, one obtains bosonic analogues of the fractional quantum Hall effect \cite{Cooper2008}. More recently, it has been realized that synthetic magnetic fields or spin-orbit coupling, generated by coupling atoms to Raman lasers, can mimic rotation, in that it can lead to ``kinetic frustration'' by flattening the single-particle band or introducing multiple degenerate minima in the band structure \cite{Spielman2011, Atala2014, OurNature, Ketterle2013, Bloch2013, Dalibard2011, Stuhl15, Fallani15}. The presence of a large single-particle degeneracy implies that interaction effects are crucial in determining the ground state (see Ref.~\cite{Zhai2012}, and references therein). The recent cooling of polar molecules \cite{Ni2008, Aikawa2010, Deiglmayr2008}, magnetic \cite{Lu2012, Pfau2007, Aikawa2012} and Rydberg atoms \cite{Saffman2010, Schausz2012} now offers the unique opportunity to explore the interplay between novel single-particle band structures and \textit{long range} interactions in systems that have no traditional solid state analogues. Here we study the simplest example of such a system, which can be readily realized in current experiments: a spinless dipolar Bose gas trapped on a two-leg ladder in a large magnetic field. Motivated by ongoing experiments on polar molecules, Rydberg and magnetic atoms, the rich physics of dipolar gases in low dimensional systems has recently been explored by several authors: in arrays of one-dimensional ($1$D) optical lattices, at low densities, strong density-density interactions give rise to ordered crystalline phases at rational filling fractions forming a devil's staircase \cite{Dalmonte2010, Parish2012, Burnell2009}.
By taking advantage of the low-lying rotational states of polar molecules, and the anisotropy of the dipolar interaction, spin Hamiltonians such as the $\text{XXZ}$ spin chain with direction-dependent couplings, or the bi-quadratic spin-$1$ Haldane Hamiltonian, can be realized in deep lattices, allowing the study of symmetry-protected topological phases on two-leg ladders \cite{Manmana2013, Liu2012}. In engineered lattice potentials with non-trivial band topology, dipolar interactions naturally give rise to lattice analogues of the fractional quantum Hall effect, namely fractional Chern insulators \cite{Yao2013}. In the continuum, spin-orbit coupled bosons with long range interactions possess ground states with novel topological defects or quasi-crystalline order \cite{Sarang2013, Wilson2013}. Our study complements these earlier works by focussing on the weak coupling limit of large onsite occupation, where the interplay between kinetic frustration and long range interactions on a two-leg ladder leads to superfluid phases with broken translational and reflection symmetries. Experimentalists at Munich recently engineered a system of bosons on a two-leg ladder by using Raman lasers to create a uniform magnetic flux $\phi$ per plaquette \cite{Atala2014}. They explored the non-interacting physics at a fixed flux $\phi = \pi/2$ and found two phases as a function of the rung-to-leg coupling strength: a saturated chiral current or Meissner phase at large coupling, where equal and opposite currents flow on each leg of the ladder, and a modulated density or vortex phase at small coupling, where the density is modulated along the legs of the ladder. These phases were first discussed theoretically by Orignac and Giamarchi \cite{Orignac2001} using bosonization, although their existence had already been predicted in the context of weakly coupled Josephson junction arrays \cite{Kardar1986}.
The strongly interacting limit of this problem has now been thoroughly explored, where the interplay between single-particle degeneracies and interactions leads to interesting physics such as Mott phases with staggered loop currents \cite{Dhar2012, Dhar2013, Petrescu2013, Piraud2015, Greschner2015, Keles2015, Tokuno2014}. Following the experiment of Atala \textit{et al.} \cite{Atala2014}, Wei and Mueller \cite{Wei2014} used a variational approach to explore the effects of weak, short-range repulsive interactions, finding an additional phase, dubbed the biased ladder phase, where the density is uniform but different on each leg of the ladder, thereby breaking global $Z_{2}$ reflection symmetry. Here we generalize the theory and results of Ref.~\cite{Wei2014} to long range dipolar interactions, finding a rich phase diagram as a function of the dipolar interaction strength, the synthetic magnetic field, and the relative tilt angle between the external field polarizing the dipoles and the plane of the ladder. Our main results are summarized below: \begin{enumerate} \item Generally, long range dipolar interactions either destroy or reduce the region of stability of the Meissner or saturated chiral current phase. \item Repulsive dipolar interactions produce an interleg charge density wave (CDW) phase where the densities along the left and right legs of the ladder modulate out of phase with one another. \item Next nearest neighbor interactions support a fully modulated biased ladder phase, where all the particles are located on one leg of the ladder. \item Arbitrarily weak attractive interactions along the rungs destroy the Meissner phase entirely, and lead to a cascade of first order transitions between distinct modulated density wave phases with different wave-vectors. \item Attractive nearest neighbor interactions along the ladder produce a regime of parameters where the biased ladder phase is the stable ground state at weak rung hopping.
\end{enumerate} \section{The Model} Our Hamiltonian for a two-dimensional spinless Bose gas on a two-leg ladder with lattice spacing $a$ takes the form (see Fig.~\ref{schematic}): \begin{equation}\label{ham0} {\cal{H}} = {\cal{H}}_{0} + {\cal{H}}_{\text{int}} \end{equation} where ${\cal{H}}_{0}$ reads: \begin{eqnarray}\label{spham} {\cal{H}}_{0} = -J\sum_{l}(a^{\dagger}_{l+1,L}a_{l, L} + a^{\dagger}_{l+1, R}a_{l, R} + \text{h.c.}) \\\nonumber -K\sum_{l}(a^{\dagger}_{l, L}a_{l, R}e^{-il\phi}+ \text{h.c.}) \end{eqnarray} where $a_{l,L}$ and $a_{l,R}$ are bosonic annihilation operators on the left (L) and right (R) legs of the ladder at position $l$. $J$ and $K$ denote the hopping matrix elements along the legs and rungs of the ladder respectively, and $\phi$ is the magnetic flux per plaquette \cite{Atala2014}. This single-particle model was introduced by Atala \textit{et al.} \cite{Atala2014}. \begin{figure} \begin{picture}(100, 50) \put(-88, -10){\includegraphics[scale=0.425]{schematic-1.eps}} \end{picture} \caption{\label{schematic} (Color Online) \textbf{Dipolar Bose gas on a two-leg ladder}. The arrows on the sites of the ladder correspond to dipoles, aligned in the direction of the external field indicated by $B$. The external field could be an electric field in the case of polar molecules or a magnetic field for magnetic atoms such as Dy and Er. The hopping along the $x$ direction is denoted by $J$. An artificial flux threads the system, which implies that hopping in the $y$ direction picks up a position-dependent phase $e^{il\phi}$. The local on-site interaction is denoted by $U$ and is always assumed to be repulsive, $U \geq 0$. Additionally, there is a nearest neighbor interaction along $x$ (ladder) and $y$ (rung) denoted $V_{x}$ and $V_{y}$ respectively, and a next nearest neighbor interaction $V_{\text{NNN}}$. The sign and magnitude of $V_{x}$, $V_{y}$ and $V_{\text{NNN}}$ can be tuned by tilting the external field relative to the plane of the ladder.
Higher order contributions to the dipolar interaction are significantly smaller in magnitude, and do not have much effect on the overall phase diagrams we present.} \end{figure} We further assume that there is an external (real) magnetic or electric field which polarizes the magnetic atoms/polar molecules, freezing out any internal degrees of freedom. As a result, the interaction Hamiltonian only includes density-density interactions and takes the form: \begin{eqnarray} {\cal{H}}_{\text{int}} = \frac{U}{2}\sum_{l, \mu}(a^{\dagger}_{l, \mu}a^{\dagger}_{l, \mu}a_{l, \mu}a_{l, \mu}) \\\nonumber + \frac{1}{2}\sum_{l, l^{'}, \mu, \mu^{'}}V_{l, l^{'}, \mu, \mu^{'}}a^{\dagger}_{l, \mu}a^{\dagger}_{l^{'}, \mu^{'}}a_{l^{'},\mu^{'}}a_{l, \mu} \end{eqnarray} where $\mu, \mu^{'} \in (L, R)$, and $U$ denotes the repulsive onsite interaction potential, which includes s-wave and dipolar contributions. The non-local part of the dipolar interaction, given by $V_{l, l^{'}, \mu, \mu^{'}} = V(1 - 3 \cos^{2}\theta_{l, l^{'}, \mu, \mu^{'}})|\bm{r}|^{-3}$, where $V = D^{2}/a^{3}$ ($D$ is the dipole moment) and $|\bm{r}|$ is measured in units of the lattice spacing $a$, couples both the left and right legs of the ladder. Here we define $\theta_{l, l^{'}, \mu, \mu^{'}}$ to be the angle between the external polarizing field and the vector $\textbf{r}$ made by the lattice sites $\{l, \mu\}$ and $\{l^{'}, \mu^{'}\}$. Although the non-local dipolar interaction couples sites arbitrarily far apart, we restrict our calculations to nearest neighbor (NN) and next-nearest neighbor (NNN) interactions as shown in Fig.~\ref{schematic}. Previous studies on dipolar bosons in optical lattices have shown that retaining higher order terms has little \textit{qualitative} effect on the overall phase diagram \cite{Danshita2009}, but modifies the precise location of the phase boundaries.
The anisotropic nature of the dipolar interaction means that the interactions between sites on the same leg ($\sim a^{\dagger}_{l, \mu}a^{\dagger}_{l+1, \mu}a_{l+1, \mu}a_{l, \mu}$, or the $x$-direction) and sites on opposite legs ($\sim a^{\dagger}_{l, L}a^{\dagger}_{l, R}a_{l, R}a_{l,L}$, or the $y$-direction), can in principle be different, and can be experimentally controlled by changing the alignment of the external field with respect to the plane of the ladder (see Fig.~\ref{schematic}). We therefore separately denote these interactions as $V_{x}$ and $V_{y}$, and denote the next-nearest neighbor interaction ($\sim a^{\dagger}_{l, L}a^{\dagger}_{l+1, R}a_{l+1, R}a_{l,L}$) as $V_{\text{NNN}}$. Generally, the on-site ($U$) and dipolar ($V$) interaction strengths are tunable by changing the transverse confinement \cite{Chinconfine}. The contact interaction is separately tunable using Feshbach resonances \cite{ChinRMP2010, Pfau2007}. We only consider repulsive on-site potentials in this work \cite{manpreet2014}. In experiments, the sign of the non-local part of the dipolar interaction $V$ is separately tunable from repulsive to partially attractive by tilting the external applied field at an arbitrary angle $(\theta, \chi)$ relative to the ladder plane. For a general polar angle $\theta$ and azimuthal angle $\chi$, the interaction potentials along the ladder and rung direction read: $V_{x} \propto V(1- 3\sin^{2}{\theta}\cos^{2}{\chi})$ and $V_{y} \propto V(1- 3\sin^{2}{\theta}\sin^{2}{\chi})$ \cite{Danshita2009}. A major advantage of this is that the physics of attractive ladders can be accessed, while ensuring overall mechanical stability from a repulsive local interaction \cite{manpreet2014}. Completely attractive dipoles can also be obtained using a time-dependent external field \cite{Pfau2010}. 
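The angular dependence above can be evaluated for any pair of sites with a small helper (distances in units of $a$, $V=1$; the coordinate convention, $x$ along the legs, $y$ along the rungs, $z$ normal to the ladder plane, is an assumption for illustration):

```python
import numpy as np

def dipolar_coupling(r_vec, field_dir, V=1.0):
    """Dipole-dipole coupling V (1 - 3 cos^2 theta) / |r|^3 between two
    sites separated by r_vec (in units of the lattice spacing a), for
    dipoles polarized along field_dir; V = D^2/a^3."""
    r_vec = np.asarray(r_vec, dtype=float)
    field = np.asarray(field_dir, dtype=float)
    r = np.linalg.norm(r_vec)
    cos_t = np.dot(r_vec, field) / (r * np.linalg.norm(field))
    return V * (1.0 - 3.0 * cos_t**2) / r**3
```

For a field normal to the ladder this gives $V_{x} = V_{y} = V$ and $V_{\text{NNN}} = V/(2\sqrt{2})$, while a field along the rung direction gives $V_{y} = -2V$ and $V_{\text{NNN}} = -V/(4\sqrt{2})$.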
In addition to the case where the external field is polarized perpendicular to the ladder, where all the interactions are repulsive: $V_{x} = V_{y} = V$, $V_{\text{NNN}} = V/(2\sqrt{2})$, we separately present the general phase diagram for attractive rung ($V_{y} < 0$) and ladder interactions ($V_{x} < 0$) respectively. For convenience, we focus on two particular tilt configurations: (i) external field along the rung ($y$) direction. Here $V_{y} = -2V$, $V_{x} = V$, $V_{\text{NNN}} = -V/(4\sqrt{2})$, and (ii) external field along the ladder direction. Here $V_{x} = -2V$, $V_{y} = V$ and $V_{\text{NNN}} = -V/(4\sqrt{2})$. Our qualitative results, however, are general and robust to tilting the external field away from these angles, as long as the sign of the interaction along the ladder or rung direction does not change. The single-particle Hamiltonian in momentum space reads ${\cal{H}}(k) = -2 J \cos k\cos \frac{\phi}{2} + 2 J \sin k\sin \frac{\phi}{2} \sigma_{z} - K \sigma_{x}$, and is readily diagonalized to yield two bands, whose energies are $E_{\pm} = -2 J \cos k\cos(\phi/2) \pm \sqrt{4 J^2 \sin^{2} k \sin^{2} (\phi/2) + K^{2}}$ \cite{Wei2014, Carr2006, Roux2007}. The lowest band has two minima at $k = \pm k_{0}$ for $K < 2 J \tan(\phi/2)\sin(\phi/2)$, and a single minimum at $k = 0$ for $K$ greater than this value. \subsection{Variational Approach I} In this work, we consider two complementary variational approaches, valid for weak interactions, to study the ground state phase diagram of Eq.~(\ref{ham0}). The first approach follows that of Wei and Mueller \cite{Wei2014}, who considered the variational ground state wave-function for $N$ particles, restricted to the lowest band: \begin{equation}\label{gs} |G_{k_{0}}\rangle = \frac{1}{\sqrt{N!}}(\cos \gamma \beta^{\dagger}_{k_{0}}+\sin \gamma \beta^{\dagger}_{-k_{0}})^{N}|0\rangle \end{equation} where $\beta_{\pm k_{0}}$ are the annihilation operators for bosons at $k = \pm k_{0}$.
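The band-structure statements above (the double-minimum condition and the location of $k_{0}$) are easy to check numerically; a minimal grid-based sketch, illustrative only:

```python
import numpy as np

def lower_band(k, J, K, phi):
    """Lower band E_-(k) of the non-interacting flux ladder."""
    return (-2.0 * J * np.cos(k) * np.cos(phi / 2)
            - np.sqrt(4.0 * J**2 * np.sin(k)**2 * np.sin(phi / 2)**2 + K**2))

def double_minimum(J, K, phi):
    """True when the lower band has two minima at k = +/- k0."""
    return K < 2.0 * J * np.tan(phi / 2) * np.sin(phi / 2)

def k0_numeric(J, K, phi, n=200001):
    """Position |k0| of the band minimum, from a dense k-grid."""
    k = np.linspace(-np.pi, np.pi, n)
    return abs(k[np.argmin(lower_band(k, J, K, phi))])
```

For $\phi = \pi/2$ and $J = 1$ the threshold is $K = \sqrt{2}$; at $K = 0.5$ the minima sit at $k_{0} = \arccos(3/4) \approx 0.72$.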
The original boson operators can be expressed in terms of $\beta_{k}$ as $a_{kL} = -\sin{\frac{\theta_{k}}{2}}\beta_{k}$ and $a_{kR} = \cos{\frac{\theta_{k}}{2}}\beta_{k}$, where the angle $\theta_{k}$ is defined by $\tan{\theta_{k}} = \frac{-K/J}{2\sin{k}\sin{\phi/2}}$. Here $|0\rangle$ denotes the vacuum state; $0 < \gamma < \pi/2$ for $k_{0} > 0$, while $\gamma = 0$ for $k_{0} = 0$. In the absence of interactions, this is the ground state for any $\gamma$, but arbitrarily weak interactions will break this infinite degeneracy. \subsection{Observables} The local density on each leg is defined as $n_{l,\mu} = \langle G_{k_{0}}|a^{\dagger}_{l,\mu}a_{l,\mu}|G_{k_{0}}\rangle$. We define the average density as $n = N/\Omega$, where $\Omega$ is the volume of the system. The expressions for the average density on each leg and its modulations are derived in Ref.~\cite{Wei2014}, but reproduced here for completeness: \begin{eqnarray}\label{densformulas} n_{l,L} = \cos^{2}{\gamma}\sin^{2}{\theta_{k_{0}}} + \sin^{2}{\gamma}\cos^{2}{\theta_{k_{0}}} + \\\nonumber \frac{1}{2}\sin{2\gamma}\sin{\theta_{k_{0}}}\cos(2k_{0}l) \\\nonumber n_{l,R} = \sin^{2}{\gamma}\sin^{2}{\theta_{k_{0}}} + \cos^{2}{\gamma}\cos^{2}{\theta_{k_{0}}} +\\\nonumber \frac{1}{2}\sin{2\gamma}\sin{\theta_{k_{0}}}\cos(2k_{0}l). \end{eqnarray} A key limitation of variational approach I is that the modulations on the $L$ and $R$ legs of the ladder are identical. While this is a valid approximation for the short-range interactions considered in Ref.~\cite{Wei2014}, where the interaction does not couple the two legs of the ladder, this is no longer the case for dipolar interactions. Working in momentum space, we define the current $j_{\mu} = \langle G_{k_{0}}| \sum_{k} a^{\dagger}_{k, \mu}\frac{\partial {\cal{H}}}{\partial k} a_{k, \mu} |G_{k_{0}}\rangle$. The total current $j_{\text{net}} = \sum_{\mu}j_{\mu}$ is identically zero in equilibrium, but the chiral current $j_{c} = j_{L} - j_{R}$ can be finite.
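The limitation noted above is visible directly in the density expressions: the modulation term is common to both legs. A short sketch with illustrative values of $\gamma$, $\theta_{k_{0}}$ and $k_{0}$:

```python
import numpy as np

def leg_densities(l, gamma, theta, k0):
    """Per-particle densities n_{l,L}, n_{l,R} from the variational
    density expressions above (theta stands for theta_{k0})."""
    # Modulation term, identical on both legs by construction.
    mod = 0.5 * np.sin(2 * gamma) * np.sin(theta) * np.cos(2 * k0 * l)
    n_L = np.cos(gamma)**2 * np.sin(theta)**2 + np.sin(gamma)**2 * np.cos(theta)**2 + mod
    n_R = np.sin(gamma)**2 * np.sin(theta)**2 + np.cos(gamma)**2 * np.cos(theta)**2 + mod
    return n_L, n_R
```

For $\gamma = 0$ the densities are uniform (and in general unequal on the two legs), while for $\gamma \neq 0$ the two legs carry exactly the same modulation.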
In order to visualize phases, it is also helpful to define the current in real space along the legs and rungs of the ladder as \cite{Piraud2015}: \begin{eqnarray}\label{cureqs} j^{\parallel}_{l, \mu} = i J(a^{\dagger}_{l+1, \mu}a_{l, \mu} -a^{\dagger}_{l, \mu}a_{l+1, \mu})\\\nonumber j^{\perp}_{l} = i K(e^{-il\phi}a^{\dagger}_{l, L}a_{l, R} -e^{il\phi}a^{\dagger}_{l, R}a_{l, L}) \end{eqnarray} This is particularly useful for comparison with real space numerical methods such as DMRG. Explicit expressions for the rung and ladder currents in real space can be calculated using the variational wave-function Eq.~(\ref{gs}), and are provided in the Appendix. The net and chiral currents are defined as $j_{\text{net}/c} = \sum_{l}(j^{\parallel}_{l, L} \pm j^{\parallel}_{l, R})$ respectively. Using variational approach I for short range interactions ($V = 0$), Wei and Mueller \cite{Wei2014} found three phases: \begin{enumerate} \item Meissner (or saturated chiral current) phase. Here the rung current vanishes, and equal and opposite currents flow along the legs of the ladder. The total density is uniform. \item A biased ladder (BL) phase, where the density is uniform but different on the left and right legs, breaking the global $Z_{2}$ reflection symmetry, in addition to the global $U(1)$ symmetry of the condensate. The rung current is zero. \item A modulated density (or vortex) wave phase, where the density is identical on the left and right legs but oscillates with a period incommensurate with the underlying lattice. This phase breaks an additional $U(1)$ symmetry, associated with translations of the density wave. \end{enumerate} The local currents on each plaquette in the modulated density phase are analogous to vortices in a type-II superconductor. The terminology of vortex and Meissner phases was first introduced by Orignac and Giamarchi \cite{Orignac2001}, and Atala \textit{et al.} \cite{Atala2014} interpreted their data using this language.
Here we will use the terminology saturated chiral current (or Meissner) and modulated density wave (or vortex) phase interchangeably. We caution, however, that for strong interactions, vortex states may not always be accompanied by density modulations; indeed, homogeneous insulating vortex phases have been found in DMRG studies \cite{Piraud2015}. As we explicitly derive in the Appendix, in all three phases, the net ladder current $j_{\text{net}}$ remains zero on every plaquette. We note that in principle, one can construct general non-local density and bond operators such as $\langle a_{i\mu}a_{j\mu^{'}}\rangle$ for sites $i, j$ and $\mu, \mu^{'} \in \{L, R\}$ \cite{Roux2007, Carr2006}, which can take on nonzero values, especially for systems with long range interactions. In the weakly interacting limit however, whenever $\langle\beta_{k_{0}}\rangle$ becomes finite, these other order parameters automatically take on non-zero expectation values. Therefore it suffices to only consider the density here. How these non-local order parameters vanish near the Mott transition, however, is a separate question. The \textit{ansatz} of Eq.~(\ref{gs}) can be readily employed to calculate the total energy of the dipolar gas $E = E_{-}(k_{0}) + \langle G_{k_{0}}|{\cal{H}}_{\text{int}}|G_{k_{0}}\rangle$, where we treat $\gamma$ and $k_{0}$ as variational parameters. The saturated chiral current (biased ladder) phase corresponds to $\gamma = 0$ with $k_{0} = 0$ ($k_{0} \neq 0$), whereas the vortex or modulated density phase corresponds to $\gamma \neq 0$, $k_{0} \neq 0$. A new feature of dipolar systems is that $\gamma$ can be pinned to a finite value, even as $k_{0} \rightarrow 0$, which implies a state with long-wavelength density modulations, yet a fully saturated chiral current. This state breaks the same symmetries as the modulated density-wave phase.
The variational form for the interaction energy is: \begin{widetext} \begin{eqnarray}\label{inte} E(k_{0}, \gamma) = E_{-}(k_{0}) +\frac{(U+V_{x})n}{2}(1 - \frac{1}{2}\sin^{2}{\theta_{k_{0}}} -\frac{1}{2}\sin^{2}{2\gamma} + \frac{1}{2}\sin^{2}{2\gamma}\sin^{2}{\theta_{k_{0}}}) + \frac{(U + V_{x}\cos(2k_{0}))n}{8} \sin^{2}{2\gamma}\sin^{2}{\theta_{k_{0}}} \\\nonumber + \frac{(V_{y} + 2V_{\text{NNN}})n}{8}(\sin^{2}{\theta_{k_{0}}} + \sin^{2}{2\gamma} - \sin^{2}{\theta_{k_{0}}}\sin^{2}{2\gamma}) + \frac{(V_{y} + 2V_{\text{NNN}}\cos{(2k_{0})})n}{16}\sin^{2}{2\gamma}\sin^{2}{\theta_{k_{0}}} \end{eqnarray} \end{widetext} One readily checks that in the absence of non-local dipolar interactions, $V_{x} = V_{y} = V_{\text{NNN}}= 0$, the interaction energy reduces to the form obtained in Ref.~\cite{Wei2014}. \subsection{Variational approach II} It is useful to note that the three phases found in Ref.~\cite{Wei2014} are directly analogous to the corresponding phases of a weakly interacting spin-orbit coupled Bose gas in the continuum \cite{Paredes2014}, which has been extensively studied theoretically and experimentally \cite{Li2012, Spielman2011, Ji2014, Zhang2011, Zhai2012, OurNature}. There, the one-dimensional spin-orbit coupling plays the role of the artificial magnetic field. The modulated density wave phase in the ladder problem corresponds to a stripe phase of the spin-orbit coupled Bose gas, where both single particle minima are occupied; the biased ladder corresponds to the situation where only one minimum is occupied, and the Meissner (saturated chiral current) phase corresponds to a non-magnetic phase, where $k_{0} = 0$. Motivated by this, we consider a second variational \textit{ansatz}, first introduced by Li \textit{et al.} \cite{Li2012} to study the spin-orbit coupled gas, but adapted to the present problem. 
This has the key advantage that it allows us to readily generalize to a situation where the modulations on the left and right legs of the ladder are unequal. We replace the boson operators on the left and right legs of the ladder by classical fields: \begin{eqnarray}\label{ansatz} a_{l, L} =e^{-\frac{i\phi l}{2}}\sqrt{\frac{N}{\Omega}}\Big(C_{1L}\sin{\frac{\theta_{k_{0}}}{2}}e^{ik_{0}l} + C_{2L}\cos{\frac{\theta_{k_{0}}}{2}}e^{-i k_{0}l}\Big)\hspace{3.5mm}\\\nonumber a_{l, R} = e^{\frac{i\phi l}{2}}\sqrt{\frac{N}{\Omega}}\Big(C_{1R}\cos{\frac{\theta_{k_{0}}}{2}}e^{ik_{0}l} + C_{2R}\sin{\frac{\theta_{k_{0}}}{2}}e^{-i k_{0}l}\Big) \hspace{5mm} \end{eqnarray} where $C_{1\mu},C_{2\mu}$ are complex numbers which satisfy the constraint: $\sin^{2}{\frac{\theta_{k_{0}}}{2}}(|C_{1L}|^{2} +|C_{2R}|^{2}) + \cos^{2}{\frac{\theta_{k_{0}}}{2}}(|C_{2L}|^{2} +|C_{1R}|^{2}) = 1$, from number conservation. Note that the variational approach of Refs. \cite{Wei2014, Li2012} corresponds to the limit where $C_{1L} = C_{1R}$ and $C_{2L} = C_{2R}$. This restricted \textit{ansatz} yields a density wave texture, but the spin density remains homogeneous. 
However, in the presence of rung interactions, our more general \textit{ansatz} is naturally able to capture states with spin and density wave modulations, which are given by $n_{l} = n_{l, L} + n_{l, R}$ and $S_{l} = n_{l, R} - n_{l, L}$, where \begin{eqnarray}\label{densspinmod} n_{l, L} = |C_{1L}|^{2}\sin^{2}{\frac{\theta_{k_{0}}}{2}} + |C_{2L}|^{2}\cos^{2}{\frac{\theta_{k_{0}}}{2}} + \\\nonumber \frac{1}{2}(C_{1L}C^{*}_{2L}e^{2ik_{0}l}\sin{\theta_{k_{0}}} + \text{c.c.})\\\nonumber n_{l, R} = |C_{2R}|^{2}\sin^{2}{\frac{\theta_{k_{0}}}{2}} + |C_{1R}|^{2}\cos^{2}{\frac{\theta_{k_{0}}}{2}} + \\\nonumber \frac{1}{2}(C_{1R}C^{*}_{2R}e^{2ik_{0}l}\sin{\theta_{k_{0}}} + \text{c.c.}) \end{eqnarray} Inserting Eq.~(\ref{ansatz}) into the Hamiltonian, the expression for the total energy reads: \begin{widetext} \begin{eqnarray}\label{tote} E_{\text{MF}} = E_{-}(k_{0}) + \frac{(U+V_{x})n}{2}\Big((|C_{1L}|^{4} + |C_{2R}|^{4})\sin^{4}{\frac{\theta_{k_{0}}}{2}}+(|C_{1R}|^{4} + |C_{2L}|^{4})\cos^{4}{\frac{\theta_{k_{0}}}{2}}\Big)+ \\\nonumber \frac{(U+V_{x}\cos^{2}{k_{0}})n}{2}\sin^{2}{\theta_{k_{0}}}(|C_{1L}|^{2}|C_{2L}|^{2} +|C_{1R}|^{2}|C_{2R}|^{2}) + \frac{V_{y}n}{2}\Big(|C_{1L}|^{2}|C_{2R}|^{2}\sin^{4}{\frac{\theta_{k_{0}}}{2}} +|C_{2L}|^{2}|C_{1R}|^{2}\cos^{4}{\frac{\theta_{k_{0}}}{2}} + \\\nonumber \frac{1}{4}\sin^{2}{\theta_{k_{0}}}(|C_{1L}|^{2}|C_{1R}|^{2} + |C_{2L}|^{2}|C_{2R}|^{2} + C^{*}_{1L}C^{*}_{2R}C_{2L}C_{1R} + \text{c.c.})\Big) + V_{\text{NNN}}n\Big(|C_{1R}|^{2}|C_{2L}|^{2}\cos^{4}{\frac{\theta_{k_{0}}}{2}} + \\\nonumber |C_{1L}|^{2}|C_{2R}|^{2}\sin^{4}{\frac{\theta_{k_{0}}}{2}} + \frac{1}{4}\sin^{2}{\theta_{k_{0}}}(|C_{1L}|^{2}|C_{1R}|^{2} +|C_{2L}|^{2}|C_{2R}|^{2} + C^{*}_{1L}C^{*}_{2R}C_{2L}C_{1R}\cos{2k_{0}} + \text{c.c.})\Big) \end{eqnarray} \end{widetext} Note that unlike the total energy of variational approach I, $E_{\text{MF}}$ is no longer symmetric under $k_{0} \rightarrow -k_{0}$.
This is because under $k_{0} \rightarrow -k_{0}$, $\cos\frac{\theta_{-k_{0}}}{2} \rightarrow \sin\frac{\theta_{k_{0}}}{2}$. Therefore, for generic values of the coefficients $C_{1\mu}$ and $C_{2\mu}$, $E_{\text{MF}}(k_{0}) \neq E_{\text{MF}}(-k_{0})$. One way to impose this symmetry is by setting $C_{1L} = C_{1R}$ and $C_{2L} = C_{2R}$ in the variational \textit{ansatz}. Here we look for more general solutions by numerically minimizing the variational energy $E_{\text{MF}}$ for arbitrary complex $C_{1\mu}$ and $C_{2\mu}$, subject to the constraints of particle number conservation and $E_{\text{MF}}(k_{0}) = E_{\text{MF}}(-k_{0})$. In what follows, we use both variational approaches to obtain the ground state energy. For repulsive interactions along the rungs, the interleg charge density wave solution found using variational approach II always has lower energy than the modulated density wave phase of variational approach I. All other phases we find are captured by both \textit{ansatzes}. Moreover, both approaches yield identical phase boundaries. Before turning to the results, we comment on the validity of variational, mean-field approaches in $1$D, where fluctuations are important and destroy long range order. The crucial parameter controlling the validity of our approximations is the ratio of the interaction to the kinetic energy, $\zeta = E_{\text{int}}/E_{\text{kin}}$. In $1$D, the interaction energy scales as $E_{\text{int}} \sim Un$, while the kinetic energy scales as $E_{\text{kin}} \sim \hbar^{2}/2md^{2}$, where $d \propto 1/n$ is the mean interparticle spacing. The weakly interacting, mean-field regime occurs when $\zeta \propto U/n \ll 1$ \cite{Gangardt2003}. Thus our approximation works best at high densities, which corresponds to having a large number of bosons per site, precisely the regime we consider here.
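To make concrete how the generalized \textit{ansatz} distinguishes the two legs, consider the hypothetical coefficient choice below, which weights both minima equally but with a relative sign between the legs; evaluating Eqs.~(\ref{densspinmod}) then yields out-of-phase modulations on the two legs with a uniform total density, i.e., an interleg CDW texture (an illustrative Python sketch, not a minimizer of $E_{\text{MF}}$; densities are in units of $n$ and the coefficient values are ours, chosen only to satisfy the normalization constraint):

```python
import numpy as np

def leg_densities(C1L, C2L, C1R, C2R, theta, k0, sites):
    """Real-space leg densities n_{l,L}, n_{l,R} from the generalized ansatz,
    in units of the average density n."""
    l = np.arange(sites)
    s2, c2 = np.sin(theta / 2)**2, np.cos(theta / 2)**2
    mod = np.exp(2j * k0 * l) * np.sin(theta)  # e^{2 i k0 l} sin(theta)
    nL = abs(C1L)**2 * s2 + abs(C2L)**2 * c2 + (C1L * np.conj(C2L) * mod).real
    nR = abs(C2R)**2 * s2 + abs(C1R)**2 * c2 + (C1R * np.conj(C2R) * mod).real
    return nL, nR

# hypothetical coefficients: equal weight in both minima, relative sign on R leg
c = 1 / np.sqrt(2)
nL, nR = leg_densities(c, c, c, -c, theta=np.pi / 2, k0=np.pi / 6, sites=24)
```

With this choice the modulation terms on the two legs cancel in the sum, so $n_{l} = n_{l,L} + n_{l,R}$ is uniform while $S_{l}$ oscillates, which is precisely the texture the restricted \textit{ansatz} ($C_{1L} = C_{1R}$, $C_{2L} = C_{2R}$) cannot produce.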
\section{Purely Repulsive Dipoles} \subsection{Qualitative Features} We begin by examining the phase diagram of the two-leg ladder for purely repulsive dipoles. Before turning to the numerical results, we discuss the qualitative physics we expect from long range interactions. We first consider the role of the ladder and rung interactions separately, and then present the full phase diagram. \begin{figure*} \begin{picture}(100, 150) \put(120, 95){\includegraphics[scale=0.48]{chiralcurrentplot_d.eps}} \put(135, -10){\includegraphics[scale=0.45]{chiralcurrentswitch.eps}} \put(-25, 70){\includegraphics[scale=0.4]{vnncdw.eps}} \put(-25, -10){\includegraphics[scale=0.4]{interlegcdwdens.eps}} \put(-210, -10){\includegraphics[scale=0.42]{interlegcdwpd.eps}} \put(-54, 15){\includegraphics[scale=0.45]{legend.eps}} \end{picture} \caption{\label{interlegcdw} (Color Online) \textbf{Repulsive Dipoles:} (a) Global phase diagram showing the density difference between left and right legs of the ladder as a function of $V/J$ and $K/J$ at $U = 0$ and $\phi = \pi/2$. Dipolar interactions push the Meissner phase to stronger rung-ladder coupling and give rise to an interleg CDW phase, where the relative densities on each leg of the ladder modulate out of phase with one another in real space as shown in (c). At intermediate rung hopping and weak dipolar interactions, a biased ladder (BL) phase is present. (b) Real space density profiles in the left (solid), right (dashed) and total density (dotted) for large next nearest neighbor interactions. All the particles reside on only one leg of the ladder, producing a fully modulated biased ladder phase. (d) Real space current profile at $\phi = 0.9\pi$, showing switching of the sign of the chiral current. Dhar \textit{et al.} \cite{Dhar2012} refer to this phase as a chiral superfluid. (e) Chiral current at $\phi = 0.9\pi$, $U=0$, $V/J = 1$ and $K/J = 1.5$. 
}\end{figure*} The nearest neighbor interaction ($V_{x}n_{l, \mu}n_{l+1, \mu}$) along the ladder can be eliminated by placing particles on either the even or odd sites of the lattice, which corresponds to a modulated density wave phase. By contrast, it penalizes states with homogeneous density profiles on the legs of the ladder, namely the Meissner and biased ladder phases. We thus expect the dipolar ladder interaction to suppress these phases at large rung-to-ladder coupling strength ($K/J$). To see this, note that in Eq.~(\ref{inte}), this interaction contributes to the on-site interaction, and also yields a momentum dependent term proportional to $\cos(2k_{0})$, which can compete with the local interaction $U$. In the following, we set $V_{\text{NNN}}$ and $V_{y}$ to zero in Eq.~(\ref{inte}) for simplicity. Although this choice is somewhat artificial, it is instructive in developing a systematic understanding of the effect of long range interactions. For purely contact interactions, when $K > K_{c}$, the single particle minimum occurs at $k=0$, and the energy of the Meissner state ($k_{0} = \gamma = 0$) is $E_{-}(0) + U/4$, which is lower than $E_{-}(k_{0}) + U/4 + (U/8)\sin^{2}{\theta_{k_{0}}}$, the energy of the modulated density state at $k = k_{0}$ and $\gamma = \pi/4$. Thus the Meissner phase wins at large $K$. By contrast, for nearest neighbor dipolar interactions along $x$, the energy of the modulated density state reads $E_{-}(k_{0}) + V_{x}/4 + (V_{x}/8)\cos(2k_{0})\sin^{2}{\theta_{k_{0}}}$, which can be made lower than the Meissner phase energy ($E_{-}(0) + V_{x}/4$) by choosing $\pi/4 < k_{0} < \pi/2$, provided $V_{x}$ is sufficiently large. Therefore, for \textit{any} value of $K$, however large, there is always a transition out of the Meissner phase into a modulated density wave phase, provided $V_{x}$ is made sufficiently large.
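The competition sketched above can be checked by brute-force minimization of the variational energy, Eq.~(\ref{inte}), over $(k_{0}, \gamma)$; below is a minimal Python sketch (we set $J = n = 1$ and $\phi = \pi/2$, and the function names are ours, introduced for illustration):

```python
import numpy as np

J, PHI = 1.0, np.pi / 2

def e_lower(k, K):
    """Lower single-particle band E_-(k)."""
    return (-2 * J * np.cos(k) * np.cos(PHI / 2)
            - np.sqrt(4 * J**2 * np.sin(k)**2 * np.sin(PHI / 2)**2 + K**2))

def e_var(k0, gamma, K, U=0.0, Vx=0.0, Vy=0.0, Vnnn=0.0):
    """Variational energy per particle, Eq. (inte), with n = 1."""
    s2 = K**2 / (K**2 + 4 * J**2 * np.sin(k0)**2 * np.sin(PHI / 2)**2)  # sin^2(theta_k0)
    g2 = np.sin(2 * gamma)**2
    c2k = np.cos(2 * k0)
    return (e_lower(k0, K)
            + (U + Vx) / 2 * (1 - s2 / 2 - g2 / 2 + g2 * s2 / 2)
            + (U + Vx * c2k) / 8 * g2 * s2
            + (Vy + 2 * Vnnn) / 8 * (s2 + g2 - s2 * g2)
            + (Vy + 2 * Vnnn * c2k) / 16 * g2 * s2)

def minimize(K, **coup):
    """Brute-force grid minimization over (k0, gamma)."""
    k0 = np.linspace(0, np.pi / 2, 301)[:, None]
    gamma = np.linspace(0, np.pi / 4, 101)[None, :]
    E = e_var(k0, gamma, K, **coup)
    i, j = np.unravel_index(np.argmin(E), E.shape)
    return k0[i, 0], gamma[0, j]
```

Consistent with the argument above, this grid search returns the Meissner solution ($k_{0} = \gamma = 0$) for contact interactions at large $K$, but a modulated density wave ($\gamma = \pi/4$, $\pi/4 < k_{0} < \pi/2$) at $K/J = 1.5$ once $V_{x}$ exceeds roughly the quoted scale of several $J$.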
For typical parameters $K/J \sim 1.5$, the Meissner phase is fully destroyed in favor of a modulated density wave phase for $V_{x}/J \sim 7$. We now consider the effect of a repulsive rung interaction on the phase diagram, which will naturally be present whenever the external field is polarized perpendicular to the plane of the ladder. While this term also favors non-zero $k_{0}$ and $\gamma = \pi/4$, namely a modulated density phase, repulsive rung interactions disfavor identical density modulations on the left and right legs. Indeed, we would expect that strong rung repulsion would lead to ``phase separation'', where the density modulations on the left and right legs are \textit{out-of-phase} with one another. We term such a state an \textit{interleg charge density wave} (CDW). This state therefore breaks local $Z_{2}$ reflection symmetry, even though globally, $Z_{2}$ reflection symmetry is unbroken, as the densities oscillate about the same average value. In the absence of gauge fields, such a state would be the ladder analog of a checkerboard solid phase of dipolar bosons in optical lattices \cite{Danshita2009, Goral2002}. However, our state additionally has non-zero rung and ladder currents arising from the synthetic flux threading the ladder. We remark that the interleg CDW phase we find here is the weak coupling analog of the ``SDW'' phase found by Petrescu and Le Hur \cite{Petrescu2013} in the strong coupling limit of our model (absent next nearest neighbor interactions) at unit filling fraction. Furthermore, unlike the checkerboard phase of dipolar lattice bosons or the SDW phase, the interleg CDW phase we find is generally incommensurate with the underlying lattice, as the wave-vector of the density wave oscillations is set by the minima in the single particle dispersion.
\subsection{Interleg CDW} The global phase diagram found by numerically minimizing the total energy $E_{\text{MF}}$ (Eq.~10) for $V_{x} = V_{y} = V$, $U = V_{\text{NNN}} = 0$, and $\phi = \pi/2$, is shown in Fig.~\ref{interlegcdw}(a). Indeed, for repulsive dipolar interactions, we find that a large portion of the phase diagram is occupied by the interleg charge density wave phase. Using Eqs.~(\ref{densspinmod}), we plot in Fig.~\ref{interlegcdw}(c) the densities on the left (solid) and right (dashed) legs of the ladder, which oscillate out of phase with one another. As expected from our discussion above, the Meissner phase indeed gets pushed to larger $K/J$ with increasing dipolar interactions. We have checked that variational approach I yields the same qualitative phase diagram and identical phase boundaries, but that approach cannot distinguish between a modulated density wave phase and an interleg CDW phase. The latter always has lower energy for repulsive rung interactions. For even larger $V$ (not shown), the Meissner phase disappears completely, as we argued in our qualitative discussion above. The transition from the interleg CDW to the Meissner phase is always first order for repulsive dipoles. For intermediate $K$ and weak dipolar interactions, there is a biased ladder phase, which spontaneously breaks global $Z_{2}$ reflection symmetry, as the average densities on the left and right legs are no longer equal. For very weak dipolar interactions, our numerics find a \textit{modulated} biased ladder phase, where the densities on each leg modulate in real space but about different average values. However, if such a phase exists, we believe that it should only be present in a small window of the phase diagram. Inclusion of the next nearest neighbor term $V_{\text{NNN}}$ has no qualitative effect on the phase diagram presented above, as the next nearest neighbor interactions are significantly weaker than the ladder and rung interactions.
Quantitatively, however, next nearest neighbor interactions increase the window of stability of the biased ladder phase. Interestingly, if the next nearest neighbor interaction is the largest interaction in the problem, then the ground state is a \textit{fully} modulated biased ladder, where all the bosons reside on only one leg of the ladder (see Fig.~\ref{interlegcdw}(b)). This state not only breaks global $Z_{2}$ reflection symmetry, but also has an additional Goldstone mode associated with breaking translational symmetry. This state is the ladder analog of the stripe phase of two-dimensional lattice dipolar bosons. Although such a phase is theoretically interesting, this parameter regime cannot be attained with dipolar interactions alone. Tilting the dipoles away from the plane perpendicular to the ladder does not affect any of the qualitative conclusions of Fig.~\ref{interlegcdw}, provided the nearest and next nearest neighbor interactions remain repulsive. The precise locations of the phase boundaries however will shift. Dipolar systems have long been sought after as ideal candidates for exploring ``supersolidity'': a phase of matter which exhibits dissipationless flow analogous to a superfluid while simultaneously possessing crystalline order \cite{Lahaye2009}. However, a major experimental challenge in observing supersolidity in dipolar systems is that the dipolar interaction strengths in most magnetic atoms and polar molecules that have been cooled to date are far too weak, and are easily overwhelmed by the contact interaction, which produces a homogeneous superfluid \cite{Danshita2009}. In the present system, however, translational symmetry breaking is not an interaction effect but rather a single-particle one, owing to the multiple minima in the single particle dispersion.
Once translational symmetry has been broken to produce a modulated density wave phase, there is no additional on-site energy cost to displacing the density modulations on the left and right legs of the ladder relative to one another. Doing so, however, lowers the overall dipolar energy by avoiding the rung interaction. Therefore a checkerboard state or an interleg CDW phase can occur for very weak dipolar strengths in this system, and survives even in the presence of a non-zero $U$. \subsection{Chiral current switching} We briefly discuss what happens at other values of the flux. Although the experiments do not tune the flux directly, DMRG studies have extensively explored the phase diagram for different values of the magnetic field \cite{Greschner2015, Piraud2015}. Remarkably, we find that near (but not at) $\phi = \pi$, the sign of the chiral current develops a switching pattern from plaquette to plaquette, as shown in Figs.~\ref{interlegcdw}(d) and (e). In Fig.~\ref{interlegcdw}(e), we plot the chiral current at $\phi = 0.9\pi$, which shows this sign reversal. A similar pattern is predicted to occur in the fully frustrated Bose-Hubbard model studied by Dhar \textit{et al.} \cite{Dhar2012, Dhar2013}, who refer to this state as a chiral superfluid. We remark that this is different from the findings of Greschner \textit{et al.} \cite{Greschner2015}, where the global chiral current reverses sign compared to the chiral current at small fluxes. Here we find that the sign of the global chiral current is still the same as that for small fluxes. We find that this switching pattern occurs even for weak interactions, at large enough values of $\phi$, when the Meissner phase is destroyed in favor of a vortex phase. Increasing $K$ increases the magnitude of the chiral current at fixed flux, and should make this effect experimentally observable.
\section{Partially Attractive Dipoles} \begin{figure*} \begin{picture}(100, 150) \put(135, 70){\includegraphics[scale=0.42]{kplotf.eps}} \put(-200, -5){\includegraphics[scale=0.42]{attractiverung.eps}} \put(-20, 70){\includegraphics[scale=0.42]{curdensity.eps}} \put(40, 35){\includegraphics[scale=0.4]{critvortex_d.eps}} \put(-40, 20){\includegraphics[scale=0.42]{legend2.eps}} \put(30, 0){\includegraphics[scale=0.4]{vortex.eps}} \end{picture} \caption{\label{pdtiltrung} (Color Online) \textbf{Dipoles pointing along $y$ (rung) direction:} (a) Phase diagram of the attractive rung, dipolar bosonic ladder for $U=0$ and $\phi = \pi/2$ as a function of $V$ and $K/J$. A modulated density wave phase (with $\gamma = \pi/4$) is found throughout the entire phase diagram. The density plot shows the evolution of $k_{0}$ with $V/J$ and $K/J$, which approaches zero as $K$ becomes large, but $\gamma$ remains finite. Inset shows the real space total density at two different values of $K$, at fixed $V$. (b) and (c) Evolution of the chiral current and $k_{0}$ respectively for fixed $V$, as a function of $K/J$, showing jumps. The discrete jumps indicate first order transitions between CDW phases with different incommensurate wavelengths, and are reminiscent of the devil's staircase pattern found in dipolar Mott insulators and superfluids in $1$D \cite{Burnell2009, Dalmonte2010}. For comparison, the evolution of $k_{0}$ for purely short range interactions is also shown (dashed), revealing a single first order transition from the modulated density phase to the Meissner phase. (d) Schematic showing the real space density and current profile in the two-leg ladder. Increasing $K/J$ increases the wavelength of the modulations (shaded region). Arrows indicate the direction of the chiral current. In this picture, ``vortices'' sit in the regions between the shaded areas, where the chiral current reverses direction. Therefore, the vortex density decreases as $K/J$ increases (cf. Ref.~\cite{Piraud2015}, Fig.
$5$). Neither variational approach finds a transition to a homogeneous density (Meissner) phase at any $K$.} \end{figure*} We now turn to the study of partially attractive dipoles, which can be realized by tilting the external field relative to the plane of the ladder. For concreteness, we focus on two particular cases: (i) external field aligned parallel to the rungs (attractive rung and next nearest neighbor interactions) and (ii) external field aligned parallel to the legs of the ladder (attractive ladder and next nearest neighbor interactions). We emphasize that even though we focus on specific tilt angles below, our results in Secs.~IV A and IV B are valid over a wide range of tilt angles, as long as $V_{y} < 0, V_{x} > 0$ and $V_{x} < 0, V_{y} > 0$ respectively. As before, we calculate the phase diagram using both variational approaches. Where comparison is possible, they yield identical results. \subsection{Attractive Rung interaction} We first consider the case where the rung interaction is attractive, $V_{y} = -2V$, while the interaction along the ladder remains repulsive, $V_{x} = V$. We also turn on a finite, attractive next nearest neighbor (NNN) interaction. As in the repulsive case, we consider a purely dipolar interaction, $U=0$. Owing to the attractive rung interactions, the density modulations on both legs are in phase with one another, and both variational approaches yield a modulated density wave phase as the ground state. In Fig.~\ref{pdtiltrung}(a), we plot the phase diagram of the two-leg ladder as a function of $K/J$ and $V/J$ obtained from Eq.~(\ref{inte}). The density plot reveals that a vortex or modulated density phase ($\gamma = \pi/4$) occurs throughout the entire phase diagram, but that the wavelength of the modulations (or the vortex core radius) diverges as $K/J$ is increased. At large $K/J$, the chiral current saturates to its maximal value ($k_{0} \rightarrow 0$), but $\gamma$ remains pinned to $\pi/4$, so the density modulates in space.
Thus we conclude that arbitrarily weak attractive rung interactions destroy the Meissner phase. To understand the destruction of the Meissner phase, we once again consider the energy difference between the Meissner and a modulated density wave phase. Setting $V_{\text{NNN}} = 0$, which has no qualitative impact on the physics, the energy of the Meissner phase from Eq.~(\ref{inte}) reads $E = E_{-}(0) - V/8$, while that of the modulated density wave phase is $E_{-}(k_{0}) - V/8 + (V/8)\sin^{2}{\theta_{k_{0}}}\,(\frac{1}{2}\cos(2k_{0}) - 1)$. At large $K$, where the Meissner phase usually occurs, the energy difference $E_{-}(0) - E_{-}(k_{0})$ approaches a constant independent of $K$: $E_{-}(0) - E_{-}(k_{0}) \rightarrow -\sqrt{2}(1- \cos(k_{0}))$. Therefore, for a modulated density wave phase to win, we require $(V/8)\sin^{2}{\theta_{k_{0}}}\,(\frac{1}{2}\cos(2k_{0}) - 1) < -\sqrt{2}(1- \cos(k_{0}))$. It is easy to see that this condition can be satisfied even for arbitrarily weak $V$, as long as $k_{0} \rightarrow 0$. Therefore, for any value of $V$, there is a critical $k_{\text{crit}}$, such that for $k_{0} < k_{\text{crit}}$, the modulated density wave phase with wave-vector $k_{0}$ and $\gamma = \pi/4$ has lower energy than the Meissner phase. Hence the ground state is always a modulated density wave or vortex phase. Turning on a finite $U$, however, restores the Meissner phase. Indeed, it is easy to show from Eq.~(\ref{inte}) that for $k_{0} \rightarrow 0$, a finite $\gamma$ occurs only when $U < U_{c} = V/(4\sqrt{2})$. This condition can be achieved experimentally by using Feshbach resonances to tune $U$ near a zero crossing. We expect that including higher order terms will increase $U_{c}$, as longer ranged interactions will be more attractive. However, we expect the quantitative corrections to be small, as longer range interactions decrease in magnitude as $1/r^{3}$.
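The small-$k_{0}$ argument above is easy to verify numerically; the sketch below evaluates both sides of the stated inequality, using the quoted large-$K$ form for $E_{-}(0) - E_{-}(k_{0})$ (a minimal Python sketch with $J = 1$, $\phi = \pi/2$; the function name is ours, introduced for illustration):

```python
import numpy as np

def meissner_unstable(k0, V, K, J=1.0, phi=np.pi / 2):
    """True where the modulated density wave undercuts the Meissner phase:
    (V/8) sin^2(theta_k0) (cos(2 k0)/2 - 1) < -sqrt(2) (1 - cos k0)."""
    s2 = K**2 / (K**2 + 4 * J**2 * np.sin(k0)**2 * np.sin(phi / 2)**2)
    lhs = V / 8 * s2 * (np.cos(2 * k0) / 2 - 1)
    rhs = -np.sqrt(2) * (1 - np.cos(k0))
    return lhs < rhs
```

For a weak interaction such as $V/J = 0.05$ at $K/J = 5$, the condition holds at small $k_{0}$ (the left side approaches the finite value $-V/16$ while the right side vanishes) but fails at $k_{0} = \pi/4$, illustrating the existence of a finite $k_{\text{crit}}$ for arbitrarily weak $V$.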
Physically, the destruction of the Meissner phase for attractive rung interactions can be simply understood: absent short-range repulsive forces ($U=0$), the dipolar interaction favors the formation of a CDW, and the attractive rung interaction implies that the CDWs on both legs will be in phase with one another, \textit{i.e.}, a modulated density wave phase. Upon increasing $K$, the two single particle minima at $\pm k_{0}$ move closer to one another, but the total dipolar interaction energy can be lowered by condensing at \textit{both} minima, rather than in a single minimum, as is the case for repulsive short-range interactions, which favor a homogeneous density. When $U$ becomes larger than a critical strength, the homogeneous density Meissner phase is restored. Strikingly, the density plot reveals that $k_{0}$ shows a sequence of jumps as it varies from $k_{0} = \pi/4$ to $0$ with increasing $K$ (Fig.~\ref{pdtiltrung}(c)). These jumps are also manifested in the chiral current (Fig.~\ref{pdtiltrung}(b)), and are indicative of a cascade of first order transitions between modulated density wave phases with different wave-vectors. This series of first order transitions between different modulated density wave phases is reminiscent of the Luttinger staircase feature observed in quasi-$1$D dipolar superfluids \cite{Dalmonte2010}. There, the dipolar Luttinger liquid becomes unstable towards the formation of a cascade of solids with fractional filling, in the presence of an arbitrarily weak lattice potential. Although we work in the opposite limit (namely of large, incommensurate occupation), our theory nonetheless yields a similar cascade of CDW phases, where the periodicity of the density wave is tuned not by the filling fraction, but rather by the hopping parameter.
For comparison, in Fig.~\ref{pdtiltrung}(c) we also show the evolution of $k_{0}$ for a gas with contact interactions $U$ of the same magnitude, which shows a single first order transition from a modulated density wave to a Meissner phase. \subsection{Attractive Ladder interaction} We now turn to the case where the magnetic field is tilted along the ladder ($x$) direction. We therefore have $V_{x} = -2V$, $V_{y} = V$. Numerically minimizing the variational energy, Eq.~(10), yields the ground state phase diagram for this case, shown in Fig.~\ref{pdtiltladder}. We have set $U/J = 1$. For weak ladder interactions $V \ll U$, the phase diagram resembles that of a gas with local interactions: a direct first order transition from an interleg CDW phase to a Meissner phase is found. (Note that for this choice of $U$, there is no intervening biased ladder phase \cite{Wei2014}.) However, increasing the attractive dipolar interaction destroys the interleg CDW phase in favor of a biased ladder phase for arbitrarily weak rung-to-ladder couplings. Increasing $K/J$ leads to a second order transition from the biased ladder phase to the Meissner phase. We remark that the biased ladder phase has not yet been observed experimentally, despite being predicted theoretically \cite{Wei2014, Uchino2015, Piraud2015}. The tilted dipolar system displays a wide biased ladder regime, and may be ideally suited for observing this phase experimentally. A similar phase diagram, where the interleg CDW phase is replaced by a modulated density wave phase, is expected for purely attractive short range interactions $U<0$. However, such a gas is not mechanically stable at zero temperature, and collapses at sufficiently high densities \cite{Mueller2000, Hulet1998}. In our case, the appearance of a biased ladder phase at small values of the rung hopping is driven entirely by attractive long range forces, and the gas can still be stabilized by maintaining a repulsive on-site potential.
To assess the stability of the biased ladder phase, we calculate the excitation spectrum about the biased ladder ground state $|G_{k_{0}}\rangle = 1/\sqrt{N!}(\beta^{\dagger}_{k_{0}})^{N}|0\rangle$. Writing fluctuations above the ground state as $\beta_{k} = \sqrt{N}\delta_{k, k_{0}} + (1-\delta_{k, k_{0}})\gamma_{k-k_{0}}$ \cite{Wei2014}, we expand the Hamiltonian Eq.~(\ref{ham0}) to quadratic order in the $\gamma$ operators. Diagonalizing the effective Bogoliubov Hamiltonian $H - \mu N$, where $\mu = E_{-}(k_{0}) + (U+V_{x})n(\sin^{4}(\theta_{k_{0}}/2)+\cos^{4}(\theta_{k_{0}}/2)) + (V_{y} + V_{\text{NNN}}/\sqrt{2})n\sin^{2}(\theta_{k_{0}}/2)\cos^{2}(\theta_{k_{0}}/2)$ is the chemical potential, yields the low energy excitation spectrum. As in the case of short-range interactions, the spectrum has a characteristic roton-maxon feature \cite{Wei2014}, which has also recently been found in the shaken-lattice experiment of Parker \textit{et al.} \cite{Chin2013}. The dipolar interaction pushes the roton minimum to larger $k$. Near the interleg CDW to biased ladder phase transition, the roton minimum is present for arbitrarily small $K$, and disappears as $K$ is increased. The instability towards collapse is signaled by the appearance of imaginary frequencies at long wavelengths. For $U/J = 1$, the gas becomes dynamically unstable towards collapse at $V/J \sim 0.5$, or $V_{x} \sim -U$. The critical interaction strength at which collapse occurs is weakly dependent on $K$. Increasing $K/J$ pushes the collapse to somewhat stronger interactions.
\section{Summary and Discussion} \begin{figure} \begin{picture}(100, 160) \put(-40, -10){\includegraphics[scale=0.49]{attractiveladder.eps}} \put(140, 20){\includegraphics[scale=0.49]{legend3.eps}} \end{picture} \caption{\label{pdtiltladder} (Color Online) \textbf{Dipoles pointing along x (ladder) direction:} Density plot showing the density difference between the two legs of the ladder normalized to the total density for attractive ladder interactions: $V_{x} = -2V$, $V_{y} = V$, and $U/J = 1$. For finite $V/J$, there is a transition from an interleg CDW to a biased ladder phase at arbitrarily small $K/J$, which occupies a large portion of the phase diagram. Increasing $K/J$ leads to a direct transition from the biased ladder to the Meissner phase. Increasing the dipolar interactions further leads to a collapse of the gas, signaled by the appearance of imaginary frequencies at long wavelengths.} \end{figure} Experimentally, our system can be realized using highly magnetic dipolar atoms, Rydberg atoms or polar molecules \cite{Ni2008, Aikawa2010, Deiglmayr2008,Lu2012, Pfau2007, Aikawa2012,Saffman2010, Schausz2012}. Magnetic atoms are ideally suited for this study, as we do not incorporate spin degrees of freedom, and therefore do not have any dipolar loss. Moreover, coupling magnetic atoms to Raman lasers is believed to produce much lower heating losses \cite{Cui2013}, which is a major advantage over conventional alkalis. With polar molecules, a key experimental challenge that needs to be overcome is the low densities needed to prevent chemical reactions. Non-reactive molecules or quantum-Zeno methods of loss suppression \cite{Zhu2014} may be required in order to boost the densities to values suitable for studying many-body physics. All the phases discussed here can be probed experimentally by measuring the \textit{in situ} density and the local currents \cite{Atala2014}. The interleg charge density wave phase breaks local $Z_{2}$ symmetry spontaneously.
However, as the dipolar interaction is long ranged, even if there is no hopping between neighboring ladders, long range interactions will couple neighboring ladders, giving rise to an overall checkerboard pattern, which can be detected using Bragg spectroscopy \cite{Stenger99}. The modulated biased ladder phase is more challenging to detect, as the density asymmetry between the left and right legs of the ladder will randomly change from ladder to ladder, averaging to zero. However, by detuning the Raman lasers from resonance, an external bias can be applied \cite{Wei2014}, which then polarizes all the ladders in the same way. We comment briefly on the relationship between our work and recent experiments on bosons and fermions in synthetic dimensions \cite{Fallani15, Stuhl15}. The physical rung of the ladder we consider corresponds to the synthetic dimension, which is encoded in the internal spin states of the atoms. Therefore the onsite density-density and spin-spin interactions can be interpreted as effective ``long range" interactions in the synthetic (rung) dimension. A crucial difference, however, is that in these experiments, long range interactions only occur in the synthetic direction; in the physical direction, there is a one-dimensional optical lattice and the interactions are still short-ranged. If the analog of an interleg CDW occurs in these experiments, it will be manifest as a spin-density wave, where the local spin orientation changes from site to site, such that on a given rung of the ladder, only one spin component is present. To conclude, we have used two complementary mean-field approaches to explore the interplay between large magnetic fields and long range density-density interactions on a two-leg bosonic ladder, finding a rich phase diagram. Our approximations are justified in the limit of large numbers of bosons per site, which for $1$D systems corresponds to the mean-field regime.
As in the experiment of Atala \textit{et al.} \cite{Atala2014}, we largely fix the magnetic flux, but modulate the rung-to-ladder hopping and the interactions. Generally, we have shown that dipolar interactions destroy the Meissner phase completely or reduce its regime of stability. We expect that more sophisticated bosonization treatments should yield similar qualitative answers, as it is already well known that dipolar Luttinger liquids are unstable towards CDW formation \cite{Dalmonte2010}. A full bosonization treatment of this problem remains an exciting topic for further study. Repulsive density-density dipolar interactions lead to an interleg CDW phase, where the total density is uniform, but the densities along the left and right legs of the ladder modulate in space, out of phase with one another. This phase is stable for weak next nearest neighbor interactions, but for strong next nearest neighbor repulsion, a fully modulated biased ladder phase is found, where all the atoms are either on the left or the right leg of the ladder. For values of the flux near $\phi \rightarrow \pi$, we obtain a modulated density wave phase where the chiral current switches sign from plaquette to plaquette. For purely attractive rung interactions, we find that the Meissner phase is completely destroyed. Instead, we find a cascade of first order transitions between CDW phases with different wave-vectors, reminiscent of the Luttinger staircase \cite{Dalmonte2010}. Whether this pattern develops into a full Luttinger staircase at low filling fractions will be studied in future work. Importantly, as these jumps can be tuned by changing the Raman coupling, this may open the possibility of observing this staircase pattern in experiments. Finally, we emphasize that although we have only considered specific tilt angles in this work for simplicity, our results are more general in that the phase diagrams obtained here will survive small deviations away from the tilt angles we consider.
For example, the biased ladder phase is the ground state as long as $V_{x} <0$ and $|V_{x}| <U$ and $V_{y} >0$. This condition is met for all angles $\sin^{-1}(1/\sqrt{3}) < \theta < \pi/2$. Similarly, we expect an interleg CDW ground state for small $K/J$ whenever $V_{y} > 0$. \section{Acknowledgements} We are indebted to Ana Maria Rey for motivating us to think about this problem, and Wilbur Shirley and Marie Piraud for numerous discussions during the preparation of this manuscript. We are grateful to Xiaopeng Li, Erich Mueller, Ran Wei and Ryan Wilson for their careful reading of this manuscript and for suggesting numerous improvements. We would like to thank the LPS-CMTC, LPS-MPO-CMTC, NSF-JQI-PFC, and ARO-MURI and the NSF-PFC seed grant ``Emergent phenomena in interacting spin-orbit coupled gases" for support. We are grateful to the Department of Energy's Institute for Nuclear Theory at the University of Washington for its hospitality during the completion of this work. \begin{appendix} \section{Expressions for local ladder and rung currents} To calculate the currents in Eq.~(\ref{cureqs}), we first transform our variables into Fourier space as follows: $a_{kL} = 1/\sqrt{\Omega}\sum_{l}e^{-i(k-\phi/2)l}a_{lL}$ and $a_{kR} = 1/\sqrt{\Omega}\sum_{l}e^{-i(k+\phi/2)l}a_{lR}$.
Inserting these expressions into Eq.~(\ref{cureqs}) and expressing $a_{k\mu}$ in terms of $\beta_{k}$, we obtain: \begin{equation}\label{rungcur} j^{\perp}_{l} = -\frac{K}{\Omega}\sin{2\gamma}\sin{2k_{0}l}\cos{\theta_{k_{0}}} \end{equation} The ladder currents are similarly found to be: \begin{eqnarray}\label{ladcur} j^{\parallel}_{lL} = \frac{J}{\Omega}(2\sin^{2}{\frac{\theta_{k_{0}}}{2}}\sin{(k_{0} + \frac{\phi}{2})}\cos^{2}{\gamma} + \\\nonumber 2\cos^{2}{\frac{\theta_{k_{0}}}{2}}\sin{(-k_{0} + \frac{\phi}{2})}\sin^{2}{\gamma} +\\\nonumber i\frac{\sin{\theta_{k_{0}}}\sin{\gamma}}{4}\Big(e^{-2ik_{0}l}(e^{-i(k_{0} + \frac{\phi}{2})} - e^{-i(k_{0} - \frac{\phi}{2})}) -\\\nonumber \text{c.c}\Big) \\\nonumber j^{\parallel}_{lR} = \frac{J}{\Omega}(-2\cos^{2}{\frac{\theta_{k_{0}}}{2}}\sin{(-k_{0} + \frac{\phi}{2})}\cos^{2}{\gamma} - \\\nonumber 2\sin^{2}{\frac{\theta_{k_{0}}}{2}}\sin{(k_{0} + \frac{\phi}{2})}\sin^{2}{\gamma} +\\\nonumber i\frac{\sin{\theta_{k_{0}}}\sin{\gamma}}{4}\Big(e^{-2ik_{0}l}(e^{-i(k_{0} - \frac{\phi}{2})} - e^{-i(k_{0} + \frac{\phi}{2})}) -\\\nonumber \text{c.c}\Big) \end{eqnarray} Using the symmetry under $k_{0} \rightarrow -k_{0}$ and the identity $\cos^{2}{\frac{\theta_{-k_{0}}}{2}} = \sin^{2}{\frac{\theta_{k_{0}}}{2}}$, the reader may readily check that the net ladder current $j^{\parallel}_{lL} + j^{\parallel}_{lR}$ vanishes on every site $l$, and that the net chiral current is $j_{c} = \sum_{l}(j^{\parallel}_{lL} - j^{\parallel}_{lR}) = 4J\sin^{2}{\frac{\theta_{k_{0}}}{2}}\sin{(k_{0}+\phi/2)}$. \end{appendix}
\section{Introduction} Consider a population evolving according to a branching process $\cZ'$. In addition to reproduction events, global events called disasters occur at some random times (independent of $\cZ'$) that kill off every individual alive with probability $1-p\in(0,1)$, independently of each other. The resulting process of population sizes $\cZ$ will be called a branching process subject to binomial disasters with survival probability $p$. Provided there is no information regarding the fitness of individuals in terms of resistance against disasters, this binomial approach appears intuitive, since the survival events of single individuals are iid. Applications range from natural disasters such as floods and droughts to the effects of radiation treatment or chemotherapy on cancer cells as well as antibiotics on populations of bacteria. Also, Bernoulli sampling comes to mind, as in the Lenski experiment (cf. \citealp{CasanovaEtAl2016}). \noindent In the general setting of Bellman-Harris processes with non-lattice lifetime distribution subject to binomial disasters, \cite{KaplanEtAl1975} and \cite{AthreyaKaplan1976} have studied the almost sure asymptotic behaviour as well as asymptotics of the expectation of the population size and showed that such processes almost surely either go extinct or explode, giving necessary and sufficient conditions for extinction. They also computed the limit of the age-distribution on the set of explosion. In the special case of homogeneous birth-death processes with binomial disasters, \cite{BühlerPuri1989} obtained more explicit results regarding asymptotics and normalised limit distributions as well as the distribution of the extinction probability conditioned on the disaster times. \cite{Bartoszynski1989} added the analysis of extinction probabilities in a multi-type setting.
Furthermore, \cite{Peng1993}, \cite{Thilaka1998} and \cite{KumarII} studied single-type and multi-type population models underlying binomial disasters with survival probabilities depending on the time the last disaster occurred. These models reflect disasters like earthquakes, where pressure builds up over time and increases severity. A more general disaster mechanism in a birth-death scenario has been discussed by \cite{Brockwell1982}, \cite{Brockwell1985}, \cite{Pakes1986} and \cite{Pakes1989}, where the absolute population decline after a catastrophe follows a geometric, uniform or even arbitrary distribution independent of the population size. Additionally, the rate of catastrophes is linear in the population size. For continuous state branching processes with disasters according to some intensity measure $\nu$, \cite{Bansaye2013} have studied the probability of extinction. We add to this literature on branching processes with disasters precise results for the asymptotic extinction probability at late times. As Theorem~\ref{thm:bp-arb.off.dis} shows, if extinction occurs, the survival probability decays exponentially at a rate which exhibits a phase transition. We will also be dealing with the time-inhomogeneous case (see Theorem~\ref{thm:inhom-bdp}), and extinction probabilities for continuous state branching processes (CSBP) with binomial disasters (see Theorem \ref{thm:cont-state}). \noindent The main technique we are going to use in our study is duality. Recall that duality of branching systems to a solution of a differential equation has proven particularly useful for continuous state branching processes and measure-valued processes. (See \citealp{Etheridge2001} for an overview and the beginning of Section \ref{sec:bps} for a brief introduction to this notion of duality.)
Bringing this notion back to a branching process in continuous time (without disasters) $\cZ$, where every individual branches at rate $\lambda$ and has offspring distribution with probability generating function (pgf) $h$, the distribution at time $t$ can be computed via the duality relation $$ \mathbb E[x^{Z_t}|Z_0=z] = X_t^{z},$$ where $X_0=x$ and $X_t$ solves \begin{align} \label{eq:ODE} \dot X = - \lambda (X - h(X)) \end{align} (cf.\ \citealp{AthreyaNey1972}, Chapter III.3). We will generalize this equation in order to include binomial disasters. Here, the dual process will be a piecewise deterministic Markov process (PDMP) $\cX$ on $[0,1]$, where $1-\cX$ evolves according to \eqref{eq:ODE} and jumps by a factor of $p$ at the rate of the disasters. Such processes will be called $p$-jump processes below. In Theorem \ref{thm:convergence-$p$-jump-pros} we will give general limit results for these processes, which become more precise and concise under the concavity conditions of Corollary \ref{cor:p-jump-concave}. These findings will then translate into limit and asymptotic results for survival and extinction probabilities of branching processes with binomial disasters in Theorem \ref{thm:bp-arb.off.dis}. \noindent The results of Theorem~\ref{thm:bp-arb.off.dis} will be expanded in two directions. First, we are dealing with the time-inhomogeneous case in Theorem~\ref{thm:inhom-bdp}, i.e.\ branching rate, pgf and disaster rate may depend on time. Here -- using results of \cite{Kendall1948} for the case without disasters -- we are able to derive the pgf of a binary branching process subject to binomial disasters, conditioned on the times of disasters -- similar to the approach of \cite{BühlerPuri1989} for the homogeneous case. In this case, we also give in Proposition \ref{prop:inhom-limit} some limits of pgfs. Second, we apply the duality technique to continuous state branching processes (CSBP) with binomial disasters.
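For a concrete check of the duality relation above, take a binary branching process, i.e.\ a linear birth-death process with birth rate $b$ and death rate $d$, for which $\lambda = b+d$ and $h(x) = (d+bx^2)/(b+d)$, so that \eqref{eq:ODE} becomes $\dot X = (bX-d)(X-1)$. With $X_0 = x = 0$, the dual $X_t$ must reproduce the classical extinction probability $d(e^{(b-d)t}-1)/(be^{(b-d)t}-d)$. A minimal numerical sketch with illustrative parameters:

```python
import math

# Sketch verifying E[x^{Z_t} | Z_0 = 1] = X_t for a linear birth-death
# process: lambda = b + d, h(x) = (d + b x^2)/(b + d), hence
# dX/dt = -lambda(X - h(X)) = (bX - d)(X - 1). With X_0 = 0, X_t equals
# P(Z_t = 0 | Z_0 = 1), known in closed form. Parameters are illustrative.

b, d, t_end = 2.0, 1.0, 1.0

def drift(x):
    return (b * x - d) * (x - 1.0)

def X_ode(x0, T, n_steps=20000):
    """Classical fourth-order Runge-Kutta integration of dX/dt = drift(X)."""
    x, dt = x0, T / n_steps
    for _ in range(n_steps):
        k1 = drift(x)
        k2 = drift(x + 0.5 * dt * k1)
        k3 = drift(x + 0.5 * dt * k2)
        k4 = drift(x + dt * k3)
        x += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    return x

dual = X_ode(0.0, t_end)
exact = d * (math.exp((b - d) * t_end) - 1.0) / (b * math.exp((b - d) * t_end) - d)
# dual agrees with exact up to integration error; both equal P(Z_1 = 0 | Z_0 = 1),
# and as t grows both tend to the extinction probability d/b.
```

The same mechanism, augmented by the multiplicative $p$-jumps of the dual at disaster times, underlies the results below.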
Here, we derive in Theorem \ref{thm:cont-state} limit results for the extinction probabilities. ~ \noindent The manuscript is organised as follows. In Section~\ref{S:res}, we give our main results on $p$-jump processes with Theorem \ref{thm:convergence-$p$-jump-pros} and Corollary \ref{cor:p-jump-concave}. The results on (time-homogeneous) branching processes with disasters are collected in Theorem~\ref{thm:bp-arb.off.dis}. The case of a birth-death process, i.e.\ a binary branching process, is given in Corollary~\ref{cor:results-homog-bd-proc} and extended further to the time-inhomogeneous case in Theorem~\ref{thm:inhom-bdp}. The CSBP with disasters is treated in Theorem \ref{thm:cont-state}. In Section~\ref{sec:pdmps}, we will prove Theorem \ref{thm:convergence-$p$-jump-pros} and Corollary \ref{cor:p-jump-concave}. The duality of branching processes with disasters and $p$-jump PDMPs will be established in Section~\ref{sec:bps}, where we will also prove Theorem~\ref{thm:bp-arb.off.dis}. For the time-inhomogeneous case, we first need in Section~\ref{sec:reg-var} some results on regularly varying functions, as collected in Theorem \ref{thm:asymptotics-of-D-integral}, which might be of interest in their own right. The proof of Theorem~\ref{thm:inhom-bdp} is then given in Section~\ref{sec:pr-thm3}. \section{Results}\label{S:res} \subsection{$p$-jump Processes} Let us begin by clarifying the notion of $p$-jump processes. \begin{definition}\label{def:p-jump-process}\ Let $I=[0,\upsilon]$ for $\upsilon>0$ or $I=\R^+:=[0,\infty)$. Then, let $\alpha:I\to\R$ with $\alpha(0)\geq0$ and, if $I=[0,\upsilon]$, $\alpha(\upsilon)\leq0$. Furthermore, let $p\in[0,1]$ and $\cX$ be a right-continuous Markov process on $I$ that performs unit-rate jumps from a state $x$ to $px$ and between jumps satisfies $\dot X_t=\alpha(X_t)$.
Such a process has generator \begin{align} \cG_\cX f(x) &= f(px) - f(x) + \alpha(x)f'(x)\label{eq:generator-of-p-jump-process} \end{align} for $f\in\cC^1(I)$ and is called a \emph{$p$-jump process with drift $\alpha$ on $I$}. \end{definition} \begin{remark}\label{rem:def-p-jump-pros}\ \begin{enumerate} \item Note that such a process is uniquely characterised by $\alpha$, whenever $\dot X_t=\alpha(X_t)$ has a unique solution on $I\cap[\varepsilon,\infty)$ for every $\varepsilon>0$. To ensure this, we will use the Lipschitz-continuity conditions $(C_1)$ and $(C_2)$ in Theorem \ref{thm:convergence-$p$-jump-pros}. The bounds on $\alpha(0)$ and $\alpha(\upsilon)$ guarantee that the process does not leave the interval $I$, such that $\cX$ is well-defined. \item Due to its multiplicative jumps, such a process can only have $0$ as an absorbing state. This happens iff $\alpha(0)=0$. \item $1$-jump processes are deterministic, since their jumps have no effect, while $0$-jump processes always jump to 0. These two special cases will be left aside in Theorem \ref{thm:convergence-$p$-jump-pros} but considered in the following Corollary \ref{cor:p-jump-concave} for concave $\alpha$, where a more concise conclusion is possible. \end{enumerate} \end{remark} \noindent First, we present our most general limit results for $p$-jump processes. \begin{theorem}[Convergence of $p$-jump-processes]\label{thm:convergence-$p$-jump-pros} Let $I$ and $\alpha$ be as in Definition \ref{def:p-jump-process}, $p\in(0,1)$ and $X_0\in I$. Also, if $I=\R^+$, suppose that $s_\alpha:=\sup\{x:\alpha(x)>0\}<\infty$. Furthermore, let $\alpha'_0:=\lim_{x\to0}\frac1x\alpha(x)$ and assume that $\alpha$ satisfies one of the following: \begin{itemize} \item[$(C_1)$] $\alpha$ is Lipschitz-continuous on $I$ or \item[$(C_2)$] $\alpha$ is Lipschitz-continuous on $I\cap[\varepsilon,\varepsilon^{-1}]$ for every $\varepsilon\in(0,1)$ and $\alpha'_0=\infty$.
\end{itemize} Then, there is a $p$-jump process $\cX$ with drift $\alpha$ on $I$ starting in $X_0$, such that, letting $\hat\alpha=\sup_{x\in I\setminus\{0\}}\tfrac1x\alpha(x)$, the following statements hold: \begin{enumerate} \item If $\hat\alpha<\log\tfrac1p$, then $\mathbb P(X_t\xrightarrow{t\to\infty}0)=1$. Additionally, for the $k$th moment of $X_t$, $k\geq1$, the following estimates hold: \begin{enumerate} \item[$(U_1)$] In general (i.e.\ even for all $\hat\alpha\in\R$), \begin{align*} \limsup_{t\to\infty} \tfrac 1t \log\E[X_t^k] \leq -(1 - p^k - \hat\alpha k). \end{align*} \item[$(U_2)$] If $\hat\alpha > p^k\log\tfrac1p$, letting $\lambda:=\frac1{\hat\alpha}\log\frac1p$, we obtain the stronger bound \begin{align*} \limsup_{t\to\infty} \tfrac 1t \log\E[X_t^k] \leq -\big(1 - \tfrac1\lambda(1+\log\lambda)\big). \end{align*} \item[$(L_1)$] If there are $\delta\in\R$ and $\vartheta>0$ such that $\alpha(x)\geq\delta x-\vartheta x^2$ for all $x\in I$ and $\delta\leq p^k\log\tfrac1p$, then \begin{align*} \liminf_{t\to\infty} \tfrac 1t \log\E[X_t^k] \geq -(1 - p^k - \delta k). \end{align*} \item[$(L_2)$] If $\delta$ in $(L_1)$ can be chosen positive, letting $\gamma:=\frac1\delta\log\frac1p$, we obtain \begin{align*} \liminf_{t\to\infty} \tfrac 1t \log\E[X_t^k] \geq -\big(1 - \tfrac1\gamma(1+\log\gamma)\big), \end{align*} which is a stronger bound than $(L_1)$, if $\delta>p^k\log\tfrac1p$. \end{enumerate} \item If $\alpha'_0\in(\log\tfrac1p,\infty]$, let $x_\alpha=\min\{x\in I\setminus\{0\}:\alpha(x)=0\}$. Then, $\cX$ converges weakly and its limit $X_\infty$ satisfies $\Pw(X_\infty\in(0,x_\alpha])=1$, $\E[X_\infty^{-1}\alpha(X_\infty)]=\log\tfrac1p$ and for every $k\geq1$ \[ \E[X_\infty^k] = \frac k{1-p^k}\E[X_\infty^{k-1}\alpha(X_\infty)].
\] Also, the distribution of $X_\infty$ is the unique stationary distribution and for every bounded and measurable function $f:I\to\R$ almost surely \begin{align*} \lim_{t\to\infty}\frac1t\int_0^tf(X_s)ds &= \E[f(X_\infty)]. \end{align*} \end{enumerate} \end{theorem} \begin{remark}\label{rem:thm-PDMPs}\ \begin{enumerate} \item Note that in Theorem \ref{thm:convergence-$p$-jump-pros}.\emph1 $\alpha'_0$ is finite, so that $\alpha(0)$ has to be $0$ and $\alpha'_0=\alpha'(0)$; hence the similar notation. Case \emph{2.} accounts for both possibilities $\alpha(0)=0$ with $\alpha'(0)=\infty$ as well as $\alpha(0)>0$. \item Although the theorem shows existence of a $p$-jump process on $\R^+$, it is notable that such a process will only assume values in $[0,\max\{X_0,s_\alpha\}]$, since by definition it can never grow beyond $\sup(\{X_0\}\cup\{x:\alpha(x)>0\})$. \end{enumerate} \end{remark} \noindent In the case where $\alpha$ is concave, the bounds $(U_i)$ and $(L_i)$ coincide and the continuity conditions are automatically satisfied, so that we can give a much more concise result. Here, we will also include the cases $p=0$ and $p=1$. \begin{corollary}\label{cor:p-jump-concave} Let $p\in[0,1]$, $I$ and $\alpha$ be as in Definition \ref{def:p-jump-process}. Additionally assume that $\alpha$ is concave, $\alpha''(0)\in[-\infty,0]$ exists and that either \begin{itemize} \item[--] $I=[0,\upsilon]$ and $\alpha'(\upsilon)>-\infty$ or \item[--] $I=\R^+$ and there is an $x>0$ such that $\alpha(x)=0$. \end{itemize} Then, letting $X_0\in I$ and $\alpha'_0:=\lim_{x\to0}\frac1x\alpha(x)$, there is a $p$-jump process $\cX$ with drift $\alpha$ on $I$ starting in $X_0$ and satisfying: \begin{enumerate} \item If $\alpha'_0<\log\tfrac1p$ or $p=0$, then $X_t\xrightarrow{t\to\infty}0$ almost surely.
Also, for $k\geq1$ \begin{align*} \lim_{t\to\infty}-\tfrac1t\log\E[X_t^k] &= \begin{cases} 1 + \max\{0, - k\alpha'_0\} & \text{if }p=0,\\ 1 - p^k - k\alpha'_0 & \text{if }\alpha'_0\leq p^k\log\tfrac1p,\\ 1 - \tfrac1\gamma(1+\log\gamma) & \text{otherwise}, \end{cases} \end{align*} where $\gamma=\log(\tfrac1p)/\alpha'_0$. \item If $\alpha'_0=\log\tfrac1p$, then $\frac1t\log\E[X_t^k]\xrightarrow{t\to\infty}0$ and $\limsup_tX_t\leq m_\alpha:=\sup\{x\in[0, x_\alpha]:\alpha(x) = x\alpha'_0\}$ almost surely. In particular, if $\alpha$ is strictly concave on an interval $(0,\varepsilon)$, then $X_t\to m_\alpha=0$ almost surely. \item If $\alpha'_0>\log\tfrac1p$, then $x_\alpha:=\sup\{x:\alpha(x)>0\}\in(0,\infty)$, $\cX$ converges weakly and its limit $X_\infty$ satisfies $\mathbb P(X_\infty\in(0,x_\alpha])=1$, $\E[X_\infty^{-1}\alpha(X_\infty)]=\log\tfrac1p$ and for every $k\geq1$ \begin{align*} \E[X_\infty^k] &= \frac k{1-p^k}\E[X_\infty^{k-1}\alpha(X_\infty)]. \end{align*} Also, the distribution of $X_\infty$ is the unique stationary distribution and for every bounded and measurable function $f:\R_+\to\R$ almost surely \begin{align*} \lim_{t\to\infty}\frac1t\int_0^tf(X_s)ds &= \E[f(X_\infty)]. \end{align*} \end{enumerate} \end{corollary} \subsection{Branching Processes with Binomial Disasters} \noindent Applying Corollary \ref{cor:p-jump-concave} via duality with respect to probability generating functions, we obtain in Theorem \ref{thm:bp-arb.off.dis} immediate limit results for the following class of branching processes with binomial disasters. \begin{definition}\label{def:hom-bp-abr-off-dis} Let $\lambda>0$, $q=(q_k)_{k\geq0}$ be a distribution on $\N_0$, $\kappa>0$ and $p\in[0,1]$.
A Markov process $\cZ$ on $\N_0$ with generator \begin{align*} \cG_\cZ f(z) &= \lambda z\sum_{k\geq0}q_k\big(f(z-1+k)-f(z)\big) + \kappa\sum_{k=0}^z\genfrac(){0pt}{}zkp^k(1-p)^{z-k}\big(f(k)-f(z)\big) \end{align*} for $f \in \mathcal B(\N_0)$, the set of real-valued, bounded functions on $\N_0$, is called a \emph{homogeneous branching process with death-rate $\lambda$ and offspring distribution $(q_k)$, subject to binomial disasters at rate $\kappa$ with survival probability $p$} and will be denoted by $\cZ^h_{\lambda,q,\kappa,p}$. \end{definition} \noindent Such a process describes the size of a population that behaves in the following way: Every individual dies at rate $\lambda$ and leaves behind a random number of offspring distributed according to $(q_k)$. Independent of this growth mechanism, at rate $\kappa$ global events occur that kill off every individual alive at that time with probability $1-p$ independently of each other. \begin{theorem}\label{thm:bp-arb.off.dis} Let $\lambda>0$, $q=(q_k)_{k\geq0}$ be a distribution on $\N_0$ with expectation $\mu:=\sum_kkq_k$, $\kappa>0$ and $p\in[0,1)$, and set $\cZ:=\cZ^h_{\lambda,q,\kappa,p}$. Then, if $p=0$, $\cZ$ goes extinct almost surely with \[ \lim_{t\to\infty}-\tfrac1t\log\mathbb P(Z_t>0) = \kappa + \max\{\lambda(1-\mu),0\}. \] Otherwise, letting $\nu=\lambda(\mu-1)/(\kappa\log\frac1p)$, $\cZ$ satisfies \begin{enumerate} \item if $\nu\leq p$, $\cZ$ goes extinct almost surely and \begin{align*} \lim_{t\to\infty}-\tfrac1t\log\mathbb P(Z_t>0) &= (1-p)\kappa - \lambda(\mu-1). \end{align*} \item if $p<\nu\leq1$, $\cZ$ goes extinct almost surely and \begin{align*} \lim_{t\to\infty}-\tfrac1t\log\mathbb P(Z_t>0) &= \kappa(1 - \nu - \nu\log(\tfrac1\nu)).
\end{align*} \item if $\nu>1$, then $\cZ$ survives with positive probability, where, given $Z_0=z_0$, \begin{align} 0 < \mathbb P(\lim_{t\to\infty}Z_t=\infty) = 1 - \mathbb P(\lim_{t\to\infty}Z_t=0) = \sum_{k=1}^{z_0}\genfrac(){0pt}{}{z_0}k(-1)^{k-1}\E[X^k] < 1-x_\ast^{z_0}, \label{eq:bp-arb.off.dis-surv.prob} \end{align} where $h(x)=\sum_kx^kq_k$ is the pgf of $q$, $x_\ast$ is the smallest fixed point of $h$ and $X$ is a random variable on $(0,1-x_\ast]$ satisfying $\E[X^{-1}(1-h(1-X))]=1+\tfrac\kappa\lambda\log\tfrac1p$ and \begin{align} \E[X^k] = \frac{\lambda k}{\lambda k + \kappa(1-p^k)}\E[X^{k-1}(1-h(1-X))]. \label{eq:bp-arb.off.dis-surv.prob.rec} \end{align} \end{enumerate} \end{theorem} \begin{remark}\ \begin{enumerate} \item Rearranging the inequalities in terms of $\mu$, we obtain \[ \textit{1. if }\mu\leq 1 + \tfrac{\kappa p}\lambda\log\tfrac1p,\quad \textit{2. if }1 + \tfrac{\kappa p}\lambda\log\tfrac1p<\mu\leq1 + \tfrac\kappa\lambda\log\tfrac1p,\quad \textit{3. if }\mu>1 + \tfrac\kappa\lambda\log\tfrac1p. \] These give insight into how supercritical the underlying branching process has to be in order to survive the disasters.\\ Also, this formulation illustrates the continuity of the theorem in $p=1$: Classical results for such processes without disasters (cf. \citealp[Theorem 11.1, p.109]{Harris1963}) show that $\cZ$ goes extinct almost surely if $\mu\leq1$ with $-\frac1t\log\Pw(Z_t>0)\to\lambda(1-\mu)$ as $t\to\infty$, which aligns with \emph1., while for $\mu>1$, $\Pw(Z_t\to\infty)=1-\Pw(Z_t\to0)=1-x_\ast^{z_0}$, which is the upper bound for the survival probability in \eqref{eq:bp-arb.off.dis-surv.prob} for $p<1$. \item While \cite{KaplanEtAl1975} have already shown the almost sure extinction in \emph{1.} and \emph{2.} as well as the fact in \emph{3.} that $\cZ$ almost surely either goes extinct or explodes, we offer an alternative proof via our duality results together with rates of convergence for the survival probability, including the case $\mu=\infty$.
Also, making use of \eqref{eq:bp-arb.off.dis-surv.prob.rec}, our result offers a way to compute the exact extinction probability in \emph{3}. Since the recursion in \eqref{eq:bp-arb.off.dis-surv.prob.rec} depends on the offspring pgf $h$, in general this formula can be difficult to compute. Corollary \ref{cor:results-homog-bd-proc}, however, shows that in the example of homogeneous birth-death-processes it is feasible. \end{enumerate} \end{remark} \noindent The following corollary applies Theorem \ref{thm:bp-arb.off.dis} to birth-death-processes with disasters. This not only provides a nice transition to the next theorem, but also offers an example where (using the relation \eqref{eq:bp-arb.off.dis-surv.prob.rec}) we can explicitly compute the survival probability. \begin{corollary}\label{cor:results-homog-bd-proc} Let $\cZ:=(Z_t)_t$ be a homogeneous birth-death-process with respective rates $b>0$ and $d\geq0$ that is subject to binomial disasters at rate $\kappa>0$ with survival probability $p\in(0,1)$. \begin{enumerate} \item If $b-d\leq \kappa p\log\tfrac1p$, $\cZ$ goes extinct almost surely and \begin{align*} \lim_{t\to\infty}-\tfrac1t\log\Pw(Z_t>0) &= (1 - p)\kappa - (b-d). \end{align*} \item If $\kappa p\log\tfrac1p<b-d\leq \kappa\log\tfrac1p$, $\cZ$ goes extinct almost surely and \begin{align*} \lim_{t\to\infty}-\tfrac1t\log\Pw(Z_t>0) &= \kappa - \frac{b-d}{\log\frac1p}\Big(1 + \log\Big(\frac{\kappa\log\frac1p}{b-d}\Big)\Big). \end{align*} \item If $b-d>\kappa\log\tfrac1p$, then $ \Pw_k(\lim_{t\to\infty}Z_t=0) + \Pw_k(\lim_{t\to\infty}Z_t=\infty) = 1 $ and \begin{align*} \Pw_k(\lim_{t\to\infty}Z_t=\infty) &= \Big(1-\frac{d+\kappa\log\frac1p}b\Big)\sum_{\ell=1}^k\genfrac(){0pt}{}k\ell(-1)^{\ell-1} \prod_{m=1}^{\ell-1}\Big(1-\frac{dm+(1-p^m)\kappa}{bm}\Big). \end{align*} \end{enumerate} \end{corollary} \begin{proof} First note that $\cZ$ is a $\cZ^h_{\lambda,q,\kappa,p}$-process with $\lambda=b+d$, $q_0=d/(b+d)$ and $q_2=b/(b+d)=1-q_0$.
Thus, $\lambda(\mu-1)=(b+d)(q_2-q_0)=b-d$ and $\nu=(b-d)/(\kappa\log\frac1p)$, so that \emph{1.} and \emph{2.} follow by insertion into Theorem \ref{thm:bp-arb.off.dis}. For \emph{3.} we derive $1-h(1-x)=1-q_0-q_2(1-x)^2=q_2x(2-x)$, yielding a simple recursion in \eqref{eq:bp-arb.off.dis-surv.prob.rec} and concluding the proof. \end{proof} \noindent In Section \ref{sec:pr-thm3} we will develop tools for the analysis of inhomogeneous birth-death processes with time-dependent disasters, generalising the setting of Corollary \ref{cor:results-homog-bd-proc}. To this end, consider the following definition, where the birth, death and disaster rates $b,d,\kappa$ as well as the survival probability $p$ are now given as functions of $t$. \begin{definition}\label{def:inhom-bd-w-dis} Let $b,d,\kappa: \R_+\to \R_+$ and $p:\R_+\to[0,1]$, where we abbreviate $b_s := b(s), d_s := d(s), \kappa_s :=\kappa(s)$ and $p_s := p(s)$. A Markov process $\cZ$ on $\N_0$ with time-dependent generator (see Section 4.7A of \citealp{EK86}) \begin{align*} \cG_{\cZ,t} f(z) &= b_tz\big(f(z+1)-f(z)\big) + d_tz\big(f(z-1)-f(z)\big)\\[1em] &\qquad +\kappa_t\sum_{k=0}^z\genfrac(){0pt}{}zkp_t^k(1-p_t)^{z-k}\big(f(k)-f(z)\big) \end{align*} for $t\geq 0$ and $f\in \mathcal B(\N_0)$, is called an \emph{inhomogeneous birth-death-process with birth-rate $b$ and death-rate $d$, subject to binomial disasters with survival probability $p$ occurring at rate $\kappa$} and will be denoted by $\cZ^{in}_{b,d,\kappa,p}$. \end{definition} \noindent Key to our approach is Lemma \ref{lem:inhom-duality}, which computes the conditional pgf, delivering a \emph{stronger} form of duality and enabling us with Proposition \ref{prop:inhom-limit} to easily give pgf limit results in terms of that dual process. While these tools offer room for further generalisation (cf. Remark \ref{rem:prop:inhom-limit}), we give the following theorem as an example of application, where we also make use of Theorem \ref{thm:asymptotics-of-D-integral}.
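Before turning to the inhomogeneous case, we note that the explicit formula of Corollary \ref{cor:results-homog-bd-proc}.\emph{3} is straightforward to evaluate numerically. A minimal sketch with illustrative rates; as a sanity check, for $\kappa=0$ the formula collapses to the classical survival probability $1-(d/b)^k$ of a birth-death process, and any $\kappa>0$ lowers it:

```python
import math

# Sketch evaluating the explicit survival probability of Corollary 4.3 for a
# homogeneous birth-death process with binomial disasters (illustrative rates;
# valid in the supercritical regime b - d > kappa * log(1/p)).

def survival_prob(k, b, d, kappa, p):
    """P_k(Z_t -> infinity) starting from k individuals."""
    lg = math.log(1.0 / p)
    if b - d <= kappa * lg:
        raise ValueError("requires b - d > kappa * log(1/p)")
    total = 0.0
    for l in range(1, k + 1):
        prod = 1.0
        for m in range(1, l):
            prod *= 1.0 - (d * m + (1.0 - p**m) * kappa) / (b * m)
        total += math.comb(k, l) * (-1.0)**(l - 1) * prod
    return (1.0 - (d + kappa * lg) / b) * total
```

For $\kappa=0$ the alternating sum telescopes to $(1-(d/b)^k)/(1-d/b)$, recovering the classical result after multiplication by the prefactor.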
\begin{theorem}\label{thm:inhom-bdp} Let $b,d,\kappa$ be non-negative right-continuous functions on $\R_+$ with left limits and $p: \R_+\to[0,1]$ left-continuous with right limits such that $p_t=0$ only if $\kappa_t=0$ and, letting $\Lambda_\kappa(t):=\int_0^t\kappa_u\,du$ and $\Lambda_\kappa^{-1}(t):=\inf\{s>0:\Lambda_\kappa(s)>t\}$, such that the map $-\log(p(\Lambda_\kappa^{-1}(\cdot)))$ is regularly varying. Furthermore, let $h:\R_+ \to \R_+$ be continuous and non-decreasing with $\lim_{t\to\infty}t^{-\alpha}h(t)=\infty$ for some $\alpha>0$, as well as $\iota\in\{-1,1\}$ such that \begin{align*} \frac1{h(t)}\int_0^t\Big(b_s - d_s - \kappa_s\log\Big(\frac1{p_s}\Big)\Big)ds \xrightarrow{t\to\infty}\iota. \end{align*} Then, $\cZ:=\cZ^{in}_{b,d,\kappa,p}$ satisfies \begin{enumerate} \item if $\iota=1$ and for some $\varepsilon>0$ holds $\displaystyle\int_0^\infty e^{-(1-\varepsilon)h(s)}b_sds<\infty$, then $\Pw(Z_t\xrightarrow{t\to\infty}0)<1$. \item if $\iota=-1$ or for some $\varepsilon>0$ holds $\displaystyle\int_0^\infty e^{-(1+\varepsilon)h(s)}b_sds=\infty$, then $\Pw(Z_t\xrightarrow{t\to\infty}0)=1$. \end{enumerate} \end{theorem} \begin{remark}\ \begin{enumerate} \item For $h(t)=t$ and $b,d,\kappa,p$ constant, this result aligns with the homogeneous case (cf. Corollary \ref{cor:results-homog-bd-proc}). \item The regular variation condition on $p$ and $\kappa$ is equivalent to the existence of $\beta\in\R$ and some slowly varying function $\ell$, such that $ p(t) = \exp(-\Lambda_\kappa(t)^\beta\ell(\Lambda_\kappa(t))). $ We need $t^{-\alpha}h(t)\to\infty$ for some $\alpha>0$ to handle the case $\beta=-1$ in which Theorem \ref{thm:asymptotics-of-D-integral} is inconclusive. If $\beta\neq-1$, one may choose $\alpha=0$. \item The condition that $\kappa=0$ whenever $p=0$ ensures that no \emph{terminal} disasters occur, i.e. disasters that render $\cZ$ extinct with probability 1. Dropping this, $\int_0^t\kappa_s\log(1/p_s)ds$ might no longer be finite.
However, letting $\kappa^-_t:=\kappa_t\cdot\delta_{0,p_t}$ and $\kappa^+_t:=\kappa_t-\kappa^-_t$, it is possible to apply Theorem \ref{thm:inhom-bdp} to the process $\cZ=\cZ^{in}_{b,d,\kappa^+,p}$ without the terminal disasters and separately compute the probability $\pi_{term}$ that at least one terminal disaster occurs, which for a unit-rate Poisson process $(P_t)$ satisfies \begin{align*} \pi_{term} &= \Pw(P_{\int_0^\infty\kappa^-_tdt}>0) = 1 - \exp\Big(-\int_{t:\,p(t)=0}\kappa_t\,dt\Big). \end{align*} Since the sets $\{\kappa^+>0\}$ and $\{\kappa^->0\}$ are disjoint, the respective counts of disasters on these sets are independent. This implies that the terminal disasters only affect the positivity of the survival probability if $\int_{t:\,p(t)=0}\kappa_t\,dt=\infty$. \item The cases where a normalisation function $h$ as in Theorem \ref{thm:inhom-bdp} does not exist are discussed in Remark~\ref{rem:inhom-conv-rates}.1. Rates of convergence for the survival probability in case \emph{2.} will briefly be discussed in Remark \ref{rem:inhom-conv-rates}.2. \end{enumerate} \end{remark} \subsection{Continuous State Branching Processes with Binomial Disasters} The application of $p$-jump processes is not limited to branching processes with discrete states. We will now discuss survival and extinction for continuous state branching processes (see e.g.\ \citealp{Lambert2008} for an overview) with binomial disasters. A similar model is studied in \cite{Bansaye2013}, where multiplicative jumps with arbitrary factors occur according to some intensity measure. Their Theorem~1 shows existence and uniqueness of the process we now define. \begin{definition}\label{def:csbp-w-dis} Let $N$ be a measure on $\mathbb R_+$ with $\int_0^\infty\min(y,y^2)N(dy)<\infty$, $b\in\R, c, \kappa \in \R_+$ and $p\in(0,1)$.
Then, the $\R_+$-valued Markov process with generator $$ \mathcal G_{\mathcal Z}f(z) = bzf'(z) + c z f''(z) + \kappa (f(pz) - f(z)) + \int_0^\infty (f(z+y) - f(z) - yf'(z))z N(dy), $$ for $f \in \mathcal C_b^2(\mathbb R_+)$ (the space of bounded, twice continuously differentiable functions) is called the \emph{continuous state branching process with $p$-disasters} and will be denoted by $\cZ^{cs}_{b,c,N,\kappa,p}$. \end{definition} \noindent Viewing a continuous state branching process as a scaling limit of discrete branching processes, the law of large numbers suggests that the analogue of a binomial disaster is a $p$-jump of the population size, represented by the term $\kappa(f(pz)-f(z))$ in $\cG_\cZ$. The next theorem is similar to (but less precise than) Corollary~6 of \cite{Bansaye2013}, but does not need their restrictions $\int_0^\infty y^2N(dy)<\infty$ and $c>0$. \begin{theorem}\label{thm:cont-state} Let $\cZ = \cZ^{cs}_{b,c,N,\kappa,p}$ be a continuous state branching process with $p$-disasters as in Definition \ref{def:csbp-w-dis} and let $$\alpha: \R_+ \to\R,x\mapsto bx-cx^2 - \int_0^\infty(e^{-xy}-1+xy)N(dy)$$ satisfy $\limsup_{x\to\infty}x^{-(1+\varepsilon)}\alpha(x)<0$ for some $\varepsilon>0$. \begin{enumerate} \item If $b\leq\kappa\log\frac1p$, $Z_t\xrightarrow{t\to\infty}0$ almost surely and \begin{align*} \lim_{t\to\infty}-\tfrac1t\log\Pw(Z_t>0) &= \begin{cases} (1-p)\kappa - b & \text{ if }b\leq \kappa p\log\frac1p,\\ \kappa - \frac\kappa\gamma(1+\log\gamma) & \text{ otherwise,} \end{cases} \end{align*} where $\gamma:= \kappa\log(\frac1p)/b$. \item If $b>\kappa\log\frac1p$, $\lim_{t\to\infty}\Pw(Z_t=0|Z_0=z)\in(e^{-z\xi},1)$, where $\xi$ denotes the largest root of $\alpha$. \end{enumerate} \end{theorem} \noindent We will only give an outline of the proof, since its structure is illustrative for the proofs to come and it differs from them only in details. \begin{proof}[Sketch of proof.]
By a rescaling argument, it suffices to treat the case $\kappa=1$. Setting $H(x,z) := e^{-xz}$ and applying the generator to the function $z\mapsto H(x,z)$ for $x$ fixed gives \begin{align*} \mathcal G_{\mathcal Z} H(x,.)(z) &= \Big( bx - cx^2 - \int_0^\infty (e^{-xy} - 1 + xy)N(dy)\Big)\frac{\partial H}{\partial x}(x,z) + H(xp,z) - H(x,z). \end{align*} In other words (cf. \eqref{eq:dual-gens} at the beginning of Section \ref{sec:bps}), $\mathcal Z$ is dual to the $p$-jump process with drift $\alpha$. Now, note that $\xi=\bar x_\alpha<\infty$, since $\limsup_{x\to\infty}x^{-(1+\varepsilon)}\alpha(x)<0$ and thus, $\alpha(x)\to-\infty$ as $x\to\infty$. Hence, either $c>0$ or $N((0,\infty))>0$. In any case, for all $x>0$ \begin{align*} \alpha''(x) &= -2c - \int_0^\infty y^2e^{-xy}N(dy) < 0 \end{align*} and $\alpha$ is strictly concave. Hence, Corollary \ref{cor:p-jump-concave} applies with $\alpha'_0=\alpha'(0)=b$.\\ Since $\limsup_{x\to\infty}x^{-(1+\varepsilon)}\alpha(x)<0$, $\cX$ comes down from infinity in the sense that $\cX$ is well-defined in the limit of $X_0\to\infty$. It follows that \begin{align*} \Pw(Z_t=0|Z_0=z) &= \lim_{x\to\infty}\E[e^{-xZ_t}|Z_0=z] = \E[e^{-zX_t}|X_0=\infty]. \end{align*} Using that $x-\frac{x^2}2\leq 1-e^{-x}\leq x$ for all $x\geq0$ to estimate the rates of convergence, Corollary \ref{cor:p-jump-concave} concludes the proof, where the almost sure convergence in \emph{1.} comes from the fact that $\cZ$ is a supermartingale. (This can be seen by computing the martingale given by $Z_t\exp(-\int_0^t\frac{\cG_\cZ\text{id}(Z_s)}{Z_s}ds)$.) \end{proof} \begin{remark} In the case $b>\kappa\log\frac1p$, we have \begin{align*} \Pw(Z_t=0|Z_0=z) &\xrightarrow{t\to\infty} \E[e^{-zX_\infty}] =: L(z), \end{align*} the Laplace transform of the limit of $\cX$.
Using stationarity it follows from $\E[\cG_\cX f(X_\infty)]=0$ for $f(x)=e^{-zx}$, that $L$ has to satisfy the functional equation \begin{align*} \kappa(L(pz) - L(z)) + bzL'(z) + czL''(z) + z\int_0^\infty \big(L(z+y) - L(z) - y L'(z)\big) N(dy) = 0, \end{align*} which for appropriate $N$ might deliver a more precise result than Theorem \ref{thm:cont-state}.\emph{2.} \end{remark} \section{Piecewise Deterministic Markov Processes}\label{sec:pdmps} In this section, we start by proving Theorem \ref{thm:convergence-$p$-jump-pros}, mainly by applying large deviation results for Poisson processes, given in Appendix \ref{sec:ldp}, and the work of \cite{BladtNielsen2017} regarding regenerative processes. Using this, we prove Corollary \ref{cor:p-jump-concave} in Section~\ref{sec:pr-cor-concave} for concave $\alpha$, which offers a more concise result, including continuity at $p\in\{0,1\}$. \subsection{Proof of Theorem \ref{thm:convergence-$p$-jump-pros}} \begin{proof} To begin, suppose that the theorem already holds for $p$-jump processes on $[0,1]$, let $\alpha$ be as in the assumptions, $s:=\max\{X_0,s_\alpha\}$ and consider $a:[0,1]\to\R,x\mapsto\alpha(sx)/s$. It is straightforward to show that $a$ satisfies the conditions of Definition \ref{def:p-jump-process} as well as the ones of Theorem \ref{thm:convergence-$p$-jump-pros} such that there is a $p$-jump process $\overline\cX=(\overline X_t)$ with drift $a$ on $[0,1]$ starting in $X_0/s$ for which the assertions of the theorem hold, where $a'_0=\alpha'_0$, $\hat{a}=\hat\alpha$ and $x_a=x_\alpha/s$. Also, $a(x)\geq\delta x-\vartheta x^2$ for all $x\in[0,1]$ iff $\alpha(x)\geq\delta x - \frac\vartheta sx^2$ for all $x\in[0,s]$. Considering that $t\mapsto X_t:=s\overline X_t$ also performs $p$-multiplicative jumps at rate 1 and in between satisfies $\dot X_t=s\cdot a(\overline X_t)=\alpha(X_t)$, we obtain that the theorem holds.
Hence, without loss of generality, we assume for the rest of the proof that $I=[0,1]$.\\[.5em] Note that in $(C_1)$, $\alpha$ is Lipschitz-continuous on the whole interval $[0,1]$, while in $(C_2)$, there is an $x^+>0$ such that $\alpha(x)>0$ for all $x\in(0,x^+]$. In the latter case, the initial value problem with $f'=\alpha(f)$ and $f(0)>0$ is equivalent to the restriction $f'=\alpha\big|_{[\min\{f(0),x^+\},1]}(f)$, since $\alpha(\min\{f(0),x^+\})>0$, where $\alpha$ is Lipschitz-continuous on $[\min\{f(0),x^+\},1]$. In either case, by the Picard-Lindel\"of Theorem, for every deterministic piece on an interval between two jumps $[\tau_k,\tau_{k+1})$ the initial value problem with $\dot X_t = \alpha(X_t)$ and $X_{\tau_k}:=pX_{\tau_k-}>0$ has a unique solution. Additionally, the bound $\alpha(1)\leq0$ ensures that $\cX$ does not leave the interval $[0,1]$. Thus, $\cX$ is well-defined. (Also note that, since by definition $\alpha'_0\leq\hat\alpha$, $(C_2)$ only concerns case \emph{2.} Moreover, $\alpha'_0<\infty$ only if $\alpha(0)=0$, in which case $\alpha'_0=\alpha'(0)$.)\\[1em] \emph{1.} Let $\beta(y):=e^y\alpha(e^{-y})$ and $Y_t:=-\log X_t\in \R_+$. Then, the process $\mathcal Y:=(Y_t)_{t\geq0}$ has the generator \begin{align*} \mathcal G_{\mathcal Y}g(y) &= \big(\mathcal G_{\mathcal X}(g\circ(-\log))\big)(e^{-y})\\[1em] &= g\big(-\log(pe^{-y})\big) - g(y) + \alpha(e^{-y})\cdot\big(-\tfrac1xg'(-\log x)\big)\big|_{x=e^{-y}}\\[1em] &= g(y+\log\tfrac1p) - g(y) - \beta(y)g'(y). \end{align*} The assumption implies that $\beta(y)=\frac1{e^{-y}}\alpha(e^{-y})\leq\hat\alpha<\log\tfrac1p$ for all $y\in \R_+$. Letting $(P_t)$ be the unit-rate Poisson process jumping simultaneously with $\mathcal Y$, it follows that \begin{align} P_t\cdot\log\tfrac1p - t\hat\alpha \leq Y_t\label{eq:bound-Y} \end{align} for all $t$, since the jumps are identical and the left-hand side starts at 0 and has pointwise smaller drift.
The law of large numbers, giving us $\lim_{t\to\infty}P_t/t=1$ and thus almost surely $\liminf_t(Y_t/t)\geq\log\tfrac1p-\hat\alpha>0$, shows that $-\log X_t=Y_t\longrightarrow_{t\to\infty}\infty$ with probability 1.\\[0.5em] $(U_1)$ Using the generator of $\cX$ on $f(x)=x^k$, we obtain \begin{align} \tfrac d{dt}\E[X_t^k] &= \E\Big[\big(\cG_{\cX}(\cdot)^k\big)(X_t)\Big] = \E[p^kX_t^k - X_t^k + \alpha(X_t)kX_t^{k-1}] \leq \E[X_t^k](p^k-1+k\hat\alpha). \label{eq:U1} \end{align} Considering Gronwall's inequality, it is straightforward to deduce $(U_1)$ (even for arbitrary $\hat\alpha$).\\[0.5em] $(U_2)$ For the (stronger) upper bound in the case of $\hat\alpha>p^k\log\tfrac1p$ we reuse $\eqref{eq:bound-Y}$ to compute \begin{align} \E[X_t^k] &= \E[e^{-kY_t}]\notag\\[1em] &\leq u(t) := \E[e^{-k(P_t\log(1/p)-t\hat\alpha)}\wedge 1]\label{eq:u(t)}\\[1em]\notag &= \mathbb P\Big(P_t \leq \frac{\hat\alpha}{\log(1/p)}t\Big) + \sum_{\ell\geq\hat\alpha t/(\log(1/p))} e^{k(\hat\alpha t-\log(1/p)\ell)} e^{-t}\frac{t^\ell}{\ell!}\\ \notag &= \underbrace{\mathbb P\Big(P_t \leq \frac t\lambda\Big)}_{=:u_1(t)} +\underbrace{e^{-t(1-p^k-\hat\alpha k)}\mathbb P\Big(P_{p^kt}\geq\frac1{p^k\lambda}p^kt\Big)} _{=:u_2(t)}. \end{align} First, using \eqref{lld:ld2} of Lemma \ref{lem:ld-PP} with $x = \tfrac1\lambda < 1$, we compute \begin{align*} -\frac1t\log u_1(t) &\xrightarrow{t\to\infty} 1 - \frac1\lambda + \frac1\lambda\log\frac1\lambda = 1 - \tfrac1\lambda(1+\log\lambda) =:A_{\lambda}. \end{align*} For the second term, $u_2$, we see that $p^k\lambda<1$ and thus, by \eqref{lld:ld1} of Lemma \ref{lem:ld-PP}, \begin{align*} - \frac 1t \log u_2(t) \xrightarrow{t\to\infty} \ &1 - p^k - \hat\alpha k + p^k\Big(1 - \frac1{p^k\lambda} + \frac1{p^k\lambda}\log\frac1{p^k\lambda}\Big)\\[1em] =\ &1 - \hat\alpha k - \frac1\lambda\Big(1 - \log\frac1\lambda - k\log\frac1p\Big) = A_{\lambda}. 
\end{align*} Combining these two results, we obtain $\lim_{t\to\infty}\tfrac1t\log u(t)=-A_\lambda$, which yields the desired bound.\\[0.5em] To recognise that the bound in $(U_2)$ is in fact stronger than that of $(U_1)$, we need to verify that $1-p^k-\hat\alpha k\leq A_\lambda$, if $p^k\lambda<1$. In this case, the function $h:x\mapsto x+\tfrac1\lambda\log\tfrac1x$ is strictly decreasing on $[p^{k},\lambda^{-1}]$, since $h'(x) = 1 - \tfrac1{\lambda x}$. Thus, inserting $\hat\alpha=\tfrac1\lambda\log\tfrac1p$, the difference satisfies \begin{align*} A_\lambda - (1-p^k - \hat\alpha k) &= p^k + \frac1\lambda\log\frac1{p^k} - \frac1\lambda(1+\log\lambda)\\[1em] &= h(p^k)-h(\lambda^{-1}) > 0. \end{align*} $(L_1)$ and $(L_2)$: Let $w:[0,1]\to\R,x\mapsto\delta x-\vartheta x^2$. Then, $\cX$ is bounded below by a process $\mathcal W=(W_t)_t$ with generator \begin{align*} \mathcal G_{\mathcal W}f(x) &= f(px)-f(x) + w(x)f'(x), \end{align*} if $\cX$ and $\mathcal W$ have equal initial values and are coupled in such a way that they jump simultaneously at the jump times of a Poisson process $(P_t)$. Now, we will show that the moments of $\mathcal W$ have the desired asymptotic properties, using that $\mathcal W$ can be represented explicitly via \begin{align} W_t &= \frac{p^{P_t}e^{\delta t}}{X_0^{-1}+\vartheta\int_0^tp^{P_s}e^{\delta s}ds}.\label{eq:W-explicit} \end{align} Clearly, $\mathcal W$ starts in $X_0$ and has the desired jumps at the times of $P$, since $W_t/p^{P_t}$ is continuous. To verify the desired deterministic growth, let $\eta(t):=X_0^{-1}+\vartheta\int_0^tp^{P_s}e^{\delta s}ds$, the denominator of $W_t$, and assume that $P$ does not jump in an interval $(t-\varepsilon,t+\varepsilon)$. Then, \begin{align*} \tfrac d{dt}W_t &= \frac{p^{P_t}\delta e^{\delta t}\eta(t)-p^{P_t}e^{\delta t}\eta'(t)}{\eta(t)^2} = W_t\cdot\frac{\delta\eta(t)-\vartheta p^{P_t}e^{\delta t}}{\eta(t)} = W_t(\delta-\vartheta W_t) = w(W_t).
\end{align*} Next, we consider the process $\overline{\mathcal W}$ that arises from $\mathcal W$ by exchanging for every $t$ the path $(P_s)_{0\leq s\leq t}$ with its time-reversal $(\overline P_s^t)_{0\leq s\leq t}$ via $\overline P_s^t := P_t-P_{t-s}$, given by \begin{align*} \overline W_t &= \frac{p^{\overline P^t_t}e^{\delta t}}{X_0^{-1}+\vartheta\int_0^tp^{\overline P_s^t}e^{\delta s}ds} = \frac1{X_0^{-1}p^{-P_t}e^{-\delta t}+\vartheta\int_0^tp^{-P_{t-s}}e^{-\delta(t-s)}ds}\\[1em] &= \Big(X_0^{-1}p^{-P_t}e^{-\delta t}+\vartheta\int_0^tp^{-P_s}e^{-\delta s}ds\Big)^{-1}. \end{align*} Now, since $(P_s)_{0\leq s\leq t}$ is equal in distribution to $(\overline P_{s+})_{0\leq s\leq t}$ for every $t>0$, where $\overline P_{s+} := \lim_{r\downarrow s} \overline P_r$ denotes the right-side limit, we also have that $W_t\overset d=\overline W_t$ for all $t$ and in particular $\E[W_t^k]=\E[\overline W_t^k]$. Also note that, in contrast to $W_t$, $\overline W_t$ decreases if the path of $(P_t)$ increases pointwise. Now, using this and \eqref{lld:ld3} of Lemma \ref{lem:ld-PP}, we obtain for $(L_2)$ (even for arbitrary $\delta>0$), considering that $\delta\leq\hat\alpha$ and thus $\gamma\geq\frac1{\hat\alpha}\log\frac1p>1$ as well as $\delta=\tfrac1\gamma\log\tfrac1p$, \begin{align*} \frac{1}{t} \log \E[X_t^k] &\geq\frac1t\log\E[\overline W_t^k]\\ &= \frac1t\log\E\Big[\Big(X_0^{-1}p^{-P_t}e^{-\delta t} + \vartheta\int_0^tp^{-P_s}e^{-\delta s}ds\Big)^{-k}\Big]\\ &\geq \frac1t \log \E\Big[\Big(X_0^{-1}p^{-P_t}e^{-\delta t} + \vartheta\int_0^tp^{-P_s}e^{-\delta s}ds\Big)^{-k}, P_s\leq \tfrac s\gamma\text{ for all }s\leq t\Big]\\ &\geq \frac 1t \log\Bigg(\Big(X_0^{-1}e^{(\frac1\gamma\log\frac1p-\delta)t} + \vartheta\int_0^t e^{(\frac1\gamma\log\frac1p-\delta)s}ds\Big)^{-k} \mathbb P(P_s \leq \tfrac s\gamma\text{ for all }s\leq t)\Bigg)\\ &\xrightarrow{t\to\infty} -k\cdot0 - (1-\tfrac1\gamma(1+\log\gamma)) = -A_\gamma.
\end{align*} For the case $\delta\leq p^k\log\tfrac1p$, we deduce analogously \begin{align*} \frac{1}{t} \log \E[X_t^k] &\geq \frac1t\log\E\Big[\Big(X_0^{-1}p^{-P_t}e^{-\delta t} + \vartheta\int_0^tp^{-P_s}e^{-\delta s}ds\Big)^{-k}\Big]\\ &\geq \frac 1t \log\Bigg(\Big(X_0^{-1}e^{(p^k\log\frac1p-\delta)t} + \vartheta\int_0^t e^{(p^k\log\frac1p-\delta)s}ds\Big)^{-k} \mathbb P(P_s \leq p^ks\text{ for all }s\leq t)\Bigg)\\ &\xrightarrow{t\to\infty} -k(p^k\log\tfrac1p-\delta) - (1 - p^k + p^k\log(p^k)) = \delta k - (1 - p^k). \end{align*} With the same argument as at the end of the proof of $(U_2)$, one can recognise that the bound in $(L_1)$ is stronger than the one involving $A_\gamma$ if $\delta\leq p^k\log\tfrac1p$.\\[0.5em] \emph{2.} Recalling that $Y_t:=-\log X_t$ and letting $T_z:=\inf\{t>0:Y_t=z\}$, we show that (i) there is $z\geq 0$ such that $\E[T_z|Y_0=z] < \infty$ and (ii) $\mathbb P(T_z<\infty|Y_0 = y) = 1$ for all $y\geq 0$:\\ Since $\alpha'_0>\log\tfrac1p$, there have to be $\xi>0$ and $\varepsilon>0$ such that $\alpha(x)/x \geq (1+\varepsilon)\log\tfrac1p$ for all $x\in(0,\xi]$. Setting $z=-\log\xi<\infty$, we see that $\beta(y)\geq(1+\varepsilon)\log\tfrac1p=:\zeta$ for all $y\geq z$. We define \[ S := S_{(z, z+\log(1/p)]} := \inf\{t:z<Y_t\leq z+\log(1/p)\}. \] Then, $\E[S | Y_0 = y] < \infty$ for all $y\leq z$. Indeed, the probability of at least $z/\log(1/p)$ jumps in some small time interval of length $\varepsilon'>0$ is positive. After the first such time interval we can be sure that $S$ has occurred. By finiteness of first moments of geometric distributions, $\E[S | Y_0 = y] < \infty$ follows. By a restart argument, it remains to show that $\E[T_z | Y_0 = y] < \infty$ for all $z<y\leq z+\log(1/p)$, which will be done by a comparison argument. For this, let $\mathcal R = (R_t)_{t\geq 0}$ be a process with generator \[ \cG_{\mathcal R}g(y) = g(y + \log(1/p)) - g(y) -\zeta g'(y).
\] If $z < R_0 = Y_0 \leq z + \log(1/p)$, then -- using the same Poisson processes for $\mathcal Y$ and $\mathcal R$ -- we have that $T_z \leq T_z^{\mathcal R} := \inf\{t\geq 0:R_t = z\}$ since $\beta(y) \geq \zeta$ for $y\geq z$. Analogously to the argument following \eqref{eq:bound-Y}, we see that $R_t\to-\infty$ as $t\to\infty$ almost surely, which implies that $T_z^{\mathcal R}<\infty$. Since $(R_t - R_0 + t(\zeta - \log(1/p)))_{t\geq 0}$ is a martingale, we have by optional stopping that $ \E[R_0 - R_{T^{\mathcal R}_z}] = R_0 - z = (\zeta - \log(1/p)) \E[T_z^{\mathcal R}] = \varepsilon \log(1/p) \E[T_z^{\mathcal R}] $, hence $\E[T_z^{\mathcal R} | Y_0 = R_0] \leq 1/\varepsilon < \infty$. It is now straightforward to obtain the properties (i) and (ii).\\ Now, by (i) and homogeneity, $\cY$ is a positively recurrent delayed regenerative process in the sense of Definition 7.1.1 in \cite[p. 387]{BladtNielsen2017} with regeneration cycles starting at the state $z$ (and so is $\cX$ with cycles starting in $\xi$). By (ii), the delay is almost surely finite. Hence, Theorem 7.1.4 in \cite[p. 388]{BladtNielsen2017} gives us weak convergence of $\cY$ to a finite random variable $Y_\infty$ and thus $\cX$ also has a weak limit $X_\infty:=e^{-Y_\infty}>0$. Furthermore, recalling that $x_\alpha=\min\{x\in(0,1]:\alpha(x)=0\}$, this minimum is well-defined since $\alpha$ is positive on $(0,\xi)$, Lipschitz-continuous on $[\xi,1]$, $\alpha'_0>0$ and $\alpha(1)\leq0$. This implies that the hitting time of $[0,x_\alpha)$ of $\cX$ is almost surely finite. After that, $\cX$ will never hit $[x_\alpha,1]$ again, since it will always jump before it can reach $x_\alpha$. Thus, $\Pw(X_\infty\in(0,x_\alpha))=1$. \noindent If $f:[0,1]\to\R$ is measurable and bounded, Theorem 7.1.6 of \cite[p.
391]{BladtNielsen2017} as well as the weak convergence give us that almost surely \begin{align} \lim_{t\to\infty}\frac1t\int_0^tf(X_s)ds &= \lim_{t\to\infty}\frac1t\int_0^t\E[f(X_s)]ds = \E[f(X_\infty)].\label{eq:ergodic-thm-for-X} \end{align} Since for $t>0$ and a continuous and bounded function $f$ the distribution $\mu$ of $X_\infty$ satisfies \begin{align*} \E_{\mu}[f(X_t)] &= \E\big[\E_{X_\infty}[f(X_t)]\big] = \lim_{s\to\infty}\E\big[\E_{X_s}[f(X_t)]\big] = \lim_{s\to\infty}\E[f(X_{t+s})] = \E[f(X_\infty)], \end{align*} $\mu$ is a stationary distribution. Letting $\nu$ be a stationary distribution of $\cX$, we obtain from \eqref{eq:ergodic-thm-for-X} that for every $t>0$ \begin{align*} \E_\nu[f(X_1)] &= \frac1t\int_0^t\E_\nu[f(X_s)]ds \xrightarrow{t\to\infty} \E[f(X_\infty)], \end{align*} by which the stationary distribution must be unique. In particular, by stationarity $\E[\cG_\cX f(X_\infty)]=0$ holds. Hence, choosing $f(x)=\log(\rho+x)$ for $\rho>0$, we obtain \begin{align*} 0 &= \E\Big[\log\Big(\frac{\rho+pX_\infty}{\rho+X_\infty}\Big) + \frac{\alpha(X_\infty)}{\rho+X_\infty}\Big] \xrightarrow{\rho\to0} -\log(\tfrac1p) + \E[X_\infty^{-1}\alpha(X_\infty)] \end{align*} by monotone convergence, since $X_\infty>0$ almost surely. (Note here that this argument works in both cases $(C_1)$ and $(C_2)$.) On the other hand, choosing $f(x)=x^k$, it follows that \begin{align*} 0 &= \E[(pX_\infty)^k - X_\infty^k + \alpha(X_\infty)kX_\infty^{k-1}], \end{align*} which implies $\displaystyle \E[X_\infty^k] = \frac k{1-p^k}\E[X_\infty^{k-1}\alpha(X_\infty)]. $ \end{proof} \begin{remark}\label{rem:prop:convergence-pw.det.MPs} \ \begin{enumerate} \item For $\alpha'_0<\log\tfrac1p<\hat\alpha$, Theorem \ref{thm:convergence-$p$-jump-pros} is inconclusive. The weak estimates using $\hat\alpha$ in \eqref{eq:bound-Y} and \eqref{eq:U1} offer room for improvement. Also note here that for the application of $(L_1)$ and $(L_2)$ for $\alpha(0)=0$, one can often choose $\delta=\alpha'(0)=\alpha'_0$.
\item Using the notion of \emph{strong large deviations}, e.g. \cite[Theorem 3.5, p.1868]{ChagantySethuraman1993}, one can compute exact asymptotics: \begin{enumerate} \item If $x>1$, then $\displaystyle{ \mathbb P(P_t\geq xt) \sim \frac{\sqrt x}{x-1} \cdot\frac{e^{-t(1-x+x\log x)}}{\sqrt{2\pi t}}}$, \item If $0<x<1$, then $\displaystyle{ \mathbb P(P_t\leq xt) \sim \frac1{(1-x)\sqrt x} \cdot\frac{e^{-t(1-x+x\log x)}}{\sqrt{2\pi t}}}$, \end{enumerate} where $\sim$ denotes asymptotic equivalence, i.e. $f\sim g\Leftrightarrow f(t)/g(t)\to 1$ as $t\to\infty$. Applying these to $u_1$ and $u_2$ in \eqref{eq:u(t)} would provide more precise upper bounds in $(U_1)$ and $(U_2)$. For similarly stronger bounds in $(L_1)$ and $(L_2)$, however, one would need strong large deviation results for the whole path of a Poisson process, as in \eqref{lld:ld3} of Lemma \ref{lem:ld-PP}. \item Dropping the boundedness of the state space of $\cX$, i.e. considering $s_\alpha=\infty$, \eqref{eq:bound-Y} holds with $\hat\alpha=\sup_{x\in\R_+}\tfrac1x\alpha(x)$, and so do \emph{1.} as well as $(U_1)$. Also, $(L_1)$ and $(L_2)$ still apply, starting $\mathcal W$ at $W_0:=\min\{1,X_0\}$. However, we need $\cY$ to be bounded from below to prove \eqref{eq:u(t)} and the finiteness of $\E[S|Y_0=y]$, and thus lose $(U_2)$ as well as the limit results of \emph{2.} \item Whenever $(X_t)$ is a $p$-jump process with drift $\alpha$, the process of the $k$th powers, $(X_t^k)$, is a $p^k$-jump process with drift $\alpha_k:x\mapsto kx^{1-\frac1k}\alpha(x^{\frac1k})$ satisfying $\hat\alpha_k=k\hat\alpha$ as well as $(\alpha_k)'_0=k\alpha'_0$. Hence, it would suffice to prove $(U_1)$ and $(U_2)$ for $k=1$. For $(L_1)$ and $(L_2)$ however, the lower bound on $\alpha$ only implies that $\alpha_k(x)\geq k\delta x - k\vartheta x^{1+\frac1k}$, such that for $k>1$ we cannot use the process $\mathcal W$ in \eqref{eq:W-explicit} as a lower bound process and the proof fails.
\end{enumerate} \end{remark} \subsection{Proof of Corollary \ref{cor:p-jump-concave}}\label{sec:pr-cor-concave} \begin{proof} The concavity of $\alpha$ implies that $x_\alpha$ coincides with the quantities $x_\alpha$ and $s_\alpha$ from Theorem \ref{thm:convergence-$p$-jump-pros}. Noting that $\alpha'(s)>-\infty$ if $I=\R^+$ and $s=\max\{x_\alpha,X_0\}$, by the same scaling argument as at the beginning of the proof of Theorem \ref{thm:convergence-$p$-jump-pros} we can assume that $I=[0,1]$.\\[.5em] We start with $p\in(0,1)$: Since $\alpha$ is concave, $x\mapsto\frac{\alpha(x)-\alpha(y)}{x-y}$ decreases on $(y,1]$ and $y\mapsto\frac{\alpha(x)-\alpha(y)}{x-y}$ decreases on $(0,x)$. Thus, by the assumptions, $|\frac{\alpha(x)-\alpha(y)}{x-y}|$ is bounded on each interval $[\varepsilon,1]$, and on all of $[0,1]$ if $\alpha'_0<\infty$. Hence, either $(C_1)$ or $(C_2)$ holds, $\alpha$ satisfies the assumptions of Theorem \ref{thm:convergence-$p$-jump-pros}, the existence of $\cX$ follows, and so does \emph{3.} by Theorem \ref{thm:convergence-$p$-jump-pros}.\emph{2.}\\ Recall that $\alpha'_0=\alpha'(0)$, if $\alpha'_0<\infty$. Now, for \emph{1.} we want to apply $(L_1)$ and $(L_2)$ of Theorem \ref{thm:convergence-$p$-jump-pros} using $\delta=\alpha'_0=\alpha'(0)$. Let us first assume that $\alpha''(0)>-\infty$. Then, there is $\vartheta>0$ such that $\alpha'(0)x-\vartheta x^2\leq\alpha(x)$ for all $x\in[0,1]$. (A Taylor expansion yields a $\vartheta_0$ such that $\alpha'(0)x-\vartheta_0x^2\leq\alpha(x)$ holds for $x$ in some interval $[0,\varepsilon)$. The boundedness of $\alpha$ ensures that we can choose $\vartheta_1$ such that $\alpha'(0)x-\vartheta_1x^2\leq\alpha(x)$ holds for $x\geq\varepsilon$. Let $\vartheta=\max\{\vartheta_0,\vartheta_1\}$.)\\ On the other hand, if $\alpha''(0)=-\infty$, $\alpha$ has no parabolic lower bound as we need for $(L_1)$ and $(L_2)$ (e.g. for $\alpha(x):=ax - bx^{3/2}$ with $b\geq a>0$).
We solve this by a coupling argument: let $p$ be fixed and for all $n$ define \begin{align*} \alpha_n(x) &= \min\{\alpha(x),\,n\alpha(\tfrac1n)x\}, \end{align*} the minimum of $\alpha$ and the secant of $\alpha$ intersecting at 0 and $\tfrac1n$. Clearly, the $\alpha_n$ satisfy the conditions of Theorem \ref{thm:convergence-$p$-jump-pros}, $\alpha_n\nearrow\alpha$ point-wise, $\alpha_n''(0)=0>-\infty$ and $\alpha_n(0)=0$ such that $(\alpha_n)'_0=\alpha_n'(0)$. Let $\cX^{(n)}$ be processes with respective generators \begin{align*} \cG_nf(x) &= f(px)-f(x)+\alpha_n(x)f'(x) \end{align*} coupled in such a way that they jump simultaneously with $\cX$ and each started in $X_0$. Then, $X^{(n)}_t\leq X_t$ for all $n$ and thus \begin{align*} \liminf_{t\to\infty}\tfrac1t\log\E[X_t^k] &\geq \sup_n\liminf_{t\to\infty}\tfrac1t\log\E[(X_t^{(n)})^k]. \intertext{ From the previous case, we obtain for every $n$ } \liminf_{t\to\infty}\tfrac1t\log\E[(X_t^{(n)})^k] &\geq \begin{cases} -(1-p^k-\alpha_n'(0)k) & \text{if }\alpha_n'(0)\leq p^k\log\tfrac1p\\[0.5em] -(1-\tfrac1{\gamma_n}(1+\log\gamma_n)) & \text{if }p^k\log\tfrac1p<\alpha_n'(0)<\log\tfrac1p, \end{cases} \end{align*} where $\gamma_n=\log\tfrac1p/\alpha_n'(0)$. Since $\alpha_n'(0)=\alpha(\tfrac1n)/(\tfrac1n)\nearrow\alpha'(0)$ and the bounds in either case increase in $\alpha_n'(0)$, we obtain "$\leq$" in \emph{1.} Since $\alpha$ is concave, it satisfies $\alpha'(0)=\hat\alpha$ and thus, $(U_1)$ and $(U_2)$ of Theorem \ref{thm:convergence-$p$-jump-pros} provide "$\geq$" and \emph{1.} follows.\\ For the case $\alpha'_0=\log\frac1p$, consider a family $(\cX^{(p)})_{p\in(0,1)}$ of processes with respective generators as in \eqref{eq:generator-of-p-jump-process} with $\alpha$ fixed and $p\mapsto X_0^{(p)}$ constant, coupled in such a way that the processes jump simultaneously. Then, for every $t$ the map $p\mapsto X_t^{(p)}$ is increasing.
Since $p^k\log\frac1p<\alpha'_0$ for $p$ near $p^\ast:=e^{-\alpha'_0}$, we conclude from \emph{1.} and the boundedness of the $\cX^{(p)}$ that \begin{align*} 0 &\geq \lim_{t\to\infty}\tfrac1t\log\E[(X_t^{(p^\ast)})^k] \geq \sup_{p<p^\ast}\lim_{t\to\infty}\tfrac1t\log\E[(X_t^{(p)})^k]\\[1em] &= \sup_{p<p^\ast}-\Bigg(1-\frac{\alpha'_0}{\log\tfrac1p} \Big(1-\log\Big(\frac{\alpha'_0}{\log\tfrac1p}\Big)\Big)\Bigg) = 0. \end{align*} Since $x\mapsto\frac1x\alpha(x) = \frac1x(\alpha(x)-\alpha(0)) + \frac1x\alpha(0)$ is decreasing, Theorem \ref{thm:convergence-$p$-jump-pros}.\emph{2.} implies \begin{align*} \alpha'_0 &\geq \E\Big[\liminf_{t\to\infty}\frac{\alpha(X_t^{(p^\ast)})}{X_t^{(p^\ast)}}\Big] \geq \sup_{p>p^\ast}\E\Big[\liminf_{t\to\infty}\frac{\alpha(X_t^{(p)})}{X_t^{(p)}}\Big] = \sup_{p>p^\ast}\log\tfrac1p = \alpha'_0 \intertext{and hence almost surely} &\liminf_{t\to\infty}\frac{\alpha(X_t^{(p^\ast)})}{X_t^{(p^\ast)}} = \lim_{t\to\infty}\frac{\alpha(X_t^{(p^\ast)})}{X_t^{(p^\ast)}} = \alpha'_0. \end{align*} Since $x\mapsto\alpha(x)/x$ is decreasing, $\limsup_tX^{(p^\ast)}_t$ has to be bounded by $m_\alpha:=\sup\{x\in[0,1]:\alpha(x) = x\alpha'_0\}$. Finally, if $\alpha$ is strictly concave near 0, $\alpha(x)/x$ strictly decreases near 0 and thus $m_\alpha = 0$, which concludes the proof for $p\in(0,1)$.\\[.5em] Considering $p=1$, we can ignore the jumps and $\cX$ becomes deterministic, i.e. the solution of $\dot X_t=\alpha(X_t)$. Then, if $\alpha'_0<0=\log\frac1p$, it holds that \begin{align*} \frac1t(\log(X_t)-\log(X_0)) &= \frac1t\int_0^t\frac{\dot X_s}{X_s}ds = \frac1t\int_0^t\frac{\alpha(X_s)}{X_s}ds \leq \alpha'_0 < 0. \end{align*} Hence, $X_t\to0$, $\alpha(X_t)/X_t\to\alpha'_0$ and $\frac1t\log(X_t^k)\to k\alpha'_0$, which aligns with \emph{1.} If, however, $\alpha'_0=0$, then $\alpha$ is non-positive by concavity. In that case, $\cX$ is constant if $\alpha(X_0)=0$, and it converges monotonically to $m_\alpha=\sup\{x\in[0,1]:\alpha(x)=0\}$ if $X_0>m_\alpha$.
Thus, $\lim_{t\to\infty}\frac1t\log(X_t)=\lim_{t\to\infty}\alpha(X_t)/X_t=0$, giving us \emph{2.} Lastly, in the case of $\alpha'_0>0$, $\cX$ will either grow towards $x_\alpha$ if started below, i.e. $X_0\in(0,x_\alpha)$, or fall towards it if started above. Either way, $X_t\to x_\alpha$, $\alpha(X_t)\to0$ and we obtain \emph{3.}\\[.5em] Choosing $p=0$, $\cX$ jumps to 0 after an exponentially distributed time $T$ with mean 1 and stays there indefinitely. Thus, we can write $X_t=Y_t\cdot\1_{\{T>t\}}$, where $(Y_t)$ is the deterministic process arising for $p=1$ discussed above. Then, $\cX$ converges to 0 almost surely and \begin{align*} -\frac1t\log(\E[X_t^k]) &= -\frac 1t\log\Pw(T>t) - \frac kt\log(Y_t) \xrightarrow{t\to\infty} 1 + \max\{0,-k\alpha'_0\}. \end{align*} \end{proof} \section{Branching processes with binomial disasters}\label{sec:bps} In the following subsections, we borrow ideas from the notion of duality of Markov processes; see Chapter 4.4 in \cite{EK86}. \noindent Recall that two Markov processes $\cZ = (Z_t)_{t\geq 0}$ and $\mathcal X = (X_t)_{t\geq 0}$ with state spaces $E$ and $E'$ are called dual with respect to the function $H: E\times E' \to\mathbb R$ if \begin{align}\label{eq:dual} \E[H(Z_t, x) | Z_0=z] = \E[H(z, X_t)|X_0=x]\tag{D} \end{align} for all $z\in E, x\in E'$. When one is interested in the process $\mathcal Z$, this relationship is most helpful if the process $\mathcal X$ is easier to analyse than the process $\mathcal Z$. Moreover, frequently, the set of functions $\{H(\cdot,x): x\in E'\}$ is separating on $E$ such that the left-hand side of~\eqref{eq:dual} determines the distribution of $Z_t$. In this case, the distribution of the simpler process $\mathcal X$ determines via~\eqref{eq:dual} the distribution of $\mathcal Z$, so analysing $\mathcal Z$ becomes feasible.
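\noindent Such duality relations can be checked mechanically at the level of generators. As a quick numerical sanity check (not part of the formal development; the values of $\lambda$, $p$ and the binary offspring distribution $q$ are illustrative), the following sketch evaluates the generator of a branching process with binomial disasters and that of its dual $p$-jump process on the duality function $H(x,z)=(1-x)^z$ used below, and confirms that they agree.

```python
import math

lam, p = 1.3, 0.4        # illustrative death rate and disaster survival probability (kappa = 1)
q = {0: 0.35, 2: 0.65}   # binary offspring distribution, q_0 + q_2 = 1

def h(s):                # offspring pgf h(s) = q_0 + q_2 * s^2
    return sum(qk * s**k for k, qk in q.items())

def H(x, z):             # duality function H(x, z) = (1 - x)^z
    return (1.0 - x)**z

def G_Z(x, z):
    # branching generator applied to z -> H(x, z): each of the z individuals
    # branches at rate lam, and a binomial disaster strikes at rate 1
    branch = lam * z * sum(qk * (H(x, z - 1 + k) - H(x, z)) for k, qk in q.items())
    disaster = sum(math.comb(z, l) * (p * (1 - x))**l * (1 - p)**(z - l)
                   for l in range(z + 1)) - H(x, z)
    return branch + disaster

def G_X(x, z):
    # p-jump generator with drift lam*(1 - x - h(1 - x)) applied to x -> H(x, z)
    dH_dx = -z * (1.0 - x)**(z - 1)
    return H(p * x, z) - H(x, z) + lam * (1.0 - x - h(1.0 - x)) * dH_dx

max_err = max(abs(G_Z(x, z) - G_X(x, z))
              for x in (0.1, 0.5, 0.9) for z in (1, 2, 5))
```

Since both generators agree on $H$, the evolutions of the two sides of \eqref{eq:dual} coincide, which is exactly the verification strategy used in the proofs below.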
\noindent There is no straightforward way to find dual processes, but they arise frequently in the literature; see \cite{JansenKurt2014} for a survey. Examples span reflected and absorbed Brownian motion, interacting particle models such as the voter model and the contact process, as well as branching processes. \noindent A simple way to verify \eqref{eq:dual} for homogeneous $\cZ$ and $\cX$ is to show that \begin{align} \tfrac\partial{\partial t}\E[H(Z_t,x)|Z_0=z]\big|_{t=0} &= \tfrac\partial{\partial t}\E[H(z,X_t)|X_0=x]\big|_{t=0}\label{eq:dual-gens}\tag{D'} \end{align} for all $z$ and $x$, since then both sides of \eqref{eq:dual} follow the same evolution. \subsection{Proof of Theorem \ref{thm:bp-arb.off.dis}}\label{sec:pr-thm2} In this section we will discuss branching processes of the form of Definition \ref{def:hom-bp-abr-off-dis}. Hence, let $\cZ:=\cZ^h_{\lambda,q,\kappa,p}$, where $\lambda\in(0,\infty)$ is the death-rate, $q=(q_k)_{k\geq0}$ the offspring distribution on $\N_0$, $p\in(0,1)$ the survival probability of the disasters that occur at the jump times of $(D_t)_{t\geq0}$, a Poisson process with rate $\kappa>0$. Moreover, let $h:[0,1]\to[0,1],x\mapsto\sum_{k\geq0}q_kx^k$ be the probability generating function of the offspring distribution. We start by establishing a suitable duality for $\kappa=1$. The general case will follow by a rescaling argument. \begin{lemma}\label{lem:arb-off-dualilty} Let $p\in[0,1]$, $\kappa=1$ and $(X_t)$ be a $p$-jump process with drift $x\mapsto\lambda(1-x-h(1-x))$, having the generator \begin{align} \mathcal G_{\mathcal X}f(x) &= f(px)-f(x) + \lambda\big(1-x-h(1-x)\big)f'(x) \label{eq:generator-dual-arb.off.dis} \end{align} for $f\in\mathcal C^1_b([0,1])$. Then, the duality relation \begin{align} \E[(1-X_t)^z|X_0=x] &= \E[(1-x)^{Z_t}|Z_0=z] \label{eq:duality-arb.off.dis} \end{align} holds for every $x\in[0,1],z\in\N_0$ and $t\geq0$.
\end{lemma} \begin{proof} Recalling the generator of $\cZ$ from Definition \ref{def:hom-bp-abr-off-dis}, we obtain for $x\in[0,1],z\in\mathbb N_0$ and $H(x,z):=(1-x)^z$ that \begin{align*} \big(\mathcal G_{\mathcal Z} & H(x,\,\cdot\,)\big)(z)\\ &= \lambda z(1-x)^{z-1}\sum_{k\geq0}q_k\big((1-x)^k-(1-x)\big) +\sum_{\ell=0}^z\genfrac(){0pt}{}z\ell\big(p(1-x)\big)^\ell(1-p)^{z-\ell} - (1-x)^z\\[1em] &= -\frac{\partial}{\partial x}(1-x)^z\cdot\lambda\big(h(1-x)-(1-x)\big) +\big(p(1-x)+(1-p)\big)^z - (1-x)^z\\[1em] &= \big(\mathcal G_{\mathcal X}H(\,\cdot\,,z)\big)(x), \end{align*} which is exactly \eqref{eq:dual-gens}. Hence, \eqref{eq:dual} gives us the desired relation. \end{proof} \noindent Now, we apply Corollary \ref{cor:p-jump-concave} to the dual process $\cX$, followed by the proof of Theorem \ref{thm:bp-arb.off.dis}. \begin{lemma}\label{lem:arb-off-appThm1} For $p\in[0,1]$, such a process $\cX$ as in Lemma \ref{lem:arb-off-dualilty} exists and satisfies \begin{enumerate} \item if $p=0$ or $h'(1) < 1+\tfrac1\lambda\log\tfrac1p$, $X_t\xrightarrow{t\to\infty}0$ almost surely. Also, for $k\geq1$ \begin{align*} \lim_{t\to\infty}-\tfrac1t\log\E[X_t^k] &= \begin{cases} 1+\max\{0,-k\lambda(h'(1)-1)\} & \text{if }p=0,\\ 1-p^k - k\lambda(h'(1)-1) & \text{if }h'(1)\leq 1+\tfrac{p^k}\lambda\log\tfrac1p,\\ 1-\tfrac1\gamma(1+\log\gamma) & \text{otherwise,} \end{cases} \end{align*} where $\gamma=\tfrac1\lambda\log\tfrac1p/(h'(1)-1)$. \item if $h'(1)=1+\tfrac1\lambda\log\tfrac1p$, $X_t\xrightarrow{t\to\infty}0$ almost surely, while $\frac1t\log\E[X_t^k]\xrightarrow{t\to\infty}0$ for all $k$. \item if $h'(1)\in(1+\tfrac1\lambda\log\tfrac1p,\infty]$, letting $x_\ast$ be the smallest fixed point of $h$, $\cX$ converges weakly to a random variable $X_\infty$ on $(0,1-x_\ast]$ that satisfies $\E[X_\infty^{-1}(1-h(1-X_\infty))]=1+\tfrac1\lambda\log\tfrac1p$ and for $k\geq1$ \begin{align} \E[X_\infty^k] &= \frac{\lambda k}{\lambda k + 1-p^k}\E[X_\infty^{k-1}(1-h(1-X_\infty))].
\label{eq:recursion-dual-moments-arb-off} \end{align} \end{enumerate} \end{lemma} \begin{proof} Since $h$ is a convex function, $\alpha:x\mapsto\lambda(1-x-h(1-x))$ is concave. Also, $\alpha(0)=\lambda(1-h(1))=0$, $\alpha(1)=-\lambda q_0\leq0$, $\alpha'_0=\alpha'(0)=\lambda(h'(1)-1)\in(-\lambda,\infty]$ and $\alpha'(1)=\lambda(h'(0)-1)=\lambda(q_1-1)\geq-\lambda>-\infty$. Hence, considering that $\alpha'(0)\leq p^k\log\tfrac1p$ iff $h'(1)\leq 1+\tfrac{p^k}\lambda\log\tfrac1p$ for all $k\geq1$, Corollary \ref{cor:p-jump-concave} implies \emph{1}. For \emph{3.}, noting that $\alpha(x)>0$ only if $0<x<1-x_\ast=x_\alpha$, only \eqref{eq:recursion-dual-moments-arb-off} remains to be shown. Here, Corollary \ref{cor:p-jump-concave}.\emph3 gives us for $k\geq1$ \begin{align*} \E[X_\infty^k] &= \frac{\lambda k}{1-p^k}\big(-\E[X_\infty^k]+\E[X_\infty^{k-1}(1-h(1-X_\infty))]\big) \end{align*} and \eqref{eq:recursion-dual-moments-arb-off} follows. Finally, if $h'(1)=1+\frac1\lambda\log\frac1p>1$, $h$ is strictly convex and thus, $\alpha$ is strictly concave, which gives us \emph{2.} \end{proof} \begin{proof}[Proof of Theorem \ref{thm:bp-arb.off.dis}] First, suppose the theorem holds for $\kappa=1$; for arbitrary $\kappa>0$ consider the process $\cZ^\ast:=\cZ^h_{\lambda/\kappa,q,1,p}$. Then, $(Z_t)_t:=(Z_{\kappa t}^\ast)_t$ defines a $\cZ^h_{\lambda,q,\kappa,p}$-process and we obtain \begin{align*} \lim_{t\to\infty}-\tfrac1t\log\Pw(Z_t>0) &= \kappa\cdot\lim_{s\to\infty}-\tfrac1s\log\Pw(Z_s^\ast>0), \end{align*} which shows \emph{1.} and \emph2. Since $\lim_tZ_t=\lim_tZ_t^\ast$ almost surely, \emph{3.} follows, where we obtain \eqref{eq:bp-arb.off.dis-surv.prob.rec} by substitution.\\ Hence, without loss of generality let $\kappa=1$.
Then, letting $Z_0=k$ and $X_0=1$, Lemma \ref{lem:arb-off-dualilty} implies \begin{align} \mathbb P(Z_t>0) &= 1-\E[0^{Z_t}] = 1-\E[(1-X_t)^k] = \sum_{\ell=1}^k\genfrac(){0pt}{}k\ell(-1)^{\ell+1}\E[X_t^\ell].\label{eq:prThm2-1} \end{align} Considering that $X_t\in[0,1]$, we obtain from Bernoulli's inequality \begin{align} \E[X_t] &\leq 1-\E[(1-X_t)^k] = \mathbb P(Z_t>0) \leq k\E[X_t].\label{eq:prThm2-2} \end{align} Thus, noting that $h'(1)=\mu$ and $\gamma=1/\nu$, for $\mu\leq1+\tfrac1\lambda\log\tfrac1p$ (i.e. $\nu\leq1$) Lemma \ref{lem:arb-off-appThm1}.\emph1 and \ref{lem:arb-off-appThm1}.\emph2 show that \begin{align*} \lim_{t\to\infty}-\tfrac1t\log\mathbb P(Z_t>0) &= \begin{cases} 1+\max\{0,-\lambda(\mu-1)\} & \text{if }p=0,\\ 1-p-\lambda(\mu-1) & \text{if }\nu\leq p,\\ 1-\nu(1+\log\tfrac1\nu) & \text{if }p<\nu\leq1. \end{cases} \end{align*} Additionally, considering the boundedness and thus the $\mathcal L^1$-convergence of $(X_t)_t$, we get from \eqref{eq:prThm2-2} that $Z_t$ converges to 0 in probability. Since this implies almost sure convergence of a subsequence and 0 is an absorbing state, we have $Z_t\to0$ almost surely.\\[0.5em] For \emph{2.}, noting that $\mathbb P(Z_t\xrightarrow{t\to\infty}0\mid Z_s)\geq\frac{(1-p)^{Z_s}}{1+\lambda Z_s}$, i.e. the probability that the next event after $s$ is a disaster that kills all individuals, we obtain \begin{align*} \limsup_{s\to\infty}\mathbb P(Z_t\xrightarrow{t\to\infty}0\mid\sigma(Z_r;r\leq s)) \geq \limsup_{s\to\infty}\frac{(1-p)^{Z_s}}{1+\lambda Z_s} = \frac{(1-p)^{\liminf\limits_{s\to\infty}Z_s}}{1+\lambda\liminf\limits_{s\to\infty}Z_s}, \end{align*} since $z\mapsto(1-p)^z/(1+\lambda z)$ is decreasing. Thus, Lemma 3.1 of \cite[p.54]{KaplanEtAl1975} concludes \begin{align*} \mathbb P(Z_t\xrightarrow{t\to\infty}0) + \mathbb P(Z_t\xrightarrow{t\to\infty}\infty) = 1. \end{align*} Furthermore, for $\mu>1+\tfrac1\lambda\log\tfrac1p$ Lemma \ref{lem:arb-off-appThm1}.\emph3 shows that $X_t$ converges weakly to $X_\infty$, whose distribution does not depend on $X_0$.
Thus, using that 0 is an absorbing state and $\{Z_s=0\}\subset\{Z_t=0\}$ for $s\leq t$, we obtain from \eqref{eq:prThm2-1} that \begin{align*} \mathbb P(\lim_{t\to\infty}Z_t=0) &= \mathbb P\Big(\bigcup_{t>0}\{Z_t=0\}\Big) = \lim_{t\to\infty} \mathbb P(Z_t=0) = \E[(1-X_\infty)^k]. \end{align*} \end{proof} \subsection{Preparation: Regular Variation}\label{sec:reg-var} In this subsection, using results of Chapter VIII.9 of \cite{Feller1971} and of \cite{Seneta1976}, we assemble the tools concerning regularly varying functions that are needed for the proof of Theorem \ref{thm:inhom-bdp}. However, we need to establish some additional notation first: \begin{remark}\label{rem:notation}\ \begin{enumerate} \item We will make use of the Bachmann-Landau notation: For a function $g: \R_+\to[0,\infty)$, let \begin{align*} o(g) &:= \{f: \R_+\to \R_+\mid \limsup_{t\to\infty}\tfrac{f(t)}{g(t)}=0\},\\ O(g) &:= \{f: \R_+\to \R_+\mid \limsup_{t\to\infty}\tfrac{f(t)}{g(t)}<\infty\},\\ \Omega(g) &:= \{f:\R_+\to \R_+\mid g\in O(f)\}. \end{align*} \item We define the relation of \emph{asymptotic equivalence} for functions $f,g:\R_+\to\R$ by \begin{align*} f\overset{t\to\infty}\sim g &\quad \Leftrightarrow \quad \lim_{t\to\infty}\frac{f(t)}{g(t)}=1. \end{align*} Often, when the running variable is either obvious or $t$, we will just write $f\sim g$. \end{enumerate} \end{remark} \begin{definition}\label{def:reg-var} A function $f:\R_+\to \R_+$ is called \emph{regularly varying with exponent $\beta\in\R$}, if for every $x>0$ \begin{align*} \frac{f(xt)}{f(t)} \xrightarrow{t\to\infty} x^\beta \end{align*} holds. A \emph{slowly varying} function is a regularly varying function with exponent 0. \end{definition} \begin{lemma}\label{lem:reg-var} Let $f:\R_+\to \R_+$ be regularly varying with exponent $\beta\in\R$ and $F(t):=\int_0^tf(x)dx$.
\begin{enumerate} \item $F$ is regularly varying with exponent $\max\{\beta+1,0\}$ and for $t\to\infty$, if \begin{enumerate} \item $\beta>-1$, then $F(t) \sim tf(t)(\beta+1)^{-1}$. \item $\beta=-1$, then $F(t) \in \Omega(1)\cap O(t^\varepsilon)$ for all $\varepsilon>0$. \item $\beta<-1$, then $F(t) \rightarrow c <\infty$. \end{enumerate} \item Let $(t_n)\subset \R_+$ such that $t_n\xrightarrow{n\to\infty}\infty$ and $t_{n+1}/t_n\xrightarrow{n\to\infty}1$. Then, \begin{align*} \frac{F(t_{n+1})}{F(t_n)} \xrightarrow{n\to\infty} 1. \end{align*} \item There are functions $a$ and $\varepsilon$ such that $a(t)\xrightarrow{t\to\infty}c\in\R_+$, $\varepsilon(t)\xrightarrow{t\to\infty}0$ and \begin{align*} f(t) &= t^\beta a(t)\exp\Big(\int_1^t\frac{\varepsilon(y)}ydy\Big). \end{align*} \item For each $\alpha>0$, $f\in O(t^{\beta+\alpha})\cap\Omega(t^{\beta-\alpha})$. \end{enumerate} \end{lemma} \begin{proof} \emph3. and \emph4. follow from \cite{Seneta1976}, Theorem 1.1 on page 2 and Proposition $1^0$ on page 18 respectively, while \emph1. is a consequence of \emph4. and exercises 2.1, 2.2 and 2.3 on \cite[p.86]{Seneta1976}. (A proof of these exercises is given by Theorem 1 in \citealp[p.281]{Feller1971}.)\\ Finally, by \emph{1.} $F$ is regularly varying with exponent $\beta'\geq0$. Applying \emph{3.} we see that there are functions $A$ and $\mathcal E$ with $\lim_{t\to\infty}A(t)=c\in(0,\infty)$ and $\lim_{t\to\infty}\mathcal E(t)=0$ such that \begin{align*} \frac{F(t_{n+1})}{F(t_n)} &= \Big(\frac{t_{n+1}}{t_n}\Big)^{\beta'} \cdot\frac{A(t_{n+1})}{A(t_n)} \cdot\exp\Big(\int_{t_n}^{t_{n+1}}\frac{\mathcal E(y)}ydy\Big). \end{align*} Now, the first two factors converge to 1, while the integral in the exponent is bounded by $|t_{n+1}-t_n|\cdot\frac1{t_n}\sup_{y\in[t_n,t_{n+1}]}|\mathcal E(y)|\longrightarrow_{n\to\infty}0$. 
\end{proof} \noindent The following theorem is needed in the proof of Theorem \ref{thm:inhom-bdp} to build a bridge between the asymptotics of the deterministic rate functions and the almost sure asymptotics of the process $(L_t)$ from Lemma \ref{lem:inhom-duality}, which is key to the computation of the survival probability in the inhomogeneous case. \begin{theorem*}\label{thm:asymptotics-of-D-integral} Let $(D_t)_{t\geq0}$ be an inhomogeneous Poisson process with a right-continuous rate function $\kappa$ with left limits, $\Lambda(t):=\int_0^t\kappa_sds$, $\Lambda^{-1}(t):=\inf\{s>0:\Lambda(s)>t\}$ and $f: \R_+\to \R_+$ such that $f(\Lambda^{-1}(\cdot))$ is regularly varying with exponent $\beta$. \begin{enumerate} \item If $\Lambda(t)\xrightarrow{t\to\infty}\Lambda(\infty)<\infty$ or $\beta<-1$, then $\int_0^tf(s)dD_s$ has an almost surely finite limit. \item If $\Lambda(t)\xrightarrow{t\to\infty}\infty$ and $\beta>-1$, \begin{align*} \int_0^tf(s)dD_s \sim \int_0^tf(s)\kappa_sds \end{align*} holds almost surely and in $\mathcal L^2$. \item If $\beta=-1$, for arbitrary $\alpha>0$ it holds $\displaystyle \frac{\int_0^tf(s)dD_s}{\int_0^tf(s)\kappa_sds} \in O(t^\alpha)\cap\Omega(t^{-\alpha}) $ almost surely. \end{enumerate} \end{theorem*} \begin{proof} First note that there is a unit-rate Poisson process, which we denote by $(P_t)$ and its jump times by $(\sigma_k)_k$, such that $D_t=P_{\Lambda(t)}$ for all $t\geq0$ and the jump times of $(D_t)$ satisfy $\tau_k=\Lambda^{-1}(\sigma_k)$. Then, supposing that \emph{2.} holds for $\kappa\equiv1$, the general case follows as \begin{align*} \int_0^tf(s)dD_s &= \sum_{k=1}^{D_t}f(\tau_k) = \sum_{k=1}^{P_{\Lambda(t)}}f(\Lambda^{-1}(\sigma_k)) \sim \int_0^{\Lambda(t)}f(\Lambda^{-1}(s))ds = \int_0^tf(s)\kappa_sds. \end{align*} Thus, without loss of generality, let $\kappa\equiv1$, $(D_t)=(P_t)$ and $f$ regularly varying with exponent $\beta>-1$.
Letting $F(t)=\int_0^tf(x)dx$, it remains to be shown that \begin{align*} Y_t &:= \frac1{F(t)}\sum_{k=1}^{D_t}f(\tau_k) \xrightarrow{t\to\infty}1 \end{align*} almost surely and in $\mathcal L^2$. Starting with the $\mathcal L^2$-convergence, we recall that on the event $\{D_t=n\}$, the jump times $(\tau_1,\ldots,\tau_n)$ are equal in distribution to $(U^t_{(1)},U^t_{(2)},\ldots,U^t_{(n)})$, the order statistics of $n$ iid variables $(U^t_1,\ldots,U^t_n)$, uniformly distributed on $[0,t]$. We obtain \begin{align} \E\Big[\sum_{k=1}^{D_t}f(\tau_k)\Big] &= \E\Bigg[\sum_{k=1}^{D_t}\E\Big[f(\tau_k)\Big|D_t\Big]\Bigg] = \E\Bigg[\sum_{k=1}^{D_t}\E\Big[f(U^t_k)\Big|D_t\Big]\Bigg]\notag \\[1em] &= t\E[f(U^t_1)] = t\cdot\frac1t\int_0^tf(s)ds = F(t). \label{eq:expectation-D-integral} \end{align} Hence, $\E[Y_t]=1$ and we compute \begin{align*} \|Y_t-1\|_{\mathcal L^2}^2 &= \text{Var}\Bigg[\frac1{F(t)}\sum_{k=1}^{D_t}f(\tau_k)\Bigg] = \frac1{F(t)^2}\Bigg(\text{Var}\Bigg[\sum_{k=1}^{D_t}\E\Big[f(U^t_k)\Big|D_t\Big]\Bigg] + \E\Bigg[\sum_{k=1}^{D_t}\text{Var}\Big[f(U^t_k)\Big|D_t\Big]\Bigg]\Bigg)\\[1em] &= \frac1{F(t)^2}\Big(\text{Var}\Big[D_t\E[f(U^t_1)]\Big] + \E\Big[D_t\text{Var}[f(U^t_1)]\Big]\Big) = \frac1{F(t)^2}\Big(t\E[f(U^t_1)]^2 + t\text{Var}[f(U^t_1)]\Big)\\[1em] &= \frac{t\E[f(U^t_1)^2]}{F(t)^2} = \frac{\int_0^tf(x)^2dx}{\big(\int_0^tf(x)dx\big)^2}. \end{align*} Since $f^2$ is regularly varying with exponent $2\beta$, we obtain from Lemma \ref{lem:reg-var}.\emph1 and \ref{lem:reg-var}.\emph4 for \begin{itemize} \item $\beta>-\frac12$, that $\|Y_t-1\|_{\mathcal L^2}^2\sim\frac1t\cdot\frac{(\beta+1)^2}{2\beta+1}$. \item $\beta=-\frac12$, some slowly varying function $\ell$ and arbitrary $\varepsilon>0$ that \begin{align*} \|Y_t-1\|_{\mathcal L^2}^2 = \frac{\ell(t)}t \in O(t^{-1+\varepsilon}). \end{align*} \item $\beta\in(-1,-\frac12)$, the numerator converges to a constant and the denominator converges to $\infty$.
\end{itemize} Either way, the $\mathcal L^2$ convergence follows.\\ For the almost sure convergence first note that there is a subsequence $(t_n)_n$ with $t_n\nearrow\infty$ as well as $\lim_{n\to\infty}Y_{t_n}=1$ and hence, $\liminf_tY_t\leq1\leq\limsup_tY_t$ almost surely. Noting that $(Y_t)$ is a piecewise deterministic process, jumping upwards and decreasing continuously between jumps, the maximum and minimum on the $n$th deterministic piece of the path respectively are given by \begin{align*} Y^+_n := Y_{\tau_n} = \frac1{F(\tau_n)}\sum\limits_{k=1}^nf(\tau_k)\quad\text{and}\quad Y^-_n := Y_{\tau_{n+1}-} = \frac1{F(\tau_{n+1})}\sum\limits_{k=1}^nf(\tau_k) \end{align*} and we obtain for every $t$ that $Y^-_{D_t}\leq Y_t\leq Y^+_{D_t}$. Also, we deduce from Lemma \ref{lem:reg-var}.\emph2 that \begin{align*} \frac{Y^+_n}{Y^-_n} &= \frac{F(\tau_{n+1})}{F(\tau_n)} \xrightarrow{n\to\infty}1 \end{align*} almost surely, considering that $\tau_{n+1}/\tau_n\xrightarrow{n\to\infty}1$. Since the values of the local extrema of $(Y_t)_t$ are given by $Y^+$ and $Y^-$, it follows that \begin{align*} \liminf_{n\to\infty}Y^+_n &= \liminf_{n\to\infty}Y^-_n = \liminf_{t\to\infty}Y_t \leq 1 \leq \limsup_{t\to\infty}Y_t = \limsup_{n\to\infty}Y^+_n. \end{align*} Hence, it suffices to show that $Y^+_n\xrightarrow{n\to\infty}1$ almost surely. For this, let $h(n):=\min\{\sqrt n,\sqrt{F(n)}\}$ and decompose $Y^+_n$ in the following way: \begin{align} Y^+_n &= \frac1{F(\tau_n)}\sum_{k\leq h(n)}f(\tau_k) + \frac1{F(\tau_n)}\sum_{h(n)<k\leq n}f(\tau_k).
\label{eq:Yn+decomposition} \end{align} Considering that $A^-:=\inf_{n\geq1}\frac{\tau_n}n>0$ and $A^+:=\sup_{n\geq1}\frac{\tau_n}n<\infty$ almost surely, by Lemma \ref{lem:reg-var} we obtain for the first part \begin{align*} \frac1{F(\tau_n)}\sum_{k\leq h(n)}f(\tau_k) \ &\sim\ \frac{\beta+1}n\sum_{k\leq h(n)}\Big(\frac{\tau_k}{\tau_n}\Big)^\beta \cdot\frac{a(\tau_k)}{a(\tau_n)} \cdot\exp\Big(\int_{\tau_k}^{\tau_n}\frac{\varepsilon(y)}ydy\Big)\\[1em] &\leq \frac Cn\cdot\Big(\frac{A^+}{A^-}\Big)^{|\beta|} \exp\Big(\int_{A^-}^{nA^+}\frac{|\varepsilon(y)|}ydy\Big) \cdot\sum_{k\leq h(n)}\Big(\frac kn\Big)^\beta, \end{align*} where the constant $C$ arises from the boundedness of $a$. Now, for $\beta\geq0$ the remaining sum is bounded above by $h(n)\leq\sqrt n$, while for $-1<\beta<0$ it holds for some slowly varying function $\ell$ that \[ \sum_{k\leq h(n)}\Big(\frac kn\Big)^\beta \leq \sum_{k\leq h(n)}n^{|\beta|}\leq h(n)n^{|\beta|} = n^{|\beta|+\frac{1+\beta}2}\ell(n) = n^{\frac{1+|\beta|}2}\ell(n). \] Thus, noting that $\frac{1+|\beta|}2<1$ and $\exp(\int_{A^-}^\bullet\frac{|\varepsilon(y)|}ydy)$ is slowly varying, it follows that \[ \frac1{F(\tau_n)}\sum_{k\leq h(n)}f(\tau_k) \xrightarrow{n\to\infty}0 \] almost surely. Hence, since $\tau_n\sim n$ almost surely and thus $F(n)\sim F(\tau_n)$ and $f(\tau_k)\sim f(k)$ by Lemma \ref{lem:reg-var}.\emph2, it follows from \eqref{eq:Yn+decomposition} and Lemma \ref{lem:reg-var}.\emph{1(a)} that \begin{align*} Y^+_n \ &\sim\ \frac{F(n)}{F(\tau_n)}\cdot\frac1{F(n)}\sum_{h(n)<k\leq n}\frac{f(\tau_k)}{f(k)}\cdot f(k) \ \sim\ \frac{\beta+1}n\sum_{h(n)<k\leq n}\frac{f(k)}{f(n)}\\[1em] \ &\sim\ \frac{\beta+1}n\sum_{h(n)<k\leq n}\Big(\frac kn\Big)^\beta \ \sim\ (\beta+1)\int_{h(n)/n}^1x^\beta dx \xrightarrow{n\to\infty}1 \end{align*} and the proof of \emph{2.} is done.\\[0.5em] For \emph{1.} if $\Lambda(\infty)<\infty$, also $ \lim_{t\to\infty}\int_0^tf(s)dD_s = \sum_{k=1}^{P_{\Lambda(\infty)}}f(\tau_k) $ is almost surely finite.
Otherwise, we obtain from \eqref{eq:expectation-D-integral} and Lemma \ref{lem:reg-var}.\emph1 that \begin{align*} \E\Big[\int_0^tf(s)dD_s\Big] &= \int_0^tf(s)\kappa_sds \leq \int_0^\infty f(s)\kappa_sds = \int_0^\infty f(\Lambda^{-1}(s))ds < \infty, \end{align*} which also, by monotone convergence, implies the finiteness of $\int_0^\infty f(s)dD_s$ and \emph{1.} is done.\\[.5em] Lastly, for \emph{3.} we conclude that for $\alpha>0$, $F(t):=\int_0^tf(s)\kappa_sds$ and $Y_t:=\int_0^tf(s)dD_s/F(t)$ \begin{align*} 0 &\leq t^{-\alpha}Y_t \leq \frac1{F(t)}\int_0^ts^{-\alpha}f(s)dD_s. \end{align*} Now, since $t\mapsto t^{-\alpha}f(t)$ is regularly varying with exponent $-1-\alpha<-1$, \emph{1.} shows that the integral almost surely converges to some finite limit and hence, considering that $F$ is non-decreasing and non-negative, $\limsup_t t^{-\alpha}Y_t<\infty$ almost surely and $Y_t\in O(t^\alpha)$. Similarly, using \emph{2.} \begin{align*} t^\alpha Y_t &\geq \frac1{F(t)}\int_0^ts^\alpha f(s)dD_s, \end{align*} which either converges to a positive constant, if $\Lambda(\infty)<\infty$, or is asymptotically equivalent to \begin{align*} \frac{\int_0^ts^\alpha f(s)\kappa_sds}{\int_0^tf(s)\kappa_sds} &\sim \frac{\int_1^ts^\alpha f(s)\kappa_sds}{\int_1^tf(s)\kappa_sds} \geq 1. \end{align*} Either way, it follows that $\liminf_t t^\alpha Y_t>0$ and thus $Y_t\in\Omega(t^{-\alpha})$. \end{proof} \begin{remark}[More precise asymptotics for $\beta=-1$] In the case $\beta=-1$ it follows from Lemma \ref{lem:reg-var} that $F$ is slowly varying and thus, considering its monotonicity, lies in $O(t^\varepsilon)\cap\Omega(1)$ for all $\varepsilon>0$. As discussed in \cite{Polfeldt1969} however, it is not always the case that a regularly varying function with exponent $-1$ is integrable on $\R_+$.
Supposing that $F(\infty)=\infty$, we obtain the $\mathcal L^2$-convergence in Theorem \ref{thm:asymptotics-of-D-integral} analogously to the case $\beta\in(-1,-\frac12)$, while the methods we used to obtain almost sure convergence fail for $\beta=-1$. Conversely, if $F(\infty)<\infty$, similarly to the proof of Theorem \ref{thm:asymptotics-of-D-integral}.\emph1 it follows that \begin{align*} \lim_{t\to\infty}\E\Big[\int_0^tf(s)dD_s\Big] < \infty \end{align*} and the integral has a finite almost sure limit. Presumably, the results of \cite{Polfeldt1969} can be used to sharpen the statements in this critical case. \end{remark} \subsection{Proof of Theorem \ref{thm:inhom-bdp}}\label{sec:pr-thm3} In this section we generalise the findings of Corollary \ref{cor:results-homog-bd-proc} to the time-inhomogeneous case. Recalling Definition \ref{def:inhom-bd-w-dis}, let $\cZ=\cZ^{in}_{b,d,\kappa,p}$ with birth, death and disaster rate functions $b$, $d$ and $\kappa$ respectively, and $p: \R_+\to[0,1]$ the survival probability function. Furthermore, let $(D_t)_t$ be the inhomogeneous Poisson process with rate $\kappa$ that counts the disasters up to time $t$. In what follows we will always assume $b,d$ and $\kappa$ to be right continuous with left limits and $p$ to be left continuous with right limits.\\[0.5em] We start by computing the pgf of $\mathcal Z$ for $(1-p)\kappa\equiv0$, i.e. without disasters, which will be generalised in Lemma \ref{lem:inhom-duality}. \begin{lemma}\label{lem:inhom-dual-no.dis.} Let $v(t):=\int_0^t(b_y-d_y)dy$ and $(1-p)\kappa\equiv0$. Then, for $x\in[0,1]$, $t\geq t_0\geq0$ and $k\geq0$ it holds that \begin{align*} \E[(1-x)^{Z_t}|Z_{t_0}=k] &= (1-s(t,x))^k, \end{align*} where $s(t,0)=0$ for all $t$ and $\displaystyle s(t,x)^{-1} = \tfrac1xe^{v(t_0)-v(t)} + e^{v(t_0)}\int_{t_0}^tb_ye^{-v(y)}dy $ for $x>0$. \end{lemma} \begin{proof} Given that $Z_{t_0}=k$, $Z_t$ is equal in distribution to a sum of $k$ independent copies started at $1$ at time $t_0$.
Thus, $\E[(1-x)^{Z_t}|Z_{t_0}=k]=\E[(1-x)^{Z_t}|Z_{t_0}=1]^k$. Hence, without loss of generality we assume $k=1$. Now, considering \cite{Kendall1948}, where birth- and death-rate are denoted by $\lambda$ and $\mu$ respectively and $v$ is denoted by $-\rho$ (cf. (11)), by $(9),(12)$ and $(10b)$ we can compute for $z\in[0,1]$ that \begin{align*} \E[z^{Z_t}|Z_0=1] &=: \varphi(z,t) = \frac{1+e^{v(t)}\int_0^tb_se^{-v(s)}ds-e^{v(t)} + \Big(e^{v(t)}-e^{v(t)}\int_0^tb_se^{-v(s)}ds\Big)z} {1 + e^{v(t)}\int_0^tb_se^{-v(s)}ds - e^{v(t)}\int_0^tb_se^{-v(s)}ds\cdot z}\\[1em] &= 1 - \frac{e^{v(t)} - e^{v(t)}z}{1 + e^{v(t)}\int_0^tb_se^{-v(s)}ds\cdot(1-z)}. \end{align*} Substitution of $z=1-x$ and reducing the fraction by $xe^{v(t)}$ concludes the proof for $t_0=0$. The general case $t_0\geq0$ is obtained considering a process $\cZ^\ast$ with birth and death rates at time $s$ given by $b^\ast(s):=b_{t_0+s}$ and $d^\ast(s):=d_{t_0+s}$ respectively. Then, for $t\geq t_0$ \begin{align} \E[(1-x)^{Z_t}|Z_{t_0}=1] &= \E[(1-x)^{Z^\ast_{t-t_0}}|Z^\ast_0=1] = 1 - \frac1{\frac1xe^{-v^\ast_{t-t_0}} + \int_0^{t-t_0}b^\ast(s)e^{-v^\ast_s}ds}, \label{eq:lem:inhom-no.dis.1} \intertext{where} v^\ast_s &= \int_0^s\big(b^\ast(y)-d^\ast(y)\big)dy = \int_{t_0}^{t_0+s}\big(b_y-d_y\big)dy = v(t_0+s)-v(t_0). \label{eq:lem:inhom-no.dis.2} \end{align} Substituting $y:=s+t_0$ in \eqref{eq:lem:inhom-no.dis.1} and using \eqref{eq:lem:inhom-no.dis.2} concludes the proof. \end{proof} \noindent The following lemma generalises the result above to processes with disasters, i.e. $(1-p)\kappa\not\equiv0$. It delivers a dual process $\cX$ with respect to the pgf and thus corresponds to Lemma \ref{lem:arb-off-dualilty} in the proof of Theorem \ref{thm:bp-arb.off.dis}. \begin{lemma}[A stronger duality]\label{lem:inhom-duality} Let $\log\frac10=-\infty$, $1/0=\infty$ and $1/\infty=0$.
Then, for $x\in[0,1]$, $k\geq0$ and $\cD_\infty:=\sigma(D_s;s\geq0)$, it holds that \begin{align*} \E_k[(1-x)^{Z_t}|\cD_\infty] &= (1-X_t)^k \end{align*} for a piecewise deterministic process $\cX=(X_t)_{t\geq0}$ given by \begin{align} X_t^{-1} &= \tfrac1xe^{-L_t} + \int_0^te^{-L_s}b_sds, \label{eq:inhom-X-explicit} \end{align} where $\displaystyle L_t = \int_0^t\big(b_s-d_s\big)ds - \int_0^t\log\Big(\frac1{p_s}\Big)dD_s $. \end{lemma} \begin{proof} Let $t$ be fixed, $G_t(x):=\E_k[(1-x)^{Z_t}|\cD_\infty]$, $\tau_0:=0$ and $\tau_1,\tau_2,\ldots$ be the jump times of $(D_t)$, i.e. the disaster times of $\cZ$. Note that the binomial disasters provide (with the left-side limit $Z_{\tau_n-} := \lim_{s\uparrow \tau_n} Z_s$), {\small \begin{align*} \E[(1-x)^{Z_{\tau_n}}|Z_{\tau_n-}=z] &= \sum_{\ell=0}^z\genfrac(){0pt}{1}z\ell p_{\tau_n}^\ell(1-p_{\tau_n})^{z-\ell}(1-x)^\ell = \big(p_{\tau_n}(1-x) + (1-p_{\tau_n})\big)^z = (1-p_{\tau_n}x)^z. \end{align*} } \noindent Iterating this and Lemma \ref{lem:inhom-dual-no.dis.}, we obtain\\[1.5em] $\displaystyle G_t(x) = \E\Big[\E\big[(1-x)^{Z_t}\big|Z_{\tau_{D_t}},\cD_\infty\big]\Big|\cD_\infty\Big] = \E[(1-s_{D_t})^{Z_{\tau_{D_t}}}|\cD_\infty] $ \begin{align*} &= G_{\tau_{D_t}}(s_{D_t}) &\text{with } s_{D_t}^{-1} &= \frac{e^{v(\tau_{D_t})-v(t)}}x+e^{v(\tau_{D_t})}\int\limits_{\tau_{D_t}}^tb_se^{-v(s)}ds\\[1em] &= G_{\tau_{D_t}-}(p_{\tau_{D_t}}s_{D_t})=\ldots\\[1em] &= G_{\tau_{D_t-1}}(s_{D_t-1}) &\text{with } s_{D_t-1}^{-1} &= \frac{e^{v(\tau_{D_t-1})-v(\tau_{D_t})}}{p_{\tau_{D_t}}s_{D_t}} + e^{v(\tau_{D_t-1})}\int\limits_{\tau_{D_t-1}}^{\tau_{D_t}}b_se^{-v(s)}ds\\[1em] &= G_{\tau_{D_t-2}}(s_{D_t-2}) &\text{with } s_{D_t-2}^{-1} &= \frac{e^{v(\tau_{D_t-2})-v(\tau_{D_t-1})}}{p_{\tau_{D_t-1}}s_{D_t-1}} + e^{v(\tau_{D_t-2})}\int\limits_{\tau_{D_t-2}}^{\tau_{D_t-1}}b_se^{-v(s)}ds\\[1em] &=\ldots= G_{\tau_0}(s_0) &\text{with } s_0 &= \frac{e^{v(\tau_0)-v(\tau_1)}}{p_{\tau_1}s_1} + 
e^{v(\tau_0)}\int\limits_{\tau_0}^{\tau_1}b_se^{-v(s)}ds\\[1em] &= (1-s_0)^k. \end{align*} Now, solving the recursion, \begin{align*} s_0^{-1} &= \Bigg(\ldots\Bigg(\frac1xe^{v(\tau_{D_t})-v(t)} + e^{v(\tau_{D_t})}\int_{\tau_{D_t}}^tb_ye^{-v(y)}dy\Bigg)\\[1em] &\qquad\qquad\cdot\frac1{p_{\tau_{D_t}}}e^{v(\tau_{D_t-1})-v(\tau_{D_t})} + e^{v(\tau_{D_t-1})}\int_{\tau_{D_t-1}}^{\tau_{D_t}}b_ye^{-v(y)}dy\Bigg)\cdots\Bigg)\\[1em] &\qquad\qquad\cdot\frac1{p_{\tau_1}}e^{v(\tau_0)-v(\tau_1)} + e^{v(\tau_0)}\int_{\tau_0}^{\tau_1}b_ye^{-v(y)}dy\Bigg)\\[1em] &= \frac1xe^{-v(t)}\prod_{k=1}^{D_t}p_{\tau_k}^{-1} + \sum_{k=0}^{D_t}\prod_{\ell=1}^{k}p_{\tau_\ell}^{-1} \cdot e^{v(\tau_0)}\int_{\tau_k}^{\tau_{k+1}\wedge t}b_ye^{-v(y)}dy,\\[1em] \intertext{where the empty product equals 1. (Then, for $D_t=0$ and thus $t\leq\tau_1$, one obtains the deterministic dual from Lemma \ref{lem:inhom-dual-no.dis.}.) Letting $\beta(t):=\int_0^tb_se^{-v(s)}ds$ and considering that $v(\tau_0)=v(0)=0$, the sum equates to} &\sum_{k=0}^{D_t}\prod_{\ell=1}^kp_{\tau_\ell}^{-1} \big(\beta(\tau_{k+1}\wedge t)-\beta(\tau_{k})\big) = \int_0^t\prod_{\ell=1}^{D_s}p_{\tau_\ell}^{-1}\cdot b_se^{-v(s)}ds. \end{align*} With \begin{align*} \prod_{k=1}^{D_t}p_{\tau_k}^{-1} &= \exp\Big(\sum_{s\leq t}\log\tfrac1{p_s}\cdot(D_s-D_{s-})\Big) = \exp\Big(\int_0^t\log\tfrac1{p_s}dD_s\Big), \end{align*} it is simple to deduce that $s_0$ equals $X_t$ from $\eqref{eq:inhom-X-explicit}$ and the proof is done. \end{proof} \begin{remark}\label{rem:lem-inhom-dual}\ \begin{enumerate} \item This Lemma holds for arbitrary counting processes $(D_t)_{t\geq0}$. One might even consider a process with multiple jumps, e.g. $\Pw(\tau_k=0)>0$ for some $k$. 
\item The process $\cX$ here is not of the form required for Corollary \ref{cor:p-jump-concave} or Theorem \ref{thm:convergence-$p$-jump-pros}, even if we choose constant $b,d,\kappa,p$ to obtain homogeneity: $X_t$ jumps from a state $1/(\frac ax + b)$ to $1/(\frac a{px}+b)$, which is not a $p$-jump. However, in the homogeneous case, letting $b\equiv\vartheta>0$, $d\in\R_+$, $\delta:=b-d>0$ and $\kappa\equiv1$, we can see that $X_t$ equals $\overline W_t$, the time-reversal of the process $W_t$ in \eqref{eq:W-explicit} that we used in the proof of Theorem \ref{thm:convergence-$p$-jump-pros}. Similarly one obtains that the (homogeneous) time-reversal $\overline X_t$ has the generator, for $f\in \mathcal C^1([0,1])$, \begin{align*} \cG_{\overline{\cX}}f(x) &= \kappa(f(px)-f(x)) + (bx(1-x)-dx)f'(x). \end{align*} \item The relationship between $\cX$ and $\cZ$ can be viewed as a \emph{stronger} duality, since the duality relation \eqref{eq:dual} from the beginning of Section \ref{sec:bps} here holds not only in expectation, but even in conditional expectation. (Taking expectations, \eqref{eq:dual} follows.) \end{enumerate} \end{remark} \noindent Although we are not able to use Corollary \ref{cor:p-jump-concave} here, from the previous Lemma we immediately obtain the following \begin{proposition}\label{prop:inhom-limit} Let $L_t$ be as in Lemma \ref{lem:inhom-duality} and $I_t:=\int_0^te^{-L_s}b_sds$. Supposing that $L_t\xrightarrow{t\to\infty}L\in[-\infty,\infty]$ almost surely and letting $I:=\lim_{t\to\infty}I_t$, there are three possible outcomes for the limit of $\E_k[(1-x)^{Z_t}|\cD_\infty]$: \begin{align*} \lim_{t\to\infty}\E_k[(1-x)^{Z_t}|\cD_\infty](\omega) &= \begin{cases} 1 &\text{if }\omega\in\{L=-\infty\}\cup\{I=\infty\}.\\[1em] (1-I(\omega)^{-1})^k &\text{if }\omega\in\{L=\infty\}\cap\{I<\infty\}.\\[1em] \big(1-\frac x{\exp(-L(\omega))+xI(\omega)}\big)^k &\text{if }\omega\in\{L\in\R\}\cap\{I<\infty\}.
\end{cases} \end{align*} The third case occurs if and only if \begin{align} \int_0^\infty(b_s+d_s)ds<\infty \quad\text{and}\quad \prod_{k\geq1}p(\tau_k(\omega))>0. \label{eq:inhom.condition-dep-on-x} \end{align} \end{proposition} \begin{proof} By construction and monotonicity of $(I_t)$, these three cases cover all possible outcomes. The results follow by insertion into Lemma \ref{lem:inhom-duality}. In case \emph{3.} there is $m(\omega)<\infty$ such that $m(\omega)\geq e^{L_t(\omega)}$ for all $t$, since $(L_t(\omega))_t$ converges in $\R$. Thus, almost surely \begin{align*} \int_0^\infty b_sds &\leq\int_0^\infty me^{-L_s}b_sds = mI < \infty. \end{align*} Now, the convergence of $(L_t(\omega))$ and the non-negativity of $d$ and $\log\frac1p$ give us that also $\int_0^\infty d_sds$ as well as \begin{align*} \int_0^\infty\log\tfrac1{p_s}dD_s(\omega) &= -\sum_{k\geq1}\log p_{\tau_k(\omega)} \end{align*} have to be finite, which shows that condition \eqref{eq:inhom.condition-dep-on-x} is necessary for the third case. To see the sufficiency, note that \eqref{eq:inhom.condition-dep-on-x} analogously implies the finiteness of $L(\omega)$. Then, $(e^{-L_t(\omega)})_t$ is bounded and thus, by finiteness of $\int_0^\infty b_sds$, also $I(\omega)$ has to be finite. \end{proof} \begin{remark}\label{rem:prop:inhom-limit}\ \begin{enumerate} \item The first part of condition \eqref{eq:inhom.condition-dep-on-x} implies that with probability 1 there is only a finite number of birth and death events, while the second part requires either $\lim_tD_t<\infty$ or that $p$ converges to 1 on the support of $\kappa$ fast enough to compensate for $(D_t)$. \item Only in the third case does the limiting probability generating function depend on $x$, which implies that, as soon as $b$ is bounded away from 0, $\cZ$ either goes extinct or explodes.
\item Since this Proposition provides results depending directly on the paths of $(D_t)$, $b$, $d$ and $p$, it can easily be applied to random environments in the sense of choosing $b$, $d$ and/or $p$ to be stochastic processes. \item Another possible generalisation could be to drop the assumption that the limit $L$ exists. Then, we see that the first case still only holds if $\limsup_tL_t=-\infty$ or $I=\infty$. Secondly, in the case of $I<\infty$ we still obtain a limit independent of $x$ only if $\liminf_tL_t=\infty$. Hence, only the third case changes, where we obtain bounds on the limit in terms of $\liminf_tL_t$ and $\limsup_tL_t$. \end{enumerate} \end{remark} \begin{proof}[Proof of Theorem \ref{thm:inhom-bdp}]\ \\ First note that the assumptions and Theorem \ref{thm:asymptotics-of-D-integral} imply that almost surely \begin{align*} L_t = \int_0^t(b_s-d_s)ds - \int_0^t\log\Big(\frac1{p_s}\Big)dD_s &\sim \iota h(t). \end{align*} (Since $h(t)\in\Omega(t^\alpha)$ for some $\alpha>0$, in the case where $\beta\leq-1$, $\int_0^t\log(1/p_s)dD_s$ has either a finite limit or it lies in $O(t^{\alpha/2})\subset o(t^\alpha)$, so that in either case it does not contribute to the asymptotics of $L_t$. Otherwise, it is asymptotically equivalent to $\int_0^t\log(1/p_s)\kappa_sds$.)\\ Now, we can apply Proposition \ref{prop:inhom-limit}:\\[0.5em] \emph{1.:} If $\iota=1$, $L_t\xrightarrow{t\to\infty}\infty$. Also, for almost every $\omega$ there is a $T(\omega)\in(0,\infty)$ such that $L_t(\omega)\geq(1-\varepsilon)h(t)$ for all $t\geq T(\omega)$. Thus, \begin{align*} I &\leq I_T + \int_T^\infty e^{-(1-\varepsilon)h(s)}b_sds < \infty. \end{align*} Hence, the second case of Proposition \ref{prop:inhom-limit} concludes that \begin{align*} \Pw(Z_t\xrightarrow{t\to\infty}0) &= \lim_{t\to\infty}\E_k[(1-1)^{Z_t}] = \E[(1-I^{-1})^k] < 1. \end{align*} \emph{2.:} If $\iota=-1$, it is clear that $L_t\xrightarrow{t\to\infty}-\infty$ and, independently of the integral condition of \emph{2.},
the first part of Proposition \ref{prop:inhom-limit} concludes \begin{align*} \Pw(Z_t\xrightarrow{t\to\infty}0) &= \E_k[\lim_{t\to\infty}(1-1)^{Z_t}] = 1. \end{align*} Otherwise, i.e. if $\iota=1$ but the integral condition holds, $L_t\xrightarrow{t\to\infty}\infty$ and analogously to \emph{1.} we find a finite random variable $T'$ such that $L_t\leq(1+\varepsilon)h(t)$ for all $t\geq T'$ almost surely. Thus, \begin{align*} I &= \int_0^\infty e^{-L_s}b_sds \geq I_{T'} + \int_{T'}^\infty e^{-(1+\varepsilon)h(s)}b_sds = \infty. \end{align*} Then again, the first part of Proposition \ref{prop:inhom-limit} concludes the proof. \end{proof} \begin{remark}[Normalization function, rates of convergence]\label{rem:inhom-conv-rates} \begin{enumerate} \item There are two major cases in which a normalisation function $h$ as in Theorem \ref{thm:inhom-bdp} does not exist: \begin{enumerate} \item The integral $\ell(t):=\int_0^t (b_s - d_s - \kappa_s\log(\frac1{p_s}))ds$ converges to a constant. Then, $\cZ$ will exhibit only a finite number of birth events almost surely and converge to a random variable, where the third part of Proposition \ref{prop:inhom-limit} provides a way to compute the limiting distribution. \item The integral $\ell$ oscillates too strongly -- e.g. $\ell(t) = t(1+\sin(t))$. This might happen in periodic models, which were briefly discussed in \cite{Kendall1948}. In this case, Lemma \ref{lem:inhom-duality} still holds, while Proposition \ref{prop:inhom-limit} as well as Theorem \ref{thm:asymptotics-of-D-integral} do not apply. 
\end{enumerate} \item In case \emph{2.} of Theorem \ref{thm:inhom-bdp}, for the convergence rates of the survival probability conditioned on $\cD_\infty$, the $\sigma$-algebra of the disaster times, we can estimate, for arbitrary $k\geq1$, using the processes $(X_t)$ and $(L_t)$ from Lemma \ref{lem:inhom-duality} with $X_0=1$ and Bernoulli's inequality, \begin{align*} -\frac1{h(t)}\log\Pw_k(Z_t>0|\cD_\infty) &= -\frac1{h(t)}\log\big(1-(1-X_t)^k\big)\\[1em] &\sim -\frac1{h(t)}\log(X_t) = \frac1{h(t)}\log\Big(e^{-L_t} + \int_0^te^{-L_s}b_sds\Big)\\[1em] &\sim \frac1{h(t)}\max\Bigg\{-L_t,\ \log\Big(\int_0^te^{-L_s}b_sds\Big)\Bigg\}, \end{align*} where $\sim$ denotes asymptotic equivalence, i.e. $f\sim g\Leftrightarrow f(t)/g(t)\to1$ as $t\to\infty$. Then, by Theorem \ref{thm:asymptotics-of-D-integral}, $\frac{L_t}{h(t)}\to\iota$, while for $0<\delta<\varepsilon$ and all $t\geq t_0$ with $t_0$ large enough we have \begin{align*} \int_{t_0}^te^{-(1+\delta)h(s)}b_sds &\leq \int_{t_0}^te^{-L_s}b_sds \leq \int_{t_0}^te^{-(1-\delta)h(s)}b_sds. \end{align*} With more knowledge of $h$ and $b$, this approach can be used to compute bounds on the convergence rates. \end{enumerate} \end{remark}
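As a numerical illustration of this dichotomy (our own sketch, not part of the text), consider the time-homogeneous special case with constant birth rate $b$, death rate $d$, disaster rate $\kappa$ and per-individual survival probability $p$, so that $\ell(t)=(b-d-\kappa\log(1/p))\,t$: the sign of the drift $b-d-\kappa\log(1/p)$ separates almost sure extinction from survival with positive probability. All parameter values and function names below are illustrative choices of ours.

```python
import random

def simulate(b, d, kappa, p, t_max=200.0, z0=5, z_cap=500, seed=0):
    """Simulate a binary birth-death process with constant birth rate b and
    death rate d, interrupted by disasters at rate kappa in which each
    individual survives independently with probability p.  Returns the
    population size at t_max (0 means extinction); runs reaching z_cap are
    stopped early and counted as surviving."""
    rng = random.Random(seed)
    t, z = 0.0, z0
    while t < t_max and 0 < z < z_cap:
        total_rate = z * (b + d) + kappa
        t += rng.expovariate(total_rate)
        if t >= t_max:
            break
        u = rng.random() * total_rate
        if u < z * b:
            z += 1                                       # birth event
        elif u < z * (b + d):
            z -= 1                                       # death event
        else:
            z = sum(rng.random() < p for _ in range(z))  # disaster: binomial thinning
    return z

def extinction_freq(b, d, kappa, p, n_runs=400):
    """Fraction of independent runs that go extinct."""
    return sum(simulate(b, d, kappa, p, seed=i) == 0 for i in range(n_runs)) / n_runs

# Negative drift b - d - kappa*log(1/p): extinction almost surely.
subcrit = extinction_freq(b=1.0, d=0.8, kappa=1.0, p=0.5)
# Positive drift: survival with positive probability.
supercrit = extinction_freq(b=1.5, d=0.2, kappa=0.5, p=0.8)
```

With the first parameter set the drift is $0.2-\log 2<0$ and essentially every run dies out; with the second it is $1.3-0.5\log(5/4)>0$ and a substantial fraction of runs survive.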
\section{Introduction} General Relativity (GR) is one of the pillars of modern physics. It has been confirmed by many kinds of observation, e.g., the precession of Mercury's orbit \cite{Clemenc:1947}, gravitational time dilation \cite{Schwartz:1977,Uggerhoj:2016} and, recently, gravitational waves \cite{Abbott:2016blz}. According to observations, the expansion of the universe is currently accelerating \cite{Riess:1998cb,Perlmutter:1998np}. Adding a cosmological constant to GR is one possible way to explain this acceleration. Even though GR with a cosmological constant provides a description consistent with current observations, the physical origin of the cosmological constant has not yet been conclusively explained. There are several ways to account for this phenomenon without invoking a cosmological constant. One of them is to modify gravity at large scales. Massive gravity is one such modification, in which the graviton is endowed with a mass term. A linear theory of massive gravity, Fierz-Pauli massive gravity, was proposed in 1939 as the theory of a massive spin-2 field \cite{Fierz:1939ix}. Unfortunately, this theory encounters the van Dam-Veltman-Zakharov (vDVZ) discontinuity in the massless limit \cite{vanDam:1970ab,Zakharov:1970cd}. In other words, the Fierz-Pauli theory does not reduce to the linearized version of GR. It was suggested by Vainshtein \cite{Vainshtein:1972sx} that nonlinear interaction terms should be included in the theory to get rid of the vDVZ discontinuity. One viable nonlinear massive gravity model was proposed in 2010 by de Rham, Gabadadze and Tolley, and is called the dRGT massive gravity \cite{deRham:2010ik,deRham:2010kj}. One of the key points of this massive gravity theory is that St\"uckelberg fields are introduced via the reference/fiducial metric to restore diffeomorphism invariance.
By using a Minkowski-type fiducial metric, it is found that the dRGT massive gravity theory does not admit flat Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) solutions, so it is not easy to obtain suitable models that provide the present-day acceleration of the universe \cite{DAmico:2011eto,Gumrukcuoglu:2011ew}. Further extensions of the dRGT massive gravity have been investigated in order to realize the accelerating phase of the universe, for example by considering more general forms of the fiducial metric \cite{Fasiello:2012rw,Langlois:2012hk,Langlois:2013cya,Gumrukcuoglu:2011zh,Chullaphan:2015ija}, including more degrees of freedom, such as a scalar field \cite{Huang:2012pe,DAmico:2012hia,DeFelice:2013tsa,DeFelice:2013dua,DeFelice:2017wel,DeFelice:2017rli,Hinterbichler:2013dv,Gabadadze:2012tr,Tannukij:2015wmn,Nakarachinda:2017oyc}, and promoting the fiducial metric to a dynamical field \cite{HassanRosen2012}. Comprehensive reviews of massive gravity can be found in \cite{Hinterbichler, deRham:2014zqa}. Since the graviton mass modifies GR at large scales, it is possible that local gravity also receives corrections. As a result, spherically symmetric solutions in dRGT massive gravity have been investigated in order to examine the effects of such modifications at local scales \cite{Koyama:2011yg,Koyama:2011xz,Nieuwenhuizen:2011sq,Tasinato:2013rza,Vegh:2013sk,BHsoln1,BHsoln2,BHsoln3}. By using this kind of solution, signatures of astronomical objects in the dRGT massive gravity have been explored, e.g., in white dwarfs \cite{EslamPanah:2018evk}, neutron stars \cite{Hendi:2017ibm}, the rotation curves of galaxies \cite{Panpanich:2018cxo}, gravitational lensing \cite{Panpanich:2019mll} and the mass-radius ratio bound for compact objects \cite{Kareeso:2018qea}.
Moreover, black hole solutions were also found \cite{Berezhiani:2011mt,Brito:2013xaa,Volkov:2013roa,Cai:2012db,Babichev:2014fka,Babichev:2015xha,Hu:2016hpm} and their thermodynamical properties have been investigated in \cite{Cai:2014znn,Ghosh:2015cva,Adams:2014vza,Xu:2015rfa,Capela:2011mh,Hu:2016mym,Zou:2016sab,Hendi:2017arn,Hendi:2017bys,EslamPanah:2016pgc,Hendi:2016hbe,Hendi:2016uni,Hendi:2016yof,Arraut:2014uza,Arraut:2014iba}. Besides the spherically symmetric solution, the cylindrical one has also been investigated in \cite{Tannukij:2017jtn,Ponglertsakul:2018smo,Boonserm:2019mon,Ghosh:2019eoo}. Linear perturbations in GR around various kinds of black hole solutions, such as Schwarzschild, Reissner-Nordstr\"{o}m and Kerr, are well studied \cite{Chand1083}. For spherically symmetric spacetimes, the general form of the equations of motion for gravitational perturbations, as well as their instabilities, has been investigated \cite{Kodama:2003jz,Ishibashi:2003ap,Kodama:2003kk}. It was found that the governing equations in the linear perturbation regime are closely related to the equations of a field in the curved spacetime around the black hole. Thus, in order to study the instabilities and properties of black holes, one can analyze the behavior of a test field in their vicinity, whose dynamics is dictated by the gravitational interaction. As a result, various kinds of fields have been investigated, for example, the scalar field \cite{BCS2009}, the Dirac field \cite{CCDW2009}, the vector field \cite{chm2001} and the spin-3/2 gravitino field \cite{ccch2019}. As modified gravity theories have been developed, black hole solutions in such theories have been found. In order to explore the instabilities and dynamical properties of black holes in modified gravity, one can analyze the evolution of a field around the black hole.
To the best of our knowledge, while the scalar field around the black hole in dRGT massive gravity has been examined \cite{Burikham:2017gdm,Boonserm:2017qcq,Boonserm}, the Dirac field has not yet been investigated. Hence, we focus on this investigation in the present work through the quasinormal modes (QNMs). QNMs are solutions of the wave equation with specific boundary conditions and are characterized by complex frequencies. Analyzing the QNMs is one way to investigate the dynamical properties of a black hole. For example, the ringdown frequency profile of a black hole merger is characterized by the QNMs (see, e.g., \cite{Gundlach:1993tn,Gundlach:1993tp} and \cite{Kokkotas:1999bd} for a review). This is interesting, since a new generation of gravitational wave detectors may be able to detect some signatures of the QNMs, and so might provide some hints for constructing a modified gravity theory. QNMs are also of great interest in the context of the Anti-de Sitter/Conformal field theory (AdS/CFT) correspondence \cite{Horowitz:1999jd}, as well as in the context of the thermodynamic properties of black holes in loop quantum gravity (see \cite{Dreyer:2002vy} and \cite{Cardoso:2003pj} for reviews). Because of the nature of the dRGT black hole solution, which is asymptotically Anti-de Sitter (AdS)/de Sitter (dS), it is worthwhile to investigate the QNMs of the dRGT black hole, and this is the main aim of this work. It is common knowledge that scalar perturbations around spherically symmetric black holes are governed by the Regge-Wheeler equation. For the perturbation of the Dirac field, the master equations were obtained in \cite{Cho:2003qe,Cho:2005yc}. It is also found that the mathematical form of these master equations differs from that of the bosonic cases.
Furthermore, the nonexistence of spherically symmetric black hole solutions of the Einstein-Dirac-Maxwell system and the absence of periodic static orbits for a Dirac particle around a black hole were proved by Finster and collaborators \cite{fsy1999,fsy2000,fksy2000}. However, according to Hawking radiation, it is possible to have radiation around the black hole in the form of quantum fluctuations of Dirac particles. This implies that the scattering properties of a Dirac particle, or a ``Dirac cloud'', around a black hole are interesting to investigate. Various methods have been established to compute QNMs, e.g., the P\"oschl-Teller method \cite{Ferrari:1984zz}, the asymptotic iteration method \cite{Cho:2009cj,Cho:2011sf} and the WKB method \cite{SchutzWill,iyewil}. A review of QNMs for various kinds of black holes can be found in \cite{BCS2009,KZ2011}. In this work, we evaluate the QNMs of the Dirac field surrounding the dRGT black holes by using the WKB method and the revised WKB method with Pad\'{e} approximation \cite{KZZ2019}. From a cosmological point of view, the universe is expected to be asymptotically dS on large scales. In particular, the QNMs in this kind of spacetime have been intensively investigated \cite{Mellor:1989ac,Moss:2001ga,Cardoso:2003sw,Molina:2003ff,Suneeta:2003bj,MaassenvandenBrink:2003yq,Choudhury:2003wd,Jing:2005bh,Ghosh:2005aq,LopezOrtega:2006my,Yoshida:2010zzb,Liu:2012zl,Zhang:2014xha,Tangphati:2018jdx}, including for Dirac fields \cite{Zhi2003,Jing:2003wq,Wahlang:2017zvk}. Hence, we will focus on the asymptotically dS solution of the dRGT black hole. We analyze the Schr\"{o}dinger-like equation for the Dirac perturbation with the particular form of the potentials due to the dRGT black hole. We find that the shape of the potentials depends crucially on the structure of the graviton mass and that the behavior of the QNMs is controlled by the graviton mass parameters. It is also found that a higher potential gives stronger damping of the QNMs.
Lastly, we compare our results to the Schwarzschild-de Sitter case and find that the Dirac QNMs of the Schwarzschild-de Sitter black hole are approximately reproduced in a part of the parameter space of the dRGT black hole. This paper is organized as follows. In Sec. \ref{sec:dRGT}, a brief review of the dRGT massive gravity theory and its black hole solution is given. Then, in Sec. \ref{sec:eff potential}, the effective potential in the Schr\"{o}dinger-like equation is derived and analyzed. We show how the shape of the potential depends on the mass parameters of the theory. In Sec. \ref{sec:QNM}, the QNMs are computed using the WKB method. Moreover, the accuracy of the computations is checked using the Pad\'{e} approximation. Sec. \ref{sec:conclude} contains a summary of our main conclusions and discussion. The tables of QNM frequencies and the numerical precision are listed in Appendices A and B, respectively. \section{dRGT massive gravity and black hole solution}\label{sec:dRGT} A massive gravity theory is a modified gravity theory in which a mass term is introduced into GR. One of the viable models of massive gravity was proposed by de Rham, Gabadadze and Tolley, called the dRGT massive gravity \cite{deRham:2010ik,deRham:2010kj}. The action for the dRGT massive gravity can be written as \begin{eqnarray}\label{action} S = \int \text{d}^4x \sqrt{-g}\; \frac{1}{2} \left[ R +m_g^2\,\, {\cal U}(g, \phi^a)\right], \end{eqnarray} where $R$ is the Ricci scalar and ${\cal U}$ is the potential for the graviton. The latter is an additional part of the gravitational sector, with the parameter $m_g$ interpreted as the graviton mass.
The potential ${\cal U}$ in four-dimensional spacetime is of the form \begin{eqnarray}\label{Upot} {\cal U}(g, \phi^a) = {\cal U}_2 + \alpha_3{\cal U}_3 +\alpha_4{\cal U}_4 , \end{eqnarray} where $\alpha_3$ and $\alpha_4$ are dimensionless free parameters of the theory, and the terms ${\cal U}_2$, ${\cal U}_3$ and ${\cal U}_4$ can be further expressed as \begin{eqnarray} {\cal U}_2&\equiv&[{\cal K}]^2-[{\cal K}^2] ,\\ {\cal U}_3&\equiv&[{\cal K}]^3-3[{\cal K}][{\cal K}^2]+2[{\cal K}^3] ,\\ {\cal U}_4&\equiv&[{\cal K}]^4-6[{\cal K}]^2[{\cal K}^2]+8[{\cal K}][{\cal K}^3]+3[{\cal K}^2]^2-6[{\cal K}^4], \end{eqnarray} where \begin{eqnarray} {\cal K}^\mu_{\,\,\,\nu}=\delta^\mu_\nu-\sqrt{g^{\mu\sigma}f_{ab}\partial_\sigma\phi^a\partial_\nu\phi^b}. \label{K-tensor} \end{eqnarray} Here the rectangular brackets denote traces, namely $[{\cal K}]={\cal K}^\mu_{\,\,\,\mu}$ and $[{\cal K}^n]=({\cal K}^n)^\mu_{\,\,\,\mu}$. From the above expression, one can see that there exists another metric $f_{ab}$, called the reference (or fiducial) metric. The four scalar fields $\phi^a$, called St\"uckelberg fields, are introduced in order to restore the general covariance of the theory. By varying the action with respect to the metric $g_{\mu\nu}$, the equations of motion, interpreted as the modified Einstein field equations, are obtained as \begin{eqnarray} G_{\mu\nu} +m_g^2 X_{\mu\nu} = 0. \label{modEFE} \end{eqnarray} The tensor $X_{\mu\nu}$ can be interpreted as an effective energy-momentum tensor.
It is straightforwardly obtained by varying the potential term ${\cal U}$ with respect to $g_{\mu\nu}$, \begin{eqnarray} X_{\mu\nu} &=& {\cal K}_ {\mu\nu} -{\cal K}g_ {\mu\nu} -\alpha\left({\cal K}^2_{\mu\nu}-{\cal K}{\cal K}_{\mu\nu} +\frac{{\cal U}_2}{2}g_{\mu\nu}\right) +3\beta\left( {\cal K}^3_{\mu\nu} -{\cal K}{\cal K}^2_{\mu\nu} +\frac{{\cal U}_2}{2}{\cal K}_{\mu\nu} - \frac{{\cal U}_3}{6}g_{\mu\nu} \right), \,\,\,\,\,\,\label{effemt} \end{eqnarray} where we have reparameterized the model parameters as follows: \begin{eqnarray}\label{alphabeta} \alpha_3 = \frac{\alpha-1}{3}~,~~~\alpha_4=\frac{\beta}{4}+\frac{1-\alpha}{12}. \end{eqnarray} Since the potential terms are covariantly constructed, the tensor $X_{\mu\nu}$ is covariantly conserved, \begin{eqnarray}\label{BiEoM} \nabla^\mu X_{\mu\nu} = 0, \end{eqnarray} where $\nabla^\mu$ denotes the covariant derivative compatible with $g_{\mu\nu}$. Note that this constraint equation also follows from the Bianchi identity $\nabla^\mu G_{\mu\nu}=0$ applied to Eq.~\eqref{modEFE}. In order to solve the field equation \eqref{modEFE}, one needs to choose the form of the fiducial metric; this choice in turn determines the form of the physical metric. It is convenient to choose the fiducial metric as \cite{Ghosh:2015cva} \begin{eqnarray}\label{fiducial metric} f_{\mu\nu}=\text{diag}(0,0,h^2 ,h^2 \sin^2\theta), \label{fmetric} \end{eqnarray} where $h$ is a constant.
By using this form of the fiducial metric, one of the static and spherically symmetric solutions for the physical metric can be obtained as \begin{eqnarray} \text{d}s^2=-f(r)\text{d}t^2+f^{-1}(r)\text{d}r^2+r^2\text{d}\Omega^2, \end{eqnarray} with \begin{eqnarray}\label{f sol} f(r)=1-\frac{2M}{r}+\frac{\Lambda}{3}r^2+\gamma r+\zeta, \end{eqnarray} where $M$ is the mass of the black hole and the other parameters are defined as follows: \begin{eqnarray} \Lambda&=&3m_g^2(1+\alpha+\beta),\\ \gamma&=&-hm_g^2(1+2\alpha+3\beta),\\ \zeta&=&h^2m_g^2(\alpha+3\beta). \end{eqnarray} Note that the detailed calculation leading to this solution can be found in \cite{Ghosh:2015cva}. This solution contains various signatures of other well-known black hole solutions found in the literature. By setting $m_{g}=0$, the Schwarzschild solution is recovered. In the very large scale limit, the solution becomes the Schwarzschild-dS solution for $1+\alpha+\beta<0$ and the Schwarzschild-AdS solution for $1+\alpha+\beta>0$. Moreover, the global monopole solution can be obtained by setting $1+2\alpha+3\beta=0$. Note that the last term in Eq.~\eqref{f sol}, the constant potential $\zeta$, corresponds to the global monopole term, which naturally emerges from the graviton mass. Finally, the linear term $\gamma r$ is a characteristic term of this solution, distinguishing it from other solutions found in the literature. It is convenient to introduce the dimensionless variable $\tilde{r} = r/h$ and the dimensionless model parameters \cite{Boonserm:2017qcq} \begin{eqnarray} \tilde{M}&=& \frac{M}{h},\,\,\,\,\,\, \alpha_g = m^2_gh^2,\,\,\,\,\,\, c_0 = \alpha+3\beta,\,\,\,\,\,\, c_1=1 +2\alpha +3\beta,\,\,\,\,\,\, c_2 = 1+\alpha+\beta.
\end{eqnarray} As a result, the function $f$ can be written in terms of the dimensionless variables as \begin{equation} f(\tilde{r}) = 1 - \frac{2\tilde{M}}{\tilde{r}} + \alpha_g \left(c_2\tilde{r}^2 -c_1 \tilde{r}+c_0\right).\label{fdRGT} \end{equation} From this equation, one finds that the parameter $h$ characterizes the nonlinear scale of the solution, with the modification becoming important when $\tilde{M} \sim \alpha_g$. Hence, one can identify the parameter $h$ with \begin{eqnarray} h = r_V = \left(\frac{M}{m_g^2}\right)^{1/3}. \end{eqnarray} This radius is well known as the Vainshtein radius \cite{Vainshtein:1972sx}. In the range $r<r_V$, the theory approaches GR, while in the range $r > r_V$, the modification of GR becomes active. In order to see the structure of the black hole horizons clearly, let us consider a subclass of parameters by specifying \begin{eqnarray}\label{c1c0} c_1 = 3 (4c^2_2)^{1/3},\quad c_0 = \frac{9}{\sqrt{3}} \frac{\left(2|c_2|\right)^{1/3}}{\beta_m} - \frac{1}{\alpha_g}. \end{eqnarray} Now we have only two significant parameters, $c_2$ and $\beta_m$, characterizing the strength of the graviton mass and the number of horizons, respectively. For the asymptotically dS solution, $0 < \beta_m < 1$ is the condition for the existence of two horizons, while for the asymptotically AdS solution, $1 < \beta_m < 2/\sqrt{3}$ is the condition for the existence of three horizons. This behavior can be found explicitly using numerical methods, as illustrated in Fig.~\ref{fig:Horizon}. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.5]{dSHorizon.pdf}\qquad \includegraphics[scale=0.5]{AdSHorizon.pdf} \end{center} {\caption{The left-(right-)hand panel shows the horizon structure of the asymptotically dS(AdS) solution in the dRGT massive gravity for various values of $\beta_m$.
We set the dimensionless parameters as $\tilde{M}=1$, $\alpha_g = 1$ and $c_2 = -0.02/3 (+0.02/3)$ for the asymptotically dS(AdS) solution }\label{fig:Horizon}} \end{figure} Moreover, this subclass of parameters allows us to find the exact solutions of the horizon as follows \begin{footnotesize} \begin{eqnarray} \tilde{r}_{1(dS)} &=& \frac{2}{\left(-2c_2\right)^{1/3}}\left[\left(\frac{2\sqrt{3}}{\beta_m} + 4\right)^{1/2}\cos\left(\frac{1}{3}\sec^{-1}\left(-\frac{\sqrt{\frac{\sqrt{3}}{\beta_m} + 2}\left(2\sqrt{2}\beta_m + \sqrt{6}\right)}{5\beta_m + 3\sqrt{3}}\right)\right) - 1\right],\label{rHdS1}\\ \tilde{r}_{2(dS)} &=& \frac{-2}{\left(-2c_2\right)^{1/3}}\left[\left(\frac{2\sqrt{3}}{\beta_m} + 4\right)^{1/2}\cos\left(\frac{1}{3}\sec^{-1}\left(-\frac{\sqrt{\frac{\sqrt{3}}{\beta_m} + 2}\left(2\sqrt{2}\beta_m + \sqrt{6}\right)}{5\beta_m + 3\sqrt{3}}\right) + \frac{\pi}{3}\right) + 1\right],\\\label{rHdS2} \tilde{r}_{1(AdS)} &=& \frac{2}{\left(2c_2\right)^{1/3}}\left[1 - \left(\frac{4 - 2\sqrt{3}}{\beta_m}\right)^{1/2}\sin\left(\frac{1}{3}\sec ^{-1}\left(\frac{\sqrt{6}-2 \sqrt{2} \beta _m}{\left(3 \sqrt{3}-5 \beta _m\right) \sqrt{-\frac{\beta _m}{\sqrt{3}-2 \beta _m}}}\right) + \frac{\pi}{6}\right)\right],\label{rHAdS1}\\ \tilde{r}_{2(AdS)} &=& \frac{2}{\left(2c_2\right)^{1/3}}\left[1 - \left(\frac{4 - 2\sqrt{3}}{\beta_m}\right)^{1/2}\cos\left(\frac{1}{3}\sec^{-1}\left(\frac{\sqrt{6} - 2\sqrt{2}\beta _m}{\left(3 \sqrt{3}-5 \beta _m\right) \sqrt{-\frac{\beta _m}{\sqrt{3}-2 \beta _m}}}\right) + \frac{\pi}{3}\right)\right],\label{rHAdS2}\\ \tilde{r}_{3(AdS)} &=& \frac{2}{\left(2c_2\right)^{1/3}}\left[1 + \left(\frac{4 - 2\sqrt{3}}{\beta_m}\right)^{1/2}\cos\left(\frac{1}{3}\sec^{-1}\left(\frac{\sqrt{6} - 2\sqrt{2}\beta_m}{\left(3\sqrt{3} - 5\beta_m\right)\sqrt{-\frac{\beta_m}{\sqrt{3} - 2\beta_m}}}\right)\right)\right].\label{rHAdS3} \end{eqnarray} \end{footnotesize} \section{Dirac perturbation}\label{sec:eff potential} In this section, the effective potential of QNM is 
derived for a spherically symmetric spacetime in an arbitrary gravity theory. It is then evaluated explicitly for the dRGT massive gravity, using the solution (\ref{fdRGT}). Let us start with the general form of the metric \begin{eqnarray} \text{d}s^2=-f(r)\text{d}t^2+\frac{1}{f(r)}\text{d}r^2+r^2(\text{d}\theta^2+\sin^2\theta\text{d}\phi^2). \end{eqnarray} For spin-half fields in curved spacetime, it is convenient to perform the calculation in the vielbein formalism. In the present work, we choose the vielbein as follows: \begin{eqnarray} e^\mu_{\,\,\,\hat{\alpha}}&=&\text{diag}\left(\frac{1}{\sqrt{f}}, \sqrt{f}, \frac{1}{r}, \frac{1}{r\sin\theta}\right). \end{eqnarray} Note that the indices without hats are curved spacetime indices, while those with hats are Lorentz indices. We treat the spin-1/2 field as a test field on the spherically symmetric background; the background is therefore fixed and the backreaction of the field can be neglected. The Dirac equation in a general curved spacetime is expressed as \begin{eqnarray} \Big[\gamma^\mu(\partial_\mu+\Gamma_\mu)+m\Big]\Psi=0,\label{Dirac eq} \end{eqnarray} where $\Psi$ and $m$ are the Dirac field and its mass, respectively, $\gamma^\mu$ are the $4\times4$ Dirac gamma matrices and $\Gamma_\mu$ is the spin connection given by \begin{eqnarray} \Gamma_\mu&=&\frac{1}{2}\,\omega_{\mu\hat{\alpha}\hat{\beta}}\,\Sigma^{\hat{\alpha}\hat{\beta}}, \end{eqnarray} with \begin{eqnarray} \omega_{\mu\hat{\alpha}\hat{\beta}}=e^\rho_{\,\,\,\hat{\alpha}}(\partial_\mu e_{\rho\hat{\beta}}-\Gamma^\sigma_{\mu\rho}e_{\sigma\hat{\beta}}),\hspace{1cm} \Sigma^{\hat{\alpha}\hat{\beta}}=\frac{1}{4}[\gamma^{\hat{\alpha}},\gamma^{\hat{\beta}}]. \end{eqnarray} $\Gamma^\rho_{\mu\nu}$ is the Christoffel symbol.
The representation of the Dirac gamma matrices $\gamma^{\hat{\alpha}}$ is chosen as follows \cite{Hung2015}: \begin{eqnarray} \gamma^{\hat{0}}=i\sigma^3\otimes\mathbbm{1},\hspace{1cm} \gamma^{\hat{1}}=\sigma^2\otimes\mathbbm{1},\hspace{1cm} \gamma^{\hat{2}}=\sigma^1\otimes\sigma^1,\hspace{1cm} \gamma^{\hat{3}}=-\sigma^1\otimes\sigma^2, \end{eqnarray} with the Pauli spin matrices \begin{eqnarray} \sigma^1=\left(\begin{array}{cc}0&1\\1&0\end{array}\right),\hspace{1cm} \sigma^2=\left(\begin{array}{cc}0&-i\\i&0\end{array}\right),\hspace{1cm} \sigma^3=\left(\begin{array}{cc}1&0\\0&-1\end{array}\right). \end{eqnarray} Using these, Eq. \eqref{Dirac eq} can be rewritten as \begin{eqnarray} \left[ \frac{1}{\sqrt{f}}(i\sigma^3\otimes\mathbbm{1})\partial_t +\sqrt{f}(\sigma^2\otimes\mathbbm{1})\partial_r +\frac{1}{r}(\sigma^1\otimes\sigma^1)\partial_\theta -\frac{1}{r\sin\theta}(\sigma^1\otimes\sigma^2)\partial_\phi \right.\hspace{.6cm}&&\nonumber\\ \left. +\frac{f'}{4\sqrt{f}}(\sigma^2\otimes\mathbbm{1}) +\frac{\sqrt{f}}{r}(\sigma^2\otimes\mathbbm{1}) +\frac{\cot\theta}{2r}(\sigma^1\otimes\sigma^1) +m\right]\Psi&=&0,\,\,\, \end{eqnarray} where the prime denotes the derivative with respect to $r$. One way to solve this equation is by separation of variables. Since the metric is spherically symmetric and does not depend on $t$, it is possible to separate the solution into radial, temporal and angular parts such that \begin{eqnarray} \Psi(t,r,\theta,\phi)=\left(\begin{array}{c}iA(r)\\B(r)\end{array}\right)e^{-i\omega t}\otimes\Theta(\theta,\phi), \end{eqnarray} where $A$ and $B$ are the radial functions, and $\omega$ is the angular frequency of the solution.
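As a quick sanity check on the gamma-matrix representation introduced above (a numerical sketch of ours, not part of the paper), one can verify that the four matrices close the Clifford algebra $\{\gamma^{\hat{\alpha}},\gamma^{\hat{\beta}}\}=2\eta^{\hat{\alpha}\hat{\beta}}\mathbbm{1}$ with the mostly-plus Lorentz metric $\eta^{\hat{\alpha}\hat{\beta}}=\mathrm{diag}(-1,1,1,1)$:

```python
import numpy as np

# Pauli matrices and the gamma representation quoted in the text
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

gammas = [
    1j * np.kron(s3, I2),   # gamma^0
    np.kron(s2, I2),        # gamma^1
    np.kron(s1, s1),        # gamma^2
    -np.kron(s1, s2),       # gamma^3
]

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # mostly-plus Lorentz metric

def anticommutator(a, b):
    return a @ b + b @ a

# Check {gamma^a, gamma^b} = 2 eta^{ab} * Identity for all index pairs
ok = all(
    np.allclose(anticommutator(gammas[a], gammas[b]), 2.0 * eta[a, b] * np.eye(4))
    for a in range(4) for b in range(4)
)
```

In particular $(\gamma^{\hat{0}})^2=-\mathbbm{1}$, consistent with $i\sigma^3\otimes\mathbbm{1}$ squaring to minus the identity.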
The spherically symmetric angular part $\Theta$ satisfies the eigenvalue equation for the Dirac field on a two-dimensional sphere, \begin{eqnarray} \left(\sigma^1\partial_\theta-\frac{\sigma^2}{\sin\theta}\partial_\phi+\frac{\cot\theta}{2}\sigma^1\right)\Theta=i\lambda\Theta, \end{eqnarray} where $\lambda=\pm1,\pm2,\pm3,\hdots$ are the corresponding eigenvalues. As a result, the radial equations for $A$ and $B$ can be written as \begin{eqnarray} \left[\left(f\partial_r+\frac{f'}{4}+\frac{f}{r}\right)\sigma^2 +\frac{i\lambda\sqrt{f}}{r}\sigma^1\right] \left(\begin{array}{c}iA\\B\end{array}\right) &=& -\left[\omega\sigma^3+m\sqrt{f}\mathbbm{1}\right] \left(\begin{array}{c}iA\\B\end{array}\right).\label{Dirac eq2} \end{eqnarray} This radial equation is still complicated. In order to simplify it, one can introduce a function $C(r)$ to eliminate the terms $\frac{f'}{4}+\frac{f}{r}$. This function must satisfy the condition $f\partial_rC+\frac{f'}{4}C+\frac{f}{r}C=0$. For the case of the dRGT massive gravity, $C(r)$ takes the form \begin{eqnarray} C(r)=C_0r^{-3/4}\Big[r \left(\Lambda r^2+3\gamma r+3\zeta+3\right)-6 M\Big]^{-1/4}, \end{eqnarray} where $C_0$ is an integration constant. By setting \begin{eqnarray} \left(\begin{array}{c}B/C\\A/C\end{array}\right) =\left(\sin\frac{\vartheta}{2}\,\sigma^3+\cos\frac{\vartheta}{2}\,\sigma^1\right)\left(\begin{array}{c}\tilde{B}\\\tilde{A}\end{array}\right), \end{eqnarray} where $\vartheta=\tan^{-1}(-mr/\lambda)$ is a mixing angle (not to be confused with the polar angle $\theta$), Eq. \eqref{Dirac eq2} is then simplified to \begin{eqnarray} f\partial_r\tilde{B}+a\tilde{B}&=&-\omega b\tilde{A},\\ f\partial_r\tilde{A}-a\tilde{A}&=&\omega b\tilde{B}, \end{eqnarray} where \begin{eqnarray} a=\frac{\sqrt{f}}{r}\sqrt{\lambda^2+m^2r^2}, \hspace{1cm} b=1+\frac{fm\lambda}{2\omega(m^2r^2+\lambda^2)}. \end{eqnarray} Let us introduce a new coordinate $x$, called the tortoise coordinate. This coordinate is related to the radial coordinate $r$ via \begin{eqnarray} f\,\partial_r=b\,\partial_{x}.
\end{eqnarray} Note that $x\to-\infty$ as $r$ approaches the event horizon and $x\to\infty$ as $r$ approaches the cosmological horizon. The range of the new coordinate is thus extended from $-\infty$ to $\infty$. Eventually, the decoupled radial equations are obtained as \begin{eqnarray} \left(-\partial^2_{x}+V_+\right)\tilde{A}&=&\omega^2\tilde{A},\\ \left(-\partial^2_{x}+V_-\right)\tilde{B}&=&\omega^2\tilde{B}. \end{eqnarray} Note that these are Schr\"{o}dinger-like equations with the effective potentials \begin{eqnarray} V_\pm=\pm\partial_{x}\left(\frac{a}{b}\right)+\left(\frac{a}{b}\right)^2. \end{eqnarray} We also note that the two potentials, $V_+$ and $V_-$, are obtained from the same function $a/b$, called the superpotential. This means that the potentials $V_+$ and $V_-$ are supersymmetric partners \cite{CKS1995}, and they thus give the same spectra of QNMs \cite{Zhou2014}. For the case of the massless Dirac field ($m=0$), the potential is expressed as \begin{eqnarray}\label{Veff} V_\pm &=&\pm f\,\partial_r\left(\frac{\sqrt{f}}{r}\lambda\right)+\frac{f}{r^2}\lambda^2. \end{eqnarray} Substituting $f=1-\frac{2 M}{r}+\frac{\Lambda}{3}r^2+\gamma r+\zeta$, the potential can be written as \begin{eqnarray} V_\pm &=&\pm f\left[\lambda\,\left(\frac{6M-r(2+\gamma r+2\zeta)}{2r^3\sqrt{f}}\right)+\lambda^2\left(\frac{1}{r^2}\right)\right]. \label{potential} \end{eqnarray} Note that Eq.~(\ref{Veff}) is consistent with the Dirac perturbation in the GR case when the dRGT parameters vanish \cite{Cho:2005yc}. In this work, we choose to study the QNMs with the potential $V_+$. By using the parameters defined in the previous section, one finds that there are three crucial parameters: $c_2$, $\beta_m$ and $\lambda$. Moreover, we will see that the potential vanishes at the horizons, since $f=0$ there, and approaches a constant value in the large $r$ (or $\tilde{r}$) limit. Note that the potential is valid only in the range $f\geq0$ due to the presence of $\sqrt{f}$.
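These statements can be checked numerically. The sketch below is our own illustration (it is not the paper's code): the parameter values $\tilde{M}=\alpha_g=1$, $c_2=-0.02/3$, $\beta_m=0.8$, $\lambda=1$ match those used in the figures, the horizons are found by bisection on $f$, and the derivative in $V_+$ is approximated by central finite differences.

```python
import numpy as np

# Illustrative dS-like parameters: Mt = alpha_g = 1, c2 = -0.02/3, beta_m = 0.8, lam = 1
Mt, alpha_g, c2, beta_m, lam = 1.0, 1.0, -0.02 / 3, 0.8, 1
c1 = 3.0 * (4.0 * c2**2) ** (1.0 / 3.0)
c0 = (9.0 / np.sqrt(3.0)) * (2.0 * abs(c2)) ** (1.0 / 3.0) / beta_m - 1.0 / alpha_g

def f(r):
    """Dimensionless dRGT metric function f(r~)."""
    return 1.0 - 2.0 * Mt / r + alpha_g * (c2 * r**2 - c1 * r + c0)

def bisect(func, a, b, iters=200):
    """Simple bisection root finder; assumes func changes sign on [a, b]."""
    fa = func(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        if func(m) * fa > 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

r_event = bisect(f, 1.1, 3.0)    # black hole event horizon
r_cosmo = bisect(f, 4.0, 15.0)   # cosmological horizon

def V_plus(r, h=1e-6):
    """Massless Dirac potential V_+ = f d/dr(sqrt(f) lam / r) + f lam^2 / r^2."""
    g = lambda s: np.sqrt(f(s)) * lam / s
    return f(r) * (g(r + h) - g(r - h)) / (2.0 * h) + f(r) * lam**2 / r**2

r = np.linspace(r_event + 1e-3, r_cosmo - 1e-3, 2000)
V = V_plus(r)
```

One finds that $V_+$ indeed vanishes at both horizons and forms a barrier with an interior maximum between them.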
By Eq.~\eqref{potential}, it is obvious that the potential is higher when the parameter $\lambda$ is larger. This behavior is also illustrated in the left-hand panel of Fig. \ref{fig:V-lam-bm}. From the right-hand panel of this figure, one can see that the potential becomes lower as the parameter $\beta_m$ increases towards $1$. The parameter $c_2$ controls the strength of the graviton mass or, equivalently, the cosmological constant. As shown in the left-hand panel of Fig. \ref{fig:fV-c2}, smaller values of $|c_2|$ push the cosmological horizon to larger distances. This mirrors the cosmological situation: the cosmological constant is very small, so the cosmological horizon is very far away. This means that gravity is significantly modified only on very large scales. As a result, the potential becomes wider and lower as $|c_2| \rightarrow 0$, as shown in the right-hand panel of Fig. \ref{fig:fV-c2}. \begin{figure} \begin{center} \includegraphics[scale=0.48]{V-lambda.pdf}\qquad \includegraphics[scale=0.5]{V-bm.pdf} \end{center} {\caption{The left-hand panel shows the potential for different values of the parameter $\lambda$,\\ with $\beta_m =0.8$ and $c_2 =-0.02/3 $. The right-hand panel shows the potential \\for different values of $\beta_m$ with $\lambda =1$ and $c_2 =-0.02/3 $.}\label{fig:V-lam-bm}} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.5]{f-c2.pdf}\qquad \includegraphics[scale=0.5]{V-c2.pdf} \end{center} {\caption{The left-hand panel shows the horizon structure for different values of the parameter $c_2$ with $\beta_m =0.8$ and $\lambda =1$. The right-hand panel shows the potential\\ for different values of the parameter $c_2$ with $\beta_m =0.8$ and $\lambda =1 $.}\label{fig:fV-c2}} \end{figure} \section{Quasinormal modes}\label{sec:QNM} In black hole perturbation theory for spherically symmetric spacetimes, the radial equations can always be represented in a Schr\"{o}dinger-like form with an effective potential.
The effective potential is determined by the spin of the perturbing field and the specific kind of black hole. The QNM spectrum is one of the important physical properties of the wave and can be obtained by solving the radial equation. Therefore, the shape of the potential characterizes the QNMs and vice versa. In a rough classification, there are two main types of effective potential corresponding to the various fields in spherically symmetric spacetimes. The first is the barrier-like effective potential, which features a local maximum and either vanishes asymptotically or converges, at spatial infinity or at the cosmological horizon, to a value smaller than the maximum. The QNMs are obtained by imposing the boundary conditions of purely ingoing waves at the black hole event horizon and purely outgoing waves at spatial infinity or at the cosmological horizon. The physical phenomena associated with the QNMs of this type of potential are the ringdown profile of black hole mergers and the damping rate of the late-time tail of the propagating waves (i.e. waves with positive real and negative imaginary frequencies). The second type of effective potential diverges, or attains a finite maximum, at spatial infinity, and usually relates to AdS black holes. The boundary conditions for obtaining the QNMs are slightly different from those of the previous type of potential. In this case, we require purely ingoing waves at the black hole event horizon, and Dirichlet or vanishing-energy-flux boundary conditions at spatial infinity. The QNMs of this potential are suggested to be linked with the AdS/CFT correspondence. For a more detailed discussion of black hole QNMs, we refer the reader to the review articles \cite{BCS2009,KZ2011}. The dRGT black hole model contains both types of effective potential. The asymptotically AdS case, however, contains more complicated structures \cite{Ghosh:2015cva}.
The study of QNMs for the asymptotically AdS case is model dependent and beyond the scope of our current study. As a result, we focus on the dS-like (asymptotically dS) solutions, in which the effective potential is always barrier-like, as mentioned in the previous section. In this case, it is useful to study how the structure of the graviton mass can produce significant deviations from the Schwarzschild-dS case. We study the Dirac QNMs using the 3rd order WKB approximation by Iyer and Will \cite{iyewil}, the 6th order WKB approximation by Konoplya \cite{kono2003}, and the recent revised WKB approach with Pad\'{e} approximation by Konoplya, Zhidenko and Zinhailo \cite{KZZ2019}. These methods are powerful tools for studying barrier-like potentials with the aforementioned boundary conditions. We choose the effective potential $V_{+}$ in Eq.~(\ref{Veff}) to evaluate the QNMs, while the potential $V_{-}$ is its super-symmetric partner potential, as mentioned in the previous section. The metric element $f$ is given by Eq.~(\ref{fdRGT}). We take the black hole mass $\tilde{M}=1$, the scaling parameter $\alpha_{g}=1$, and set the parameters $c_{0}$ and $c_{1}$ as presented in Eq.~(\ref{c1c0}). Three free parameters remain, each with a clear physical meaning: the angular momentum parameter $\lambda=l+1$, $l=0,\ \pm1,\ \pm2,\hdots$, which is based on the spin-1/2 eigenvalues on the two-dimensional sphere; the strength parameter of the graviton mass, $c_{2}$, which is analogous to the cosmological constant in traditional GR; and the free parameter $\beta_{m}$, which ensures that the effective potential is always dominated by barrier-like behavior between the black hole and cosmological horizons when $0<\beta_{m}<1$. We evaluate two different sets of parameters for the QNMs. In the first set, we fix $c_{2}=-\frac{0.01}{3}$ and vary the parameter $\beta_m$ as follows: $\beta_{m}=\ 0.5,\ 0.6,\ 0.7,\ 0.8,\ 0.85, \ 0.9$ and $0.95$.
This set of parameters allows us to compare, straightforwardly, the results in the dRGT case with those for the Schwarzschild-dS solution when the cosmological constant is $\Lambda=0.01$. The low-lying modes with the 3rd and 6th order WKB approximation, the 6th and 13th order revised WKB approaches with Pad\'{e} approximation, as well as the reference modes of the Schwarzschild-dS cases, are explicitly presented in Tables~\ref{Tab1}, \ref{Tab2}, \ref{Tab3}, \ref{Tab4}, \ref{Tab5} and \ref{Tab6} in Appendix A. The corresponding effective potentials and the evolution of these modes with the 6th order WKB approximation are also presented in Figs.~\ref{fig:QNM1} and \ref{fig:QNM2}, for low and high $\beta_m$, respectively. For fixed $l$ and $n$, where $n$ is the mode number, the QNM frequencies shift to smaller real parts and smaller absolute values of the imaginary parts when $\beta_{m}$ increases. This means that the propagating QNM wave oscillates and decays more slowly for lower effective potentials. Moreover, it is found that, for fixed $l$ and increasing $n$, the real parts of the QNM frequencies decrease while the absolute values of the imaginary parts increase. This implies that the oscillation frequency of the propagating wave is smaller and the damping rate larger for higher $n$ modes. These properties can be checked analytically from the 3rd order WKB expression for the frequency, \begin{equation} \omega^{2}=\left[V_{0}+\left(-2V''_{0}\right)^{1/2}\Lambda\right]-i\left(n+\frac{1}{2}\right)\left(-2V''_{0}\right)^{1/2}\left(1+\Omega\right).\label{3rd WKB freq} \end{equation} Here, $V_{0}$ denotes the maximum of $V_{+}$, and the prime now denotes the derivative with respect to the tortoise coordinate.
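As a concrete numerical illustration of Eq.~(\ref{3rd WKB freq}), the sketch below evaluates the 3rd-order WKB frequency from numerical derivatives of a toy barrier, using the standard Iyer--Will coefficients for the correction functions $\Lambda$ and $\Omega$ written out below. The P\"oschl--Teller potential is chosen only because its QNM spectrum is known in closed form, $\omega=\sqrt{V_{0}-1/4}-i\left(n+1/2\right)$ for unit width; this is a minimal sketch, not the code used to produce the tables in Appendix A.

```python
import numpy as np

def wkb3_frequency(V, x0, n=0, h=0.05):
    """3rd-order WKB QNM frequency (Iyer & Will) from numerical
    derivatives of the effective potential V at its peak x0."""
    # Fit a degree-8 polynomial through 9 points around the peak
    # (in units of h, for good conditioning) to extract derivatives.
    u = np.arange(-4.0, 5.0)
    p = np.poly1d(np.polyfit(u, V(x0 + h * u), 8))
    d = [np.polyval(np.polyder(p, k), 0.0) / h**k for k in range(7)]
    V0, V2, V3, V4, V5, V6 = d[0], d[2], d[3], d[4], d[5], d[6]

    a2 = (n + 0.5) ** 2
    # Correction functions Lambda and Omega (standard Iyer-Will form):
    Lam = (1.0 / np.sqrt(-2.0 * V2)) * (
        (V4 / V2) * (0.25 + a2) / 8.0
        - (V3 / V2) ** 2 * (7.0 + 60.0 * a2) / 288.0)
    Om = (1.0 / (-2.0 * V2)) * (
        (V3 / V2) ** 4 * (77.0 + 188.0 * a2) * 5.0 / 6912.0
        - (V3**2 * V4 / V2**3) * (51.0 + 100.0 * a2) / 384.0
        + (V4 / V2) ** 2 * (67.0 + 68.0 * a2) / 2304.0
        + (V3 * V5 / V2**2) * (19.0 + 28.0 * a2) / 288.0
        - (V6 / V2) * (5.0 + 4.0 * a2) / 288.0)
    w2 = (V0 + np.sqrt(-2.0 * V2) * Lam
          - 1j * (n + 0.5) * np.sqrt(-2.0 * V2) * (1.0 + Om))
    return np.sqrt(w2)

# Fundamental (n = 0) mode of the Poeschl-Teller barrier V0 / cosh^2(x):
V0 = 5.0
omega = wkb3_frequency(lambda x: V0 / np.cosh(x) ** 2, 0.0, n=0)
```

For $V_{0}=5$ the routine reproduces the exact fundamental mode $\omega\simeq2.179-0.500i$ to about three decimal places, illustrating the accuracy of the WKB scheme for smooth barriers.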
The functions $\Lambda$ and $\Omega$ can be expressed as \begin{eqnarray} \Lambda &=&\frac{1}{\left(-2V''_{0}\right)^{1/2}}\left[\frac{1}{8}\left(\frac{V_{0}^{(4)}}{V''_{0}}\right)\left(\frac{1}{4}+\alpha^{2}\right) -\frac{1}{288}\left(\frac{V'''_{0}}{V''_{0}}\right)^{2}\left(7+60\alpha^{2}\right)\right],\nonumber\\ \Omega &=&\frac{1}{\left(-2V''_{0}\right)}\Bigg[\frac{5}{6912}\left(\frac{V'''_{0}}{V''_{0}}\right)^{4}\left(77+188\alpha^{2}\right)-\frac{1}{384} \left(\frac{V_{0}^{'''2} V_{0}^{(4)}}{V_{0}^{''3}}\right)\left(51+100\alpha^{2}\right)\nonumber\\ &&+\frac{1}{2304}\left(\frac{V_{0}^{(4)}}{V''_{0}}\right)^{2}\left(67+68\alpha^{2}\right)+\frac{1}{288}\left(\frac{V'''_{0}V_{0}^{(5)}}{V_{0}^{''2}}\right) \left(19+28\alpha^{2}\right)-\frac{1}{288}\left(\frac{V_{0}^{(6)}}{V''_{0}}\right)\left(5+4\alpha^{2}\right)\Bigg],\nonumber\\ \end{eqnarray} where $\alpha=n+1/2$, $n=0,1,2,...$. One can see that the leading contribution comes from $V_0$, the maximum value of the potential. Higher values of $V_0$ lead to larger absolute values of the imaginary part of $\omega$, so the corresponding QNM waves decay faster. These behaviors are also consistent with the general expectation in traditional GR cases, as in \cite{Zhi2003,Cho:2005yc,CCDW2009}. A comparison of the $n=l=0$ mode with the Schwarzschild-dS case is presented in Fig.~\ref{fig:QNM3}. The frequencies obtained in the Schwarzschild-dS case lie approximately in the linear region of the parameter space of the dRGT model. In other words, the results for the Schwarzschild-dS case form a subclass of those for the dRGT model. For example, the lowest QNM frequency of the Schwarzschild-dS case with $\Lambda=0.01$ corresponds to that of the dRGT case with $c_2=-0.01/3$ and $\beta_m\sim0.77$. This implies that it is possible to obtain faster or slower decay rates of the wave for dRGT black holes compared to the Schwarzschild-dS black hole. \begin{figure}[h!]
\begin{center} \includegraphics[scale=0.6]{veffb5678.pdf}\qquad \includegraphics[scale=0.8]{QNMB5678.pdf} \end{center} {\caption{The left-hand panel shows the effective potential with $c_{2}=-\frac{0.01}{3}$, $l=0$ and \\various low values of $\beta_{m}$. The right-hand panel shows the related low-lying QNMs.}\label{fig:QNM1}} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=0.6]{veffb89.pdf}\qquad \includegraphics[scale=0.8]{QNMB89.pdf} \end{center} {\caption{The left-hand panel shows the effective potential with $c_{2}=-\frac{0.01}{3}$, $l=0$ and \\various high values of $\beta_{m}$. The right-hand panel shows the related low-lying QNMs.}\label{fig:QNM2}} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[scale=1]{QNMB00} \end{center} {\caption{Comparison of the lowest QNM frequency ($n=0$ and $l=0$) for the dRGT \\ black hole with $c_{2}=-\frac{0.01}{3}$ and that for the Schwarzschild-de Sitter solution, with $\Lambda=0.01$.}\label{fig:QNM3}} \end{figure} For the second set, we fix $\beta_{m}=0.8$ and vary the parameter $c_2$ as follows: $c_{2}=-\frac{0.02}{3},\ -\frac{0.03}{3},\ -\frac{0.04}{3}$, $-\frac{0.05}{3}$ and $-\frac{0.06}{3}$. The results of the low-lying modes with the 3rd and 6th order WKB approximation, and the 6th and 13th order revised WKB approaches with Pad\'{e} approximation, are explicitly listed in Tables~\ref{Tab7}, \ref{Tab8}, \ref{Tab9}, \ref{Tab10}, \ref{Tab11} and \ref{Tab12} in Appendix A. The 6th order WKB approximation results and the corresponding effective potential are shown in Fig.~\ref{fig:QNM4}. From this figure, one can see that, for the low-lying modes with fixed $n$ and $l$, the real parts shift to larger values and the absolute values of the imaginary parts become larger as $|c_2|$ increases. Again, the same behavior as in the fixed-$c_{2}$, varying-$\beta_{m}$ case is obtained: the higher the potential, the faster the decay rate of the wave. \begin{figure}[h!]
\begin{center} \includegraphics[scale=0.6]{veffc23456.pdf}\qquad \includegraphics[scale=0.8]{QNMC23456.pdf} \end{center} {\caption{The left-hand panel shows the effective potential with $\beta_{m}=0.8$, $l=0$ and various values of $c_{2}$. The right-hand panel shows the related low-lying QNMs.}\label{fig:QNM4}} \end{figure} It is worthwhile to note that, for the dRGT black hole, there exists a finite potential well between the maximum point and the cosmological horizon. However, the dominant part is still positive, so the potential can be approximated as barrier-like. The potential well does not influence the behavior of the QNMs evaluated by the WKB formula, as discussed in \cite{KZZ2019}. Note that if the finite potential well were strong enough to influence the QNM behavior, it would correspond to the ``three turning points'' effective potential mentioned in \cite{GM1992}. This is worthy of further study as an extension of the present work. As a remark, the results were evaluated with Wolfram Mathematica, versions $8.0$ to $11.0$. The data are consistent and reproducible only when the numerical input is specified with sufficiently high precision, since the default precision setting of Wolfram Mathematica is not sufficient to yield credible results. The details of the results for the different precision settings used in the numerical calculations are listed in Appendix B. \section{Conclusion}\label{sec:conclude} In this paper, we studied QNMs from the Dirac perturbation around the black hole in dRGT massive gravity, called the dRGT black hole. For the dRGT black hole, a part of the graviton mass can play the role of the cosmological constant, so that the dRGT black hole solution is asymptotically AdS/dS, as expressed in Eq.~\eqref{f sol} and Eq.~\eqref{fdRGT}. Conveniently, we characterized the effect of the graviton mass by two parameters, $c_2$ and $\beta_m$.
The parameter $c_2$ characterizes the strength of the graviton mass and $\beta_m$ ($0<\beta_m<1$) determines the existence of the horizons. For the Dirac perturbation, we derived the radial Schr\"{o}dinger-like equation and obtained two effective potentials $V_{\pm}$. These are super-symmetric partner potentials and therefore give the same spectrum of QNMs; we worked with $V_+$ to obtain the QNMs in this work. The shape of the effective potential is determined by three parameters: $c_2$ and $\beta_m$ characterize the graviton mass, and $\lambda$ is the eigenvalue of the angular part of the perturbation equations, corresponding to angular momentum. We restricted our consideration to the asymptotically dS spacetime, so that the potential is barrier-like, with $c_2<0$ and $0<\beta_m<1$. This allowed us to use the WKB method to calculate the QNMs. We first analyzed the behavior of the QNMs by considering how the QNM frequency changes with the shape of the potential. We found that the behavior of the frequency is similar to that in quantum mechanics: the higher the potential, the faster the wave decays (i.e., the larger the absolute value of the imaginary part of the frequency). In terms of the graviton mass parameters, a higher potential corresponds to lower values of $\beta_m$ and $c_2$. For the angular momentum parameter $\lambda$, a higher potential corresponds to larger values of $\lambda$. This behavior can be roughly seen from Eq.~\eqref{3rd WKB freq}. In the second part of our study, numerical calculations were performed, which confirmed our analytic results. For the numerical calculations, the QNMs were evaluated up to the 3rd and 6th orders of the WKB approximation. We also checked our numerical calculations using the Pad\'{e} approximation up to the 6th and 13th orders. We found that all results are in good agreement.
Since the dRGT black hole solution is asymptotically dS, we also compared our results to those for the Schwarzschild-dS black hole in order to distinguish between them. The Dirac QNMs for the Schwarzschild-dS black hole lie approximately within a subset of the dRGT results, as shown in Fig.~\ref{fig:QNM3} as well as in Tables \ref{Tab1}, \ref{Tab2}, \ref{Tab3}, \ref{Tab4}, \ref{Tab5} and \ref{Tab6} in Appendix A. In other words, the QNM frequencies for the dRGT black hole can be larger or smaller than those for Schwarzschild-dS, depending on the graviton mass parameters. Indeed, for the dRGT black hole, it is possible to obtain faster or slower decay rates of the wave compared to the Schwarzschild-dS black hole. This is due to the fact that the dRGT solution has more free parameters than the single free parameter of the Schwarzschild-dS solution. This provides one possible way to distinguish or test this kind of modified gravity theory. For example, it is possible to check how the QNMs during the ringdown phase of black hole mergers deviate from those in the Schwarzschild solution \cite{Card2019}. It is worthwhile to compare our results with black hole solutions in other kinds of massive gravity theories, such as the one in \cite{fern2015}. However, the black hole solution in \cite{fern2015} is asymptotically flat, which is very different from the dRGT solution. Therefore, the potential is crucially different, even though it is also barrier-like. Nevertheless, it is possible to apply our analysis to the solution in \cite{fern2015}. Indeed, the result agrees with the argument that higher potentials make the QNM waves decay faster than lower ones. In order to characterize the dynamics of the black hole, it is possible to consider other quantities such as the transmission probability or greybody factor. These quantities tell us how a wave emitted near the black hole propagates away from it.
The properties of the black hole can then be identified through the potential in the master equation of the Dirac perturbation. We leave this investigation for future work. Furthermore, the solution with asymptotically AdS spacetime is also interesting. In this case, it is more complicated to calculate the QNMs, since the proper boundary conditions for the modes must be imposed. Moreover, there are three possible horizons for the dRGT black hole, so the boundary conditions must be carefully specified. Even though it might be complicated to perform the QNM analysis in this case, it is interesting in the context of the AdS/CFT correspondence. The QNM frequencies or the graviton mass may correspond to quantities in the dual field theory and may provide imprints of a quantum gravity theory. Furthermore, perturbations by other fermionic test fields are also of interest, for example, massive gravitino perturbations.
\section{Digital feedback control in quantum computing} Moving from proof-of-principle demonstrations of quantum gates and algorithms to fully fledged quantum hardware requires closing the loop between qubit measurement and control. There are different categories of quantum feedback control, depending on the type of measurement and feedback law used. For clarity, we first offer a classification of quantum feedback, similarly to that used in classical feedback. Then, we focus on the particular class of discrete-time, digital feedback. \subsection{Classification of quantum feedback} A first distinction is between \hl{continuous-time} and \hl{discrete-time feedback}. In the first case, measurement and control are continuous in time and concurrent. An example is the stabilization of a qubit state using continuous partial measurement, as discussed in Refs.~\cite{Gillett10, Sayrin11, Bushev06, Koch10, Brakhane12}. In discrete-time feedback, instead, the conditional control is applied only after a measurement has been performed and processed. Here, we focus exclusively on discrete-time implementations. This class can be further divided into two categories, analog and digital. We speak of \hl{analog feedback} when the measurement result assumes a continuum of values and the feedback law is a continuous function of the result. An example is the experiment in Ref.~\citenum{deLange14}, where the feedback controller first integrates the signal produced by a weak measurement and then applies the resulting coherent operation on the qubit. If the measurement has a finite set of possible results, instead, the possible feedback actions are also finite. We refer to this as \hl{digital feedback}. The simplest example is qubit reset (section~\ref{sec:reset}), in which a strong projective measurement collapses the qubit into either the ground or excited state. Here, a $\pi$ rotation brings the qubit to ground. 
Another interesting example is digital feedback using ancilla-based partial measurement~\cite{Blok13, Groen13}. In this case, the measurement output is discrete, showing that partial measurement is not necessarily associated with analog feedback. In many applications, digital feedback forces determinism into one of the most controversial aspects of quantum mechanics, namely the measurement, whose result is intrinsically probabilistic. Looking at the action of digital feedback as a black box, we expect to see a definite output qubit state for a given input. In an ideal feedback scheme, measurement results and the conditioned operations vary at every run of the protocol, but the overall process is deterministic and the output state is always the same. For example, one can project a two-qubit superposition to a specific Bell state by combining a parity measurement with digital feedback (section~\ref{sec:ebm}). \subsection{Protocols using digital feedback} \label{sec:fb_protocols} Several quantum information processing (QIP) protocols call for digital feedback. One of the requirements for a quantum computer is efficient qubit initialization~\cite{DiVincenzo00}. Often, the steady state of a qubit does not correspond to a pure computational state $\ket{0}$ or $\ket{1}$, but rather to a mixture of the two. Therefore, active initialization methods have been used in many QIP architectures. Examples are laser or microwave initialization~\cite{Monroe95, Atature06, Valenzuela06, Manucharyan09} and initialization by control of the qubit relaxation rate~\cite{Reed10b, Mariantoni11}. An alternative method, recently used with NV centers in diamond~\cite{Robledo11} and superconducting qubits (section~\ref{sec:readout}), relies on projective measurement to initialize the qubits into a pure state. However, measurement alone cannot produce the desired state with certainty, since the measurement result is probabilistic.
Closing a feedback loop based on this measurement turns the unwanted outcomes into the desired state. A qubit register must not only be initialized in a pure state at the beginning of computation, but often also during the computation. For example, performing multiple rounds of error correction is facilitated by resetting ancilla qubits to their ground state after each parity check~\cite{Schindler11}. When using a qubit as a detector (e.g. of charge~\cite{Riste13} or photon parity~\cite{Sun14}), a fast reset can be used to increase the sampling rate without keeping track of past measurement outcomes. Similarly, in the multi-qubit setting, digital feedback is key to turning measurement-based protocols from probabilistic to deterministic. An example is the generation of entanglement by parity measurement~\cite{Ruskov03}. A parity measurement projects an initial maximal superposition state into an entangled state with a well-defined parity, i.e., with either even or odd total number of qubit excitations (section~\ref{sec:ebm}). However, once again, the outcome of the parity measurement is random. When running the protocol open-loop multiple times, the average final state has no specific parity and is unentangled. Only by forcing a definite parity using feedback can one generate a target entangled state deterministically. A variation of closed-loop control, named \hl{feedforward}, applies control on qubits different from those measured. Feedforward schemes have already found application in quantum communication, where the main objective is the secure transmission of quantum information over a distance. In quantum teleportation, a measurement on the Bell basis of two qubits projects a third qubit, at any distance, into the state of the first, to within a single-qubit rotation~\cite{Nielsen00}. The measurement result determines which qubit rotation, if any, must be applied to teleport the original state.
An extension of teleportation is entanglement swapping~\cite{Nielsen00}. This protocol transfers entanglement to two qubits which never interact, and forms the basis for quantum repeaters~\cite{Briegel98}, aiming to distribute entanglement across larger distances than allowed by a lossy communication channel. Here, measurement and feedback are used in every step to first purify~\cite{Bennett96} and then deterministically transfer entangled pairs to progressively farther nodes. In quantum computing, feedforward operations are at the basis of the first schemes devised to protect a qubit state from errors. The simplest protocol is the bit-flip code~\cite{Mermin07}, which encodes the quantum state of one qubit into an entangled state of three, and uses measurement of two-qubit operators (syndromes) in combination with feedback to correct for $\sigma_x$ (bit-flip) errors. Of similar structure is the phase-flip code, which protects against $\sigma_z$ (phase-flip) errors. To protect against errors on any axis, the minimum size of the encoding is five qubits. In \hl{quantum error correction}, projective measurement is more than a tool to detect discrete errors that have already occurred. In fact, the measurement serves to discretize the set of possible errors. Measuring the error syndromes forces one and only one of these errors to happen. This greatly simplifies the feedback step, which is now restricted to a finite set of correcting actions. While few-qubit error correction schemes are capable of correcting any single error, they require currently inaccessible measurement and gate fidelities. A more realistic approach is offered by topologically protected circuits such as surface codes~\cite{Fowler12}, where errors as high as $1\%$ are tolerated at the expense of requiring a larger number of physical qubits~\cite{Wang11}.
One cycle in a surface code, aimed at maintaining a logical state encoded in a square lattice of qubits, includes the projective measurements of 4-qubit operators as error syndromes. When an error is detected on a data qubit, the corrective, coherent feedback operation is replaced by a change of sign in the operators for the following syndrome measurements involving that qubit. In other words, errors are kept track of by the classical controller rather than fixed~\cite{Kelly15, Riste15}. Beyond protecting a state from external perturbations, performing fault-tolerant quantum computing will require robustness to gate errors. In surface codes, single- and two-qubit gates on logical qubits are also based on projective measurements and in some cases require digital feedback to apply conditional rotations~\cite{Fowler12b}. In addition to the gate model~\cite{DiVincenzo00}, digital feedback is central to the paradigm of \hl{measurement-based quantum computing}~\cite{Briegel09}. In this approach, also called one-way computation, the initial state is an entangled state of a large number of qubits. All logical operations are performed by projective measurements. To make computation deterministic, feedback selects the measurement bases at each computational step, conditional on the measurement results. \subsection{Experimental realizations of digital feedback} Digital feedback has been employed for entanglement swapping with trapped ions~\cite{Riebe08} and for the unconditional teleportation of photonic~\cite{Furusawa98}, ionic~\cite{Barrett04, Riebe04}, and atomic~\cite{Sherson06, Krauter13} qubits. In linear optics, feedforward has been used to implement segments of one-way quantum computing~\cite{Tame06, Prevedel07, Chen07, Vallone08, Ukai11, Bell14} and for photon multiplexing~\cite{Vitelli13}. In the solid state, the first approach to feedback, of the analog type, was used to stabilize Rabi oscillations of a superconducting qubit~\cite{Vijay12}. 
Soon after, digital feedback with high-fidelity projective measurement was introduced in the solid state, also using superconducting circuits~\cite{Riste12b, CampagneIbarcq13}. Recently, digital feedback has been extended to multi-qubit protocols with superconducting qubits (section~\ref{sec:ebm} and Ref.~\citenum{Steffen13}) and NV centers in diamond~\cite{Pfaff14}. \subsection{Concepts in digital feedback} \label{sec:fbbasics} The basic ingredients for a digital feedback loop are: 1) \hl{projective} qubit \hl{readout} and 2) control conditional on the measurement result (see Fig.~\ref{fig:1_book}\textbf{a} for the simplest single-qubit loop). The main challenge for (1) is to obtain a high-fidelity readout which is also nondemolition, thus leaving the qubits in a state consistent with the measurement result. A mismatch between measurement result and post-measurement qubit state will trigger the wrong feedback action (Fig.~\ref{fig:1_book}\textbf{b}). The requirement for (2) is to minimize the time, or \hl{latency}, between measurement and conditional action. Various sources contribute to latency: the time for the signal to travel from the sample to the feedback controller, the time for the feedback controller to process the signal and discretize it, and the delay to the execution of the conditional qubit gates. If a transition between levels occurs in one of the measured qubits during this interval, for instance because of spontaneous relaxation, its state becomes inconsistent with the chosen feedback action, resulting in the wrong final state (Fig.~\ref{fig:1_book}\textbf{c}). In feedforward protocols, such as error correction or teleportation, the feedback action is applied to data qubits, which are different from the measured ancilla qubits. In this case, the loop must also be fast compared to the coherence times of data qubits. 
\begin{figure*} \centering \includegraphics[width=0.3\columnwidth]{Fig1a_Fb} \includegraphics[width=0.3\columnwidth]{Fig1b_Fb} \includegraphics[width=0.3\columnwidth]{Fig1c_Fb} \caption{\label{fig:1_book} \textbf{Concept of a single-qubit digital feedback loop and possible errors.} \textbf{a,} The measurement is digitized into either $H$ or $L$ for the qubit declared in $\ket{0}$ or $\ket{1}$, respectively. A different unitary rotation is applied for each result. Errors occur in the case of wrong measurement assignment (\textbf{b}) or qubit relaxation between measurement and action (\textbf{c}). Top (bottom) row indicates the actual qubit state corresponding to result $H$ ($L$).} \end{figure*} The simplest example of digital feedback is single-qubit \hl{reset}. Here, the qubit is projected by measurement onto $\ket{0}$ or $\ket{1}$ and, depending on the targeted state, a $\pi$ pulse is applied conditional on the measurement result. In this example, we consider the effect of the errors in Fig.~\ref{fig:1_book}~\textbf{b},\textbf{c}, modeling the qubit as a classical three-level system, where the third level includes the possibility of transitions out of the qubit subspace. This is relevant in the case of transmon qubits with a sizeable steady-state excitation~\cite{Riste12b, CampagneIbarcq13}. We indicate with $p^M_{ij}$ the probability of obtaining the measurement result $M$ with initial state $\ket{i}$ and post-measurement state $\ket{j}$. With $\Gamma_{ij}$ we indicate the transition rates from $\ket{i}$ to $\ket{j}$, and with $\tau_\mathrm{Fb}$ the time between the end of measurement and the end of the conditional operation.
For perfect pulses, the combined errors $P_{\mathrm{err}}^{\theta}$ for initial state $\cos(\theta)\ket{0} + \sin(\theta)\ket{1}$ are, to first order: \begin{equation} \label{eq:fb} \begin{aligned} &P_{\mathrm{err}}^{\theta=0} = p^L_{00}+p^H_{01}+\Gamma_{01}\tau_\mathrm{Fb}, \\ &P_{\mathrm{err}}^{\theta=\pi}= p^H_{11}+p^L_{10}+p_{12}+(\Gamma_{10}+\Gamma_{12})\tau_\mathrm{Fb}, \end{aligned} \end{equation} and weighted combinations thereof for other $\theta$. A simple way to improve feedback fidelity is to perform two cycles back to back. While the dominant error for $\theta = 0$ remains unchanged, for $\theta = \pi$ it decreases to $P_{\mathrm{err}}^{\theta=0}+p_{12}+\Gamma_{12}\tau_\mathrm{Fb}$. The second cycle compensates errors arising from relaxation to $\ket{0}$ between measurement and pulse in the first cycle. However, it does not correct for excitation from $\ket{1}$ to $\ket{2}$. For this reason, adding more cycles does not significantly reduce the error, unless the population in $\ket{2}$ is brought back to the qubit subspace. This can be done~\cite{Riste12b, CampagneIbarcq13} by a deterministic $\pi$ pulse returning the population from $\ket{2}$ to $\ket{1}$, or with a more complex feedback loop capable of resolving and manipulating all three states. \subsection{Closing the loop in cQED} \label{sec:fbcqed} Until recently, the coherence times of superconducting qubits bottlenecked both the achievable readout fidelity and the required feedback speed. The development of circuit quantum electrodynamics~\cite{Blais04,Wallraff04} with 3D cavities (3D cQED)~\cite{Paik11} constitutes a watershed. The new order of magnitude in qubit coherence times ($>10~\mu\mathrm{s}$), combined with Josephson parametric amplification~\cite{Castellanos-Beltran08, Vijay09}, allows projective readout with fidelities $\sim99\%$ and the realization of feedback control with off-the-shelf electronics.
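Returning to the reset error budget, Eq.~(\ref{eq:fb}) and the two-cycle improvement can be made concrete with a few representative numbers. All probabilities and rates below are invented for illustration; they are not the measured parameters of the experiment described in the next section.

```python
# Illustrative reset error budget, following Eq. (eq:fb).
# All values below are made-up examples, not measured parameters.
p_L00, p_H01 = 0.005, 0.005        # readout errors starting from |0>
p_H11, p_L10 = 0.010, 0.010        # readout errors starting from |1>
p_12 = 0.002                       # leakage |1> -> |2> during readout
G01, G10, G12 = 1e-4, 0.04, 1e-3   # transition rates (1/us)
tau_fb = 2.0                       # feedback latency (us)

# Single feedback cycle:
P_err_0 = p_L00 + p_H01 + G01 * tau_fb
P_err_pi = p_H11 + p_L10 + p_12 + (G10 + G12) * tau_fb
# Second cycle removes the relaxation error but not the leakage:
P_err_pi_2 = P_err_0 + p_12 + G12 * tau_fb
```

With these numbers, the second cycle reduces the $\theta=\pi$ error by roughly a factor of seven, dominated by the removal of the $\Gamma_{10}\tau_\mathrm{Fb}$ relaxation term.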
In the following section, we detail our implementation of high-fidelity projective readout of a transmon qubit in 3D cQED. We then shift focus to the real-time signal processing by the feedback controller, and on the resulting feedback action. \begin{figure*} \centering \includegraphics[width=0.5\columnwidth]{Fig2_Fb} \caption{\label{fig:fbcqed} \textbf{Simplified schematic of a single-qubit feedback loop in cQED.} Upon application of a measurement tone at $\omega_{\mathrm{c}}$, the signal $V_{\mathrm{out}}$ obtained from processing of the cavity output, carrying information on the qubit state, is input to the feedback controller and compared to a preset threshold $V_{\mathrm{th}}$. If $V_{\mathrm{out}}>V_{\mathrm{th}}$ (or $V_{\mathrm{out}}<V_{\mathrm{th}}$), the conditional rotation $\theta\, (\theta')$ is applied to the qubit.} \end{figure*} \section{High-fidelity projective readout of transmon qubits} \label{sec:readout} \subsection{Experimental setup} Our system consists of an Al 3D cavity enclosing two superconducting transmon qubits, labeled $\mathrm{Q}_{\mathrm{A}}$ and $\mathrm{Q}_{\mathrm{B}}$, with transition frequencies $\omega_{\mathrm{A(B)}}/2\pi = 5.606 (5.327)~\mathrm{GHz}$, relaxation times $T_{1\mathrm{A(B)}}=23~(27)~\mu\mathrm{s}$. The fundamental mode of the cavity (TE101) resonates at $\omega_{r}/2\pi=6.548~\mathrm{GHz}$ (for qubits in ground state) with $\kappa/2\pi = 430~\mathrm{kHz}$ linewidth, and couples with $g/2\pi \sim 75~\mathrm{MHz}$ to both qubits. The dispersive shifts~\cite{Wallraff04} $\chi_\mathrm{A(B)}/\pi=-3.7~(-2.6)~\mathrm{MHz}$, both large compared to $\kappa/2\pi$, place the system in the strong dispersive regime of cQED~\cite{Schuster07}. Qubit readout in cQED typically exploits dispersive interaction with the cavity. A readout pulse is applied at or near resonance with the cavity, and a coherent state builds up in the cavity with amplitude and phase encoding the multi-qubit state~\cite{Wallraff04,Majer07}.
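A rough estimate of the available signal follows from treating the cavity as a single Lorentzian of linewidth $\kappa$ probed halfway between the two dressed resonances, for which the two pointer states acquire a relative phase $\Delta\phi = 2\arctan(2\chi/\kappa)$. This back-of-the-envelope formula is our own simplifying assumption (an idealized symmetric cavity response), but with the sample parameters above it lands close to the measured $163^\circ$ contrast quoted below.

```python
import math

# Sample parameters from the text (both in Hz):
chi_over_pi = 3.7e6   # |chi_A| / pi, i.e. the full shift 2*chi_A / 2pi
kappa = 0.43e6        # kappa / 2pi

# Phase contrast between the two pointer states when probing
# halfway between the two dressed cavity resonances:
dphi = 2.0 * math.degrees(math.atan(chi_over_pi / kappa))
```

The estimate gives $\Delta\phi\approx167^\circ$: in the strong dispersive regime nearly the full $180^\circ$ of phase contrast is available.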
We optimize readout of $\mathrm{Q}_{\mathrm{A}}$ by injecting a microwave pulse through the cavity at $\omega_{\mathrm{RF}}=\omega_{r}-\chi_\mathrm{A}$, the average of the resonance frequencies corresponding to qubits in $\ket{00}$ and $\ket{01}$, with left (right) index denoting the state of $\mathrm{Q}_{\mathrm{B}}$ ($\mathrm{Q}_{\mathrm{A}}$) (Figs.~\ref{fig:1_paper1}\textbf{a},\textbf{d}). This choice maximizes the phase difference between the pointer coherent states. Homodyne detection of the output signal, itself proportional to the intra-cavity state, is overwhelmed by the noise added by the semiconductor amplifier (HEMT), precluding high-fidelity single-shot readout (Fig.~\ref{fig:1_paper1}\textbf{c}). We introduce a \hl{Josephson parametric amplifier} (JPA)~\cite{Castellanos-Beltran08} at the front end of the amplification chain to boost the readout signal by exploiting the power-dependent phase of reflection at the JPA (see Figs.~\ref{fig:1_paper1}\textbf{a},\textbf{b}). Depending on the qubit state, the weak signal transmitted through the cavity is either added to or subtracted from a much stronger pump tone incident on the JPA, allowing single-shot discrimination between the two cases (Fig.~\ref{fig:1_paper1}\textbf{c}). \begin{figure} \center \includegraphics[width=0.8\columnwidth]{Fig3_Fb} \caption{\label{fig:1_paper1} \textbf{JPA-backed dispersive transmon readout.} \textbf{a,} Simplified diagram of the experimental setup, showing the input path for the readout signal carrying the information on the qubit state (RF, green) and the stronger, degenerate tone (Pump, grey) biasing the JPA. Both microwave tones are combined at the JPA and their sum is reflected with a phase dependent on the total power (\textbf{b}), amplifying the small signal. An additional tone (Null) is used to cancel any pump leakage into the cavity. The JPA is operated at the low-signal gain of $\sim25~\mathrm{dB}$ and $2~\mathrm{MHz}$ bandwidth. 
\textbf{c,} Scatter plot in the $I-Q$ plane for sets of 500 single-shot measurements. Light red and blue: readout signal obtained with an RF tone probing the cavity for qubits in $\ket{00}$ and $\ket{01}$, respectively. Dark red and blue: the Pump tone is added to the RF. \textbf{d,} Spectroscopy of the cavity fundamental mode for qubits in $\ket{00}$ and $\ket{01}$. The RF frequency is chosen halfway between the two resonance peaks, giving the maximum phase contrast ($163^\circ$, see inset on the right). Figure taken from Ref.~\citenum{Riste12}.} \end{figure} \subsection{Characterization of JPA-backed qubit readout and initialization} The ability to better discern the qubit states with the JPA-backed readout is quantified by collecting statistics of single-shot measurements. The sequence used to benchmark the readout includes two measurement pulses, $M_A$ and $M_B$, each $700~\mathrm{ns}$ long, with a central integration window of $300~\mathrm{ns}$ (Fig.~\ref{fig:2_paper1}\textbf{a}). Immediately before $M_B$, a $\pi$ pulse is applied to $\mathrm{Q}_{\mathrm{A}}$ in half of the cases, inverting the population of ground and excited state (Fig.~\ref{fig:2_paper1}\textbf{b}). We observe a dominant peak for each prepared state, accompanied by a smaller one overlapping with the main peak of the other case. We hypothesize that the main peak centered at positive voltage corresponds to state $\ket{00}$, and that the smaller peaks are due to residual qubit excitations, mixing the two distributions. To test this hypothesis, we first \hl{digitize} the result of $M_A$ with a threshold voltage $V_\mathrm{th}$, chosen to maximize the contrast between the cumulative histograms for the two prepared states (Fig.~\ref{fig:2_paper1}\textbf{c}), and assign the value $H (L)$ to the shots falling above (below) the threshold. Then we only keep the results of $M_B$ corresponding to $M_A=H$. 
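The quoted $163^\circ$ phase contrast is close to what an idealized Lorentzian cavity response predicts for a probe placed halfway between the two dressed resonances. A minimal sketch, assuming a transmission $t(\delta) = 1/(1 - 2i\delta/\kappa)$ and the parameters quoted above ($\chi_{\mathrm{A}}/2\pi = -1.85~\mathrm{MHz}$, $\kappa/2\pi = 430~\mathrm{kHz}$):

```python
import numpy as np

kappa = 0.43     # cavity linewidth kappa/2pi [MHz]
delta = 3.7 / 2  # probe-resonance detuning: half the full 00<->01 shift [MHz]

def phase_deg(d):
    # Phase of an idealized Lorentzian transmission t(d) = 1 / (1 - 2j*d/kappa)
    return np.degrees(np.angle(1.0 / (1.0 - 2j * d / kappa)))

contrast = phase_deg(delta) - phase_deg(-delta)
print(f"phase contrast ~ {contrast:.0f} deg")
```

This evaluates to about $167^\circ$, close to the measured $163^\circ$; the small difference is consistent with the idealized single-Lorentzian model.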
Indeed, we observe that \hl{postselecting} $91\%$ of the shots reduces the overlaps from $\sim6$ to $2\%$ and from $\sim9$ to $1\%$ in the $H$ and $L$ regions, respectively (Fig.~\ref{fig:2_paper1}\textbf{d}). This supports the hypothesis of partial qubit excitation in the steady state, lifted by restricting to a subset of measurements where $M_A$ declares the register to be in $\ket{00}$. Further evidence is obtained by observing that moving the threshold substantially decreases the fraction of postselected measurements without significantly improving the contrast [$\sim+0.1~(0.2)\%$ keeping $85~(13)\%$ of the shots] (Fig.~\ref{fig:3_paper1}\textbf{b}). Postselection is effective at suppressing the \hl{residual excitation} of any one qubit, since the $\ket{01}$ and $\ket{10}$ distributions are both highly separated from $\ket{00}$, and the probability that both qubits are excited is only $\sim0.2\%$. \begin{figure} \center \includegraphics[width=0.7\columnwidth]{Fig4_Fb} \caption{\label{fig:2_paper1} \textbf{Ground-state initialization by measurement.} \textbf{a,} Pulse sequence used to distinguish between the qubit states ($M_B$), upon conditioning on the result of an initialization measurement $M_A$. The sequence is repeated every $250~\mu\mathrm{s}$. \textbf{b,} Histograms of $500\,000$ shots of $M_B$, without (red) and with (blue) inverting the population of $\mathrm{Q}_{\mathrm{A}}$ with a $\pi$ pulse. \textbf{c,} Histograms of $M_A$, with $V_\mathrm{th}$ indicating the threshold voltage used to digitize the result. \textbf{d,} $M_B$ conditioned on $M_A = H$ to initialize the system in the ground state, suppressing the residual steady-state excitation. The conditioning threshold, selecting $91\%$ of the shots, matches the value for optimum discrimination of the state of $\mathrm{Q}_{\mathrm{A}}$.
Figure taken from Ref.~\citenum{Riste12}.} \end{figure} The performance of JPA-backed readout and the effect of initialization by measurement are quantified by the optimum readout contrast, defined as the maximum difference between the cumulative probabilities for the two prepared states (Fig.~\ref{fig:3_paper1}\textbf{a}). Without \hl{initialization}, the use of the JPA gives an optimum contrast of $84.9\%$, a significant improvement over the $26\%$ obtained without the pump tone. Comparing the deviations from unity contrast without and with initialization, we can extract the parameters for the error model shown in Fig.~\ref{fig:3_paper1}\textbf{c}. The model~\cite{Riste12} takes into account the residual steady-state excitation of both qubits, found to be $\sim4.7\%$ each, and the error probabilities for the qubits prepared in the four basis states. Although the projection into $\ket{00}$ occurs with $99.8\pm0.1\%$ fidelity, this probability is reduced to $98.8\%$ in the time $\tau=2.4~\mu\mathrm{s}$ between $M_A$ and $M_B$, chosen to fully deplete the cavity of photons before the $\pi$ pulse preceding $M_B$. We note that $\tau$ could be reduced by increasing $\kappa$ by at least a factor of two without compromising $T_{1\mathrm{A}}$ by the Purcell effect~\cite{Houck08}. By correcting for partial equilibration during $\tau$, we calculate an actual readout fidelity of $98.1\pm0.3\%$. The remaining infidelity is mainly attributed to qubit relaxation during the integration window. \begin{figure} \center \includegraphics[width=0.9\columnwidth]{Fig5_Fb} \caption{\label{fig:3_paper1}\textbf{Analysis of readout fidelity.} \textbf{a,} Cumulative histograms for $M_B$ without and with conditioning on $M_A=H$, obtained from data in Figs.~\ref{fig:2_paper1}\textbf{c},\textbf{d}. The optimum threshold maximizing the contrast between the two prepared states is the same in both cases.
Deviations of the outcome from the intended prepared state are: $8.9\%$ ($1.3\%$) for the ground state, $6.2\%$ ($2.1\%$) for the excited state without (with) conditioning. Therefore, initialization by measurement and postselection increases the readout contrast from $84.9\%$ to $96.6\%$. \textbf{b,} Readout contrast (purple) and postselected fraction (black) as a function of $V_{\mathrm{th}}$. \textbf{c,} Schematics of the readout error model, including the qubit populations in the steady state and at $\tau = 2.4~\mu\mathrm{s}$ after $M_A$. Only the arrows corresponding to readout errors are shown. \textbf{d,} Rabi oscillations of $\mathrm{Q}_{\mathrm{A}}$ without (empty) and with (full dots) initialization by measurement and postselection. In each case, data are taken by first digitizing $10\,000$ single shots of $M_B$ into $H$ or $L$, then averaging the results. Error bars on the average values are estimated from a subset of 175 measurements per point. For each angle, 7 randomly-chosen single-shot outcomes are also plotted (black dots at 0 or 1). The visibility of the averaged signal increases upon conditioning $M_B$ on $M_A=H$. Figure adapted from Ref.~\citenum{Riste12}.} \end{figure} As a test for readout fidelity, we performed single-shot measurements of a Rabi oscillation sequence applied to $\mathrm{Q}_{\mathrm{A}}$, with variable amplitude of a resonant $32~\mathrm{ns}$ Gaussian pulse preceding $M_B$, and using ground-state initialization as described above (Fig.~\ref{fig:3_paper1}\textbf{d}). The density of discrete dots reflects the probability of measuring $H$ or $L$ depending on the prepared state. By averaging over $\sim10\,000$ shots, we recover the sinusoidal Rabi oscillations without (white) and with (black) ground-state initialization. As expected, the peak-to-peak amplitudes ($85.2$ and $96.7\%$, respectively) equal the optimum readout contrasts in Fig.~\ref{fig:3_paper1}\textbf{a}, within statistical error. 
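Extracting the optimum contrast and threshold from single-shot records amounts to comparing the two empirical cumulative distributions. A toy sketch with synthetic Gaussian voltage distributions, whose separation and widths are illustrative rather than the measured ones:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic single-shot voltages for the two prepared states; the separation
# and widths are illustrative only, not the measured distributions.
v_ground = rng.normal(+1.0, 0.35, 50_000)
v_excited = rng.normal(-1.0, 0.35, 50_000)

grid = np.linspace(-3.0, 3.0, 1001)
cdf_g = np.searchsorted(np.sort(v_ground), grid) / v_ground.size
cdf_e = np.searchsorted(np.sort(v_excited), grid) / v_excited.size

contrast = np.max(np.abs(cdf_g - cdf_e))       # optimum readout contrast
v_th = grid[np.argmax(np.abs(cdf_g - cdf_e))]  # optimum digitization threshold
print(round(contrast, 3), round(v_th, 2))
```

With well-separated distributions the threshold sits midway between the two means, mirroring the procedure used for $V_\mathrm{th}$ in Fig.~\ref{fig:2_paper1}\textbf{c}.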
\subsection{Repeated quantum nondemolition measurements} In an ideal projective measurement, there is a one-to-one relation between the outcome and the post-measurement state. We perform repeated measurements to assess the \hl{nondemolition} nature of the readout, following Refs.~\citenum{Lupascu07,Boulant07}. The correlation between two consecutive measurements, $M_B$ and $M_C$, is found to be independent of the initial state over a large range of Rabi rotation angles $\theta$ (see Fig.~\ref{fig:4_paper1}\textbf{a}). A decrease in the probabilities occurs when the chance to obtain a certain outcome on $M_B$ is low (for instance to measure $M_B=H$ for a state close to $\ket{01}$) and comparable to readout errors or to the partial recovery arising between $M_B$ and $M_C$. We extend the readout model of Fig.~\ref{fig:3_paper1}\textbf{c} to include the correlations between each outcome on $M_B$ and the post-measurement state. The deviation of the asymptotic levels from unity, $P_{H|H}=0.99$ and $P_{L|L}=0.89$, is largely due to recovery during $\tau$, as demonstrated in Fig.~\ref{fig:4_paper1}\textbf{b}. From the model, we extrapolate the correlations for two adjacent measurements, $P_{H|H}(\tau=0)=0.996\pm0.001$ and $P_{L|L}(\tau=0)=0.985\pm0.002$, corresponding to the probabilities that pre- and post-measurement state coincide. In the latter case, mismatches between the two outcomes are mainly due to qubit relaxation during $M_C$. Multiple measurement pulses, as well as a long pulse, do not have a significant effect on the qubit state, supporting the nondemolition character of the readout at the chosen power. \begin{figure} \center \includegraphics[width=0.7\columnwidth]{Fig6_Fb} \caption{\label{fig:4_paper1} \textbf{Projectiveness of the measurement.} \textbf{a,} Conditional probabilities for two consecutive measurements $M_B$ and $M_C$, separated by $\tau=2.4~\mu\mathrm{s}$.
Following an initial measurement pulse $M_A$ used for initialization into $\ket{00}$ by the method described, a Rabi pulse with variable amplitude rotates $\mathrm{Q}_{\mathrm{A}}$ by an angle $\theta$ along the $x$-axis of the Bloch sphere, preparing a state with $P_{\ket{01}} = \sin^2(\theta/2)$. Red (blue): probability to measure $M_C=H (L)$ conditioned on having obtained the same result in $M_B$, as a function of the initial excitation of $\mathrm{Q}_{\mathrm{A}}$. Error bars are the standard error obtained from 40 repetitions of the experiment, each one having a minimum of 250 postselected shots per point. Deviations from an ideal projective measurement are due to the finite readout fidelity, and to partial recovery after $M_B$. The latter effect is shown in \textbf{b}, where the conditional probabilities converge to the unconditioned values, $P_{H}=0.91$ and $P_{L}=0.09$ for $\tau\gg T_1$, in agreement with Fig.~\ref{fig:2_paper1}, taking into account relaxation between the $\pi$ pulse and $M_C$. Error bars are smaller than the dot size. Figure taken from Ref.~\citenum{Riste12}.} \end{figure} Josephson parametric amplification has become a standard technique for the high-fidelity readout of qubits in cQED. Since this experiment and the parallel work in Ref.~\citenum{Johnson12}, projective readout of transmon~\cite{Steffen13, Chow14, Jeffrey14} and flux~\cite{Lin13} qubits has been performed using different varieties of Josephson junction-based amplifiers. The technology for these amplifiers continuously evolves to meet the needs of quantum circuits of growing complexity. One approach to high-fidelity readout of multiple qubits is to increase the amplifier bandwidth to include several resonators, each coupled to a distinct qubit~\cite{Groen13}.
Recent implementations in this direction have included Josephson junctions in a transmission line~\cite{OBrien14}, in low-Q resonators~\cite{Mutus14, Eichler14}, and in a circuit realizing a superconducting low-inductance undulatory galvanometer (SLUG)~\cite{Hover14}. Another approach for multi-qubit readout uses dedicated, on-chip Josephson bifurcation amplifiers~\cite{Schmitt14}. \section{Digital feedback controllers} \label{sec:fbcontrollers} The input to a feedback loop in cQED is the homodyne signal obtained by amplification and demodulation of the qubit-dependent cavity transmission or reflection, as shown above. The response of the \hl{feedback controller} is one or more qubit microwave pulses, which are generated and sent to the device (Fig.~\ref{fig:fbcqed}). This loop has a significant spatial extension, as the qubits sit in the coldest stage of a dilution refrigerator, while the feedback controller is at room temperature. A round trip involves $5-10~\mathrm{m}$ of cable, which translates to a propagation time of $25-50~\mathrm{ns}$ without accounting for delays due to filters and other microwave components. This physical limitation, which would require fast cryogenic electronics to be overcome, is only a small fraction of the total latency. A major source of delay is the \hl{processing time} in the controller, combined with the generation or triggering of the microwave pulses for the conditional qubit rotations. The details of this process depend on the type of controller. We describe the first implementations below. \begin{figure*} \centering \includegraphics[width=\columnwidth]{Fig7_Fb} \caption{\label{fig:s2_paper2} \textbf{Digital feedback loop with an ADwin controller.} \textbf{a,} Schematic of the feedback loop, consisting of an ADwin, sampling the signal, and a Tektronix AWG520, conditionally generating a qubit $\pi$ pulse. \textbf{b}, Timings of the feedback loop. The measurement pulse, here $400~\mathrm{ns}$ long, reaches the cavity at $t=0$.
The ADwin, triggered by an AWG5014, measures one channel of the output homodyne signal (red: qubit in $\ket{0}$, blue: $\ket{1}$), delayed by $\sim200~\mathrm{ns}$ due to a low-pass filter at its input side. After comparison of the measured voltage at $t=0.6~\mu\mathrm{s}$ to the reference threshold, the AWG520 is conditionally triggered at $t=2.54~\mu\mathrm{s}$, resulting in a $\pi$ pulse reaching the cavity at $2.62~\mu\mathrm{s}$. Figure adapted from Ref.~\citenum{Riste12b}.} \end{figure*} The first realization of a digital feedback controller used commercial components for data sampling, processing, and conditional operations~\cite{Riste12b}. The core of the controller is an \hl{ADwin-Gold}, a processor with a set of analog inputs and configurable analog and digital outputs. The ADwin samples the readout signal once, at a set delay following a trigger from an arbitrary waveform generator (Tektronix AWG5014). This delay is optimized to maximize readout fidelity. A routine determines the optimum threshold for digitizing the readout signal. This voltage is then used to assign $H$ or $L$ to the measurement. For the reset function in section~\ref{sec:reset}, the ADwin triggers another arbitrary waveform generator (Tektronix AWG520) to produce a $\pi$ pulse when the outcome is $L$. Pulse timings and signal delays in the feedback cycle are illustrated in Fig.~\ref{fig:s2_paper2}. The total time between the start of the measurement and the end of the feedback pulse is $\approx 2.6~\mu\mathrm{s}$, mainly limited by the processing time of the ADwin. To shorten the loop time, our second generation of digital feedback used a complex programmable logic device (\hl{CPLD}, Altera MAX V), acquiring the signal following an $8$-bit ADC, in place of the ADwin. This home-assembled feedback controller offers two advantages over the first: a programmable integration window and a response time of $0.11~\mu\mathrm{s}$ (Fig.~\ref{fig:2_book}), an order of magnitude faster than the ADwin.
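The integrate-and-compare cycle performed by such a controller is simple to emulate offline. The sketch below is a toy model (window, threshold, and noise level are all illustrative), not the actual firmware:

```python
import numpy as np

def feedback_decision(samples, window, v_th):
    """Toy emulation of the controller cycle: integrate the digitized homodyne
    record over a marker-defined window, compare the result to a calibrated
    threshold, and return the binary outcome that gates the conditional pulse."""
    v_int = float(np.mean(samples[window]))
    return "H" if v_int > v_th else "L"  # "L" would trigger the pi pulse

# Example with a synthetic record (arbitrary units, illustrative noise level)
rng = np.random.default_rng(2)
record = 0.3 + 0.1 * rng.standard_normal(100)
print(feedback_decision(record, slice(20, 80), v_th=0.0))
```

In the CPLD this logic runs in fixed-point arithmetic at the $10~\mathrm{ns}$ clock; the Python version only illustrates the data flow.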
As the feedback response time is now comparable to or faster than the typical cavity decay time, active depletion of the cavity~\cite{McClure15} will be required to take full advantage of the CPLD speed and further shorten the feedback loop. \begin{figure*} \centering \includegraphics[width=\columnwidth]{Fig8_Fb} \caption{\label{fig:2_book} \textbf{Digital feedback loop with a CPLD-based controller.} \textbf{a}, Schematic of the feedback loop, with an ADC and a CPLD (or FPGA) board replacing the ADwin in Fig.~\ref{fig:s2_paper2}. \textbf{b,} Timings of the feedback loop. The CPLD samples the signal at every clock cycle ($10~\mathrm{ns}$) and then integrates it over a window set by a marker of an AWG5014. The internal delay of the CPLD breaks down into the analog-to-digital conversion ($60~\mathrm{ns}$) and the processing to compare the integrated signal to a calibrated threshold, determining the binary output ($50~\mathrm{ns}$). These timings are multiples of the clock (reduced to $4~\mathrm{ns}$ in a recent FPGA-based implementation~\cite{reportGarrido14}). The total delay in Ref.~\cite{Riste13} is increased to $2~\mu\mathrm{s}$ to let the cavity return to the ground state before the conditional $\pi$ pulse.} \end{figure*} Further developments in the feedback controller replaced the CPLD with a field-programmable gate array (\hl{FPGA}) to increase the on-board memory and enable more complex signal processing. For example, the FPGA allows assigning different weights to the measurement record, maximizing its correlation with the qubit evolution. An FPGA-based controller has also been employed for digital feedback at ETH Zurich~\cite{Steffen13}. Recent developments at TU Delft and at Yale~\cite{Ofek15} include the pulse generation on an FPGA board, eliminating the need for an additional AWG.
For comparison, Fig.~\ref{fig:fbcomparison} shows the setup that would be required for the 3-qubit repetition code~\cite{Mermin07} using our first generation of feedback (\textbf{a}) and the most recent one based on FPGAs (\textbf{b}, \textbf{c}). \begin{figure*} \centering \includegraphics[width=0.8\columnwidth]{Fig9_Fb} \caption{\label{fig:fbcomparison} \textbf{Hardware comparison for feedback control in the bit-flip code.} The bit-flip code requires a two-bit digital feedback, acting on three qubits. Scaling the system in Fig.~\ref{fig:s2_paper2} would take an AWG520 for each qubit (\textbf{a}). A recent implementation~\cite{reportGarrido14} performs readout signal processing and pulse generation on FPGA boards, resulting in the compact controller shown in \textbf{b}, \textbf{c}.} \end{figure*} \section{Fast qubit reset based on digital feedback} \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{Fig10_Fb} \caption{\label{fig:1_paper2} \textbf{Transmon equilibration to steady state.} Time evolution of the ground-state population $P_{\ket{0}}$ starting from states $\rho_0$ and $\rho_1$ (notation defined in the text). Solid curves are the best fit to Eq.~\eqref{eq:rates_paper2}, giving the inverse transition rates $\Gamma_{10}^{-1 }= 50\pm2~\mu\mathrm{s}, \Gamma_{21}^{-1} = 20\pm2~\mu\mathrm{s}, \Gamma_{01}^{-1} = 324\pm32~\mu\mathrm{s}, \Gamma_{12}^{-1} = 111\pm25~\mu\mathrm{s}$. From the steady-state solution, we extract residual excitations $P_{\ket{1},\mathrm{ss}}=13.1\pm0.8\%,P_{\ket{2},\mathrm{ss}}=2.4\pm0.4\%$. Inset: steady-state population distribution (bars). 
Markers correspond to a Boltzmann distribution with best-fit temperature $127~\mathrm{mK}$, significantly higher than the dilution refrigerator base temperature ($15~\mathrm{mK}$). Figure taken from Ref.~\citenum{Riste12b}.} \end{figure} \subsection{Passive qubit initialization to steady state} \begin{figure} \centering \includegraphics[width=0.55\columnwidth]{Fig11_Fb} \caption{\label{fig:2_paper2} \textbf{Reset by measurement and feedback.} \textbf{a,} Before feedback: histograms of $300\,000$ shots of $M_B$, with (squares) and without (circles) inverting the qubit population with a $\pi$ pulse. Each shot is obtained by averaging the homodyne voltage over the second half ($200~\mathrm{ns}$) of a readout pulse. $H$ and $L$ denote the two possible outcomes of $M_B$, digitized with the threshold $V_\mathrm{th}$, maximizing the contrast, analogously to section~\ref{sec:readout}. Full (empty) dots indicate (no) postselection on $M_A>V_{\mathrm{ps}}$. This protocol is used to prepare $\rho_0$ and $\rho_1$, which are the input states for the feedback sequences in \textbf{b} and \textbf{c}. \textbf{b,} After feedback: histograms of $M_C$ after applying the feedback protocol $\mathrm{Fb}_0$, which triggers a $\pi$ pulse when $M_B=L$. Using this feedback, $\sim99\%$ $(92\%)$ of measurements digitize to $H$ for $\theta=0~(\pi)$, respectively. \textbf{c,} Feedback with opposite logic $\mathrm{Fb}_1$ preparing the excited state. In this case, $\sim 98\%$ $(94\%)$ of measurements digitize to $L$ for $\theta=0~(\pi)$. Figure taken from Ref.~\citenum{Riste12b}.} \end{figure} Our first application of feedback is qubit initialization, also known as reset~\cite{DiVincenzo00}. The ideal reset for QIP is \hl{deterministic} (as opposed to heralded or postselected, see previous section) and fast compared to qubit coherence times. Obviously, the passive method of waiting several times $T_1$ does not meet the speed requirement.
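The best-fit temperature in Fig.~\ref{fig:1_paper2} can be roughly cross-checked by inverting the Boltzmann factor for the fitted $0$--$1$ populations from the caption ($P_{\ket{1},\mathrm{ss}}=13.1\%$, $P_{\ket{2},\mathrm{ss}}=2.4\%$). This two-level estimate is a sketch only; the $127~\mathrm{mK}$ figure comes from the full three-level fit:

```python
import numpy as np

h, kB = 6.626e-34, 1.381e-23  # Planck and Boltzmann constants [SI]
f01 = 5.606e9                 # qubit transition frequency [Hz]
P0, P1 = 0.845, 0.131         # fitted steady-state populations (P0 = 1 - P1 - P2)

# Two-level Boltzmann estimate: P1/P0 = exp(-h*f01 / (kB*T))
T_eff = h * f01 / (kB * np.log(P0 / P1))
print(f"T_eff ~ {1e3 * T_eff:.0f} mK")
```

This gives $\sim144~\mathrm{mK}$, in line with the three-level best fit and, either way, an order of magnitude above the $15~\mathrm{mK}$ base temperature.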
Moreover, it can suffer from residual steady-state qubit excitations~\cite{Corcoles11,Johnson12,Riste12,Vijay12}, whose cause in cQED remains an active research area. The drawbacks of \hl{passive initialization} are evident for our qubit, whose ground-state population $P_{\ket{0}}$ evolves from states $\rho_0$ and $\rho_1$ as shown in Fig.~\ref{fig:1_paper2}. With $\rho_0$ and $\rho_1$ we indicate our closest realization ($\sim99\%$ fidelity) of the ideal pure states $\ket{0}$ and $\ket{1}$. $P_{\ket{0}}$ at variable time after preparation is obtained by comparing the average readout homodyne voltage to calibrated levels, as in standard three-level tomography~\cite{Thew02,Bianchetti09}. These population dynamics are captured by a master equation model for a three-level system: \begin{equation} \label{eq:rates_paper2} \left(\begin{array}{c} \dot{P}_\mathrm{\ket{0}}\\ \dot{P}_\mathrm{\ket{1}}\\ \dot{P}_\mathrm{\ket{2}} \end{array}\right)= \left( \begin{array}{ccc} -\Gamma_{01} & \Gamma_{10} & \phantom{-}0 \\ \phantom{-}\Gamma_{01} & -\Gamma_{10}-\Gamma_{12} & \phantom{-}\Gamma_{21} \\ \phantom{-}0 & \Gamma_{12} & -\Gamma_{21} \end{array} \right) \left(\begin{array}{c} P_\mathrm{\ket{0}}\\ P_\mathrm{\ket{1}}\\ P_\mathrm{\ket{2}} \end{array}\right). \end{equation} The best fit to the data gives the qubit relaxation time $T_1=1/\Gamma_{10}=50\pm2~\mu\mathrm{s}$ and the asymptotic $15.5\%$ residual total excitation. \subsection{Qubit reset based on digital feedback} \label{sec:reset} \begin{figure} \centering \includegraphics[width=0.65\columnwidth]{Fig12_Fb} \caption{\label{fig:3_paper2} \textbf{Deterministic reset from any qubit state.} Ground-state population $P_{\ket{0}}$ as a function of the initial state $\rho_\theta$, prepared by coherent rotation after initialization in $\rho_0$, as in Fig.~\ref{fig:2_paper2}.
The cases shown are: no feedback (circles), $\mathrm{Fb}_0$ (squares), $\mathrm{Fb}_1$ (diamonds), twice $\mathrm{Fb}_0$ (upward triangles), and $\mathrm{Fb}_0$ followed by $\mathrm{Fb}_1$ (downward triangles). The vertical axis is calibrated with the average measurement outcome for the reference states $\rho_0, \rho_1$, and corrected for imperfect state preparation. The curve with no feedback has a visibility of $99\%$, equal to the average preparation fidelity. Each experiment is averaged over $300\,000$ repetitions. Inset: error probabilities for two rounds of feedback, defined as $1-P_{\ket{t}}$, where $\ket{t}\in\{0,1\}$ is the target state. The systematic $\sim0.3\%$ difference between the two cases is attributed to error in the $\pi$ pulse preceding the measurement of $P_{\ket{1}}$ following $\mathrm{Fb}_1$. Curves: model including readout errors and equilibration (section~\ref{sec:fbbasics}).} \end{figure} Previous approaches to accelerate qubit equilibration include coupling to dissipative resonators~\cite{Reed10b} or two-level systems~\cite{Mariantoni11}. However, these are also susceptible to spurious excitation, potentially inhibiting complete qubit relaxation. Feedback-based reset circumvents the equilibration problem by not relying on coupling to a dissipative medium. Rather, it works by projecting the qubit with a measurement ($M_B$, performed by the controller) and conditionally applying a $\pi$ pulse to drive the qubit to a targeted basis state (Fig.~\ref{fig:2_paper2}). A final measurement ($M_C$) determines the qubit state immediately afterwards. In both measurements, the result is digitized into levels $H$ or $L$, associated with $\ket{0}$ and $\ket{1}$, respectively. The digitization threshold voltage $V_{\mathrm{th}}$ maximizes the readout fidelity at $99\%$. The $\pi$ pulse is conditioned on $M_B=L$ to target $\ket{0}$ (scheme $\mathrm{Fb}_0$) or on $M_B=H$ to target $\ket{1} (\mathrm{Fb}_1)$. 
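The residual excitations quoted in the caption of Fig.~\ref{fig:1_paper2} follow directly from the steady state of the rate model in Eq.~\eqref{eq:rates_paper2}; a minimal numerical check using the fitted rates:

```python
import numpy as np

# Fitted transition rates (inverse of the quoted lifetimes), in 1/us
G10, G21, G01, G12 = 1/50, 1/20, 1/324, 1/111

M = np.array([[-G01,        G10,   0.0],
              [ G01, -G10 - G12,   G21],
              [ 0.0,        G12,  -G21]])

# Steady state: eigenvector of the rate matrix with (near-)zero eigenvalue,
# normalized to unit total population
vals, vecs = np.linalg.eig(M)
p_ss = np.real(vecs[:, np.argmin(np.abs(vals))])
p_ss /= p_ss.sum()
print(np.round(p_ss, 3))
```

This reproduces the quoted residual excitations, $P_{\ket{1},\mathrm{ss}}\approx13.1\%$ and $P_{\ket{2},\mathrm{ss}}\approx2.4\%$, i.e.\ $\sim15.5\%$ total.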
In a QIP context, reset is typically used to reinitialize a qubit following measurement, when it is in a computational basis state. Therefore, to benchmark the reset protocol, we first quantify its action on $\rho_0$ and $\rho_1$. This step is accomplished with a preliminary measurement $M_A$ (initializing the qubit in $\rho_0$ by postselection), followed by a calibrated pulse resonant with the transmon $0\leftrightarrow1$ transition to prepare $\rho_1$. The overlap of the $M_C$ histograms with the targeted region ($H$ for $\mathrm{Fb}_0$ and $L$ for $\mathrm{Fb}_1$) is $96\%$ on average, indicating the success of reset. Imperfections are more evident for $\theta=\pi$ and mainly due to equilibration of the transmon during the feedback loop. A detailed error analysis is presented below. We emphasize that qubit initialization by postselection is here only used to prepare nearly pure states useful for characterizing the feedback-based reset, which is deterministic. \subsection{Characterization of the reset protocol} An ideal reset function prepares the same pure qubit state regardless of its input. To fully quantify the performance of our reset scheme, we measure its effect on our closest approximation to superposition states $\ket{\theta}=\cos(\theta/2)\ket{0}+\sin(\theta/2)\ket{1}$. Without feedback, $P_{\ket{0}}$ is trivially a sinusoidal function of $\theta$, with near unit contrast. Feedback strongly suppresses the Rabi oscillation, with $P_{\ket{0}}$ approaching the ideal value 1 (0) for $\mathrm{Fb}_0$ $(\mathrm{Fb}_1)$ for any input state. However, a dependence on $\theta$ remains, with $P_{\mathrm{err}}=1-P_{\ket{0}}$ for $\mathrm{Fb}_0$ ($1-P_{\ket{1}}$ for $\mathrm{Fb}_1$) ranging from $1.2\%$ $(1.4\%)$ for $\theta=0$ to $7.8\%$ ($8.4\%$) for $\theta = \pi$. The remaining errors are discussed in section~\ref{sec:fbbasics}.
From Eqs.~\eqref{eq:fb}, using the best-fit $\Gamma_{ij}$ and $\tau_\mathrm{Fb} = 2.4~\mu\mathrm{s}$, errors due to equilibration sum to $0.7\%$ $(6.9\%)$ for $\theta=0$ $(\pi)$, while readout errors account for the remaining $0.4\%$ $(1.4\%)$. In agreement with these values, concatenating two feedback cycles suppresses the error for $\theta = \pi$ to $3.4\%$, while there is no benefit for $\theta = 0$ ($1.3\%$). \begin{figure} \centering \includegraphics[width=0.75\columnwidth]{Fig13_Fb} \caption{\label{fig:4_paper2} \textbf{Fast qubit reset.} Initialization errors as a function of initialization time $\tau_{\mathrm{init}}$ under looped execution of a simple experiment leaving the qubit ideally in $\ket{1}$ (\textbf{a,} measurement and $\pi$ pulse) or $\ket{0}$ (\textbf{b,} measurement only). Empty symbols: initialization by waiting (no feedback). Solid symbols: initialization by feedback, with three rounds of $\mathrm{Fb}_0$ and a $\pi$ pulse on the $1\leftrightarrow2$ transition. Two data sets correspond to two different cooldowns: the one corresponding to Figs.~\ref{fig:1_paper2}-\ref{fig:3_paper2} (black) and a following one with improved thermalization (blue). Curves correspond to a master equation simulation assuming perfect pulses and measured transition rates $\Gamma_{ij}$ (dashed, no feedback; solid, triple $\mathrm{Fb}_0$ with a $\pi$ pulse on $1\leftrightarrow2$). Feedback reset successfully bounds the otherwise exponential accruement of $P_{\mathrm{err}}$ in case \textbf{a} as $\tau_{\mathrm{init}}\rightarrow0$. The reduction of $P_{\mathrm{err}}$ in \textbf{b} reflects the cooling of the transmon by feedback (see text for details). Figure adapted from Ref.~\citenum{Riste12b}.} \end{figure} \subsection{Speed-up enabled by fast reset} The key advantage of reset by feedback is the ability to ready a qubit for further computation \hl{fast} compared to coherence times available in 3D cQED~\cite{Paik11,Rigetti12}. 
This will be important, for example, when refreshing ancilla qubits in multi-round error correction~\cite{Schindler11}. We now show that reset suppresses the accumulation of initialization error when a simple experiment is repeated with decreasing in-between time $\tau_{\mathrm{init}}$. The simple sequence in Fig.~\ref{fig:4_paper2} emulates an algorithm that leaves the qubit in $\ket{1}$ [case (a)] or $\ket{0}$ [case (b)]. A measurement pulse follows $\tau_{\mathrm{init}}$ to quantify the initialization error $P_{\mathrm{err}}$. Without feedback, $P_{\mathrm{err}}$ in case (a) grows exponentially as $\tau_{\mathrm{init}}\to0$. This accruement of error, due to the rapid succession of $\pi$ pulses, would occur even at zero temperature, where residual excitation would vanish (i.e., $\Gamma_{i,i+1}=0$), in which case $P_{\mathrm{err}} \to 50\%$ as $\tau_{\mathrm{init}}\rightarrow0$. In case (b), $P_{\mathrm{err}}$ matches the total steady-state excitation for all $\tau_{\mathrm{init}}$. Using feedback significantly improves initialization for both long and short $\tau_{\mathrm{init}}$. For $\tau_{\mathrm{init}} \gg T_1$, feedback suppresses $P_{\mathrm{err}}$ from the $16\%$ residual excitation to $3\%$ (black symbols and curves)\footnote{We note that $P_{\ket{1}}\approx P_{\ket{2}}=1.6\%$ is a non-thermal distribution.}, cooling the transmon. Crucially, unlike passive initialization, reset by feedback is also effective at short $\tau_{\mathrm{init}}$, where it limits the otherwise exponential accruement of error in (a), bounding $P_{\mathrm{err}}$ to an average of $3.5\%$ over the two cases. Our scheme combines three rounds of $\mathrm{Fb}_0$ with a pulse on the $1\leftrightarrow2$ transition before the final $\mathrm{Fb}_0$ to partially counter leakage to the second excited state, which is the dominant error source [see Eq.~\eqref{eq:fb}].
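The behavior of the initialization error without feedback can be seen in a two-level toy model (second excited state ignored): each run ends with the qubit in $\ket{1}$, it re-equilibrates for $\tau_{\mathrm{init}}$, and the next $\pi$ pulse maps any leftover excitation into initialization error. Iterating to the fixed point:

```python
import numpy as np

T1, p_ss = 50.0, 0.155  # relaxation time [us] and steady-state excitation

def p_err(tau_init):
    """Fixed-point initialization error for the looped 'measure + pi pulse'
    sequence in a two-level toy model: the run leaves the qubit in |1>, free
    equilibration acts for tau_init, and the next pi pulse turns leftover
    excitation into initialization error."""
    e = np.exp(-tau_init / T1)
    return (e + p_ss * (1.0 - e)) / (1.0 + e)

print(round(p_err(1e-3), 3))   # tau_init -> 0: error approaches 1/2
print(round(p_err(500.0), 3))  # tau_init >> T1: error -> steady-state excitation
```

The two limits reproduce the behavior described above: $P_{\mathrm{err}}\to50\%$ as $\tau_{\mathrm{init}}\to0$ even at zero temperature, and $P_{\mathrm{err}}$ approaching the residual excitation for $\tau_{\mathrm{init}}\gg T_1$.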
The remaining leakage is proportional to the average $P_{\ket{1}}$, which slightly increases in \textbf{a} and decreases in \textbf{b} as $\tau_{\mathrm{init}}\to0$. In a following cooldown, with improved thermalization and a faster feedback loop (Fig.~\ref{fig:2_book}), reset constrained $P_{\mathrm{err}}\lesssim1\%$ (blue), quoted as the fault-tolerance threshold for initialization in modern error correction schemes~\cite{Wang11}. In addition to the near-simultaneous implementation at ENS~\cite{CampagneIbarcq13}, similar implementations of qubit reset have followed at Yale~\cite{Ofek15} and at Raytheon BBN Technologies using an FPGA-based feedback controller. \section{Deterministic entanglement by parity measurement and feedback} \label{sec:ebm} In this section, we extend the use of digital feedback to a multi-qubit experiment, targeting the deterministic generation of \hl{entanglement by measurement}. We first turn the cavity into a \hl{parity meter} to measure the joint state of two coupled qubits. By carefully engineering the cavity-qubit dispersive shifts, we make the cavity transmission only sensitive to the excitation parity, but unable to distinguish states within each parity. Binning the final states on the parity result generates an entangled state in either case, with up to $88\%$ fidelity to the closest Bell state. Integrating the demonstrated feedback control in the parity measurement, we turn the entanglement generation from \hl{probabilistic} to \hl{deterministic}. \subsection{Two-qubit parity measurement} In a two-qubit system, the ideal parity measurement transforms an unentangled superposition state $\ket{\psi^0} = (\ket{00}+\ket{01}+\ket{10}+\ket{11})/2$ into Bell states \begin{eqnarray} \label{eq:paper5_Bell} && \ket{\Psi^{+}} = \frac{1}{\sqrt{2}}(\ket{01}+\ket{10})\,\,\, \mathrm{and} \,\,\, \ket{\Phi^{+}} = \frac{1}{\sqrt{2}}(\ket{00}+\ket{11}) \end{eqnarray} for odd and even outcome, respectively.
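The ideal parity projection can be verified in a four-dimensional sketch: projecting $\ket{\psi^0}$ with $(\mathbb{1}\pm\sigma_z\otimes\sigma_z)/2$ yields the two Bell states of Eq.~\eqref{eq:paper5_Bell}, each with probability $1/2$:

```python
import numpy as np

# Basis order: 00, 01, 10, 11
psi0 = np.ones(4) / 2.0               # (|00> + |01> + |10> + |11>) / 2
zz = np.diag([1.0, -1.0, -1.0, 1.0])  # parity operator sigma_z x sigma_z

post = {}
for name, sign in [("even", +1), ("odd", -1)]:
    P = (np.eye(4) + sign * zz) / 2.0    # projector onto the parity subspace
    phi = P @ psi0
    prob = float(np.vdot(phi, phi).real) # outcome probability
    post[name] = (phi / np.sqrt(prob), prob)

print(post["even"][0])  # -> (|00> + |11>)/sqrt(2)
print(post["odd"][0])   # -> (|01> + |10>)/sqrt(2)
```

Either outcome thus leaves the register maximally entangled, which is why binning on the parity result suffices to generate entanglement probabilistically, and feedback makes it deterministic.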
Beyond generating entanglement between non-interacting qubits~\cite{Ruskov03, Trauzettel06, Ionicioiu07, Williams08b, Haack10}, parity measurements allow deterministic two-qubit gates~\cite{Beenakker04, Engel05} and play a key role as syndrome detectors in quantum error correction~\cite{Nielsen00,Ahn02}. A heralded parity measurement has been recently realized for nuclear spins in diamond~\cite{Pfaff12}. By minimizing measurement-induced decoherence at the expense of single-shot fidelity, highly entangled states were generated with $3\%$ success probability. Here, we realize the first solid-state parity meter that produces entanglement with unity probability. \begin{figure*} \centering \includegraphics[width=\columnwidth]{Fig14_Fb} \caption{\label{fig:1_paper5} \textbf{Cavity-based two-qubit parity readout in cQED.} \textbf{a,} Simplified diagram of the experimental setup. Single- and double-junction transmon qubits ($\mathrm{Q}_{\mathrm{A}}$ and $\mathrm{Q}_{\mathrm{B}}$, respectively) dispersively couple to the fundamental mode of a 3D copper cavity enclosing them. Parity measurement is performed by homodyne detection of the qubit state-dependent cavity response~\cite{Blais04} using a JPA~\cite{Castellanos-Beltran08}. Following further amplification at $4~\mathrm{K}$ (HEMT) and room temperature, the signal is demodulated and integrated. A FPGA controller closes the feedback loop that achieves deterministic entanglement by parity measurement (Fig.~\ref{fig:4_paper5}). \textbf{b,} Matching of the dispersive cavity shifts realizing a parity measurement. \textbf{c,} Ensemble-averaged homodyne response $\avg{V_\mathrm{P}}$ for qubits prepared in the four computational basis states. \textbf{d,} Curves: corresponding ensemble averages of the running integral $\avg{V_\mathrm{int}}$ of $\avg{V_\mathrm{P}}$ between $t_\mathrm{i}=0$ and $t_\mathrm{f}=t$. Single-shot histograms ($5\,000$ counts each) of $V_\mathrm{int}$ are shown in $200~\mathrm{ns}$ increments. 
Figure adapted from Ref.~\citenum{Riste12b}. } \end{figure*} \subsection{Engineering the cavity as a parity meter} Our parity meter realization exploits the dispersive regime~\cite{Blais04} in two-qubit cQED. Qubit-state dependent shifts of a cavity resonance (here, the fundamental of a 3D cavity enclosing transmon qubits $\mathrm{Q}_{\mathrm{A}}$ and $\mathrm{Q}_{\mathrm{B}}$) allow joint qubit readout by homodyne detection of an applied microwave pulse transmitted through the cavity (Fig.~\ref{fig:1_paper5}\textbf{a}). The temporal average $V_\mathrm{int}$ of the homodyne response $V_\mathrm{P}(t)$ over the time interval $[t_\mathrm{i},t_\mathrm{f}]$ constitutes the measurement needle, with expectation value \[ \langle V_\mathrm{int} \rangle = \text{Tr}(\mathcal{O} \rho), \] where $\rho$ is the two-qubit density matrix and the observable $\mathcal{O}$ has the general form \[ \mathcal{O} = \beta_0 + \beta_{\mathrm{A}} \sigma_z^\mathrm{A} + \beta_{\mathrm{B}} \sigma_z^\mathrm{B} + \beta_{\mathrm{BA}} \sigma_z^\mathrm{B} \sigma_z^\mathrm{A}. \] The coefficients $\beta_0$, $\beta_{\mathrm{A}}$, $\beta_{\mathrm{B}}$, and $\beta_{\mathrm{BA}}$ depend on the strength $\epsilon_\mathrm{p}$, frequency $f_\mathrm{p}$ and duration $\tau_\mathrm{P}$ of the measurement pulse, the cavity linewidth $\kappa$, and the frequency shifts $2\chi_{\mathrm{A}}$ and $2\chi_{\mathrm{B}}$ of the fundamental mode when $\mathrm{Q}_{\mathrm{A}}$ and $\mathrm{Q}_{\mathrm{B}}$ are individually excited from $\ket{0}$ to $\ket{1}$. The necessary condition for realizing a parity meter is $\beta_{\mathrm{A}}=\beta_{\mathrm{B}}=0$ ($\beta_0$ constitutes a trivial offset). A simple approach~\cite{Hutchison09,Lalumiere10}, pursued here, is to set $f_\mathrm{p}$ to the average of the resonance frequencies for the four computational basis states $\ket{ij}$ ($i,j\in\{0,1\}$) and to match $\chi_{\mathrm{A}}=\chi_{\mathrm{B}}$. 
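To make the parity-meter condition concrete, the sketch below evaluates $\langle V_\mathrm{int} \rangle = \mathrm{Tr}(\mathcal{O}\rho)$ for the four computational basis states; the $\beta$ coefficients and the tensor-product ordering (qubit A as the leftmost label) are illustrative assumptions, not the measured values. With $\beta_{\mathrm{A}}=\beta_{\mathrm{B}}=0$, the needle takes one value for both even states and another for both odd states.

```python
import numpy as np

sz = np.diag([1.0, -1.0])  # Pauli-z
I2 = np.eye(2)

def observable(beta0, bA, bB, bBA):
    """O = beta0 + bA sz^A + bB sz^B + bBA sz^B sz^A,
    in the basis |00>, |01>, |10>, |11> (qubit A = leftmost label)."""
    szA = np.kron(sz, I2)
    szB = np.kron(I2, sz)
    return beta0 * np.eye(4) + bA * szA + bB * szB + bBA * szB @ szA

# Illustrative coefficients: the parity-meter condition beta_A = beta_B = 0
# leaves only the trivial offset and the two-qubit term.
O = observable(beta0=0.1, bA=0.0, bB=0.0, bBA=1.0)

def needle(O, ket):
    rho = np.outer(ket, ket)       # pure-state density matrix
    return np.trace(O @ rho).real  # <V_int> = Tr(O rho)

basis = np.eye(4)                  # |00>, |01>, |10>, |11>
v = [needle(O, b) for b in basis]
# v[0] == v[3] (even parity) and v[1] == v[2] (odd parity):
# the needle cannot distinguish states within each parity subspace.
```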
We engineer this matching by targeting specific qubit transition frequencies $f_\mathrm{A}$ and $f_\mathrm{B}$ below and above the fundamental mode during fabrication and using an external magnetic field to fine-tune $f_\mathrm{B}$ in situ. We align $\chi_{\mathrm{A}}$ to $\chi_{\mathrm{B}}$ to within $\sim0.06\,\kappa=2\pi \times 90~\mathrm{kHz}$ (Fig.~\ref{fig:1_paper5}\textbf{b}). The ensemble-average $\avg{V_\mathrm{P}}$ confirms nearly identical high response for odd-parity computational states $\ket{01}$ and $\ket{10}$, and nearly identical low response for the even-parity $\ket{00}$ and $\ket{11}$ (Fig.~\ref{fig:1_paper5}\textbf{c}). The transients observed are consistent with the independently measured $\kappa$, $\chi_{\mathrm{A}}$ and $\chi_{\mathrm{B}}$ values, and the $4~\mathrm{MHz}$ bandwidth of the JPA at the front end of the output amplification chain. Single-shot histograms (Fig.~\ref{fig:1_paper5}\textbf{d}) demonstrate the increasing ability of $V_\mathrm{int}$ to discern states of different parity as $t_\mathrm{f}$ grows (keeping $t_\mathrm{i}=0$), and its inability to discriminate between states of the same parity. The histogram separations at $t_\mathrm{f}=400~\mathrm{ns}$ give $|\beta_{\mathrm{A}}|,|\beta_{\mathrm{B}}| < 0.02~|\beta_{\mathrm{BA}}|$. \begin{figure} \centering \includegraphics[width=0.55\columnwidth]{Fig15_Fb} \caption{\label{fig:2_paper5} \textbf{Unconditioned two-qubit evolution under continuous parity measurement.} \textbf{a,} Pulse sequence including preparation of the qubits in the maximal superposition state $\rho^{(0)}=\ket{\psi^0}\bra{\psi^0}$, parity measurement and tomography of the final two-qubit state $\rho$ using joint readout. \textbf{b,} Absolute coherences $|\rho_{11,10}|$, $|\rho_{01,10}|$, $|\rho_{00,11}|$ following a parity measurement with variable duration $\tau_\mathrm{P}$. 
Free parameters of the model are the steady-state photon number on resonance $\bar{n}_{\mathrm{ss}} = 2.5\pm 0.1$, the difference $(\chi_{\mathrm{A}}-\chi_{\mathrm{B}})/\pi=235\pm4~\mathrm{kHz}$, and the absolute coherence values at $\tau_\mathrm{P}=0$ to account for few-percent pulse errors in state preparation and tomography pre-rotations. Note that the frequency mismatch differs from that in Fig.~\ref{fig:1_paper5}\textbf{b} due to its sensitivity to measurement power. \textbf{c, d,} Extracted density matrices for $\tau_\mathrm{P}=0$ (\textbf{c}) and $\tau_\mathrm{P}=400~\mathrm{ns}$ (\textbf{d}), by which time coherence across the parity subspaces (grey) is almost fully suppressed, while coherence persists within the odd-parity (orange) and even-parity (green) subspaces. Error bars correspond to the standard deviation of $15$ repetitions. Figure taken from Ref.~\citenum{Riste13b}.} \end{figure} \subsection{Two-qubit evolution during parity measurement} Moving beyond the description of the measurement needle, we now investigate the \hl{collapse} of the two-qubit state during parity measurement. We prepare the qubits in the maximal superposition state $\ket{\psi^0}=\frac{1}{2}\left(\ket{00}+\ket{01}+\ket{10}+\ket{11}\right)$, apply a parity measurement pulse for $\tau_\mathrm{P}$, and perform tomography of the final two-qubit density matrix $\rho$ with and without conditioning on $V_\mathrm{int}$ (Fig.~\ref{fig:2_paper5}\textbf{a}). We choose a weak parity measurement pulse exciting $\bar{n}_{\mathrm{ss}}=2.5$ intra-cavity photons on average in the steady-state, at resonance. A delay of $3.5/\kappa=350~\mathrm{ns}$ is inserted to deplete the cavity of photons before performing tomography. 
The tomographic joint readout is also carried out at $f_\mathrm{p}$, but with $14~\mathrm{dB}$ higher power, at which the cavity response is weakly nonlinear and sensitive to both single-qubit terms and two-qubit correlations ($\beta_{\mathrm{A}}\sim\beta_{\mathrm{B}}\sim\beta_{\mathrm{BA}}$, as required for tomographic reconstruction~\cite{Filipp09}). \begin{figure*} \centering \includegraphics[width=\columnwidth]{Fig16_Fb} \caption{\label{fig:3_paper5} \textbf{Probabilistic entanglement generation by postselected parity measurement.} \textbf{a,} Histograms of $V_\mathrm{int}$ ($\tau_\mathrm{P}=300~\mathrm{ns}$) for the four computational states. The results are digitized into $M_\mathrm{P}=1 (-1)$ for $V_\mathrm{P}$ below (above) a chosen threshold. \textbf{b,} Parity readout fidelity $F_{\mathrm{p}}$ as a function of $\tau_\mathrm{P}$. We define $F_{\mathrm{p}}=1-\epsilon_\mathrm{e}-\epsilon_{\rm o}$, with $\epsilon_\mathrm{e}=p(M_\mathrm{P}=-1|\mathrm{even})$ the readout error probability for a prepared even state, and similarly for $\epsilon_{\rm o}$. Data are corrected for residual qubit excitations ($1-2~\%$). Error bars are smaller than the dot size. Model curves are obtained from $5\,000$ quantum trajectories for each initial state and $\tau_\mathrm{P}$, with quantum efficiencies $\eta=0.25,$ $0.5$, and $1$ for the readout amplification chain. No single value of $\eta$ matches the dependence of $F_{\mathrm{p}}$ on $\tau_\mathrm{P}$. We attribute this discrepancy to low-frequency fluctuations in the parametric amplifier bias point, not included in the model. \textbf{c,} Concurrence $\mathcal{C}$ of the two-qubit entangled state obtained by postselection on $M_\mathrm{P}=-1$ (orange) and on $M_\mathrm{P}=+1$ (green squares). Empty symbols correspond to the threshold $V_{\mathrm{th}}$ that maximizes $F_{\mathrm{p}}$, binning $p_{\mathrm{success}} \sim 50~\%$ of the data into each case. 
Solid symbols correspond to a threshold $V_{\mathrm{th}-} (V_{\mathrm{th}+})$ for postselection on $M_\mathrm{P}=-1(+1)$, at which $\epsilon_{\rm o} (\epsilon_\mathrm{e}) = 0.01$. Concurrence is optimized at $\tau_\mathrm{P}\sim300~\mathrm{ns}$, where $p_{\mathrm{success}} \sim 20\%$ in each case. We employ maximum-likelihood estimation~\cite{Filipp09} (MLE) to ensure physical density matrices, but concurrence values obtained with and without MLE differ by less than $3\%$ over the full data set. \textbf{d, e,} State tomography conditioned on $V_\mathrm{P}>V_{\mathrm{th}-}$ (\textbf{d}) and $V_\mathrm{P}<V_{\mathrm{th}+}$ (\textbf{e}), with $\tau_\mathrm{P}=300~\mathrm{ns}$, corresponding to the dark symbols in \textbf{c}. Figure taken from Ref.~\citenum{Riste13b}.} \end{figure*} The ideal continuous parity measurement gradually suppresses the unconditioned density matrix elements $\rho_{ij,kl}=\bra{ij}\rho\ket{kl}$ connecting states with different parity (either $i\neq k$ or $j\neq l$), and leaves all other coherences (off-diagonal terms) and all populations (diagonal terms) unchanged. The experimental \hl{tomography} reveals the expected suppression of coherence between states of different parity (Figs.~\ref{fig:2_paper5}\textbf{b},\textbf{c}). The temporal evolution of $|\rho_{11,10}|$, with near full suppression by $\tau_\mathrm{P}=400~\mathrm{ns}$, is quantitatively matched by a master-equation simulation of the two-qubit system. Tomography also unveils a non-ideality: albeit more gradually, our parity measurement partially suppresses the absolute coherence between equal-parity states, $|\rho_{01,10}|$ and $|\rho_{00,11}|$. The effect is also quantitatively captured by the model. Although intrinsic qubit decoherence contributes, the dominant mechanism is the different \hl{AC-Stark phase shift} induced by intra-cavity photons on basis states of the same parity~\cite{Lalumiere10,Tornberg10,Murch13}. 
This phase shift has both deterministic and stochastic components, and the latter suppresses absolute coherence under ensemble averaging. We emphasize that this imperfection is technical rather than fundamental. It can be mitigated in the odd subspace by perfecting the matching of $\chi_{\mathrm{B}}$ to $\chi_{\mathrm{A}}$, and in the even subspace by increasing $\chi_\mathrm{A,B}/\kappa$ ($\sim 1.3$ in this experiment). \subsection{Probabilistic entanglement by measurement and postselection} The ability to discern parity subspaces while preserving coherence within each opens the door to generating entanglement by parity measurement on $\ket{\psi^0}$. For every run of the sequence in Fig.~\ref{fig:2_paper5}, we discriminate $V_\mathrm{int}$ using the threshold $V_{\mathrm{th}}$ that maximizes the parity measurement fidelity $F_{\mathrm{p}}$ (Fig.~\ref{fig:3_paper5}\textbf{a}). Assigning $M_\mathrm{P}=+1\,(-1)$ to $V_\mathrm{int}$ below (above) $V_{\mathrm{th}}$, we split the tomographic measurements into two groups, and obtain the density matrix for each. We quantify the entanglement achieved in each case using concurrence $\mathcal{C}$ as the metric~\cite{Horodecki09}, which ranges from $0\%$ for an unentangled state to $100\%$ for a Bell state. As $\tau_\mathrm{P}$ grows (Fig.~\ref{fig:3_paper5}\textbf{b}), the optimal balance between increasing $F_{\mathrm{p}}$ and incurring \hl{measurement-induced dephasing} and intrinsic decoherence is reached at $\sim300~\mathrm{ns}$ (Fig.~\ref{fig:3_paper5}\textbf{c}). \hl{Postselection} on $M_\mathrm{P}=\pm1$ achieves $\mathcal{C}_{|M_\mathrm{P}=-1} =45\pm 3\%$ and $\mathcal{C}_{|M_\mathrm{P}=+1}=17\pm 3\%$, with each case occurring with probability $p_{\mathrm{success}}\sim50\%$. The higher performance for $M_\mathrm{P}=-1$ results from lower measurement-induced dephasing in the odd subspace, consistent with Fig.~\ref{fig:2_paper5}. 
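Concurrence can be evaluated numerically with the standard spin-flip (Wootters) formula. The sketch below (our own illustration, using only textbook formulas rather than anything specific to this experiment) checks the two limiting cases quoted above: $\mathcal{C}=1$ for a Bell state and $\mathcal{C}=0$ for a product state.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix rho:
    C = max(0, l1 - l2 - l3 - l4), with l_i the decreasing square roots
    of the eigenvalues of rho * (Y x Y) rho* (Y x Y)."""
    sy = np.array([[0, -1j], [1j, 0]])
    Y = np.kron(sy, sy)
    rho_tilde = Y @ rho.conj() @ Y                # spin-flipped state
    ev = np.linalg.eigvals(rho @ rho_tilde)
    lam = np.sort(np.sqrt(np.abs(ev)))[::-1]      # decreasing order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

bell = np.zeros(4); bell[1] = bell[2] = 1 / np.sqrt(2)  # (|01>+|10>)/sqrt(2)
rho_bell = np.outer(bell, bell)

prod = np.zeros(4); prod[0] = 1.0                        # |00>, unentangled
rho_prod = np.outer(prod, prod)

c_bell = concurrence(rho_bell)   # 1 (i.e., 100%) for a Bell state
c_prod = concurrence(rho_prod)   # 0 for a product state
```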
The entanglement achieved by this probabilistic protocol can be increased with more stringent postselection. Setting a higher threshold $V_{\mathrm{th}-}$ achieves $\mathcal{C}_{|M_\mathrm{P}=-1} = 77\pm2\%$ but retains only $p_{\mathrm{success}}\sim20\%$ of the runs. Analogously, using $V_{\mathrm{th}+}$ achieves $\mathcal{C}_{|M_\mathrm{P}=+1}=29\pm4\%$ with similar $p_{\mathrm{success}}$ (Figs.~\ref{fig:3_paper5}\textbf{d}, \textbf{e}). However, increasing $\mathcal{C}$ at the expense of reduced $p_{\mathrm{success}}$ is not evidently beneficial for QIP. For the many tasks calling for maximally-entangled qubit pairs (ebits), one may use an optimized distillation protocol~\cite{Horodecki09} to prepare one ebit from $N=1/E_\mathcal{N}(\rho)$ pairs in a partially-entangled state $\rho$, where $E_\mathcal{N}$ is the \hl{logarithmic negativity}~\cite{Horodecki09}. The \hl{efficiency} $\mathcal{E}$ of ebit generation would be $\mathcal{E}= p_{\mathrm{success}} E_\mathcal{N}\left(\rho\right)$. For postselection on $M_\mathrm{P}=-1$, we calculate $\mathcal{E}=0.31~\mathrm{ebits/run}$ using $V_{\mathrm{th}}$ and $\mathcal{E}=0.20~\mathrm{ebits/run}$ using $V_{\mathrm{th}-}$. Evidently, increasing entanglement at the expense of reducing $p_{\mathrm{success}}$ is counterproductive in this context. \begin{figure*} \centering \includegraphics[width=\columnwidth]{Fig17_Fb} \caption{\label{fig:4_paper5} \textbf{Deterministic entanglement generation using feedback.} \textbf{a,} We close a digital feedback loop by triggering (via the FPGA) a $\pi$ pulse on $\mathrm{Q}_{\mathrm{A}}$ conditional on parity measurement result $M_\mathrm{P}=+1$. This $\pi$ pulse switches the two-qubit parity from even to odd, and allows the deterministic targeting of $ \ket{\Phi^{+}}=(\ket{01}+\ket{10})/\sqrt{2}$. \textbf{b, c,} Parity measurement results $M_\mathrm{P}=-1$ and $M_\mathrm{P}=+1$ each occur with $\sim50\%$ probability. 
The deterministic AC Stark phase acquired between $\ket{01}$ and $\ket{10}$ during parity measurement (due to residual mismatch between $\chi_{\mathrm{A}}$ and $\chi_{\mathrm{B}}$) is compensated by a global phase rotation in the tomography pulses. A different AC Stark phase is acquired between $\ket{00}$ and $\ket{11}$, resulting in the state shown in \textbf{c}, with maximal overlap with the even Bell state $[\ket{00}+\exp(- i \varphi_\mathrm{e})\ket{11}]/\sqrt{2}$ at $\varphi_\mathrm{e}=0.73\pi$. \textbf{d,} Generation rate of entanglement using feedback, as a function of the phase $\varphi$ of the $\pi$ pulse. The deterministic entanglement generation efficiency outperforms the efficiencies obtained with postselection (Fig.~\ref{fig:3_paper5}). Error bars are the standard deviation of $7$ repetitions of the experiment at each $\varphi$. \textbf{e,} Full state tomography for deterministic entanglement [$\varphi=(\pi-\varphi_\mathrm{e})/2$], achieving fidelity $ \bra{\Phi^{+}} \rho \ket{\Phi^{+}} = 66\%$ to the targeted $ \ket{\Phi^{+}}$, and concurrence $\mathcal{C} = 34\%$. Colored bars highlight the contribution from cases $M_\mathrm{P}=-1$ (orange) and $M_\mathrm{P}=+1$ (green). Figure taken from Ref.~\citenum{Riste13b}.} \end{figure*} \subsection{Deterministic entanglement by measurement and feedback} Motivated by the above observation, we finally demonstrate the use of feedback control to transform entanglement by parity measurement from probabilistic to deterministic, i.e., $p_{\mathrm{success}}=100\%$. While initial proposals in cQED focused on analog feedback schemes~\cite{Sarovar05}, here we adopt a digital strategy. Specifically, we use our home-built programmable controller (Section~\ref{sec:fbcontrollers}) to apply a $\pi$ pulse on $\mathrm{Q}_{\mathrm{A}}$ conditional on measuring $M_\mathrm{P}=+1$ (using $V_{\mathrm{th}}$, Fig.~\ref{fig:4_paper5}). 
In addition to switching the two-qubit parity, this pulse lets us choose which odd-parity Bell state to target by selecting the phase $\varphi$ of the conditional pulse. To optimize \hl{deterministic entanglement}, we need to maximize the overlap to the same odd-parity Bell state for $M_\mathrm{P}=-1$ (Fig.~\ref{fig:4_paper5}\textbf{b}) as for $M_\mathrm{P}=+1$ (Fig.~\ref{fig:4_paper5}\textbf{c}). For the targeted state $ \ket{\Phi^{+}}$, this requires cancelling the deterministic AC Stark phase $\varphi_\mathrm{e}=0.73\pi$ accrued between $\ket{00}$ and $\ket{11}$ when $M_\mathrm{P}=+1$. This is accomplished by choosing $\varphi=(\pi-\varphi_\mathrm{e})/2$, which clearly maximizes the entanglement obtained when no postselection on $M_\mathrm{P}$ is applied (Figs.~\ref{fig:4_paper5}\textbf{c}, \textbf{d}). The highest deterministic $\mathcal{C}=34\%$ achieved is lower than for our best probabilistic scheme, but the boost to $p_{\mathrm{success}}=100\%$ achieves a higher $\mathcal{E}=0.41~\mathrm{ebits/run}$. A parallel development realized probabilistic entanglement by measurement between two qubits in separate 3D cavities~\cite{Roch14}, establishing the first quantum connection between remote superconducting qubits. In another two-qubit, single-cavity system, feedback has recently been applied to enhance the fidelity of the generated entanglement~\cite{Liu15}. Following the first realizations in 3D cQED, parity measurements have been implemented using an ancillary qubit~\cite{Saira14, Chow14} in 2D architectures. Compared to the cavity-based scheme, the use of an ancilla evades measurement-induced dephasing and is better suited to scaling to larger circuits. \section{Conclusion} We have presented the first implementation of digital feedback control in superconducting circuits, and its evolution towards faster, simpler, and more configurable feedback loops. 
In particular, we showed the use of digital feedback for fast and deterministic qubit reset and for deterministic generation of entanglement by parity measurement. Considering the vast range of applications for feedback in quantum computing, we hope that this development is just the start of an exciting new phase of measurement-assisted digital control in solid-state quantum information processing. \section*{Acknowledgments} We thank all the collaborators who have contributed to the experiments here presented: J.~G.~van Leeuwen, C.~C.~Bultink, M.~Dukalski, C.~A.~Watson, G.~de Lange, H.-S.~Ku, M.~J.~Tiggelman, K.~W.~Lehnert, Ya.~M.~Blanter, and R.~N.~Schouten. We acknowledge L.~Tornberg and G.~Johansson for useful discussions. Funding for this research was provided by the Dutch Organization for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO, VIDI scheme), and the EU FP7 projects SOLID and SCALEQIT.
\section{Introduction} Automated Planning has traditionally been one of the most widely used techniques in AI and has been successfully applied in real-world applications \cite{castillo2008samap,fdez2019personalized}. However, in order to integrate it into online execution systems, i.e., systems used in real-time scenarios which interleave planning and acting, several issues must be addressed. Firstly, planning is often too slow for real-time scenarios. In most real-world problems the search space is enormous so, despite the use of heuristics, finding a suitable plan usually takes a long time. Secondly, since most real-world environments are highly dynamic, it is very likely that the environment will have changed before a long plan has finished executing. Despite great advances in the integration of planning and acting into online architectures \cite{patra2019acting,ingrand2017deliberation,guzman2012pelea,Niemueller_Hofmann_Lakemeyer_2019}, the above features still hinder the generalized adoption of automated planning in such scenarios. Because of that, many recent works which apply AI to guide agent behaviour in real-time scenarios, like video games, choose to rely on Machine Learning alone and do not integrate planning into their agent architecture. This can be clearly seen in \cite{vinyals2019grandmaster}. In this influential work, an agent is trained to play \emph{Starcraft}, a highly competitive real-time strategy (RTS) game. This seems like a perfect problem for planning: players need to establish a long-term, goal-oriented strategy in order to achieve victory, and all the dynamics of the game are known, so they can be represented in a planning domain. However, Vinyals et al. choose to integrate Deep Learning \cite{lecun2015deep} with Reinforcement Learning \cite{sutton2018reinforcement} to model the behaviour of the agent. 
Architectures which rely on Machine Learning (ML) and Reinforcement Learning (RL) present some advantages over planning: they usually require very little prior knowledge about the domain (they do not need a planning domain) and, once trained, they act quickly, since they do not perform any type of planning. Nevertheless, they also have some drawbacks. Firstly, they are very sample inefficient: they require a lot of data in order to learn, on the order of hundreds of thousands or even millions of samples \cite{torrado2018deep}. Secondly, they usually present poor generalization properties, i.e., they have difficulties applying what they have learnt not only to new domains but also to new problems of the same domain \cite{zhang2018study}. Since both Automated Planning and Reinforcement Learning have their own pros and cons, it seems natural to try to combine them as part of the same agent architecture, which ideally would possess the best of both worlds. For that purpose, we have resorted to \emph{Goal Reasoning} \cite{aha2015goal}, a design philosophy for agents whose entire behaviour revolves around goals: they learn to formulate goals, select goals, achieve the selected goals and select new goals when \emph{discrepancies} are detected. The main contribution of this paper is the proposal of an RL-based Goal Selection Module and its integration into a planning and acting architecture to control the behaviour of an agent in a real-time environment. We have trained and tested our approach on the GVGAI video game framework \cite{perez20152014}. GVGAI is a framework intended to evaluate the behaviour of reactive and deliberative agents in several video games. Its ultimate goal is to help advance the state of the art in General Artificial Intelligence. The Goal Selection Module here presented is based on a Convolutional Neural Network (CNN) \cite{krizhevsky2012imagenet} which has been trained with the RL algorithm known as Deep Q-Learning \cite{mnih2013playing}. 
The training experience has been extracted from the execution of thousands of episodes of a planning agent that randomly selects subgoals in the GVGAI environment, on both different domains and different problems for each domain. Training problems are also different from the ones used for testing, which allows us to evaluate the generalization ability of the module with respect to both domains and problems. The CNN receives as input an image-like encoding of the current state of the game $s$ and an eligible subgoal $g$, and returns the predicted length of the plan which starts at $s$, achieves $g$ and then achieves the final goal (wins the game). The Goal Selection Module selects the subgoal $g^*$ whose associated plan has the minimum predicted length. After selecting $g^*$, the Planner Module finds a valid plan from $s$ to $g^*$, which is then executed by the agent in GVGAI. We have conducted an experimentation to evaluate the total planning time taken by our approach, with respect to the planning time taken to produce the first solution to every original problem with a satisficing planner\footnote{We have used FF as the baseline planner.}. Our experimentation also compares the quality of the plans produced by both approaches. The results obtained show that both approaches are able to find plans of good quality, but our method greatly decreases planning time when applied to complex problems. Moreover, we have observed in our experiments that, with our approach, planning time remains almost constant for complex problems where our baseline satisficing planner fails to find a solution in reasonable time. We believe this argues in favour of adopting planning integrated with goal selection in scenarios with tight time restrictions. Addressing Goal Selection with Deep Q-Learning and a CNN has two main advantages. Firstly, as the results of our experiments show, the Goal Selection Module learns to generalize. 
The use of a CNN allows it to apply what it has learnt on the training levels to new levels it has never seen before. Secondly, thanks to the use of Deep Q-Learning, the Goal Selection Module learns to select goals \emph{thinking in the long term}, i.e., taking into account the subgoals it will have to achieve afterwards to beat the game. The structure of this work is the following. We first explain the GVGAI framework and the Deep Q-Learning algorithm. We then present an overview of the architecture and show how the Goal Selection Module learns. After that, we present the results of our empirical study. We then compare our approach with related work. We finish by presenting our conclusions and future work. \section{Background} \subsection{GVGAI} To test our planning and acting architecture we have used the General Video Game AI (GVGAI) Framework \cite{perez20152014}. This framework provides a game environment with a large number of tile-based games which are also very different in kind. For example, it comprises purely reactive games, such as \emph{Space Invaders}, and also games which require long-term planning in order to be solved successfully, such as \emph{Sokoban}. We have chosen to use deterministic versions of three GVGAI games (\textit{Boulder Dash}, \textit{IceAndFire} and \textit{Catapults}, detailed in the experiments section). We use these games to extract the planning and acting episodes our Goal Selection Module is trained on. All of the games require both deliberation and long-term thinking to be solved. In all of them, the player must reach an exit portal after accomplishing some subgoals which involve gathering objects on given cells. As an example, Figure \ref{fig:boulder_dash} shows the configuration of a level in the game \textit{Boulder Dash}. In our version of Boulder Dash, the player must collect nine gems and then go to the exit, while minimizing the number of actions used. 
In order to do that, it must traverse the level (one tile at a time) while overcoming the obstacles: the player cannot pass through walls, and boulders must be broken with its pickaxe before passing through. Also, the player must select which gems to collect, since there are more than nine gems available. All of this makes it very hard to find the shortest plan, or even a first solution plan for a satisficing planner, as shown in the experiments. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{BoulderDash_lv0.png} \caption{A level of the BoulderDash game.} \label{fig:boulder_dash} \end{figure} One very important reason we have chosen GVGAI is that it provides a mechanism for easily creating and integrating new games and levels. This way, we can create as many new levels for a given game as we want, which allows us to test the generalization abilities of a planning and acting architecture once its Goal Selection Module has been trained. The Video Game Description Language, VGDL \cite{perez20152014}, is the method used to define the dynamics and interactions of all the objects in each of the games. Every level in the game is defined by a level description file, which contains the layout of the level and the initial positions of the objects. Listing 1 shows the associated level description file of the game level shown on Figure \ref{fig:boulder_dash}. Each type of object has an associated character: \emph{w} for walls, \emph{o} for boulders, \emph{x} for gems, \emph{A} for the player, \emph{e} for the exit, \emph{.} for tiles and \emph{-} for empty tiles, which behave the same as normal tiles. 
\newpage \begin{center} \lstset{ caption=The level description file of the level shown on Figure \ref{fig:boulder_dash}., captionpos=b }
\begin{lstlisting}
wwwwwwwwwwwwwwwwwwwwwwwwww
w...o.xx.o......o..xoxx..w
w...oooooo........o..o...w
w....xxx.........o.oxoo.ow
wx...............oxo...oow
wwwwwwwwww........o...wxxw
w.-....o..............wxxw
w--........Ao....o....wxxw
wooo.............-....w..w
w......x....wwwwx-x.oow..w
w.--.....x..ooxxo-....w..w
w---..e...........-----..w
wwwwwwwwwwwwwwwwwwwwwwwwww
\end{lstlisting}
\end{center} \subsection{Deep Q-Learning} Q-Learning \cite{watkins1989learning} is one of the most widely used techniques in Reinforcement Learning, RL \cite{sutton2018reinforcement}. Like every RL technique, it learns a policy $\pi$ that, in every state $s$, selects the best action $a$ in the set of available actions $A$ in order to maximize the expected cumulative reward $R$, i.e., the expected sum of all the (discounted) rewards $r$ obtained by choosing actions according to the same policy $\pi$ from the current state $s$ until the end of the episode. According to the \emph{Reward Hypothesis}, all goals can be described as the maximization of $R$. This means that, no matter the goal an agent is pursuing, its behaviour can be modeled and learnt (more or less successfully) using an RL technique, such as Q-Learning. Q-Learning associates a value to each $(s,a)$ pair, known as the Q-value, $Q(s,a)$. This value represents the expected cumulative reward $R$ associated with executing action $a$ in state $s$, i.e., how good $a$ is when applied in $s$. This way, the policy $\pi$ learnt with Q-Learning corresponds to, given a state $s$, selecting the action $a^*$ in $A$ with the maximum associated Q-value. One of the main problems of Q-Learning is that it needs to learn the Q-value for each of the $(s,a)$ pairs, stored in the \emph{Q-table}. If the action or state spaces are too big, the Q-table grows and the learning problem becomes intractable. 
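The tabular update at the core of Q-Learning is the standard rule $Q(s,a) \leftarrow Q(s,a) + \alpha\,[r + \gamma \max_{a'} Q(s',a') - Q(s,a)]$. The following minimal sketch (our own didactic example, not code from this work; states, actions and the single transition are made up) applies it to a toy Q-table:

```python
import numpy as np

n_states, n_actions = 4, 2
Q = np.zeros((n_states, n_actions))  # the Q-table
alpha, gamma = 0.5, 0.9              # learning rate and discount factor

def q_update(s, a, r, s_next, done):
    """Standard tabular Q-learning update for one observed transition."""
    target = r if done else r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

# One illustrative transition: in state 0, action 1 yields reward 1
# and reaches the terminal state 3.
q_update(s=0, a=1, r=1.0, s_next=3, done=True)
# Q[0, 1] moved halfway (alpha = 0.5) from 0 toward the target 1.
```

The Q-table has one entry per $(s,a)$ pair, which is exactly what becomes intractable when the state space grows.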
Deep Q-Learning \cite{mnih2013playing} solves this problem. Instead of learning the Q-table, it uses a Deep Neural Network (DNN) to learn the Q-values. Thanks to the use of a DNN, it is able to generalize and correctly predict the Q-values of new $(s,a)$ pairs never seen before by the network. In our work, we select the best subgoal from a set of possible subgoals. The set of possible subgoals depends on the current state $s$. Since the state space is enormous, the size of the set of possible subgoals across all different states is also very large. For this reason, we use Deep Q-Learning in pursuit of the good generalization abilities shown by \cite{mnih2013playing}. \section{The Planning and Acting Architecture} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{ArchitectureOverview.png} \caption{An overview of the planning and acting architecture.} \label{fig:architecture} \end{figure} An overview of the planning and acting architecture can be seen on Figure \ref{fig:architecture}. The \textbf{Execution Monitoring} Module communicates with the GVGAI environment, receiving the current state $s$ of the game. It also supervises the state of the current plan. If the plan is not empty, it returns the next action $a$. If it is empty, the architecture needs to find a new plan. The \textbf{Goal Formulation} Module receives $s$ and generates the compound subgoal $G$, which is a list of single subgoals $\{g_1, g_2, ..., g_n\}$. Since all GVGAI games are tile-based, we have associated each subgoal with getting to its corresponding tile (cell), which allows us to handle subgoals uniformly for any of the games represented in this work. The \textbf{Subgoal Pattern} contains the prior information about each game domain needed to automatically generate $G$ given $s$. It is encoded as a list of object classes that correspond to subgoals. 
This allows us to easily adapt the \textit{Goal Formulation} to a new GVGAI game, since we only need to provide the \textit{Subgoal Pattern} with a list of object classes corresponding to subgoals in this new domain. In every game, each subgoal $g \in G$ corresponds to getting to a level tile that contains an object of one of the classes defined in the \textit{Subgoal Pattern} or, if the player has already achieved all the necessary subgoals, the final goal $g_f$ (get to the exit) is directly attainable and $G=\{g_f\}$. The \textbf{Goal Selection} Module receives $G$ and selects the best subgoal $g^* \in G$ given $s$ (the mechanism is explained in the next section). The \textbf{PDDL Parser} encodes $g^*$ as a PDDL Single Goal, e.g., (\emph{goto tile13}), and $s$ as a PDDL Initial State, which together constitute the PDDL Problem. The \textbf{Planner} Module receives the PDDL Problem along with the PDDL Domain, provided by a human expert, and generates a plan $p(s,g^*)$ which achieves $g^*$ starting from $s$. Finally, the \textit{Execution Monitoring Module} receives $p(s,g^*)$ and the cycle completes. It is worth noting that the list of subgoals received by the \textit{Goal Selection Module} might contain unreachable or dead-end subgoals (the player dies). In the first case, the planner cannot find a plan and notifies the \textit{Goal Selection Module}, which then selects the next best subgoal. In the second case, the agent fails to solve the problem. As explained below, Deep Q-Learning learns not to select these types of subgoals. \section{Goal Selection Learning} In order to select the best subgoal $g^* \in G$ for a given $s$, the Goal Selection Module iterates over every $g \in G$ and predicts the length of its associated plan. It then selects as $g^*$ the subgoal whose predicted plan length is minimal.
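This selection loop, together with the fallback to the next-best subgoal when the planner reports one unreachable, can be sketched as follows. Here `predict_length` and `find_plan` are hypothetical stand-ins for the CNN predictor and the Planner Module, not our actual interfaces:

```python
def select_subgoal(state, subgoals, predict_length, find_plan):
    """Pick the subgoal whose predicted plan length is minimal.

    If the planner reports a subgoal unreachable (find_plan returns
    None), fall back to the next-best candidate, as the architecture
    does. Both callables are illustrative placeholders.
    """
    ranked = sorted(subgoals, key=lambda g: predict_length(state, g))
    for g in ranked:
        plan = find_plan(state, g)
        if plan is not None:
            return g, plan
    raise RuntimeError("no reachable subgoal")

# Toy usage: predicted lengths and plans are made up for illustration.
lengths = {"gem1": 12.0, "gem2": 5.0, "gem3": 8.0}
plans = {"gem3": ["up", "up", "right"]}  # gem1 and gem2 are unreachable
g, plan = select_subgoal(
    "s0", ["gem1", "gem2", "gem3"],
    predict_length=lambda s, g: lengths[g],
    find_plan=lambda s, g: plans.get(g),
)
```

In the toy example, `gem2` has the smallest predicted length but no plan, so the loop falls back to `gem3`.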
The Module uses a Convolutional Neural Network (CNN) \cite{krizhevsky2012imagenet} that receives $s$ and a $g \in G$, both encoded as a \emph{one-hot matrix}, and outputs the predicted plan length. Each position of this one-hot matrix corresponds to a tile of the game level and encodes the objects within that tile as a \emph{one-hot vector}, i.e., a vector where each position is associated with a different type of object and contains \emph{1} if an object of that type is in the tile and \emph{0} otherwise. The subgoal $g$ is also encoded in the one-hot vector of its associated tile. Our approach for Goal Selection uses a Deep Q-Learning based model (which we call the \textbf{DQP Model}, an acronym for \textit{Deep-Q Planning}) that predicts as $l_{P(s,g)}$ the length of the plan $P(s,g)$ that achieves $g$ \emph{and}, after reaching it, achieves the final goal $g_f$ (after obtaining all the required subgoals in an optimal way). The DQP Model thus predicts the length of the entire plan, not only its first section, which we denote $p(s,g)$ and which corresponds to a plan that achieves $g$ starting from $s$. Because only the length of the first section $p(s,g)$ is known, while the length of the plan that achieves the remaining subgoals in an optimal way is not, this model cannot be trained in a supervised fashion. To train it, we have chosen to apply the methodology of Deep Q-Learning \cite{mnih2013playing}. To do so, we establish a correspondence between our problem and Reinforcement Learning (RL): actions $a$ in RL correspond in our work to achieving a subgoal $g$; the reward $r$ obtained by executing $a$ at $s$ corresponds to the length $l_{p(s,g)}$ of the plan $p(s,g)$ that starts at $s$ and achieves subgoal $g$; the expected cumulative reward $R$ associated with $(s,a)$ corresponds to the length $l_{P(s,g)}$ of the entire plan $P(s,g)$; and maximizing $R$ corresponds to minimizing $l_{P(s,g)}$.
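Returning to the input representation, the one-hot encoding described at the beginning of this section can be sketched as follows. The object vocabulary and the use of an extra slot to mark the subgoal tile are illustrative assumptions:

```python
# Hypothetical object vocabulary; one slot per object type plus one
# extra slot marking the candidate subgoal's tile (an assumption made
# for illustration).
OBJECT_TYPES = ["wall", "gem", "boulder", "avatar", "exit"]

def encode(level, subgoal_tile):
    """Encode a tile grid and a candidate subgoal as a one-hot matrix.

    level: 2-D list where each cell is a set of object-type names.
    subgoal_tile: (row, col) of the candidate subgoal g.
    Returns a rows x cols grid of one-hot vectors of length
    len(OBJECT_TYPES) + 1.
    """
    n = len(OBJECT_TYPES) + 1
    x = [[[0.0] * n for _ in row] for row in level]
    for i, row in enumerate(level):
        for j, cell in enumerate(row):
            for obj in cell:
                x[i][j][OBJECT_TYPES.index(obj)] = 1.0  # object present
    x[subgoal_tile[0]][subgoal_tile[1]][-1] = 1.0  # mark the subgoal tile
    return x

# Toy 2x2 level: a wall, the avatar, a gem, and an empty tile.
level = [[{"wall"}, {"avatar"}], [{"gem"}, set()]]
x = encode(level, subgoal_tile=(1, 0))
```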
Table \ref{table:RL_comparison} shows this correspondence. Moreover, when $g$ corresponds to an \textit{unreachable} or \textit{dead-end} goal (explained above), $r=100$, while $r=-100$ when $g$ is the final goal. This way we represent a penalty (a very long plan) to avoid unreachable or dead-end goals, and a big reward (a plan of \textit{negative} length) for the final goal, thus allowing the agent to learn to reject \textit{bad} goals and to select the final goal as soon as it is attainable. \begin{table}[h] \centering \begin{tabular}{|c|c|} \hline \textbf{RL} & \textbf{Our Work} \\ \hline Action $a$ & Subgoal $g$ \\ \hline Reward $r$ & $l_{p(s,g)}$ \\ \hline Cumulative Reward $R$ & $l_{P(s,g)}$ \\ \hline Maximize $R$ & Minimize $l_{P(s,g)}$ \\ \hline \end{tabular} \caption{Correspondence between RL and our problem.} \label{table:RL_comparison} \end{table} The CNN of the DQP Model predicts $l_{P(s,g)}$, which in Deep Q-Learning corresponds to the Q-value $Q(s,a)$. Since its correct value, the Q-target $Q^*(s,a)$, is unknown, it is estimated from other predicted Q-values $Q(s',a')$, a technique known as \emph{bootstrapping}. This is the method used to learn the Q-values. The network is trained by minimizing the squared difference between $Q(s,a)$ and $Q^*(s,a)$, where the Q-target is given by the Bellman equation. The resulting loss $L$ is shown below: \begin{equation*} L = (Q(s,a) - Q^*(s,a))^2 = \end{equation*} \begin{equation} (Q(s,a) - (r + \gamma \max_{a' \in A'} Q(s',a')))^2 \end{equation} \noindent where $s'$ is the next state (after applying $a$ in $s$), $A'$ is the set of applicable actions in $s'$ and $\gamma=1$ is the \emph{discount factor}, so in practice we do not discount future rewards (plan lengths). The CNN architecture used for the DQP Model is composed of 8 convolutional layers and 2 inner fully connected (fc) layers, without counting the output layer.
The first two convolutional layers contain 32 filters each, the next three use 64 filters each, and the last three use 128 filters each. The first fc layer contains 128 units and the second 32 units. We normalized the dataset before using it to train the CNN. Also, in order to make learning more stable, an auxiliary, independent CNN is used to estimate the Q-targets, a technique known as \emph{Fixed Q-targets} \cite{mnih2015human}. The DQP Model uses \emph{offline learning}, i.e., it is trained on static datasets. These datasets are populated by performing \emph{random exploration} on the training levels of the corresponding game: each time the Goal Selection Module must select a new subgoal $g^*$ for the current state $s$, it selects it randomly. Then, when the architecture has found $p(s,g^*)$ and executed it, arriving at state $s'$, a new sample is added to the dataset. The datasets of the DQP Model are thus filled with samples of the form $(s,g^*,r,s')$. \section{Experiments and Analysis of Results} We have conducted experiments with a twofold goal in mind: (1) to test the generalization abilities of our DQP Model by training and testing it on different levels and domains, and (2) to compare the total time (planning time + goal selection time) taken by our approach with the planning time needed by a classical planner using different optimization options. We have trained and tested our approach on three different GVGAI games: BoulderDash, IceAndFire and Catapults. The (final) goal of every game is getting to the exit after meeting certain requirements, i.e., achieving several subgoals, while minimizing the number of actions used. In our deterministic version of BoulderDash, the agent must traverse the level, collect at least nine gems and then get to the exit. In this game there are two types of obstacles: boulders, which must be broken with a pickaxe before passing through, and walls, which are impassable.
Subgoals in BoulderDash correspond to items of the class \emph{gem}. This information is encoded in the \textit{Subgoal Pattern Module} so that the architecture can correctly formulate subgoals. In IceAndFire, the agent must traverse the level, collect the ten coins present on the map and get to the exit. In this game there are impassable obstacles (walls and spikes) but, unlike BoulderDash, there are also tiles with ice and fire which can only be traversed after obtaining ice boots and fire boots, respectively. Thus, subgoals correspond to items of the classes \emph{coin}, \emph{fire-boots} and \emph{ice-boots}, which must be pursued in the right order so as to correctly avoid the obstacles. In Catapults, the agent must use the catapults in order to get to the exit safely. There are four types of catapults (\emph{up}, \emph{right}, \emph{left} and \emph{down}), which correspond to the subgoals in this game. When the agent steps on a catapult, it is launched in the corresponding direction and keeps flying until it hits a wall or another catapult, in which case this process repeats recursively. If the tile where the agent lands after this flight contains water, the agent dies and loses the level automatically; therefore, the model has to learn to avoid these subgoals. Another way of losing this game is reaching a dead-end state, i.e., a state from which no subgoal (catapult) or final goal (exit) is achievable. This is why Catapults is the hardest of the three games: the agent must carefully select the correct catapults, in the right order, so as to get to the exit without dying. For each game, we have written a PDDL planning domain and collected datasets to train our architecture on. To do this, the agent, making use of the \textit{Planning and Acting Architecture}, performed random exploration on the training levels of each game, i.e., the \textit{Goal Selection Module} selected subgoals at random and sent them to the planner.
For each level, we saved all the samples collected by the agent, up to 500 unique (non-repeating) samples per level or all the unique samples obtained after 1000 iterations, since some levels do not contain that many unique samples. We have used 100 training levels for BoulderDash and IceAndFire and 200 levels for Catapults (we use VGDL along with a GUI-based tool to easily create new levels), since we extracted fewer samples for each level of this game. In total, this accounts for 50000 training samples in BoulderDash, 42950 in IceAndFire and 60018 in Catapults. These datasets were used not only to train the Planning and Acting architecture but also to select and validate different CNN architectures and hyperparameters for the Goal Selection Module. This was done by training the candidate CNN architectures on a subset of the training dataset and evaluating their performance on levels not used for training. This way, we selected the best CNN architecture, which is the same for all three games except that we apply Batch Normalization after every convolutional layer for BoulderDash\footnote{All the resources, software, planning domains, and levels used, including the test levels, are available online in a public repository that will be provided should the paper be accepted.}. Once we obtained the best CNN architecture, we trained one DQP Model on the entire training dataset of each game. We used 20000 training iterations for BoulderDash and IceAndFire and 25000 for Catapults. Each trained model was evaluated on the test levels. These test levels were different from the ones used for training in order to measure the generalization ability of our approach when applied to levels never seen before. The performance of our architecture was measured according to the length (number of actions) of the plans obtained and the time needed to obtain them (goal selection and planning times).
In Catapults, since the agent can die, we also measure the \emph{success rate}, i.e., how often the agent completes each level (without dying). We have chosen the Fast-Forward (FF) Planning System \cite{hoffmann2001ff} for our \textit{Planner Module}, since the version of PDDL its parser uses is expressive enough to represent domains such as those of video games. We have selected Best-First-Search (BFS) with $g=1$ and $h=5$ as the search strategy for FF when planning for a given subgoal. This way, FF finds a valid plan which achieves the subgoal while trying to minimize its number of actions, although it is not guaranteed to obtain the shortest possible plan. In order to compare the performance of our Planning and Acting architecture with classical planning, we tried to solve the same test levels using FF but, this time, without employing our architecture. This means we executed FF on the PDDL problem associated with each test level, solving it completely with no goal selection whatsoever, as in classical planning. We tried to obtain the optimal (shortest) plan for each level using the BFS strategy with $g=1$ and $h=1$ but, since many levels were too complex for FF to solve optimally, we also executed FF with \emph{soft} optimization options (BFS with $g=1$ and $h=5$, as used when performing goal selection) and with no optimization options at all, making use in this case of the Enforced-Hill-Climbing (EHC) search strategy. Lastly, in order to assess the quality of the goal selection performed by our approach, we compared it with a model which selects subgoals completely at random, which we call the Random Model. This baseline corresponds to using the Planning and Acting architecture but, instead of employing the Goal Selection Module to select subgoals, selecting them at random. This way, the Random Model represents the worst possible way of selecting subgoals.
The test levels used to compare the performance of the different techniques comprised the five levels provided by default in GVGAI for each game plus four new levels we created. These additional test levels (referred to as \emph{hard} levels) were purposely created to be more complex and harder for FF to solve, but of the same size, i.e., number of tiles, as the other test levels (referred to as \emph{easy} levels). For instance, in BoulderDash we discovered that FF had trouble solving levels which contained a lot of boulders. Tables 2, 3 and 4 show the performance obtained by the different approaches on both the easy and hard levels of each game. For the Planning and Acting architecture and the Random Model, we repeated each execution 15 times and averaged the results. For the FF planner, we repeated each execution 5 times for every search strategy and averaged the planning times. We allowed FF to spend a maximum of 1 hour of planning time on each level. If after this time FF had not found a plan, we considered the corresponding level too complex for FF to solve.
\begin{table}[h] \label{table:BoulderDashResults} \centering \resizebox{.48\textwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l||l|l|l|l|} \hline \multicolumn{10}{|c|}{\textbf{Number of Actions in BoulderDash}} \\ \hline & \multicolumn{5}{c||}{Easy Levels} & \multicolumn{4}{c|}{Hard Levels} \\ \hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline DQP & 99 & 53 & 62 & 93 & 70 & 108 & \textbf{142} & \textbf{98} & 121 \\ Random & 207 & 188 & 144 & 177 & 190 & 214 & 302 & 239 & 262 \\ Optimal & - & 31 & 42 & - & 38 & - & - & - & - \\ BFS & 80 & 51 & 42 & 74 & 41 & - & - & - & 114 \\ EHC & - & 46 & 41 & 83 & 53 & 106 & - & - & - \\ \hline \end{tabular} } \vspace{.3cm} \resizebox{.48\textwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l||l|l|l|l|} \hline \multicolumn{10}{|c|}{\textbf{Total Time (s) in BoulderDash}} \\ \hline & \multicolumn{5}{c||}{Easy Levels} & \multicolumn{4}{c|}{Hard Levels} \\ \hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline DQP & 1.90 & 0.63 & 1.60 & 0.79 & 1.60 & \textbf{0.89} & \textbf{1.82} & \textbf{0.86} & \textbf{1.85} \\ Random & 0.47 & 0.43 & 0.32 & 0.39 & 0.45 & 0.55 & 0.61 & 0.53 & 0.52 \\ Optimal & - & 8.41 & 15.83 & - & 407.6 & \textbf{-} & \textbf{-} & \textbf{-} & \textbf{-} \\ BFS & 231.0 & 0.75 & 0.04 & 667.6 & 0.06 & \textbf{-} & \textbf{-} & \textbf{-} & \textbf{85.6} \\ EHC & - & 0.10 & 0.04 & 0.27 & 0.06 & \textbf{682.0} & \textbf{-} & \textbf{-} & \textbf{-} \\ \hline \end{tabular} } \caption{Results obtained by each approach in BoulderDash. 
The symbol ``-" represents a timeout (FF could not find a plan in 1 hour).} \end{table} \begin{table}[h] \label{table:IceAndFireResults} \centering \resizebox{.48\textwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l||l|l|l|l|} \hline \multicolumn{10}{|c|}{\textbf{Number of Actions in IceAndFire}} \\ \hline & \multicolumn{5}{c||}{Easy Levels} & \multicolumn{4}{c|}{Hard Levels} \\ \hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline DQP & \textbf{115} & 109 & \textbf{110} & \textbf{111} & 167 & 111 & 181 & \textbf{111} & 122 \\ Random & 140 & 109 & 117 & 135 & 182 & 117 & 181 & 114 & 143 \\ Optimal & 84 & 83 & 97 & 89 & 126 & 78 & 128 & 73 & 79 \\ BFS & 84 & 83 & \textbf{109} & \textbf{119} & 126 & 82 & 152 & \textbf{115} & 113 \\ EHC & \textbf{134} & 97 & \textbf{113} & \textbf{157} & 130 & 98 & 160 & \textbf{131} & 107 \\ \hline \end{tabular} } \vspace{.3cm} \resizebox{.48\textwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l||l|l|l|l|} \hline \multicolumn{10}{|c|}{\textbf{Total Time(s) in IceAndFire}} \\ \hline & \multicolumn{5}{c||}{Easy Levels} & \multicolumn{4}{c|}{Hard Levels} \\ \hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline DQP & 1.63 & 1.30 & 1.29 & 0.48 & 1.32 & 1.46 &\textbf{0.58} & \textbf{0.60} &\textbf{1.39} \\ Random & 0.27 & 0.30 & 0.18 & 0.19 & 0.17 & 0.20 & 0.32 & 0.26 & 0.19 \\ Optimal & 0.43 & 0.79 & 0.72 & 1.33 & 0.72 & 0.74 & \textbf{11.97} & \textbf{11.07} &\textbf{9.23} \\ BFS & 0.01 & 0.02 & 0.01 & 0.03 & 0.02 & 0.15 & \textbf{7.91} & \textbf{5.32} & \textbf{5.98} \\ EHC & 0.01 & 0.01 & 0.01 & 0.02 & 0.01 & 0.60 & 1.21 & 0.72 & 0.66 \\ \hline \end{tabular} } \caption{Results obtained by each approach in IceAndFire.} \end{table} \begin{table}[h] \label{table:CatapultsResults} \resizebox{.48\textwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l||l|l|l|l|} \hline \multicolumn{10}{|c|}{\textbf{Success Rate (\%) in Catapults}} \\ \hline & \multicolumn{5}{c||}{Easy Levels} & \multicolumn{4}{c|}{Hard Levels} \\ \hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline DQP & 
0.0 & \textbf{26.66} & \textbf{33.33} & \textbf{26.66} & 0.0 & \textbf{20.0} & 0.0 & \textbf{33.33} & \textbf{6.66} \\ Random & 20.0 & \textbf{6.66} & \textbf{13.33} & \textbf{20.0} & 6.66 & \textbf{0.0} & 0.0 & \textbf{0.0} & \textbf{0.00} \\ \hline \end{tabular} } \vspace{.3cm} \resizebox{.48\textwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l||l|l|l|l|} \hline \multicolumn{10}{|c|}{\textbf{Number of Actions in Catapults}} \\ \hline & \multicolumn{5}{c||}{Easy Levels} & \multicolumn{4}{c|}{Hard Levels} \\ \hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline DQP & - & \textbf{21} & \textbf{15} & \textbf{27} & - & 847 & - & 147 & 92 \\ Random & 25 & 21 & 15 & 27 & 22 & - & - & - & - \\ Optimal & 25 &\textbf{21} & 13 & \textbf{27} & 22 & - & - & - & - \\ BFS & 27 & \textbf{21} & \textbf{17} & \textbf{29} & 22 & - & - & - & 42 \\ EHC & 27 & \textbf{23} & 13 & \textbf{29} & 22 & 277 & - & - & - \\ \hline \end{tabular} } \vspace{.3cm} \resizebox{.48\textwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l||l|l|l|l|} \hline \multicolumn{10}{|c|}{\textbf{Total Time(s) in Catapults}} \\ \hline & \multicolumn{5}{c||}{Easy Levels} & \multicolumn{4}{c|}{Hard Levels} \\ \hline & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline DQP & - & 0.83 & 1.06 & 0.95 & - & \textbf{81.16} & - & \textbf{40.84} & \textbf{7.07} \\ Random & 0.36 & 0.36 & 0.19 & 0.37 & 0.49 & - & - & - & - \\ Optimal & 0.16 & 0.15 & 0.15 & 0.12 & 0.14 & \textbf{-} & - & \textbf{-} & \textbf{-} \\ BFS & 0.03 & 0.04 & 0.05 & 0.04 & 0.04 & \textbf{-} & - & \textbf{-} & \textbf{1789.6} \\ EHC & 0.05 & 0.05 & 0.04 & 0.05 & 0.04 & 3.39 & - & \textbf{-} & \textbf{-} \\ \hline \end{tabular} } \caption{Results obtained by each approach in Catapults. The symbol ``-" in the \emph{Optimal}, \emph{BFS} and \emph{EHC} rows represents a timeout (FF could not find a plan in 1 hour). 
In the \emph{DQP} and \emph{Random} rows it indicates that the corresponding approach was not able to solve that level (it has a success rate of 0\%).} \end{table} \textbf{Results for BoulderDash}. Table 2 shows the results obtained by the different approaches in BoulderDash. The DQP Model obtains plans which are approximately 23\% longer than those obtained by the FF planner with the soft optimization options (\emph{BFS} and \emph{EHC} rows in the top subtable of Table 2). The results show that this domain (game) poses difficulties for FF, which is only able to find the optimal plans for levels 1, 2 and 4, spending almost 7 minutes to do so for level 4. The BFS and EHC search strategies also present problems in this domain, particularly on the hard levels. For level 0, FF can only find a plan using the BFS strategy (spending almost 4 minutes), and it also needs more than 11 minutes to obtain a plan for level 3 with this same strategy. This shows that FF has trouble solving even the easy levels. When we tried to solve the hard levels using FF, we could only find plans for levels 5 (with EHC) and 8 (with BFS), needing in both cases more than one minute of planning time. On the other hand, it can be observed that the DQP Model solves every level spending less than 2 seconds of total time, which accounts for both planning time and goal selection time. What is even more surprising is that the DQP Model does not seem to spend more time on the hard levels than on the easy ones. If we look at the \emph{Random} row, we can observe that this model spends less time per level than the DQP Model. This means that most of the time spent by the DQP Model actually corresponds to the goal selection phase, i.e., every time the Goal Selection Module predicts the Q-value of a given \emph{(state, subgoal)} pair using the CNN.
If we take this into consideration, together with the fact that we are measuring \emph{total} time, which is actually split across every subgoal selection made by the DQP Model, our approach drastically reduces the load of the planner in this domain, to an extent where FF can only solve less than half of the levels in reasonable time. At the same time, our approach obtains plans which are only slightly worse than those obtained by FF (using BFS or EHC), with only 23\% more actions on average. \textbf{Results for IceAndFire}. If we now take a look at Table 3, we can observe that FF handles this domain much better than BoulderDash, being able to find the optimal plan for every level (although it spends around 10 seconds on levels 6, 7 and 8). Both the BFS and EHC methods solve all the easy levels almost instantly. Regarding the hard levels, EHC solves them easily too, and so does BFS, although it needs more than 5 seconds to solve levels 6, 7 and 8. As with BoulderDash, the DQP Model spends around 1 second per level, regardless of its complexity. If we now focus on the quality (number of actions) of the plans obtained, it can be observed that the DQP Model obtains plans which are, on average, as good as the ones obtained by EHC (only 2\% longer on average) and only slightly worse than those obtained using BFS (17\% longer on average). This shows that our approach performs even better in this domain than in BoulderDash, although all the levels are simple enough to be solved by FF quickly (except for levels 6, 7 and 8, on which BFS spends some more time). \textbf{Results for Catapults.} Table 4 shows the results obtained for Catapults. This game is by far the hardest of the three, since in each level the subgoals (catapults) must be pursued in a very specific order or the agent will die. If we take a look at the success rate of the DQP Model, we can see it has trouble solving this game.
On average, the DQP Model obtains a success rate of 16\% per level, i.e., it completes a given level in 16\% of the attempts. This might seem low, but the Random Model obtains a success rate of only 7\% per level, so the success rate of the DQP Model is actually more than twice as high as that of the Random Model. This shows how hard this domain really is. If we now observe the results obtained by FF, we can see it is able to solve the easy levels without complications. However, when it comes to the hard levels, only EHC is able to solve level 5. Levels 6 and 7 cannot be solved by FF (within one hour) with any search strategy, and level 8 can only be solved using BFS, spending almost half an hour. As with the other two domains, DQP can solve the hard levels (except for level 6, for which it obtains a success rate of 0\%), although it spends 43 seconds on average. This happens because these levels contain a lot of catapults (subgoals), which, combined with the fact that DQP makes many errors while selecting subgoals, means that the planner is called many more times than in the rest of the levels. If we now take a look at the length of the plans obtained, we can see that the plans obtained by DQP are on average as good as those obtained by FF on the easy levels. On the hard levels, however, the plans obtained by DQP are longer than those obtained with FF, although level 7 can only be solved by DQP. In light of the results obtained, we can state that our approach obtains plans in the BoulderDash and IceAndFire domains of almost the same quality (length) as those obtained using classical planning, i.e., the FF planner. We have shown that, as the complexity of the problems increases, the DQP Model is able to solve them spending much less time than FF, to a point where, for really complex problems, FF fails to provide a solution in reasonable time (even with no optimization options involved).
In Catapults, our approach fails to solve the levels most of the time. We have seen that this is because the domain is really complex, as the success rate obtained by the Random Model shows. Because of this, even though the DQP Model obtains much better results than the Random Model, this is not enough to solve this domain reliably. For this reason, simple levels of this domain are better solved using FF although, as mentioned before, when the complexity of the levels increases, FF is not able to solve this domain either. The results obtained by the DQP Model in these domains suggest that our approach is able to obtain good results, i.e., plans of good quality while spending little planning time, in domains of different kinds, with the exception of domains where subgoals must be achieved in a very strict order, i.e., where only a few of the possible subgoal permutations correspond to a valid way of solving the level. However, even for these domains, it should be possible to obtain acceptable results by training the model on a bigger dataset. \section{Related Work} The use of Neural Networks (NN) in Automated Planning has been a topic of great interest in recent years. Some works have applied Deep Q-Learning to solve planning and scheduling problems as a substitute for online search algorithms. \cite{shen2017deep} uses Deep Q-Learning to solve the \emph{ship stowage planning problem}, i.e., deciding in which slot to place each of a set of containers so that the slot scheme satisfies a series of constraints and optimizes several objective functions at the same time. \cite{mukadam2017tactical} also employs Deep Q-Learning, but this time to solve the \emph{lane changing problem}. In this problem, autonomous vehicles must automatically change lanes in order to avoid traffic and reach the exit as quickly as possible. Here, Deep Q-Learning is only used to learn the long-term strategy, while a low-level module handles changes between adjacent lanes without collisions.
In our work, we also employ Deep Q-Learning but, instead of using it as a substitute for classical planning, we integrate it along with planning into our planning and acting architecture. Also, we do not focus on solving one specific problem but rather create an architecture which we hypothesize is generalizable across a wide range of game domains. There are other works which use neural networks to solve planning problems but, instead of relying on RL techniques such as Deep Q-Learning, train a NN so that it learns to perform an \emph{explicit planning process}. \cite{toyer2018action} proposes a novel NN architecture known as \emph{Action Schema Networks} (ASNet) which, as the authors explain, \emph{are specialised to the structure of planning problems much as Convolutional Neural Networks (CNN) are specialised to the structure of images}. \cite{tamar2016value} uses a CNN that performs the computations of the value-iteration (VI) planning algorithm \cite{bellman1957dynamic,bertsekas2015dynamic}, thus making the planning process differentiable. This way, both works use NN architectures which \emph{learn to plan}. These NNs are trained on a set of training problems and evaluated on different problems of the same planning domain, showing better generalization abilities than most RL algorithms. \cite{tamar2016value} argues that this happens because, in order to generalize well, NNs need to learn an \emph{explicit planning process}, which most RL techniques do not. Although our architecture does not learn to plan, it does incorporate an off-the-shelf planner which performs explicit planning. We believe this is why our architecture shows good generalization abilities. Neural networks have also been applied to other aspects of planning. For instance, \cite{dittadi2018learning} trains a NN that learns a planning domain just from visual observations, assuming that actions have \emph{local} preconditions and effects.
The learnt domain is generalizable across different problems of the same domain and, thus, can be used by a planner to solve these problems. There exist several techniques which facilitate the application of Automated Planning in real-time scenarios, such as Goal Reasoning \cite{aha2015goal}, Anytime Planning \cite{richter2010lama}, Hierarchical Planning (e.g., HTN \cite{georgievski2015htn}) and domain-specific heuristics learned using ML \cite{yoon2008learning}. \cite{guzman2012pelea} presents PELEA, a domain-independent, online execution architecture which performs planning at two different levels, \emph{high} and \emph{low}, and is able to learn domain models, low-level policies and planning heuristics. \cite{mcgann2008deliberative} proposes T-REX, an online execution system used to control autonomous underwater vehicles. This system partitions deliberation across a set of concurrent \emph{reactors}. Each reactor solves a different part of the planning problem and cooperates with the others, interchanging goals and state observations. In this work, we have proposed an architecture which uses Goal Reasoning as the method for interleaving planning and acting. \cite{jaidee2012learning} proposes a Goal Reasoning architecture which uses Case-Based Reasoning \cite{kolodner2014case} and Q-Learning in order to learn to detect discrepancies, associate discrepancies to new goals and learn policies that achieve the selected goals. In our work, we have focused on learning to select subgoals, using a NN (integrated into the Deep Q-Learning algorithm) instead of traditional Q-Learning in order to give our architecture the ability to generalize. For this reason, we believe our approach scales better when applied to big state spaces than the one proposed in \cite{jaidee2012learning}. In future work, we plan to extend our architecture so that it is also able to learn new subgoals. 
\cite{bonanno2016selecting} employs an architecture that does use a NN, concretely a CNN, to select subgoals for navigating a maze in the game \emph{Minecraft}. When a subgoal must be selected, the CNN receives an image of the current state of the game, which is used to decide the most suitable subgoal for that state. Unlike in our work, a hard-coded expert procedure is used to teach the CNN which subgoal must be selected in each state. As Bonanno et al. recognise, this approach turns the problem into a classification task instead of an RL one. Furthermore, the set of eligible subgoals is always the same four, regardless of the state of the game. In our work, the compound subgoal $G$ is different for each game state and can contain a different number of single subgoals $g \in G$ to choose from. Finally, it is worth mentioning the influential work on Deep RL \cite{mnih2015human} that addresses how to learn models to control the behaviour of reactive agents in ATARI games. In contrast to that work, we are interested in addressing how deliberative behaviour (such as planning) can be improved by mainstream Machine Learning techniques. This is one of the main reasons we chose the GVGAI video game framework, since it provides an important repertoire of video games where deliberative behaviour is mandatory to achieve high-level performance. \section{Conclusions and Future Work} We have proposed a goal selection method which learns to select subgoals with Deep Q-Learning in order to interleave planning and acting. We have tested our architecture on three different GVGAI games, using different levels for training and testing. We have compared our approach with a classical planner, measuring both the quality (length) of the plans and the time spent to obtain them. We have shown that our approach is able to obtain plans of similar quality to those obtained by a classical planner, needing on average much less time to solve complex problems (levels).
We have also shown our DQP model is applicable to domains (games) of different kinds and presents good generalization properties when applied to new levels. Unlike our model, most RL techniques cannot generalize well \cite{zhang2018study}. At the same time, the original DQN paper \cite{mnih2013playing} utilizes a training dataset of 10 million samples, whereas we only use around 50000 samples to train our model. We believe the reason behind all of this is that, with our approach, we are actually splitting the planning problem into two parts. On the one hand, we use RL (Deep Q-Learning specifically) to select subgoals, which can be interpreted as a form of high-level planning. On the other hand, we use a classical planner (FF) to achieve each selected subgoal, which can be viewed as a form of low-level planning. This way, the complexity of the problem to solve is split and shared between the RL algorithm and the planner. Thus, just as the load on the planner is greatly reduced (which manifests as much smaller planning times), the Deep Q-Learning algorithm also obtains considerably better results (better generalization while being more sample-efficient) than it would normally do without the planner's help. We believe this synergy is the key element of our approach. One limitation of our work is that, in order to apply our architecture to a new game, we need to manually create its associated domain. In future work, we intend to make use of the method detailed in \cite{ignacio} to automatically obtain PDDL domains from VGDL game descriptions. We also plan to learn to formulate goals, in order to achieve true generalization across domains. Lastly, we plan to augment our approach so that it can be used in non-deterministic environments. We believe this should be as simple as training our DQP model to predict the uncertainty or risk associated with a subgoal, in addition to the length of the corresponding plan. \section{Acknowledgments} Financial support tbd.
\section{Introduction} Metallic nanoparticles, in particular those of Au, are key components of modern nanotechnology, mainly owing to their catalytic and plasmonic properties\cite{Atwater2010,Kim2019,Gelle2020}. As is the case with most materials at the nanoscale, these properties have a strong size dependence, since the nanoparticle size governs its volume and surface-to-volume ratio, which in turn determine most physical and chemical properties. In addition to size-dependence, recent studies have shown strong shape-dependence of both chemical and physical properties \cite{Burda2005, Tritsaris2011, Barmparis2016, Li2017}. For example, different shapes exhibit different Localized Surface Plasmon Resonance (LSPR) \cite{Yu2017} and have different numbers of active sites for catalysis, since different facets are exposed \cite{Campbell2002}. Shape-controlled synthesis is usually achieved by fine-tuning the reaction conditions while using suitable ligands \cite{Grzelczak2008, Xia2009}. Understanding the factors that determine nanoparticle shapes, including the types and lengths of their edges, is crucial for the design of novel nanomaterials. Shape is known to affect many applications, such as catalysis, since for different shapes differently coordinated atoms are exposed and different coordinations may be more efficient for certain reactions. Examples where nanoparticle edges play an important role include the conversion of exhaust gases, sensing and CO$_2$ reduction, to name just a few. Catalytic converters in cars contain metal nanoparticles that catalyse the conversion of poisonous exhaust gases, such as CO, to non-toxic gases, such as CO$_2$. This conversion takes place solely on the edges of the nanoparticles \cite{lopez04,roling18}. CO$_2$ can be transformed to useful chemicals, including fuels, by means of the catalytic CO$_2$ reduction reaction; such reactions take place mostly on the edges of nanoparticles \cite{bagger17,bagger19}. 
Au nanoparticles can catalyse the reduction of CO$_2$ to CO, a significant reaction which converts the greenhouse gas CO$_2$ into CO that can then be used in synthesis gas to produce higher hydrocarbons, thus enabling a more circular use of hydrocarbons. This reaction requires a considerable amount of energy, and small Au clusters and nanowires, where the percentage of edge atoms is very high, have shown a lot of promise in reducing that barrier \cite{Zhu_co2_red}. Metal nanoparticles are key components of modern materials for sensing, and the edges of nanoparticles play an important role in the performance of the sensors \cite{clement15,hernandez18}. Nanoparticle edges, and in particular edges of Au nanoparticles, are thus important in two interconnected major problems of our times, namely the quality of air in urban environments, through catalytic conversion and sensing of toxic gases, and CO$_2$ emissions and the greenhouse effect. Therefore, knowledge of edge properties is important for applications that have an impact on everyday life. As nanoparticles become smaller, the ratio of edge- to surface- and bulk atoms increases. As a result, the effect of edges becomes more prominent, and could affect the shape that the nanoparticle will assume. Several theoretical studies have predicted nanoparticle shapes from first-principles calculations, typically within the framework of the Wulff construction \cite{Ringe2011,Barmparis2012, Barmparis2015}. In this framework, particle shape is determined by the ratios between the surface energies of different $(hkl)$ crystal surfaces, while edge and vertex energies are neglected. Moreover, in contrast to the well-established concept of surface energy, little is known about the edge energy of crystalline solids. Even though the importance of edges at the nanoscale is widely accepted, there is no consensus in the literature yet as to the extent of their effect \cite{Alpay2015,Cao2016}. 
Theoretical investigations have produced different results for edge energies, the main reason being the different definitions of edge length for nanoparticles, while there are very few experimental data for edge energies. At the same time, many important problems of modern nanotechnology depend on the properties and energetics of nanoparticle edges. For example, the importance of edge and vertex atoms has been observed experimentally in the work of Campbell et al.\cite{Campbell2002}, who found that their calorimetric data on Ru particle formation did not match a model that took into account only the surface energy, especially for smaller nanoparticles. Additionally, Alpay et al. \cite{Alpay2015} showcased electron microscopy images of nanoparticles whose corners are rounded, indicating that vertex atoms have a considerably higher energy and are therefore avoided in the equilibrium shape. \begin{figure} \includegraphics[width=\columnwidth]{figures/fig1.png} \caption{Typical samples from the 5 types of structures that were used to produce data for the ML algorithm. The computational cell is also shown. a) cubic ((100) exposed facets) nanoparticle, b) nanowire with the (100) facets exposed, extending infinitely vertically, c) (100) surface slab extending infinitely in the two lateral axes, d) octahedral ((111) exposed facets) nanoparticle and e) (111) surface slab extending infinitely in the two lateral axes. Figure was created using the VESTA software \cite{vesta}.} \label{fig:structures} \end{figure} Edge energy is usually defined as the energy required to form an edge divided by the edge length, in the limit of infinite edge length. To compute the edge energy, one also needs the surface energy, as any partition of an infinite bulk crystal generates both surfaces and edges. Different researchers have used different definitions and methods to calculate the edge energy of a given crystal at a given orientation. 
Hamilton \cite{Hamilton2006} tried three different definitions of the edge length: (i) taking all the atoms in the edge, leading to $L = n d$ ($d$ is the distance between first neighbours), (ii) excluding half of each of the two atoms at the vertices of the edge, with $L = (n-1) d$, and (iii) a compromise between the two, with $L = (n-0.5)d$. Using Molecular Dynamics (MD) calculations with semi-empirical potentials on Pd clusters, he arrived at values of the order of a few meV\AA$^{-1}$, an order of magnitude lower than the value expected based on the calculated surface energies. Pel{\'{a}}ez et al. \cite{Pelaez2012} investigated the edge energy of Ni and Al using nanowires with different facets exposed and MD calculations with Embedded Atom Method (EAM) potentials. They found the edge energy of these structures to be of the order of 0.1 eV\AA$^{-1}$. Zhao et al. \cite{Zhao2016}, after showcasing the problem of the definition of an edge, introduced the idea of relative edge energy (REE) as the difference in energy over the difference in total edge length of two different structures with the same number of atoms, since they attributed most of the energy difference to the edge energy. Using their method on Ru nanoparticles, they arrived at values of around 50 meV\AA$^{-1}$ for the edge energy. Lai et al. \cite{Lai2020} investigated the dependence of the edge energy on the fractional area of (111) surfaces in truncated-octahedron and truncated-cube nanostructures, which only have (111) and (100) surfaces; they used Density-Functional Theory (DFT) for the calculation of the surface and edge energies of four different transition metals. They found values for the edge energy ranging from 0.17 to 0.32 eV\AA$^{-1}$. These pioneering works on the edge energy of metals demonstrate that the calculation of edge energies is a highly non-trivial task. Different definitions of edge energy and different computational methods result in values that span two orders of magnitude, from a few meV\AA$^{-1}$ to hundreds of meV\AA$^{-1}$. 
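The spread introduced by these length conventions is easy to quantify. The short sketch below applies the three definitions to the same illustrative edge; the excess energy and atom count are invented for illustration and are not Hamilton's Pd results.

```python
# Apply the three edge-length definitions L = n*d, (n-1)*d, (n-0.5)*d to one
# illustrative edge. E_excess and n are made-up numbers, for illustration only.
import math

a = 4.08                  # fcc lattice constant of Au (Angstrom)
d = a / math.sqrt(2)      # first-neighbour distance
n = 10                    # atoms along the edge
E_excess = 5.0            # total excess energy attributed to the edge (eV)

tau = {"n*d": E_excess / (n * d),
       "(n-1)*d": E_excess / ((n - 1) * d),
       "(n-0.5)*d": E_excess / ((n - 0.5) * d)}
for label, value in tau.items():
    print(f"L = {label:10s} -> tau = {value:.3f} eV/Angstrom")
```

For this short edge the three conventions already differ by about 10\%; the discrepancy shrinks as $1/n$, which is why the edge energy only becomes unambiguous in the infinite-length limit.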
The difficulty in the calculation of the edge energy lies in the definition of the edge length, which is not uniquely defined in nanostructures. The only unambiguous definition of edge energy comes from a nanowire of large enough diameter and infinite length. Here, we propose a method to tackle this challenge by calculating the energy per edge atom, which can then be converted to energy per length once the length per atom for an infinite nanowire is known. To get this value, we consider several different nanostructures, with periodicity along three, two, one or zero directions, which contain the same types of edge atoms. We use a supervised Machine Learning (ML) approach based on a multiple linear regression algorithm to obtain the energies of the various types of atoms in nanostructures. The method is verified to give precise results for bulk and surface energies for nine different energy functionals. After that, we proceed to use this method to also calculate edge and vertex energies. The energies obtained have excellent accuracy, with a mean absolute percentage error of 10$^{-3}$ at worst, far outperforming the more conventional approach of fitting a univariate polynomial to the energy. \begin{table*} \centering \caption{Summary of the nanostructures contained in each database. 
The MD potential rows represent all eight MD potentials used for both the relaxed and unrelaxed structures.} \begin{tabular}{ccc} \hline Exposed Facet & Nanostructures & Datapoints \\ \hline \hline (100) DFT unrelaxed & nanoparticles, nanowires, slabs & 22 \\ (100) DFT relaxed & nanoparticles, nanowires, slabs & 15 \\ (100) MD potential & nanoparticles & 28 \\ (111) DFT unrelaxed & nanoparticles, slabs & 13 \\ (111) DFT relaxed & nanoparticles, slabs & 9 \\ (111) MD potential & nanoparticles & 48 \\ \hline \end{tabular} \label{tab:db_summary} \end{table*} \section{Methodology} We focus on Au nanoparticles, as Au is among the most commonly used metals in modern nanotechnology, and the properties we calculate here may be useful for potential applications. Moreover, many different interatomic potentials exist for Au, which allows for extensive testing of the present model. However, the methodology presented here can be easily transferred to any other metal. We begin by constructing databases of atomic configurations and their total energies that will be used with the Machine Learning algorithm. To this end, we calculate the total energy of many different nanostructures. The total energy is calculated either by quantum-mechanical simulations at the level of Density-Functional Theory (DFT), or by classical simulations using a variety of interatomic potentials typically used in Molecular Dynamics (MD) simulations. DFT calculations were performed with the Vienna ab-initio simulation package (VASP) \cite{Kresse1993,Kresse1999, Kresse1996,Kresse1996a}, using the Projector Augmented Wave (PAW)\cite{Blochl1994} method (potentials version of Sep. 2000) and the Generalized Gradient Approximation of Perdew-Burke-Ernzerhof (PBE) \cite{Perdew1996} for the exchange-correlation functional. A 500 eV cut-off energy was used for all the calculations. 
A single ${\mathbf k}$-point was used to sample the Brillouin zone of the nanoparticles, while for the periodic structures of nanowires and surface slabs, ${\mathbf k}$-point samplings of $15\times1\times1$ and $15\times15\times1$ were used, respectively. All atomic coordinates were allowed to relax to reach the minimum total energy of the system. For all DFT calculations, the lattice constant used was the one obtained from DFT bulk relaxation, 4.173 \AA, slightly larger than the experimental lattice constant of Au, which is 4.08 \AA. MD calculations were performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) package. For these calculations the experimental lattice constant of 4.08 \AA\text{} was used. We use eight different interatomic potentials for Au: the simple pair potentials of Lennard-Jones \cite{Erkoc1997} and Morse \cite{Erkoc1997}, the many-body Effective Medium Theory (EMT) potential of Jacobsen\cite{Jacobsen1996} and five different many-body Embedded-Atom Method (EAM) potentials\cite{Ackland1987, Zhou2004, Foiles1986, Olsson2010, Grochola2005}. Databases were constructed for the ML algorithm using DFT and MD calculations of the total energy for various nanostructures, such as slabs, nanowires and high-symmetry nanoparticles. We consider two different classes of nanostructures. The first class comprises systems that only contain (100) faces and edges between these (100) faces; these structures are used to extract the edge energy of the (100)/(100) edge and the vertex energy of the (100)/(100)/(100) vertex. This class contains slabs with surfaces parallel to the (100) plane, tetragonal nanowires of infinite length with surfaces parallel to the (100) and the equivalent (010) planes, and cubic nanoparticles with faces parallel to the (100) and the equivalent (010) and (001) planes. In fcc Au, as well as any metal with cubic point group symmetry, the (100), (010) and (001) planes have identical atomic structure. 
These structures are shown in Fig. \ref{fig:structures}. The second class of nanostructures comprises systems that contain only (111) faces and edges between these (111) faces; these structures are used to extract the edge energy of the (111)/(111) edge and the vertex energy of the (111)/(111)/(111) vertex. This class contains slabs with surfaces parallel to the (111) plane and octahedral nanoparticles with faces parallel to the (111) and equivalent planes. In fcc Au, as well as any metal with cubic point group symmetry, the eight $(hkl)$ planes with $h,k,l = \pm 1$ have identical atomic structure to the (111) plane. We consider regular octahedra, each with eight faces parallel to these eight planes, twelve edges and six vertices. DFT calculations are limited to a few hundred atoms due to their very high computational cost, while MD can easily handle several millions of atoms. Due to the vast number of entries in each MD database, it was not necessary to include results from slabs and nanowires, and these structures are used only in DFT calculations. In total, we consider several tens of different systems for the DFT calculations and about 70 nanoparticles for each classical potential. Overall, we constructed about seven hundred different data-points, divided in 36 datasets corresponding to different structures or different calculation methods. Once the databases are made, we extract features of each structure and use machine learning (ML) with the target property being the total energy. The linear regression algorithm was used, as implemented in the Python Scikit-Learn package (version 1.0.2)\cite{scikit-learn}. Multiple linear regression uses the formula \begin{equation} \label{linear_regression} y = a_0 + a_1x_1 + a_2x_2 + ... + a_Nx_N. \end{equation} Here $y$ is the target property, $N$ the number of features, and $x_1$, $x_2$, ..., $x_N$ are the features/independent variables. 
Multiple Linear Regression fits the coefficients of the model $a_0$, $a_1$, $a_2$, ..., $a_N$ to minimize the residual sum of squares between the predicted and the real $y$. For scoring, the $R^2$ metric was used, which is given by: \begin{equation} \label{r_sq} R^2 = 1 - \frac{\sum(y_{pred}-y_{real})^2}{\sum(y_{real}-\bar{y}_{real})^2}, \end{equation} in which the sums run over all the target values and $\bar{y}_{real}$ is the mean of the real values: \begin{equation} \bar{y}_{real} = \frac{1}{N_{real}}\sum_i y_{real,i} \end{equation} \begin{figure} \includegraphics[width=\columnwidth]{figures/Figure_1a_v4.png} \includegraphics[width=\columnwidth]{figures/Figure_1b_v4.png} \caption{Cohesive energy of the unrelaxed structures, as calculated with the various methods, versus the total number of atoms. Up: cubic nanoparticles; Down: octahedral nanoparticles.} \label{Cohesive_energy} \end{figure} \section{Results and Discussion} \subsection{Model for total energy as a function of shape} We advocate that the total energy of each nanostructure equals the sum of bulk, surface, edge and vertex energies. While the decomposition of the energy into bulk and surface energy has been well established for materials since the nineteenth century \cite{rosakis14,Barmparis2015,rosakis16}, a further decomposition of the surface energy into contributions from planar surfaces, edges and vertices is still under discussion.

We start from the Gibbs free energy of a structure, which can be expressed as \begin{equation} \label{Gibbs_energy} E = E_{bulk} + \sum_{hkl} \gamma_{hkl}A_{hkl}, \end{equation} in which $\gamma_{hkl}$ and $A_{hkl}$ are the surface tension and surface area of the surface with Miller indices $(hkl)$. In the case of the structures studied in the present work, only one type of $(hkl)$ surface is exposed and hence Eq. 
\eqref{Gibbs_energy} can be rewritten as \begin{equation} \label{Gibbs_2} E = E_{bulk} + \gamma A , \end{equation} where $\gamma$ is the surface tension of the exposed facets and $A$ is the total surface area. In Eq. \eqref{Gibbs_energy} and Eq. \eqref{Gibbs_2}, the contributions of the edges and vertices can be considered to be included in the surface term. If these contributions are separated from the surface term, the Gibbs equation takes the form \begin{equation} \label{Gibbs_3} E = E_{bulk} + \gamma A' + \tau L + N_v v . \end{equation} Here, $A'$ is the surface area excluding the edge and vertex atoms, $\tau$ is the edge energy per unit length, $L$ the total edge length, $N_v$ the number of vertices and $v$ the energy of a vertex atom. Again, in the structures studied only one type of edge and vertex is present, but even if this were not the case, the idea can be generalized with a sum as in Eq. \eqref{Gibbs_energy}. Eq. \eqref{Gibbs_3} can, with some assumptions, be transformed to a polynomial form with only one unknown variable. We tried fitting the energies to a polynomial of $x=N^{1/3}$, where $N$ is the total number of atoms. For large $x$, Eq. \eqref{Gibbs_3} would be a cubic polynomial of $x$, as $L\sim x, A\sim x^2$ and $V\sim x^3$. Zhao et al. \cite{Zhao2016} used a similar idea by expressing all terms of Eq. \eqref{Gibbs_3} with respect to the total edge length $L$. In both their case and in the present work, the fit of the simulation data to the equation yielded unsatisfactory results, which in many cases also had the wrong signs, e.g., a negative vertex or edge energy, implying an energetically favoured formation of edges/vertices, which cannot hold. Additionally, since the equation is polynomial in nature, the extracted coefficients depended strongly on the initial guess, and even after imposing constraints based on physical intuition the results were still not promising. 
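For concreteness, the univariate form that was fitted can be written explicitly (the coefficients $c_i$ are our shorthand for the combinations of terms in Eq. \eqref{Gibbs_3}):

```latex
\begin{equation}
E(x) \simeq c_3 x^3 + c_2 x^2 + c_1 x + c_0, \qquad x = N^{1/3},
\end{equation}
```

where $c_3$ absorbs the bulk term ($V \sim x^3$), $c_2$ the surface term ($A' \sim x^2$), $c_1$ the edge term ($L \sim x$) and $c_0$ the vertex term $N_v v$.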
Considering the above limitations, a different approach was adopted: assuming that an energy decomposition exists, we can express the total energy as a linear function of the numbers of atoms in the bulk, on planar surfaces, on edges, and at vertices. In the following, we will refer to atoms on planar surfaces simply as ``surface atoms'', explicitly excluding edge and vertex atoms. The atoms of the nanostructure are characterized as bulk, surface, edge, and vertex atoms based on their position with respect to the symmetry elements of each nanostructure, as well as their coordination number, $z$. The latter is equal to the number of first neighbours of the given atom. The coordination number $z$ is maximal for bulk atoms and becomes lower as we consider surface, edge and vertex atoms, respectively. For fcc Au, these numbers range from $z=12$ for bulk atoms down to $z=3$ for atoms at the vertices of cubic nanoparticles. We assume that the total energy of the nanoparticle is linear in the four numbers of atoms, that is, \begin{equation} \label{energy_decomp_equation} E = E_b N_b + E_s N_s + E_e N_e + E_v N_v, \end{equation} while the total number of atoms in the nanoparticle, $N$, is the sum of the numbers of the four different kinds of atoms: \begin{equation} \label{number_of_atoms} N = N_b + N_s + N_e + N_v. \end{equation} The reference energy for all calculations was that of an isolated Au atom, i.e., the energy of an isolated Au atom is set to zero. In this model, the total energy of a nanostructure is decomposed into the average contributions of each atom type, namely $E_b$, $E_s$, $E_e$ and $E_v$ for bulk, surface, edge and vertex atoms, respectively. 
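As an illustration of this classification, the sketch below builds an ideal (unrelaxed) fcc cube with (100) faces and sorts its atoms by coordination number; the cube size and the 1.1 cutoff factor are our illustrative choices, not parameters from the calculations in this work.

```python
# Build an ideal (unrelaxed) fcc cube with (100) faces and classify its atoms
# by coordination number z. Size and the 1.1*d_nn cutoff are illustrative.
import itertools, math

n = 3                                           # conventional cells per side
basis = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
atoms = []
for i, j, k in itertools.product(range(n + 1), repeat=3):
    for bx, by, bz in basis:
        p = (i + bx, j + by, k + bz)
        if all(c <= n for c in p):              # keep only sites inside the cube
            atoms.append(p)

d_nn = math.sqrt(0.5)                           # first-neighbour distance (units of a)
def coordination(p):
    return sum(0 < math.dist(p, q) < 1.1 * d_nn for q in atoms)

counts = {}
for p in atoms:
    z = coordination(p)
    counts[z] = counts.get(z, 0) + 1
print(dict(sorted(counts.items())))             # {3: 8, 5: 24, 8: 78, 12: 62}
```

For this cube the four classes appear directly: $z=12$ bulk, $z=8$ surface, $z=5$ edge and $z=3$ vertex atoms, spanning exactly the $z=12$ to $z=3$ range quoted above.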
Similar decompositions of the energy into various types of atoms have been used in the past for a thermodynamic description of size-dependent shape evolution \cite{Barnard2004}, to explain the enhanced catalytic activity of nanoparticles \cite{lopez04}, or to account for the mechanical properties of nanocrystalline solids \cite{galanis13}. Eq. \eqref{energy_decomp_equation} would give satisfactory predictions if the interatomic interactions were described by pair potentials, and is expected to hold very well for noble metals, like Au, whose atoms possess filled $d$ shells and are therefore spherically symmetric. Indeed, we find that Eq. \eqref{energy_decomp_equation} is valid for Au in all cases considered in the present study. \begin{figure} \includegraphics[width=\columnwidth]{figures/predicted_vs_calculated_v2.png} \includegraphics[width=\columnwidth]{figures/size_effect_v2.png} \caption{Up: The calculated DFT total energy versus the predicted total energy for the DFT datasets. Down: The percentage error versus the number of atoms.} \label{ml_dft} \end{figure} \subsection{Atomistic simulations} We start by calculating the total energy of all structures, using both MD and DFT codes. We then repeat the calculation allowing the atomic positions to relax and minimize the total energy of the system. As expected, a relatively small percentage of atoms, mostly at the outermost layers, change their positions during relaxation, and the overall shape is well preserved in the process. For each structure, we calculate the cohesive energy, defined as \begin{equation} \label{cohesive energy} E_c = E/N, \end{equation} where $N$ is the total number of atoms and $E$ is the calculated total energy. All energies are reported with reference to the energy of an isolated atom: for all calculation methods, the energy of one atom at the center of a very large simulation box is zero. Fig. \ref{Cohesive_energy} shows the results for the cohesive energy of cubic and octahedral nanoparticles. 
For a given nanoparticle shape, the number of vertex atoms is always the same, independent of size, while the ratios of edge-to-bulk and surface-to-bulk atoms decline as the total number of atoms increases. For large nanoparticles of average diameter $d$, the number of atoms $N$ is proportional to $d^3$, as the particle density $N/V$ is constant and the volume $V$ is proportional to $d^3$. For similar reasons, $N_s$ will be proportional to $d^2$ and $N_e$ will be proportional to $d$. Therefore, for a large nanoparticle with $N$ atoms, $N_b, N_s, N_e$ and $N_v$ will be proportional to $N, N^{2/3}, N^{1/3}$ and $N^0$, respectively. For this reason, the cohesive energy reaches a constant value for large values of $N$, which is independent of shape and equal to the bulk cohesive energy, which in turn should equal the coefficient $E_b$ of Eq. \eqref{energy_decomp_equation}. As the value of $E_b$ is model-dependent, the various methods give slightly different asymptotic values for the cohesive energy, as shown in Fig. \ref{Cohesive_energy}. \subsection{Machine learning computations} Databases were constructed for DFT and for each MD interatomic potential, using the number of each atom type and the total energy of each nanostructure. For the ML model, the features are the numbers of each atom type ($N_b$, $N_s$, $N_e$, $N_v$) and the target property is the total energy of the nanoparticle. In the linear model, the coefficients of the fit are the energies $E_b, E_s, E_e, E_v$ of Eq. \eqref{energy_decomp_equation}. The ML model that used Eq. \eqref{energy_decomp_equation} to fit the data produced an excellent score, $R^2>0.99$, for all the datasets; only the Lennard-Jones potential has a smaller $R^2$. Interestingly enough, the relatively simple energy decomposition model was quite accurate even in cases where this was not expected, such as the DFT-based datasets, both unrelaxed and relaxed, for which the model reproduced the total energy with a percentage error of less than 0.5\%. 
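A minimal, dependency-free reconstruction of this fit is sketched below: atom counts of ideal (100) cubes are generated analytically, a synthetic total energy is built from the relaxed (100) DFT coefficients reported in Table \ref{tab:ene}, and the linear model of Eq. \eqref{energy_decomp_equation} is re-fitted by least squares. The normal-equations solver here is our stand-in for the scikit-learn LinearRegression actually used in this work.

```python
# Re-fit E = Eb*Nb + Es*Ns + Ee*Ne + Ev*Nv on synthetic, exactly linear data.
def cube_counts(n):
    """(N_b, N_s, N_e, N_v) for an ideal fcc cube of n conventional cells per side."""
    N_v = 8
    N_e = 12 * (n - 1)
    N_s = 6 * ((n - 1) ** 2 + n ** 2)
    N_tot = (n + 1) ** 3 + 3 * n ** 2 * (n + 1)
    return (N_tot - N_s - N_e - N_v, N_s, N_e, N_v)

true_coef = (-3.27, -2.82, -2.36, -2.08)       # Eb, Es, Ee, Ev: relaxed (100) DFT
X = [cube_counts(n) for n in range(2, 8)]
y = [sum(c * x for c, x in zip(true_coef, row)) for row in X]

# least squares via the normal equations, solved by Gauss-Jordan elimination
m = 4
AtA = [[sum(r[i] * r[j] for r in X) for j in range(m)] for i in range(m)]
Aty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(m)]
M = [AtA[i] + [Aty[i]] for i in range(m)]
for i in range(m):
    piv = max(range(i, m), key=lambda k: abs(M[k][i]))
    M[i], M[piv] = M[piv], M[i]
    for k in range(m):
        if k != i:
            f = M[k][i] / M[i][i]
            M[k] = [mk - f * mi for mk, mi in zip(M[k], M[i])]
w = [M[i][m] / M[i][i] for i in range(m)]

# R^2 score: exactly 1 here, since the synthetic data are linear by construction
y_hat = [sum(c * x for c, x in zip(w, row)) for row in X]
y_bar = sum(y) / len(y)
R2 = 1 - sum((p - r) ** 2 for p, r in zip(y_hat, y)) / sum((r - y_bar) ** 2 for r in y)

# cohesive energy E/N from the fitted coefficients approaches E_b for large cubes
def Ec(n):
    counts = cube_counts(n)
    return sum(c * x for c, x in zip(w, counts)) / sum(counts)

print([round(c, 3) for c in w], round(R2, 6), round(Ec(20), 3))
# -> [-3.27, -2.82, -2.36, -2.08] 1.0 -3.204
```

The recovered coefficients match the inputs and the cohesive energy drifts towards $E_b$ with size, mirroring the asymptotic behaviour discussed above; on the real DFT/MD data the fit is of course only approximate, as reflected in the reported errors.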
The results can be seen in Fig. \ref{ml_dft}, for which all the datapoints were used to extract the energy coefficients of Eq. \eqref{energy_decomp_equation}, which were then used to predict the energy of all the nanostructures. In the lower panel of Fig. \ref{ml_dft}, the percentage error of the prediction is plotted versus the size of the nanostructures, revealing a relatively small size effect: the smaller nanostructures give rise to higher deviations in the prediction, but the prediction accuracy remains excellent. The ML-predicted values of $E_b$, $E_s$, $E_e$ and $E_v$ for all the MD potentials were quite robust and did not change when varying the dataset size. Leave-One-Out cross-validation yielded excellent predictions for the MD datasets (except the Lennard-Jones), with a mean absolute percentage error of the order of $10^{-7}$ for the unrelaxed datasets and of the order of $10^{-3}$ for the relaxed ones; the same cross-validation for the DFT datasets yielded a mean absolute percentage error of the order of $10^{-3}$ for both relaxed and unrelaxed structures. \subsection{Linear regression} We consider four different datasets: relaxed/unrelaxed and (100)-oriented/(111)-oriented structures. The calculated coefficients for all datasets are summarized in Table \ref{tab:ene}. In all cases, the relation between the coefficients was \begin{equation} E_b < E_s < E_e < E_v < 0 \label{eq_inequality} \end{equation} as expected, since the number of bonds for the atom types is in decreasing order: bulk, surface, edge, vertex. To a first approximation, the energy is lower for a larger number of bonds; therefore, the energies of Au atoms should increase with decreasing coordination number. Also, in all cases the energies of Au atoms in a relaxed structure should be lower than the energy of an isolated Au atom, which is set to zero in this study. Preserving the ordering of Eq. 
\eqref{eq_inequality} is not trivial; in many cases, numerical errors cause some coefficients to have the wrong ordering or even be positive \cite{Zhao2016}. The bulk energy $E_b$ is the same in all four tables and changes by less than 10 meV/atom upon relaxation, as the relaxation does not affect very much atoms with bulk coordination. The experimental value of the cohesive energy of Au is -3.81 eV/atom \cite{Kittel2005}. Most empirical potentials reproduce this value or give values that are very close to it. The DFT value of -3.27 eV/atom deviates from the experimental value by 14\%, in accordance with general trends in DFT calculations for the cohesive energy of metals \cite{Tran2016}. For the surface energy $E_s$, the ML regression correctly predicts that it is lower for the (111) surfaces found in octahedra compared to the (100) surfaces found in cubes; this holds for all potentials. Moreover, in all cases the relaxed surface energy is lower than the unrelaxed surface energy, as expected. The experimental value for the surface tension of Au(111) is $\gamma=1.5$ J/m$^2$ \cite{Tyson1977}. In order to translate this value to the notation used in the present work, we use the definition that surface tension is the excess energy relative to the bulk divided by the area, $A$, i.e., \begin{equation} \gamma = \frac{E - NE_b}{A} \quad \Rightarrow \quad \gamma = \frac{E_s-E_b}{A_{at}}. \label{eq:gamma} \end{equation} In Eq. \eqref{eq:gamma}, the area per atom, $A_{at}=A/N_s$, for Au(111) equals $A_{at}=a^2\sqrt{3}/4$, where $a=4.08$ \AA\ is the experimental lattice constant of Au. The second equation in \eqref{eq:gamma} holds for slabs, where $N_e=N_v=0$, or in the limit of large nanoparticles, where $N_e$ and $N_v$ are negligible. Using the experimental values for the cohesive energy ($E_b$), the surface tension ($\gamma$) and $A_{at}$ for (111), we obtain the experimental value of $E_s$ to be $E_s = -3.13$ eV/atom. 
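These relations are easy to check numerically. The sketch below reproduces the experimental $\gamma$ of Au(111) from Eq. \eqref{eq:gamma} and, as a further illustration, converts the relaxed (100) DFT coefficients into the excess edge and vertex quantities used elsewhere in the literature; the assumption that atoms along a (100)/(100) cube edge are spaced by the lattice constant $a$ is ours, not a statement from the calculations in this work.

```python
# Numerical check of gamma = (E_s - E_b)/A_at, plus per-atom -> excess conversions.
import math

EV_A2_TO_J_M2 = 16.0218                    # 1 eV/Angstrom^2 in J/m^2

# (i) experimental gamma of Au(111) from the experimental E_b, E_s quoted above
a_exp = 4.08
A_111 = a_exp ** 2 * math.sqrt(3) / 4      # area per atom on (111)
gamma_111 = (-3.13 - (-3.81)) / A_111 * EV_A2_TO_J_M2

# (ii) excess edge/vertex energies from the relaxed (100) DFT coefficients,
# assuming edge atoms are spaced by the DFT lattice constant (our assumption)
a_dft = 4.173
E_b, E_e, E_v = -3.27, -2.36, -2.08
tau_100 = (E_e - E_b) / a_dft              # edge energy per unit length (eV/Angstrom)
v_100 = E_v - E_b                          # relative vertex energy (eV)
print(f"gamma(111) = {gamma_111:.2f} J/m^2, tau = {tau_100:.2f} eV/A, v = {v_100:.2f} eV")
# -> gamma(111) = 1.51 J/m^2, tau = 0.22 eV/A, v = 1.19 eV
```

The recovered $\gamma(111) \approx 1.5$ J/m$^2$ matches the experimental value by construction, $v = 1.19$ eV coincides with the relative vertex energy quoted below, and, under the stated spacing assumption, $\tau \approx 0.22$ eV\AA$^{-1}$ falls inside the 0.17--0.32 eV\AA$^{-1}$ range of Lai et al.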
In contrast to the case of the cohesive energy, the empirical potentials, which are fitted to experimental values, give slightly worse results for the surface energy than the first-principles DFT calculation, which gives $E_s=-2.95$ eV. Notice that the DFT value for $E_s$ is higher than the experimental value; however, as the DFT bulk energy, $E_b=-3.27$ eV, is also higher than the experimental cohesive energy of Au, $E_b=-3.81$ eV, the DFT value for the surface tension $\gamma$ of Au(111) turns out to be lower than the experimental value. The surface tensions for Au(111) and Au(100) calculated from the DFT values of $E_s$ are 0.68 J/m$^2$ and 0.83 J/m$^2$, respectively, in excellent agreement with previously reported DFT values from slab calculations \cite{Barmparis2012}. The vertex energy, $E_v$, is quite sensitive to numerical errors: the nanoparticles used in the empirical potential calculations of the present study contain many millions of atoms, but only eight vertex atoms (cubes) or six vertex atoms (octahedra). Therefore, the vertex energy is extremely vulnerable to numerical errors in empirical potential calculations. EMT and some EAM potentials give reasonable vertex energies that are close to the DFT values (Table \ref{tab:ene}). The relative vertex energies $v=E_v-E_b$ from the DFT calculations are \begin{center} $v_{(100)/(100)/(100)}$ = 1.19 eV \end{center} \begin{center} $v_{(111)/(111)/(111)}$ = 0.98 eV \end{center} As expected, the vertex formed by three (111) surfaces, where the Au atom has four neighbors, has lower energy than the vertex formed by three (100) surfaces, where the Au atom has three neighbors. It should be noted that, for the case of $v_{(111)/(111)/(111)}$, including smaller nanoparticles with fewer than 90 atoms in the database causes deviations of up to 0.5 eV from the reported value. 
The reason behind this deviation probably stems from the fact that these nanoparticles contain only a few bulk atoms, which are moreover relatively close to the surfaces due to the acute angles of the octahedra. In contrast, all other values for the energies of bulk, surface, and edge atoms, and the energy of the (100)/(100)/(100) vertex, were found to be well-converged within the present dataset, and any modifications of the dataset size result in changes of a few meV per atom at most. \begin{table} \caption{Average energy of bulk, surface, edge and vertex atoms in unrelaxed and relaxed cubic and octahedral nanostructures. Cubes expose (100) surfaces; octahedra expose (111) surfaces. All energies are given in eV/atom and the value in parentheses is the relative difference from the corresponding DFT value.} \begin{center} \begin{footnotesize} {\bf (100)-oriented (cubes), unrelaxed} \begin{tabular}{ccccc} \hline Potential & $E_b$ & $E_s$ & $E_e$ & $E_v$ \\ \hline \hline LJ\cite{Erkoc1997} & -3.65 (0.12) & -2.17 (-0.22) & -1.22 (-0.47) & -0.67 (-0.58) \\ MORSE\cite{Erkoc1997} & -3.81 (0.17) & -2.34 (-0.16) & -1.34 (-0.42) & -0.74 (-0.53) \\ EMT\cite{Jacobsen1996} & -3.80 (0.16) & -3.45 (0.24) & -3.00 (0.30) & -2.50 (0.57)\\ EAM-A\cite{Ackland1987} & -3.79 (0.16) & -3.38 (0.22) & -2.88 (0.25) & -2.01 (0.26)\\ EAM-Z\cite{Zhou2004} & -3.93 (0.20) & -3.37 (0.21) & -2.52 (0.09) & -1.50 (-0.06)\\ EAM-F\cite{Foiles1986} & -3.93 (0.20) & -3.45 (0.24) & -2.74 (0.19) & -2.07 (0.30)\\ EAM-O\cite{Olsson2010} & -3.81 (0.17) & -3.31 (0.19) & -2.71 (0.17) & -2.12 (0.33)\\ EAM-G\cite{Grochola2005} & -3.92 (0.20) & -3.16 (0.14) & -1.84 (-0.20) & -0.91 (-0.43)\\ DFT-PBE & -3.27 (0.00) & -2.78 (0.00) & -2.31 (0.00) & -1.59 (0.00)\\ \end{tabular} \vspace{1.0em} {\bf (100)-oriented (cubes), relaxed} \begin{tabular}{ccccc} \hline Potential & $E_b$ & $E_s$ & $E_e$ & $E_v$ \\ \hline \hline LJ\cite{Erkoc1997} & -3.65 (0.12) & -2.17 (-0.24) & -1.22 (-0.48) & -0.66 (-0.68)\\ MORSE\cite{Erkoc1997} & 
-3.81 (0.17) & -2.35 (-0.17) & -1.37 (-0.42) & -0.74 (-0.64)\\ EMT\cite{Jacobsen1996} & -3.80 (0.16) & -3.50 (0.24) & -3.28 (0.39) & -2.83 (0.36)\\ EAM-A\cite{Ackland1987} & -3.79 (0.16) & -3.39 (0.20) & -3.12 (0.32) & -1.89 (-0.09)\\ EAM-Z\cite{Zhou2004} & -3.93 (0.20) & -3.41 (0.21) & -2.99 (0.27) & -1.59 (-0.24)\\ EAM-F\cite{Foiles1986} & -3.93 (0.20) & -3.46 (0.23) & -3.13 (0.33) & -2.36 (0.13)\\ EAM-O\cite{Olsson2010} & -3.81 (0.17) & -3.33 (0.18) & -3.00 (0.27) & -2.37 (0.14)\\ EAM-G\cite{Grochola2005} & -3.92 (0.20) & -3.26 (0.16) & -2.48 (0.05) & -0.83 (-0.60)\\ DFT-PBE & -3.27 (0.00) & -2.82 (0.00) & -2.36 (0.00) & -2.08 (0.00)\\ \end{tabular} \vspace{2.0em} {\bf (111)-oriented (octahedra), unrelaxed} \begin{tabular}{ccccc} \hline Potential & $E_b$ & $E_s$ & $E_e$ & $E_v$ \\ \hline \hline LJ\cite{Erkoc1997} & -3.65 (0.12) & -2.42 (-0.17) & -1.8 (-0.32) & -0.93 (-0.46)\\ MORSE\cite{Erkoc1997} & -3.81 (0.17) & -2.58 (-0.11) & -1.89 (-0.28) & -1.00 (-0.42)\\ EMT\cite{Jacobsen1996} & -3.80 (0.16) & -3.54 (0.22) & -3.31 (0.26) & -2.77 (0.60)\\ EAM-A\cite{Ackland1987} & -3.79 (0.16) & -3.50 (0.20) & -3.23 (0.23) & -1.94 (0.12)\\ EAM-Z\cite{Zhou2004} & -3.93 (0.20) & -3.50 (0.20) & -3.06 (0.16) & -1.63 (-0.06)\\ EAM-F\cite{Foiles1986} & -3.93 (0.20) & -3.56 (0.22) & -3.20 (0.22) & -2.40 (0.39)\\ EAM-O\cite{Olsson2010} & -3.81 (0.17) & -3.43 (0.18) & -3.10 (0.18) & -2.41 (0.39)\\ EAM-G\cite{Grochola2005} & -3.92 (0.20) & -3.34 (0.15) & -2.56 (-0.03) & -1.32 (-0.24)\\ DFT-PBE & -3.27 (0.00) & -2.91 (0.00) & -2.63 (0.00) & -1.73 (0.00)\\ \end{tabular} \vspace{1.0em} {\bf (111)-oriented (octahedra), relaxed} \begin{tabular}{ccccc} \hline Potential & $E_b$ & $E_s$ & $E_e$ & $E_v$ \\ \hline \hline LJ\cite{Erkoc1997} & -3.65 (0.12) & -2.42 (-0.18) & -1.76 (-0.34) & -0.88 (-0.62)\\ MORSE\cite{Erkoc1997} & -3.81 (0.17) & -2.58 (-0.13) & -1.93 (-0.28) & -0.89 (-0.61)\\ EMT\cite{Jacobsen1996} & -3.80 (0.16) & -3.57 (0.21) & -3.44 (0.29) & -2.97 (0.30)\\ 
EAM-A\cite{Ackland1987} & -3.79 (0.16) & -3.51 (0.19) & -3.30 (0.24) & -2.79 (0.22)\\ EAM-Z\cite{Zhou2004} & -3.93 (0.20) & -3.52 (0.19) & -3.28 (0.23) & -2.06 (0.10)\\ EAM-F\cite{Foiles1986} & -3.93 (0.20) & -3.58 (0.21) & -3.38 (0.27) & -2.48 (0.08)\\ EAM-O\cite{Olsson2010} & -3.81 (0.17) & -3.44 (0.17) & -3.24 (0.21) & -2.49 (0.09)\\ EAM-G\cite{Grochola2005} & -3.92 (0.20) & -3.39 (0.15) & -3.00 (0.12) & -0.30 (-0.87)\\ DFT-PBE & -3.27 (0.00) & -2.95 (0.00) & -2.67 (0.00) & -2.29 (0.00)\\ \end{tabular} \end{footnotesize} \end{center} \label{tab:ene} \end{table} \subsection{Edge energies of Au} The trends for the edge energy are similar to those observed for the surface energy, and the values of Table \ref{tab:ene} are ranked as (cube, unrelaxed) $>$ (cube, relaxed) $>$ (octahedron, unrelaxed) $>$ (octahedron, relaxed). For the DFT results, atoms at the edges between (111) surfaces have an energy about 10\% lower than that of atoms at edges between (100) surfaces. The edge energy density, $\tau$, is defined as the excess energy over the bulk and surface energies divided by the total length of edges, $L$. For a system that contains edges, Eq. \eqref{eq:gamma} is generalized to \begin{equation} \tau = \frac{E - NE_b-N_s(E_s-E_b)}{L}. \label{eq:tau} \end{equation} In \eqref{eq:tau}, the term $E_s-E_b$ is the energy difference between a surface and a bulk atom and is equal to $\gamma A_{at}$ (see \eqref{eq:gamma}). Using the decompositions for the total energy, \eqref{energy_decomp_equation}, and the total number of atoms, \eqref{number_of_atoms}, we obtain \begin{equation} \tau = \frac{E - NE_b-N_s(E_s-E_b)}{L} \quad \Rightarrow \quad \tau = \frac{E_e-E_b}{D_{at}}. \label{eq:tau2} \end{equation} The second equation above holds for nanowires, where $N_v=0$, or in the limit of large nanoparticles, where $N_v$ is negligible. $D_{at} = L/N_e$ represents the distance between neighboring edge atoms. 
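As a consistency check of the two relations above, the following Python sketch recomputes the DFT surface tensions and edge energy densities from the relaxed DFT-PBE entries of Table \ref{tab:ene}. The lattice constant $a = 4.173$ \AA\ and the fcc per-atom areas $a^2/2$ for (100) and $a^2\sqrt{3}/4$ for (111) are assumptions here, based on standard fcc geometry and the nanowire edge spacings quoted in the text:

```python
import math

EV = 1.602176634e-19   # J per eV
a = 4.173e-10          # lattice constant consistent with the DFT data [m]

# Relaxed DFT-PBE energies from Table tab:ene [eV/atom]
E_b = -3.27
E_s = {"100": -2.82, "111": -2.95}
E_e = {"100": -2.36, "111": -2.67}

# Surface tension, gamma = (E_s - E_b)/A_at; fcc per-atom surface areas
A_at = {"100": a**2 / 2, "111": a**2 * math.sqrt(3) / 4}   # [m^2]
gamma = {k: (E_s[k] - E_b) * EV / A_at[k] for k in ("100", "111")}

# Edge energy density, tau = (E_e - E_b)/D_at, expressed in eV/Angstrom
D_at = {"100": a, "111": a * math.sqrt(2) / 2}             # edge spacing [m]
tau = {k: (E_e[k] - E_b) / (D_at[k] * 1e10) for k in ("100", "111")}

print({k: round(v, 2) for k, v in gamma.items()})  # {'100': 0.83, '111': 0.68} J/m^2
print({k: round(v, 2) for k, v in tau.items()})    # {'100': 0.22, '111': 0.2} eV/A
```

Both sets of numbers reproduce the values quoted in the text (0.83 and 0.68 J/m$^2$; 0.22 and 0.20 eV/\AA).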
For (100)-oriented nanowires $D_{at} = a = 4.173$ \AA\ (the lattice constant consistent with the DFT data), while for (111)/(111) edges it is $D_{at} = a\sqrt{2}/2 = 2.951$ \AA. Using these values and the DFT values for $E_e$ and $E_b$, we can extract the edge energy densities, $\tau$, of Au shown in Table \ref{tab:tau}. As expected, the close-packed (111) surfaces, which have the lower surface energy, form edges that are energetically favoured compared to (100). The relaxed values are $\tau_{(100)/(100)} = 0.22$ eV/\AA\ and $\tau_{(111)/(111)} = 0.20$ eV/\AA. As was the case with the surface tension, $\gamma$, the edge energy density is affected very little by atomic relaxation. The calculated numbers are close to the values reported in other works in the literature. In the work of Lai et al.\ \cite{Lai2020}, the edge energies of the (100) and (111) surfaces of gold were both found to be 0.177 eV/\AA\ using DFT-PBEsol calculations on nanowires and nanoparticles. They also calculated the edge energies of other metals, all of the same order of magnitude. Roling et al.\ \cite{Roling2017} used a bond-strength-based model with DFT-RPBE calculations to obtain the average energy per atom of differently coordinated atoms for various metals. Specifically, for Au they found $E_5$ = 2.05 eV (for five-fold coordinated atoms) and $E_7$ = 2.21 eV (for seven-fold coordinated atoms), which correspond to the edge energy $E_e$ of atoms in cubic and octahedral nanoparticles, respectively. These energies are similar to the ones found in this work in both absolute value and relative magnitude. \begin{table} \caption{Edge energies of the (100)/(100) and (111)/(111) edges of Au, in eV/\AA, as calculated by the machine-learning linear-regression algorithm based on DFT-PBE data. Values for relaxed and unrelaxed structures are given. 
} \begin{center} \begin{tabular}{ccc} \hline Edge type & unrelaxed & relaxed \\ \hline \hline (100)/(100) & 0.23 & 0.22 \\ (111)/(111) & 0.22 & 0.20 \\ \end{tabular} \end{center} \label{tab:tau} \end{table} \section{Conclusions} In this work we decompose the energy of a gold nanoparticle into contributions from the bulk, surfaces, edges and vertices. We find that such a decomposition is accurate for a variety of systems and for many different methods of calculating the total energy. The parameters of the decomposition formula are obtained with machine-learning techniques; once determined, they yield the total energy of a Au nanostructure given the numbers of bulk, surface, edge, and vertex atoms. Our model and method are found to be valid for gold nanostructures with great accuracy, and hold not only for pair potentials, but also for complicated many-body potentials and for DFT. By fitting calculated total energies of Au nanostructures, we obtain values for the cohesive energy, surface tension, edge energy density and vertex energy of Au. These quantities are known to play a pivotal role in determining the shape of a nanoparticle, with the contributions of the edge energy density and vertex energies becoming significant for smaller nanoparticles. The values for the cohesive energy and surface tension presented here agree with other published works. Edge energies lie between 0.20 and 0.23 eV/\AA~depending on the edge type, in order-of-magnitude agreement with some recent works, while vertex energies are of the order of 1 eV. The validity of our edge and vertex energy calculation is further reinforced by the very accurate total-energy predictions that use these quantities. The present method can be easily generalized to other metals and to other shapes or nanostructures, provided more structures are included in the databases that are used as input to the ML code. 
The extension of the present method to surfaces other than the (100) and (111) surfaces used as examples here is straightforward. Modern computational materials science codes allow for the easy generation of databases of structures and first-principles total energies that can be used as input to the ML regression. In a future implementation, adsorption on metal surfaces could also be taken into account. As such, the present method could contribute towards the design of nanoparticles with tailored chemical and physical properties. \section*{Funding} This work was funded by the Hellenic Foundation for Research and Innovation through project MULTIGOLD, grant HFRI-FM17-1303 / KA10480. \section*{Acknowledgments} We acknowledge computational time granted from the National Infrastructures for Research and Technology S.A. (GRNET S.A.) in the National HPC facility, ARIS, under projects pr007027-NANOGOLD and pr009029-NANO-COMPDESIGN. \bibliographystyle{RS}
\section{Introduction} \label{sect:intro} \object{Vela X-1} (4U\,0900$-$40) is an eclipsing high mass X-ray binary (HMXB) consisting of the B0.5Ib supergiant \object{HD\,77581} and a neutron star with an orbital period of 8.964\,days \citep{kerkwijk95a} at a distance of $\sim$2.0\,kpc \citep{nagase89a}. The optical companion has a mass of $\sim$23\,\ensuremath{M_\odot}\xspace and a radius of $\sim$30\,\ensuremath{R_\odot}\xspace \citep{kerkwijk95a}. Due to the small separation of the binary system, with an orbital radius of just $1.7\,{\rm R_\star}$, the massive 1.9\,\ensuremath{M_\odot}\xspace neutron star \citep[\mbox{Vela~X-1}\xspace is the most massive compact object known to be a neutron star;][]{quaintrell03a,barziv01a} is deeply embedded in the dense stellar wind of its optical companion HD\,77581\xspace \citep[$\dot M_\star = 4 \times 10^{-6}$\,\ensuremath{M_\odot}\xspace$\text{yr}^{-1}$;][]{nagase86a}. X-ray lines indicate that this wind is inhomogeneous with many dense clumps \citep{oskinova08a} embedded in a far thinner, highly ionized component \citep{sako99a}. The neutron star rotates with a long spin period of $\sim$283\,s \citep{rappaport75a,mcclintock76a}. Both the spin period and spin period derivative have changed erratically since their first measurements, as expected for a wind-accreting system. The evolution of the spin period is most appropriately described by a random walk model \citep{tsunemi89a,ziolkowski85a}. Although the source exhibits strong pulse-to-pulse variations, a pulse-profile folded over several pulse periods shows remarkable stability \citep[for 10 pulses or more;][]{staubert80a}, even over decades \citep{raubenheimer90a}. At energies below 5\,keV, the pulse-profile consists of a complex five-peaked structure, which transforms at energies above 20\,keV into a simple double-peaked pulse-profile \citep{staubert80a}, where the two peaks are thought to be due to the two accreting magnetic poles of the neutron star. 
With \mbox{$\sim$}$4 \times 10^{36}\,\text{erg\, s}^{-1}$, the X-ray luminosity of \mbox{Vela~X-1}\xspace is typical of a high mass X-ray binary. Observations in the past, however, have shown that the source is variable with observed flux reductions to less than 10\% of its normal value \citep[off states;][]{kreykenbohm99a,kretschmar99a,inoue84a}, while periods of increased activity have also been observed during which the flux increases within an hour to a multiple of the previous value, reaching peak flux levels close to 1\,Crab \citep{kreykenbohm99a,haberl90a,kendziorra89a}. In this respect, \mbox{Vela~X-1}\xspace is similar to sources such as \object{4U\,1700$-$377} and \object{4U\,1907$+$09}, for which low luminosity states and flares have also been observed, as is rather typical for wind-accreting systems \citep[see e.g.][]{fritz06a,vandermeer05a,zand97a,haberl89a}. Although \mbox{Vela~X-1}\xspace is a well studied object, only observations by \textsl{INTEGRAL}\xspace revealed that the flares in \mbox{Vela~X-1}\xspace can be brighter than previously anticipated \citep{staubert04a,krivonos03a}. While the flaring activity is thought to be due to a strongly increased accretion rate, $\dot{M}$, the origin of the $\dot{M}$ variations is unknown. The phase averaged X-ray spectrum of \mbox{Vela~X-1}\xspace was usually modeled with a power law modified at higher energies by an exponential cutoff \citep{tanaka86a,white83a} or with the Negative Positive EXponential \citep[NPEX-model;][]{mihara95a}. The spectrum was found to be modified by strongly orbital-phase-dependent photoelectric absorption at lower energies due to the dense stellar wind and an accretion wake trailing the neutron star \citep{goldstein04a,kreykenbohm99a,feldmeier96a,haberl90a}. At 6.4\,keV, an iron fluorescence line and occasionally an iron edge at 7.27\,keV \citep{nagase86a} were observed in the X-ray spectrum. 
At higher energies, cyclotron resonant scattering features (CRSFs) between 25 and 32\,keV \citep{makishima92a,choi96a,kreykenbohm02b} and at \ca55\,keV \citep{kendziorra92a,orlandini97c,kreykenbohm99a,labarbera03a,attie04a} were present, although the interpretation of the 25\,keV feature is still sometimes debated \citep{orlandini05a}. The remainder of this paper is structured as follows. In Sect.~\ref{sect:data}, the data and software used are described. Section~\ref{sect:analysis} describes first the temporal analysis of the data, i.e. light curves, determination of the pulse period, and quasi-periodic modulations, and then the analysis of the eclipse and the spectral analysis. The results are discussed in Sect.~\ref{sect:discussion} and a summary is presented in Sect.~\ref{sect:summary}. \section{Data} \label{sect:data} \subsection{Instrument and Software} The \textsl{INTEGRAL}\xspace observatory \citep{winkler03b} is in a highly eccentric orbit with a period of $71^{\text{h}}49^{\text{m}}$, ideal for long uninterrupted observations with a low X-ray background. Due to the high eccentricity of the orbit, the perigee passage (when \textsl{INTEGRAL}\xspace is inside the radiation belts) has a duration of only \ca8\,h, which minimizes the time when no science observations are possible due to the high radiation background. \textsl{INTEGRAL}\xspace has four science instruments, which provide coverage from 3\,keV up to 10\,MeV as well as in the optical: the imager IBIS/ISGRI \citep[20\,keV to 800\,keV;][]{ubertini03a} with moderate energy resolution and a large effective area, the spectrometer SPI \citep[20\,keV to 10\,MeV;][]{vedrenne03a} for the analysis of nuclear lines, the X-ray monitor JEM-X \citep[3\,keV to 35\,keV;][]{lund03a}, and the optical monitor OMC \citep{mas-hesse03a}. All high-energy instruments are coded-mask telescopes \citep[see e.g.][for a review of this technique]{zand92a}. 
To improve the imaging quality, the satellite performs raster observations of the vicinity of an X-ray source, retaining the target in the field-of-view of ISGRI. Due to this ``dithering'' strategy the off-axis angle of the target source changes significantly during the observation. When the source is more than \ca4\fdg{5} away from the pointing direction, it is in the partially coded field-of-view of ISGRI. With increasing distance from the pointing direction, the coding factor decreases, causing increased uncertainties in the images, flux values, and spectra. Individual pointings made during these dithering observations are called science windows (SCWs). They have typical durations of 1800\,s, 2200\,s, or 3600\,s. These SCWs are then associated with \textsl{INTEGRAL}\xspace revolutions, i.e. complete orbits of the \textsl{INTEGRAL}\xspace satellite around the Earth. \begin{figure} \centerline{\includegraphics[width=\columnwidth]{9956ima.ps}} \caption{Intensity mosaic image using ISGRI data of the Vela region. All five revolutions from Rev.~137 to Rev.~141 have been used for this mosaic. \mbox{Vela~X-1}\xspace is by far the most significant source. Several more sources are also detected. Note that the times of the eclipses of \mbox{Vela~X-1}\xspace (see Fig.~\ref{fig:lc}) have been excluded from this mosaic, resulting in an exposure of \ca960\,ksec. The noisy rim of the mosaic is due to the low coding factor in the outermost part of the partially coded field-of-view. } \label{fig:image} \end{figure} To prepare the \textsl{INTEGRAL}\xspace data for analysis, we used the Offline Science Analysis Software (OSA), version 7.0, and its associated calibration files. In particular, we make extensive use of the tool \textsl{ii\_light}. 
We carefully checked the behavior of \textsl{ii\_light} (see Appendix~\ref{app_ii}), because the IBIS cookbook\footnote{available at\\ \url{http://isdc.unige.ch/?Support+documents}.} cautions that \textsl{ii\_light} should only be used to analyze the timing behavior within a given science window. For further analysis, we used HEADAS release 6.2. Spectral fitting was done with \textsl{XSPEC} 11.3.2ad \citep{dorman01a,arnaud96a}. \subsection{Data} \label{data} As part of the AO1 core program \citep{winkler01a}, \textsl{INTEGRAL}\xspace observed the Vela region (see Fig.~\ref{fig:image}) continuously for five consecutive \textsl{INTEGRAL}\xspace revolutions from the beginning of revolution~137 (JD\,2452970.86) until the end of revolution~141 (JD\,2452970.86) resulting in approximately 1\,Msec of data (see Fig.~\ref{fig:lc}). The observation was performed in a $5\times5$ dithering pattern with stable pointings $2^\circ$ apart. We chose to use \emph{all} available science windows from Rev.~137 to Rev.~141 to be able to derive a contiguous light curve with as few interruptions as possible (data gaps due to the perigee passage of the satellite are obviously unavoidable; see Fig.~\ref{fig:lc}). Since \mbox{Vela~X-1}\xspace is a bright source, the OSA software has no problem in detecting the source and determining its flux level accurately even when the source is at an off-axis angle of more than $14^\circ$; in any case, fewer than 5\% of the pointings had an off-axis angle larger than $14^\circ$. For studying the timing behavior on timescales short compared with a SCW, i.e. period determination and search for QPOs (see Sect.~\ref{sect:period}), the absolute flux is not important and the temporal properties are unaffected by a non-optimal off-axis flux correction. \begin{figure*}[ht] \centerline{\includegraphics[width=0.965\textwidth]{9956lc.eps}} \vfill \caption{Variability of \mbox{Vela~X-1}\xspace for the complete Vela region observation from Revolution 137 to 141. 
\textbf{a} ISGRI 20--40\,keV light curve (time resolution 1\,SCW, i.e. $\sim$1800\,s) and \textbf{b} 20--30\,keV vs.\ 40--60\,keV hardness ratio as defined by Eq.~\ref{eq:hardness} (rebinned by a factor of $\sim$3 with respect to the light curve). Labels indicate the revolution number. Short vertical lines below the X-ray light curve show \textsl{INTEGRAL}\xspace's perigee passages, during which the instruments are switched off. The long dashed vertical lines show the ingress and egress times. The dotted vertical line indicates the derived eclipse center (see also Table~\ref{tab:ephemeris}), while the dash-dotted line indicates the time of mean longitude $T_{90}$ based on the ephemeris from \citet{nagase89a}. Note that the offset of the newly derived $T_{90}$ (see Table~\ref{tab:ephemeris}) in comparison to that of \citet{nagase89a} is too small to be visible in this figure. See text for further discussion. } \label{fig:lc} \end{figure*} Figure~\ref{fig:image} shows the image of the Vela region from these observations. While \mbox{Vela~X-1}\xspace is by far the brightest source in the field-of-view of ISGRI, we also detect 4U\,0836$-$429 as a very prominent source reaching about a third of the average intensity of \mbox{Vela~X-1}\xspace, and the relatively weak sources \object{H\,0918$-$5459}, the \object{Vela~Pulsar}, and two sources first reported by \textsl{INTEGRAL}\xspace \citep{denhartog04a,sazonov05a}. Since the two brightest sources \mbox{Vela~X-1}\xspace and \object{4U\,0836$-$429} are well separated (about 6\fdg{7}) and all the weaker sources are even more distant, contamination of the spectrum of \mbox{Vela~X-1}\xspace due to the presence of the other sources is of no concern. 
Data from JEM-X and SPI have not been used in this analysis because of the far smaller field-of-view of JEM-X and because \mbox{Vela~X-1}\xspace is off-center in the observed field (\mbox{Vela~X-1}\xspace was within the fully coded field-of-view of JEM-X for fewer than ten of the $\sim$550 individual pointings). The SPI instrument, on the other hand, provides high spectral resolution, but due to its low effective area it is not possible to study data on timescales of seconds as required here. \section{Data analysis} \label{sect:analysis} \subsection{Light curves and flux} \label{sect:lc} \mbox{Vela~X-1}\xspace was found in a strongly variable state during the Nov/Dec 2003 observation by \textsl{INTEGRAL}\xspace. While periods of increased activity were observed before \citep{kreykenbohm99a,haberl94a}, the behavior found in this observation \citep[see also][]{staubert04a} is indeed extreme. Most prominently, on 2003 November 28 (JD\,2452971.67), \textsl{INTEGRAL}\xspace observed the brightest flare ever seen from \mbox{Vela~X-1}\xspace \citep[designated flare~1; see Fig.~\ref{fig:lc} and also][]{krivonos03a}. During the flare, the 20--40\,keV count rate increased from a SCW averaged pre-flare value of \ca55\,\ensuremath{\text{counts}\,\text{s}^{-1}}\xspace ($\sim$300\,mCrab, or $1.6\times 10^{-9}\,\text{erg}\,\text{cm}^{-2}\,\text{s}^{-1}$) by a factor of more than seven to 405\,\ensuremath{\text{counts}\,\text{s}^{-1}}\xspace (2.3\,Crab) within only 90\,minutes -- normal flaring activity reaches peak flux values not higher than 1\,Crab \citep[see e.g.][]{kreykenbohm99a}. \mbox{Vela~X-1}\xspace was therefore more than ten times brighter than on average (see Table~\ref{tab:increase}), which implies that flare~1 is a giant flare. The average flux values for \mbox{Vela~X-1}\xspace in Table~\ref{tab:increase} were derived from a spectrum with 400\,ksec of exposure obtained between revolutions 81 and 89 in June/July 2003, when \mbox{Vela~X-1}\xspace was in a normal state. 
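The count rates and Crab-unit fluxes quoted above imply a rough 20--40\,keV conversion factor, which the following back-of-the-envelope Python sketch makes explicit (the factor is inferred from the numbers in this paragraph, not an instrument calibration):

```python
# 405 counts/s corresponds to 2.3 Crab (see text), so roughly
# 1 Crab ~ 176 ISGRI counts/s in the 20-40 keV band.
CPS_PER_CRAB = 405 / 2.3

for cps in (55, 405, 920):
    print(f"{cps} counts/s ~ {cps / CPS_PER_CRAB:.2f} Crab")
# 55 counts/s -> ~0.31 Crab; 920 counts/s -> ~5.2 Crab
```

The same factor consistently reproduces the other pairs of values quoted in this section (e.g. the \ca920\,counts/s peak of flare~1 corresponding to 5.2\,Crab).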
In the following, we designate a flare with a peak flux of more than 2 Crab as a giant flare, as opposed to normal flares, which do not reach this flux level. These flares reach their peak rapidly, i.e. $T_\text{rise}/T_\text{total} < 0.3$, where $T_\text{rise}$ is the time from the onset of the flare to the peak. While flare~1 was detected in all energy bands, it was most pronounced in the 20--30\,keV band. As stated above, \mbox{Vela~X-1}\xspace was never in the field-of-view of JEM-X during the flare, such that coverage at even lower energies is unavailable. After the peak (duration about half an hour, see Fig.~\ref{fig:flare_lc}), the flare decayed quickly in less than 2\,h to an intensity level of $<$1\,Crab and within \ca11\,h to a SCW averaged count rate of \ca35\,\ensuremath{\text{counts}\,\text{s}^{-1}}\xspace (200\,mCrab), somewhat lower than before the onset of the flare (see Fig.~\ref{fig:lc}). The total energy released between 20\,keV and 40\,keV amounts to $1.15\times10^{41}$\,ergs. We emphasize that these fluxes and those given in Fig.~\ref{fig:lc} and Table~\ref{tab:increase} are SCW averaged fluxes. These values average over fluctuations on shorter timescales such as pulsations, during which the source sometimes reached significantly higher intensities (see below and Fig.~\ref{fig:flare_lc}). \begin{table} \caption{Flux values of \mbox{Vela~X-1}\xspace in science window 013700420010, during which giant flare~1 reached its maximum. The increase indicates the factor by which \mbox{Vela~X-1}\xspace was brighter during the flare than during its normal state. The flux values are averages for the entire science window. 
For the peak flux reached in the pulses, see Table~\ref{tab:flares}.} \label{tab:increase} \begin{tabular}{r@{ \ }c@{ }r@{ \ }rrcc} \hline \hline \multicolumn{3}{c}{energy} & Flux (normal) & Flux (flare) & Flux (flare) & increase \\ \multicolumn{3}{c}{[keV]} & \multicolumn{2}{c}{[ $10^{-10}$ ergs cm$^{-2}$ s$^{-1}$ ]} & [Crab] & \\ \hline 20 &--& 30 & 10.7 & 136.3 & \phantom{$<$}2.8 & 13 \\ 30 &--& 40 & 5.6 & 58.6 & \phantom{$<$}1.9 & 10 \\ 40 &--& 50 & 1.9 & 16.6 & \phantom{$<$}0.7 & \phantom{1}9 \\ 50 &--& 60 & 0.6 & 4.5 & \phantom{$<$}0.3 & \phantom{1}8 \\ 60 &--& 80 & 0.7 & 2.6 & \phantom{$<$}0.1 & \phantom{1}4 \\ 80 &--& 100 & 0.1 & 0.6 & $<$0.1& \phantom{1}6 \\ \hline \end{tabular} \end{table} \begin{figure} \includegraphics[width=\columnwidth]{9956gian.eps} \caption{Close up of the light curve with a time resolution of 20\,s of the peak of giant flare~1 on 2003 November 28. The peak is reached at MJD\,52971.18 with 923\,\ensuremath{\text{counts}\,\text{s}^{-1}}\xspace (corresponding to 5.2\,Crab) in the 20 to 40\,keV band. This value is significantly higher than the count rate averaged over one science window, since the count rate is not constant. Furthermore, the source is strongly pulsating, exhibiting the well known double pulse.} \label{fig:flare_lc} \end{figure} \begin{table} \caption{Overview of the observed flares. See Fig~\ref{fig:lc} for the numbering of the flares. The time is the onset of the flare. To obtain the peak fluxes $F_\text{peak}$, a light curve with a time resolution of 20\,s was used. 
$T_\text{rise}$ is the time from the onset of the flare to the peak, while $T_\text{total}$ is the duration of the flare.} \label{tab:flares} \begin{tabular}{c@{ }c@{ }r@{ }c@{ }cp{0.3\columnwidth}} \hline \hline Flare & Time & Duration & $F_\text{peak}$ & $\frac{T_\text{rise}}{T_\text{total}}$ & Remarks\rule{0pt}{1.1em} \\ & [MJD] & \multicolumn{1}{c}{[s]} & \multicolumn{1}{c}{[Crab]} & & \\ \hline 1 & 52971.15 & 11\,200 & 5.2 & 0.15 & giant flare, spectral softening\\ 2 & 52975.34 & 5\,200 & 2.6 & 0.83 & no spectral change\\ 3 & 52976.50 & 1\,800 & 5.3 & 0.28 & giant flare, very short\\ 4 & 52977.15 & 12\,900 & 1.9 & 0.13 & spectral softening \\ 5 & 52980.31 & 31\,400 & 3.9 & 0.63 & high intensity state, no spectral change\\ \hline \end{tabular} \end{table} In the following days, three more flares (flares~2 to~4, see Fig.~\ref{fig:lc}), as listed in Table~\ref{tab:flares}, were observed. All three flares were shorter and less intense than flare~1 on a science window averaged basis, but still reached SCW averaged intensities close to 1\,Crab. Unfortunately, \textsl{INTEGRAL}\xspace was in engineering mode during flare~3 and the standard OSA pipeline rejects these data; therefore, only count rates could be obtained and no further analysis of this flare is possible. On 2003 December 7 (JD\,2452981.10), another intense flare was observed (designated flare~5, see Fig.~\ref{fig:lc}). Unlike flare~1, during which the brightness of the source increased rapidly, it took $\sim$8\,h for flare~5 to reach its SCW averaged maximum 20--40\,keV flux of $\sim$1.2\,Crab. The decay lasted $\sim$5\,h until \mbox{Vela~X-1}\xspace reached its pre-flare count rate of $\sim$35\,\ensuremath{\text{counts}\,\text{s}^{-1}}\xspace (200\,mCrab in 20--40\,keV). Although quite bright, flare~5 is therefore significantly less intense than giant flare~1, and also far longer, i.e. it is a high intensity state. 
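The flares of Table~\ref{tab:flares} can be sorted automatically with the peak-flux and rise-fraction criteria used in this section. The Python sketch below encodes a giant flare as a peak above 2\,Crab with a rapid rise ($T_\text{rise}/T_\text{total} < 0.3$) and a high intensity state as a slow rise lasting many hours; the 4\,h duration cut is an assumed numerical stand-in for ``many hours'':

```python
# Flare parameters from Table tab:flares:
# flare number -> (duration [s], peak flux [Crab], T_rise/T_total)
flares = {
    1: (11200, 5.2, 0.15),
    2: (5200, 2.6, 0.83),
    3: (1800, 5.3, 0.28),
    4: (12900, 1.9, 0.13),
    5: (31400, 3.9, 0.63),
}

def classify(duration, f_peak, rise_frac):
    """Qualitative flare classification paraphrasing the text's criteria."""
    if f_peak > 2.0 and rise_frac < 0.3:
        return "giant flare"
    if rise_frac >= 0.5 and duration > 4 * 3600:   # slow rise, many hours
        return "high intensity state"
    return "normal flare"

for n, pars in flares.items():
    print(n, classify(*pars))
```

With these cuts, flares 1 and 3 come out as giant flares and flare 5 as a high intensity state, consistent with the remarks in Table~\ref{tab:flares}.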
In the following, we designate flares that have a duration of many hours and exhibit a rather slow increase, i.e. $T_\text{rise}/T_\text{total} \gtrsim 0.5$, as high intensity states, as opposed to flares which exhibit a rapid increase, such as flare~1. With $1.22\times10^{41}$\,ergs, however, the total energy released by flare~5 was very similar to that of flare~1. \begin{figure} \includegraphics[width=\columnwidth]{9956shor.eps} \caption{ISGRI 20--40\,keV light curve of the short flare~3 on 2003 December 3 with a time resolution of 20\,s. The count rate increases from $<$40\,\ensuremath{\text{counts}\,\text{s}^{-1}}\xspace at the onset of the flare to 929\,\ensuremath{\text{counts}\,\text{s}^{-1}}\xspace during the peak (thus exceeding even flare~1) in less than 800\,s, only to decay within 1200\,s to a count rate of $<$60\,\ensuremath{\text{counts}\,\text{s}^{-1}}\xspace again. Since the flare is extremely short, its peak intensity is not evident in a light curve using SCW-long bins as in Fig.~\ref{fig:lc}.} \label{flare2} \end{figure} A more detailed analysis using a light curve with a 20\,s time resolution showed that the source pulsated all the time, including during flares, and exhibited the well known strong pulse-to-pulse variations (Fig.~\ref{fig:flare_lc}). At the maximum, a peak 20--40\,keV count rate of \ca920\,\ensuremath{\text{counts}\,\text{s}^{-1}}\xspace in one 20\,s time bin (5.2\,Crab) was observed for flare~1 and \ca930\,\ensuremath{\text{counts}\,\text{s}^{-1}}\xspace (5.3\,Crab) for flare~3. Flare~3 on December 3 was therefore also a giant flare (see Fig.~\ref{flare2}). However, flare~3 was significantly shorter: the entire flare lasted less than 2000\,s, but it was as bright as flare~1 (see Table~\ref{tab:flares}). \begin{table} \caption{Observed off states during our \mbox{Vela~X-1}\xspace observation. The duration is the time between the onset of the off state (i.e. 
sudden intensity decrease) and the end of the off state (sudden intensity increase; see Fig.~\ref{fig:offstate}). Off state~5 is special as the source stays at a very low luminosity level after the end of the off state; in this case, the duration equals the time when no pulsations are detected (cf.\ Sect.~\ref{disc:offstates}). } \label{tab:offstates} \begin{tabular}{crrr} \hline \hline Off state & \multicolumn{1}{c}{start} & \multicolumn{1}{c}{stop} & \multicolumn{1}{c}{duration} \\ & \multicolumn{1}{c}{[MJD]} & \multicolumn{1}{c}{[MJD]} & \multicolumn{1}{c}{[s]} \\ \hline 1 & 52981.095 & 52981.105 & 860 \\ 2 & 52981.275 & 52981.291 & 1380 \\ 3 & 52981.348 & 52981.365 & 1470 \\ 4 & 52981.373 & 52981.379 & 520 \\ 5 & 52981.422 & 52981.445 & 1980 \\ \hline \end{tabular} \end{table} Extending the analysis to the non-flaring parts of the light curve, we detected a quasi-periodic oscillation (QPO), similar to other accreting X-ray pulsars \citep[e.g. in 4U\,0115+63 or V\,0332+53;][]{heindl99b,mowlavi05a,takeshima94a}. Detecting a QPO, especially if it is transient, can be difficult. Using dynamic power spectra (PSDs), we found a short-lived QPO with a period of a few thousand seconds, but no evidence for the presence of any quasi-periodic behavior below 140\,s (half the pulse period, Sect.~\ref{sect:period}). The short-lived QPO with a period of \ca6820\,s appears to be regular and inconsistent with pure stochastic behavior (see Fig.~\ref{fig:qpo}). Subsequent period searches on the corresponding data subset clearly detect the period. We note that the quasi-periodic modulation shown in Fig.~\ref{fig:qpo} is far stronger and is inconsistent with the NOMEX effect\footnote{For an explanation of the NOMEX support structure and its implications, see e.g. \citet{lubinski04a} and \citet{reglero01a}.}, which can cause intensity variations from SCW to SCW, but not within a given SCW. In this case, the amplitude of the oscillations was large, i.e. 
the mean count rate was \ca45\,cps with an amplitude of 25\,cps. During the bottom part of the oscillation, however, the flux sometimes decreased to approximately zero counts for a short time (see Fig.~\ref{fig:qpo}), i.e. the source turns off (see the following discussion). Although these modulations are evident (see Fig.~\ref{fig:qpo}), the statistics for the entire observation are weak since this event was short-lived. Furthermore, it is well known that pure red noise can also produce quasi-periodic flux variations \citep[for a discussion using \object{Mrk\,766} as an example, see][]{benlloch01a}, therefore these modulations must be treated with care. During and after the part of the light curve in which the quasi-periodic oscillation was present, we observed several off states, during which no significant residual flux was detectable by ISGRI and no modulation due to the otherwise omnipresent pulsations was present (Fig.~\ref{fig:offstate}). The onset of these off states occurred without any identifiable transition phase. The source simply switches off, or more precisely, the luminosity of the source drops below the detection limit of ISGRI. The same is true of the end of the off states, where \mbox{Vela~X-1}\xspace switches on again and immediately resumes its normal (pre-off-state) intensity level and behavior. We identified five such off states during the entire observation, all of which occurred in the 12\,h between MJD\,52981.0 and MJD\,52981.5 (Table~\ref{tab:offstates}). Figure~\ref{fig:offstate} shows off states~3 and~4, which are separated by only 600\,s (about two and a half pulse periods). In between the two off states, \mbox{Vela~X-1}\xspace pulsated normally and at a normal intensity level. Since the two off states occurred in rapid succession, they could also be considered as one longer off state interrupted by two normal pulses.
Off state~5 differed from the other four off states: its onset was also sudden, but after the pulsations were observed again, the flux level remained low for several thousand seconds. Off states~1 to~4 are shown in Fig.~\ref{fig:qpo}, where the flux declines to almost zero for a short time (note, however, that the light curve in Fig.~\ref{fig:qpo} has a different binning). \begin{figure} \includegraphics[width=\columnwidth]{9956qpo.eps} \caption{A closeup of a part of the pulse averaged light curve of \mbox{Vela~X-1}\xspace (i.e. a light curve with a time resolution of 283\,s). A quasi-periodic modulation with a period of \ca6820\,s is evident in this part of the light curve, as indicated by the overplotted sine wave. Note that during the trough between 2\,h and 3\,h, and especially following the quasi-periodic modulation, the count rate decreased several times to zero for a short time, i.e. the source became completely undetectable by ISGRI (see Table~\ref{tab:offstates} and Sect.~\ref{sect:lc} for discussion).} \label{fig:qpo} \end{figure} \begin{figure} \centerline{\includegraphics[width=\columnwidth]{9956off.eps}} \caption{20--40\,keV light curve with 20\,s resolution of off states~3 and~4 of \mbox{Vela~X-1}\xspace, which are most remarkable as the source suddenly becomes undetectable by ISGRI and then turns on again twice within one hour. During the off state no pulsations are discernible, while outside the off state the double pulse-profile of \mbox{Vela~X-1}\xspace is visible. } \label{fig:offstate} \end{figure} \subsection{Pulse Period} \label{sect:period} Due to strong pulse-to-pulse variations, the determination of the period of pulsars with long pulse periods such as \mbox{Vela~X-1}\xspace is rather difficult. We therefore used a two-step approach. First, using epoch folding \citep{leahy83b}, we derived an approximate period of $283.5\pm0.1$\,s for the \mbox{Vela~X-1}\xspace observation.
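The epoch-folding step can be sketched in a few lines of Python. This is an illustrative reconstruction, not the actual analysis code: the 20\,s binning and the \ca283.5\,s period are taken from the text, while the count-rate levels, the noise, the trial-period grid, and the function name are assumptions:

```python
import numpy as np

def folding_chi2(t, rate, period, nbins=16):
    # chi^2 of the folded profile against a constant level (cf. Leahy et al. 1983)
    phase_bin = ((t / period) % 1.0 * nbins).astype(int)
    mean, var = rate.mean(), rate.var()
    chi2 = 0.0
    for k in range(nbins):
        sel = rate[phase_bin == k]
        if sel.size > 0:
            chi2 += sel.size * (sel.mean() - mean) ** 2 / var
    return chi2

# synthetic 20 s binned light curve with a 283.5 s pulsation (illustrative values)
rng = np.random.default_rng(42)
t = np.arange(0.0, 50000.0, 20.0)
rate = 100.0 + 30.0 * np.sin(2.0 * np.pi * t / 283.5) + rng.normal(0.0, 5.0, t.size)

trial_periods = np.arange(280.0, 287.0, 0.05)
chi2 = np.array([folding_chi2(t, rate, p) for p in trial_periods])
best_period = trial_periods[int(np.argmax(chi2))]
```

At each trial period the light curve is folded and the $\chi^2$ of the folded profile against a constant level is computed; the statistic peaks near the true pulse period, here close to the input 283.5\,s.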
We then used this period as a starting point for a pulse-profile template-fitting approach. In this approach we derive sets of pulse-profiles from the beginning to the end of the observation using sufficient data for the long-term pulse-profile to emerge. Since the long-term pulse-profile is known to be extremely stable \citep[see e.g.][]{staubert80a,raubenheimer90a,kreykenbohm02b}, it was then possible to compare these pulse-profiles and determine the shift in seconds between any two profiles. Using these shifts, the starting period, and the number of elapsed pulses between the two profiles, we performed a linear fit to obtain a refined period (and possibly a $\dot P$), iterating with the refined period until no shift was detectable between the first profile at the start of the observation and the last profile at the end of the observation \citep[for a detailed explanation of this method, see][]{fritz06a}. With this technique, we were able to derive a refined averaged pulse period of $283.5320\pm0.0002$\,s. Our analysis, however, showed that the pulse period exhibited fluctuations on timescales shorter than one orbital period with an amplitude of up to 0.003\,s (for a detailed discussion of the evolution of the pulse period, see Staubert et al., in preparation). No pulse period ``glitches'' were observed, which is consistent with the short duration of the flares, during which no significant angular momentum transfer onto the neutron star is expected. \begin{figure} \includegraphics[width=\columnwidth]{9956prof.eps} \caption{Energy resolved pulse-profiles of \mbox{Vela~X-1}\xspace using all data outside eclipse including giant flare~1. The folding pulse period is 283.5320\,s (see Sect.~\ref{sect:period}). The profile has been adjusted such that the pulse minimum is at phase 0.
The pulses are clearly detected even in the highest energy band, up to 100\,keV.} \label{fig:pulseprofiles} \end{figure} The pulse-profile of \mbox{Vela~X-1}\xspace is known for its remarkable energy dependence \citep[see for example][and references therein]{labarbera03a,kreykenbohm02b,kreykenbohm99a,raubenheimer90a} and, despite its complicated structure, also for its long-term stability \citep{raubenheimer90a,staubert80a}. While the complicated profile for energies below 10\,keV consists of up to five peaks, the profile at energies $>$20\,keV consists of two simple, energy-independent peaks. The pulse-profiles shown by \citet{labarbera03a,kretschmar97c}, and \citet{staubert80a}, however, show that the high-energy pulse profile of \mbox{Vela~X-1}\xspace is also quite complex: around 30\,keV, the main peak shows a triangular shape, i.e. a linearly rising flank followed by a sudden and sharp fall, while the secondary peak is sinusoidal \citep[Fig.~3 of][]{kretschmar97c}. Pulse-profiles were obtained for all data including the flares, but without the eclipses (see Fig.~\ref{fig:pulseprofiles}). The \textsl{INTEGRAL}\xspace data confirm these previous results and show that the secondary peak remains sinusoidal from 15\,keV up to 100\,keV, while the main peak evolves from a simple, almost sinusoidal shape around 20\,keV into a ``triangular'' shape with a smooth, linearly rising flank followed by a steeply falling edge at 40\,keV and above. Apart from confirming earlier results, our \textsl{INTEGRAL}\xspace data therefore indicate that the high-energy pulse-profile is stable on timescales of decades and over significant luminosity ranges. We also derived pulse-profiles for flare~1 only. Despite the dramatic intensity increase, however, the number of pulses added was rather small. The signal-to-noise ratio diminished, and significant systematic artefacts became evident in the folded light curve due to the low number of pulses.
In any case, no significant change in the pulse-profiles was evident within the uncertainties. We therefore do not discuss these profiles further here. \subsection{The eclipse}\label{sect:eclipse} During this long observation of \mbox{Vela~X-1}\xspace, two eclipses were observed. In Fig.~\ref{fig:lc}, the eclipses appear to be far longer than the 1.69$\pm$0.06\,d reported by \citet{watson77a}. A detailed analysis of the ``technical'' circumstances, i.e. the \textsl{INTEGRAL}\xspace orbits and the perigee passages of the satellite, revealed that the perigee passages of \textsl{INTEGRAL}\xspace occur directly before both observed eclipses of \mbox{Vela~X-1}\xspace (see Fig.~\ref{fig:lc}). Fortunately, there are a few science windows between the end of the perigee passage (i.e. the start of a new revolution) and the ingress into eclipse, such that it is possible to determine the start and end of the eclipse accurately. Since the optical companion of \mbox{Vela~X-1}\xspace is a massive B0.5Ib supergiant with an extended variable atmosphere \citep{sato86a}, it is difficult to determine the ingress and egress times of \mbox{Vela~X-1}\xspace in soft X-ray observations to high precision due to strong photoelectric absorption \citep[see e.g.][and references therein]{stuhlinger07a}. At higher energies, such as in the 20--40\,keV \textsl{INTEGRAL}\xspace data, photoelectric absorption poses less of a problem, although the determination of the egress or ingress time is only possible with an uncertainty of several hundred seconds (see Fig.~\ref{fig:egress}), due to the changing extension of the atmosphere of the companion star and the general variability of the source. \begin{figure} \centerline{\includegraphics[width=\columnwidth]{9956egrs.eps}} \caption{Close-up of the egress of \mbox{Vela~X-1}\xspace from the first eclipse using a pulse averaged light curve of \mbox{Vela~X-1}\xspace (time resolution of 283.5\,s).
The flux increases exponentially with time until a normal flux level is reached, as indicated by the solid red line. The egress time is determined as the time when the count rate was no longer consistent with zero and is marked by a thick vertical line.} \label{fig:egress} \end{figure} Using a barycentered pulse averaged light curve, we determined the ingress and egress times of both eclipses with an accuracy of about one pulse period (see Fig.~\ref{fig:egress}). Using these times, we derived the duration of the eclipses in the 20--40\,keV energy band to be $1.70\pm0.01$\,d, consistent with the 1.69\,d reported by \citet{watson77a}. We then derived the center times of the two eclipses to be MJD\,52974.223 and MJD\,52983.195, respectively. Using these two times, we derived a new mid-eclipse reference time of MJD\,52974.227$\pm0.007$. Note that the uncertainty in our measurement of the mid-eclipse reference time is far smaller than for earlier measurements \citep{vanderklis84a,watson77a}, since the time resolution is significantly higher. To compare this mid-eclipse time with the $T_{90}$ of the ephemeris given by \citet{bildsten97a}, we need to convert the mid-eclipse time $T_\text{ecl}$ to the time of mean longitude $T_{90}$. Since the eccentricity $e$ of \mbox{Vela~X-1}\xspace is non-zero, this conversion is not straightforward. The $T_{90}$ is slightly offset from the center of the eclipse by $\Delta T = T_\text{ecl} - T_{90}$ (for a purely circular orbit, $T_{90}$ and $T_\text{ecl}$ are identical; see also Fig.~\ref{fig:lc}). This offset is given by \citep{deeter87a}: \begin{equation} \Delta T = -\frac{P_\text{orb}}{\pi} e \cos \omega \left( 1 + \frac{1}{2 \tan^2 i} + \frac{(\sin i - \beta) (1 - \beta \sin i)}{2\beta\sin^2 i} \right), \end{equation} where $\beta$ depends on the semi-major axis and the stellar radius: \begin{equation} \beta = \sqrt{1-\frac{(R/a)^2}{1-e^2}} . \end{equation} For the case of \mbox{Vela~X-1}\xspace, $\beta = 0.8$ \citep{deeter87a}.
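For illustration, this offset can be evaluated numerically. The following Python sketch (not the original analysis code; the variable names are ours) applies the above expression with the orbital elements of Table~\ref{tab:ephemeris} and, since only a lower limit $i > 73^\circ$ is known, evaluates it at both ends of the allowed inclination range:

```python
import numpy as np

# orbital elements (cf. Table ephemeris); only a lower limit i > 73 deg is known
P_orb = 8.964357              # orbital period [d]
e     = 0.0898                # eccentricity
omega = np.radians(152.59)    # longitude of periastron
beta  = 0.8                   # Deeter et al. (1987)

def delta_T(i_deg):
    """Offset Delta T = T_ecl - T_90 [d] for inclination i [deg]."""
    i = np.radians(i_deg)
    bracket = (1.0 + 1.0 / (2.0 * np.tan(i) ** 2)
               + (np.sin(i) - beta) * (1.0 - beta * np.sin(i))
               / (2.0 * beta * np.sin(i) ** 2))
    return -(P_orb / np.pi) * e * np.cos(omega) * bracket

print(delta_T(73.0))   # ~0.24 d
print(delta_T(90.0))   # ~0.23 d
```

The spread over the allowed inclination range is what dominates the uncertainty quoted for the offset; the precise value also depends slightly on which set of orbital elements is adopted.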
Applying the corresponding values from Table~\ref{tab:ephemeris}, we obtain a significant offset of 0.226$\pm$0.005\,d (the uncertainty is due to the uncertainty in the inclination $i$). We therefore calculate MJD\,52974.001$\pm$0.012 to be the new measurement of $T_{90}$ (in good agreement with the $T_{90} = $ MJD\,52974.008$\pm$0.008 obtained from pulse-timing analysis by Staubert et al., in prep.). Comparing this $T_{90}$ with the ephemeris of \citet{bildsten97a}, we find that the time of mean longitude differs by about 0.005\,d. This shift is unsurprising given that the uncertainty in the orbital period is of the order of 0.00004\,d and the uncertainty in $T_{90}$ of \citet{bildsten97a} was 0.0012\,d. Since then, \mbox{Vela~X-1}\xspace has orbited its companion 455 times such that the accumulated uncertainty amounts to a maximum shift of about 0.02\,d, far more than the observed shift of 0.005\,d. We therefore provide an updated $T_{90}$ and orbital period $P_\text{orb}$ for the ephemeris in Table~\ref{tab:ephemeris}, but leave the other parameters of the ephemeris unchanged. \begin{table} \caption{Ephemeris of \citet{kerkwijk95a} and \citet{bildsten97a}, which have been used for obtaining the pulse period and the improved mid-eclipse time $T_\text{ecl}$, the corresponding $T_{90}$, and the improved orbital period $P_\text{orb}$. See text for discussion of the new $T_{90}$. } \label{tab:ephemeris} \begin{tabular}{lr@{\,}ll} \hline \hline $T_{90}$ & MJD\,44278.5466&$\pm0.0037$ & (1) \\ $T_{90}$ & MJD\,48895.2186&$\pm0.0012$ & (2) \\ $T_\text{ecl}$ & MJD\,52974.227&$\pm0.007$ & (3)\\ $T_{90}$ & MJD\,52974.001 &$\pm0.012$ & (3) \\ $P_\text{orb}$ & 8.964416 &$\pm0.000049$\,d & (1) \\ $P_\text{orb}$ & 8.964368 & $\pm0.000040$\,d & (2) \\ $P_\text{orb}$ & 8.964357 & $\pm0.000029$\,d & (3) \\ $a \sin i$ & 113.89 & lt-sec & (2) \\ $i$ & $>73$& $^\circ$ & (1) \\ Ecc.
$e$ & 0.0898 & $\pm0.0012$ & (2) \\ $\omega$ & 152.59 & $\pm0.92^\circ$ & (3) \\ \hline \end{tabular} References. (1)~\cite{kerkwijk95a}; (2) \cite{bildsten97a}; (3) this work \end{table} \subsection{Spectrum} We combined all ISGRI data outside the eclipse to create a spectrum of high signal-to-noise ratio. Since a fully working physical model of the X-ray production mechanism in accreting X-ray pulsars does not exist due to the complexity of the problem \citep[significant progress has, however, been made; see e.g.][]{becker07a}, an empirical model has to be used, which consists of a power law modified by a cutoff at high energies. However, special care has to be taken not to introduce any artificial features by a ``break'' or a sudden onset of the cutoff as in the case of the ``High-energy cutoff'' in \textsl{XSPEC} \citep[see for example ][and references therein]{kreykenbohm99a}. This is particularly true if the source exhibits cyclotron resonant scattering features (CRSFs). We therefore used a power law modified by the Fermi-Dirac cutoff \citep{tanaka86a} to model the spectrum. This model describes the continuum of \mbox{Vela~X-1}\xspace in the ISGRI range, i.e. above 20\,keV, well and was in fact used successfully in the past to fit the spectrum of \mbox{Vela~X-1}\xspace \citep{kreykenbohm99a} and other sources. To model the well-known CRSF in the spectrum of \mbox{Vela~X-1}\xspace at 50\,keV \citep{kendziorra92a}, we used a Gaussian optical depth profile \citep[GABS,][]{coburn02a}. The full model for the ISGRI spectrum is then given by \begin{equation} I_\text{cont}(E) \propto E^{-\Gamma} \times \frac{1}{ \exp\left(\frac{E-E_\text{cut}}{E_\text{fold}}\right)+1} \times \exp\left(-\tau_{\text{GABS}}(E)\right), \end{equation} where $\tau_{\text{GABS}}(E)$ is the Gaussian optical depth profile of the CRSF. A systematic error of 1\% was applied to account for the uncertainties in the response matrix of ISGRI.
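The model can be written down compactly; the following Python sketch evaluates it with the best-fit ``free'' parameters of Table~\ref{tab:spectrum} (the function names and the arbitrary normalisation are ours, and $\tau_\text{C}$ is taken as the optical depth at the line centre):

```python
import numpy as np

# best-fit parameters, "free" column of the spectral table
Gamma, E_cut, E_fold = 1.6, 35.6, 11.2   # photon index, cutoff and folding energy [keV]
E_C, sigma_C, tau_C = 53.4, 7.6, 1.0     # CRSF centroid [keV], width [keV], central depth

def continuum(E):
    # power law times Fermi-Dirac cutoff (unnormalised)
    return E ** -Gamma / (np.exp((E - E_cut) / E_fold) + 1.0)

def tau_gabs(E):
    # Gaussian optical-depth profile of the cyclotron line
    return tau_C * np.exp(-((E - E_C) ** 2) / (2.0 * sigma_C ** 2))

def model(E):
    return continuum(E) * np.exp(-tau_gabs(E))

# fractional flux remaining at the line centre relative to the continuum
depth = model(E_C) / continuum(E_C)   # = exp(-tau_C)
```

At the line centre the continuum is suppressed by a factor $\exp(-\tau_\text{C}) = e^{-1} \approx 0.37$ for the best-fit depth of $\tau_\text{C} = 1.0$.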
\begin{figure} \includegraphics[width=\columnwidth]{9956spec.eps} \caption{Spectrum of \mbox{Vela~X-1}\xspace as obtained by \textsl{INTEGRAL}\xspace-ISGRI \textbf{a} data and folded model. \textbf{b} residuals when using only a power law modified by the Fermi-Dirac cutoff. \textbf{c} residuals for the best-fit model with a cyclotron absorption line at 53.4\err{0.7}{0.6}\,keV. See Table~\ref{tab:spectrum} for all spectral parameters.} \label{fig:spectrum} \end{figure} \begin{table} \caption{Fit results for the overall spectrum excluding the eclipse. When fixing $\Gamma$ to one of the values found in the literature \citep[fixed~1 and fixed~2;][]{kreykenbohm99a,labarbera03a}, \ensuremath{E_{\text{cut}}}\xspace and \ensuremath{E_{\text{F}}}\xspace are better constrained, but the resulting \ensuremath{\chi^2_{{\text{red}}\xspace}}-values are higher. All uncertainties here and elsewhere in the paper are on a 90\% confidence level. The DOF are the degrees of freedom.} \label{tab:spectrum} \begin{tabular}{lrr@{}cr@{}lr@{}l} \hline \hline \multicolumn{2}{c}{Parameter} & \multicolumn{2}{c}{free} & \multicolumn{2}{c}{fixed 1} &\multicolumn{2}{c}{fixed 2} \\ Exposure & [ksec] & \multicolumn{2}{c}{508} & \multicolumn{2}{c}{508} & \multicolumn{2}{c}{508}\\ \hline $\Gamma$ && 1.6 & \err{0.3}{0.6} & 1.8 &\ fix & 2.0 &\ fix\\ E$_\text{cut}$ & [keV] & 35.6 & \err{7.5}{11.5} & 41.3 & \err{1.8}{1.5}& 47.3 & \err{3.5}{0.6}\\ E$_\text{fold}$ & [keV] & 11.2 & \err{0.5}{0.3} & 10.9 & \err{0.4}{0.3} & 10.0 & \err{0.6}{0.6}\\ E$_\text{C} $ & [keV]& 53.4 & \err{0.7}{0.6} & 53.6 & \err{0.6}{0.6} & 53.5 & \err{0.9}{0.3}\\ $\sigma_\text{C}$ & [keV] & 7.6 & \err{0.7}{0.7} & 8.0 & \err{0.6}{0.5} & 8.2 & \err{0.9}{0.2}\\ $\tau_\text{C} $ & & 1.0 & \err{0.1}{0.1} & 1.0 & \err{0.1}{0.1} & 1.1 & \err{0.2}{0.1}\\ \hline \ensuremath{\chi^2_{{\text{red}}\xspace}} (DOF) & & 1.3 &\ (20) & 1.4 &\ (21) & 1.6 &\ (21) \\ \end{tabular} \end{table} The broad band continuum is described accurately by a power 
law with a Fermi-Dirac cutoff, but the spectral parameters were not well constrained (see Table~\ref{tab:spectrum}). This was expected, since no data below 20\,keV were available to enable us to determine the spectral slope, the cutoff energy, and the folding energy simultaneously, especially since the cutoff energy is expected to be around \ca20\,keV, close to the lower end of the energy range. Fixing $\Gamma$ to some (arbitrary) value from the literature allows $E_\text{cut}$ and $E_\text{fold}$ to be constrained more accurately but, depending on the choice of $\Gamma$, also results in a rather poor fit (see Table~\ref{tab:spectrum}). In any case, the parameters of the CRSF do not depend strongly on the choice of $\Gamma$. After fitting the broadband continuum, highly significant features remain, which are due to the cyclotron line at \ca50\,keV (see Fig.~\ref{fig:spectrum}b). After the inclusion of a Gaussian absorption line at 53.4\err{0.7}{0.6}\,keV, the resulting fit is acceptable ($\ensuremath{\chi^2_{{\text{red}}\xspace}} = 1.3$ with 20 degrees of freedom) and no significant features remain (see Fig.~\ref{fig:spectrum} and Table~\ref{tab:spectrum}). Note that the shallow line-like residuals below \ca30\,keV in Fig.~\ref{fig:spectrum}b are \emph{not} due to the disputed CRSF at 25\,keV but merely a consequence of the 50\,keV line. The question of whether the 25\,keV line exists or not cannot be answered by this observation because no data below 20\,keV are available, which would be crucial to determine the continuum and to detect a CRSF at 25\,keV\footnote{Using \textsl{INTEGRAL}\xspace data from a 2\,Msec long observation in \textsl{INTEGRAL}\xspace AO3 including JEM-X data, \citet{schanne07a}, however, confirmed the existence of the 25\,keV CRSF.}. The resulting best fit is shown in Fig.~\ref{fig:spectrum}a and the residuals in Fig.~\ref{fig:spectrum}c.
Note that after the inclusion of the Gaussian absorption line at 53.4\,keV, some residuals remain at 80\,keV (see Fig.~\ref{fig:spectrum}c). In this energy range, some strong instrumental lines are present in the ISGRI background \citep[tungsten and lead,][]{terrier03a} that might be responsible for these residuals. However, it is well known that CRSFs have a complex line shape \citep{schoenherr07a,araya00a,araya99a} and are therefore not well modeled by simple Gaussian or Lorentzian functions \citep[see for example][]{kreykenbohm05a,heindl99b}. The residuals present in Fig.~\ref{fig:spectrum}c could therefore also be due to an insufficient description of the CRSF by a Gaussian absorption line and therefore incorrect modeling of the underlying continuum. To improve this unsatisfactory situation and to derive real physical parameters from the line shapes, efforts are being made to compute line shapes using Monte Carlo simulations \citep{schoenherr07a}. A more detailed analysis of the spectrum using various spectral models, studying the evolution of the spectral parameters with time, and, in particular, using phase resolved spectroscopy, and a comparison with previous work is beyond the scope of this paper and will be discussed in a forthcoming publication. \subsection{Spectral evolution during the flares} \label{Sect:hardness} After studying the averaged spectrum, we now consider the spectral evolution of the source during the flares, especially giant flare~1. Figure~\ref{fig:lc}b shows the hardness ratio over the flare, defined to be \begin{equation}\label{eq:hardness} \text{HR} = \frac{H-S}{H+S} \end{equation} where $H$ is the count rate in the hard band (40--60\,keV) and $S$ the count rate in the soft band (20--30\,keV). The hardness ratio remained constant throughout most of the observation, i.e. no correlation with orbital phase was evident. During the flares, however, the behavior of the hardness ratio was remarkable.
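To give the hardness-ratio scale a concrete anchor, a short sketch (only the definition follows the text; the count rates themselves are purely illustrative):

```python
def hardness_ratio(hard, soft):
    """HR = (H - S) / (H + S); hard = 40-60 keV, soft = 20-30 keV count rate."""
    return (hard - soft) / (hard + soft)

# an HR of about -0.74 corresponds to a hard/soft count-rate ratio of ~0.15,
print(hardness_ratio(15.0, 100.0))   # ~ -0.74
# while an HR of -0.85 corresponds to a hard/soft ratio of only ~0.08
print(hardness_ratio(8.1, 100.0))    # ~ -0.85
```

Inverting the definition, a given HR corresponds to a hard-to-soft count-rate ratio of $H/S = (1+\text{HR})/(1-\text{HR})$, so a softening from $-0.74$ to $-0.85$ roughly halves the relative hard-band contribution.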
Shortly before the onset of giant flare~1 as well as during the flare, a clear deviation in the hardness ratio from its overall average value of $-0.74$ was evident; the hardness ratio declined suddenly to $-0.82$ and later during the flare even to $-0.85$ (see Fig.~\ref{fig:lc}b). A similar behavior was observed for flare~4: with the sudden onset of the flare, the hardness ratio dropped from a pre-flare value of approximately $-0.72$ to $-0.84$, the same level as in giant flare~1, although flare~4 was far shorter and reached only a third of the peak flux of flare~1. This result means that the source became significantly softer during these flares. Strikingly, however, this softening did \emph{not} always occur: during flares~2 and~5, no significant change in the hardness ratio was apparent. In fact, the hardness ratio during these flares remained constant at the average value. This result is remarkable, since with a duration of \ca9\,h, flare~5 lasted far longer than the other flares and its peak flux reached $>$50\% of the peak flux of giant flare~1; it was therefore far longer and brighter than flare~4, but was associated with no spectral softening. Although the source was bright during flares~1 and~5, it was not possible to derive meaningful spectral fits for either flare, since the net exposure time was still short. We refrain from discussing the spectra of the flares in more detail since the spectral parameters could not be reliably constrained. \begin{figure} \includegraphics[width=1.0\columnwidth]{9956hid.eps} \caption{Hardness-intensity diagram of \mbox{Vela~X-1}\xspace; data from the eclipses have been excluded. The time resolution is one science window (typically 1800\,s). The hardness ratio is defined as given in Eq.~\ref{eq:hardness}. The data points from the flares (see Fig.~\ref{fig:lc} and Table~\ref{tab:flares}) are indicated by individual symbols.
Note that flares~1 and~4 both show significant softening, in contrast to flares~2 and~5, which show no softening (see Sect.~\ref{Sect:hardness}). No data are available for flare~3 (see Sect.~\ref{sect:lc}). } \label{fig:hid} \end{figure} ``Hardness intensity diagrams'' (HIDs) are a useful way to study the spectral evolution of a source. A HID for the full observation of \mbox{Vela~X-1}\xspace, excluding the eclipses, is shown in Fig.~\ref{fig:hid}. As expected, the HID of \mbox{Vela~X-1}\xspace does not show the \textsf{q} shape observed for black holes \citep{fender04a}. In \mbox{Vela~X-1}\xspace, most of the data points are clustered around the average values of intensity and hardness ratio. The only exceptions are the data points from giant flare~1 and the other flares, which are above the general cluster of data points due to their -- by definition -- higher intensity. The data points of flares~1 and~4 are shifted due to the spectral softening during the flares, as is already evident from the hardness ratio (see Fig.~\ref{fig:lc}b). Note that the softening did not evolve during the flares; rather, the flares are softer than the average spectrum from the very onset. On the other hand, flares~2 and~5, although of comparable intensity, did \emph{not} show any softening. Instead, the data points of the flares form two distributions (flares~1 and~4 versus flares~2 and~5) that are disjoint (see Fig.~\ref{fig:hid}). On the whole, the HID shows a slight hardening towards lower count rates. Another method of searching for spectral evolution is the color-color diagram (CCD); V\,0332+53, for example, shows an interesting evolution \citep{reig06a}. We derived a CCD for \mbox{Vela~X-1}\xspace but found no interesting behavior.
\section{Discussion} \label{sect:discussion} \mbox{Vela~X-1}\xspace has long been known to have a highly time-variable light curve, with intensity variations ranging from a fraction to a multiple of the average intensity on time scales of days, hours, or even seconds. \subsection{The flares} Although \mbox{Vela~X-1}\xspace has exhibited extensive flaring activity in the past \citep[see][among others]{kreykenbohm99a,haberl94a,lapshov92a,haberl90a,nagase83a}, giant flares (such as flares~1 and~3) had not been seen before. The first question is therefore whether these flares are a rare phenomenon or simply could not be detected until now. The problem is that all these flares are rather short -- even giant flare~1 and long flare~5 have a duration of only a few hours. Unless the source is constantly observed as in our observation, typical sky monitors such as the All Sky Monitor ({ASM}\xspace) onboard \textsl{RXTE}\xspace fail to detect these flares. The \textsl{RXTE}\xspace-{ASM}\xspace monitors the entire observable sky regularly, typically observing \mbox{Vela~X-1}\xspace up to 10 times a day, but often only two or three 90\,s long dwells per day are available\footnote{See the \textsl{RXTE}\xspace-{ASM}\xspace project web page at \url{http://xte.mit.edu/}.}. This monitoring (which is furthermore irregular) does not allow us to detect such short-lived events as these giant flares. As shown by \citet{staubert04a}, the \textsl{RXTE}\xspace-{ASM}\xspace failed to detect the giant flares~1 and~3 when only taking 90\,s dwells into account. During flares~2 and~5, it showed only minor increases, but given the typical uncertainty in the {ASM}\xspace data and the overall scatter, no unusual behavior of \mbox{Vela~X-1}\xspace could be detected. We note that flare~1 was detected when relaxing the selection from only 90\,s ASM dwells to dwells with $>$80\,s exposure; however, these dwells often have larger $\chi^2_{\rm red}$-values than recommended.
Furthermore, detecting short-term variability with the {ASM}\xspace is problematic, since the {ASM}\xspace sometimes also erroneously detects significant flux (even at the level of the flares) while \mbox{Vela~X-1}\xspace is in eclipse. We therefore conclude that monitoring flares on a single dwell level with the {ASM}\xspace is unreliable and that uninterrupted observations are required. This finding also readily explains why these giant flares have to date gone unnoticed, since long uninterrupted observations spanning at least one full binary orbit of \mbox{Vela~X-1}\xspace are only rarely performed. The analysis of the hardness ratio of the flares in Sect.~\ref{Sect:hardness} shows that there seem to be at least two different types of flares (see also Fig.~\ref{fig:hid}): the first type (flares~1 and~4) showed dramatic increases in the count rate and a very sudden onset. These flares showed significant spectral softening during the flare. They could appear at any time, i.e. they were unrelated to the previous evolution; flare~1, for example, is superimposed on a downward trend. Although flare~4 was short, the change in the spectral hardness ratio during the flare was significant (see Fig.~\ref{fig:lc}b). The second type was more similar to a high-intensity state than to an actual flare. Flares of this type are longer than those of the first type. The hardness ratio does not change during these flares, indicating that the spectrum does not change, and the source simply becomes brighter. For the short flare~3, no hardness ratio could be obtained since \textsl{INTEGRAL}\xspace was in engineering mode during that time; but given that it is short and features a very dramatic rise, it is likely that it belongs to the first type.
Although the hardness ratio in Fig.~\ref{fig:lc}b is already conclusive, a hardness ratio comparing the high-energy spectrum (above 20\,keV) with the low-energy spectrum (below 10\,keV) would be even more interesting, since changes due to photoelectric absorption are only visible at low energies. However, no JEM-X data are available. It is therefore impossible to determine how bright the flares were at lower energies with our current data set. Given the flux level of more than 5\,Crab in the peak of flare~1 and the significant softening observed (see Sect.~\ref{Sect:hardness}), we can assume that the source was also extremely bright at lower energies. However, high photoelectric absorption could have damped the brightness again in the classical X-ray band. The evolution of \ensuremath{N_{\text{H}}}\xspace during a flare could possibly help to differentiate between flare types and their underlying mechanisms, but this is not possible with the data at hand. The mechanism behind these different types of flares, however, is not understood. Simulating asymmetric adiabatic accretion flows onto a neutron star in the wind of OB stars, \citet{taam89a} demonstrated that a temporary accretion disk may form in systems such as \mbox{Vela~X-1}\xspace. The formation of this temporary accretion disk is the result of an interaction between the incident flow and shocks in the wake region and is a general property of a binary system consisting of a neutron star and an OB companion. Another consequence is a reversal of the accretion flow. Associated with the flow reversal is the destruction of the accretion disk, resulting in a significantly increased accretion rate. During this short phase, the material stored in the temporary accretion disk is accreted onto the neutron star.
\citet{taam89a} showed that these flow reversals occur on timescales of several hours, and predicted flares that would last from 15 to 60\,minutes, in agreement with the observation of flares~3 and~4 and also with the 1\,h long flare observed by \citet{kreykenbohm99a}. Furthermore, the overall flaring recurrence timescale during our observation agrees with that given by \citet{taam89a}. Later hydrodynamical studies found that wind accretion onto the neutron star is in itself a highly unstable process. The accretion wake following the neutron star contains dense filaments of compressed gas with density variations of a factor of 100 compared with the undisturbed wind. When accreted, these density fluctuations produce abrupt changes in the X-ray luminosity \citep[see e.g.][]{blondin90a}. Even more important for wind accretion, however, is the velocity structure of the wind, since \begin{equation} L_X \propto \frac{\rho}{v^3} \end{equation} where $\rho$ is the density and $v$ the velocity of the wind \citep{bondi44a}. Hydrodynamical simulations have shown that not only the density but also the velocity of the stellar wind changes dramatically with time, including sharp drops and spikes \citep[see e.g.][]{runacres02a,runacres05a}. Furthermore, the shock trailing the neutron star oscillates with brief periods of disk formation, forcing the accretion flow to change its pattern and generating the so-called ``flip-flop instability'' \citep{matsuda91a,matsuda87a}. This instability then produces disk-like rotational inflows that change their direction repeatedly \citep[see][and references therein]{benensohn97a}, and is an intrinsic phenomenon that occurs whenever a gas stream flows past a neutron star or black hole, no density or velocity gradient being necessary \citep{matsuda91a}.
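The sensitivity of wind accretion to such density and velocity fluctuations can be read off directly from the Bondi scaling above; a small illustrative sketch (the ratios are examples, not measured values):

```python
def lum_ratio(rho_ratio, v_ratio):
    """Relative change of L_X for L_X proportional to rho / v^3 (Bondi-Hoyle)."""
    return rho_ratio / v_ratio ** 3

# a factor-100 density enhancement in a wake filament boosts L_X a hundredfold
print(lum_ratio(100.0, 1.0))   # 100.0
# even at constant density, a 30% drop in wind velocity nearly triples L_X
print(lum_ratio(1.0, 0.7))     # ~2.9
```

Because of the $v^{-3}$ dependence, even modest velocity drops rival large density enhancements in their effect on the accretion luminosity.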
The timescale for this flip-flop behavior is calculated to be of the order of 45\,min \citep{benensohn97a}, which would agree with the flare observed by \citet{kreykenbohm99a} and flares~3 and~4 in Fig.~\ref{fig:lc}. On the other hand, the flip-flop behavior explains neither flare~1, which is superimposed on a general downward trend and is far longer (several hours), nor flare~5, which is about 12\,h long, since the calculated accretion rates vary on an even shorter timescale of \ca100\,s or less \citep[see e.g. Fig.~3 of][]{benensohn97a}. While the above scenario explains well the short flares seen during our observation (flares~2, 3, and~4) and the overall rapid variability of \mbox{Vela~X-1}\xspace, the flip-flop instability fails to explain the intense long flares presented in this paper (flares~1 and~5) that last several hours; another mechanism must therefore be at work that can alter $\dot M$ by a factor of up to 10, not only for \ca100\,s, but for many hours. \citet{kaper93a} demonstrated that, due to the clumpiness of the shocked wind, the local density varies by a factor of 100, which can explain the flaring X-ray luminosity \citep[see also][]{oskinova08a}. \citet{leyder07a} argued that dense clumps trapped in an otherwise thin and more homogeneous wind might be responsible for long flares, when the clumps are accreted. Since giant flare~1 lasts several hours, however, it is unlikely that a single blob could feed the accretion for such a long time, given that the 6\,h and 12\,h durations of giant flare~1 and of flare~5 correspond to a considerable fraction of the neutron star orbit. Instead, the clumpy OB star wind \citep{kaper93a,lucy80a} is probably viscously smeared out in a (small) accretion disk of the system. This filled accretion disk can then feed the neutron star with a significantly higher $\dot M$ than usual over several hours.
A change of $\dot P$, however, cannot be observed, since the angular momentum transferred during such a flaring episode is far too small (see also Sect.~\ref{sect:period}). When \mbox{Vela~X-1}\xspace is less active, the OB star wind is probably less structured. In summary, we conclude that the observed long flares are due to a strongly structured OB star wind. Since it is difficult to detect or monitor the flares except during a pointed observation, it is not possible to determine how frequent such events are. Apart from the flares in our observation, reports of flares -- although far less intense -- are common in the literature \citep[][among others]{kreykenbohm99a,haberl94a,haberl90a}. In another long (\ca2\,Msec) observation of the \mbox{Vela~X-1}\xspace region in 2005 November by \textsl{INTEGRAL}\xspace, \mbox{Vela~X-1}\xspace again exhibited very intense flares \citep{kreykenbohm06b,schanne07a}. We therefore conclude that bright flares are quite common in \mbox{Vela~X-1}\xspace. However, we caution that \mbox{Vela~X-1}\xspace does not always show high activity: during a long \textsl{INTEGRAL}\xspace observation in summer 2003, \mbox{Vela~X-1}\xspace was in a quiet phase with little flaring activity detected. \subsection{Short term variability} Although the flip-flop instability discussed in the previous section fails to explain the large flares exhibited by \mbox{Vela~X-1}\xspace, it could explain its short-term variability: \mbox{Vela~X-1}\xspace exhibits strong pulse-to-pulse variations and other short-term fluctuations superimposed on the general variability, while at the same time the long-term pulse-profile remains constant over decades \citep{raubenheimer90a,kreykenbohm02b}. \citet{taam91a} developed several scenarios and the resulting behavior of the mass accretion rate. These authors obtain a time variation in $\dot M$ that exhibits short flare-like events on time scales of \ca100\,s, i.e.
less than one pulse period of \mbox{Vela~X-1}\xspace, for an accretion rate of $0.7\dot{M}_\text{HL}$, where $\dot{M}_\text{HL}$ is the Hoyle-Lyttleton mass capture rate in terms of the Eddington rate \citep{hoyle41a,hoyle39a}, and a wind velocity of about 1000\,km\,s$^{-1}$; such a velocity is typical for O- and B-type stars, and this behavior is precisely that required to explain the observed short-term variability. \citet{watanabe06a} studied the stellar wind of the \mbox{Vela~X-1}\xspace system and were able to explain the observed line intensities by using a wind model developed by \citet{castor75a} with a terminal wind velocity of 1100\,km\,s$^{-1}$, which agrees with the calculations of \citet{taam91a}. Since this instability and the transfer of angular momentum appear to be intrinsic to wind-accretion \citep{matsuda91a}, the accompanying torque reversals can also be expected to exhibit randomly varying short spin-up / spin-down episodes, or a ``flip-flop'' behavior. As \citet{boynton84a} suggested, this instability might therefore be identified with the random-walk behavior of the pulse period observed in many wind-accreting sources, including \mbox{Vela~X-1}\xspace. \subsection{The off states} \label{disc:offstates} In a similar way to the flaring activity, the off states reported by \citet{inoue84a}, \citet{kretschmar99a}, and \citet{kreykenbohm99a}, where the source was not simply weaker but below the detection limit of the respective instruments and no pulsations were observed, are remarkable. After the off state observed by \citet{kreykenbohm99a}, the source resumed its normal, pulsating behavior without any transition phase \citep[see Fig.~4 of][]{kreykenbohm99a}; nor was any unusual behavior of the source observed after the end of the off state. \citet{inoue84a} reported that these off states occur without any prior indication.
In our observation, we observed several short off states (see Table~\ref{tab:offstates}), which also occurred without any prior indication and without a transition phase such as a slow decay. Furthermore, as shown in Fig.~\ref{fig:qpo}, off state~1 occurred during a phase in which the source had an average intensity level of about 250\,mCrab. The count rate decreased dramatically to zero for \ca850\,sec (corresponding to three pulse periods) with no pulsations being visible. \citet{kreykenbohm99a} reported observing \mbox{Vela~X-1}\xspace in an off state of 550\,sec (corresponding to two pulse periods) at the start of an observation, although the total duration of this off state is unknown, and throughout the observation, no pulsations were detected. It is possible that the events observed by \citet{kreykenbohm99a} and in Fig.~\ref{fig:offstate} are similar, i.e. that the former observation started just in the middle of a short off state similar to that in Fig.~\ref{fig:offstate}. The reasons for these off states and the sudden reappearance of pulsations are not understood, but \mbox{Vela~X-1}\xspace appears to experience them on a regular (but non-periodic) basis (see Fig.~\ref{fig:offstate} and Table~\ref{tab:offstates}). Although several explanations of these phenomena have been put forth \citep[including transiting planets,][]{hayakawa84a}, none can explain the observed off states, since all of these ideas (for example, variations in the mass-loss rate of the optical companion) require a significantly longer timescale. ``Blobs'' in the stellar wind \citep{feldmeier97a}, for example, would not only require extremely high optical depths to block the X-rays and gamma-rays completely, but would also need very sharp borders to explain the observed sudden turn-off and turn-on of the source (sometimes within a single time bin of 20\,s, as shown in Fig.~\ref{fig:offstate}).
We therefore consider it highly unlikely that typical clumps in the stellar wind are responsible for the off states shown in Fig.~\ref{fig:offstate}, and other explanations must be considered. The wind of early-type supergiants like HD\,77581\xspace is known to be inhomogeneous and clumpy \citep[see discussion above and both][]{walter07a,blondin90a}. Typical models show that the density in the stellar wind of supergiants can vary by several orders of magnitude \citep{runacres05a}. These density variations should correspond not only to the presence of clumps -- i.e. regions of strongly increased density -- but also to holes -- regions of strongly reduced density. In these holes, the density is lower than the average density of the wind by a factor of $10^3$ \citep{runacres05a}. If the neutron star enters such a hole, $\dot M$ would also decrease by a factor of $\sim10^3$ and the X-ray luminosity would be reduced accordingly. Furthermore, the density fluctuations predicted by these models occur suddenly \citep[see Fig.~1 in][]{runacres05a}, as does the onset of the off states (see Fig.~\ref{fig:lc}). On the other hand, these models predict that the density always varies; it could therefore be expected that the off states should be rather common, which does not seem to be the case. Therefore, some additional mechanism must be at work that is triggered only rarely. Another mechanism that can explain these off states is the propeller effect \citep{illarionov75a}. In short, the propeller effect inhibits the accretion of material onto the compact object when the Alfv{\'e}n radius (where the ram pressure of the infalling gas and the magnetic pressure are equal) is larger than the co-rotation radius \citep{pringle72a, lamb73a}. The Alfv\'en radius, however, is not constant: it depends on the amount of infalling material, $\dot M$.
If $\dot M$ drops, the Alfv\'en radius increases; once it is larger than the co-rotation radius, accretion is no longer possible, the X-ray source essentially switches off, and no pulsations are observable. \citet{cui97b} observed this effect in \object{GX\,1$+$4}: in very low luminosity states, no pulsations were observable, while in high luminosity states, the source was strongly pulsating \citep[however, other explanations for the absence of pulsations in GX\,1$+$4 are also possible; see][and references therein]{ferrigno07a}. The magnetic field strength for which the system enters the propeller regime is then given by \citep{cui97b} \begin{equation} B = C \times \left(\frac{P}{1\,\text{s}}\right)^{7/6} \sqrt{\frac{F_\text{X}}{10^{-9}\,\text{erg cm}^{-2}\text{s}^{-1}}}\ \left(\frac{d}{1\,\text{kpc}}\right) \ \left(\frac{M}{1.4\ensuremath{M_\odot}\xspace}\right)^{1/3} \label{eq:propeller_mag} \end{equation} where the constant $C=4.8\times 10^{10}\,\text{G}$, $P$ is the spin period of the neutron star, $F_\text{X}$ is the bolometric X-ray flux, $d$ is the distance, and $M$ is the mass of the neutron star. Since the strength of the magnetic field of \mbox{Vela~X-1}\xspace is known from the observation of the cyclotron resonant scattering features \citep{kreykenbohm02b}, Eq.~\ref{eq:propeller_mag} can be used to obtain directly the critical flux limit for the onset of the propeller effect: \begin{eqnarray} F_\text{X,Propeller} & = & 4.3 \times 10^{-7} \text{erg cm}^{-2}\text{s}^{-1}\nonumber \\ & & \times \left(\frac{B}{10^{12}\,\text{G}}\right)^2 \left(\frac{P}{1\,\text{s}}\right)^{-7/3} \left(\frac{d}{1\,\text{kpc}}\right)^{-2} \left(\frac{M}{1.4\,\ensuremath{M_\odot}\xspace}\right)^{-2/3} . \label{eq:propeller_flux} \end{eqnarray} Applying the corresponding values for \mbox{Vela~X-1}\xspace, i.e.
\hbox{$B = 2.6\cdot 10^{12}$\,G}, $P= 283.5$\,s, $d=2$\,kpc, and $M$=1.9\,\ensuremath{M_\odot}\xspace (see Sect.~\ref{sect:intro}), we obtain a critical flux of \begin{equation*} F_\text{X,Propeller,\mbox{Vela~X-1}\xspace} \approx 1.1 \times 10^{-12}\,\text{erg cm}^{-2}\text{s}^{-1} . \end{equation*} Compared with the typical bolometric flux of several times $10^{-9}$\,erg cm$^{-2}$s$^{-1}$, this critical flux is lower by about three orders of magnitude; it corresponds to an intrinsic luminosity of $\sim6\times 10^{32}$\,erg s$^{-1}$. We can therefore safely conclude that the propeller effect \emph{alone} cannot force the source to switch off when \mbox{Vela~X-1}\xspace is in normal accretion mode. A possible scenario would, however, be to combine both -- individually unsuccessful -- mechanisms: the neutron star was evidently in a region of the stellar wind with strongly variable density, as demonstrated by the significant overall variability of the source and the presence of many flares. The models for the stellar wind predict density variations of up to a factor of $10^{3-5}$ \citep{walter07a}. If the neutron star enters a region of very low density, its luminosity will drop accordingly, because the X-ray luminosity depends linearly on $\dot M$, producing a luminosity below $\sim 10^{33}$\,erg s$^{-1}$. As shown above, at such low luminosity levels \mbox{Vela~X-1}\xspace enters the propeller regime. This explains why no residual pulsations are observed during the off states. Off state~5, however, exhibits a different behavior: it is considerably longer than the other off states and, like the dip observed by \citet{kretschmar99a}, does not show a sudden end.
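The numerical step from Eq.~(\ref{eq:propeller_flux}) to the critical flux and luminosity quoted above can be reproduced as follows (a minimal sketch; the formula and parameter values are those given in the text, and the kpc-to-cm conversion is standard):

```python
import math

KPC_CM = 3.0857e21  # centimeters per kiloparsec

def propeller_flux(B_G, P_s, d_kpc, M_msun):
    """Critical X-ray flux (erg cm^-2 s^-1) below which the propeller
    effect sets in, following Eq. (propeller_flux) in the text."""
    return (4.3e-7 * (B_G / 1e12)**2 * P_s**(-7.0 / 3.0)
            * d_kpc**(-2.0) * (M_msun / 1.4)**(-2.0 / 3.0))

# Vela X-1 parameters quoted in the text:
F = propeller_flux(B_G=2.6e12, P_s=283.5, d_kpc=2.0, M_msun=1.9)
L = 4.0 * math.pi * (2.0 * KPC_CM)**2 * F  # isotropic luminosity at 2 kpc
print(f"F_crit = {F:.2e} erg cm^-2 s^-1")  # ~1.1e-12
print(f"L_crit = {L:.2e} erg s^-1")        # a few times 1e32
```

This confirms the factor of about three orders of magnitude between the critical flux and the typical bolometric flux of several times $10^{-9}$\,erg cm$^{-2}$s$^{-1}$.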
Unlike the off states shown in Fig.~\ref{fig:offstate}, these dips do \emph{not} show a very sudden onset or end; instead their onset (and end) is rather smooth, and pulsations could also be observed for some time after the onset of the dip \citep[see Fig.~2 in][]{kretschmar99a}. These authors also observed a dramatic increase in the photoelectric absorption (more than $10^{24}$\,cm$^{-2}$, Compton thick). These dips therefore belong to a second class of dips, which are caused by significantly increased photoelectric absorption \citep{charles78a} due to an optically thick cloud in the wind of the stellar companion passing through the line-of-sight. Such clouds are known to be quite common in supergiant systems \citep[see for example][]{nagase86a} and also in \mbox{Vela~X-1}\xspace \citep{watanabe06a}. The existence of these clouds (also referred to as blobs or clumps) in the wind of an OB stellar companion was proposed by \citet{lucy80a}, who suggested that the winds of OB stars themselves are not homogeneous, but break into a population of blobs that are radiatively driven through an ambient gas. This model was later modified to include radiatively driven shocks in the stellar wind. Since then, models of OB star winds have usually included blobs and shocks \citep[see e.g.][and references therein]{feldmeier97a}. During the dip observed by \citet{kretschmar99a}, the column density, \ensuremath{N_{\text{H}}}\xspace, was of the order of $10^{24}\,\text{cm}^{-2}$, similar to that measured by, for example, \citet{leyder07a} for the density of clumps in the wind of \object{HD 74194}/\object{IGR\,J08408$-$4503}. We therefore conclude that off states, characterized by a sudden onset and end, are usually short, and could be caused by a sudden drop in $\dot M$ that allows \mbox{Vela~X-1}\xspace to enter the propeller regime.
Intensity dips, however, are significantly longer, show a smooth transition, and spectra taken during the dip provide measurements of photoelectric absorption of more than $10^{24}$\,cm$^{-2}$. These dips are readily explained by a dense blob in the wind passing through the line of sight. \subsection{Connection with SFXTs} The similarity between the flares and off states in \mbox{Vela~X-1}\xspace and the behavior of Supergiant Fast X-ray Transients \citep[SFXTs,][]{sguera05a} is intriguing. SFXTs are high mass X-ray binaries that show very brief outbursts on timescales of hours or even only tens of minutes, and then remain undetectable for months between outbursts \citep{negueruela08a}. The giant flares of \mbox{Vela~X-1}\xspace reported in this work are similar to these outbursts, which are assumed to be due to the accretion of a dense blob of material embedded in a thin stellar wind \citep{walter07a}: this would then imply that $L_\text{X}$ is a direct tracer of the density of such blobs in the stellar wind. The same holds true for \mbox{Vela~X-1}\xspace, for which giant flares such as flare~1 are also probably due to the accretion of a dense blob in the stellar wind, as discussed above; \citet{ferrigno08a} reached a similar conclusion in explaining the flares observed in \object{1E\,1145.1$-$6141}. However, we also conclude that short flares and the general variability can be explained well by the flip-flop instability, and no additional clouds in the stellar wind are necessary. \citet{grebenev07a} considered why SFXTs are unobservable when they are not in outburst. According to these authors, SFXTs should be bright persistent objects, since the neutron star is deeply embedded in the stellar wind. Similar to our explanation of the off states of \mbox{Vela~X-1}\xspace, \citet{grebenev07a} invoke the propeller effect to explain the absence of detectable X-rays from the SFXTs when the sources are in quiescence.
Unlike \mbox{Vela~X-1}\xspace, however, SFXTs are usually in the off state, while \mbox{Vela~X-1}\xspace is usually in a normal accretion mode. The reason why SFXTs are switched off by default could be either a significantly thinner stellar wind (resulting in a lower $\dot M$) or a significantly stronger magnetic field, which would inhibit accretion even at the densities typically encountered in the stellar wind. We therefore conclude that SFXTs and \mbox{Vela~X-1}\xspace are very similar systems, except that SFXTs are normally in the propeller regime, while \mbox{Vela~X-1}\xspace is normally in the accreting regime. Both, however, can switch sides, i.e. SFXTs can go into outburst, while \mbox{Vela~X-1}\xspace can enter the propeller regime. \subsection{QPOs} Several accreting X-ray pulsars show one or more quasi-periodic oscillations (QPOs) in addition to the normal pulse period \citep{shirakawa02a}. Accreting X-ray pulsars that exhibit long period QPOs include \object{4U\,0115$+$63} \citep[$P_\text{QPO}\sim500$\,s,][]{heindl99b} and \object{V\,0332$+$53} \citep[$P_\text{QPO}\sim20$\,s,][]{mowlavi05a,takeshima94a} among others. The reason for the existence of QPOs is not always clear, although the 0.05\,Hz QPO in V\,0332$+$53 is thought to be due to inhomogeneities in the accretion disk \citep{mowlavi05a}, while a second QPO at 0.22\,Hz \citep{qu05a} is almost (but not entirely) coincident with the pulse period of the system. It can therefore be assumed to originate in the X-ray production region, and may also be linked to the strong pulse-to-pulse variations in that system. In \object{4U\,1907$+$09}, another wind-accreting system, a transient QPO was observed with a period of \ca18\,s \citep{zand98a}. Due to the close similarity of this system to \mbox{Vela~X-1}\xspace, we could expect to observe a QPO with a period between 10\,s and 40\,s. As discussed above, no evidence for a QPO with a period below 140\,s was found.
However, a QPO with a long periodicity in the range of several thousand seconds was observed instead. The quasi-periodic behavior observed in \mbox{Vela~X-1}\xspace (see Fig.~\ref{fig:qpo}) is intriguing, albeit quite short-lived. Such a periodicity -- if real -- in the luminosity of the source would indicate that the accretion rate onto the neutron star also varies periodically, which implies the presence of a quasi-periodic structure in the stellar wind of the companion star, either in density or in velocity or both. Such periodic structures in the wind are certainly not due to the flip-flop instability discussed above, because the instabilities produce a chaotic, non-periodic modulation of the accretion rate \citep[see e.g. Figs.~4 and~6 of][]{taam91a}. These periodic structures could instead be driven by instabilities in the atmospheres of very luminous stars, which are likely to reach distances of a few stellar radii. Various mechanisms for the production of these instabilities are discussed in the literature, among them radiation-pressure-driven hydrodynamical instabilities \citep{shaviv01a,shaviv01b}, rotational instabilities \citep{fullerton97a}, non-radial pulsations \citep{owocki02a}, and surface magnetic fields \citep{ud-doula02a}. One of the phenomena predicted by some models \citep[e.g.][]{uddoula02b} is ``ray''-like structures in the stellar wind due to the magnetic field of the OB star. It can be imagined that when the neutron star passes through these more or less periodic ray structures in the wind, the varying mass accretion rate $\dot M$ could lead to the quasi-periodic behavior shown in Fig.~\ref{fig:qpo}. When the neutron star enters a region where the stellar wind has a significantly lower density, the accretion rate $\dot M$ could drop to a level at which \mbox{Vela~X-1}\xspace enters the propeller regime (see Sect.~\ref{disc:offstates}), as occurred in off state~1 (see Fig.~\ref{fig:qpo}).
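A period search of the kind that revealed the \ca6820\,s modulation can be sketched with a toy periodogram on a synthetic light curve (all numbers below -- binning, amplitude, trial grid -- are illustrative assumptions, not our actual data):

```python
import math

def periodogram_peak(times, rates, trial_periods):
    """Return the trial period maximizing the classical periodogram
    power |sum_i (r_i - <r>) * exp(-2*pi*1j*t_i/P)|^2, a crude
    stand-in for the period searches applied to real light curves."""
    mean = sum(rates) / len(rates)
    best_period, best_power = None, -1.0
    for period in trial_periods:
        re = sum((r - mean) * math.cos(2 * math.pi * t / period)
                 for t, r in zip(times, rates))
        im = sum((r - mean) * math.sin(2 * math.pi * t / period)
                 for t, r in zip(times, rates))
        power = re * re + im * im
        if power > best_power:
            best_period, best_power = period, power
    return best_period

# Synthetic 20 s binned light curve (~80 ks) with a 6820 s modulation.
times = [20.0 * i for i in range(4000)]
rates = [100.0 + 30.0 * math.sin(2 * math.pi * t / 6820.0) for t in times]
trials = [5000.0 + 50.0 * k for k in range(80)]  # 5000-8950 s grid
print(periodogram_peak(times, rates, trials))    # recovers ~6800 s
```

On real data one would of course use a proper Lomb-Scargle or epoch-folding analysis with significance estimates; the sketch only illustrates how a several-thousand-second modulation stands out against the trial grid.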
This scenario is interesting because it helps to understand the other off states in the following part of the light curve (see Fig.~\ref{fig:qpo}), and to explain the density variations discussed in Sect.~\ref{disc:offstates}. The quasi-periodic modulations in the source intensity could therefore easily be identified with similar structures in the wind. It should then be possible to draw conclusions from the period and the observed intensity variations about the applicable models, parameters, and underlying physics. However, these models, as currently presented in the literature, are general, and applicable numbers cannot readily be ascertained. In principle, however, as soon as such numbers become available, the systematic and long-term probing of the stellar wind structure by the neutron star in \mbox{Vela~X-1}\xspace could provide physical constraints on the various wind models. \section{Summary} \label{sect:summary} We have presented the analysis of a long, continuous \textsl{INTEGRAL}\xspace observation of \mbox{Vela~X-1}\xspace spanning about 1.7 binary orbits. In detail, our results are: \begin{itemize} \item \mbox{Vela~X-1}\xspace was found to be in a highly active state; \item several flares were recorded, two of them giant flares with a peak brightness of more than 5\,Crab; \item a transient QPO with a period of \ca6820\,s was observed, which we attribute to inhomogeneities in the stellar wind; \item several off states were identified during which the source became undetectable by \textsl{INTEGRAL}\xspace and no pulsations were visible.
Both the onset and the end of the off states were very sudden; \item the pulse period was determined to be $P=283.5320\pm0.0004$\,s with no evidence for a spin-up or spin-down during the entire observation; \item non-sinusoidal high-energy pulse-profiles were obtained up to 100\,keV; \item the spectrum exhibited the well-known CRSF at 53.4\,keV, but due to a lack of low-energy data, the elusive line at \ca25\,keV could not be observed; \item \mbox{Vela~X-1}\xspace exhibited two types of flares: rapidly rising flares, which were correlated with a spectral change, and high intensity states, which were longer and during which the spectrum remained unchanged; \item the short flares can be explained well by the flip-flop instability: the predicted timescales of 15 to 60\,minutes are in perfect agreement with the observed duration of the short flares; \item two different types of off states exist, which are probably caused either by a dense blob blocking the line-of-sight or by the onset of the propeller effect due to a drop in $\dot M$; \item the similarity between \mbox{Vela~X-1}\xspace and SFXTs is striking: giant flares (in SFXTs: outbursts) are probably caused by the accretion of a dense blob from the stellar wind, while off states (in SFXTs: quiescence) are likely to be caused by the onset of the propeller effect. \end{itemize} \acknowledgements{We acknowledge financial support from DLR grants 50OG9601 and 50OG0501, NASA grant NNG05GK55G, and a travel grant from the Deutscher Akademischer Austauschdienst. IK acknowledges the hospitality of the Universities of Alicante, Warwick, and Erlangen-N\"urnberg, and the University of California at San Diego. We thank the members of the pulsar team supported by the International Space Science Institute (ISSI) in Berne, Switzerland, for discussions which greatly helped shape the ideas presented in this paper, and ISSI itself for its hospitality.
JMT acknowledges the support of the Spanish Ministerio de Educaci\'on y Ciencia (MEC) under grant PR2007-0176. This work is based on observations with \textsl{INTEGRAL}\xspace, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Switzerland, Spain), Czech Republic and Poland, and with the participation of Russia and the USA. } \bibliographystyle{aa}
\section{Introduction} Recently, G.~Tabuada proposed a series of noncommutative counterparts of celebrated conjectures, for example, the Grothendieck standard conjectures of type $\mathsf{C}$ and type $\mathsf{D}$, the Voevodsky nilpotence conjecture, the Tate conjecture, the Weil conjecture, and so on. After proposing the noncommutative counterparts, he proved additivity with respect to $\mathsf{SOD}$s (semi-orthogonal decompositions; see the notation in Section \ref{notation}) for most of these conjectures. He was then able to give new evidence for the conjectures using a good knowledge of the semi-orthogonal decompositions of derived categories of varieties. For the details, the reader can refer to ``Noncommutative counterparts of celebrated conjectures'' \cite{tabuada2019noncommutative}. \par In this paper, the author provides a version of the rational Hodge conjecture for small $\3\dg$ categories. This new conjecture is equivalent to the classical Hodge conjecture when the $\3\dg$ category is $\mathsf{Per}_{\3\dg}(\mathsf{X})$, where $\mathsf{X}$ is a smooth projective variety. It is equivalent to the version of the Hodge conjecture in \cite{perry2020integral} for admissible subcategories of $\b\mathsf{X}$. \par For $\operatorname{\mathsf{Per}}_{\3\dg}(\mathsf{X})$, we have $\mathsf{HH}_{0}(\mathsf{Per}_{\3\dg}(\mathsf{X}))\cong \oplus \mathsf{H}^{\mathsf{p},\mathsf{p}}(\mathsf{X},\mathbb{C})$ by the $\mathsf{HKR}$ isomorphism. In order to generalize the Hodge conjecture, we need to find natural intrinsic rational Hodge classes in $\mathsf{HH}_{0}(\mathcal{A})$ which, most importantly, become the usual rational Hodge classes when $\mathcal{A}=\mathsf{Per}_{\3\dg}(\mathsf{X})$. Classically, it is well known that the image of the rational topological $\mathsf{K}$-groups under the topological Chern character recovers the rational Betti cohomology.
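For reference, the HKR identification invoked above reads, in each homological degree, as follows (a standard statement, written here with the grading convention that matches the degree-zero case used in the text):

```latex
% HKR identification for X smooth projective:
\mathsf{HH}_{\mathsf{n}}(\mathsf{Per}_{\3\dg}(\mathsf{X}))
  \;\cong\; \bigoplus_{\mathsf{q}-\mathsf{p}=\mathsf{n}}
    \mathsf{H}^{\mathsf{p}}(\mathsf{X},\Omega^{\mathsf{q}}_{\mathsf{X}}),
\qquad\text{so}\qquad
\mathsf{HH}_{0}(\mathsf{Per}_{\3\dg}(\mathsf{X}))
  \;\cong\; \bigoplus_{\mathsf{p}}\mathsf{H}^{\mathsf{p}}(\mathsf{X},\Omega^{\mathsf{p}}_{\mathsf{X}})
  \;\cong\; \bigoplus_{\mathsf{p}}\mathsf{H}^{\mathsf{p},\mathsf{p}}(\mathsf{X},\mathbb{C}).
```

The last isomorphism is the Dolbeault identification $\mathsf{H}^{\mathsf{p}}(\mathsf{X},\Omega^{\mathsf{p}}_{\mathsf{X}})\cong \mathsf{H}^{\mathsf{p},\mathsf{p}}(\mathsf{X},\mathbb{C})$.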
Topological $\mathsf{K}$-theory was generalized to noncommutative spaces by A.~Blanc \cite{blanc_2016}; it turns out that the image of the rational topological $\mathsf{K}$-group $\mathsf{K}_{0}^{\mathsf{top}}(\mathcal{A})_{\mathbb{Q}}$ under the topological Chern character becomes the even rational Betti cohomology when $\mathcal{A}=\mathsf{Per}_{\3\dg}(\mathsf{X})$. \par There is a functorial commutative diagram$\colon$ $$\xymatrix{&&\mathsf{HH}_{0}(\mathcal{A})\\ \mathsf{K}_{0}(\mathcal{A})\ar[rru]^{\mathsf{Ch}}\ar[d]\ar[r]_{\mathsf{Ch}}&\mathsf{HN}_{0}(\mathcal{A})\ar[d]^{j}\ar[ru]_{\pi}&\\ \mathsf{K}^{\mathsf{top}}_{0}(\mathcal{A})\ar[r]^{\mathsf{Ch}^{\mathsf{top}}}&\mathsf{HC}^{\mathsf{per}}_{0}(\mathcal{A})&}$$ \begin{defn} Let $\mathcal{A}$ be a small $\3\dg$ category. The Hodge classes of $\mathcal{A}$ are defined as $$\mathsf{Hodge}(\mathcal{A}):=\pi(\mathsf{j}^{-1}(\mathsf{Ch}^{\mathsf{top}}(\mathsf{K}_{0}^{\mathsf{top}}(\mathcal{A})_{\mathbb{Q}})))\subset \mathsf{HH}_{0}(\mathcal{A}).$$ \end{defn} Clearly, the Chern character $\mathsf{Ch}\colon \mathsf{K}_{0}(\mathcal{A})\rightarrow \mathsf{HH}_{0}(\mathcal{A})$ maps $\mathsf{K}_{0}(\mathcal{A})$ to $\mathsf{Hodge}(\mathcal{A})$. We define the noncommutative Hodge conjecture for any $\3\dg$ category as follows. \begin{conj} (Noncommutative Hodge conjecture) The Chern character $\mathsf{Ch}\colon \mathsf{K}_{0}(\mathcal{A})\rightarrow \mathsf{HH}_{0}(\mathcal{A})$ maps $\mathsf{K}_{0}(\mathcal{A})_{\mathbb{Q}}$ surjectively onto the Hodge classes $\mathsf{Hodge}(\mathcal{A})$. \end{conj} For smooth proper $\3\dg$ categories, we propose an equivalent version of the rational Hodge conjecture; for the reason that they are equivalent, see Remark \ref{remconj}.
We write $\mathsf{H}$ for the isomorphism $\mathsf{HC}^{\mathsf{per}}_{0}(\mathcal{A})\cong^{\mathsf{H}} \oplus \mathsf{HH}_{2\mathsf{n}}(\mathcal{A})$, which is the Hodge decomposition given by the degeneration of the noncommutative Hodge-to-de Rham spectral sequence \cite{kaledin2016spectral}. Note that we choose a splitting. Define the rational classes in $\mathsf{HC}^{\mathsf{per}}_{0}(\mathcal{A})$ as $\mathsf{Ch}^{\mathsf{top}}(\mathsf{K}^{\mathsf{top}}_{0}(\mathcal{A})_{\mathbb{Q}})\cap \mathsf{j} (\mathsf{HN}_{0}(\mathcal{A}))$. Then we define the Hodge classes in $\mathsf{HH}_{0}(\mathcal{A})$ as $$\mathsf{Hodge}(\mathcal{A})=\mathsf{Pr}\circ\mathsf{H}(\mathsf{Ch}^{\mathsf{top}}(\mathsf{K}^{\mathsf{top}}_{0}(\mathcal{A})_{\mathbb{Q}})\cap \mathsf{j} (\mathsf{HN}_{0}(\mathcal{A}))).$$ Here the map $\mathsf{Pr}$ is the projection from $\oplus\mathsf{HH}_{2\mathsf{n}}(\mathcal{A})$ to $\mathsf{HH}_{0}(\mathcal{A})$. Clearly, the natural Chern character maps $\mathsf{K}_{0}(\mathcal{A})_{\mathbb{Q}}$ to $\mathsf{Hodge}(\mathcal{A})$. \begin{defn}\label{smpconjA}(= Definition \ref{smpconj}) Hodge conjecture for smooth proper $\3\dg$ categories: the Chern character $\mathsf{Ch}\colon \mathsf{K}_{0}(\mathcal{A})\rightarrow \mathsf{HH}_{0}(\mathcal{A})$ maps $\mathsf{K}_{0}(\mathcal{A})_{\mathbb{Q}}$ surjectively onto the Hodge classes $\mathsf{Hodge}(\mathcal{A})$. \end{defn} We prove that the noncommutative Hodge conjecture is equivalent to the classical Hodge conjecture when the $\3\dg$ category is $\mathsf{Per}_{\3\dg}(\mathsf{X})$. This version of the Hodge conjecture is equivalent to the one in \cite{perry2020integral} for admissible subcategories of $\b\mathsf{X}$, see Theorem \ref{admissible}. \begin{thm}(=Theorem \ref{NHodge}). Let $\mathsf{X}$ be a smooth projective variety.
$$\text{\it Hodge conjecture for}\ \mathsf{X}\ \Leftrightarrow\ \text{\it Noncommutative Hodge conjecture for}\ \mathsf{Per}_{\3\dg}(\mathsf{X}).$$ \end{thm} The author also proves that the Hodge conjecture is additive for geometric semi-orthogonal decompositions by an independent method. \begin{thm}(=Theorem \ref{SODHodge}). Suppose we have a nontrivial semi-orthogonal decomposition of the derived category, $\b\mathsf{X}=\langle \mathcal{A},\mathcal{B} \rangle$, such that $\mathcal{A}$ and $\mathcal{B}$ are geometric, that is, $\mathcal{B}\cong \1\b\mathsf{Y}$ and $\mathcal{A}\cong \2\b\mathsf{Z}$ for some varieties $\mathsf{Y}$ and $\mathsf{Z}$. Then, the Hodge conjecture is true for $\mathsf{X}$ if and only if it is true for $\mathsf{Y}$ and $\mathsf{Z}$. \end{thm} \begin{rem} We use this to prove that the commutative Hodge conjecture is a birational invariant for $4$- and $5$-dimensional varieties, see Theorem \ref{4fouldRinvariant}, which may be classically known to experts; see also \cite{meng2019hodge}. \end{rem} After establishing the language of the noncommutative Hodge conjecture, the author proves that the conjecture is additive for general $\mathsf{SOD}$s and for noncommutative motives. \begin{thm}\label{sod}(=Theorem \ref{SODHodge}). Suppose we have a $\mathsf{SOD}$, $\b\mathsf{X}=\langle \mathcal{A},\mathcal{B} \rangle$. There are natural $\3\dg$ liftings $\mathcal{A}_{\3\dg}$, $\mathcal{B}_{\3\dg}$ of $\mathcal{A}$, $\mathcal{B}$ corresponding to the $\3\dg$ enhancement $\mathsf{Per}_{\3\dg}(\mathsf{X})$ of $\b\mathsf{X}$. $$\text{\it Hodge conjecture for}\ \mathsf{X}\ \Leftrightarrow \text{\it Noncommutative Hodge conjecture for}\ \mathcal{A}_{\3\dg}\ \text{\it and}\ \mathcal{B}_{\3\dg}.$$ \end{thm} \begin{thm}\label{Nmotive}(=Theorem \ref{SODNMotive}) Let $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ be smooth and proper $\3\dg$ categories.
Suppose there is a direct sum decomposition$\colon$ $\mathcal{U}(\mathcal{C})_{\mathbb{Q}}\cong \mathcal{U}(\mathcal{A})_{\mathbb{Q}}\oplus \mathcal{U}(\mathcal{B})_{\mathbb{Q}}$; see Section \ref{section4.2} for the definition of $\mathcal{U}(\bullet)$ and $\mathcal{U}(\bullet)_{\mathbb{Q}}$. We have the following. $$\text{\it Noncommutative Hodge conjecture for}\ \mathcal{C} \Leftrightarrow \text{\it Noncommutative Hodge conjecture for}\ \mathcal{A}\ \text{\it and}\ \mathcal{B}.$$ \end{thm} Let $\mathcal{A}$ be a sheaf of Azumaya algebras on $\mathsf{X}$. Using the work of G.~Tabuada and Michel Van~den Bergh on Azumaya algebras \cite[Theorem 2.1]{tabuadavandenbergh2015}, we have $\mathcal{U}(\mathsf{Per}_{\3\dg}(\mathsf{X},\mathcal{A}))_{\mathbb{Q}}\cong \mathcal{U}(\mathsf{Per}_{\3\dg}(\mathsf{X}))_{\mathbb{Q}}$, and hence the following. \begin{thm}\label{Twisted}(=Theorem \ref{egTwistedscheme1}) Noncommutative Hodge conjecture for $\mathsf{Per}_{\3\dg}(\mathsf{X},\mathcal{A})$ $\Leftrightarrow$ Noncommutative Hodge conjecture for $\mathsf{Per}_{\3\dg}(\mathsf{X})$. \end{thm} This formulation of the noncommutative Hodge conjecture is compatible with semi-orthogonal decompositions. Therefore, good knowledge of the semi-orthogonal decompositions of varieties can simplify the Hodge conjecture and give new evidence for it. The survey ``Noncommutative counterparts of celebrated conjectures'' \cite[Section 2]{tabuada2019noncommutative} provides many examples of applications to geometry for some conjectures via this approach. These examples also apply to the noncommutative Hodge conjecture, and we give some further examples, which are combined in the theorem below. \begin{thm}\label{Example} Combining Theorem \ref{sod}, Theorem \ref{Nmotive}, and Theorem \ref{Twisted}, we have \begin{enumerate} \item \textbf{Fractional Calabi--Yau categories.}\\ Let $\mathsf{X}$ be a hypersurface of degree $\leq \mathsf{n}+1$ in $\mathbb{P}^{\mathsf{n}}$.
There is a semi-orthogonal decomposition $$\mathsf{Perf}(\mathsf{X})=\langle \mathcal{T}(\mathsf{X}),\mathcal{O}_{\mathsf{X}},\cdots,\mathcal{O}_{\mathsf{X}}(\mathsf{n}-\mathsf{deg}(\mathsf{X}))\rangle.$$ Here $\mathcal{T}(\mathsf{X})$ is a fractional Calabi--Yau category of dimension $\frac{(\mathsf{n}+1)(\mathsf{deg}(\mathsf{X})-2)}{\mathsf{deg}(\mathsf{X})}$~\cite[Theorem 3.5]{Kuznetsov_2019}. We write $\mathcal{T}_{\3\dg}(\mathsf{X})$ for the full $\3\dg$ subcategory of $\mathsf{Per}_{\3\dg}(\mathsf{X})$ whose objects belong to $\mathcal{T}(\mathsf{X})$. Then $$\text{\it Hodge conjecture for}\ \mathsf{X}\Leftrightarrow \text{\it Noncommutative Hodge conjecture for}\ \mathcal{T}_{\3\dg}(\mathsf{X}).$$ \item \textbf{Twisted schemes.}\\ (A).\ Let $\mathsf{X}$ be a cubic fourfold containing a plane. There is a semi-orthogonal decomposition $$\mathsf{Perf}(\mathsf{X})=\langle\mathsf{Perf}(\mathsf{S},\mathcal{A}),\mathcal{O}_{\mathsf{X}},\mathcal{O}_{\mathsf{X}}(1),\mathcal{O}_{\mathsf{X}}(2)\rangle.$$ Here $\mathsf{S}$ is a $\mathsf{K}_{3}$ surface, and $\mathcal{A}$ is a sheaf of Azumaya algebras over $\mathsf{S}$~\cite[Theorem 4.3]{Kuznetsov2010DerivedCO}. Since the noncommutative Hodge conjecture is true for $\mathsf{Per}_{\3\dg}(\mathsf{S},\mathcal{A})$ by Theorem \ref{Twisted}, the Hodge conjecture is true for $\mathsf{X}$.\\ (B).\ Let $\mathsf{f}\colon \mathsf{X}\longrightarrow \mathsf{S}$ be a smooth quadric fibration, for example, a smooth quadric fibration in the relative projective space $\mathbb{P}^{\mathsf{n}+1}_{\mathsf{S}}$ \cite{Kuznetsov2005DerivedCO}. There is a semi-orthogonal decomposition $$\mathsf{Perf}(\mathsf{X})=\langle \mathsf{Perf}(\mathsf{S},\mathsf{Cl}_{0}),\mathsf{Perf}(\mathsf{S}),\cdots,\mathsf{Perf}(\mathsf{S})\rangle.$$ Here $\mathsf{Cl}_{0}$ is a sheaf of Azumaya algebras over $\mathsf{S}$ if the dimension $\mathsf{n}$ of the fibers of $\mathsf{f}$ is odd. \par Thus, if $\mathsf{n}$ is odd, the Hodge conjecture for $\mathsf{X}$ is equivalent to the Hodge conjecture for $\mathsf{S}$.
Moreover, if $\dim \mathsf{S}\leq 3$, the Hodge conjecture for $\mathsf{X}$ is true. \item \textbf{HP duality.}\\ We write $\mathsf{Hodge}(\bullet)$ to mean that the (noncommutative) Hodge conjecture is true for a variety (a smooth and proper $\3\dg$ category). Let $\mathsf{Y}\rightarrow\mathbb{P}(\mathsf{V}^{\ast})$ be the $\mathsf{HP}$ dual of $\mathsf{X}\rightarrow\mathbb{P}(\mathsf{V})$; then $\mathsf{Hodge}(\mathsf{X})\Leftrightarrow\mathsf{Hodge}(\mathsf{Y})$. Choose a linear subspace $\mathsf{L}\subset \mathsf{V}^{\ast}$, and let $\mathsf{X}_{\mathsf{L}}=\mathsf{X}\times_{\mathbb{P}(\mathsf{V})}\mathbb{P}(\mathsf{L}^{\perp})$ and $\mathsf{Y}_{\mathsf{L}}=\mathsf{Y}\times_{\mathbb{P}(\mathsf{V}^{\ast})}\mathbb{P}(\mathsf{L})$ be the corresponding linear sections. Assume $\mathsf{X}_{\mathsf{L}}$ and $\mathsf{Y}_{\mathsf{L}}$ are smooth and of the expected dimension. If we assume $\mathsf{Hodge}(\mathsf{X})$, then $\mathsf{Hodge}(\mathsf{X}_{\mathsf{L}})\Leftrightarrow \mathsf{Hodge}(\mathsf{Y}_{\mathsf{L}})$. \end{enumerate} \par \end{thm} We can prove (3) directly from the description of $\mathsf{HPD}$, see Theorem \ref{HPD}. For more examples constructed from $\mathsf{HPD}$, see Example \ref{egHPD}. Motivated by the noncommutative techniques of Theorem \ref{Example} (3), we expect that a duality of the Hodge conjecture for certain linear sections of projectively dual varieties can be established by classical methods of algebraic geometry. \begin{conj}(=Conjecture \ref{Conjprojectivedual}) Let $\mathsf{X}\subset\mathbb{P}(\mathsf{V})$ be a smooth projective variety. Suppose the Hodge conjecture is true for $\mathsf{X}$. Let $\mathsf{Y}\subset\mathbb{P}(\mathsf{V}^{\ast})$ be the projective dual of $\mathsf{X}\subset\mathbb{P}(\mathsf{V})$. Choose a linear subspace $\mathsf{L}\subset\mathsf{V}^{\ast}$.
Suppose the linear sections $\mathsf{X}_{\mathsf{L}}=\mathsf{X}\cap \mathbb{P}(\mathsf{L}^{\perp})$ and $\mathsf{Y}_{\mathsf{L}}=\mathsf{Y}\cap \mathbb{P}(\mathsf{L})$ are both smooth and of the expected dimension. Then, the Hodge conjecture for $\mathsf{X}_{\mathsf{L}}$ is equivalent to the Hodge conjecture for $\mathsf{Y}_{\mathsf{L}}$. \end{conj} Finally, we obtain some results by algebraic techniques. A $\3\dg$ algebra $\mathsf{A}$ is called connective if $\mathsf{H}^{\mathsf{i}}(\mathsf{A})=0$ for $\mathsf{i} > 0$. According to \cite[Theorem 4.6]{raedschelders2020proper}, if $\mathsf{A}$ is a connective smooth proper $\3\dg$ algebra, then $\mathcal{U}(\mathsf{A})_{\mathbb{Q}}\cong \mathcal{U}(\mathsf{H}^{0}(\mathsf{A})/\mathsf{Jac}(\mathsf{H}^{0}(\mathsf{A})))_{\mathbb{Q}}\cong \oplus \mathcal{U}(\mathbb{C})_{\mathbb{Q}}$. Thus, we have the following. \begin{thm}\label{algebra1} The noncommutative Hodge conjecture is true for smooth, proper, and connective $\3\dg$ algebras $\mathsf{A}$, see Theorem \ref{propersmoothconectivealgebraHodge}. In particular, the noncommutative Hodge conjecture is true for smooth and proper algebras. \end{thm} We also provide another proof for the case of smooth and proper algebras, see Theorem \ref{algebra}. Theorem \ref{algebra1} implies that if a variety $\mathsf{X}$ admits a tilting bundle (or sheaf), then the Hodge conjecture is true for $\mathsf{X}$, see Corollary \ref{Tiltingsheaf} in the text. \subsection*{Notation}\label{notation} We assume the varieties to be defined over $\mathbb{C}$. We write $\mathsf{SOD}$ for a semi-orthogonal decomposition of triangulated categories. We say a semi-orthogonal decomposition is geometric if its components are equivalent to derived categories of smooth projective varieties. We always assume the $\3\dg$ categories to be small categories. We sometimes write $\mathsf{k}$ for the field $\mathbb{C}$ without further mention.
\begin{acks} The author is grateful to his supervisor Will Donovan for his helpful support, discussions, and suggestions. The author would like to thank Anthony Blanc and Dmitry Kaledin for helpful discussions by e-mail. The author also thanks Shizhuo Zhang for informing him of Alexander Perry's work when most parts of this paper were finished. The author is indebted to Alexander Perry for helpful comments and suggestions. The author thanks Michael Brown for his comments and for pointing out a gap in the previous version concerning the splitting of the Hodge filtration; this led the author to revise the paper so as to avoid that issue. \end{acks} \vspace{5mm} \section{Preliminary} \subsection{The classical Hodge conjecture} Given a smooth projective variety $\mathsf{X}$, there is the famous Hodge decomposition $$\mathsf{H}^{\mathsf{k}}(\mathsf{X}(\mathbb{C}),\mathbb{Z})\otimes\mathbb{C} \cong \oplus_{\mathsf{p}+\mathsf{q}=\mathsf{k}}\mathsf{H}^{\mathsf{q}}(\mathsf{X},\Omega_{\mathsf{X}}^{\mathsf{p}}),$$ where $\mathsf{H}^{\mathsf{q}}(\mathsf{X},\Omega_{\mathsf{X}}^{\mathsf{p}})$ can be identified with the space of $(\mathsf{p},\mathsf{q})$ classes in $\mathsf{H}^{\mathsf{p}+\mathsf{q}}(\mathsf{X}(\mathbb{C}),\mathbb{C})$. We define the rational (integral) Hodge classes as the rational (integral) $(\mathsf{p},\mathsf{p})$ classes. By $\Poincare$ duality, there is a cycle map which relates the $\mathsf{Chow}$ group of $\mathsf{X}$ with its Betti cohomology$\colon$ $$\mathsf{Cycle}\colon \quad \mathsf{CH}^{\ast}(\mathsf{X})\longrightarrow \mathsf{H}^{\ast}(\mathsf{X}(\mathbb{C}),\mathbb{C}).$$ Clearly, the image lies in the integral Hodge classes. We obtain the rational cycle map after tensoring with $\mathbb{Q}$. The famous Hodge conjecture asks whether the image of the rational (integral) cycle map is exactly the set of rational (integral) Hodge classes.
It is well known that the integral Hodge conjecture is not true in general \cite{ATIYAH196225}, while the rational Hodge conjecture is still open. For a more detailed introduction to the classical Hodge conjecture, the reader can refer to the survey ``Some aspects of the Hodge conjecture'' \cite{voisin_2003}. \begin{rem} The rational (and integral) Hodge conjecture is true in weight one by the Lefschetz $(1,1)$ theorem. By $\Poincare$ duality, the rational Hodge conjecture is also true in weight $\mathsf{n}-1$, where $\mathsf{n}$ is the dimension of the variety. In particular, the rational Hodge conjecture is true for varieties of dimension less than or equal to $3$. \end{rem} This paper focuses on the non-weighted rational Hodge conjecture. That is, we ask whether the rational cycle map sends $\mathsf{CH}^{\ast}(\mathsf{X})_{\mathbb{Q}}$ surjectively onto the rational Hodge classes. \begin{thm}(Part of Grothendieck--Riemann--Roch [SGA6 exp.\ XIV]\cite{RR})\label{GRR} Let $\mathsf{X}$ be a smooth projective variety. There is a commutative diagram, where $\mathsf{Ch}_{\mathbb{Q}}$ denotes the corresponding Chern character maps and $\mathsf{K}_{0}(\mathsf{X})_{\mathbb{Q}}$ is the rational $0^{\mathsf{th}}$ algebraic $\mathsf{K}$-group of coherent sheaves. $$\xymatrix{\mathsf{K}_{0}(\mathsf{X})_{\mathbb{Q}}\ar[r]^{\mathsf{Ch}_{\mathbb{Q}}}\ar[d]^{\cong}_{\mathsf{Ch}_{\mathbb{Q}}}&\mathsf{H}^{\ast}(\mathsf{X},\mathbb{C})\\ \mathsf{CH}^{\ast}(\mathsf{X})_{\mathbb{Q}}\ar[ru]_{\mathsf{cycle}}&}$$ \end{thm} The image of the Chern character lies in the rational Hodge classes, and the rational Hodge conjecture can be reformulated as the statement that $\mathsf{Ch}_{\mathbb{Q}}$ maps $\mathsf{K}_{0}(\mathsf{X})_{\mathbb{Q}}$ surjectively onto the rational Hodge classes.
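As a minimal illustration of this reformulation, we record the simplest case; the computation below is standard and is included only as a sanity check. \begin{eg} Let $\mathsf{X}=\mathbb{P}^{1}$, and write $\mathsf{H}\in \mathsf{H}^{2}(\mathbb{P}^{1},\mathbb{Q})$ for the class of a point. The group $\mathsf{K}_{0}(\mathbb{P}^{1})_{\mathbb{Q}}$ is generated by $[\mathcal{O}]$ and $[\mathcal{O}(1)]$, with $$\mathsf{Ch}_{\mathbb{Q}}([\mathcal{O}])=1,\quad \quad \mathsf{Ch}_{\mathbb{Q}}([\mathcal{O}(1)])=1+\mathsf{H}.$$ These two classes span $\mathsf{H}^{0}(\mathbb{P}^{1},\mathbb{Q})\oplus \mathsf{H}^{2}(\mathbb{P}^{1},\mathbb{Q})$, which is exactly the space of rational Hodge classes of $\mathbb{P}^{1}$. Hence $\mathsf{Ch}_{\mathbb{Q}}$ is surjective onto the rational Hodge classes, recovering the (trivially known) Hodge conjecture for $\mathbb{P}^{1}$. \end{eg}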
\begin{prop}\label{corollary 2.1} We have the Mukai vector $\mathsf{v}(\bullet)$ $$\mathsf{v}\colon \mathsf{K}_{0}(\mathsf{X})\longrightarrow \oplus\mathsf{H}^{\mathsf{p},\mathsf{p}}(\mathsf{X}),\quad \quad \mathsf{E}\mapsto \mathsf{Ch}(\mathsf{E})\sqrt{\mathsf{Td}(\mathsf{X})}.$$ The non-weighted Hodge conjecture can be reformulated as the statement that $\mathsf{v}_{\mathbb{Q}}$ maps $\mathsf{K}_{0}(\mathsf{X})_{\mathbb{Q}}$ surjectively onto the rational Hodge classes. \end{prop} \begin{proof} There is a commutative diagram$\colon$ $$\xymatrix@C6pc@R3pc{\mathsf{K}_{0}(\mathsf{X})\ar[r]^{\mathsf{v}}\ar[rd]_{\mathsf{Ch}}&\oplus\mathsf{H}^{\mathsf{p},\mathsf{p}}(\mathsf{X})\ar[d]_{\cong}^{\frac{1}{\sqrt{\mathsf{Td}(\mathsf{X})}}}\\ &\oplus\mathsf{H}^{\mathsf{p},\mathsf{p}}(\mathsf{X})}$$ Since the vertical morphism is an isomorphism which preserves the rational Hodge classes, $\mathsf{v}_{\mathbb{Q}}$ maps $\mathsf{K}_{0}(\mathsf{X})_{\mathbb{Q}}$ surjectively onto the rational Hodge classes if and only if $\mathsf{Ch}_{\mathbb{Q}}$ does. Thus, the statement follows from Theorem \ref{GRR} above. \end{proof} \subsection{Noncommutative geometry} We briefly recall the theory of noncommutative spaces, in which certain $\3\dg$ categories are regarded as noncommutative counterparts of varieties. We will recall the basic notions; for a survey of $\3\dg$ categories, the reader can refer to B.\ Keller's ``On differential graded categories'' \cite{Keller2006OnDG}. \begin{defn} A $\mathbb{C}$-linear category $\mathcal{A}$ is called a $\3\dg$ category if the morphism spaces $\mathsf{Mor}(\bullet,\bullet)$ are differential $\mathbb{Z}$-graded $\mathsf{k}$-vector spaces.
For all objects $\mathsf{E}$, $\mathsf{F}$, $\mathsf{G}$ $\in$ $\mathcal{A}$, the composition maps $$\mathsf{Mor}(\mathsf{F},\mathsf{E})\otimes \mathsf{Mor}(\mathsf{G},\mathsf{F})\rightarrow \mathsf{Mor}(\mathsf{G},\mathsf{E}) $$ are morphisms of complexes, and composition is associative. Furthermore, there is a unit $\mathsf{k} \rightarrow \mathsf{Mor}(\mathsf{E},\mathsf{E})$. Note that the composition law implies that $\mathsf{Mor}(\mathsf{E},\mathsf{E})$ is a differential graded algebra. \end{defn} \begin{eg} A basic example of a $\3\dg$ category is $\4\dg$, whose objects are complexes of $\mathsf{k}$-vector spaces. The morphism spaces are defined as follows$\colon$ \par Let $\mathsf{E},\mathsf{F}\in \mathsf{C}_{\3\dg}(\mathsf{k})$, and define the degree $\mathsf{n}$ piece of the morphism space $\mathsf{Mor}(\mathsf{E},\mathsf{F})$ to be $\mathsf{Mor}(\mathsf{E},\mathsf{F})(\mathsf{n}):=\Pi_{\mathsf{i}} \mathsf{Hom}(\mathsf{E}_{\mathsf{i}},\mathsf{F}_{\mathsf{i}+\mathsf{n}})$. The $\mathsf{n}^{\text{th}}$ differential is given by $\mathsf{d}_{\mathsf{n}}(\mathsf{f})= \mathsf{d}_{\mathsf{F}}\circ \mathsf{f} - (-1)^{\mathsf{n}}\mathsf{f}\circ \mathsf{d}_{\mathsf{E}}$, $\mathsf{f}\in \mathsf{Mor}(\mathsf{E},\mathsf{F})(\mathsf{n})$. \end{eg} \begin{defn} We call $\mathsf{F}\colon \mathcal{C}\longrightarrow \mathcal{D}$ a dg functor between $\3\dg$ categories if $\mathsf{F}\colon \mathsf{Hom}(\mathsf{E},\mathsf{G})\longrightarrow \mathsf{Hom}(\mathsf{F}(\mathsf{E}),\mathsf{F}(\mathsf{G}))$ is a morphism of chain complexes for all $\mathsf{E}$, $\mathsf{G} \in \mathcal{C}$. We call $\mathsf{F}$ a quasi-equivalence if $\mathsf{F}$ induces isomorphisms on the cohomologies of the morphism complexes and an equivalence of homotopy categories.
\end{defn} \begin{defn} A $\3\dg$ functor $\mathsf{F}\colon\mathcal{A}\longrightarrow \mathcal{B}$ is a derived Morita equivalence if it induces an equivalence of derived categories by composition $$\mathsf{F}^{\ast}\colon\mathsf{D}(\mathcal{B})\cong \mathsf{D}(\mathcal{A}).$$ Note that if a $\3\dg$ functor $\mathcal{A} \longrightarrow \mathcal{B}$ is a quasi-equivalence, then it is a derived Morita equivalence; the reader can refer to ``Categorical resolutions of irrational singularities'' \cite[Proposition 3.9]{Kuznetsov2015CategoricalRO} for an explicit proof. \end{defn} We consider the category of small $\3\dg$ categories, whose morphisms are the $\3\dg$ functors; it is written as $\mathsf{dg-cat}$. According to G.\ Tabuada \cite{10.1155/IMRN.2005.3309}, there is a model structure on $\mathsf{dg-cat}$ with the derived Morita equivalences as weak equivalences. We write $\mathsf{Hmo}(\mathsf{dg-cat})$ for the homotopy category associated to this model structure. Given two $\3\dg$ categories $\mathcal{A}$ and $\mathcal{B}$, we have a bijection $\mathsf{Hom}_{\mathsf{Hmo}}(\mathcal{A},\mathcal{B})\cong \mathsf{Iso}\ \mathsf{rep}(\mathcal{A}^{op}\otimes^{\mathsf{L}} \mathcal{B})$, where $\mathsf{rep}(\mathcal{A}^{op}\otimes^{\mathsf{L}} \mathcal{B})$ is the subcategory of $\mathsf{D}(\mathcal{A}^{op}\otimes^{\mathsf{L}}\mathcal{B})$ of bi-modules $\mathsf{X}$ such that $\mathsf{X}(\mathsf{a},\bullet)$ is a perfect $\mathcal{B}$-module for every object $\mathsf{a}\in\mathcal{A}$. Linearizing the category, we obtain $\mathsf{Hmo}_{0}$, whose morphism spaces become $\mathsf{K}_{0}(\mathsf{rep}(\mathcal{A}^{op}\otimes \mathcal{B}))$. After $\mathbb{Q}$-linearization and idempotent completion, we get the category of pre-noncommutative motives $\mathsf{PChow}_{\mathbb{Q}}$.
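The difference between quasi-equivalence and derived Morita equivalence is illustrated by the following standard example, recorded here only for orientation. \begin{eg} Let $\mathsf{A}$ be a $\3\dg$ algebra, regarded as a $\3\dg$ category with a single object. The Yoneda embedding $\mathsf{A}\longrightarrow \mathsf{Per}_{\3\dg}(\mathsf{A})$ into the $\3\dg$ category of perfect $\3\dg$ $\mathsf{A}$-modules is a derived Morita equivalence, but it is not a quasi-equivalence in general: it is quasi-fully faithful, yet it is not essentially surjective on homotopy categories, since $\mathsf{Perf}(\mathsf{A})$ contains shifts, finite direct sums, and direct summands that need not be isomorphic to $\mathsf{A}$ itself. In particular, $\mathsf{A}$ and $\mathsf{Per}_{\3\dg}(\mathsf{A})$ become isomorphic in $\mathsf{Hmo}$, which is one reason to take the derived Morita equivalences, rather than the quasi-equivalences, as the weak equivalences. \end{eg}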
\begin{defn}\label{additiveinvariant} A functor to an additive category $\mathcal{C}$, $\mathsf{F}\colon \mathsf{dg-cat} \longrightarrow \mathcal{C}$, is called an additive invariant in the sense of G.\ Tabuada \cite{10.1155/IMRN.2005.3309} if$\colon$\\ (1) It maps derived Morita equivalences to isomorphisms.\\ (2) For pre-triangulated $\3\dg$ categories $\mathcal{A}$, $\mathcal{B}$ and $\mathsf{X}$ with natural morphisms $\mathsf{i}\colon\mathcal{A}\longrightarrow \mathsf{X}$ and $\mathsf{j}\colon\mathcal{B}\longrightarrow \mathsf{X}$ which induce a semi-orthogonal decomposition of triangulated categories $\mathsf{Ho}(\mathsf{X})=\langle \mathsf{Ho}(\mathcal{A}),\mathsf{Ho}(\mathcal{B})\rangle$, there is an isomorphism $\mathsf{F}(\mathsf{X})\cong \mathsf{F}(\mathcal{A})\oplus \mathsf{F}(\mathcal{B})$ induced by $\mathsf{F}(\mathsf{i})+\mathsf{F}(\mathsf{j})$. \end{defn} The following theorem is due to G.\ Tabuada. \begin{thm}(G.\ Tabuada\cite[Theorem 4.1]{10.1155/IMRN.2005.3309})\label{theorem 1.21} A functor $\mathsf{F}$ as in Definition \ref{additiveinvariant}, viewed as a functor $\mathsf{Hmo}\longrightarrow \mathcal{C}$, is an additive invariant if and only if it factors through $\mathsf{Hmo}\longrightarrow \mathsf{Hmo}_{0}\longrightarrow \mathcal{C}$. \end{thm} That is, $\mathsf{Hmo}_{0}$ plays the role of the category of usual motives, and additive invariants should be regarded as noncommutative Weil cohomology theories. \begin{rem} Due to the work of many people, see the survey \cite{NM}, Hochschild homology, algebraic $\mathsf{K}$-theory, and (periodic) cyclic homology are all additive invariants. The Hochschild homology of a smooth proper variety is the noncommutative counterpart of Hodge cohomology, and periodic cyclic homology corresponds to de Rham cohomology. \end{rem} Given a smooth proper variety $\mathsf{X}$, there is a natural $\3\dg$ enhancement $\mathsf{Per}_{\3\dg}(\mathsf{X})$ of $\mathsf{Perf}(\mathsf{X})$.
In this sense, $\3\dg$ categories can be regarded as noncommutative counterparts of varieties. In order to focus on nice spaces (for example, $\mathsf{Chow}$ motives concern smooth proper varieties), we restrict from $\mathsf{dg-cat}$ to the smooth proper $\3\dg$ categories. \begin{defn} A $\3\dg$ category $\mathcal{A}$ is called smooth if $\mathcal{A}$ is a perfect $\mathcal{A}$-$\mathcal{A}$ bi-module. It is called smooth and proper if $\mathcal{A}$ is derived Morita equivalent to a smooth $\3\dg$ algebra of finite type. \end{defn} It is well known that the property of being smooth and proper is preserved under derived Morita equivalence and tensor product \cite[Chapter 1, Theorem 1.43]{NM}. Properness is also commonly defined by requiring $\mathsf{Hom}_{\mathcal{A}}(\bullet,\bullet)$ to be a perfect $\mathsf{k}$-module. According to G.\ Tabuada's book ``Noncommutative Motives'' \cite[Proposition 1.45]{NM}, this definition of smoothness and properness is equivalent to ours. \begin{defn}[Noncommutative $\mathsf{Chow}$ motives]\label{NMotives} \cite{NM} We write $\mathsf{Hmo}^{\mathsf{sp}}_{0}$ for the full sub-category of $\mathsf{Hmo}_{0}$ whose objects are smooth proper $\3\dg$ categories. $\mathbb{Q}$-linearizing the category $\mathsf{Hmo}^{\mathsf{sp}}_{0}$, that is, letting the morphism spaces become $\mathsf{K}_{0}(\mathcal{A}^{\mathsf{op}}\otimes \mathcal{B})_{\mathbb{Q}}$ \cite[Cor 1.44]{NM}, we obtain $\mathsf{Hmo}^{\mathsf{sp}}_{0,\mathbb{Q}}$. Then, we define $\mathsf{NChow}_{\mathbb{Q}}$ to be the idempotent completion of $\mathsf{Hmo}^{\mathsf{sp}}_{0,\mathbb{Q}}$. \end{defn} There is a universal additive invariant$\colon$ $$\mathcal{U}\colon \operatorname{\mathsf{dg-cat}}^{\mathsf{sp}}\longrightarrow \operatorname{\mathsf{NChow}}.$$ Let $\underline{\mathbb{C}}$ be the category with one object whose morphism space is $\mathbb{C}$.
Then for any $\mathcal{A}\in \operatorname{\mathsf{dg-cat}}$, $\mathsf{Hom}_{\mathsf{NChow}} (\mathcal{U}(\mathbb{C}),\mathcal{U}(\mathcal{A}))\cong \mathsf{K}_{0}(\mathsf{rep}(\mathcal{A}))\cong \mathsf{K}_{0}(\mathcal{A}):=\mathsf{K}_{0}(\mathsf{D}^{c}(\mathcal{A}))$. Since we have a functorial morphism $\mathsf{Hom}_{\mathsf{NChow}} (\mathcal{U}(\mathbb{C}),\mathcal{U}(\mathcal{A}))\longrightarrow \mathsf{Hom}_{\mathbb{C}}(\mathsf{HH}_{0}(\mathbb{C}),\mathsf{HH}_{0}(\mathcal{A}))$, there is a Chern character map $$\mathsf{Ch}\colon \mathsf{K}_{0}(\mathcal{A})\longrightarrow \mathsf{HH}_{0}(\mathcal{A}).$$ Concretely, given any $\mathcal{A}$-module $X\in \mathsf{D}^{c}(\mathcal{A})$, its Chern character is defined via the following diagram of $\3\dg$ categories. $$\xymatrix{&\mathsf{Per}_{\3\dg}(\mathcal{A})\\ \underline{\mathbb{C}}\ar[ru]^{X}&\mathcal{A}\ar[u]}$$ The module $X$ naturally induces a morphism of Hochschild complexes, and hence an element of $\mathsf{HH}_{0}(\mathcal{A})$ via the isomorphism $\mathsf{HH}_{0}(\mathcal{A})\cong \mathsf{HH}_{0}(\mathsf{Per}_{\3\dg}(\mathcal{A}))$. This isomorphism holds because the Yoneda embedding $\mathcal{A}\longrightarrow \mathsf{Per}_{\3\dg}(\mathcal{A})$ is a derived Morita equivalence. Here $\mathsf{Per}_{\3\dg}(\mathcal{A})$ is defined as the full subcategory of $\3\dg$ $\mathcal{A}$-modules whose objects are isomorphic to objects of $\mathsf{Perf}(\mathcal{A})$. \par In general, given any additive invariant $\mathsf{F}$ with $\mathsf{F}(\mathsf{k})\cong \mathsf{k}$, we have a Chern character map $\mathsf{K}_{0}(\mathcal{A})\longrightarrow \mathsf{F}(\mathcal{A})$; for example, for (periodic) cyclic homology and for negative cyclic homology. \par It is natural to ask how the $\mathsf{Chow}$ motives $\mathsf{Chow}_{\mathbb{Q}}$ are related to the noncommutative $\mathsf{Chow}$ motives $\mathsf{NChow}_{\mathbb{Q}}$. There is a nice answer due to remarkable work of Kontsevich and G.~Tabuada.
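Before turning to that comparison, we record the simplest instance of the Chern character above; the computation is standard and included only for orientation. \begin{eg} Take $\mathcal{A}=\underline{\mathbb{C}}$, so that $\mathsf{D}^{c}(\mathcal{A})$ is the category of perfect complexes of $\mathbb{C}$-vector spaces. Then $\mathsf{K}_{0}(\mathcal{A})\cong \mathbb{Z}$, generated by $[\mathbb{C}]$, and $\mathsf{HH}_{0}(\mathcal{A})\cong \mathbb{C}$. The Chern character sends the class of a perfect complex $V^{\bullet}$ to its Euler characteristic $\sum_{\mathsf{i}}(-1)^{\mathsf{i}}\dim_{\mathbb{C}}\mathsf{H}^{\mathsf{i}}(V^{\bullet})\in \mathbb{C}$; that is, $\mathsf{Ch}$ is the inclusion $\mathbb{Z}\hookrightarrow \mathbb{C}$. \end{eg}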
\begin{thm}\label{ChowNChow} (\cite[Theorem 1.1]{tabuada2011chow}) There is a symmetric monoidal functor $$\phi \colon \mathsf{SmProjec}^{\mathsf{op}}\longrightarrow \mathsf{dg-cat}^{\mathsf{sp}},\ \mathsf{X}\mapsto \mathsf{Per}_{\3\dg}(\mathsf{X})$$ such that the natural diagram below commutes. $$\xymatrix{\mathsf{SmProjec}^{\mathsf{op}}\ar[r]^{\phi}\ar[d]&\mathsf{dg-cat}^{\mathsf{sp}}\ar[d]\\ \operatorname{\mathsf{Chow}}_{\mathbb{Q}}\ar[d]&\mathsf{Hmo}^{\mathsf{sp}}_{0}\ar[d]\\ \operatorname{\mathsf{Chow}}_{\mathbb{Q}}/-\otimes\mathbb{Q}(1)\ar[r]^{\phi'}& \operatorname{\mathsf{NChow}}_{\mathbb{Q}}\subset \mathsf{Hmo}^{\ast}_{0,\mathbb{Q}}}$$ \end{thm} With this commutative diagram, G.\ Tabuada was able to generalize some famous conjectures to noncommutative spaces, see ``Noncommutative counterparts of celebrated conjectures''~\cite{tabuada2019noncommutative}. \vspace{5mm} \section{Hodge conjecture and geometric semi-orthogonal decompositions} In this section, we prove that the Hodge conjecture is additive for geometric semi-orthogonal decompositions. In particular, the Hodge conjecture is a derived invariant. \par \begin{thm}\label{GSODHodge1} Suppose we have a nontrivial semi-orthogonal decomposition of the derived category $\b\mathsf{X}=\langle \mathcal{A},\mathcal{B} \rangle$ such that $\mathcal{A}$ and $\mathcal{B}$ are geometric, that is, $\mathcal{B}\cong \1\b\mathsf{Y}$ and $\mathcal{A}\cong \2\b\mathbb{Z}$ for some varieties $\mathsf{Y}$ and $\mathsf{Z}$. Then the Hodge conjecture is true for $\mathsf{X}$ if and only if it is true for $\mathsf{Y}$ and $\mathsf{Z}$. \end{thm} \begin{proof}[Proof] Let $\mathsf{j}\colon \2\b\mathbb{Z}\hookrightarrow\b\mathsf{X}$ be the embedding with left adjoint $\mathsf{L}$, and $\mathsf{i}\colon \1\b\mathsf{Y}\hookrightarrow \b\mathsf{X}$ the embedding with right adjoint $\mathsf{R}$. According to D.\ Orlov \cite[Theorem 2.2]{1996alg.geom..6006O}, these are all Fourier--Mukai functors.
There is a diagram of triangulated categories$\colon$ $$\xymatrix{\1\b\mathsf{Y} \ar[r]_{\mathsf{i}} &\b\mathsf{X} \ar[r]_{\mathsf{L}}\ar@/_/[l]_{\mathsf{R}}&\2\b\mathbb{Z}\ar@/_/[l]_{\mathsf{j}} }$$ with $\mathsf{R}\circ \mathsf{i}\cong \mathsf{id}$, $\mathsf{L}\circ \mathsf{j}\cong \mathsf{id}$, $\mathsf{R}\circ \mathsf{j}\cong 0$ and $\mathsf{L}\circ \mathsf{i}\cong 0$. Applying $0^{\mathsf{th}}$ $\mathsf{K}$-theory and $0^{\mathsf{th}}$ Hochschild homology, we obtain the diagrams $$\xymatrix{\mathsf{K}_{0}(\1\b\mathsf{Y})\ar[r]_{\mathsf{i}} &\mathsf{K}_{0}(\b\mathsf{X})\ar[r]_{\mathsf{L}}\ar@/_/[l]_{\mathsf{R}}&\mathsf{K}_{0}(\2\b\mathbb{Z})\ar@/_/[l]_{\mathsf{j}} }.$$ $$\xymatrix{\mathsf{HH}_{0}(\mathsf{Y})\ar[r]_{\mathsf{i}_{\mathsf{H}}} &\mathsf{HH}_{0}(\mathsf{X})\ar[r]_{\mathsf{L}_{\mathsf{H}}}\ar@/_/[l]_{\mathsf{R}_{\mathsf{H}}}&\mathsf{HH}_{0}(\mathsf{Z})\ar@/_/[l]_{\mathsf{j}_{\mathsf{H}}} }.$$ Here we define $\mathsf{HH}_{0}(\bullet)$ as a subspace of de Rham cohomology; for example, $\mathsf{HH}_{0}(\mathsf{X}):=\oplus_{\mathsf{p}}\mathsf{H}^{\mathsf{p},\mathsf{p}}(\mathsf{X})\hookrightarrow \mathsf{H}^{\mathsf{even}}_{\mathsf{DR}}(\mathsf{X})$. The morphisms on $\mathsf{HH}_{0}$ are induced by the Mukai vectors of the kernels of the corresponding functors. For example, take $\mathsf{E}\in \mathsf{D}^{\mathsf{b}}(\mathsf{X}\times\mathsf{Y})$; then $$\Phi_{\mathsf{v}(\mathsf{E})}: \mathsf{HH}_{0}(\mathsf{X})\longrightarrow \mathsf{HH}_{0}(\mathsf{Y})$$ is defined as $\mathsf{q}_{\ast}(\mathsf{p}^{\ast}(\bullet)\cup \mathsf{v}(\mathsf{E}))$. Firstly, $\Phi_{\mathsf{v}(\mathsf{E})}$ must induce a morphism of de Rham cohomology; it is easy to prove that $\Phi_{\mathsf{v}(\mathsf{E})}$ maps $\mathsf{HH}_{\ast}(\mathsf{X})=\oplus_{\mathsf{p}-\mathsf{q}=\ast}\mathsf{H}^{\mathsf{p},\mathsf{q}}(\mathsf{X})$ to $\mathsf{HH}_{\ast}(\mathsf{Y})=\oplus_{\mathsf{p}-\mathsf{q}=\ast}\mathsf{H}^{\mathsf{p},\mathsf{q}}(\mathsf{Y})$.
The reader can also see the proof in \cite[Proposition 5.39]{bookHuybrechts}. $$\xymatrix{&\mathsf{X}\times\mathsf{Y}\ar[dl]_{\mathsf{p}}\ar[dr]^{\mathsf{q}}&\\ \mathsf{X}&&\mathsf{Y}}$$ The morphisms of $\mathsf{K}_{0}$-groups are induced by the Fourier--Mukai functors. According to \cite[Chapter 5, Section 5.2]{bookHuybrechts}, the Mukai vector $\mathsf{v}$ is compatible with the morphisms of $\mathsf{K}_{0}$-groups; namely, we have a commutative diagram $$\xymatrix{\mathsf{K}_{0}(\mathsf{Y})\ar[r]_{\mathsf{i}}\ar[d]_{\mathsf{v}_{\mathsf{Y}}}&\mathsf{K}_{0}(\mathsf{X})\ar[r]_{\mathsf{L}}\ar@/_/[l]_{\mathsf{R}}\ar[d]_{\mathsf{v}_{\mathsf{X}}}&\mathsf{K}_{0}(\mathsf{Z})\ar[d]_{\mathsf{v}_{\mathsf{Z}}}\ar@/_/[l]_{\mathsf{j}}\\ \mathsf{HH}_{0}(\mathsf{Y})\ar[r]_{\mathsf{i}_{\mathsf{H}}}&\mathsf{HH}_{0}(\mathsf{X})\ar[r]_{\mathsf{L}_{\mathsf{H}}}\ar@/_/[l]_{\mathsf{R}_{\mathsf{H}}}&\mathsf{HH}_{0}(\mathsf{Z})\ar@/_/[l]_{\mathsf{j}_{\mathsf{H}}}}$$ The morphisms $\mathsf{R}_{\mathsf{H}}$, $\mathsf{i}_{\mathsf{H}}$, $\mathsf{j}_{\mathsf{H}}$, and $\mathsf{L}_{\mathsf{H}}$ preserve rational classes. We first prove that $\mathsf{i}_{\mathsf{H}}+\mathsf{j}_{\mathsf{H}}$ induces an isomorphism of Hochschild homologies. Clearly $\mathsf{i}+\mathsf{j}$ is an isomorphism onto $\mathsf{K}_{0}(\b\mathsf{X})$. Since Hochschild homology is an additive invariant, we have a non-canonical isomorphism $\mathsf{HH}_{0}(\mathsf{X})\cong \mathsf{HH}_{0}(\mathsf{Y})\oplus \mathsf{HH}_{0}(\mathsf{Z})$, which implies $\dim_{\mathbb{C}}\mathsf{HH}_{0}(\mathsf{X})=\dim_{\mathbb{C}}\mathsf{HH}_{0}(\mathsf{Y})+\dim_{\mathbb{C}}\mathsf{HH}_{0}(\mathsf{Z})$. This was proved by classical $\3\dg$ methods and the $\mathsf{HKR}$ isomorphism; the reader can also refer to A.\ Kuznetsov's paper ``Hochschild homology and semi-orthogonal decompositions'' \cite[Theorem 7.3(i)]{2009arXiv0904.4330K}.
\par Since $\mathsf{i}_{\mathsf{H}}$ and $\mathsf{j}_{\mathsf{H}}$ are injective (which will be proved below), $\mathsf{i}_{\mathsf{H}}+\mathsf{j}_{\mathsf{H}}$ being an isomorphism is equivalent to the fact that $\mathsf{Im}(\mathsf{i}_{\mathsf{H}})\cap \mathsf{Im}(\mathsf{j}_{\mathsf{H}})=0$. It suffices to prove that $\mathsf{L}_{\mathsf{H}}\circ \mathsf{i}_{\mathsf{H}}= 0$. Indeed, if this is true, let $\alpha \in \mathsf{Im}(\mathsf{i}_{\mathsf{H}})\cap \mathsf{Im}(\mathsf{j}_{\mathsf{H}})$ and write $\alpha= \mathsf{i}_{\mathsf{H}}\alpha_{\mathsf{Y}}=\mathsf{j}_{\mathsf{H}}\alpha_{\mathsf{Z}}$; then $\alpha_{\mathsf{Z}}=(\mathsf{L}_{\mathsf{H}}\circ \mathsf{j}_{\mathsf{H}})\alpha_{\mathsf{Z}}=\mathsf{L}_{\mathsf{H}}\alpha=(\mathsf{L}_{\mathsf{H}}\circ \mathsf{i}_{\mathsf{H}})\alpha_{\mathsf{Y}}=0$, hence $\alpha=0$. In order to prove the claim $\mathsf{L}_{\mathsf{H}}\circ \mathsf{i}_{\mathsf{H}}=0$, we need the following lemma. \begin{lem}\label{lem 2.2} Suppose an object $\mathsf{E}\in \mathsf{D}^{\mathsf{b}}(\mathsf{X}\times \mathsf{Y})$ induces the trivial Fourier--Mukai transform $\Phi_{\mathsf{E}} \colon \b\mathsf{X}\longrightarrow \1\b\mathsf{Y}$; then $\mathsf{E}\cong 0 \in \mathsf{D}^{\mathsf{b}}(\mathsf{X}\times \mathsf{Y})$. \end{lem} \begin{proof}[Proof of the lemma] Given any closed point $\mathsf{x}\in \mathsf{X}$, we have a natural closed embedding $\mathsf{l}_{\mathsf{x}}\colon \mathsf{x}\times \mathsf{Y}\hookrightarrow \mathsf{X}\times \mathsf{Y}$, and a simple calculation shows that $\Phi_{\mathsf{E}}(\mathsf{k}(\mathsf{x}))\cong \mathbb{L}\mathsf{l}_{\mathsf{x}}^{\ast}\mathsf{E}$ after identifying $\mathsf{x}\times \mathsf{Y}$ with $\mathsf{Y}$. Therefore, $\Phi_{\mathsf{E}}$ being trivial implies that $\mathbb{L}\mathsf{l}_{\mathsf{x}}^{\ast}\mathsf{E}$ is trivial. Since this is true for every closed point of $\mathsf{X}$, the support of $\mathsf{E}$ is empty, which implies $\mathsf{E}\cong 0$.
\end{proof} We return to the proof of Theorem \ref{GSODHodge1}. Since $\mathsf{L}\circ \mathsf{i}\cong 0$ as a Fourier--Mukai functor, by the lemma above the kernel corresponding to $\mathsf{L}\circ \mathsf{i}$ is trivial. In particular, its Mukai vector is trivial, hence $\mathsf{L}_{\mathsf{H}}\circ\mathsf{i}_{\mathsf{H}}=0$. \par We are now ready to prove Theorem \ref{GSODHodge1}. Suppose the Hodge conjecture holds for $\mathsf{X}$. Let $\alpha_{\mathsf{Y}}\in \oplus\mathsf{H}^{\mathsf{p},\mathsf{p}}(\mathsf{Y},\mathbb{Q})$, and consider $\alpha=\mathsf{i}_{\mathsf{H}}\alpha_{\mathsf{Y}}\in \oplus\mathsf{H}^{\mathsf{p},\mathsf{p}}(\mathsf{X},\mathbb{Q})$. Since the Hodge conjecture holds for $\mathsf{X}$, there exists an $\mathsf{E}\in \mathsf{K}_{0}(\mathsf{X})_{\mathbb{Q}}$ such that $\mathsf{v}(\mathsf{E})= \alpha$. Let $\mathsf{E}_{\mathsf{Y}}= \mathsf{R}(\mathsf{E})$; then the images of $\mathsf{v}(\mathsf{E}_{\mathsf{Y}})$ and $\alpha_{\mathsf{Y}}$ under $\mathsf{i}_{\mathsf{H}}$ coincide. Since $\mathsf{R}_{\mathsf{H}}\circ \mathsf{i}_{\mathsf{H}}= \mathsf{id}$, the morphism $\mathsf{i}_{\mathsf{H}}$ is injective, and therefore $\mathsf{v}(\mathsf{E}_{\mathsf{Y}})=\alpha_{\mathsf{Y}}$. This implies the Hodge conjecture for $\mathsf{Y}$. The Hodge conjecture is true for $\mathsf{Z}$ by a similar argument. \par Suppose now that the Hodge conjecture is true for $\mathsf{Y}$ and $\mathsf{Z}$; we prove that it is also true for $\mathsf{X}$. Let $\alpha \in \oplus \mathsf{H}^{\mathsf{p},\mathsf{p}}(\mathsf{X},\mathbb{Q})$, and consider $\mathsf{R}_{\mathsf{H}}(\alpha)\in \mathsf{HH}_{0}(\mathsf{Y})_{\mathbb{Q}}$ and $\mathsf{L}_{\mathsf{H}}(\alpha)\in \mathsf{HH}_{0}(\mathsf{Z})_{\mathbb{Q}}$.
Since the Hodge conjecture is true for $\mathsf{Y}$ and $\mathsf{Z}$, there exist $\mathsf{E}_{\mathsf{Y}}\in \mathsf{K}_{0}(\mathsf{Y})_{\mathbb{Q}}$ and $\mathsf{E}_{\mathsf{Z}}\in \mathsf{K}_{0}(\mathsf{Z})_{\mathbb{Q}}$ such that $\mathsf{v}(\mathsf{E}_{\mathsf{Y}})= \mathsf{R}_{\mathsf{H}}(\alpha)$ and $\mathsf{v}(\mathsf{E}_{\mathsf{Z}})=\mathsf{L}_{\mathsf{H}}(\alpha)$. Define $\alpha'=\mathsf{i}_{\mathsf{H}}\circ \mathsf{R}_{\mathsf{H}}(\alpha)+ \mathsf{j}_{\mathsf{H}}\circ \mathsf{L}_{\mathsf{H}}(\alpha)$. We prove that $\alpha'=\alpha$. Since $\mathsf{i}_{\mathsf{H}}+\mathsf{j}_{\mathsf{H}}$ induces an isomorphism, there exist $\alpha_{1}\in \mathsf{HH}_{0}(\mathsf{Y})$ and $\alpha_{2}\in \mathsf{HH}_{0}(\mathsf{Z})$ such that $\alpha= \mathsf{i}_{\mathsf{H}}(\alpha_{1})+ \mathsf{j}_{\mathsf{H}}(\alpha_{2})$. Applying the morphism $\mathsf{R}_{\mathsf{H}}$, we obtain $\alpha_{1}=\mathsf{R}_{\mathsf{H}}(\alpha)$; applying the morphism $\mathsf{L}_{\mathsf{H}}$, we obtain $\alpha_{2}=\mathsf{L}_{\mathsf{H}}(\alpha)$. Thus $\alpha = \mathsf{i}_{\mathsf{H}}\circ \mathsf{R}_{\mathsf{H}}(\alpha)+\mathsf{j}_{\mathsf{H}}\circ \mathsf{L}_{\mathsf{H}}(\alpha)$. Define $\mathsf{E}= \mathsf{i}(\mathsf{E}_{\mathsf{Y}})+\mathsf{j}(\mathsf{E}_{\mathsf{Z}})\in \mathsf{K}_{0}(\mathsf{X})_{\mathbb{Q}}$; then $\mathsf{v}(\mathsf{E})= \mathsf{v}(\mathsf{i}(\mathsf{E}_{\mathsf{Y}}))+\mathsf{v}(\mathsf{j}(\mathsf{E}_{\mathsf{Z}}))=\mathsf{i}_{\mathsf{H}}(\mathsf{R}_{\mathsf{H}}(\alpha))+\mathsf{j}_{\mathsf{H}}(\mathsf{L}_{\mathsf{H}}(\alpha))= \alpha$. \end{proof} \begin{rem} The statement of the theorem is still true if the semi-orthogonal decomposition of $\b\mathsf{X}$ has more than two components. The proof is essentially the same. \end{rem} \begin{cor}\label{derivedinvariant} If $\b\mathsf{X}\cong \1\b\mathsf{Y}$, then the Hodge conjecture for $\mathsf{X}$ is equivalent to the Hodge conjecture for $\mathsf{Y}$.
\end{cor} \begin{cor} \label{EC} Suppose $\b\mathsf{X}$ admits a full exceptional collection; then the Hodge conjecture is true for $\mathsf{X}$. \end{cor} \begin{eg} The Grassmannians \cite{Kapranov_1985}, certain homogeneous spaces (see a brief survey in \cite[Section 1.1]{Kuznetsov_2016}), and smooth projective toric varieties \cite[Theorem 1.1]{2005math......3102K} admit full exceptional collections, hence the Hodge conjecture is true for these examples. \end{eg} \begin{eg} Let $\mathsf{M}=\mathbb{P}^{1}\times\cdots\times \mathbb{P}^{1}$ ($\mathsf{n}$ factors) with polarization $\mathcal{L}=\mathcal{O}(\mu_{1})\boxtimes\cdots\boxtimes\mathcal{O}(\mu_{\mathsf{n}})$ for a sequence of positive numbers $\mu=(\mu_{1},\cdots,\mu_{\mathsf{n}})$. Consider the obvious equivariant structure for the group $\mathsf{PGL}_{2}$. Then the Mumford GIT quotient $\mathsf{X}(\mu)=\mathsf{M}//_{\mathcal{L}}\mathsf{PGL}_{2}$ admits a full exceptional collection for generic $\mu$ \cite[Section 6]{BallardFaveroKatzarkov+2019+235+303}. It is interesting that for finitely many $\mu$, $\mathsf{X}(\mu)$ is isomorphic to a ball quotient by a classical result of Deligne and Mostow \cite{PMIHES_1986__63__5_0}. \end{eg} \begin{rem} If $\langle \mathsf{E}_{1},\mathsf{E}_{2},\cdots, \mathsf{E}_{\mathsf{m}} \rangle$ is a full exceptional collection of $\b\mathsf{X}$, then according to the proof of Theorem \ref{GSODHodge1}, $\{\mathsf{Ch}(\mathsf{E}_{\mathsf{i}})\}_{\mathsf{i}=1}^{\mathsf{m}}$ forms a basis of $\mathsf{Hodge}(\mathsf{X},\mathbb{Q})$. \end{rem} \begin{eg} Let $\mathsf{X}$ be the projective space $\mathbb{P}^{n}$. There is a semi-orthogonal decomposition $\b\mathsf{X}=\langle \mathcal{O},\mathcal{O}(1),\cdots, \mathcal{O}(n)\rangle$. We assume $n=3$ for simplicity. Since $\mathcal{O}(\mathsf{i})$ is a line bundle, $c_{\mathsf{j}}(\mathcal{O}(\mathsf{i}))=0$ for $\mathsf{j}\geq 2$.
Write $\mathsf{H}$ for the hyperplane class of $\mathbb{P}^{3}$; then $\mathsf{Ch}(\mathcal{O}(\mathsf{i}))=1+\mathsf{i}\cdot \mathsf{H}+\frac{\mathsf{i}^{2}}{2}\cdot \mathsf{H}^{2}+\frac{\mathsf{i}^{3}}{6}\cdot \mathsf{H}^{3}$ and $\mathsf{HH}_{0}(\mathbb{P}^{3})_{\mathbb{Q}}\cong\mathbb{Q}\oplus \mathbb{Q} \mathsf{H}\oplus \mathbb{Q} \mathsf{H}^{2}\oplus \mathbb{Q} \mathsf{H}^{3}$. The vectors $\mathsf{Ch}(\mathcal{O})$, $\mathsf{Ch}(\mathcal{O}(1))$, $\mathsf{Ch}(\mathcal{O}(2))$, and $\mathsf{Ch}(\mathcal{O}(3))$ are linearly independent (their coefficient matrix $(\mathsf{i}^{\mathsf{j}}/\mathsf{j}!)_{0\leq \mathsf{i},\mathsf{j}\leq 3}$ in the basis $\{1,\mathsf{H},\mathsf{H}^{2},\mathsf{H}^{3}\}$ has nonzero Vandermonde-type determinant) and hence generate $\mathsf{HH}_{0}(\mathbb{P}^{3})_{\mathbb{Q}}$. \end{eg} \vspace{5mm} \section{Noncommutative Hodge conjecture} In this section, we propose the noncommutative Hodge conjecture and prove that it is additive with respect to semi-orthogonal decompositions. We obtain further evidence for the Hodge conjecture from well-understood semi-orthogonal decompositions. Finally, we prove that the noncommutative Hodge conjecture is true for smooth proper connective $\3\dg$ algebras. \par \subsection{Formulation} \begin{defn} Let $\mathcal{A}$ be a small dg category. The Hodge classes of $\mathcal{A}$ are defined as $$\mathsf{Hodge}(\mathcal{A}):=\pi(\mathsf{j}^{-1}(\mathsf{Ch}^{\mathsf{top}}(\mathsf{K}_{0}^{\mathsf{top}}(\mathcal{A})_{\mathbb{Q}})))\subset \mathsf{HH}_{0}(\mathcal{A}).$$ $$\xymatrix{&&\mathsf{HH}_{0}(\mathcal{A})\\ \mathsf{K}_{0}(\mathcal{A})\ar[rru]^{\mathsf{Ch}}\ar[d]\ar[r]_{\mathsf{Ch}}&\mathsf{HN}_{0}(\mathcal{A})\ar[d]^{\mathsf{j}}\ar[ru]_{\pi}&\\ \mathsf{K}^{\mathsf{top}}_{0}(\mathcal{A})\ar[r]^{\mathsf{Ch}^{\mathsf{top}}}&\mathsf{HC}^{\mathsf{per}}_{0}(\mathcal{A})&}$$ \end{defn} \begin{conj}\label{Mainconj} (Noncommutative Hodge conjecture) The Chern character $\mathsf{Ch}: \mathsf{K}_{0}(\mathcal{A})\rightarrow \mathsf{HH}_{0}(\mathcal{A})$ maps $\mathsf{K}_{0}(\mathcal{A})_{\mathbb{Q}}$ surjectively onto the Hodge classes $\mathsf{Hodge}(\mathcal{A})$.
\end{conj} \begin{rem} Note that we obtain abstract rational Hodge classes in $\mathsf{HH}_{0}(\mathcal{A})$. Classically, the Hodge conjecture concerns the weight; however, to the author's knowledge, it is not known how to recover the weight of these abstract Hodge classes. In this paper, we always regard the conjecture as a non-weighted Hodge conjecture. \end{rem} \begin{thm}\label{admissible} Conjecture \ref{Mainconj} is equivalent to the one in A.\ Perry's paper \cite[Conjecture 5.11]{perry2020integral} in the case of admissible subcategories of $\b\mathsf{X}$. \end{thm} \begin{proof} For admissible subcategories of $\b\mathsf{X}$, the Hodge classes are defined as the classes of $\mathsf{Ch}^{\mathsf{top}}(\mathsf{K}_{0}^{\mathsf{top}}(\mathcal{A})_{\mathbb{Q}})$ in $\mathsf{HC}^{\mathsf{per}}_{0}(\mathcal{A})$ that lie in $\mathsf{HH}_{0}(\mathcal{A})$ under the Hodge decomposition \cite{kaledin2016spectral}. The map $\mathsf{j}: \mathsf{HN}_{0}(\mathcal{A})\rightarrow \mathsf{HC}^{\mathsf{per}}_{0}(\mathcal{A})$ is injective by the degeneration of the noncommutative Hodge-to-de Rham spectral sequence.
Choosing a splitting of the Hodge decomposition of $\mathsf{HC}^{\mathsf{per}}_{0}(\mathcal{A})$ (the one in \cite{perry2020integral}), which induces a splitting for $\mathsf{HN}_{0}(\mathcal{A})$, we get a commutative diagram, $$\xymatrix{&&&\mathsf{HH}_{0}(\mathcal{A})\ar[ddd]^{=}\\ \mathsf{K}_{0}(\mathcal{A})\ar[rrru]^{\mathsf{Ch}}\ar[r]\ar[d]&\mathsf{HN}_{0}(\mathcal{A})\ar[rru]_{\pi}\ar[r]^{\cong}_{\mathsf{H}}\ar[d]^{\mathsf{j}}&\oplus_{\mathsf{i}\leq 0} \mathsf{HH}_{2\mathsf{i}}(\mathcal{A})\ar[d]\ar[ru]_{\mathsf{Pr}}&\\ \mathsf{K}^{\mathsf{top}}_{0}(\mathcal{A})\ar[r]&\mathsf{HC}^{\mathsf{per}}_{0}(\mathcal{A})\ar[r]^{\cong}_{\mathsf{H}}&\oplus_{\mathsf{i}} \mathsf{HH}_{2\mathsf{i}}(\mathcal{A})\ar[rd]^{\mathsf{Pr}}&\\ &&&\mathsf{HH}_{0}(\mathcal{A})}$$ Note that the projection $\mathsf{Pr}\circ\mathsf{H}: \mathsf{HN}_{0}(\mathcal{A})\rightarrow \mathsf{HH}_{0}(\mathcal{A})$ is naturally the morphism $\pi$. The Hodge classes defined in \cite{perry2020integral} are exactly the image $\mathsf{Pr}\circ\mathsf{H}(\mathsf{Ch}^{\mathsf{top}}(\mathsf{K}^{\mathsf{top}}_{0}(\mathcal{A})_{\mathbb{Q}})\cap \mathsf{j}(\mathsf{HN}_{0}(\mathcal{A})))$ in $\mathsf{HH}_{0}(\mathcal{A})$. By the commutative diagram, this is exactly the set of classes $\pi(\mathsf{j}^{-1}(\mathsf{Ch}^{\mathsf{top}}(\mathsf{K}^{\mathsf{top}}_{0}(\mathcal{A})_{\mathbb{Q}})))\subset \mathsf{HH}_{0}(\mathcal{A})$. \end{proof} \begin{lem} Let $\mathcal{A}$ be a smooth proper $\3\dg$ category. Then the noncommutative Hodge-to-de Rham spectral sequence degenerates \cite{kaledin2016spectral}. \end{lem} \begin{defn}\label{smpconj} (Hodge conjecture for smooth proper $\3\dg$ categories) Define the Hodge classes in $\mathsf{HH}_{0}(\mathcal{A})$ as $\mathsf{Pr}\circ\mathsf{H}(\mathsf{Ch}^{\mathsf{top}}(\mathsf{K}^{\mathsf{top}}_{0}(\mathcal{A})_{\mathbb{Q}})\cap \mathsf{j}(\mathsf{HN}_{0}(\mathcal{A})))$.
Then the Hodge conjecture asserts that the Chern character $\mathsf{Ch}: \mathsf{K}_{0}(\mathcal{A})\rightarrow \mathsf{HH}_{0}(\mathcal{A})$ maps $\mathsf{K}_{0}(\mathcal{A})_{\mathbb{Q}}$ surjectively onto the Hodge classes. \end{defn} \begin{rem}\label{remconj} This is equivalent to Conjecture \ref{Mainconj} by the same argument as in the proof of Theorem \ref{admissible}. We formulate this version because it is closer to the original formulation of the Hodge conjecture. \end{rem} \par \begin{thm}\label{NHodge} Let $\mathsf{X}$ be a smooth projective variety. The Hodge conjecture for $\mathsf{X}$ $\Leftrightarrow$ the noncommutative Hodge conjecture for $\mathsf{Per}_{\3\dg}(\mathsf{X})$. \end{thm} \begin{proof} The commutative Hodge conjecture claims that the Chern character $\mathsf{Ch} \colon \mathsf{K}_{0}(\mathsf{X})_{\mathbb{Q}}\longrightarrow \bigoplus_{\mathsf{p}}\mathsf{H}^{\mathsf{p},\mathsf{p}}(\mathsf{X},\mathbb{C})$ maps $\mathsf{K}_{0}(\mathsf{X})_{\mathbb{Q}}$ surjectively onto the rational Hodge classes. The noncommutative Hodge conjecture claims that the map $\mathsf{Ch}_{\mathbb{Q}}\colon \mathsf{K}_{0}(\mathsf{X})_{\mathbb{Q}}=\mathsf{K}_{0}(\mathsf{Per}_{\3\dg}(\mathsf{X}))_{\mathbb{Q}}\longrightarrow \mathsf{Hodge}(\mathsf{Per}_{\3\dg}(\mathsf{X}))$ is surjective.
\par There is a commutative diagram$\colon$ $$\xymatrix{\mathsf{HH}_{0}(\mathsf{Per}_{\3\dg}(\mathsf{X}))\ar[ddd]^{\cong}&&\mathsf{K}^{\mathsf{top}}_{0}(\mathsf{Per}_{\3\dg}(\mathsf{X}))\ar[rd]\ar@/_/[ddd]&\\ &\mathsf{K}_{0}(\mathsf{Per}_{\3\dg}(\mathsf{X}))\ar[ru]\ar[lu]^{\mathsf{Ch}}\ar[r]^{\mathsf{Ch}}\ar[d]^{\cong}&\mathsf{HN}_{0}(\mathsf{Per}_{\3\dg}(\mathsf{X}))\ar[llu]^{\pi}\ar[r]^{\hookrightarrow}\ar[d]^{\cong}&\mathsf{HC}_{0}^{\mathsf{per}}(\mathsf{Per}_{\3\dg}(\mathsf{X}))\ar[d]^{\cong}\\ &\mathsf{K}_{0}(\mathsf{X})\ar[rd]\ar[ld]_{\mathsf{Ch}}\ar[r]^{\mathsf{Ch}}&\bigoplus_{\mathsf{i}\leq 0} \mathsf{H}^{\mathsf{p},\mathsf{p}-2\mathsf{i}}(\mathsf{X},\mathbb{C})\ar[lld]^{\pi}\ar[r]^{\hookrightarrow}&\mathsf{H}^{\mathsf{even}}_{\mathsf{dR}}(\mathsf{X},\mathbb{C})\\ \oplus_{\mathsf{p}}\mathsf{H}^{\mathsf{p},\mathsf{p}}(\mathsf{X},\mathbb{C})&&\mathsf{K}^{\mathsf{top}}_{0}(\mathsf{X})\ar[ru]&}$$ We explain the commutative diagram. There is a natural quasi-isomorphism between the double complexes computing periodic cyclic homology, $\mathsf{Tot}^{\bullet,\bullet}(\mathsf{Per}_{\3\dg}(\mathsf{X}))\rightarrow \mathsf{Tot}^{\bullet,\bullet}(\mathsf{R}\Gamma(\oplus \Omega^{\mathsf{i}}_{\mathsf{X}}[\mathsf{i}]))$, which is described by B.\ Keller in \cite{Keller1998}. After identifying $\mathsf{HC}_{0}^{\mathsf{per}}(\mathsf{Per}_{\3\dg}(\mathsf{X}))$ with $\mathsf{H}_{\mathsf{dR}}^{\mathsf{even}}(\mathsf{X},\mathbb{C})$, the noncommutative Chern character becomes the usual Chern character. The reader can refer to C.\ Weibel \cite[Proposition 3.8.1]{K-theory/0046} or \cite[Proposition 4.32]{blanc_2016}. Hence, the noncommutative Chern character maps $\mathsf{K}_{0}(\mathsf{X})_{\mathbb{Q}}$ surjectively to the noncommutative rational Hodge classes if and only if the commutative Chern character maps $\mathsf{K}_{0}(\mathsf{X})_{\mathbb{Q}}$ surjectively to the commutative rational Hodge classes.
\end{proof} \begin{thm}\label{MoritaHodge} Suppose $\mathsf{F}\colon \mathcal{A} \longrightarrow \mathcal{B}$ is a derived Morita equivalence. Then the Hodge conjecture is true for $\mathcal{A}$ if and only if it is true for $\mathcal{B}$. \end{thm} \begin{proof} Topological and algebraic $\mathsf{K}$-theory, Hochschild homology, and periodic (negative) cyclic homology are all additive invariants. We have a commutative diagram, $$\xymatrix{ \mathsf{K}^{\mathsf{top}}_{0}(\mathcal{A})\ar[rddd]\ar[rrr]^{\cong}&&&\mathsf{K}^{\mathsf{top}}_{0}(\mathcal{B})\ar[lddd]\\ &\mathsf{K}_{0}(\mathcal{A})\ar[lddd]_{\mathsf{Ch}}\ar[r]^{\cong}\ar[d]\ar[lu]&\mathsf{K}_{0}(\mathcal{B})\ar[rddd]^{\mathsf{Ch}}\ar[d]\ar[ru]&\\ &\mathsf{HN}_{0}(\mathcal{A})\ar[ldd]^{\pi}\ar[r]^{\cong}\ar[d]&\mathsf{HN}_{0}(\mathcal{B})\ar[d]\ar[rdd]_{\pi}&\\ & \mathsf{HC}_{0}^{\mathsf{per}}(\mathcal{A})\ar[r]^{\cong}&\mathsf{HC}_{0}^{\mathsf{per}}(\mathcal{B})&\\ \mathsf{HH}_{0}(\mathcal{A})\ar[rrr]^{\cong}&&&\mathsf{HH}_{0}(\mathcal{B})}$$ whose rows are isomorphisms. It is clear that any morphism of dg categories induces a morphism of Hodge classes: write $\phi$ for the corresponding morphism from the additive invariants of $\mathcal{A}$ to those of $\mathcal{B}$. Let $\mathsf{x}\in \mathsf{Hodge}(\mathcal{A})$; this implies that there are $\mathsf{x}'\in \mathsf{HN}_{0}(\mathcal{A})$ such that $\pi(\mathsf{x}')=\mathsf{x}$, and $\mathsf{y}\in \mathsf{K}_{0}^{\mathsf{top}}(\mathcal{A})_{\mathbb{Q}}$ such that $\mathsf{j}(\mathsf{x}')=\mathsf{Ch}_{\mathbb{Q}}^{\mathsf{top}}(\mathsf{y})$. Applying $\phi$, we get $\phi(\mathsf{x})=\pi(\phi(\mathsf{x}'))$ and $\mathsf{j}(\phi(\mathsf{x}'))=\mathsf{Ch}_{\mathbb{Q}}^{\mathsf{top}}(\phi(\mathsf{y}))$, that is, $\phi(\mathsf{x})\in \mathsf{Hodge}(\mathcal{B})$. There is a commutative diagram.
$$\xymatrix{\mathsf{K}_{0}(\mathcal{A})\ar[r]^{\cong}\ar[d]^{\mathsf{Ch}}&\mathsf{K}_{0}(\mathcal{B})\ar[d]^{\mathsf{Ch}}\\ \mathsf{Hodge}(\mathcal{A})\ar[r]^{\cong}&\mathsf{Hodge}(\mathcal{B})}$$ The isomorphism of Hodge classes is as follows: take $\mathsf{z}\in \mathsf{Hodge}(\mathcal{B})$; since $\phi$ induces an isomorphism $\mathsf{HH}_{0}(\mathcal{A})\cong \mathsf{HH}_{0}(\mathcal{B})$, there exists a unique $\mathsf{x}\in \mathsf{HH}_{0}(\mathcal{A})$ such that $\phi(\mathsf{x})=\mathsf{z}$. It can be shown that $\mathsf{x}\in \mathsf{Hodge}(\mathcal{A})$ by diagram chasing. \end{proof} \begin{cor}\label{Uniqueenhanced} For a uniquely enhanced triangulated category, we can define its Hodge conjecture via a smooth and proper $\3\dg$ enhancement (if it exists). The Hodge conjecture does not depend on the choice of $\3\dg$ enhancement. \end{cor} \begin{proof} This is because any two $\3\dg$ enhancements of a uniquely enhanced triangulated category are connected by a chain of quasi-equivalences, and the corollary follows from Theorem \ref{MoritaHodge}. \end{proof} \begin{rem} For a smooth projective variety $\mathsf{X}$, $\b\mathsf{X}\cong \mathsf{Perf}(\mathsf{X})$ is a uniquely enhanced triangulated category. Thus, it suffices to check whether the conjecture is true for any pre-triangulated $\3\dg$ enhancement of $\b\mathsf{X}$. \end{rem} \begin{thm}\label{SODHodge} Suppose we have a $\mathsf{SOD}$, $\b\mathsf{X}=\langle \mathcal{A},\mathcal{B} \rangle$. There are natural $\3\dg$ enhancements $\mathcal{A}_{\3\dg}$, $\mathcal{B}_{\3\dg}$ of \mbox{$\mathcal{A}$, $\mathcal{B}$} corresponding to the $\3\dg$ enhancement $\mathsf{Per}_{\3\dg}(\mathsf{X})$ of $\b\mathsf{X}$. $$\text{\it Hodge conjecture for}\ \mathsf{X}\ \Leftrightarrow \text{\it Noncommutative Hodge conjecture for}\ \mathcal{A}_{\3\dg}\ \text{\it and}\ \mathcal{B}_{\3\dg}.$$ \end{thm} \begin{proof} We again write $\mathcal{A}$ and $\mathcal{B}$ for the dg categories given by these natural $\3\dg$ enhancements.
We can lift the semi-orthogonal decomposition to the $\3\dg$ world by \cite[Proposition 4.10]{Kuznetsov2015CategoricalRO}. That is, there is a diagram $$\xymatrix{\mathcal{B}\ar[r]_{\mathsf{i}} &\mathsf{D}\ar[r]_{\mathsf{L}}\ar@/_/[l]_{\mathsf{R}}&\mathcal{A}\ar@/_/[l]_{\mathsf{j}} }$$ where $\mathsf{D}$ is a certain gluing of $\mathcal{A}$ and $\mathcal{B}$ which is quasi-equivalent to $\mathsf{Per}_{\3\dg}(\mathsf{X})$. Therefore, we still have a diagram in which $\mathsf{i}+\mathsf{j}$ induces an isomorphism of $\mathsf{K}$-groups and $\mathsf{i}_{\mathsf{H}}+\mathsf{j}_{\mathsf{H}}$ an isomorphism of Hodge classes$\colon$ $$\xymatrix{\mathsf{K}_{0}(\mathcal{B})\ar[r]_{\mathsf{i}}\ar[d]_{\mathsf{Ch}}&\mathsf{K}_{0}(\mathsf{D})\ar[r]_{\mathsf{L}}\ar@/_/[l]_{\mathsf{R}}\ar[d]_{\mathsf{Ch}}&\mathsf{K}_{0}(\mathcal{A})\ar[d]_{\mathsf{Ch}}\ar@/_/[l]_{\mathsf{j}}\\ \mathsf{Hodge}(\mathcal{B})\ar[r]_{\mathsf{i}_{\mathsf{H}}}&\mathsf{Hodge}(\mathsf{D})\ar[r]_{\mathsf{L}_{\mathsf{H}}}\ar@/_/[l]_{\mathsf{R}_{\mathsf{H}}}&\mathsf{Hodge}(\mathcal{A})\ar@/_/[l]_{\mathsf{j}_{\mathsf{H}}}}$$ Hence $\mathsf{Ch}_{\mathsf{D},\mathbb{Q}}$ maps $\mathsf{K}_{0}(\mathsf{D})_{\mathbb{Q}}$ surjectively to $\mathsf{Hodge}(\mathsf{D})$ if and only if $\mathsf{Ch}_{\mathcal{B},\mathbb{Q}}$ and $\mathsf{Ch}_{\mathcal{A},\mathbb{Q}}$ map $\mathsf{K}_{0}(\mathcal{B})_{\mathbb{Q}}$ and $\mathsf{K}_{0}(\mathcal{A})_{\mathbb{Q}}$ surjectively to $\mathsf{Hodge}(\mathcal{B})$ and $\mathsf{Hodge}(\mathcal{A})$ respectively. But the noncommutative Hodge conjecture is true for $\mathsf{D}$ if and only if the Hodge conjecture is true for $\mathsf{X}$, by Theorem \ref{NHodge} and Theorem \ref{MoritaHodge}. Thus, the statement follows. \end{proof} \begin{rem} Similarly to the geometric case (Theorem \ref{GSODHodge1}), the statement is still true if the $\mathsf{SOD}$ has more than two components.
\end{rem} \par \begin{thm} Let $\mathcal{A}$ be an admissible subcategory of $\b\mathsf{X}$, where $\mathsf{X}$ is a smooth projective variety. \end{thm} We immediately reprove Theorem \ref{GSODHodge1}. \begin{cor}\label{GSODHodge2} Let $\mathsf{X}$ be a smooth projective variety, and suppose there is a $\mathsf{SOD}$, $\b\mathsf{X}= \langle \2\b\mathbb{Z}, \1\b\mathsf{Y}\rangle$. Then the Hodge conjecture is true for $\mathsf{X}$ if and only if it is true for $\mathsf{Z}$ and $\mathsf{Y}$. In particular, the Hodge conjecture is a derived invariant. \end{cor} \begin{proof} According to Theorem \ref{SODHodge}, the Hodge conjecture is true for $\mathsf{X}$ if and only if it is true for the corresponding $\3\dg$ enhancements of $\2\b\mathbb{Z}$ and $\1\b\mathsf{Y}$. Since $\2\b\mathbb{Z}$ and $\1\b\mathsf{Y}$ are uniquely enhanced triangulated categories \cite{Lunts_2010}, the Hodge conjecture is true for $\mathsf{X}$ if and only if it is true for $\mathsf{Z}$ and $\mathsf{Y}$. \end{proof} \begin{cor}\label{Orlovformula} Consider the blow-up $\mathsf{X}$ of $\mathsf{Y}$ along a smooth center $\mathsf{Z}$. According to Orlov's blow-up formula \cite[Theorem 4.2]{Bondal2002DerivedCO}, there is a $\mathsf{SOD}$, $\b\mathsf{X}=\langle \2\b\mathbb{Z},\cdots, \2\b\mathbb{Z}, \1\b\mathsf{Y}\rangle$. Hence the Hodge conjecture is true for $\mathsf{X}$ if and only if it is true for $\mathsf{Z}$ and $\mathsf{Y}$. \end{cor} \begin{rem} This was known by classical methods; one can even write down the $\mathsf{Chow}$ groups of a blow-up explicitly. For details, the reader can refer to the book of C.\ Voisin, ``Hodge theory and complex algebraic geometry $\uppercase\expandafter{\romannumeral2}$'' \cite[Theorem 9.27]{voisin_2003}. \end{rem} \begin{cor}\label{EC2} We reprove Corollary \ref{EC}: suppose $\b\mathsf{X}$ admits a full exceptional collection; then the Hodge conjecture is true for $\mathsf{X}$. \end{cor} For low-dimensional varieties, the Hodge conjecture is a birational invariant.
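For orientation, we record the classical linear-algebra counterpart of Orlov's blow-up formula; this is the standard decomposition (cf.\ \cite[Theorem 9.27]{voisin_2003}), stated here only as a reminder. If $\mathsf{X}$ is the blow-up of $\mathsf{Y}$ along a smooth center $\mathsf{Z}$ of codimension $\mathsf{c}$, then $$\mathsf{CH}^{\mathsf{k}}(\mathsf{X})_{\mathbb{Q}}\cong \mathsf{CH}^{\mathsf{k}}(\mathsf{Y})_{\mathbb{Q}}\oplus \bigoplus_{\mathsf{i}=1}^{\mathsf{c}-1}\mathsf{CH}^{\mathsf{k}-\mathsf{i}}(\mathsf{Z})_{\mathbb{Q}}, \qquad \mathsf{H}^{2\mathsf{k}}(\mathsf{X},\mathbb{Q})\cong \mathsf{H}^{2\mathsf{k}}(\mathsf{Y},\mathbb{Q})\oplus \bigoplus_{\mathsf{i}=1}^{\mathsf{c}-1}\mathsf{H}^{2(\mathsf{k}-\mathsf{i})}(\mathsf{Z},\mathbb{Q}),$$ compatibly with the cycle class maps. The $\mathsf{c}-1$ shifted copies of the invariants of $\mathsf{Z}$ match the $\mathsf{c}-1$ copies of $\2\b\mathbb{Z}$ in Orlov's semi-orthogonal decomposition, which is why the Hodge conjecture transfers along blow-ups component by component.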
We use the following lemma$\colon$ \begin{lem}\label{zigzagBlowu} (\cite[Theorem 0.1.1]{Abramovich1999TorificationAF}) Let $\mathsf{X}$ and $\mathsf{Y}$ be smooth proper varieties. If $\mathsf{X}$ is birational to $\mathsf{Y}$, then there is a chain of blow-ups and blow-downs with smooth centers connecting $\mathsf{X}$ and $\mathsf{Y}$. $$\xymatrix{&\mathsf{X}_{1}\ar@{-->}[ld]\ar@{-->}[rd]&\cdots&\mathsf{X}_{3}\ar@{-->}[ld]\ar@{-->}[rd]&\\ \mathsf{X}&&\mathsf{X}_{2}&&\mathsf{Y}}$$ \end{lem} The following may be well known to experts; see also \cite{meng2019hodge}. Here, we use noncommutative techniques to reprove the results. \begin{thm}\label{4fouldRinvariant} Since the Hodge conjecture is true for varieties of dimension $0$, $1$, $2$ and $3$, the Hodge conjecture is a birational invariant for $4$- and $5$-dimensional varieties. \end{thm} \begin{proof} Combine Corollary \ref{Orlovformula} and Lemma \ref{zigzagBlowu}, and observe that $\mathsf{X}$ and $\mathsf{Y}$ are connected by a chain of blow-ups with smooth centers of dimension at most $3$. \end{proof} \subsection{Application to geometry and examples}\label{section4.2} \par The survey ``Noncommutative counterparts of celebrated conjectures'' \cite[Section 2]{tabuada2019noncommutative} provides many examples of geometric applications of some celebrated conjectures. Those examples also apply to the noncommutative Hodge conjecture. In this subsection, we show some further interesting examples. \par There is a universal functor $$\mathcal{U} \colon \mathsf{dg-cat} \longrightarrow \mathsf{NChow}.$$ We call $\mathcal{U}(\mathcal{A})$ the noncommutative $\mathsf{Chow}$ motive corresponding to $\mathcal{A}$. We write the image of $\mathcal{U}(\mathcal{A})$ in $\mathsf{NChow}_{\mathbb{Q}}$ as $\mathcal{U}(\mathcal{A})_{\mathbb{Q}}$. Similarly to work of G.\ Tabuada, the noncommutative Hodge conjecture is compatible with direct sum decompositions of noncommutative $\mathsf{Chow}$ motives.
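Before stating the theorem, we record a basic example of such a decomposition (a standard computation, included only for orientation). For $\mathbb{P}^{\mathsf{n}}$ with Beilinson's full exceptional collection $\langle \mathcal{O},\mathcal{O}(1),\cdots,\mathcal{O}(\mathsf{n})\rangle$, each exceptional object contributes a copy of the motive of the base field $\mathsf{k}$, so $$\mathcal{U}(\mathsf{Per}_{\3\dg}(\mathbb{P}^{\mathsf{n}}))\cong \mathcal{U}(\mathsf{k})^{\oplus (\mathsf{n}+1)}.$$ Consequently, every additive invariant of $\mathbb{P}^{\mathsf{n}}$ (for instance $\mathsf{HH}_{0}$, $\mathsf{HN}_{0}$, or rational $\mathsf{K}$-theory) splits into $\mathsf{n}+1$ copies of the corresponding invariant of $\mathsf{k}$.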
\begin{thm}\label{SODNMotive} Let $\mathcal{A}$, $\mathcal{B}$ and $\mathcal{C}$ be smooth and proper $\3\dg$ categories. Suppose there is a direct sum decomposition $\mathcal{U}(\mathcal{C})_{\mathbb{Q}}\cong \mathcal{U}(\mathcal{A})_{\mathbb{Q}}\oplus \mathcal{U}(\mathcal{B})_{\mathbb{Q}}$; then the noncommutative Hodge conjecture holds for $\mathcal{C}$ if and only if it holds for $\mathcal{A}$ and $\mathcal{B}$. \end{thm} \begin{proof} This follows from the fact that periodic (negative) cyclic homology and rational (topological or algebraic) $\mathsf{K}$-theory are all additive invariants, and the corresponding target categories are idempotent complete. The proof is similar to that of Theorem \ref{SODHodge}. \end{proof} \begin{eg} Suppose we have a semi-orthogonal decomposition $\mathsf{H}^{0}(\mathcal{C})=\langle \mathsf{H}^{0}(\mathcal{A}),\mathsf{H}^{0}(\mathcal{B})\rangle$; then $\mathcal{U}(\mathcal{C})\cong \mathcal{U}(\mathcal{A})\oplus \mathcal{U}(\mathcal{B})$. \end{eg} \subsubsection{Fractional Calabi--Yau categories} \begin{thm}\label{egCY}(\cite[Theorem 3.5]{Kuznetsov_2019}) Let $\mathsf{X}$ be a hypersurface of degree $\leq \mathsf{n}+1$ in $\mathbb{P}^{\mathsf{n}}$. There is a semi-orthogonal decomposition$\colon$ $$\mathsf{Perf}(\mathsf{X})=\langle \mathcal{T}(\mathsf{X}),\mathcal{O}_{\mathsf{X}},\cdots,\mathcal{O}_{\mathsf{X}}(\mathsf{n}-\mathsf{deg}(\mathsf{X}))\rangle,$$ where $\mathcal{T}(\mathsf{X})$ is a fractional Calabi--Yau category of dimension $\frac{(\mathsf{n}+1)(\mathsf{deg}(\mathsf{X})-2)}{\mathsf{deg}(\mathsf{X})}$. Then $$\mathcal{U}(\mathsf{Per}_{\3\dg}(\mathsf{X}))\cong \mathcal{U}(\mathcal{T}_{\3\dg}(\mathsf{X}))\oplus \mathcal{U}(\mathsf{k})\oplus\cdots \oplus \mathcal{U}(\mathsf{k}).$$ Therefore, the Hodge conjecture for $\mathsf{X}$ $\Leftrightarrow$ the noncommutative Hodge conjecture for $\mathcal{T}_{\3\dg}(\mathsf{X})$. \end{thm} \subsubsection{Twisted scheme.} \begin{defn} Let $\mathsf{X}$ be a scheme with structure sheaf $\mathcal{O}_{\mathsf{X}}$.
Let $\mathcal{A}$ be a sheaf of Azumaya algebras over $\mathsf{X}$. We call the derived category $\mathsf{Perf}(\mathsf{X},\mathcal{A})$ of perfect $\mathcal{A}$-modules a twisted scheme. \end{defn} \begin{thm}\label{egTwistedscheme1} The noncommutative Hodge conjecture for $\mathsf{Per}_{\3\dg}(\mathsf{X},\mathcal{A})$ $\Leftrightarrow$ the noncommutative Hodge conjecture for $\mathsf{Per}_{\3\dg}(\mathsf{X})$. \end{thm} \begin{proof} According to \cite[Theorem 2.1]{tabuadavandenbergh2015}, $\mathcal{U}(\mathsf{Per}_{\3\dg}(\mathsf{X},\mathcal{A}))_{\mathbb{Q}}\cong \mathcal{U}(\mathsf{Per}_{\3\dg}(\mathsf{X}))_{\mathbb{Q}}$. Thus, by Theorem \ref{SODNMotive}, the statement follows. \end{proof} \subsubsection{Cubic fourfold containing a plane.} \begin{eg}\label{egTwistedscheme2} Let $\mathsf{X}$ be a cubic fourfold containing a plane. There is a semi-orthogonal decomposition \cite[Theorem 4.3]{Kuznetsov2010DerivedCO} $$\mathsf{Perf}(\mathsf{X})=\langle\mathsf{Perf}(\mathsf{S},\mathcal{A}),\mathcal{O}_{\mathsf{X}},\mathcal{O}_{\mathsf{X}}(1),\mathcal{O}_{\mathsf{X}}(2)\rangle,$$ where $\mathsf{S}$ is a $\mathsf{K}3$ surface and $\mathcal{A}$ is a sheaf of Azumaya algebras over $\mathsf{S}$. Since the noncommutative Hodge conjecture is true for $\mathsf{Per}_{\3\dg}(\mathsf{S},\mathcal{A})$, which is uniquely enhanced, the Hodge conjecture is true for $\mathsf{X}$. \end{eg} \subsubsection{Quadratic fibration.} \begin{eg}\label{egQF} Let $\mathsf{f}\colon \mathsf{X}\longrightarrow \mathsf{S}$ be a smooth quadratic fibration, for example a smooth quadric in the relative projective space $\mathbb{P}^{\mathsf{n}}_{\mathsf{S}}$. There is a semi-orthogonal decomposition $$\mathsf{Perf}(\mathsf{X})=\langle \mathsf{Perf}(\mathsf{S},\mathsf{Cl}_{0}),\mathsf{Perf}(\mathsf{S}),\cdots,\mathsf{Perf}(\mathsf{S})\rangle,$$ where $\mathsf{Cl}_{0}$ is a sheaf of Azumaya algebras over $\mathsf{S}$ if the dimension $\mathsf{n}$ of the fibers of $\mathsf{f}$ is odd \cite{Kuznetsov2005DerivedCO}.
Thus, the Hodge conjecture for $\mathsf{X}$ $\Leftrightarrow$ the Hodge conjecture for $\mathsf{S}$. Moreover, if $\dim \mathsf{S}\leq 3$, the Hodge conjecture for $\mathsf{X}$ is true. \end{eg} \subsubsection{HP duality}\ Let $\mathsf{X}$ be a smooth projective variety with a morphism $\mathsf{f}\colon \mathsf{X}\longrightarrow \mathbb{P}(\mathsf{V})$. Set $\mathcal{O}_{\mathsf{X}}(1)=\mathsf{f}^{*}\mathcal{O}_{\mathbb{P}(\mathsf{V})}(1)$. Assume there is a $\mathsf{SOD}$ $$\b\mathsf{X}=\langle \mathcal{A}_{0},\mathcal{A}_{1}(1),\cdots,\mathcal{A}_{\mathsf{m}-1}(\mathsf{m}-1)\rangle$$ where $\mathcal{A}_{\mathsf{m}-1}\subset \cdots \subset\mathcal{A}_{1}\subset \mathcal{A}_{0}$. Define $\mathsf{H}:= \mathsf{X}\times _{\mathbb{P}(\mathsf{V})}\mathsf{Q}$, where $\mathsf{Q}$ is the incidence quadric in $\mathbb{P}(\mathsf{V})\times \mathbb{P}(\mathsf{V}^{\ast})$. Then, there is a $\mathsf{SOD}$ $$\mathsf{D}^{\mathsf{b}}(\mathsf{H})=\langle \mathcal{L},\mathcal{A}_{1,\mathbb{P}(\mathsf{V}^{\ast})}(1),\cdots,\mathcal{A}_{\mathsf{m}-1,\mathbb{P}(\mathsf{V}^{\ast})}(\mathsf{m}-1)\rangle.$$ A smooth projective variety $\mathsf{Y}$ with a morphism $\mathsf{g}\colon\mathsf{Y}\longrightarrow \mathbb{P}(\mathsf{V}^{\ast})$ is called a homological projective dual of $\mathsf{X}$ if there is an object $\mathcal{E}\in \mathsf{D}^{\mathsf{b}}(\mathsf{H}\times_{\mathbb{P}(\mathsf{V}^{\ast})}\mathsf{Y})$ which induces an equivalence from $\1\b\mathsf{Y}$ onto~$\mathcal{L}$. \par We refer to \cite[Section 2.3]{kuznetsov2015semiorthogonal} or Kuznetsov's original paper \cite{PMIHES_2007__105__157_0}. Let $(\mathsf{Y},\mathsf{g})$ be an $\mathsf{HP}$ dual of $(\mathsf{X},\mathsf{f})$; then \\ 1. There is a $\mathsf{SOD}$ $$\1\b\mathsf{Y}=\langle \mathcal{B}_{\mathsf{n}-1}(1-\mathsf{n}),\cdots,\mathcal{B}_{1}(-1),\mathcal{B}_{0}\rangle$$ where $\mathcal{B}_{\mathsf{n}-1}\subset \cdots \subset \mathcal{B}_{1}\subset \mathcal{B}_{0}$. Moreover, $\mathcal{A}_{0}\cong \mathcal{B}_{0}$ via a Fourier--Mukai functor. \\ 2.
(Symmetry) $(\mathsf{X},\mathsf{f})$ is an $\mathsf{HP}$ dual of $(\mathsf{Y},\mathsf{g})$. \\ 3. For any subspace $\mathsf{L}\subset \mathsf{V}^{\ast}$, define $\mathsf{X}_{\mathsf{L}}=\mathsf{X}\times_{\mathbb{P}(\mathsf{V})}\mathbb{P}(\mathsf{L}^{\perp})$ and $\mathsf{Y}_{\mathsf{L}}= \mathsf{Y}\times _{\mathbb{P}(\mathsf{V}^{\ast})}\mathbb{P}(\mathsf{L})$. If we assume that they have the expected dimensions, $\dim\mathsf{X}_{\mathsf{L}}=\dim\mathsf{X}-\dim \mathsf{L}$ and $\dim\mathsf{Y}_{\mathsf{L}}=\dim\mathsf{Y}-(\dim \mathsf{V}-\dim \mathsf{L})$, and write $\dim \mathsf{L}=\mathsf{r}$, $\dim \mathsf{V}=\mathsf{N}$, then there are $\mathsf{SOD}$s such that $\mathcal{L}_{\mathsf{X},\mathsf{L}}\cong \mathcal{L}_{\mathsf{Y},\mathsf{L}}\colon$ $$\mathsf{D}^{\mathsf{b}}(\mathsf{X}_{\mathsf{L}})=\langle \mathcal{L}_{\mathsf{X},\mathsf{L}},\mathcal{A}_{\mathsf{r}}(\mathsf{r}),\cdots,\mathcal{A}_{\mathsf{m}-1}(\mathsf{m}-1)\rangle.$$ $$\mathsf{D}^{\mathsf{b}}(\mathsf{Y}_{\mathsf{L}})=\langle \mathcal{B}_{\mathsf{n}-1}(1-\mathsf{n}),\cdots, \mathcal{B}_{\mathsf{N}-\mathsf{r}}(\mathsf{r}-\mathsf{N}),\mathcal{L}_{\mathsf{Y},\mathsf{L}}\rangle.$$ \begin{thm}\label{HPD} We write $\mathsf{Hodge}(\bullet)$ to mean that the (noncommutative) Hodge conjecture is true for a variety (resp.\ a smooth and proper $\3\dg$ category). Then $\mathsf{Hodge}(\mathsf{X})$ $\Leftrightarrow$ $\mathsf{Hodge}(\mathcal{A}_{0})$ $\Leftrightarrow$ $\mathsf{Hodge}(\mathcal{B}_{0})$ $\Leftrightarrow$ $\mathsf{Hodge}(\mathsf{Y})$. If we assume $\mathsf{Hodge}(\mathsf{X})$, then $\mathsf{Hodge}(\mathsf{X}_{\mathsf{L}})\Leftrightarrow \mathsf{Hodge}(\mathsf{Y}_{\mathsf{L}})$.
\end{thm} \begin{proof} The middle equivalence $\mathsf{Hodge}(\mathcal{A}_{0})\Leftrightarrow\mathsf{Hodge}(\mathcal{B}_{0})$ holds because $\mathcal{A}_{0}\cong\mathcal{B}_{0}$ via a Fourier--Mukai functor, and hence there is an isomorphism of the natural $\3\dg$ enhancements $\mathcal{A}_{\3\dg,0}\cong\mathcal{B}_{\3\dg,0}$ in $\mathsf{Hmo}$; see a proof in \cite[Section 9]{bernardara2014semiorthogonal}. Since $\mathcal{L}_{\mathsf{X},\mathsf{L}}\cong \mathcal{L}_{\mathsf{Y},\mathsf{L}}$ via a Fourier--Mukai functor, the statement $\mathsf{Hodge}(\mathsf{X}_{\mathsf{L}})\Leftrightarrow \mathsf{Hodge}(\mathsf{Y}_{\mathsf{L}})$ follows from the same argument. \end{proof} \begin{rem} $\mathsf{HPD}$ can be generalized to a noncommutative version; see the discussion in \cite[Section 3.4]{kuznetsov2015semiorthogonal} or the paper by Alexander Perry, ``Noncommutative homological projective duality'' \cite{PERRY2019877}. \end{rem} \begin{eg}\label{egHPD} One of the nontrivial examples of homological projective duality comes from the Grassmannian--Pfaffian duality. Let $\mathsf{W}$ be an $\mathsf{n}$-dimensional vector space and $\mathsf{X}=\mathsf{Gr}(2,\mathsf{W})$ the Grassmannian of $2$-dimensional subspaces of $\mathsf{W}$. In the projective space $\mathbb{P}(\wedge^{2}\mathsf{W}^{\ast})$ there is a natural filtration, called the Pfaffian filtration$\colon$ $\mathsf{Pf}(2,\mathsf{W}^{\ast})\subset \mathsf{Pf}(4,\mathsf{W}^{\ast})\subset\cdots \subset \mathbb{P}(\wedge^{2}\mathsf{W}^{\ast})$, where $$\mathsf{Pf}(2\mathsf{k},\mathsf{W}^{\ast})=\{\omega\in\mathbb{P}(\wedge^{2}\mathsf{W}^{\ast})\mid\mathsf{rank}(\omega)\leq 2\mathsf{k}\}.$$ The intermediate Pfaffians are no longer smooth; the singular locus of $\mathsf{Pf}(2\mathsf{k},\mathsf{W}^{\ast})$ is $\mathsf{Pf}(2\mathsf{k}-2,\mathsf{W}^{\ast})$.
Classically, it was known that $\mathsf{Y}=\mathsf{Pf}(2\lfloor\frac{\mathsf{n}}{2}\rfloor-2,\mathsf{W}^{\ast})$ is the classical projective dual of $\mathsf{X}=\mathsf{Gr}(2,\mathsf{W})$ via the Pl\"{u}cker embedding. For $\mathsf{n}\leq 7$, the noncommutative categorical resolution of $\mathsf{Pf}(2\lfloor\frac{\mathsf{n}}{2}\rfloor-2,\mathsf{W}^{\ast})$ is the homological projective dual of $\mathsf{Gr}(2,\mathsf{W})$; however, the cases $\mathsf{n}\geq 8$ are not known. The interested reader can refer to the survey \cite[Section 4.4, Conjecture 4.4]{kuznetsov2015semiorthogonal} or Kuznetsov's original paper \cite{Kuznetsov2006HomologicalPD}. \par The known nontrivial Grassmannian--Pfaffian dualities are the cases $\mathsf{n}=6, 7$. In these cases, the Hodge conjecture is true for $\mathsf{X}$ since it has a full exceptional collection, and then the noncommutative Hodge conjecture is true for the noncommutative categorical resolutions of the Pfaffians. However, the Hodge conjecture is trivial for these noncommutative categories, since they automatically have full exceptional collections; alternatively, the geometric resolutions of the Pfaffians are of the form $\mathbb{P}_{\mathsf{Gr}(2,\mathsf{W})}(\mathsf{E})$ \cite[Section 4]{Kuznetsov2006HomologicalPD} for some vector bundle $\mathsf{E}$, which has a full exceptional collection too. \par We expect to obtain a duality of the Hodge conjecture for $\mathsf{X}_{\mathsf{L}}$ and $\mathsf{Y}_{\mathsf{L}}$ when they are smooth and of the expected dimension.
According to the Lefschetz hyperplane theorem, there is a commutative diagram for $2\mathsf{i}\leq \dim \mathsf{X}_{\mathsf{L}}-1\colon$ $$\xymatrix{\mathsf{CH}^{\mathsf{i}}(\mathsf{X}_{\mathsf{L}})_{\mathbb{Q}}\ar[r]&\mathsf{H}^{2\mathsf{i}}(\mathsf{X}_{\mathsf{L}},\mathbb{Q})\\ \mathsf{CH}^{\mathsf{i}}(\mathsf{Gr}(2,\mathsf{W}))_{\mathbb{Q}}\ar[r]^{\cong}\ar[u]&\mathsf{H}^{2\mathsf{i}}(\mathsf{Gr}(2,\mathsf{W}),\mathbb{Q})\ar[u]^{\cong}}$$ Hence the Hodge conjecture is true in weights less than $\dim\mathsf{X}_{\mathsf{L}}$; by the hard Lefschetz isomorphism, it is also true in weights greater than $\dim\mathsf{X}_{\mathsf{L}}$. Thus, if $\dim\mathsf{X}_{\mathsf{L}}$ is odd, the Hodge conjecture for $\mathsf{X}_{\mathsf{L}}$ is true. \par The following examples for $\mathsf{n}=6,7$ are from \cite[Section 10]{Kuznetsov2006HomologicalPD}. \par $\uppercase\expandafter{\romannumeral1}$.\ $\mathsf{n}=6$, $\dim\mathsf{X}_{\mathsf{L}}=8-\dim\mathsf{L}$, $\dim\mathsf{Y}_{\mathsf{L}}=\dim\mathsf{L}-2$. When $\dim\mathsf{L}=6$, the expected dimension of $\mathsf{X}_{\mathsf{L}}$ is $2$ while the expected dimension of $\mathsf{Y}_{\mathsf{L}}$ is $4$. This is the duality between the Pfaffian cubic fourfold and the $\mathsf{K}3$ surface \cite{Kuznetsov2006HomologicalPD}. When $\dim\mathsf{L}=5$, $\dim\mathsf{X}_{\mathsf{L}}=\dim\mathsf{Y}_{\mathsf{L}}=3$ and the Hodge conjecture is true for dimension reasons. When $\dim\mathsf{L}=4$, $\mathsf{Y}_{\mathsf{L}}=\mathsf{Pf}(4,6)\cap \mathbb{P}^{3}$ is a cubic surface, which has a full exceptional collection, while $\mathsf{X}_{\mathsf{L}}=\mathsf{Gr}(2,6)\cap \mathbb{P}^{10}$ is a rational Fano $4$-fold \cite[Section 2.2, Theorem 2.2.1]{XF}. Hence, the Hodge conjecture is true for $\mathsf{X}_{\mathsf{L}}$ by the weak factorization theorem \cite[Theorem 0.1.1]{Abramovich1999TorificationAF}. When $\dim\mathsf{L}=3$, $\dim\mathsf{X}_{\mathsf{L}}=5$ and the Hodge conjecture is true for $\mathsf{X}_{\mathsf{L}}$.
When $\dim\mathsf{L}=2$, $\mathsf{X}_{\mathsf{L}}$ admits a full exceptional collection. We obtain a table.\\ \begin{center} \begin{tabular}{|c|c|c|c|} \hline $\dim\mathsf{L}$ & $\dim\mathsf{X}_{\mathsf{L}}$&$\dim\mathsf{Y}_{\mathsf{L}}$&classically \\ \hline 2&6&0& \\ \hline 3& 5&1&Known\\ \hline 4&4&2& Known,\ $\mathsf{X}_{\mathsf{L}}$\ is\ a\ rational\ Fano\ 4-fold \\ \hline 5&3&3&Known,\ they\ are\ 3-folds\\ \hline 6&2&4&Known,\ $\mathsf{Y}_{\mathsf{L}}$\ is\ a\ cubic\ 4-fold\\ \hline \end{tabular} \end{center} $\uppercase\expandafter{\romannumeral2}$.\ $\mathsf{n}=7$, $\dim\mathsf{X}_{\mathsf{L}}=10-\dim\mathsf{L}$, $\dim\mathsf{Y}_{\mathsf{L}}=\dim\mathsf{L}-4$. For example, take $\dim\mathsf{L}=7$; the expected dimensions of $\mathsf{X}_{\mathsf{L}}$ and $\mathsf{Y}_{\mathsf{L}}$ are both $3$, and the Hodge conjecture is true for them for dimension reasons. When $\dim\mathsf{L}=5$, $\dim\mathsf{X}_{\mathsf{L}}=5$ and the Hodge conjecture is true for $\mathsf{X}_{\mathsf{L}}$. When $\dim\mathsf{L}=6$, $\dim \mathsf{X}_{\mathsf{L}}=4$ and it is a Fano $4$-fold. When $\dim\mathsf{L}=8$, $\dim\mathsf{Y}_{\mathsf{L}}=4$ and it is a Fano $4$-fold. Since Fano varieties are uniruled, the Hodge conjecture is true for Fano $4$-folds \cite{CM78}. When $\dim\mathsf{L}=9$, $\mathsf{Y}_{\mathsf{L}}$ is a Fano $5$-fold, and the Hodge conjecture is true for Fano $5$-folds by \cite{DA}. When $\dim\mathsf{L}=10$, $\mathsf{Y}_{\mathsf{L}}$ admits a full exceptional collection. We obtain a table.
\begin{center} \begin{tabular}{|c|c|c|c|} \hline $\dim\mathsf{L}$ & $\dim\mathsf{X}_{\mathsf{L}}$&$\dim\mathsf{Y}_{\mathsf{L}}$&classically known \\ \hline 5& 5&1&Known,\ since\ $\dim\mathsf{X}_{\mathsf{L}}$\ is\ odd\\ \hline 6&4&2&Known,\ $\mathsf{X}_{\mathsf{L}}$\ is\ a\ Fano\ 4-fold\\ \hline 7&3&3&Known\ for\ dimension\ reasons\\ \hline 8&2&4&Known,\ $\mathsf{Y}_{\mathsf{L}}$\ is\ a\ Fano\ 4-fold\\ \hline 9&1&5&Known,\ $\mathsf{Y}_{\mathsf{L}}$\ is\ a\ Fano\ 5-fold \\ \hline 10&0&6& \\ \hline \end{tabular} \end{center} \begin{rem} We thank Claire Voisin for pointing out to the author the classical result that the Hodge conjecture is true for uniruled $4$-folds \cite{CM78}. Even though most examples here can be proved by classical methods, we hope that the geometry of dual varieties can be used to prove the Hodge conjecture for these examples, see also Conjecture \ref{Conjprojectivedual} below. We leave blanks in the tables since it is not known to the author whether the Hodge conjecture has previously been proved in these cases. \end{rem} $\uppercase\expandafter{\romannumeral3}$.\ For $\mathsf{n}\geq 8$, the $\mathsf{HPD}$ has not been constructed. However, when $\mathsf{n}=10$, there is an interesting picture inspired by mirror symmetry, constructed by E.\ Segal and R.\ P.\ Thomas \cite[Theorem A]{Segal:2014jua}. \par Let $\mathsf{L}$ be a $5$-dimensional subspace of $\wedge^{2}\mathsf{W}^{\ast}$, $\mathsf{L}^{\perp}\subset \wedge^{2}\mathsf{W}$. Write $\mathsf{X}=\mathsf{Gr}(2,10)\subset \mathbb{P}^{44}$ and $\mathsf{Y}=\mathsf{Pf}(8,10)\subset\mathbb{P}^{44}$; $\mathsf{X}_{\mathsf{L}}=\mathbb{P}(\mathsf{L}^{\perp})\cap\mathsf{X}$, $\mathsf{Y}_{\mathsf{L}}=\mathbb{P}(\mathsf{L})\cap\mathsf{Y}$. We choose a general linear subspace $\mathsf{L}$ such that both $\mathsf{X}_{\mathsf{L}}$ and $\mathsf{Y}_{\mathsf{L}}$ are smooth. In particular, $\mathsf{Y}_{\mathsf{L}}$ is a quintic $3$-fold and $\mathsf{X}_{\mathsf{L}}$ is a Fano $11$-fold.
According to E.\ Segal and R.\ P.\ Thomas \cite[Theorem A]{Segal:2014jua}, there is a fully faithful embedding $$\mathsf{D}^{\mathsf{b}}(\mathsf{Y}_{\mathsf{L}})\hookrightarrow \mathsf{D}^{\mathsf{b}}(\mathsf{X}_{\mathsf{L}}).$$ Let $\mathcal{A}$ be the exceptional collection $\{\Sym^{3}\mathsf{S}, \Sym^{2}\mathsf{S},\mathsf{S},\mathcal{O}\}$ in $\mathsf{D}^{\mathsf{b}}(\mathsf{Gr}(2,10))$, where $\mathsf{S}$ is the tautological bundle on $\mathsf{Gr}(2,10)$. It restricts to an exceptional collection in $\mathsf{D}^{\mathsf{b}}(\mathsf{X}_{\mathsf{L}})$ by techniques in \cite{Kuznetsov2006HomologicalPD}. Then $\langle \mathcal{A},\mathcal{A}(1),\cdots,\mathcal{A}(4)\rangle$ is an exceptional collection in $\mathsf{D}^{\mathsf{b}}(\mathsf{X}_{\mathsf{L}})$. These objects are right orthogonal to the image of the above embedding of $\mathsf{D}^{\mathsf{b}}(\mathsf{Y}_{\mathsf{L}})$, see the description in \cite[Remark 3.8]{Segal:2014jua}. The Hochschild homology groups are $\mathsf{HH}_{0}(\mathsf{X}_{\mathsf{L}})\cong \mathbb{C}^{24}$ and $\mathsf{HH}_{0}(\mathsf{Y}_{\mathsf{L}})\cong \mathbb{C}^{4}$. Therefore, the $0^\text{th}$ Hochschild homology of the right orthogonal complement of $\langle \mathcal{A},\mathcal{A}(1),\cdots,\mathcal{A}(4),\mathsf{D}^{\mathsf{b}}(\mathsf{Y}_{\mathsf{L}})\rangle$ is trivial, since $24-5\times 4-4=0$. Thus, the Hodge conjecture for $\mathsf{X}_{\mathsf{L}}$ follows from the additive theory. \par Inspired by the examples above, we expect that even though we do not have $\mathsf{HPD}$, the duality of the Hodge conjecture between linear sections of the dual varieties can be proved by classical methods. \end{eg} \begin{conj}\label{Conjprojectivedual} Let $\mathsf{X}\subset\mathbb{P}(\mathsf{V})$ be a smooth projective variety. Suppose the Hodge conjecture is true for $\mathsf{X}$. Let $\mathsf{Y}\subset\mathbb{P}(\mathsf{V}^{\ast})$ be the projective dual of $\mathsf{X}\subset\mathbb{P}(\mathsf{V})$. Choose a linear subspace $\mathsf{L}\subset\mathsf{V}^{\ast}$.
Suppose the linear sections $\mathsf{X}_{\mathsf{L}}=\mathsf{X}\cap \mathbb{P}(\mathsf{L}^{\perp})$ and $\mathsf{Y}_{\mathsf{L}}=\mathsf{Y}\cap \mathbb{P}(\mathsf{L})$ are both of expected dimension and smooth. Then, the Hodge conjecture for $\mathsf{X}_{\mathsf{L}}$ is equivalent to the Hodge conjecture for $\mathsf{Y}_{\mathsf{L}}$. \end{conj} \subsection{Connective dg algebras} In this subsection, we prove that the noncommutative Hodge conjecture is true for smooth and proper connective $\3\dg$ algebras. \begin{defn} $\mathsf{A}$ is called a connective $\3\dg$ algebra if $\mathsf{H}^{\mathsf{i}}(\mathsf{A})=0$ for $\mathsf{i}> 0 $. \end{defn} \begin{thm}\label{propersmoothconectivealgebraHodge} If $\mathsf{A}$ is a smooth and proper connective $\3\dg$ algebra, the noncommutative Hodge conjecture is true for $\mathsf{A}$. \end{thm} \begin{proof} According to recent work of Theo Raedschelders and Greg Stevenson \cite[Corollary 4.3, Theorem 4.6]{raedschelders2020proper}, $\mathcal{U}(\mathsf{A})_{\mathbb{Q}}\cong\mathcal{U}(\mathsf{H}^{0}(\mathsf{A})/\mathsf{Jac}(\mathsf{H}^{0}(\mathsf{A})))_{\mathbb{Q}}\cong \oplus \mathcal{U}(\mathbb{C})_{\mathbb{Q}}$. Hence, the noncommutative Hodge conjecture is true for smooth and proper connective $\3\dg$ algebras. In particular, it is true for smooth and proper algebras concentrated in degree $0$. \end{proof} \par We provide another, more computational proof for smooth and proper algebras. Clearly, proper algebras are finite dimensional algebras. Due to R.\ Rouquier \cite[section 7]{rouquier_2008}, $\mathsf{Pdim}_{\mathsf{A}^{\mathsf{e}}}(\mathsf{A})= \mathsf{Pdim}(\mathsf{A})$, so smooth algebras are algebras of finite global dimension. Consider an acyclic quiver $\mathsf{Q}$ with finitely many vertices. Let $\mathsf{A}:= \mathsf{kQ/I}$ be the quiver algebra with relations, where $\mathsf{kQ}$ is the path algebra of $\mathsf{Q}$. Then, $\mathsf{A}$ is a smooth and proper algebra, and the noncommutative Hodge conjecture is true for $\mathsf{A}$.
\begin{thm}\label{algebra} Let $\mathsf{A}= \mathsf{kQ/I}$. Consider the natural Chern character map $$\mathsf{Ch}\colon \mathsf{K}_{0}(\mathsf{A})\longrightarrow \mathsf{HH}_{0}(\mathsf{A}).$$ Then, $\Im\mathsf{Ch}_{\mathbb{Q}}\otimes \mathbb{C}= \mathsf{HH}_{0}(\mathsf{A})$. In particular, the noncommutative Hodge conjecture is true for $\mathsf{A}$. \end{thm} \begin{proof} Firstly, for the algebra $\mathsf{A}$, $\mathsf{HH}_{0}(\mathsf{A})\cong \mathsf{A}/[\mathsf{A},\mathsf{A}]\cong \mathsf{k}\langle \mathsf{e}_{1},\mathsf{e}_{2},\cdots, \mathsf{e}_{\mathsf{n}}\rangle$, where $\mathsf{e}_{\mathsf{i}}$ is the idempotent at the $\mathsf{i}$-th vertex of the quiver $\mathsf{Q}$. We write $\mathsf{S}_{\mathsf{i}}=\mathsf{A}\cdot \mathsf{e}_{\mathsf{i}}$, considered as a left $\mathsf{A}$ module, with $[\mathsf{S}_{\mathsf{i}}]\in \mathsf{K}_{0}(\mathsf{A})$. We prove that $\mathsf{Ch}([\mathsf{S}_{\mathsf{i}}])= \mathsf{e}_{\mathsf{i}}$. According to the paper of McCarthy, ``Cyclic homology of an exact category'' \cite[section 2]{MCCARTHY1994251}, there is a natural identification of Hochschild homology$\colon$ $$\bigoplus_{\mathsf{n}} \mathsf{Hom}_{\mathsf{A}}(\mathsf{A},\mathsf{A})\otimes \cdots \otimes\mathsf{Hom}_{\mathsf{A}}(\mathsf{A},\mathsf{A})\longrightarrow \bigoplus_{\mathsf{X},\mathsf{Y},\mathsf{n}} \mathsf{Hom}_{\mathsf{A}}(\mathsf{X},\mathsf{E}_{1})\otimes\cdots \otimes \mathsf{Hom}_{\mathsf{A}}(\mathsf{E}_{\mathsf{n}},\mathsf{Y}).$$ This is a natural quasi-isomorphism; the left hand side is exactly the bar complex of $\mathsf{A}$, and $\mathsf{X}$ and $\mathsf{Y}$ are both projective left $\mathsf{A}$ modules. Under this identification, the Chern character of a projective $\mathsf{A}$ module $[\mathcal{P}]$ is the homology class of $\mathsf{id}_{\mathcal{P}}$ in the right hand side complex.
Consider the local picture$\colon$ $$\mathsf{Bar}\colon \mathsf{Hom}_{\mathsf{A}}(\mathsf{S}_{\mathsf{i}},\mathsf{A})\otimes \mathsf{Hom}_{\mathsf{A}}(\mathsf{A}, \mathsf{S}_{\mathsf{i}})\longrightarrow \mathsf{Hom}_{\mathsf{A}}(\mathsf{S}_{\mathsf{i}},\mathsf{S}_{\mathsf{i}})\oplus \mathsf{Hom}_{\mathsf{A}}(\mathsf{A},\mathsf{A}).$$ Let $\mathsf{f} \in \mathsf{Hom}(\mathsf{S}_{\mathsf{i}},\mathsf{A})$ be the natural inclusion and let $\mathsf{e}_{\mathsf{i}}\in \mathsf{Hom}_{\mathsf{A}}(\mathsf{A},\mathsf{S}_{\mathsf{i}})$ be multiplication by $\mathsf{e}_{\mathsf{i}}$. Then $\mathsf{Bar}(\mathsf{f}\otimes \mathsf{e}_{\mathsf{i}})=\mathsf{id}_{\mathsf{S}_{\mathsf{i}}}-\mathsf{e}_{\mathsf{i}}$. Therefore, $[\mathsf{e}_{\mathsf{i}}]=[\mathsf{id}_{\mathsf{S}_{\mathsf{i}}}]$ in $\mathsf{HH}_{0}(\mathsf{Proj}\ \mathsf{A})$, and hence $\mathsf{Ch}([\mathsf{S}_{\mathsf{i}}])= [\mathsf{e}_{\mathsf{i}}]$. Since the classes $[\mathsf{e}_{\mathsf{i}}]$ span $\mathsf{HH}_{0}(\mathsf{A})$, we obtain $\Im\mathsf{Ch}_{\mathbb{Q}}\otimes \mathbb{C}= \mathsf{HH}_{0}(\mathsf{A})$; since $\Im\mathsf{Ch}_{\mathbb{Q}} \subset \mathsf{HH}_{0,\mathbb{Q}}(\mathsf{A})$, it follows that $\Im\mathsf{Ch}_{\mathbb{Q}} = \mathsf{HH}_{0,\mathbb{Q}}(\mathsf{A})$. \end{proof} A finite dimensional algebra $\mathsf{A}$ is (derived) Morita equivalent to an elementary algebra, which is isomorphic to $\mathsf{kQ/I}$ for some quiver $\mathsf{Q}$. Clearly $\mathsf{kQ/I}$ is smooth and proper if $\mathsf{A}$ is smooth and proper. Then, according to Theorem \ref{algebra}, the noncommutative Hodge conjecture is true for any smooth and finite dimensional algebra $\mathsf{A}$. \begin{rem} A.\ Perry pointed out to the author that if $\mathsf{A}$ is a smooth and proper algebra, $\mathsf{Perf}(\mathsf{A})$ can be realized as an admissible subcategory of $\mathsf{Perf}(\mathsf{X})$ for some smooth projective variety $\mathsf{X}$ that admits a full exceptional collection, by Orlov \cite[section 5.1]{Orlov2016SMOOTHAP}. Therefore, the noncommutative Hodge conjecture for $\mathsf{A}$ is true.
\end{rem} \par Classically, given any smooth projective variety $\mathsf{X}$, there is a compact generator $\mathsf{E}$ of $\mathsf{D}_{\mathsf{Qch}}(\mathsf{X})$. We replace $\mathsf{E}$ by an injective resolution, which we again denote by $\mathsf{E}$. Denote $\mathsf{A}=\mathsf{Hom}_{\3\dg}(\mathsf{E},\mathsf{E})$; then there is an equivalence $\mathsf{D}^{\mathsf{per}}(\mathsf{A})\cong \mathsf{Perf}(\mathsf{X})$ and a chain of derived Morita equivalences between $\mathsf{Per}_{\3\dg}(\mathsf{A})$ and $\mathsf{Per}_{\3\dg}(\mathsf{X})$. Thus, the commutative Hodge conjecture for $\mathsf{X}$ is equivalent to the noncommutative Hodge conjecture for the $\3\dg$ algebra $\mathsf{A}$. By the results above, if $\mathsf{A}$ is a smooth and finite dimensional algebra, then the Hodge conjecture for $\mathsf{A}$ is true. \begin{defn} Let $\mathsf{X}$ be a projective smooth variety. An object $\mathsf{T}$ is called a tilting sheaf if the following properties hold$\colon$ \\ (1) $\mathsf{T}$ classically generates $\b\mathsf{X}$.\\ (2) $\mathsf{A}:=\mathsf{Hom}(\mathsf{T},\mathsf{T}) $ is of finite global dimension. \\ (3) $\mathsf{Ext}^{k}(\mathsf{T},\mathsf{T})=0$ for $k> 0$. \par The reader can refer to Alastair Craw's note, ``Explicit methods for derived categories of sheaves'' \cite{craw}, for more discussion. \end{defn} Due to Van den Bergh, there are many examples of varieties which admit a tilting bundle. \begin{eg}(Van den Bergh \cite[Theorem A]{2002math......7170V}) Suppose there is a projective morphism $\mathsf{f}:\mathsf{X}\longrightarrow \mathsf{Y}=\mathsf{Spec}\ \mathsf{R}$ between noetherian schemes such that $\mathsf{Rf}_{\ast}(\mathcal{O}_{\mathsf{X}})\cong \mathcal{O}_{\mathsf{Y}}$ and the fibers are at most one dimensional. Then there is a tilting bundle $\mathcal{E}$ on $\mathsf{X}$. \end{eg} \begin{cor}\label{Tiltingsheaf} Suppose $\mathsf{X}$ admits a tilting sheaf. Then the Hodge conjecture for $\mathsf{X}$ is true. \end{cor} \begin{proof} Let $\mathsf{T}$ be a tilting sheaf of $\mathsf{X}$.
We replace $\mathsf{T}$ by an injective resolution, again denoted $\mathsf{T}$. Define $\mathsf{A}:= \mathsf{Hom}_{\3\dg}(\mathsf{T},\mathsf{T})$, which is quasi-isomorphic (hence derived Morita equivalent) to a smooth and finite dimensional algebra. Thus, the Hodge conjecture for $\mathsf{X}$ is true. \end{proof}
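To make Corollary \ref{Tiltingsheaf} concrete, we spell out the simplest case. The example is classical (it goes back to Beilinson) and is included only as an illustration; the conclusion is of course trivially known. \begin{eg} Let $\mathsf{X}=\mathbb{P}^{1}$ and $\mathsf{T}=\mathcal{O}\oplus\mathcal{O}(1)$. Then $\mathsf{T}$ is a tilting bundle, and $$\mathsf{A}=\mathsf{Hom}(\mathsf{T},\mathsf{T})\cong \mathsf{kQ},$$ where $\mathsf{Q}$ is the Kronecker quiver with two vertices and two parallel arrows. Since $\mathsf{Q}$ is acyclic, every arrow $\mathsf{a}$ satisfies $\mathsf{a}=[\mathsf{a},\mathsf{e}_{1}]\in[\mathsf{A},\mathsf{A}]$ (with the composition convention $\mathsf{a}=\mathsf{e}_{2}\mathsf{a}\mathsf{e}_{1}$), so $$\mathsf{HH}_{0}(\mathsf{A})\cong \mathsf{k}\langle \mathsf{e}_{1},\mathsf{e}_{2}\rangle\cong\mathbb{C}^{2},$$ which matches $\mathsf{H}^{0,0}(\mathbb{P}^{1})\oplus \mathsf{H}^{1,1}(\mathbb{P}^{1})$. By Theorem \ref{algebra}, the classes $\mathsf{Ch}([\mathsf{S}_{1}])$ and $\mathsf{Ch}([\mathsf{S}_{2}])$ span $\mathsf{HH}_{0}(\mathsf{A})$, recovering the Hodge conjecture for $\mathbb{P}^{1}$. \end{eg}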
\section{Introduction} A common task in molecular simulation is ensuring that observables of the simulation match experimentally measured values. For example, in simulations of protein structure\cite{Lindorff-Larsen2010} or liquids\cite{Shivakumar2010}, quantitative agreement with experiments is the standard for assessing correctness of a model. When there is no quantitative agreement, changing the potential energy function or adding additional components to the simulation are possible ways to improve the fit. Making such changes can be an ambiguous and challenging process, especially if the potential energy function has multiple terms that can be modified. Minimal biasing techniques are a class of methods that modify a potential energy function to improve quantitative agreement with experimental values while minimizing the change in the potential energy function. The definition of ``minimal'' and the way the potential energy function is modified vary from method to method. Recent reviews of minimal biasing methods can be found in \citet{Sormanni2017}, \citet{bonomi2017}, and \citet{Olsson2013}. Table 1 in \citet{bonomi2017} provides an overview of 28 minimal biasing methods, categorizing them by whether they maximize entropy, maximize parsimony, or use Bayesian inference. These three categories correspond broadly to the criteria used to ensure that the biasing function introduces a \textit{minimal} change to the potential energy function. This review focuses on two techniques developed by the authors that are categorized as maximum entropy methods: experiment directed simulation\cite{white2014efficient} (EDS) and experiment directed metadynamics\cite{White2015b} (EDM). EDS is for matching ensemble average scalars and EDM is for matching free energy surfaces (probability distributions of observables). EDS, like other minimal biasing methods, modifies a potential energy function to change ensemble averages of observables to match a specific value.
These observables are typically equivalent to collective variables, but are intended to be only those that are experimentally verifiable quantities. What separates EDS from other methods is that it does not use replicas and can be used to construct a continuous NVE trajectory. For example, the Bayesian landscape tilting method of \citet{Beauchamp2014} relies on post-processing so that there cannot be a continuous NVE trajectory. Another example is the replica method of \citet{Lindorff-Larsen2005} which relies on replica-exchange of biasing forces and thus cannot result in a continuous trajectory. This does not mean EDS is ``better''; indeed these two methods appear to provide better sampling and scaling than EDS. However, the ability to compute an NVE trajectory allows dynamic observables like hydrogen-bonding lifetimes to be computed. The key results from EDS have been to improve thermodynamic observables and indirectly improve dynamic observables. For example, EDS was recently used to create a state-of-the-art DFT water model that gives near perfect agreement with X-ray scattering results, water diffusivity, and proton-hopping behavior by improving only the water oxygen-oxygen coordination number\cite{White2017}. EDM is a maximum entropy method that matches ensemble probability distributions, or equivalently free energy surfaces, using prescribed functions. So far, EDM has proved most useful for matching radial distribution functions (RDFs), e.g. to then do coarse-grained modeling. EDM is less often used because it is rare to have experimental data that gives a probability distribution, other than a normal distribution (which is better treated with methods like in \citet{Hummer2015}). Other groups have since arrived at the same approach as EDM and typically it is now called ``targeted metadynamics'' because it uses the method of metadynamics to arrive at a target free energy surface\cite{Marinelli2015,Gil-Ley2016}.
EDM is also equivalent to variationally enhanced sampling metadynamics\cite{valsson2014variational}, although there the target is a tool to improve sampling, and the biased simulation is not the goal. The name EDM was chosen by the authors because at the time there were new metadynamics methods that could target a collective variable domain\cite{Dama2015}. Now the term ``targeted'' seems to apply exclusively to target distributions, and EDM is probably best described as falling under the umbrella of targeted metadynamics methods. Here we review the maximum entropy derivations that underpin both EDS and EDM, describe some of the recent research benefiting from these methods, and discuss implementation details such as understanding the effect of EDS and EDM on the virial and accounting for uncertainty in the experimental data. \section{Theory} EDS minimally modifies an ensemble so that the ensemble average of some scalar matches a desired value, such as a value obtained from an experiment. The EDS bias is minimal because the resulting biased ensemble maximizes entropy\cite{Pitera2012, PhysRev.106.620}. Maximum entropy and minimum relative entropy are equivalent approaches to derive these minimal biasing equations\cite{Roux2013c}. We will use the minimum relative entropy approach because it has an intuitive interpretation as a distance metric. Figure~\ref{fig:eds_phase_space} illustrates how the biased ensemble is one of many choices that matches our constraints, but is as close as possible to the unbiased ensemble as measured via relative entropy. Consider an unbiased potential energy function, $U(\vec{r})$, which has a probability distribution of $P(\vec{r})$ under the NVT ensemble, following the Boltzmann distribution: $P(\vec{r})\propto \exp\left(-\beta U(\vec{r})\right)$. $\vec{r}$ is the set of coordinate vectors of the $N$ particles of the ensemble and $\beta = \frac{1}{kT}$. $k$ is the Boltzmann constant and $T$ is absolute temperature. 
We would like to find a biased ensemble $P'(\vec{r})$ which is as similar as possible to the unbiased ensemble. Similarity can be defined via the relative entropy: \begin{equation} \Delta S_{\textrm{rel}} = \int d\vec{r}\, P'(\vec{r})\ln \frac{P'(\vec{r})}{P(\vec{r})} \end{equation} where $P'(\vec{r})$ is the biased ensemble probability distribution, and the integral is taken over all coordinates. Having a lower $\Delta S_{\textrm{rel}}$ means that the biased and unbiased ensembles are more similar. The biased ensemble should also have an observable average that matches a target value, which could be obtained from an experiment. This constraint is represented as \begin{equation} \label{eq:eds-constraint} \left<s(\vec{r})\right> = \int d\vec{r}\, P'(\vec{r})s(\vec{r}) = \hat{s} \end{equation} where $s(\vec{r})$ is an instantaneous value for our observable (collective variable) which we are matching to $\hat{s}$, the desired scalar value. This is sometimes called a forward model and depends only on positions. We will relax this assumption below. \begin{figure} \centering \begin{tikzpicture}[scale=.5] \filldraw[color=red!60, fill=red!5, very thick](0,0) circle (3); \draw[gray, thick] (2.1213203,2.1213203)--(5,4); \node [red, pin=right:$P^{'}(\vec{r})$] at (2.1213203,2.1213203){\textbullet}; \node [red, pin=right:$P(\vec{r})$] at (5,4) {\textbullet}; \draw [decorate,decoration={brace,amplitude=10pt},xshift=-5pt,yshift=0pt] (2.3213203,2.3213203) -- (5,4) node [black,midway,xshift=-3pt,yshift=19pt]{$\Delta S_{\textrm{rel}}$}; \coordinate (A) at (-0.1,-0.2); \node at (A) [above = 1mm of A] {distributions with}; \coordinate (A) at (-0.2,-1.2); \node at (A) [above = 1mm of A] {$\langle s(\vec{r})\rangle=\hat{s}$ }; \end{tikzpicture} \caption{A schematic of the minimum relative entropy derivation. $P(\vec{r})$ is the unbiased probability distribution from the unbiased potential energy.
We are finding a biased probability distribution, $P'(\vec{r})$, that is consistent with Equation~\ref{eq:eds-constraint}. There is a hypersurface of possible such probability distributions. With the condition that we minimize relative entropy, or ``distance'' in this schematic, we find a unique point on the hypersurface, $P'(\vec{r})$. } \label{fig:eds_phase_space} \end{figure} These equations represent (1) a constraint (to have our average simulation value match the target $\hat{s}$), and (2) a scalar to minimize (the relative entropy). When these conditions are present, we can find the optimal value via the method of Lagrange multipliers. The Lagrangian is: \begin{equation} \label{eq:lagrange} \mathcal{L}\left[\lambda, P(\vec{r}), P'(\vec{r})\right] = \Delta S_{\textrm{rel}} - \lambda\left(\int d\vec{r}\, P'(\vec{r})s(\vec{r}) - \hat{s}\right) \end{equation} Equation~\ref{eq:lagrange}, together with the normalization constraint $\int d\vec{r}\, P'(\vec{r}) = 1$, can be minimized by setting the functional derivative to zero: $\frac{\delta \mathcal{L}}{\delta P'(\vec{r})} = 0$. Solving for $P'(\vec{r})$ gives this expression for the biased ensemble: \begin{equation} P'(\vec{r}) = \frac{1}{Z'}e^{-\beta (U(\vec{r}) + \lambda s(\vec{r}))} \end{equation} where $Z'$ is a normalization constant. This gives an expression for the biased potential energy as $U'(\vec{r}) = U(\vec{r}) + \lambda s(\vec{r})$. This result shows that if the bias is \textit{linear} in the instantaneous observable, the ensemble is minimally biased. This derivation can be repeated for multiple dimensions\cite{Pitera2012} and for functions instead of scalars\cite{White2015b}. The general result is: \begin{equation} \label{eq:bias} U'(\vec{r}) = U(\vec{r}) + \sum_i \lambda_i s_i(\vec{r}) + \sum_j \mu_j\left[v(\vec{r})\right] \end{equation} where $i$ is the index of ensemble averages that are matched to a desired value and $j$ is the index of free energy surfaces that are matched to desired functions.
$\mu_j\left[v(\vec{r})\right]$ is a bias added that depends on $v(\vec{r})$, another instantaneous observable, and causes the biased ensemble to match a specific distribution in the free energy surfaces $F_t\left[v(\vec{r})\right]$. Specifically, \begin{equation} \int d\vec{r}\delta(v(\vec{r}) - v')P'(\vec{r}) = q(v') \end{equation} where $\delta$ is the Dirac delta function and $q(v')$ is a desired probability distribution for $v(\vec{r})$ (i.e., $F_t\left[v(\vec{r})\right] = -\frac{1}{\beta}\ln q\left[v(\vec{r})\right]$). $q(v)$ could be obtained for example from a scattering or FRET experiment\cite{Boura2011}. The derivation above gives the form of the biased potential energy that is minimally biased, but it does not enable calculation of the Lagrange multipliers. That is the purpose of the EDS and EDM methods. EDS and EDM are time-dependent methods that change the potential energy of a simulation to arrive at the bias in Equation~\ref{eq:bias}. EDS is intended to be used in a two-step process: finding the Lagrange multiplier (adaptive) and then running a standard MD simulation using the modified force field given by Equation~\ref{eq:bias}. During the adaptive phase of EDS, the Lagrange multipliers are called coupling constants to distinguish them from the time-independent Lagrange multipliers. These are indicated as $\alpha_\tau$ where $\tau$ is a discrete step index. The Lagrange multipliers are set to be the average of $\alpha_\tau$.
$\alpha_\tau$ is defined as \begin{equation} \label{eq:eds-update} \alpha_{\tau + 1} = \alpha_\tau + \eta_\tau g_\tau \end{equation} \begin{equation} \label{eq:eds-gradient} g_\tau = \frac{2\beta}{w}\left(\left<s\right>_\tau - \hat{s}\right)\left(\left<s^2\right>_\tau - \left<s\right>^2_\tau\right) \end{equation} where $w$ is an arbitrary constant used to ensure unit homogeneity, $\left<\cdot\right>_\tau$ is the ensemble average between step $\tau - 1 $ and $\tau$, and $\eta_\tau$ is \begin{equation} \eta_\tau = \frac{A}{\sqrt{\sum_i^{\tau} g_i^2}} \end{equation} where $A$ is a user-defined constant that controls the size of the first step. The point of $\eta_\tau$ is to reduce the size of the steps over time. Note that because $g_{\tau}$ itself appears in the sum, the first update always has magnitude $A$. Typically the gradients should be clipped for stability and $A$ is a natural upper/lower bound for clipping. Thus a user of this method must choose $A$ and the time between updates. This update procedure is derived in \citet{white2014efficient} and is based on per-coordinate infinite horizon stochastic gradient descent\cite{McMahan2010}. \citet{Hocky2017} recently found that Equation~\ref{eq:eds-gradient} can be modified to use covariance in multiple dimensions to improve convergence and that replacing Equation~\ref{eq:eds-update} with Levenberg--Marquardt optimization further improves convergence. The EDM method finds the $\mu\left[v(\vec{r})\right]$ bias function in Equation~\ref{eq:bias}. For compactness, we will now omit the dependence on $\vec{r}$. Unlike EDS, EDM consists of a single phase where $\mu(v)$ changes less over time.
The update equation is \begin{equation} \label{eq:edm-update} \mu(v)_{\tau+1} = \mu(v)_\tau + \frac{1}{q(v_\tau)}\exp({-\theta(v_\tau, v)_\tau})G(v, v_\tau) \end{equation} where $v_\tau$ is the value of $v(\vec{r})$ at time $\tau$, $G$ is a kernel function (e.g., Gaussian), $q(v_\tau)$ is the probability at position $v_\tau$ from the target probability distribution and $\theta(v_\tau, v)_\tau$ is a function which controls convergence. Following the above update rule causes the simulation to converge to the following distributions\cite{White2015b, Dama2014a}, depending on the choice of $\theta$: \begin{equation} \label{eq:edm-converge} \theta(v_\tau, v)_\tau =\left\{\begin{array}{lr} 1, & \textrm{non-convergent} \\ \hat{\mu}_\tau, & q(v)\\ \beta \Delta T \mu(v_\tau)_\tau / T, & q(v)^{\Delta T / (\Delta T + T)} P(v)^{T / (\Delta T + T)} \end{array}\right. \end{equation} where $\Delta T$ is the ``tempering factor''\cite{White2015b}, $P(v)$ is the marginal unbiased distribution for $v(\vec{r})$, and $\hat{\mu}$ is the total or average of all previous $\mu_\tau$. The first condition, $\theta = 1$, is similar to normal metadynamics; it reaches a distribution similar to $q(v)$ but may oscillate due to a lack of dampening in the update size. Condition two, called globally tempered, converges correctly to $q(v)$. The last condition, locally tempered, converges to an adjustable mixture of the unbiased and target distribution. EDM is traditionally implemented as globally tempered, so that the final distribution is indeed the target. The first condition is good for tuning parameters, since it makes progress more quickly. The final condition, locally tempered, allows a pseudo-Bayesian tuning of prior belief in the unbiased ensemble. It is pseudo-Bayesian because the ratio of influence from the prior belief to evidence is computed, not set, in Bayesian modeling.
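The EDS update rule above can be made concrete with a minimal numerical sketch. The following is our illustration (not a reference implementation) of Equations~\ref{eq:eds-update} and \ref{eq:eds-gradient} on a toy one-dimensional harmonic system with $U(r)=\frac{1}{2}kr^{2}$ and $s(r)=r$, where the biased ensemble $\propto e^{-\beta(U+\alpha s)}$ is Gaussian and can be sampled exactly in place of running molecular dynamics; all parameter values are arbitrary choices for the demonstration.

```python
import numpy as np

# Toy EDS run: harmonic potential U(r) = 0.5*k*r**2 with collective
# variable s(r) = r. The biased ensemble exp(-beta*(U + alpha*s)) is
# Gaussian with mean -alpha/k and variance 1/(beta*k), so it can be
# sampled exactly instead of running MD. Parameter values are illustrative.
rng = np.random.default_rng(0)
beta, k = 1.0, 2.0        # inverse temperature and spring constant
s_hat = 0.5               # target ensemble average <s>
A, w = 1.0, 1.0           # first-step size and unit-fixing constant
n_per_update = 2000       # "simulation" samples between coupling updates

alpha, grad_sq_sum, alphas = 0.0, 0.0, []
for tau in range(200):
    # sample the biased ensemble and estimate <s> and Var(s)
    samples = rng.normal(-alpha / k, (beta * k) ** -0.5, n_per_update)
    mean, var = samples.mean(), samples.var()
    # stochastic gradient g_tau and shrinking step size eta_tau
    g = (2.0 * beta / w) * (mean - s_hat) * var
    grad_sq_sum += g * g
    alpha += (A / np.sqrt(grad_sq_sum)) * g
    alphas.append(alpha)

# the Lagrange multiplier is taken as the average coupling constant
# after an equilibration period; analytically it is -k*s_hat here
lam = np.mean(alphas[100:])
```

For this model the exact multiplier is $\lambda=-k\hat{s}$, so the run should settle near that value, with the biased batch mean near $\hat{s}$; in a real EDS simulation the exact sampling step is replaced by MD between coupling updates.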
An important consideration of both EDS and EDM is that they add potential energy during the update step which quickly becomes kinetic energy. Thus, it is important that the thermostat used to maintain constant temperature in the NVT ensemble can dissipate energy faster than it is added by the update steps. This is necessary during the adaptive phase of EDS and in EDM prior to convergence. \subsection{Treating Uncertainty in Experimental Data} One complication of minimal biasing methods is uncertainty in the experimental data. For example, the experimental data could be the radius of gyration of a polymer with a reported uncertainty in the mean of 5 nm. Should the radius of gyration be matched exactly, or only to within 5 nm? It is possible to stop the adaptive phase of EDS early, for example when the average is within the uncertainty of the experimental data. An early stop is ad hoc and not part of the maximum entropy derivation, unlike other methods which are built to address uncertainty\cite{Hummer2015,Brookes2016}. \citet{Cesari2016} proposed a modification to the maximum entropy derivation above to address experimental uncertainty. Instead of the constraint in Equation~\ref{eq:eds-constraint}, this constraint is used \begin{equation} \label{eq:bussi-constraint} \int d\vec{r}\, P''(\vec{r}, \epsilon)\left[s(\vec{r}) + \epsilon\right] = \hat{s} \end{equation} where $\epsilon$ is an auxiliary variable that allows deviations in the average of $s(\vec{r})$ and $P''(\cdot)$ is the probability distribution that is solved for by maximizing entropy. $\epsilon$ is a random variable from a prior distribution $P_0(\epsilon)$ that describes the uncertainty in the experimental data. For example, $P_0(\epsilon)$ could be a normal distribution or a Laplace distribution. \citet{Cesari2016} show that $P''(\vec{r}, \epsilon) = 1 / Z'' P'(\vec{r})P_0(\epsilon)e^{\lambda\epsilon}$.
This leads to a different update step of \begin{equation} \label{eq:bussi-update} g_\tau = \frac{2\beta}{w}\left(\left<s\right>_\tau + \xi(\alpha_\tau) - \hat{s}\right)\left(\left<s^2\right>_\tau + \frac{\partial \xi(\alpha_\tau)}{\partial\alpha} - \left<s\right>^2_\tau\right) \end{equation} where $\xi(\alpha_\tau)$ is the analytic posterior of $\epsilon$: \begin{equation} \xi(\alpha_\tau) = \frac{\int d\epsilon\, P_0(\epsilon) e^{-\alpha_\tau \epsilon}\epsilon}{\int d\epsilon\, P_0(\epsilon) e^{-\alpha_\tau \epsilon}} \end{equation} Returning to the radius of gyration example above, if the uncertainty $P_0(\epsilon)$ is assumed to be a normal distribution with $\sigma = 5$ nm, then $\xi(\alpha_\tau)= -25\alpha_\tau$. This new update step can be used to rigorously include experimental uncertainty into the EDS method. EDM is able to tune the relative importance of the target distribution and the unbiased ensemble with the locally-tempered variant in Equation~\ref{eq:edm-converge}. This could be used to match intuition. For example, if you believe that the target distribution which comes from experimental data is twice as accurate as the molecular dynamics simulation, you could choose $\Delta T / (\Delta T + T) = 0.66$, giving about twice as much weight to the target distribution. \section{Applications of EDS and EDM} \begin{figure} \centering \includegraphics[width=5cm]{pmf-fig.pdf} \caption{A 1-D EDS calculation where the mean of a probability distribution function (PDF) is being biased to match the dashed vertical orange line. The top plot shows the unbiased and biased PDFs. The biased PDF shows as much of the shape of the unbiased PDF as possible while matching the new biased mean. The bottom plot shows the Lagrange multipliers that give all possible biased means. The red dot indicates the current Lagrange multiplier for the biased PDF. There is a unique $\lambda$ for all possible biased means, as discussed in the Theory section.
} \label{fig:eds-model} \end{figure} A model 1-D system is shown in Figure~\ref{fig:eds-model} as a probability distribution function, $P(r)$. EDS is being used to modify the average value of $r$ to match a new set point, $\hat{r}$. EDS adds a linear bias, whose strength is indicated with the red dot, to create the biased PDF $P'(r)$ according to Equation~\ref{eq:bias}. Notice how the features of $P(r)$ are mostly maintained in $P'(r)$. The bottom plot shows how each value of $\hat{r}$ corresponds to a unique biasing strength. \begin{figure} \centering \includegraphics[width=5cm]{aimd.pdf} \caption{Water oxygen-oxygen radial distribution function from {\em ab initio} molecular dynamics at 300 K NVT and experiments from \citet{Skinner2013}. BLYP and BLYP-D3 are DFT results without and with dispersion corrections, using the BLYP exchange functional. Note that BLYP-D3 is typically done at 330K, which gives much better agreement\cite{Tse2015}. See \citet{White2017} for complete system details. The BLYP-EDS line is DFT with the EDS bias added to the water oxygen-oxygen coordination number. EDS shows near quantitative agreement with experiment. Copyright 2017 AIP Publishing LLC. } \label{fig:eds-aimd} \end{figure} A more sophisticated system which demonstrates the capabilities of improving dynamic observables is the recent work on EDS {\em ab initio} molecular dynamics (AIMD) simulations of water\cite{White2017}. DFT water with the BLYP exchange functional poorly represents water structure as seen in Figure~\ref{fig:eds-aimd} (black line). It is over-structured and has water self-diffusion coefficients that are too low (0.005-0.005 \AA$^2$ / ps)\cite{Tse2015} compared with the experimental value of 0.23 \AA$^2$/ps\cite{DC9786600199}. EDS was used to improve the coordination number of the oxygen-oxygen (O$_w$-O$_w$) water molecules and resulted in near perfect agreement with experimental scattering data (Figure~\ref{fig:eds-aimd}).
A number of unrelated observables improved as well, including RDFs and the water self-diffusion coefficient, which increased to 0.06 \AA$^2$/ps. \citet{White2017} further demonstrated that the EDS bias could be transferred to excess proton-water simulations and that when combined with DFT dispersion corrections\cite{Sitarz2000}, the agreement further improves. The accuracy improvement with the dispersion corrections shows that EDS is general in its applicability to DFT methods. \citet{Cortina2018} used EDS to study the KPC-2 carbapenemase enzyme, which is responsible for drug resistance in the majority of carbapenem-resistant Gram-negative bacteria\cite{doi:10.1021/acs.jmedchem.7b00158}. They used EDS to modify protein-protein distances while doing a committor analysis\cite{du1998transition} to identify transition states in the carbapenem-enzyme complex. \citet{Cesari2016} used a method similar to EDS (with a modified update step) to improve agreement with experimental NMR $^3$J coupling data for RNA oligonucleotides. After biasing, they used the simulation results to improve the underlying Amber force field and thus create a transferable model. \citet{Cesari2016} also developed a novel approach to account for experimental data uncertainty by adding auxiliary variables to the EDS update step. EDM has been used less than EDS due to the rarity of experimental data giving an exact probability distribution. One example explored in the original EDM paper was to construct a mean-field bias that mimics an alanine dipeptide being in the backbone of a protein\cite{White2015b}. This was done by computing a potential of mean force (PMF) for $\phi,\psi$ dihedral angles from PDB crystallography data for the alanine-alanine sequence in protein structures. This PMF corresponds to a probability distribution, and a molecular dynamics simulation of alanine dipeptide was run with its $\phi,\psi$ dihedral angles biased to match it.
The result was a molecular dynamics simulation where the dipeptide was biased to behave as part of a longer protein structure. A similar approach was later used by \citet{Hocky2017} with EDS to build a pseudo-mean-field model for actin filaments. Another example of EDM can be found in \citet{Gil-Ley2016}, who used it (called targeted metadynamics) to improve agreement of RNA oligonucleotides' dihedral angles with PDB crystallography data. They similarly built dihedral angle PMFs using the crystallography data and biased molecular dynamics simulations of RNA to adopt the dihedral angle probability distribution functions. \citet{Gil-Ley2016} then took the converged EDM bias and transferred it to a different simulation of an RNA tetramer and found little to no improvement of agreement with the crystallography data. This may be because a PMF derived from crystallographic data is not representative of the room-temperature ensemble. Nevertheless, EDM is a promising technique for incorporating experimental data as a type of auxiliary potential energy function in simulations. \subsection{Coarse-Grain Modeling} EDS and EDM are well-suited for coarse-grain (CG) modeling because they guarantee that a CG model matches experimental data. \citet{Dannenhoffer-Lafage2016} showed that EDS can be applied before force-matching (a CG method) and the improvement in agreement with experimental data is maintained. \citet{Dannenhoffer-Lafage2016} began with all-atom molecular dynamics simulations of ethylene carbonate with the force field from \citet{Masia2004}, which is known not to match center-of-mass coordination numbers from more accurate DFT calculations from \citet{Borodin2009}. \citet{Dannenhoffer-Lafage2016} biased the all-atom simulations with EDS to match these known coordination numbers.
They then used force-matching to create multiple CG models with one-, two-, and three-site CG beads, using either biased all-atom or unbiased all-atom simulation data. The CG models built from the biased all-atom simulations showed better agreement with the coordination numbers, indicating that the EDS improvement at the all-atom level carries over to the CG model. Of course, EDS could be used directly on the CG model but that can lead to complications in how observables are calculated on CG models\cite{Wagner2016b}. EDS has been applied to CG simulations of the G- and F-actin proteins as monomers and trimers\cite{Hocky2017}. Actin proteins are globular proteins that can polymerize into long semi-flexible filaments. The authors' hypothesis was that they could model a subsystem from within the polymerized actin structure by incorporating information about structural fluctuations from simulations of the larger system via EDS. By biasing the first and second moment of two important collective variables in actin monomer structure, they were able to observe filament-like conformations, and importantly, the fluctuations of these observables for an actin monomer, even in the system which contained only a single monomer solvated in water. \citet{Hocky2017} also studied a number of questions about the EDS method in their system and reached the following conclusions: (1) the linear bias of EDS matches target values and preserves system fluctuations better than harmonic biases do; (2) replacing the variance term in Equation~\ref{eq:eds-gradient} with a covariance matrix over all biased dimensions gives faster convergence; (3) the Levenberg--Marquardt method\cite{Stinis2005} converges faster than stochastic gradient descent.
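Finding (2) above can be made concrete with a Gaussian toy model (all numbers here are illustrative; these are not the actin collective variables): for Gaussian statistics a linear bias $\alpha \cdot s$ shifts the means by $-\beta\Sigma\alpha$, so preconditioning the update with the full covariance matrix $\Sigma$ amounts to a Newton step, exact in a single update, while the variance-only update must crawl along the stiff, correlated direction. The Levenberg--Marquardt scheme of finding (3) interpolates between these two limits.

```python
import numpy as np

beta = 1.0
mu = np.array([1.0, 0.0])                      # unbiased means of two CVs
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])     # strongly correlated CVs
target = np.array([0.0, -0.5])

def biased_mean(alpha):
    # For Gaussian statistics, the linear bias alpha.s shifts the means analytically.
    return mu - beta * Sigma @ alpha

# Full-covariance (Newton-like) update: exact in one step for this model.
alpha_cov = np.linalg.solve(beta * Sigma, mu - target)

# Variance-only update, as in the scalar gradient: many iterations are needed.
alpha, eta, iters = np.zeros(2), 0.3, 0
for _ in range(2000):
    err = biased_mean(alpha) - target
    if np.max(np.abs(err)) < 1e-8:
        break
    alpha += eta * 2.0 * beta * err * np.diag(Sigma)   # per-CV variance only
    iters += 1
```

The stronger the correlation between the biased observables, the worse the diagonal update fares, which is consistent with the covariance option offered in the Plumed2 implementation.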
This paper also derived a simple equation that estimates the value of $\lambda$ without a stochastic minimization; its accuracy depends on the distance of the target observables from the unbiased observables, and hence it can serve as a good initial guess for starting an EDS simulation. \subsection{Enhanced Sampling} EDM may require enhanced sampling if there are slow degrees of freedom orthogonal to the biased collective variable. This is most conveniently treated via the extensive literature on enhanced sampling with metadynamics, since EDM is a type of metadynamics and typically implemented within a metadynamics code. EDS is not as simple because it requires that the $\left<\cdot\right>_\tau$ term in Equation~\ref{eq:eds-gradient} be taken over an NVT ensemble. This limits the enhanced sampling techniques to those that still give correct NVT ensemble averages. One example is parallel-tempering replica-exchange. It is possible to use metadynamics if an appropriate estimator\cite{Tiwary2015} is used to compute the averages, but this has not been explored in practice. \citet{Amirkulova2018} demonstrated the use of enhanced sampling and EDS with the parallel-tempering well-tempered ensemble (PT-WTE)\cite{Bonomi2010}. The PT-WTE method is an enhancement of parallel-tempering replica-exchange that improves exchange rates and reduces the required number of replicas\cite{Deighan2012a}. PT-WTE satisfies the requirement that the ensemble averages, $\left<\cdot\right>_\tau$, can be computed during the course of the simulation because PT-WTE only changes the magnitude of potential energy fluctuations, not their expectations. One apparent drawback is that the method loses the one-replica property of EDS that allows computation of dynamic observables. However, this only applies to the adaptive phase. During the fixed-bias second phase of EDS, one replica can again be used to allow analysis of dynamic observables.
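The replica-exchange step referred to above is the standard Metropolis swap criterion, sketched generically below (this is textbook parallel tempering, not code from the cited works):

```python
import math

def pt_swap_prob(beta_cold, beta_hot, U_cold, U_hot):
    # Metropolis acceptance for swapping configurations between two canonical
    # replicas at inverse temperatures beta_cold > beta_hot. The rule satisfies
    # detailed balance, so each replica still samples its own NVT ensemble.
    return min(1.0, math.exp((beta_cold - beta_hot) * (U_cold - U_hot)))
```

Because detailed balance is preserved at every temperature, the NVT averages $\left<\cdot\right>_\tau$ required by the EDS gradient remain valid in each replica; PT-WTE modifies only the energies entering this criterion.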
\citet{Amirkulova2018} studied the GYG peptide with the EDS plus enhanced sampling approach. Eight simulations were conducted, with 1, 8, or 16 replicas each. The simulations used EDS, PT-WTE, and/or parallel tempering. EDS was used to bias proton chemical shifts to improve agreement with experimental NMR data. The simulation results showed that PT-WTE improves sampling in the EDS method and does not change agreement with experimental data. PT-WTE also converged the EDS bias with fewer replicas than the PT method. One consideration in all EDS simulations is the effect on unbiased observables. The Ramachandran plots of the simulations with and without EDS bias are compared in Figure~\ref{fig:gyg_fes}. When EDS and enhanced sampling are used (Figure~\ref{fig:gyg_fes}a), the simulation explores a larger region of configurational space relative to the control simulation (Figure~\ref{fig:gyg_fes}b). Also, the global minimum changes when using EDS in Figure~\ref{fig:gyg_fes}, bringing it closer to what was found in \citet{Ting2010} for the GYG sequence. \begin{figure} \centering \includegraphics[scale=0.45]{images/gyg_fes.png} \caption{Free energy surface of GYG peptide along dihedral angles (Ramachandran plot) from \citet{Amirkulova2018}. a) EDS and enhanced sampling with PT-WTE and b) no EDS and no enhanced sampling. The difference between panels a and b shows that EDS changes the global free energy minimum, which better matches data from \citet{Ting2010}, and PT-WTE improves sampling based on explored regions. Copyright 2018 World Scientific Publishing Company.} \label{fig:gyg_fes} \end{figure} \section{The Role of Pressure} Thus far we have discussed NVT and NVE ensembles. EDM and EDS add new forces to the simulation and thus affect the system virial. This can lead to undesirable density changes when the bias is subsequently used in an NPT simulation.
It is possible to rectify this change in virial, and thus density, by adding a further constraint that the virial be unchanged while still maximizing entropy. As shown in the appendix, this leads to an unsolvable equation, and thus it is not possible to set the virial and other constraints simultaneously. One intuitive reason for this is that the virial theorem requires the average virial be proportional to the average temperature. Therefore, if we biased an ensemble so that its virial were fixed, the average temperature would have to change, violating the constant-temperature condition. One way around this challenge lies in the correlation between biased observables in EDS. When biasing multiple observables that are correlated, there is in fact no longer a unique set of Lagrange multipliers\cite{Pitera2012}. These extra degrees of freedom in the choice of Lagrange multipliers that maximize Equation~\ref{eq:lagrange} enable us to choose the Lagrange multipliers that minimally change the virial.
Equation~\ref{eq:eds-gradient} can be modified so that our coupling constants minimize the additional pressure which they exert\cite{Louwerse2006}: \begin{equation} \label{eq:eds-virial-gradient} g_\tau = \frac{2\beta}{w}\left(\left<s\right>_\tau - \hat{s}\right)\left(\left<s^2\right>_\tau - \left<s\right>^2_\tau\right) + 2\nu \Delta p_\tau\frac{\partial \Delta p_\tau}{\partial \alpha_\tau} \end{equation} \begin{equation} \Delta p_\tau = -\frac{\alpha_\tau}{w}\frac{ds}{dV} = -\frac{\alpha_\tau}{w}\sum_{ij}\left(\frac{\partial s}{\partial v_{ij}}\frac{\partial v_{ij}}{\partial V}\right) \end{equation} where $\nu$ is a parameter that controls the importance of minimizing the virial, $\Delta p_\tau$ is the change in the system virial pressure due to the EDS bias, $\frac{\partial s}{\partial v_{ij}}$ is the partial derivative of the observable with respect to one component of the triclinic box matrix, and $\frac{\partial v_{ij}}{\partial V}$ can be computed from the adjugate of the triclinic box matrix. When biasing multiple observables, $\Delta p_\tau$ should be the total over all observables. $g_\tau$ is in units of $s^2$ per energy, so $\nu$ must be in units of volume squared times $s^2$ per energy squared. Practically we can choose a unitless $\nu^*$, which is defined from \begin{equation} \nu = \nu^* \frac{w^2\beta^2}{\rho^2} \end{equation} Equation~\ref{eq:eds-virial-gradient} was used in a molecular dynamics simulation of 128 modified SPC/E water molecules. This modified SPC/E water has increased charges ($q_O = -0.94$) to distort its coordination number. EDS was then applied to correct the coordination number using experimentally derived coordination numbers from \citet{Skinner2013} with varying strengths of the virial correction. The coordination number moment definition may be found in \citet{white2014efficient}. Figure~\ref{fig:virial-cvs} shows that coordination number and its moments are still correctly biased with the virial term. 
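The degeneracy of the multipliers and the effect of the penalty term can be seen in a toy problem (a hypothetical setup with all numbers illustrative, not the SPC/E system above): two perfectly correlated observables $s_1 = x$ and $s_2 = 2x$ of a unit Gaussian ($\beta = 1$) are biased to consistent targets, so an entire line of multipliers $\alpha_1 + 2\alpha_2 = 1$ satisfies the constraints. Taking $\partial s_i/\partial V = 1$ for both observables makes the bias pressure $\Delta p = -(\alpha_1 + \alpha_2)$, and the penalty selects the multipliers along that line with vanishing bias pressure.

```python
import numpy as np

# Unit Gaussian x (beta = 1): the bias a1*s1 + a2*s2 with s1 = x, s2 = 2x shifts
# <x> to -(a1 + 2*a2). Targets <s1> = -1, <s2> = -2 are met whenever
# a1 + 2*a2 = 1, so the multipliers are degenerate. With ds_i/dV = 1 the bias
# pressure is dp = -(a1 + a2); the penalty term drives dp toward zero.
def descend(nu, eta=0.035, steps=5000):
    a = np.zeros(2)
    for _ in range(steps):
        err = 1.0 - (a[0] + 2.0 * a[1])   # common residual of both constraints
        dp = -(a[0] + a[1])               # bias contribution to the pressure
        grad = np.array([-10.0 * err - 2.0 * nu * dp,
                         -20.0 * err - 2.0 * nu * dp])
        a -= eta * grad
    return a, -(a[0] + a[1])

a_plain, dp_plain = descend(nu=0.0)   # minimum-norm multipliers, dp = -0.6
a_pen, dp_pen = descend(nu=0.5)       # penalty: dp -> 0 at a1, a2 = -1, 1
```

Note that the penalized run still satisfies the constraint exactly; the penalty only moves the solution along the degenerate direction, which is the same mechanism that lets Equation~\ref{eq:eds-virial-gradient} reduce the bias virial without sacrificing the set points.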
The EDS parameters were a range ($A$) of 50 kJ/mol, a period of 25 fs, and the Levenberg--Marquardt optimization procedure. A Nos\'e--Hoover thermostat with a time constant of 25 fs and a timestep of 0.5 fs were used for the molecular dynamics in the LAMMPS simulation engine\cite{Plimpton1995}. The NPT barostat was Parrinello--Rahman. \begin{figure} \centering \includegraphics[width=\textwidth]{pressure-figs/pressures.pdf} \caption{The EDS contribution to the virial as EDS is applied to water coordination number moments 0 through 3 in a molecular dynamics simulation of the over-structured SPC/E water model. Panels a and b compare the virial contribution in a normal EDS and EDS with virial minimization strength of $\nu^* = 1$. The per-collective-variable virial contribution is lower with virial minimization. Panel c shows the effect of different $\nu^*$ values on the total virial contribution from EDS.} \label{fig:virial-virial} \end{figure} Figure~\ref{fig:virial-virial} shows the effect of the virial penalty term on the EDS virial contribution. The virial contribution plotted and computed here is the mean per-particle virial energy contribution. That is, $\Delta p_\tau / (\rho k T)$, which removes the effect of particle number and energy scale. Panel a shows the virial contribution of each collective variable with $\nu^* = 0$ (no virial minimization). The net average virial contribution is 39.2 kT. Panel b shows $\nu^* = 10$, where there is a much lower contribution of 5.02 kT. Panel c compares the net virial contribution of three different $\nu^*$ values. \begin{figure} \centering \includegraphics[width=\textwidth]{pressure-figs/cvs.pdf} \caption{EDS applied to water coordination number moments 0 through 3 in a molecular dynamics simulation of the over-structured SPC/E water model. The columns show increasing strength of virial minimization. Each panel shows a collective variable scaled by its set-point.
The vertical dashed line separates the adaptive NVT phase from the fixed-bias NPT phase. The results show that increasing the strength of the virial minimization actually improves convergence by reducing large-magnitude changes in biasing force. Without virial minimization, EDS can produce nonphysical densities when the NVT bias is transferred to NPT. The change in density moves the CVs. For $\nu^* = 0$, this meant CV values beyond the y-limits of the plot.} \label{fig:virial-cvs} \end{figure} Figure~\ref{fig:virial-cvs} shows the impact of the virial minimization term on the convergence of the biased coordination number and its moments. Interestingly, adding a virial minimization term actually improves convergence. This is due to the well-known effect of regularization, which improves convergence in optimization\cite{scholkopf2001learning}. The virial minimization term is proportional to $\alpha_\tau$, so it brings down the magnitude of $\alpha_\tau$. This prevents the large swings seen in the $\nu^* = 0$ system. Thus, adding the virial minimization term not only reduces the virial contribution but can improve convergence by minimizing magnitudes of the coupling constants. The true test can be seen in the NPT portion past 20~ps where the bias is fixed but the box dimensions are free. EDS with $\nu^* = 0$ gives poor densities in NPT with a bias computed in NVT. The virial minimization reduces the change in density with a trade-off in match to the bias set-points. Virial minimization is in general recommended due to its improved convergence and smaller perturbation of the system virial. \section{Implementations} There are three implementations of EDS available: Colvars\cite{COLVARS}, Plumed1.3\cite{PLUMED}, and Plumed2\cite{TRIBELLO2014604}. The Colvars and Plumed1.3 implementations match the original EDS manuscript. The Plumed2 implementation is actively maintained as a plugin\cite{bussi2018analyzing} and has features from more recent EDS articles\cite{Hocky2017}.
For example, the covariance term in Equation~\ref{eq:eds-gradient} can be computed using the full sample covariance matrix or the sample variance. The update steps can also be computed using the Levenberg--Marquardt method\cite{Stinis2005, Hocky2017}, instead of Equation~\ref{eq:eds-update}. The Plumed2 implementation is recommended. EDM is implemented in Plumed2 as well, under the ``targeted metadynamics'' keyword within the metadynamics biasing class. One of the common use cases for EDM is to bias RDFs; an RDF is not a single collective variable but a ``function collective variable,'' in the sense that there is an entire distribution at each timestep. Thus the bias update equation, shown in Equation~\ref{eq:edm-update}, applies to each observed point in the RDF at each step (see \citet{White2015b} for equations). This can lead to thousands of updates per timestep on even small systems. To improve performance and scaling, we have created an implementation in LAMMPS that is more tightly integrated with the simulation engine than Plumed2, as described in \citet{White2015b}. Note that the addition of hills is parallelized here, which is unusual and enables scaling as a function of system size. Hill addition accounts for 90\% of the CPU utilization. This was the implementation used in \citet{White2015b} for an ethylene carbonate electrolyte simulation and Lennard-Jones benchmark systems. \section{Conclusions} Minimal biasing techniques are an emerging class of methods that improve the accuracy of molecular simulation and better utilize comparable experimental data. EDS and EDM are maximum entropy techniques that minimally bias an ensemble so that average observables or probability distributions of observables match set values. EDS and EDM both converge the potential to Equation~\ref{eq:bias}, allowing an NVE simulation with a fixed bias potential in a single replica so that dynamic observables can be computed.
EDS and EDM can bias multiple observables simultaneously and be combined with enhanced sampling methods. It is possible to treat uncertainty in experimental data using methods from \citet{Cesari2016}. Explicitly setting pressure is not possible in maximum entropy biasing, but in EDS it is possible to minimize the change to pressure when biasing multiple collective variables. This minimization can actually improve convergence of EDS by acting as a regularization. A variety of example systems have been presented here, and there are implementations available in most simulation engines for both EDS and EDM. \section*{Acknowledgement(s)} The authors thank Prof. Pengfei Huo for reviewing the appendix mathematics, Rainier Barrett for help preparing the manuscript, and Prof. Glen Hocky for assistance in surveying the literature and preparing the manuscript. \section*{Funding} This work was supported by the National Science Foundation CBET Div Of Chem, Bioeng, Env, \& Transp Sys (Grant \#1751471). \section{References} \bibliographystyle{unsrtnat}
\section{Introduction} \label{sect:intro} With its ability to probe generic many-body excitations, Raman spectroscopy has become an important tool for understanding strongly correlated electronic systems, including the Mott insulators.\cite{Devereaux:RMP:2007} Unfortunately, in Raman spectroscopy the photon momenta are generally negligible when compared with the inverse lattice scale, thus making it essentially a zero-momentum-transfer probe and limiting its usefulness in certain cases. In the specific context of the $U(1)$ Dirac spin liquid state in the spin-1/2 kagome lattice, in which the low-energy effective theory is described by chargeless spin-1/2 fermions (spinons) coupled to an emergent $U(1)$ gauge field,\cite{Hermele:PRB:2008} we previously proposed Raman spectroscopy as a way to detect the spinon continuum and the fluctuations of the emergent gauge field.\cite{Ko:PRB:2010} The prospect of detecting the emergent gauge field in experiments is particularly significant, given the role such gauge fields have played in theories of quantum spin liquids. Unfortunately, our calculations also reveal that the signal coming from the emergent gauge field is suppressed by a factor of $q^2$, where $q$ is the momentum transferred to the system. Indeed, Raman spectroscopy on herbertsmithite ZnCu$_3$(OH)$_6$Cl$_2$, a possible material realization of the $U(1)$ Dirac spin liquid state, has found a broad continuum in the spectrum that could be attributed to the spinon continuum, but has shown no signs of the emergent gauge field.\cite{Wulferding:PRB:2010} Given the large momentum carried by x-ray photons, one would imagine that resonant inelastic x-ray scattering (RIXS)\cite{Kotani:RMP:2001} may provide a better prospect of detecting the emergent gauge field.
However, in our derivation, the detection of the emergent gauge field in Raman spectroscopy depends crucially on the coupling of the external photons to the spin-chirality terms $\vv{S}_i \cdot (\vv{S}_j \times \vv{S}_k)$ in the system, which in turn relies crucially on the link between the photon polarizations and the direction of the virtual electron hops in the lattice induced by the virtual absorption and emission of the photons. This link is in general absent in the current theoretical discussions and experimental setups of RIXS, in which the virtual absorption and emission of the photons are accompanied by \emph{intra-site} electron hops. In this paper, we propose performing RIXS near a \emph{pre-edge}, in which case the usual virtual processes with intra-site photon-induced electron hops are suppressed, thus allowing virtual processes with \emph{inter-site} photon-induced electron hops to manifest. Indeed, inter-site dipolar contributions have previously been identified in the absorption and Auger spectrum of\cite{Uozumi:EPL:1992, Danger:PRL:2002, Uozumi:JESRP:2004} TiO$_2$ and\cite{Shukla:PPL:2006, Kotani:PRB:2008} La$_2$CuO$_4$. To analyze the contributions by such inter-site processes to the RIXS signals, we modify the Shastry--Shraiman formalism\cite{Shastry:PRL:1990,Shastry:IJMPB:1991} used in deriving the corresponding results in Raman spectroscopy, and show that the spin-chirality terms indeed appear in both the square lattice (cuprate) and the kagome lattice (herbertsmithite), but that the first appearance of such terms occurs at a higher order in the latter case. This paper is organized as follows: in Sec.~\ref{sect:Review} the Shastry--Shraiman formalism in Raman spectroscopy is reviewed to set the stage for Sec.~\ref{sect:RIXS}, in which the modifications to this formalism to the case of RIXS are discussed and illustrated. 
In Sec.~\ref{sect:SSS} the possibility of detecting the spin-chirality terms in RIXS is considered in this modified formalism and our new proposal is presented, followed by detailed analyses for the square and the kagome lattices, as well as brief discussions for the triangular and the honeycomb lattices. Further discussions ensue in Sec.~\ref{sect:discussions}, in which our motivating example of the $U(1)$ Dirac spin liquid is considered again to put our proposal into perspective. \section{Review of the Shastry--Shraiman formalism in Raman Spectroscopy} \label{sect:Review} In the Shastry--Shraiman formalism, the electron-photon interaction $H_C$ is treated as a time-dependent perturbation on the time-independent Hamiltonian $H_{\textnormal{ind}}$. The latter consists of the Hubbard Hamiltonian $H_{\textnormal{Hb}} = \sum_{ij,\sigma} t_{ij} c^{\dagger}_{i\sigma} c_{j\sigma} + U \sum_i n_{i\uparrow} n_{i\downarrow}$ and the free photon Hamiltonian $H_{\gamma} = \sum_{\vv{q},\alpha} \omega_{\vv{q}} a^{\alpha\dagger}_{\vv{q}} a^\alpha_{\vv{q}}$ (here $\alpha$ labels the photon polarizations). Applying Fermi's golden rule, the transition rate $W_{fi}$ from an initial state $\ket{i}$ to a final state $\ket{f}$ is given by: \begin{equation} \label{eq:Fermi} W_{fi} = 2\pi |\bra{f} T \ket{i}|^2 \delta(\E_f - \E_i) \punct{,} \end{equation} where $T$ is the scattering $T$-matrix. Keeping only terms that are second order in the photon operators $a^{\alpha}_{\vv{q}}$, the $T$-matrix can be decomposed as $T = T_{\textnormal{R}} + T_{\textnormal{NR}}$, with $T_{\textnormal{R}}$ the resonant part and $T_{\textnormal{NR}}$ the non-resonant part, of which only the former is important in a Mott insulator. Next, the energy denominator that appears in $T_{\textnormal{R}}$ is further expanded by treating the hopping part $H_t = \sum_{ij,\sigma} t_{ij} c^{\dagger}_{i\sigma} c_{j\sigma}$ of the Hubbard Hamiltonian as a perturbation on the remaining terms in $H_{\textnormal{ind}}$. 
To be more precise, \begin{align} T_\textnormal{R} & = H^{(1)}_{C} \frac{1}{\E_i - (H_{\textnormal{Hb}} + H_{\gamma}) + i \eta} H^{(1)}_{C} \label{eq:TR_def} \\ & = H^{(1)}_{C} \frac{1}{\E_i - H_U - H_\gamma + i\eta} \notag \\ & \quad \times \sum_{n=0}^{\infty} \left( H_t \frac{1}{\E_i - H_U - H_\gamma + i\eta} \right)^n H^{(1)}_{C} \punct{,} \label{eq:TR_expand} \end{align} where $\E_i$ is the initial energy of the unperturbed system, $H^{(1)}_{C}$ consists of the terms in $H_C$ that are first order in the photon operators $a^{\alpha}_{\vv{q}}$, and $H_U = U \sum_i n_{i\uparrow} n_{i\downarrow}$ is the on-site Coulomb repulsion in the Hubbard Hamiltonian. In the context of Raman scattering, $H^{(1)}_C \sim \sum_{ij,\sigma} i t_{ij} c^\dagger_{i\sigma} c_{j\sigma} (g_i \vv{e}_\alpha a_{\vv{k}}^\alpha e^{i\vv{k}\cdot\vv{x}} + g_f \overline{\vv{e}}_\beta a_{\vv{q}}^{\beta\dagger} e^{-i\vv{q}\cdot\vv{x}})$, where $g_i$ ($g_f$) denotes the appropriate coupling constants between the electron and the incoming (outgoing) photon, $\vv{x} = (\vv{x}_i + \vv{x}_j)/2$ is the mid-point between site $i$ and $j$, and $\vv{e}_\alpha$ ($\vv{e}_\beta$) and $\vv{k}$ ($\vv{q}$) denote the polarization vector and momentum of the incoming (outgoing) photon, respectively. In a Mott insulator, $T_{\textnormal{R}}$ connects two spin states (i.e., states with zero double occupancy) and hence can in principle be expressed in terms of spin operators. In the Shastry--Shraiman formalism, this is achieved by inserting a complete set of states (in the lattice occupation basis with a fixed spin quantization axis) in between the operators in Eq.~(\ref{eq:TR_expand}), under which the energy denominators $(\E_i - H_U - H_\gamma + i\eta)^{-1}$ become $c$-numbers.\footnote{One caveat is that $\E_i$ in principle depends on the initial state, which is assumed to be an eigenstate of the time-independent Hamiltonian $H_{\textnormal{ind}}$. An arbitrary spin state in the lattice basis will in general not be an eigenstate of $H_{\textnormal{ind}}$ and technically $\E_i$ itself has to be treated perturbatively.
This subtlety can be neglected when working in the zeroth order in $t/U$. Moreover, at the end of the day the $T$-matrix will be evaluated with respect to some many-body states that are the approximate ground states or low-lying excited states of $H_{\textnormal{ind}}$ within specific models, in which case it is legitimate to treat $\E_i$ as a $c$-number, to be absorbed into the definition of the resonant frequency.} Moreover, once an initial spin state (in that same basis) is specified and a particular choice of individual term is chosen for each $H_t$ and $H^{(1)}_C$ in Eq.~(\ref{eq:TR_expand}), the intermediate states are uniquely determined and thus can be trivially resummed. Hence the matrix elements of $T_{\textnormal{R}}$ with respect to spin states can be expressed as a sum of chains of electronic operators. These chains of electronic operators can be visualized as virtual processes in which electrons hop around the lattice and can be converted into spin operators by the identities $\tilde{\chi}_{\sigma\sigma'} \equiv c^\dagger_{\sigma'} c_{\sigma} = \frac{1}{2} \delta_{\sigma \sigma'} + \vv{S} \cdot \vvsym{\tau}_{\sigma \sigma'}$ and $\chi_{\sigma\sigma'} \equiv c_{\sigma} c^\dagger_{\sigma'} = \frac{1}{2} \delta_{\sigma \sigma'} - \vv{S} \cdot \vvsym{\tau}_{\sigma \sigma'}$. For $t \ll U$ and near resonance (i.e., $\omega_i \approx U$, where $\omega_i$ is the energy of the incoming photon), the contributions to $T_{\textnormal{R}}$ are dominated by virtual processes in which all intermediate states have exactly one hole and one doubly occupied site (a.k.a.\@ doublon). In such a case $(\E_i - H_U - H_\gamma + i\eta) \approx (\omega_i - U)$, and the matrix elements of $T_{\textnormal{R}}$ can thus be organized as an expansion in $t/(\omega_i - U)$.
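As a quick sanity check (not part of the derivation itself), the electron- and hole-hop identities above can be verified with $2\times2$ matrices: on a singly occupied site, $c^\dagger_{\sigma'} c_{\sigma}$ acts as $|\sigma'\rangle\langle\sigma|$ and $c_{\sigma} c^\dagger_{\sigma'}$ as $\delta_{\sigma\sigma'} - |\sigma'\rangle\langle\sigma|$, with $\vv{S} = \vvsym{\tau}/2$ acting on the spin state.

```python
import numpy as np

# Pauli matrices tau^x, tau^y, tau^z and the spin operators S = tau/2.
tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
S = [t / 2.0 for t in tau]
I2 = np.eye(2, dtype=complex)

checks = []
for s in range(2):          # sigma
    for sp in range(2):     # sigma'
        hop = np.zeros((2, 2), dtype=complex)
        hop[sp, s] = 1.0    # c^dag_{sigma'} c_sigma = |sigma'><sigma|
        SdotTau = sum(Sa * ta[s, sp] for Sa, ta in zip(S, tau))
        # electron identity: c^dag_{sigma'} c_sigma = (1/2) delta + S.tau
        checks.append(np.allclose(hop, 0.5 * (s == sp) * I2 + SdotTau))
        # hole identity on a singly occupied site:
        # c_sigma c^dag_{sigma'} = (1/2) delta - S.tau
        checks.append(np.allclose((s == sp) * I2 - hop,
                                  0.5 * (s == sp) * I2 - SdotTau))
```

Both identities follow from the Pauli-matrix completeness relation $\sum_a \tau^a_{\rho'\rho} \tau^a_{\sigma\sigma'} = 2\delta_{\rho'\sigma'}\delta_{\rho\sigma} - \delta_{\rho'\rho}\delta_{\sigma\sigma'}$.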
\begin{figure} \begin{center} \subfigure[\label{fig:Raman_1}]{\includegraphics[scale=1]{FleuryLoudon.eps}} \\ \subfigure[\label{fig:Raman_2}]{\includegraphics[scale=1]{kagome_Raman_1.eps}} \quad \subfigure[\label{fig:Raman_3}]{\includegraphics[scale=1]{kagome_Raman_2.eps}} \caption{\label{fig:Raman} (Color online) Virtual processes that contribute to the resonant Raman scattering $T$-matrix $T_{\textnormal{R}}$. The process depicted in (a) contributes to the Fleury--Loudon term while the ones depicted in (b) and (c) contribute (among others) to the spin-chirality terms at the leading order. Here and henceforth thick (blue) arrows denote electron hops that are accompanied by virtual absorptions or emissions of photons, thin (magenta) unbroken arrows denote movements of electrons in non-photon-induced internal hops, and thin (magenta) broken arrows denote movements of holes in non-photon-induced internal hops. The order of hops is indicated by lowercase roman letters next to the corresponding arrows.} \end{center} \end{figure} To the lowest nontrivial order in $t/(\omega_i - U)$, the $T$-matrix obtained in the Shastry--Shraiman formalism reproduces the Fleury--Loudon Hamiltonian $H_{\textnormal{FL}} = \sum_{\vv{r},\vv{r}'} \frac{2 t_{\vv{r}\vv{r}'}^2}{U-\omega_i} (\vv{e}_i \cdot \vvsym{\mu}) (\overline{\vv{e}}_f \cdot \vvsym{\mu}) (1/4 - \vv{S}_{\vv{r}} \cdot \vv{S}_{\vv{r}'} ) $, \cite{Fleury:PR:1968} and is contributed by virtual processes of the form shown in Fig.~\ref{fig:Raman_1}. At the $t^4/(\omega_i - U)^3$ order, individual processes that contribute to the spin-chirality terms, such as the ones shown in Figs.~\ref{fig:Raman_2} and \ref{fig:Raman_3}, start to appear. 
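As a brief numerical aside (not part of the original derivation), the two-site factor $1/4 - \vv{S}_{\vv{r}} \cdot \vv{S}_{\vv{r}'}$ appearing in $H_{\textnormal{FL}}$ is exactly the projector onto the two-site singlet, with eigenvalue 1 on the singlet and 0 on the three triplet states, which is why the Fleury--Loudon term measures nearest-neighbor singlet correlations:

```python
import numpy as np

tau = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]
S = [t / 2.0 for t in tau]

# Two-site operator 1/4 - S_r . S_r' on the four-dimensional two-spin space.
P = 0.25 * np.eye(4) - sum(np.kron(Sa, Sa) for Sa in S)
evals = np.sort(np.linalg.eigvalsh(P))   # expect 0 (triplet, x3) and 1 (singlet)
```

Equivalently, $\vv{S}_1 \cdot \vv{S}_2$ takes the value $1/4$ on the triplet and $-3/4$ on the singlet, so $P$ is idempotent.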
However, in the case where only nearest-neighbor hoppings are included, the sum of their contributions is found to vanish in the square and the triangular lattices, while it remains nonzero in the honeycomb and the kagome lattices.\cite{Ko:PRB:2010} It is worth noting that the spin-chirality terms appear exclusively in the $(\efei{x}{y} -\efei{y}{x})$ polarization channel in the scattering $T$-matrix. \section{Modification of the Shastry--Shraiman formalism to RIXS} \label{sect:RIXS} Since both Raman scattering and RIXS are resonant two-photon processes, it should be possible to modify the Shastry--Shraiman formalism to the case of RIXS. Indeed, a similar expansion of $T_{\textnormal{R}}$ has been made by van den Brink and van Veenendaal.\cite{Brink:EPL:2006} However, in that work the denominator in Eq.~(\ref{eq:TR_def}) was expanded by treating the (appropriately modified, see below) Hubbard Hamiltonian as a perturbation on the free photon and the atomic (see below) Hamiltonians. In practice, the Hubbard Hamiltonian is then expanded in the usual way one derives the Heisenberg Hamiltonian.\cite{Brink:EPL:2007, Forte:PRB:2008} For the present work, we follow instead the spirit of the Shastry--Shraiman formalism and take the terms in $H_{\textnormal{ind}}$ in which site indices change as perturbations on the free photon and the on-site terms in $H_{\textnormal{ind}}$. Since the main purpose of this paper is to identify virtual processes that may give rise to the spin-chirality terms, for which the two expansion schemes agree except that the prefactors coming from the energy denominators are organized differently, we shall not dwell on the relative merit of these two expansions, which may depend on one's identification of the resonant energy. Instead, we simply state here the necessary modifications to the Shastry--Shraiman formalism in the case of RIXS.
First, $H^{(1)}_C$ now corresponds to virtual transitions in which an electron hops from a core state to a valence state while a photon is absorbed, or in which an electron hops from a valence state to the core state while a photon is emitted. Hence we may write $H^{(1)}_C = \sum_{c,v} i g_i \vv{e}_{\alpha} c^\dagger_{v} J^{\alpha}_{vc} c_{c} a^{\alpha}_{\vv{k}} e^{i\vv{k} \cdot \vv{r}} + i g_f \overline{\vv{e}}_{\beta} c^\dagger_{c} (J^\dagger)^{\beta}_{cv} c_{v} a^{\beta\dagger}_{\vv{q}} e^{-i\vv{q} \cdot \vv{r}'}$, where the subscript $c$ ($v$) labels a core (valence) state and $J$ is a (possibly polarization-dependent) matrix that accounts for the matrix elements of the atomic transitions.\cite{Groot:PRB:1998} As usual it is necessary to include only the core and valence states that are near resonance in $H^{(1)}_C$. Second, $H_{\textnormal{ind}}$ must now include extra terms that account for the single-particle energies of the core states and possibly of the high-energy valence states, as well as their interactions with the low-energy valence states. Schematically, we can write $H_{\textnormal{ind}} = H_{\gamma} + H_{\textnormal{atomic}} + H'_{\textnormal{Hb}}$, in which the atomic Hamiltonian $H_{\textnormal{atomic}}$ accounts for the energy difference between the core state and the valence state excited from it, while the modified Hubbard Hamiltonian $H'_{\textnormal{Hb}}$ accounts for the interactions of the valence electrons among themselves, with the lattice potential, and with the core hole. In particular, since $H_U$ captures only the low-energy effective Coulomb repulsion among the low-energy valence electrons, in principle it is necessary to include in $H'_{\textnormal{Hb}}$ generic Coulomb interactions $u_{\alpha\beta\gamma\delta} c^{\dagger}_{\alpha} c^{\dagger}_{\beta} c_{\gamma} c_{\delta}$ in which at least one of the electron operators corresponds to a core or high-energy valence state.
However, on physical grounds it may be argued that the dominant effect of the core or high-energy valence state on the low-energy valence states would be modifications to the hopping parameters and the on-site potentials, i.e., $u_{\alpha\beta\gamma\delta} c^{\dagger}_{\alpha} c^{\dagger}_{\beta} c_{\gamma} c_{\delta} \sim c^\dagger_{c} c_{c} V_{c, ij} c^\dagger_{i} c_{j}$ and $c^\dagger_{e} c_{e} V_{e, ij} c^\dagger_{i} c_{j}$, where the subscript $c$ ($e$) labels a core (high-energy valence) state while $i,j$ are site labels of low-energy valence states. In practice, the effect of the core hole may be well captured by an on-site potential $U_c$ localized at the site where the core hole is present.\cite{Forte:PRB:2008} Third, unlike in Raman scattering, the core hole in RIXS has a very short lifetime and can decay via Auger processes. This introduces an uncertainty into the core-hole energy, which can be captured by replacing the infinitesimal $\eta$ in Eqs.~(\ref{eq:TR_def})~and~(\ref{eq:TR_expand}) by a finite energy broadening $\Gamma$. \begin{figure} \includegraphics[scale=0.8]{2p3d.eps} \caption{\label{fig:2p3d} (Color online) The lowest-order virtual processes that contribute to the one-magnon excitation in $2p \rightarrow 3d$ RIXS.} \end{figure} To illustrate the modified Shastry--Shraiman formalism, we briefly outline the virtual processes that contribute to the single-magnon excitation in $2p \rightarrow 3d$ RIXS and to the two-magnon excitation in $1s \rightarrow 4p$ RIXS in cuprates, both of which have previously been proposed\cite{Groot:PRB:1998, Forte:PRB:2008} and experimentally studied.\cite{Duda:Uppsala:1996,Hill:PRL:2008} For the $2p \rightarrow 3d$ RIXS, the lowest-order processes that contribute to the single-magnon excitations are purely atomic in nature and involve simply the photon-induced virtual transitions of a $2p$ core electron to and from the $3d$ valence states (see Fig.~\ref{fig:2p3d} for illustration).
Since the $2p$ states are spin-orbit coupled, spin is not a good quantum number, and a spin flip of the $3d$ electrons can occur, as long as it is accompanied by an appropriate change in the photon polarization. In the modified Shastry--Shraiman formalism, the chain of electron operators that is associated with the process depicted in Fig.~\ref{fig:2p3d} is: \begin{align} T_{\textnormal{1-magnon}} & \propto \sum_{p,p',\sigma,\sigma'} (c^\dagger_{p} (J^\dagger)^{\beta}_{p\sigma} c_\sigma) (c^\dagger_{\sigma'} J^\alpha_{\sigma' p'} c_{p'}) \notag \\ & = \operatorname{tr} \{ M^{\alpha\beta} \chi \} \notag \\ & = m^{\alpha\beta}_0 - 2 \vv{m}^{\alpha\beta} \cdot \vv{S} \punct{,} \label{eq:2p3d} \end{align} where the subscript $p$ labels the $2p$ states while the subscript $\sigma$ labels the spin of the valence $3d$ states. Also, $\chi_{\sigma\sigma'} \equiv c_{\sigma} c^\dagger_{\sigma'} = \frac{1}{2} \delta_{\sigma\sigma'} - \vv{S} \cdot \vvsym{\tau}_{\sigma\sigma'}$ as before while $M^{\alpha\beta}_{\sigma' \sigma} = m^{\alpha\beta}_0 \delta_{\sigma'\sigma} + \vv{m}^{\alpha\beta} \cdot \vvsym{\tau}_{\sigma' \sigma} = \sum_{p} J^{\alpha}_{\sigma' p} (J^\dagger)^{\beta}_{p \sigma} $. Note that we have adopted a matrix convention for spin indices in the second line (which will be similarly adopted henceforth). From Eq.~(\ref{eq:2p3d}) it can be seen that the spin-flip term arises from the structure of the $2p \rightarrow 3d$ transition matrix elements.
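As a quick consistency check of the trace identity behind Eq.~(\ref{eq:2p3d}): since the identity is purely algebraic in spin space, $\vv{S}$ may be replaced by an ordinary 3-vector and the traces evaluated with explicit Pauli matrices. The following sketch (not part of the original derivation; the numerical values are arbitrary stand-ins) verifies $\operatorname{tr} \{ M \chi \} = m_0 - 2 \vv{m} \cdot \vv{S}$:

```python
import numpy as np

# Pauli matrices tau^x, tau^y, tau^z (full Pauli normalization, tr{tau^a tau^b} = 2 delta^{ab})
tau = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)

rng = np.random.default_rng(0)
m0 = rng.normal()                                   # scalar part of M^{alpha beta}
m = rng.normal(size=3) + 1j * rng.normal(size=3)    # vector part (may be complex)
s = rng.normal(size=3)                              # commuting stand-in for the spin vector S

M = m0 * np.eye(2) + np.einsum('a,aij->ij', m, tau)       # M = m0 + m.tau
chi = 0.5 * np.eye(2) - np.einsum('a,aij->ij', s, tau)    # chi = 1/2 - S.tau

lhs = np.trace(M @ chi)
rhs = m0 - 2 * np.dot(m, s)
assert np.allclose(lhs, rhs)   # tr{M chi} = m_0 - 2 m.S
```

The check relies only on $\operatorname{tr} \{ \tau^a \tau^b \} = 2\delta^{ab}$, so it holds identically for any choice of $m_0$, $\vv{m}$, and $\vv{S}$.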
\begin{figure} \begin{center} \subfigure[\label{fig:1s4p_1}]{\includegraphics[scale=0.7]{1s4p_1.eps}} \qquad \subfigure[\label{fig:1s4p_2}]{\includegraphics[scale=0.7]{1s4p_2.eps}} \caption{\label{fig:1s4p} (Color online) The lowest-order virtual processes that contribute to the two-magnon excitation in $1s \rightarrow 4p$ RIXS.} \end{center} \end{figure} For the $1s \rightarrow 4p$ RIXS, two-magnon excitation occurs when the core hole and the $4p$ valence electron ``shake up'' the low-energy valence electrons and induce a pair exchange. Two such processes are depicted in Fig.~\ref{fig:1s4p}. The chains of electron operators that are associated with the processes depicted in Figs.~\ref{fig:1s4p_1} and \ref{fig:1s4p_2} are, respectively: \begin{align} T^{(a)}_{\textnormal{2-magnon}} & \propto (c^\dagger_{is} (J^\dagger)^{\beta}_{sp} c_{ip}) \cc[t_{ij}]{i}{j} \cc[t_{ji}]{j}{i} (c^\dagger_{ip'} J^\alpha_{p' s'} c_{is'}) \notag \\ & = \operatorname{tr} \{ J^{\alpha} (J^\dagger)^{\beta} \} t_{ij}t_{ji} \operatorname{tr} \{ \chi_j \tilde{\chi}_i \} \notag \\ & = N^{\alpha\beta} t_{ij} t_{ji} \left( \frac{1}{2} - \vv{S}_i \cdot \vv{S}_j \right) \punct{,} \label{eq:1s4p_1} \\ T^{(b)}_{\textnormal{2-magnon}} & \propto (c^\dagger_{is} (J^\dagger)^{\beta}_{sp} c_{ip}) \cc[t_{ji}]{j}{i} \cc[t_{ij}]{i}{j} (c^\dagger_{ip'} J^\alpha_{p' s'} c_{is'}) \notag \\ & = \operatorname{tr} \{ J^{\alpha} (J^\dagger)^{\beta} \} t_{ji} t_{ij} \operatorname{tr} \{ \chi_i \tilde{\chi}_j \} \notag \\ & = N^{\alpha\beta} t_{ij} t_{ji} \left( \frac{1}{2} - \vv{S}_i \cdot \vv{S}_j \right) \punct{,} \label{eq:1s4p_2} \end{align} where $N^{\alpha\beta} \equiv \operatorname{tr} \{ J^{\alpha} (J^\dagger)^{\beta} \}$, the subscript $s$ ($p$) labels the $1s$ ($4p$) states, and the electron operators with no orbital labels are assumed to be those of the valence $3d$ states.
For brevity here and henceforth we omit the spin indices in virtual hops that involve only the $3d$ electrons, assuming that they are appropriately summed within parentheses. Thus, e.g., $(t_{ij} c^\dagger_{i} c_{j}) \equiv \sum_\sigma t_{ij} c^\dagger_{i\sigma} c_{j\sigma}$. Similarly, here and henceforth the sums over the spin and (for core hole and high-energy valence electrons) the orbital indices are assumed in the two photon-induced hops, such that, e.g., $(c^\dagger_{ip'} J^{\alpha}_{p's'} c_{is'}) \equiv \sum_{p',s'} c^\dagger_{ip'} J^{\alpha}_{p's'} c_{is'}$. Note also that the matrix notation in the second line of Eqs.~(\ref{eq:1s4p_1}) and (\ref{eq:1s4p_2}) is now extended to include the orbital indices of the core and the $4p$ electrons. Under the assumption that the only effect of the $1s$ core hole and the $4p$ valence electron is to introduce an additional potential $U_c$ at the site $i$ at which the atomic transition occurs, the coefficients of $T^{(a)}_{\textnormal{2-magnon}}$ and $T^{(b)}_{\textnormal{2-magnon}}$ that come from the energy denominators are, respectively: \begin{align} C^{(a)}_{\textnormal{2-magnon}} & = \left( \frac{1}{\delta\omega + i \Gamma} \right)^2 \frac{1}{\delta\omega -(U + U_c) + i \Gamma} \punct{,} \label{eq:1s4p_1_denom} \\ C^{(b)}_{\textnormal{2-magnon}} & = \left( \frac{1}{\delta\omega + i \Gamma} \right)^2 \frac{1}{\delta\omega - (U - U_c) + i \Gamma} \punct{.} \label{eq:1s4p_2_denom} \end{align} Here $\delta\omega = \omega_i - (\E_{4p} - \E_{1s})$ is the detuning from the atomic transition.
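For concreteness, the coefficients in Eqs.~(\ref{eq:1s4p_1_denom}) and (\ref{eq:1s4p_2_denom}) are simple complex-valued functions of the detuning and can be evaluated numerically. A minimal sketch follows; the detuning value is an arbitrary illustration, while $U$, $U_c$, and $\Gamma$ are the cuprate-like values quoted later in the text:

```python
# Illustrative parameters in eV; U, U_c, Gamma as quoted later in the text for cuprates
U, Uc, Gamma = 8.8, 7.0, 0.75
domega = 1.5   # detuning from the atomic transition (arbitrary illustrative value)

def two_magnon_coeffs(domega, U, Uc, Gamma):
    """Energy-denominator coefficients C^(a), C^(b) of the two-magnon processes."""
    pref = (1.0 / (domega + 1j * Gamma)) ** 2
    Ca = pref / (domega - (U + Uc) + 1j * Gamma)
    Cb = pref / (domega - (U - Uc) + 1j * Gamma)
    return Ca, Cb

Ca, Cb = two_magnon_coeffs(domega, U, Uc, Gamma)
Ca0, Cb0 = two_magnon_coeffs(domega, U, 0.0, Gamma)   # U_c -> 0 reference values

# With U_c = 0 the two coefficients coincide ...
assert abs(Ca0 - Cb0) < 1e-12
# ... while with U_c != 0 they differ from that reference:
assert abs(Ca - Ca0) > 0 and abs(Cb - Cb0) > 0
```

Note that the $U_c \to 0$ reference values are equal, which anticipates the subtraction discussed next.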
We remark that one must subtract from $C_{\textnormal{2-magnon}}$ the corresponding coefficients with $U_c = 0$ to obtain the actual contributions of these two processes to the two-magnon \emph{transition}, since the contributions of these two processes with $U_c$ set to $0$, together with other virtual processes in which the intermediate virtual exchange does not involve the site $i$, constitute a part of the $T$-matrix that is proportional to the Heisenberg Hamiltonian and thus is not responsible for any actual transitions. \begin{figure} \begin{center} \subfigure[\label{fig:SSS_fail_1}]{\includegraphics[scale=0.65]{SSS_fail_1.eps}} \quad \subfigure[\label{fig:SSS_fail_2}]{\includegraphics[scale=0.65]{SSS_fail_2.eps}} \caption{\label{fig:SSS_fail} (Color online) Two processes whose contributions to the spin-chirality terms cancel out each other.} \end{center} \end{figure} \section{Spin-chirality terms in RIXS} \label{sect:SSS} To obtain the spin-chirality terms, at least three lattice sites must be involved in the virtual processes. For example, one might consider the higher order processes in the $1s \rightarrow 4p$ RIXS shown in Figs.~\ref{fig:SSS_fail_1} and \ref{fig:SSS_fail_2}. 
From the modified Shastry--Shraiman formalism, the associated chains of electron operators are, respectively: \begin{align} T^{(a)}_{\textnormal{3-sites}} & \propto (c^\dagger_{is} (J^\dagger)^{\beta}_{sp} c_{ip}) \cc[t_{i\ell}]{i}{\ell} \cc[t_{\ell j}]{\ell}{j} \cc[t_{ji}]{j}{i} (c^\dagger_{i p'} J^\alpha_{p' s'} c_{i s'}) \notag \\ & = \operatorname{tr} \{ J^{\alpha} (J^\dagger)^{\beta} \} t_{i\ell}t_{\ell j} t_{ji} \operatorname{tr} \{ \chi_\ell \chi_j \tilde{\chi}_i \} \notag \\ & = N^{\alpha\beta} t_{i\ell}t_{\ell j} t_{ji} \left(2 i \SSS{\ell}{j}{i} + \ldots\right) \punct{,} \label{eq:SSS_fail_1} \end{align} \begin{align} T^{(b)}_{\textnormal{3-sites}} & \propto (c^\dagger_{is} (J^\dagger)^{\beta}_{sp} c_{ip}) \cc[t_{ij}]{i}{j} \cc[t_{j\ell}]{j}{\ell} \cc[t_{\ell i}]{\ell}{i} (c^\dagger_{i p'} J^\alpha_{p' s'} c_{i s'}) \notag \\ & = \operatorname{tr} \{ J^{\alpha} (J^\dagger)^{\beta} \} t_{ij}t_{j \ell} t_{\ell i} \operatorname{tr} \{ \chi_j \chi_\ell \tilde{\chi}_i \} \notag \\ & = N^{\alpha\beta} t_{i\ell}t_{\ell j} t_{ji} \left(2 i \SSS{j}{\ell}{i} + \ldots \right) \punct{.} \label{eq:SSS_fail_2} \end{align} Moreover, it can be seen that the coefficient coming from the energy denominators is $C_{\textnormal{3-sites}} = (\delta\omega + i \Gamma)^{-2} (\delta\omega - (U - U_c) + i \Gamma)^{-2}$ for both processes. Hence, while each process by itself contributes to the spin-chirality terms, the sum of their contributions vanishes.\footnote{Note however that the sum of their contributions to the $\vv{S} \cdot \vv{S}$ terms (omitted in the $\ldots$) does not vanish.} Similar calculations show that the cancellation also occurs in the two analogous virtual processes in which electrons hop around the four-site loop in the square lattice. Furthermore, it can be shown that such cancellations also occur for similar virtual processes in the $2p \rightarrow 3d$ RIXS. 
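The cancellation just described can be checked directly. Since the spins on the three distinct sites commute, the Shastry--Shraiman traces reduce to ordinary matrix traces with the $\vv{S}$'s replaced by commuting 3-vectors. The sketch below (assuming the conventions $\chi = 1/2 - \vv{S} \cdot \vvsym{\tau}$ and $\tilde{\chi} = 1/2 + \vv{S} \cdot \vvsym{\tau}$ with full Pauli matrices; the vectors are arbitrary but chosen non-coplanar) verifies the underlying identity $\operatorname{tr}\{(\vv{a}\cdot\vvsym{\tau})(\vv{b}\cdot\vvsym{\tau})(\vv{c}\cdot\vvsym{\tau})\} = 2i\,\vv{a}\cdot(\vv{b}\times\vv{c})$ and shows that the chirality parts of the two loop orientations cancel while the $\vv{S} \cdot \vv{S}$ parts do not:

```python
import numpy as np

tau = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)

def dot(s):                      # s . tau
    return np.einsum('a,aij->ij', np.asarray(s, dtype=complex), tau)

def chi(s):                      # chi   = 1/2 - S.tau
    return 0.5 * np.eye(2) - dot(s)

def chit(s):                     # chi~  = 1/2 + S.tau
    return 0.5 * np.eye(2) + dot(s)

# Commuting stand-ins for S_l, S_j, S_i (non-coplanar, so the chirality is nonzero)
sl, sj, si = np.array([2., 0., 0.]), np.array([1., 1., 0.]), np.array([1., 0., 1.])

# tr{(a.tau)(b.tau)(c.tau)} = 2i a.(b x c)
lhs = np.trace(dot(sl) @ dot(sj) @ dot(si))
assert np.allclose(lhs, 2j * np.dot(sl, np.cross(sj, si)))

# The two loop orientations: chirality parts live in the imaginary parts of the traces
Ta = np.trace(chi(sl) @ chi(sj) @ chit(si))
Tb = np.trace(chi(sj) @ chi(sl) @ chit(si))
assert np.isclose(Ta.imag, 2 * np.dot(sl, np.cross(sj, si)))   # 2i S_l.(S_j x S_i)
assert np.isclose((Ta + Tb).imag, 0.0)          # the spin-chirality parts cancel
assert abs((Ta + Tb).real - 0.5) > 0.1          # but the S.S parts do not
```

Reversing the order of the two $\chi$ factors flips the sign of the triple product regardless of the normalization convention, so the cancellation itself is convention-independent.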
In Raman scattering, the analogous processes, in which the hops \textit{i} and \textit{v} in Fig.~\ref{fig:SSS_fail} do not exist and in which the hops \textit{ii} and \textit{iv} are photon-induced, do not cancel each other out. Instead, the anticlockwise loop in Fig.~\ref{fig:SSS_fail_1} contributes to the $\efei{y}{x}$ photon polarization channel while the clockwise loop in Fig.~\ref{fig:SSS_fail_2} contributes to the $\efei{x}{y}$ channel, resulting in a nonvanishing contribution to the $(\efei{x}{y} - \efei{y}{x})$ channel when summed. This suggests that in order for the spin-chirality terms to be realized in the scattering $T$-matrix, it is crucial for the photon polarizations to be coupled with the directions of \emph{inter-site} electron hops---a link that does not appear in the usual RIXS setups. That said, one should also note that the dipole moment between a core orbital at one site and a valence orbital at one of its nearest-neighbor sites is in general nonvanishing. Thus, in principle, a photon from the incident x-ray beam can also induce a core-to-valence excitation across the two sites. Such an inter-site transition is in general suppressed by the reduced wavefunction overlap and is thus masked by the corresponding intra-site transition. Moreover, for hard x-rays the distance between two nearest-neighbor sites may also span many wavelengths of the incident x-ray, which further reduces the transition amplitude for such an inter-site transition at near-horizontal incidence (relative to the two-dimensional lattice plane) as a result of the rapid oscillation of the electric field across the two sites. However, if the frequency of the incident x-ray is tuned to that of a forbidden atomic transition (e.g., the $1s \rightarrow 3d$ transition in the Cu$^{2+}$ materials), then the near-resonant dipole inter-site transition needs only to compete with a near-resonant \emph{quadrupole} intra-site transition and a \emph{detuned} dipole intra-site transition.
With adequate luminosity, this may allow the signals from the inter-site transition to manifest in the spectrum. Indeed, contributions from such inter-site transitions have previously been identified in the x-ray absorption and Auger spectroscopy of\cite{Uozumi:EPL:1992, Danger:PRL:2002, Uozumi:JESRP:2004} TiO$_2$ and\cite{Shukla:PPL:2006, Kotani:PRB:2008} La$_2$CuO$_4$. Moreover, for nearly two-dimensional materials such as cuprates and herbertsmithite, the rapid oscillation of the electric field between two lattice sites can be alleviated by arranging the x-ray to be at near-normal incidence relative to the two-dimensional lattice plane. Generally, by tuning the angle of incidence, the electric field across two nearest-neighbor sites can be made relatively uniform while a sufficiently large in-plane momentum of the photon is maintained, such that a significant portion of the Brillouin zone can be explored. Furthermore, one may also consider resonances induced by soft x-rays (e.g., the $2s \rightarrow 3d$ and the $3s \rightarrow 3d$ resonances in cuprates), which have larger wavefunction overlaps between the core orbitals and their nearest-neighbor valence orbitals. For the rest of this section we shall assume that the signals from such inter-site transitions can indeed be detected, and consider in detail whether the spin-chirality terms can arise in such a case.
Specifically, we shall focus on $s \rightarrow 3d$ inter-site transitions in Cu$^{2+}$ materials with the square and the kagome lattice geometries, with the real materials cuprates and herbertsmithite in mind. We shall also briefly comment on the cases of the triangular and the honeycomb lattices, in which the derivations of the spin-chirality terms are closely related to those of the square and the kagome lattices, respectively, and of which the former may be relevant to the new spin liquid candidate\cite{HDZhou:PRL:2011} Ba$_3$CuSb$_2$O$_9$. For brevity we shall drop the factors $g_i$ and $g_f$ that are common to all virtual processes. In such virtual processes with photon-induced inter-site hopping, it is easy to check that the intermediate state obtained after a photon-induced hop has an energy denominator of $\E_D = \delta\omega - (U - U_c) + i\Gamma$, where $\delta\omega = \omega_i - (\E_{3d} - \E_{\textnormal{core}})$ is again the detuning from the atomic transition. The resonant condition is thus given by $\omega_i \approx (\E_{3d} -\E_{\textnormal{core}}) + (U - U_c)$, under which $\E_D \approx i \Gamma$. \modified{For cuprates, $U \approx 8.8$~eV and $U_c \approx 7.0$~eV, while $\Gamma \approx 0.75$~eV for the \emph{$K$-edge}.\cite{Forte:PRB:2008} In comparison, in the effective one-band Hubbard model for cuprates, $t \approx 0.4$~eV.\cite{Hybertsen:PRB:1990}} Observe that the expression of $\E_D$ involves $U_c$, suggesting that the true resonant frequency of the inter-site dipole transition is offset from that of the intra-site quadrupole transition by $U_c$.
Indeed, the relative frequency shift between the inter-site dipole transition and the intra-site quadrupole transition has been used to explain the ``three peaks'' feature of Ti pre-\textit{K}-edge absorption spectra in TiO$_2$.\cite{Uozumi:EPL:1992} The existence of such a frequency shift may thus allow the signals from the intra-site quadrupole transition to be further suppressed relative to the inter-site dipole transition when the frequency of the incident photon is tuned. \begin{figure} \begin{center} \subfigure[\label{fig:orientation_square}]{\includegraphics[scale=0.65]{orientation_square.eps}} \qquad \subfigure[\label{fig:orientation_kagome}]{\includegraphics[scale=0.65]{orientation_kagome.eps}} \caption{\label{fig:orientation} (Color online) Orientations of the $3d$ orbitals in (a) cuprates and (b) herbertsmithite, and their effects on the signs of the photon-induced hopping amplitudes in the $s \rightarrow 3d$ RIXS. Here the red (solid) and cyan (shaded) fillings indicate the relative signs of the angular part of the electron wavefunctions. Note from the figure that none of the photon-induced hopping magnitudes is required to vanish by symmetry. } \end{center} \end{figure} If we further assume that $\Gamma \ll U$ and that $U$ and $U_c$ are of the same order of magnitude, assumptions that appear to be valid in cuprates, then the virtual processes are dominated by those having intermediate states with exactly one low-energy valence doublon \emph{not located at the core-hole site}, and we can organize the $T$-matrix as an expansion in $t/\Gamma$ similar to that in the Raman case.
Since the effect of the photon polarizations is now mostly reflected in the directions of the induced electron hops and since the spin-orbit coupling is negligible, we may take $H_C^{(1)} \sim \pm J (\vv{e}_{\alpha} \cdot \vvsym{\mu}) \sum_{\sigma} c^\dagger_{i+\mu, \sigma} c_{i,s,\sigma} \equiv \pm J (\vv{e}_{\alpha} \cdot \vvsym{\mu}) (c^\dagger_{i+\mu} c_{i,s})$ for the electron hop associated with the virtual absorption of photon and $H_C^{(1)} \sim \pm J (\overline{\vv{e}}_{\alpha} \cdot \vvsym{\mu}) \sum_{\sigma} c^\dagger_{i+\mu, s, \sigma} c_{i,\sigma} \equiv \pm J (\overline{\vv{e}}_{\alpha} \cdot \vvsym{\mu}) (c^\dagger_{i+\mu, s} c_{i})$ for the electron hop associated with the virtual emission of photon, in which operators with the subscript $s$ correspond to the $s$ orbitals while operators without orbital labels correspond to the valence $3d$ orbitals. Note that $J$ is now a real scalar constant. The $\pm$ signs in the above equations are determined by the relative orientations of the $d$ orbitals and are illustrated\footnote{The $3d$ orbitals in herbertsmithite are also rotated relative to the kagome plane.\cite{Shores:JACS:2005} While this would affect the precise values of the hopping magnitudes, it should not affect the sign pattern as presented in the figure or the statement that the amplitudes are non-vanishing.} in Fig.~\ref{fig:orientation}. \begin{figure} \begin{center} \subfigure[\label{fig:SSS_sq_1}]{\includegraphics[scale=0.8]{square_1.eps}} \qquad \subfigure[\label{fig:SSS_sq_2}]{\includegraphics[scale=0.8]{square_2.eps}} \caption{\label{fig:SSS_sq} (Color online) Two processes with inter-site photon-induced transitions that contribute to the spin-chirality terms in the $s \rightarrow 3d$ RIXS. } \end{center} \end{figure} Now consider the particular case of the square lattice with only uniform nearest-neighbor hopping $t$.
For such a lattice, virtual processes that involve valence electrons on at least three sites first appear at the order of two internal hops (i.e., at the ($t^2 J^2/\E_D^3$)-th order). Two such processes are depicted in Figs.~\ref{fig:SSS_sq_1} and \ref{fig:SSS_sq_2}. The corresponding contributions to the $T$-matrix are, respectively: \begin{align} T^{(a)}_{\textnormal{sq}} & = \frac{1}{\E_D^3} \cc[\ef{y} J]{1,s}{3} \cc[t]{3}{4} \cc[t]{4}{2} \cc[\ei{x} J]{2}{1,s} \notag \\ &= \frac{t^2 J^2}{\E_D^3} \efei{y}{x} \operatorname{tr}\{ \chi_3 \chi_4 \chi_2 \} \notag \\ & \doteq -\frac{2 i t^2 J^2}{\E_D^3} \efei{y}{x} \ \SSS{3}{4}{2} \notag \\ & = - \frac{2 i t^2 J^2}{\E_D^3} \efei{y}{x} \times \yytox \punct{,} \label{eq:SSS_sq_1} \end{align} \begin{align} T^{(b)}_{\textnormal{sq}} & = \frac{1}{\E_D^3} \cc[-\ef{x} J]{1,s}{2} \cc[t]{2}{4} \cc[t]{4}{3} \cc[-\ei{y} J]{3}{1,s} \notag \\ &= \frac{t^2 J^2}{\E_D^3} \efei{x}{y} \operatorname{tr}\{ \chi_2 \chi_4 \chi_3 \} \notag \\ & \doteq - \frac{2 i t^2 J^2}{\E_D^3} \efei{x}{y} \ \SSS{2}{4}{3} \notag \\ & = \frac{2 i t^2 J^2}{\E_D^3} \efei{x}{y} \times \yytox \punct{,} \label{eq:SSS_sq_2} \end{align} where $\doteq$ denotes the part of the $T$-matrix that contains the spin-chirality terms, and a graphic representation of the spin-chirality terms has been adopted in the fourth lines of Eqs.~(\ref{eq:SSS_sq_1}) and (\ref{eq:SSS_sq_2}). For a fixed core-hole site, at this order there are three additional pairs of processes that contribute to the spin-chirality terms, which can be obtained from the processes depicted in Fig.~\ref{fig:SSS_sq} by successive $90^\circ$ rotations about the core-hole site.
Summing over all these processes, to this order the contribution to the spin-chirality terms by a core hole at site $i$ is given by: \begin{align} T^{i}_{\textnormal{sq}} & \doteq \frac{2 i t^2 J^2}{\E_D^3} (\efei{x}{y} - \efei{y}{x}) \notag \\ & \quad \times \left( \ytoxx + \xxtoyy + \yytox + \xtoy \right) \punct{.} \label{eq:SSS_sq_site} \end{align} Summing over all possible core-hole sites, and now restoring the exponential factor $e^{i (\vv{k}_i - \vv{k}_f) \cdot \vv{r}_i}$, we have:\footnote{To be accurate, the position vector $\vv{r}$ in the exponential factor should be located at the bond center of the respective photon-induced hop. However, assuming that the momentum transferred is a fraction of the reciprocal lattice vector, it is permissible to neglect displacements that are only fractions of the lattice spacing.} \begin{align} T_{\textnormal{sq}} & \doteq \sum_{R} \frac{2 i t^2 J^2}{\E_D^3} (\efei{x}{y} - \efei{y}{x}) e^{i (\vv{k}_i - \vv{k}_f) \cdot \vv{R}} \notag \\ & \quad \times \left( \ytoxx + \xxtoyy + \yytox + \xtoy \right)_\vv{R} \punct{,} \label{eq:SSS_sq_sum} \end{align} where the subscript $\vv{R}$ next to the parenthesis labels the site with which the spin-chirality terms are associated. From Eq.~(\ref{eq:SSS_sq_sum}) we see that for the square lattice there are indeed contributions to the $T$-matrix that couple to the spin-chirality terms at momentum $\vv{q} = \vv{k}_i - \vv{k}_f$ (i.e., the momentum transferred by the photon). \begin{figure} \begin{center} \includegraphics[scale=0.8]{SquareTriangular.eps} \caption{\label{fig:SquareTriangular} (Color online) Mapping of two-internal-hop processes between the square and the triangular lattices. 
} \end{center} \end{figure} The above analysis for the square lattice can be readily modified to the case of the triangular lattice, since processes with fewer than two internal hops can involve valence electrons at at most \emph{two} sites and hence do not give rise to any spin-chirality terms, while the two-internal-hop processes in the triangular lattice and the square lattice are topologically the same (see Fig.~\ref{fig:SquareTriangular} for illustration). For example, the contribution to the spin-chirality terms by the process depicted in Fig.~\ref{fig:SquareTriangular}(b) can be read off as: \begin{equation} T^{(b)}_{\textnormal{tri}} \doteq \frac{2 i t^2 J_i J_f}{\E_D^3} \efei{w}{x} \raisebox{-5pt}{\includegraphics{dntriangle.eps}} \punct{,} \label{eq:SSS_tri_ex} \end{equation} where the superscript $w$ in the photon polarization $\ef{w}$ corresponds to the unit vector $\uv{w}$ as depicted in the figure. Assuming that all photon-induced hops have the same amplitude, so that $J_i = J_f = J$ for all processes, we can sum up all contributions as in the square lattice case to obtain: \begin{align} T_{\textnormal{tri}} & \doteq \sum_{R} \frac{\sqrt{3} i t^2 J^2}{2 \E_D^3} (\efei{x}{y} - \efei{y}{x}) e^{i (\vv{k}_i - \vv{k}_f) \cdot \vv{R}} \notag \\ & \quad \times \Bigg( 3 \raisebox{-5pt}{\includegraphics{uptriangle.eps}} + 3 \raisebox{-5pt}{\includegraphics{dntriangle.eps}} + \raisebox{-5pt}{\includegraphics{x2ww.eps}} + \raisebox{-10pt}{\includegraphics{ww2v.eps}} \notag \\ & \quad \quad + \raisebox{-5pt}{\includegraphics{v2xx.eps}} + \raisebox{-5pt}{\includegraphics{xx2w.eps}} + \raisebox{-10pt}{\includegraphics{w2vv.eps}} + \raisebox{-5pt}{\includegraphics{vv2x.eps}} \Bigg)_\vv{R} \punct{.} \label{eq:SSS_tri_sum} \end{align} Hence, as in the square lattice case, the spin-chirality terms do appear in the triangular lattice at the ($t^2 J^2/\E_D^3$)-th order.
\begin{figure} \begin{center} \subfigure[\label{fig:SSS_kgm_fail_0}]{\includegraphics[scale=0.9]{kagome_fail_0.eps}} \quad \subfigure[\label{fig:SSS_kgm_fail_1}]{\includegraphics[scale=0.9]{kagome_fail_1.eps}} \quad \subfigure[\label{fig:SSS_kgm_fail_2}]{\includegraphics[scale=0.9]{kagome_fail_2.eps}} \\ \subfigure[\label{fig:SSS_kgm_fail_3a}]{\includegraphics[scale=0.9]{kagome_fail_3a.eps}} \quad \subfigure[\label{fig:SSS_kgm_fail_3b}]{\includegraphics[scale=0.9]{kagome_fail_3b.eps}} \\ \subfigure[\label{fig:SSS_kgm_fail_3c}]{\includegraphics[scale=0.9]{kagome_fail_3c.eps}} \quad \subfigure[\label{fig:SSS_kgm_fail_3d}]{\includegraphics[scale=0.9]{kagome_fail_3d.eps}} \caption{\label{fig:SSS_kgm_fail} (Color online) Topologically distinct resonant virtual processes with inter-site photon-induced transitions in the $s \rightarrow 3d$ RIXS in the kagome lattice, up to three internal hops. } \end{center} \end{figure} Next we consider the kagome lattice with only uniform nearest-neighbor hopping $t$. In Fig.~\ref{fig:SSS_kgm_fail} we list all the distinct \emph{topologies} (as opposed to geometries; e.g., Fig.~\ref{fig:SSS_kgm_fail_2} also represents processes in which the third site is located elsewhere) of resonant virtual processes up to three internal hops. Unfortunately, none of these processes generate any spin-chirality terms. To see this, first observe that the processes depicted in Figs.~\ref{fig:SSS_kgm_fail_0}--\ref{fig:SSS_kgm_fail_3a} involve valence electrons at only \emph{two or fewer} sites and thus cannot generate any spin-chirality terms (note that only \emph{core} electrons are involved at site 1 in all of the processes listed in Fig.~\ref{fig:SSS_kgm_fail}).
Next, to rule out the processes depicted in Figs.~\ref{fig:SSS_kgm_fail_3b}--\ref{fig:SSS_kgm_fail_3d}, note that a spin state is annihilated by two successive creation operators or two successive annihilation operators on the same site, \emph{regardless of the spin characters of these two operators}. Consequently, if a site is traversed more than once in a virtual process, then each internal loop contributes to a separate trace in the Shastry--Shraiman derivation. For instance, corresponding to Fig.~\ref{fig:SSS_kgm_fail_3d} we have: \begin{align} T^{(g)}_{\textnormal{kagome}} & = \efei{x}{x} \frac{t^3 J^2}{\E_D^4} \cc{1,s}{2} \cc{2}{3'} \cc{3'}{1'} \cc{1'}{2} \cc{2}{1,s} \notag \\ & = \efei{x}{x} \frac{t^3 J^2}{\E_D^4} \cc{1,s}{2} \operatorname{tr} \{ \chi_{3'} \chi_{1'} \} \cc{2}{1,s} \notag \\ & = \efei{x}{x} \frac{t^3 J^2}{\E_D^4} \operatorname{tr}\{ \chi_{2} \} \operatorname{tr} \{ \chi_{3'} \chi_{1'} \} \notag \\ & = \efei{x}{x} \frac{t^3 J^2}{\E_D^4} \left( \frac{1}{2} + 2 \SdotS{3'}{1'} \right) \punct{.} \label{eq:SSS_kgm_fail_3g} \end{align} Similarly, $T^{(e)}_{\textnormal{kagome}} \sim ( 1/2 + 2 \SdotS{2}{3})$, and likewise for $T^{(f)}_{\textnormal{kagome}}$. \begin{figure} \begin{center} \subfigure[\label{fig:SSS_kgm_1}]{\includegraphics[scale=0.9]{SSS_kagome_1.eps}} \\ \subfigure[\label{fig:SSS_kgm_2a}]{\includegraphics[scale=0.9]{SSS_kagome_2a.eps}} \qquad \subfigure[\label{fig:SSS_kgm_2b}]{\includegraphics[scale=0.9]{SSS_kagome_2b.eps}} \caption{\label{fig:SSS_kgm} (Color online) Virtual processes with inter-site photon-induced transitions that contribute to the spin-chirality terms in the $s \rightarrow 3d$ RIXS in the kagome lattice. } \end{center} \end{figure} For the kagome lattice, the spin-chirality terms first appear at the $(t^4 J^2/\E_D^5)$-th order, which arise from virtual processes in which the doublon hops through a hexagon.
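The factorization into separate traces in Eq.~(\ref{eq:SSS_kgm_fail_3g}) above, and the resulting absence of any chirality term, can again be checked with commuting stand-in vectors. A sketch (same hedged conventions as before, with full Pauli matrices; the vectors are arbitrary):

```python
import numpy as np

tau = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)

def chi(s):   # chi = 1/2 - S.tau
    return 0.5 * np.eye(2) - np.einsum('a,aij->ij', np.asarray(s, dtype=complex), tau)

# Commuting stand-ins for S_2, S_3', S_1'
s2, s3p, s1p = np.array([2., 0., 0.]), np.array([1., 1., 0.]), np.array([1., 0., 1.])

t2 = np.trace(chi(s2))                 # trace from the back-tracked site
t31 = np.trace(chi(s3p) @ chi(s1p))    # trace from the two-site internal loop

assert np.isclose(t2, 1.0)                          # tr{chi_2} is spin-independent
assert np.isclose(t31, 0.5 + 2 * np.dot(s3p, s1p))  # tr{chi_3' chi_1'} = 1/2 + 2 S.S
assert np.isclose((t2 * t31).imag, 0.0)             # no spin-chirality term survives
```

A two-matrix trace can only produce a scalar plus an $\vv{S} \cdot \vv{S}$ term, never a triple product, which is why the back-tracking topologies cannot generate chirality.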
One such process is depicted in Fig.~\ref{fig:SSS_kgm_1}, whose contribution is given by: \begin{align} T^{(\textnormal{hex})}_{\textnormal{kagome}} & = \efei{w}{x} \frac{t^4 J^2}{\E_D^5} \cc{1,s}{6} \cc{6}{5} \cc{5}{4} \cc{4}{3} \cc{3}{2} \cc{2}{1,s} \notag \\ & = \efei{w}{x} \frac{t^4 J^2}{\E_D^5} \operatorname{tr}\{ \chi_6 \chi_5 \chi_4 \chi_3 \chi_2 \} \notag \\ & \doteq \efei{w}{x} \frac{i t^4 J^2}{2 \E_D^5} \sum_{6 \geq a > b > c \geq 2} \vv{S}_{a} \cdot (\vv{S}_{b} \times \vv{S}_{c}) \punct{,} \label{eq:SSS_kgm_hex} \end{align} where the superscript $w$ in $\overline{e}^{w}_{f}$ corresponds to the unit vector $\hat{w}$ as depicted in the figure. One can check that processes with the same topology as the one depicted in Fig.~\ref{fig:SSS_kgm_1} sum to a nonzero contribution to the spin-chirality terms in the $(\efei{x}{y} - \efei{y}{x})$ channel at momentum $\vv{q}$ equal to that transferred by the photon, which for brevity we shall not write down explicitly. It can also be checked that any process at this order with a different topology does not contribute to any spin-chirality terms. Compared with the corresponding terms in the square lattice, the spin-chirality terms in the kagome lattice are down by a factor of $(t/\E_D)^2$, which can be significant in the limit where $t \ll \Gamma$ even if the resonant condition is met. In such a case one may want to consider also processes in which not all energy denominators are equal to $\E_D$. With this relaxed criterion, contributions to the spin-chirality terms can be found at the order of two internal hops, in which the doublon hops through the core-hole site (see Figs.~\ref{fig:SSS_kgm_2a} and \ref{fig:SSS_kgm_2b} for illustrations).
For instance, the process depicted in Fig.~\ref{fig:SSS_kgm_2a} contributes: \begin{align} T^{(U_c)}_{\textnormal{kagome}} & = -\efei{w}{x} \frac{t^2 J^2}{\E_D^2 (\E_D + U_c) } \cc{1,s}{3} \cc{3}{1} \cc{1}{2} \cc{2}{1,s} \notag \\ & = -\efei{w}{x} \frac{t^2 J^2}{\E_D^2 (\E_D + U_c) } \operatorname{tr}\{ \chi_3 \chi_1 \chi_2 \} \notag \\ & \doteq -\efei{w}{x} \frac{2 i t^2 J^2}{\E_D^2 (\E_D + U_c) } \vv{S}_{3} \cdot (\vv{S}_{1} \times \vv{S}_{2}) \punct{.} \label{eq:SSS_kgm_Uc} \end{align} Again it can be checked that all processes with the same topology as the one depicted in Fig.~\ref{fig:SSS_kgm_2a} (which includes the one depicted in Fig.~\ref{fig:SSS_kgm_2b}) sum to a nonzero contribution to the spin-chirality terms in the $(\efei{x}{y} - \efei{y}{x})$ channel. It is worth noting that such ``back-tracking'' processes are also present in the square lattice and carry opposite signs from the ordinary ones depicted in Fig.~\ref{fig:SSS_sq}. Thus in the limit where $t \ll \Gamma$, the ratio of prefactors in the spin-chirality terms in the kagome lattice over that in the square lattice is given by $(\E_D + U_c)^{-1} / \left( \E_D^{-1} - (\E_D + U_c)^{-1} \right) = \E_D / U_c$. However, $t/\Gamma$ is not expected to be small in the case of cuprates. Since the honeycomb lattice has the same hexagon loops as in the kagome lattice and has no shorter (in terms of the number of hops) loops, it can be readily checked that the spin-chirality terms again first appear in the honeycomb lattice at the $(t^4 J^2/\E_D^5)$-th order when $t \lesssim |\E_D|$ and at the $(t^2 J^2/ (\E_D + U_c)\E_D^2)$-th order when $t \ll |\E_D|$, with Figs.~\ref{fig:SSS_kgm_1} and \ref{fig:SSS_kgm_2b} showing the typical contributing processes in the respective cases.
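The prefactor ratio quoted above is a simple algebraic identity and can be verified numerically. A small sketch using the cuprate-like values quoted earlier in the text ($\Gamma \approx 0.75$~eV, $U_c \approx 7.0$~eV) and taking $\E_D \approx i\Gamma$ at resonance:

```python
import numpy as np

Gamma, Uc = 0.75, 7.0      # eV; values quoted earlier in the text
ED = 1j * Gamma            # at resonance, E_D ~ i*Gamma

# Ratio of kagome to square spin-chirality prefactors in the t << Gamma limit
ratio = (1 / (ED + Uc)) / (1 / ED - 1 / (ED + Uc))
assert np.isclose(ratio, ED / Uc)   # algebraic identity: the ratio equals E_D / U_c

print(abs(ratio))   # ~ 0.107, i.e., kagome terms suppressed by |E_D| / U_c
```

The identity follows since $\E_D^{-1} - (\E_D + U_c)^{-1} = U_c / [\E_D (\E_D + U_c)]$, so the quotient reduces to $\E_D / U_c$ for any $\E_D$ and $U_c$.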
\section{Discussion and Conclusions} \label{sect:discussions} In this paper, we consider the question of whether RIXS can be used to detect many-body excitations that are coupled to the spin-chirality terms in a Mott insulator. We find that the spin-chirality terms are in general absent in the usual experimental setups, in which the spectroscopy is done near an absorption edge. The absence of the spin-chirality terms in these setups can be traced to the lack of linkage between the virtual electron hops and the photon polarizations. However, we argue that RIXS still holds a prospect of observing the effects of the spin-chirality terms if one instead considers spectroscopy near a \emph{pre-edge}, in which case the intra-site dipole transitions are forbidden. Focusing on the Cu$^{2+}$ materials with the square and the kagome lattice geometries, we find that the spin-chirality terms are indeed present in both cases under our new proposal. However, in the kagome case such terms appear only at a higher order in our expansion. In addition, we also find that as far as the spin-chirality terms are concerned, the scenario for the triangular lattice is analogous to that of the square lattice, while the scenario for the honeycomb lattice is analogous to that of the kagome lattice. It is worth noting that the situation we encounter in RIXS is essentially the reverse of what happens in the Raman case, in which the spin-chirality terms occur at the ($t^4/U^3$)-th order in the kagome and the honeycomb lattices but not in the square or the triangular lattices. In comparison to the similar scheme to detect the spin-chirality terms in Raman spectroscopy,\cite{Shastry:PRL:1990,Shastry:IJMPB:1991} which has already been realized,\cite{Sulewski:PRL:1991} the present scheme in RIXS suffers from the reduced wavefunction overlaps in the inter-site dipole transitions. However, it has the advantage that excitations with finite momentum can be probed. 
To put this into perspective, let us return to the motivation we presented in the introduction, namely the emergent gauge boson in the $U(1)$ Dirac spin liquid. In the $U(1)$ Dirac spin liquid, the spin-chirality terms in the $T$-matrix correspond to flux-flux correlators, viz.: \begin{align} \sum_f W_{fi} & = \sum_f 2\pi |\bra{f} T \ket{i}|^2 \delta(\E_f - \E_i) \\ & \sim \bra{i} b(\vv{\Delta k}, \Delta\omega) b(\vv{0}, 0) \ket{i} + \cdots \notag \\ & \propto \frac{q^2 \Theta(\Delta\omega - v_F \Delta k)}{(\Delta\omega^2 - v_F^2 \Delta k^2)^{1/2}} + \cdots \punct{,} \label{eq:spectral} \end{align} where $\Theta$ denotes the step function, $v_F$ is the Fermi velocity at the Dirac cone of the $U(1)$ Dirac spin liquid, $b$ is the ``magnetic field'' associated with the emergent gauge boson, and $\Delta\omega = \omega_i - \omega_f$ ($\vv{\Delta k} = \vv{k}_i - \vv{k}_f$) is the energy (momentum) transferred from the photon. If we assume that $\E_D$ in RIXS and $(\omega_i - U)$ in Raman spectroscopy are of the same order, the intensity of the signal from the gauge boson in RIXS will be modified from that in Raman spectroscopy by a factor roughly equal to $J^2 \E_D/t^2 (\E_D + U_c)$ or $J^2/\E_D^2$, depending on which limit one considers in RIXS. However, such a comparison is not particularly meaningful since we have not considered how the \emph{background} signals compare in the two cases. In any case, the advantage offered by RIXS is not so much in the intensity of the signal but rather in its lineshape. In the Raman case where $\vv{\Delta k} \approx 0$, the signature of the emergent gauge boson can manifest only as a power-law behavior near zero energy transfer, which can easily be masked by the elastic or quasielastic peak. In contrast, in RIXS the signal from the gauge boson has a sharp threshold at $\Delta\omega = v_F \Delta k$, which varies as $\vv{\Delta k}$ varies. 
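Since the practical virtue claimed here is the momentum-dependent threshold rather than the overall intensity, a rough numerical sketch of the leading flux-flux lineshape in Eq.~(\ref{eq:spectral}) may be helpful. The units and parameter values ($v_F = 1$, $q = 1$) below are arbitrary illustrative choices:

```python
import numpy as np

def gauge_boson_signal(d_omega, d_k, v_f=1.0, q=1.0):
    """Leading flux-flux term: q^2 Theta(dw - v_F dk) / sqrt(dw^2 - (v_F dk)^2)."""
    d_omega = np.asarray(d_omega, dtype=float)
    out = np.zeros_like(d_omega)
    above = d_omega > v_f * d_k            # step function: zero below threshold
    out[above] = q**2 / np.sqrt(d_omega[above]**2 - (v_f * d_k)**2)
    return out

omega = np.linspace(0.0, 2.0, 2001)        # energy-transfer grid
# The sharp onset tracks the momentum transfer, dw = v_F * dk:
onsets = {dk: omega[gauge_boson_signal(omega, dk) > 0][0] for dk in (0.4, 0.8)}
```

Scanning $\vv{\Delta k}$ and watching the edge move is what would distinguish the gauge-boson signal from a quasielastic background.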
Thus, assuming modest intensities of the signals, it would be much easier to discern the emergent gauge boson in the case of RIXS. Of course, one should not underestimate the experimental challenges in realizing the proposal laid out in this paper. However, enormous progress in RIXS has been made in recent decades,\cite{Kotani:RMP:2001} with two-magnon excitations being observed\cite{Hill:PRL:2008, Braicovich:PRL:2009} and three-magnon excitations being proposed. \cite{Ament:arXiv:1002.3773} It is our hope that our proposal will further stimulate new theoretical and experimental advances in the field. \begin{acknowledgments} We thank George Sawatzky, Akio Kotani, and Peter Abbamonte for helpful information. This research was supported in part by the DOE under Grant No.\@ DE-FG02-03ER46076 (W.H.K and P.A.L.) and in part by the NSF under Grant No.\@ NSF PHY05-51164 (W.H.K). \end{acknowledgments} \input{RIXS_biblio.bbl} \end{document}
\section{Introduction} The distance scale to globular clusters is of great interest for two principal reasons. Firstly, the best estimates for the absolute ages of globular clusters require that the distance to the clusters be known (e.g.\ Renzini 1991; Chaboyer 1996). These age determinations provide the best estimate for the age of the universe, but are very sensitive to the adopted globular cluster distance scale. For example, a revision in the distance scale by 0.10 mag changes the derived ages by 10\%. The second reason for the interest in the globular cluster distance scale is that globular clusters contain stellar populations (RR Lyrae stars, tip of the red giant branch) which are commonly used as distance indicators in astronomy. Thus, globular clusters can serve as nearby calibrators of these standard candles. The release of the Hipparcos data set has caused a number of workers to re-examine the question of the globular cluster distance scale. Hipparcos provided high quality parallaxes for a number of nearby metal-poor stars, yielding a calibration of the absolute magnitude of metal-poor stars which could be used to derive the distance to globular clusters from main sequence fitting (Reid 1997, 1998; Gratton {\it et al.}\ 1997; Chaboyer {\it et al.}\ 1998; Grundahl {\it et al.}\ 1998; Pont {\it et al.}\ 1998). In addition, Hipparcos provided proper motions for a large number of RR Lyrae stars. These proper motions have been combined with the method of statistical parallax to estimate the absolute magnitude of RR Lyrae stars, one of the standard candles found in globular clusters (Fernley {\it et al.}\ 1998a). In this chapter, the results from these studies and other recent investigations of the globular cluster distance scale will be reviewed. This is not meant to be a comprehensive review, as only results from the last few years are discussed in detail. The calibration of the absolute magnitude of the RR Lyrae stars is presented in \S \ref{sect1}. 
Astrometric distances derived from internal proper motion and radial velocity studies are discussed in \S \ref{sect2}. Section \ref{sect3} contains a summary of results based upon main sequence fitting; a more complete discussion may be found in the chapter by Gratton {\it et al.}\ in this volume. The results of white dwarf sequence fitting are presented in \S \ref{sect4}, while the potential of other distance indicators is discussed in \S \ref{sect5}. The various results are compared in \S \ref{sect7} which summarizes the current status of the globular cluster distance scale. \section{RR Lyrae stars \label{sect1}} RR Lyrae stars are radially pulsating variable stars, which have traditionally been used as standard candles in astronomy. As RR Lyrae stars are found in many globular clusters, they are one of the primary distance indicators to globular clusters. However, RR Lyrae stars are not perfect standard candles; it has been known for many years that their absolute magnitude (\hbox {$\rm M_v(RR)$}) is a function of metallicity (Sandage 1981a,b). This has traditionally been parameterized as a simple linear relationship between \hbox {$\rm M_v(RR)$}\ and \hbox{$ [{\rm Fe}/{\rm H}]$}: \begin{eqnarray} \hbox {$\rm M_v(RR)$}\ = \alpha\,\hbox{$ [{\rm Fe}/{\rm H}]$} + \beta. \label{eqmvrr} \end{eqnarray} There are a variety of different techniques which can be used to determine \hbox {$\rm M_v(RR)$} , and hence calibrate RR Lyrae stars as standard candles to be used in determining the distances to globular clusters. Some of these methods are best for ascertaining the zero-point of the \hbox {$\rm M_v(RR)$}-metallicity relation ($\beta$), while others are best in determining the variation of \hbox {$\rm M_v(RR)$}\ with metallicity ($\alpha$). The determination of each of these quantities will be discussed in turn. 
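Equation (\ref{eqmvrr}) is the workhorse of everything that follows: once $\alpha$ and $\beta$ are fixed, a cluster distance follows immediately from the mean apparent magnitude of its RR Lyrae stars. As a schematic illustration (in Python, with a placeholder calibration rather than any particular determination of $\alpha$ and $\beta$, applied to values like those quoted for M5 later in this chapter):

```python
def mv_rr(feh, alpha, beta):
    """Eq. (1): M_v(RR) = alpha * [Fe/H] + beta."""
    return alpha * feh + beta

def distance_modulus(v_rr_mean, feh, alpha, beta):
    """(m - M)_V = <V(RR)> - M_v(RR)."""
    return v_rr_mean - mv_rr(feh, alpha, beta)

# Placeholder calibration: alpha = 0.23, beta = 0.93,
# i.e. M_v(RR) ~ 0.56 at [Fe/H] = -1.6.
mu = distance_modulus(v_rr_mean=15.05, feh=-1.17, alpha=0.23, beta=0.93)
# -> ~14.39 mag
```
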
\subsection{Variation of \hbox {$\rm M_v(RR)$}\ with metallicity} Sandage (1981a,b) first derived the coefficients in equation (\ref{eqmvrr}) empirically, and estimated a ``steep'' slope $\alpha$ of 0.35. From a theoretical calibration based on synthetic horizontal branch (HB) population models, Lee {\it et al.}\ (1990, 1994) derived a ``shallower'' $\alpha$ in the range 0.17--0.19. It is important to note that theoretical models predict that a simple linear relationship between \hbox {$\rm M_v(RR)$}\ and \hbox{$ [{\rm Fe}/{\rm H}]$}\ does not exist. Theory predicts that stars of the same metallicity evolve through the RR Lyrae instability strip at different luminosities depending on whether they originate on the red or blue side of the instability strip. Thus \hbox {$\rm M_v(RR)$}\ depends on HB morphology. At a given metallicity, RR Lyrae variables are more luminous in clusters with blue HB morphology types than in clusters with red HB morphology types (Lee 1991). This difficulty suggests that the standard \hbox {$\rm M_v(RR)$}\ calibration should not be applied to globular clusters with extremely blue HB morphologies (e.g.\ see Fig.\ 1 in Caputo 1997). Furthermore, theoretical HB models do not predict a simple linear relationship between \hbox {$\rm M_v(RR)$}\ and \hbox{$ [{\rm Fe}/{\rm H}]$}\ even among globular clusters with similar HB types (Caputo 1997; Caloi {\it et al.}\ 1997). For example, with $[\alpha/{\rm Fe}] = +0.4$, Caputo (1997) predicts $\alpha=0.19$ for $\hbox{$ [{\rm Fe}/{\rm H}]$} < -1.6$ and $\alpha = 0.32$ for $\hbox{$ [{\rm Fe}/{\rm H}]$} > -1.6$. The large difference between these two slopes suggests that the traditional assumption of a simple linear relationship between \hbox {$\rm M_v(RR)$}\ and \hbox{$ [{\rm Fe}/{\rm H}]$}\ is not valid. 
This is certainly true in detail, but a global fit over the range of metallicities typically found in globular clusters with RR Lyrae stars ($-2.2 \le \hbox{$ [{\rm Fe}/{\rm H}]$} \le -1.0$) finds $\alpha = 0.25$, with a maximum deviation of 0.02 mag in \hbox {$\rm M_v(RR)$}. Even a fit over a very broad metallicity range ($-2.2 \le \hbox{$ [{\rm Fe}/{\rm H}]$} \le 0.0$) leads to maximum deviations of only 0.04 mag in \hbox {$\rm M_v(RR)$}\ between a simple linear fit and the relationship derived directly from the models. Given the small residuals between the linear fit and the \hbox {$\rm M_v(RR)$}\ values predicted by theoretical models, it is justified to assume a linear relation between \hbox {$\rm M_v(RR)$}\ and \hbox{$ [{\rm Fe}/{\rm H}]$}\ for distance determinations. The semi-empirical Baade-Wesselink method supports a shallow slope, $\alpha = 0.20\pm 0.04$ (Fernley {\it et al.}\ 1998b). Note that the Baade-Wesselink results include many high metallicity points ($\hbox{$ [{\rm Fe}/{\rm H}]$} \ge 0.5$), and the theoretical models would predict a bias to a higher slope. The shallow slope is also supported by HST observations of globular cluster HBs in M31, where $\alpha = 0.13\pm 0.07$ (Fusi Pecci {\it et al.} 1996). The clusters span a range in metallicity ($-1.8 \le \hbox{$ [{\rm Fe}/{\rm H}]$} \le -0.4$) which is unlikely to introduce a significant bias to the derived value of $\alpha$. Using the relation between Fourier decomposition and luminosity for RRab stars in globular clusters, Kov\'{a}cs \& Jurcsik (1996) determined that $\alpha$ is less than 0.20. Overall, the observations and theoretical models appear to favor somewhat shallow slopes ($\alpha \le 0.26$). In the rest of this review a value of $\alpha = 0.23\pm 0.04$ is adopted. The 1-$\sigma$ range of this value (0.19--0.27) encompasses the majority of recent determinations of $\alpha$. 
\subsection{Zero-point of \hbox {$\rm M_v(RR)$}\ with metallicity} \subsubsection{Statistical Parallax} A traditional method used to determine the absolute magnitude of RR Lyrae stars is statistical parallax (see Layden this volume). This method determines the absolute magnitude of RR Lyrae stars in the field. Hipparcos obtained a large number of proper motions which can be used in the statistical parallax solution. Current statistical parallax solutions find $\hbox {$\rm M_v(RR)$} = 0.77\pm 0.13$ mag at $<\hbox{$ [{\rm Fe}/{\rm H}]$} > = -1.60$ (Gould \& Popowski 1998; also Layden, this volume). Combining this with the estimate for the slope given in the previous section yields \begin{eqnarray} \hbox {$\rm M_v(RR)$}\ = (0.23\pm 0.04)(\hbox{$ [{\rm Fe}/{\rm H}]$} + 1.6) + (0.77\pm 0.13). \label{statpi} \end{eqnarray} \subsubsection{Calibration via the LMC} Given a distance estimate to the LMC, the observed magnitude of RR Lyrae stars in the LMC can be used to calibrate \hbox {$\rm M_v(RR)$}\ (e.g.\ Walker 1992). Walker (this volume) summarizes current distance estimates to the LMC, and concludes that the distance modulus to the LMC is $18.55\pm 0.10$ mag. Walker (1992) determined the mean magnitude of a large number of RR Lyrae stars in several clusters in the LMC. Combining this data with the above distance modulus to the LMC yields $\hbox {$\rm M_v(RR)$} = 0.39\pm 0.10$ mag at $<\hbox{$ [{\rm Fe}/{\rm H}]$}> = -1.90$. With $\alpha = 0.23\pm 0.04$ this yields \begin{eqnarray} \hbox {$\rm M_v(RR)$}\ = (0.23\pm 0.04)(\hbox{$ [{\rm Fe}/{\rm H}]$} + 1.6) + (0.46\pm 0.11) \label{lmc} \end{eqnarray} (allowing for an error of 0.15 dex in the mean LMC \hbox{$ [{\rm Fe}/{\rm H}]$}). A comparison between equations (\ref{statpi}) and (\ref{lmc}) indicates that these two methods for determining the zero-point of the \hbox {$\rm M_v(RR)$}-\hbox{$ [{\rm Fe}/{\rm H}]$}\ relation differ by $1.8\,\sigma$. 
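Comparisons like the one just made require translating each measurement to the common fiducial metallicity $\hbox{$ [{\rm Fe}/{\rm H}]$} = -1.6$ using the adopted slope. A small sketch of that bookkeeping (Python; the helper name is ours):

```python
def shift_mv_rr(mv, feh_measured, feh_target, alpha=0.23):
    """Translate an M_v(RR) measurement along the linear relation
    (Eq. 1) to a different fiducial metallicity."""
    return mv + alpha * (feh_target - feh_measured)

# LMC cluster RR Lyraes: 0.39 mag at <[Fe/H]> = -1.90 becomes the
# zero-point of Eq. (3) at [Fe/H] = -1.6:
zp_lmc = shift_mv_rr(0.39, feh_measured=-1.90, feh_target=-1.6)  # -> ~0.46 mag
```
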
\subsubsection{Theoretical HB Models} Theoretical stellar evolution models may be used to derive the absolute magnitude of the zero-age horizontal branch (ZAHB). It is important to note that the results of these calculations depend sensitively on the assumed helium abundance, along with the physics used in the construction of the stellar models. A change in the assumed main sequence helium abundance by 4\% (from $Y=0.23$ to $Y=0.24$ for example) leads to a change in the predicted HB luminosity of approximately 0.05 mag. Cassisi {\it et al.}\ (1998) show that improvements in the physics used in the theoretical models over the last 10 years have led to an increase in the predicted ZAHB luminosity by about 0.15 mag. For this reason, only globular cluster distance determinations based upon the latest input physics will be considered in this subsection. A number of authors have used theoretical ZAHB models to derive the distance to specific globular clusters. Brocato {\it et al.}\ (1997) constructed ZAHB models for M68, and compared these to the observations obtained by Walker (1994). The existence of a blue tail on the M68 HB allowed Brocato {\it et al.}\ (1997) to derive the distance and reddening to M68 simultaneously. They obtained ${\rm (m - M)_V}= 15.25$ mag and ${\rm E(B-V)} = 0.05$ using their most recent models. Walker (1994) obtained a mean apparent magnitude of $V=15.67\pm 0.04$ mag for the RR Lyrae stars in M68. Estimates of the metallicity of M68 vary from $\hbox{$ [{\rm Fe}/{\rm H}]$} = -2.17$ (Minniti {\it et al.}\ 1993) to $\hbox{$ [{\rm Fe}/{\rm H}]$} = -1.99$ (Carretta \& Gratton 1997). Taking the average of these two metallicity estimates, the distance modulus derived by Brocato {\it et al.}\ (1997) implies $\hbox {$\rm M_v(RR)$} = 0.42$ mag at $\hbox{$ [{\rm Fe}/{\rm H}]$} = -2.08$. Salaris {\it et al.}\ (1997) performed a fit to M68 using their ZAHB models and isochrones. 
The distance modulus and reddening were determined by shifting the ZAHB models and isochrones in order to match the observed main-sequence ridge line and the ZAHB level in the RR Lyrae region. Salaris {\it et al.}\ (1997) obtained ${\rm (m -M)_V}= 15.26$ mag and ${\rm E(B-V)} = 0.06$. This implies $\hbox {$\rm M_v(RR)$} = 0.41$ mag, which is very similar to the result obtained by Brocato {\it et al.}\ (1997). Finally, Caloi {\it et al.}\ (1997) have used their ZAHB models to determine the distance to three globular clusters. Their models differ from those of other workers in that they do not use mixing length theory, but adopt the Canuto \& Mazzitelli (1991) convection treatment. Using the same M68 data, Caloi {\it et al.}\ (1997) determined a distance modulus of ${\rm (m - M)_V}=15.37$ mag (assuming ${\rm A_V = 3.2\,E(B-V)}$). This is 0.11 mag larger than the value found by Brocato {\it et al.}\ (1997) and Salaris {\it et al.}\ (1997). However, the work of Caloi {\it et al.}\ (1997) ignored the fact that the $\alpha$ capture elements are enhanced over their solar ratio in metal-poor stars (e.g.\ Nissen {\it et al.}\ 1994). Caloi {\it et al.}\ (1997) used their $Z=0.0001$ models to compare to M68, while Salaris {\it et al.}\ (1997) and Brocato {\it et al.}\ (1997) take into account $\alpha$ element enhancement by using their $Z=0.0002$ models. The results of Caloi {\it et al.}\ (1997) may be corrected to include $\alpha$ element enhancement with the aid of their Table 2. Performing such a correction leads to ${\rm (m - M)_V}=15.28$ mag, implying $\hbox {$\rm M_v(RR)$} = 0.39$ mag, in good agreement with Brocato {\it et al.}\ (1997) and Salaris {\it et al.}\ (1997). Averaging these three determinations for the distance to M68 yields $\hbox {$\rm M_v(RR)$} = 0.41$ mag at $\hbox{$ [{\rm Fe}/{\rm H}]$} = -2.08$ based upon theoretical ZAHB models. Caloi {\it et al.}\ (1997) also derived the distance to M5. 
Once again, correcting their published value for the effects of $\alpha$ element enhancement leads to a distance modulus of ${\rm (m - M)_V} = 14.51$ mag for M5. M5 has a metallicity of $\hbox{$ [{\rm Fe}/{\rm H}]$} = -1.17$ from high dispersion spectroscopic analysis (Sneden {\it et al.} 1992) and a mean RR Lyrae apparent magnitude of $V=15.05\pm 0.06$ mag (Reid 1996). Thus, the theoretical ZAHB models of Caloi {\it et al.}\ (1997) imply $\hbox {$\rm M_v(RR)$} = 0.56$ mag at $\hbox{$ [{\rm Fe}/{\rm H}]$} = -1.17$. This may be combined with the M68 calibration above to yield a calibration of \hbox {$\rm M_v(RR)$}\ based upon theoretical ZAHB models from three different groups: \begin{eqnarray} \hbox {$\rm M_v(RR)$}\ = (0.23\pm 0.04)(\hbox{$ [{\rm Fe}/{\rm H}]$} + 1.6) + (0.49\pm 0.10). \label{hbtheory} \end{eqnarray} The error in the zero-point has been estimated from a consideration of the uncertainties associated with the theoretical HB models, discussed in the beginning of this section. \section{Astrometric Distances \label{sect2}} A comparison of the proper motion and radial velocity dispersions within a cluster allows for a direct determination of GC distances, independent of reddening (Cudworth 1979). Although this method requires that a dynamical model of a cluster be constructed, it is the only method considered here which directly measures the distance to a GC without the use of a `standard' candle. The chief disadvantage of this technique is its relatively low precision. This problem is mitigated by averaging together the astrometric distances to a number of different GCs. Rees (1996) presents new astrometric distances to eight GCs, along with two previous determinations. As pointed out by Rees, there are possibly large systematic errors in the dynamical modeling of M15, NGC 6397 and 47 Tuc. As such, these clusters will be excluded from our analysis. 
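The core of the astrometric method is purely geometric: if the internal velocity distribution is isotropic, the radial-velocity dispersion (in km s$^{-1}$) and the proper-motion dispersion (in angular units per year) must correspond to the same physical speed, which fixes the distance. A minimal sketch follows, ignoring the anisotropy and spatial-weighting corrections that a real dynamical model must supply; the numbers are illustrative, not those of any particular cluster:

```python
K = 4.74047  # km/s corresponding to a proper motion of 1 arcsec/yr at 1 pc

def astrometric_distance_pc(sigma_vr_kms, sigma_mu_mas_yr):
    """Distance at which the proper-motion dispersion matches the
    radial-velocity dispersion, assuming velocity isotropy."""
    sigma_mu_arcsec_yr = sigma_mu_mas_yr * 1.0e-3
    return sigma_vr_kms / (K * sigma_mu_arcsec_yr)

# Illustrative: a 10 km/s dispersion matched to 0.4 mas/yr gives ~5.3 kpc.
d = astrometric_distance_pc(sigma_vr_kms=10.0, sigma_mu_mas_yr=0.4)
```

Because the derived distance is independent of reddening and of any luminosity calibration, even low-precision results of this kind provide a valuable external check.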
Rees (private communication) has performed a new reduction of the M2 proper motions, yielding a total of seven clusters whose distances have been estimated astrometrically. Table \ref{tabastro} tabulates the astrometric distances from Rees (1996) along with the new distance determination to M2. Unless otherwise noted, the numbers are those given by Rees (1996). For the \hbox{$ [{\rm Fe}/{\rm H}]$}\ values, preference has been given to the high dispersion results of Kraft, Sneden and collaborators. Table \ref{tabastro} also includes the HB type of the clusters taken from Harris (1996). This is defined to be $(B-R)/(B+V+R)$, where $B$, $V$ and $R$ are the numbers of blue, variable and red HB stars. Taking the weighted average of the \hbox {$\rm M_v(RR)$}\ values listed in Table \ref{tabastro} results in $\hbox {$\rm M_v(RR)$} = 0.60\pm 0.10$ mag at $<\hbox{$ [{\rm Fe}/{\rm H}]$}> = -1.60$, where the average \hbox{$ [{\rm Fe}/{\rm H}]$}\ value has been calculated using the same weights as in the \hbox {$\rm M_v(RR)$}\ average. This implies \begin{eqnarray} \hbox {$\rm M_v(RR)$}\ = (0.23\pm 0.04)(\hbox{$ [{\rm Fe}/{\rm H}]$} + 1.6) + (0.60\pm 0.10) \label{astrometric} \end{eqnarray} using the \hbox {$\rm M_v(RR)$}-\hbox{$ [{\rm Fe}/{\rm H}]$}\ slope adopted in \S 2.1. \begin{table}[htb] \begin{center} \begin{minipage}{11.06cm} \caption{Astrometric Distances} \label{tabastro} \begin{tabular}{llccll} \hline \multicolumn{1}{c}{Cluster}& \multicolumn{1}{c}{\hbox{$ [{\rm Fe}/{\rm H}]$}}& \multicolumn{1}{c}{HB Type}& \multicolumn{1}{c}{${\rm (m - M)_O}$}& \multicolumn{1}{c}{V(HB)} & \multicolumn{1}{c}{$\rm M_V(HB)$} \\ \hline M5\footnote{\hbox{$ [{\rm Fe}/{\rm H}]$}\ from Sneden {\it et al.}\ (1992).} & $-1.17$ & $+0.31$ & 14.44 & $15.05$ & $0.51 \pm 0.41$\\ M4\footnote{\hbox{$ [{\rm Fe}/{\rm H}]$}\ from Zinn \& West (1984).} & $-1.33$ & $-0.06$ & 11.18 & $13.37$ & $0.67 \pm 0.23$\\ M3\footnote{V(HB) from Buonanno {\it et al.}\ (1994). Reddening from Zinn (1985). 
\hbox{$ [{\rm Fe}/{\rm H}]$}\ \hspace*{1cm} from Kraft {\it et al.}\ (1992).} & $-1.47$ & $+0.08$ & 14.91 & $15.63$ & $0.69 \pm 0.59$\\ M13\footnote{V(HB) from Buonanno {\it et al.}\ (1989). \hbox{$ [{\rm Fe}/{\rm H}]$}\ from Kraft {\it et al.}\ (1997)} & $-1.58$ & $+0.97$ & 14.06 & $14.83$ & $0.71 \pm 0.23$\\ M2\footnote{V(HB) from Harris (1996). \hbox{$ [{\rm Fe}/{\rm H}]$}\ from Zinn \& West (1984)} & $-1.62$ & $+0.96$ & 15.26 & $16.05$ & $0.63 \pm 0.25$\\ M22$^b$ & $-1.75$ & $+0.91$ & 12.17 & $14.10$ & $0.58 \pm 0.19$\\ M92\footnote{\hbox{$ [{\rm Fe}/{\rm H}]$}\ from Sneden {\it et al.}\ (1991).} & $-2.25$ & $+0.91$ & 14.76 & $15.13$ & $0.31 \pm 0.32$\\ \hline \end{tabular} \end{minipage} \end{center} \end{table} The four most metal-poor clusters all have very blue HB types. For these clusters, theoretical models suggest that the RR Lyrae stars will be more luminous than those in clusters with redder HB types. An average of the blue HB clusters finds $\hbox {$\rm M_v(RR)$} = 0.59\pm 0.12$ at $<\hbox{$ [{\rm Fe}/{\rm H}]$}> = -1.71$, while the three clusters with redder HB types yield $\hbox {$\rm M_v(RR)$} = 0.64\pm 0.19$ at $<\hbox{$ [{\rm Fe}/{\rm H}]$}> = -1.31$. Translating these two estimates to $\hbox{$ [{\rm Fe}/{\rm H}]$} = -1.60$ (using $\alpha = 0.23\pm 0.04$) yields $\hbox {$\rm M_v(RR)$} = 0.62\pm 0.12$ for the blue HB clusters and $\hbox {$\rm M_v(RR)$} = 0.57\pm 0.19$ for the other clusters. There does not appear to be a significant difference between the two \hbox {$\rm M_v(RR)$}\ calibrations, and so the averaging used to derive equation (\ref{astrometric}) appears to be valid. \section{Main Sequence Fitting \label{sect3}} Hipparcos provided high quality parallaxes for a number of metal-poor field stars. This has prompted a number of authors to determine new distances to globular clusters using main sequence fitting. Main sequence fitting is discussed in detail in the chapter by Gratton {\it et al.}\ in this book. 
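Before turning to the main sequence fitting results, note that the inverse-variance averages quoted in \S\ref{sect2} follow directly from the $\rm M_V(HB)$ column of Table \ref{tabastro}; as a quick numerical check:

```python
import numpy as np

# M_V(HB), errors, and [Fe/H] from the astrometric-distance table
mv  = np.array([0.51, 0.67, 0.69, 0.71, 0.63, 0.58, 0.31])
err = np.array([0.41, 0.23, 0.59, 0.23, 0.25, 0.19, 0.32])
feh = np.array([-1.17, -1.33, -1.47, -1.58, -1.62, -1.75, -2.25])

w = 1.0 / err**2                           # inverse-variance weights
mv_mean  = np.sum(w * mv) / np.sum(w)      # -> ~0.60 mag
mv_sigma = 1.0 / np.sqrt(np.sum(w))        # -> ~0.10 mag
feh_mean = np.sum(w * feh) / np.sum(w)     # -> ~ -1.6
```
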
The results of the published investigations are summarized in Table \ref{mainfit}. The typical distance modulus error quoted by the various authors is $\pm 0.10$ mag. The authors took quite different approaches in dealing with issues such as sample selection, reddening, biases, etc. In general, the distance moduli derived by various authors for a given globular cluster are in good agreement. For example, the various distance modulus estimates to M13 agree to within $\pm 0.03$ mag. The Grundahl {\it et al.}\ (1998) distance estimate to M13 is particularly noteworthy as they utilized Str\"{o}mgren photometry, while the other authors used B,V photometry. Of course, all of these works utilize the same basic assumption, that the nearby metal-poor stars have identical properties to their metal-poor counterparts in globular clusters. \begin{table}[htb] \begin{center} \begin{minipage}{11.55cm} \caption{Main Sequence Fitting Distances} \label{mainfit} \begin{tabular}{llllc} \hline \multicolumn{1}{c}{Cluster}& \multicolumn{1}{c}{${\rm (m-M)_O}$}& \multicolumn{1}{c}{${\rm E(B-V)}$}& \multicolumn{1}{c}{${\rm (m - M)_V}$}& \multicolumn{1}{c}{Reference}\\ \hline 47 Tuc NGC 104 &13.56 & 0.04 & 13.69 & 1\\[-2pt] &13.44 & 0.055 & 13.62 & 2\\[5pt] NGC 288 &15.00 & 0.01 & 15.03 & 1\\[-2pt] &14.83 & 0.033 & 14.94 & 2\\[5pt] NGC 362 &14.86 & 0.056 & 15.04 & 2\\[5pt] M68 NGC 4590 &15.18 & 0.040 & 15.31 & 2\\[5pt] M5 NGC 5904 &14.52 & 0.02 & 14.58 & 1\\[-2pt] &14.41 & 0.03 & 14.51 & 3\\[-2pt] &14.49 & 0.035 & 14.60 & 2\\[5pt] M13 NGC 6205 &14.38 & 0.021 & 14.45 & 4\\[-2pt] &14.45 & 0.02 & 14.51 & 1\\[-2pt] &14.41 & 0.02 & 14.47 & 3\\[-2pt] &14.39 & 0.020 & 14.45 & 2\\[5pt] M92 NGC 6341 &14.72 & 0.025 & 14.80 & 2\\[-2pt] &14.68 & 0.02 & 14.74 & 5\footnote{This is the distance modulus derived by Pont {\it et al.}\ when they do not include the known binaries in their fit.}\\[5pt] NGC 6397 &12.24 & 0.19 & 12.85 & 1\\ NGC 6752 &13.16 & 0.04 & 13.29 & 1\\[-2pt] &13.20 & 0.04 & 13.33 & 
3\\[-2pt] &13.21 & 0.035 & 13.32 & 2\\[5pt] M71 NGC 6838 &13.19 & 0.28 & 14.09 & 1\\[5pt] M30 NGC 7099 &14.82 & 0.039 & 14.94 & 2\\ \hline \end{tabular} {\small REFERENCES. --- (1) Reid 1998; (2) Gratton {\it et al.}\ (1997); (3) Chaboyer {\it et al.}\ (1998); (4) Grundahl {\it et al.}\ (1998); (5) Pont {\it et al.}\ (1998).} \end{minipage} \end{center} \end{table} Some of the globular clusters listed in Table \ref{mainfit} have very good RR Lyrae mean magnitudes, and so the main sequence fitting distances may be compared amongst each other, and to other methods using \hbox {$\rm M_v(RR)$}\ (equation \ref{eqmvrr}). For example, M92 has a mean RR Lyrae magnitude of $V=15.10\pm 0.03$ mag (Carney {\it et al.}\ 1992). Averaging the distance moduli obtained by Gratton {\it et al.}\ (1997) and Pont {\it et al.}\ (1998) yields $\hbox {$\rm M_v(RR)$} = 0.33\pm 0.10$ mag (at $\hbox{$ [{\rm Fe}/{\rm H}]$} = -2.25$ from Sneden {\it et al.}\ 1991). The Gratton {\it et al.}\ (1997) distance modulus to M68 yields $\hbox {$\rm M_v(RR)$} = 0.36\pm 0.10$ mag (at $\hbox{$ [{\rm Fe}/{\rm H}]$} = -2.08$ using the data for M68 given in \S 2.2.3). These two estimates for \hbox {$\rm M_v(RR)$}\ can be directly compared at an intermediate metallicity ($\hbox{$ [{\rm Fe}/{\rm H}]$} = -2.16$) using equation \ref{eqmvrr}, which yields $\hbox {$\rm M_v(RR)$} = 0.35\pm 0.10$ mag for M92 and $\hbox {$\rm M_v(RR)$} = 0.34\pm 0.10$ mag for M68. Note that M92 has a blue HB (HB-type of +0.91), while M68 has a much redder HB (HB-type of 0.17). This comparison indicates that, for these two clusters, the HB type does not have a significant effect on \hbox {$\rm M_v(RR)$}. Good RR Lyrae photometry also exists for M5 (see references in \S 2.2.3). Averaging together the three main sequence fitting results for M5 presented in Table \ref{mainfit} results in ${\rm (m-M)_V} = 14.56\pm 0.10$ mag and $\hbox {$\rm M_v(RR)$} = 0.49\pm 0.10$ mag at $\hbox{$ [{\rm Fe}/{\rm H}]$} = -1.17$. 
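The M92/M68 comparison above is just equation (\ref{eqmvrr}) applied twice; spelled out numerically (Python):

```python
alpha = 0.23  # adopted slope of the M_v(RR)-[Fe/H] relation
# Main-sequence-fitting results, slid along Eq. (1) to [Fe/H] = -2.16:
m92 = 0.33 + alpha * (-2.16 - (-2.25))   # -> ~0.35 mag
m68 = 0.36 + alpha * (-2.16 - (-2.08))   # -> ~0.34 mag
```
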
Taking the mean determination of \hbox {$\rm M_v(RR)$}\ for M92, M68 and M5 from main sequence fitting (and using $\alpha = 0.23\pm 0.04$ in equation \ref{eqmvrr}) yields \begin{eqnarray} \hbox {$\rm M_v(RR)$}\ = (0.23\pm 0.04)(\hbox{$ [{\rm Fe}/{\rm H}]$} + 1.6) + (0.45\pm 0.10). \label{eqmainfit} \end{eqnarray} \section{White Dwarf Fitting \label{sect4}} Renzini {\it et al.}\ (1996) have utilized deep HST WFPC2 observations of NGC 6752 to obtain accurate photometry of the cluster white dwarfs. In addition, they obtained similar photometry of nearby white dwarfs which appear to have similar masses to the cluster white dwarfs. Using the parallaxes of the nearby white dwarfs to determine their absolute magnitude, they determined the distance to NGC 6752 using a procedure similar to main sequence fitting. The key assumption in this method is that the masses of the local white dwarfs are similar to the masses of the white dwarfs in NGC 6752. The derived distance modulus is ${\rm (m - M)_V} = 13.18\pm 0.10$ mag assuming ${\rm E(B-V)} = 0.04$. This reddening estimate is from Zinn (1985), and is identical to those found by Burstein \& Heiles (1982) and Carney (1979). The average distance modulus for NGC 6752 from main sequence fitting is ${\rm (m - M)_V} =13.31\pm 0.10$ mag (Table \ref{mainfit}), leading to a difference of 0.13 mag between the main sequence and white dwarf fitting distance estimates to NGC 6752. This cluster has a very blue HB, and so determination of its HB magnitude at the position of the RR Lyrae instability strip is very difficult. In order to compare white dwarf fitting to the other distance determination techniques, equation (\ref{eqmainfit}) can be combined with the difference between the white dwarf and main sequence fitting distances to NGC 6752 to yield \begin{eqnarray} \hbox {$\rm M_v(RR)$}\ = (0.23\pm 0.04)(\hbox{$ [{\rm Fe}/{\rm H}]$} + 1.6) + (0.58\pm 0.10). 
\label{eqwhitefit} \end{eqnarray} \section{Other Distance Indicators \label{sect5}} There are a variety of other methods which have been used to obtain distances to globular clusters. Jimenez \& Padoan (1998) have compared theoretical luminosity functions to observed luminosity functions of M5 and M55. For M5, they determined ${\rm (m-M)_V} = 14.55\pm 0.10$ mag. This can be compared to the average distance modulus derived from main sequence fitting of ${\rm (m-M)_V} = 14.56\pm 0.10$ mag (Table \ref{mainfit}). Kov\'{a}cs \& Walker (1998) have presented a detailed analysis of double-mode RR Lyrae stars in M15, M68 and IC 4499. This analysis is based upon linear pulsation models and is free of systematic effects due to ambiguities in the various zero-points (bolometric corrections, magnitudes, etc.). The derived absolute magnitudes are 0.2--0.3 mag brighter than corresponding Baade-Wesselink values, which are tied to the statistical parallax zero-point. Simon \& Clement (1993) used hydrodynamic pulsation models to show that physical properties (such as absolute magnitude) of RRc stars could be derived from their pulsation period and Fourier phase parameters. Kaluzny {\it et al.}\ (1998) present \hbox {$\rm M_v(RR)$}\ for seven globular clusters based upon this method. For example, for M68, they find $\hbox {$\rm M_v(RR)$} = 0.38$ mag, which compares to $\hbox {$\rm M_v(RR)$} = 0.41$ mag from theoretical HB models (\S 2.2.3) and $\hbox {$\rm M_v(RR)$} = 0.36$ mag from main sequence fitting (\S \ref{sect3}). For M5, Kaluzny {\it et al.}\ (1998) tabulate $\hbox {$\rm M_v(RR)$} = 0.61$ mag, which agrees well with the theoretical HB models ($\hbox {$\rm M_v(RR)$} = 0.56$ mag) and is somewhat fainter than that derived from main sequence fitting ($\hbox {$\rm M_v(RR)$} = 0.49$ mag). The discovery of a detached eclipsing binary system within a globular cluster would allow for a nearly direct distance determination to the globular cluster (Paczy\'{n}ski 1997). 
If the binary is well detached and uncomplicated, accurate photometry and radial velocities can be combined with a surface brightness-color relation to obtain the distance to the globular cluster. A number of authors have searched for such binaries in globular clusters (e.g.\ Yan \& Mateo 1994, McVean {\it et al.}\ 1997, Kaluzny {\it et al.}\ 1998). McVean {\it et al.}\ (1997) have identified one eclipsing binary system in the globular cluster M71, which appears to be a detached or semi-detached system, with the detached model being more likely. Detached eclipsing binary systems have great potential as distance indicators to globular clusters, a potential which will (hopefully) be realized in the next few years. \section{Summary \label{sect7}} The release of the Hipparcos data set has led a number of authors to study the distance scale to globular clusters. The Hipparcos data set of high quality parallaxes for a number of nearby metal-poor stars has renewed interest in the use of main sequence fitting to determine distances to globular clusters. In addition, the Hipparcos data on proper motions of field RR Lyrae stars has been used to determine a new calibration of the absolute magnitude of the RR Lyrae stars (via the statistical parallax method) which can be used to determine the distances to globular clusters. Over the last few years, a variety of other methods have been used to derive distances to globular clusters. Given that many globular clusters contain RR Lyrae stars, these distance determinations can be compared via their calibration of the absolute magnitude of the RR Lyrae stars. This calibration is presented in equations (\ref{statpi})--(\ref{eqwhitefit}) and summarized in Table \ref{result}. 
\begin{table}[htb] \begin{center} \begin{minipage}{6.56cm} \caption{\hbox {$\rm M_v(RR)$}\ at $\hbox{$ [{\rm Fe}/{\rm H}]$} = -1.6$} \label{result} \begin{tabular}{ll} \hline \multicolumn{1}{c}{Method}& \multicolumn{1}{c}{\hbox {$\rm M_v(RR)$}}\\ \hline Statistical Parallax & $0.77\pm 0.13$\\ Astrometric Distances& $0.60\pm 0.10$\\ White Dwarf Fitting & $0.58\pm 0.10$\\ Theoretical HB models& $0.49\pm 0.10$\\ LMC & $0.46\pm 0.11$\\ Main Sequence Fitting& $0.45\pm 0.10$\\ \hline \end{tabular} \end{minipage} \end{center} \end{table} The various calibrations fall into three groups. Main sequence fitting using Hipparcos parallaxes, theoretical HB models and the RR Lyrae in the LMC all favor a bright calibration, implying a `long' globular cluster distance scale. White dwarf fitting and the astrometric distances yield a somewhat fainter RR Lyrae calibration, while the statistical parallax solution yields faint RR Lyrae stars, implying a `short' distance scale to globular clusters. The various secondary distance indicators discussed in \S \ref{sect5} all favor the long distance scale. It is interesting to note that Hipparcos provides support for both the long (from main sequence fitting) and short distance scales (from statistical parallax). A straight average of all six calibrations presented in Table \ref{result} yields $\hbox {$\rm M_v(RR)$} = 0.56$ mag with a standard deviation of 0.12 mag. If the statistical parallax solution is removed from the average, then $\hbox {$\rm M_v(RR)$} = 0.52$ mag with a standard deviation of $0.07$ mag. At the present time, there is no reason to doubt the validity of the statistical parallax solution. A number of authors, using a variety of data sources, have all reached similar conclusions (see Layden, this volume). 
A possible explanation for the different result obtained using statistical parallax compared to the other methods is that it is the only method which calibrates the field RR Lyrae population (as opposed to the RR Lyrae stars in a globular cluster). Perhaps there is a systematic difference between the field and globular cluster RR Lyrae populations. However, a study of the pulsation properties of RR Lyrae variables in the field and in globular clusters found essentially indistinguishable period-temperature distributions for the two populations, suggesting that there is no significant difference in luminosity between them (Catelan 1998). For the above reasons, it appears prudent at this time to include the statistical parallax solution in the average. This leads to a best estimate of the \hbox {$\rm M_v(RR)$}\ calibration which can be used to set the globular cluster distance scale of \begin{eqnarray} \hbox {$\rm M_v(RR)$}\ = (0.23\pm 0.04)(\hbox{$ [{\rm Fe}/{\rm H}]$} + 1.6) + (0.56\pm 0.12), \label{eqfinal} \end{eqnarray} where the standard deviation among the six independent distance techniques has been used as the error in the zero-point. This is 0.1 mag fainter than that obtained from main sequence fitting, but is 0.2 mag brighter than the statistical parallax solution. Equation (\ref{eqfinal}) may be compared to my best estimate for the calibration of the RR Lyrae distance scale prior to the release of the Hipparcos data, which implied $\hbox {$\rm M_v(RR)$} = 0.66\pm 0.10$ mag at $\hbox{$ [{\rm Fe}/{\rm H}]$} = -1.6$ (Chaboyer {\it et al.}\ 1996). The impact of this distance scale on the mean age of the oldest globular clusters can be evaluated using the formulae presented by Chaboyer {\it et al.}\ (1998) in the caption to their Figure 3. From this, equation (\ref{eqfinal}) implies a mean age of the oldest globular clusters of $13\pm 2$ Gyr. The dominant uncertainty in this age estimate is the uncertainty in the distance scale to the globular clusters. 
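The straight averages quoted above and the zero-point of equation (\ref{eqfinal}) can be reproduced directly from the entries of Table \ref{result}. The following short script is an illustrative check (the helper name \texttt{m\_v\_rr} is ours, not part of the original analysis):

```python
from statistics import mean, stdev

# M_v(RR) calibrations at [Fe/H] = -1.6, transcribed from Table (result).
calibrations = {
    "Statistical Parallax": 0.77,
    "Astrometric Distances": 0.60,
    "White Dwarf Fitting": 0.58,
    "Theoretical HB models": 0.49,
    "LMC": 0.46,
    "Main Sequence Fitting": 0.45,
}

# Straight average of all six calibrations and its sample standard deviation.
all_vals = list(calibrations.values())
print(round(mean(all_vals), 2), round(stdev(all_vals), 2))    # 0.56 0.12

# Same average with the statistical parallax solution removed.
no_statpi = [v for k, v in calibrations.items() if k != "Statistical Parallax"]
print(round(mean(no_statpi), 2), round(stdev(no_statpi), 2))  # 0.52 0.07

def m_v_rr(feh):
    """Best-estimate calibration of equation (eqfinal)."""
    return 0.23 * (feh + 1.6) + 0.56

print(round(m_v_rr(-1.6), 2))  # 0.56
```

Both quoted averages ($0.56\pm 0.12$ and $0.52\pm 0.07$ mag) follow from the tabulated values when the scatter is taken as the sample standard deviation.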
In order to reduce the uncertainty in the absolute ages of the globular clusters, the differences between the `long' distance scale (based upon main sequence fitting, theoretical HB models and the RR Lyrae in the LMC) and the `short' distance scale (based upon the statistical parallax method) must be reconciled. \vspace*{0.5cm} \parindent 0pt {\bf References} \begin{description} \item[]Brocato, E., Castellani, V.\ \& Piersimoni, A.\ 1997, ApJ, 491, 789 \item[]Buonanno, R., Corsi, C. E., Cacciari, C., Ferraro, F.R.\ \& Fusi Pecci, F.\ 1994, A\&A, 290, 69 \item[]Buonanno, R., Corsi, C. E.\ \& Fusi Pecci, F.\ 1989, A\&A, 216, 80 \item[]Burstein, D.\ \& Heiles, C.\ 1982, AJ, 87, 1165 \item[]Caloi, V., D'Antona, F.\ \& Mazzitelli, I.\ 1997, A\&A, 320, 823 \item[]Canuto, V.M.\ \& Mazzitelli, I.\ 1991, ApJ, 370, 295 \item[]Caputo, F.\ 1997, MNRAS, 284, 994 \item[]Carney, B.W.\ 1979, AJ, 84, 515 \item[]Carney, B.W., Storm, J., Trammell, S.R.\ \& Jones, R.V.\ 1992, PASP, 104, 44 \item[]Carretta, E.\ \& Gratton, R.G.\ 1997, A\&AS, 121, 95 \item[]Cassisi, S., Castellani, V., Degl'Innocenti, S.\ \& Weiss, A.\ 1998, A\&AS, 129, 267 \item[]Catelan, M.\ 1998, ApJ, 495, L81 \item[]Chaboyer, B.\ 1996, Nuclear Physics B Proceedings Supplement, 51B, 10 \item[]Chaboyer, B., Demarque, P., Kernan, P.J.\ \& Krauss, L.M.\ 1996, Science, 271, 957 \item[]Chaboyer, B., Demarque, P., Kernan, P.J.\ \& Krauss, L.M.\ 1998, ApJ, 494, 96 \item[]Cudworth, K.M.\ 1979, AJ, 84, 1212 \item[]Fernley, J., Barnes, T.G., Skillen, I., Hawley, S.L., Hanley, C.J., Evans, D.W., Solano, E.\ \& Garrido, R.\ 1998a, A\&A, 330, 515 \item[]Fernley, J., Carney, B.W., Skillen, I., Cacciari, C.\ \& Janes, K.\ 1998b, MNRAS, 293, L61 \item[]Fusi Pecci, F., Buonanno, R., Cacciari, C., Corsi, C. E., Djorgovski, S. G., Federici, L., Ferraro, F. R., Parmeggiani, G., \& Rich, R. 
M.\ 1996, AJ, 112, 1461 \item[]Gould, A.\ \& Popowski, P.\ 1998, ApJ, in press \item[]Gratton, R.G., Fusi Pecci, F., Carretta, E., Clementini, G., Corsi, C.E.\ \& Lattanzi, M.\ 1997, ApJ, 491, 749 \item[]Grundahl, F., VandenBerg, D.A.\ \& Andersen, M.I.\ 1998, ApJ, 500, L179 \item[]Harris, W.E.\ 1996, AJ, 112, 1487 \item[]Jimenez, R.\ \& Padoan, P.\ 1998, ApJ, 498, 704 \item[]Kaluzny, J., Hilditch, R.W., Clement, C.\ \& Rucinski, S.M.\ 1998, MNRAS, 296, 347 \item[]Kov\'{a}cs, G.\ \& Jurcsik, J.\ 1996, ApJ, 466, L17 \item[]Kov\'{a}cs, G.\ \& Walker, A.R.\ 1998, ApJ, submitted \item[]Kraft, R.P., Sneden, C., Langer, G. E.\ \& Prosser, C.F.\ 1992, AJ, 104, 645 \item[]Kraft, R.P., Sneden, C., Smith, G.H., Shetrone, M.D., Langer, G.E.\ \& Pilachowski, C.A.\ 1997, AJ, 113, 279 \item[]Lee, Y.-W.\ 1991, ApJ, 373, L43 \item[]Lee, Y.-W., Demarque, P.\ \& Zinn, R.J.\ 1990, ApJ, 350, 155 \item[]Lee, Y.-W., Demarque, P.\ \& Zinn, R.J.\ 1994, ApJ, 423, 248 \item[]McVean, J.R., Milone, E.F., Mateo, M.\ \& Yan, L.\ 1997, ApJ, 481, 782 \item[]Minniti, D., Geisler, D., Peterson, R.C.\ \& Claria, J.J.\ 1993, ApJ, 413, 548 \item[]Nissen, P., Gustafsson, B., Edvardsson, B.\ \& Gilmore, G.\ 1994, A\&A, 285, 440 \item[]Paczy\'{n}ski, B.\ 1997, in The Extragalactic Distance Scale, eds.\ M.\ Livio, M.\ Donahue \& N.\ Panagia (Cambridge Univ. Press, Cambridge), 273 \item[]Pont, F., Mayor, M., Turon, C.\ \& VandenBerg, D.A.\ 1998, A\&A, 329, 87 \item[]Rees, R.F.\ 1996, in Formation of the Galactic Halo .... 
Inside and Out, eds.\ H.\ Morrison \& A.\ Sarajedini (San Francisco: ASP), 289 \item[]Reid, I.N.\ 1996, MNRAS, 278, 367 \item[]Reid, I.N.\ 1997, AJ, 114, 161 \item[]Reid, I.N.\ 1998, AJ, 115, 204 \item[]Renzini, A.\ 1991, in Observational Tests of Cosmological Inflation, eds.\ T.\ Shanks {\it et al.}\ (Dordrecht: Kluwer), 131 \item[]Renzini, A., Bragaglia, A., Ferraro, F.R., Gilmozzi, R., Ortolani, S., Holberg, J.B., Liebert, J., Wesemael, F.\ \& Bohlin, R.C.\ 1996, ApJ, 465, L23 \item[]Sandage, A.R.\ 1981a, ApJ, 244, L23 \item[]Sandage, A.R.\ 1981b, ApJ, 248, 161 \item[]Simon, N.R.\ \& Clement, C.M.\ 1993, ApJ, 410, 526 \item[]Sneden, C., Kraft, R.P., Prosser, C.F.\ \& Langer, G. E.\ 1991, AJ, 102, 2001 \item[]Sneden, C., Kraft, R.P., Prosser, C.F.\ \& Langer, G. E.\ 1992, AJ, 104, 2121 \item[]Walker, A.R.\ 1992, ApJ, 390, L81 \item[]Walker, A.R.\ 1994, AJ, 108, 555 \item[]Zinn, R.\ 1985, ApJ, 293, 424 \item[]Zinn, R.\ \& West, M.\ 1984, ApJS, 55, 45 \end{description} \end{document}
\section{Introduction} Quantum information employs individual and entangled quantum systems in order to carry out a number of information processing tasks offering an advantage over their classical counterparts \cite{Nielsen_2000}. One major sub-division is called quantum communication, which aims to faithfully transmit photonic quantum states across communication links (optical fibers or free-space channels) between remote parties (typically called Alice and Bob) \cite{Gisin_2007}. One important quantum communication protocol is called quantum key distribution (QKD), whose goal is to remotely generate a shared secret key between Alice and Bob \cite{Lo_2014, Xu_2019, Pirandola_2019}. Its effectiveness has been demonstrated over long distances \cite{Boaron_2018}, which is desirable for practical applications. In the past, most quantum communication experiments have focused on point-to-point applications, and only recently has interest increased in network and multi-user applications, with significant effort focusing on the underlying communication infrastructure supporting future networks of quantum computers, the so-called Quantum Internet \cite{Wehner_2018}. As in a standard communication network, routing will be an essential function in order to dynamically direct the single photons. A direct way to implement single-photon routers with potentially fast response times is through the use of interferometers \cite{Ma_2011,Rambo_2013,svarc,Hall_2011}. In \cite{Ma_2011} a Mach-Zehnder interferometer (MZI) with a phase modulator in one of its arms was used to route the single photon on demand to one of its outputs. Single-photon switches with two inputs and two outputs have also been presented based on an MZI design \cite{Rambo_2013}. In \cite{svarc} a coupler based on an MZI was presented as well, where photons can be routed at any splitting ratio as a tunable switch. 
In these three papers, routing configurations based on an MZI are presented, and all of them require an extra active phase-stabilization system due to the use of the MZI. Aiming for a more stable design, another configuration employed a Sagnac fiber-optical interferometer \cite{Hall_2011}. In this case the switching was performed by modulation of the Kerr non-linearity provided by an auxiliary external optical pulse. More complex single-photon routers based on one-atom switches \cite{Shomroni_2014} and solid-state quantum memories \cite{Sun_2018} have also been presented. Single-photon switches have applications in other scenarios as well. For instance, one important QKD protocol is based on the use of entangled photon pairs in order to establish a shared secret key between remote parties (typically called Alice and Bob) \cite{Ekert_1991}. One crucial point is to avoid the presence of practical imperfections or limitations, usually referred to as loopholes \cite{Larsson_2014}, that can compromise the security. A popular implementation is based on energy-time entanglement, where the unpredictability of the emission time of photon pairs leads to an entangled state \cite{Franson_1989}. Energy-time entanglement is well suited for long-distance propagation over optical fibers due to its robustness against decoherence \cite{Marcikic_2004}. One typical loophole present in energy-time implementations is due to the required temporal post-selection procedure \cite{Aerts_1999}. This loophole has recently been closed both in the continuous-pump \cite{Lima_2010, Cuevas_2013, Carvacho_2015} and pulsed-pump \cite{Vedovato_2018} regimes. In the specific case of the pulsed pump (referred to as time-bin entanglement), high-speed optical switches are needed, and in \cite{Vedovato_2018} interferometric switches based on an MZI configuration were employed. Optical switches are also needed in some QKD schemes where fast decoding of path-encoded qubits is required \cite{Gonzalez_2015}. 
In this paper we propose and experimentally demonstrate a high-speed optical switch based on a fiber-optical Sagnac interferometer. As in \cite{Hall_2011}, our proposal provides intrinsic phase stability due to the Sagnac design and thus does not require additional control systems, an issue that is present in implementations based on an MZI. As an improvement over previous designs, our new configuration uses fast telecom electro-optic phase modulators placed inside the interferometer to provide the required relative phase difference. Since we only rely on off-the-shelf components, we expect our work to be directly applicable to quantum networks and optical networks operating within the telecom infrastructure. As a second important improvement over previous designs, we provide a Sagnac interferometer with polarization-independent switching capabilities, even though telecom modulators are only able to modulate the phase for one defined polarization. This is possible by resorting to an innovative design for placing the modulators inside the Sagnac loop. The fact that the phases can be properly applied independently of the polarization of the incident photon opens new possibilities for adopting our switch in quantum information protocols relying on polarization entanglement, which is arguably the main resource for quantum non-locality. For instance, our Sagnac loop can be used for controlling the degree of entanglement of polarization entangled states \cite{Glima09}, for modern self-testing protocols of quantum measurements \cite{Glima16}, and for quantum random number generation based on quantum non-locality \cite{Glima18}. \section{Experimental description} In our work we exploit the stability properties of a Sagnac interferometer. 
A Sagnac interferometer consists of a beamsplitter whose two outputs are connected together, such that the light beams propagate through the same path in the two opposing directions, yielding passive and intrinsic stability to the interferometer \cite{Culshaw_2006}. The probability that a single photon exits through either output port depends on the relative phase shift imposed between the two counter-propagating paths. If the interferometer is subjected only to slow perturbations (compared to the total propagation time between input and output), then the probability that the photon exits through the input port is always unity, since both paths undergo the same phase shift \cite{Culshaw_2006}. For this reason, the Sagnac is stable against slow phase perturbations. The experimental setup is depicted in Fig. \ref{Fig1}. We take advantage of a Sagnac configuration while employing two phase modulators inside the Sagnac loop in order to yield polarization independence with respect to the input state. We generate photon pairs through the process of spontaneous parametric down-conversion (SPDC) in a non-linear periodically poled potassium titanyl phosphate (PPKTP) waveguide crystal \cite{Fiorentino_2007}. The crystal (ADVR inc.) is phase-matched to create degenerate orthogonally-polarized (type-II) 1546 nm photon pairs when pumped with laser light with a wavelength of 773 nm. The waveguide is designed to only support a single propagation mode for the pump light and the down-converted photons. Light from an external cavity tunable laser is coupled to the waveguide after focusing with an 11 mm focal length aspheric lens, followed by an identical one at the output collimating the wavefront of the down-converted photons. A dichroic mirror is used to remove excess pump light before the photons are deterministically split at a polarizing beam splitter (PBS). At each output port of the PBS, the single photons are coupled to single-mode fibers through multi-axis translation stages. 
\begin{figure}[h!] \centering\includegraphics[width=12cm]{Fig1.pdf} \caption{Experimental setup for the polarization-independent single-photon switch. The area inside the green border shows the Mach-Zehnder structure housing both phase modulators that yields polarization independence. Please see text for details. \label{Fig1} } \end{figure} The reflected output from the PBS following the crystal is connected to a free-running mode InGaAs single-photon detector module (D$_\mathrm{t}$), with 15\% overall detection efficiency (IdQuantique id220). The output from this detector is used to trigger the two other single-photon detectors D$_1$ and D$_2$ (IdQuantique id210, working in gated mode, also 15\% detection efficiency) placed at the outputs of the Sagnac interferometer. The Sagnac is built from a 50:50 fiber beamsplitter (BS), whose outputs create the two paths $|A \rangle$ and $|B \rangle$ indicated in Fig. \ref{Fig1}. Inside the Sagnac a Mach-Zehnder-like structure (MZs) with polarizing beam splitters (PBS) placed at its input and output is constructed to house the orthogonally placed phase modulators (PM). The paths $|A \rangle$ and $|B \rangle$ reach the MZs from opposite directions, as seen in Fig. \ref{Fig1}. Both PBSs are free-space units mounted in a compact module with integrated input and output coupling lenses. A half-wave plate (HWP) placed in the same PBS module (shown in the inset in Fig. \ref{Fig1}) is used to ensure that the correct polarization state (vertical) enters one of the fiber-pigtailed lithium niobate electro-optic phase modulators (PM) with 10 GHz bandwidth. The input/output fibers from both PMs are polarization maintaining. The MZs is therefore employed to decompose each of the $|A \rangle$ and $|B \rangle$ paths into orthogonal polarization components, with one of these rotated by 90$^{\circ}$ by the HWP. 
This arrangement is employed because standard commercial PMs only act upon one polarization component, and it ensures the polarization insensitivity of our setup. Following the MZs outputs, both the $|A \rangle$ and $|B \rangle$ components propagate back to the fiber BS. There is a 100 m optical fiber delay between input $|B \rangle$ and the MZs, which is used to ensure that the modulation imposed at the PMs only acts upon one of the propagating directions in the Sagnac. After recombination at the BS, the single photon can propagate to either D$_1$, following the optical circulator, or D$_2$. Both modulators are driven by a single signal generator producing 32 ns wide pulses. The generator is triggered by D$_\textrm{t}$, with electronic delays used to appropriately synchronize the timing. The number of detections per second is processed by a field programmable gate array (FPGA) coincidence detection unit (CDU). Fig. \ref{Fig2} shows the rate of coincident detections at D$_1$ as a function of the delay imposed on the electrical pulse driving the PMs. \begin{figure}[h!] \centering\includegraphics[width=12cm]{Fig2.pdf} \caption{Coincident single-photon detections between D$_t$ and D$_1$ as a function of the relative delay imposed on the driving pulses applied to the phase modulators. The integration time is 10 s, and the error bars show the standard deviation. \label{Fig2}} \end{figure} We assume an arbitrary input polarization state at the Sagnac's input of the form $|\psi\rangle =\alpha|H\rangle + \beta|V\rangle$, where $|H\rangle$ ($|V\rangle$) is the quantum state representing horizontal (vertical) polarization, and $\alpha$ and $\beta$ are complex coefficients such that $|\alpha|^2 + |\beta|^2 = 1$. 
After the input BS, the state of the photon is $|\psi\rangle=1/\sqrt{2}\left((|A\rangle + i|B\rangle)\otimes(\alpha|H\rangle + \beta|V\rangle)\right)$, where the factor $i$ accounts for the relative phase shift between the transmitted and reflected ports of the BS. Manual polarization controllers inside the Sagnac loop are used to ensure that both orthogonal polarization components of the input state always propagate through the same path in the MZs, i.e. the $|H\rangle$ component propagating through $|A\rangle$ takes the path containing PM$_1$ in the MZs, while the same $|H\rangle$ component injected in the opposite $|B\rangle$ path is routed to the same PM (since the manual polarization controller switches it to $|V\rangle$ before the PBS input). This is necessary to benefit from the inherent phase stability of the Sagnac. At the end of the interferometer, considering $|D_1\rangle$ and $|D_2\rangle$ as the two possible paths (outputs) that $|A\rangle$ and $|B\rangle$ can take after being combined in the beam splitter, the final state of the system is: \begin{align}\nonumber |\psi\rangle=\frac{1}{2}\left[\{i(e^{i\phi}+1)|D_{1}\rangle+(e^{i\phi}-1)|D_{2}\rangle\}\otimes\{\alpha|H\rangle+e^{iKl}\beta|V\rangle\}\right] \end{align} where $\phi$ is the phase shift provided by each PM (since we employ a single generator to drive both PMs, the phase shift applied to each modulator is the same and equal to $\phi$), $K$ is the propagation constant and $l$ is the length difference between the two arms of the MZs, which is guaranteed to be longer than the short coherence length of the single photons ($\sim $ 1 mm). 
We can then write the probability that a single photon is detected at either output of the Sagnac loop as: \begin{equation} |\langle \textrm{D}_{1}|\psi \rangle |^2 = \cos^2\left(\frac{\phi}{2}\right) \label{eq3} \end{equation} \begin{equation} |\langle \textrm{D}_{2}|\psi \rangle |^2 = \sin^2\left(\frac{\phi}{2}\right). \label{eq4} \end{equation} Therefore the outputs are complementary as a function of $\phi$, as expected for an interferometer. Such a scheme can then be used as an optical switch for photonic quantum states, since the single-photon output probability depends on the applied phase shift $\phi$. \section{Results} We experimentally demonstrate the scheme by measuring the coincidence rate between the triggering detector D$_\textrm{t}$ and the two output detectors D$_{1}$ and D$_{2}$, as a function of the relative phase $\phi$, which is proportional to the driving voltage applied to the PMs. This is done initially for an input horizontal polarization state, set through the HWP placed before the circulator's input (Fig. \ref{Fig1}). For each applied voltage we take 20 measurements of 10 s each, plotting the results in Fig. \ref{Fig3}, with the error bars representing one standard deviation. The x-axis in Fig. \ref{Fig3} corresponds to the voltage applied to the PMs, which is proportional to the relative phase $\phi$ of the Sagnac. From equations (\ref{eq3}) and (\ref{eq4}), we can see that for $\phi = 0^\circ$ the probability of detecting a photon is maximum at D$_1$ and minimum at D$_2$, with the opposite for $\phi = 180^\circ$. From Fig. \ref{Fig3}, approximately 4 V are needed for a $\pi$ phase shift. 
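As an illustrative cross-check of equations (\ref{eq3}) and (\ref{eq4}), the following short script (our own sketch; the function name is ours) evaluates the output amplitudes derived above for an arbitrary input polarization, confirming that the detection probabilities depend only on $\phi$ and not on $\alpha$, $\beta$:

```python
import cmath
import math

def output_probs(phi, alpha, beta):
    """Detection probabilities at D1 and D2 for the input state
    alpha|H> + beta|V>, using the amplitudes derived in the text:
    the D1 amplitude is i(e^{i phi} + 1)/2 and the D2 amplitude
    is (e^{i phi} - 1)/2, common to both polarization components."""
    a1 = 1j * (cmath.exp(1j * phi) + 1) / 2
    a2 = (cmath.exp(1j * phi) - 1) / 2
    norm = abs(alpha) ** 2 + abs(beta) ** 2  # = 1 for a normalized state
    return abs(a1) ** 2 * norm, abs(a2) ** 2 * norm

# phi = 0 routes the photon to D1, phi = pi to D2, independently of
# the input polarization (alpha, beta): check against cos^2, sin^2.
for alpha, beta in [(1, 0), (0, 1), (1 / math.sqrt(2), 1j / math.sqrt(2))]:
    p1, p2 = output_probs(math.pi / 3, alpha, beta)
    assert abs(p1 - math.cos(math.pi / 6) ** 2) < 1e-12
    assert abs(p2 - math.sin(math.pi / 6) ** 2) < 1e-12
```

The identical probabilities obtained for the three test polarizations reflect the polarization independence demonstrated experimentally below.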
We demonstrate the polarization-independent character of the setup by repeating the measurement for three other input states: vertical, diagonal $|\textrm{D}\rangle =\tfrac{1}{\sqrt{2}}\left(|H\rangle + |V\rangle \right)$ and anti-diagonal $|\textrm{A}\rangle =\tfrac{1}{\sqrt{2}}\left(|H\rangle - |V\rangle \right)$, obtaining very similar results (Fig. \ref{Fig3}). The total loss in the experiment from the input of the Sagnac to the single-photon detectors is approximately 5 dB, limited mainly by the insertion loss of the phase modulators. \begin{figure}[ht] \centering\includegraphics[width=15cm]{Fig3.pdf} \caption{Experimental results for four input polarization states: $|H\rangle$ (horizontal), $|V\rangle$ (vertical), $|D\rangle$ (diagonal) and $|A\rangle$ (antidiagonal), as a function of the voltage applied to the phase modulators. The data points represent coincident detections at D$_1$ and D$_2$, both triggered by D$_\textrm{t}$. Error bars represent one standard deviation arising from 20 repeated measurements (10 s integration time per measurement) for each voltage. \label{Fig3}} \end{figure} A typical figure of merit of an interferometer is the visibility V = $(\textrm{C}_{1} - \textrm{C}_{2}) / (\textrm{C}_{1} + \textrm{C}_{2})$, where C$_1$ and C$_2$ are the number of counts at detectors D$_1$ and D$_2$, respectively, when triggered by D$_\textrm{t}$, for the same integration time. We obtain an average visibility over all four input polarization states of 97.63 $\pm$ 0.21\% without subtracting accidental counts, leading to an average extinction ratio of 19.21 dB. We obtain 25.1 detections/s on average for all input states at one output, when the applied voltage is adjusted such that constructive interference is obtained at that output. One consequence of the internal MZs is that it is subject to relative phase fluctuations between its arms due to thermal variations. 
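The relation between the quoted visibility and extinction ratio can be verified with a few lines (a sketch assuming the extinction ratio is obtained as $10\log_{10}(\textrm{C}_1/\textrm{C}_2)$ at the interference extrema, so that $\textrm{C}_1/\textrm{C}_2 = (1+V)/(1-V)$):

```python
import math

def visibility(c1, c2):
    """Interferometer visibility from counts C1 and C2."""
    return (c1 - c2) / (c1 + c2)

def extinction_ratio_db(v):
    """Extinction ratio C1/C2 (in dB) implied by a visibility v,
    since C1/C2 = (1 + v)/(1 - v)."""
    return 10 * math.log10((1 + v) / (1 - v))

print(round(extinction_ratio_db(0.9763), 2))  # 19.21
```

The measured average visibility of 97.63\% indeed corresponds to the quoted 19.21 dB extinction ratio.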
While this does not affect the relative phase set between the $|A\rangle$ and $|B\rangle$ arms, and thus the Sagnac switching capability, it has the effect that a polarization state that propagates through both arms (in other words, any state except $|H\rangle$ or $|V\rangle$) will undergo a polarization rotation at the end of the Sagnac loop. Nonetheless, since thermal fluctuations represent slowly varying polarization rotations, the output polarization states at either port of the switch can easily be compensated with HWPs \cite{Xavier_2008}. This effect can be further mitigated by thermally insulating the MZs. \section{Conclusions} We have experimentally demonstrated a polarization-independent single-photon optical switch that is capable of deterministically routing single photons between two different spatial output modes, while taking advantage of the intrinsic stability of the Sagnac interferometer. The switching speed was only limited by the electronics and detection hardware, and the design is fully capable of operating in the GHz range. Our work presents a new design of fiber-optical Sagnac interferometer that has many potential applications in quantum information science. Furthermore, our design can also be used in classical optical networks, with the benefit that it works independently of the polarization state of an arriving light pulse \cite{Agrawal}. Finally, our work presents an alternative to previously used switches, with the extra advantage of being fully compatible with optical network hardware, as it uses off-the-shelf components in the 1550 nm telecom window, which support ultra-fast switching speeds. \section*{Acknowledgments} We acknowledge Felipe Toledo for experimental assistance and Daniel Mart\'inez, Esteban S. G\'omez and Niklas Johansson for valuable discussions. G. X. acknowledges Ceniit Link\"{o}ping University, the Swedish Research Council (VR 2017-04470) and QuantERA grant SECRET (VR grant no. 2019-00392) for financial support. A. A. 
acknowledges financial support from the Knut and Alice Wallenberg Foundation through the Wallenberg Center for Quantum Technology (WACQT). G.L. was supported by Fondo Nacional de Desarrollo Cient\'{i}fico y Tecnol\'{o}gico (FONDECYT) (1200859) and Millennium Institute for Research in Optics. P.G-G. acknowledges support from ANID FONDECYT/POSTDOCTORADO/N$^{\circ}$ Proyecto 3200820. J.C. acknowledges support from ANID/REC/PAI77190088.
\section{Introduction} \label{intro} Quantum mechanics is a probabilistic theory, as most of its predictions are irreducibly statistical. It is therefore understandable that the first attempts to clarify its content made use of the well-tested concept of \emph{statistical ensembles}, describing identical abstract copies of the system under consideration, each of which would represent a different state in which the system might be found. This statistical \emph{ensemble interpretation} of quantum physics was originally held by Albert Einstein, and subsequently supported by a number of authors, like for instance Leslie E. Ballentine~\cite{Ball}. We can summarize the core of this view by directly quoting Einstein~\cite{Eins}: ``The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems.'' In other terms, according to the statistical ensemble interpretation, a state vector $|\psi\rangle$ does not describe the state of an individual system, but a more abstract entity: an ensemble of identical copies of the same system, each of which is in a different possible state. 
This would mean for instance that when we write $|\psi\rangle$ in the form of a superposition: \begin{equation} \label{superposition} |\psi\rangle = \sum_{i=1}^{n} \alpha_i |a_i\rangle, \quad \sum_{i=1}^n |\alpha_i|^2=1, \end{equation} \noindent where the $|a_i\rangle$, $i=1,\dots ,n$, are the eigenstates of a given observable $A$ (a self-adjoint operator), one should not understand it as if it were a real, actual state, describing a condition in which the individual system would nonsensically be, at once, in all these different mutually exclusive states, but simply as a convenient mathematical notation expressing the fact that, following a long series of measurements of the observable $A$ on identically prepared systems (the preparation being described by the vector $|\psi\rangle$), the systems will be found to be in the eigenstate $|a_i\rangle$, $|\alpha_i|^2\cdot 100\%$ of the time. Of course, from a purely instrumentalistic point of view, there are no problems in adopting the minimalistic view that quantum theory is not about individual systems, but about statistical ensembles of similarly prepared systems. Indeed, it is a matter of fact that when experimenters measure a physical quantity in the laboratory, on a quantum system, what they do is precisely to repeat the same experiment a large number of times, on identically prepared systems, in order to calculate probabilities as limits of relative frequencies of outcomes. In other terms, there certainly exists at least one uncontroversial ensemble to which the state vector $|\psi\rangle$ refers: the ensemble of identically prepared quantum entities which are subject to a series of identical measurements, as well as the ensemble of outcomes associated with them. Problems, however, begin when one quits a purely instrumentalistic-empiricistic view and begins to wonder what could be the reality of a microscopic quantum entity, like for instance an electron. 
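The ensemble reading of the superposition (\ref{superposition}) can be illustrated with a small simulation (our own sketch, with arbitrary illustrative amplitudes): repeating the same measurement on many identically prepared systems yields each eigenstate $|a_i\rangle$ with relative frequency close to $|\alpha_i|^2$.

```python
import random
from collections import Counter

random.seed(1)  # fixed seed for a reproducible illustration

# Illustrative amplitudes alpha_i with sum |alpha_i|^2 = 1.
amplitudes = [complex(0.6, 0.0), complex(0.0, 0.8)]
probs = [abs(a) ** 2 for a in amplitudes]  # [0.36, 0.64]

# Each run of the experiment yields one eigenstate index i; over many
# runs the relative frequencies approach the Born weights |alpha_i|^2.
runs = 100_000
outcomes = Counter(random.choices(range(len(probs)), weights=probs, k=runs))
freqs = [outcomes[i] / runs for i in range(len(probs))]
```

Nothing in this simulation says whether the probabilities describe individual systems or only the ensemble of runs; that is precisely the interpretational question at issue.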
If we consider the paradigmatic example of classical statistical mechanics, we can observe that the statistical ensemble this theory deals with is a \emph{purely theoretical construct}, resulting from the existence of another kind of ensemble, which instead is very concrete: the ensemble of identical microscopic entities forming the macroscopic system under consideration (for example the identical molecules forming a classical ideal gas). Each of these microscopic subsystems possesses a well-defined state, and each possible combination of the subsystems' individual states defines a specific state of the macroscopic system. But since we have no access, in terms of knowledge, to the state of each individual microscopic subsystem, we cannot have access to the actual state of the macroscopic system, which therefore can only be described in probabilistic terms, by means of an abstract statistical ensemble. If quantum mechanics were just a theory dealing with systems formed by sub-entities possessing well-defined states, we could assume that the statistical content of $|\psi\rangle$ could be traced back, in one way or another, to our lack of knowledge about the different individual states in which the system's components are, in a sort of generalization of a state of statistical mechanics. But such a view is difficult (if not impossible) to maintain if we consider our present-day ability to perform experiments also with a single microscopic entity at a time -- like for example single neutrons in Rauch's celebrated interferometry experiments~\cite{Rauch} -- and when we do so, we still need to use a purely probabilistic language to conveniently describe the outcomes of the measurements. 
So, if we take seriously our ability to perform measurements on individual entities of a non-composite kind, it seems we are forced to conclude that the only possible origin of the statistical ensemble associated with $|\psi\rangle$ is in the series of experiments we perform on identically prepared systems. But if experiments are performed on identically prepared systems, i.e., on identical systems which are all exactly in the same condition, how is it possible that each single experiment can exhibit a potentially different outcome? Consider for instance a marble moving rectilinearly on a table, characterized by a spatial position $x_0$ and velocity $v_0$ of its center of mass, at time $t=0$, and assume that we perform the experiment consisting in observing its position at a subsequent time $t_1$, and that the outcome of the experiment is $x_1$. If we consider an ensemble of identically prepared systems, i.e., of identical marbles all prepared in exactly the same state $(x_0,v_0)$, at time $t=0$, and perform on each of them the same position measurement at time $t_1$, the outcome will always be $x_1$. In other words, even though we have an ensemble of experiments, we only have a single outcome! Unlike in the quantum case, the state of a marble, when we consider it in relation to a position measurement, cannot be described as a superposition of different possible outcomes. And this means that an ensemble of experiments performed on identically prepared systems is a necessary but certainly not a sufficient condition to obtain a statistical description of the system under consideration. The above is of course well known, and hidden variables theories have been attempted precisely in the hope of making up for this inconvenience. 
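The contrast with the marble can be made concrete. In the following sketch (the values $x_0=0$, $v_0=2$ and $t_1=1$ are arbitrary illustrative numbers), an entire ensemble of identically prepared marbles yields one and the same outcome $x_1$:

```python
def position(x0, v0, t):
    # deterministic rectilinear motion: x(t) = x0 + v0 * t
    return x0 + v0 * t

# an ensemble of 1000 marbles, all prepared in the same state (x0, v0)
outcomes = {position(0.0, 2.0, 1.0) for _ in range(1000)}
# the ensemble of experiments produces a single outcome:
# the set of observed positions contains exactly one element
```

The set of outcomes collapses to a single value, illustrating that repeating an experiment on identically prepared classical systems does not, by itself, generate any statistics.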
But associating hidden variables to, say, an electron's state, is about hypothesizing that the electron would be a sort of classical composite entity, with the hidden variables expressing our ignorance regarding the \emph{actual} states in which its different subcomponents would be. In other terms, the typical ``hidden variables hypothesis'' is that when we prepare the system in a state $|\psi\rangle$, we have no practical control on the actual values taken by these hidden variables, which are responsible for the final outcome of the experiment. This idea that we need additional variables to describe the state of a quantum entity, in addition to the specification of $|\psi\rangle$, is of course very natural, and if proven correct it would provide a complete solution to the measurement problem, much in the spirit of a classical statistical theory. But such an idea has encountered the immovable obstacle of the celebrated No-Go theorems~\cite{Gleason, Kochen, Bell}. Also, a hidden variable theory, with the hidden variables referring to our ignorance about the actual condition of the system, should be described by a probabilistic theory obeying the classical Kolmogorovian axioms, whereas these axioms are disobeyed by quantum mechanics~\cite{Accardi, Pitowski}. Clearly, all these problems revolve around the fundamental question of giving a sensible meaning to a notion of probability associated with individual physical systems the state of which is assumed to be completely known. In other terms, the fundamental question we must ask is: can we understand probabilities not as quantifiers of our ignorance about the state of the system, but as quantifiers of our ignorance about something else? What, then, would this ``something else'' be? And, would it characterize in some objective way some of the features of the system under consideration? 
It is the purpose of the present article to provide a simple and clarifying answer to this fundamental question, on the basis of Aerts' \emph{hidden-measurement approach},~\cite{Aerts4, Aerts4b, Aerts7, Aerts10} by analyzing an extremely simple and well-known physical system: a six-faced die. More precisely, in Sections~\ref{Rolling a die is a quantum process}, \ref{A die with a Hilbert space representation} and \ref{Producing interferences with a single die}, we show how to perform simple rolling experiments on a single die and describe the outcomes by means of the Born rule and the projection postulate, and how a single die can actually interfere with itself and violate the classical law of total probabilities. Furthermore, in Sec.~\ref{Violating Bell's inequality with two entangled dice}, we show how to connect (entangle) two identical dice and perform coincidence experiments that are capable of maximally violating Bell's inequality. Finally, in Sec.~\ref{Concluding remarks}, we offer some concluding remarks. \section{Rolling a die is a quantum process} \label{Rolling a die is a quantum process} Undoubtedly, one of the most typical examples of a probabilistic experiment is the rolling of a die. If we ask for the probability ${\cal P}(i)$ of obtaining the number $i\in \{1,2,3,4,5,6\}$, we can answer by simply applying Laplace's classical definition of probabilities: the ratio of favorable cases to all possible cases. Considering that a standard die has six different faces, if it is a fair die this ratio is simply: ${\cal P}(i) =1/6$, $\forall i\in \{1,2,3,4,5,6\}$. A fundamental point in Laplace's definition of probability is the assumption that none of these possible cases has to be favored, in any way, by the selection procedure. 
And this means that, in the case of the die, the rolling process has to be genuinely \emph{random}, in the sense that it has to be such that it cannot distinguish between the different faces of the die (Jaynes' principle of indifference). An important question to be asked is the following: What is the fundamental difference between the probabilities delivered by the experiment consisting in rolling a die and those delivered by a typical quantum measurement? Surprisingly, as we shall see, there are no fundamental differences, and the example of the die, if carefully analyzed, is actually able to provide all the important answers regarding a plausible origin of probabilities in quantum mechanics. Let us start by observing that a die, when considered from the viewpoint of a rolling experiment, is a single, non-composite entity, about which we know in principle everything we need to know. In other terms, we are not here in a situation such that we could attach hidden variables to the die's state, so that if these variables were known they would allow us to dispense with the probabilities. Indeed, before rolling the die, if we really want we can perfectly well determine its exact state, for instance by taking a look at its upper face with respect to our hand's palm, and the exact location it occupies on the latter, at a given moment, but this knowledge, however complete it may be, is not going to help us in predicting the final outcome of the rolling experiment, when the die is thrown on the table. This is so because the main source of randomness in the experiment is not in our lack of knowledge about the initial state of the die, which we can assume to be fully known, but in our lack of knowledge about the specific interaction taking place between the die and our hand which throws it, as well as, consequently, between the die and the table on which it will roll before exhibiting its final upper face. 
In other terms, if we really want to talk about hidden variables here, these will have to be attributed to the rolling experiment per se, and not to the initial die's state. Of course, one can object that there is no fundamental difference between a rolling experiment with a die and our previous description of the marble moving rectilinearly on the table. Indeed, also the die, like the marble, follows a deterministic trajectory, which is just more complicated. So, the only important difference lies here in the fact that the die's actual trajectory depends on the unpredictable interaction with our hand, and this is the reason why we cannot easily predict the final outcome of the rolling experiment, whereas we can easily predict the positions of the marble on the table at whatever instant. In other terms, each rolling experiment with the die is a different experiment, in the sense that it expresses a different interaction between the die and the hand, and consequently between the die and the surface of the table, and this is why the result is in practice unpredictable. Each throw is per se a deterministic process, but since we lack knowledge about how the die is each time thrown, we can only describe the outcomes in probabilistic terms. A way to cope with this problem of indeterminacy is of course to construct an extremely precise machine able to throw the die always exactly in the same way, and to calibrate the machine (by varying, for instance, a certain parameter $\lambda$ which would control, say, the angular velocity and vertical speed with which the die is thrown) in such a way that if the die is placed on it with face $j$ up, it will also end its run on the table with the same face $j$ up. 
In other terms, by means of a very precise instrument, in replacement of our imprecise hand, we can produce a perfectly \emph{controlled} rolling experiment, and this time the probability of obtaining the number $i$ will be given by ${\cal P}_\lambda (i) = \delta_{ij}$, which is just another way to say that the outcome is now perfectly predetermined and we no longer need to describe the experiment in probabilistic terms. Considering the above reasoning, it would appear that our statement that the rolling of a die can explain the nature of quantum probabilities, as there would be no essential differences between a rolling experiment and a quantum measurement, has been contradicted. But this is just because we are not considering the possibility of experimenting with a die from the right perspective. The right perspective we have here in mind is the one usually adopted in a typical dice game at the casino, for instance in the game known as craps. Indeed, as is well known, in this kind of game one cannot roll the dice by using a machine, but one has instead to always use one's hand. The reason for this is precisely that the casino wants to prevent the player from taking full control of the rolling experiment, as full control would of course mean full predictability of the final outcome, and therefore a sure win. On the other hand, the casino doesn't forbid the player from knowing the initial states of the dice before the throw. This is so because a throw made by hand is not controllable by the player, and therefore knowing their initial states is not helpful in determining their final upper faces. 
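This situation can be mimicked with a toy model. In the sketch below, the update rule `(initial_face + lam) % 6`, standing in for the die's actual dynamics, is a purely illustrative assumption: each throw is deterministic once the hidden parameter $\lambda$ is given, yet uncontrolled (``hand'') values of $\lambda$ produce uniform statistics, while a calibrated (``machine'') value makes the outcome certain:

```python
import random
from collections import Counter

def roll(initial_face, lam):
    """Each individual throw is deterministic: the hidden parameter lam
    (how the die is actually launched) selects the final upper face.
    The arithmetic rule is a stand-in for the real rolling dynamics."""
    return (initial_face + lam) % 6 + 1  # faces labelled 1..6

n = 60_000
rng = random.Random(7)

# 'hand' throws: lam is beyond our control, sampled anew at every throw
hand = Counter(roll(0, rng.randrange(6)) for _ in range(n))
hand_freqs = {face: count / n for face, count in hand.items()}

# 'machine' throws: lam calibrated once -> the outcome is predetermined
machine_outcomes = {roll(0, 3) for _ in range(1000)}
```

Here the randomness lives entirely in the interaction (the value of $\lambda$), not in the die's initial state, which is fully known in both scenarios; this is exactly the shift of perspective the hidden-measurement approach exploits.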
So, if we consider the roll of a die as an experiment of the measurement kind, with the die being the physical entity which is measured, we can observe that from the viewpoint of the casino certain measurements are allowed, whereas certain others are strictly forbidden: we cannot perform a measurement with a high-precision machine, which is able to perfectly control the way the die is thrown (by fixing the parameter $\lambda$), but we can perform it by using a low-precision ``hand machine,'' which doesn't allow us to precisely control the way the die is actually thrown. Of course, the fact that certain experiments with the die are forbidden from the casino's point of view doesn't mean they cannot be carried out: the point here is that a casino is only interested in having purely probabilistic outcomes, and that's why it \emph{imposes} on its players the obligation to carry out their rolling experiments by hand, and not by means of a high-precision instrument. Our point is that nature, like the casino, imposes a similar restriction when we deal with microscopic entities: we can only carry out experiments of the ``hand'' kind, and not of the ``high precision machine'' kind. Having said that, and before analyzing the rolling experiment with a die any further, we need to make clear in which sense it can be considered a measurement. The question is: What is actually measured? The question is relevant because, as we said, we are here assuming that we perfectly know the state of the die before rolling it on the table: we know its mass, volume, its specific geometry, the total number of its faces, the material it is made of, its exact position and orientation on the palm of the hand, etc. So, what are we actually measuring here? Here again, we must adopt the viewpoint of the casino, for instance in a typical craps dice game. 
What matters, in the logic of the game, are the upper faces obtained following a very specific rolling procedure which literally \emph{creates} a couple of upper faces (one for each die), the value of which will then determine the possible win or loss of the player. An important and subtle point here is about properly distinguishing a die's \emph{faces} from a die's \emph{upper face}. The six different faces of a die are of course always actually and stably existing, for as long as the die is not destroyed. In certain circumstances however, one of its six faces can temporarily become a so-called ``upper face.'' This happens each time a die is located on the flat surface of a game table. In that circumstance, the face having the highest gravitational energy corresponds to what is conventionally called the upper face of the die. Of course, a die can find itself located on the surface of a game table for a number of different reasons, one of which is surely that of having taken part in a rolling experiment. Now, before such an experiment is executed, each of the six \emph{actual faces} of the die is only a \emph{potential upper face}, as it is clear that only one of the six faces will have the property of being ultimately placed upward, perpendicularly to the gravitational field. In other terms, in general, a rolling experiment, if properly understood, involves a pure \emph{creation aspect}. What is created are not the six faces of the die, which were existing also before the experiment and will continue to exist after it, but a specific upper face, which wasn't necessarily existing prior to the experiment (depending on where the die was located). 
So, if we consider the rolling experiment with a die from the perspective of a process of creation of an upper face or, better, from the perspective of the measurement of the specific value written on the die's upper face, following a die's roll, we can observe that, similarly to the case of a quantum measurement on a microscopic system, the very process of measurement (observation) creates the property which is measured, i.e., it is the very measurement that actualizes the property which, prior to the measurement, existed only in potential terms. What is important to note is that the presence of a process of \emph{actualization of potential} is a typical signature of quantum (or quantum-like) systems, exhibiting non-classical properties which can produce interference effects. And this means that, surprisingly, much of the ``weirdness'' of quantum physics is in fact already contained in the analysis of the most traditional examples that have been used to illustrate the classical probability calculus since the birth of probability theory, if we only interpret these examples as physical experiments testing specific properties, or measuring specific observables. To make this point fully explicit, in the next section we shall define specific rolling experiments on a very particular type of die, and show that the entire experimental situation can be easily described by means of a (real) Hilbert space structure, giving rise to typical quantum mechanical interference effects and therefore to a violation of the classical law of total probabilities. \section{A die with a Hilbert space representation} \label{A die with a Hilbert space representation} The die we are going to consider is a traditional six-faced die. 
However, instead of numbering the faces, as usual, from $1$ to $6$, we shall only consider two numbers: $+1$ and $-1$, which for simplicity will be represented on the die's faces by the symbols ``$+$'' and ``$-$.'' As there is a total of six faces, the two symbols ``$+$'' and ``$-$'' will be repeated three times each on the die, in a way which is illustrated in Fig.~\ref{Quantum-die-faces}. In addition to that, we shall consider that the surface of each face of the die is made of a particular material, which is able to slide with very low friction on the game table, along a specific direction, indicated on each die's face by two parallel left-right arrows (see Fig.~\ref{Quantum-die-faces}), but presents a very high coefficient of friction as regards the possibility of sliding in a direction perpendicular to that specified by the arrows. Note that the die is designed in such a way that the arrows of two opposite faces are always oriented in the same direction. \begin{figure}[!ht] \centering \includegraphics[scale =.5]{quantumdiefaces.pdf} \caption{The six faces of the die, three of which show a ``$+$'' symbol, and the other three a ``$-$'' symbol. The surface of each face has a specific orientation, indicated by the two parallel left-right arrows, corresponding to the direction along which the die's face can easily glide on the flat surface of the game table. \label{Quantum-die-faces}} \end{figure} Apart from this peculiarity regarding the material with which the die's faces are made, the die is fair, in the sense that it is an object of perfectly homogeneous density. (To fix ideas, one can consider for instance that the surface of the game table is made of ice and that the two parallel arrows are two small metal blades, like those used on ice skates.) 
The game table is a rectangular bi-dimensional surface, placed perpendicularly to the gravitational field, thus defining two orthogonal directions, corresponding to the two sides of the rectangle (which for simplicity will be considered of infinite length hereinafter), indicated as the $x$-direction and $z$-direction. (The reason for using the letter ``$z$'' instead of the letter ``$y$,'' as usual, will become clear later). At the beginning of the game the die is placed on the surface of the game table with its upper face oriented either along the $z$-direction, or along the $x$-direction. Considering that only two different symbols are marked on the die's faces, this means that we only have to distinguish $4$ different states in which the die can be prepared: $|+\rangle_x$, $|-\rangle_x$, $|+\rangle_z$ and $|-\rangle_z$, as illustrated in Fig.~\ref{die-states}. \begin{figure}[!ht] \centering \includegraphics[scale =.6]{die-states.pdf} \caption{The four different states of the die, corresponding to the two different possible orientations of the upper face's symbol with respect to the $x$ and $z$ directions, defined by the two sides of the game table. \label{die-states}} \end{figure} Let us observe that we have denoted these four different states by means of the typical quantum mechanical ket-notation, as an anticipation of the fact that we will be able to describe our measurements on the die-system by means of the Hilbert space formalism. This said, it is now time to define the observables that we are going to consider in relation to our die, and make precise how these observables are measured (i.e., observed) in concrete terms. As usual in a game with dice, we are interested in observing the value exhibited by the die's upper face, as a result of a die's roll, i.e., as a result of a specific roll measurement. 
More precisely, we shall denote by $F_z$ the observable associated with the reading of the number marked on the die's upper face ($+1$, or $-1$), following a roll along the $z$-direction. The roll -- which in the following we shall simply call a $z$-\emph{roll} -- is performed by a human operator (the player, or the experimenter), by means of a special instrument, similar to a ``flipper ball shooter,'' thanks to which we ideally assume it is possible to produce a perfect roll of the die along the $z$-direction, as illustrated in Fig.~\ref{flipper-ball-shooter}. \begin{figure}[!ht] \centering \includegraphics[scale =.4]{z-roll-shooter.pdf} \caption{The special shooter used by the experimenter allows the human operator to perfectly control the direction along which the die will be rolled, but not the impulse which will be transferred to it. \label{flipper-ball-shooter}} \end{figure} More precisely, to measure $F_z$ the experimenter has to perfectly orient the shooter along the $z$-direction, placing it behind one of the two faces whose normal vector is parallel to the $z$-direction (which one of the two faces is actually chosen by the experimenter is irrelevant in terms of the outcome, because of the symmetry of the die), pull the knob in some arbitrary way (compressing in this way the spring in the mechanism), then release it, thus communicating a random (a priori unpredictable) impulse to the die, which will after that either roll or not roll, according to its initial state. In fact, if the state of the die before the measurement of $F_z$ is either $|+\rangle_z$ or $|-\rangle_z$, then, because of the low friction of the face in contact with the game table, the shooter will not be able to cause the die to roll, but only to make it glide on the table, along the $z$-direction, for a certain time, until all translational kinetic energy has been converted into heat (we recall that the upper and lower faces of the die have the arrows oriented always in the same direction). 
In other terms, it is possible in this case to predict in advance, with certainty, the outcome of the measurement (without disturbing the system), which means that $|+\rangle_z$ and $|-\rangle_z$ are \emph{eigenstates} of $F_z$, with \emph{eigenvalues} $+1$ and $-1$, respectively. Using a standard Hilbertian representation, we can therefore write: \begin{equation} \label{eigenvalue equation-z} F_z |\pm\rangle_z = \pm |\pm\rangle_z, \quad \phantom{.}_{z}\langle\pm | \mp\rangle_z = 0, \quad \phantom{.}_{z}\langle\pm | \pm\rangle_z =1. \end{equation} On the other hand, if the initial state of the die, before the measurement of $F_z$, is either $|+\rangle_x$ or $|-\rangle_x$, then, since the die's two (opposite) faces associated with these two states (see Fig.~\ref{die-states}) present an extremely high coefficient of friction with the game table, with respect to the $z$-direction, the die will no longer slide under the action of the shooting machine, but will roll along the $z$-direction (i.e., rotate around the $x$-direction). Of course, all initial rotational and translational energy will progressively be converted into thermal energy, so that in the end the die will stop and show a specific upper face. The dynamics of the rolling die can of course be very complex, but typically most of the energy communicated to it by the shooter will be initially transformed into rotational kinetic energy, then, because of the work performed by the friction forces, the rotational energy will be gradually transformed into translational kinetic energy and heat. For the purpose of our analysis, what is important to observe is that the die will roll for as long as the non-elastic effects associated with the so-called \emph{rolling friction} remain lower than those of the \emph{sliding friction}, since in this case the die requires less energy to be moved by rolling than by sliding. 
But since two of the four faces involved in the rolling movement along the $z$-direction present an extremely low sliding friction, it is highly probable that the die will end its run sliding on one of them, before it ultimately comes to a complete stop. In other terms, apart from exceptional circumstances, which we can simply ignore not to complicate our discussion unnecessarily, we can ideally assume that, following a $z$-roll, if the die's initial state is $|+\rangle_x$, or $|-\rangle_x$, then the final state will be either $|+\rangle_z$, or $|-\rangle_z$, i.e., an eigenstate of the $F_z$ observable. Now, since the human operator has absolutely no control regarding the way the shooter will actually produce the roll of the die (apart from the rolling direction), it is clear that there is an equal probability of $1/2$ of obtaining either the final state $|+\rangle_z$, associated with the eigenvalue $+1$, or the final state $|-\rangle_z$, associated with the eigenvalue $-1$. This means that, with respect to the measurement of $F_z$, states $|+\rangle_x$ and $|-\rangle_x$ have to be considered as a \emph{superposition} of the two eigenstates $|+\rangle_z$ and $|-\rangle_z$ of $F_z$. A natural choice for their representation in terms of orthonormal states is therefore the following: \begin{equation} \label{superposition-x} |\pm\rangle_x = \frac{1}{\sqrt{2}}\left(|+\rangle_z \pm |-\rangle_z\right). \end{equation} So far, we have only considered the observable $F_z$, corresponding to an observation relative to the $z$-direction. In the same way, we can of course also consider the observable $F_x$, consisting in the observation of the number marked on the die's upper face following a roll along the $x$-direction ($x$-\emph{roll}), which is defined -- \emph{mutatis mutandis} -- in the same way as the $z$-roll, orienting in this case the shooter along the $x$-direction. 
Of course, the same discussion as above can be repeated for $F_x$, and we can write: \begin{equation} \label{eigenvalue equation-x} F_x |\pm\rangle_x = \pm |\pm\rangle_x, \quad \phantom{.}_{x}\langle\pm | \mp\rangle_x = 0, \quad \phantom{.}_{x}\langle\pm | \pm\rangle_x =1. \end{equation} Again, with respect to the measurement of $F_x$, states $|+\rangle_z$ and $|-\rangle_z$ have to be considered as a \emph{superposition} of the two eigenstates $|+\rangle_x$ and $|-\rangle_x$ of $F_x$, so that we can write: \begin{equation} \label{superposition-z} |\pm\rangle_z = \frac{1}{\sqrt{2}}\left(|+\rangle_x \pm |-\rangle_x\right). \end{equation} Introducing the projection operators $P_{z,\pm}= |\pm\rangle_z \phantom{.}_{z}\langle\pm |$ onto the eigenspaces associated with states $|\pm\rangle_z$, and the projection operators $P_{x,\pm}= |\pm\rangle_x \phantom{.}_{x}\langle\pm |$, onto the eigenspaces associated with states $|\pm\rangle_x$, we can write: \begin{equation} \label{observables} F_z = P_{z,+}- P_{z,-}, \quad F_x = P_{x,+}- P_{x,-}. \end{equation} Also, we can give a more explicit representation of these observables by setting: \begin{equation} \label{column-z} |+\rangle_z = \left(\begin{array}{c} 1\\0\\ \end{array}\right), \quad |-\rangle_z = \left(\begin{array}{c} 0\\1\\ \end{array}\right). 
\end{equation} Then, according to (\ref{eigenvalue equation-z}), (\ref{superposition-x}) and (\ref{eigenvalue equation-x}), we have: \begin{equation} \label{column-x} |+\rangle_x = \frac{1}{\sqrt{2}}\left(\begin{array}{c} 1\\1\\ \end{array}\right), \quad |-\rangle_x = \frac{1}{\sqrt{2}}\left(\begin{array}{c} 1\\-1\\ \end{array}\right), \end{equation} \begin{equation} \label{Fz-and-Fx} F_z = \left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right), \quad F_x = \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right), \end{equation} \begin{equation} \label{Pz+-and-Pz-} P_{z,+} = \left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right), \quad P_{z,-} = \left(\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array}\right), \end{equation} \begin{equation} \label{Px+-and-Px-} P_{x,+} = \frac{1}{2}\left(\begin{array}{cc} 1 & 1 \\ 1 & 1 \end{array}\right), \quad P_{x,-} = \frac{1}{2}\left(\begin{array}{rr} 1 & -1 \\ -1 & 1 \end{array}\right). \end{equation} Considering the way the two observables $F_z$ and $F_x$ have been operationally defined, in terms of the $z$-roll and $x$-roll experiments, respectively, it is easy to check that the above matrix representation, in association with the \emph{Born rule}, allows one to consistently describe all the probabilities involved in the measurements of these two observables, which are the following: \begin{equation} \label{probability1} {\cal P} (|\pm\rangle_z\stackrel{z-\texttt{roll}}{\longrightarrow}|\pm\rangle_z)= \phantom{.}_{z}\langle\pm | P_{z,\pm} |\pm\rangle_z=|\!\phantom{.}_{z}\langle\pm |\pm\rangle_z|^2 = 1, \nonumber \end{equation} \begin{equation} \label{probability2} {\cal P} (|\pm\rangle_z\stackrel{z-\texttt{roll}}{\longrightarrow}|\mp\rangle_z)= \phantom{.}_{z}\langle\pm | P_{z,\mp} |\pm\rangle_z=|\!\phantom{.}_{z}\langle\mp |\pm\rangle_z|^2 = 0, \nonumber \end{equation} \begin{equation} \label{probability3} {\cal P} (|\sigma\rangle_x\stackrel{z-\texttt{roll}}{\longrightarrow}|\rho\rangle_z)= \phantom{.}_{x}\langle\sigma | 
P_{z,\rho} |\sigma\rangle_x =|\!\phantom{.}_{z}\langle\rho |\sigma\rangle_x|^2 =\frac{1}{2}, \quad\rho,\sigma\in\{+,-\}. \end{equation} Similarly, for the $x$-roll measurement, we have the probabilities: \begin{equation} \label{probability1bis} {\cal P} (|\pm\rangle_x\stackrel{x-\texttt{roll}}{\longrightarrow}|\pm\rangle_x)= \phantom{.}_{x}\langle\pm | P_{x,\pm} |\pm\rangle_x=|\!\phantom{.}_{x}\langle\pm |\pm\rangle_x|^2 = 1, \nonumber \end{equation} \begin{equation} \label{probability2bis} {\cal P} (|\pm\rangle_x\stackrel{x-\texttt{roll}}{\longrightarrow}|\mp\rangle_x)= \phantom{.}_{x}\langle\pm | P_{x,\mp} |\pm\rangle_x=|\!\phantom{.}_{x}\langle\mp |\pm\rangle_x|^2 = 0, \nonumber \end{equation} \begin{equation} \label{probability3bis} {\cal P} (|\sigma\rangle_z\stackrel{x-\texttt{roll}}{\longrightarrow}|\rho\rangle_x)= \phantom{.}_{z}\langle\sigma | P_{x,\rho} |\sigma\rangle_z =|\!\phantom{.}_{x}\langle\rho |\sigma\rangle_z|^2 =\frac{1}{2}, \quad\rho,\sigma\in\{+,-\}. \end{equation} Also, in accordance with the quantum mechanical \emph{projection postulate}, we can observe that the $z$-roll and $x$-roll experiments are to be considered \emph{ideal measurements}, as it is clear that, following the measurement of $F_z$ (resp. $F_x$), the initial state of the die-system is projected onto an eigenstate of $F_z$ (resp. $F_x$), a fact which can also be expressed by considering that the probabilities of finding the system either in state $|+\rangle_x$ or $|-\rangle_x$ (resp. $|+\rangle_z$ or $|-\rangle_z$), following a $z$-roll (resp. an $x$-roll), are equal to zero. \section{Producing interferences with a single die} \label{Producing interferences with a single die} The attentive reader will certainly have noticed that $F_z=\sigma_z$ and $F_x=\sigma_x$, where $\sigma_z$ and $\sigma_x$ are two of the three Pauli matrices (and this explains why we have unconventionally chosen letters $z$ and $x$ to denote the two rolling directions on the plane of the game table). 
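The probabilities listed above can also be checked numerically. The following sketch encodes the states and projectors of the previous section as plain $2\times 2$ real arrays (the helper names are of course our own) and evaluates the Born rule $\langle\psi|P|\psi\rangle$:

```python
from math import sqrt

def matvec(M, v):
    # apply a 2x2 real matrix to a 2-component vector
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def born(state, projector):
    # Born rule in a real Hilbert space: <state| projector |state>
    w = matvec(projector, state)
    return state[0] * w[0] + state[1] * w[1]

s = 1 / sqrt(2)
plus_z, minus_z = [1.0, 0.0], [0.0, 1.0]   # eigenstates of F_z
plus_x, minus_x = [s, s], [s, -s]          # eigenstates of F_x

P_z_plus  = [[1.0, 0.0], [0.0, 0.0]]
P_z_minus = [[0.0, 0.0], [0.0, 1.0]]
P_x_plus  = [[0.5, 0.5], [0.5, 0.5]]
P_x_minus = [[0.5, -0.5], [-0.5, 0.5]]
```

For instance, `born(plus_z, P_z_plus)` evaluates to $1$, `born(plus_z, P_z_minus)` to $0$, and `born(plus_x, P_z_plus)` to $1/2$, in agreement with the probabilities above.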
Now, considering that (see any textbook on quantum mechanics) $[\sigma_x, \sigma_z]=-2i\sigma_y\neq 0$, it immediately follows that the two observables $F_z=\sigma_z$ and $F_x=\sigma_x$ are \emph{experimentally incompatible}, as the matrices representing them do not commute. The existence of experimental incompatibility of certain observables is what distinguishes, among other things, quantum physics from classical physics. More precisely, the presence of relations of non-commutation between certain observables is, in quantum theory, at the origin of so-called \emph{interference effects}, which in turn are responsible for a violation of the classical \emph{formula of total probability}. Let us show how interference effects, and consequently the violation of total probability, manifest themselves in our measurements with the die. To do so, let us first generally observe that if $|\psi\rangle$ is the initial state of a given system, $A$ is a self-adjoint operator associated with a physical observable, and $P_{\alpha} $ is the projection operator associated with one of its eigenvalues $\alpha$, then, if $\alpha$ is the observed outcome of a measurement of $A$, according to the projection postulate the pre-measurement state $|\psi\rangle$ will ``collapse,'' following the measurement process, into the post-measurement state: \begin{equation} \label{projection} |\psi_{\alpha}\rangle = \frac{P_{\alpha}|\psi\rangle}{\sqrt{\langle\psi| P_{\alpha}|\psi\rangle}}. \end{equation} Then, considering a second self-adjoint observable $B$, not necessarily commuting with $A$, we can ask what is the probability that the outcome of a measurement of $B$ would be one of its eigenvalues $\beta$, associated with the projection operator $P_{\beta} $, \emph{conditional} on the fact that the previous measurement of $A$ produced $\alpha$ as an outcome. 
According to (\ref{projection}) and the Born rule, we know that such a conditional probability is given by: \begin{equation} \label{conditional-probability} {\cal P}_{\psi}(B=\beta|A=\alpha) = \langle \psi_{\alpha} |P_{\beta} |\psi_{\alpha}\rangle = \frac{\langle \psi | P_{\alpha}P_{\beta}P_{\alpha}|\psi\rangle}{\langle\psi| P_{\alpha}|\psi\rangle}. \end{equation} Considering then that ${\cal P}_{\psi}(A=\alpha)=\langle\psi| P_{\alpha}|\psi\rangle$ is the probability of obtaining the outcome $\alpha$ when $A$ is measured with the system in state $|\psi\rangle$, we can write: \begin{equation} \label{joint-probability} {\cal P}_{\psi}(B=\beta|A=\alpha) {\cal P}_{\psi}(A=\alpha) = \langle \psi | P_{\alpha}P_{\beta}P_{\alpha}|\psi\rangle. \end{equation} This means that, by definition of a conditional probabilistic statement, the term on the right-hand side of (\ref{joint-probability}) has to be interpreted as a \emph{joint probability} for the measurement of observables $A$ and $B$. However, since the two observables are not necessarily compatible, the joint probability is not to be understood here in the sense of the joint probability of two \emph{simultaneous} measurements, as $A$ and $B$ cannot in general be measured simultaneously, but as the joint probability of two \emph{sequential} measurements: \begin{equation} \label{joint-probability-bis} {\cal P}_{\psi}(A=\alpha \,\,\texttt{then}\,\, B=\beta) = \langle \psi | P_{\alpha}P_{\beta}P_{\alpha}|\psi\rangle.
\end{equation} Defining the projection operator $P_{\bar \alpha}=\mathbb{I}-P_\alpha$, we can of course also write: \begin{equation} \label{joint-probability-tris} {\cal P}_{\psi}(A\neq\alpha \,\,\texttt{then}\,\, B=\beta) = \langle \psi | P_{\bar \alpha}P_{\beta}P_{\bar \alpha}|\psi\rangle, \end{equation} and observing that: \begin{equation} \label{projection-relations} P_\beta = \left(P_\alpha +P_{\bar \alpha }\right)P_\beta \left(P_\alpha +P_{\bar \alpha }\right) = P_\alpha P_\beta P_\alpha + P_{\bar \alpha}P_\beta P_{\bar \alpha}+P_\alpha P_\beta P_{\bar \alpha}+P_{\bar \alpha}P_\beta P_\alpha, \end{equation} we obtain from (\ref{joint-probability-bis}), (\ref{joint-probability-tris}) and (\ref{projection-relations}): \begin{equation} \label{quantum-total-probability} {\cal P}_{\psi}(B=\beta) = {\cal P}_{\psi}(A=\alpha \,\,\texttt{then}\,\, B=\beta) + {\cal P}_{\psi}(A\neq\alpha \,\,\texttt{then}\,\, B=\beta) + 2 \Re \,\langle \psi |P_{\alpha}P_{\beta}P_{\bar \alpha}|\psi\rangle. \end{equation} Eq.~(\ref{quantum-total-probability}) can be considered as the quantum generalization of the classical formula of total probability. When observables $A$ and $B$ are compatible, that is, when they commute, then the corresponding projection operators also commute, and since $P_{\alpha}P_{\bar \alpha}=P_{\alpha }-P_{\alpha}^2 = 0$, the third term in (\ref{quantum-total-probability}), which is a typical interference term, vanishes (and we recover the classical formula of total probability). Formula (\ref{quantum-total-probability}) being completely general, it also applies to the two non-commuting observables $F_z$ and $F_x$, associated with the observation of the upper face of our die, following a $z$-roll and an $x$-roll experiment, respectively. Therefore, exactly as for a microscopic quantum system, the die is able to produce interference effects, and consequently to violate the law of total probability. Let us check this fact more explicitly, in a specific example.
For this, we set $P_\beta =P_{z,+}$, $P_\alpha = P_{x,+}$, $P_{\bar \alpha} = P_{x,-}$, and $|\psi\rangle = |+\rangle_z$. Then, since a $z$-rolling experiment cannot change the upper face of the die, when its upper face is oriented along the $z$-direction (the die will only glide, instead of rolling, since the face in contact with the game table is also oriented in the gliding sense), we have: \begin{equation} \label{example1} {\cal P}_{|+\rangle_z}(F_z=+1)={\cal P} (|+\rangle_z\stackrel{z-\texttt{roll}}{\longrightarrow}|+\rangle_z) =1. \end{equation} On the other hand, for the two joint-sequential probabilities, we have: \begin{eqnarray} \label{example2} {\cal P}_{|+\rangle_z}(F_x=+1\,\,\texttt{then}\,\, F_z=+1) &=& {\cal P} (|+\rangle_z\stackrel{x-\texttt{roll}}{\longrightarrow}|+\rangle_x)\,{\cal P} (|+\rangle_x\stackrel{z-\texttt{roll}}{\longrightarrow}|+\rangle_z)\nonumber\\ &=& \frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}, \end{eqnarray} \begin{eqnarray} \label{example3} {\cal P}_{|+\rangle_z}(F_x=-1\,\,\texttt{then}\,\, F_z=+1) &=& {\cal P} (|+\rangle_z\stackrel{x-\texttt{roll}}{\longrightarrow}|-\rangle_x)\,{\cal P} (|-\rangle_x\stackrel{z-\texttt{roll}}{\longrightarrow}|+\rangle_z)\nonumber\\ &=& \frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}. \end{eqnarray} Now, since clearly $1\neq \frac{1}{4} + \frac{1}{4}$, the die manifestly violates the classical formula of total probability, in accordance with the fact that the matrices associated with the $F_z$ and $F_x$ observables do not commute.
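The probabilities (\ref{example1})--(\ref{example3}), together with the interference term appearing in (\ref{quantum-total-probability}), can also be verified numerically. A minimal sketch, using the same $2\times 2$ matrix representations of the projection operators $P_{z,+}$, $P_{x,+}$ and $P_{x,-}$ employed in the text:

```python
import numpy as np

P_z_plus = np.array([[1.0, 0.0], [0.0, 0.0]])        # projector onto |+>_z
P_x_plus = 0.5 * np.array([[1.0, 1.0], [1.0, 1.0]])  # projector onto |+>_x
P_x_minus = np.eye(2) - P_x_plus                      # projector onto |->_x
psi = np.array([1.0, 0.0])                            # initial state |+>_z

def sequential(P_a, P_b):
    """Joint-sequential probability <psi| P_a P_b P_a |psi>."""
    return psi @ P_a @ P_b @ P_a @ psi

p1 = sequential(P_x_plus, P_z_plus)    # F_x = +1 then F_z = +1
p2 = sequential(P_x_minus, P_z_plus)   # F_x = -1 then F_z = +1
interference = 2 * (psi @ P_x_plus @ P_z_plus @ P_x_minus @ psi)

assert np.isclose(p1, 0.25) and np.isclose(p2, 0.25)
# The sequential probabilities alone violate total probability...
assert not np.isclose(p1 + p2, 1.0)
# ...and the interference term restores the marginal P(F_z = +1) = 1
assert np.isclose(p1 + p2 + interference, 1.0)
```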
On the other hand, according to (\ref{column-z}), (\ref{Pz+-and-Pz-}) and (\ref{Px+-and-Px-}), we obtain for the interference term: \begin{equation} \label{interference-term} 2 \Re \,\phantom{.}_{z}\langle + |P_{x,+}P_{z,+}P_{x,-}|+\rangle_z = \frac{1}{2}\left(\begin{array}{cc} 1 & 0 \end{array}\right)\left(\begin{array}{cc} 1 & 1 \\ 1 & 1 \end{array}\right) \left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right) \left(\begin{array}{rr} 1 & -1 \\ -1 & 1 \end{array}\right) \left(\begin{array}{c} 1\\0 \end{array}\right)= \frac{1}{2}, \end{equation} which is precisely the value we need to add to (\ref{example2}) plus (\ref{example3}) in order to recover (\ref{example1}), in accordance with (\ref{quantum-total-probability}). We leave it to the reader to verify that (\ref{quantum-total-probability}) correctly describes the relation between marginal probabilities, joint-sequential probabilities and interference contributions, for other choices of the initial state and of the conditioning. \subsection{Discussion} \label{Discussion1} For some readers it may come as a surprise that ordinary macroscopic systems can also behave in a quantum (or quantum-like) manner. This, however, has been well known for a long time, at least to foundational researchers, like those of the Geneva-Brussels school of quantum mechanics, whose approach to the foundation of physical theories originated from the pioneering work of Josef-Maria Jauch and Constantin Piron in Geneva,~\cite{Piron1, Piron2, Piron3}, and subsequently from that of Diederik Aerts and collaborators in Brussels~\cite{Aerts1, Aerts5, Aerts2, Aerts6, Aerts3, Aerts4b}. It is important to observe that the quantum behavior of a macroscopic system, like a die, is not a consequence of its internal coherence, but of the way we have decided to actively experiment with it, by means of some very specific experimental protocols.
More precisely, the quantum behavior of a die, and of other macroscopic systems exhibiting a quantum (or quantum-like) structure, is a consequence of the fact that we are not conceiving our observations on the system (i.e., our measurements) only as processes of pure discovery, but also as processes of creation, i.e., processes through which we can create, in an unpredictable manner, the very quantities we are measuring (this is what is sometimes called the \emph{observer effect}. See~\cite{Sassoli-Observer} and the references cited therein). In our die-system this has been done by considering the two observables $F_z$ and $F_x$, corresponding to the reading of the symbol marked on the upper face of the die, following particular rolling experiments. According to the ``rules of the quantum (and casino) game,'' these ``reading measurements'' cannot be performed by passively looking at the die on the table, but by means of procedures which require that the die be first rolled along the $z$ or $x$ directions (what we have called a $z$-roll and an $x$-roll). These are, undoubtedly, readings of a very special kind! Of course, when experimenting with our die, we also have the possibility of looking at it at any moment, and of directly ``seeing'' which one of the faces is in the upper position (if any). This possibility of continuously monitoring the orientation of the die is in fact what confers on the die-model its great explanatory power, allowing for a full visualization of the measurement process, as it evolves (something we cannot do with microscopic entities). However, such continuous monitoring cannot be used here, in practical terms, as an alternative to the $z$-roll and $x$-roll experiments, to determine the value of the quantum-like $F_z$ and $F_x$ ``upper-face'' observables, just as, when we play craps in a casino, we must properly roll the dice for our possible win to be validated.
Now, if we consider the die-system and its unusual ``roll measurements'' a meaningful structural analogy for what truly goes on, behind the scenes, during a quantum measurement with a microscopic entity, then we can highlight, as was done many years ago by Diederik Aerts~\cite{Aerts4, Aerts4b, Aerts7, Aerts10}, a very simple and physically transparent mechanism which would be at the origin of quantum probabilities. Indeed, coming back to our discussion in the Introduction, what our analysis of the die clearly shows is that we don't need an ensemble of entities to generate a statistical ensemble: a statistical ensemble can also be naturally attached to a given physical entity when, instead of considering its properties and states in purely static terms, we also understand them in \emph{dynamical terms}. What we must not forget is that although the state of an entity is a description of its actual properties, i.e., those properties whose actuality would be confirmed with certainty, should we decide to test them (i.e., to observe them), such a description also contains dynamical information about what the entity can possibly become if, when in a given state, we act on it in a certain way, according to a certain experimental protocol. If this protocol has a built-in random element, having its origin in the presence of some unavoidable fluctuations in the experimental context, then of course the becoming of the entity can only be described in probabilistic terms. Also, considering that when we act on a system we generally modify its state, it is natural to interpret the logical connectives subtending the probabilistic calculus of quantum systems not in a classical and static \emph{propositional} way, but in a strictly \emph{dynamical} way.~\cite{Smets} This way of conceiving and interpreting the reality of quantum systems, in relation to the measurements we perform on them, is well illustrated in our experiments with the die.
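This dynamical picture can be made concrete with a deliberately simplified toy simulation (a sketch only, not the full hidden-measurement construction): a hidden parameter $\lambda$, selected by the uncontrollable fluctuations of the hand--shooter interaction, deterministically fixes the outcome of each individual $x$-roll, while our ignorance about which $\lambda$ was selected reproduces the quantum statistics:

```python
import random

def x_roll(state, lam):
    """Deterministic x-roll: the hidden interaction lam in [0, 1) fully
    determines the outcome for a given initial state of the die."""
    if state == "+x":
        return +1          # eigenstate: every hidden interaction agrees
    if state == "-x":
        return -1
    # state "+z" or "-z": half of the interactions lead to each outcome
    return +1 if lam < 0.5 else -1

# Lack of knowledge about lam (uniform fluctuations) yields the Born statistics
random.seed(0)
outcomes = [x_roll("+z", random.random()) for _ in range(100_000)]
freq_plus = outcomes.count(+1) / len(outcomes)
assert abs(freq_plus - 0.5) < 0.01    # ~ |<+x|+z>|^2 = 1/2
```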
When for instance the die is in state $|+\rangle_z$, we can say that it possesses, in actual terms, the property of having what we may call a ``+'' $z$-\emph{upper face}. This we can say because we can predict with certainty (with probability equal to $1$) that the outcome of an observation of $F_z$ is $+1$, which is exactly what is meant by possessing a ``+'' $z$-upper face. This certain prediction is possible because when the state is $|+\rangle_z$, and the system is operated through a $z$-roll, the fluctuations in the experimental context -- those produced by the random interaction of our hand with the shooter -- are not able to affect the final outcome. But this is not the case if we act on the system by means of an $x$-roll, i.e., if we measure $F_x$ instead of $F_z$. Indeed, in this case our lack of knowledge about the exact (deterministic) interaction which the hand selects, when pulling on the shooter, translates into our lack of knowledge about the final upper face exhibited by the die. In other terms, if, on the one hand, we can say that the die in state $|+\rangle_z$ possesses in \emph{actual} terms the ``+'' $z$-\emph{upper face} property, on the other hand we can only say that, when in such a state, it possesses in \emph{potential} terms the ``+'' and ``$-$'' $x$-\emph{upper face} properties, and we can express this potentiality in more precise terms by writing the state of the system as the superposition: $|+\rangle_z = (|+\rangle_x + |-\rangle_x)/\sqrt{2}$. So, the state describes what the system \emph{is}, its actual properties, which correspond to those observations whose results we can predict with certainty, but also, indirectly, it describes what the system \emph{can possibly become}, when we observe properties which are not yet possessed by it, and therefore can only be created by the observational process, in a way which cannot be predicted in advance.
This approach to the measurement problem has been called by Aerts the \emph{hidden-measurement approach}, where the term ``hidden'' refers to the deterministic interaction between the measured system and the measuring apparatus, which is selected in a random way during the measurement process, because of the presence of unavoidable and uncontrollable fluctuations in the experimental context.~\cite{Aerts4,Aerts4b,Aerts7,Aerts10} More precisely, according to this approach (which has been substantiated in full mathematical terms by Aerts and Coecke~\cite{Aerts7,Coecke2}), to a given quantum measurement we can associate an entire collection of ``hidden'' deterministic measurements, and when the measurement is actually performed only one of these hidden-measurements actually takes place. Each one of these hidden deterministic measurements determines, in a unique way, a given outcome, but since we lack knowledge about which one is actually selected, we also lack knowledge about the final outcome. This is what would constitute the essential difference between classical probabilities, obeying Kolmogorov's axioms, and non-classical, quantum probabilities, disobeying Kolmogorov's axioms. The former correspond to situations where the lack of knowledge is only about the state of the system, whereas the latter would correspond to situations of full knowledge of the system's state, but maximum lack of knowledge about the exact measurement interaction taking place between the system and the apparatus. What is interesting to observe is that between these two extremes, one can also describe intermediate pictures, giving rise to intermediate probabilities which can neither be fitted into a quantum probability model, nor into a classical probability model.~\cite{Accardi, Pitowski, Aerts3, Massimiliano2} A few additional comments are in order.
In our description of the rolling experiment with the die, we have described measurements, probabilities and outcomes in terms of matrices and vectors in a real two-dimensional Hilbert space, using the projection postulate and the Born rule. However, to do so we have limited the possible states and observations on the system by only considering the two observables $F_z$ and $F_x$, relative to the two orthogonal directions $z$ and $x$, and we have also assumed that the system can only be prepared in one of the four states $|+\rangle_x$, $|-\rangle_x$, $|+\rangle_z$ and $|-\rangle_z$, as illustrated in Fig.~\ref{die-states}. In other terms, we have only considered a subset of all possible states of the die on the game table; a subset which is closed under the action of the observables $F_z$ and $F_x$. Of course, nothing prevents us from also defining more general observables $F_u$, associated to other directions $u$, different from $z$ or $x$. However, if we measure $F_u$ (by orienting the shooter along the $u$-direction) when the die-system is, say, in state $|+\rangle_z$, then if $u$ is neither parallel nor orthogonal to $z$, the post-measurement state will not in general be an eigenstate of $F_u$. This means that in most cases $F_u$ has to be considered a \emph{generalized observable}, and that the rolling experiments on the die-system cannot in general be described only in terms of so-called (von Neumann) ideal measurements and the projection postulate. More, of course, should be said about these important ideas, and the mathematical developments they have originated, particularly the difference between classical and quantum properties, classical and quantum probabilities, phase space and Hilbert space structures, as well as the intermediate structures corresponding to situations of partial (non-maximal) absence of knowledge, but this would go beyond the mostly didactical scope of the present paper.
What we shall do instead, in the next section, is to show how it is possible to connect two identical dice, in order to create a double-die system which, like microscopic entangled systems, is able to violate Bell's inequality. In other terms, by ``playing'' with dice we can shed light not only on the origin of quantum interference effects and quantum probabilities, but also on the phenomenon of entanglement. \section{Violating Bell's inequality with two entangled dice} \label{Violating Bell's inequality with two entangled dice} Before describing our system of two entangled dice, and showing how we can perform experiments that will produce a violation of Bell's inequality, let us briefly recall what the latter is all about~\cite{Bell0, Clauser, Bell1, Merm} (see for instance~\cite{Vald} for a simple but general proof). Bell was able to write a mathematical inequality incorporating certain general assumptions about physical systems, so that if the inequality is found to be experimentally violated, then at least one of the assumptions used in its derivation must be wrong. Let us simply recall the expression of Bell's inequality, without proving it (we consider here its so-called CHSH generalization). On a given physical entity we assume that four different experiments can be performed: $e^A_a$, $e^A_{a'}$, $e^B_b$ and $e^B_{b'}$. Let us call $o^A_a$, $o^A_{a'}$, $o^B_b$ and $o^B_{b'}$ the outcomes associated to these experiments, which we assume can only take the two values $+1$ or $-1$. We also assume that each of the experiments $e^A_a$ and $e^A_{a'}$ can be performed together with either of the experiments $e^B_b$ and $e^B_{b'}$, thus defining additional \emph{coincidence} experiments: $e_{ab}^{AB}$, $e_{ab'}^{AB}$, $e_{a'b}^{AB}$ and $e_{a'b'}^{AB}$.
To each coincidence experiment $e_{cd}^{AB}$, $c\in\{a,a'\}$, $d\in\{b,b'\}$, one can associate the expectation value $E^{AB}_{cd}$ of the product of outcomes $o^A_c o^B_d$, by: \begin{eqnarray} \label{expectation value} E^{AB}_{cd}&=&\sum {\cal P}_{cd}^{AB}(o^A_c,o^B_d) o^A_co^B_d\nonumber\\ &=& +{\cal P}_{cd}^{AB}(+1,+1) + {\cal P}_{cd}^{AB}(-1,-1) - {\cal P}_{cd}^{AB}(+1,-1) - {\cal P}_{cd}^{AB}(-1,+1), \end{eqnarray} where ${\cal P}_{cd}^{AB}(o^A_c,o^B_d)$ is the probability that the coincidence experiment $e_{cd}^{AB}$ yields the outcomes $(o^A_c,o^B_d)$. Assuming, as Bell did, that the experiments' outcomes are independently determined by some hidden variables, so that the expectation (\ref{expectation value}) can be written as the integral of the product of the two outcomes over these hidden variables (a hypothesis often referred to as \emph{Bell locality}), it is possible to prove the following relation~\cite{Bell1, Bell0}: \begin{equation} \label{Bell inequalities} I\equiv |E^{AB}_{ab} - E^{AB}_{ab'}| + |E^{AB}_{a'b'} + E^{AB}_{a'b}|\leq 2. \end{equation} As is well known, (\ref{Bell inequalities}) is violated by certain quantum systems, like for instance those formed by two entangled spin-$1/2$ entities in a \emph{singlet (zero) spin state}, for which one can show that $I= 2\sqrt{2}>2$.~\cite{Aspect1, Aspect2, Massimiliano-elastic} In other terms, quantum systems formed by two entangled subsystems usually violate Bell's locality assumption, and this remains true even when the two subsystems are separated by a very large spatial distance. This means that no local physical theory, in the sense specified by Bell, can agree with all the statistical implications of quantum mechanics, and that spatial separation doesn't imply \emph{experimental separation}.
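The value $I=2\sqrt{2}$ quoted above for the singlet state can be reproduced with a few lines of code. A sketch, assuming coplanar spin measurement directions at the standard optimal CHSH angles $a=0$, $a'=\pi/2$, $b=\pi/4$, $b'=3\pi/4$ (a conventional choice, not specified in the text):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin observable along a direction at angle theta in the z-x plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2), in the basis |00>, |01>, |10>, |11>
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(ta, tb):
    """Expectation <psi| spin(ta) x spin(tb) |psi>, equal to -cos(ta - tb)."""
    return np.real(singlet.conj() @ np.kron(spin(ta), spin(tb)) @ singlet)

a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
I = abs(E(a, b) - E(a, bp)) + abs(E(ap, bp) + E(ap, b))
assert np.isclose(I, 2 * np.sqrt(2))  # > 2: Bell's inequality is violated
```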
In order to gain some insight into the content of Bell's inequality, and understand what could be the reason for its violation by microscopic systems, like singlet spin states, we now want to show how two dice can be \emph{connected} to create a macroscopic entangled system which also violates (\ref{Bell inequalities}). This will shed some light on the nature of \emph{quantum correlations} (the ``spooky actions at a distance,'' as Einstein used to call them). To do so, we need to slightly modify the die we have previously defined in Fig.~\ref{Quantum-die-faces}. The only change we need to consider is the permutation of the ``$+$'' and ``$-$'' symbols on two of its faces (with no change of the corresponding orientations), so as to obtain the die described in Fig.~\ref{Quantum-die-faces-bis}. \begin{figure}[!ht] \centering \includegraphics[scale =.5]{quantum-die-faces-entanglement-version.pdf} \caption{The six faces of the die used to create an entangled two-die system. The only difference with the die previously described in Fig.~\ref{Quantum-die-faces} is the permutation of two of the ``$+$'' and ``$-$'' symbols, whereas the relative orientation of the faces' surfaces has remained the same. \label{Quantum-die-faces-bis}} \end{figure} Considering two identical dice of this kind, we can easily create an entangled double-die system by \emph{connecting them through space} by means of a rigid rod, whose two ends are glued at the center of two of the opposing faces of the two dice, as indicated in Fig.~\ref{dice-connected}. \begin{figure}[!ht] \centering \includegraphics[scale =.6]{two-dice-connected.pdf} \caption{The system of two (entangled) dice, connected through space by a rigid rod glued on two of their opposing faces. \label{dice-connected}} \end{figure} Clearly, the presence of the rod creates \emph{actual} correlations between the six different faces of the two dice.
Here we are only interested in the correlations between the four faces of each die which correspond to a possible outcome (as a final upper face) in relation to an $x$-roll experiment. As can be deduced from Fig.~\ref{Quantum-die-faces-bis} and Fig.~\ref{dice-connected}, the correlations in question are those described in Fig.~\ref{Dice-faces-correlations}. \begin{figure}[!ht] \centering \includegraphics[scale =.5]{dice-faces-correlations.pdf} \caption{The correlations between the four faces of the two dice whose normal vectors are orthogonal to the $z$-direction, due to the presence of the connecting rod. \label{Dice-faces-correlations}} \end{figure} The length of the rod is of course arbitrary. We only assume that it is made of a material which is sufficiently light in comparison to the mass of the two dice, and extremely rigid as well. In fact, the rod is not an essential ingredient in our analysis: we just use it to help us visualize the two dice as two spatially separated entities, and clearly identify in the rod the source of their connection through space. But we could very well avoid the use of the rod by, for instance, directly gluing together the two opposing faces of the two dice. Having said that, we assume that the glue used to connect the rod is sufficiently strong, so that if the two dice are rolled together, simultaneously, in the same $x$-direction, they will be able to maintain their connection while rolling, i.e., to remain a whole entity. But we also assume that the glue, although strong, is not so strong as to allow the two dice to remain connected if only one die is rolled at a time (if the two dice are, say, made of metal, then instead of the glue we can imagine using a rod of a magnetic kind).
In other terms, if we apply the shooter to only one of the two dice, to produce an $x$-roll, then, because of the inertia of the other die, the impact will cause the rod to suddenly detach and fall, thus disconnecting the two dice (one of which will be rolling, or sliding, whereas the other one will remain essentially still). On the other hand, if two shooters are used at the same time, on both dice, the torque experienced by the rod will be much lower, so that it will not detach and the two dice will be able to roll together on the game table, as a one-piece entity (the double-die cannot slide, but only roll, as one of its two lower faces is always a high-friction face). Keeping in mind the above, we now assume that two players, who we shall call player $A$ and player $B$, are each placed close to one of the two dice. Player $A$ performs on its die (say, the left one) the experiments $e^A_a$ and $e^A_{a'}$, which are defined as follows. Experiment $e^A_a$ consists in observing $F_x$, i.e., in using a shooter to produce a roll of the die along the $x$-direction, then reading the symbol marked on the die's final upper face, producing in this way one of the two outcomes: $o^A_a=+1$, or $o^A_a=-1$. Experiment $e^A_{a'}$ is much simpler, as it consists in simply looking at the die's upper face and checking whether it is flat or not. If it is so, then the outcome is $o^A_{a'}=+1$, otherwise it is $o^A_{a'}=-1$. Player $B$ performs on its die (the right one) the same experiments as player $A$. In other terms, $e^B_b$ is defined as $e^A_a$, and $e^B_{b'}$ as $e^A_{a'}$. Of course, since all the faces of the two dice are by definition flat, and since the only faces of the two dice which are oriented toward the $x$-direction are those with a ``+'' symbol, all of the four above mentioned experiments, when singly performed, can only produce the outcome $+1$.
The same obviously remains true when the coincidence experiments $e_{ab'}^{AB}$, $e_{a'b}^{AB}$ and $e_{a'b'}^{AB}$ are performed at the same time by the two players, whose outcomes are always $(+1,+1)$. The situation changes, however, when one considers the coincidence experiment $e_{ab}^{AB}$, which \emph{creates upper-face correlations}. Indeed, if the two players simultaneously use a shooter to impart an $x$-roll to their respective dice, then, as we explained, the rod will not separate and the two dice will remain connected as they roll. Therefore, according to Fig.~\ref{Dice-faces-correlations}, the only possible outcomes of the coincidence experiment $e_{ab}^{AB}$ are $(+1,-1)$ and $(-1,+1)$, and of course they have the same probability of occurring, equal to $1/2$. According to (\ref{expectation value}), we thus obtain that $E^{AB}_{ab'}= E^{AB}_{a'b}=E^{AB}_{a'b'}=1$, and $E^{AB}_{ab}=-1$, so that: \begin{equation} \label{Bell violation classical} |E^{AB}_{ab} - E^{AB}_{ab'}| + |E^{AB}_{a'b'} + E^{AB}_{a'b}| = \left|-1-\left(+1\right)\right|+\left|+1+\left(+1\right)\right| =4. \end{equation} In other terms, not only does the double-die system break Bell's inequality, but it does so in a maximal way. Before discussing the physical content of the above violation, let us just emphasize that the violation is maximal only because the connection between the two dice is such that the outcomes $(+1,+1)$ and $(-1,-1)$ are impossible, so that the difference $|E^{AB}_{ab} - E^{AB}_{ab'}| = |E^{AB}_{ab} - 1|$ in (\ref{Bell violation classical}) necessarily takes its maximal value. However, it is possible to use two dice of a more general geometry, like two prisms with an arbitrary number of faces, to produce weaker violations of inequality (\ref{Bell inequalities})~\cite{Massimiliano-preprint}.
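The bookkeeping behind (\ref{Bell violation classical}) can be summarized in a few lines of code, using the outcome probabilities just derived:

```python
def E(p):
    """Expectation value of o_A * o_B, given the outcome probabilities
    p[(o_A, o_B)] of a coincidence experiment."""
    return sum(oa * ob * pr for (oa, ob), pr in p.items())

# e_ab', e_a'b and e_a'b' deterministically yield (+1, +1)
certain = {(+1, +1): 1.0, (+1, -1): 0.0, (-1, +1): 0.0, (-1, -1): 0.0}
E_abp = E_apb = E_apbp = E(certain)   # all equal to +1

# The coincidence x-roll e_ab creates anti-correlated upper faces
E_ab = E({(+1, +1): 0.0, (+1, -1): 0.5, (-1, +1): 0.5, (-1, -1): 0.0})

I = abs(E_ab - E_abp) + abs(E_apbp + E_apb)
assert I == 4.0   # maximal violation of the CHSH bound I <= 2
```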
\subsection{Discussion} \label{Discussion2} Let us now discuss the physical content of our experiment with the double-die system, to see what it reveals as regards a possible mechanism responsible for the violation of Bell's inequality. But before doing so, we would like to mention that it was Diederik Aerts who, many years ago, challenged for the first time the widespread belief that quantum structures would only be present at the microscopic level of our reality. He did that not only by showing that one can conceive ``classical'' machines exhibiting the typical (non-Kolmogorovian) probabilistic structure possessed by microscopic systems~\cite{Aerts3, Aerts4, Aerts4b}, but also, as we have done in the second part of this article, that one can use ordinary macroscopic entities to violate Bell's inequality (here the CHSH version of it). The historical model used by Aerts to violate Bell's inequality was a machine made of vessels, tubes and water, known as the \emph{connected vessels of water} model~\cite{Aerts5, Aerts2} (an alternative, simplified version of such a model, using a single uniform elastic band, has also been recently described by this author~\cite{Massimiliano-elastic}). These models, like our two-die system, violate the inequality in a maximal way ($I=4$). However, Aerts was also able to conceive more elaborate macroscopic systems which can violate the inequality exactly in the same way as two photons in a singlet state ($I=2\sqrt{2}$).~\cite{Aerts-rod, AertsBroekaert} Having said that, let us now analyze what our model teaches us regarding the nature of the correlations involved in the violation of Bell's inequality.
Here we must distinguish between two different sorts of correlations: \emph{correlations of the first kind}, which are already present in the system before the execution of the experiment, and \emph{correlations of the second kind}, which aren't present before the execution of the experiment, but are literally created by it.~\cite{Aerts2} As far as this author can judge, a quite widespread belief is that Bell's inequalities would be violated because of the presence in the entangled state of correlations of the first kind, which therefore are only discovered (and not created) by the coincidence measurements. This belief appears to be supported by the observation that in Aspect's famous polarization experiments with entangled photons in singlet states,~\cite{Aspect1, Aspect2} it was possible to take the precaution of randomly changing, in very short times, the orientations of the polarizers during the flight of the two entangled photons, thus enforcing relativistic separation between them. This means that, according to relativity theory, any form of interaction/communication \emph{through space} between the two measured subsystems has been excluded, and therefore it is quite natural to conclude that the observed correlations can only be of the first kind. This conclusion appears however to be in contradiction with the observation that a singlet state is a rotationally invariant state, so that it doesn't describe the state of two entities having already actualized their respective polarizations, although the way it is mathematically written may wrongly suggest so. So, what does our double-die system tell us about this subtle distinction between correlations of the first and second kind, and their role in the violation of Bell's inequality? At first sight the model seems to confirm the belief that correlations of the first kind would be responsible for the violation.
Indeed, because of the presence of the rod, the faces of the two dice are clearly all correlated, and such correlations clearly exist even before the coincidence experiment $e_{ab}^{AB}$ is executed. But we must here properly distinguish ``faces'' from ``upper faces.'' As we discussed at some length in Sec.~\ref{Rolling a die is a quantum process}, a rolling experiment involves an unmistakable creation aspect: the creation of a specific \emph{upper face} from the many available faces of the die, which all obviously exist prior to the experiment as ``faces,'' but certainly not as ``upper faces.'' So, in the same way that by rolling a die we create an upper face, which did not exist prior to the rolling experiment, when we roll a system made of two connected dice we create a correlation between two upper faces, which did not exist prior to the experiment. The subtle point here is that during a coincidence experiment the correlation between two ``upper faces'' is created from the existing correlations between ``faces,'' i.e., from the correlations between ``\emph{potential} upper faces,'' which therefore can also be understood as \emph{potential correlations} between ``upper faces.'' In other terms, if correctly interpreted, our double-die model demonstrates that it is indeed the mechanism of \emph{creation of correlations} (correlations of the second kind, as Aerts proposed to call them~\cite{Aerts5, Aerts2}) which is responsible for the violation of Bell's inequality. To convince oneself that this is indeed the correct interpretation, one can try to use only correlations of the first kind to violate Bell's inequality in our double-die model, and observe that it is impossible. One can do that by simply replacing the observable $F_x$, which is responsible for the creation of an upper face (through the $x$-roll), by an observable $\tilde F$, of a purely discovery kind, consisting in simply taking notice of the already actual upper face shown by the die.
One can then easily check that if $\tilde F$ is used instead of $F_x$, then, depending on the state in which the system is prepared, one obtains either $I=0$ or $I=2$, in accordance with Bell's inequality. So, our model confirms what has been repeated several times by Aerts~\cite{Aerts-rod}: ``The possibility of violating Bell inequalities is not only a property of quantum entities. Bell inequalities can also be violated by coincidence measurements on a classical macroscopical entity. In fact Bell inequalities can always be violated if during the coincidence experiments one breaks one entity into separated pieces, and by this act creates the correlations.'' However, unlike the macroscopic models that have been studied in the past, in our model it is not the fact that the entity is broken that creates correlations, but, on the contrary, the fact that it is not broken! \section{Concluding remarks} \label{Concluding remarks} In this paper we have didactically introduced the reader to some important ideas regarding the possibility of a realistic interpretation of the behavior of microscopic quantum systems. We have done so by somehow reversing the logic of Einstein's celebrated quote, that God doesn't play dice, showing that the simple act of rolling a die (according to certain protocols) is a truly quantum experiment, which can be described using the projection postulate and the Born rule, and which is capable of producing interference effects. This allowed us to gain some intuition into a possible origin of quantum probabilities, which can be understood as epistemic statements associated to our lack of knowledge not about the state of the system, but about the exact interaction taking place between the system and the measurement apparatus, according to Aerts' hidden-measurement approach.
This possibility, of understanding quantum probabilities not as irreducible (ontic) quantities, but as contextual (epistemic) quantities, allows us to dispel much of the mystery associated to the quantum measurement, which can be understood as a physical (and not psychophysical) creation process induced by the interaction of the system with the measuring apparatus. Also, it shows that quantum structures are not limited to the microworld, but are also present in macroscopic systems, if we only limit in a certain way our possibilities of actively experimenting on them, according to specific experimental protocols. Another interesting aspect indirectly touched by our analysis is the possibility of generally understanding probability theory as a theory dealing with the measurement of properties associated to physical systems, operationally defined by means of certain specific observational protocols. If we understand probabilities in this way, many interpretational difficulties immediately disappear. For instance, in the so-called Bertrand paradox,~\cite{Bertrand} the fact that different randomization procedures yield different probabilities can simply be understood as the measurement of different properties (or observables), associated to different observational protocols, and therefore to different sets of hidden-measurement interactions. In that respect, it is worth observing that the hidden-measurement mechanism is in fact a very general one, in the sense that it can be used to describe any probabilistic situation, and not only the typical quantum ones. In other words, it is able to provide, in a way, a full description of all types of probability structures one can encounter in the world, as established in Ref.~\cite{Aerts-1994} (Sect. 4). In the second part of this article we have studied another important and mysterious feature of quantum systems: entanglement.
We have done so by showing that Bell's inequality can easily be violated by macroscopic systems, provided the measurement process can create correlations, and not just discover correlations. Our double-die system cannot however be described by using a Hilbert space formalism and self-adjoint observables, as it is clear that $I=2\sqrt{2}$ is the maximal possible violation in that framework~\cite{Cirel}. In that respect, it is worth emphasizing that, according to a theorem of Pitowsky~\cite{Pitowski}, when Bell-type inequalities are violated, one cannot use a classical Kolmogorovian probability model to represent the probabilities associated with the experiments under consideration. Quantum microscopic systems are of course important examples of non-Kolmogorovian probability models, but they are not the only ones, as the example of the double-die, violating Bell's inequality in a maximal way, clearly shows. That said, let us conclude by observing that although the double-die system certainly elucidates a possible mechanism behind the violation of Bell's inequality, what it doesn't reveal is how quantum systems are able to implement such a mechanism. Indeed, the reason for the creation of correlations in the double-die is of course the presence of the connecting rod: because of it we cannot consider the two dice as two spatially separated entities, but must consider them as a whole, non-separable entity. This \emph{macroscopic wholeness} property~\cite{Aerts2} of the double-die system is of course totally without mystery, being the result of their connection through space by means of the glued rod. On the other hand, although it is also possible to conclude with Aspect that~\cite{Aspect2} ``[...] 
an entangled EPR photon pair is a non-separable object; that is, it is impossible to assign individual local properties (local physical reality) to each photon,'' what is much more difficult to understand is how the two photons can actually remain connected, considering that they do not possess the property of macroscopic wholeness. This apparent paradox, of a microscopic entity made of a pair of spatially separated entities which can nevertheless remain experimentally connected, independently of their spatial distance, is of course what fundamentally distinguishes our double-die macroscopic system from an entangled microscopic system. Another important distinction is of course the fact that for a rod to allow quasi-instantaneous correlations between the two dice (as happens between the two photons of a correlated pair in Aspect's experiments), when these are separated by a very large spatial distance, it would have to be an arbitrarily rigid and light object (the typical mechanical properties attributed in the past to the luminiferous ether), which of course is impossible to achieve with a concrete macroscopic object. Now, it is precisely because of this conceptual difficulty, of having to give sense to a connection through space which can neither be detected nor imagined in a non-magical way, that the general belief among physicists is that correlations of the first kind would be responsible for the violation of Bell's inequality in experiments with entangled pairs, like those conducted by Aspect. However, this belief cannot be considered satisfactory, as we are not able to conceive -- as far as the present author knows -- macroscopic models that would be able to violate Bell's inequality by only using correlations of the first kind.
This appears to be the case if the double-system in question obeys Bell's locality assumption, as is the case of the enigmatic macroscopic device described by Mermin~\cite{Merm}, but also if the locality is manifestly disobeyed, as our two-die system (and similar interconnected systems~\cite{Aerts5, Aerts2, Massimiliano-elastic}) clearly shows. This explains why we have so strongly emphasized in this paper the deep conceptual difference between a ``face'' and an ``upper face'' of a die. Indeed, if we don't properly understand such a fundamental distinction, we are easily led to believe that macroscopic systems like a double-die would actually demonstrate the possibility of violating Bell's inequality by means of correlations of the first kind, which on the contrary is not the case, as we have explained in the second part of our paper. Having said this, let us point out a possible solution to this apparent conundrum, which consists in simply abandoning the preconception that microscopic entities would always be present in our three-dimensional space, i.e., that physical reality should only exist within space. In other terms, it is about accepting that, quoting here Aerts, space would only be~\cite{Aerts4} ``[...] a momentaneous crystallization of a theatre for reality where the motions and interactions of the macroscopic material and energetic entities take place. But other entities -- like quantum entities for example -- `take place' outside space, or -- and this would be another way of saying the same thing -- within a space that is not the three dimensional Euclidean space.''
If we accept the idea that \emph{non-locality} is actually an expression of \emph{non-spatiality},~\cite{Aerts4, Aerts2, Massimiliano1, Massimiliano2, Massimiliano3, Massimiliano-God} then of course there are no conceptual problems in considering that two non-spatial microscopic entities could remain, as time passes by, intimately connected (not ``through space'' but, more generally, ``through reality''!), and that it would be their non-spatial connection that is responsible for the \emph{creation of correlations} that violate Bell's inequality. Also, if the connection is assumed to be non-spatial, then the process of creation of correlations need not be limited by relativistic constraints, as is the case for macroscopic objects, in agreement with the superluminal correlation effects observed in EPR-like experiments.
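For the reader's convenience, we recall the three bounds that played a role in the above discussion, written here for the standard Bell--CHSH combination of correlation functions (the quantity $I$ was defined in a specific way in the previous sections, so the sign convention below may have to be adapted accordingly):
\begin{align*}
I_{\rm CHSH} = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad
|I_{\rm CHSH}| \leq
\begin{cases}
2 & \text{(local hidden-variable theories)},\\
2\sqrt{2} & \text{(Hilbert space formalism, Cirel'son~\cite{Cirel})},\\
4 & \text{(algebraic maximum)}.
\end{cases}
\end{align*}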
\section{Introduction} \label{secdBint} We consider Schr\"{o}dinger operators $H$ (with separate boundary conditions), induced by the differential expression \begin{align*} \tau = -\frac{d^2}{dx^2} + q(x) \end{align*} on some interval $(a,b)$. Here $q$ is a real-valued, locally integrable function on $(a,b)$ referred to as the potential. In~\cite{kst2} (see also~\cite{kt},~\cite{kst3}), Kostenko, Sakhnovich and Teschl developed a Weyl--Titchmarsh theory for $H$ under the sole hypothesis that there is some nontrivial real entire solution $\phi$ of \begin{align*} -\phi''(z,x) + q(x)\phi(z,x) = z\phi(z,x), \quad x\in(a,b),~z\in\C, \end{align*} which lies in $L^2(a,b)$ near $a$ and satisfies the boundary condition at $a$ if $\tau$ is in the limit-circle case at $a$. Here, by real entire solution we mean that $\phi(\cdot,c)$ and $\phi'(\cdot,c)$ are real entire functions for one (and hence for all) $c\in(a,b)$. For such a solution to exist it is necessary and sufficient that for some $c\in(a,b)$, $H_{(a,c)}$ has purely discrete spectrum (see~\cite[Lemma~2.2]{kst2}), where $H_{(a,c)}$ is the restriction of $H$ to $L^2(a,c)$ with Dirichlet boundary conditions at $c$. In particular, they were able to prove a local Borg--Marchenko uniqueness result for their singular Weyl $m$-function under restrictions on the exponential growth of the solution $\phi$. Their proof follows the simple proof of Bennewitz~\cite{ben}, which covers the case of regular left endpoints. However, since the spectral measure determines the singular Weyl $m$-function only up to some real entire function, their Borg--Marchenko theorem does not immediately yield a uniqueness result for the spectral measure. In fact, all one would need is some growth restriction on the difference of two singular $m$-functions with the same spectral measure. This will be done in~\cite{SingWT2} in the case when the spectrum of the operators is assumed to be purely discrete and to have finite convergence exponent.
The present paper uses a completely different approach. We utilize de Branges' theory of Hilbert spaces of entire functions in order to obtain a uniqueness result for the spectral measure. In particular we use de Branges' ordering theorem to conclude that the de Branges spaces associated to Schr\"{o}dinger operators with the same spectral measure are equal. In Section~\ref{secdB} we start with a brief review of the theory of de Branges spaces. For a detailed discussion we refer to de Branges' book~\cite{dBbook}. The following section introduces the de Branges spaces associated with a self-adjoint Schr\"{o}dinger operator as above. The core of this section is quite similar to~\cite[Section~3]{remling} (see also~\cite{remling2}) with the sole difference that we do not assume the left endpoint to be regular. Section~\ref{secdBuniq} is devoted to our uniqueness result for the spectral measure. Finally, in the last section we apply our result to perturbed Bessel operators. \section{de Branges spaces}\label{secdB} An analytic function $N$ in the upper complex half-plane $\C^+$ is said to be of bounded type if it can be written as the quotient of two bounded analytic functions. For such a function the number \begin{align*} \limsup_{y\rightarrow\infty} \frac{\ln|N(\I y)|}{y} \in[-\infty,\infty), \end{align*} is referred to as the mean type of $N$. A de Branges function is an entire function $E$, which satisfies the estimate \begin{align*} |E(z)| > |E(z^\ast)|, \quad z\in\C^+. \end{align*} The de Branges space $B$ associated with such a function consists of all entire functions $F$ such that \begin{align*} \int_\R \left|\frac{F(\lambda)}{E(\lambda)}\right|^2 d\lambda < \infty, \end{align*} and such that $F/E$ and $F^\#/E$ are of bounded type in $\C^+$ with nonpositive mean type. Here $F^\#$ is the entire function defined by \begin{align*} F^\#(z) = F(z^\ast)^\ast, \quad z\in\C. 
\end{align*} Equipped with the inner product \begin{align*} \dbspr{F}{G} = \frac{1}{\pi} \int_\R \frac{F(\lambda)G(\lambda)^\ast}{|E(\lambda)|^2} d\lambda, \quad F,~G\in B, \end{align*} the vector space $B$ turns into a Hilbert space (see~\cite[Theorem~21]{dBbook}). For each $\zeta\in\C$, the point evaluation at $\zeta$ is a continuous linear functional on $B$, i.e. \begin{align*} F(\zeta) = \dbspr{F}{K(\zeta,\cdot)}, \quad F\in B, \end{align*} where the reproducing kernel $K$ is given by (see~\cite[Theorem~19]{dBbook}) \begin{align}\label{eqndBrepker} K(\zeta,z) = \frac{E(z)E^\#(\zeta^\ast)-E(\zeta^\ast)E^\#(z)}{2\I (\zeta^\ast-z)}, \quad \zeta,~z\in\C. \end{align} Note that though there is a multitude of de Branges functions giving rise to the same de Branges space (including norms), the reproducing kernel $K$ is independent of the actual de Branges function. Our uniqueness result relies on the ordering theorem of de Branges~\cite[Theorem~35]{dBbook}. In order to state it, let $E_1$, $E_2$ be two de Branges functions and $B_1$, $B_2$ the corresponding de Branges spaces. \begin{theorem}\label{thmdBOrdering} Suppose $B_1$, $B_2$ are isometrically embedded in $\LdBsm$, for some Borel measure $\rho$. If $E_1/E_2$ is of bounded type in the upper complex half-plane and has no real zeros or singularities, then $B_1$ contains $B_2$ or $B_2$ contains $B_1$. \end{theorem} Moreover, we will need the following converse statement. \begin{lemma}\label{lemdBordcon} If $B_1$ is contained in $B_2$ or $B_2$ is contained in $B_1$, then $E_1/E_2$ is of bounded type in the upper complex half-plane. \end{lemma} \begin{proof} Without loss of generality, assume $B_1$ is contained in $B_2$. 
As a consequence of~\cite[Theorem~25]{dBbook}, there are entire functions $F_1$, $G_1\in B_1\subseteq B_2$ such that \begin{align*} E_1(z) = F_1(z) + zG_1(z), \quad z\in\C. \end{align*} Now the function \begin{align*} \frac{E_1(z)}{E_2(z)} = \frac{F_1(z)}{E_2(z)} + z\frac{G_1(z)}{E_2(z)}, \quad z\in\C^+, \end{align*} is of bounded type, since both summands are by the definition of $B_2$. \end{proof} \section{Schr\"{o}dinger operators and de Branges spaces} In this section let $(a,b)$ be some bounded or unbounded interval, $q$ a real-valued, locally integrable function on $(a,b)$ and $\tau$ the differential expression \begin{align*} \tau = -\frac{d^2}{dx^2} + q(x), \end{align*} on $(a,b)$. By $H$ we denote some associated self-adjoint Schr\"{o}dinger operator in $L^2(a,b)$ with separate boundary conditions if $\tau$ is in the limit-circle case at both endpoints. Concerning the regularity of the potential $q$ near the endpoint $a$, we will only assume that there is some real entire solution $\phi$ of \begin{align*} -\phi''(z,x) + q(x)\phi(z,x) = z\phi(z,x), \quad x\in(a,b),~z\in\C, \end{align*} such that for each $z\in\C$, $\phi(z,\cdot)$ is nontrivial, lies in $L^2(a,b)$ near $a$ and satisfies the boundary condition at $a$ if $\tau$ is in the limit-circle case at $a$. Here by real entire we mean that for some (and hence for all) $c\in(a,b)$, $\phi(\cdot,c)$ and $\phi'(\cdot,c)$ are real entire functions. For the proof of our uniqueness result we will need the following simple lemma on the asymptotics of the solution $\phi$. Note that we always use the principal square root with branch cut along the negative real axis. \begin{lemma}\label{lemSchrPhiAsym} For each $x$, $\tilde{x}\in(a,b)$ we have the asymptotics \begin{align*} \frac{\phi(z,x)}{\phi(z,\tilde{x})} = \E^{(x-\tilde{x})\sqrt{-z}}\left(1+\oo(1)\right), \end{align*} as $|z|\rightarrow\infty$ along the imaginary axis. 
\end{lemma} \begin{proof} For each $z\in\C$ let $c(z,\cdot)$ and $s(z,\cdot)$ be the solutions of $(\tau-z)u=0$ with the initial conditions \begin{align*} c(z,\tilde{x}) = s'(z,\tilde{x}) = 1 \quad\text{and}\quad c'(z,\tilde{x})=s(z,\tilde{x})=0. \end{align*} Now if $x\geq \tilde{x}$ the claim follows from \begin{align*} \phi(z,x) = \phi(z,\tilde{x}) \left(c(z,x) + \frac{\phi'(z,\tilde{x})}{\phi(z,\tilde{x})} s(z,x)\right), \quad z\in\C\backslash\R, \end{align*} and the well-known asymptotics of the quotient on the right-hand side (see~\cite[Lemma~9.19]{tschroe}) and the solutions $c$ and $s$ (see~\cite[Lemma~9.18]{tschroe}). The case when $x<\tilde{x}$ follows by reversing the roles of $x$ and $\tilde{x}$. \end{proof} For each $c\in(a,b)$ we denote by $L^2(a,c)$ the closed linear subspace of $L^2(a,b)$ consisting of all functions which vanish outside of $(a,c)$. Now, as in the case of a regular left endpoint, one may define the transform of a function $f\in L^2(a,c)$ as \begin{align}\label{eqndBftrans} \hat{f}(z) = \int_a^b \phi(z,x)f(x) dx, \quad z\in\C. \end{align} It is a result of~\cite[Section~3]{kst2} that there is some Borel measure $\rho$ on $\R$ such that \begin{align}\label{eqndBIsotrans} \int_\R |\hat{f}(\lambda)|^2 d\rho(\lambda) = \int_a^b |f(x)|^2 dx, \quad f\in L^2(a,c), \end{align} for all values $c\in(a,b)$. Moreover, this transformation uniquely extends to a unitary map from $L^2(a,b)$ onto $\LdBsm$ and the operator $H$ is mapped onto multiplication with the independent variable in $\LdBsm$. Note that $\rho$ is uniquely determined by these properties and hence referred to as the spectral measure of $H$ associated with the solution $\phi$. From these results one sees that the transforms of all functions $f\in L^2(a,c)$, equipped with the norm inherited from the space $\LdBsm$, form a Hilbert space. 
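It may help to keep in mind the simplest example, a standard computation which we state here only for orientation (the normalization of $\rho$ depends on the choice of $\phi$): for $q=0$ on $(0,\infty)$ with a Dirichlet boundary condition at $0$ and $\phi(z,x)=\sin(\sqrt{z}x)/\sqrt{z}$, the transform~\eqref{eqndBftrans} is essentially the Fourier sine transform and the spectral measure is
\begin{align*}
d\rho(\lambda) = \frac{\sqrt{\lambda}}{\pi}\, \indik_{[0,\infty)}(\lambda)\, d\lambda,
\end{align*}
so that~\eqref{eqndBIsotrans} reduces to the Parseval identity for the sine transform, applied to the transforms of functions in $L^2(0,c)$.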
In order to show that they even form a de Branges space, fix some $c\in(a,b)$ and consider the entire function \begin{align}\label{eqndBschrE} E(z,c) = \phi(z,c) + \I \phi'(z,c), \quad z\in\C. \end{align} Using the Lagrange identity and the fact that the Wronskian of two solutions satisfying the same boundary condition at $a$ (if any) vanishes at $a$, one gets \begin{align*} \frac{E(z,c) E^\#(\zeta^\ast,c) - E(\zeta^\ast,c) E^\#(z,c)}{2\I (\zeta^\ast -z)} = \int_a^c \phi(\zeta,x)^\ast \phi(z,x) dx, \quad \zeta,~z\in\C^+. \end{align*} In particular, taking $\zeta=z$ this shows that $E(\cdot,c)$ is a de Branges function. Moreover, note that $E(\cdot,c)$ does not have any real zero $\lambda$, since otherwise both $\phi(\lambda,c)$ and $\phi'(\lambda,c)$ would vanish. By $B(c)$ we denote the de Branges space associated with the de Branges function $E(\cdot,c)$, endowed with the inner product \begin{align*} \dbspr{F}{G}_{B(c)} = \frac{1}{\pi} \int_\R \frac{F(\lambda) G(\lambda)^\ast}{|E(\lambda,c)|^2} d\lambda = \frac{1}{\pi}\int_\R \frac{F(\lambda)G(\lambda)^\ast}{\phi(\lambda,c)^2 + \phi'(\lambda,c)^2}d\lambda, \quad F,~G\in B(c). \end{align*} Now using~\eqref{eqndBrepker} and a similar calculation as above, one shows that the reproducing kernel $K(\cdot,\cdot,c)$ of this space is given by \begin{align}\label{eqndBschrRepKer} K(\zeta,z,c) = \int_a^c \phi(\zeta,x)^\ast \phi(z,x)dx, \quad \zeta,~z\in\C. \end{align} \begin{theorem}\label{thmdBschrBT} For each $c\in(a,b)$ the transformation $f\mapsto\hat{f}$ is unitary from $L^2(a,c)$ onto $B(c)$, in particular \begin{align*} B(c) = \left\lbrace \left. \hat{f} ~\right|\, f\in L^2(a,c) \right\rbrace. \end{align*} \end{theorem} \begin{proof} Fix some $\lambda\in\R$ and consider the function \begin{align*} f_\lambda(x) = \phi(\lambda,x) \indik_{(a,c)}(x), \quad x\in(a,b). 
\end{align*} The transform of this function is given by \begin{align*} \hat{f}_\lambda(z) = \int_a^c \phi(\lambda,x) \phi(z,x)dx = K(\lambda,z,c), \quad z\in\C. \end{align*} In particular this shows that the transform lies in $B(c)$. Moreover, we have \begin{align*} \|f_\lambda\|^2 = \int_a^c |\phi(\lambda,x)|^2 dx = K(\lambda,\lambda,c) = \dbspr{K(\lambda,\cdot,c)}{K(\lambda,\cdot,c)}_{B(c)} = \|K(\lambda,\cdot,c)\|_{B(c)}^2. \end{align*} Hence our transform is an isometry on the linear span $D$ of the functions $f_\lambda$, $\lambda\in\R$. But this span is dense in $L^2(a,c)$ since it contains the eigenfunctions of the operator $H_{(a,c)}$. Moreover, the linear span of the functions $K(\lambda,\cdot,c)$, $\lambda\in\R$ is dense in $B(c)$. Indeed, each $F\in B(c)$ such that \begin{align*} 0 = \dbspr{F}{K(\lambda,\cdot,c)}_{B(c)} = F(\lambda), \quad \lambda\in\R, \end{align*} vanishes identically. Thus our transformation restricted to $D$ extends uniquely to a unitary map $V$ from $L^2(a,c)$ onto $B(c)$. In order to identify $V$ with our transformation, note that for each fixed $z\in\C$, both $f\mapsto\hat{f}(z)$ and $f\mapsto Vf(z)$ are continuous on $L^2(a,c)$. \end{proof} As an immediate consequence of Theorem~\ref{thmdBschrBT} and the fact that our transformation from~\eqref{eqndBftrans} extends to a unitary map from $L^2(a,b)$ onto $\LdBsm$, we get the following corollary. \begin{corollary}\label{cordBschrembL2} For each $c\in(a,b)$ the de Branges space $B(c)$ is isometrically embedded in $\LdBsm$, that is \begin{align*} \int_\R |F(\lambda)|^2 d\rho(\lambda) = \|F\|_{B(c)}^2, \quad F\in B(c). \end{align*} Moreover, the linear span of the spaces $B(c)$, $c\in(a,b)$ is dense in $\LdBsm$, i.e. \begin{align}\label{eqndBschrdense} \overline{\bigcup_{c\in(a,b)} B(c)} = \LdBsm. \end{align} \end{corollary} The following corollary shows that the de Branges spaces $B(c)$, $c\in(a,b)$ are totally ordered, strictly increasing and continuous in some sense. 
\begin{corollary}\label{cordBschrincl} If $c_1$, $c_2\in(a,b)$ with $c_1<c_2$, then $B(c_1)$ is isometrically embedded in but not equal to $B(c_2)$. Moreover, for each $c\in(a,b)$ we have \begin{align}\label{eqndBschrcontinuous} \overline{\bigcup_{x\in(a,c)} B(x)} = B(c) = \bigcap_{x\in(c,b)}B(x). \end{align} \end{corollary} \begin{proof} The embedding is clear from Theorem~\ref{thmdBschrBT} and Corollary~\ref{cordBschrembL2}. Moreover, Theorem~\ref{thmdBschrBT} shows that $B(c_2)\ominus B(c_1)$ is unitarily equivalent to $L^2(c_1,c_2)$, hence $B(c_1)$ is not equal to $B(c_2)$. The second claim follows from the similar fact that \begin{align*} \overline{\bigcup_{x\in(a,c)} L^2(a,x)} = L^2(a,c) = \bigcap_{x\in(c,b)} L^2(a,x). \end{align*} \end{proof} Note that the solution $\phi$ is not uniquely determined. Indeed,~\cite[Corollary~2.3]{kst2} shows that any other solution with the same properties as $\phi$ is given by \begin{align*} \tilde{\phi}(z,x) = \E^{g(z)} \phi(z,x), \quad x\in(a,b),~z\in\C, \end{align*} where $g$ is some real entire function. Furthermore, \cite[Remark~3.8]{kst2} shows that the corresponding spectral measures are related by \begin{align*} \tilde{\rho} = \E^{-2g} \rho. \end{align*} In particular they are mutually absolutely continuous. Using Theorem~\ref{thmdBschrBT} it is easily seen that for each $c\in(a,b)$, multiplication with the entire function $\E^g$ maps $B(c)$ isometrically onto $\tilde{B}(c)$. \section{A uniqueness result for the spectral measure}\label{secdBuniq} In this section let $q_1$, $q_2$ be two real-valued, locally integrable functions on intervals $(a_1,b_1)$ respectively $(a_2,b_2)$ and $H_1$, $H_2$ two associated self-adjoint Schr\"{o}dinger operators with separate boundary conditions. Suppose there are nontrivial real entire solutions $\phi_1$, $\phi_2$ which are square integrable near the left endpoint and satisfy the boundary condition there, if any. 
As in the previous section we denote by $\rho_1$, $\rho_2$ the corresponding spectral measures, by $E_1$, $E_2$ the corresponding de Branges functions, by $B_1$, $B_2$ the corresponding de Branges spaces and by $K_1$, $K_2$ the corresponding reproducing kernels. We say $H_1$ and $H_2$ are equal up to some shift if there is a linear function $\eta$ with gradient one, mapping $(a_1,b_1)$ onto $(a_2,b_2)$ such that $q_1=q_2\circ\eta$ and $H_1=U^{-1}H_2U$, where $U$ is the unitary map from $L^2(a_1,b_1)$ onto $L^2(a_2,b_2)$ induced by $\eta$. \begin{theorem}\label{thmdBuniqS} Suppose there is some real entire function $g$ such that \begin{align}\label{eqnquotE1E2} \E^{g(z)} \frac{E_1(z,x_1)}{E_2(z,x_2)}, \quad z\in\C^+, \end{align} is of bounded type for some $x_1\in (a_1,b_1)$ and $x_2\in(a_2,b_2)$. If $\rho_1=\E^{-2g} \rho_2$ then $H_1$ and $H_2$ are equal up to some shift. \end{theorem} \begin{proof} First of all, note that without loss of generality we may assume that $g$ vanishes identically, since otherwise we replace $\phi_1$ with $\E^g \phi_1$. Moreover, because of Lemma~\ref{lemdBordcon} the function in~\eqref{eqnquotE1E2} is of bounded type for all $x_1\in(a_1,b_1)$ and $x_2\in(a_2,b_2)$. Now fix some arbitrary $x_1\in(a_1,b_1)$. Since for each $x_2\in(a_2,b_2)$, both $B_1(x_1)$ and $B_2(x_2)$ are isometrically contained in $L^2(\R;\rho_1)$ we infer from Theorem~\ref{thmdBOrdering} (note that~\eqref{eqnquotE1E2} has no real zeros or singularities because $E_1(\cdot,x_1)$ and $E_2(\cdot,x_2)$ do not have real zeros) that $B_1(x_1)$ is contained in $B_2(x_2)$ or $B_2(x_2)$ is contained in $B_1(x_1)$. We claim that the infimum $\eta(x_1)$ of all $x_2\in(a_2,b_2)$ such that $B_1(x_1)\subseteq B_2(x_2)$ lies in $(a_2,b_2)$. Indeed, otherwise we would either have $B_2(x_2)\subseteq B_1(x_1)$ for all $x_2\in(a_2,b_2)$ or $B_1(x_1)\subseteq B_2(x_2)$ for all $x_2\in(a_2,b_2)$. 
In the first case this would mean that $B_1(x_1)$ is dense in $\LdBsm$, which is not possible in view of Corollary~\ref{cordBschrincl}. The second case would imply that for every function $F\in B_1(x_1)$ and $\zeta\in\C$ we have \begin{align*} |F(\zeta)| & = \left|\dbspr{F}{K_2(\zeta,\cdot,x_2)}_{B_2(x_2)}\right| \\ & \leq \|F\|_{B_2(x_2)} \dbspr{K_2(\zeta,\cdot,x_2)}{K_2(\zeta,\cdot,x_2)}_{B_2(x_2)}^{1/2} \\ & = \|F\|_{B_1(x_1)} K_2(\zeta,\zeta,x_2)^{1/2}, \end{align*} for each $x_2\in(a_2,b_2)$. Since $K_2(\zeta,\zeta,x_2)\rightarrow0$ as $x_2\downarrow a_2$ by~\eqref{eqndBschrRepKer}, we would have $B_1(x_1)=\lbrace 0\rbrace$, contradicting Theorem~\ref{thmdBschrBT}. Now from~\eqref{eqndBschrcontinuous} we infer that \begin{align*} B_2(\eta(x_1)) = \overline{\bigcup_{x_2<\eta(x_1)} B_2(x_2)} \subseteq B_1(x_1) \subseteq \bigcap_{x_2>\eta(x_1)} B_2(x_2) = B_2(\eta(x_1)), \end{align*} hence $B_1(x_1)=B_2(\eta(x_1))$, including norms. The function $\eta: (a_1,b_1)\rightarrow(a_2,b_2)$ defined this way is strictly increasing because of Corollary~\ref{cordBschrincl} and continuous by~\eqref{eqndBschrcontinuous}. Moreover, since for each $\zeta\in\C$ \begin{align*} K_2(\zeta,\zeta,\eta(x_1)) = K_1(\zeta,\zeta,x_1) \rightarrow 0, \end{align*} as $x_1\downarrow a_1$, we infer that $\eta(x_1)\downarrow a_2$ as $x_1\downarrow a_1$. Finally, \eqref{eqndBschrdense} shows that $\eta$ actually has to be a bijection. Using the formula for the reproducing kernels~\eqref{eqndBschrRepKer} once more we get for each $z\in\C$ \begin{align*} \int_{a_1}^{x_1} |\phi_1(z,x)|^2 dx = \int_{a_2}^{\eta(x_1)} |\phi_2(z,x)|^2 dx, \quad x_1\in(a_1,b_1). \end{align*} Now by the implicit function theorem $\eta$ is differentiable (note that the integrand does not vanish if $z\in\C\backslash\R$) with \begin{align}\label{eqndBSMunique} |\phi_1(z,x_1)|^2 = \eta'(x_1) |\phi_2(z,\eta(x_1))|^2, \quad x_1\in(a_1,b_1). 
\end{align} Using Lemma~\ref{lemSchrPhiAsym} twice we get for all $x_1$, $\tilde{x}_1\in(a_1,b_1)$ the asymptotics \begin{align*} \E^{-2(x_1-\tilde{x}_1)\re \sqrt{-z}}\left(1+\oo(1)\right) & = \left|\frac{\phi_1(z,x_1)}{\phi_1(z,\tilde{x}_1)}\right|^2 = \frac{\eta'(x_1)}{\eta'(\tilde{x}_1)} \left|\frac{\phi_2(z,\eta(x_1))}{\phi_2(z,\eta(\tilde{x}_1))}\right|^2 \\ & = \frac{\eta'(x_1)}{\eta'(\tilde{x}_1)} \E^{-2(\eta(x_1)-\eta(\tilde{x}_1))\re \sqrt{-z}}\left(1+\oo(1)\right), \end{align*} as $|z|\rightarrow\infty$ along the imaginary axis. Now this shows \begin{align*} \eta(x_1) - \eta(\tilde{x}_1) = x_1 - \tilde{x}_1, \quad x_1,~\tilde{x}_1\in(a_1,b_1), \end{align*} i.e. $\eta$ is linear with gradient one. Using~\eqref{eqndBSMunique} once more, we get for each $\lambda\in\R$ \begin{align*} \phi_1(\lambda,x_1)^2 = \phi_2(\lambda,\eta(x_1))^2, \quad x_1\in(a_1,b_1). \end{align*} Taking logarithmic derivatives we obtain \begin{align}\label{eqndBSMuniquephiphi} \frac{\phi_1'(\lambda,x_1)}{\phi_1(\lambda,x_1)} = \frac{\phi_2'(\lambda,\eta(x_1))}{\phi_2(\lambda,\eta(x_1))}, \end{align} for almost all $x_1\in(a_1,b_1)$. Differentiating this equation once more, we get \begin{align*} \frac{\phi_1''(\lambda,x_1)}{\phi_1(\lambda,x_1)} = \frac{\phi_2''(\lambda,\eta(x_1))}{\phi_2(\lambda,\eta(x_1))}, \end{align*} for almost all $x_1\in(a_1,b_1)$. Thus also \begin{align*} q_1(x_1)= \lambda + \frac{\phi_1''(\lambda,x_1)}{\phi_1(\lambda,x_1)} = \lambda + \frac{\phi_2''(\lambda,\eta(x_1))}{\phi_2(\lambda,\eta(x_1))} = q_2(\eta(x_1)) \end{align*} for almost all $x_1\in(a_1,b_1)$. Finally note that~\eqref{eqndBSMuniquephiphi} implies that $\phi_1$ and $\phi_2\circ\eta$ are linearly dependent. In particular if $\tau_1$ (and hence also $\tau_2$) is in the l.c.~case at the left endpoint this shows that the boundary condition of $H_1$ and $H_2$ there is the same. 
Furthermore, if $\tau_1$ (and hence also $\tau_2$) is in the l.c.~case at the right endpoint, then $H_1$ and $H_2$ have some common eigenvalue $\lambda$. Now the fact that $\phi_1(\lambda,\cdot)$ and $\phi_2(\lambda,\cdot)$ satisfy the respective boundary condition at the right endpoint shows that $H_1$ is equal to $H_2$ up to some shift. \end{proof} Note that even if one fixes the left endpoint, the operator is determined by the spectral measure in general only up to some shift. This is due to the fact that we allowed the left endpoint to be infinite. Indeed, if one takes finite fixed left endpoints, the operators are uniquely determined by the spectral measure. \begin{corollary}\label{cordBuniqnoshift} Suppose that $-\infty<a_1=a_2$ and that there is some real entire function $g$ such that \begin{align*} \E^{g(z)} \frac{E_1(z,x_1)}{E_2(z,x_2)}, \quad z\in\C^+, \end{align*} is of bounded type for some $x_1\in (a_1,b_1)$ and $x_2\in(a_2,b_2)$. If $\rho_1=\E^{-2g} \rho_2$ then $b_1=b_2$, $q_1=q_2$ and $H_1=H_2$. \end{corollary} \begin{proof} This follows immediately from Theorem~\ref{thmdBuniqS} and $\lim_{x_1\downarrow a_1}\eta(x_1) = a_1$. \end{proof} Below we will see that some kind of growth restriction on the solutions $\phi_1$ and $\phi_2$ suffices to guarantee that~\eqref{eqnquotE1E2} is of bounded type. However, note that this condition in Theorem~\ref{thmdBuniqS} cannot be dropped and some assumption has to be imposed on the solutions $\phi_1$ and $\phi_2$. As an example consider the interval $(0,\pi)$, the potential $q_1=0$ and let $H_1$ be the associated Schr\"{o}dinger operator with Dirichlet boundary conditions. As our real entire solution $\phi_1$ we choose \begin{align*} \phi_1(z,x) = \frac{\sin\sqrt{z}x}{\sqrt{z}}, \quad x\in(0,\pi),~z\in\C. 
\end{align*} The associated spectral measure $\rho_1$ is given by \begin{align*} \rho_1 = \frac{2}{\pi} \sum_{n\in\N} n^2 \delta_{n^2}, \end{align*} where $\delta_{n^2}$ is the unit Dirac measure in the point $n^2$. Now choose some sequence $\kappa_n$, $n\in\N$ of positive reals such that all but finitely many of these numbers are equal to one. From the solution of the inverse spectral problem in the regular case it is known (see e.g.~\cite{levitan},~\cite{poestrub}) that there is some potential $q_2\in L^2(0,\pi)$ and a corresponding operator $H_2$ with Dirichlet boundary conditions such that the spectral measure $\rho_2$ associated with the real entire solution $\phi_2$ of \begin{align*} -\phi_2''(z,x) + q_2(x)\phi_2(z,x) = z\phi_2(z,x), \quad x\in(0,\pi),~z\in\C, \end{align*} with initial conditions \begin{align*} \phi_2(z,0) = 0 \quad\text{and}\quad \phi_2'(z,0) = 1, \end{align*} is given by \begin{align*} \rho_2 = \frac{2}{\pi} \sum_{n\in\N} \kappa_n n^2 \delta_{n^2}. \end{align*} Now pick some real entire function $g$ such that \begin{align*} g(n^2) = \frac{\ln\kappa_n}{2}, \quad n\in\N, \end{align*} and switch to the real entire solution \begin{align*} \tilde{\phi}_2(z,x) = \E^{g(z)} \phi_2(z,x), \quad x\in(0,\pi),~z\in\C. \end{align*} Then the spectral measure associated to this solution is equal to $\rho_1$, but the corresponding operators $H_1$ and $H_2$ are different (at least if not all $\kappa_n$ are equal to one). However, also note that in this case~\eqref{eqnquotE1E2} fails to be of bounded type rather badly, since the function $\E^g$ is not even of finite exponential order. We conclude this section by showing that condition~\eqref{eqnquotE1E2} in Theorem~\ref{thmdBuniqS} holds if the solutions $\phi_1$, $\phi_2$ satisfy some growth condition. 
Therefore recall that an entire function $F$ belongs to the Cartwright class $\mathcal{C}$ if it is of finite exponential type and its logarithmic integral satisfies \begin{align*} \int_\R \frac{\ln^+|F(x)|}{1+x^2}dx < \infty, \end{align*} where $\ln^+$ is the positive part of the natural logarithm $\ln$. In particular note that the class $\mathcal{C}$ contains all entire functions of exponential order less than one. Now a Theorem of Kre\u{\i}n~\cite[Theorem~6.17]{rosrov},~\cite[Section~16.1]{lev} states that the class $\mathcal{C}$ consists of all entire functions which are of bounded type in the upper and in the lower complex half-plane. Since the quotient of two functions of bounded type is of bounded type itself, this immediately yields the following uniqueness result. \begin{corollary}\label{cordBuniqScor} Suppose that $E_1(\cdot,x_1)$ and $E_2(\cdot,x_2)$ belong to the class $\mathcal{C}$ for some $x_1\in(a_1,b_1)$ and $x_2\in(a_2,b_2)$. If $\rho_1=\rho_2$ then $H_1$ is equal to $H_2$ up to some shift. \end{corollary} Again as in Corollary~\ref{cordBuniqnoshift}, if one takes finite fixed left endpoints, the operator is uniquely determined by the spectral measure. In particular, as a special case one recovers the classical result due to Marchenko that the spectral measure uniquely determines the operator if the left endpoint is regular. However, our result covers a larger class of potentials, as we will show in the next section. There we will apply our results in order to obtain a uniqueness theorem for perturbed Bessel operators. \section{Application to perturbed Bessel operators} Let $l\in[-\frac{1}{2},\infty)$, $0<b\leq\infty$ and consider the differential expression \begin{align*} \tau = - \frac{d^2}{dx^2} + \frac{l(l+1)}{x^2} + q(x), \end{align*} on $(0,b)$, where $q$ is some real-valued, locally integrable function on $(0,b)$.
We will assume that the function \begin{align}\label{eqndBBesselqbar} \overline{q}(x) = \begin{cases} x|q(x)|, & \text{if }l>-\frac{1}{2}, \\ x(1-\ln x) |q(x)|, & \text{if }l=-\frac{1}{2}, \end{cases} \end{align} is integrable near zero. According to~\cite[Theorem~2.4]{kst}, $\tau$ is in the limit-circle case at zero if and only if $l\in[-\frac{1}{2},\frac{1}{2})$. Now let $H$ be some associated self-adjoint operator with the boundary condition \begin{align}\label{eqndBBesselBC} \lim_{x\downarrow 0} x^l ((l+1)f(x) - xf'(x)) = 0, \end{align} at zero, if necessary. In~\cite[Lemma~2.2]{kst} it has been shown that there is a nontrivial real entire solution $\phi$ of exponential order $\nicefrac{1}{2}$ which lies in $L^2(0,b)$ near zero and satisfies the boundary condition~\eqref{eqndBBesselBC} at zero if $l\in[-\frac{1}{2},\frac{1}{2})$. Note that this solution is unique up to scalar multiples because of the growth restriction, as has been shown in~\cite[Lemma~6.4]{kst2}. The associated spectral measure is denoted by $\rho$. Now, in order to state our uniqueness theorem, let $l_1$, $l_2\in[-\frac{1}{2},\infty)$, $0<b_1$, $b_2\leq\infty$ and $q_1$, $q_2$ be two potentials such that the functions $\overline{q}_1$, $\overline{q}_2$ defined as in~\eqref{eqndBBesselqbar} are integrable near zero. Furthermore, let $H_1$, $H_2$ be two corresponding self-adjoint operators with the boundary condition~\eqref{eqndBBesselBC} at zero, if necessary. By $\phi_1$, $\phi_2$ we denote some real entire solutions of exponential order $\nicefrac{1}{2}$ which lie in $L^2$ near zero and satisfy the boundary condition there, if necessary. Finally let $\rho_1$, $\rho_2$ be the associated spectral measures. Our uniqueness results from the preceding section now yield the following uniqueness theorem. \begin{theorem} If $\rho_1=\rho_2$ then $l_1=l_2$, $b_1=b_2$, $q_1=q_2$ and $H_1=H_2$.
\end{theorem} \begin{proof} Since the solutions are of exponential order $\nicefrac{1}{2}$, we may immediately apply Corollary~\ref{cordBuniqScor} and obtain $b_1=b_2$, \begin{align*} \frac{l_1(l_1+1)}{x^2} + q_1(x) = \frac{l_2(l_2+1)}{x^2} + q_2(x), \end{align*} for almost all $x\in(0,b_1)$ and $H_1=H_2$. Now since the functions $\overline{q}_1$ and $\overline{q}_2$ are integrable near zero we infer that $l_1(l_1+1)=l_2(l_2+1)$ and hence also $l_1=l_2$. \end{proof} \bigskip \noindent {\bf Acknowledgments.} I thank Gerald Teschl for helpful discussions and hints with respect to the literature.
\subsection{The \texttt{OP-COUNT} Heuristic} \subsubsection{Domain Model.} The domain is described by a set of variables $f\in\mathcal{F}$ which can assume values from a (finite) domain $D(f) \subseteq \mathbb{N}$. A state is given by the particular assignment of values to these variables: $\mathbb{S} = \{f = v~|~ v \in D(f)~\forall f \in \mathcal{F}\}$. The value of variable $f$ in state $\mathbb{S}$ is referred to as $\mathbb{S}(f)$. The action model $\mathcal{A}$ consists of operators $a = \langle C_a, E_a\rangle$, where $C_a$ is the cost of the action, and $E_a = \{\langle f, v_o, v_n \rangle~|~f\in\mathcal{F}; v_o, v_n\in \{-1\}\cup D(f)\}$ is the set of effects. The transition function $\delta(\cdot)$ determines the next state after the application of action $a$ to state $\mathbb{S}$ as {\small \begin{align*} \delta(a, \mathbb{S}) = \begin{cases} \bot & \text{if } \exists \langle f, v_o, v_n\rangle \in E_a \text{ s.t. } v_o \not= -1 \wedge v_o\not= \mathbb{S}(f),\\ \{f = v_n~\forall \langle f, v_o, v_n\rangle \in E_a;~f = \mathbb{S}(f) \text{ otherwise}\} & \text{else.} \end{cases} \end{align*} }% \subsubsection{Plans and Operator Counts.} A planning problem is a tuple $\Pi = \langle \mathcal{F}, \mathcal{A}, \mathbb{I}, \mathbb{G} \rangle$, where $\mathbb{I}, \mathbb{G}$ are the initial and (partial) goal states respectively. The solution to the planning problem is a \emph{plan} $\pi = \langle a_1, a_2, \ldots \rangle,~\pi(i)=a_i \in \mathcal{A}$ such that $\delta(\pi, \mathbb{I}) \models \mathbb{G}$, where the cumulative transition function is given by $\delta(\pi, \mathbb{S}) = \delta(\langle a_2, a_3, \ldots\rangle, \delta(a_1, \mathbb{S}))$. The cost of the plan is given by $C(\pi) = \sum_{a \in \pi}C_a$ and an \emph{optimal plan} $\pi^*$ is such that $C(\pi^*) \leq C(\pi)~\forall\pi$. The operator count for an action $a$ given a plan $\pi$ is $\lambda(a,\pi) = |\{i~|~a = \pi(i)\}|$ and the total operator count of the plan is $\lambda(\pi) = |\pi|$.
\subsubsection{Compliant Variables.} We define compliant variables as those that, whenever they occur as a precondition of an action, also occur as an effect of that action, and vice versa. Thus, $f \in \mathcal{F}$ is \emph{compliant} iff $\forall a \in \mathcal{A}, \langle f, v_o, v_n\rangle \in E_a \implies v_o \not=-1 \wedge v_n \not=-1$; $f$ is referred to as \emph{rogue} otherwise. Let $\Phi \subseteq \mathcal{F}$ be the set of all compliant variables, and let the set of compliant variables whose values are specified in the goal be $\phi \subseteq \Phi$, henceforth referred to as goal compliant conditions. \subsubsection{The State Transformation Equation.} Let $|\phi| = m$ and $|\mathcal{A}| = n$. Consider an $m\times n$ matrix $\mathbf{M}$ whose $ij^{th}$ element $M_{ij} \in \mathbb{Z}$ is the numerical change in $f_i \in \phi$ produced by action $a_j \in \mathcal{A}$, i.e. $M_{ij} = v_n - v_o;~\langle f_i, v_o, v_n \rangle \in E_{a_j}$. Also, let $\mathbf{D}$ be a vector of size $m$ whose $i^{th}$ entry $d_i$ is the change in a goal compliant variable $f_i \in \phi$ from the current state to the final state, i.e. $d_i = v_g - v_c; v_g = f_i \in \mathbb{G}, v_c = f_i \in \mathbb{S}$; and let $\mathbf{x}$ be a vector of size $n$, whose $i^{th}$ element is $x_i \in \mathbb{N}$. Then the following equality holds: \begin{align} \label{11} \mathbf{M}\mathbf{x} & = \mathbf{D} \end{align} The integer solution $\mathbf{x}^*$ to this system of linear equations with the least $|\mathbf{x}^*|$ gives a lower bound on the operator counts required to solve the planning problem, i.e. $|\mathbf{x}^*| \leq |\pi^*|$.
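Equation \eqref{11} can be made concrete with a small brute-force sketch (a hypothetical two-action, one-variable toy domain, not one of the paper's benchmarks): one goal compliant variable is changed by $+2$ and $+3$ by the two actions, and a net change of $7$ is required.

```python
import itertools

# Toy instance of the state transformation equation M x = D (hypothetical
# domain): one goal compliant variable, two actions changing it by +2 and +3,
# and a required net change of 7.
M = [[2, 3]]
D = [7]

def min_count_solution(M, D, max_count=10):
    """Brute-force the nonnegative integer solution of M x = D with least sum."""
    best = None
    n = len(M[0])
    for x in itertools.product(range(max_count + 1), repeat=n):
        if all(sum(Mi[j] * x[j] for j in range(n)) == Di
               for Mi, Di in zip(M, D)):
            if best is None or sum(x) < sum(best):
                best = x
    return best

x_star = min_count_solution(M, D)
# x* = (2, 1), so |x*| = 3 is a lower bound on the length of any plan.
```

Brute force is of course exponential in $n$; the closed-form relaxation below is what makes the bound usable as a search heuristic.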
We can compute a real-valued approximation in closed form by solving \begin{align} \min~~~&||\mathbf{Q}\mathbf{x}||_2^2\\ s.t.~~~\mathbf{M}\mathbf{x} &= \mathbf{D} \end{align} using the Lagrange multiplier method for this optimization problem as follows: {\begin{align} \label{101} L(\mathbf{x}) &= \frac{1}{2}||\mathbf{Q}\mathbf{x}||^2 + \lambda^T(\mathbf{D} - \mathbf{M}\mathbf{x})\\ \implies \mathbf{x}^* &= \mathbf{Q}^{-2}\mathbf{M}^T(\mathbf{M}\mathbf{Q}^{-2}\mathbf{M}^T)^{-1}\mathbf{D} \end{align}} Here $\mathbf{Q}$ is an $n\times n$ matrix of action costs whose $ij^{th}$ entry $Q_{ij} = C_{a_i} \text{ if } i = j;~0 \text{ otherwise}$. For unit cost domains $\mathbf{Q}$ is the identity matrix and $\mathbf{x}^* = \mathbf{M}^T(\mathbf{M}\mathbf{M}^T)^{-1}\mathbf{D}$. The most costly operation here is the calculation of the pseudo-inverse, which can be done in $\approx \mathcal{O}(n^{2.3})$ time. Further, $\mathbf{M}$ is problem independent, and hence the factor $\mathbf{Z} = \mathbf{Q}^{-2}\mathbf{M}^T(\mathbf{M}\mathbf{Q}^{-2}\mathbf{M}^T)^{-1}$ can be \textit{precomputed} given an action model. Thus it follows that we can readily use $||\mathbf{QZD}||$ as a heuristic for state-space search. Note that this formulation can also determine infeasibility of goal reachability immediately when the system is unsolvable (which is extremely useful in domains where actions are not reversible), as shown in Algorithm \ref{algo}. Unfortunately, the use of the $l_2$-\emph{norm}, which helps us in obtaining the closed-form polynomial bound heuristic, also makes the heuristic inadmissible.
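The closed form can be sketched in a few lines of \texttt{numpy} (toy numbers, unit costs so that $\mathbf{Q}=\mathbf{I}$; the matrix and goal vector below are hypothetical, reusing the small domain from above):

```python
import numpy as np

# Closed-form relaxation for a hypothetical unit-cost domain (Q = I),
# so that x* = M^T (M M^T)^{-1} D.
M = np.array([[2.0, 3.0]])        # net change per action of one compliant variable
D = np.array([7.0])               # required net change (goal minus current value)

Z = M.T @ np.linalg.inv(M @ M.T)  # problem-independent factor, precomputable
x_star = Z @ D                    # minimum-l2-norm real-valued operator counts

# The relaxation satisfies the state transformation equation exactly,
# but the counts are fractional (roughly (1.08, 1.62) here).
assert np.allclose(M @ x_star, D)
```

Note how the fractional counts already illustrate the inadmissibility issue: rounding them up need not bound the optimal integer count from below.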
\begin{algorithm}[btp] \caption{Using \texttt{OP-COUNT} Heuristic for State-Space Search} \label{CompCount} \label{algo} \begin{algorithmic} \Procedure{Pre-compute}{$\Pi$} \State Compute $\mathbf{M}, \mathbf{Q}$ \State Convert $\mathbf{M}$ to row echelon form $\rightarrow \mathbf{T}$ is the transformation matrix, $r$ is the rank \State $\mathbf{Y} \leftarrow \mathbf{M}[1:r, :],~\mathbf{Z} \leftarrow \mathbf{Q}^{-2}\mathbf{Y}^T(\mathbf{Y}\mathbf{Q}^{-2}\mathbf{Y}^T)^{-1}$ \EndProcedure \vspace{3pt} \Procedure{$h(\mathbb{S}) = $ \cc}{$\mathbb{S}, \mathbb{G}$} \State Compute $\mathbf{D}=\mathbf{G}-\mathbf{S}$ \State Compute $\mathbf{T}^d = \mathbf{T} \times \mathbf{D}$ and $\tau = \mathbf{T}^d[1:r]$ \If {$\exists i \geq r+1 \text{ s.t. } t^d_i \not= 0$} \textit{No solution!} \Else \indent return $\lceil\mathbf{Q\times\mathbf{Z}\times\tau}\rceil$ \EndIf \EndProcedure \end{algorithmic} \end{algorithm} \subsubsection{Sparse coding.} Since operator counts are integers, we would ideally want an integer solution to Eqn \ref{101} (which makes the problem computationally intractable). Unfortunately, the polynomial bound Lagrangian method described above does not address this aspect, giving rise to poor heuristic values for certain classes of problems. To describe this problem geometrically, we consider a planning domain with two operators (of unit cost), such that $\mathbf{x}=\langle x_1, x_2\rangle$. If the line inscribed by $\mathbf{Mx=D}$ in the two-dimensional space lies close to either of the axes, the $l_2$ norm calculated above results in small fractional values, and hence a less informed heuristic. As can be seen in Figure \ref{fig:1}, the actual operator counts for the given example (with $M=\begin{pmatrix} 15 & 4 \end{pmatrix}$ and $D= \begin{pmatrix} 12 \end{pmatrix}$) should have been $x_1=0$ and $x_2=3$. But the $l_2$ minimization results in small fractional values with $x_1=0.77$ and $x_2=0.77$, yielding a heuristic value of $h_{l_2}=1.54$ instead of $|\pi^*|=3$.
\begin{figure}[!h] \centering \begin{tikzpicture} \draw (0,3) node[below right] {$\mathbf{Mx=D}$} -- (0.8,0); \draw (0,0)[red] circle (0.75cm) node[below right]{$l_2$-norm}; \draw[thick,color=gray,step=1cm, dashed] (-2,-2) grid (4,4); \draw[->] (-2,0) -- (4,0) node[below right] {$x_1$}; \draw[->] (0,-2) -- (0,4) node[left] {$x_2$}; \end{tikzpicture} \caption{Euclidean norm minimization produces small fractional values for $x_1$ and $x_2$} \label{fig:1} \end{figure} Thus, we propose a different approximation method to obtain integer values for individual operator counts, remaining within the polynomial time bound. We notice that in most cases $n\gg m$ and also $n \gg |\mathbf{x}^*|$ due to the combinatorial explosion during grounding of domains. Thus, we propose an operator count heuristic that exploits this knowledge about the sparsity of $\mathbf{x}^*$. Ideally, we would like to solve the following problem, \begin{eqnarray} \min && |\mathbf{x}|_{l_0} \nonumber \\ s.t.~~~~~Mx &=& D \nonumber \\ \mathbf{x} &\succeq& 0 \nonumber \end{eqnarray} \noindent since minimizing the $l_0$ norm results in the sparsest solution. But, we encounter two problems. Firstly, the optimal operator counts ($\mathbf{x}^*$), although sparse, might not be the sparsest solution. Secondly, minimizing the $l_0$ norm is \emph{NP}-hard~\cite{ge2011note}. Thus, we draw upon compressed sensing techniques to enforce a level of sparsity when computing the vector $\mathbf{x}$. To this end, we suggest minimization of the \textit{$l_1$-norm} (\textit{$l_1$-LP}) or the weighted \textit{$l_1$-norm} (\textit{$\omega$-$l_1$-LP}) \cite{candes2008enhancing} to promote sparse nonnegative solutions that are closer to the desired integer counts. Geometrically, as can be seen in Figure \ref{fig:2}, these norms produce a more informed heuristic ($h_{l_1}=1.60$ and $h_{\omega-l_1}=3.4$) for the aforementioned problem. This method shrinks the norm ball (or box, for that matter) as much as possible while it still touches the line $\mathbf{Mx=D}$.
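The $l_1$-LP itself can be sketched with an off-the-shelf solver (\texttt{scipy} assumed here purely for illustration; this is not the authors' implementation). Since $\mathbf{x}\succeq 0$, the $l_1$ norm is just $\sum_i x_i$ and the problem is an ordinary linear program:

```python
import numpy as np
from scipy.optimize import linprog

# l1-LP sketch: minimize ||x||_1 subject to M x = D, x >= 0.
# For nonnegative x, ||x||_1 = sum(x), so a plain LP suffices.
M = np.array([[15.0, 4.0]])   # the 1x2 example discussed in the text
D = np.array([12.0])

res = linprog(c=np.ones(M.shape[1]),           # minimise x_1 + x_2
              A_eq=M, b_eq=D,
              bounds=[(0, None)] * M.shape[1])

x_l1 = res.x                                   # sparse nonnegative solution
assert np.allclose(M @ x_l1, D)
```

The weighted variant ($\omega$-$l_1$-LP) can be realised by re-running the same LP with updated weights, e.g. $c_i = 1/(x_i + \epsilon)$ between iterations, in the spirit of \cite{candes2008enhancing}.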
The operator (dimension) that induces a tighter constraint ($x_1$ in our case) limits the expansion of the norm ball, producing a less informed heuristic ($h_{l_1}=1.60$). The weighted \textit{$l_1$-norm} method addresses this problem by minimizing the \textit{$l_1$-norm} and iteratively penalizing the increase along the tightest dimension until convergence is reached or the maximum number of iterations is exceeded, resulting in a more informed heuristic ($h_{\omega-l_1}=3.4$). \begin{figure}[!h] \centering \begin{tikzpicture} \draw (0,3) node[below right] {$\mathbf{Mx=D}$} -- (0.8,0); \draw[red, rotate around={45:(0,0)}] (-0.56,-0.56) rectangle (0.56,0.56) node[below right] {$l_1$ norm}; \draw[thick,color=gray,step=1cm, dashed] (-3,-3) grid (4,4); \draw[->] (-2,0) -- (4,0) node[below right] {$x_1$}; \draw[->] (0,-2) -- (0,4) node[left] {$x_2$}; \end{tikzpicture} \begin{tikzpicture} \draw (0,3) node[below right] {$\mathbf{Mx=D}$} -- (0.8,0); \draw[red] (0,3) -- (0.4,0) node[below right] {$\omega$-$l_1$ norm}; \draw[red] (0,3) -- (-0.4,0); \draw[red] (0,-3) -- (0.4,0); \draw[red] (0,-3) -- (-0.4,0); \draw[thick,color=gray,step=1cm, dashed] (-3,-3) grid (4,4); \draw[->] (-2,0) -- (4,0) node[below right] {$x_1$}; \draw[->] (0,-2) -- (0,4) node[left] {$x_2$}; \end{tikzpicture} \caption{Minimization of the $l_1$ and weighted $l_1$ norms for the same example; the weighted variant yields a sparser, more informed estimate} \label{fig:2} \end{figure} For \textit{$\omega$-$l_1$-LP}, we empirically observe that rounding up the individual operator counts produces a more informed heuristic. Thus, we arrive at a polynomial time proxy for integer solutions. \subsubsection{Evaluations.} The table shows the evaluation of the proposed heuristics across a total of 83 problems from five well-known unit cost planning domains. Each entry in the table represents the percentage difference in the initial state heuristic value and the optimal plan length averaged across the problems in each domain.
The \%-compliance column shows the average number of goal compliant predicates in the problems. Rows 1-3 show the performance of our heuristic on the original domains (`-' indicates that the heuristics could not be computed due to absence of any goal compliant variables). Rows 3-6 show the performance in domains where the $\%$-compliance was increased (this was done by identifying instances in the action model where variables assume a don't care condition, i.e. a value of -1, and replacing them with appropriate values as entailed by domain axioms). Finally, rows 6-9 show the performance of our heuristics in problems with more completely specified goals (which results in higher percentage compliance). As expected, our heuristic performs better as $\%$-compliance increases across a particular domain. The performance of $l_1$ LP and $\omega$-$l_1$ LP highlights the usefulness of compressed sensing techniques in obtaining better integer approximations to the MILP. \input{tabletex} \subsection{Discussion and Related Work} \subsubsection{Relation to Existing Heuristics.} The proposed heuristic has close associations with both heuristics on state change equations and operator counts~\cite{pommerening2014lp, bonet2014flow, van2007lp}. Specifically, compliant conditions capture the net change criteria very succinctly and are thus extremely useful where such properties are relevant. Another interesting connection to existing work is with respect to graph-plan based heuristics \cite{blum1997fast}, except here we are relaxing preconditions instead of delete effects. \subsubsection{Compliance.} Our approach works better in domains that have many goal compliant conditions, e.g. in manufacturing domains \cite{nau1995ai} or in puzzles like Sudoku \cite{babu2010linear}. Thus goal completion strategies and semantic preserving actions have a direct effect on the quality of the heuristic.
Intermediate representations such as transition normal form (TNF) \cite{pommerening2015normal} should be investigated in this context. \subsubsection{Landmarks.} Our purpose here is not to compete with the most sophisticated heuristics of today but to motivate a special case that can be computed extremely efficiently. We discussed the simplest version of this formulation here, but it can be easily extended to incorporate more informative features like \emph{landmarks} \cite{porteous}. A landmark constraint is added by simply subtracting the corresponding net change from $\mathbf{D}$: $d_i \leftarrow d_i - k_a\times(v_n-v_o)$ if $\langle f_i, v_o, v_n\rangle \in E_{a} \text{ and } a \in \mathcal{A}\text{ is an action landmark}$ with cardinality $k_a$; and the closed form solution remains valid. In fact, in terms of plan recognition with operator counts, observations are landmarks and the same approach applies. This demonstrates the flexibility of our approach. \subsubsection{Resource Constrained Interaction.} The approach is especially relevant in the context of multi-agent interactions constrained by the usage $\pi^\alpha(\eta)$ of a shared resource $\eta$ by a plan $\pi^\alpha$ of an agent $\alpha$. For example, in an adversarial setting, if an agent $\alpha_2$ wanted to stop $\alpha_1$ from executing its plan, all it needs to do is to ensure that $\exists\eta \text{ s.t. } \pi^{\alpha_1}(\eta) + \pi^{\alpha_2}(\eta) > |\eta|$. Similarly, in a cooperative setting, if agent $\alpha_2$ wanted to ensure that $\alpha_1$'s plan succeeds, it would need to make sure that $\forall\eta~\pi^{\alpha_1}(\eta) + \pi^{\alpha_2}(\eta) \leq |\eta|$. In fact, as resource variables are compliant, our approach may provide quick estimates of an agent's intent without computing the entire plan. \vspace{5pt} {\small \noindent\emph{Acknowledgment.} This research is supported in part by the ONR grants N00014-13-1-0176, N00014-13-1-0519 and N00014-15-1-2027, and ARO grant W911NF-13-1-0023.
} \bibliographystyle{plain}
\section{Introduction} We consider nonlinear dispersive equations \begin{equation}\label{dis} \begin{aligned} & i \partial_t u(t,x) + \mathcal{L}\left(\nabla, \tfrac{1}{\varepsilon}\right) u(t,x) =\vert \nabla\vert^\alpha p\left(u(t,x), \overline u(t,x)\right)\\ & u(0,x) = v(x), \end{aligned} \end{equation} where we assume a polynomial nonlinearity $p$ and that the structure of \eqref{dis} implies at least local well-posedness of the problem on a finite time interval $]0,T]$, $T<\infty$ in an appropriate functional space. Here, $u$ is the complex-valued solution that we want to approximate. Concrete examples are discussed in Section \ref{sec:examples}, including the cubic nonlinear Schr\"odinger (NLS) equation \begin{equation}\label{nlsIntro} i \partial_t u + \mathcal{L}\left(\nabla\right) u = \vert u\vert^2 u, \quad \mathcal{L}\left(\nabla\right) = \Delta, \end{equation} the Korteweg--de Vries (KdV) equation \begin{equation}\label{kdvIntro} \partial_t u +\mathcal{L}\left(\nabla\right) u = \frac12 \partial_x u^2, \quad \mathcal{L}\left(\nabla\right) = i\partial_x^3, \end{equation} as well as highly oscillatory Klein--Gordon type systems \begin{align}\label{kgrIntro} i \partial_t u = -\mathcal{L}\left(\nabla, \tfrac{1}{\varepsilon}\right) u + \frac{1}{\varepsilon^2}\mathcal{L}\left(\nabla, \tfrac{1}{\varepsilon}\right)^{-1} \textstyle p(u,\overline u), \quad \mathcal{L}\left(\nabla, \tfrac{1}{\varepsilon}\right) = \frac{1}{\varepsilon}\sqrt{\frac{1}{\varepsilon^2}-\Delta}. \end{align} In the last decades, Strichartz and Bourgain space estimates have made it possible to establish well-posedness results for dispersive equations in low regularity spaces \cite{Burq-Gerard-Tzvetkov,Bour93a,Keel-Tao,Strichartz,Tao06}. Numerical theory for dispersive PDEs, on the other hand, is in general still restricted to smooth solutions.
This is due to the fact that most classical approximation techniques were originally developed for linear problems and thus, in general, neglect nonlinear frequency interactions in a system. In the dispersive setting \eqref{dis} the interaction of the differential operator $\mathcal{L}$ with the nonlinearity $p$, however, triggers oscillations both in space and in time and, unlike for parabolic problems, no smoothing can be expected. At low regularity and high oscillations, these nonlinear frequency interactions play an essential role: Note that while the influence of $i\mathcal{L}$ can be small, the influence of the interaction of $+i\mathcal{L}$ and $-i\mathcal{L}$ can be huge, and vice versa. Classical \emph{linearised} frequency approximations, used, e.g., in splitting methods or exponential integrators, see Table \ref{tab1} below, are therefore restricted to smooth solutions. The latter is not only a technical formality: The severe order reduction in case of non-smooth solutions is also observed numerically, see, e.g., \cite{JL00,OS18} and Figure \ref{fig:osc}, and only very little is known on how to overcome this issue. For an extensive overview on numerical methods for Hamiltonian systems, geometric numerical analysis, structure preserving algorithms, and highly oscillatory problems we refer to the books Butcher \cite{Butcher}, Engquist et al. \cite{EFHI09}, Faou \cite{Faou12}, E. Hairer et al. \cite{HW,H2Tri}, Holden et al. \cite{HLRS10}, Leimkuhler $\&$ Reich \cite{LR04}, McLachlan $\&$ Quispel \cite{McLacQ02}, Sanz-Serna $\&$ Calvo \cite{SanBook} and the references therein. In this work, we establish a new framework of resonance based approximations for dispersive equations which will allow us to approximate with high order accuracy a large class of equations under (much) lower regularity assumptions than classical techniques require. 
The key in the construction of the new methods lies in analysing the underlying oscillatory structure of the system~\eqref{dis}. We look at the corresponding mild solution given by Duhamel's formula \begin{equation}\label{duh} u(t) = e^{ it \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} v - ie^{ it \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)}\vert \nabla\vert^\alpha \int_0^t e^{ -i\xi \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} p\left(u(\xi), \overline u(\xi)\right) d\xi \end{equation} and its iterations \begin{equs}\label{It1} u(t) = e^{ it \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} v - ie^{ it \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} \vert \nabla\vert^\alpha\mathcal{I}_1( t, \mathcal{L},v,p) +\vert \nabla\vert^{2\alpha} \int_0^t \int_0^\xi \ldots d\xi_1 d \xi. \end{equs} The principal oscillatory integral $\mathcal{I}_1( t, \mathcal{L},v,p)$ thereby takes the form \[ \mathcal{I}_1( t, \mathcal{L},v,p) = \int_0^t \mathcal{Osc}(\xi, \mathcal{L}, v,p) d\xi \] with the central oscillations \begin{align}\label{osc} \mathcal{Osc}(\xi, \mathcal{L}, v,p) = e^{ -i\xi \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} p\left(e^{ i \xi \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} v , e^{ - i \xi \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} \overline v \right) \end{align} driven by the nonlinear frequency interactions between the differential operator $\mathcal{L}$ and the nonlinearity $p$. In order to obtain a suitable approximation at low regularity, it is central to resolve these oscillations -- characterised by the underlying structure of resonances -- numerically. Classical linearised frequency approximations, however, neglect the nonlinear interactions in~\eqref{osc}. This linearisation is illustrated in Table \ref{tab1} below for splitting and exponential integrator methods (\cite{H2Tri,HochOst10}). \begin{table}[h!] 
\begin{subequations} \begin{empheq}[box=\widefbox]{align*} & \text{\em Numerical scheme} &\quad & \text{\em Approximation of nonlinear oscillations}\\ &\text{splitting method }&\quad & \mathcal{Osc}(\xi, \mathcal{L}, v,p) \approx p\left(v , \overline v \right)\\ &\text{exponential method }&\quad & \mathcal{Osc}(\xi, \mathcal{L}, v,p) \approx e^{ -i\xi \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} p(v,\overline v) \end{empheq} \end{subequations} \caption{Classical {frequency approximations} of the principal oscillations \eqref{osc}.}\label{tab1} \end{table} \noindent The aim of this paper is to introduce a framework which allows us to embed the underlying {nonlinear} oscillations \eqref{osc} and their higher-order counterparts into the numerical discretisation. The main idea for tackling this problem is to introduce a decorated tree formalism that optimises the structure of the local error by mapping it to the particular regularity of the solution. While first-order resonance-based discretisations have been presented for particular examples, e.g., the nonlinear Schrödinger (NLS), Korteweg--de Vries (KdV), Boussinesq, Dirac and Klein--Gordon equations, see \cite{HS16,BFS17,BS19,OS18,OstS19,SWZ20}, no general framework could be established so far. Each and every equation had to be targeted carefully one at a time based on a {sophisticated resonance analysis}. This is due to the fact that the structure of the underlying oscillations \eqref{osc} strongly depends on the form of the leading operator $\mathcal{L}$, the nonlinearity $p$ and in particular their nonlinear interactions. In addition to the lack of a general framework, only very little is known about the higher-order counterpart of resonance based discretisations. Indeed, some attempts have been made for second-order schemes (see, e.g., \cite{HS16} for KdV and \cite{KOS19} for NLS) but they are not optimal.
This is due to the fact that the leading differential operator $\mathcal{L} $ triggers a full spectrum of frequencies $k_j \in \Z^{d}$. Up to now it was an unresolved issue how to control their nonlinear interactions up to higher order, in particular, in higher spatial dimensions where stability poses a key problem. Even in the case of a simple NLS equation it was an open question so far whether stable low regularity approximations of order higher than one can be achieved in spatial dimensions $d\geq 2$. In particular, previous works suggest a severe order reduction (\cite{KOS19}). To overcome this we introduce a new tailored decorated tree formalism. Thereby the decorated trees encode the Fourier coefficients in the iteration of Duhamel's formula, where the node decorations encode the frequencies, in a spirit close to \cite{Christ,oh1,Gub11}. The main difficulty then lies in controlling the nonlinear frequency interactions within these iterated integrals up to the desired order with the constraint of a given a priori regularity of the solution. The latter is achieved by embedding the underlying oscillations, and their higher order iterations, via well-chosen Taylor series expansions into our formalism: The dominant interactions will be embedded exactly whereas only the lower order parts are approximated within the discretisation. We base our algebraic structures on the ones developed for SPDEs with Regularity Structures \cite{reg}, which is a generalisation of Rough Paths \cite{Lyo91,Lyo98,Gub04,Gub10}. Part of the formalism is inspired by \cite{BHZ} and the recentering map used for giving a local description of the solution of {singular SPDEs}. We adapt it to the context of dispersive PDEs by using a new class of decorated trees {encoding the underlying dominant frequencies}.
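To give a flavour of these dominant frequency interactions, consider the KdV equation \eqref{kdvIntro}: in Fourier space the first oscillatory integral couples two input frequencies $k_1, k_2$ to the output frequency $k = k_1+k_2$, and the resulting phase is governed by the elementary identity
\begin{align*}
(k_1+k_2)^3 - k_1^3 - k_2^3 = 3 k_1 k_2 (k_1+k_2).
\end{align*}
Here (up to signs fixed by the chosen convention) the phase factorises completely, so that the principal oscillations \eqref{osc} can be integrated in closed form; this is the mechanism exploited by the first-order resonance-based KdV discretisation of \cite{HS16}. For other equations only a dominant part of the phase factorises in this way, and the remainder has to be approximated.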
The framework of decorated trees and the underlying Hopf algebras have allowed the resolution of a large class of singular SPDEs \cite{reg,BHZ,ajay,BCCH}, which include a natural random dynamic on the space of loops in a Riemannian manifold in \cite{BGHZ}; see \cite{EMS} for a very brief survey on these developments. With this general framework, one can study properties of singular SPDE solutions in the full subcritical regime \cite{CHS,Berglund,support,CMW}. The formalism of decorated trees together with the description of the renormalised equation in this context (see \cite{BCCH}) was directly inspired by the numerical analysis of ODEs, more precisely, by the characterisation of Runge-Kutta methods via B-series. Indeed, B-series represent numerical (multi-)step methods for ODEs via a tree expansion, see, e.g., \cite{Butcher72,Berland,MR2657947,H2Tri,IQT,MR2803804}. We also refer to \cite{LieSeries} for a review of B-series on Lie groups and homogeneous manifolds as well as to \cite{WordSeries} providing an alternative structure via word series. The field of singular SPDEs took advantage of the B-series formalism and extended these structures via the adjunction of decorations and Taylor expansions. Now, through this work, numerical analysis is taking advantage of these extended structures and enlarges their scope. This work proposes a new application of the Butcher-Connes-Kreimer Hopf algebra \cite{Butcher72,CK} to dispersive PDEs. It sheds new light on structures that have been used in various fields such as numerical analysis, renormalisation in quantum field theory, singular SPDEs and also dynamical systems for classifying singularities via resurgent functions introduced by Jean Ecalle (see \cite{Ecalle1,FM}). This is another testimony to the universality of this structure and adds a new object to this landscape.
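To recall the algebra at play, denote by $\bullet$ the tree with a single node and by $\tau$ the tree with a single edge; the Butcher-Connes-Kreimer coproduct acts by summing over admissible cuts, so that on these two smallest trees
\begin{align*}
\Delta(\bullet) = \bullet\otimes\mathbf{1} + \mathbf{1}\otimes\bullet, \qquad
\Delta(\tau) = \tau\otimes\mathbf{1} + \mathbf{1}\otimes\tau + \bullet\otimes\bullet,
\end{align*}
where the last term stems from cutting the single edge of $\tau$. It is this recursive pruning of subtrees, enriched with decorations and Taylor expansions, that the constructions of this paper build upon.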
Our construction is motivated by two main features: Taylor expansions, which are at the foundation of the numerical scheme (added at the level of the algebra as for singular SPDEs), and the frequency interactions (encoded in a tree structure for dispersive PDEs). The combination of the two, together with the Butcher--Connes--Kreimer Hopf algebra, allows us to design a novel class of schemes at low regularity. We observe a Birkhoff type factorisation similar to the ones arising in SPDEs and perturbative quantum field theory. This factorisation allows us to single out oscillations and to perform the local error analysis. Our main result is the new general resonance based scheme presented in Definition \ref{scheme} with its error structure given in Theorem \ref{thm:genloc}. Our general framework is illustrated on concrete examples in Section \ref{sec:examples}, and simulations show the efficacy of the scheme. The algebraic structure in Section \ref{General framework} is of independent interest; its main objective is to understand the frequency interactions. The Birkhoff factorisation given in Section~\ref{sec::Brikhoff} is designed for this purpose and is instrumental in proving Theorem \ref{thm:genloc}. This factorisation seems new in comparison to the literature. {\bf Assumptions.} We impose periodic boundary conditions, that is, $x \in \mathbf{T}^d$. However, our theory can be extended to the full space $\R^d$. We assume that the differential operator $\mathcal{L}$ is real and consider two types of structures of the system \eqref{dis}, which will allow us to handle dispersive equations at low regularity (such as NLS and KdV) and highly oscillatory Klein--Gordon type systems; see also \eqref{nlsIntro}-\eqref{kgrIntro}. 
\begin{itemize} \item The differential operators $\mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right) = \mathcal{L}\left(\nabla \right) $ and $\vert \nabla\vert^\alpha$ cast in Fourier space into the form \begin{equs}\label{Lldef} \mathcal{L}\left(\nabla \right)(k) = k^\sigma + \sum_{\gamma : |\gamma| < \sigma} a_{\gamma} \prod_{j} k_j^{\gamma_j} ,\qquad \vert \nabla\vert^\alpha(k) = \prod_{j} k_j^{\gamma_j}, \quad |\gamma| \leq \alpha \end{equs} for some $ \alpha \in \R $, $ \gamma \in \Z^d $ and $ |\gamma| = \sum_i \gamma_i $, where for $k = (k_1,\ldots,k_d)\in \Z^d$ and $m = (m_1, \ldots, m_d)\in \Z^d$ we set \begin{equs} k^\sigma = k_1^\sigma + \ldots + k_d^\sigma, \qquad k \cdot m = k_1 m_1 + \ldots + k_d m_d. \end{equs} \item We also consider the setting of a given high frequency $\frac{1}{\vert \varepsilon \vert} \gg 1$. In this case we assume that the operators $\mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right) $ and $\vert \nabla\vert^\alpha$ take the form \begin{equs}\label{Leps} \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right) = \frac{1}{\varepsilon^{\sigma}} + \mathcal{B}\left(\nabla, \frac{1}{\varepsilon}\right), \qquad \vert \nabla\vert^\alpha = \mathcal{C}\left(\nabla, \frac{1}{\varepsilon}\right) \end{equs} for some differential operators $\mathcal{B}\left(\nabla, \frac{1}{\varepsilon}\right)$ and $\mathcal{C}\left(\nabla, \frac{1}{\varepsilon}\right)$ which can be bounded uniformly in $ \vert \varepsilon\vert$ and are relatively bounded by differential operators of degree $\sigma$ and degree $\alpha< \sigma$, respectively. This allows us to include for instance highly oscillatory Klein--Gordon type equations \eqref{kgrIntro} (see also Section \ref{sec:kgr}). \end{itemize} \begin{figure}[h!] 
\begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=1.\textwidth]{u0.eps} \subcaption{$H^1$ data} \end{subfigure} \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=1.\textwidth]{u0Smooth.eps} \subcaption{$\mathcal{C}^\infty$ data} \end{subfigure}\caption{Initial values for Figure \ref{fig:osc}: $u_0 \in H^1$ (left) and $u_0 \in \mathcal{C}^\infty$ (right).}\label{fig:ini} \end{figure} \begin{figure}[h!] \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=1.\textwidth]{classH1New.eps} \subcaption{$H^1$ data} \end{subfigure} \begin{subfigure}[c]{0.49\textwidth} \includegraphics[width=1.\textwidth]{classSmoothNew.eps} \subcaption{$\mathcal{C}^\infty$ data} \end{subfigure} \caption{Order reduction of classical schemes based on linearised frequency approximations (cf. Table \ref{tab1}) in case of low regularity data (error versus step size for the cubic Schrödinger equation). For smooth solutions classical methods reach their full order of convergence (right picture). In contrast, for less smooth solutions they suffer from severe order reduction (left picture). The initial values in $H^1$ and $\mathcal{C}^\infty$ are plotted in Figure \ref{fig:ini}. The slope of the reference solutions (dashed lines) is one and two, respectively.}\label{fig:osc} \end{figure} In the next section we introduce the resonance based techniques to solve the dispersive PDE \eqref{dis} and illustrate our approach on the example of cubic nonlinear Schr\"odinger equation \eqref{nlsIntro}, see Example \ref{ex:introExNLS}. \subsection{Resonances as a computational tool}\label{sec:res} Instead of employing classical linearised frequency approximations (cf. 
Table~\ref{tab1}) we want to embed the underlying nonlinear oscillations \begin{align}\label{oscii} \mathcal{Osc}(\xi, \mathcal{L}, v,p) = e^{ -i\xi \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} p\left(e^{ i \xi \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} v , e^{ - i \xi \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} \overline v \right) \end{align} (and their higher-order counterparts) into the numerical discretisation. In case of the one dimensional cubic Schr\"odinger equation \eqref{nlsIntro} the time integral of the central oscillations~\eqref{oscii} for instance takes in Fourier space the form (see Example \ref{ex:introExNLS} for details) $$ \int_0^\tau \mathcal{Osc}(s, \Delta, v,\text{cub})\, ds =\sum_{\substack{k_1,k_2,k_3 \in \Z\\-k_1+k_2+k_3 = k} } e^{i k x } \overline{\hat{v}}_{k_1} \hat{v}_{k_2} \hat{v}_{k_3} \int_0^\tau e^{i s \mathscr{F}(k) } ds $$ with the underlying resonance structure \begin{align}\label{resiNL} \mathscr{F}(k) = 2 k_1^2 - 2 k_1 (k_2+k_3) + 2 k_2 k_3. \end{align} Ideally we would like to resolve all nonlinear frequency interactions \eqref{resiNL} exactly in our scheme. However, these amount to a generalised convolution (of Coifman--Meyer type \cite{Coif}), which cannot be expressed as a product in physical space. Thus the iteration would need to be carried out fully in Fourier space, which does not yield a scheme that can be {practically} implemented in higher spatial dimensions, see also Remark \ref{rem:FFT} below. The latter in general also holds true in the abstract setting \eqref{oscii}. In order to obtain an {efficient and practical} resonance based discretisation we extract the dominant and lower-order parts from the resonance structure \eqref{oscii}. 
More precisely, we filter out the dominant parts $ \mathcal{L}_{\text{dom}}$ and treat them exactly while only approximating the lower order terms in the spirit of \begin{equation}\label{oscNew} \mathcal{Osc}(\xi, \mathcal{L}, v,p) = \left[ e^{i \xi \mathcal{L}_{\text{dom}} \left(\nabla, \frac{1}{\varepsilon}\right)} p_{\text{dom}}\left(v,\overline v\right) \right] p_{\text{low}}(v,\overline v) + \mathcal{O}\Big(\xi\mathcal{L}_\text{low}\left(\nabla\right)v\Big). \end{equation} Here, $\mathcal{L}_{\text{dom}}$ denotes a suitable dominant part of the high frequency interactions and \begin{equation}\label{Llow} \mathcal{L}_\text{low} = \mathcal{L} - \mathcal{L}_{\text{dom}} \end{equation} the corresponding non-oscillatory parts (details will be given in Definition \ref{dom_freq}). The crucial issue is to determine $\mathcal{L}_{\text{dom}}$, $p_{\text{dom}}$ and $\mathcal{L}_\text{low}, p_{\text{low}}$ in~\eqref{oscNew}, balancing the preservation of the underlying structure of the PDE against a practical implementation at a reasonable cost. We refer to Example \ref{ex:introExNLS} for the concrete characterisation in the case of cubic NLS, where $\mathcal{L}_\text{low} = \nabla$ and $\mathcal{L}_{\text{dom}} = \Delta$. 
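For the cubic Schr\"odinger case, the splitting into dominant and lower order parts can be sanity-checked numerically. The following short script is purely illustrative (the helper names are ours, not part of the formalism); it verifies that the phase $k^2 + k_1^2 - k_2^2 - k_3^2$ collected from the free flows under the constraint $k = -k_1 + k_2 + k_3$ (derived in Example \ref{ex:introExNLS}) coincides with \eqref{resiNL} and with the sum of the dominant and lower order parts \eqref{domNLS0}:

```python
import itertools

def F(k1, k2, k3):
    # resonance structure (resiNL) for the 1d cubic Schroedinger equation
    return 2*k1**2 - 2*k1*(k2 + k3) + 2*k2*k3

def phase(k1, k2, k3):
    # phase collected from the free Schroedinger flows,
    # with the frequency constraint k = -k1 + k2 + k3
    k = -k1 + k2 + k3
    return k**2 + k1**2 - k2**2 - k3**2

def dom(k1):
    # dominant part: acts as a full second-order derivative
    return 2*k1**2

def low(k1, k2, k3):
    # lower order part: only first-order derivatives
    return -2*k1*(k2 + k3) + 2*k2*k3

# the three expressions agree identically on a range of integer frequencies
for ks in itertools.product(range(-6, 7), repeat=3):
    assert phase(*ks) == F(*ks) == dom(ks[0]) + low(*ks)
```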
Thanks to the resonance based ansatz \eqref{oscNew} the principal oscillatory integral \[ \mathcal{I}_1( t, \mathcal{L},v,p) = \int_0^t \mathcal{Osc}(\xi, \mathcal{L}, v,p) d\xi \] in the expansion of the exact solution \eqref{It1} \begin{equation}\label{duli} \begin{aligned} u(t) = e^{ it \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} v - ie^{ it \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} \vert \nabla\vert^\alpha \mathcal{I}_1( t, \mathcal{L},v,p) + \mathcal{O}\left( t^2\vert \nabla\vert^{2 \alpha}{ q_1( v)} \right) \end{aligned} \end{equation} (for some polynomial $q_1$) then takes the form \begin{equation}\label{pri1} \begin{aligned} \mathcal{I}_1( t, \mathcal{L},v,p) & = \int_0^t \left[ e^{i \xi \mathcal{L}_{\text{dom}}} p_{\text{dom}}\left(v,\overline v\right) \right] p_{\text{low}}(v,\overline v) + \mathcal{O}\Big(\xi\mathcal{L}_\text{low}\left(\nabla\right){ q_2(v)}\Big)d \xi\\ & = t p_{\text{low}}(v,\overline v) \varphi_1\left(i t \mathcal{L}_{\text{dom}} \right) p_{\text{dom}}\left(v,\overline v\right) + \mathcal{O}\Big(t^2\mathcal{L}_\text{low}\left(\nabla\right){ q_2(v)}\Big) \end{aligned} \end{equation} (for some polynomial $q_2$) where for shortness we write $\mathcal{L} = \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)$ and define $\varphi_1(\gamma) = \gamma^{-1}\left(e^\gamma -1\right)$ for $\gamma \in \C$. Plugging \eqref{pri1} into \eqref{duli} yields for a small time step $\tau$ that \begin{multline}\label{ue1} u(\tau) = e^{ i\tau \mathcal{L}} v - \tau ie^{ i\tau \mathcal{L}} \vert \nabla\vert^\alpha \Big[p_{\text{low}}(v,\overline v) \varphi_1\left(i \tau \mathcal{L}_{\text{dom}} \left(\nabla, \frac{1}{\varepsilon}\right)\right) p_{\text{dom}}\left(v,\overline v\right) \Big] \\ + \mathcal{O}\left( \tau^2\vert \nabla\vert^{2 \alpha} q_ 1(v) \right) + \mathcal{O}\Big(\tau^2\vert \nabla\vert^{ \alpha}\mathcal{L}_\text{low} \left(\nabla\right)q_2({ v})\Big) \end{multline} for some polynomials $q_1, q_2$. 
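The reason the dominant oscillations can be integrated exactly is the elementary identity $\int_0^t e^{is\theta}\, ds = t\,\varphi_1(it\theta)$, applied mode by mode with $\theta = \mathcal{L}_{\text{dom}}(k)$. A minimal numerical illustration of this identity for a scalar frequency $\theta$ (script and names are ours, for illustration only):

```python
import numpy as np

def phi1(z):
    # phi_1(z) = z^{-1}(e^z - 1), extended by the limit phi_1(0) = 1
    return (np.exp(z) - 1.0) / z if z != 0 else 1.0

def osc_integral(t, theta, M=200000):
    # midpoint-rule quadrature of  int_0^t e^{i s theta} ds
    s = (np.arange(M) + 0.5) * (t / M)
    return np.sum(np.exp(1j * s * theta)) * (t / M)

# exact integration of the oscillation versus quadrature
for theta in (0.0, 5.0, -17.0):
    assert abs(osc_integral(0.4, theta) - 0.4 * phi1(1j * 0.4 * theta)) < 1e-8
```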
The expansion of the exact solution \eqref{ue1} builds the foundation of the first-order resonance based discretisation \begin{align}\label{GenScheme1} u^{n+1} = e^{ i\tau \mathcal{L}} u^n - \tau ie^{ i\tau \mathcal{L}} { \vert \nabla\vert^\alpha} \Big[p_{\text{low}}(u^n,\overline u^n) \varphi_1\left(i \tau\mathcal{L}_{\text{dom}} \left(\nabla, \frac{1}{\varepsilon}\right) \right) p_{\text{dom}}\left(u^n,\overline u^n\right) \Big] . \end{align} Compared to classical linear frequency approximations (cf. Table \ref{tab1}) the main gain of the more involved resonance based approach \eqref{GenScheme1} is the following: All dominant parts $\mathcal{L}_\text{dom}$ are captured exactly in the discretisation, while only the lower order/non-oscillatory parts $\mathcal{L}_\text{low}$ are approximated. Hence, within the resonance based approach \eqref{GenScheme1} the local error only depends on the lower order, non-oscillatory operator $\mathcal{L}_\text{low}$, while the local error of classical methods involves the full operator $\mathcal{L}$ and, in particular, its dominant part $\mathcal{L}_\text{dom}$. Thus, the resonance based approach \eqref{GenScheme1} allows us to approximate a more general class of solutions \begin{multline}\label{DDD} u \in \underbrace{ \mathcal{D}\left(\vert \nabla\vert^\alpha \mathcal{L}_{\text{low}}\left(\nabla, \frac{1}{\varepsilon}\right)\right) }_{\text{resonance domain}} \cap \mathcal{D}\left(\vert \nabla\vert^{2\alpha}\right)\\ \supset { \mathcal{D}\left(\vert \nabla\vert^\alpha \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)\right) }\cap \mathcal{D}\left(\vert \nabla\vert^{2\alpha}\right) =\underbrace{ \mathcal{D}\left(\vert \nabla\vert^\alpha \mathcal{L}_{\text{dom}}\left(\nabla, \frac{1}{\varepsilon}\right)\right) }_{\text{classical domain}}\cap \mathcal{D}\left(\vert \nabla\vert^{2\alpha}\right). 
\end{multline} \noindent{\bf Higher order resonance based methods.} Classical approximation techniques, such as splitting or exponential integrator methods, can easily be extended to higher order, see, e.g., \cite{H2Tri,HochOst10,Ta12}. The step from a first- to higher-order approximation lies in subsequently employing a higher order Taylor series expansion of the exact solution \begin{align*} u(t) = u(0) + t\partial_t u(0) + \ldots + \frac{t^{r}}{r!} \partial_t^{r} u(0) + \mathcal{O} \left( t^{r+1} \partial_{t}^{r+1} u\right). \end{align*} Within this expansion, the higher order iterations of the oscillations \eqref{osc} in the exact solution are, however, not resolved, but subsequently linearised. Therefore, classical high order methods are restricted to smooth solutions as their local approximation error in general involves high order derivatives \begin{equation}\label{highT} \mathcal{O} \left( t^{r+1} \partial_{t}^{r+1} u\right)=\mathcal{O} \left( t^{r+1} \mathcal{L}^{r+1}\left(\nabla, \tfrac{1}{\varepsilon}\right) u\right). \end{equation} This phenomenon is also illustrated in Figure \ref{fig:osc}, where we numerically observe the order reduction of the Strang splitting method (of classical order two) down to the order of the Lie splitting method (of classical order one) in case of rough solutions. In particular, we observe that classical high order methods do not pay off at low regularity as their error behaviour reduces to that of lower order methods. At first glance our resonance based approach can also be straightforwardly extended to higher order. 
Instead of considering only the first order iteration~\eqref{It1} the natural idea is to iterate Duhamel's formula \eqref{duh} up to the desired order $r$, i.e., for initial value $u(0)=v$, \begin{equation} \begin{aligned}\label{Itpi} u(t) &= e^{i t \mathcal{L}} v -i e^{i t \mathcal{L}}\nabla^{\alpha} \int_0^te^{ -i \xi_1 \mathcal{L}} p\left( e^{ i \xi_1 \mathcal{L}} v,e^{- i \xi_1 \mathcal{L}} \overline v\right) d\xi_1\\& -e^{i t \mathcal{L}}\nabla^{\alpha} \int_0^te^{ -i \xi_1 \mathcal{L}} \Big[D_1 p \left( e^{ i \xi_1 \mathcal{L}} v,e^{- i \xi_1 \mathcal{L}} \overline v\right)\\&\qquad \cdot e^{ i \xi_1\mathcal{L}}\nabla^{\alpha} \int_0^{\xi_1} e^{ -i \xi_2 \mathcal{L}} p\left( e^{ i \xi_2 \mathcal{L}} v,e^{- i \xi_2 \mathcal{L}} \overline v\right) d\xi_2 \Big]d\xi_1 \\ & +e^{i t \mathcal{L}}\nabla^{\alpha} \int_0^te^{ -i \xi_1 \mathcal{L}} \Big[D_2p \left( e^{ i \xi_1 \mathcal{L}} v,e^{- i \xi_1 \mathcal{L}} \overline v\right)\\&\qquad \cdot e^{ -i \xi_1\mathcal{L}}\nabla^{\alpha} \int_0^{\xi_1} e^{ i \xi_2 \mathcal{L}} \overline{p\left( e^{ i \xi_2 \mathcal{L}} v,e^{- i \xi_2 \mathcal{L}} \overline v\right) }d\xi_2 \Big]d\xi_1 \\ & + \ldots+\nabla^{\alpha} \int_0^t\nabla^{\alpha} \int_0^\xi \ldots \nabla^{\alpha}\int_0^{\xi_r} d\xi_{r} \ldots d\xi_1 d \xi \end{aligned} \end{equation} where $ D_1 $ (resp. $ D_2 $) denotes the derivative with respect to the first (resp. second) argument of $ p $. The key idea will then be the following: Instead of linearising the frequency interactions by a simple Taylor series expansion of the oscillatory terms $e^{\pm i \xi_\ell \mathcal{L}}$ (as classical methods would do), we want to embed the dominant frequency interactions of \eqref{Itpi} exactly into our numerical discretisation. By neglecting the last term involving the iterated integral of order $r$ we will then introduce the desired local error $\mathcal{O}\Big(\nabla^{(r+1)\alpha} t^{r+1}q(u)\Big)$ for some polynomial $q$. 
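The order gain from iterating Duhamel's formula can already be observed on a scalar toy model $i u' + \lambda u = \vert u\vert^2 u$, $u(0)=v$, whose exact solution is $u(t) = e^{i\lambda t}e^{-i\vert v\vert^2 t}v$ since $\vert u\vert$ is preserved. The model and all chosen values below are ours, purely for illustration: each additional Picard iteration raises the local error by one power of $t$.

```python
import numpy as np

lam = 1.0
v = 0.8 + 0.3j          # initial value (chosen arbitrarily)
a = abs(v)**2

def exact(t):
    # i u' + lam u = |u|^2 u preserves |u|, hence u(t) = e^{i lam t} e^{-i|v|^2 t} v
    return np.exp(1j*lam*t - 1j*a*t) * v

def picard1(t):
    # one Duhamel iteration (free flow inserted into the nonlinearity): error O(t^2)
    return np.exp(1j*lam*t) * v * (1 - 1j*a*t)

def picard2(t):
    # two Duhamel iterations, with the inner integral evaluated exactly: error O(t^3)
    integral = t - 1j*a*t**2/2 + a**2*t**3/3 - 1j*a**3*t**4/4
    return np.exp(1j*lam*t) * v * (1 - 1j*a*integral)
```

Halving the step scales the defect of the first iterate by a factor $\approx 1/4$ and of the second by $\approx 1/8$, reflecting local errors $\mathcal{O}(t^2)$ and $\mathcal{O}(t^3)$.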
Compared to the first order approximation \eqref{oscNew} this is, however, much more involved as {high order iterations} of the nonlinear frequency interactions need to be controlled. The control of these iterated oscillations is not only a delicate problem on the discrete (numerical) level, concerning accuracy, stability, etc., but already on the continuous level: We have to encode the structure (which strongly depends on the underlying structure of the PDE, i.e., the form of operator $\mathcal{L}$ and the shape of nonlinearity $p$) and at the same time keep track of the regularity assumptions. In order to achieve this in the general setting \eqref{dis} we will introduce the decorated tree formalism in Section \ref{sec:rho}. Beforehand let us first illustrate the main ideas on the example of the cubic periodic Schr\"odinger equation. \begin{example}[cubic periodic Schr\"odinger equation]\label{ex:introExNLS} We consider the one dimensional cubic Schr\"odinger equation \begin{align}\label{nlsIntro2} i \partial_t u + \partial_x^2 u = \vert u\vert^2 u \end{align} equipped with periodic boundary conditions, that is $x \in \mathbf{T}$. The latter casts into the general form \eqref{dis} with \begin{equation}\label{nlsDo} \begin{aligned} \mathcal{L}\left(\nabla, \tfrac{1}{\varepsilon}\right) & = \partial_x^2, \quad \alpha = 0 \quad \text{and}\quad p(u,\overline u) =u^2 \overline u . \end{aligned} \end{equation} In the case of cubic NLS, the central oscillatory integral (at first order) takes the form (cf. \eqref{osc}) \begin{equation}\label{I1d} \mathcal{I}_1(\tau, \partial_x^2,v) = \int_0^\tau e^{-i s \partial_x^2}\left[ \left( e^{- i s \partial_x^2} \overline v \right) \left ( e^{ i s \partial_x^2} v \right)^2\right] d s. 
\end{equation} Assuming that $v\in L^2$, the Fourier expansion $ v(x) = \sum_{k \in \Z}\hat{v}_k e^{i k x} $ allows us to express the action of the free Schrödinger group as a Fourier multiplier, i.e., \[ e^{\pm i t \partial_x^2}v(x) = \sum_{k \in \Z} e^{\mp i t k^2} \hat{v}_k e^{i k x}. \] With this at hand we can express the oscillatory integral \eqref{I1d} as follows \begin{equation}\label{Ia} \begin{aligned} \mathcal{I}_1(\tau, \partial_x^2,v) & = \sum_{\substack{k_1,k_2,k_3 \in \Z\\-k_1+k_2+k_3 = k} } e^{i k x } \overline{\hat{v}}_{k_1} \hat{v}_{k_2} \hat{v}_{k_3} \int_0^\tau e^{i s \mathscr{F}(k) } ds \end{aligned} \end{equation} with the underlying resonance structure \begin{align}\label{resNLS}{ \mathscr{F}(k) = 2 k_1^2 - 2 k_1 (k_2+k_3) + 2 k_2 k_3}. \end{align} In the spirit of \eqref{oscNew} we need to extract the dominant and lower-order parts from the resonance structure \eqref{resNLS}. The choice is based on the following observation. Note that $2k_1^2$ corresponds to a second-order derivative, i.e., with the inverse Fourier transform $\mathcal{F}^{-1}$, we have \[ \mathcal{F}^{-1}\left(2k_1^2 \overline{\hat v}_{k_1} \hat v_{k_2} \hat v_{k_3}\right) = \left(- 2\partial_x^2 \overline v\right) v^2 \] while the terms $k_\ell \cdot k_m$ with $\ell \neq m$ correspond only to first-order derivatives, i.e., \[ \mathcal{F}^{-1}\left(k_1 \overline{\hat v}_{k_1} k_2 \hat v_{k_2} \hat v_{k_3}\right) = \vert\partial_x v\vert^2 v, \quad \mathcal{F}^{-1}\left( \overline{\hat v}_{k_1} k_2 \hat v_{k_2} k_3 \hat v_{k_3}\right) = - (\partial_x v)^2\overline v. \] This motivates the choice \[ \mathscr{F}(k) = \mathcal{L}_{\text{dom}}(k_1) + \mathcal{L}_{\text{low}}(k_1,k_2,k_3) \] with \begin{equation}\label{domNLS0} { \mathcal{L}_{\text{dom}}(k_1) = 2k_1^2 \quad \text{and}\quad \mathcal{L}_{\text{low}}(k_1,k_2,k_3) = - 2 k_1 (k_2+k_3) + 2 k_2 k_3}. 
\end{equation} In terms of \eqref{GenScheme1} we thus have \begin{equation} \begin{aligned} \label{domNLS} & \mathcal{L}_{\text{dom}} = - 2\partial_x^2, \quad \quad p_{\text{dom}}(v,\overline v) = \overline v \quad \text{and}\quad p_{\text{low}}(v,\overline v) = v^2 \end{aligned} \end{equation} and the first-order NLS resonance based discretisation \eqref{GenScheme1} takes the form \begin{align}\label{schemeNLSintro} u^{n+1} = e^{ i\tau \partial_x^2} u^n - \tau ie^{ i\tau \partial_x^2} \Big[(u^n)^2 \varphi_1\left(-2 i \tau \partial_x^2 \right) \overline u^n \Big] . \end{align} Thanks to \eqref{ue1} we readily see by \eqref{domNLS0} that the NLS scheme \eqref{schemeNLSintro} introduces the approximation error \begin{equation}\label{natscal} \mathcal{O}\left(\tau^2 \mathcal{L}_{\text{low}}q(u)\right)= \mathcal{O}\left(\tau^2 \partial_xq(u)\right) \end{equation} for some polynomial $q$ in $u$. Compared to the error structure of classical discretisation techniques, which involve the full and thus dominant operator $\mathcal{L} = \partial_x^2$, we thus gain one derivative with the resonance based scheme \eqref{schemeNLSintro}. This favourable error behaviour at low regularity is underlined in Figure \ref{fig.nlsintro}. \begin{figure}[h!]\centering \includegraphics[width=0.65\textwidth]{NewH2.eps} \caption{Error versus step size (double logarithmic plot). Comparison of classical and resonance based schemes for the cubic Schr\"odinger equation \eqref{nlsIntro2} with $H^2$ initial data.}\label{fig.nlsintro} \end{figure} \end{example} In Example \ref{ex:introExNLS} we illustrated the idea of the resonance based discretisation on the cubic periodic Schr\"odinger equation in one spatial dimension. In order to control frequency interactions in the general setting \eqref{dis} in arbitrary dimensions $d\geq 1$ up to arbitrarily high order, we next introduce our decorated tree formalism. 
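Since $e^{i\tau\partial_x^2}$ and $\varphi_1(-2i\tau\partial_x^2)$ both act as diagonal Fourier multipliers, the scheme \eqref{schemeNLSintro} can be implemented with a few FFT passes per step. The following is a minimal sketch of our own (illustrative code assuming a uniform grid on the torus, not the authors' reference implementation):

```python
import numpy as np

def phi1(z):
    # phi_1(z) = z^{-1}(e^z - 1) applied entrywise, with phi_1(0) = 1
    out = np.ones_like(z)
    nz = z != 0
    out[nz] = (np.exp(z[nz]) - 1.0) / z[nz]
    return out

def resonance_step(u, tau):
    """One step of the first-order resonance based scheme (schemeNLSintro)
    for i u_t + u_xx = |u|^2 u, with u given on N equispaced points of the torus."""
    N = u.size
    k = np.fft.fftfreq(N, d=1.0/N)       # integer frequencies
    free = np.exp(-1j * tau * k**2)      # e^{i tau dx^2} acts as e^{-i tau k^2}
    # phi_1(-2 i tau dx^2) acts on the k-th mode as phi_1(2 i tau k^2)
    ubar = np.fft.ifft(phi1(2j * tau * k**2) * np.fft.fft(np.conj(u)))
    nl = np.fft.fft(u**2 * ubar)
    return np.fft.ifft(free * (np.fft.fft(u) - 1j * tau * nl))
```

For the plane wave $u_0 = e^{ix}$ the exact solution is $u(t) = e^{i(x-2t)}$ (a plane wave $Ae^{ikx}$ evolves with phase $e^{-i(k^2+\vert A\vert^2)t}$), which one step of the scheme reproduces up to $\mathcal{O}(\tau^2)$.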
\subsection{Main idea of decorated trees for high order resonance based schemes }\label{sec:rho} The iteration of Duhamel's formula \eqref{Itpi} can be expressed using decorated trees. We are interested in computing the iterated frequency interactions in \eqref{Itpi}. This motivates us to express the latter in Fourier space. Let $ r $ be the order of the scheme and let us assume that we truncate \eqref{Itpi} at this order. Its $k$-th Fourier coefficient at order $ r $ is given by \begin{equs} \label{decoratedV1} U_{k}^{r}(\tau, v) & = \sum_{T \in \CV^r_k} \frac{\Upsilon^{p}(T)(v)}{S(T)} \left( \Pi T \right)(\tau) \end{equs} where $ \CV^r_k $ is a set of decorated trees which incorporate the frequency $k$, $ S(T) $ is the symmetry factor associated to the tree $ T $, $ \Upsilon^{p}(T) $ is the coefficient appearing in the iteration of Duhamel's formula and $ (\Pi T)(t) $ represents a Fourier iterated integral. The exponent $r$ in $ \CV^r_k $ means that we consider only trees of size at most $ r +1 $, that is, trees producing an iterated integral with at most $ r + 1$ integrals. The decorations that need to be put on the trees are illustrated in Example~\ref{ex:introExNLS trees}. The main difficulty then lies in developing for every $T \in \CV^r_k$ a suitable approximation to the iterated integrals $ (\Pi T)(t) $ with the aim of minimising the local error structure (in the sense of regularity). In order to achieve this, the key idea is to embed, in the spirit of \eqref{oscNew}, the underlying resonance structure of the iterated integrals $ (\Pi T)(t) $ into the discretisation. 
\begin{example}[cubic periodic Schr\"odinger equation with decorated trees] \label{ex:introExNLS trees} When \newline $r=2$, decorated trees for cubic NLS are given by \begin{equs} T_0 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,1); \coordinate (tri) at (0,-1); \draw[kernels2] (tri) -- (root); \node[var] (rootnode) at (root) {\tiny{$ k $}}; \node[not] (trinode) at (tri) {}; \end{tikzpicture} \qquad T_1 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,2); \coordinate (tri) at (0,0); \coordinate (trib) at (0,-2); \coordinate (t1) at (-2,4); \coordinate (t2) at (2,4); \coordinate (t3) at (0,5); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[kernels2] (trib) -- (tri); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_{\tiny{3}} $}}; \node[not] (trinode) at (trib) {}; \end{tikzpicture} \qquad T_2 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,2); \coordinate (tri) at (0,0); \coordinate (trib) at (0,-2); \coordinate (t1) at (-2,4); \coordinate (t2) at (2,4); \coordinate (t3) at (0,4); \coordinate (t4) at (0,6); \coordinate (t41) at (-2,8); \coordinate (t42) at (2,8); \coordinate (t43) at (0,10); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \draw[symbols] (t3) -- (t4); \draw[kernels2,tinydots] (t4) -- (t41); \draw[kernels2] (t4) -- (t42); \draw[kernels2] (t4) -- (t43); \draw[kernels2] (trib) -- (tri); \node[not] (trinode) at (trib) {}; \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) 
at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} \qquad T_3 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,2); \coordinate (tri) at (0,0); \coordinate (trib) at (0,-2); \coordinate (t1) at (-2,4); \coordinate (t2) at (2,4); \coordinate (t3) at (0,4); \coordinate (t4) at (0,6); \coordinate (t41) at (-2,8); \coordinate (t42) at (2,8); \coordinate (t43) at (0,10); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2,tinydots] (t3) -- (root); \draw[symbols] (root) -- (tri); \draw[symbols,tinydots] (t3) -- (t4); \draw[kernels2] (t4) -- (t41); \draw[kernels2,tinydots] (t4) -- (t42); \draw[kernels2,tinydots] (t4) -- (t43); \draw[kernels2] (trib) -- (tri); \node[not] (trinode) at (trib) {}; \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} \label{treeK} \end{equs} where on the nodes we encode the frequencies such that they add up depending on the edge decorations. The root has no decoration. For example, in $T_1$ the two extremities of the blue edge have the same decoration given by $ -k_1 + k_2 + k_3 $ where the minus sign comes from the dashed edge. Therefore, $ \CV^r_k $ contains infinitely many trees (finitely many shapes, but infinitely many ways of splitting up the frequency $ k $ among the branches). An edge of type $\<thick>$ encodes a multiplication by $ e^{-i \tau k^2} $ where $k$ is the frequency on the nodes adjacent to this edge. 
An edge of type $ \<thin> $ encodes an integration in time of the form \begin{equs} \int_0^{\tau} e^{i s k^2} \cdots d s. \end{equs} In fact, the truncation parameter $r + 1$ corresponds to the maximum number of integrations in time, that is, the number of edges of type $\<thin>$. The dashed dots on the edges correspond to a conjugate and a multiplication by $(-1)$ applied to the frequency at the top of this edge. Then, if we apply the map $ \Pi $ (which encodes the oscillatory integrals in Fourier space, see Section~\ref{sec::recursive_pi}) to these trees, we obtain: \begin{equs} \label{defPiex} \begin{aligned} (\Pi T_0)(\tau) & = e^{-i \tau k^2}, \\ (\Pi T_1)(\tau) & = - i e^{-i \tau k^2} \int_0^\tau e^{i s k^2}\left[ \left( e^{ i s k_1^2} \right) \left ( e^{ -i s k_2^2} \right) \left ( e^{ -i s k_3^2} \right) \right] d s \\ & = - i e^{-i \tau k^2} \int_0^{\tau} e^{is \mathscr{F}(k)} ds \\ (\Pi T_2)(\tau) & = -i e^{-i \tau k^2} \int_0^\tau e^{i s k^2}\left[ \left( e^{ i s k_4^2} \right) \Big( (\Pi T_1)(s) \Big) \left ( e^{ - i s k_5^2} \right) \right] d s \\ (\Pi T_3)(\tau) & = -i e^{-i \tau k^2} \int_0^\tau e^{i s k^2}\left[ \Big( \overline{(\Pi T_1)(s)} \Big)\left ( e^{ -i s k_4^2} \right) \left ( e^{ -i s k_5^2} \right) \right] d s \end{aligned} \end{equs} where the resonance structure $\mathscr{F}(k)$ is given in \eqref{resNLS}. One has the constraints $k= -k_1 +k_2 +k_3$ for $T_1$, $k= -k_1 + k_2 + k_3 -k_4 + k_5$ for $ T_2 $ and $k= k_1 - k_2 - k_3 +k_4 + k_5$ for $ T_3 $. 
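Once the phases are collected, the iterated integrals above admit closed forms; for instance $(\Pi T_1)(\tau) = -i e^{-i\tau k^2}\, \tau\, \varphi_1\big(i\tau\mathscr{F}(k)\big)$ with $\varphi_1(z) = z^{-1}(e^z-1)$. A small numerical cross-check against quadrature (frequencies chosen arbitrarily; the helper names are ours):

```python
import numpy as np

k1, k2, k3 = 2, -1, 3
k = -k1 + k2 + k3                           # frequency constraint for T_1
F = 2*k1**2 - 2*k1*(k2 + k3) + 2*k2*k3      # resonance structure (resNLS)
tau = 0.3

phi1 = lambda z: (np.exp(z) - 1.0) / z if z != 0 else 1.0

def pi_T1_quad(M=200000):
    # midpoint quadrature of  -i e^{-i tau k^2} int_0^tau e^{i s F(k)} ds
    s = (np.arange(M) + 0.5) * (tau / M)
    return -1j * np.exp(-1j*tau*k**2) * np.sum(np.exp(1j*s*F)) * (tau / M)

# closed form of (Pi T_1)(tau)
pi_T1_closed = -1j * np.exp(-1j*tau*k**2) * tau * phi1(1j*tau*F)
```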
Using the definitions in Section~\ref{sec:genScheme}, one can compute the following coefficients: \begin{equs} \frac{\Upsilon^p(T_0)(v)}{S(T_0)} & = \hat v_k, \quad \frac{\Upsilon^p(T_1)(v)}{S(T_1)} = \overline{\hat{v}}_{k_1} \hat{v}_{k_2} \hat{v}_{k_3} \\ \frac{\Upsilon^p(T_2)(v)}{S(T_2)} & = 2 \overline{\hat{v}}_{k_1} \hat v_{k_2} \hat v_{k_3} \overline{\hat{v}}_{k_4} \hat v_{k_5}, \quad \frac{\Upsilon^p(T_3)(v)}{S(T_3)} = \hat v_{k_1} \overline{\hat{v}}_{k_2} \overline{\hat{v}}_{k_3} \hat v_{k_4} \hat v_{k_5} \end{equs} which together with the character $ \Pi$ fully encode the identity {\eqref{decoratedV1}}.\\ \end{example} Our general scheme is based on the approximation of $(\Pi T)(t)$ for every tree in $\CV_k^r$. This approximation is given by a new map of decorated trees denoted by $\Pi^{n,r}$ where $r$ is the order of the scheme and $n$ corresponds to the a priori assumed regularity of the initial value $v$. This new character $\Pi^{n,r}$ will embed the dominant frequency interactions and neglect the lower order terms in the spirit of \eqref{oscNew}. Our general scheme will thus take the form \begin{equs} \label{decoratedV2} U_{k}^{n,r}(\tau, v) & = \sum_{T \in \CV^r_k} \frac{\Upsilon^{p}(T)(v)}{S(T)} \left( \Pi^{n,r} T \right)(\tau) \end{equs} where the map $ \Pi^{n,r} T $ is a low regularity approximation of order $ r $ of the map $ \Pi T $ in the sense that \begin{equs}\label{eq:loci} \left(\Pi T - \Pi^{n,r} T \right)(\tau) = \mathcal{O}\left( \tau^{r+2} \mathcal{L}^{r}_{\text{\tiny{low}}}(T,n) \right). \end{equs} Here $\mathcal{L}^{r}_{\text{\tiny{low}}}(T,n)$ involves all lower order frequency interactions that we neglect in our resonance based discretisation. At first order this approximation is illustrated in~\eqref{pri1}. The scheme \eqref{decoratedV2} and the local error approximations~\eqref{eq:loci} are the main results of this work (see Theorem~\ref{thm:genloc}). Let us give the main ideas on how to obtain them. 
The approximation $ \Pi^{n,r} $ is constructed from a character $ \Pi^n $ defined on the vector space $ \mathcal{H} $ spanned by decorated forests taking values in a space $ \mathcal{C} $ which depends on the frequencies of the decorated trees (see, e.g., \eqref{treeK} in case of NLS). However, we will add at the root the additional decoration $r$ which stresses that this tree will be an approximation of order $r$. For this purpose we will introduce the symbol $\CD^r$ (see, e.g., \eqref{exDr} for $T_1$ of NLS). Indeed, we disregard trees which have more integrals in time than the order of the scheme. In particular we note that $ \Pi^n \CD^r(T) = \Pi^{n,r} T$. The map $ \Pi^n $ is defined recursively from an operator $ \CK $ which will compute a suitable approximation (matching the regularity of the solution) of the integrals introduced by the iteration of Duhamel's formula. This map $ \CK $ corresponds to the high order counterpart of the approach described in Section~\ref{sec:res}: {It} embeds the idea of singling out the dominant parts and integrating them exactly while only approximating the lower order terms, allowing for an improved local error structure compared to classical approaches. The character $ \Pi^n $ is the main map for computing the numerical scheme in Fourier space. 
\begin{example}[cubic periodic Schr\"odinger equation: computation of $\Pi^n$] \label{ex:introExNLS Pi_n} We consider the decorated trees $\CD^r(\bar T_1)$ and $\bar T_1$ given by \begin{equs} \label{exDr} \bar T_1 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture},\, \quad \CD^r(\bar T_1) = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ r $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture}. \end{equs} One can observe that $ (\Pi T_1)(t) = e^{-i k^2 t} (\Pi \bar T_1)(t). $ We will define recursively two maps $\mathscr{F}_{\text{\tiny{dom}}} $ and $ \mathscr{F}_{\text{\tiny{low}}} $ (see Definition~\ref{dom_freq} below) on decorated trees that compute the dominant and the lower part of the nonlinear frequency interactions within the oscillatory integral $ (\Pi \bar T_1)(t) $. 
In this example, one gets back the values already computed in \eqref{domNLS0}; i.e., \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}(\bar T_1) = \mathcal{L}_{\text{\tiny{dom}}}(k_1), \quad \mathscr{F}_{\text{\tiny{low}}}(\bar T_1) =\mathcal{L}_{\text{\tiny{low}}}(k_1,k_2,k_3). \end{equs} Moreover, since the tree $T_1$ does not start with an integral in time, the observation that $(\Pi T_1)(t) = e^{-i k^2 t} (\Pi \bar T_1)(t)$ yields the dominant part of $T_1$: \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}( T_1) = -k^2 +\mathscr{F}_{\text{\tiny{dom}}}(\bar T_1) . \end{equs} Then, one can write: \begin{equs} (\Pi \bar T_1)(\tau) = -i \int_0^\tau e^{i s\mathscr{F}_{\text{\tiny{dom}}}(\bar T_1) } e^{i s\mathscr{F}_{\text{\tiny{low}}}(\bar T_1) } ds \end{equs} and Taylor-expand the lower order term, i.e., the factor containing $ \mathscr{F}_{\text{\tiny{low}}}(\bar T_1)$, around $0$. The term $\Pi^{n,1} \bar T_1 = \Pi^n \CD^1(\bar T_1)$ is then given by: \begin{equs} \label{schemeT1} ( \Pi^{n,1} \bar T_1)(\tau) = - i \int_0^\tau e^{i s\mathscr{F}_{\text{\tiny{dom}}}(\bar T_1) } ds + \mathscr{F}_{\text{\tiny{low}}}(\bar T_1) \int_0^\tau s e^{i s \mathscr{F}_{\text{\tiny{dom}}}(\bar T_1) } ds . \end{equs} One observes that we obtain terms of the form $ \frac{P}{Q} e^{i R} $ where $P, Q, R$ are polynomials in the frequencies $ k_1, k_2, k_3 $; linear combinations of such terms are precisely what defines the space $ \mathcal{C} $. For the local error, one gets \begin{equs}\label{lol} ( \Pi^{n,1} \bar T_1)(\tau) - ( \Pi \bar T_1)(\tau) = \CO( \tau^3 \mathscr{F}_{\text{\tiny{low}}} (\bar T_1)^2 ). \end{equs} Here the term $ \mathscr{F}_{\text{\tiny{low}}} (\bar T_1)^2$ corresponds to the regularity that one has to impose on the solution. One can check by hand that the expression of $ \Pi^{n,1} \bar T_1$ can be mapped back to physical space. Such a statement holds true in general for the character $ \Pi^n $, see Proposition~\ref{physical_space}.
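The local error estimate \eqref{lol} can also be checked symbolically. The following minimal sketch (in Python with SymPy, our own illustrative tooling, not part of the construction) abbreviates $a = \mathscr{F}_{\text{\tiny{dom}}}(\bar T_1)$ and $b = \mathscr{F}_{\text{\tiny{low}}}(\bar T_1)$ and verifies that the scheme \eqref{schemeT1} agrees with the exact oscillatory integral up to a term of order $\tau^3$ whose coefficient involves only $b$:

```python
import sympy as sp

tau, s = sp.symbols('tau s', positive=True)
# a, b stand for F_dom(T_1-bar) and F_low(T_1-bar), respectively
a, b = sp.symbols('a b', real=True, nonzero=True)

# exact oscillatory integral: -i * int_0^tau e^{is(a+b)} ds
exact = -sp.I * sp.integrate(sp.exp(sp.I * s * (a + b)), (s, 0, tau), conds='none')

# first-order resonance based approximation: integrate the dominant phase
# e^{isa} exactly, Taylor-expand the lower part e^{isb} ~ 1 + isb
approx = (-sp.I * sp.integrate(sp.exp(sp.I * s * a), (s, 0, tau), conds='none')
          + b * sp.integrate(s * sp.exp(sp.I * s * a), (s, 0, tau), conds='none'))

# the difference starts at order tau^3, with a coefficient depending on b only
err = sp.expand(sp.series(exact - approx, tau, 0, 4).removeO())
print(sp.simplify(err.coeff(tau, 1)), sp.simplify(err.coeff(tau, 2)))  # both 0
print(sp.simplify(err.coeff(tau, 3)))  # proportional to b**2, independent of a
```

The coefficients of $\tau$ and $\tau^2$ cancel exactly, reflecting that only the Taylor remainder of the lower order phase contributes to the local error.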
This will be important for the practical implementation of the new schemes, see also Remark \ref{rem:FFT} below. We have not used $n$ in the description of the scheme yet. In fact, it plays a role in the expression of $ \Pi^{n,1} \bar T_1 $. One has to compare $ n $ with the regularity required by the local error \eqref{lol} introduced by the polynomial $ \mathscr{F}_{\text{\tiny{low}}} (\bar T_1)^2 $, but also with the term $\mathscr{F}_{\text{\tiny{dom}}}(\bar T_1)^2 $. Indeed, if the initial value is regular enough we may want to Taylor expand all the frequencies, i.e., even the dominant parts, in order to get a simpler scheme, see also Remark~\ref{rem:regi} below. \end{example} In order to obtain a better understanding of the error introduced by the character $ \Pi^n $, one needs to isolate each interaction. Therefore, we will introduce two characters $ \hat \Pi^n : \mathcal{H} \rightarrow \mathcal{C} $ and $ A^n : \mathcal{H} \rightarrow \C $ such that \begin{equs} \label{Birkhoff1} \Pi^n = \left( \hat \Pi^n \otimes A^n \right) \Delta \end{equs} where $ \Delta : \mathcal{H} \rightarrow \mathcal{H} \otimes \mathcal{H}_+ $ is a coaction and $ (\mathcal{H},\Delta) $ is a right comodule over a Hopf algebra $ \mathcal{H}_+ $ equipped with a coproduct $ \Delta^{\!+} $ and an antipode $ \mathcal{A} $. In fact, one can show that: \begin{equs}\label{Birkhoff2} \hat \Pi^n = \left( \Pi^n \otimes \left( \mathcal{Q} \circ \Pi^n \mathcal{A} \cdot \right)(0) \right) \Delta, \quad A^n = (\mathcal{Q} \circ \Pi^n \cdot)(0) \end{equs} where $ \Pi^n $ is extended to a character on $ \mathcal{H}_+ $ and $ \mathcal{Q} $ is a projection defined on $ \mathcal{C} $ which keeps only the terms with no oscillations. The identity \eqref{Birkhoff2} can be understood as a Birkhoff type factorisation of $ \hat \Pi^n $ using the character $ \Pi^n $.
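To give a concrete feeling for the projection $ \mathcal{Q} $, which retains only the non-oscillatory terms, here is a minimal Python/SymPy sketch; the function name \texttt{Q\_project} and the representation of elements of $ \mathcal{C} $ as sums of terms $ c\, e^{iRt} $ are our own illustrative choices, not the definitions used later in the paper:

```python
import sympy as sp

t = sp.symbols('t', real=True)

def Q_project(expr):
    """Sketch of the projection Q on C: from a linear combination of
    terms c * exp(i*R*t), keep only those carrying no oscillation (R = 0)."""
    kept = 0
    for term in sp.Add.make_args(sp.expand(expr)):
        # a term oscillates iff it contains an exponential depending on t
        oscillates = any(e.has(t) for e in term.atoms(sp.exp))
        if not oscillates:
            kept = kept + term
    return kept

a = sp.symbols('a', real=True, nonzero=True)
# e.g. the exact integral int_0^t e^{isa} ds = e^{iat}/(ia) - 1/(ia):
expr = sp.exp(sp.I * a * t) / (sp.I * a) - 1 / (sp.I * a)
print(Q_project(expr))  # the non-oscillatory part, -1/(ia)
```

Evaluating such projected expressions at $ t = 0 $, as in $(\mathcal{Q} \circ \Pi^n \cdot)(0)$, then extracts the constant contribution of the approximated integral.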
This identity is also reminiscent of the main results obtained for singular SPDEs \cite{BHZ}, where two twisted antipodes play a fundamental role, providing a variant of the algebraic Birkhoff factorisation. \begin{example}[cubic periodic Schr\"odinger equation: Birkhoff factorisation] Integrating the first term in \eqref{schemeT1} exactly yields two contributions: \begin{equs} \int_0^\tau e^{i s\mathscr{F}_{\text{\tiny{dom}}}(\bar T_1) } ds = \frac{e^{i\tau\mathscr{F}_{\text{\tiny{dom}}}(\bar T_1) }}{i\mathscr{F}_{\text{\tiny{dom}}}(\bar T_1) } - \frac{1}{i\mathscr{F}_{\text{\tiny{dom}}}(\bar T_1) }. \end{equs} Plugging these two terms into $(\Pi T_2)(\tau)$ defined in \eqref{defPiex}, we see that we have to control the following two terms: \begin{equs} \begin{aligned} \label{firstterm} - e^{-i \tau k^2} & \int_0^\tau e^{i s k^2}\left[ \left( e^{ i s k_4^2} \right) \Big( \frac{e^{i\tau\mathscr{F}_{\text{\tiny{dom}}}(\bar T_1) - is \bar k^2 } }{i\mathscr{F}_{\text{\tiny{dom}}}(\bar T_1) } \Big) \left ( e^{- i s k_5^2} \right) \right] d s \\ - e^{-i \tau k^2} & \int_0^\tau e^{-i s k^2}\left[ \left( e^{ i s k_4^2} \right) \Big( - \frac{e^{-i s \bar k^2 }}{i\mathscr{F}_{\text{\tiny{dom}}}(\bar T_1) } \Big) \left ( e^{ -i s k_5^2} \right) \right] d s \end{aligned} \end{equs} where $ \bar k = -k_1 + k_2 + k_3$. The frequency analysis is needed again for approximating the time integral and defining an approximation of $(\Pi T_2)(\tau) $. One can see that the dominant parts of these two terms may differ. This implies that one can get two different local errors for the approximation of these two terms; the final local error is the maximum of the two. At this point, we need an efficient algebraic structure for dealing with all these frequency interactions in the iterated integrals. We first consider a character $ \hat \Pi^n $ that keeps only the main contribution, that is, the second term of \eqref{firstterm}.
For any decorated tree $ T $, one expects $ \hat \Pi^n $ to be of the form: \begin{equs} (\hat \Pi^n \CD^r(T))(t) = B^n(\CD^r(T))(t) e^{it\mathscr{F}_{\text{\tiny{dom}}}( T) } \end{equs} where $ B^n(\CD^r(T))(t) $ is a polynomial in $ t $ depending on the decorated tree $ T $. The character $ \hat \Pi^n $ singles out oscillations by keeping at each iteration only the non-zero one. This separation between the various oscillations can be encoded via the Butcher-Connes-Kreimer coaction $ \Delta : \mathcal{H} \rightarrow \mathcal{H} \otimes \mathcal{H}_+ $. An example of computation is given below: \begin{equs} \Delta & \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,2); \coordinate (t4) at (0,4); \coordinate (t41) at (-2,6); \coordinate (t42) at (2,6); \coordinate (t43) at (0,8); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \draw[symbols] (t3) -- (t4); \draw[kernels2,tinydots] (t4) -- (t41); \draw[kernels2] (t4) -- (t42); \draw[kernels2] (t4) -- (t43); \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ r $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2) ; \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,2); \coordinate (t4) at (0,4); \coordinate (t41) at (-2,6); \coordinate (t42) at (2,6); \coordinate (t43) at (0,8); 
\draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \draw[symbols] (t3) -- (t4); \draw[kernels2,tinydots] (t4) -- (t41); \draw[kernels2] (t4) -- (t42); \draw[kernels2] (t4) -- (t43); \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ r $} ] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} \otimes \mathbf{1} + \mathbf{1} \otimes \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,2); \coordinate (t4) at (0,4); \coordinate (t41) at (-2,6); \coordinate (t42) at (2,6); \coordinate (t43) at (0,8); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \draw[symbols] (t3) -- (t4); \draw[kernels2,tinydots] (t4) -- (t41); \draw[kernels2] (t4) -- (t42); \draw[kernels2] (t4) -- (t43); \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,0) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} + \lambda \otimes \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at 
(-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,2); \coordinate (t4) at (0,4); \coordinate (t41) at (-2,6); \coordinate (t42) at (2,6); \coordinate (t43) at (0,8); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \draw[symbols] (t3) -- (t4); \draw[kernels2,tinydots] (t4) -- (t41); \draw[kernels2] (t4) -- (t42); \draw[kernels2] (t4) -- (t43); \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,1) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} + \cdots \\ & + \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ r $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ \ell $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} \otimes \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $
(r-1,0) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} + \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ r $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var1] (rootnode) at (t3) {\tiny{$ ^1_{\ell} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} \otimes \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r-1,1) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} + \cdots, \quad \hat \CD^{(r,m)} \left(\begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3)
{\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} \right) = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,m) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} \end{equs} where $ \ell = - k_1 + k_2 + k_3$. The space $\mathcal{H}_+$ consists of forests of planted trees where, for each planted tree, the edge connecting the root to the rest of the tree must be blue (an integration in time). Only blue edges are cut (and placed on the right hand side of the tensor product), while the trunk is on the left hand side. The terms omitted in the computation above correspond to the higher order terms introduced by the Taylor approximation. Indeed, one plays with decorations, introducing an $m$th derivative on the cut blue edges, denoted by $ \hat \CD^{(r,m)} $, and decorations on the nodes where the edges were previously attached. A node of the form $ \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \node[var1] (rootnode) at (root) {\tiny{$ _\ell^1 $}}; \end{tikzpicture} $ in the example above corresponds to the frequency $ \ell $ and the monomial $ \lambda $. The length of the Taylor expansion is dictated by the order of the scheme $ r $. The operator $ \hat \CD^{(r,m)} $ is non-zero only if $ m \leq r+1 $.
The formulae \eqref{Birkhoff1} and \eqref{Birkhoff2} give the relation between $ \Pi^n $ and $ \hat \Pi^n $, which can be interpreted as a Birkhoff factorisation with the explicit formula \eqref{Birkhoff2} for $ A^n $. Such a factorisation is new and does not seem to have an equivalent in the literature. It is natural to observe this factorisation in this context: the integration in time $\int^{t}_0 \cdots ds$ gives two different frequency interactions that can be controlled via a projection $ \mathcal{Q} $ which needs to be iterated deeper in the tree. This is the equivalent of the Rota-Baxter map used for this type of factorisation. \end{example} The coproduct $ \Delta^{\!+} $ and the coaction $ \Delta $ are extremely close in spirit to the ones defined for the recentering in \cite{reg,BHZ}. Indeed, for designing a numerical scheme, we need to perform Taylor expansions, and these two maps perform them at the level of the algebra. The main difference with the tools used for singular SPDEs \cite{BHZ} is the length of the Taylor expansion, which is now dictated by the order of the scheme. The structure we propose in Section~\ref{sec:genframe} is new and reveals the universality of the deformed Butcher-Connes-Kreimer coproducts which appear in \cite{BHZ}. The non-deformed version of this map comes from the analysis of B-series in \cite{Butcher72,MR2657947,MR2803804}, which is itself an extension of the Connes-Kreimer Hopf algebra of rooted trees \cite{CK,CKI} arising in perturbative QFT and noncommutative geometry. One can notice that our approximation $ \Pi^{n,r} $ depends on $ n $, which has to be understood as the regularity we assume a priori on the solution. We design our framework such that for smooth solutions the numerical schemes simplify, recovering in the limit classical linearised approximations as in Table \ref{tab1}.
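To give a flavour of the classical, non-deformed Butcher-Connes-Kreimer coproduct underlying this construction, the following self-contained Python sketch computes it on bare (undecorated) rooted trees via admissible edge cuts, with the trunk on the left of the tensor product as in our convention; the decorations and the deformation used in this paper are deliberately not modelled here:

```python
from itertools import product

# A rooted tree is encoded as the (sorted) tuple of its children's subtrees;
# a leaf is the empty tuple ().
def coproduct(tree):
    """Non-deformed Butcher-Connes-Kreimer coproduct of a rooted tree,
    returned as a list of (trunk, pruned_forest) pairs over all admissible
    edge cuts: trunk on the left, cut-off branches on the right.
    The unit 1 is written as None (empty trunk) or () (empty forest)."""
    per_child = []
    for child in tree:
        # either cut the edge to this child (the whole branch is pruned) ...
        options = [('cut', child)]
        # ... or keep it and cut strictly inside the branch
        options += [('keep', term) for term in coproduct(child)
                    if term[0] is not None]
        per_child.append(options)
    terms = []
    for combo in product(*per_child):
        kept, pruned = [], []
        for tag, val in combo:
            if tag == 'cut':
                pruned.append(val)
            else:
                trunk_c, forest_c = val
                kept.append(trunk_c)
                pruned.extend(forest_c)
        terms.append((tuple(sorted(kept)), tuple(sorted(pruned))))
    # finally, the full cut 1 (x) tree
    terms.append((None, (tree,)))
    return terms

leaf = ()
cherry = (leaf, leaf)  # a root carrying two leaves
print(len(coproduct(cherry)))  # 5 terms, counted with multiplicity
```

For the cherry tree one recovers the classical formula: the tree tensor the unit, the unit tensor the tree, twice the single-branch trunk tensor a leaf, and the bare root tensor both leaves.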
\begin{remark}\label{rem:regi} The term $ \mathcal{L}^{r}_{\text{\tiny{low}}}(T,n) $ in the approximation \eqref{eq:loci} is obtained by performing several Taylor expansions. Depending on the value of $ n $, we get different numerical schemes (see also the applications in Section \ref{sec:examples}). In the sequel, we focus on two specific values of $ n $ associated with two particular schemes. We consider $ n^{r}_{\text{\tiny{low}}}(T) $ and $ n^{r}_{\text{\tiny{full}}}(T) $ given by: \begin{equs} n^{r}_{\text{\tiny{low}}}(T) = \deg(\mathcal{L}^{r}_{\text{\tiny{low}}}(T)), \quad n^{r}_{\text{\tiny{full}}}(T) = \deg(\mathcal{L}^{r}_{\text{\tiny{full}}}(T)), \end{equs} where $ \mathcal{L}^{r}_{\text{\tiny{low}}}(T) $ corresponds to the error obtained when we integrate exactly the dominant part $ \mathcal{L}_{\text{\tiny{dom}}}(T)$ and Taylor expand only the lower order part $ \mathcal{L}_{\text{\tiny{low}}}(T) $, while the term $\mathcal{L}^{r}_{\text{\tiny{full}}}(T)$ corresponds to the error one obtains when we Taylor expand the full operator $\mathcal{L}(T) = \mathcal{L}_{\text{\tiny{dom}}}(T) + \mathcal{L}_{\text{\tiny{low}}}(T) $. One has \begin{equ}[e:deg] \deg(\mathcal{L}^{r}_{\text{\tiny{low}}}(T,n)) = \left\{ \begin{aligned} & n^{r}_{\text{\tiny{low}}}(T), & \quad \, \text{if } & \, n \leq n^{r}_{\text{\tiny{low}}}(T) , \\ & n, & \quad \, \text{if } & \, n^{r}_{\text{\tiny{low}}}(T) \leq n \leq n^{r}_{\text{\tiny{full}}}(T) , \\ & n^{r}_{\text{\tiny{full}}}(T), & \quad \, \text{if } & \, n \geq n^{r}_{\text{\tiny{full}}}(T) . \\ \end{aligned} \right.
\end{equ} At the level of the scheme, we get \begin{equ}[e:scheme] \Pi^{n,r} T = \left\{ \begin{aligned} & \Pi_{\text{\tiny{low}}}^{r} T, & \quad \, \text{if } & \, n \leq n^{r}_{\text{\tiny{low}}}(T) , \\ & \Pi^{n,r} T , & \quad \, \text{if } & \, n^{r}_{\text{\tiny{low}}}(T) \leq n \leq n^{r}_{\text{\tiny{full}}}(T) , \\ & \Pi_{\text{\tiny{full}}}^{r} T , & \quad \, \text{if } & \, n \geq n^{r}_{\text{\tiny{full}}}(T) \\ \end{aligned} \right. \end{equ} where we call $ \Pi_{\text{\tiny{low}}}^{r} T $ the minimum regularity resonance based scheme. This scheme corresponds to the minimisation of the local error, and we can observe a plateau: indeed, if $ n $ is too small then by convention we get this scheme. This could be the case if one does not compute the minimum regularity needed a priori. The other scheme $ \Pi_{\text{\tiny{full}}}^{r} T $ corresponds to a classical exponential type discretisation, where enough regularity is assumed such that also the dominant components of the iterated integrals can be expanded into a Taylor series as in \eqref{highT}. Then, we observe a second plateau: indeed, assuming more regularity will not change the scheme as we have already Taylor-expanded all the components. Compared to $ \Pi_{\text{\tiny{low}}}^{r} T $, the scheme $ \Pi_{\text{\tiny{full}}}^{r} T $ is in general much simpler as no nonlinear frequency interactions are taken into account. This comes at the cost that a smaller class of equations can be solved, as much higher regularity assumptions are imposed. Between these two schemes lies a large class of intermediate schemes $ \Pi^{n,r} T $ which we call {\em low regularity resonance based schemes}. They take advantage of Taylor-expanding a bit more when more regularity is assumed. Therefore, the complexity of the schemes decreases as $n $ increases, see also Section \ref{sec:examples}. We can represent these different regimes through the diagram below.
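In other words, \eqref{e:deg} simply clamps $n$ to the interval $[n^{r}_{\text{\tiny{low}}}(T), n^{r}_{\text{\tiny{full}}}(T)]$; a minimal Python sketch of this piecewise rule (the identifier names are ours):

```python
def deg_low(n, n_low, n_full):
    """Degree of the local error term L^r_low(T, n) as a function of the
    assumed regularity n, covering the three regimes: minimum regularity
    resonance scheme (n <= n_low), low regularity resonance schemes
    (n_low <= n <= n_full), classical exponential integrator (n >= n_full)."""
    return min(max(n, n_low), n_full)

# the two plateaus and the linear regime in between:
print([deg_low(n, 2, 4) for n in range(7)])  # [2, 2, 2, 3, 4, 4, 4]
```

The two constant stretches are exactly the plateaus described above, and the linear stretch in between corresponds to the family of intermediate low regularity resonance based schemes.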
\begin{equs} \begin{tikzpicture}[scale=1,baseline=2cm] \fill [blush, domain=0:2, variable=\x] (0, 0) -- plot ({\x }, 2) -- (2, 0) -- cycle; \fill [greenryb, domain=2:4, variable=\x] (2, 0) -- plot ({\x }, {\x}) -- (4, 0) -- cycle; \fill [brightcerulean, domain=4:6, variable=\x] (4, 0) -- plot ({\x }, 4) -- (6, 0) -- cycle; \draw [thick] [->] (0,0)--(6,0) node[right, below] {$n$}; \draw[xshift=2 cm, thick] (-1pt,0pt)--(1pt,0pt) node[below] {$ n^{r}_{\text{\tiny{low}}}(T) $}; \draw[xshift=3 cm,yshift=1 cm] node[below] {$\substack{ \text{\tiny{Low Regularity }} \\ \text{\tiny{Resonance scheme } } }$}; \draw[xshift=1 cm,yshift=1 cm] node[below] {$\substack{ \text{\tiny{Minimum Regularity }} \\ \text{\tiny{Resonance scheme } } }$}; \draw[xshift=5 cm,yshift=1 cm] node[below] {$\substack{ \text{\tiny{Classical Exponential }} \\ \text{\tiny{Integrator type scheme } } }$}; \draw[xshift=4 cm, thick] (-1pt,0pt)--(1pt,0pt) node[below] {$ n^{r}_{\text{\tiny{full}}}(T) $}; \draw [thick] [->] (0,0)--(0,5) node[above, left] {$\deg(\mathcal{L}^{r}_{\text{\tiny{low}}}(T,n))$}; \draw[yshift=2 cm, thick] (-1pt,0pt)--(1pt,0pt) node[left] {$ n^{r}_{\text{\tiny{low}}}(T) $}; \draw[yshift=4 cm, thick] (-1pt,0pt)--(1pt,0pt) node[left] {$ n^{r}_{\text{\tiny{full}}}(T) $}; \draw [domain=0:2, variable=\x] plot ({\x}, {2}) node[right] at (1.5,2) {}; \draw [domain=2:4, variable=\x] plot ({\x}, {\x}) node[right] at (1.5,2) {}; \draw [domain=4:6, variable=\x] plot ({\x}, {4}) node[right] at (1.5,2) {}; \end{tikzpicture} \end{equs} \end{remark} \begin{remark} Within our framework we propose a stabilisation technique. This will allow us to improve previous higher order attempts breaking formerly imposed order barriers of higher order resonance based schemes, such as the order reduction down to $3/2$ suggested for Schrödinger equations in dimensions $d\geq 2$ in \cite{KOS19}. Details are given in Remark \ref{rem:stab} as well as Section \ref{sec:examples}. 
\end{remark} \begin{remark}\label{rem:FFT} The aim is to choose the central approximation $\Pi^{n,r} T$ as an interplay between optimising the local error in the sense of regularity while allowing for a practical implementation. We design our schemes in such a way that products of functions can always be mapped back to physical space. In practical computations, this will allow us to benefit from the Fast Fourier Transform (FFT) with a computational effort of order $\mathcal{O}\left(\vert K\vert^d \log \vert K\vert ^d\right)$ in dimension $d$, where $K$ denotes the highest frequency in the discretisation. However, it comes at the cost that the approximation error \eqref{eq:loci} involves lower order derivatives. If, on the other hand, we were to embed all nonlinear frequency interactions into the discretisation, the resulting schemes would need to be carried out fully in Fourier space, causing large memory and computational efforts of order $\mathcal{O}\left(K^{d \cdot \deg(p)}\right)$, where $\deg(p)$ denotes the degree of the nonlinearity $p$. \end{remark} {\begin{remark} For notational simplicity, we focus on equations with polynomial nonlinearities (cf. \eqref{dis}). Nevertheless, our scheme \eqref{decoratedV2} allows for a generalisation to non-polynomial nonlinearities of type $$f(u) g(\overline u)$$ for smooth functions $f$ and $g$. In the latter case the iteration of Duhamel's formula boils down to a two-step algorithm. More precisely, suppose that we have a first expansion of the form $ e^{i s\mathcal{L}} v + A(v,s) $ where $ A(v,s) $ is a linear combination of iterated integrals. Then, when iterating Duhamel's formula we need to plug this expansion into the nonlinearity and perform a Taylor expansion around the point $ e^{is \mathcal{L}}v $: \begin{equs} f(e^{i s\mathcal{L}} v + A(v,s)) = \sum_{m \leq r} \frac{A(v,s)^m}{m!} f^{(m)}(e^{i s\mathcal{L}} v ) + \mathcal{O}(A(v,s)^{r+1}).
\end{equs} Carrying out the same manipulation for $ g(\overline{e^{i s\mathcal{L}} v + A(v,s)}) $, we end up with terms of type $$ \frac{A(v,s)^m}{m!} f^{(m)}(e^{i s\mathcal{L}} v ) \frac{\overline{A(v,s)}^n}{n!} g^{(n)}({e^{- i s\mathcal{L}}\overline v}) . $$ At this point we cannot directly write down our resonance based scheme, due to the fact that the oscillations are still encapsulated inside $ f $ and $g$. In order to control these oscillations and their nonlinear interactions, we need to pull the oscillatory phases $ e^{\pm is\mathcal{L}} $ out of $f$ and $g$. This is achieved via expansions of the form \begin{equs} f(e^{is \mathcal{L}}v) = \sum_{\ell \leq r} \frac{s^{\ell}}{\ell!} e^{is \mathcal{L}} \mathcal{C}^{\ell}[f,\mathcal{L}](v) + \mathcal{O}(s^{r+1} \mathcal{C}^{r+1}[f,\mathcal{L}](v)) \end{equs} where $\mathcal{C}^{\ell}[f,\mathcal{L}]$ denote nested commutators which in general require (much) less regularity than powers of the full operator $\mathcal{L}^\ell$. After these two linearisation steps, we are able to use the same machinery that leads to the construction of our scheme~\eqref{decoratedV2}. Such commutators were also recently exploited in \cite{RS} for second-order methods. \end{remark}} \subsection{Outline of the paper} Let us give a short review of the content of this paper. In Section~\ref{sec:genframe}, we introduce the general algebraic framework by first defining a suitable vector space of decorated forests $ \hat \mathcal{H} $. Next we define the dominant frequencies of a decorated forest (see Definition~\ref{dom_freq}) and show that one can map them back into physical space (see Corollary~\ref{physical_map}), which will be important for the efficiency of the numerical schemes (cf. Remark~\ref{rem:FFT}). Then, we introduce two spaces of decorated forests $ \mathcal{H}_+ $ and $\mathcal{H}$. The latter, $ \mathcal{H} $, is used for describing approximated iterated integrals.
The main difference with the previous space is that now we project along the order $ r $ of the method. We define the maps for the coaction $ \Delta : \mathcal{H} \rightarrow \mathcal{H} \otimes \mathcal{H}_+ $ and the coproduct $ \Delta^{\!+} : \mathcal{H}_+ \rightarrow \mathcal{H}_+ \otimes \mathcal{H}_+ $ in \eqref{eq:co-action_plus} and \eqref{eq:coproduct_plus}. In addition we provide a recursive definition for them in \eqref{def_deltas}. We prove in Proposition~\ref{Hopf_algebras} that these maps give a right-comodule structure for $ \mathcal{H} $ over the Hopf algebra $ \mathcal{H}_+ $. Moreover, we get a simple expression for the antipode $ \mathcal{A} $ in Proposition~\ref{antipode_rec}. In Section~\ref{sec::Iterated integrals}, we construct the approximation of the iterated integrals given by the character $ \Pi : \hat \mathcal{H} \rightarrow \mathcal{C} $ (see \eqref{Pi}) through the character $ \Pi^n : \mathcal{H} \rightarrow \mathcal{C} $ (see \eqref{recursive_pi_r}). The main operator used for the recursive construction is $ \CK $ given in Definition~\ref{Taylor_exp}. We introduce a new character $ \hat \Pi^n : \mathcal{H} \rightarrow \mathcal{C} $ through a Birkhoff type factorisation obtained from the character $ \Pi^n $ (see Proposition~\ref{Birkhoff}). Thanks to $ \hat \Pi^n $, we are able to conduct the local error analysis and show one of the main results of the paper: the error estimate on the difference between $ \Pi $ and its approximation $ \Pi^n $ (see Theorem~\ref{approxima_tree}). In Section~\ref{sec:genScheme}, we introduce decorated trees stemming from Duhamel's formula via the rules formalism (see Definition~\ref{rules}). Then, we are able to introduce the general scheme (see Definition~\ref{genscheme}) and conclude on its local error structure (see Theorem~\ref{thm:genloc}). 
In Section~\ref{sec:examples} we illustrate the general framework on various applications and conclude in Section~\ref{sec:num} with numerical experiments underlining the favourable error behaviour of the new resonance based schemes for non-smooth, and in certain cases even for smooth, solutions. \subsection*{Acknowledgements} {\small We wish to thank the anonymous referee for her/his extremely valuable remarks. First discussions on this work were initiated while the authors participated in the workshop ``Algebraic and geometric aspects of numerical methods for differential equations'' held at the Institut Mittag-Leffler in July 2018. The authors thank the organisers of this workshop for putting together a stimulating program bringing different communities together, and the members of the institute for providing a friendly working atmosphere. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 850941).} \section{General framework}\label{sec:genframe} \label{General framework} In this section, we present the main algebraic framework which will allow us to develop and analyse our general numerical scheme. We start by introducing decorated trees that encode the oscillatory integrals. Decorations on the edges represent integrals in time and some operators stemming from Duhamel's formula. In addition, we impose decorations on the nodes for the frequencies and potential monomials. We will compute the corresponding dominant and lower order frequency interactions associated to these trees via the recursive maps $\mathscr{F}_{\text{\tiny{dom}}} $ and $ \mathscr{F}_{\text{\tiny{low}}} $ given in Definition~\ref{dom_freq}. These maps are chosen such that the solution is approximated at low regularity in Fourier space, with the additional property that the approximation can be mapped back to physical space.
The latter will allow for an efficient practical implementation of the new scheme, see Remark \ref{rem:FFT}. The second part of this section focuses on a different space of decorated trees that we name \emph{approximated decorated trees}. The main difference with the trees previously introduced is the additional root decoration by some integer $ r $. The approximated trees have to be understood as an abstract version of an approximation of order $ r $ of the corresponding oscillatory integral. In order to construct our low regularity scheme we want to carry out abstract Taylor expansions of the time integrals at the level of these approximated trees in the spirit of \eqref{oscNew}: we will Taylor expand only the lower parts of the frequency interactions while integrating the dominant part exactly. For these operations, we need a deformed Butcher-Connes-Kreimer coproduct in the spirit of the one which has been introduced for singular SPDEs. We consider a coproduct $\Delta^{\!+} $ and a coaction $ \Delta $ with a non-recursive (see \eqref{eq:co-action_plus} and \eqref{eq:coproduct_plus}) and a recursive definition (see \eqref{def_deltas}). We show the usual coassociativity/compatibility properties in Proposition~\ref{bialgebra}. In the end we obtain a Hopf algebra structure and a comodule structure on these new spaces of approximated decorated trees. The antipode comes for free in this context since the Hopf algebra is connected, see Proposition~\ref{Hopf_algebras}. The main novelty of this general algebraic framework is the merging of two different structures that appear in dispersive PDEs (frequency interactions) and singular SPDEs (abstract Taylor expansions). They form the central part of the scheme: controlling the underlying oscillations and performing Taylor approximations. Within this construction we need to introduce new objects that were not considered before in such generality.
\subsection{Decorated trees and frequency interactions} We consider a set of decorated trees following the formalism developed in \cite{BHZ}. These trees will encode the Fourier coefficients of the numerical scheme. We fix a finite set $ \mathfrak{L}$ and frequencies $ k_1,...,k_n \in \Z^{d} $. The set $ \mathfrak{L}$ parametrizes a set of differential operators with constant coefficients, whose symbols are given by the polynomials $ (P_{\mathfrak{t}})_{\mathfrak{t} \in \mathfrak{L}} $. We define the set of decorated trees $ \hat \mathcal{T} $ as elements of the form $ T_{\mathfrak{e}}^{\mathfrak{n}, \mathfrak{f}} = (T,\mathfrak{n},\mathfrak{f},\mathfrak{e}) $ where \begin{itemize} \item $ T $ is a non-planar rooted tree with root $ \varrho_T $, node set $N_T$ and edge set $E_T$. We denote the leaves of $ T $ by $ L_T $. We require $ T $ to be a planted tree, which means that there is only one edge outgoing from the root. \item the map $ \mathfrak{e} : E_T \rightarrow \mathfrak{L} \times \lbrace 0,1\rbrace$ encodes the edge decorations. \item the map $ \mathfrak{n} : N_T \setminus \lbrace\varrho_T \rbrace \rightarrow \N $ encodes node decorations. For every inner node $ v$, this map encodes a monomial of the form $ \xi^{\mathfrak{n}(v)} $ where $ \xi $ is a time variable. \item the map $ \mathfrak{f} : N_T \setminus \lbrace\varrho_T \rbrace \rightarrow \Z^{d}$ encodes node decorations. These decorations are frequencies that satisfy, for every inner node $ u $: \begin{equs} \label{innerdecoration} (-1)^{\mathfrak{p}(e_u)}\mathfrak{f}(u) = \sum_{e=(u,v) \in E_T} (-1)^{\mathfrak{p}(e)} \mathfrak{f}(v) \end{equs} where $ \mathfrak{e}(e) = (\mathfrak{t}(e),\mathfrak{p}(e)) $ and $ e_u $ is the edge of the form $ (v,u) $ linking $ u $ to its parent. From this definition, one can see that the node decorations $ (\mathfrak{f}(u))_{u \in L_T} $ determine the decorations of the inner nodes.
We assume that the node decorations at the leaves are linear combinations of the $ k_i $ with coefficients in $ \lbrace -1,0,1 \rbrace $. \item we assume that the root of $ T $ has no decoration. \end{itemize} When the node decoration $ \mathfrak{n} $ is zero, we will denote the decorated trees $ T_{\mathfrak{e}}^{\mathfrak{n},\mathfrak{f}} $ by $ T_{\mathfrak{e}}^{\mathfrak{f}} = (T,\mathfrak{f},\mathfrak{e}) $. The set of decorated trees satisfying such a condition is denoted by $ \hat \mathcal{T}_0 $. We say that $ \bar T_{\bar \mathfrak{e}}^{\bar \mathfrak{f}} $ is a decorated subtree of $ T_{\mathfrak{e}}^{\mathfrak{f}} \in \hat \mathcal{T}_0 $ if $ \bar T $ is a subtree of $ T $ and the restrictions of the decorations $ \mathfrak{f}, \mathfrak{e} $ of $ T $ to $ \bar T $ are given by $ \bar \mathfrak{f} $ and $ \bar \mathfrak{e} $. Notice that, because the trees considered in this framework are always planted, we only look at subtrees that are themselves planted. A planted subtree of $ T $ is of the form $ T_e $ where $ e \in E_T $ and $ T_e $ corresponds to the tree above $ e $. The nodes of $ T_e $ are given by all the nodes whose path to the root contains $ e $.
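To make the constraint \eqref{innerdecoration} concrete, the following Python sketch (all names are ours and purely illustrative; frequencies are stored as dictionaries of integer coefficients in the $k_i$) recovers the inner-node decorations from the leaf decorations by propagating signed sums towards the root:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Node of a planted decorated tree.  `p` is the second decoration
    (0 or 1) of the edge linking the node to its parent; `leaf_freq`
    holds the frequency decoration of a leaf as a dict mapping the
    variables 'k1', 'k2', ... to integer coefficients."""
    p: int = 0
    children: list = field(default_factory=list)
    leaf_freq: dict = None

def frequency(node):
    """Frequency decoration f(u) dictated by the constraint
    (-1)^{p(e_u)} f(u) = sum over children v of (-1)^{p(e_v)} f(v)."""
    if not node.children:                 # leaf: the decoration is given
        return dict(node.leaf_freq)
    total = {}
    for child in node.children:
        sign = (-1) ** child.p
        for var, coeff in frequency(child).items():
            total[var] = total.get(var, 0) + sign * coeff
    sign_u = (-1) ** node.p               # move (-1)^{p(e_u)} to the right
    return {var: sign_u * c for var, c in total.items() if c != 0}

# Inner node with two leaves k1 and k2 (all edge decorations p = 0):
inner = Node(children=[Node(leaf_freq={'k1': 1}), Node(leaf_freq={'k2': 1})])
print(frequency(inner))                   # {'k1': 1, 'k2': 1}, i.e. k1 + k2
```

Setting the edge decoration $\mathfrak{p}=1$ on a leaf edge flips the sign of its contribution; with leaves $k_1$ (on an edge with $\mathfrak{p}=1$), $k_2$ and $k_3$, the inner node receives $-k_1+k_2+k_3$, the frequency appearing in the Schrödinger example.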
\begin{example}[label=exa:cont2] \label{example1} Below, we give an example of a decorated tree $ T_{\mathfrak{e}}^{\mathfrak{n}, \mathfrak{f}} $ where the edges are labelled with numbers from $ 1 $ to $ 7 $ and the set $ N_T \setminus \lbrace\varrho_T\rbrace $ is labelled by $\lbrace a,b,c,d,e,f,g \rbrace$: \begin{equs} \begin{tikzpicture}[scale=0.19,baseline=2cm] \node at (0,10) (a) {}; \node at (4,20) (f) {}; \node at (0,30) (k) {}; \node at (8,30) (l) {}; \node at (20,20) (g) {}; \node at (16,30) (m) {}; \node at (12,15) (c) {}; \node at (24,30) (p) {}; % \draw[kernel] (a) -- node [round1] {\tiny $\mathfrak{t}(1),\mathfrak{p}(1)$} (c) ; \draw[kernel1] (c) -- node [round1] {\tiny $\mathfrak{t}(2),\mathfrak{p}(2)$} (f) ; \draw[kernel1] (c) -- node [round1] {\tiny $\mathfrak{t}(3),\mathfrak{p}(3)$} (g) ; \draw[kernel1,black] (f) -- node [near end, round1] {\tiny $\mathfrak{t}(4),\mathfrak{p}(4)$} (k) ; \draw[kernel1] (f) -- node [round1] {\tiny $\mathfrak{t}(5),\mathfrak{p}(5)$} (l) ; \draw[kernel1] (g) -- node [round1] {\tiny $\mathfrak{t}(6),\mathfrak{p}(6)$} (m) ; \draw[kernel1] (g) -- node [near end, round1] {\tiny $\mathfrak{t}(7),\mathfrak{p}(7)$} (p) ; \draw (p) node [rect2] {\tiny $ \mathfrak{n}({g}), \mathfrak{f}({g}) $} ; \draw (m) node [rect2] {\tiny $ \mathfrak{n}({f}), \mathfrak{f}({f}) $} ; \draw (l) node [rect2] {\tiny $ \mathfrak{n}({e}), \mathfrak{f}({e}) $} ; \draw (k) node [rect2] {\tiny $ \mathfrak{n}(d), \mathfrak{f}(d) $} ; \draw (g) node [rect2] {\tiny $ \mathfrak{n}(c),\mathfrak{f}(c) $} ; \draw (f) node [rect2] {\tiny $ \mathfrak{n}(b),\mathfrak{f}(b) $} ; \draw (c) node [rect2] {\tiny $ \mathfrak{n}(a),\mathfrak{f}(a) $} ; \end{tikzpicture} \end{equs} \end{example} \begin{remark} The structure imposed on the node decorations \eqref{innerdecoration} is close to the one used in \cite{Christ,Gub11,oh1}. But in these works, the trees were designed only for one particular equation. 
In our framework, we cover a general class of dispersive equations by having more decorations on the edges, given by $ \mathfrak{L} \times \lbrace 0,1 \rbrace $. The set $ \mathfrak{L} $ keeps track of the differential operators in Duhamel's formulation. The second edge decoration allows us to compute an abstract conjugate on the trees given in \eqref{bar}. \end{remark} We denote by $\hat H $ (resp. $ \hat H_0 $) the set of (unordered) forests composed of trees in $ \hat \mathcal{T} $ (resp. $ \hat \mathcal{T}_0 $) (including the empty forest denoted by $\mathbf{1}$). Their linear spans are denoted by $\hat \mathcal{H} $ and $ \hat \mathcal{H}_0 $. We extend the definition of decorated subtrees to forests by saying that $ T $ is a decorated subtree of the decorated forest $ F $ if there exists a decorated tree $ \bar T$ in $ F $ such that $ T $ is a decorated subtree of $ \bar T $. The forest product is denoted by $ \cdot $ and the counit $ \mathbf{1}^{\star} $ is non-zero only on the empty forest. In order to represent these decorated trees, we introduce a symbolic notation. An edge decorated by $ o = (\mathfrak{t},\mathfrak{p}) $ is denoted by $ \CI_{o} $. The symbol $ \CI_{o}(\lambda_{k}^{\ell} \cdot) : \hat \mathcal{H} \rightarrow \hat \mathcal{H} $ is viewed as the operation that merges all the roots of the trees composing the forest into one node decorated by $(\ell,k) \in \N \times \Z^{d} $. We obtain a decorated tree which is then grafted onto a new root with no decoration. If the condition \eqref{innerdecoration} is not satisfied on the argument, then $\CI_{o}( \lambda_{k}^{\ell} \cdot)$ gives zero. If $ \ell = 0 $, the term $ \lambda_{k}^{\ell} $ is denoted by $ \lambda_{k} $ as shorthand for $ \lambda_{k}^{0} $. When $ \ell = 1 $, it will be denoted by $ \lambda_{k}^{1} $.
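As an illustration only (the naming is ours, and we simplify to $ d = 1 $ with concrete integer frequencies), the grafting operation $ \CI_{o}(\lambda_{k}^{\ell} \cdot) $ can be sketched as a constructor that either builds the merged tree or returns the zero element when the condition \eqref{innerdecoration} fails on the argument:

```python
def graft(op, ell_k, forest=()):
    """Sketch of the symbol CI_{(t,p)}(lambda_k^ell F).

    `op` = (t, p) decorates the edge grafted below the new node,
    `ell_k` = (ell, k) decorates the merged node, and `forest` is a
    tuple of trees built by this same constructor.  Returns None (our
    stand-in for zero) when a subtree is zero or when the relation
    (-1)^p k = sum_i (-1)^{p_i} k_i over the roots of the forest fails.
    """
    (t, p), (ell, k) = op, ell_k
    if any(tree is None for tree in forest):
        return None
    if forest:  # inner node: check the imposed frequency relation
        signed_sum = sum((-1) ** pi * ki
                         for ((ti, pi), (elli, ki), _) in forest)
        if (-1) ** p * k != signed_sum:
            return None
    return (op, ell_k, tuple(forest))

k1, k2 = 3, 5
# CI_{(t2,0)}(lambda_{k1+k2} CI_{(t1,0)}(lambda_{k1}) CI_{(t1,0)}(lambda_{k2}))
T = graft(('t2', 0), (0, k1 + k2),
          (graft(('t1', 0), (0, k1)), graft(('t1', 0), (0, k2))))
assert T is not None
# A mismatched node frequency violates the constraint and gives zero:
assert graft(('t2', 0), (0, k1 + k2 + 1),
             (graft(('t1', 0), (0, k1)), graft(('t1', 0), (0, k2)))) is None
```

The first call above builds the tree that encodes the first iterated integral of the KdV equation once the decorations are chosen as in the example below.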
The forest product between $ \CI_{o_1}( \lambda^{\ell_1}_{k_1}F_1) $ and $ \CI_{o_2}( \lambda^{\ell_2}_{k_2}F_2) $ is given by: \begin{equs} \CI_{o_1}( \lambda^{\ell_1}_{k_1} F_1) \CI_{o_2}( \lambda^{\ell_2}_{k_2} F_2) := \CI_{o_1}( \lambda^{\ell_1}_{k_1} F_1) \cdot \CI_{o_2}( \lambda^{\ell_2}_{k_2} F_2). \end{equs} \begin{example}[label=exa:cont]\label{ex2} The following symbol \begin{equs} \CI_{(\mathfrak{t}(1),\mathfrak{p}(1))}( \lambda^{\mathfrak{n}(a)}_{\mathfrak{f}(a)}\CI_{(\mathfrak{t}(2),\mathfrak{p}(2))}( \lambda^{\mathfrak{n}(b)}_{\mathfrak{f}(b)})\CI_{(\mathfrak{t}(3),\mathfrak{p}(3))}( \lambda^{\mathfrak{n}(c)}_{\mathfrak{f}(c)})) \end{equs} encodes the tree \begin{equs} \label{example2a} \begin{tikzpicture}[scale=0.19,baseline=2cm] \node at (0,10) (a) {}; \node at (4,20) (f) {}; \node at (20,20) (g) {}; \node at (12,15) (c) {}; \draw[kernel] (a) -- node [round1] {\tiny $\mathfrak{t}(1),\mathfrak{p}(1)$} (c) ; \draw[kernel1] (c) -- node [round1] {\tiny $\mathfrak{t}(2),\mathfrak{p}(2)$} (f) ; \draw[kernel1] (c) -- node [round1] {\tiny $\mathfrak{t}(3),\mathfrak{p}(3)$} (g) ; \draw (g) node [rect2] {\tiny $ \mathfrak{n}(c),\mathfrak{f}(c) $} ; \draw (f) node [rect2] {\tiny $ \mathfrak{n}(b),\mathfrak{f}(b) $} ; \draw (c) node [rect2] {\tiny $ \mathfrak{n}(a),\mathfrak{f}(a) $} ; \end{tikzpicture} \end{equs} We will see later (in Example \ref{tex:kdv}) that the above tree with suitably chosen decorations describes the first iterated integral of the Korteweg--de Vries equation \eqref{kdvIntro}.
\end{example} We are interested in the following quantity, which represents the frequencies associated to this tree: \begin{equs} \label{frenquency} \mathscr{F}( T_{\mathfrak{e}}^{\mathfrak{f}} ) = \sum_{u \in N_T \setminus \lbrace \varrho_T \rbrace} P_{(\mathfrak{t}(e_u),\mathfrak{p}(e_u))}(\mathfrak{f}(u)) \end{equs} where $ e_u $ is the edge of the form $ (v,u) $ linking $ u $ to its parent and \begin{equs}\label{Palpha} P_{(\mathfrak{t}(e_u),\mathfrak{p}(e_u))}(\mathfrak{f}(u)) \, := \, (-1)^{\mathfrak{p}(e_u)}P_{\mathfrak{t}(e_u)}((-1)^{\mathfrak{p}(e_u)}\mathfrak{f}(u)) . \end{equs} The term $ \mathscr{F}( T_{\mathfrak{e}}^{\mathfrak{f}}) $ has to be understood as a polynomial in the multiple variables given by the $ k_i $. In the numerical scheme, what matters are the terms of maximal degree in the frequencies, that is, the monomials of highest degree, cf. $\mathcal{L}_\text{\tiny{dom}}$. We compute them using the symbolic notation in the next section. We fix a subset $ \mathfrak{L}_{+} \subset \mathfrak{L} $. This subset encodes integrals in time that are of the form $ \int_0^{\tau} e^{s P_{(\mathfrak{t},\mathfrak{p})}(\cdot)} \cdots ds $ for $ (\mathfrak{t},\mathfrak{p}) \in \mathfrak{L}_+ \times \lbrace 0,1 \rbrace$, see also its interpretation given in \eqref{Pi} below. \subsection{Dominant parts of trees and physical space maps} \begin{definition} \label{Dom_projection} Let $ P(k_1,...,k_n) $ be a polynomial in the $ k_i $. If the dominant monomials of $ P $, namely the pure powers of highest degree, are of the form \begin{equs} a \sum_{i=1}^{n} (a_i k_i)^{m}, \quad a_i \in \lbrace 0,1 \rbrace, \, a \in \Z, \end{equs} then we define $ \mathcal{P}_{\text{\tiny{dom}}}(P) $ as \begin{equs} \label{physical map} \mathcal{P}_{\text{\tiny{dom}}}(P) = a \left(\sum_{i=1}^{n} a_i k_i\right)^{m}. \end{equs} Otherwise, it is zero.
\end{definition} \begin{remark} Given a polynomial $ P $, one can compute its dominant part $ \mathcal{L}_{\text{dom}} $ and its lower part $\mathcal{L}_{\text{low}}$ via \begin{equs} \mathcal{L}_{\text{\tiny{dom}}} = \mathcal{P}_{\text{\tiny{dom}}}(P), \quad \mathcal{L}_{\text{\tiny{low}}} = \left(\mathrm{id} - \mathcal{P}_{\text{\tiny{dom}}} \right)(P). \end{equs} In our discretisation we will treat the dominant parts of the frequency interactions $\mathcal{L}_{\text{\tiny{dom}}}$ exactly, while approximating the lower order parts $\mathcal{L}_{\text{\tiny{low}}} $ by Taylor series expansions (cf. also \eqref{oscNew}). This will be achieved by recursively applying the operator $ \mathcal{P}_{\text{\tiny{dom}}} $, see Definition~\ref{dom_freq}. Note that in the special case that $\mathcal{P}_{\text{\tiny{dom}}}(P) = 0$, we have to expand all frequency interactions into a Taylor series. The latter for instance arises in the context of quadratic Schr\"odinger equations \begin{equs}\label{quadNLS} i \partial_t u = -\Delta u + u^2, \quad u(0,x) = v(x) \end{equs} for which we face oscillations of type (cf. \eqref{oscii}) \begin{equs} \int_0^\tau e^{i \xi \Delta } (e^{-i \xi \Delta} v)^2 d\xi & = \sum_{k_{1},k_{2}\in \Z^d}\hat{v}_{k_1}\hat{v}_{k_2} e^{i (k_1+k_2) x} \int_0^\tau e^{ -i \big(k_1+k_2\big)^2 \xi} e^{ i \big(k_1^2 + k_2^2 \big)\xi} d\xi \\ &= \sum_{k_{1},k_{2}\in \Z^d}\hat{v}_{k_1}\hat{v}_{k_2} e^{i (k_1+k_2) x} \int_0^\tau e^{ -2 i k_1 k_2 \xi} d\xi. \end{equs} Here we recall the notation $ k \ell= k_1 \ell_1 + \ldots + k_d \ell_d $ for $k, \ell \in \Z^d$. In contrast to the cubic NLS \eqref{nlsIntro2} where we have that (cf.
\eqref{domNLS0}) \begin{equation*} \mathcal{L}_{\text{dom}}(k_1) = 2k_1^2 \quad \text{and}\quad \mathcal{L}_{\text{low}}(k_1,k_2,k_3) = - 2 k_1 (k_2+k_3) + 2 k_2 k_3 \end{equation*} we observe for the quadratic NLS \eqref{quadNLS} with the map $ \mathcal{P}_{\text{\tiny{dom}}} $ given in \eqref{physical map} that \begin{equs} P(k_1,k_2) & = - (k_1 + k_2)^2 + (k_1^2 + k_2^2) = - 2k_1 k_2, \quad \mathcal{P}_{\text{\tiny{dom}}}(P) = 0, \\ \mathcal{L}_{\text{\tiny{dom}}} & = \mathcal{P}_{\text{\tiny{dom}}}(P) =0, \quad \mathcal{L}_{\text{\tiny{low}}} = P - \mathcal{P}_{\text{\tiny{dom}}}(P) = - 2k_1 k_2. \end{equs} Hence, although $\mathcal{L}= -\Delta$ and $\mathcal{L}_{\text{dom}}= 0 $ (which means that no oscillations are integrated exactly), we ``only'' lose one derivative in the local approximation error (cf. \eqref{ue1}) as \begin{equation*} \mathcal{O}\left(\tau^2 \mathcal{L}_{\text{low}}v\right)= \mathcal{O}\left(\tau^2 \vert \nabla \vert v\right). \end{equation*} \end{remark} \begin{remark} Terms of type \eqref{physical map} will naturally arise when filtering out the dominant nonlinear frequency interactions in the PDE. We have to embed integrals over their exponentials into our discretisation. For their practical implementation it will therefore be essential to map fractions of \eqref{physical map} back to physical space. Indeed, if we apply the inverse Fourier transform $ \mathcal{F}^{-1} $, one gets: \begin{equs} \label{nice physical terms} \mathcal{F}^{-1} & \left( \sum_{\substack{0 \neq k=k_1 +...+k_n\\k_\ell \neq 0} } \frac{1}{(k_1 +...+k_n)^m} \frac{1}{k_1^{m_1}} ... \frac{1}{k_n^{m_n}}v^1_{k_1}...v^n_{k_n} e^{i kx} \right) \\ & = (-\Delta)^{- m/2} \prod_{\ell = 1}^n \left( (-\Delta)^{-m_\ell/2} v^\ell(x)\right) \end{equs} where by abuse of notation we define the operator $(-\Delta)^{-1}$ in Fourier space as $ (-\Delta)^{-1} f(x) = \sum_{ k \neq 0} \frac{ \hat{f}_k }{k^2}e^{i k x}.
$ \end{remark} In the next proposition, we elaborate on \eqref{nice physical terms} and give a nice class of functions depending on the $k_i$ that we can map back to physical space. \begin{proposition} \label{Fourierproduct} Assume that $ Q $ is a polynomial in $ k_1, \ldots, k_n $ and $ k $ is a linear combination of the $k_i$ such that: \begin{equs} Q & = \prod_i (\sum_{u \in V_i} a_{u,V_i} k_u )^{m_i}, \quad V_i \subset \lbrace 1,...,n \rbrace, \quad a_{u,V_i} \in \lbrace-1,1\rbrace, \\ k & = \sum_{u=1}^{n} a_u k_u, \quad a_u \in \lbrace -1,1 \rbrace, \end{equs} where the $ V_i $ are either disjoint or nested; if $ V_j \subset V_i $, we assume that there exists $ p_{i,j} $ such that \begin{equs} a_{u,V_i} = (-1)^{p_{i,j}} a_{u,V_j}, \quad u \in V_j. \end{equs} We also suppose that the $ V_i $ are included in $ k $ in the sense that there exists $ p_{V_i} $ such that \begin{equs} a_u = (-1)^{p_{V_i}} a_{u,V_i}, \quad u \in V_i. \end{equs} Then, one gets \begin{equs} \mathcal{F}^{-1} & \left( \sum_{ \substack{0 \neq k= a_1 k_1 +...+ a_n k_n\\ Q(k_1,...,k_n) \neq 0}} \frac{1}{Q} v^{1,a_1}_{k_1}...v^{n,a_n}_{k_n} e^{i kx} \right) \\ & = \left(\prod_{i} (-1)^{p_{V_i}} (-\Delta)^{- m_i/2}_{V_i} \right) v^{1,a_1}... v^{n,a_n} \end{equs} where $ v^{i,1} = v^{i} $ and $ v^{i,-1} = \overline{v^{i}} $. The operator $ (-\Delta)^{- m_i/2}_{V_i} $ acts only on the functions $\prod_{u \in V_i} v^{u,a_u} $ and the product starts with the smallest elements for the inclusion order. \end{proposition} \begin{proof} We proceed by induction on the number of $ V_i $. Let $ V_{\max} $ be an element among the $ V_i $ that is maximal for the inclusion order.
Then, we get \begin{equs} \sum_{ \substack{0 \neq k= a_1 k_1 +...+ a_n k_n\\ Q(k_1,...,k_n) \neq 0}} & \frac{1}{Q} v^{1,a_1}_{k_1}...v^{n,a_n}_{k_n} e^{i kx} = \sum_{\substack{0 \neq k= r + \ell \\ \ell \neq 0}} \frac{(-1)^{p_{V_{\max}}}}{\ell^{m_{\max}}}\sum_{ \substack{0 \neq r= \sum_{u \notin V_{\max}} a_u k_u \\ R \neq 0}} \frac{1}{R} \\ & \left( \prod_{j \notin V_{\max}} v^{j,a_j}_{k_j} \right) e^{i r x} \times \sum_{ \substack{0 \neq \ell= \sum_{u \in V_{\max}} a_u k_u \\ S \neq 0}} \frac{1}{S} \left( \prod_{j \in V_{\max}} v^{j,a_j}_{k_j} \right) e^{i \ell x} \end{equs} where \begin{equs} S = \prod_{V_j \varsubsetneq V_{\max}} (\sum_{u \in V_j} a_{u,V_j} k_u )^{m_j}, \quad R = \prod_{V_j \cap V_{\max} = {\centernot\ocircle}} (\sum_{u \in V_j} a_{u,V_j} k_u )^{m_j}, \quad Q = R S \ell^{m_{\max}}. \end{equs} Thus, by applying the inverse Fourier transform, we get the term $ (-1)^{p_{V_{\max}}} (-\Delta)^{- m_{\max}/2}_{V_{\max}} $ from $ \frac{(-1)^{p_{V_{\max}}}}{\ell^{m_{\max}}} $. We conclude by applying the induction hypothesis to the two remaining sums. \end{proof} The next definition will allow us to compute the dominant part of the frequency interactions of a given decorated forest in $ \hat H_0 $. The idea is to filter out the dominant part using the operator $ \mathcal{P}_{\text{\tiny{dom}}} $, which selects the frequencies of highest order. The operator $ \mathcal{P}_{\text{\tiny{dom}}} $ only appears if we face an edge with type in $ \mathfrak{L}_+ $, which corresponds to an integral in time that we have to approximate.
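Before the formal definition, it may help to see $ \mathcal{P}_{\text{\tiny{dom}}} $ as a concrete routine. In the Python sketch below (our own encoding, not from the paper: a polynomial is a dict from exponent tuples to integer coefficients), the pure powers of highest degree are kept when they share one common coefficient $a$, and the expanded power of the corresponding linear form is returned; top-degree cross terms are thereby relegated to the lower part, consistent with $\mathcal{P}_{\text{\tiny{dom}}}\left(2k_1^2 - 2k_1(k_2+k_3) + 2 k_2 k_3\right) = 2k_1^2$ in the cubic Schrödinger example:

```python
from itertools import product

def P_dom(poly, n):
    """Sketch of the dominant-part operator (our reading of it).

    `poly` encodes P = sum_e c_e * k_1^{e_1} ... k_n^{e_n} as a dict
    {exponent tuple: integer coefficient}.  Among the monomials of
    maximal total degree m, keep the pure powers k_i^m; if they all
    carry one common coefficient a, return the expanded polynomial
    a * (k_{i_1} + ... + k_{i_r})^m, otherwise return zero ({}).
    """
    poly = {e: c for e, c in poly.items() if c != 0}
    if not poly:
        return {}
    m = max(sum(e) for e in poly)
    pure = {e: c for e, c in poly.items() if sum(e) == m and max(e) == m}
    if not pure or len(set(pure.values())) != 1:
        return {}
    a = next(iter(pure.values()))
    support = [e.index(m) for e in pure]          # indices i with k_i^m present
    result = {}
    for combo in product(support, repeat=m):      # multinomial expansion
        e = [0] * n
        for i in combo:
            e[i] += 1
        key = tuple(e)
        result[key] = result.get(key, 0) + a
    return result

def P_low(poly, n):
    """Lower part (id - P_dom)(P)."""
    out = dict(poly)
    for e, c in P_dom(poly, n).items():
        out[e] = out.get(e, 0) - c
    return {e: c for e, c in out.items() if c != 0}

# Cubic NLS interaction 2k1^2 - 2k1k2 - 2k1k3 + 2k2k3 (n = 3):
nls = {(2, 0, 0): 2, (1, 1, 0): -2, (1, 0, 1): -2, (0, 1, 1): 2}
print(P_dom(nls, 3))                      # {(2, 0, 0): 2}, i.e. 2 k1^2
# KdV interaction 3k1^2k2 + 3k1k2^2 (n = 2) contains no pure cubes:
print(P_dom({(2, 1): 3, (1, 2): 3}, 2))   # {}
```

The map $\mathrm{id} - \mathcal{P}_{\text{\tiny{dom}}}$, sketched as `P_low`, then yields the lower order part that the scheme approximates by Taylor expansion.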
\begin{definition} \label{dom_freq} We recursively define $\mathscr{F}_{\text{\tiny{dom}}}, \mathscr{F}_{\text{\tiny{low}}} : \hat H_{0} \rightarrow \mathbb{R}[\Z^d]$ as: \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}(\mathbf{1}) = 0, \quad \mathscr{F}_{\text{\tiny{dom}}}(F \cdot \bar F) & =\mathscr{F}_{\text{\tiny{dom}}}(F) + \mathscr{F}_{\text{\tiny{dom}}}(\bar F) \\ \mathscr{F}_{\text{\tiny{dom}}}\left( \CI_{(\mathfrak{t},\mathfrak{p})}( \lambda_{k}F) \right) & = \left\{ \begin{aligned} & \mathcal{P}_{\text{\tiny{dom}}}\left( P_{(\mathfrak{t},\mathfrak{p})}(k) +\mathscr{F}_{\text{\tiny{dom}}}(F) \right), \, \text{if } \mathfrak{t} \in \mathfrak{L}_+ , \\ & P_{(\mathfrak{t},\mathfrak{p})}(k) +\mathscr{F}_{\text{\tiny{dom}}}(F), \quad \text{otherwise} \\ \end{aligned} \right. \\ \mathscr{F}_{\text{\tiny{low}}} \left( \CI_{(\mathfrak{t},\mathfrak{p})}( \lambda_{k}F) \right) & = \left( \mathrm{id} - \mathcal{P}_{\text{\tiny{dom}}} \right) \left( P_{(\mathfrak{t},\mathfrak{p})}(k) +\mathscr{F}_{\text{\tiny{dom}}}(F) \right). \end{equs} We extend these two maps to $ \hat H $ by ignoring the node decorations $ \mathfrak{n} $. \end{definition} \begin{remark}\label{rem:epsi} The definition of $ \mathcal{P}_{\text{\tiny{dom}}} $ can be adapted depending on what is considered to be the dominant part. For example, suppose that for $ \mathfrak{t}_2 \in \mathfrak{L}_+$ one has (cf. \eqref{Leps}) \begin{equs} P_{(\mathfrak{t}_2,p)}( \lambda) = \frac{1}{\varepsilon^{\sigma}} + F_{(\mathfrak{t}_2,p)}( \lambda). \end{equs} Then we can define the dominant part as depending only on $ \varepsilon $ (see Example \ref{sec:kgr}): \begin{equs} \mathcal{P}_{\text{\tiny{dom}}}\left( P_{(\mathfrak{t}_2,p)}(k) \right)= \frac{1}{\varepsilon^{\sigma}}. \end{equs} Definition~\ref{Dom_projection} considers only the case where no terms of the form $ 1/\varepsilon^{\sigma} $ appear; this is sufficient for covering many interesting examples.
\end{remark} In the following we compute the dominant part $ \mathscr{F}_\text{dom}$ for the underlying trees of the cubic Schrödinger~\eqref{nlsIntro} and KdV \eqref{kdvIntro} equation. \begin{example}[KdV]\label{tex:kdv} We consider the decorated tree $ T $ given in Example \ref{example2a}, where we fix the following decorations: \begin{equs} \mathfrak{p}(1) & = \mathfrak{p}(2) = \mathfrak{p}(3) = 0, \quad \mathfrak{t}(2) = \mathfrak{t}(3) = \mathfrak{t}_1, \quad \mathfrak{t}(1) = \mathfrak{t}_2, \\ \mathfrak{f}(b) & = k_1, \quad \mathfrak{f}(c) = k_2, \quad \mathfrak{f}(a) = k_1 + k_2, \quad P_{\mathfrak{t}_2}( \lambda) = \lambda^3 , \quad P_{\mathfrak{t}_1}( \lambda) = - \lambda^3. \end{equs} Now, we suppose $ \mathfrak{L}_+ = \lbrace \mathfrak{t}_2 \rbrace $ and $ \mathfrak{L} = \lbrace \mathfrak{t}_1, \mathfrak{t}_2 \rbrace $. Then the tree \eqref{example2a} takes the form \begin{equs} \label{example2} \begin{tikzpicture}[scale=0.22,baseline=2cm] \node at (0,10) (a) {}; \node at (4,20) (f) {}; \node at (20,20) (g) {}; \node at (12,15) (c) {}; \draw[kernel] (a) -- node [round1] {\tiny $\mathfrak{t}_2, 0 $} (c) ; \draw[kernel1] (c) -- node [round1] {\tiny $ \mathfrak{t}_1, 0$} (f) ; \draw[kernel1] (c) -- node [round1] {\tiny $ \mathfrak{t}_1, 0 $} (g) ; \draw (g) node [rect2] {\tiny $ k_2 $} ; \draw (f) node [rect2] {\tiny $ k_1$} ; \draw (c) node [rect2] {\tiny $k_1+k_2 $} ; \end{tikzpicture} \end{equs} This tree corresponds to the first iterated integral for the KdV equation \eqref{kdvIntro}. 
In more formal notation, we denote this tree by: \begin{equs}\label{kdvTK} \CI_{(\mathfrak{t}_2,0)}( \lambda_{k} \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_1}) \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_2}))= \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t2) at (1,2); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_2 $}}; \end{tikzpicture} , \quad k = k_1 + k_2 \end{equs} where a blue edge encodes $ (\mathfrak{t}_2,0) $ and a brown edge is used for $(\mathfrak{t}_1,0)$. The frequencies are given on the leaves. The ones on the inner nodes are determined by those on the leaves. On the left hand side, we have given the symbolic notation. Together with Definition \ref{dom_freq} one gets \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}(T) & = \mathcal{P}_{\text{\tiny{dom}}} \left( (k_1+k_2)^{3} - k_1^{3} - k_2^{3} \right) = 0 \\ \mathscr{F}_{\text{\tiny{low}}} (T) & = (k_1+k_2)^{3} - k_1^{3} - k_2^{3} = 3k_1 k_2 (k_1+k_2). \end{equs} The fact that $ \mathscr{F}_{\text{\tiny{dom}}}(T) $ is zero comes from the fundamental choice of definition of the operator $ \mathcal{P}_{\text{\tiny{dom}}} $ in Definition~\ref{Dom_projection}. 
\end{example} \begin{example}[Cubic Schrödinger]\label{exRDoNLS} Next we consider the symbol \begin{equs} \CI_{(\mathfrak{t}_2,0)}( \lambda_{-k_1+k_2+k_3}\CI_{(\mathfrak{t}_1,1)}( \lambda_{k_1})\CI_{(\mathfrak{t}_1,0)}( \lambda_{k_2})\CI_{(\mathfrak{t}_1,0)}( \lambda_{k_3})) \end{equs} with $P_{\mathfrak{t}_2}( \lambda)= \lambda^2$, $P_{\mathfrak{t}_1}( \lambda) = - \lambda^2$, $ \mathfrak{L}_+ = \lbrace \mathfrak{t}_2 \rbrace $ and $ \mathfrak{L} = \lbrace \mathfrak{t}_1, \mathfrak{t}_2 \rbrace $ which encodes the tree \begin{equs} \label{example2b} \begin{tikzpicture}[scale=0.22,baseline=2cm] \node at (0,10) (a) {}; \node at (0,20) (f) {}; \node at (10,20) (g) {}; \node at (20,20) (e) {}; \node at (10,15) (c) {}; \draw[kernel] (a) -- node [round1] {\tiny $\mathfrak{t}_2, 0$} (c) ; \draw[kernel1] (c) -- node [round1] {\tiny $\mathfrak{t}_1,1$} (f) ; \draw[kernel1] (c) -- node [round1] {\tiny $ \mathfrak{t}_1,0$} (g) ; \draw[kernel1] (c) -- node [round1] {\tiny $ \mathfrak{t}_1,0$} (e) ; \draw (f) node [rect2] {\tiny $ k_1$} ; \draw (g) node [rect2] {\tiny $ k_2 $} ; \draw (e) node [rect2] {\tiny $ k_3$} ; \draw (c) node [rect2] {\tiny $-k_1+k_2 +k_3$} ; \end{tikzpicture} \end{equs} This tree corresponds to the frequency interaction of the first iterated integral for the cubic Schrödinger equation \eqref{nlsIntro}.
In a more formal notation, we denote this tree by: \begin{equs} \label{treeSE} \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} \end{equs} where a blue edge encodes $ (\mathfrak{t}_2,0) $, a brown edge is used for $ (\mathfrak{t}_1,0) $ and a dashed brown edge is for $ (\mathfrak{t}_1,1) $. With Definition \ref{dom_freq}, we get \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}(T) & = \mathcal{P}_{\text{\tiny{dom}}} \left( (-k_1+k_2+k_3)^{2} + (-k_1)^{2} - k_2^{2} - k_3^2 \right) \\& = \mathcal{P}_{\text{\tiny{dom}}} \left( 2k_1^2 - 2k_1(k_2+k_3) + 2 k_2 k_3\right) = 2 k_1^2 \\ \mathscr{F}_{\text{\tiny{low}}} (T) & = (-k_1+k_2+k_3)^{2} + (-k_1)^{2} - k_2^{2} - k_3^2 - 2k_1^2\\ & = - 2k_1(k_2+k_3) + 2 k_2 k_3. \end{equs} \end{example} The map $ \mathscr{F}_{\text{\tiny{dom}}}$ has a nice property regarding the tree inclusions given in the next proposition. This inclusion property will be important in practical computations, see also Remark \ref{rem:FFT} and the examples in Section \ref{sec:examples}. For Proposition \ref{nasty_trees} below, we need an additional assumption. \begin{assumption} \label{assumption_physical_space} We consider decorated forests whose decorations at the leaves form a partition of $ k_1,...,k_n $, in the sense that for two leaves $ u $ and $ v $, $ \mathfrak{f}(u) $ (resp. $ \mathfrak{f}(v) $) is a linear combination of $ (k_i)_{i \in I} $ (resp.
$ (k_i)_{i \in J} $) with $ I,J \subset \lbrace 1,...,n \rbrace $ and $ I \cap J = {\centernot\ocircle} $. This will be the case in the examples given in Section \ref{sec:examples}. \end{assumption} With this assumption, for a decorated forest $ F = \prod_i T_i $ such that $ \mathcal{P}_{\text{\tiny{dom}}}\left(\mathscr{F}_{\text{\tiny{dom}}}(F) \right) \neq 0 $ one has the nice identity: \begin{equs} \label{splitting_identity} \mathcal{P}_{\text{\tiny{dom}}}\left(\mathscr{F}_{\text{\tiny{dom}}}(F) \right) = \mathcal{P}_{\text{\tiny{dom}}}\left(\sum_i \mathcal{P}_{\text{\tiny{dom}}}\left(\mathscr{F}_{\text{\tiny{dom}}}(T_i) \right) \right). \end{equs} We will illustrate this property and give a counterexample in an example below. \begin{example} We give a counter-example in the setting of the Schrödinger equation to \eqref{splitting_identity} when $\mathcal{P}_{\text{\tiny{dom}}}\left(\mathscr{F}_{\text{\tiny{dom}}}(F) \right) = 0 $. We consider the following forest $ F = \prod_{i=1}^3 T_i$ where the decorated trees $ T_i $ are given by: \begin{equs} T_1 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,1); \coordinate (tri) at (0,-1); \draw[kernels2] (tri) -- (root); \node[var] (rootnode) at (root) {\tiny{$ k_4 $}}; \node[not] (trinode) at (tri) {}; \end{tikzpicture} , \quad T_2 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,1); \coordinate (tri) at (0,-1); \draw[kernels2] (tri) -- (root); \node[var] (rootnode) at (root) {\tiny{$ k_5 $}}; \node[not] (trinode) at (tri) {}; \end{tikzpicture}, \quad T_3 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,-1); \coordinate (t3) at (0,1); \coordinate (t4) at (0,3); \coordinate (t41) at (-2,5); \coordinate (t42) at (2,5); \coordinate (t43) at (0,7); \draw[kernels2] (t3) -- (root); \draw[symbols] (t3) -- (t4); \draw[kernels2,tinydots] (t4) -- (t41); \draw[kernels2] (t4) -- (t42); \draw[kernels2] (t4) -- (t43); \node[not] (rootnode) at (root) {}; \node[not] 
(rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \end{tikzpicture} , \quad F = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,-1); \coordinate (t1) at (-2,1); \coordinate (t2) at (2,1); \coordinate (t3) at (0,1); \coordinate (t4) at (0,3); \coordinate (t41) at (-2,5); \coordinate (t42) at (2,5); \coordinate (t43) at (0,7); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (t3) -- (t4); \draw[kernels2,tinydots] (t4) -- (t41); \draw[kernels2] (t4) -- (t42); \draw[kernels2] (t4) -- (t43); \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture}. \end{equs} Then, we can check Assumption {\ref{assumption_physical_space}} for $ F $. One has \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}(T_1) & = - k_4^2, \quad\mathscr{F}_{\text{\tiny{dom}}}(T_2) = - k_5^2, \quad\mathscr{F}_{\text{\tiny{dom}}}(T_3) = - (-k_1 + k_2 + k_3)^2 + 2 k_1^2, \\ \mathscr{F}_{\text{\tiny{dom}}}(F) & = - k_4^2 - k_5^2 - (-k_1 + k_2 + k_3)^2 + 2 k_1^2 \end{equs} and \begin{equs} \mathcal{P}_{\text{\tiny{dom}}} \left( \mathscr{F}_{\text{\tiny{dom}}}(T_1) \right) = - k_4^2, \quad \mathcal{P}_{\text{\tiny{dom}}} \left(\mathscr{F}_{\text{\tiny{dom}}}(T_2) \right) = - k_5^2, \quad \mathcal{P}_{\text{\tiny{dom}}} \left(\mathscr{F}_{\text{\tiny{dom}}}(T_3) \right) = 0. 
\end{equs} Therefore, one obtains \begin{equs} \mathcal{P}_{\text{\tiny{dom}}} \left( \sum_{i=1}^3 \mathcal{P}_{\text{\tiny{dom}}} \left(\mathscr{F}_{\text{\tiny{dom}}}(T_i) \right) \right) = - (k_4 + k_5)^2. \end{equs} But on the other hand, \begin{equs} \mathcal{P}_{\text{\tiny{dom}}} \left(\mathscr{F}_{\text{\tiny{dom}}}(F) \right) = 0. \end{equs} \end{example} \begin{proposition} \label{nasty_trees} Let $ T_{\mathfrak{e}}^{\mathfrak{f}} $ be a decorated tree in $ \hat H_0 $ and $ e \in E_T $. We recall that $ T_e $ corresponds to the subtree of $ T $ above $ e $. The nodes of $ T_e $ are given by all the nodes whose path to the root contains $ e $. Under Assumption~\ref{assumption_physical_space} one has \begin{equs} \label{subtreek} \begin{aligned} \mathcal{P}_{\text{\tiny{dom}}} \left(\mathscr{F}_{\text{\tiny{dom}}}(T_{\mathfrak{e}}^{\mathfrak{f}}) \right) & = a \left( \sum_{u \in V } a_u k_u \right)^{m}, \quad m \in \N, a \in \Z, \, V \subset L_T, \, a_u \in \lbrace -1,1 \rbrace \\ \mathcal{P}_{\text{\tiny{dom}}} \left( \mathscr{F}_{\text{\tiny{dom}}}((T_e)_{\mathfrak{e}}^{\mathfrak{f}}) \right) & = b \left( \sum_{u \in \bar V } b_u k_u \right)^{m_e}, \quad m_e \in \N, b \in \Z, \, \bar V \subset L_T, \, b_u \in \lbrace -1,1 \rbrace \end{aligned} \end{equs} and $\bar V \subset V \text{ or } \bar V \cap V = {\centernot\ocircle} $. If $ \bar V \subset V $, then there exists $ \bar p \in \lbrace 0,1 \rbrace $ such that $ a_u = (-1)^{\bar p} b_u $ for every $ u \in \bar V $. \end{proposition} \begin{proof} We proceed by induction over the size of the decorated trees. We consider $T= \CI_{(\mathfrak{t},\mathfrak{p})}( \lambda_{k} F) $ where $ F = \prod_{i=1}^m T_i $. (i) If $F =\mathbf{1}$ then one has: \begin{equs} \mathcal{P}_{\text{\tiny{dom}}} \left(\mathscr{F}_{\text{\tiny{dom}}}(T) \right) = \mathcal{P}_{\text{\tiny{dom}}} \left( P_{(\mathfrak{t},\mathfrak{p})}(k) \right). \end{equs} We conclude from the definition of $ \mathcal{P}_{\text{\tiny{dom}}} $. 
(ii) If $ m \geq 2 $, then one gets: \begin{equs} \mathcal{P}_{\text{\tiny{dom}}} \left(\mathscr{F}_{\text{\tiny{dom}}}(T) \right) = \mathcal{P}_{\text{\tiny{dom}}} \left( P_{(\mathfrak{t},\mathfrak{p})}(k) + \sum_{i=1}^m \mathscr{F}_{\text{\tiny{dom}}}(T_i) \right). \end{equs} Using Assumption~\ref{assumption_physical_space}, one obtains: \begin{equs} \mathcal{P}_{\text{\tiny{dom}}} \left(\mathscr{F}_{\text{\tiny{dom}}}(T) \right) = \mathcal{P}_{\text{\tiny{dom}}} \left( \sum_{i=1}^m (P_{(\mathfrak{t},\mathfrak{p})}(k^{(i)}) +\mathscr{F}_{\text{\tiny{dom}}}(T_i)) \right) \end{equs} where $ k^{(i)} $ corresponds to the frequency attached to the node connected to the root of $ T_i $. If $ \mathcal{P}_{\text{\tiny{dom}}} \left(\mathscr{F}_{\text{\tiny{dom}}}(T) \right) \neq 0 $ then from \eqref{splitting_identity} we have \begin{equs} \mathcal{P}_{\text{\tiny{dom}}} \left(\mathscr{F}_{\text{\tiny{dom}}}(T) \right) = \mathcal{P}_{\text{\tiny{dom}}} \left( \sum_{i=1}^m \mathcal{P}_{\text{\tiny{dom}}} \left( P_{(\mathfrak{t},\mathfrak{p})}(k^{(i)}) +\mathscr{F}_{\text{\tiny{dom}}}(T_i) \right) \right). \end{equs} We apply the induction hypothesis to each decorated tree $ \tilde T_i = \CI_{(\mathfrak{t},\mathfrak{p})}( \lambda_{k^{(i)}} T_i) $ and we recombine the various terms in order to conclude. (iii) If $ m=1 $, then $ F = \CI_{(\mathfrak{t}_1,\mathfrak{p}_1)}( \lambda_{ \bar k} T_1) $ where $ \bar k $ is equal to $ k $ up to a minus sign. We can assume without loss of generality that \begin{equs} \label{case1} \mathcal{P}_{\text{\tiny{dom}}} \left(\mathscr{F}_{\text{\tiny{dom}}}(F) \right) = \mathscr{F}_{\text{\tiny{dom}}}(F). \end{equs} Indeed, otherwise \begin{equs} \mathcal{P}_{\text{\tiny{dom}}} \left( \mathscr{F}_{\text{\tiny{dom}}}(T)\right) = \mathcal{P}_{\text{\tiny{dom}}} \left( P_{(\mathfrak{t},\mathfrak{p})}(k) + P_{(\mathfrak{t}_1,\mathfrak{p}_1)}(\bar k) + \mathscr{F}_{\text{\tiny{dom}}}(T_1) \right) .
\end{equs} We can see $ P_{(\mathfrak{t},\mathfrak{p})}(k) + P_{(\mathfrak{t}_1,\mathfrak{p}_1)}(\bar k) $ as a polynomial and then apply the induction hypothesis. We are down to the case \eqref{case1} and we consider: \begin{equs} \mathcal{P}_{\text{\tiny{dom}}}\left(\mathscr{F}_{\text{\tiny{dom}}}(T) \right) & = \mathcal{P}_{\text{\tiny{dom}}}\left( P_{(\mathfrak{t},\mathfrak{p})}(k) + \mathcal{P}_{\text{\tiny{dom}}}\left(\mathscr{F}_{\text{\tiny{dom}}}(\bar T) \right) \right) \end{equs} where now $ F = \bar T$ is just a decorated tree. We apply the induction hypothesis on $ \bar T $ and we get \begin{equs}\label{PT} \mathcal{P}_{\text{\tiny{dom}}} \left(\mathscr{F}_{\text{\tiny{dom}}}(\bar T) \right) & = a \left( \sum_{u \in V } a_u k_u \right)^{m}, \quad a \in \Z, \, V \subset L_T, \, a_u \in \lbrace -1,1 \rbrace \\ k & = \sum_{u \in L_T } (c_u k_u), \quad c_{ u} = (-1)^{ p} a_{u} , \quad u \in V. \end{equs} If the degree of $P_{(\mathfrak{t},\mathfrak{p})}(k)$ is higher than the degree of $\mathscr{F}_{\text{\tiny{dom}}}(\bar T)$ we obtain that \begin{equs} \mathcal{P}_{\text{\tiny{dom}}} \left( P_{(\mathfrak{t},\mathfrak{p})}(k) +\mathscr{F}_{\text{\tiny{dom}}}(\bar T) \right) = \mathcal{P}_{\text{\tiny{dom}}} \left( P_{(\mathfrak{t},\mathfrak{p})}(k) \right). \end{equs} On the other hand, if the degree of $P_{(\mathfrak{t},\mathfrak{p})}(k)$ is lower than the degree of $\mathscr{F}_{\text{\tiny{dom}}}(\bar T)$ \begin{equs} \mathcal{P}_{\text{\tiny{dom}}} \left( P_{(\mathfrak{t},\mathfrak{p})}(k) +\mathscr{F}_{\text{\tiny{dom}}}(\bar T) \right) = \mathcal{P}_{\text{\tiny{dom}}} \left( \mathscr{F}_{\text{\tiny{dom}}}(\bar T) \right). 
\end{equs} If $ P_{(\mathfrak{t},\mathfrak{p})}(k)$ and $\mathscr{F}_{\text{\tiny{dom}}}(\bar T)$ have the same degree $ m $, we get, using the definition of $ P_{(\mathfrak{t},\mathfrak{p})}(k)$ in \eqref{Palpha} as well as the induction hypothesis on $\bar T$ given in \eqref{PT}, that \begin{equs} P_{(\mathfrak{t},\mathfrak{p})}(k) + \mathcal{P}_{\text{\tiny{dom}}}\left(\mathscr{F}_{\text{\tiny{dom}}}(\bar T) \right) & = \sum_{u \in V } \left( a (-1)^{p + m + \mathfrak{p}} + (- 1)^{\mathfrak{p}} \right)( (-1)^{\mathfrak{p}}c_u k_u)^{m} \\ & + \sum_{u \in L_T \setminus V } (- 1)^{\mathfrak{p}} ((-1)^{\mathfrak{p}} c_u k_u)^{m} + R \end{equs} where $ R $ collects terms of lower order. By applying the map $ \mathcal{P}_{\text{\tiny{dom}}} $ defined in \eqref{physical map}, we thus obtain an expression of the form \begin{equs} \label{resultb} \mathcal{P}_{\text{\tiny{dom}}} \left(\mathscr{F}_{\text{\tiny{dom}}}(T) \right) = b \left( \sum_{u \in \tilde{V} } (-1)^{\mathfrak{p}} c_u k_u \right)^{m} \,\text{for some } b \in \Z \end{equs} where $ \tilde V $ could be either $ L_T $ or $ L_T \setminus V $. Let $ e \in E_T $, $ T_e \neq T $, then $ T_e $ is a subtree of $ \bar T $. By the induction hypothesis, one obtains \eqref{subtreek}, meaning that if we denote by $ V $ (resp. $ \bar V $) the set associated to $ \bar T $ (resp. $ T_e $), we get: $ \bar V \subset V $ or $ \bar V \cap V = {\centernot\ocircle} $. In the first case $\bar V \subset V $, the assertion follows as $ V \subset \tilde{V} $ or $ V \cap \tilde{V} = {\centernot\ocircle} $, so that necessarily $\bar V \subset \tilde V$ or $\bar V \cap \tilde V = {\centernot\ocircle}$. In the second case $ \bar V \cap V = {\centernot\ocircle} $, we apply the induction hypothesis on $ T_e $.
Then, for the node $ v $ connected to the root of $ T_e $, there exists $ p $ such that the decoration $ \mathfrak{f}(v) $ is given by: \begin{equs} \mathfrak{f}(v) = \sum_{u \in L_{T_v} } (d_u k_u), \quad d_{ u} = (-1)^{ p} b_{u} , \quad u \in \bar V. \end{equs} As $ \mathfrak{f}(v) $ appears as a subfactor in $ k $, one has $ \bar V \subset L_T $. Then, $ \bar {V} \cap V = {\centernot\ocircle} $ gives also that $ \bar V \subset L_T \setminus V $. Therefore, we have $ \bar V \subset \tilde{V} $, which concludes the proof. \end{proof} \begin{corollary} \label{physical_map} Let $ T_{\mathfrak{e}}^{\mathfrak{f}} $ be a decorated tree in $ \hat \mathcal{T}_0 $. We assume that Assumption~\ref{assumption_physical_space} holds true for a set $ A $ of decorated subtrees of $ T_{\mathfrak{e}}^{\mathfrak{f}} $ such that $\mathcal{P}_{\text{\tiny{dom}}} \left(\mathscr{F}_{\text{\tiny{dom}}}(\bar T) \right) =\mathscr{F}_{\text{\tiny{dom}}}(\bar T) \neq 0$ for $ \bar T \in A $. Moreover, we assume that the $ \bar T $ are of the form $ (T_e)_{\mathfrak{e}}^{\mathfrak{f}} $ where $ e \in E_T $. Then, the following product \begin{equs} \prod_{\bar T \in A} \frac{1}{\left(\mathscr{F}_{\text{\tiny{dom}}}(\bar T) \right)^{m_T}} \end{equs} can be mapped back to physical space using operators of the form $(- \Delta)^{-m/2}_V $ as defined in Proposition~\ref{Fourierproduct}. \end{corollary} \begin{proof} Proposition~\ref{nasty_trees} gives us the structure needed for applying Proposition~\ref{Fourierproduct}, which allows us to conclude.
\end{proof} \begin{example} We illustrate Corollary~\ref{physical_map} via an example extracted from the cubic Schrödinger equation~\eqref{nlsIntro}. We consider the following decorated trees: \begin{equs}\label{nlsTK} T_1 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture}, \quad T_2 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,2); \coordinate (t4) at (0,4); \coordinate (t41) at (-2,6); \coordinate (t42) at (2,6); \coordinate (t43) at (0,8); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \draw[symbols] (t3) -- (t4); \draw[kernels2,tinydots] (t4) -- (t41); \draw[kernels2] (t4) -- (t42); \draw[kernels2] (t4) -- (t43); \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture}. \end{equs} We observe that these trees satisfy Assumption~\ref{assumption_physical_space} and that $ T_1 $ is a subtree of $ T_2 $.
One has that: \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}( T_1) = 2 k_1^2 , \quad\mathscr{F}_{\text{\tiny{dom}}}( T_2) = 2 (k_1 + k_4)^2 \end{equs} and the following quantity can be mapped back into physical space: \begin{equs} \sum_{\substack{k= -k_1-k_4+k_2+k_3+k_5\\ k_1\neq 0, k_1+k_4\neq 0 \\k_1,k_2,k_3,k_4,k_5\in \Z^d}}\frac{1}{ \mathscr{F}_{\text{\tiny{dom}}}( T_1)} \frac{1}{ \mathscr{F}_{\text{\tiny{dom}}}( T_2)} \overline{\hat{v}_{k_1}} \overline{\hat{v}_{k_4}}\hat{v}_{k_2}\hat{v}_{k_3}\hat{v}_{k_5} e^{i k x}\\= \sum_{\substack{k= -k_1-k_4+k_2+k_3+k_5\\ k_1\neq 0, k_1+k_4\neq 0 \\k_1,k_2,k_3,k_4,k_5\in \Z^d}} \frac{1}{4 (k_1^2) (k_1 + k_4)^2}\overline{\hat{v}_{k_1}} \overline{\hat{v}_{k_4}}\hat{v}_{k_2}\hat{v}_{k_3}\hat{v}_{k_5} e^{i k x} \\= \frac14 v(x)^3(-\Delta)^{-1}\left(\overline v(x) (-\Delta)^{-1} \overline v(x)\right) . \end{equs} \end{example} \subsection{Approximated decorated trees} We denote by $ \mathcal{T} $ the set of decorated trees $ T_{\mathfrak{e},r}^{\mathfrak{n},\mathfrak{f}} = (T,\mathfrak{n},\mathfrak{f},\mathfrak{e},r) $ where \begin{itemize} \item $ T_{\mathfrak{e}}^{\mathfrak{n},\mathfrak{f}} \in \hat \mathcal{T} $. \item The decoration of the root is given by $ r \in \Z $, $ r \geq -1 $ such that \begin{equs} \label{condition_trees} r +1 \geq \deg(T_{\mathfrak{e}}^{\mathfrak{n},\mathfrak{f}}) \end{equs} where $ \deg $ is defined recursively by \begin{equs} \deg(\mathbf{1}) & = 0, \quad \deg(T_1 \cdot T_2 ) = \max(\deg(T_1),\deg(T_2)), \\ \deg(\CI_{(\mathfrak{t},\mathfrak{p})}( \lambda^{\ell}_{k}T_1) ) & = \ell + \mathbf{1}_{\lbrace\mathfrak{t} \in \CL_+\rbrace} + \deg(T_1) \end{equs} where $ \mathbf{1} $ is the empty forest and $ T_1, T_2 $ are forests composed of trees in $ \mathcal{T} $. The quantity $\deg(T_{\mathfrak{e}}^{\mathfrak{n},\mathfrak{f}})$ is the maximum number of edges with type in $ \mathfrak{L}_+ $ and node decorations $ \mathfrak{n} $ lying on the same path from one leaf to the root. 
\end{itemize} We call decorated trees in $ \mathcal{T} $ approximated decorated trees. The main difference with the decorated trees introduced before is the adjunction of the decoration $ r $ at the root. The idea is that these trees correspond to different analytical objects. We summarise this below: \begin{equs} T_{\mathfrak{e}}^{\mathfrak{n},\mathfrak{f}} & \in \hat{\mathcal{T}} \equiv \text{Iterated integral} \\ T_{\mathfrak{e},r}^{\mathfrak{n},\mathfrak{f}} & \in \mathcal{T} \equiv \text{Approximation of an iterated integral of order $r$}. \end{equs} The interpretation \eqref{recursive_pi_r} of approximated decorated trees gives the numerical scheme, see Definition~\ref{scheme}. \begin{remark} The condition \eqref{condition_trees} encodes the fact that the order of the scheme must be higher than the maximum number of iterated integrals and monomials lying on the same path from one leaf to the root. Moreover, we can only have monomials of degree less than $ r+1 $ at order $ r+2 $. \end{remark} \begin{example} We continue Example \ref{ex2} with the decorated tree $ T_{\mathfrak{e}}^{\mathfrak{n},\mathfrak{f}} $ given in \eqref{example2a}. We suppose that $ \mathfrak{t}(1) $ is in $ \mathfrak{L}_+ $ but not $ \mathfrak{t}(2) $ and $\mathfrak{t}(3) $. We are now in the context of Example~\ref{tex:kdv}. Then, one has \begin{equs} \deg(T_{\mathfrak{e}}^{\mathfrak{n},\mathfrak{f}}) = \mathfrak{n}(a) + 1 + \max(\mathfrak{n}(b),\mathfrak{n}(c)). \end{equs} \end{example} We denote by $ \mathcal{H} $ the vector space spanned by forests composed of trees in $ \mathcal{T}$ and $ \lambda^n $, $ n \in \N $, where $ \lambda^n $ is the tree with one node decorated by $ n $. When the decoration $ n $ is equal to zero we identify this tree with the empty forest: $ \lambda^{0} =\mathbf{1} $.
Using the symbolic notation, one has: \begin{equs} \mathcal{H} = \langle \lbrace \prod_j \lambda^{m_j} \prod_i \CI^{r_i}_{o_i}( \lambda_{k_i}^{\ell_i} F_i), \, \CI_{o_i}( \lambda_{k_i}^{\ell_i} F_i) \in \hat \mathcal{T} \rbrace \rangle \end{equs} where the product used is the forest product. We call decorated forests in $ \mathcal{H} $ approximated decorated forests. The map $ \CI^{r}_{o}( \lambda_{k}^{\ell} \cdot) : \hat \mathcal{H} \rightarrow \mathcal{H} $ is defined in the same way as $ \CI_{o}( \lambda_{k}^{\ell} \cdot) $, except that the root is now decorated by $ r $ and the result is zero if the inequality \eqref{condition_trees} is not satisfied. We extend this map to $ \mathcal{H} $ by: \begin{equs} \CI^{r}_{o}( \lambda_{k}^{\ell} (\prod_j \lambda^{m_j} \prod_i \CI^{r_i}_{o_i}( \lambda_{k_i}^{\ell_i} F_i))) { \, := }\CI^{r}_{o}( \lambda_{k}^{\ell+\sum_j m_j} (\prod_i \CI_{o_i}( \lambda_{k_i}^{\ell_i} F_i))). \end{equs} In the extension, we remove the decorations $ r_i $ and we add up the decorations $ m_j $ with $ \ell $. In the sequel, we will use a recursive formulation and move from $ \hat \mathcal{H} $ to $ \mathcal{H} $. Therefore, we define the map $ \CD^{r} : \hat \mathcal{H} \rightarrow \mathcal{H} $ which replaces the root decoration of a decorated tree by $ r $ and performs the projection according to the inequality \eqref{condition_trees}. It is given by \begin{equs}\label{DR} \CD^{r}(\mathbf{1})= \mathbf{1}_{\lbrace 0 \leq r+1\rbrace} , \quad \CD^r\left( \CI_{o}( \lambda_{k}^{\ell} F) \right) = \CI^{r}_{o}( \lambda_{k}^{\ell} F) \end{equs} and we extend it multiplicatively to any forest in $ \hat \mathcal{H} $. The map $ \CD^r $ projects according to the order of the scheme $ r $: we disregard decorated trees whose degree violates \eqref{condition_trees}.
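The recursion for $ \deg $ and the projection $ \CD^r $ can be illustrated by a short computation. The following is a minimal sketch under a toy encoding of our own (the tuples below are not the notation of the construction): a tree $ \CI_{(\mathfrak{t},\mathfrak{p})}(\lambda_k^{\ell} F) $ is stored as a tuple recording whether its edge type lies in $ \mathfrak{L}_+ $, the decoration $ \ell $, and the forest $ F $.

```python
# Toy encoding (ours, for illustration only): a tree is (plus, l, children),
# where `plus` says whether the edge type belongs to L_+, l is the node
# decoration, and `children` is the forest below; the empty forest is [].

def deg(forest):
    # deg(1) = 0,  deg(T1 . T2) = max(deg(T1), deg(T2)),
    # deg(I_{(t,p)}(lambda_k^l T1)) = l + 1_{t in L_+} + deg(T1)
    if not forest:
        return 0
    return max(l + int(plus) + deg(children) for (plus, l, children) in forest)

def D_r(tree, r):
    """Projection D^r: keep the tree, now decorated by r at the root,
    when r + 1 >= deg holds; otherwise project it to zero (here: None)."""
    return (tree, r) if r + 1 >= deg([tree]) else None

# Tree shaped like the example above: edge 1 in L_+, edges 2 and 3 not,
# with node decorations n(a) = 1, n(b) = 2, n(c) = 0 chosen arbitrarily.
na, nb, nc = 1, 2, 0
T = (True, na, [(False, nb, []), (False, nc, [])])
assert deg([T]) == na + 1 + max(nb, nc)  # matches n(a) + 1 + max(n(b), n(c))
assert D_r(T, 3) is not None             # 3 + 1 >= 4: kept at order 3
assert D_r(T, 2) is None                 # 2 + 1 < 4: projected to zero
```

The assertions reproduce the degree $ \mathfrak{n}(a) + 1 + \max(\mathfrak{n}(b),\mathfrak{n}(c)) $ computed in the example above and show the same tree surviving the projection at order $3$ while being sent to zero at order $2$.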
We denote by $ \mathcal{T}_{+} $ the set of decorated forests composed of trees of the form $ (T,\mathfrak{n},\mathfrak{f},\mathfrak{e},(r,m)) $ where \begin{itemize} \item $ T_{\mathfrak{e},r}^{\mathfrak{n},\mathfrak{f}} \in \mathcal{T} $. \item The edge connecting the root has a decoration of the form $ (\mathfrak{t},\mathfrak{p}) $ where $ \mathfrak{t} \in \mathfrak{L}_+ $. \item The decoration $ (r,m) $ is at the root of $ T $ and $ m \in \N $ is such that $ m \leq r +1 $. \end{itemize} The linear span of $ \mathcal{T}_+ $ is denoted by $ \mathcal{H}_+ $. One can observe that the main difference with $\mathcal{H}$ is that $ \lambda^m \notin \mathcal{T}_+ $. We define a grafting operator $ \CI^{(r,m)}_{o}( \lambda_{k}^{\ell} \cdot) : \mathcal{H} \rightarrow \mathcal{H}_+ $ in the same way as $ \CI^{r}_{o}( \lambda_{k}^{\ell} \cdot) : \mathcal{H} \rightarrow \mathcal{H} $, but now we add the decoration $ (r,m) $ at the root, where $ m \leq r+1 $. We also define $ \hat \CD^{(r,m)}: \hat\mathcal{H} \rightarrow \mathcal{H}_+ $ analogously to $ \CD^{r} $. It is given by \begin{equs}\label{DRhat} \hat \CD^{(r,m)}(\mathbf{1})= \mathbf{1}, \quad \hat \CD^{(r,m)}\left( \CI_{o}( \lambda_{k}^{\ell} F) \right) = \CI^{(r,m)}_{o}( \lambda_{k}^{\ell} F).
\end{equs} \begin{example} Applying $ \CD^r $ and $ \hat \CD^{(r,m)} $ to the tree \eqref{example2a}, we obtain \begin{equs} \label{example3} \begin{tikzpicture}[scale=0.19,baseline=2cm] \node at (0,10) (a) {}; \node at (4,20) (f) {}; \node at (20,20) (g) {}; \node at (12,15) (c) {}; \draw[kernel1] (a) -- node [round1] {\tiny $\mathfrak{t}(1),\mathfrak{p}(1)$} (c) ; \draw[kernel1] (c) -- node [round1] {\tiny $\mathfrak{t}(2),\mathfrak{p}(2)$} (f) ; \draw[kernel1] (c) -- node [round1] {\tiny $\mathfrak{t}(3),\mathfrak{p}(3)$} (g) ; \draw (g) node [rect2] {\tiny $ \mathfrak{n}(c),\mathfrak{f}(c) $} ; \draw (f) node [rect2] {\tiny $ \mathfrak{n}(b),\mathfrak{f}(b) $} ; \draw (c) node [rect2] {\tiny $ \mathfrak{n}(a),\mathfrak{f}(a) $} ; \draw (a) node [rect2] {\tiny $ r $} ; \end{tikzpicture}, \quad \begin{tikzpicture}[scale=0.19,baseline=2cm] \node at (0,10) (a) {}; \node at (4,20) (f) {}; \node at (20,20) (g) {}; \node at (12,15) (c) {}; \draw[kernel1] (a) -- node [round1] {\tiny $\mathfrak{t}(1),\mathfrak{p}(1)$} (c) ; \draw[kernel1] (c) -- node [round1] {\tiny $\mathfrak{t}(2),\mathfrak{p}(2)$} (f) ; \draw[kernel1] (c) -- node [round1] {\tiny $\mathfrak{t}(3),\mathfrak{p}(3)$} (g) ; \draw (g) node [rect2] {\tiny $ \mathfrak{n}(c),\mathfrak{f}(c) $} ; \draw (f) node [rect2] {\tiny $ \mathfrak{n}(b),\mathfrak{f}(b) $} ; \draw (c) node [rect2] {\tiny $ \mathfrak{n}(a),\mathfrak{f}(a) $} ; \draw (a) node [rect2] {\tiny $ (r,m) $} ; \end{tikzpicture} \end{equs} For the decorated tree in \eqref{treeSE}, one obtains \begin{equs} \label{DRnot} \CD^{r} \left(\begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at
(t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} \right) = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ r $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture}, \quad \hat \CD^{(r,m)} \left(\begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} \right) = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,m) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2)
{\tiny{$ k_3 $}}; \end{tikzpicture}. \end{equs} \end{example} \subsection{Operators on approximated decorated forests} In this subsection, we introduce two maps $ \Delta $ and $ \Delta^{\!+} $ that will act on approximated decorated forests, splitting them into two parts by means of the tensor product. They act at two levels. First, on the shapes of the trees, they extract a subtree at the root. Then, they induce subtle changes in the decorations that can be interpreted as abstract Taylor expansions. We define a map $\Delta : \mathcal{H} \rightarrow \mathcal{H} \otimes \mathcal{H}_+$ for a given $T^{\mathfrak{n},\mathfrak{f}}_{\mathfrak{e},r} \in {\mathcal T}$ by \begin{equs} \label{eq:co-action_plus} \Delta T^{\mathfrak{n},\mathfrak{f}}_{\mathfrak{e},r} & = \sum_{A \in \mathfrak{A}(T) } \sum_{\mathfrak{e}_A} \frac1{\mathfrak{e}_A!} (A,\mathfrak{n} + \pi\mathfrak{e}_A, \mathfrak{f}, \mathfrak{e},r) \\ & \otimes \prod_{e \in \partial(A,T)}( T_{e}, \mathfrak{n} , \mathfrak{f}, \mathfrak{e}, (r-\deg(e),\mathfrak{e}_A(e)))\; \\ & = \sum_{A \in \mathfrak{A}(T) } \sum_{\mathfrak{e}_A} \frac1{\mathfrak{e}_A!} A^{\mathfrak{n} +\pi\mathfrak{e}_A, \mathfrak{f} }_{\mathfrak{e},r} \otimes \prod_{e \in \partial(A,T)} (T_{e})^{\mathfrak{n} , \mathfrak{f}}_{\mathfrak{e}, (r-\deg(e),\mathfrak{e}_A(e))} \end{equs} where we use the following notations: \begin{itemize} \item We write $T_e $ for the planted tree above the edge $ e $ in $ T $. For $g : E_T \rightarrow \N$, we define for every $x \in N_T$, $(\pi g)(x) = \sum_{e=(x,y) \in E_T} g(e)$. \item In $ A^{\mathfrak{n} +\pi\mathfrak{e}_A, \mathfrak{f} }_{\mathfrak{e},r} $, the maps $ \mathfrak{n}, \mathfrak{f} $ and $ \mathfrak{e} $ are restricted to $ N_A $ and $ E_A $. The same is valid for $ (T_{e})^{\mathfrak{n} , \mathfrak{f}}_{\mathfrak{e}, (r-\deg(e),\mathfrak{e}_A(e))} $ where the restriction is on $ N_{T_e} \setminus \lbrace \varrho_{T_e}\rbrace $ and $ E_{T_e} $, where $ \varrho_{T_e} $ is the root of $ T_e $.
When $ A $ is reduced to a single node, we set $ A^{\mathfrak{n} +\pi\mathfrak{e}_A, \mathfrak{f} }_{\mathfrak{e},r} = \lambda^{\mathfrak{n} +\pi\mathfrak{e}_A} $. \item The first sum runs over $\mathfrak{A}(T)$, the set of all subtrees $A$ of $T$ containing the root $ \varrho $ of $ T $. The second sum runs over $\mathfrak{e}_{A} : \partial(A,T) \rightarrow \N$ where $\partial(A,T)$ denotes the edges in $E_T \setminus E_A$ of type in $ \mathfrak{L}_+ $ that are adjacent to $N_A$. \item Factorial coefficients are understood in multi-index notation. \item We define $ \deg(e) $ for $ e \in E_T $ as the number of edges having $ \mathfrak{t}(e) \in \mathfrak{L}_+ $ lying on the path from $ e $ to the root in the decorated tree $ T_{\mathfrak{e}}^{\mathfrak{n}, \mathfrak{f}} $. We also add up the decoration $ \mathfrak{n} $ on this path. \end{itemize} Let us briefly comment on the fact that the play on decorations can be interpreted as abstract Taylor expansions. We can use the following dictionary: \begin{equs} (T_e)_{\mathfrak{e}}^{\mathfrak{n},\mathfrak{f}} \equiv \int_{0}^{\tau} e^{i\xi P(k)} f(k_1,...,k_n,\xi) d\xi \end{equs} where $ k_1,...,k_n $ are the frequencies appearing on the leaves of $ (T_e)_{\mathfrak{e}}^{\mathfrak{n},\mathfrak{f}}$, $ P $ is the polynomial associated to the decoration of the edge $ e $ and $ k $ is the frequency on the node connected to the root. This iterated integral appears inside the iterated integral associated to $ T_{\mathfrak{e}}^{\mathfrak{n},\mathfrak{f}} $. For our numerical approximation, we need to approximate this integral by giving a scheme of the form: \begin{equs} (T_e)_{\mathfrak{e}, \tilde{r}}^{\mathfrak{n},\mathfrak{f}} \equiv \sum_{\ell \leq \tilde{r}} \frac{\tau^{\ell}}{\ell!} f_{\tilde{r},\ell}(k_1,...,k_n), \quad (T_{e})^{\mathfrak{n} , \mathfrak{f}}_{\mathfrak{e}, (\tilde{r},\ell)} \equiv f_{\tilde{r},\ell}(k_1,...,k_n) \end{equs} where the order of the expansion is given by $ \tilde{r} = r - \deg(e) $.
Then, the $ \tau^{\ell} $ are part of the original iterated integrals and cannot be detached like the $ f_{\tilde{r},\ell} $. This is why we increase the polynomial decorations where the tree $ T_e $ was originally attached via the term $ \mathfrak{n} +\pi\mathfrak{e}_A $. The choice for $ \tilde{r} $ is motivated by the fact that we do not need to go too far for the approximation of $ (T_e)_{\mathfrak{e}}^{\mathfrak{n},\mathfrak{f}} $. We take into account all the time integrals and polynomials which lie on the path connecting the root of $ T_e $ to the root of $ T $. The map $ \Delta $ is compatible with the projection induced by the decoration $ r $. Indeed, one has \begin{equs} \deg(T_{\mathfrak{e}}^{\mathfrak{n}, \mathfrak{f}}) = \max_{e \in E_T} \left( \deg( (T_e)_{\mathfrak{e}}^{\mathfrak{n}, \mathfrak{f}} ) + \deg(e) \right). \end{equs} Therefore, if $ \deg(T_{\mathfrak{e}}^{\mathfrak{n}, \mathfrak{f}}) < r $ then for every $ e \in E_T $ \begin{equs} \deg( (T_e)_{\mathfrak{e}}^{\mathfrak{n}, \mathfrak{f}} ) + \deg(e) < r. \end{equs} We deduce that if $ T^{\mathfrak{n},\mathfrak{f}}_{\mathfrak{e},r} $ is not zero then the $ (T_e)_{\mathfrak{e},(r-\deg(e),\mathfrak{e}_A(e))}^{\mathfrak{n}, \mathfrak{f}} $ are not zero either. We define a map $\Delta^{\!+} : \mathcal{H}_+ \rightarrow \mathcal{H}_+ \otimes \mathcal{H}_+$ given for $T^{\mathfrak{n}, \mathfrak{f}}_{\mathfrak{e},(r,m)} \in {\mathcal T}_+$ by \begin{equs} \label{eq:coproduct_plus} \Delta^{\!+} T^{\mathfrak{n},\mathfrak{f}}_{\mathfrak{e},(r,m)} & = \sum_{A \in \mathfrak{A}(T) } \sum_{\mathfrak{e}_A} \frac1{\mathfrak{e}_A!} A^{\mathfrak{n} +\pi\mathfrak{e}_A, \mathfrak{f} }_{\mathfrak{e},(r,m)} \otimes \prod_{e \in \partial(A,T)} (T_{e})^{\mathfrak{n} , \mathfrak{f}}_{\mathfrak{e}, (r-\deg(e),\mathfrak{e}_A(e))}.
\end{equs} We require that $ A^{\mathfrak{n} +\pi\mathfrak{e}_A, \mathfrak{f} }_{\mathfrak{e},(r,m)} \in \mathcal{H}_+ $, so we implicitly have a projection on zero when $ A $ happens to be a single node with $ \mathfrak{n} +\pi\mathfrak{e}_A \neq 0 $. We illustrate this coproduct on a well-chosen example. \begin{example} \label{example3} We continue with the tree in Example \ref{example1}. We suppose that $ \mathfrak{L}_+ = \lbrace \mathfrak{t}(2), \mathfrak{t}(3), \mathfrak{t}(4), \mathfrak{t}(5) \rbrace $. Below, the subtree $ A \in \mathfrak{A}(T)$ is colored in blue. We have $ N_A = \lbrace \varrho, a,b\rbrace $, $ E_A = \lbrace 1,2 \rbrace $ and $ \partial(A,T) = \lbrace 3,4,5 \rbrace $. \begin{equs} \begin{tikzpicture}[scale=0.19,baseline=2cm] \node at (0,10) (a) {}; \node at (4,20) (f) {}; \node at (0,30) (k) {}; \node at (8,30) (l) {}; \node at (20,20) (g) {}; \node at (16,30) (m) {}; \node at (12,15) (c) {}; \node at (24,30) (p) {}; \draw[kernel1,blue] (a) -- node [round3] {\tiny $\mathfrak{t}(1),\mathfrak{p}(1)$} (c) ; \draw[kernel1,blue] (c) -- node [round3] {\tiny $\mathfrak{t}(2),\mathfrak{p}(2)$} (f) ; \draw[kernel1] (c) -- node [round1] {\tiny $\mathfrak{t}(3),\mathfrak{p}(3)$} (g) ; \draw[kernel1,black] (f) -- node [near end, round1] {\tiny $\mathfrak{t}(4),\mathfrak{p}(4)$} (k) ; \draw[kernel1] (f) -- node [round1] {\tiny $\mathfrak{t}(5),\mathfrak{p}(5)$} (l) ; \draw[kernel1] (g) -- node [round1] {\tiny $\mathfrak{t}(6),\mathfrak{p}(6)$} (m) ; \draw[kernel1] (g) -- node [near end, round1] {\tiny $\mathfrak{t}(7),\mathfrak{p}(7)$} (p) ; \draw (p) node [rect2] {\tiny $ \mathfrak{n}({g}), \mathfrak{f}({g}) $} ; \draw (m) node [rect2] {\tiny $ \mathfrak{n}({f}), \mathfrak{f}({f}) $} ; \draw (l) node [rect2] {\tiny $ \mathfrak{n}({e}), \mathfrak{f}({e}) $} ; \draw (k) node [rect2] {\tiny $ \mathfrak{n}(d), \mathfrak{f}(d) $} ; \draw (g) node [rect2] {\tiny $ \mathfrak{n}(c),\mathfrak{f}(c) $} ; \draw[blue] (f) node [round3] {\tiny $
\mathfrak{n}(b),\mathfrak{f}(b) $} ; \draw[blue] (c) node [round3] {\tiny $ \mathfrak{n}(a),\mathfrak{f}(a) $} ; \draw[blue] (a) node [round3] {\tiny $ r $} ; \end{tikzpicture} \end{equs} We also get: \begin{equs} \deg(4) = \deg(5) = 1 + \mathfrak{n}(a) + \mathfrak{n}(b), \quad \deg(3) = \mathfrak{n}(a). \end{equs} For a fixed $ \mathfrak{e}_A : \partial(A,T) \rightarrow \N $, we have: \begin{equs} \begin{tikzpicture}[scale=0.19,baseline=2cm] \node at (0,10) (a) {}; \node at (4,20) (f) {}; \node at (12,15) (c) {}; \draw[kernel1,blue] (a) -- node [round3] {\tiny $\mathfrak{t}(1),\mathfrak{p}(1)$} (c) ; \draw[kernel1,blue] (c) -- node [round3] {\tiny $\mathfrak{t}(2),\mathfrak{p}(2)$} (f) ; \draw[blue] (f) node [round3] {\tiny $ \mathfrak{n}(b) + \mathfrak{e}_A(4) + \mathfrak{e}_A(5),\mathfrak{f}(b) $} ; \draw[blue] (c) node [round3] {\tiny $ \mathfrak{n}(a) + \mathfrak{e}_A(3),\mathfrak{f}(a) $} ; \draw[blue] (a) node [round3] {\tiny $ r $} ; \end{tikzpicture} \otimes \quad \begin{tikzpicture}[scale=0.19,baseline=2cm] \node at (0,10) (a) {}; \node at (4,20) (f) {}; \node at (0,30) (k) {}; \node at (8,30) (l) {}; \node at (0,20) (g) {}; \node at (-6,30) (m) {}; \node at (0,10) (c) {}; \node at (6,30) (p) {}; \draw[kernel1] (c) -- node [round1] {\tiny $\mathfrak{t}(3),\mathfrak{p}(3)$} (g) ; \draw[kernel1] (g) -- node [round1] {\tiny $\mathfrak{t}(6),\mathfrak{p}(6)$} (m) ; \draw[kernel1] (g) -- node [near end, round1] {\tiny $\mathfrak{t}(7),\mathfrak{p}(7)$} (p) ; \draw (p) node [rect2] {\tiny $ \mathfrak{n}({g}), \mathfrak{f}({g}) $} ; \draw (m) node [rect2] {\tiny $ \mathfrak{n}({f}), \mathfrak{f}({f}) $} ; \draw (g) node [rect2] {\tiny $ \mathfrak{n}(c),\mathfrak{f}(c) $} ; \draw (c) node [rect1] {\tiny $ r - \mathfrak{n}(a),\mathfrak{e}_A(3)$} ; \end{tikzpicture} \begin{tikzpicture}[scale=0.19,baseline=2cm] \node at (0,20) (g) {}; \node at (0,10) (c) {}; \draw[kernel1] (c) -- node [round1] {\tiny $\mathfrak{t}(4),\mathfrak{p}(4)$} (g) ; \draw (g) node [rect2]
{\tiny $ \mathfrak{n}(d),\mathfrak{f}(d) $}; \draw (c) node [rect1] {\tiny $ \bar r,\mathfrak{e}_A(4)$} ; \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.19,baseline=2cm] \node at (0,20) (g) {}; \node at (0,10) (c) {}; \draw[kernel1] (c) -- node [round1] {\tiny $\mathfrak{t}(5),\mathfrak{p}(5)$} (g) ; \draw (g) node [rect2] {\tiny $ \mathfrak{n}(e),\mathfrak{f}(e) $}; \draw (c) node [rect1] {\tiny $ \bar r,\mathfrak{e}_A(5)$} ; \end{tikzpicture} \end{equs} where $ \bar r = r - \mathfrak{n}(a) - \mathfrak{n}(b) -1 $. Now if $ A $ is just equal to the root of $ T $, then one gets $ N_A = \lbrace \varrho \rbrace $, $ E_A = {\centernot\ocircle}$ and $ \partial(A,T) = \lbrace 1\rbrace $ as illustrated below \begin{equs} \label{example_root} \begin{tikzpicture}[scale=0.19,baseline=2cm] \node at (0,10) (c) {}; \draw (c) node [rect1] {\tiny $ \mathfrak{e}_A(1)$} ; \end{tikzpicture} \qquad \otimes \qquad \begin{tikzpicture}[scale=0.19,baseline=2cm] \node at (0,10) (a) {}; \node at (4,20) (f) {}; \node at (0,30) (k) {}; \node at (8,30) (l) {}; \node at (20,20) (g) {}; \node at (16,30) (m) {}; \node at (12,15) (c) {}; \node at (24,30) (p) {}; \draw[kernel1] (a) -- node [round1] {\tiny $\mathfrak{t}(1),\mathfrak{p}(1)$} (c) ; \draw[kernel1] (c) -- node [round1] {\tiny $\mathfrak{t}(2),\mathfrak{p}(2)$} (f) ; \draw[kernel1] (c) -- node [round1] {\tiny $\mathfrak{t}(3),\mathfrak{p}(3)$} (g) ; \draw[kernel1,black] (f) -- node [near end, round1] {\tiny $\mathfrak{t}(4),\mathfrak{p}(4)$} (k) ; \draw[kernel1] (f) -- node [round1] {\tiny $\mathfrak{t}(5),\mathfrak{p}(5)$} (l) ; \draw[kernel1] (g) -- node [round1] {\tiny $\mathfrak{t}(6),\mathfrak{p}(6)$} (m) ; \draw[kernel1] (g) -- node [near end, round1] {\tiny $\mathfrak{t}(7),\mathfrak{p}(7)$} (p) ; \draw (p) node [rect2] {\tiny $ \mathfrak{n}({g}), \mathfrak{f}({g}) $} ; \draw (m) node [rect2] {\tiny $ \mathfrak{n}({f}), \mathfrak{f}({f}) $} ; \draw (l) node [rect2] {\tiny $ \mathfrak{n}({e}), \mathfrak{f}({e}) $} ; \draw
(k) node [rect2] {\tiny $ \mathfrak{n}(d), \mathfrak{f}(d) $} ; \draw (g) node [rect2] {\tiny $ \mathfrak{n}(c),\mathfrak{f}(c) $} ; \draw (f) node [rect2] {\tiny $ \mathfrak{n}(b),\mathfrak{f}(b) $} ; \draw (c) node [rect2] {\tiny $ \mathfrak{n}(a),\mathfrak{f}(a) $} ; \draw (a) node [rect1] {\tiny $ r,\mathfrak{e}_A(1) $} ; \end{tikzpicture} \end{equs} The map $ \Delta^{\!+} $ behaves the same way except that we start with a tree decorated by $ (r,m) $ at the root and that we exclude the case described in \eqref{example_root}. \end{example} \begin{example} Next we provide a more explicit example of the computations for the maps $ \Delta^{\!+} $ and $ \Delta $ on the tree \begin{equs}\label{kdvTK2} \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t11) at (-2,4); \coordinate (t12) at (-3,6); \coordinate (t13) at (-1,6); \coordinate (t2) at (1,2); \draw[kernels2] (t11) -- (t13); \draw[kernels2] (t11) -- (t12); \draw[kernels2] (t1) -- (root); \draw[symbols] (t1) -- (t11); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ r $}] (trinode) at (tri) {}; \node[not] (trinode) at (t1) {}; \node[var] (rootnode) at (t12) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t13) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} \end{equs} that appears for the KdV equation \eqref{kdvIntro}. 
We have that \begin{equs} \Delta \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t11) at (-2,4); \coordinate (t12) at (-3,6); \coordinate (t13) at (-1,6); \coordinate (t2) at (1,2); \draw[kernels2] (t11) -- (t13); \draw[kernels2] (t11) -- (t12); \draw[kernels2] (t1) -- (root); \draw[symbols] (t1) -- (t11); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ r $}] (trinode) at (tri) {}; \node[not] (trinode) at (t1) {}; \node[var] (rootnode) at (t12) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t13) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t11) at (-2,4); \coordinate (t12) at (-3,6); \coordinate (t13) at (-1,6); \coordinate (t2) at (1,2); \draw[kernels2] (t11) -- (t13); \draw[kernels2] (t11) -- (t12); \draw[kernels2] (t1) -- (root); \draw[symbols] (t1) -- (t11); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ r $}] (trinode) at (tri) {}; \node[not] (trinode) at (t1) {}; \node[var] (rootnode) at (t12) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t13) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} \otimes \mathbf{1} + \sum_{m \leq r+1} \frac{ \lambda^m}{m!} \otimes \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t11) at (-2,4); \coordinate (t12) at (-3,6); \coordinate (t13) at (-1,6); \coordinate (t2) at (1,2); \draw[kernels2] (t11) -- (t13); \draw[kernels2] (t11) -- (t12); \draw[kernels2] (t1) -- (root); 
\draw[symbols] (t1) -- (t11); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,m) $}] (trinode) at (tri) {}; \node[not] (trinode) at (t1) {}; \node[var] (rootnode) at (t12) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t13) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} + \sum_{m \leq r} \frac{1}{m!} \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t2) at (1,2); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ r $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ _\ell^m $}}; \node[var] (trinode) at (t2) {\tiny{$ k_2 $}}; \end{tikzpicture} \otimes \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t2) at (1,2); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r-1,m) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_2 $}}; \end{tikzpicture} \end{equs} where $ \ell = k_1 + k_2 $ and we have used notations similar to those in Example~\ref{tex:kdv} and in \eqref{DRnot}. We have introduced a new graphical notation for a node carrying the decoration $(m,\ell)$ when $ m \neq 0 $: $ \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \node[var] (rootnode) at (root) {\tiny{$ _\ell^m $}}; \end{tikzpicture} $. One can notice that the second abstract Taylor expansion is shorter.
This is due to the fact that there was one blue edge (in $ \mathfrak{L}_+ $) on the path connecting the cutting tree to the root. In addition we have \begin{equs} \Delta^{\!+} \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t11) at (-2,4); \coordinate (t12) at (-3,6); \coordinate (t13) at (-1,6); \coordinate (t2) at (1,2); \draw[kernels2] (t11) -- (t13); \draw[kernels2] (t11) -- (t12); \draw[kernels2] (t1) -- (root); \draw[symbols] (t1) -- (t11); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,m) $}] (trinode) at (tri) {}; \node[not] (trinode) at (t1) {}; \node[var] (rootnode) at (t12) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t13) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t11) at (-2,4); \coordinate (t12) at (-3,6); \coordinate (t13) at (-1,6); \coordinate (t2) at (1,2); \draw[kernels2] (t11) -- (t13); \draw[kernels2] (t11) -- (t12); \draw[kernels2] (t1) -- (root); \draw[symbols] (t1) -- (t11); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,m) $}] (trinode) at (tri) {}; \node[not] (trinode) at (t1) {}; \node[var] (rootnode) at (t12) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t13) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} \otimes \mathbf{1} + \mathbf{1} \otimes \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t11) at (-2,4); \coordinate (t12) at (-3,6); \coordinate (t13) at (-1,6); 
\coordinate (t2) at (1,2); \draw[kernels2] (t11) -- (t13); \draw[kernels2] (t11) -- (t12); \draw[kernels2] (t1) -- (root); \draw[symbols] (t1) -- (t11); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,m) $}] (trinode) at (tri) {}; \node[not] (trinode) at (t1) {}; \node[var] (rootnode) at (t12) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t13) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} + \sum_{n \leq r} \frac{1}{n!} \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t2) at (1,2); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,m) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ _\ell^n $}}; \node[var] (trinode) at (t2) {\tiny{$ k_2 $}}; \end{tikzpicture} \otimes \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t2) at (1,2); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r-1,n) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_2 $}}; \end{tikzpicture} \end{equs} One of the main differences between $ \Delta $ and $ \Delta^{\!+} $ is that for $ \Delta^{\!+} $ we do not have a Taylor expansion on the edge connected to the root, because the elements $ \lambda^k $ are not in $ \mathcal{H}_+ $. With this property, $ \mathcal{H}_+ $ will be a connected Hopf algebra.
\end{example} We use the symbolic notation to provide an alternative, recursive definition of the two maps $ \Delta : \mathcal{H} \rightarrow \mathcal{H} \otimes \mathcal{H}_+ $ and $ \Delta^{\!+} : \mathcal{H}_+ \rightarrow \mathcal{H}_+ \otimes \mathcal{H}_+ $: \begin{equation} \label{def_deltas} \begin{aligned} \Delta \mathbf{1} & = \mathbf{1} \otimes \mathbf{1}, \quad \Delta \lambda^{\ell} = \lambda^{\ell} \otimes \mathbf{1}\\ \Delta \CI^{r}_{o_1}( \lambda_{k}^{\ell} F) & = \left( \CI^{r}_{o_1}( \lambda_{k}^{\ell}\cdot) \otimes \mathrm{id} \right) \Delta \CD^{r-\ell}(F) \\ \Delta \CI^{r}_{o_2}( \lambda_{k}^{\ell} F) & = \left( \CI^{r}_{o_2}( \lambda_{k}^{\ell} \cdot) \otimes \mathrm{id} \right) \Delta \CD^{r-\ell-1}(F) + \sum_{m \leq r +1} \frac{ \lambda^{m}}{m!} \otimes \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} F) \\ \Delta^{\!+} \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} F) & = \left( \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} \cdot) \otimes \mathrm{id} \right) \Delta \CD^{r-\ell-1}(F) + \mathbf{1} \otimes \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} F) \end{aligned} \end{equation} where $ o_1 = (\mathfrak{t}_1,p_1) $, $ \mathfrak{t}_1 \notin \mathfrak{L}_+ $ and $ o_2 = (\mathfrak{t}_2,p_2) $, $ \mathfrak{t}_2 \in \mathfrak{L}_+ $. \begin{remark} \label{comparisonalgebra} The maps $ \Delta^{\!+} $ and $ \Delta $ are a variant of the maps used in \cite{reg,BHZ} for the recentering of iterated integrals in the context of singular SPDEs. They could be understood as a deformed Butcher-Connes-Kreimer coproduct. We present the ideas behind their construction: \begin{itemize} \item The decorations $ \mathfrak{f} $ are inert and behave nicely with respect to the extraction/cutting operation. \item The recursive definition \eqref{def_deltas} is close to the definition of the Connes-Kreimer coproduct with an operator which grafts a forest onto a new root (see \cite{CK}). \item The deformation is given by the sum over $ m $, in which the root decorations are increased.
The number of terms in the sum is finite, bounded by $ r+1 $. This deformation corresponds to the one used for SPDEs but with a different projection. In Numerical Analysis, the length of the Taylor expansion is governed by the path connecting the root to the edge we are considering, whereas for SPDEs, it depends on the tree above the edge. Therefore, the structure proposed here is new in comparison to the literature and shows the universality of the deformation observed for singular SPDEs. \item There are some interesting simplifications in the definition of $ \Delta $ and $ \Delta^{\!+} $ in comparison to \cite{reg,BHZ}. Indeed, one has \begin{equs} \Delta \lambda^{\ell} = \lambda^{\ell} \otimes \mathbf{1} \end{equs} instead of the expected definition for the polynomial coproduct \begin{equs} \Delta \lambda = \lambda \otimes \mathbf{1} + \mathbf{1} \otimes \lambda, \quad \Delta \lambda^{\ell} = \sum_{m \leq \ell} \binom{\ell}{m} \lambda^{m} \otimes \lambda^{\ell - m} \end{equs} and $ \Delta^{\!+} $ is not defined on polynomials. This comes from our numerical scheme: we are only interested in recentering around $ 0 $. Therefore, the right factor of each tensor product will be evaluated at zero, so these terms can be omitted at the level of the algebra. Such simplifications can also be used in the context of SPDEs, where one considers random objects of the form $ \Pi_x T $ which are iterated integrals recentered around the point $ x $. When one wants to construct these stochastic objects, the interest lies in their law, and it turns out that their law is invariant under translation. One can therefore restrict attention to $ \Pi_0 T $, which corresponds to the Numerical Analysis framework. With this simplification, we obtain a simpler formulation of the antipode, given in \eqref{antipode_rec}. \end{itemize} \end{remark} \begin{remark} For the sequel, we will use mainly the symbolic notation \eqref{def_deltas} which is very useful for carrying out recursive proofs.
We will also develop a recursive formulation of the general numerical scheme. This approach is also crucial in \cite{reg} and has been pushed forward in \cite{BR18} for singular SPDEs. \end{remark} In the next proposition, we prove the equivalence between the recursive and non-recursive definitions. \begin{proposition} The definitions \eqref{eq:co-action_plus} and \eqref{eq:coproduct_plus} coincide with \eqref{def_deltas}. \end{proposition} \begin{proof} The operator $\Delta$ is multiplicative on $\hat \mathcal{H}$. It remains to verify that the recursive identities hold as well. We consider $\Delta \sigma$ with $ \sigma = \CI^{r}_{(\mathfrak{t}_2,p)}( \lambda^{\ell}_{k} \tau)$ and $ \mathfrak{t}_2 \in \mathfrak{L}_+ $. We write $ \tau = F_{\mathfrak{e}}^{\mathfrak{n},\mathfrak{f}}$, $ \bar \tau = F_{\mathfrak{e}, r-\ell-1}^{\mathfrak{n},\mathfrak{f}} $ where $ F = \prod_i T_i $ is a forest formed of the trees $ T_i $. One has $ \bar \tau = \prod_i (T_i)_{\mathfrak{e}_i, r-\ell-1}^{\mathfrak{n}_i,\mathfrak{f}_i} $ and the maps $ \mathfrak{n}, \mathfrak{f}, \mathfrak{e} $ are obtained as disjoint sums of the $ \mathfrak{n}_i, \mathfrak{f}_i, \mathfrak{e}_i $. We write $\sigma = T^{\bar \mathfrak{n},\bar \mathfrak{f}}_{\bar \mathfrak{e},r}$ where \begin{equs} \bar \mathfrak{e} = \mathfrak{e} + \mathbf{1}_e (\mathfrak{t}_2,p), \quad \bar \mathfrak{f} = \mathfrak{f} + \mathbf{1}_u k, \quad \bar \mathfrak{n} = \mathfrak{n} + \mathbf{1}_u \ell \end{equs} and $e$ denotes the trunk of type $\mathfrak{t}_2$ created by $\CI_{(\mathfrak{t}_2,p)}$, $\rho$ is the root of $T$ and $ u $ is such that $ e=(\rho,u) $. It follows from these definitions that \begin{equs} \mathfrak{A}(T) = \{\{\rho\}\} \cup \{ A \cup \{\rho,e\}\,:\, A \in \mathfrak{A}(F)\}\; \end{equs} where $ \mathfrak{A}(F) = \sqcup_i \mathfrak{A}(T_i)$ and the $ A $ are forests. One can actually rewrite \eqref{eq:co-action_plus} in exactly the same way for forests.
Then, we have the identity \begin{equs} \Delta \sigma &= (\CI^{r}_{(\mathfrak{t}_2,p)}( \lambda^{\ell}_k \cdot) \otimes \mathrm{id}) \Delta \bar \tau + \sum_{\mathfrak{e}_{\bullet}} {1\over \mathfrak{e}_{\bullet}!} (\bullet, \pi \mathfrak{e}_{\bullet},0,0 ,0) \otimes (T, \bar{\mathfrak{n}},\bar{\mathfrak{f}},\bar{\mathfrak{e}},(r,\mathfrak{e}_{\bullet})) \\ & = \left( \CI^{r}_{(\mathfrak{t}_2,p)}( \lambda_{k}^{\ell} \cdot) \otimes \mathrm{id} \right) \Delta \CD^{r-\ell-1}(\tau) + \sum_{m \leq r +1} \frac{ \lambda^{m}}{m!} \otimes \CI^{(r,m)}_{(\mathfrak{t}_2,p)}( \lambda_{k}^{\ell} \tau)\; \end{equs} where the recursive $\left( \CI^{r}_{(\mathfrak{t}_2,p)}( \lambda^{\ell}_k \cdot) \otimes \mathrm{id} \right) \Delta$ encodes the extraction of $ A \cup \{\rho,e\} $, $ A \in \mathfrak{A}(F) $. We can perform a similar proof for $ \mathfrak{t}_1 \notin \mathfrak{L}_+ $ and for $ \Delta^{\!+} $. The main difference is that the sum over the polynomial decoration is removed for an edge not in $ \mathfrak{L}_+ $, and that for $ \Delta^{\!+} $ only the first term of this sum is kept. \end{proof} \begin{example}\label{rem:RecDnlsT1} We illustrate the recursive definition of $ \Delta $ by performing computations on relevant decorated trees that one encounters in practice, for instance in the case of the cubic NLS equation \eqref{nlsIntro}. For the decorated tree \begin{equs} T_1 = \CI_{(\mathfrak{t}_2,0)} \left( \lambda_k F_1 \right) \quad F_1 = \CI_{(\mathfrak{t}_1,1)}( \lambda_{k_1}) \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_2}) \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_3}) \end{equs} which appears in the context of the cubic Schrödinger equation, see Section~\ref{sec:nls}, we have that \begin{equs}\label{comps} \Delta \CD^r(T_1) = \CD^r(T_1) \otimes \mathbf{1} + \sum_{m \leq r+1} \frac{ \lambda^m}{m!} \otimes \hat \CD^{(r,m)}( T_1).
\end{equs} Relation \eqref{comps} is proven as follows: Using the definition of $ \CD^r$ in \eqref{DR} as well as~\eqref{def_deltas} yields that \begin{equs}\label{comp1} \Delta \CD^r(T_1) & = \Delta \CI^{r}_{(\mathfrak{t}_2,0)}( \lambda_{k} F_1) \\& = \left( \CI^{r}_{(\mathfrak{t}_2,0)}( \lambda_k \cdot) \otimes \mathrm{id} \right) \Delta \CD^{r-1}(F_1) + \sum_{m \leq r +1} \frac{ \lambda^{m}}{m!} \otimes \CI^{(r,m)}_{(\mathfrak{t}_2,0)}( \lambda_k F_1).\end{equs} Thanks to the definition of $\hat \CD^r$ in \eqref{DRhat} we can conclude that \[ \CI^{(r,m)}_{(\mathfrak{t}_2,0)}( \lambda_k F_1) = \hat \CD^{(r,m)}\CI_{(\mathfrak{t}_2,0)}( \lambda_{k} F_1) =\hat \CD^{(r,m)} (T_1) \] which yields together with \eqref{comp1} that \begin{equ}\label{compi} \Delta \CD^r(T_1) = \left( \CI^{r}_{(\mathfrak{t}_2,0)}( \lambda_k \cdot) \otimes \mathrm{id} \right) \Delta \CD^{r-1}(F_1) + \sum_{m \leq r +1} \frac{ \lambda^{m}}{m!} \otimes \hat \CD^{(r,m)} (T_1). \end{equ} Next we need to analyse the term $ \left( \CI^{r}_{(\mathfrak{t}_2,0)}( \lambda_k \cdot) \otimes \mathrm{id} \right) \Delta \CD^{r-1}(F_1) $. First we use the multiplicativity of $\CD^r$ (cf. \eqref{DR}) and of the coproduct, which yields that \begin{equs}\label{comp2} \Delta \CD^{r-1}(F_1) = \left( \Delta \CI^{r-1}_{(\mathfrak{t}_1,1)}( \lambda_{k_1})\right)\left( \Delta \CI^{r-1}_{(\mathfrak{t}_1,0)}( \lambda_{k_2}) \right)\left(\Delta \CI^{r-1}_{(\mathfrak{t}_1,0)}( \lambda_{k_3})\right) .
\end{equs} Thanks to \eqref{def_deltas} we furthermore have that \begin{equs}\label{comp3} \Delta \CI^{r-1}_{(\mathfrak{t}_1,p)}( \lambda_{k_j}) &= \left( \CI^{r-1}_{(\mathfrak{t}_1,p)}( \lambda_{k_j}\cdot) \otimes \mathrm{id} \right) \Delta \CD^{r-1}( \mathbf{1} )\\& = \left( \CI^{r-1}_{(\mathfrak{t}_1,p)}( \lambda_{k_j}\cdot) \otimes \mathrm{id} \right)\left( \mathbf{1} \otimes \mathbf{1}\right) = \CI^{r-1}_{(\mathfrak{t}_1,p)}( \lambda_{k_j}) \otimes \mathbf{1} \end{equs} where we have used that $ \CD^{r-1}( \mathbf{1} ) = \mathbf{1}$ and $\Delta \mathbf{1} = \mathbf{1} \otimes \mathbf{1}$, see also \eqref{def_deltas}. Plugging~\eqref{comp3} into \eqref{comp2} yields that \begin{equs}\label{CoF1} \Delta \CD^{r-1}(F_1) \\& = \left( \CI^{r-1}_{(\mathfrak{t}_1,1)}( \lambda_{k_1}) \otimes \mathbf{1} \right) \left( \CI^{r-1}_{(\mathfrak{t}_1,0)}( \lambda_{k_2}) \otimes \mathbf{1} \right) \left( \CI^{r-1}_{(\mathfrak{t}_1,0)}( \lambda_{k_3}) \otimes \mathbf{1} \right)\\ &= \CI^{r-1}_{(\mathfrak{t}_1,1)}( \lambda_{k_1}) \CI^{r-1}_{(\mathfrak{t}_1,0)}( \lambda_{k_2}) \CI^{r-1}_{(\mathfrak{t}_1,0)}( \lambda_{k_3}) \otimes \mathbf{1} = \CD^{r-1}(F_1) \otimes \mathbf{1} . \end{equs} Hence, \begin{equs} \left( \CI^{r}_{(\mathfrak{t}_2,0)}( \lambda_k \cdot) \otimes \mathrm{id} \right) \Delta \CD^{r-1}(F_1)& = \left( \CI^{r}_{(\mathfrak{t}_2,0)}( \lambda_k \cdot) \otimes \mathrm{id} \right) \left( \CD^{r-1}(F_1) \otimes \mathbf{1} \right)\\ & = \CI^{r}_{(\mathfrak{t}_2,0)}( \lambda_k F_1) \otimes \mathbf{1} = \CD^{r}(T_1) \otimes \mathbf{1}. \end{equs} Plugging this into \eqref{compi} yields \eqref{comps}. \end{example} \subsection{Hopf {algebra} and comodule structures} Using the two maps $ \Delta $ and $ \Delta^{\!+} $, we want to identify a comodule structure over a Hopf algebra. Here, we provide a brief reminder of this structure for a reader not familiar with it. 
For simplicity, we will use the notation of the spaces introduced above as well as the maps $ \Delta $ and $ \Delta^{\!+} $. The proof that we are indeed in this framework is then given in Proposition~\ref{Hopf_algebras} below. A {\it bialgebra} $(\mathcal{H}_+,\mathcal{M},\mathbf{1},\Delta^{\!+},\mathbf{1}^\star)$ is given by: \begin{itemize} \item A vector space $ \mathcal{H}_+ $ over $\C$. \item A linear map $\mathcal{M}:\mathcal{H}_+\otimes \mathcal{H}_+ \to \mathcal{H}_+$ (product) and a unit map $\eta: \C \to \mathcal{H}_+$, $r\mapsto r\mathbf{1}$, with $\mathbf{1}\in \mathcal{H}_+$ (identity), such that $(\mathcal{H}_+,\mathcal{M},\eta)$ is a unital associative algebra. \item Linear maps $\Delta^{\!+} :\mathcal{H}_+ \to \mathcal{H}_+\otimes \mathcal{H}_+$ (coproduct) and $\mathbf{1}^\star:\mathcal{H}_+\to\C$ (counit), such that $(\mathcal{H}_+,\Delta^{\!+},\mathbf{1}^\star)$ is a counital coassociative coalgebra, namely \begin{equation}\label{e:coasso} (\Delta^{\!+}\otimes\mathrm{id})\Delta^{\!+}=(\mathrm{id}\otimes\Delta^{\!+})\Delta^{\!+}, \qquad (\mathbf{1}^\star\otimes\mathrm{id})\Delta^{\!+}= (\mathrm{id}\otimes\mathbf{1}^\star)\Delta^{\!+}=\mathrm{id} \end{equation} \item $ \Delta^{\!+} $ and $ \mathbf{1}^{\star} $ (resp. $ \mathcal{M} $ and $ \mathbf{1} $) are homomorphisms of algebras (resp. of coalgebras). \end{itemize} A {\it Hopf algebra} is a bialgebra $(\mathcal{H}_+,\mathcal{M},\mathbf{1},\Delta^{\!+},\mathbf{1}^\star)$ endowed with a linear map $\mathcal{A}: \mathcal{H}_+ \to \mathcal{H}_+$ (antipode) such that \begin{equation}\label{anti} \mathcal{M}(\mathrm{id}\otimes \mathcal{A})\Delta^{\!+} = \mathcal{M}(\mathcal{A}\otimes\mathrm{id})\Delta^{\!+}= \mathbf{1}^\star\mathbf{1}.
\end{equation} A {\it right comodule} over a bialgebra $(\mathcal{H}_+,\mathcal{M},\mathbf{1},\Delta^{\!+},\mathbf{1}^\star)$ is a pair $(\mathcal{H},\Delta)$ where $\mathcal{H}$ is a vector space and $\Delta: \mathcal{H} \to \mathcal{H} \otimes \mathcal{H}_+$ is a linear map such that \begin{equs}[e:coaction] (\Delta\otimes\mathrm{id})\Delta=(\mathrm{id}\otimes\Delta^{\!+})\Delta, \qquad (\mathrm{id} \otimes \mathbf{1}^{\star})\Delta=\mathrm{id}. \end{equs} In our framework, the product $ \mathcal{M} $ is given by the forest product: \begin{equs} \mathcal{M} (F_1 \otimes F_2) = F_1 \cdot F_2. \end{equs} Most of the properties listed above are quite straightforward to check. In the next proposition we focus on the coassociativity of the maps $ \Delta $ and $ \Delta^{\!+} $ in \eqref{e:coasso} and~\eqref{e:coaction}. Before doing so, let us explain why this structure is useful. If one considers characters, that is multiplicative maps $ g : \mathcal{H}_+ \rightarrow \C$, then the coproduct $ \Delta^{\!+} $ and the antipode $\mathcal{A} $ allow us to put a group structure on them. We denote the group of such characters by $ \mathcal{G} $. The product for this group is the convolution product $ \star $ given for $ f,g \in \mathcal{G} $ by: \begin{equs} f \star g = \left( f \otimes g \right) \Delta^{\!+}. \end{equs} We do not need a multiplication because we use the identification $ \C \otimes \C \cong \C $. The inverse is given by the antipode: \begin{equs} f^{-1} = f(\mathcal{A} \cdot). \end{equs} The comodule structure allows us to have an action of $ \mathcal{G} $ on $ \mathcal{H} $ defined by \begin{equs} \Gamma_{f} = \left( \mathrm{id} \otimes f \right) \Delta, \quad \Gamma_{f} \Gamma_{g} = \Gamma_{f \star g}, \quad \Gamma_f^{-1} = \Gamma_{f^{-1}}.
\end{equs} In Section~\ref{sec::Brikhoff}, we will use these structures to decompose our scheme for iterated integrals, that is, a character $ \Pi^n : \mathcal{H} \rightarrow \mathcal{C}$, into \begin{equs} \Pi^n = \left( \hat \Pi^n \otimes A^n \right) \Delta \end{equs} where $ \mathcal{C} $ is a space introduced in Section~\ref{sec::recursive_pi}, $ A^n \in \mathcal{G} $ (defined from $ \Pi^n $) and $ \hat \Pi^n : \mathcal{H} \rightarrow \mathcal{C}$. In the identity above we have made the following identification: $ \mathcal{C} \otimes \C \cong \mathcal{C} $. The main point of this decomposition is to make the character $ \hat \Pi^n $ appear, which is simpler than $ \Pi^n $. This will help us in carrying out the local error analysis in Section~\ref{local error analysis}. \begin{proposition} \label{bialgebra} One has: \begin{equs} \left( \Delta \otimes \mathrm{id} \right) \Delta = \left( \mathrm{id} \otimes \Delta^{\!+} \right) \Delta, \quad \left( \Delta^{\!+} \otimes \mathrm{id} \right) \Delta^{\!+} = \left( \mathrm{id} \otimes \Delta^{\!+} \right) \Delta^{\!+}. \end{equs} \end{proposition} \begin{proof} We proceed by induction and we perform the proof only for $ \CI^{r}_{o_2}( \lambda_{k}^{\ell} F) $. The other case follows similar steps.
Note that \begin{equs} & \left( \Delta \otimes \mathrm{id} \right) \Delta \CI^{r}_{o_2}( \lambda_{k}^{\ell} F) \\ & = \left(\Delta \CI^{r}_{o_2}( \lambda_{k}^{\ell} \cdot) \otimes \mathrm{id} \right) \Delta \CD^{r-\ell-1}(F) + \sum_{m \leq r + 1} \Delta \frac{ \lambda^{m}}{m!} \otimes \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} F) \\ & = \sum_{m \leq r + 1} \frac{ \lambda^{m}}{m!} \otimes \left( \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} \cdot) \otimes \mathrm{id} \right) \Delta \CD^{r-\ell-1}(F) \\ & + \sum_{m \leq r +1} \frac{ \lambda^{m}}{m!} \otimes \mathbf{1} \otimes \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} F) +\left( \left( \CI^{r}_{o_2}( \lambda_{k}^{\ell} \cdot) \otimes \mathrm{id} \right) \Delta \otimes \mathrm{id} \right) \Delta \CD^{r-\ell-1}(F). \end{equs} On the other hand, we get \begin{equs} & \left( \mathrm{id} \otimes \Delta^{\!+} \right) \Delta \CI^{r}_{o_2}( \lambda_{k}^{\ell} F) \\ & = \left(\CI^{r}_{o_2}( \lambda_{k}^{\ell} \cdot) \otimes \Delta^{\!+} \right) \Delta \CD^{r-\ell-1}(F) + \sum_{m \leq r+1} \frac{ \lambda^{m}}{m!} \otimes \Delta^{\!+} \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} F) \\ & = \left(\CI^{r}_{o_2}( \lambda_{k}^{\ell} \cdot) \otimes \Delta^{\!+} \right) \Delta \CD^{r-\ell-1}(F) + \sum_{m \leq r +1} \frac{ \lambda^{m}}{m!} \otimes \mathbf{1} \otimes \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} F) \\ & +\sum_{m \leq r +1} \frac{ \lambda^{m}}{m!} \otimes \left( \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} \cdot) \otimes \mathrm{id} \right) \Delta \CD^{r-\ell-1}(F). 
\end{equs} Next we observe that \begin{equs} \left(\CI^{r}_{o_2}( \lambda_{k}^{\ell} \cdot) \otimes \Delta^{\!+} \right) \Delta \CD^{r-\ell-1}(F) & = \left( \CI^{r}_{o_2}( \lambda_{k}^{\ell} \cdot) \otimes \mathrm{id} \otimes \mathrm{id} \right) \left( \mathrm{id} \otimes \Delta^{\!+} \right) \Delta \CD^{r-\ell-1}(F) \\ & = \left( \CI^{r}_{o_2}( \lambda_{k}^{\ell} \cdot) \otimes \mathrm{id} \otimes \mathrm{id} \right) \left( \Delta \otimes \mathrm{id} \right) \Delta \CD^{r-\ell-1}(F) \\ & = \left( \left( \CI^{r}_{o_2}( \lambda_{k}^{\ell} \cdot) \otimes \mathrm{id} \right) \Delta \otimes \mathrm{id} \right) \Delta \CD^{r-\ell-1}(F) \end{equs} where we used an inductive argument for \begin{equs} \left( \Delta \otimes \mathrm{id} \right) \Delta \CD^{r-\ell-1}(F) = \left( \mathrm{id} \otimes \Delta^{\!+} \right) \Delta \CD^{r-\ell-1}(F) . \end{equs} This yields the assertion. \end{proof} \begin{proposition} \label{Hopf_algebras} There exists an algebra morphism $ \mathcal{A} : \mathcal{H}_+ \rightarrow \mathcal{H}_+ $ so that $ (\mathcal{H}_+, \cdot, \Delta^{\!+}, \mathbf{1}, \mathbf{1}^{\star}, \mathcal{A} ) $ is a Hopf algebra. The map $ \Delta : \mathcal{H} \rightarrow \mathcal{H} \otimes \mathcal{H}_+ $ turns $ \mathcal{H} $ into a right comodule for $ \mathcal{H}_+ $ with counit $ \mathbf{1}^{\star} $. \end{proposition} \begin{proof} From Proposition~\ref{bialgebra}, $ \mathcal{H}_+ $ is a bialgebra and $ \Delta $ is a coaction. In fact, $ \mathcal{H}_+ $ is a connected graded bialgebra with the grading given by the number of edges. Therefore from \cite[Corollary II.3.2]{Man}, it is a Hopf algebra and we get the existence of a unique map called the antipode such that: \begin{equs} \label{antipode_identity} \mathcal{M} \left( \mathcal{A} \otimes \mathrm{id} \right) \Delta^{\!+} = \mathcal{M} \left( \mathrm{id} \otimes \mathcal{A} \right) \Delta^{\!+} = \mathbf{1} \mathbf{1}^{\star}. \end{equs} This concludes the proof. 
\end{proof} We use the identity \eqref{antipode_identity} to write a recursive formulation for the antipode: \begin{proposition} For every $ \CI^{(r,m)}_{o_2}( \lambda_k^{\ell} F ) \in \mathcal{H}_+ $, one has \begin{equs} \label{antipode_rec} \mathcal{A} \CI^{(r,m)}_{o_2}( \lambda_k^{\ell} F ) & = \mathcal{M} \left( \CI^{(r,m)}_{o_2}( \lambda_k^{\ell} \cdot )\otimes \mathcal{A} \right) \Delta \CD^{r-\ell-1}(F). \end{equs} \end{proposition} \begin{proof} We use the identity~\eqref{antipode_identity} which implies that \begin{equs} \mathcal{M} \left( \mathrm{id} \otimes \mathcal{A} \right) \Delta^{\!+} \CI^{(r,m)}_{o_2}( \lambda_k^{\ell} F )= \mathbf{1} \, \mathbf{1}^{\star}\left( \CI^{(r,m)}_{o_2}( \lambda_k^{\ell} F ) \right). \end{equs} As $\mathbf{1}^{\star}$ is non-zero only on the empty forest we can thus conclude that \begin{equs} \mathcal{M} \left( \mathrm{id} \otimes \mathcal{A} \right) \Delta^{\!+} \CI^{(r,m)}_{o_2}( \lambda_k^{\ell} F )= 0.\end{equs} Then, we have by the definition of $\Delta^{\!+} \CI^{(r,m)}_{o_2}( \lambda_k^{\ell} F )$ given in \eqref{def_deltas} that \begin{equs} \mathcal{M} \left( \mathrm{id} \otimes \mathcal{A} \right) \left( \left( \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} \cdot) \otimes \mathrm{id} \right) \Delta \CD^{r-\ell-1}(F) + \mathbf{1} \otimes \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} F) \right)= 0 \end{equs} which yields \eqref{antipode_rec}. \end{proof} \begin{remark} The formula \eqref{antipode_rec} can be rewritten in an alternative recursive form. Indeed, let us introduce the reduced coproduct: \begin{equs} \tilde{\Delta} F = \Delta^{\!+} F - F \otimes \mathbf{1} - \mathbf{1} \otimes F. \end{equs} Then, we can rewrite \eqref{antipode_identity} as follows \begin{equs} \label{recursive formula} \mathcal{A} F = - F - \sum_{(F)} F' \cdot (\mathcal{A} F'') \quad \tilde \Delta F = \sum_{(F)} F' \otimes F'' \end{equs} where we have used Sweedler's notation. \end{remark} \begin{example} We compute the antipode on some KdV trees (cf.
\eqref{kdvTK} and \eqref{kdvTK2}) using the formula \eqref{recursive formula}: \begin{equs}\mathcal{A} \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t2) at (1,2); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,n) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_2 $}}; \end{tikzpicture} & = - \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t2) at (1,2); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,n) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_2 $}}; \end{tikzpicture}, \\ \mathcal{A} \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t11) at (-2,4); \coordinate (t12) at (-3,6); \coordinate (t13) at (-1,6); \coordinate (t2) at (1,2); \draw[kernels2] (t11) -- (t13); \draw[kernels2] (t11) -- (t12); \draw[kernels2] (t1) -- (root); \draw[symbols] (t1) -- (t11); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,m) $}] (trinode) at (tri) {}; \node[not] (trinode) at (t1) {}; \node[var] (rootnode) at (t12) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t13) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} & = - \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); 
\coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t11) at (-2,4); \coordinate (t12) at (-3,6); \coordinate (t13) at (-1,6); \coordinate (t2) at (1,2); \draw[kernels2] (t11) -- (t13); \draw[kernels2] (t11) -- (t12); \draw[kernels2] (t1) -- (root); \draw[symbols] (t1) -- (t11); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,m) $}] (trinode) at (tri) {}; \node[not] (trinode) at (t1) {}; \node[var] (rootnode) at (t12) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t13) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} - \sum_{n \leq r} \frac{1}{n!} \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t2) at (1,2); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,m) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ _\ell^n $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} \mathcal{A} \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t2) at (1,2); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r-1,n) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_2 $}}; \end{tikzpicture} = - \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t11) at (-2,4); \coordinate (t12) at (-3,6); \coordinate (t13) at (-1,6); \coordinate (t2) at 
(1,2); \draw[kernels2] (t11) -- (t13); \draw[kernels2] (t11) -- (t12); \draw[kernels2] (t1) -- (root); \draw[symbols] (t1) -- (t11); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,m) $}] (trinode) at (tri) {}; \node[not] (trinode) at (t1) {}; \node[var] (rootnode) at (t12) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t13) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} + \sum_{n \leq r} \frac{1}{n!} \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t2) at (1,2); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r,m) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ _\ell^n $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t2) at (1,2); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (r-1,n) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_2 $}}; \end{tikzpicture}. \end{equs} \end{example} \section{Approximating iterated integrals} \label{sec::Iterated integrals} In this section, we introduce the main characters that map decorated trees to oscillatory integrals; the space of such integrals is denoted by $ \mathcal{C} $. The first character $ \Pi : \hat \mathcal{H} \rightarrow \mathcal{C} $ corresponds to the integral in Fourier space stemming from Duhamel's formula.
Then, the second character $ \Pi^n : \mathcal{H} \rightarrow \mathcal{C} $ gives an approximation of the first character, in the sense that if $ F \in \hat \mathcal{H} $, then $ \Pi^n \CD^r(F) $ is a low-regularity approximation of order $ r $ of $ \Pi F $. This is the main result of the section: Theorem~\ref{approxima_tree}. In order to prove this result, one needs to introduce an intermediate character $ \hat \Pi^n : \mathcal{H} \rightarrow \mathcal{C} $ that singles out the dominant oscillations (see Proposition~\ref{factorisation_dom}). The connection between $ \Pi^n $ and $ \hat \Pi^n $ is performed via a Birkhoff factorisation where one uses the coaction $ \Delta $. Such a factorisation seems natural in our context, see Remark~\ref{Birkoffnatural}, and it constitutes an application different from those in the existing literature on Birkhoff factorisations. Indeed, the approximation in $ \Pi^n $ is centered around well-chosen Taylor expansions that depend on the frequency interactions. Understanding these interactions is central for the local error analysis. The Birkhoff factorisation allows us to control the contributions of the dominant and lower-order parts. We conclude this section by checking that the approximation given by $ \Pi^n $ can be mapped back to the physical space, see Proposition~\ref{physical_space}, which is an important property for the practical implementation of the numerical scheme (cf. Remark \ref{rem:FFT}). \subsection{A recursive formulation} \label{sec::recursive_pi} For the rest of this section, an element of $ \mathfrak{L}_+ $ (resp. $ \mathfrak{L}_+ \times \lbrace 0,1\rbrace $) is denoted by $ \mathfrak{t}_2 $ (resp. $ o_2 $) and an element of $ \mathfrak{L} \setminus \mathfrak{L}_+ $ (resp. $ \mathfrak{L} \setminus \mathfrak{L}_+ \times \lbrace 0,1 \rbrace $) is denoted by $ \mathfrak{t}_1 $ (resp. $ o_1 $).
We denote by $ \mathcal{C} $ the space of functions of the form $ z \mapsto \sum_j Q_j(z)e^{i z P_j(k_1,...,k_n) } $ where the $ Q_j(z) $ are polynomials in $ z $ and the $ P_j $ are polynomials in $ k_1,...,k_n \in \Z^{d} $. The $ Q_j $ may also depend on $ k_1,...,k_n $. We use the pointwise product on $ \mathcal{C} $ for $ G_1(z) = Q_1(z)e^{i z P_1(k_1,...,k_n) } $ and $ G_2(z) = Q_2(z)e^{i z P_2( k_1,..., k_n) } $ given by: \begin{equs} ( G_1 G_2)(z) = Q_1(z) Q_2(z) e^{i z P(k_1,...,k_n)}, \quad P = P_1 + P_2. \end{equs} We want to define characters on decorated trees using their recursive construction. A character is a map defined from $ \hat \mathcal{H} $ into $ \mathcal{C} $ which respects the forest product, in the sense that $ g : \hat \mathcal{H} \rightarrow \mathcal{C} $ is a character if one has: \begin{equs} g(F \cdot \bar F) = g(F) g(\bar F), \quad F, \bar{F} \in \hat \mathcal{H}. \end{equs} We define the character $ \Pi : \hat \mathcal{H} \rightarrow \mathcal{C} $ by \begin{equation}\label{Pi} \begin{aligned} \Pi \left( F \cdot \bar F \right)(\tau) & = ( \Pi F)(\tau) ( \Pi \bar F )(\tau), \\ \Pi \left( \CI_{o_1}( \lambda_k^{\ell} F)\right)(\tau) & = e^{i \tau P_{o_1}(k)} \tau^{\ell} (\Pi F)(\tau), \\ \Pi \left( \CI_{o_2}( \lambda_k^{\ell} F)\right)(\tau) & = -i \vert \nabla\vert^{\alpha} (k) \int_{0}^{\tau} e^{i \xi P_{o_2}(k)} \xi^{\ell}(\Pi F)(\xi) d \xi, \end{aligned} \end{equation} where $ F, \bar F \in \hat \mathcal{H} $.
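The recursion \eqref{Pi} is straightforward to mimic numerically. The following sketch (our own illustration, with hypothetical helper names; it is not part of the scheme itself) approximates the Duhamel-type integral in the last line of \eqref{Pi} by trapezoidal quadrature for a pure oscillation with $ \ell = 0 $ and $ \alpha = 0 $, and checks it against the exact closed form $ -(e^{i \tau L}-1)/L $:

```python
import cmath

def duhamel_osc(L, ell, tau, n=20000):
    # Trapezoidal approximation of  -i * \int_0^tau xi^ell e^{i xi L} d xi,
    # the building block of the third line of the recursion for Pi.
    h = tau / n
    total = 0j
    for j in range(n + 1):
        xi = j * h
        w = 0.5 if j in (0, n) else 1.0
        total += w * xi**ell * cmath.exp(1j * xi * L)
    return -1j * h * total

# Cubic NLS-type phase: the leaves contribute k1^2 - k2^2 - k3^2 and the
# integration edge contributes (-k1 + k2 + k3)^2 (toy frequency values).
k1, k2, k3 = 2, 1, -1
L = (-k1 + k2 + k3) ** 2 + k1**2 - k2**2 - k3**2  # total oscillation, here 6

tau = 0.3
exact = -(cmath.exp(1j * tau * L) - 1) / L  # -i (e^{i tau L} - 1)/(i L)
assert abs(duhamel_osc(L, 0, tau) - exact) < 1e-8
```

The same quadrature applied recursively over the tree reproduces the oscillatory integrals computed symbolically below.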
\begin{example} With the aid of \eqref{Pi}, one can compute recursively the following oscillatory integrals arising in the cubic NLS equation~\eqref{nlsIntro} \begin{equs} (\Pi \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,1); \coordinate (tri) at (0,-1); \draw[kernels2] (tri) -- (root); \node[var] (rootnode) at (root) {\tiny{$ k_2 $}}; \node[not] (trinode) at (tri) {}; \end{tikzpicture}) (\tau) & = e^{-i \tau k_2^2}, \quad (\Pi \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,1); \coordinate (tri) at (0,-1); \draw[kernels2,tinydots] (tri) -- (root); \node[var] (rootnode) at (root) {\tiny{$ k_1 $}}; \node[not] (trinode) at (tri) {}; \end{tikzpicture}) (\tau) = e^{i \tau k_1^2}, \quad (\Pi \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,-1); \coordinate (t1) at (-2,1); \coordinate (t2) at (2,1); \coordinate (t3) at (0,2); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \node[not] (rootnode) at (root) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} )(\tau) = e^{i \tau (k_1^2 - k_2^2 - k_3^2)}, \\ ( \Pi \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture}) (\tau) & = -i \int^{\tau}_0 e^{is (-k_1 + k_2 + k_3)^2} e^{i s (k_1^2 - k_2^2 - k_3^2)} ds.
\end{equs} \end{example} We need a well-chosen approximation of order $ r $ for the character $ \Pi $ defined in~\eqref{Pi}, suitable in the sense that it embeds the dominant frequencies matching the regularity $n$ of the solution, see Remark~\ref{rem:regi}. Therefore, we consider a new family of characters defined now on $ \mathcal{H} $ and parametrised by $ n \in \N $: \begin{equation} \label{recursive_pi_r} \begin{aligned} \Pi^n \left( F \cdot \bar F \right)(\tau) & = \left( \Pi^n F \right)(\tau) \left( \Pi^n \bar F \right)(\tau), \quad (\Pi^n \lambda^{\ell})(\tau) = \tau^{\ell}, \\ (\Pi^n \CI^{r}_{o_1}( \lambda_{k}^{\ell} F ))(\tau) & =\tau^{\ell} e^{i \tau P_{o_1}(k)} (\Pi^n \CD^{r-\ell}(F))(\tau), \\ \left( \Pi^n \CI^{r}_{o_2}( \lambda^{\ell}_k F) \right) (\tau) & = \CK^{k,r}_{o_2} \left( \Pi^n \left( \lambda^{\ell} \CD^{r-\ell-1}(F) \right),n \right)(\tau). \end{aligned} \end{equation} All approximations are thereby carried out in the map $ \CK^{k,r}_{o_2} \left( \cdot,n \right) $ which is given in Definition~\ref{Taylor_exp} below. The main idea behind the map $ \CK^{k,r}_{o_2} \left( \cdot,n \right) $ is that all integrals are approximated through well-chosen Taylor expansions depending on the regularity $ n $ of the solution assumed a priori, and the interaction of the frequencies in the decorated trees. For a polynomial $ P(k_1,...,k_n)$, we define the degree of $ P $, denoted by $ \deg(P) $, as the maximum $ m $ such that $ k_i^{m} $ appears as a factor of some monomial in $ P $ for some~$i$. For example: \begin{equs} & P(k_1,k_2,k_3) = - 2 k_1 (k_2+k_3) + 2 k_2 k_3, \quad \deg(P) = 1,\\ & P(k_1,k_2,k_3) = k_1^2- 2 k_1 (k_2+k_3) + 2 k_2 k_3 , \quad \deg(P) = 2. \end{equs} \begin{definition} \label{Taylor_exp} Assume that $ G: \xi \mapsto \xi^{q} e^{i \xi P(k_1,...,k_n)} $ where $ P $ is a polynomial in the frequencies $ k_1,...,k_n $ and let $ o_2 = (\mathfrak{t}_2,p) \in \mathfrak{L}_+ \times \lbrace 0,1 \rbrace$ and $ r \in \N $.
Let $ k $ be a linear combination of $ k_1,...,k_n $ with coefficients in $ \lbrace -1,0,1 \rbrace $ and \[ \begin{aligned} \CL_{\text{\tiny{dom}}} & = \mathcal{P}_{\text{\tiny{dom}}} \left( P_{o_2}(k) + P \right), \quad \CL_{\text{\tiny{low}}} = \mathcal{P}_{\text{\tiny{low}}} \left( P_{o_2}(k) + P \right) \\ f(\xi) & = e^{i \xi \CL_{\text{\tiny{dom}}}}, \quad g(\xi) = e^{i \xi \CL_{\text{\tiny{low}}}}, \quad { \tilde g}(\xi) = e^{i \xi \left( P_{o_2}(k) + P \right)} . \end{aligned} \] Then, we define for $ n \in \N $ and $ r \geq q $ \begin{equ}[e:Pim1] \CK^{k,r}_{o_2} ( { G},n)(\tau) = \left\{ \begin{aligned} & -i \vert \nabla\vert^\alpha(k) \sum_{\ell \leq r - q} \frac{ { \tilde g}^{(\ell)}(0)}{\ell!} \int_0^{\tau} \xi^{\ell+q} d \xi, \, \text{if } n \geq \text{deg}\left(\mathcal{L}_{\text{\tiny{dom}}}^{r+1}\right) + \alpha , \\ & -i\vert \nabla\vert^\alpha(k) \sum_{\ell \leq r -q} \frac{g^{(\ell)}(0)}{\ell !} \, \Psi^{r}_{n,q}\left( \CL_{\text{\tiny{dom}}} ,\ell\right)(\tau), \quad \text{otherwise}. \\ \end{aligned} \right. \end{equ} Thereby we set for $ \left(r - q - \ell+1 \right) \deg(\CL_{\text{\tiny{dom}}}) + \ell \deg(\CL_{\text{\tiny{low}}}) + \alpha > n $ \begin{equs}[e:Pim] \Psi^{r}_{n,q}\left( \CL_{\text{\tiny{dom}}},\ell \right)(\tau) = \int_0^{\tau} \xi^{\ell+q} f(\xi) d \xi. \end{equs} Otherwise, \begin{equ}[e:Pim2] \Psi^{r}_{n,q}\left( \CL_{\text{\tiny{dom}}},\ell \right)(\tau) = \sum_{m \leq r - q - \ell} \frac{f^{(m)}(0)}{m!} \int_0^{\tau} \xi^{\ell+m+q} d \xi. \end{equ} Here $ \deg(\CL_{\text{\tiny{dom}}})$ and $\deg(\CL_{\text{\tiny{low}}}) $ denote the degrees of the polynomials $ \CL_{\text{\tiny{dom}}} $ and $ \CL_{\text{\tiny{low}}} $, respectively, and $\vert \nabla\vert^\alpha(k) = \prod_{\alpha = \sum \gamma_j < \deg(\CL)} k_j^{\gamma_j}$ (cf. \eqref{Lldef}). If $ r <q $, the map $ \CK^{k,r}_{o_2} ( { G},n)(\tau)$ is equal to zero.
\end{definition} \begin{remark}[Practical implementation]\label{rem:stab} In practical computations we need to stabilise the above approach, as the Taylor series expansion of $g$ may introduce derivatives on the numerical solution causing instability of the discretisation. We propose two ways to obtain stabilised high-order resonance based schemes without changing the underlying structure of the local error: \begin{itemize} \item Instead of straightforwardly applying a Taylor series expansion of $g$ we introduce a stabilisation in the Taylor series expansion itself based on finite difference approximations of type $ g'(0) = \frac{g(t)-g(0)}{t} + \mathcal{O}(t g''). $ For instance, at second and third order we will use that \begin{equation} \begin{aligned}\label{gExp1} g(\xi) & = g(0) + \xi \frac{g(t)-g(0)}{t} + \mathcal{O}(t \xi g'')\\ g(\xi) &= g(0) + \xi \frac{g(t)- g(-t)}{2t} + \frac{\xi^2}{2} \frac{g(t)- 2 g(0) + g(-t)}{t^2} + \mathcal{O}(t \xi^2 g'''). \end{aligned} \end{equation} We refer to \cite{Fornberg} for a simple recursive algorithm calculating the weights in compact finite difference formulas for any order of derivative and to any order of accuracy. \item We carry out a straightforward Taylor series expansion of $g$, but include suitable filter functions $\Psi$ in the discretisation. At second order they may for instance take the form \begin{equation}\label{psi} \Psi = \Psi \left(i \tau \mathcal{L}_\text{low}\right) \quad \text{ with }\quad \Psi(0) = 1 \quad \text{and} \quad \left\Vert \tau \Psi\left(i \tau \mathcal{L}_\text{low}\right) g'(0) \right\Vert \leq 1. \end{equation} For details on filter functions we refer to \cite{H2Tri} and the references therein. \end{itemize} Practical computations and choices of this stabilisation for concrete examples are detailed in Section \ref{sec:examples}.
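The error behaviour of these finite difference stabilisations is easily verified numerically. The following is a small sketch under the assumption $ g(\xi) = e^{i \xi \CL_{\text{\tiny{low}}}} $ with a toy value of $ \CL_{\text{\tiny{low}}} $; the helper names are our own:

```python
import cmath

def fd_forward(g, t):
    # forward difference for g'(0), error O(t g'')
    return (g(t) - g(0)) / t

def fd_central(g, t):
    # central difference for g'(0), error O(t^2 g''')
    return (g(t) - g(-t)) / (2 * t)

def fd_second(g, t):
    # second difference for g''(0), error O(t^2 g'''')
    return (g(t) - 2 * g(0) + g(-t)) / t**2

# lower-order phase g(xi) = e^{i xi L_low} with a toy value L_low = 4
L_low = 4.0
g = lambda xi: cmath.exp(1j * xi * L_low)
t = 1e-3
assert abs(fd_forward(g, t) - 1j * L_low) < t * L_low**2        # O(t g'')
assert abs(fd_central(g, t) - 1j * L_low) < t**2 * L_low**3     # O(t^2 g''')
assert abs(fd_second(g, t) - (1j * L_low) ** 2) < t**2 * L_low**4
```

Higher-order weights can be generated systematically, e.g. by the recursive algorithm of \cite{Fornberg} mentioned above.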
\end{remark} \begin{example} We consider $P_{\mathfrak{t}_2}( \lambda) = - \lambda^2$, $p = 0$, $ \alpha =0 $, $ k = -k_1+k_2+k_3 $ and \begin{equs} { G}(\xi) = \xi e^{i \xi ( k_1^2 - k_2^2 - k_3^2 )}. \end{equs} With the notation of Definition \ref{Taylor_exp} we observe that $q = 1$ and $P(k_1,k_2,k_3) = k_1^2 - k_2^2 - k_3^2$ such that \begin{equs} P_{o_2}(k) &+ P = (-1)^{p} P_{\mathfrak{t}_2}( (-1)^{p} k) + P \\ & = (-k_1+k_2+k_3)^2 + k_1^2 - k_2^2 - k_3^2 = 2k_1^2 - 2 k_1 (k_2+k_3) + 2 k_2 k_3. \end{equs} Hence, \begin{equs} \CL_{\text{\tiny{dom}}} & = 2 k_1^2, \quad \CL_{\text{\tiny{low}}} = - 2 k_1 (k_2+k_3) + 2 k_2 k_3, \end{equs} cf. also the Schrödinger Example \ref{exRDoNLS}. Furthermore, we observe as $\deg(\CL_{\text{\tiny{dom}}}) = 2$, $\deg(\CL_{\text{\tiny{low}}}) = 1$ and $q = 1$ that \begin{equs}\label{condE} \left(r - q - \ell+1 \right) \deg(\CL_{\text{\tiny{dom}}}) + \ell \deg(\CL_{\text{\tiny{low}}}) > n \quad \text{ if }\quad 2r - n > \ell. \end{equs} In the following we will also exploit that $f(0) = g(0) = 1$. \begin{itemize} \item {\bf Case $r = 0$:} As $ r< q = 1$ we have for all $n$ that \[ \CK^{k,0}_{o_2} ( { G},n)(\tau) = 0. \] \item {\bf Case $r = 1$:} For $n = 1$ we obtain \[ \CK^{k,1}_{o_2} ( { G},n)(\tau) = -i \Psi^{1}_{n,1}\left( \CL_{\text{\tiny{dom}}},0 \right) = -i \int_0^\tau \xi f(\xi) d\xi = \frac{-i}{2i k_1^2}\left( \tau e^ {2i \tau k_1^2}- \frac{ e^{2 i \tau k_1^2}-1}{2 i k_1^2}\right) \] as condition \eqref{condE} takes for $\ell = 0$ the form $2-n> 0$. On the other hand, for $n \geq 2$ we have that $n \geq \deg(\CL_{\text{\tiny{dom}}})$ such that \[ \CK^{k,1}_{o_2} ( { G},n)(\tau) = -i { \tilde g}(0) \int_0^\tau \xi d\xi =- i \frac{\tau^2}{2}. \] \item \noindent {\bf Case $r = 2$:} If $ n \geq 4$ we obtain \[ \CK^{k,2}_{o_2} ( { G},n)(\tau) = - i \left( \frac{\tau^2}{2} + \frac{ { \tilde g}(\tau) - 1}{\tau} \frac{\tau^3}{3}\right). \] Let $n \leq 3$.
We have that \begin{equs} \CK^{k,2}_{o_2} ( { G},n)(\tau) & = -i \left( \Psi^{2}_{n,1}\left( \CL_{\text{\tiny{dom}}},0 \right) + \frac{g(\tau)-1}{\tau} \Psi^{2}_{n,1}\left( \CL_{\text{\tiny{dom}}},1 \right) \right) \end{equs} and condition \eqref{condE} takes the form $4-n> \ell$. If $\ell = 1$ we thus obtain for $n =1,2$ \begin{equs} \Psi^{2}_{n \leq 2 ,1}\left( \CL_{\text{\tiny{dom}}},1 \right) = \int_0^\tau \xi^2 f(\xi) d\xi = \frac{1}{2 i k_1^2} \left(\tau^2 e^{2i \tau k_1^2} - 2 \Psi^{1}_{1,1}\left( \CL_{\text{\tiny{dom}}},0 \right) \right) \end{equs} and for $n = 3$ \begin{equs} \Psi^{2}_{n>2,1}\left( \CL_{\text{\tiny{dom}}},1 \right) = f(0)\int_0^\tau \xi^{2}d\xi = \frac{\tau^3}{3}. \end{equs} If $\ell = 0$, on the other hand, condition \eqref{condE} holds for $n = 1,2,3$. Hence, we have that \[ \Psi^{2}_{n \leq 3,1}\left( \CL_{\text{\tiny{dom}}},0 \right) = \int_0^\tau \xi f(\xi)d\xi = \Psi^{1}_{1,1}\left( \CL_{\text{\tiny{dom}}},0 \right) . \] \end{itemize} \end{example} \begin{lemma} \label{Taylor_bound} We keep the notation of Definition~\ref{Taylor_exp} and suppose that $ q \leq r $. Then one has \begin{equation} - i \vert \nabla\vert^{\alpha} (k) \int_{0}^{\tau} \xi^{q} { e^{i \xi \left (\CL_{\text{\tiny{dom}}} + \CL_{\text{\tiny{low}}}\right)}} d\xi -\CK^{k,r}_{o_2} ( { G},n)(\tau) = \CO(\tau^{r+2} k^{\bar n}) \end{equation} where $ \bar n = \max(n, \deg(\CL_{\text{\tiny{low}}}^{r-q +1}) + \alpha) $. \end{lemma} \begin{proof} Recall the notation of Definition~\ref{Taylor_exp} which implies that $$ \int_{0}^{\tau} \xi^{q} { e^{i \xi \left (\CL_{\text{\tiny{dom}}} + \CL_{\text{\tiny{low}}}\right)}} d\xi = \int_{0}^{\tau} \xi^{q} f(\xi) g(\xi) d\xi $$ with $f(\xi) = e^{i \xi \CL_{\text{\tiny{dom}}}}$ and $ g(\xi) = e^{i \xi \CL_{\text{\tiny{low}}}}$.
The claim follows by Taylor expanding the functions $ g, { \tilde g}$ and $ f $. If $ n \geq \text{deg}\left(\mathcal{L}_{\text{\tiny{dom}}}^{r+1}\right) + \alpha $ we have \begin{equs} -i \vert \nabla\vert^{\alpha} (k) & \int_{0}^{\tau} \xi^{q} f(\xi) g(\xi) d\xi +i \vert \nabla\vert^{\alpha} (k) \sum_{\ell \leq r - q} \frac{ { \tilde g}^{(\ell)}(0)}{\ell!} \int_0^{\tau} \xi^{\ell+q} d \xi \\ & = \CO( \tau^{r+2} \vert \nabla\vert^{\alpha} (k) \mathcal{L}_{\text{\tiny{dom}}}^{r+1}) \\ & = \CO(\tau^{r+2} k^{\bar n}). \end{equs} Else, we get \begin{equs} - i \vert \nabla\vert^{\alpha} (k) & \int_{0}^{\tau} \xi^{q} f(\xi) g(\xi) d\xi + i \vert \nabla\vert^{\alpha} (k) \sum_{\ell \leq r - q} \frac{g^{(\ell)}(0)}{\ell!} \int_0^{\tau} \xi^{\ell+q} f(\xi) d \xi \\ & = \CO(\tau^{r+2} \vert \nabla\vert^{\alpha} (k) g^{(r-q+1)}) \\& = \CO(\tau^{r+2} \vert \nabla\vert^{\alpha} (k) \CL_{\text{\tiny{low}}}^{r-q+1}) \end{equs} where the latter follows from the observation that $ g^{(\ell)}(\xi) = (i \CL_{\text{\tiny{low}}})^{\ell} e^{i \xi \CL_{\text{\tiny{low}}}} $. If on the other hand $ \left(r- q - \ell +1 \right) \deg(\CL_{\text{\tiny{dom}}}) + \ell \deg(\CL_{\text{\tiny{low}}}) + \alpha \leq n $ then \begin{equs} & - i \vert \nabla\vert^{\alpha} (k) \frac{g^{(\ell)}(0)}{\ell!} \int_0^{\tau} \xi^{\ell+q} f(\xi) d \xi +i \vert \nabla\vert^{\alpha} (k)\frac{g^{(\ell)}(0)}{\ell!} \sum_{m \leq r - q - \ell} \frac{f^{(m)}(0)}{m!} \int_0^{\tau} \xi^{\ell+m+q} d \xi \\ & = \frac{g^{(\ell)}(0)}{\ell!} \int_0^{\tau} \xi^{\ell+q} \CO \left(\xi^{r-q-\ell+1} \vert \nabla\vert^{\alpha} (k)\CL_{\text{\tiny{dom}}}^{r-q-\ell+1} \right) d \xi \\ & = \CO(\tau^{r+2} \vert \nabla\vert^{\alpha} (k) \CL_{\text{\tiny{low}}}^{\ell} \CL_{\text{\tiny{dom}}}^{r-q-\ell+1} ) \\ & = \CO(\tau^{r+2} k^n ) \end{equs} which allows us to conclude.
\end{proof} \begin{remark} In the proof of Lemma~\ref{Taylor_bound}, one has \begin{equs} \deg \left( \CL_{\text{\tiny{low}}}^{\ell} \CL_{\text{\tiny{dom}}}^{r-q-\ell+1} \right) \geq\deg \left( \CL_{\text{\tiny{low}}}^{r-q+1} \right). \end{equs} If $ n = \deg \left( \CL_{\text{\tiny{low}}}^{r-q+1} \right) + \alpha $ we cannot carry out a Taylor series expansion of $ f $ and we have to perform the integration exactly. This gives a more complicated numerical scheme. If, on the other hand, $ n $ is larger, part of the Taylor expansion of $ f $ becomes possible. In fact, $ n $ corresponds to the regularity of the solution we assume a priori, see Remark \ref{rem:regi}. \end{remark} \begin{remark} \label{better error} In Lemma~\ref{Taylor_bound} we express the approximation error in terms of powers of $ k $. This will be enough for conducting the local error analysis for the general scheme. One can be more precise and keep the full structure by replacing $ k $ by monomials in $ \CL_{\text{\tiny{dom}}} $ and $ \CL_{\text{\tiny{low}}} $. This could certainly be useful when one wants to perform the global error analysis and needs to keep track of the full error structure. \end{remark} \subsection{A Birkhoff type factorisation} \label{sec::Brikhoff} The character $ \Pi^n : \mathcal{H} \rightarrow \mathcal{C} $ is quite complex since one needs to compute several nonlinear interactions (oscillations) at the same time. Indeed, most of the time the operator $ \CK^{k,r}_{o_2} \left( \cdot,n \right) $ is applied to a linear combination of monomials of the form $ e^{i \xi P_j(k)} $. We want to single out every oscillation through a factorisation of this character.
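Before doing so, let us record in code form the dominant/lower-order decomposition of a frequency polynomial that drives the Taylor expansions above. This is a toy sketch with our own encoding (monomials as exponent tuples); the precise operators $ \mathcal{P}_{\text{\tiny{dom}}} $ and $ \mathcal{P}_{\text{\tiny{low}}} $ are defined earlier in the paper and may treat borderline cases differently:

```python
def deg(P):
    # deg(P): the maximal m such that k_i^m divides some monomial of P;
    # P is encoded as a dict {exponent tuple: coefficient}.
    return max(max(exps) for exps in P)

def split_dom_low(P):
    # Toy version of P_dom / P_low: monomials attaining the maximal
    # single-variable degree form the dominant part, the rest is lower order.
    d = deg(P)
    dom = {e: c for e, c in P.items() if max(e) == d}
    low = {e: c for e, c in P.items() if max(e) < d}
    return dom, low

# (-k1+k2+k3)^2 + k1^2 - k2^2 - k3^2 = 2 k1^2 - 2 k1 k2 - 2 k1 k3 + 2 k2 k3
P = {(2, 0, 0): 2, (1, 1, 0): -2, (1, 0, 1): -2, (0, 1, 1): 2}
dom, low = split_dom_low(P)
assert deg(P) == 2
assert dom == {(2, 0, 0): 2}                                 # L_dom = 2 k1^2
assert low == {(1, 1, 0): -2, (1, 0, 1): -2, (0, 1, 1): 2}   # L_low
```

On the Schrödinger example this recovers $ \CL_{\text{\tiny{dom}}} = 2k_1^2 $ and $ \CL_{\text{\tiny{low}}} = -2k_1(k_2+k_3) + 2k_2k_3 $.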
We start by introducing a splitting with a projection $ \mathcal{Q} $: \begin{equs} \mathcal{C} = \mathcal{C}_- \oplus \mathcal{C}_+, \quad \mathcal{Q} : \mathcal{C} \rightarrow \mathcal{C}_- \end{equs} where $ \mathcal{C}_- $ is the space of polynomials $ Q(\xi) $ and $ \mathcal{C}_+ $ is the subspace of functions of the form $ z \mapsto \sum_j Q_j(z)e^{i z P_j(k_1,...,k_n) } $ with $ P_j \neq 0 $. \begin{remark} \label{Birkoffnatural} In the classical Birkhoff factorisation for Laurent series, we consider $A = \C[[t]][t^{-1}]$, the formal series with a finite pole part. In this context, the splitting reads $ A = A_- \oplus A_+ $ where $A_- = t^{-1}\C[t^{-1}]$ and $A_+ = \C[[t]]$, such that $ \mathcal{Q} $ keeps only the pole part of a series: \begin{equs} \mathcal{Q}\Big( \sum_{n} a_n t^{n} \Big) = \sum_{ n< 0} a_n t^{n} \in A_-. \end{equs} The idea is to remove the divergent part of the series, i.e., its pole part. In our context the structure of the factorisation is quite different, as we are interested in singling out the oscillations. Let us suppose that we start with a term of the form $ z \mapsto e^{i z P(k_1,...,k_n) }$, where $P$ corresponds to the dominant part of some differential operator. Then, the integral over this term yields two contributions: \begin{align*} \int_{0}^{t} e^{i \xi P(k_1,...,k_n)} d\xi = \frac{ e^{i t P(k_1,...,k_n)} - 1}{i P(k_1,...,k_n)}. \end{align*} One is the oscillation $e^{i z P(k_1,...,k_n) }$ we start with, evaluated at time $z = t$, and the other one is the constant term $ -1 $. These terms are obtained by applying recursively the projection $ \mathcal{Q} $. This approach seems new in the literature and it is quite different in spirit from what has been observed for singular SPDEs. \end{remark} We set \begin{equs}\label{K} \CK^{k,r}_{o_2,-}:= \mathcal{Q} \circ \CK^{k,r}_{o_2} , \quad \CK^{k,r}_{o_2,+}:=\left( \mathrm{id} - \mathcal{Q} \right) \circ \CK^{k,r}_{o_2} .
\end{equs} One has \[ \CK^{k,r}_{o_2} = \CK^{k,r}_{o_2,-} + \CK^{k,r}_{o_2,+}. \] We define a character $ A^n : \mathcal{H}_+ \rightarrow \C $ by \begin{equation} \begin{aligned}\label{An} A^n( \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} F)) & = \left( \mathcal{Q} \circ \partial^{m} \Pi^{n} \CI^{r}_{o_2}( \lambda_{k}^{\ell} F) \right)(0). \end{aligned} \end{equation} where $\partial^{m} \Pi^{n} \CI^{r}_{o_2}( \lambda_{k}^{\ell} F)$ is the $ m $th derivative of the function $ t \mapsto (\Pi^{n} \CI^{r}_{o_2}( \lambda_{k}^{\ell} F))(t) $. The character $ A^n $ applied to $ \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} F) $ extracts the coefficient of $ \tau^{m} $, multiplied by $ m! $, in $ \Pi^n \CI^{r}_{o_2}( \lambda_{k}^{\ell} F) $. If we extend $ \Pi^n $ to $ \mathcal{H}_+ $ by setting \begin{equs} \Pi^n( \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} F)) = \partial^{m} \Pi^{n} \CI^{r}_{o_2}( \lambda_{k}^{\ell} F) \end{equs} then we have a new expression for $ A^n $ \begin{equs} \label{nice expressiion An} A^n = \left( \mathcal{Q} \circ \Pi^n \cdot \right)(0). \end{equs} We define a character $ \hat \Pi^n : \mathcal{H} \rightarrow \mathcal{C} $ which computes only one interaction by repeatedly applying the projection $ \mathrm{id} - \mathcal{Q} $ \begin{equation} \label{def_A_B} \begin{aligned} \hat{\Pi}^n \left( F \cdot \bar F \right)(\tau) & = \left( \hat{\Pi}^n F \right)(\tau) \left( \hat{\Pi}^n \bar F \right)(\tau), \quad (\hat{\Pi}^n \lambda^{\ell})(\tau) = \tau^{\ell}, \\ \hat{\Pi}^n(\CI^{r}_{o_1}( \lambda_{k}^{\ell} F ))(\tau) & = \tau^{\ell }e^{i \tau P_{o_1}(k)} \hat{\Pi}^n (\CD^{r-\ell}(F))(\tau), \\ \hat \Pi^n(\CI^{r}_{o_2}( \lambda_{k}^{\ell} F )) & = \CK_{o_2,+}^{k,r}( \hat \Pi^n( \lambda^{\ell} \CD^{r-\ell-1}(F)),n ) \end{aligned} \end{equation} {for $ F, \bar{F} \in \mathcal{H} $.
One can notice that $ \hat \Pi^n $ takes value in $ \mathcal{C}_+ $ on elements of the form $\CI^{r}_{o_2}( \lambda_{k}^{\ell} F ) $.} The character $ \hat{\Pi}^n $ is central for deriving the local error analysis for the numerical scheme. It has nice properties outlined in Proposition~\ref{factorisation_dom} which are crucial for proving Theorem~\ref{approxima_tree}. We provide an identity for the approximation $ \Pi^n $: \begin{equation} \label{antipode_def} \begin{aligned} \Pi^n = \left( \hat \Pi^n \otimes A^n \right) \Delta. \end{aligned} \end{equation} In the identity above, we do not need a multiplication because $ A^n(F) \in \C $ for every $ F \in \mathcal{H}_+ $ and $ \mathcal{C}$ is a $ \C $-vector space. Therefore, we use the identification $ \mathcal{C} \otimes \C \cong \mathcal{C} $. \begin{proposition} \label{Prop 3.7} The two definitions \eqref{recursive_pi_r} and \eqref{antipode_def} coincide. \end{proposition} \begin{proof} We prove this identity by induction on the number of edges of a forest. We first consider a tree of the form $ \CI^{r}_{o_1}( \lambda_k^{\ell} F) $ then we get: \[ \begin{aligned} \left( \hat \Pi^n(\cdot)(\tau) \otimes A^n \right) \Delta \CI^{r}_{o_1}( \lambda_k^{\ell} F ) & = \left(\hat \Pi^n \left( \CI^{r}_{o_1}( \lambda_k^{\ell} \cdot ) \right)(\tau) \otimes A^n \right) \Delta \CD^{r - \ell}(F) \\ & = \tau^{\ell} e^{i \tau P_{o_1}(k)} \left( \hat \Pi^n \left( \cdot \right)(\tau) \otimes A^n \right) \Delta \CD^{r - \ell}(F) \\ & = \tau^{\ell} e^{i \tau P_{o_1}(k)} (\Pi^n \CD^{r - \ell}(F))(\tau) \\ & = \left( \Pi^n \CI^{r}_{o_1}( \lambda_k^{\ell} F ) \right)(\tau), \end{aligned} \] where we have used our inductive hypothesis. 
We look now at a tree of the form $ \CI^{r}_{o_2}( \lambda_{k}^{\ell} F) $: \[ \begin{aligned} & \left( \hat \Pi^n \otimes A^n \right) \Delta \CI^{r}_{o_2}( \lambda_{k}^{\ell} F) = \left( \hat \Pi^n(\CI^{r}_{o_2}( \lambda_k^{\ell} \cdot)) \otimes A^n \right) \Delta \CD^{r-\ell-1}(F) \\& + \sum_{m \leq r + 1} \frac{1}{m!}\hat{\Pi}^n( \lambda^{m}) A^n( \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} F) ) \\ & = \CK_{o_2,+}^{r,k} \left( \hat \Pi^n( \lambda^{\ell} \cdot) \otimes A^n, n \right) \Delta \CD^{r-\ell-1}(F) + \CK_{o_2,-}^{r,k}(\Pi^n \lambda^{\ell} \CD^{r -\ell-1}(F),n) \\ & = \CK_{o_2,+}^{r,k}(\Pi^n \lambda^{\ell} \CD^{r-\ell-1}(F),n) + \CK_{o_2,-}^{r,k}(\Pi^n \lambda^{\ell} \CD^{r-\ell-1}(F),n) \\ & = \Pi^n \left(\CI^{r}_{o_2}( \lambda^{\ell}_{k} F) \right) \end{aligned} \] where we used the following identification \begin{equs} (\hat \Pi^n \otimes A^n) (F_1 \otimes F_2) = ( \hat \Pi^n F_1 \otimes A^n F_2 ) = \hat \Pi^n(F_1) A^n(F_2) \end{equs} and that \begin{equs} & \sum_{m \leq r+1} \frac{1}{m!}\hat{\Pi}^n( \lambda^{m}) \, A^n( \CI^{(r,m)}_{o_2}( \lambda_{k}^{\ell} F) ) \\ & = \sum_{m \leq r + 1} \frac{1}{m!}\hat{\Pi}^n( \lambda^{m}) \,\left( \mathcal{Q} \circ \partial^{m} \Pi^n \CI^{r}_{o_2}( \lambda_{k}^{\ell} F) \right)(0) \\ & = \CK_{o_2,-}^{r,k}(\Pi^n \lambda^{\ell} \CD^{r - 1-\ell}(F),n). \end{equs} \end{proof} The interest of the decomposition given by Proposition~\ref{Prop 3.7} comes from Proposition~\ref{factorisation_dom}: $\hat{\Pi}^n$ only involves one oscillation. 
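The splitting of Remark~\ref{Birkoffnatural} is transparent in closed form: the integral of a pure oscillation decomposes into an oscillatory part in $ \mathcal{C}_+ $ and a constant in $ \mathcal{C}_- $. A minimal sketch (the function name is ours) verifies this decomposition:

```python
import cmath

def integrate_osc(P, t):
    # \int_0^t e^{i xi P} d xi split into its C_+ part (the oscillation
    # evaluated at time t) and its C_- part (the constant term -1/(iP)).
    plus = cmath.exp(1j * t * P) / (1j * P)   # (id - Q) part, in C_+
    minus = -1.0 / (1j * P)                   # Q part, in C_-
    return plus, minus

P, t = 6.0, 0.4
plus, minus = integrate_osc(P, t)
exact = (cmath.exp(1j * t * P) - 1) / (1j * P)
assert abs((plus + minus) - exact) < 1e-12
```

Applying this split recursively at every integration is exactly the role the projection $ \mathcal{Q} $ plays in the definitions of $ A^n $ and $ \hat \Pi^n $ above.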
\begin{example} \label{AnBn} We compute $ A^n $ and $ \hat \Pi^n $ for the following decorated trees that appear for the cubic NLS equation, see also \eqref{nlsTK}: \begin{equs} T_1 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {};t \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture}, \quad T_2 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {};t \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ \ell $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture}, \quad T_3 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,2); \coordinate (t4) at (0,4); \coordinate (t41) at (-2,6); \coordinate (t42) at (2,6); \coordinate (t43) at (0,8); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \draw[symbols] (t3) -- (t4); \draw[kernels2,tinydots] (t4) -- (t41); \draw[kernels2] (t4) -- (t42); \draw[kernels2] (t4) -- (t43); \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at 
(t3) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} \end{equs} where $ \ell = -k_1 + k_2 + k_3 $. Then, when $ r = 0 $ and $ n < 2 $, one has from \eqref{Psi0} \begin{equs} (\Pi^n \CD^r(T_1))(\tau) = -i \frac{e^{2 i \tau k_1^2}-1}{2 i k_1^2}. \end{equs} On the other hand, \begin{equs} \Delta \CD^{0}(T_1) = \CD^{0}(T_1) \otimes \mathbf{1} + \mathbf{1} \otimes \hat \CD^{(0,0)}(T_1) \end{equs} and \begin{equs} \hat \Pi^{n}(\CD^{0}(T_1)) = -\frac{e^{2 i \tau k_1^2}}{2 k_1^2}, \quad A^n(\hat \CD^{(0,0)}(T_1)) = \frac{1}{2 k_1^2}. \end{equs} When $ n=2 $, one gets \begin{equs} (\Pi^n \CD^0(T_1))(\tau) = \tau, \quad \hat \Pi^{n}(\CD^{0}(T_1)) = 0, \quad A^n(\hat \CD^{(0,0)}(T_1)) = 1. \end{equs} Now, we consider the tree $ T_3 $ and we assume that $ r=1 $ and $ n =2 $. We calculate that (for details see \eqref{computescheme2} in Section \ref{sec:examples}) \begin{equs} (\Pi^n \CD^1(T_3))(\tau) = - \frac{\tau^2}{2}. 
\end{equs} Then, \begin{equs} \Delta & \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,2); \coordinate (t4) at (0,4); \coordinate (t41) at (-2,6); \coordinate (t42) at (2,6); \coordinate (t43) at (0,8); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \draw[symbols] (t3) -- (t4); \draw[kernels2,tinydots] (t4) -- (t41); \draw[kernels2] (t4) -- (t42); \draw[kernels2] (t4) -- (t43); \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ 1 $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2) ; \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,2); \coordinate (t4) at (0,4); \coordinate (t41) at (-2,6); \coordinate (t42) at (2,6); \coordinate (t43) at (0,8); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \draw[symbols] (t3) -- (t4); \draw[kernels2,tinydots] (t4) -- (t41); \draw[kernels2] (t4) -- (t42); \draw[kernels2] (t4) -- (t43); \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ 1 $} ] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at 
(t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} \otimes \mathbf{1} + \mathbf{1} \otimes \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,2); \coordinate (t4) at (0,4); \coordinate (t41) at (-2,6); \coordinate (t42) at (2,6); \coordinate (t43) at (0,8); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \draw[symbols] (t3) -- (t4); \draw[kernels2,tinydots] (t4) -- (t41); \draw[kernels2] (t4) -- (t42); \draw[kernels2] (t4) -- (t43); \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (1,0) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} + \lambda \otimes \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,2); \coordinate (t4) at (0,4); \coordinate (t41) at (-2,6); \coordinate (t42) at (2,6); \coordinate (t43) at (0,8); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \draw[symbols] (t3) -- (t4); \draw[kernels2,tinydots] (t4) -- (t41); \draw[kernels2] (t4) -- (t42); \draw[kernels2] (t4) -- (t43); \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize 
$ (1,1) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} + \frac{ \lambda^2}{2} \otimes \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,2); \coordinate (t4) at (0,4); \coordinate (t41) at (-2,6); \coordinate (t42) at (2,6); \coordinate (t43) at (0,8); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \draw[symbols] (t3) -- (t4); \draw[kernels2,tinydots] (t4) -- (t41); \draw[kernels2] (t4) -- (t42); \draw[kernels2] (t4) -- (t43); \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (1,2) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} \\ & + \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ 1 $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ \ell $}}; \node[var]
(trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} \otimes \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (0,0) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} + \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ 1 $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ ^1_\ell $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} \otimes \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ (0,1) $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture}.
\end{equs} When one applies $ (\hat \Pi^n \otimes A^n) $, the only non-zero contribution is given by the third term from the previous computation: \begin{equs} (\hat \Pi^n \lambda^2)(\tau) = \tau^2, \quad A^n ( \hat \CD^{(1,2)}(T_3)) = -1. \end{equs} In order to see a non-trivial interaction in the previous term, one has to consider a higher order approximation such as $ r = 2 $ and $ n=3 $. In this case, one has \begin{equs}\label{KD} (\Pi^{n} \CD^{1}(T_1) )(\tau) & = - i \int_{0}^{\tau} e^{2i s k_1^2} ds + \frac{\tau^2}{2} \mathscr{F}_{\text{\tiny{low}}} \left( T_1 \right) \\ & = - \frac{e^{2i \tau k_1^2} }{2 k_1^2} + \frac{1}{2 k_1^2} + \frac{\tau^2}{2} \mathscr{F}_{\text{\tiny{low}}} \left( T_1 \right) . \end{equs} Then, \begin{equs} A^n(\hat \CD^{(1,0)}(T_1)) = \frac{1}{2k_1^2}, \quad A^n(\hat \CD^{(1,1)}(T_1)) = 0, \quad A^n(\hat \CD^{(1,2)}(T_1)) = \mathscr{F}_{\text{\tiny{low}}} \left( T_1 \right) . \end{equs} Next, one has to approximate \begin{equs} - i \int_0^\tau \mathrm{e}^{i s k^2} \Big( \mathrm{e}^{i s( k_4^2 - k_5^2 - k_{123}^2)} (\Pi^n \CD^1(T_1))(s) \Big) d s \end{equs} where $ k = -k_1 + k_2 + k_3 - k_4 + k_5 $ and $ k_{123} = -k_1 + k_2 + k_3 $. Thanks to the structure of $(\Pi^n \CD^1(T_1))(s)$, see \eqref{KD}, it remains to control the following two oscillations: \begin{equs} k^2 + k_4^2 - k_5^2 - k_{123}^2 & =\mathscr{F}_{\text{\tiny{dom}}}\left( T_2 \right) + \mathscr{F}_{\text{\tiny{low}}} \left( T_2 \right) \\ k^2 + k_4^2 - k_5^2 - k_{123}^2 + 2k_1^2 & = \mathscr{F}_{\text{\tiny{dom}}}\left( T_3 \right) + \mathscr{F}_{\text{\tiny{low}}} \left( T_3 \right).
\end{equs} Then \begin{equs} (\Pi^{n} \CD^2(T_3))(\tau) & = - \int_{0}^{\tau} \frac{e^{i s \mathscr{F}_{\text{\tiny{dom}}}\left( T_3 \right) }}{2ik_1^2} \left( 1+ i \mathscr{F}_{\text{\tiny{low}}} \left( T_3 \right) s - \mathscr{F}_{\text{\tiny{low}}} \left( T_3 \right)^2 \frac{s^2}{2}\right) ds \\ & + \int_{0}^{\tau} \frac{e^{i s\mathscr{F}_{\text{\tiny{dom}}}\left( T_2 \right)}}{2ik_1^2} \left( 1+ i \mathscr{F}_{\text{\tiny{low}}} \left( T_2 \right) s - \mathscr{F}_{\text{\tiny{low}}} \left( T_2 \right)^2 \frac{s^2}{2}\right) ds - i \frac{\tau^3}{3!} \mathscr{F}_{\text{\tiny{low}}} \left( T_1 \right). \end{equs} One has the following identities: \begin{equs} (\hat \Pi^n \CD^2(T_3) ) (\tau) & = - \left( \mathrm{id} - \mathcal{Q} \right) \int_{0}^{\tau} \frac{e^{i s \mathscr{F}_{\text{\tiny{dom}}}\left( T_3 \right) }}{2ik_1^2} \left( 1+ i \mathscr{F}_{\text{\tiny{low}}} \left( T_3 \right) s - \mathscr{F}_{\text{\tiny{low}}} \left( T_3 \right)^2 \frac{s^2}{2}\right) ds, \\ A^n ( \hat \CD^{(2,0)}(T_3) ) & = \mathcal{Q} (\Pi^{n} \CD^2(T_3))(\tau), \quad A^n ( \hat \CD^{(2,1)}(T_3) ) = A^n ( \hat \CD^{(2,2)}(T_3) ) = 0, \\ A^n ( \hat \CD^{(2,3)}(T_3) ) & = -i \mathscr{F}_{\text{\tiny{low}}} \left( T_1 \right), \quad \Big(\hat \Pi^n \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ 2 $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ ^1_\ell $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} \Big)(\tau) = \Big(\hat \Pi^n \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at
(0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not,label= {[label distance=-0.2em]below: \scriptsize $ 2 $}] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ ^2_\ell $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture} \Big)(\tau) =0, \\ \hat \Pi^n( \CD^{2}(T_2) ) & = (\mathrm{id} - \mathcal{Q}) \int_{0}^{\tau} e^{i s\mathscr{F}_{\text{\tiny{dom}}}\left( T_2 \right)} \left( 1+ i \mathscr{F}_{\text{\tiny{low}}} \left( T_2 \right) s - \mathscr{F}_{\text{\tiny{low}}} \left( T_2 \right)^2 \frac{s^2}{2}\right) ds. \end{equs} \end{example} In the next proposition, we write a Birkhoff type factorisation for the character $ \hat \Pi^n $ defined from $ \Pi^n $ and the antipode. Such an identity was also obtained in the context of SPDEs (see \cite{BHZ}), but with a twisted antipode. Our formulation is slightly simpler due to the simplifications observed at the level of the algebra (see Remark~\ref{comparisonalgebra}). Proposition~\ref{Birkhoff} is not used in the sequel, but it gives an inductive way to compute $ \hat \Pi^n $ in terms of $ \Pi^n $. It can be seen as a rewriting of identity~\eqref{antipode_def}. \begin{proposition} \label{Birkhoff} One has \begin{equs} \label{factorisation} \hat \Pi^n = \left( \Pi^n \otimes (\mathcal{Q} \circ \Pi^n \mathcal{A} \cdot)(0) \right) \Delta. \end{equs} \end{proposition} \begin{proof} From \eqref{antipode_def}, one gets: \begin{equs} \label{identityA} \Pi^n = \left( \hat \Pi^n \otimes (\mathcal{Q} \circ \Pi^n \cdot)(0) \right) \Delta = \hat \Pi^n * (\mathcal{Q} \circ \Pi^n \cdot)(0) \end{equs} where the product $ * $ is defined from the coaction $ \Delta $.
Then, if we multiply the identity \eqref{identityA} by the inverse $ (\mathcal{Q} \circ \Pi^n \mathcal{A} \cdot)(0) $, we get \begin{equs} \Pi^n * (\mathcal{Q} \circ \Pi^n \mathcal{A} \cdot)(0) & =\left( \hat \Pi^n * (\mathcal{Q} \circ \Pi^n \cdot)(0) \right) * (\mathcal{Q} \circ \Pi^n \mathcal{A} \cdot)(0) \\ & = \left( \left( \hat \Pi^n \otimes (\mathcal{Q} \circ \Pi^n \cdot)(0) \right) \Delta \otimes (\mathcal{Q} \circ \Pi^n \mathcal{A} \cdot)(0) \right) \Delta \\ & = \left( \hat \Pi^n \otimes \left( (\mathcal{Q} \circ \Pi^n \cdot)(0) \otimes (\mathcal{Q} \circ \Pi^n \mathcal{A} \cdot)(0) \right) \Delta^{\!+} \right) \Delta \\ & = \hat \Pi^n \end{equs} where we have used \begin{equs} \left( \Delta \otimes \mathrm{id} \right) \Delta = \left( \mathrm{id} \otimes \Delta^{\!+} \right) \Delta, \quad \mathcal{M} \left( \mathrm{id} \otimes \mathcal{A} \right) \Delta^{\!+} = \mathbf{1} \mathbf{1}^{\star}. \end{equs} This concludes the proof. \end{proof} \subsection{Local error analysis} \label{local error analysis} In this section, we explore the properties of the character $ \hat \Pi^n $ which allow us to conduct the local error analysis of the approximation given by $ \Pi^n $. Proposition \ref{factorisation_dom} below shows that only one oscillation is treated through $ \hat \Pi^n $. \begin{proposition} \label{factorisation_dom} For every forest $ F \in \hat \mathcal{H} $, there exists a polynomial $ B^n\left( \CD^r(F) \right){ (\xi)} $ such that \begin{equs} \label{simple_formula} \hat \Pi^n\left( \CD^r(F) \right)(\xi) = B^n\left( \CD^r(F) \right)(\xi) e^{i \xi\mathscr{F}_{\text{\tiny{dom}}}( F)} \end{equs} where $ \mathscr{F}_{\text{\tiny{dom}}}(F) $ is given in Definition~\ref{dom_freq}. 
Moreover, $ B^n(\CD^r(F))(\xi) $ is given by: \begin{equs} \label{nicefactorisation} B^n(\CD^r(F))(\xi) = \frac{P(\xi)}{Q}, \quad Q = \prod_{\bar T \in A} \left(\mathscr{F}_{\text{\tiny{dom}}}(\bar T) \right)^{m_{\bar T}} \end{equs} where $ P(\xi) $ is a polynomial in $ \xi $ and the $k_i$, and $ A $ is a set of decorated subtrees of $ F $ satisfying the same property as in Corollary~\ref{physical_map}. \end{proposition} \begin{proof} We proceed by induction. We get \begin{equs} \hat{\Pi}^n\left( \CD^r(F \cdot \bar F) \right)(\xi) & = \hat{\Pi}^n\left( \CD^r(F ) \right)(\xi) \hat{\Pi}^n\left( \CD^r( \bar F) \right)(\xi) \\ & = B^n\left( \CD^r(F) \right)(\xi) e^{i \xi\mathscr{F}_{\text{\tiny{dom}}}( F)} B^n\left( \CD^r(\bar F) \right)(\xi) e^{i \xi\mathscr{F}_{\text{\tiny{dom}}}( \bar F)} \\ & = B^n\left( \CD^r(F) \right)(\xi) B^n\left( \CD^r(\bar F) \right)(\xi) e^{i \xi(\mathscr{F}_{\text{\tiny{dom}}}( F) +\mathscr{F}_{\text{\tiny{dom}}}( \bar F) )} \\ & = B^n\left( \CD^r(F \cdot \bar F) \right)(\xi) e^{i \xi\mathscr{F}_{\text{\tiny{dom}}}( F \cdot \bar F)}. \end{equs} The pointwise product preserves the structure given by \eqref{nicefactorisation}. Then for $ T = \CI_{o_1}( \lambda_{k}^{\ell}F) $, one gets by the definition of $\hat{\Pi}^n$ given in \eqref{def_A_B} that \begin{equs} \hat{\Pi}^n\left( \CD^r(T) \right)(\xi) & = \xi^{\ell} e^{i \xi P_{o_1}(k) } \hat{\Pi}^n\left( \CD^{r-\ell}(F) \right)(\xi) \\ & = \xi^{\ell} e^{i \xi P_{o_1}(k) +i \xi\mathscr{F}_{\text{\tiny{dom}}}( F) } B^n\left( \CD^{r-\ell}( F) \right)(\xi) \\ & = B^n\left( \CD^r(T) \right)(\xi) e^{i \xi\mathscr{F}_{\text{\tiny{dom}}}( T)}. \end{equs} One gets $ B^n\left( \CD^r(T) \right)(\xi) = \xi^{\ell} B^n\left( \CD^{r-\ell}(F) \right)(\xi) $ and concludes that the factorisation \eqref{nicefactorisation} is preserved. We end with the decorated tree $ T = \CI_{o_2}( \lambda_{k}^{\ell} F) $.
By the definition of $\hat{\Pi}^n$ given in~\eqref{def_A_B}, we obtain that \begin{equs} \hat \Pi^n\left( T \right)(\xi) = \CK_{o_2,+}^{k,r}( \hat \Pi^n( \lambda^{\ell} \CD^{r-\ell-1}(F)),n )(\xi). \end{equs} Now, we apply the induction hypothesis on $ F $ which yields \begin{equs} \hat \Pi^n( \CD^{r-\ell-1}(F))(\xi) = B^n( \CD^{r-\ell-1}( F))(\xi) e^{i \xi\mathscr{F}_{\text{\tiny{dom}}}(F)} \end{equs} and we conclude by applying Definition~\ref{Taylor_exp}. Indeed, by applying $ \CK_{o_2,+}^{k,r} $ to $ \xi^{\ell} e^{i \xi\mathscr{F}_{\text{\tiny{dom}}}( F)} $, we can get in \eqref{e:Pim} extra terms $ \frac{1}{Q} $ coming from expressions of the form \begin{equs} \int_0^{\tau} \xi^{\ell+q} e^{i \xi\mathscr{F}_{\text{\tiny{dom}}}(T)} d \xi. \end{equs} Then, by computing this integral, we obtain coefficients of the form \begin{equs} \frac{1}{(i\mathscr{F}_{\text{\tiny{dom}}}(T))^{m_T}} \end{equs} if $\mathscr{F}_{\text{\tiny{dom}}}(T) \neq 0 $. This term will be multiplied by $ B^n( \CD^{r-\ell-1}( F))(\xi) e^{i \xi\mathscr{F}_{\text{\tiny{dom}}}(F)} $ and preserves the structure given in \eqref{nicefactorisation}. When performing the Taylor series expansions by applying $ \CK_{o_2,+}^{k,r} $ (cf. \eqref{e:Pim1} and \eqref{e:Pim2}), we can also get some extra polynomials in the $ k_i $. This leads to the factor $P(\xi)$ in \eqref{nicefactorisation} and concludes the proof. \end{proof} \begin{remark} The identity \eqref{simple_formula} shows that the character $ \hat \Pi^n $ has selected the oscillation $ e^{i \xi\mathscr{F}_{\text{\tiny{dom}}}( F)} $ for the decorated forest $F$. The complexity of this character is hidden behind the polynomial $ B^n\left( \CD^r(F) \right)(\xi) $. It depends on the parameters $ n $ and $ r $. The explicit formula \eqref{simple_formula} is strong enough for conducting the local error analysis.
\end{remark} The next recursive definition introduces a systematic way to compute the local error from the structure of the decorated tree and the coaction $ \Delta $. \begin{definition}\label{def:Llow} Let $ n \in \N $, $ r \in \Z $. We recursively define $ \mathcal{L}^{r}_{\text{\tiny{low}}}(\cdot,n)$ as \begin{equs} \mathcal{L}^{r}_{\text{\tiny{low}}}(F,n) = 1, \quad r < 0. \end{equs} Otherwise, for $ r \geq 0 $, we set \begin{equs} \mathcal{L}^{r}_{\text{\tiny{low}}}(\mathbf{1},n) = 1, \quad \mathcal{L}^{r}_{\text{\tiny{low}}}(F \cdot \bar F,n) = \mathcal{L}^{r}_{\text{\tiny{low}}}(F,n ) + \mathcal{L}^{r}_{\text{\tiny{low}}}( \bar F,n) \\ \mathcal{L}^{r}_{\text{\tiny{low}}}(\CI_{o_1}( \lambda_{k}^{\ell} F ),n) = \mathcal{L}^{r-\ell}_{\text{\tiny{low}}}( F,n ) \\ \mathcal{L}^{r}_{\text{\tiny{low}}}(\CI_{o_2}( \lambda^{\ell}_{k} F ),n) = k^{\alpha} \mathcal{L}^{r-\ell-1}_{\text{\tiny{low}}}( F,n ) + \mathbf{1}_{\lbrace r-\ell \geq 0 \rbrace} \sum_j k^{\bar n_j} \end{equs} where \begin{equs} \bar n_j = \max_{ m}\left(n,\deg\left( P_{(F^{(1)}_j, F^{(2)}_j,m)} \mathscr{F}_{\text{\tiny{low}}} (\CI_{(\mathfrak{t}_2,p)}( \lambda^{\ell}_{k} F^{(1)}_j ))^{r-\ell +1- m} + \alpha \right) \right) \end{equs} with \begin{equs} \Delta \CD^{r-\ell-1}(F) & = \sum_{j} F^{(1)}_j \otimes F^{(2)}_j, \\ \quad A^n(F^{(2)}_j) B^n\left( F^{(1)}_j \right)(\xi) & = \sum_{m \leq r-\ell -1} \frac{P_{(F^{(1)}_j, F^{(2)}_j,m)}}{Q_{(F^{(1)}_j, F^{(2)}_j,m)}}\xi^m \end{equs} and $ \mathscr{F}_{\text{\tiny{low}}} $ is defined in Definition \ref{dom_freq}. \end{definition} \begin{remark} For a tree $ T $ with $ n $ leaves, the quantity $ \mathcal{L}^{r}_{\text{\tiny{low}}}(T,n) $ is a polynomial in the frequencies $ k_1,...,k_n $ attached to its leaves. The recursive definition of $ \mathcal{L}^{r}_{\text{\tiny{low}}}(\cdot,n) $ follows exactly the mechanism involved in the proof of Theorem~\ref{approxima_tree}.
\end{remark} \begin{remark} One can observe that the local error strongly depends on the Birkhoff factorisation, which provides a systematic way to get all the potential contributions. Indeed, $ \bar n $ depends on $ A^n, B^n $ applied to decorated forests coming from the coaction $ \Delta $. \end{remark} \begin{remark}\label{rem:simpL} In Section~\ref{sec:examples} we derive the low regularity resonance based schemes on concrete examples up to order two. In the discussed examples, one does not get any contribution from $ B^n $ and $ A^n $ (see also Example~\ref{AnBn}). Thus, one can work with a simplified definition of $\bar n_j $ given by: \begin{equs} \bar n_j & = \max\left(n, \deg\left( \mathscr{F}_{\text{\tiny{low}}} (\CI_{(\mathfrak{t}_2,p)}( \lambda^{\ell}_{k} F_j ))^{r-\ell +1} + \alpha \right)\right), \end{equs} where \begin{equs} \sum_j F_j & = \mathcal{M}_{(1)} \Delta \CD^{r-\ell-1}(F), \quad F_j \in H , \quad \mathcal{M}_{(1)} \left( F_1 \otimes F_2 \right) = F_1. \end{equs} \end{remark} \begin{remark} For $ n=0 $, one obtains an optimal scheme in terms of the regularity of the solution, which is given by $n(T,0) = \mathcal{L}^{r}_{\text{\tiny{low}}}(T,0)$. Then, one may wish to use this information to simplify the scheme, i.e., carry out additional Taylor series expansions of the dominant parts if the regularity allows for it. This can be achieved by introducing the new decoration $ n_0 =\max_{T} n(T,0) $. In the examples in Section~\ref{sec:examples}, one can observe that for $ n \geq n_0 $ one has: $ n = \mathcal{L}^{r}_{\text{\tiny{low}}}(T, n)$. In order to guarantee that this holds true in general, one can naturally extend the algebraic structure by introducing the regularity $ n $ as a decoration at the root of a decorated tree. We list below the potential changes: \begin{itemize} \item First, in Definition~\ref{Taylor_exp}, we take into account only the monomials $ \xi^\ell $ that give the length of the Taylor approximation.
We could potentially insert polynomials in the frequency that refine the analysis with respect to $ n $ in case some regularity coming from previous approximations has already been used. One can introduce an extended decoration by adding another component to the monomials $ \lambda^{\ell} $, $ \ell \in \N^{2} $, where the second decoration stands for the regularity already used. \item The trees will carry at the root a decoration of the form $ (r,n) \in \Z^2 $. The decoration $ n $ will have the same behaviour as $ r $: it will decrease for each edge in $ \mathfrak{L}_+ $ according to the derivative $ |\nabla|^{\alpha} $ that appears in Duhamel's formula. The recursive formula \eqref{def_deltas} will have two Taylor expansions: one determined by $ r $, the other determined by $ n $ for $ n \geq n_0 $. The Birkhoff factorisation will remain the same based on these two Taylor expansions. \end{itemize} This potential extension for optimising the scheme shows that the algebraic structure chosen in this work is robust and can encode various behaviours such as the order of the scheme as well as its regularity. \end{remark} \begin{remark} As in Remark~\ref{better error}, we use only powers of $ k $ in Definition~\ref{def:Llow}, but more structure can be preserved if one wants to conduct a global error analysis. \end{remark} Now we are in a position to state the approximation error of $\Pi^{n,r}$ to $\Pi$ (cf. \eqref{eq:loci}). \begin{theorem} \label{approxima_tree} For every $ T \in \mathcal{T} $ one has \begin{equs} \left(\Pi T - \Pi^{n,r} T \right)(\tau) = \mathcal{O}\left( \tau^{r+2} \mathcal{L}^{r}_{\text{\tiny{low}}}(T,n) \right) \end{equs} where $\Pi $ is defined in \eqref{Pi}, $\Pi^n$ is given in \eqref{recursive_pi_r} and $\Pi^{n,r} = \Pi^n \CD^r$.
\end{theorem} \begin{proof} We proceed by induction on the size of the forest, using the recursive definition \eqref{recursive_pi_r} of $ \Pi^n $, and we prove a more general version of the theorem for forests. In fact, only the version for trees is needed for the local error analysis. First, one gets: \begin{equs} \left(\Pi - \Pi^{n,r} \right)(\mathbf{1})(\tau) = 0 = \mathcal{O}\left( \tau^{r+2} \mathcal{L}^{r}_{\text{\tiny{low}}}(\mathbf{1},n) \right). \end{equs} One also has \begin{equs} \left(\Pi - \Pi^{n,r} \right)(\CI_{o_1}( \lambda_{k}^{\ell} F ))(\tau) & = \tau^{\ell} e^{i \tau P_{o_1}(k)} (\Pi - \Pi^{n,r-\ell})(F)(\tau) \\ & = \mathcal{O}\left( \tau^{r+2} \mathcal{L}^{r-\ell}_{\text{\tiny{low}}}(F,n) \right). \end{equs} Then, one gets again by \eqref{Pi} and \eqref{recursive_pi_r} that \begin{equs} \left(\Pi - \Pi^{n,r} \right)(F \cdot \bar F)(\tau) & = \left(\Pi - \Pi^{n,r} \right)(F)(\tau) ( \Pi^{n,r} \bar F)(\tau) + (\Pi F)(\tau) \left(\Pi - \Pi^{n,r} \right)(\bar F)(\tau) \\ & = \mathcal{O}\left( \tau^{r+2} \mathcal{L}^{r}_{\text{\tiny{low}}}(F,n) \right) + \mathcal{O}\left( \tau^{r+2} \mathcal{L}^{r}_{\text{\tiny{low}}}(\bar F,n) \right) \\ & = \CO \left( \tau^{r+2} \mathcal{L}^{r}_{\text{\tiny{low}}}(F \cdot \bar F,n) \right) \end{equs} where we use Definition \ref{def:Llow}.
Finally, by \eqref{Pi} and \eqref{recursive_pi_r}, inserting zero in the form of \[ \pm \Pi^{n,r-\ell - 1}( F)(\xi) \] and using that $ \Pi^{n,r-\ell - 1}( F) = \Pi^n \CD^{r-\ell-1}(F) $, we obtain \begin{equs} &\left( \Pi- \Pi^{n,r} \right) \left( \CI_{o_2}( \lambda_k^{\ell} F) \right) (\tau) = - i \vert \nabla\vert^{\alpha} (k) \int_{0}^{\tau} \xi^{\ell} e^{i \xi P_{o_2}(k)} (\Pi - \Pi^{n,r-\ell - 1})( F)(\xi) d \xi \\ & - i \vert \nabla\vert^{\alpha} (k)\int_{0}^{\tau} e^{i \xi P_{o_2}(k)} (\Pi^{n} { \lambda^{\ell} } \CD^{r-\ell-1}(F) )(\xi) d \xi -\CK^{k,r}_{o_2} ( \Pi^{n}( \lambda^\ell\CD^{r-\ell-1} (F)),n )(\tau) \\ & = \int_{0}^{\tau} \CO \left( \xi^{r+1} k^{\alpha} \mathcal{L}^{r-\ell-1}_{\text{\tiny{low}}}(F,n) \right) d\xi + \mathbf{1}_{\lbrace r-\ell \geq 1 \rbrace} \sum_{j} \CO(\tau^{r+2} k^{\bar n_j}) \\ & = \CO \left( \tau^{r+2} \mathcal{L}^{r}_{\text{\tiny{low}}}(F,n) \right) . \end{equs} Note that in the above calculation we have used the following decomposition \begin{equs} \Pi^n \left( \CD^{r-\ell-1} (F)\right) = \left( \hat \Pi^n \otimes A^n \right) \Delta \CD^{r-\ell-1} (F) \end{equs} which by Proposition~\ref{factorisation_dom} implies that one has \begin{equs} \label{Bnu} \Pi^n \left( \CD^{r-\ell-1} (F)\right)(\xi) = \sum_{j} A^n(F^{(2)}_j) B^n\left( F^{(1)}_j \right)(\xi) e^{i \xi\mathscr{F}_{\text{\tiny{dom}}}( F^{(1)}_j)} \end{equs} where we have used Sweedler's notation for the coaction $ \Delta $. Then, one has \begin{equs} A^n(F^{(2)}_j) B^n\left( F^{(1)}_j \right)(\xi) = \sum_{m \leq r-\ell -1} \frac{P_{(F^{(1)}_j, F^{(2)}_j,m)}}{Q_{(F^{(1)}_j, F^{(2)}_j,m)}}\xi^m \end{equs} where $ P_{(F^{(1)}_j, F^{(2)}_j,m)} $ and $ Q_{(F^{(1)}_j, F^{(2)}_j,m)}$ are polynomials in the frequencies $ k_1,\ldots,k_n $.
Thus, by applying Lemma~\ref{Taylor_bound}, we get, for every term in the sum \eqref{Bnu}, an error which is at most of the form: \begin{equs} \sum_j \CO(\tau^{r+2} k^{\bar n_j} ) \end{equs} where \begin{equs} \bar n_j = \max_{ m}\left(n,\deg\left( P_{(F^{(1)}_j, F^{(2)}_j,m)} \mathscr{F}_{\text{\tiny{low}}} (\CI_{(\mathfrak{t}_2,p)}( \lambda^{\ell}_{k} F^{(1)}_j ))^{r-\ell +1- m} + \alpha \right) \right). \end{equs} This concludes the proof. \end{proof} \begin{proposition} \label{physical_space} For every decorated tree $ T = \CI^{r}_{(\mathfrak{t},p)}( \lambda^{\ell}_{k} F) $ in $ \mathcal{T} $ with disjoint leaf decorations forming a subset of the $k_i$ as in Assumption~\ref{assumption_physical_space}, one can map $ \Pi^n T $ back into physical space, which means that for functions indexed by the leaves of $ T $, $ (v_u)_{u \in L_T} $, the term \begin{equs} \label{inverse Fourier} \mathcal{F}^{-1} \left( \sum_{k = \sum_{u \in L_T} a_u k_u}(\Pi^n T)(\xi) \right) (v_{u,a_u}, u \in L_T) \end{equs} can be expressed by applying classical differential operators $ \nabla^{\ell} , e^{i\xi \nabla^{m}}$, $ m,\ell \in \Z $, to the $ v_{u,a_u} $, which are defined by $ v_{u,1} = v_u $ and $ v_{u,-1} = \overline{v_u} $. \end{proposition} \begin{proof} We proceed by induction using the identity \eqref{antipode_def}. The latter implies that \begin{equs} \Pi^n T & = \left( \hat \Pi^n \otimes A^n \right) \Delta T = \sum_j \hat \Pi^n(T_j^{(1)}) A^n(T_j^{(2)}), \quad \Delta T = \sum_{j} T_j^{(1)} \otimes T_{j}^{(2)}. \end{equs} Then, for every $ \hat \Pi^n T_j^{(1)} $, we apply Proposition~\ref{factorisation_dom} and we get: \begin{equs} \hat \Pi^n\left( T_j^{(1)} \right)(\xi) = B^n\left( T_j^{(1)} \right)(\xi) e^{i \xi\mathscr{F}_{\text{\tiny{dom}}}( T_j^{(1)})}.
\end{equs} Moreover, $ B^n(T_j^{(1)})(\xi) $ is given by: \begin{equs} \label{nicefactorisationb} B^n(T_j^{(1)})(\xi) = \frac{P(\xi)}{Q}, \quad Q = \prod_{\bar T \in A({T_j^{(1)}})} \left(\mathscr{F}_{\text{\tiny{dom}}}(\bar T) \right)^{m_{\bar T}} \end{equs} with the notations defined in Proposition~\ref{factorisation_dom}, where $ A(T_j^{(1)}) $ is a set of decorated subtrees of $ T_j^{(1)} $. The term $ \hat \Pi^n\left( T_j^{(1)} \right)(\xi) $ can be mapped back to physical space using Proposition~\ref{Fourierproduct}. The polynomial $ P(\xi) $ will produce derivatives of type $ \nabla^{\ell} $ and the term $ e^{i \xi\mathscr{F}_{\text{\tiny{dom}}}( T_j^{(1)})}$ is of the form $ e^{i\xi \nabla^{m}} $. For the terms $ A^n(T_j^{(2)}) $, we use the non-recursive definition of the map $ \Delta $. Indeed, $ T_j^{(2)} $ is a product of trees of the form $ T_e $ where $ e $ is an edge of $ T $ that was cut. The map $ A^n$ is defined from $ \Pi^n $ in \eqref{An}. Then we can apply the induction hypothesis on $ A^n(T_e) $. For each $ T_e = \CI_{(\mathfrak{t},p)}( \lambda_{k_e}^{\ell_e} \bar T_e) $, $ k_e $ appears as a decoration at a leaf of $ T_j^{(1)} $. Then, it is either included in $ Q $ or disjoint from it. This allows us to apply the inverse Fourier transform, concluding the proof. \end{proof} \section{A general numerical scheme}\label{sec:genScheme} Recall the mild solution of \eqref{dis} given by Duhamel's formula \begin{equation}\label{duhLin_it} u(t) = e^{ it \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} v - i\vert \nabla \vert^\alpha e^{ it \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} \int_0^t e^{ -i\xi \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} p(u(\xi),\bar u(\xi)) d\xi . \end{equation} For simplicity we restrict our attention to nonlinearities of the type \begin{equs}\label{poly} p(u, \bar u) = u^N \bar u^M \end{equs} which includes all examples in Section~\ref{sec:examples}.
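To illustrate the iterated integrals that the decorated trees introduced below encode, note that replacing $ u $ under the integral in \eqref{duhLin_it} by the free flow $ e^{ i\xi \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} v $ yields the first Picard iterate (the notation $ u_1 $ is introduced here only for this illustration): \begin{equs} u_1(t) = e^{ it \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} v - i\vert \nabla \vert^\alpha e^{ it \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} \int_0^t e^{ -i\xi \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} p\left( e^{ i\xi \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} v, \overline{e^{ i\xi \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right)} v} \right) d\xi. \end{equs} Iterating this substitution produces the oscillatory iterated integrals which we now organise through decorated trees.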
The analysis which follows can straightforwardly be generalised to polynomial nonlinearities and coupled systems. In order to describe our general numerical scheme, we first describe the iterated integrals produced by the (high order) iteration of Duhamel's formula \eqref{duhLin_it} through a class of suitable decorated trees. \subsection{Decorated trees generated by Duhamel's formula} With the aid of the Fourier series expansion $u(t,x) = \sum_{k\in \Z^d} \hat u_k(t) e^{i k x}$ we first rewrite Duhamel's formula \eqref{duhLin_it} at the level of the Fourier coefficients: \begin{equation}\label{duhLin_it_Fourier} \hat u_k(t) = e^{ it P(k)} \hat v_k - i \vert \nabla\vert^{\alpha} (k)e^{ it P(k)} \int_0^t e^{ -i\xi P(k) } p_k(u(\xi),\bar u(\xi)) d\xi \end{equation} where $P(k)$ denotes the differential operator $\mathcal{L}$ in Fourier space, i.e., \begin{equs} P(k) = \mathcal{L}\left(\nabla,\frac{1}{\varepsilon}\right)(k) \end{equs} (cf. \eqref{Lldef} and \eqref{Leps}, respectively) and \begin{equs} p_k(u(t),\bar u(t)) {\, :=} \sum_{k =\sum_{i} k_i - \sum_j \bar k_j} \prod_{i=1}^N \hat u_{k_i}(t) \prod_{j=1}^M \bar{\hat{u}}_{\bar k_j}(t). \end{equs} This equation is given in an abstract way by \begin{equs} \label{recursion_tree_f} U_k = \CI_{(\mathfrak{t}_1,0)}( \lambda_k) + \CI_{(\mathfrak{t}_1,0)}( \lambda_k \CI_{(\mathfrak{t}_2,0)}( \lambda_k p_k(U, \bar U ) ) ) \end{equs} where \begin{equs} p_k(U,\bar U) {\, :=} \sum_{k =\sum_i k_i - \sum_j \bar k_j} \prod_{i=1}^N U_{k_i} \prod_{j=1}^M \bar{U}_{\bar k_j} \end{equs} and $\mathfrak{L}=\{\mathfrak{t}_1,\mathfrak{t}_2\}$, $P_{\mathfrak{t}_1}( \lambda) = P( \lambda)$ and $P_{\mathfrak{t}_2}( \lambda) = - P( \lambda)$. \begin{remark} The two systems \eqref{duhLin_it_Fourier} and \eqref{recursion_tree_f} are equivalent.
Indeed, we can define a map $ \psi $ such that: \begin{equs} \psi(U_k)(v,u,t) & = \hat u_k(t) , \quad \psi(\bar U_k)(v,u,t) = \bar{\hat{u}}_k(t) \\ \psi\left( \CI_{o_1}( \lambda_k) \right)(v,u,t) & = e^{ it P_{o_1}(k)} \hat v_k \\ \psi\left( \CI_{o_1}( \lambda_k T) \right)(v,u,t) & = e^{ it P_{o_1}(k)} \psi\left( T \right)(v,u,t) \\ \psi\left( \CI_{o_2}( \lambda_k T) \right)(v,u,t) & = - i \vert \nabla \vert^\alpha(k) \int_{0}^{t} e^{ i \xi P_{o_2}(k)} \psi\left( T \right)(v,u,\xi) d\xi. \end{equs} In this notation \eqref{duhLin_it_Fourier} takes the form \begin{equs} \psi(U_k )(v,u,t) = \psi\left(\CI_{(\mathfrak{t}_1,0)}( \lambda_k) \right)(v,u,t)+ \psi\left(\CI_{(\mathfrak{t}_1,0)}( \lambda_k \CI_{(\mathfrak{t}_2,0)}( \lambda_k p_k(U, \bar U ) ) )\right)(v,u,t). \end{equs} \end{remark} We define the notion of a rule in the same spirit as in \cite{BHZ}. A rule is then a map $R$ assigning to each element of $\mathfrak{L} \times \lbrace 0,1 \rbrace$ a non-empty collection of tuples in $\mathfrak{L} \times \lbrace 0,1 \rbrace$. The relevant rule describing the class of equations \eqref{recursion_tree_f} is given by \begin{equs} R( (\mathfrak{t}_1,p) ) & = \{(), ( (\mathfrak{t}_2,p) ) \} \\ R( (\mathfrak{t}_2,p) ) & = \{ ( (\mathfrak{t}_1,p)^{N}, (\mathfrak{t}_1,p+1)^{M}) \} \end{equs} where $N$ and $M$ depend on the polynomial nonlinearity \eqref{poly} and the notation $ (\mathfrak{t}_1,p+1)^{M} $ means that $ (\mathfrak{t}_1,p+1) $ is repeated $ M $ times and the sum $ p+1 $ is performed modulo $2$.
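To illustrate the rule on a concrete case, take for instance the cubic nonlinearity $ p(u, \bar u) = u^2 \bar u $, that is, $ N = 2 $ and $ M = 1 $ (this choice serves only as an example). Then the rule reads \begin{equs} R( (\mathfrak{t}_2,0) ) = \{ ( (\mathfrak{t}_1,0), (\mathfrak{t}_1,0), (\mathfrak{t}_1,1) ) \}, \quad R( (\mathfrak{t}_2,1) ) = \{ ( (\mathfrak{t}_1,1), (\mathfrak{t}_1,1), (\mathfrak{t}_1,0) ) \}, \end{equs} so that below an edge of type $ (\mathfrak{t}_2,0) $ one finds two edges of type $ (\mathfrak{t}_1,0) $ and one conjugated edge of type $ (\mathfrak{t}_1,1) $, in accordance with the tree of Example~\ref{ex:SUpsNLS1}.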
Using graphical notation, one gets: \begin{equation}\label{e:ru} \begin{aligned} R\bigg( \begin{tikzpicture}[scale=0.2,baseline=0.32cm] \node at (0,0) [dot] (k) {}; \node at (0,5) (l) {}; \draw[kernel1] (l) -- node [rect1] {\tiny$\mathfrak{t}_1, p $} (k) ; \end{tikzpicture} \bigg) & =\bigg\{ (), \ \bigg( \begin{tikzpicture}[scale=0.2,baseline=0.32cm] \node at (0,0) [dot] (k) {}; \node at (0,5) (l) {}; \draw[kernel1] (l) -- node [rect1] {\tiny$\mathfrak{t}_2, p $} (k) ; \end{tikzpicture} \bigg) \bigg\} \\ R\bigg( \begin{tikzpicture}[scale=0.2,baseline=0.32cm] \node at (0,0) [dot] (k) {}; \node at (0,5) (l) {}; \draw[kernel1] (l) -- node [rect1] {\tiny$\mathfrak{t}_2, p $} (k) ; \end{tikzpicture} \bigg) & =\bigg\{ \bigg( \bigg( \begin{tikzpicture}[scale=0.2,baseline=0.32cm] \node at (0,0) [dot] (k) {}; \node at (0,5) (l) {}; \draw[kernel1] (l) -- node [rect1] {\tiny$\mathfrak{t}_1, p $} (k) ; \end{tikzpicture} \bigg)^{N}, \ \bigg( \begin{tikzpicture}[scale=0.2,baseline=0.32cm] \node at (0,0) [dot] (k) {}; \node at (0,5) (l) {}; \draw[kernel1] (l) -- node [rect1] {\tiny$\mathfrak{t}_1, p +1 $} (k) ; \end{tikzpicture} \bigg)^{M} \bigg) \bigg\} \end{aligned} \end{equation} \begin{definition} \label{rules} A decorated tree $ T_{\mathfrak{e}}^{\mathfrak{f}} $ in $ \hat \mathcal{T}_0 $ is generated by $ R $ if for every node $ u $ in $ N_T $, one has \begin{equs} \cup_{e \in E_u}(\mathfrak{t}(e),\mathfrak{p}(e)) \in R(e_u) \end{equs} where $ E_u \subset E_T $ are the edges of the form $ (u,v) $ and $ e_u $ is the edge of the form $ (w,u) $. The set of decorated trees generated by $ R $ is denoted by $ \hat \mathcal{T}_0(R) $ and for $ r \in \Z $, $ r \geq -1 $, we set: \begin{equs} \hat \mathcal{T}_0^{r}(R) = \lbrace T_{\mathfrak{e}}^{\mathfrak{f}} \in \hat \mathcal{T}_0{ (R)} \, , \deg(T_{\mathfrak{e}}^{\mathfrak{f}}) \leq r +1 \rbrace. 
\end{equs} \end{definition} Given a decorated tree $ T_{\mathfrak{e}} = (T,\mathfrak{e})$ where we just have the edge decoration, the symmetry factor $S(T_{\mathfrak{e}})$ is defined inductively by setting $S(\mathbf{1})\, { =} 1$, while if $T$ is of the form \begin{equs} \prod_{i,j} \mathcal{I}_{(\mathfrak{t}_{t_i},p_i)}\left( T_{i,j}\right)^{\beta_{i,j}} \end{equs} with $T_{i,j} \neq T_{i,\ell}$ for $j \neq \ell$, then \begin{align}\label{S} S(T) \, { :=} \Big( \prod_{i,j} S(T_{i,j})^{\beta_{i,j}} \beta_{i,j}! \Big)\;. \end{align} We extend this definition to any tree $ T_{\mathfrak{e}}^{\mathfrak{n},\mathfrak{f}} $ in $ \mathcal{T} $ by setting: \begin{equs} S(T_{\mathfrak{e}}^{\mathfrak{n},\mathfrak{f}} )\, { :=} S(T_{\mathfrak{e}} ). \end{equs} Then, we define the map $ \Upsilon^{p}(T)(v) $ for \begin{equs} T = \CI_{(\mathfrak{t}_2,0)}( \lambda_k \prod_{i=1}^N \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_i} T_i) \prod_{j=1}^M \CI_{(\mathfrak{t}_1,1)}( \lambda_{\tilde k_j} \tilde T_j) ) \end{equs} by \begin{equs}\label{upsi} \Upsilon^{p}(T)(v)& \, { :=} \partial_v^{N} \partial_{\bar v}^{M} p(v,\bar v) \prod_{i=1}^N \Upsilon^p( \lambda_{k_i} T_i)(v) \prod_{j=1}^M \overline{\Upsilon^p( \lambda_{\tilde k_j}\tilde T_j)(v)}\\ & = N! M! \prod_{i=1}^N \Upsilon^p( \lambda_{k_i} T_i)(v) \prod_{j=1}^M \overline{\Upsilon^p( \lambda_{\tilde k_j}\tilde T_j)(v)} \end{equs} and \begin{equs} \Upsilon^{p}(\CI_{(\mathfrak{t}_1,0)}( \lambda_{k}) )(v) &\, { :=} \hat v_k, \quad \Upsilon^{p}(\CI_{(\mathfrak{t}_1,0)}( \lambda_{k} \tilde T ))(v) \, { :=} \Upsilon^{p}( \lambda_k \tilde T) (v), \\ \Upsilon^{p}(\CI_{(\mathfrak{t}_1,1)}( \lambda_{k}) )(v) &\, { :=} \bar{\hat{v}}_k, \quad\Upsilon^{p}(\CI_{(\mathfrak{t}_1,1)}( \lambda_{k} \tilde T) )(v) \, { :=} \overline{\Upsilon^{p}( \lambda_k \tilde T) (v)}, \quad \tilde T \neq \mathbf{1} . 
\end{equs} \begin{example}\label{ex:SUpsNLS1} Assume that we have the tree \begin{equs} T = \CI_{(\mathfrak{t}_2,0)}( \lambda_k \CI_{(\mathfrak{t}_1,1)}( \lambda_{k_1} ) \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_2}) \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_3}) ) \end{equs} then \begin{equs} \Upsilon^{p}\left(T\right) (v) & = 2\Upsilon^{p}\left( \CI_{(\mathfrak{t}_1,1)}( \lambda_{k_1} ) \right) (v) \Upsilon^{p}\left( \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_2} ) \right) (v) \Upsilon^{p}\left( \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_3} ) \right) (v) \\& =2 \overline{\hat{v}}_{k_1} \hat{v}_{k_2} \hat{v}_{k_3} \end{equs} and $ S(T) = 2. $ \end{example} If we want to find a solution $ U $ of \eqref{recursion_tree_f} as a linear combination of decorated trees, then we also need to give a meaning to the conjugate $ \bar U $ of $ U $. We define this operation on $ \hat \mathcal{T} $ recursively as: \begin{equs}\label{bar} \overline{ \CI_{(\mathfrak{t},p)}( \lambda_k T)} = \CI_{(\mathfrak{t},p+1)}( \lambda_{k} \overline{T}), \quad \overline{T_1 \cdot T_2 } = \overline{T}_1 \cdot \overline{T}_2. \end{equs} This map is well-defined from $ \hat \mathcal{H} $ into itself and preserves the identity \eqref{innerdecoration}. We want to find maps $ V^{r}_k $, $ r \in \Z $, $ r \geq -1 $, such that \begin{equs} \label{equa_recurs} V^{r+1}_k = \CI_{(\mathfrak{t}_1,0)}( \lambda_k ) + \CI_{(\mathfrak{t}_1,0)}( \lambda_k \CI_{(\mathfrak{t}_2,0)}( \lambda_k p_k(V^{r}, \overline{V^{r}} ) ) ) . \end{equs} An explicit expression is given in the next proposition. \begin{proposition} The solution in $ \hat \mathcal{H}$ of \eqref{equa_recurs} is given by the trees generated by the rule $ R $: \begin{equs} V^{r}_k (v) = \sum_{T \in \hat \mathcal{T}_0^{r}(R)} \frac{\Upsilon^{p}( \lambda_k T)(v)}{S(T)} \CI_{(\mathfrak{t}_1,0)}( \lambda_k T). \end{equs} \end{proposition} \begin{proof} We prove this by induction.
We need to expand \begin{equs} \label{recursion_tree_f_t} Z_k = \CI_{(\mathfrak{t}_1,0)}( \lambda_k ) + \CI_{(\mathfrak{t}_1,0)}( \lambda_k \CI_{(\mathfrak{t}_2,0)}( \lambda_k p_k(V^{r}, \overline{V^{r}} ) ) ) . \end{equs} One has \begin{equs} Z_k & = \CI_{(\mathfrak{t}_1,0)}( \lambda_k ) + \sum_{T_i, \tilde T_j \in \hat \mathcal{T}_0^{r}(R)} \sum_{k = \sum_i k_i - \sum_j \tilde k_j } \prod_{i,j} \frac{\Upsilon^{p}(T_i)}{S(T_i)} \frac{\overline{\Upsilon^{p}(\tilde T_j)}}{S(\tilde T_j)} \\ & \cdot \CI_{(\mathfrak{t}_1,0)}( \lambda_k \CI_{(\mathfrak{t}_2,0)}( \lambda_k\left( \prod_i \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_i} T_i) \prod_j \CI_{(\mathfrak{t}_1,1)}( \lambda_{\tilde k_j} \tilde T_j) \right))). \end{equs} Then, we fix the products $ \prod_i \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_i} T_i) $ and $ \prod_j \CI_{(\mathfrak{t}_1,1)}( \lambda_{\tilde k_j} \tilde T_j) $ which can be rewritten as follows: \begin{equs} \prod_i \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_i} T_i) = \prod_{\ell} \CI_{(\mathfrak{t}_1,0)}( \lambda_{m_\ell} F_{\ell})^{\beta_{\ell}} \\ \prod_j \CI_{(\mathfrak{t}_1,1)}( \lambda_{\tilde k_j} \tilde T_j) = \prod_{\ell} \CI_{(\mathfrak{t}_1,1)}( \lambda_{\tilde{m}_\ell} \tilde F_{\ell})^{\alpha_{\ell}} \end{equs} where the $ F_{\ell} $ (resp. $ \tilde F_{\ell} $) are pairwise distinct. The number of times the same term appears when we sum over the $ F_{\ell} $ (resp. $ \tilde F_{\ell} $) and the $ k_i $ (resp. $ \tilde k_j $) is equal to $ \frac{N!}{\prod_{\ell} \beta_{\ell} !} $ (resp. $ \frac{M!}{\prod_{\ell} \alpha_{\ell} !} $).
With the identity \begin{equs} \prod_{i,j} \frac{\Upsilon^{p}(T_i)}{S(T_i)} \frac{\overline{\Upsilon^{p}(\tilde T_j)}}{S(\tilde T_j)} \frac{N!}{\prod_{\ell} \beta_{\ell} !} \frac{M!}{\prod_{\ell} \alpha_{\ell} !} = \frac{\Upsilon^{p}( \lambda_k T)}{S(T)} \end{equs} we thus get \begin{equs} Z_k = \sum_{T \in \hat \mathcal{T}_0^{r+1}(R)} \frac{\Upsilon^{p}( \lambda_k T)}{S(T)} \CI_{(\mathfrak{t}_1,0)}( \lambda_k T) = V^{r+1}_{k} \end{equs} which concludes the proof. \end{proof} Before describing our numerical scheme, we need to remove the trees which are already of size $\CO(\tau^{r+2})$. Indeed, one has \begin{equs} (\Pi T)( \tau) = \CO(\tau^{n_+(T)}) \end{equs} where $n_+(T) $ is the number of edges whose type belongs to $ \mathfrak{L}_+ $, which corresponds to the number of integrations in the definition of $ \Pi T $. Therefore, we define the space of trees $ \mathcal{T}^{r}_{0}(R) $ as \begin{equs} \label{space_scheme} \mathcal{T}^{r}_{0}(R) = \lbrace T \in \hat \mathcal{T}^{r}_{0}(R), \, n_+(T) \leq r +1 \rbrace. \end{equs} \subsection{Numerical scheme and local error analysis} Now, we are able to describe the general numerical scheme: \begin{definition}[The general numerical scheme] \label{scheme} For fixed $ n, r \in \N $, we define the general numerical scheme in Fourier space as: \begin{equs}\label{genscheme} U_{k}^{n,r}(\tau, v) = \sum_{T \in \mathcal{T}^{r+2}_{0}(R)} \frac{\Upsilon^{p}( \lambda_kT)(v)}{S(T)} \Pi^n \left( \CD^r(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T)) \right)(\tau). \end{equs} \end{definition} \begin{remark} The sum appearing in the expression~\eqref{genscheme} runs over infinitely many trees (finitely many shapes, but infinitely many ways of splitting up the frequency $ k $ among the branches). One can notice that each leaf decorated by $ k_i $ is associated with $ v_{k_i} $ coming from the term $\Upsilon^{p}( \lambda_kT)(v) $.
Therefore with an appropriate analytical assumption on the initial value $ v $, i.e., if $v$ belongs to a sufficiently smooth Sobolev space, this sum converges in a suitable norm, see also Remark~\ref{sobolev_remark} below and the regularity assumptions detailed in Section \ref{sec:examples}. \end{remark} \begin{remark} We can always map the term $ U_{k}^{n,r}(\tau, v) $ back to physical space using classical operators. Indeed, from Proposition~\ref{physical_space} this holds true for each term $ \Pi^n \left( \CD^r(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T)) \right)(\tau) $. In practical applications this will allow us to carry out the multiplication of functions in physical space, using the Fast Fourier Transform (cf. Remark \ref{rem:FFT}). Details for concrete applications are given in Section \ref{sec:examples}. \end{remark} \begin{remark} The spaces $ \CV_{k}^{r} $ given in \eqref{decoratedV1} and \eqref{decoratedV2} are defined by: \begin{equs} \CV_{k}^{r} = \lbrace \CI_{(\mathfrak{t}_1,0)}( \lambda_k T), \, T \in \mathcal{T}^{r+2}_{0}(R) \rbrace. \end{equs} \end{remark} The numerical scheme \eqref{genscheme} approximates the exact solution locally up to order $r+2$. More precisely, the following theorem holds. \begin{theorem}[Local error]\label{thm:genloc} The numerical scheme \eqref{genscheme} with initial value $v = u(0)$ approximates the exact solution $U_{k}(\tau,v) $ up to a local error of type \begin{equs} U_{k}^{n,r}(\tau,v) - U_{k}(\tau,v) = \sum_{T \in \mathcal{T}^{r+2}_{0}(R)} \CO\left(\tau^{r+2} \CL^{r}_{\text{\tiny{low}}}(T,n) \Upsilon^{p}( \lambda_kT)(v) \right) \end{equs} where the operator $\CL^{r}_{\text{\tiny{low}}}(T,n)$, given in Definition \ref{def:Llow}, embeds the necessary regularity of the solution.
\end{theorem} \begin{proof} First we define the exact solution $u^r$ up to order $ r $ in Fourier space by \begin{equs} U_{k}^{r}(\tau,v) = \sum_{T \in \mathcal{T}^{r+2}_{0}(R)} \frac{\Upsilon^{p}( \lambda_kT)(v)}{S(T)} \Pi \left( \CI_{(\mathfrak{t}_1,0)}( \lambda_k T) \right)(\tau) \end{equs} which satisfies \begin{equs}\label{app1} u(\tau) - u^r(\tau) = \CO\left( \tau^{r+2} \vert \nabla \vert^{\alpha(r+2)} \tilde p (u(t))\right) \end{equs} for some polynomial $\tilde p$ and $0 \leq t \leq \tau$. Thanks to Proposition~\ref{approxima_tree} we furthermore obtain that \begin{equs}\label{app2} U_{k}^{n,r} & (\tau,v) - U_{k}^{r}(\tau,v) \\ & = \sum_{T \in \mathcal{T}^{r+2}_{0}(R)} \frac{\Upsilon^{p}( \lambda_kT)(v)}{S(T)} \left( \Pi -\Pi^{n,r} \right) \left( \CI_{(\mathfrak{t}_1,0)}( \lambda_k T)\right)(\tau) \\ & = \sum_{T \in \mathcal{T}^{r+2}_{0}(R)} \CO\left(\tau^{r+2} \CL^{r}_{\text{\tiny{low}}}(T,n) \Upsilon^{p}( \lambda_kT)(v)\right). \end{equs} Next we write \begin{equs} U_{k}^{n,r}(\tau,v) - U_{k}(\tau,v) & = U_{k}^{n,r}(\tau,v) - U_k^r(\tau,v) + U_k^r(\tau,v) - U_{k}(\tau,v) \end{equs} where by the definition of $\CL^{r}_{\text{\tiny{low}}}(T,n)$ we easily see that the approximation error~\eqref{app2} is dominant compared to \eqref{app1}. \end{proof} \begin{remark}\label{rem:glob} Theorem \ref{thm:genloc} allows us to state the order of consistency of the general scheme \eqref{genscheme} as well as the necessary regularity requirements on the exact solution to meet the error bound. In Section~\ref{sec:examples} we detail the particular form of the general scheme \eqref{genscheme} on concrete examples and explicitly determine the required regularity of the solution in the local error imposed by the operator $\CL^{r}_{\text{\tiny{low}}}(T,n)$. \end{remark} \begin{remark} \label{sobolev_remark} Theorem \ref{thm:genloc} provides a local error estimate (order of consistency) for the new resonance based schemes \eqref{genscheme}.
With the aid of stability one can then obtain a global error estimate via Lady Windermere's fan argument \cite{H2Tri}. However, the necessary stability estimates in general rely on the algebraic structure of the underlying space. In the stability analysis of dispersive PDEs set in Sobolev spaces $H^r$ one classically exploits bilinear estimates of type \[ \Vert v w \Vert_r \leq c_{r,d} \Vert v \Vert_r \Vert w \Vert_r. \] The latter only hold for $r>d/2$ and thus restrict the analysis to sufficiently smooth Sobolev spaces $H^r$ with $r>d/2$. To obtain (sharp) $L^2$ global error estimates one needs to exploit discrete Strichartz estimates and discrete Bourgain spaces in the periodic setting, see, e.g., \cite{IZ09,ORS19,ORS20}. This is beyond the scope of this paper. \end{remark} \begin{proposition}\label{Tay} For $ n $ sufficiently large the scheme \eqref{genscheme} coincides with a classical numerical discretisation based on Taylor series expansions of the full operator $\mathcal{L}$. \end{proposition} \begin{proof} From Theorem~\ref{thm:genloc}, one has \begin{equs} U_{k}^{n,r}(\tau,v) - U_{k}^{r}(\tau,v) = \sum_{T \in \mathcal{T}^{r+2}_{0}(R)} \CO\left(\tau^{r+2} \CL^{r}_{\text{\tiny{low}}}(T,n) \Upsilon^{p}( \lambda_kT)(v)\right). \end{equs} We need to show that for $ n $ bigger than $ \deg(P_{\mathfrak{t}_2}^r) $, $ U_{k}^{n,r}(\tau,v) $ is a polynomial in $ \tau $ and that $ \CL^{r}_{\text{\tiny{low}}}(T,n) = \CO(P_{\mathfrak{t}_2}^{r}(k)) $. Then by mapping it back to the physical space, we get: \begin{equs} U^{n,r}(\tau,v) - U^{r}(\tau,v) = \CO(\tau^{r+2} \partial_t^{r} v) \end{equs} where $ U^{n,r}(\tau,v) $ is a polynomial in $ \tau $. The two statements can be proven by induction. One needs to see how these properties are preserved by applying the map $ \CK^{k,r}_{o_2} ( \cdot,n) $.
Indeed, one has \begin{equs} \left( \Pi^n \CI^{r}_{o_2}( \lambda^{\ell}_k F) \right) (\tau) & = \CK^{k,r}_{o_2} \left( \Pi^n \left( \lambda^{\ell} \CD^{r-\ell-1}(F) \right),n \right)(\tau). \end{equs} Then by the induction hypothesis, we can assume that $ \Pi^n \left( \lambda^{\ell} \CD^{r-\ell-1}(F) \right)(\xi) $ is a polynomial in $ \xi $ where the coefficients are polynomials in $ k $. Then by applying Definition~\ref{Taylor_exp}, one has: \begin{equs} \tilde{g}(\xi) = e^{i \xi P_{o_2}(k)}. \end{equs} One can see that if $ n \geq \deg(P_{\mathfrak{t}_2}^{r}) $ then one Taylor expands $ \tilde{g} $ which yields a polynomial in $ \tau $ with polynomial coefficients in $ k $. Lemma~\ref{Taylor_bound} implies an error of order $ \CO(k^{n}) $. This concludes the proof. \end{proof} \begin{remark}\label{rem:Tay} Proposition \ref{Tay} implies that we indeed recover classical numerical discretisations with our general framework for smooth solutions. More precisely, one could check that depending on the particular choice of filter functions $\Psi$ (cf. Remark~\ref{rem:stab}) we recover exponential Runge--Kutta methods and exponential integrators, respectively. For details on the latter we refer to \cite{Berland,ButcherExp,HochOst10} and the references therein. \end{remark} In the examples in Section \ref{sec:examples} we will also state the local error in physical space. For this purpose we introduce the notation $\varphi^\tau$ and $\Phi^\tau$ which will denote the exact and numerical solution at time $t = \tau$, i.e., $\varphi^\tau(v) = u(\tau)$ and $\Phi^\tau(v) = u^1\approx u(\tau)$. 
We write \begin{equs}\label{lo} \varphi^\tau (v) - \Phi^\tau(v) = \mathcal{O}_{\Vert \cdot \Vert} \left(\tau^\gamma \tilde{\CL} v\right) \end{equs} if in a suitable norm $\Vert \cdot \Vert$ (e.g., Sobolev norm) it holds that \begin{equs} \Vert \varphi^\tau (v) - \Phi^\tau(v)\Vert \leq C(T,d,r) \tau^\gamma \sup_{0 \leq t \leq \tau} \Vert q\left(\tilde{\CL} \varphi^t (v)\right)\Vert \end{equs} for some polynomial $q$, differential operator $\tilde{\CL}$ and constant $C$ independent of $\tau$. If~\eqref{lo} holds we say that the numerical solution $u^1$ approximates the exact solution $u(t)$ at time $t = \tau$ with a local error of order $\mathcal{O}\left(\tau^\gamma \tilde{\CL} v\right) $. \section{Applications}\label{sec:examples} We illustrate the general framework presented in Section \ref{sec:genScheme} on three concrete examples. First we consider the nonlinear Schrödinger and the Korteweg-de Vries equation, for which we find a new class of second-order resonance based schemes. For an extensive overview on classical, non-resonance based discretisations we thereby refer to \cite{BeDe02,CanG15,CCO08,CoGa12,Duj09,Faou12,GauLu,HochOst10,HLRS10,HLR12,HKRT12,HKR99,IZ09,Klein06,Lubich08,YMQ88,T74,Ta12} and the references therein. In addition, we illustrate the general framework on a highly oscillatory system: The Klein--Gordon equation in the so-called non-relativistic limit regime, where the speed of light formally tends to infinity, see, e.g., \cite{BaoKGZUA,BaoZ,BD,BS19,BFS17,ChC}. \subsection{Nonlinear Schrödinger} \label{sec:nls} We consider the cubic nonlinear Schrödinger equation \begin{equation}\label{nls} i \partial_t u + \Delta u = \vert u\vert^2 u \end{equation} with mild solution given by Duhamel's formula \begin{equation}\label{DuhNLS} u(\tau) = e^{i \tau \Delta} u(0) - i e^{i \tau \Delta} \int_0^\tau e^{-i \xi \Delta} \left(\vert u(\xi)\vert^2 u(\xi)\right)d\xi. 
\end{equation} The Schrödinger equation \eqref{nls} fits into the general framework \eqref{dis} with \begin{equation*} \begin{aligned} \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right) =\Delta, \quad \alpha = 0 \quad \text{and}\quad p(u,\overline u) = u^2 \overline u. \end{aligned} \end{equation*} Here $ \mathfrak{L} = \lbrace \mathfrak{t}_1, \mathfrak{t}_2 \rbrace $, $ P_{\mathfrak{t}_1} = - \lambda^2 $ and $ P_{\mathfrak{t}_2} = \lambda^2 $ (cf.\ \eqref{Palpha}). Then, we denote by~$ \<thick> $ an edge decorated by $ (\mathfrak{t}_1,0) $, by $ \<thick2> $ an edge decorated by $ (\mathfrak{t}_1,1) $, by $\<thin>$ an edge decorated by $ (\mathfrak{t}_2,0) $ and by $\<thin2>$ an edge decorated by $ (\mathfrak{t}_2,1) $. The rules that generate the trees obtained by iterating Duhamel's formulation are given by: \begin{equ} R(\<thin>) = \{(\<thick>,\<thick>,\<thick2>) \}\;, \quad R(\<thin2>) = \{(\<thick2>,\<thick2>,\<thick>) \}\;,\quad R(\<thick>) = \{(\<thin>) , ()\}\;, \quad R(\<thick2>) = \{(\<thin2>) , ()\}\;. \end{equ} \subsubsection{First-order schemes} The general framework~\eqref{genscheme} derived in Section \ref{sec:genScheme} builds the foundation of the first-order resonance based schemes presented below for the nonlinear Schrödinger equation~\eqref{nls}. The structure of the schemes depends on the regularity of the solution. \begin{corollary}\label{corNLS1} For the nonlinear Schrödinger equation \eqref{nls} the general scheme~\eqref{genscheme} takes at first order the form \begin{align} \label{nls1low} u^{\ell+1}& = e^{i \tau \Delta} u^{\ell}- i \tau e^{i \tau \Delta} \left( (u^\ell)^2 \varphi_1(-2 i \tau \Delta) \overline{u^\ell}\right) \end{align} with a local error of order $\mathcal{O}(\tau^2 \vert \nabla \vert u)$ and filter function $\varphi_1(\sigma) = \frac{e^\sigma-1}{\sigma}$.
In case of regular solutions the general scheme~\eqref{genscheme} takes the simplified form \begin{align} \label{nls1class} u^{\ell+1} & = e^{i \tau \Delta} u^{\ell} - i \tau e^{i \tau \Delta} \left( \vert u^\ell\vert^2 u^\ell \right) \end{align} with a local error of order $\mathcal{O}(\tau^2 \Delta u)$. \end{corollary} \begin{remark} With the general framework introduced in Section \ref{sec:genScheme} we exactly recover the first-order resonance based low regularity scheme \eqref{nls1low} proposed in \cite{OS18}. In addition, for smooth solutions we recover a classical first-order approximation~\eqref{nls1class}, i.e., the exponential Euler method, with a classical local error $\mathcal{O}\left(\tau^2 \Delta u\right)$. The low regularity scheme \eqref{nls1low} allows us to treat a larger class of solutions thanks to its favorable error behavior at low regularity.\end{remark} \begin{proof} {\bf Construction of the schemes.} For the first-order scheme we need a local error of order $\mathcal{O}(\tau^2)$. Therefore, we need to choose $r = 0$ in Definition \ref{scheme} and the corresponding trees accordingly.
From Definition \ref{scheme}, the scheme is given by \begin{equs}\label{s1} U_{k}^{n,0}(\tau,v) = \sum_{T \in \mathcal{T}_0^2(R)} \frac{\Upsilon^{p}( \lambda_k T)(v)}{S(T)} \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T)) \right)(\tau) \end{equs} where one has \begin{equs} \mathcal{T}_0^2(R) = \lbrace T_0, T_1, \, k_i \in \Z^{d} \rbrace, \quad T_0 = \mathbf{1} \quad \text{and} \quad T_1 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} \end{equs} with $T_1$ associated to the first order iterated integral \begin{equation}\label{I1} \begin{aligned} \CI_1(v^2,\overline v, \xi ) & = \int_0^\xi e^{-i \xi_1 \Delta} \left[\left(e^{i \xi_1 \Delta} v\right)^2 \left(e^{-i \xi_1 \Delta} \overline v\right) \right]d\xi_1. \end{aligned} \end{equation} Hence, our first-order scheme \eqref{s1} takes the form \begin{equs}\label{nlsorder1} U_{k}^{n,0}(\tau, v) & = \frac{\Upsilon^{p}( \lambda_k)(v)}{S(\mathbf{1})} \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k)) \right)(\tau) \\& +\sum_{\substack{k_1,k_2,k_3\in \Z^d\\-k_1+k_2+k_3 = k}} \frac{\Upsilon^{p}( \lambda_k T_1)(v)}{S(T_1)} \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau).
\end{equs} In order to write down the scheme \eqref{nlsorder1} explicitly, we need to compute \begin{equs} & \Upsilon^{p}( \lambda_k)(v), \quad S(\mathbf{1}) \quad\text{and} \quad \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k ))\right)(\tau) \\ & \Upsilon^{p}( \lambda_k T_1)(v), \quad S(T_1) \quad\text{and} \quad \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1))\right)(\tau). \end{equs} Note that if we use the symbolic notation, one gets: \begin{equs}\label{tnls} T_1 = \CI_{(\mathfrak{t}_2,0)} \left( \lambda_k F_1 \right) \quad F_1 = \CI_{(\mathfrak{t}_1,1)}( \lambda_{k_1}) \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_2}) \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_3}),\\ \quad k = -k_1+k_2+k_3. \end{equs} Thanks to the definition of $ \Upsilon^{p}, S$ and Example \ref{ex:SUpsNLS1} we already know that \begin{equs} & \Upsilon^{p}( \lambda_k)(v) = \hat v_k , \quad S(\mathbf{1}) =1,\quad \Upsilon^{p}( \lambda_k T_1)(v) = 2 \overline{\hat{v}}_{k_1}\hat{v}_{k_2}\hat{v}_{k_3}, \quad S(T_1) = 2. \end{equs} Hence, the scheme \eqref{nlsorder1} takes the form \begin{equs}\label{nlsorder1a} U_{k}^{n,0}(\tau, v) & = \hat{v}_k \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k)) \right)(\tau) \\& + \sum_{\substack{k_1,k_2,k_3\in \Z^d\\-k_1+k_2+k_3 = k}}\overline{\hat{v}}_{k_1}\hat{v}_{k_2}\hat{v}_{k_3} \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau). \end{equs} It remains to compute $\Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_kT_j)) \right)(\tau)$ for $j=0,1$ with $T_0 = \mathbf{1}$ and $T_1$ given in \eqref{tnls}.\\ 1. {Computation of $\Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k)) \right)(\tau)$.} The second line in \eqref{recursive_pi_r} together with $P_{\mathfrak{t}_1}(k) = - k^2$ implies that \[ \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k)) \right)(\tau) = \Pi^n( \CI^0_{(\mathfrak{t}_1,0)}( \lambda_k))(\tau) = e^{i \tau P_{\mathfrak{t}_1}(k)} = e^{-i \tau k^2}. \] 2.
{Computation of $\Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_kT_1)) \right)(\tau)$.} The definition of the tree $T_1$ in \eqref{tnls} furthermore implies that \begin{equation}\label{a1} \begin{aligned} \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1))\right)(\tau) & = (\Pi^n \CI^{0}_{(\mathfrak{t}_1,0)}( \lambda_{k} T_1 ))(\tau) =e^{i \tau P_{\mathfrak{t}_1}(k)} (\Pi^n \CD^{0}(T_1))(\tau) \\& = e^{- i \tau k^2} (\Pi^n \CI^0_{(\mathfrak{t}_2,0)} \left( \lambda_k F_1 \right) )(\tau). \end{aligned} \end{equation} Furthermore, by the third line in \eqref{recursive_pi_r} we have \begin{equs} \left( \Pi^n \CI^{0}_{(\mathfrak{t}_2,0)}( \lambda_k F_1) \right) (\tau) & = \CK^{k,0}_{(\mathfrak{t}_2,0)} \left( \Pi^n \CD^{-1}(F_1) ,n \right)(\tau) . \end{equs} By the multiplicativity of $\Pi^n$ (see \eqref{recursive_pi_r}) we furthermore obtain \begin{equs} \Pi^n \CD^{-1}(F_1) (\xi) & = \left(\Pi^n \CI^{-1}_{(\mathfrak{t}_1,1)}( \lambda_{k_1}) \right)(\xi) \left(\Pi^n \CI^{-1}_{(\mathfrak{t}_1,0)}( \lambda_{k_2}) \right)(\xi) \left(\Pi^n \CI^{-1}_{(\mathfrak{t}_1,0)}( \lambda_{k_3})\right)(\xi) \\ & = e^{(-1)i \xi P_{\mathfrak{t}_1}(-k_1)} \left(\Pi^n \mathbf{1} \right)(\xi) e^{i \xi P_{\mathfrak{t}_1}(k_2)} \left(\Pi^n \mathbf{1} \right)(\xi) e^{i \xi P_{\mathfrak{t}_1}(k_3)} \left(\Pi^n \mathbf{1} \right)(\xi) \\ & = e^{ i \xi (k_1^2- k_2^2-k_3^2)} \end{equs} where we used again that $P_{\mathfrak{t}_1}(k_j) = - k_j^2$. Collecting the results and plugging them into \eqref{a1} yields that \begin{equs}\label{a2} (\Pi^n \CI^{0}_{(\mathfrak{t}_1,0)}( \lambda_{k} T_1 ))(\tau) = e^{-i \tau k^2} \CK^{k,0}_{(\mathfrak{t}_2,0)} \left( e^{ i \xi (k_1^2- k_2^2-k_3^2)},n\right)(\tau). \end{equs} Next we use Definition \ref{Taylor_exp}. 
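Before the phase is split, note that the total oscillation in \eqref{a2} combines $ e^{i\xi k^2} $ from the outer integration with $ e^{i\xi(k_1^2-k_2^2-k_3^2)} $; with $ k = -k_1+k_2+k_3 $ a direct expansion gives $ 2k_1^2 - 2k_1(k_2+k_3) + 2k_2k_3 $. A quick one-dimensional sanity check of this algebra (an illustrative sketch, not part of the scheme itself):

```python
# One-dimensional sanity check (our own sketch) of the phase decomposition:
# with k = -k1 + k2 + k3 the total oscillation k^2 + k1^2 - k2^2 - k3^2
# splits into a dominant part 2*k1^2 and a lower-order part
# -2*k1*(k2 + k3) + 2*k2*k3.
def phase(k1, k2, k3):
    k = -k1 + k2 + k3
    return k**2 + k1**2 - k2**2 - k3**2

def dom_plus_low(k1, k2, k3):
    L_dom = 2 * k1**2
    L_low = -2 * k1 * (k2 + k3) + 2 * k2 * k3
    return L_dom + L_low

# Both sides are quadratic polynomials; checking them on an integer grid
# confirms the identity.
assert all(phase(a, b, c) == dom_plus_low(a, b, c)
           for a in range(-3, 4) for b in range(-3, 4) for c in range(-3, 4))
```

The dominant part $ 2k_1^2 $ carries the highest power of a single frequency and is therefore the term that must be treated exactly, while the lower-order remainder may be Taylor expanded.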
Observe that (see Example \ref{exRDoNLS}), \begin{equs}\label{lowdom1} & \mathcal{L}_{\text{\tiny{dom}}} = 2 k_1^2, \quad \mathcal{L}_{\text{\tiny{low}}} = - 2 k_1 (k_2+k_3) + 2 k_2 k_3\\ & f(\xi) = e^{i \xi\mathcal{L}_{\text{\tiny{dom}}}}, \quad g(\xi) = e^{i \xi\mathcal{L}_{\text{\tiny{low}} }} \end{equs} and thus as $g(0) = 1$ we have \begin{equation*}\label{K0} \CK^{k,0}_{(\mathfrak{t}_2,0)} \left( e^{ i \xi (k_1^2- k_2^2-k_3^2)},n\right)(\tau) = \left\{ \begin{aligned} & - i \tau,\,\quad \text{if } n\geq 2\\ & -i\Psi_{n,0}^0 \left(\mathcal{L}_\text{dom}, 0\right)(\tau),\, \quad \text{if } n < 2 \\ \end{aligned} \right. \end{equation*} with \begin{equ}\label{Psi0} \Psi^{0}_{n,0}\left( \CL_{\text{\tiny{dom}}},0 \right)(\tau) = \int_0^{\tau} f(\xi) d \xi = \frac{e^{2 i \tau k_1^2}-1}{2 i k_1^2} = \tau \varphi_1(2i \tau k_1^2), \quad \text{if } n < 2. \end{equ} Plugging this into \eqref{a2} yields together with \eqref{a1} and \eqref{nlsorder1a} that \begin{equs}\label{Fscheme1} \begin{aligned} U_{k}^{n= 1,0}(\tau, v) & = e^{-i \tau k^2} \hat{v}_k -i \tau \sum_{\substack{k_1,k_2,k_3\in \Z^d\\-k_1+k_2+k_3 = k}}\overline{\hat{v}}_{k_1}\hat{v}_{k_2}\hat{v}_{k_3} e^{-i \tau k^2} \varphi_1(2i \tau k_1^2) \\ U_{k}^{n > 1,0}(\tau, v) & =e^{-i \tau k^2} \hat{v}_k - i \tau \sum_{\substack{k_1,k_2,k_3\in \Z^d\\-k_1+k_2+k_3 = k}} \overline{\hat{v}}_{k_1}\hat{v}_{k_2}\hat{v}_{k_3} e^{-i \tau k^2}. \end{aligned} \end{equs} Note that the above Fourier based resonance schemes can easily be transformed back to physical space yielding the two first-order iterative schemes \eqref{nls1low} and \eqref{nls1class} which depend on the smoothness $n$ of the exact solution.
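To make this concrete, one step of the low regularity scheme \eqref{nls1low} can be sketched with a one-dimensional Fourier pseudo-spectral discretisation. This is a minimal illustration under assumed conventions (numpy FFT ordering, integer frequencies on the torus); the function names are ours and this is not the implementation used for the numerical experiments:

```python
import numpy as np

def phi1(sigma):
    """phi1(sigma) = (exp(sigma) - 1)/sigma, continuously extended by phi1(0) = 1."""
    out = np.ones_like(sigma)
    nz = sigma != 0
    out[nz] = (np.exp(sigma[nz]) - 1.0) / sigma[nz]
    return out

def nls_lowreg_step(u, tau):
    """One step of the first-order resonance based scheme (nls1low) for
    i u_t + u_xx = |u|^2 u with periodic boundary conditions (1d torus)."""
    k = np.fft.fftfreq(u.size, d=1.0 / u.size)   # integer frequencies
    lap = -k**2                                   # Fourier symbol of the Laplacian
    free = lambda w: np.fft.ifft(np.exp(1j * tau * lap) * np.fft.fft(w))
    # phi1(-2 i tau Delta) applied to the conjugate of u
    ubar = np.fft.ifft(phi1(-2j * tau * lap) * np.fft.fft(np.conj(u)))
    return free(u) - 1j * tau * free(u**2 * ubar)
```

For constant initial data every Fourier multiplier acts as the identity and one step reduces to $ u^1 = u^0 - i\tau \vert u^0\vert^2 u^0 $, consistent with the exact flow up to $ \mathcal{O}(\tau^2) $.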
\noindent {\bf Local error analysis.} It remains to show that the general local error bound stated in Theorem \ref{thm:genloc} implies the local error \begin{equs}\label{loc1} \mathcal{O}\left(\tau^2 \nabla^n u\right), \quad n = 1,2 \end{equs} for the schemes \eqref{nls1low} (with $n=1$) and \eqref{nls1class} (with $n=2$), respectively. Theorem~\ref{thm:genloc} implies that \begin{equs} U_{k}^{n,0}(\tau,v) - U_{k}^{0}(\tau,v) = \sum_{ T \in \mathcal{T}_0^2(R)} \CO\left(\tau^{2} \CL^{0}_{\text{\tiny{low}}}(T,n) \Upsilon^{p}( \lambda_k T)(v) \right). \end{equs} By Definition \ref{def:Llow} and Remark \ref{rem:simpL} we have that $ \mathcal{L}^{0}_{\text{\tiny{low}}}(T_0) = 1$ and \begin{equs} & \mathcal{L}^{0}_{\text{\tiny{low}}}(T_1) = \mathcal{L}^{0}_{\text{\tiny{low}}}(\CI_{(\mathfrak{t}_2,0)}( \lambda_{k} F_1 ),n) = \mathcal{L}^{-1}_{\text{\tiny{low}}}(F_1,n) + \sum_j k^{\overline n_j}\\ & \overline{n}_j = \max(n, \deg( \mathscr{F}_{\text{\tiny{low}}} (\CI_{(\mathfrak{t}_2,0)}( \lambda^{\ell}_{k} F_j )))) \end{equs} where $\sum_j F_j = \mathcal{M}_{(1)} \Delta F_1$ with $\mathcal{M}_{(1)} \left( F_1 \otimes F_2 \right) = F_1$. Note that by \eqref{CoF1} we have that $ \Delta F_1 = F_1 \otimes \mathbf{1} $ such that $\sum_j F_j = F_1$. Hence, \begin{equs} \overline{n}_j = \overline{n}_1 = \max(n, \deg( \mathscr{F}_{\text{\tiny{low}}} (\CI_{(\mathfrak{t}_2,0)}( \lambda_{k} F_1)))). \end{equs} By Definition \ref{dom_freq} and Example \ref{exRDoNLS} we obtain that \begin{equs} \mathscr{F}_{\text{\tiny{low}}} (\CI_{(\mathfrak{t}_2,0)}( \lambda_{k} F_1)) = - 2k_1(k_2+k_3) + 2 k_2 k_3.
\end{equs} Hence, $\overline n_1 = \max(n, 1)$ and \begin{equs} \mathcal{L}^{0}_{\text{\tiny{low}}}(T_1) & = \mathcal{L}^{-1}_{\text{\tiny{low}}}(F_1,n) + k^{\max(n, 1)} \\ & = \mathcal{L}^{-1}_{\text{\tiny{low}}}\left(\CI_{(\mathfrak{t}_1,1)}( \lambda_{k_1}) ,n\right) + \mathcal{L}^{-1}_{\text{\tiny{low}}}\left(\CI_{(\mathfrak{t}_1,0)}( \lambda_{k_2}) ,n\right)\\ & + \mathcal{L}^{-1}_{\text{\tiny{low}}}\left(\CI_{(\mathfrak{t}_1,0)}( \lambda_{k_3}) , n\right) + k^{\max(n, 1)}\\ & = \mathcal{L}^{-1}_{\text{\tiny{low}}}\left(\mathbf{1},n\right) + \mathcal{L}^{-1}_{\text{\tiny{low}}}\left(\mathbf{1},n \right) + \mathcal{L}^{-1}_{\text{\tiny{low}}}\left(\mathbf{1},n \right) + k^{\max(n, 1)}. \end{equs} Using that $\mathcal{L}^{-1}_{\text{\tiny{low}}}\left(\mathbf{1},n\right) =1 $ we finally obtain that $ \mathcal{L}^{0}_{\text{\tiny{low}}}(T_1) = \CO( k^{\max(n, 1)}). $ Therefore, we recover \eqref{loc1}. \end{proof} \subsubsection{Second-order approximation}\label{sec:2NLS} The general framework~\eqref{genscheme} derived in Section \ref{sec:genScheme} builds the foundation of the second-order resonance based schemes presented below for the nonlinear Schrödinger equation~\eqref{nls}. The structure of the schemes depends on the regularity of the solution. \begin{corollary}\label{corNLS2} For the nonlinear Schrödinger equation \eqref{nls} the general scheme~ \eqref{genscheme} takes at second order the form \begin{equs}\label{nls2low} u^{\ell+1} & = e^{i \tau \Delta} u^\ell - i \tau e^{i \tau \Delta} \Big( (u^\ell)^2 \left(\varphi_1(-2i \tau \Delta) - \varphi_2(-2i \tau \Delta)\right) \overline{u^\ell}{ \Big)} \\& - i \tau \left(e^{i \tau \Delta} u^\ell\right)^2 \varphi_2 (-2i \tau \Delta)e^{i \tau \Delta} \overline{u^\ell} -\frac{\tau^2}{2} e^{i \tau \Delta} \left( \vert u^\ell\vert^4 u^\ell\right) \end{equs} with a local error of order $\mathcal{O}(\tau^3 \Delta u)$ and filter function $\varphi_2(\sigma) = \frac{e^\sigma-\varphi_1(\sigma)}{\sigma}$. 
In case of more regular solutions the general scheme \eqref{genscheme} takes the simplified form \begin{equation}\label{nls2mid} \begin{aligned} & u^{\ell+1} = e^{i \tau \Delta} u^\ell \\& - i \tau e^{i \tau \Delta} \left( (u^\ell)^2 \left(\varphi_1 (-2i \tau \Delta)-\frac{1}{2}\right) \overline{u^\ell} + \frac{1}{2} \left(e^{i \tau \Delta} u^\ell\right)^2e^{i \tau \Delta} \overline{u^\ell} \right)\\ & -\frac{\tau^2}{2} e^{i \tau \Delta} \left( \vert u^\ell\vert^4 u^\ell\right) \end{aligned} \end{equation} with a local error of order $\mathcal{O}(\tau^3 \nabla^3 u)$, and for smooth solutions \begin{equs}\label{nls2smooth} u^{\ell+1} & = e^{i \tau \Delta} u^\ell - i \tau e^{i \tau \Delta} \left(\Psi_1 - i \Psi_2 \frac12 \tau \Delta \right) \vert u^\ell\vert^2 u^\ell \\ & + \frac{\tau^2}{2} e^{i \tau \Delta}\Psi_3 \left( - (u^\ell)^2 \Delta \overline{u^\ell} + 2 \vert u^\ell\vert^2 \Delta u^\ell- \vert u^\ell\vert^4 u^\ell \right) \end{equs} with a local error of order $\mathcal{O}(\tau^3 \Delta^2 u)$ and suitable filter functions $\Psi_{1,2,3}$ satisfying \begin{equs} \Psi_{1,2,3}= \Psi_{1,2,3}\left(i \tau \Delta\right), \quad \Psi_{1,2,3}(0) = 1, \quad \Vert \tau \Psi_{1,2,3}\left(i \tau \Delta\right) \Delta \Vert \leq 1. \end{equs} \end{corollary} \begin{remark}\label{rem:nls2} With the general framework we recovered at first order exactly the resonance based first-order scheme \eqref{nls1low} derived in \cite{OS18}. The second-order schemes \eqref{nls2low} and \eqref{nls2mid} are new and allow us to improve the classical local error structure $\mathcal{O}\left( \tau^3 \Delta^2 u \right)$. We refer to \cite{Lubich08,CoGa12} for the error analysis of classical splitting and exponential integrators for the Schrödinger equation. The new second-order low regularity scheme \eqref{nls2low} moreover allows us to improve the recently introduced scheme \cite{KOS19}, which only allows for a local error of order $\mathcal{O}\left(\tau^{2+1/2} \Delta u \right)$.
Thus we break the order barrier of $3/2$ previously assumed for resonance based approximations for Schrödinger equations. With the new framework we in addition recover for smooth solutions classical second-order Schrödinger approximations obeying the classical local error structure $\mathcal{O}\left(\tau^3 \Delta^2 u\right)$: Depending on the choice of filter functions $\Psi_{1,2,3}$ the second-order scheme \eqref{nls2smooth} coincides with second-order exponential Runge--Kutta or exponential integrator methods (\cite{Berland,ButcherExp,HochOst10}), see also Remark \ref{rem:Tay}. The favorable error behavior of the new schemes for non-smooth solutions is underlined by numerical experiments, see Figure~\ref{fig:NLS}. \end{remark} \begin{proof} {\bf Construction of the schemes.} For the second-order scheme we need a local error of order $\mathcal{O}(\tau^3)$. Therefore, we need to choose $r = 1$ in Definition \ref{scheme} and the corresponding trees accordingly. This yields that \begin{equs}\label{nlsorder2} U_{k}^{n,1}(\tau, v) & = \frac{\Upsilon^{p}( \lambda_k)(v)}{S(\mathbf{1})} \Pi^n \left( \CD^1(\CI_{(\mathfrak{t}_1,0)}( \lambda_k)) \right)(\tau) \\& +\sum_{\substack{k_1,k_2,k_3\in \Z^d\\-k_1+k_2+k_3 = k}} \frac{\Upsilon^{p}( \lambda_k T_1)(v)}{S(T_1)} \Pi^n \left( \CD^1(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau) \\& +\sum_{\substack{k_1,k_2,k_3,k_4,k_5\in \Z^d\\-k_1+k_2+k_3 -k_4+k_5= k}} \frac{\Upsilon^{p}( \lambda_k T_2)(v)}{S(T_2)} \Pi^n \left( \CD^1(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_2)) \right)(\tau) \\& +\sum_{\substack{k_1,k_2,k_3,k_4,k_5\in \Z^d\\k_1-k_2-k_3 +k_4+k_5= k}} \frac{\Upsilon^{p}( \lambda_k T_3)(v)}{S(T_3)} \Pi^n \left( \CD^1(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_3)) \right)(\tau) \end{equs} with $T_1$ defined in \eqref{tnls} and \begin{equs} \mathcal{T}_0^3(R) = \lbrace T_0, T_1, T_2, T_3, \, k_i \in \Z^{d} \rbrace, \quad T_2 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2);
\coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,2); \coordinate (t4) at (0,4); \coordinate (t41) at (-2,6); \coordinate (t42) at (2,6); \coordinate (t43) at (0,8); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \draw[symbols] (t3) -- (t4); \draw[kernels2,tinydots] (t4) -- (t41); \draw[kernels2] (t4) -- (t42); \draw[kernels2] (t4) -- (t43); \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture}, \quad T_3 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,2); \coordinate (t4) at (0,4); \coordinate (t41) at (-2,6); \coordinate (t42) at (2,6); \coordinate (t43) at (0,8); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2,tinydots] (t3) -- (root); \draw[symbols] (root) -- (tri); \draw[symbols,tinydots] (t3) -- (t4); \draw[kernels2] (t4) -- (t41); \draw[kernels2,tinydots] (t4) -- (t42); \draw[kernels2,tinydots] (t4) -- (t43); \node[not] (rootnode) at (root) {}; \node[not] (rootnode) at (t4) {}; \node[not] (rootnode) at (t3) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{4}} $}}; \node[var] (rootnode) at (t41) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t42) {\tiny{$ k_{\tiny{3}} $}}; \node[var] (rootnode) at (t43) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_5 $}}; \end{tikzpicture}. 
\end{equs} In symbolic notation one gets \begin{equs}\label{nlst23} \begin{aligned} & T_2 = \CI_{(\mathfrak{t}_2,0)} \left( \lambda_k F_2 \right), \quad k = -k_1+k_2+k_3-k_4+k_5 \\ & F_2 = \CI_{(\mathfrak{t}_1,1)}( \lambda_{k_4}) \CI_{(\mathfrak{t}_1,0)} \left( \lambda_{-k_1+k_2+k_3} T_1 \right)\CI_{(\mathfrak{t}_1,0)}( \lambda_{k_5}), \\ & T_3 = \CI_{(\mathfrak{t}_2,0)} \left( \lambda_k F_3 \right), \quad k = k_1-k_2-k_3+k_4+k_5 \\ & F_3 = \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_4}) \overline{\CI_{(\mathfrak{t}_1,0)}\left( \lambda_{-k_1+k_2+k_3} T_1 \right)}\CI_{(\mathfrak{t}_1,0)}( \lambda_{k_5}), \end{aligned} \end{equs} where thanks to \eqref{bar} \begin{equs} \overline{\CI_{(\mathfrak{t}_1,0)}\left( \lambda_{-k_1+k_2+k_3} T_1 \right)}\ = \CI_{(\mathfrak{t}_1,1)}\left( \lambda_{-k_1+k_2+k_3} \overline{T_1} \right). \end{equs} Note that the trees $T_2,T_3$ correspond to the next iterated integrals \begin{equation}\label{I2} \begin{aligned} \CI_2( v^3,\overline v^2, \xi ) & = \int_0^\xi e^{-i \xi_1 \Delta} \left[\left(e^{i \xi_1 \Delta} v\right) \left(e^{-i \xi_1 \Delta} \overline v\right) \left( e^{i \xi_1 \Delta} \CI_1( v^2,\overline v, \xi_1 ) \right) \right] d\xi_1 \\ \CI_3( v^3,\overline v^2, \xi ) & = \int_0^\xi e^{-i \xi_1 \Delta} \left[\left(e^{i \xi_1 \Delta} v\right)^{2} \left( e^{-i \xi_1 \Delta} \overline{\CI_1( v^2,\overline v, \xi_1 )} \right) \right] d\xi_1. 
\end{aligned} \end{equation} The definitions \eqref{upsi} imply that \begin{equs} \Upsilon^{p}( \lambda_k T_2)(v) & = 2\Upsilon^{p}\left( \CI_{(\mathfrak{t}_1,1)}( \lambda_{k_4} ) \right) (v) \Upsilon^{p}\left( \CI_{(\mathfrak{t}_1,0)}( \lambda_{k} T_1) \right) (v) \Upsilon^{p}\left( \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_5} ) \right) (v)\\ & = 2\overline{\hat{v}}_{k_4} \left( 2 \overline{\hat{v}}_{k_1}\hat{v}_{k_2}\hat{v}_{k_3}\right) {\hat{v}}_{k_5} \\ \Upsilon^{p}( \lambda_k T_3)(v) & = 2 \Upsilon^{p}\left( \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_4} ) \right) (v) \Upsilon^{p}\left( \lambda_{k} \overline{T_1} \right)(v)\Upsilon^{p}\left( \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_5} ) \right) (v)\\ & = 2 \hat v_{k_4}\overline{ \left( 2 \overline{\hat{v}}_{k_1}\hat{v}_{k_2}\hat{v}_{k_3}\right) } \hat v_{k_5} = 2 \hat v_{k_4} \left(2 \hat v_{k_1} \overline{\hat{v}}_{k_2} \overline{\hat{v}}_{k_3}\right) \hat v_{k_5} \end{equs} and by \eqref{S} we obtain \begin{equs} S(T_2) = 1 \cdot 2 = 2, \quad S(T_3) = 2 \cdot 2 = 4. \end{equs} Next we have to compute $ \Pi^n \left( \CD^1(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_j)) \right)$ for $j=1, 2,3$.\\ \noindent 1. {Computation of $ \Pi^n \left( \CD^1(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)$:} Here $k= - k_1+k_2+k_3$. Similarly to \eqref{a2} we obtain that \begin{equ}\label{a3} \Pi^n \left( \CD^1(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau) = e^{-i \tau k^2} \CK^{k,1}_{(\mathfrak{t}_2,1)} \left( e^{ i \xi (k_1^2- k_2^2-k_3^2)},n\right)(\tau), \end{equ} where by \eqref{lowdom1} we have that if $ n\geq 4$ \begin{equs} \CK^{k,1}_{(\mathfrak{t}_2,1)} \left( e^{ i \xi (k_1^2- k_2^2-k_3^2)},n\right)(\tau) & = -i \tau + \left(k^2 + k_1^2 - k_2^2-k_3^2\right) \frac{\tau^2}{2} . 
\end{equs} If $ n < 4 $ \begin{equs} \CK^{k,1}_{(\mathfrak{t}_2,1)} \left( e^{ i \xi (k_1^2- k_2^2-k_3^2)},n\right)(\tau) = &-i \Psi_{n,0}^1(\mathcal{L}_{\text{\tiny{dom}}},0)(\tau) -i \frac{g(\tau)-1}{\tau} \Psi_{n,0}^1(\mathcal{L}_{\text{\tiny{dom}}},1)(\tau) , \end{equs} with \begin{equ} \Psi^{1}_{n,0}\left( \CL_{\text{\tiny{dom}}},\ell \right)(\tau) = \left\{ \begin{aligned} & \int_0^{\tau} \xi^{\ell} f(\xi) d \xi, \, \quad \text{if } 4 - \ell > n \text{ and } n < 4 , \\ & \sum_{m \leq 1-\ell} \frac{f^{(m)}(0)}{m!} \int_0^{\tau} \xi^{\ell+m} d \xi, \quad \text{if } 4 - \ell \leq n \text{ and } n < 4. \\ \end{aligned} \right. \end{equ} Hence,\begin{equation}\label{simpNLS} \begin{aligned} \Pi^{n=2} \left( \CD^1(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau) &=-i \tau e^{-i \tau k^2} \Big( \varphi_1(2i \tau k_1^2) + \left(g(\tau)-1\right) \varphi_2(2i \tau k_1^2) \Big)\\ \Pi^{n= 3} \left( \CD^1(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau) & = -i \tau e^{-i \tau k^2} \Big( \varphi_1(2i \tau k_1^2) + \frac{1}{2}\left(g(\tau)-1\right)\Big)\\ \Pi^{n \geq 4} \left( \CD^1(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau) & = e^{-i \tau k^2} \Big(-i \tau + \left(k^2 + k_1^2 - k_2^2-k_3^2\right) \frac{\tau^2}{2} \Big) \end{aligned} \end{equation} where $g(\tau) = e^{i \tau (k^2 - k_1^2-k_2^2-k_3^2)}$ and $\mathcal{L}(k) = \mathcal{L}_\text{dom}(k) + \mathcal{L}_\text{low}(k)$. \noindent 2. {Computation of $ \Pi^n \left( \CD^1(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_2)) \right)$:} Here $k = -k_1+k_2+k_3-k_4+k_5$. By \eqref{recursive_pi_r} and $ P_{\mathfrak{t}_1}(k) = -k^2$ we have \begin{equs} \Pi^n \left( \CD^1(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_2)) \right)(\tau) & = e^{-i \tau k^2 } \left( \Pi^n \CD^{1}(T_2)\right)(\tau) = e^{-i \tau k^2 } \left( \Pi^n \CI^1_{(\mathfrak{t}_2,0)} \left( \lambda_k F_2 \right) \right)(\tau)\\ & = e^{-i \tau k^2 } \CK_{(\mathfrak{t}_2,0)}^{k,1}\left(\Pi^n\left(\CD^{0}(F_2)\right),n\right)(\tau). 
\end{equs} Furthermore, by the multiplicativity of $\Pi^n$ we obtain with the aid of \eqref{a2} and~\eqref{K0} \begin{align*} \Pi^n\left(\CD^{0}(F_2)\right) (\tau)& = \left( \Pi^n \CI^0_{(\mathfrak{t}_1,1)}( \lambda_{k_4}) \right)(\tau) \left(\Pi^n \CI^0_{(\mathfrak{t}_1,0)} \left( \lambda_{k}T_1 \right)\right)(\tau) \left(\Pi^n \CI^0_{(\mathfrak{t}_1,0)}( \lambda_{k_5})\right) (\tau)\\ &= e^{-i \tau k_4^2} e^{-i \tau (-k_1+k_2+k_3)^2} \CK^{-k_1+k_2+k_3,0}_{(\mathfrak{t}_2,0)} \left( e^{ i \xi (k_1^2- k_2^2-k_3^2)},n\right)(\tau) e^{-i \tau k_5^2} \\ &= - i e^{-i \tau (k_4^2+k_5^2)} e^{-i \tau (-k_1+k_2+k_3)^2} \Psi_{n,0}^0 \left(\mathcal{L}_{\text{\tiny{dom}}}, 0\right)(\tau) \end{align*} where by \eqref{Psi0} and the fact that $n \geq 2$ we have that \begin{equs} \Psi_{n,0}^0 \left(\mathcal{L}_{\text{\tiny{dom}}}, 0\right)(\tau) = \tau. \end{equs} Hence, \begin{align} \label{computescheme2} \begin{aligned} \Pi^n \left( \CI^{1}_{(\mathfrak{t}_1,0)}( \lambda_k T_2) \right)(\tau) & = -i e^{-i \tau k^2 } \CK_{(\mathfrak{t}_2,0)}^{k,1}\left(\xi e^{-i \xi (k_4^2+k_5^2)} e^{-i \xi (-k_1+k_2+k_3)^2} ,n\right)(\tau)\\ & =- e^{-i \tau k^2 } \frac{\tau^2}{2} \end{aligned} \end{align} where we used again that $n \geq 2$.
\end{align*} Plugging the results from Computations 1--3 into \eqref{nlsorder2} yields that \begin{equs}\label{Fscheme2} U_k^{n,1}(\tau,v) & = e^{-i \tau k^2} \hat v_k \\& -i \tau \sum_{-k_1+k_2+k_3 = k}e^{-i \tau k^2} \frac{1}{\tau} \Pi^{n} \left( \CI^{1}_{(\mathfrak{t}_1,0)}( \lambda_k T_1) \right) (\tau) \, \overline{\hat{v}}_{k_1} \hat v_{k_2} \hat v_{k_3} \\ & - \frac{\tau^2}{2} \sum_{-k_1+k_2+k_3-k_4+k_5 = k} e^{-i \tau k^2}\overline{\hat{v}}_{k_1} \hat v_{k_2} \hat v_{k_3} \overline{\hat{v}}_{k_4} \hat v_{k_5} \end{equs} with $\Pi^{n} \left( \CD^1(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)$ given in \eqref{simpNLS}, where we have used that the last two sums in \eqref{nlsorder2} can be merged into one. The Fourier-based resonance schemes \eqref{Fscheme2} can easily be transformed back to physical space, yielding the three low-to-high regularity second-order iterative schemes \eqref{nls2low} -- \eqref{nls2smooth} which depend on the smoothness $n$ of the exact solution. {\bf Local error analysis.} It remains to show that the general local error bound given in Theorem \ref{thm:genloc} implies the local error \begin{equs}\label{loc2} \mathcal{O}\left( \tau^3 \nabla^n\right), \quad n = 2,3,4 \end{equs} of the schemes \eqref{nls2low} -- \eqref{nls2smooth}. Theorem \ref{thm:genloc} implies that \begin{equs}\label{locoErr2} U_{k}^{n,1}(\tau,v) - U_{k}^{1}(\tau,v) = \sum_{T \in \mathcal{T}_0^3(R) } \CO\left(\tau^{3} \CL^{1}_{\text{\tiny{low}}}(T,n) \Upsilon^{p}( \lambda_k T)(v) \right) \end{equs} where $\mathcal{T}_0^3(R) = \{ T_0, T_1, T_2, T_3\}$ with $T_0 = \mathbf{1}$, $T_1$ given in \eqref{tnls} and $T_2, T_3$ defined in \eqref{nlst23}.
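The coefficients in \eqref{simpNLS} involve $\varphi_1(\sigma) = \frac{e^{\sigma}-1}{\sigma}$ and, in the standard exponential-integrator convention, $\varphi_2(\sigma) = \frac{e^{\sigma}-1-\sigma}{\sigma^2}$. As a purely illustrative aside, a minimal sketch (Python; the helper name `phi` and the Taylor cutoff are our choices, not part of the schemes) of a stable evaluation of these functions, whose closed forms suffer from cancellation for small $\vert\sigma\vert$:

```python
import math
import cmath

def phi(j, z):
    """phi-functions of exponential integrators:
    phi_0(z) = e^z, phi_1(z) = (e^z - 1)/z, phi_2(z) = (e^z - 1 - z)/z^2.
    For small |z| the closed forms lose digits to cancellation, so we switch
    to the Taylor series phi_j(z) = sum_{m >= 0} z^m / (m + j)!."""
    if abs(z) < 1e-6:
        return sum(z**m / math.factorial(m + j) for m in range(6))
    if j == 0:
        return cmath.exp(z)
    if j == 1:
        return (cmath.exp(z) - 1.0) / z
    if j == 2:
        return (cmath.exp(z) - 1.0 - z) / (z * z)
    raise ValueError("only j = 0, 1, 2 are implemented")
```

In particular $\varphi_1(0) = 1$ and $\varphi_2(0) = \tfrac12$, the values taken when the dominant resonance vanishes.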
1) Computation of $\CL^{1}_{\text{\tiny{low}}}(T_0, n)$ and $\CL^{1}_{\text{\tiny{low}}}(T_1, n)$:\\ By Definition \ref{def:Llow} and Remark \ref{rem:simpL} we obtain similarly to the first-order scheme (here with $r=1$) that \begin{equs} \mathcal{L}^1_{\text{\tiny{low}}}\left(T_0,n\right) = 1, \quad \mathcal{L}^1_{\text{\tiny{low}}}\left(T_1,n\right) = k^{\text{\tiny{max}}(n,2)}. \end{equs} 2) Computation of $\CL^{1}_{\text{\tiny{low}}}(T_2, n)$:\\ Next we calculate (using again Definition \ref{def:Llow} and Remark \ref{rem:simpL}) that \begin{equs}\label{L1low} \mathcal{L}^1_{\text{\tiny{low}}}\left(T_2,n\right) & = \mathcal{L}^1_{\text{\tiny{low}}}\left(\CI_{(\mathfrak{t}_2,0)}( \lambda_k F_2),n\right) = \mathcal{L}^0_{\text{\tiny{low}}}\left(F_2,n\right) + \sum_{j} k^{\bar n_j} \end{equs} with \[ \bar n_j = \max(n, \deg( \mathscr{F}_{\text{\tiny{low}}} (\CI_{(\mathfrak{t}_2,0)}( \lambda_{k} F^j_2 ))^{2})) \] where $ \sum_j F_2^j = \mathcal{M}_{(1)} \Delta \CD^{r-1} (F_2)$ with $r=1$. Hence, we have to calculate $\Delta \CD^{0} (F_2)$.
By the multiplicativity of the coproduct, its recursive definition~\eqref{def_deltas} and the calculation of $\CD^r(T_1)$ given in \eqref{comps} we obtain that \begin{equs} \Delta \CD^r (F_2) & =\Delta \CI^r_{(\mathfrak{t}_1,1)}\left( \lambda_{k_4}\right) \Delta \CI^r_{(\mathfrak{t}_1,0)}\left( \lambda_{-k_1+k_2+k_3}T_1\right)\Delta \CI^r_{(\mathfrak{t}_1,0)}\left( \lambda_{k_5}\right)\\ &= \left(\CI^r_{(\mathfrak{t}_1,1)}\left( \lambda_{k_4}\right) \otimes \mathbf{1} \right)\left( \CI^{r}_{(\mathfrak{t}_1,0)}( \lambda_{-k_1+k_2+k_3}\cdot) \otimes \mathrm{id} \right) \\& \qquad\Delta \CD^{r}(T_1) \left(\CI^r_{(\mathfrak{t}_1,0)}\left( \lambda_{k_5}\right) \otimes \mathbf{1} \right)\\ & = \CD^r(F_2) \otimes \mathbf{1} \\&+ \CI^r_{(\mathfrak{t}_1,1)}\left( \lambda_{k_4}\right) \CI^r_{(\mathfrak{t}_1,0)}\left( \lambda_{k_5}\right) \CI^{r}_{(\mathfrak{t}_1,0)}\left(\sum_{m \leq r +1 } \frac{ \lambda_{-k_1+k_2+k_3}^m}{m!} \right) \otimes \CD^{(r,m)}( T_1) . \end{equs} Hence, as $r = 1$ we obtain \begin{equs} \sum_{j=1}^4 F^j_2 & = \CD^0(F_2) + \sum_{m \leq r +1 } \frac{1}{m!} \CI^0_{(\mathfrak{t}_1,1)}\left( \lambda_{k_4}\right) \CI^0_{(\mathfrak{t}_1,0)}\left( \lambda_{k_5}\right) \CI^{0}_{(\mathfrak{t}_1,0)}\left( \lambda^{m}_{-k_1+k_2+k_3} \right) . \end{equs} Now we are in a position to compute $\bar n_1$: By Definition \ref{dom_freq} we have \begin{equs}\label{RD1} \mathscr{F}_{\text{\tiny{dom}}}\left(\CI_{(\mathfrak{t}_2,0)}\left( \lambda_k F_2\right)\right)& = \mathcal{P}_{\text{\tiny{dom}}}\left( P_{(\mathfrak{t}_2,0)}(k) +\mathscr{F}_{\text{\tiny{dom}}}(F_2) \right)\\& = \mathcal{P}_{\text{\tiny{dom}}}\left(k^2 +\mathscr{F}_{\text{\tiny{dom}}}( F_2)\right) \end{equs} where we used that $P_{\mathfrak{t}_2}(k) = + k^2$ as well as \eqref{Palpha}.
Furthermore, we have that \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}\left(F_2\right)& = \mathscr{F}_{\text{\tiny{dom}}}\left( \CI_{(\mathfrak{t}_1,1)}( \lambda_{k_4}) \right) +\mathscr{F}_{\text{\tiny{dom}}}\left(\CI_{(\mathfrak{t}_1,0)} \left( \lambda_{-k_1+k_2+k_3} T_1 \right)\right) \\ & +\mathscr{F}_{\text{\tiny{dom}}}\left(\CI_{(\mathfrak{t}_1,0)}( \lambda_{k_5})\right) \end{equs} with \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}\left( \CI_{(\mathfrak{t}_1,1)}( \lambda_{k_4}) \right)& = P_{(\mathfrak{t}_1,1)}(k_4) +\mathscr{F}_{\text{\tiny{dom}}}(\mathbf{1}) = - P_{\mathfrak{t}_1}(-k_4) = +k_4^2 \end{equs} where we used that $P_{\mathfrak{t}_1}(k) = - k^2$ as well as \eqref{Palpha}. Similarly we obtain that \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}\left( \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_5}) \right) = - k_5^2 . \end{equs} Furthermore, we have that \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}\left(\CI_{(\mathfrak{t}_1,0)} \left( \lambda_{-k_1+k_2+k_3} T_1 \right)\right) & = - (-k_1+k_2+k_3)^2 + \mathscr{F}_{\text{\tiny{dom}}}\left(T_1 \right)\\ & = - (-k_1+k_2+k_3)^2 + 2 k_1^2 \end{equs} where we used Example \ref{exRDoNLS}. Hence, \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}\left(F_2\right)& = - (-k_1+k_2+k_3)^2 + 2 k_1^2 + k_4^2 - k_5^2. \end{equs} Plugging this into \eqref{RD1} yields \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}\left(\CI_{(\mathfrak{t}_2,0)}\left ( \lambda_k F_2 \right)\right) & = \mathcal{P}_{\text{\tiny{dom}}}\left(k^2 - (-k_1+k_2+k_3)^2 + 2 k_1^2 + k_4^2 - k_5^2\right). \end{equs} Therefore, $ \mathscr{F}_{\text{low}}\left(\CI_{(\mathfrak{t}_2,0)}\left ( \lambda_kF_2\right)\right) = k$ such that \begin{equs} \overline n_1 = \text{max}(n,\text{deg}(k^{2})) = \text{max}(n,2). 
\end{equs} Next we compute $\overline n_2$: \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}\left(\CI_{(\mathfrak{t}_2,0)}\left ( \lambda_k F_2^2 \right)\right) = \mathcal{P}_{\text{\tiny{dom}}}\left(k^2 +\mathscr{F}_{\text{\tiny{dom}}}\left(F_2^2\right)\right) \end{equs} with \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}\left(F_2^2\right) & = \mathscr{F}_{\text{\tiny{dom}}}\left( \CI_{(\mathfrak{t}_1,1)}\left( \lambda_{k_4}\right)\right)+\mathscr{F}_{\text{\tiny{dom}}}\left( \CI_{(\mathfrak{t}_1,0)}\left( \lambda_{k_5}\right) \right)\\&+ \mathscr{F}_{\text{\tiny{dom}}}\left( \CI_{(\mathfrak{t}_1,0)}\left( \lambda_{-k_1+k_2+k_3} \right) \right). \end{equs} Note that \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}\left( \CI_{(\mathfrak{t}_1,0)}\left({ \lambda_{-k_1+k_2+k_3}^m} \right)\right) & = \mathscr{F}_{\text{\tiny{dom}}}\left( \CI_{(\mathfrak{t}_1,0)}\left( \lambda_{-k_1+k_2+k_3} \right)\right)\\ & = - (-k_1+k_2+k_3)^2. \end{equs} Together with the previous computations we can thus conclude that \begin{equs} \mathscr{F}_{\text{\tiny{dom}}}\left(\CI_{(\mathfrak{t}_2,0)}\left ( \lambda_k F_2^2 \right)\right) & = \mathcal{P}_{\text{\tiny{dom}}}\left(k^2 - (-k_1+k_2+k_3)^2 + k_4^2 - k_5^2 \right). \end{equs} Therefore, $ \mathscr{F}_{\text{low}}\left(\CI_{(\mathfrak{t}_2,0)}\left ( \lambda_k F_2^2 \right)\right) = k$ and \begin{equs} \bar n_2 = \text{max}(n,2). \end{equs} Hence \begin{equs} \bar n_1 = \bar n_2 = \bar n_3 = \bar n_4 = \text{max}(n,2). 
\end{equs} Furthermore, by Definition \ref{def:Llow} we have \begin{equs} \mathcal{L}^1_{\text{\tiny{low}}}\left( F_2,n\right) & = \mathcal{L}^1_{\text{\tiny{low}}} \left( \CI_{(\mathfrak{t}_1,1)}\left( \lambda_{k_4}\right) ,n\right) + \mathcal{L}^1_{\text{\tiny{low}}} \left( \CI_{(\mathfrak{t}_1,0)}\left( \lambda_{-k_1+k_2+k_3} T_1\right) ,n\right) \\& + \mathcal{L}^1_{\text{\tiny{low}}} \left( \CI_{(\mathfrak{t}_1,0)}\left( \lambda_{k_5}\right) ,n\right)\\ & = \mathcal{L}^1_{\text{\tiny{low}}}(\mathbf{1}) + \mathcal{L}^1_{\text{\tiny{low}}}\left(T_1 ,n\right) + \mathcal{L}^1_{\text{\tiny{low}}}(\mathbf{1}) \\ & = 2 + k^{\text{\tiny{max}}(n,2)}. \end{equs} Plugging this into \eqref{L1low} yields that \begin{equs} \mathcal{L}^1_{\text{\tiny{low}}}\left(T_2,n\right) & = \mathcal{O}\left( k^{\text{\tiny{max}}(n,2)}\right). \end{equs} \noindent 3) Computation of $\CL^{1}_{\text{\tiny{low}}}(T_3, n)$:\\ Similarly we obtain that \begin{equs} \mathcal{L}^1_{\text{\tiny{low}}}\left(T_3,n\right) & = \mathcal{O}\left( k^{\text{\tiny{max}}(n,2)}\right). \end{equs} Plugging Computations 1--3 into \eqref{locoErr2} we recover the local error structure \eqref{loc2}. \end{proof} \subsection{Korteweg--de Vries}\label{sec:kdv} We consider the Korteweg--de Vries (KdV) equation \begin{equs}\label{kdv} \partial_t u + \partial_x^{3} u = \frac12 \partial_x u^2 \end{equs} with mild solution given by Duhamel's formula \begin{equs} u(\tau) = e^{-\tau \partial_x^{3}} v + \frac12 e^{-\tau \partial_x^{3}} \int_{0}^{\tau} e^{ \xi \partial_x^{3}} \partial_x u^2(\xi) d\xi. \end{equs} The KdV equation \eqref{kdv} fits into the general framework \eqref{dis} with \begin{equation*}\label{kdvDo} \begin{aligned} \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right) = i \partial_x^3, \quad \alpha = 1 \quad \text{and}\quad p(u,\overline u) = p(u) = i \frac12 u^2.
\end{aligned} \end{equation*} Here $ \CL = \lbrace \mathfrak{t}_1, \mathfrak{t}_2 \rbrace $, $ P_{\mathfrak{t}_1} = - \lambda^3 $ and $ P_{\mathfrak{t}_2} = \lambda^3 $. Then, we denote by $ \<thick> $ an edge decorated by $ (\mathfrak{t}_1,0) $ and by $\<thin>$ an edge decorated by $ (\mathfrak{t}_2,0) $. Following the formalism given in \cite{BHZ}, one can provide the rules that generate the trees obtained by iterating the Duhamel formulation: \begin{equ} R(\<thin>) = \{(\<thick>,\<thick>) \}\;, \quad R(\<thick>) = \{(\<thin>), ()\}\;. \end{equ} The general framework~\eqref{genscheme} derived in Section \ref{sec:genScheme} builds the foundation of the first- and second-order resonance-based schemes presented below for the KdV equation~\eqref{kdv}. The structure of the schemes depends on the regularity of the solution. \begin{corollary}\label{corKdV} For the KdV equation \eqref{kdv} the general scheme~\eqref{genscheme} takes at first order the form \begin{equation} \begin{aligned}\label{schemeKdV1} u^{\ell+1} &= e^{-\tau \partial_x^3} u^\ell + \frac16 \left(e^{-\tau\partial_x^3 }\partial_x^{-1} u^\ell\right)^2 - \frac16 e^{-\tau\partial_x^3} \left(\partial_x^{-1} u^\ell\right)^2\end{aligned} \end{equation} with a local error of order $\mathcal{O}\Big( \tau^2 \partial_x^2 u \Big)$ and at second order \begin{equation} \begin{aligned}\label{schemeKdV} u^{\ell+1} &= e^{-\tau \partial_x^3} u^\ell + \frac16 \left(e^{-\tau\partial_x^3 }\partial_x^{-1} u^\ell\right)^2 - \frac16 e^{-\tau\partial_x^3} \left(\partial_x^{-1} u^\ell\right)^2\\& +\frac{\tau^2}{4} e^{- \tau \partial_x^3}\Psi\big(i \tau \partial_x^2\big) \Big(\partial_x \Big(u^\ell \partial_x (u^\ell u^\ell)\Big)\Big) \end{aligned} \end{equation} with a local error of order $\mathcal{O}\Big( \tau^3 \partial_x^4 u \Big)$ and a suitable filter function $\Psi$ satisfying \begin{equs} \Psi= \Psi\left(i \tau \partial_x^2 \right), \quad \Psi(0) = 1, \quad \Vert \tau \Psi \left(i \tau \partial_x^2\right)
\partial_x^2 \Vert_r \leq 1. \end{equs} \end{corollary} \begin{remark} Note that the first-order scheme \eqref{schemeKdV1}, which was originally derived in \cite{HS16}, is optimised in the sense that the resonance structure factorises in such a way that all frequencies can be integrated exactly (details are given in the proof). This is in general true up to first order for equations in one dimension with quadratic nonlinearities. However, this trick cannot be applied to derive second-order methods. The second-order scheme is new and allows us to improve on the local error structure $\mathcal{O}\left( \tau^3 \partial_x^5 u \right)$ introduced by the classical Strang splitting scheme \cite{HLR12}. Due to the stability constraint induced by the Burgers nonlinearity it is preferable to embed the resonance structure into the numerical discretisation even for smooth solutions. In Figure~\ref{fig:KdV} we numerically observe the favourable error behaviour of the new resonance based scheme \eqref{schemeKdV} for $\mathcal{C}^\infty$ solutions. \end{remark} \begin{proof} The proof follows the line of argumentation of the analysis for the Schrödinger equation. The construction of the schemes is again based on the general framework~\eqref{genscheme}. Hence, we have to consider for $r = 0,1$ \begin{equs}\label{kdvU} U_{k}^{n,r}(\tau, v) = \sum_{T \in \mathcal{T}^{r+2}_{0}(R)} \frac{\Upsilon^{p}( \lambda_kT)(v)}{S(T)} \Pi^n \left( \CD^r(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T)) \right)(\tau).
\end{equs} Here, for the first-order scheme the trees of interest are \begin{equs} \mathcal{T}_0^2(R) = \lbrace T_0, T_1, \, k_i \in \Z^{d} \rbrace, \quad T_0 = \mathbf{1}\quad\text{and}\quad T_1 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t2) at (1,2); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_2 $}}; \end{tikzpicture} \end{equs} where $T_1$ is associated to the first-order iterated integral \begin{equs} \CI_1(v^2,s) = \int_{0}^{s} e^{ s_1 \partial_x^{3}} \partial_x (e^{-s_1 \partial_x^{3}}v)^2 ds_1 \end{equs} and in symbolic notation takes the form \begin{equs} T_1 = \CI_{(\mathfrak{t}_2,0)}\left ( \lambda_{k} F_1\right), \quad F_1 = \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_1}) \CI_{(\mathfrak{t}_1,0) }( \lambda_{k_2}) \quad \text{with } k = k_1+k_2. \end{equs} For the first-order scheme we set $r=0$ in \eqref{kdvU} such that \begin{equs}\label{kdvU0} U_{k}^{n,0}(\tau, v) & = \frac{\Upsilon^{p}( \lambda_kT_0 )(v)}{S(T_0)} \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_0)) \right)(\tau)\\ & + \sum_{k = k_1 + k_2} \frac{\Upsilon^{p}( \lambda_kT_1 )(v)}{S(T_1)} \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau). \end{equs} For the first term we readily obtain that \begin{equs} \frac{\Upsilon^{p}( \lambda_kT_0)(v)}{S(T_0)} \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_0)) \right)(\tau) = e^{-i \tau k^3} \hat{v}_k. \end{equs} It remains to compute the second term.
Note that thanks to \eqref{recursive_pi_r} we have that \begin{equs}\label{kdvST} \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau) &= e^{i \tau P_{\mathfrak{t}_1}(k)}\Pi^n (\CD^0(T_1))(\tau) \\& = e^{-i \tau k^3} \Pi^n (\CI_{(\mathfrak{t}_2,0)}^0( \lambda_k F_1) )(\tau) \\ & = e^{-i \tau k^3} \mathcal{K}_{(\mathfrak{t}_2,0)}^{k,0} \left( \Pi^n (\CD^{-1}(F_1))\right)(\tau). \end{equs} By the product formula we furthermore obtain that \begin{equs} \Pi^n (\CD^{-1}(F_1))(\tau)& = \Pi^n \left( \CI^{-1}_{(\mathfrak{t}_1,0)} ( \lambda_{k_1})\right) (\tau) \Pi^n \left( \CI^{-1}_{(\mathfrak{t}_1,0)}( \lambda_{k_2})\right) (\tau) \\ & = e^{-i \tau k_1^3} e^{-i \tau k_2^3}. \end{equs} Plugging the above relation into \eqref{kdvST} yields that \begin{equs} \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau) = e^{-i \tau k^3} \mathcal{K}_{(\mathfrak{t}_2,0)}^{k,0} \left( e^{i \xi (-k_1^3 - k_2^3)} \right)(\tau). \end{equs} Next we observe that \begin{equs} P_{(\mathfrak{t}_2,0)}(k) - k_1^3-k_2^3 = k^3 - k_1^3 - k_2^3 = 3 k_1 k_2 (k_1+k_2) \end{equs} such that \begin{equs} \frac{1}{ P_{(\mathfrak{t}_2,0)}(k) - k_1^3-k_2^3 } \end{equs} can be mapped back to physical space. Therefore, we set \begin{equs} \mathcal{L}_{\tiny\text{dom}} = P_{(\mathfrak{t}_2,0)}(k) - k_1^3-k_2^3 = 3 k_1 k_2 (k_1+k_2) \end{equs} and integrate all frequencies exactly. This implies \begin{equs} \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau) & = e^{-i \tau k^3} \frac{i (k_1+k_2) }{3i k_1 k_2 (k_1+k_2)} \left(e^{i \tau (k^3 - k_1^3 -k_2^3)}-1\right)\\ & =\frac{1 }{3 k_1 k_2 } \left(e^{- i \tau ( k_1^3 + k_2^3)}- e^{-i \tau k^3} \right). \end{equs} Together with \eqref{kdvU} this yields the scheme \eqref{schemeKdV1}. 
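To make the last two displays concrete: the factorisation $k^3 - k_1^3 - k_2^3 = 3 k_1 k_2 (k_1 + k_2)$ and one step of the resulting scheme \eqref{schemeKdV1} can be checked numerically. A minimal sketch (Python/NumPy; the periodic grid on $[0, 2\pi)$, the zero-mean convention for $\partial_x^{-1}$ and the Fourier sign convention $\partial_x \leftrightarrow ik$ are our choices and may differ from the conventions above):

```python
import numpy as np

def resonance(k1, k2):
    # Factorisation used above: k^3 - k1^3 - k2^3 = 3 k1 k2 (k1 + k2) for k = k1 + k2.
    return (k1 + k2) ** 3 - k1**3 - k2**3

def kdv_res1_step(u, tau):
    """One step of the first-order resonance-based KdV scheme
       u^{l+1} = e^{-tau dx^3} u + (1/6)(e^{-tau dx^3} dx^{-1} u)^2
                                 - (1/6) e^{-tau dx^3} (dx^{-1} u)^2
    for 2*pi-periodic zero-mean data; dx^{-1} zeroes the k = 0 mode."""
    N = u.size
    k = np.fft.fftfreq(N, d=1.0 / N)                # integer wave numbers
    uhat = np.fft.fft(u)
    free = np.exp(1j * tau * k**3)                  # symbol of e^{-tau dx^3} (dx^3 <-> -i k^3)
    inv_dx = np.zeros(N, dtype=complex)             # symbol of dx^{-1}
    nz = k != 0
    inv_dx[nz] = 1.0 / (1j * k[nz])
    eu = np.fft.ifft(free * uhat)                   # free flow of u
    Pu = np.fft.ifft(inv_dx * uhat)                 # dx^{-1} u
    ePu = np.fft.ifft(free * inv_dx * uhat)         # e^{-tau dx^3} dx^{-1} u
    ePu2 = np.fft.ifft(free * np.fft.fft(Pu**2))    # e^{-tau dx^3} (dx^{-1} u)^2
    return eu + (ePu**2 - ePu2) / 6.0
```

For real zero-mean data one step stays real and mean-free up to round-off: all multipliers are symbols of real operators, and the two quadratic terms differ only in whether the free flow acts before or after the squaring.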
For the second-order scheme we need to take into account the following trees \begin{equs}\mathcal{T}_0^3(R) = \lbrace T_0, T_1, T_2, \, k_i \in \Z^{d} \rbrace, \quad T_2 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-1,2); \coordinate (t11) at (-2,4); \coordinate (t12) at (-3,6); \coordinate (t13) at (-1,6); \coordinate (t2) at (1,2); \draw[kernels2] (t11) -- (t13); \draw[kernels2] (t11) -- (t12); \draw[kernels2] (t1) -- (root); \draw[symbols] (t1) -- (t11); \draw[kernels2] (t2) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[not] (trinode) at (t1) {}; \node[var] (rootnode) at (t12) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t13) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture} \end{equs} which is associated to the second-order iterated integral \begin{equs} \CI_2(v^3,s) = \int_{0}^{s} e^{ s_1 \partial_x^{3}} \partial_x \left( (e^{-s_1 \partial_x^{3}}v) e^{-s_1 \partial_x^{3}} \int_0^{s_1} e^{s_2 \partial_x^{3}}\partial_x (e^{-s_2 \partial_x^{3}}v)^2 ds_2 \right) ds_1. \end{equs} Then one can proceed as in the second-order schemes for the Schrödinger equation. We omit the details here. The local error analysis is then given by Theorem \ref{thm:genloc} noting that $\alpha = 1$ and \begin{equs} \mathcal{L}_{\tiny\text{low}}^0(T_1,\cdot) = k^{1+\alpha}, \quad \mathcal{L}_{\tiny\text{low}}^1(T_1,\cdot) = \mathcal{L}_{\tiny\text{low}}^1(T_2,\cdot) = k^{2(1+\alpha)}. \end{equs} \end{proof} \subsection{Klein--Gordon}\label{sec:kgr} In this section we apply the general framework \eqref{genscheme} to the Klein--Gordon equation \begin{equation}\label{kgo} \partial_t^2 z - \Delta z + \frac{1}{\varepsilon^2} z = \vert z\vert^2 z, \quad z(0,x) = \gamma(x),\quad \partial_t z(0,x) = \frac{1}{\varepsilon^2} \delta(x). 
\end{equation} Here we are in particular interested in resolving the highly oscillatory, so-called non-relativistic, structure of the PDE when the speed of light $c = \frac{1}{\varepsilon}$ formally tends to infinity. Via the transformation $u = z - i \varepsilon \langle \nabla \rangle_{\frac{1}{\varepsilon}}^{-1} \partial_t z$ we express \eqref{kgo} in its first-order form \begin{align}\label{kgr} & i \partial_t u = -\frac{1}{\varepsilon}\langle \nabla \rangle_{\frac{1}{\varepsilon}} u + \frac{1}{\varepsilon}\langle \nabla \rangle_{\frac{1}{\varepsilon}}^{-1} \textstyle \frac18 (u + \overline u)^3, \qquad \frac{1}{\varepsilon}\langle \nabla \rangle_{\frac{1}{\varepsilon}} = \frac{1}{\varepsilon}\sqrt{\frac{1}{\varepsilon^2}-\Delta}. \end{align} The first-order form \eqref{kgr} can be cast into the general form \eqref{dis} with \begin{equation*}\label{kgrDo} \begin{aligned} \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right) = \frac{1}{\varepsilon}\langle \nabla \rangle_{\frac{1}{\varepsilon}}, \quad \alpha = 0 \quad \text{and}\quad p(u,\overline u) = \frac{1}{\varepsilon}\langle \nabla \rangle_{\frac{1}{\varepsilon}}^{-1} \textstyle \frac18 (u + \overline u)^3. \end{aligned} \end{equation*} The leading operator $\mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right) $ thereby triggers oscillations of type \begin{equation*}\label{kgosc} \sum_{\ell \in \Z } e^{ i t \ell \frac{1}{\varepsilon^2}} \end{equation*} which can be formally seen by the Taylor series expansion $$ \mathcal{L}\left(\nabla, \frac{1}{\varepsilon}\right) = \frac{1}{\varepsilon} \langle \nabla \rangle_{\frac{1}{\varepsilon}} = \frac{1}{\varepsilon^2} - \frac12 \Delta + \mathcal{O}\left(\varepsilon^2 \Delta^2\right).
$$ In order to determine these dominant oscillations we define the non-oscillatory operators \begin{equs}\label{Bdelta} & \mathcal{B}_\Delta = \frac{1}{\varepsilon} \langle \nabla \rangle_{\frac{1}{\varepsilon}} - \frac{1}{\varepsilon^2}, \qquad \mathcal{B}_\Delta (k) = \frac{1}{\varepsilon^2} \sqrt{1 + \varepsilon^2 k^2} - \frac{1}{\varepsilon^2} \\ & \mathcal{C}_\Delta = \frac{1}{\varepsilon}\langle \nabla \rangle_{\frac{1}{\varepsilon}}^{-1}, \qquad \mathcal{C}_\Delta(k) = \frac{1}{\sqrt{1+ \varepsilon^2 k^2}} \end{equs} which both can be uniformly bounded in $\varepsilon$ thanks to the estimates $\Vert \mathcal{B}_\Delta w \Vert \leq \frac12 \Vert \Delta w\Vert$, $\frac{1}{\sqrt{1+x^2}} \leq 1$. The latter motivates us to rewrite the oscillatory equation \eqref{kgr} in the following form \begin{align}\label{kgrB} & i \partial_t u = - \left( \frac{1}{\varepsilon^2} +\mathcal{B}_\Delta\right) u + \mathcal{C}_\Delta \textstyle \frac18 (u + \overline u)^3. \end{align} Here $ \CL = \lbrace \mathfrak{t}_1, \mathfrak{t}_2 \rbrace $, $ P_{\mathfrak{t}_1} = -\left( \frac{1}{\varepsilon^2} + \mathcal{B}_\Delta( \lambda) \right) $ and $ P_{\mathfrak{t}_2} = \frac{1}{\varepsilon^2} + \mathcal{B}_\Delta( \lambda) $. Then, we denote by $ \<thick> $ an edge decorated by $ (\mathfrak{t}_1,0) $, by $ \<thick2> $ an edge decorated by $ (\mathfrak{t}_1,1) $, by $\<thin>$ an edge decorated by $ (\mathfrak{t}_2,0) $ and by $\<thin2>$ an edge decorated by $ (\mathfrak{t}_2,1) $. The rules that generate the trees obtained by iterating the Duhamel formulation are given by: \begin{equs} R(\<thin>) = R(\<thin2>) = \{ (\<thick>,\<thick>,\<thick>), (\<thick>,\<thick>,\<thick2>), (\<thick>,\<thick2>,\<thick2>), (\<thick2>,\<thick2>,\<thick2>) \}\;, \quad R(\<thick>) = \{(\<thin>) , ()\}\;, \quad R(\<thick2>) = \{(\<thin2>) , ()\}\;.
\end{equs} The general framework~\eqref{genscheme} derived in Section \ref{sec:genScheme} builds the foundation of the first-order resonance based schemes presented below for the Klein--Gordon equation~\eqref{kgr}. \begin{corollary}\label{corKGR} For the Klein--Gordon equation \eqref{kgr} the general scheme~\eqref{genscheme} takes at first order the form \begin{equs}\label{kgr1} u^{\ell+1} & = e^{i \tau \left(\frac{1}{\varepsilon^2} + \mathcal{B}_\Delta\right)} u^\ell - \tau \frac{3i}{8} e^{i \tau \left(\frac{1}{\varepsilon^2} + \mathcal{B}_\Delta\right)}\mathcal{C}_\Delta \vert u^\ell\vert^2 u^\ell\\ & - \tau \frac{i}{8} e^{i \tau \left(\frac{1}{\varepsilon^2} + \mathcal{B}_\Delta\right)} \mathcal{C}_\Delta \Big( \varphi_1\left(2i \frac{1}{\varepsilon^2} \tau\right) \left(u^\ell \right)^3+ 3 \varphi_1\left(-2i \frac{1}{\varepsilon^2} \tau\right) \left \vert u^\ell\right\vert^2 \overline{u^\ell} \\& + \varphi_1\left(-4 i \frac{1}{\varepsilon^2} \tau\right) \left(\overline{u^\ell}\right)^3 \Big) \end{equs} with a local error of order $\mathcal{O}\left( \tau^2 \Delta u\right)$ and the filter function $\varphi_1(\sigma) = \frac{e^\sigma-1}{\sigma}$. If we allow step size restrictions $\tau = \tau\left(\frac{1}{\varepsilon^2}\right)$ the general scheme~\eqref{genscheme} takes the simplified form \begin{equs}\label{kgrSimp} u^{\ell+1} = e^{i \tau \left(\frac{1}{\varepsilon^2} + \mathcal{B}_\Delta\right)} u^\ell - \tau \frac{i}{8} e^{i \tau \left(\frac{1}{\varepsilon^2} + \mathcal{B}_\Delta\right)} \mathcal{C}_\Delta \left(u^\ell + \overline{u^\ell}\right)^3 \end{equs} with a local error of order $ \mathcal{O}\left( \frac{\tau^2}{\varepsilon^{2}} u \right) + \mathcal{O}(\tau^2 \Delta u )$. \end{corollary} \begin{remark} With the general framework we recover at first order exactly the resonance based first-order scheme \eqref{kgr1} derived in \cite{BFS17}.
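As with KdV, one step of \eqref{kgr1} is straightforward to sketch on a periodic grid. The following is a minimal illustration (Python/NumPy; the grid, the helper names and the scalar $\varphi_1$ evaluation are our choices), using the operator definitions of $\mathcal{B}_\Delta$ and $\mathcal{C}_\Delta$, whose Fourier symbols under the convention $-\Delta \leftrightarrow k^2$ read $(\sqrt{1+\varepsilon^2 k^2}-1)/\varepsilon^2$ and $1/\sqrt{1+\varepsilon^2 k^2}$. The $\varphi_1(-2i\tau/\varepsilon^2)$-weighted monomial is taken as $\vert u\vert^2 \overline{u}$, the $u\overline{u}^2$ contribution of $(u+\overline{u})^3$; the resonant $u^2\overline{u}$ part is the separate $\vert u\vert^2 u$ term.

```python
import numpy as np

def phi1(z):
    # phi_1(z) = (e^z - 1)/z, with phi_1(0) = 1.
    return 1.0 if z == 0 else (np.exp(z) - 1.0) / z

def kg_res1_step(u, tau, eps):
    """One step of the first-order resonance-based Klein--Gordon scheme (kgr1)
    on a 2*pi-periodic grid, applied via Fourier multipliers."""
    N = u.size
    k = np.fft.fftfreq(N, d=1.0 / N)
    B = (np.sqrt(1.0 + (eps * k) ** 2) - 1.0) / eps**2   # symbol of B_Delta
    C = 1.0 / np.sqrt(1.0 + (eps * k) ** 2)              # symbol of C_Delta
    E = np.exp(1j * tau * (1.0 / eps**2 + B))            # symbol of the free flow
    mult = lambda sym, w: np.fft.ifft(sym * np.fft.fft(w))
    ub = np.conj(u)
    # Non-resonant monomials of (u + conj u)^3 pick up phi_1 weights with
    # phases 2, -2 and -4 times tau/eps^2.
    bracket = (phi1(2j * tau / eps**2) * u**3
               + 3.0 * phi1(-2j * tau / eps**2) * np.abs(u)**2 * ub
               + phi1(-4j * tau / eps**2) * ub**3)
    return (mult(E, u)
            - tau * (3j / 8.0) * mult(E * C, np.abs(u)**2 * u)
            - tau * (1j / 8.0) * mult(E * C, bracket))
```

For $\tau = 0$ the step reduces to the identity, and for small $\tau$ it deviates from the initial datum only at order $\tau$, which gives a cheap consistency check.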
If we allow for step size restrictions, we recover a classical approximation with the classical local error structure $\mathcal{O}\left( \frac{\tau^2}{\varepsilon^2}\right)$ of classical Strang splitting or Gautschi-type schemes (\cite{BD}). \end{remark} \begin{proof} From the general framework~\eqref{genscheme} (with $r=0$) we obtain that \begin{equs}\label{kdvU} U_{k}^{n,0}(\tau, v) = \sum_{T \in \mathcal{T}^{2}_{0}(R)} \frac{\Upsilon^{p}( \lambda_kT)(v)}{S(T)} \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T)) \right)(\tau) \end{equs} with \begin{equs} \mathcal{T}_0^2(R) & = \lbrace T_0, T_1, T_2, T_3, T_4, \, k_i \in \Z^{d} \rbrace, \quad T_0 = \mathbf{1} \\ T_1 & = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture}, \quad T_2 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture}, \quad T_3 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at
(2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2] (t2) -- (root); \draw[kernels2,tinydots] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture}, \quad T_4 = \begin{tikzpicture}[scale=0.2,baseline=-5] \coordinate (root) at (0,0); \coordinate (tri) at (0,-2); \coordinate (t1) at (-2,2); \coordinate (t2) at (2,2); \coordinate (t3) at (0,3); \draw[kernels2,tinydots] (t1) -- (root); \draw[kernels2,tinydots] (t2) -- (root); \draw[kernels2,tinydots] (t3) -- (root); \draw[symbols] (root) -- (tri); \node[not] (rootnode) at (root) {}; \node[not] (trinode) at (tri) {}; \node[var] (rootnode) at (t1) {\tiny{$ k_{\tiny{1}} $}}; \node[var] (rootnode) at (t3) {\tiny{$ k_{\tiny{2}} $}}; \node[var] (trinode) at (t2) {\tiny{$ k_3 $}}; \end{tikzpicture}. \end{equs} Let us carry out the computation for the tree $T_1$, which in symbolic notation takes the form \begin{equs} T_1 & = \CI_{(\mathfrak{t}_2,0)} ( \lambda_k F_1), \quad k = k_1+k_2+k_3 \\ F_1 & = \CI_{(\mathfrak{t}_1,0)} ( \lambda_{k_1}) \CI_{(\mathfrak{t}_1,0)}( \lambda_{k_2}) \CI_{(\mathfrak{t}_1,0)} ( \lambda_{k_3}).
\end{equs} Thanks to \eqref{recursive_pi_r} and the fact that $ P_{(\mathfrak{t}_1,0)}\left(k,\frac{1}{\varepsilon^2}\right) = \frac{1}{\varepsilon^2} + \mathcal{B}_\Delta(k) $ we obtain \begin{equs} \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau) & = e^{i \tau P_{(\mathfrak{t}_1,0)}(k,\frac{1}{\varepsilon^2})} (\Pi^n \CD^0 (T_1))(\tau) \\& = e^{i \tau \left( \frac{1}{\varepsilon^2} + \mathcal{B}_\Delta(k)\right ) } (\Pi^n \CI^0_{(\mathfrak{t}_2,0)} ( \lambda_k F_1))(\tau) \\& = e^{i \tau \left( \frac{1}{\varepsilon^2} + \mathcal{B}_\Delta(k)\right ) } \mathcal{K}_{(\mathfrak{t}_2,0)}^{k,0}\left( \Pi^n \CD^{-1}(F_1) \right)(\tau). \end{equs} By the product rule we furthermore have that \begin{equs} \Pi^n \CD^{-1}(F_1)(\xi) & = \left( \Pi^n \CI^{-1}_{(\mathfrak{t}_1,0)} \lambda_{k_1}\right)(\xi) \left( \Pi^n \CI^{-1}_{(\mathfrak{t}_1,0)} \lambda_{k_2}\right)(\xi) \left( \Pi^n \CI^{-1}_{(\mathfrak{t}_1,0)} \lambda_{k_3} \right)(\xi) \\& = e^{i \xi \left(3 \frac{1}{\varepsilon^2} + \mathcal{B}_\Delta(k_1)+ \mathcal{B}_\Delta(k_2)+ \mathcal{B}_\Delta(k_3)\right ) } \end{equs} such that \begin{equs}\label{KGC1} \Pi^n & \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau) \\ & = e^{i \tau \left( \frac{1}{\varepsilon^2} + \mathcal{B}_\Delta(k)\right ) } \mathcal{K}_{(\mathfrak{t}_2,0)}^{k,0}\left( e^{i \xi \left(3 \frac{1}{\varepsilon^2} + \mathcal{B}_\Delta(k_1)+ \mathcal{B}_\Delta(k_2)+ \mathcal{B}_\Delta(k_3)\right ) } \right)(\tau).
\end{equs} Definition \ref{Taylor_exp} together with Remark \ref{rem:epsi} and the observation $ P_{(\mathfrak{t}_2,0)}\left(k,\frac{1}{\varepsilon^2}\right) = - \left( \frac{1}{\varepsilon^2} + \mathcal{B}_\Delta(k) \right) $ implies that \begin{equs}\label{domKg} \mathcal{L}_{\tiny \text{dom}} & = \mathcal{P} _{\tiny \text{dom}} \left( P_{(\mathfrak{t}_2,0)}\left(k,\frac{1}{\varepsilon^2}\right) + P \right) \\& = \mathcal{P} _{\tiny \text{dom}} \left( 2 \frac{1}{\varepsilon^2} - \mathcal{B}_\Delta(k) + \mathcal{B}_\Delta(k_1)+ \mathcal{B}_\Delta(k_2)+ \mathcal{B}_\Delta(k_3 ) \right) = 2 \frac{1}{\varepsilon^2} . \end{equs} Hence, \begin{equs} \mathcal{K}_{(\mathfrak{t}_2,0)}^{k,0}\left( e^{i \xi \left(3 \frac{1}{\varepsilon^2} + \mathcal{B}_\Delta(k_1)+ \mathcal{B}_\Delta(k_2)+ \mathcal{B}_\Delta(k_3)\right ) } \right)(\tau) & = -i \frac{1}{8} \mathcal{C}_\Delta(k) \frac{e^{2 i \tau \frac{1}{\varepsilon^2}} -1}{2 i \frac{1}{\varepsilon^2}}. \end{equs} Plugging this into \eqref{KGC1} yields that \begin{equs} \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau) = - i \tau \frac{1}{8} \mathcal{C}_\Delta(k) \varphi_1\left(2 i \tau \frac{1}{\varepsilon^2}\right) e^{i \tau \left( \frac{1}{\varepsilon^2} + \mathcal{B}_\Delta(k)\right ) }. \end{equs} Together with the observation that \begin{equs} \frac{\Upsilon^{p}( \lambda_kT_1)(v)}{S(T_1)} = \hat{v}_{k_1} \hat v_{k_2} \hat v_{k_3} \end{equs} we obtain for the first tree $T_1$ in Fourier space that \begin{equs} \frac{\Upsilon^{p}( \lambda_kT)(v)}{S(T)} & \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau) \\& = - i\tau \frac{1}{8} \mathcal{C}_\Delta(k) \hat{v}_{k_1} \hat v_{k_2} \hat v_{k_3} \varphi_1\left(2 i \tau \frac{1}{\varepsilon^2}\right) e^{i \tau \left( \frac{1}{\varepsilon^2} + \mathcal{B}_\Delta(k)\right ) }. 
\end{equs} In physical space the latter takes the form \begin{equs}\label{termi} \mathcal{F}^{-1} \left(\frac{\Upsilon^{p}( \lambda_kT)(v)}{S(T)} \Pi^n \left( \CD^0(\CI_{(\mathfrak{t}_1,0)}( \lambda_k T_1)) \right)(\tau) \right)\\ = -i \frac{1}{8} \tau e^{i \tau \left( \frac{1}{\varepsilon^2} + \mathcal{B}_\Delta \right ) } \mathcal{C}_\Delta\varphi_1\left(2 i \tau \frac{1}{\varepsilon^2}\right) v^3 \end{equs} with $ \mathcal{B}_\Delta $ and $ \mathcal{C}_\Delta $ defined in \eqref{Bdelta}. The term \eqref{termi} is exactly the third term (corresponding to $(u^\ell)^3$) in the first-order scheme \eqref{kgr1}. The other terms in the scheme can be computed in a similar way with the aid of the remaining trees. The local error is then given by Theorem \ref{thm:genloc}. For instance, the local error introduced by the approximation of $T_1$ reads \begin{equs} \mathcal{O}\left(\tau^2 \left(- \mathcal{B}_\Delta(k) + \mathcal{B}_\Delta(k_1)+ \mathcal{B}_\Delta(k_2)+ \mathcal{B}_\Delta(k_3 )\right) \hat v_{k_1} \hat v_{k_2} \hat v_{k_3}\right) = \mathcal{O}(\tau^2 \Delta v^3). \end{equs} The other approximations obey a similar error structure. The simplified scheme \eqref{kgrSimp}, on the other hand, is constructed by carrying out a Taylor series expansion of the full operator $\mathcal{L}_{\tiny\text{dom}} + \mathcal{L}_{\tiny\text{low}} $. For instance, when calculating $\mathcal{K}$, instead of integrating the dominant part \eqref{domKg} of the first tree $T_1$ exactly, we Taylor expand the full operator \begin{equs} P_{(\mathfrak{t}_2,0)}\left(k,\frac{1}{\varepsilon^2}\right) + P = 2 \frac{1}{\varepsilon^2} - \mathcal{B}_\Delta(k) + \mathcal{B}_\Delta(k_1)+ \mathcal{B}_\Delta(k_2)+ \mathcal{B}_\Delta(k_3 ). \end{equs} This implies a local error structure of type \begin{equs} \mathcal{O}\left( \frac{\tau^2}{\varepsilon^{2}} u \right) + \mathcal{O}(\tau^2 \Delta u ). \end{equs} We omit further details here.
\end{proof} \begin{remark} Our framework \eqref{genscheme} also allows us to derive second- and higher-order schemes for the Klein--Gordon equation \eqref{kgr}. The trees have the same shape as for the cubic Schrödinger equation \eqref{nls}, but many more trees $T\in \mathcal{T}^{3}_{0}(R)$ are needed for the description. With our framework we can in particular recover the first- and second-order uniformly accurate methods proposed in \cite{BFS17}. \end{remark} \subsection{Numerical experiments}\label{sec:num} We underline the favorable error behavior of the new resonance based schemes compared to classical approximation techniques in the case of non-smooth solutions. We choose $M=2^{8}$ spatial grid points and carry out the simulations up to $T = 0.1$. \begin{example}[Schrödinger] In Figure \ref{fig:NLS} we compare the convergence of the new resonance based approach with classical splitting and exponential integration schemes for the Schrödinger equation \eqref{nls} with smooth and non-smooth solutions. The numerical experiments underline the favorable error behavior of the resonance based schemes presented in Corollary \ref{corNLS2} in the case of non-smooth solutions. While the second-order Strang splitting faces high oscillations in the error, causing severe order reduction, the second-order resonance based scheme maintains its second-order convergence for less regular solutions. \end{example} \begin{example}[Korteweg--de Vries] Figure \ref{fig:KdV} underlines the preferable choice of embedding the resonance structure into the numerical discretisation even for smooth solutions of the KdV equation \eqref{kdv}. While the second-order classical exponential integrator suffers from spikes in the error when hitting certain (resonant) time steps, the second-order resonance based scheme presented in Corollary \ref{corKdV} allows for full-order convergence without any oscillations. \end{example} \begin{figure}[h!]
\begin{subfigure}[c]{0.48\textwidth} \includegraphics[width=1\textwidth]{NewH2.eps} \subcaption{ $H^2$ data} \end{subfigure} \begin{subfigure}[c]{0.48\textwidth} \includegraphics[width=1\textwidth]{NewHSmooth.eps} \subcaption{$\mathcal{C}^\infty$ data} \end{subfigure} \caption{Error versus step size (double logarithmic plot). Comparison of classical and resonance based schemes for the Schrödinger equation for smooth (right picture) and non-smooth (left picture) solutions.}\label{fig:NLS} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.48\textwidth]{FigKdV.eps} \caption{Error versus step size (double logarithmic plot). Comparison of classical and resonance based schemes for the KdV equation with smooth data in $\mathcal{C}^\infty$.}\label{fig:KdV} \end{figure} \newpage
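For concreteness, one step of the simplified scheme \eqref{kgrSimp} can be sketched with a Fourier pseudospectral discretisation. The following Python sketch is ours (not from \cite{BFS17}); it assumes a $2\pi$-periodic grid with integer wavenumbers and the Fourier symbols $\mathcal{B}_\Delta(k) = \varepsilon^{-2}(\sqrt{1+\varepsilon^2 k^2}-1)$ and $\mathcal{C}_\Delta(k) = (1+\varepsilon^2 k^2)^{-1/2}$:

```python
import numpy as np

def kg_simplified_step(u, tau, eps):
    """One step of the simplified first-order scheme (kgrSimp), sketch.

    u   : complex grid values of u^l on a 2*pi-periodic grid
    tau : time step
    eps : small parameter epsilon
    """
    n = u.size
    k = np.fft.fftfreq(n, d=1.0 / n)                     # integer wavenumbers
    B = (np.sqrt(1.0 + (eps * k) ** 2) - 1.0) / eps**2   # symbol of B_Delta (assumed)
    C = 1.0 / np.sqrt(1.0 + (eps * k) ** 2)              # symbol of C_Delta (assumed)
    phase = np.exp(1j * tau * (1.0 / eps**2 + B))        # e^{i tau (1/eps^2 + B_Delta)}
    nonlin_hat = np.fft.fft((u + np.conj(u)) ** 3)       # (u^l + conj(u^l))^3
    u_hat = phase * (np.fft.fft(u) - tau * 0.125j * C * nonlin_hat)
    return np.fft.ifft(u_hat)
```

Since the linear propagator is a Fourier multiplier of modulus one, the discrete $L^2$ norm is preserved up to the $\mathcal{O}(\tau)$ nonlinear correction, which is a convenient sanity check for an implementation.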
\subsection{Augmentation Parameters} Augmentation methods are parameterized and their performance may be sensitive to changes in parameter values. However, in a real-world low-resource scenario (mostly) without any development set, parameter tuning may not always be possible. Therefore we create augmented datasets using all combinations of the augmentation parameters described in Sec.~\ref{sec:augmain} (e.g., $4\times4\times2=32$ augmented datasets with the RWD method for each treebank). To analyze the behavior of each augmentation technique, we perform a grid search on the parameters and draw the box plots for each augmentation method on Belarusian, Tamil, Vietnamese, Buryat, Kazakh and Kurmanji POS tagging, shown in Fig.~\ref{fig:box_plot_pos}. The box plots allow the augmentation techniques to be compared with respect to their performance range and their sensitivity to parameters rather than their best performance. \begin{figure*} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1.\linewidth]{figures/UD_Belarusian-HSE_fig.png} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1.\linewidth]{figures/UD_Tamil-TTB_fig.png} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1.\linewidth]{figures/UD_Vietnamese-VTB_fig.png} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1.\linewidth]{figures/UD_Buryat-BDT_fig.png} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1.\linewidth]{figures/UD_Kazakh-KTB_fig.png} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1.\linewidth]{figures/UD_Kurmanji-MG_fig.png} \end{subfigure} \caption{ Box plots for augmentation model performances on POS Tagging.
Org box refers to the baseline on the original dataset, while other techniques are defined in Sec.~\ref{sec:augmain}.} \label{fig:box_plot_pos} \end{figure*} \paragraph{Belarusian} The median lines of RWD, RWS, CI, CSU, CA and Nonce lie outside of the Org (baseline) box. This suggests that these techniques are statistically likely to perform better than the baseline. Out of these methods, the Nonce and RWD performances are found to be less dispersed than the others, i.e., less sensitive to parameter changes. The min-max range of Nonce is considerably smaller than that of the others, signaling the reliability of the method. There are only a few outliers for most techniques, suggesting that the results mostly follow a regular distribution. \paragraph{Tamil} For Tamil, only the RWS and Nonce median lines lie above the baseline box. Similar to Belarusian, the box length and the min-max range of Nonce are smaller than those of RWS, hinting at the reliability of the method. Unlike Belarusian, SR, crop and rotate lie under the box, meaning that they are likely to yield worse results than the baseline. Similarly, the number of outliers is limited and the maximum improvement over the best baseline model is around $0.7\%$. \paragraph{Vietnamese, Hungarian} For Vietnamese, there are no methods that are significantly better than the baseline, only worse ones: CD, crop and rotate. Although the maximum value of SR surpasses the best baseline model, the box plots reveal that this is statistically unlikely. A similar pattern is observed for Hungarian, despite it belonging to a different language family. \paragraph{Buryat, Telugu} Since the training dataset of Buryat is the smallest of all, even modest changes in parameters may lead to outliers. Interestingly, we found none of the techniques to be significantly better or worse than the baseline according to the median lines. However, the outliers provide a performance boost of around $8\%$ over the best baseline.
Even though the plot is not shown for brevity, we noticed that all augmentation techniques, except for SR, provide neither a significant drop nor increase in the scores for Telugu, similar to Buryat. \paragraph{Kazakh, Kurmanji} For both languages, the median lines above the baseline box belong to the character-level methods and the syntactic method Nonce. In Kazakh, similar to Belarusian and Tamil, Nonce has the smallest box and min-max range. For Kurmanji, however, CA has the smallest box and min-max range instead of Nonce. Kazakh and Kurmanji, as the languages with the lowest resources, benefit significantly more from character-level augmentation compared to the other languages. This may be due to the higher ratio of out-of-vocabulary words for these treebanks. In other words, the association of a character set with POS labels is more important than the association of tokens with POS labels. Apparently, character-level noise helps to strengthen such links. \subsection{Frequency Analysis} \secrev{}{Besides improving the downstream task scores, one of the important goals of an augmentation method is to increase the generalization capability of the model. In order to evaluate the extent of this skill, we measure the individual performances on frequent and infrequent tokens along with word types. We perform a case study on dependency parsing since it stands between POS and SRL in terms of complexity. The results are given in Fig.~\ref{fig:freq}.} \begin{figure*} \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1.\linewidth]{figures/freq_word.png} \end{subfigure} ~ \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=1.\linewidth]{figures/freq_pos.png} \end{subfigure} \caption{ Individual contributions of correct labeling of frequent and infrequent tokens and POS tags to the overall parser performance.
Shown separately for each augmentation technique.} \label{fig:freq} \end{figure*} \secrev{}{\paragraph{Frequent vs Infrequent Tokens} First, we define a token as \textit{frequent} if it is among the top 10\% in the token frequency list extracted from the combination of the training and development sets. Next, we create a vocabulary of frequent tokens and then measure the LAS scores individually for frequent and infrequent tokens for each language and augmentation technique. Finally, we average the scores over all languages to ease the presentation of the results. The results show that only \textit{SR}, \textit{CSW}, \textit{Rotate} and \textit{Nonce} improve the scores of infrequent tokens more than those of frequent ones. \textit{SR} and \textit{Nonce} are likely to replace frequent tokens with infrequent ones, since their objective is to replace words with other words. However, \textit{CSW} and \textit{Rotate} are not designed to replace tokens. We believe character switching sometimes coincidentally resulted in rare tokens, and the \textit{Rotate} operation randomly chose subtrees with rare tokens to augment. Interestingly, there is no one-to-one correspondence between the techniques that improved the dependency scores the most and the techniques that improved the labeling of infrequent tokens the most. This may be due to building the vocabulary over tiny training sets and identifying the frequent/rare tokens accordingly. Therefore we analyze a more general property: POS tags.} \paragraph{Frequent vs Infrequent POS tags} \secrev{}{We perform a similar analysis for the token class---using the gold POS tags as the class. Since the number of unique POS tags is much lower than that of unique tokens, we identify the top 50\% of the POS tags as frequent. We use the same calculation technique as above.
The results show that all techniques improve the performance on rare token classes more than on the frequent ones; however, there still does not exist a direct correlation between the best performing technique and the improvement over infrequent POS tags. This suggests that the improvement cannot simply be explained by frequency analysis, i.e., a method that focuses on improving rare tokens or token classes is not guaranteed to improve the overall score. This is because the parser's performance is a more complicated multivariate variable that relies on many other factors apparent in dataset statistics.} \subsection{Character-Level Augmentation} \label{ssec:char_aug_tec} The idea of adding synthetic noise to text applications is not new; however, it has mostly been used for adversarial attacks or to develop more robust models~\cite{BelinkovB18,KarpukhinLEG19}. Previous work by \citet{KarpukhinLEG19} introduces four types of synthetic noise on the orthographic level: character deletion (\textsc{CD}), insertion (\textsc{CI}), substitution (\textsc{CSU}) and swapping (\textsc{CSW}). Additionally, they introduce a mixture of all noise types by sampling from a distribution of 60\% clean (no noise) and 10\% from each type of noise, which we refer to as \textit{Character All} (\textsc{CA}). They show that adding synthetic noise to training data improves the performance on test data with natural noise, i.e., text with real-world spelling mistakes, while not hurting the performance on clean data. The authors experiment on neural machine translation where the source languages are German, French and Czech and the target language is English. We hypothesize that adding the right amount of synthetic noise might as well improve the performance on low-resource languages for our set of downstream tasks. For \textsc{CI}, we first build a character vocabulary out of the most commonly used characters in the training set.
We do not add noise to one-letter words and do not apply \textsc{CSW} to the first and last characters of the token. The advantages of character-level synthetic noise are as follows: First, the output of the augmentation mostly preserves the original syntactic and semantic labels. This is because the resulting tokens are mostly out-of-vocabulary words that are quite close to the original word---like a spelling mistake. Second, they are trivial to generate, not requiring any external resources like large language models or syntactic annotations. Finally, they are constrained only by the number of characters, which makes it possible to generate huge numbers of augmented sentences---an advantage for most downstream tasks. \subsection{Token-Level Augmentation} \label{ssec:tok_aug_tec} This category includes methods that perform token-level changes such as adding, replacing or removing certain tokens. While some preserve the syntax or semantics of the original sentence, the majority do not. \paragraph{Synonym Replacement} As one of the earliest techniques~\cite{KolomiyetsBM11,ZhangL15,wang-yang-2015-thats,wei-zou-2019-eda}, it aims to replace words with their synonyms. A lexical resource containing synonyms, such as WordNet or a thesaurus, is generally required to retrieve the synonyms. Since most languages do not have such a resource, some researchers~\cite{wang-yang-2015-thats} exploit special pretrained word embeddings and use the k-nearest neighbors (by means of cosine similarity) of the queried word as the replacement. As discussed in Sec.~\ref{sec:relwork}, more recent studies~\cite{Kobayashi18,WuLZHH19,FadaeeBM17a,anaby2020not,kumar2020} employ contextualized language models such as bi-directional LSTMs or BERT~\cite{DevlinCLT19} to find related words such that the class of the sentence is still preserved. However, these methods require strong pretrained language models which are generally not available for truly low-resource languages.
Furthermore, these methods are mostly applied to sentence classification tasks, where the labels are considered coarse-grained compared to our downstream tasks. Considering these points, we use a simplified approach similar to \citet{wang-yang-2015-thats} and query the randomly chosen token on non-contextualized pretrained embeddings. Most languages we experiment on are morphologically productive and thus suffer from the out-of-vocabulary problem. To circumvent this issue we use the subword-level fastText embeddings~\cite{grave2018learning}. \paragraph{Random Word} Similar to character-level noise, one can inject a higher level of noise by randomly inserting (\textsc{RI}), deleting (\textsc{RD}) or swapping (\textsc{RS}) tokens. The EDA framework by~\citet{wei-zou-2019-eda} shows the efficiency of these techniques---despite their simplicity---on multiple text classification benchmarks. Following \citet{wei-zou-2019-eda}, we experiment with all techniques except for \textsc{RI}: since our downstream tasks require contextual annotation of tokens, we cannot insert a random word without an annotation. Similar to the character-level methods, these techniques are easy to apply and can produce extensive amounts of synthetic sentences, as they are constrained only by the number of tokens. One disadvantage is their inability to preserve the syntactic and semantic labels. For instance, deleting a word may yield an ungrammatical sentence without a valid dependency tree. Therefore they are not eligible for two of our tasks: dependency parsing and semantic role labeling.
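The character- and token-level operations above are straightforward to implement. The following is a minimal sketch (ours, not the authors' released implementation): the function names, the uniform sampling choices and the omission of label bookkeeping for the annotated tasks are our assumptions.

```python
import random

def char_noise(token, op, charset, rng=random):
    """Apply one character-level noise operation (CI/CD/CSU/CSW) to a token."""
    if len(token) < 2:                # leave one-letter words untouched
        return token
    chars = list(token)
    if op == "CI":                    # insert a character from the vocabulary
        chars.insert(rng.randrange(len(chars) + 1), rng.choice(charset))
    elif op == "CD":                  # delete a random character
        chars.pop(rng.randrange(len(chars)))
    elif op == "CSU":                 # substitute a random character
        chars[rng.randrange(len(chars))] = rng.choice(charset)
    elif op == "CSW" and len(chars) > 3:
        pos = rng.randrange(1, len(chars) - 2)   # keep first/last characters fixed
        chars[pos], chars[pos + 1] = chars[pos + 1], chars[pos]
    return "".join(chars)

def random_word(tokens, op, rng=random):
    """Token-level noise: random deletion (RD) or random swap (RS)."""
    tokens = list(tokens)
    if op == "RD" and len(tokens) > 1:
        tokens.pop(rng.randrange(len(tokens)))
    elif op == "RS" and len(tokens) > 1:
        i, j = rng.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens
```

In practice, the probability parameter of Sec.~\ref{ssec:params} would decide which tokens (or characters) these operations are applied to, and for the annotated tasks the corresponding labels must be kept in sync with the surviving tokens.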
\begin{figure*}[!ht] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \begin{dependency} \begin{deptext}[column sep=1.2em] I \& wrote \& him \& a \& letter \\ \end{deptext} \deproot{2}{root} \depedge{1}{2}{nsubj} \depedge{3}{2}{iobj} \depedge{4}{5}{det} \depedge{5}{2}{obj} \end{dependency} \caption{Original sentence} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \begin{dependency} \begin{deptext}[column sep=1.2em] I \& wrote \& a \& letter \\ \end{deptext} \deproot{2}{root} \depedge{1}{2}{nsubj} \depedge{3}{4}{det} \depedge{4}{2}{obj} \end{dependency} \caption{Cropped sentence} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \begin{dependency} \begin{deptext}[column sep=1.2em] him \& a \& letter \& I \& wrote \\ \end{deptext} \deproot{5}{root} \depedge{4}{5}{nsubj} \depedge{1}{5}{iobj} \depedge{2}{3}{det} \depedge{3}{5}{obj} \end{dependency} \caption{Rotated sentence} \end{subfigure} \caption{ Demonstration of the syntactic augmentation techniques \textit{crop} and \textit{rotate}.} \label{fig:syntactic_demo} \end{figure*} \subsection{Syntactic Augmentation} \label{ssec:syn_aug_tec} This category consists of more sophisticated methods that benefit from syntactic properties to generate new sentences. The main disadvantages of this category are (i) the need for syntactic annotation, and (ii) being more constrained, i.e., not being able to generate as many data points as the previous categories. \paragraph{Nonce} This technique, introduced by \citet{GulordavaBGLB18}, aims to produce dummy sentences by replacing some of the words in the original sentence, such that the produced sentence is a syntactic equivalent of the original while not preserving the original semantics. In more detail, randomly chosen content words, i.e., nouns, adjectives and verbs, are replaced with words that have the same part-of-speech tag, morphological tags and dependency label.
Given the sentence \textit{``Her sibling bought a cake''}, the following sentences can be generated: \textit{``\textbf{My} sibling \textbf{saw} a cake''} or \textit{``His \textbf{motorbike} bought a \textbf{island}''}. As can be seen, the generated sentences mostly do not make any sense; however, they are syntactically equivalent to the original. For this reason, the technique can only be employed for syntactic tasks and not for SRL. \paragraph{Crop} This augmentation algorithm by~\citet{SahinS18} morphs the dependency trees to generate new sentences. The main idea is to remove some of the dependency links to create simpler, shorter but still (mostly) meaningful sentences. In order to do so, \citet{SahinS18} define a list of dependency labels, referred to as Labels Of Interest (LOI), that attach subjects and direct or indirect objects to the predicates. A simpler sentence is then created by retaining only one LOI at a time. Following \citet{SahinS18}, we \textit{only} consider the dependency relations that attach subjects and objects to predicates as LOIs, and ignore other adverbial dependency links involving predicates. We keep the augmentation non-recursive, i.e., we only reorder the first level of flexible subtrees; hence the flexible chunks inside these subtrees are kept fixed. The idea is demonstrated in Figure~\ref{fig:syntactic_demo}. \paragraph{Rotate} This operation uses an idea similar to \textit{image rotation}, where the sentence is \textit{rotated} around the \texttt{root} node of the dependency tree. The method uses the same LOIs as cropping and creates flexible chunks that can be reordered in the sentence. A demonstration of rotation is shown in Figure~\ref{fig:syntactic_demo}. Both the cropping and rotation operations may cause semantic shifts and ill-formed sentences, mostly depending on the morphological properties of the language.
For instance, languages that use case marking to mark subjects and objects can still generate valid sentence forms via the rotation operation simply because their word order is more flexible~\cite{wordOrder}. However, most valid sentences may still sound strange to a native speaker, since there is still a \textit{preferred order} and the generated sentence may have an infrequent word order. Furthermore, for languages with a strict word order like English, rotation may introduce more noise than desired. Finally, the augmented sentences may still be beneficial for learning, since they provide the model with more examples of variations in semantic argument structure and in the order of phrases. \subsection{Parameters} \label{ssec:params} All of the aforementioned augmentation methods are \textit{parametric}. As we show later in Sec.~\ref{sec:results}, the choice of parameters may have a substantial impact on the success of the methods. The parameters and their value ranges are defined as follows: \paragraph{Ratio} This parameter is used to calculate the number of new sentences to be generated from one original sentence. To ensure that more sentences are generated from longer sentences, it is simply calculated as $ratio \times |sentence|$, i.e., for a sentence of length 10, only 1 extra sentence will be produced with $ratio=0.1$. It is only used for orthographic methods, since syntactic methods are constrained in other ways (e.g., by the number of tokens that share morphological features and dependency labels). The values we experiment with are as follows: $[0.1, 0.2, 0.3, 0.4]$. \paragraph{Probability} This determines the probability of a token-level augmentation, i.e., the ratio of the augmented tokens to the sentence length.
For word insertion, deletion and all character-based methods (\textsc{CI}, \textsc{CD}, \textsc{CSU}, \textsc{CSW}, \textsc{CA}), it can be interpreted as the ratio of tokens that undergo a change to the total number of tokens, while for the word swapping operation it refers to the ratio of swapped token pairs to the total number of tokens. Moreover, it determines the number of characters to which the augmentation is applied in the case of character-level augmentation. Similar to ratio, we experiment with the value range $[0.1, 0.2, 0.3, 0.4]$. For the syntactic methods \textsc{crop} and \textsc{rotate}, probability is associated with the operation dynamics, i.e., when the operation probability is set to 0.2, the expected number of additional sentences produced via cropping would be $\#LOI/5$, where $\#LOI$ indirectly refers to the number of valid subtrees for the operation. Since the number of additional sentences is already constrained by $\#LOI$, we also experiment with higher probabilities for crop and rotate. The range is then $[0.1, 0.3, 0.5, 0.7, 1.0]$. \paragraph{Max Sentences} In the case of considerably long sentences or a substantial number of available candidates, the number of augmented sentences can grow rapidly. This sometimes causes an undesired level of noise in the training data. To avoid this, the maximum number of sentences, a.k.a. the sentence threshold parameter, is commonly used. Following \citet{GulordavaBGLB18,vaniaKSL19} we experiment with $5$ and $10$ as the threshold values. \section{Introduction} \input{intro} \section{Related Work} \label{sec:relwork} \input{relwork} \section{Augmentation Techniques} \label{sec:augmain} \input{augmethods} \section{Experimental Setup} \label{ssec:exper_set} First we provide detailed information on the downstream tasks and the languages we have experimented on. Next we discuss the datasets that are used in our experiments.
\subsection{Tasks} \label{ssec:tasks} \input{tasks} \subsection{Languages} \label{ssec:lang} \input{languages} \subsection{Datasets} \label{ssec:datasets} \input{datasets} \section{Experiments and Results} \label{sec:results} \input{results} \section{Analysis} \label{sec:analysis} \input{analysis} \section{Conclusion} \input{conclusion} \section{Acknowledgements} We first thank the anonymous reviewers who helped us improve the paper. We would like to thank Clara Vania, Benjamin Heinzerling, Jonas Pfeiffer and Phillip Rust for the valuable discussions and for providing early access to their implementations. We finally thank Celal Şahin, Necla İşgüder and Osman İşgüder for their invaluable support during the writing of this paper. \starttwocolumn \subsection{POS Tagging} \label{ssec:pos_res_sec} \secrev{}{POS tagging is considered one of the most fundamental NLP tasks and has long been the initial step in the traditional NLP pipeline. Even in the era of end-to-end models, it has been shown to improve the performance of downstream tasks of higher complexity when employed as a feature. Taking the importance of POS tagging for higher-level downstream tasks into account, we conduct experiments with different subword units and training regimes (see Sec.~\ref{ssec:tasks} for details) to examine (i) whether the behavior of the augmentation methods is model-agnostic, and (ii) whether the improvements are relevant for state-of-the-art models fine-tuned on large pretrained multilingual contextualized language models. The results for characters~(\texttt{char}), BPEs~(\texttt{BPE}) and multilingual BERT~(\texttt{mBERT}) are given in Table~\ref{tab:postag_res_multi}.
The languages that lack the corresponding pretrained model (e.g., Kurmanji and \texttt{BPE}) are not shown in the table.} \secrev{}{\paragraph{\texttt{char}} Except for Telugu, at least one of the techniques significantly improves over the baseline for every other language. The token-level methods \textit{RWD} and \textit{RWS} increase the scores for Kazakh and Tamil significantly, while providing a slight increase for Belarusian and Kurmanji. On the other hand, \textit{SR} results in a mixture of increases and decreases in the scores. For instance, the Buryat and Kazakh POS taggers benefit significantly from \textit{SR}, while the opposite pattern is observed for Belarusian, Kurmanji and Telugu. Unlike the token-level methods, the character-level ones, \textit{CI}, \textit{CSU}, \textit{CSW}, \textit{CD} and \textit{CA}, bring more consistent improvements. In particular, the Belarusian, Kazakh and Tamil POS taggers are significantly improved by most of the character-level techniques, while the Vietnamese tagger only benefits from \textit{CSU} and \textit{CA} and is slightly hurt by \textit{CD}. In the majority of cases, the syntactic methods either slightly decrease the performance or do not improve it significantly. One exception is the \textit{Nonce} technique, which leads to a significant improvement for Kazakh and slight improvements for Buryat and Tamil.} \secrev{}{\paragraph{\texttt{BPE}} In general, the \texttt{BPE} scores are slightly lower than the \texttt{char} scores, leaving more room for improvement. Similar to \texttt{char}, we observe improvements over the baselines by at least one technique, with the exception of Tamil. Unlike \texttt{char}, the token-level techniques \textit{RWD} and \textit{RWS} significantly increase the scores for Belarusian, Buryat, Kazakh, Telugu and Vietnamese.
Similar to \texttt{char}, \textit{SR} yields significantly higher scores for Buryat and Kazakh, while significantly reducing the performance for Belarusian and Telugu. While the character-level methods bring consistent and significant improvements for Buryat and Kazakh, few of them lead to higher scores for the other languages. Finally, except for Buryat and Kazakh, the improvements from the syntactic techniques are not significant, while the performance drop can be severe in cases like Vietnamese.} \secrev{}{\paragraph{\texttt{mBERT}} As expected, the mean baseline scores are the highest for most languages; however, the augmentation methods still provide significant gains in some cases. The most significant improvements are achieved for Belarusian by \textit{CSU} and for Kazakh by \textit{RWS}, \textit{CSW} and \textit{Nonce}. Unlike \texttt{char} and \texttt{BPE}, we do not observe a distinct increase or decrease pattern in certain groups of techniques, except for \textit{CA}, which achieves significantly higher scores for all languages. However, a manual comparison of the improved and worsened results shows that the pattern is closer to \texttt{char} than to \texttt{BPE}.} \paragraph{Summary and Discussion} \secrev{We found that augmentation boosts the performance of the Belarusian, Tamil, Kazakh and Kurmanji POS taggers, while benefiting Buryat POS tagging substantially when the right configuration is set. On the other hand, we have not observed any significant gains for Vietnamese and Telugu, which are among the three largest treebanks. Although in the majority of cases the augmentation techniques performed either \textit{statistically} on par or better, there were a few cases where the accuracy deteriorated: \textit{SR} for Tamil and Telugu; and \textit{crop} and \textit{rotate} for Tamil and Vietnamese.
We believe that this is due to: (i) pretrained embeddings do not guarantee a syntactic equivalent of a token, and (ii) the quality of embeddings also suffers from low resources, i.e., a small Wikipedia.}{To summarize, we observe significant improvements over the corresponding baselines by a group of augmentation techniques on certain languages for \textit{all} experimented models. The token-level methods provide more significant improvements for the \texttt{BPE} models, whereas the character-level methods increase the \texttt{char} models' performance the most. Except for \textit{Nonce} and a few exceptional cases, the syntactic augmentation techniques either deteriorate the performance or do not lead to significant gains. Even though the \texttt{mBERT} baselines are quite strong, it is still possible to achieve significantly better scores---especially for Kazakh---and \textit{CA} leads to consistent improvements across the experimented languages. On the other hand, we have not observed any significant gains from any augmentation for Telugu-\texttt{char} and Tamil-\texttt{BPE}. We also note the following decreasing patterns across models: Belarusian-\textit{SR}, Telugu-\textit{SR}, Telugu-\textit{crop}, Vietnamese-\textit{crop} and Telugu-\textit{rotate}. We believe the inconsistency of \textit{SR} is due to: (i) pretrained embeddings do not guarantee a syntactic equivalent of a token, and (ii) the quality of embeddings also suffers from low resources, i.e., a small Wikipedia.} Furthermore, the linguistically motivated syntactic techniques such as \textit{crop} and \textit{rotate} are found to be less effective than the straightforward techniques that rely on introducing noise to the system, namely \textit{RWD}, \textit{RWS}, \textit{CI}, \textit{CSU}, \textit{CSW}, \textit{CD} and \textit{CA}. One important difference between these two categories is the \textit{amount of augmented data} that can be generated by each technique.
This number is limited by linguistic factors for the syntactic techniques, such as the number of subtrees or the number of tokens that share the same morphological features and dependency labels. In contrast, the number of sentences generated via noise addition is only constrained by the parameters. Hence, substantially larger amounts of augmented data can be generated with the simple techniques than with the more sophisticated ones. Additionally, we believe that these additional noisy sentences provide informative signals to the POS tagging network. As shown previously~\cite{TenneyDP19}, low-level features like part-of-speech tags are mostly encoded in the initial layers of the network. As layers are added to the network, more sophisticated features that require an understanding of the interaction between tokens are more likely to be captured. This is connected to the nature of the unsophisticated augmentation techniques, which treat tokens as isolated items rather than considering the relations among tokens as the syntactic methods do. As a result, the simpler techniques yield stronger associations at the initial layers, while the more sophisticated techniques are more likely to strengthen the intermediate layers. In addition to the simpler techniques, the syntactic method \textit{Nonce} also advances the scores of most POS taggers. Although it is considered one of the sophisticated methods, it targets tokens instead of modifying the sentence structure. \begin{landscape} \topskip0pt \vspace*{\fill} \begin{table}[htb!]
\centering \setlength{\tabcolsep}{2pt} \scalebox{0.68}{ \begin{tabular}{clllllllllllll} \toprule & & &\multicolumn{3}{c}{\textbf{Token-Level}} &\multicolumn{5}{c}{\textbf{Character-Level}} &\multicolumn{3}{c}{\textbf{Syntactic}} \\ \midrule & &\textbf{Org} &\textbf{RWD} &\textbf{RWS} &\textbf{SR} &\textbf{CI} &\textbf{CSU} &\textbf{CSW} &\textbf{CD} &\textbf{CA} &\textbf{Crop} &\textbf{Rotate} &\textbf{Nonce} \\ \toprule \multirow{7}{*}{\rotatebox[origin=c]{90}{\texttt{char}}} & \textbf{be} & 90.29 $\pm$ 1.23 & 90.41 $\pm$ 1.68 & \cellcolor{lightgreen}91.38 $\pm$ 1.36 *& \cellcolor{lightred}88.57 $\pm$ 2.48 $\dagger$& \cellcolor{darkgreen}91.00 $\pm$ 0.43 **& \cellcolor{darkgreen}91.66 $\pm$ 0.39 **& 90.24 $\pm$ 1.17 & \cellcolor{darkgreen}91.60 $\pm$ 0.79 **& \cellcolor{darkgreen}91.50 $\pm$ 0.63 **& \cellcolor{lightred}88.75 $\pm$ 2.30 $\dagger$& 90.06 $\pm$ 1.12 & 89.68 $\pm$ 1.26 \\ & \textbf{bxr} & 39.29 $\pm$ 4.16 & 43.77 $\pm$ 6.37 & 41.48 $\pm$ 6.13 & \cellcolor{darkgreen}46.44 $\pm$ 3.16 **& \cellcolor{lightgreen}41.36 $\pm$ 5.60 *& 41.75 $\pm$ 5.60 & 41.15 $\pm$ 6.19 & \cellcolor{lightgreen}41.06 $\pm$ 4.83 *& 39.99 $\pm$ 5.33 & \cellcolor{lightgreen}43.96 $\pm$ 1.53 *& 42.37 $\pm$ 5.20 & \cellcolor{lightgreen}42.45 $\pm$ 3.57 *\\ & \textbf{kk} & 46.24 $\pm$ 3.02 & \cellcolor{darkgreen}49.65 $\pm$ 2.75 **& 46.36 $\pm$ 3.47 & \cellcolor{darkgreen}51.25 $\pm$ 3.04 **& \cellcolor{darkgreen}49.94 $\pm$ 2.91 **& \cellcolor{darkgreen}50.61 $\pm$ 3.15 **& \cellcolor{darkgreen}49.51 $\pm$ 3.47 **& \cellcolor{lightgreen}48.38 $\pm$ 4.0 *& \cellcolor{darkgreen}51.49 $\pm$ 1.22 **& 42.57 $\pm$ 9.04 & 46.94 $\pm$ 2.84 & \cellcolor{darkgreen}51.45 $\pm$ 0.58 **\\ & \textbf{ku} & 57.04 $\pm$ 3.98 & 57.70 $\pm$ 5.20 & \cellcolor{lightgreen}58.30 $\pm$ 3.25 *& \cellcolor{lightred}56.28 $\pm$ 3.71 $\dagger$& \cellcolor{lightgreen}58.90 $\pm$ 3.41 *& \cellcolor{lightgreen}57.97 $\pm$ 3.11 *& 56.99 $\pm$ 3.20 & 56.62 $\pm$ 3.40 & \cellcolor{lightgreen}58.60 $\pm$ 2.10 *& 
\cellcolor{lightgreen}57.67 $\pm$ 3.71 *& 54.46 $\pm$ 6.77 & 57.09 $\pm$ 2.68 \\ & \textbf{ta} & 84.30 $\pm$ 1.87 & \cellcolor{lightgreen}86.96 $\pm$ 0.54 *& \cellcolor{darkgreen}86.37 $\pm$ 0.26 **& 85.94 $\pm$ 0.35 & \cellcolor{darkgreen}86.29 $\pm$ 0.40 **& \cellcolor{darkgreen}86.62 $\pm$ 0.14 **& \cellcolor{darkgreen}86.37 $\pm$ 0.57 **& \cellcolor{lightgreen}86.24 $\pm$ 0.62 *& \cellcolor{lightgreen}86.33 $\pm$ 0.77 *& 84.63 $\pm$ 0.79 & 84.36 $\pm$ 0.57 & \cellcolor{lightgreen}85.98 $\pm$ 0.80 *\\ & \textbf{te} & 90.96 $\pm$ 1.38 & 90.07 $\pm$ 1.85 & 90.51 $\pm$ 1.21 & \cellcolor{lightred}90.80 $\pm$ 0.36 $\dagger$& 89.90 $\pm$ 1.54 & 90.21 $\pm$ 1.23 & 90.49 $\pm$ 1.68 & 90.62 $\pm$ 1.40 & 90.23 $\pm$ 1.73 & \cellcolor{darkred}89.99 $\pm$ 0.49 $\dagger\dagger$& \cellcolor{lightred}90.01 $\pm$ 0.63 $\dagger$& - \\ & \textbf{vi} & 77.62 $\pm$ 0.54 & 77.45 $\pm$ 0.75 & 77.10 $\pm$ 0.90 & 77.93 $\pm$ 0.21 & 76.97 $\pm$ 0.87 & 77.88 $\pm$ 0.24 & 77.29 $\pm$ 0.25 & \cellcolor{lightred}76.00 $\pm$ 0.57 $\dagger$& 78.08 $\pm$ 0.76 & \cellcolor{lightred}77.17 $\pm$ 0.26 $\dagger$& \cellcolor{lightred}77.20 $\pm$ 0.45 $\dagger$& - \\ \midrule \multirow{6}{*}{\rotatebox[origin=c]{90}{\texttt{BPE}}} & \textbf{be} & 88.43 $\pm$ 0.59 & \cellcolor{darkgreen}89.75 $\pm$ 0.42 **& \cellcolor{darkgreen}89.72 $\pm$ 0.38 **& \cellcolor{lightred}87.61 $\pm$ 0.80 $\dagger$& \cellcolor{darkgreen}86.06 $\pm$ 1.64 **& 88.73 $\pm$ 1.68 & \cellcolor{lightred}87.30 $\pm$ 1.59 $\dagger$& 88.41 $\pm$ 1.47 & \cellcolor{lightgreen}89.23 $\pm$ 1.53 *& 87.83 $\pm$ 0.74 & 87.81 $\pm$ 1.81 & 88.40 $\pm$ 1.66 \\ & \textbf{bxr} & 33.11 $\pm$ 0.80 & \cellcolor{darkgreen}44.21 $\pm$ 7.06 **& \cellcolor{darkgreen}41.50 $\pm$ 7.81 **& \cellcolor{darkgreen}46.67 $\pm$ 7.55 **& \cellcolor{darkgreen}41.82 $\pm$ 7.37 **& \cellcolor{darkgreen}43.49 $\pm$ 7.72 **& \cellcolor{darkgreen}39.14 $\pm$ 5.37 **& \cellcolor{darkgreen}46.06 $\pm$ 7.43 **& \cellcolor{darkgreen}42.20 $\pm$ 6.91 **& 28.60 $\pm$ 9.77 
& \cellcolor{darkgreen}40.47 $\pm$ 7.43 **& \cellcolor{darkgreen}44.21 $\pm$ 5.51 **\\ & \textbf{kk} & 46.20 $\pm$ 0.68 & \cellcolor{darkgreen}50.36 $\pm$ 1.75 **& \cellcolor{lightgreen}46.83 $\pm$ 0.30 *& \cellcolor{darkgreen}50.29 $\pm$ 1.66 **& \cellcolor{lightgreen}48.64 $\pm$ 2.84 *& \cellcolor{darkgreen}47.14 $\pm$ 0.52 **& \cellcolor{darkgreen}48.33 $\pm$ 1.84 **& 47.28 $\pm$ 2.57 & \cellcolor{lightgreen}47.90 $\pm$ 2.40 *& 47.02 $\pm$ 6.51 & \cellcolor{darkgreen}49.00 $\pm$ 2.16 **& 50.75 $\pm$ 2.31 \\ & \textbf{ta} & 82.76 $\pm$ 0.41 & 81.96 $\pm$ 1.00 & 82.48 $\pm$ 0.59 & 81.51 $\pm$ 1.22 & 82.77 $\pm$ 0.31 & 82.79 $\pm$ 0.35 & 82.22 $\pm$ 0.52 & 82.48 $\pm$ 0.49 & 82.26 $\pm$ 0.68 & 81.69 $\pm$ 0.56 & \cellcolor{lightred}80.97 $\pm$ 0.64 $\dagger$& 81.97 $\pm$ 1.00 \\ & \textbf{te} & 88.27 $\pm$ 0.60 & \cellcolor{lightgreen}88.60 $\pm$ 0.23 *& \cellcolor{lightgreen}88.74 $\pm$ 0.63 *& \cellcolor{darkred}87.71 $\pm$ 0.63 $\dagger\dagger$& \cellcolor{darkgreen}88.93 $\pm$ 0.26 **& 88.21 $\pm$ 0.31 & \cellcolor{lightred}87.93 $\pm$ 0.67 $\dagger$& 88.27 $\pm$ 0.67 & 88.35 $\pm$ 0.34 & 87.38 $\pm$ 0.93 & 87.68 $\pm$ 0.36 & - \\ & \textbf{vi} & 77.07 $\pm$ 0.46 & \cellcolor{darkgreen}77.83 $\pm$ 0.50 **& \cellcolor{darkgreen}77.77 $\pm$ 0.33 **& \cellcolor{darkgreen}77.83 $\pm$ 0.62 **& 77.42 $\pm$ 0.33 & 77.39 $\pm$ 0.54 & \cellcolor{lightgreen}77.46 $\pm$ 0.56 *& \cellcolor{lightgreen}77.53 $\pm$ 0.37 *& \cellcolor{darkgreen}77.72 $\pm$ 0.43 **& \cellcolor{darkred}75.17 $\pm$ 1.08 $\dagger\dagger$& \cellcolor{darkred}76.04 $\pm$ 0.41 $\dagger\dagger$& - \\ \midrule \multirow{4}{*}{\rotatebox[origin=c]{90}{\texttt{mBERT}}} & \textbf{be} & 95.30 $\pm$ 0.65 & \cellcolor{lightred}94.50 $\pm$ 0.63 $\dagger$& \cellcolor{lightred}94.27 $\pm$ 0.68 $\dagger$& 94.55 $\pm$ 0.92 & 94.44 $\pm$ 1.39 & \cellcolor{darkgreen}95.70 $\pm$ 0.60 **& 95.18 $\pm$ 0.74 & 94.76 $\pm$ 1.16 & \cellcolor{lightgreen}95.56 $\pm$ 0.34 *& \cellcolor{lightgreen}95.64 $\pm$ 0.71 *& 95.11 
$\pm$ 0.70 & 94.48 $\pm$ 1.02 \\ & \textbf{kk} & 71.34 $\pm$ 2.28 & 70.75 $\pm$ 1.89 & \cellcolor{darkgreen}73.10 $\pm$ 1.65 **& \cellcolor{lightgreen}71.88 $\pm$ 2.16 *& \cellcolor{lightgreen}71.65 $\pm$ 1.51 *& 69.00 $\pm$ 4.01 & \cellcolor{darkgreen}73.11 $\pm$ 0.77 **& 70.90 $\pm$ 1.58 & \cellcolor{lightgreen}72.57 $\pm$ 1.92 *& 67.42 $\pm$ 2.20 & \cellcolor{lightgreen}72.00 $\pm$ 1.59 *& \cellcolor{darkgreen}72.45 $\pm$ 1.72 **\\ & \textbf{te} & 89.92 $\pm$ 0.48 & 89.53 $\pm$ 0.95 & \cellcolor{lightgreen}90.12 $\pm$ 1.03 *& \cellcolor{lightred}88.43 $\pm$ 0.81 $\dagger$& 89.49 $\pm$ 1.17 & 88.84 $\pm$ 1.41 & 88.59 $\pm$ 0.64 & \cellcolor{lightred}89.40 $\pm$ 0.41 $\dagger$& \cellcolor{lightgreen}90.15 $\pm$ 1.31 *& \cellcolor{lightred}88.82 $\pm$ 1.01 $\dagger$& \cellcolor{lightred}89.01 $\pm$ 0.48 $\dagger$& - \\ & \textbf{vi} & 82.41 $\pm$ 0.72 & 81.66 $\pm$ 0.97 & 82.75 $\pm$ 0.87 & 81.04 $\pm$ 1.37 & 82.45 $\pm$ 0.89 & 82.09 $\pm$ 1.07 & \cellcolor{lightgreen}82.48 $\pm$ 0.81 *& 82.20 $\pm$ 0.87 & \cellcolor{lightgreen}82.57 $\pm$ 0.65 *& 82.28 $\pm$ 0.80 & 82.33 $\pm$ 0.79 & - \\ \bottomrule \end{tabular} } \caption{ Part-of-speech tagging results on original (Org) and augmented datasets, where the results of \texttt{char} are given at the top, \texttt{BPE} in the middle and \texttt{mBERT} at the bottom.} \label{tab:postag_res_multi} \end{table} \vspace*{\fill} \end{landscape} \subsection{Dependency Parsing} \label{ssec:dep_res_sec} \secrev{Due to infeasibility of performing parameter search for all languages, we run a grid search on parameters for the languages with the lowest resources: Buryat, Kazakh and Kurmanji. For Belarusian, Tamil, Telugu and Vietnamese, we determine the best configurations for each language-method pair based on POS tagging results. 
To elaborate, we measure the average performance of the POS taggers for each parameter (e.g., Ratio) and value (e.g., 0.1) and fix the value to the one that provided the best average performance for the specific augmentation method. We use POS tagging as the basis of our dependency parsing experiments for the following reasons: (i) the two tasks are closely related and purely syntactic; and (ii) share the same datasets, hence easily comparable. We report the results for the best performing parameters for Buryat, Kazakh and Kurmanji; and average of multiple runs with fixed parameters for the other languages in Table~\ref{tab:depparsing_res_multi}.}{We use the transition-based parser \texttt{uuparser} and the \texttt{biaffine} parser based on mBERT from \citet{RustPVRG20} to conduct the dependency parsing experiments. The mean and the standard deviations for each language-dataset pair are given in Table~\ref{tab:depparsing_res_multi}.} \begin{table*}[!ht] \centering \scalebox{0.55}{ \begin{tabular}{clllllllllll} \toprule & & &\multicolumn{1}{c}{\textbf{Token-Level}} &\multicolumn{5}{c}{\textbf{Character-Level}} &\multicolumn{3}{c}{\textbf{Syntactic}} \\ \midrule & &\textbf{Org} &\textbf{SR} &\textbf{CI} &\textbf{CSU} &\textbf{CSW} &\textbf{CD} &\textbf{CA} &\textbf{Crop} &\textbf{Rotate} &\textbf{Nonce} \\ \toprule \multirow{7}{*}{\rotatebox[origin=c]{90}{\texttt{uuparser}}} & \textbf{be} & 52.85 $\pm$ 0.70 & \cellcolor{darkred}50.10 $\pm$ 0.67 $\dagger\dagger$& \cellcolor{darkgreen}54.05 $\pm$ 0.40 **& 53.39 $\pm$ 0.51 & \cellcolor{lightgreen}55.37 $\pm$ 1.48 *& \cellcolor{lightgreen}54.89 $\pm$ 0.87 *& \cellcolor{lightgreen}53.57 $\pm$ 0.97 *& 53.01 $\pm$ 1.01 & 53.03 $\pm$ 0.31 & 54.27 $\pm$ 0.68 \\ & \textbf{bxr} & 11.73 $\pm$ 1.65 & \cellcolor{lightgreen}12.70 $\pm$ 1.84 *& 12.39 $\pm$ 1.31 & \cellcolor{darkgreen}13.04 $\pm$ 1.20 **& \cellcolor{darkgreen}13.83 $\pm$ 0.46 **& \cellcolor{darkgreen}13.44 $\pm$ 1.06 **& \cellcolor{darkgreen}14.06 $\pm$ 1.14 **& 
\cellcolor{darkgreen}11.99 $\pm$ 1.52 **& 11.43 $\pm$ 1.39 & \cellcolor{darkred}10.70 $\pm$ 0.77 $\dagger\dagger$\\ & \textbf{kk} & 23.82 $\pm$ 0.92 & 23.82 $\pm$ 0.98 & 24.36 $\pm$ 0.59 & \cellcolor{darkgreen}24.80 $\pm$ 0.75 **& \cellcolor{lightgreen}24.87 $\pm$ 0.77 *& \cellcolor{lightgreen}24.79 $\pm$ 0.86 *& \cellcolor{lightgreen}24.62 $\pm$ 0.83 *& 24.09 $\pm$ 0.79 & 23.76 $\pm$ 0.32 & \cellcolor{lightgreen} 24.78 $\pm$ 1.07 *\\ & \textbf{ku} & 21.09 $\pm$ 0.86 & \cellcolor{darkgreen}23.22 $\pm$ 0.60 **& \cellcolor{darkgreen}23.70 $\pm$ 0.92 **& \cellcolor{darkgreen}24.08 $\pm$ 0.74 **& \cellcolor{lightgreen}23.48 $\pm$ 0.87 *& \cellcolor{darkgreen}24.32 $\pm$ 0.55 **& \cellcolor{darkgreen}24.39 $\pm$ 0.12 **& \cellcolor{darkgreen}22.74 $\pm$ 0.35 **& \cellcolor{darkgreen}24.50 $\pm$ 0.44 **& \cellcolor{darkgreen}23.98 $\pm$ 0.31 **\\ & \textbf{ta} & 55.52 $\pm$ 0.39 & 54.27 $\pm$ 1.99 & 55.39 $\pm$ 0.42 & 55.15 $\pm$ 0.71 & 54.89 $\pm$ 0.90 & 56.11 $\pm$ 0.77 & 56.52 $\pm$ 1.31 & \cellcolor{lightgreen}54.62 $\pm$ 0.65 *& \cellcolor{lightgreen}55.16 $\pm$ 0.18 *& \cellcolor{darkgreen}57.50 $\pm$ 0.43 **\\ & \textbf{te} & 77.79 $\pm$ 0.85 & 76.94 $\pm$ 0.94 & 77.65 $\pm$ 0.94 & \cellcolor{lightgreen}78.83 $\pm$ 0.73 *& \cellcolor{lightgreen}77.27 $\pm$ 1.12 *& \cellcolor{darkgreen}78.11 $\pm$ 0.80 **& 78.87 $\pm$ 0.51 & \cellcolor{lightgreen}78.01 $\pm$ 0.86 *& \cellcolor{lightgreen}78.27 $\pm$ 1.05 *& - \\ & \textbf{vi} & 55.01 $\pm$ 0.15 & \cellcolor{darkred}48.23 $\pm$ 0.27 $\dagger\dagger$& 53.80 $\pm$ 0.32 & \cellcolor{lightred}52.26 $\pm$ 0.47 $\dagger$& \cellcolor{darkred}52.19 $\pm$ 0.36 $\dagger\dagger$& \cellcolor{darkred}52.92 $\pm$ 0.23 $\dagger\dagger$& \cellcolor{lightred}53.15 $\pm$ 0.42 $\dagger$& 55.33 $\pm$ 0.40 & 54.87 $\pm$ 0.26 & - \\ \midrule \multirow{7}{*}{\rotatebox[origin=c]{90}{\texttt{biaffine}}} & \textbf{be\^} & 78.17 $\pm$ 0.36 & 79.66 $\pm$ 0.28 & \cellcolor{darkgreen}80.47 $\pm$ 0.11 **& \cellcolor{darkgreen}80.04 $\pm$ 0.80 
**& 78.79 $\pm$ 0.64 & \cellcolor{darkgreen}80.58 $\pm$ 0.49 **& \cellcolor{darkgreen}80.45 $\pm$ 0.36 **& \cellcolor{darkgreen}79.84 $\pm$ 0.30 **& \cellcolor{darkgreen}79.78 $\pm$ 0.35 **& \cellcolor{darkgreen}79.33 $\pm$ 0.27 **\\ & \textbf{bxr} & 17.71 $\pm$ 0.52 & \cellcolor{darkgreen}18.73 $\pm$ 0.13 **& 17.91 $\pm$ 0.51 & 17.65 $\pm$ 0.44 & 17.73 $\pm$ 0.17 & 17.72 $\pm$ 0.87 & \cellcolor{lightgreen}17.81 $\pm$ 0.50 *& \cellcolor{lightred}16.74 $\pm$ 0.88 $\dagger$& 17.59 $\pm$ 0.36 & 17.12 $\pm$ 0.82 \\ & \textbf{kk\^} & 34.23 $\pm$ 0.32 & \cellcolor{darkgreen}35.49 $\pm$ 0.75 **& \cellcolor{darkgreen}35.61 $\pm$ 0.52 **& \cellcolor{darkgreen}36.69 $\pm$ 0.76 **& \cellcolor{darkgreen}36.30 $\pm$0.59 **& \cellcolor{darkgreen}35.42 $\pm$ 0.49 **& \cellcolor{darkgreen}35.76 $\pm$ 0.42 **& \cellcolor{lightgreen}34.61 $\pm$ 0.75 *& 34.18 $\pm$ 0.42 & \cellcolor{darkgreen}38.43 $\pm$ 0.17 **\\ & \textbf{ku} & 20.48 $\pm$ 0.35 & 20.64 $\pm$ 0.24 & \cellcolor{darkgreen}22.16 $\pm$ 0.47 **& 20.49 $\pm$ 0.33 & \cellcolor{darkgreen}22.56 $\pm$ 0.68 **& \cellcolor{darkgreen}22.71 $\pm$ 0.30 **& \cellcolor{darkgreen}23.13 $\pm$ 0.26 **& 20.53 $\pm$ 0.41 & \cellcolor{darkred}19.39 $\pm$ 0.50 $\dagger\dagger$& \cellcolor{darkgreen}22.82 $\pm$ 0.24 **\\ & \textbf{ta\^} & 70.01 $\pm$ 0.62 & \cellcolor{darkgreen}70.99 $\pm$ 0.21 **& 70.27 $\pm$ 0.10 & \cellcolor{lightgreen}71.21 $\pm$ 1.08 *& 70.54 $\pm$ 0.49 & \cellcolor{darkgreen}71.13 $\pm$ 0.67 **& \cellcolor{darkgreen}71.47 $\pm$ 0.53 **& \cellcolor{darkgreen}70.89 $\pm$ 0.30 **& \cellcolor{lightgreen}71.11 $\pm$ 1.01 *& \cellcolor{darkgreen}71.24 $\pm$ 0.85 **\\ & \textbf{te\^} & 84.60 $\pm$ 0.79 & 84.60 $\pm$ 0.80 & \cellcolor{lightred}84.28 $\pm$ 0.97 $\dagger$& \cellcolor{darkgreen}84.88 $\pm$ 0.54 **& \cellcolor{darkgreen}85.02 $\pm$ 0.32 **& \cellcolor{darkred}84.05 $\pm$ 0.78 $\dagger\dagger$& 84.74 $\pm$ 0.81 & \cellcolor{darkgreen}85.30 $\pm$ 0.57 **& \cellcolor{darkred}84.19 $\pm$ 0.59 $\dagger\dagger$& - \\ & 
\textbf{vi\^} & 66.25 $\pm$ 0.84 & \cellcolor{darkred}62.40 $\pm$ 0.76 $\dagger\dagger$& \cellcolor{darkred}63.49 $\pm$ 0.80 $\dagger\dagger$& \cellcolor{darkred}63.87 $\pm$ 0.55 $\dagger\dagger$& \cellcolor{darkred}64.22 $\pm$ 2.07 $\dagger\dagger$& \cellcolor{darkred}64.54 $\pm$ 2.05 $\dagger\dagger$& \cellcolor{darkred}64.09 $\pm$ 1.25 $\dagger\dagger$& 66.11 $\pm$ 0.85 & 65.94 $\pm$ 0.91 & - \\ \bottomrule \end{tabular} } \caption{Dependency parsing LAS scores on original (Org) and augmented datasets, where the results of \texttt{uuparser} are given at the top, and those of the \texttt{biaffine} parser at the bottom. \emph{language\^} denotes that \emph{language} is part of multilingual BERT.} \label{tab:depparsing_res_multi} \end{table*} \paragraph{\texttt{uuparser}} Compared to POS tagging, the relative improvements over the baseline are substantially higher. However, unlike POS tagging, there is no direct relation between low baseline scores and larger improvements. For instance, the relative improvement for Kazakh is only 3\%, despite having the second lowest baseline. This suggests that additional factors such as dataset statistics and linguistic properties come into play for dependency parsing. \secrev{}{ Except for Vietnamese, we observe significant gains for all languages from at least one of the augmentation methods---sometimes from \textit{any} technique, as for Kurmanji. Similar to POS tagging, \textit{SR} provides a mixture of results, significantly reducing the scores for Belarusian and Vietnamese, while improving the Kurmanji and Buryat parsers. The character-level augmentation techniques mostly improve the performance of the Belarusian, Buryat, Kazakh, Kurmanji and Telugu parsers significantly.
Unlike POS tagging, the improvements from the syntactic methods are more pronounced for most languages, with the exception of Vietnamese, Belarusian and Buryat-\textit{Nonce}.} \secrev{}{\paragraph{\texttt{biaffine}} As discussed earlier, mBERT is trained on the top 100 languages with the largest Wikipedias; however, training is performed on a \textit{shared} vocabulary. Hence, even if a language is not seen during training, the shared vocabulary might enable \textit{zero-shot} learning---especially if the model is trained with related languages. In Table~\ref{tab:depparsing_res_multi}, the languages without a \^{} sign are not included in mBERT training, so their scores are the results of zero-shot learning. Compared to \texttt{uuparser}, all baselines (Org) are substantially higher, as expected---except for Kurmanji. Despite such high scores, we observe a considerable number of statistically significant improvements over the baseline, with some exceptions. As with \texttt{uuparser}, Vietnamese dependency parsing performance is significantly reduced by augmentation, suggesting that the augmentation methods introduce too much noise for the Vietnamese dependency parser. \textit{SR} improves the scores more consistently for Belarusian, Kazakh and Tamil, while failing to bring significant gains for the other languages. Character noise injection techniques---especially \textit{CD} and \textit{CA}---boost the performance of most parsers, again with the exception of Vietnamese and Telugu. Similar to \texttt{uuparser}, the syntactic techniques result in higher scores than they do for POS tagging. We observe significant gains from \textit{Nonce}, while \textit{Crop} also improves the scores substantially in most cases---Buryat and Vietnamese being the exceptions. Unlike \textit{Nonce} and \textit{Crop}, \textit{Rotate} gives mixed results.
} \paragraph{Summary and Discussion} The similarities \secrev{to POS tagging results}{among the different parsers and POS taggers} are numerous, such as (i) the small number of augmentation techniques that were able to improve the scores for \secrev{Telugu, Vietnamese and}{} Buryat; (ii) the high performance of the character noise methods \secrev{and the Nonce techniques}{} for most languages\secrev{and}{;} (iii) the generally \secrev{low}{mixed} scores produced by \textit{SR} and \textit{rotation}\secrev{}{; and (iv) the mostly unimproved or worse results for Vietnamese and Telugu}. Considering the synthetic nature of most of the languages, where characters, e.g., case markers, provide valuable information on the relationship between two tokens, the high performance of character-level noise is rather expected. In other words, varying character sequences may help the network to strengthen the dependency relations. Unlike POS tagging, we also observe a performance boost from the syntactic augmenters \textit{Crop} and \textit{Nonce}, sometimes leading to the best results, e.g., for Vietnamese and Kurmanji. This suggests that introducing more syntactic variation/noise, even in smaller amounts than character-level noise, helps in certain cases. Nevertheless, the performances of both techniques are comparable, and it is not possible to single out one technique that is guaranteed to improve the results. Additionally, we observe that not \textit{all} character-level methods increase the scores, and there is no clear pattern as to which character noise improves which language. \secrev{For instance, while \textit{CA} yields the best scores in Belarusian POS tagging, \textit{CSW} and \textit{CD} are the ones that increase the dependency parsing scores for Belarusian.
Finally, for dependency parsing, the largest improvements are almost always obtained by injecting character-level noise, as in POS tagging; however, unlike POS tagging, syntactic augmentation almost always improves dependency parsing.}{Our results also suggest that a higher number of augmentation techniques are able to improve significantly over the competitive baseline scores provided by \texttt{biaffine} than over those of the \texttt{mBERT}-based POS tagger. One reason may be that the \texttt{mBERT} model already contains a substantial amount of low-level syntactic knowledge, so the augmentation techniques only add noise to the fine-tuning process.} \subsection{Semantic Role Labeling} \label{ssec:srl_res_sec} As discussed in Sec.~\ref{ssec:tasks}, we \textit{simulate} a low-resource scenario by sampling 250, 500 and 1000 training sentences from the original training sets. We perform a grid search on the augmentation parameters for the \#sampled=250 setting, and choose the parameters with the best average performance for the other dataset settings. The results are given in Table~\ref{tab:srl_results}.
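The low-resource simulation described above amounts to drawing a fixed-size random sample of training sentences for each setting. The sketch below illustrates the idea; the function name and seed are our own hypothetical choices, not details from the experiments.

```python
import random

def simulate_low_resource(train_sentences, n_samples, seed=42):
    """Sample a fixed number of training sentences to simulate a
    low-resource setting. Illustrative sketch: the paper samples
    250, 500 and 1000 sentences from the original SRL training sets."""
    rng = random.Random(seed)  # fixed seed so the subset is reproducible
    n = min(n_samples, len(train_sentences))
    return rng.sample(train_sentences, n)

# The three sample sizes used in the SRL experiments.
corpus = [f"sent_{i}" for i in range(5000)]  # placeholder corpus
subsets = {k: simulate_low_resource(corpus, k) for k in (250, 500, 1000)}
```

Because each subset is drawn with a fixed seed, the same simulated training set can be reused across augmentation configurations, so observed differences come from the augmentation rather than the sample.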
\begin{table*}[!htp] \centering \scalebox{0.55}{ \begin{tabular}{lllllllllll} \toprule \multicolumn{9}{c}{\textbf{Catalan}} & \\ \midrule \textbf{\#sample} &\textbf{Org} &\textbf{SR} &\textbf{CI} &\textbf{CSU} &\textbf{CSW} &\textbf{CD} &\textbf{CA} &\textbf{Crop} &\textbf{Rotate} \\ 250 & 31.34 $\pm$ 0.99 & \cellcolor{darkgreen}37.29 $\pm$ 1.26 **& \cellcolor{darkgreen}38.50 $\pm$ 1.37 **& \cellcolor{darkgreen}37.74 $\pm$ 1.64 **& \cellcolor{darkgreen}39.79 $\pm$ 0.25 **& \cellcolor{darkgreen}37.45 $\pm$ 1.58 **& \cellcolor{darkgreen}38.28 $\pm$ 1.48 **& \cellcolor{darkred}26.59 $\pm$ 0.52 $\dagger\dagger$& \cellcolor{darkred}25.96 $\pm$ 0.62 $\dagger\dagger$\\ 500 & 44.53 $\pm$ 1.39 & 44.12 $\pm$ 1.78 & \cellcolor{darkgreen}49.75 $\pm$ 0.10 **& \cellcolor{lightgreen}47.77 $\pm$ 0.65 *& \cellcolor{lightgreen}46.81 $\pm$ 1.59 *& \cellcolor{lightgreen}47.47 $\pm$ 0.62 *& \cellcolor{darkgreen}49.46 $\pm$ 0.19 **& \cellcolor{lightred}44.30 $\pm$ 1.16 $\dagger$& 44.72 $\pm$ 0.36 \\ 1000 & 53.06 $\pm$ 1.33 & 53.49 $\pm$ 0.64 & 53.80 $\pm$ 0.67 & 54.83 $\pm$ 0.81 & \cellcolor{lightgreen}56.37 $\pm$ 0.29 *& \cellcolor{lightgreen}55.50 $\pm$ 0.17 *& 53.97 $\pm$ 0.96 & 53.57 $\pm$ 0.07 & 52.31 $\pm$ 0.96 \\ \midrule \multicolumn{9}{c}{\textbf{Turkish}} & \\ \midrule \textbf{\#sample} &\textbf{Org} &\textbf{SR} &\textbf{CI} &\textbf{CSU} &\textbf{CSW} &\textbf{CD} &\textbf{CA} &\textbf{Crop} &\textbf{Rotate} \\ 250 & 31.28 $\pm$ 0.12 & 31.78 $\pm$ 1.20 & 31.62 $\pm$ 1.53 & 30.67 $\pm$ 2.19 & 30.14 $\pm$ 2.46 & \cellcolor{lightgreen}32.52 $\pm$ 0.89 *& 32.02 $\pm$ 1.53 & \cellcolor{lightgreen}32.42 $\pm$ 0.64 *& 30.37 $\pm$ 1.75 \\ 500 & 35.75 $\pm$ 1.38 & 35.95 $\pm$ 0.65 & 36.35 $\pm$ 0.68 & \cellcolor{lightgreen}38.27 $\pm$ 0.88 *& 37.30 $\pm$ 1.05 & 34.80 $\pm$ 1.96 & 36.23 $\pm$ 1.37 & \cellcolor{lightgreen}37.40 $\pm$ 1.07 *& 34.58 $\pm$ 1.89 \\ 1000 & 44.89 $\pm$ 0.80 & 42.41 $\pm$ 1.20 & 43.36 $\pm$ 0.67 & 41.78 $\pm$ 1.39 & 42.28 $\pm$ 1.31 & 41.42 $\pm$ 1.38 & 
\cellcolor{darkred}29.14 $\pm$ 0.47 $\dagger\dagger$& 44.83 $\pm$ 1.07 & 42.71 $\pm$ 0.76 \\ \midrule \multicolumn{9}{c}{\textbf{Spanish}} & \\ \midrule \textbf{\#sample} &\textbf{Org} &\textbf{SR} &\textbf{CI} &\textbf{CSU} &\textbf{CSW} &\textbf{CD} &\textbf{CA} &\textbf{Crop} &\textbf{Rotate} \\ 250 & 31.23 $\pm$ 1.30 & \cellcolor{lightgreen}34.65 $\pm$ 1.02 *& 36.31 $\pm$ 2.28 & \cellcolor{darkgreen}37.91 $\pm$ 1.21 **& \cellcolor{lightgreen}36.80 $\pm$ 1.90 *& \cellcolor{lightgreen}36.76 $\pm$ 2.11 *& \cellcolor{darkgreen}37.34 $\pm$ 1.14 **& \cellcolor{darkred}26.04 $\pm$ 1.34 $\dagger\dagger$& \cellcolor{darkred}26.95 $\pm$ 2.22 $\dagger\dagger$\\ 500 & 44.16 $\pm$ 0.68 & 43.89 $\pm$ 0.65 & 44.80 $\pm$ 0.81 & 44.82 $\pm$ 1.10 & 43.75 $\pm$ 1.36 & \cellcolor{lightgreen}44.60 $\pm$ 0.72 *& \cellcolor{lightgreen}45.03 $\pm$ 0.91 *& \cellcolor{darkred}42.35 $\pm$ 0.42 $\dagger\dagger$& \cellcolor{darkred}41.13 $\pm$ 0.38 $\dagger\dagger$\\ 1000 & 50.98 $\pm$ 1.30 & 50.80 $\pm$ 1.21 & \cellcolor{darkgreen}53.03 $\pm$ 0.76 **& \cellcolor{darkgreen}52.53 $\pm$ 0.60 **& \cellcolor{lightgreen}52.10 $\pm$ 1.00 *& \cellcolor{darkgreen}52.76 $\pm$ 0.57 **& \cellcolor{darkgreen}54.12 $\pm$ 0.12 **& 51.32 $\pm$ 1.09 & \cellcolor{lightgreen}52.40 $\pm$ 0.34 *\\ \midrule \multicolumn{9}{c}{\textbf{Czech}} & \\ \midrule \textbf{\#sample} &\textbf{Org} &\textbf{SR} &\textbf{CI} &\textbf{CSU} &\textbf{CSW} &\textbf{CD} &\textbf{CA} &\textbf{Crop} &\textbf{Rotate} \\ 250 & 35.63 $\pm$ 1.08 & 33.48 $\pm$ 1.58 & \cellcolor{lightgreen}37.17 $\pm$ 0.13 *& \cellcolor{lightgreen}37.56 $\pm$ 0.02 *& \cellcolor{lightred}31.56 $\pm$ 1.00 $\dagger$& 35.70 $\pm$ 0.80 & 36.14 $\pm$ 1.78 & \cellcolor{darkgreen}34.84 $\pm$ 1.55 **& \cellcolor{darkred}30.78 $\pm$ 0.38 $\dagger\dagger$\\ 500 & 44.96 $\pm$ 0.51 & \cellcolor{lightred}42.04 $\pm$ 1.08 $\dagger$& \cellcolor{lightgreen}46.44 $\pm$ 1.20 *& 46.13 $\pm$ 1.97 & 46.24 $\pm$ 1.60 & \cellcolor{darkgreen}47.92 $\pm$ 0.59 **& 
\cellcolor{lightgreen}47.42 $\pm$ 1.07 *& \cellcolor{lightgreen}46.38 $\pm$ 0.12 *& 44.83 $\pm$ 1.47 \\ 1000 & 50.29 $\pm$ 0.09 & \cellcolor{lightred}48.15 $\pm$ 0.31 $\dagger$& \cellcolor{lightred}48.66 $\pm$ 0.49 $\dagger$& \cellcolor{lightgreen}52.63 $\pm$ 0.97 *& 51.10 $\pm$ 1.23 & \cellcolor{lightred}47.85 $\pm$ 1.19 $\dagger$& \cellcolor{lightgreen}51.97 $\pm$ 0.54 *& 49.33 $\pm$ 1.36 & 48.45 $\pm$ 1.34 \\ \midrule \multicolumn{9}{c}{\textbf{Finnish}} & \\ \midrule \textbf{\#sample} &\textbf{Org} &\textbf{SR} &\textbf{CI} &\textbf{CSU} &\textbf{CSW} &\textbf{CD} &\textbf{CA} &\textbf{Crop} &\textbf{Rotate} \\ 250 & 18.80 $\pm$ 0.89 & 18.60 $\pm$ 2.34 & 17.18 $\pm$ 1.56 & 19.68 $\pm$ 1.16 & \cellcolor{darkgreen}25.71 $\pm$ 0.80 **& \cellcolor{darkgreen}28.87 $\pm$ 1.55 **& \cellcolor{darkgreen}27.96 $\pm$ 0.44 **& \cellcolor{darkgreen}30.83 $\pm$ 0.17 **& 20.07 $\pm$ 2.42 \\ 500 & 37.23 $\pm$ 0.35 & 34.10 $\pm$ 1.61 & \cellcolor{lightred}35.59 $\pm$ 0.87 $\dagger$& \cellcolor{lightred}36.88 $\pm$ 0.29 $\dagger$& \cellcolor{lightgreen}38.98 $\pm$ 0.03 *& \cellcolor{lightred}35.29 $\pm$ 0.99 $\dagger$& 35.32 $\pm$ 1.63 & \cellcolor{lightgreen}37.58 $\pm$ 0.93 *& 35.16 $\pm$ 1.08 \\ 1000 & 41.64 $\pm$ 0.98 & \cellcolor{lightred}40.15 $\pm$ 0.95 $\dagger$& 41.27 $\pm$ 0.84 & 41.41 $\pm$ 1.08 & 39.80 $\pm$ 0.81 & 40.57 $\pm$ 1.17 & 41.99 $\pm$ 0.04 & 41.35 $\pm$ 1.37 & \cellcolor{lightred}39.08 $\pm$ 0.34 $\dagger$\\ \bottomrule \end{tabular} } \caption{Semantic role labeling results on original (Org) and augmented datasets.} \label{tab:srl_results} \end{table*} As expected, the relative improvement over the baselines decreases as the number of samples increases. For some languages like Finnish the drop is dramatic, while for the majority the decrease is exponential. The only exception is Czech; the reason is its fine-grained, language-specific semantic annotation scheme, which requires larger datasets.
Another noticeable pattern is that the number of augmentation methods that improve the F1 scores decreases with increasing sample size. \secrev{For instance, while six of the methods increase the SRL performance in \#sample=250 setting for Spanish, only 2 provides improvement for \#sample=2000 setting. Similarly in Turkish and Finnish, SRL benefits almost from all augmentation methods for 250 samples, while only a couple of methods continue to provide gains in case of 500 samples.}{For instance, while six of the methods increase the SRL performance in the \#sample=250 setting for Catalan, only two of them provide improvement in the \#sample=1000 setting---and these improvements are also less significant.} We see one distinctive pattern for Turkish, Czech and Finnish. Unlike for Spanish and Catalan, the syntactic operation \textit{crop} improves the performance in almost all settings. We believe this is due to the rich case-marking systems of these languages, which enable generating almost natural sentences from subtrees. Furthermore, we observe that the \textit{rotation} operation introduces a high amount of noise that cannot be regularized easily by the models. One reason for the more consistent improvements with the cropping operation is related to the semantic role statistics. Most treebanks in this study are dominated by the core arguments, i.e., Arg0 and Arg1. These core arguments are usually observed as subjects or objects. In addition, many predicates are encountered with missing arguments, i.e., it is more likely to see a predicate with only one of its arguments than with all of them. We believe cropping introduces more variation in terms of core arguments than rotation, which provides more signal for cases such as missing arguments. For Spanish and Catalan, the gains are almost always provided by character-level noise.
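The character-level noise operations referred to above can be sketched in a few lines. The implementations below (random insertion, deletion, swap, and a mixed operation in the spirit of \textit{CA}) are illustrative only; the exact operation definitions and noise rates used in the experiments are not reproduced here:

```python
import random

def char_insert(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Insert one random character at a random position."""
    i = random.randrange(len(word) + 1)
    return word[:i] + random.choice(alphabet) + word[i:]

def char_delete(word):
    """Delete one random character (no-op for single-character words)."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word))
    return word[:i] + word[i + 1:]

def char_swap(word):
    """Swap two adjacent characters at a random position."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def char_mixed(word):
    """Mixed character noise: apply one randomly chosen operation."""
    return random.choice([char_insert, char_delete, char_swap])(word)

random.seed(0)
print(char_mixed("augmentation"))
```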
\secrev{The gain is consistent in almost all numbers of samples.}{} In contrast, neither language ever benefits from the syntactic operations, as expected. \secrev{}{ \subsection{Summary of the Findings} \label{ssec:summary_findings} We summarize the key findings from the experiments along different perspectives: languages, downstream tasks, augmentation techniques and models. \subsubsection{Languages} \begin{itemize} \item Most languages have seen significant improvements in at least one augmentation configuration, independent of tasks, models and dataset sizes. \item Vietnamese has mostly witnessed drops in scores, which were especially pronounced in dependency parsing. This suggests that the augmentation methods may be less effective for analytic languages. \item We have not observed any significant difference between fusional and agglutinative languages in terms of POS and DEP scores; however, the differences between syntactic and non-syntactic augmentation techniques were pronounced for languages with different morphological properties. \item The suitability of augmentation techniques has been found to depend on the language--subword unit pair. For instance, the Tamil POS tagging \texttt{BPE} baseline could not be improved, whereas we have observed significant improvements over the Tamil POS tagging \texttt{char} baseline. \item We have detected inconsistent results for Telugu in the majority of cases. Furthermore, we have not seen many configurations that led to substantial improvements. Telugu had the second largest number of significant declines in performance after Vietnamese. We believe this may be because Telugu has one of the largest treebanks, so the augmentation techniques only add meaningless noise to an already strong baseline. \end{itemize} \subsubsection{Tasks} \begin{itemize} \item We have found many similarities between the results for POS and DEP.
The most important ones are: (i) character-level augmentations provide significant improvements in most cases, and (ii) the \textit{SR} and \textit{rotation} methods yield inconsistent results. \item The task that has been improved the most \textit{(i.e., statistically significant improvements with a large gap over the baseline)} was DEP, followed by POS and SRL. In other words, the augmentation techniques we experimented with benefited the task of \textit{intermediate} complexity the most. \item Strong baseline scores obtained by exploiting large pretrained contextualized embeddings were more likely to be further improved for DEP (e.g., \texttt{biaffine}) than for POS (e.g., \texttt{mBERT}). \end{itemize} \subsubsection{Augmentation Techniques} \begin{itemize} \item The most consistent augmenters across tasks, models and languages have been found to be the character-level ones. \item A satisfactory choice of augmentation techniques depends heavily on the input unit. For instance, token-level augmentation provides significant improvements for \texttt{BPE}, while character-level techniques give higher scores for \texttt{char} and the Word Pieces of \texttt{mBERT}. \item The performance of \textit{SR} has been found to be irregular. The reason may be that it relies on external pretrained embeddings, which may be of lower quality for some of the languages. \item Even though we have not found a single winner among the character-level augmentation methods, the mixed character noise, a.k.a. \textit{CA}, has improved the POS, DEP and SRL tasks most consistently. \item Among the syntactic augmenters, \textit{crop} and \textit{Nonce} have been found to be more reliable than \textit{rotate}---with some exceptions, such as the case of \texttt{BPE}. \end{itemize} \subsubsection{Models} \begin{itemize} \item We have observed an almost regular pattern across different models for DEP, such as a significant drop in scores for Vietnamese.
Even though there were some similarities among models for POS, such as the syntactic augmenters achieving the lowest scores, a regular pattern was not visible. The reason might be the differences among the subword units in POS\footnote{All POS models use \textit{distinct} input types: character, BPE and WordPiece; while \texttt{uuparser} (using a combination of char and word) and \texttt{biaffine} (WordPiece) parsers are likely to share more vocabulary.}. \item Strong baseline scores obtained by exploiting large pretrained contextualized embeddings were more likely to be further improved for DEP (e.g., \texttt{biaffine}) than for POS (e.g., \texttt{mBERT}). This may be because \texttt{mBERT} \textit{already} contains a substantial amount of low-level syntactic knowledge (e.g., POS), hence the augmentation techniques only add noise to the fine-tuning process. \end{itemize} } \subsubsection{POS Tagging} The goal is to associate each token with its corresponding lexical class, i.e., syntactic label (e.g., \textit{noun, adjective, verb}). Although it is a token-level task, disambiguation using contextual knowledge is usually necessary, as one token may belong to multiple classes (e.g., to fly \textit{(Verb)} or a fly \textit{(Noun)}). For languages with rich morphology, the task is generally referred to as morphological disambiguation, whereby the correct morphological analysis---including the POS tag---is chosen among multiple analyses. For analytic languages like English, it is mostly performed as the first step in the traditional NLP pipeline. For this task, we inherit the universal POS tag set that is shared among languages and defined within the scope of the Universal Dependencies project~\cite{ud26}. \paragraph{Subword Units} \secrev{}{We experiment with three subword units: characters, BPEs and Word Pieces.
Characters and character n-grams have been among the most popular subword units~\cite{LingDBTFAML15}, since they (i) do not require any preprocessing, (ii) are language-agnostic, and (iii) are computationally cheap due to the small vocabulary size. Byte Pair Encoding~(BPE)~\cite{SennrichHB16a} is a simple segmentation algorithm that learns a subword vocabulary by iteratively merging the most frequent character pairs a predefined number of times. The algorithm only requires raw text---hence it is language-agnostic and computationally simple. Furthermore, it has been shown to improve the performance on various NLP tasks, especially machine translation. \citet{Heinzerling018} trained BPE embeddings using the GloVe~\cite{PenningtonSM14} word embedding objective and made them available for 275 languages with multiple vocabulary sizes. However, neither randomly initialized character embeddings nor GloVe-BPE embeddings are aware of context. Therefore, we also experiment with contextualized embeddings (i.e., different embeddings for the same subword depending on the context) that operate on Word Pieces. For this study we choose BERT~\cite{DevlinCLT19}: a Transformer~\cite{VaswaniSPUJGKP17} based contextualized language model that recently led to state-of-the-art results for many languages and tasks. We use the publicly available multilingual BERT~(mBERT)\footnote{https://github.com/google-research/bert/blob/f39e881/multilingual.md} that has been trained on the 100 languages with the largest Wikipedias, using a shared vocabulary across languages. Some of the low-resource languages we use in our experiments are not among mBERT's languages. } \paragraph{The model} \secrev{}{The overall architecture of the sequence tagging model used in this study~\cite{HeinzerlingS19multi} is given in Fig.~\ref{fig:postagsota}. Even though the modular architecture enables combining different subwords, we experiment with the units separately for the sake of measuring the individual effect.
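The BPE merge loop described above can be sketched compactly. This is a toy illustration of the learning procedure (toy corpus and word-end marker are illustrative), not the implementation behind the pretrained embeddings:

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merges: repeatedly merge the most frequent adjacent symbol pair."""
    # Represent each word as a tuple of symbols plus an end-of-word marker.
    vocab = Counter(tuple(w) + ("</w>",) for w in words)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for syms, freq in vocab.items():
            for a, b in zip(syms, syms[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the chosen merge to every word in the vocabulary.
        merged = {}
        for syms, freq in vocab.items():
            out, i = [], 0
            while i < len(syms):
                if i + 1 < len(syms) and (syms[i], syms[i + 1]) == best:
                    out.append(syms[i] + syms[i + 1])
                    i += 2
                else:
                    out.append(syms[i])
                    i += 1
            merged[tuple(out)] = merged.get(tuple(out), 0) + freq
        vocab = merged
    return merges

merges = learn_bpe(["low", "low", "lower", "newest", "newest", "newest"], 4)
print(merges)  # first learned merge on this toy corpus: ('w', 'e')
```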
\begin{figure} \centering \includegraphics[width=0.65\textwidth]{figures/pos_tagger_benjamin.png} \caption{ General architecture of the sequence tagger taken from \citet{HeinzerlingS19multi}.} \label{fig:postagsota} \end{figure} For the character- and BPE-level models, each word is first split into a sequence of subwords produced by the $\rho$ function. For characters, each subword embedding is randomly initialized, whereas for BPE, pretrained embeddings are looked up. \begin{gather} \rho(w) = sub_0, sub_1, \ldots, sub_n \end{gather} For the character- and BPE-level models, the sequence is encoded via an RNN and a bi-LSTM network, respectively. For the BERT-based model, the encoder is simply the pretrained Transformer, which is fine-tuned during training; therefore no additional encoding is performed. For the character-level model, the \textit{final} hidden states are used to represent the token, while for BPE and mBERT, only the states of the first subword of each token are used. \begin{gather} \vec{hs_f}, \vec{hs_b} = \text{Encoder}(\rho(w)) \\ \vec{w} = [\vec{hs_f};\vec{hs_b}] \label{eq:comp} \end{gather} Next, the token embeddings $\vec{w_i}$ are passed on to another bi-LSTM layer, where $i$ denotes the index of the word. \begin{gather} \vec{h_{f}}, \vec{h_{b}} = \text{bi-LSTM}(\vec{w_{i}}) \end{gather} Then, the concatenated hidden states from the forward direction, $h_{f}$, and the backward direction, $h_{b}$, are fed to a classification layer that projects the feature space onto the label space. Finally, the probability distribution over POS tags is calculated via a softmax function. The label with the highest probability is assigned to the input token as follows, where $S$ represents the sentence and $\vec{L}$ denotes the POS tag sequence. \begin{gather} \vec{p(L_i|S)} = \mathrm{softmax}(W_{l}\cdot[\vec{h_{f}};\vec{h_{b}}]+\vec{b_{l}}) \end{gather} }\secrev{We used uniform initialization for all network components. The model is optimized via stochastic gradient descent with an initial learning rate of 1.
Gradients above $2$ are clipped. We train the models for 50 epochs, and halve the learning rate when the results on the development set do not improve for 3 epochs. The embedding size for characters as well as the hidden size of the intermediate layers are chosen as 128. The batch size is set to 8 due to the small training sets. The number of bi-LSTM layers of all bi-LSTM components is left as a parameter, and the values 1 and 2 are used in the experiments. Accuracy is used to evaluate the POS tagging results.}{BERT is fine-tuned during training, i.e., the model's weights are updated by backpropagating through all layers. For each language, we use the best vocabulary size reported in previous work~\cite{HeinzerlingS19multi}. For each subword-level model, we use the parameters from the original work~\cite{HeinzerlingS19multi}.} \subsubsection{Dependency Parsing} Dependency parsing aims to provide a viable structural or grammatical analysis of a sentence by finding the links between its tokens. It assumes that each dependent word is linked to its parent, a.k.a. head word, with one of the dependency relations such as \textit{modifier, subject} or \textit{object}. The resulting grammatical analysis is called a dependency graph, shown in Fig.~\ref{fig:syntactic_demo}. We use the universal dependency label sets defined by the Universal Dependencies project~\cite{ud26} and report the Labeled Attachment Score (LAS) as the performance measure. \secrev{}{For the experiments, we use two different models: uuparser~\cite{KiperwasserG16,delhoneux17arc} and the biaffine parser fine-tuned on large contextualized language models~\cite{GlavasV21}.} \paragraph{uuparser} \secrev{}{The parser is based on the transition-based parser of \citet{KiperwasserG16}, which uses bi-LSTMs to create features. Here $c$ refers to a character and $e$ to an embedding. First, a character-based representation is generated via bi-LSTMs.
Next, the non-contextual representation of a token, $\vec{w}$, is created by concatenating the character-based representation $\vec{e_c}$ with an embedding $\vec{e_w}$ for the token itself, which we initialize randomly. \begin{gather} \vec{e_c} = \text{bi-LSTM}(c_0, c_1, \ldots, c_n) \\ \vec{w} = [\vec{e_w};\vec{e_c}] \end{gather} Then we create a context-aware representation for the token at index $i$ using an additional bi-LSTM layer: \begin{gather} \vec{h_{i}} = \text{bi-LSTM}(\vec{w_{i}}) \label{eq:dep_rep} \end{gather}} We use the arc-hybrid transition-based parser, later extended with a partially dynamic oracle and a \textsc{Swap} transition to be able to create non-projective dependency graphs. A typical transition-based parser consists of so-called configurations and a set of transitions, a.k.a. actions, which can be applied to a configuration to create a new configuration. Parsing starts with a predefined initial configuration. At each iteration, a classifier chooses the best transition given the features extracted from the current configuration, and updates the configuration. This step is repeated until a predefined terminal configuration is reached. The arc-hybrid parser used in this work defines a configuration as a stack, a buffer and a set of dependency arcs. \secrev{}{The feature for a configuration is then created by concatenating the representations, calculated via Eq.~\ref{eq:dep_rep}, of a fixed number of tokens from the top of the stack and the first element of the buffer.} Finally, a scoring function, implemented as a multi-layer perceptron, assigns scores to transitions and dependency labels given the extracted feature. The transition and the label with the highest scores are then used to create the next configuration. \secrev{We used the default settings of uuparser~\cite{KiperwasserG16,delhoneux17arc}\footnote{Available at \url{https://github.com/UppsalaNLP/uuparser}}. Similar to POS tagging, embedding sizes are set to 128, and the training is performed for 100 epochs.
We report the Labeled Attachment Score (LAS) as the performance measure.}{We train separate models for each treebank, using \textit{only} randomly initialized character and word features.} \paragraph{biaffine}\secrev{}{The parser consists of an attention layer on top of the outputs of a transformer-based language model, as shown in Fig.~\ref{fig:biaffine}. If a token consists of multiple subword segments, a single token representation is created by averaging the transformer outputs of its segments, denoted with $\mathbf{X}$. $\mathbf{X} \in \mathbb{R}^{N \times H}$ is then used as the representation of the syntactic dependents, where $N$ and $H$ refer to the number of tokens in the sentence and the transformer hidden state size, respectively. To represent the \texttt{root} node, the transformer representation of the \texttt{[CLS]} token (the sentence start token), $\mathbf{x}_\mathit{CLS}$, is used. To represent the dependency heads, $\mathbf{X}' = [\mathbf{x}_\mathit{CLS}; \mathbf{X}] \in \mathbb{R}^{(N+1) \times H}$ is defined. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figures/biaffine_glavas.png} \caption{ Transformer based architecture of the biaffine parser taken from \citet{GlavasV21}.} \label{fig:biaffine} \end{figure} The arc and relation scores are then calculated as biaffine products as follows: \vspace{-1em} { \small \begin{equation*} \mathbf{Y}_{\mathit{arc}} = \mathbf{X}\mathbf{W}_\mathit{arc}\mathbf{X'}^{\top} + \mathbf{B}_\mathit{arc};\hspace{0.5em} \mathbf{Y}_{\mathit{rel}} = \mathbf{X}\mathbf{W}_\mathit{rel}\mathbf{X'}^{\top} + \mathbf{B}_\mathit{rel} \end{equation*}} \noindent where $\mathbf{W}$ and $\mathbf{B}$ refer to the weight and bias parameters of the arc and relation classifiers. The correct dependency head is selected simply as the row corresponding to the maximum score in $\mathbf{Y}_{\mathit{arc}}$. The arc and relation classification losses are defined as cross-entropy losses over the sentence tokens and the gold arcs, respectively.
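The biaffine arc scoring just described reduces to two matrix products plus a bias. A minimal numerical sketch in plain Python (toy dimensions and illustrative values; greedy per-token head selection without a tree constraint, so the result need not form a valid tree):

```python
def matmul(A, B):
    """Plain-Python matrix product for lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def biaffine_arc_scores(X, X_cls, W, B):
    """Score every (dependent, head) pair: Y = X W X'^T + B,
    where X' prepends the [CLS]/root representation to X."""
    Xp = [X_cls] + X                               # X' in R^{(N+1) x H}
    XW = matmul(X, W)                              # N x H
    Y = matmul(XW, [list(r) for r in zip(*Xp)])    # N x (N+1)
    return [[y + b for y, b in zip(yr, br)] for yr, br in zip(Y, B)]

def predict_heads(Y):
    # Head index 0 denotes the root ([CLS]) position.
    return [max(range(len(row)), key=row.__getitem__) for row in Y]

# Toy example: N=2 tokens, H=2, zero bias (illustrative numbers only).
X = [[1.0, 0.0], [0.0, 1.0]]
X_cls = [0.5, 0.5]
W = [[0.0, 1.0], [1.0, 0.0]]
B = [[0.0] * 3 for _ in range(2)]
print(predict_heads(biaffine_arc_scores(X, X_cls, W, B)))  # -> [2, 1]
```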
} \subsubsection{Semantic Role Labeling} Semantic Role Labeling (SRL), a.k.a. \textit{shallow semantic parsing}, is defined as analyzing a sentence by means of predicates and the arguments attached to them. A wide range of semantic formalisms and annotation schemes exist; however, the main idea is to label the arguments according to their relation to the predicate. \begin{quote} $[$I$]$\textsubscript{A0: buyer} $[$bought$]$\textsubscript{buy.01: purchase} $[$a new headphone$]$\textsubscript{A1: thing bought} from $[$Amazon$]$\textsubscript{A2: seller} \end{quote} The example given above shows a sentence labeled with English Proposition Bank~\cite{martha2005proposition} semantic roles, where \textit{buy.01} denotes the first sense of the verb ``buy'', and \textit{A0, A1} and \textit{A2} are the numbered arguments defined by the predicate's semantic frame. For this study, we perform dependency-based SRL, which means that only the head word of the phrase (e.g., \textit{headphone} instead of \textit{a new headphone}) will be detected as an argument and labeled as \textit{A1}. To evaluate the SRL results, we use the official CoNLL-09 evaluation script on the official test split. The script calculates the macro-average F1 score over the semantic roles available in the data. \paragraph{The model} \secrev{}{Similar to the previous models, each token is segmented into subwords. For SRL, we only use characters as the subword unit, since they provided competitive results for many languages~\cite{SahinS18}. \begin{gather} \rho(w) = sub_0, sub_1, \ldots, sub_n \end{gather} Afterwards, following \citet{LingDBTFAML15}, a weighted composition of the hidden states $hs_f$ from the forward and $hs_b$ from the backward direction is calculated and used as the token embedding, as given in Eq.~\ref{eq:comp}.
\begin{gather} \vec{hs_f}, \vec{hs_b} = \text{bi-LSTM}(sub_0, sub_1, \ldots, sub_n) \\ \vec{w} = W_f \cdot \vec{hs_f} + W_b \cdot \vec{hs_b} + b \label{eq:comp} \end{gather} In order to mark the predicate of interest, we concatenate a predicate flag $pf_i$ to the $\vec{w}$ calculated as in Eq.~\ref{eq:comp}. It is simply defined as $1$ for the predicate of interest and $0$ for all other tokens. Next, the vectors $\vec{x_{i}}$ are passed on to another bi-LSTM layer, where $i$ denotes the index of the word. \begin{gather} \vec{x_{i}} = [\vec{w};pf_i] \\ \vec{h_{f}}, \vec{h_{b}} = \text{bi-LSTM}(\vec{x_{i}}) \end{gather} The probability distribution over semantic role labels is finally calculated via a softmax function given the final bidirectional hidden states $h_{f}$ and $h_{b}$, and the label with the highest probability is assigned to the input token. \begin{gather} \vec{p(A_i|S,P)} = \mathrm{softmax}(W_{l}\cdot[\vec{h_{f}};\vec{h_{b}}]+\vec{b_{l}}) \end{gather} We use the default model parameters reported in \citet{SahinS18}. } \begin{gather} \lambda y \ \lambda x \ add (x,y) \\ \lambda y \ \lambda x \ add (x,y) (shopping list) \\ \lambda x \ add (x,shoppinglist) (ingredient) \end{gather}
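The predicate-flag mechanism described above is straightforward to sketch: each token embedding is extended with a binary indicator marking the predicate of interest, and the same sentence is encoded once per predicate (toy vectors, illustrative only):

```python
def add_predicate_flags(token_embeddings, predicate_index):
    """Append a binary predicate flag pf_i to each token embedding:
    1 for the predicate of interest, 0 for all other tokens."""
    return [emb + [1.0 if i == predicate_index else 0.0]
            for i, emb in enumerate(token_embeddings)]

# One SRL instance per predicate: the sentence is labeled once
# for each predicate it contains.
sentence = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]   # toy token embeddings
for pred in (1,):                                  # predicate at position 1
    x = add_predicate_flags(sentence, pred)
    print(x)  # [[0.1, 0.2, 0.0], [0.3, 0.4, 1.0], [0.5, 0.6, 0.0]]
```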
\section{#1}\begin{frame}{#2}\vspace{-10pt}}{\end{frame}} \else \newenvironment{myframe}[2]{\section{#1}\begin{frame}{#2}\vspace{-10pt}}{\end{frame}} \fi \ifdefined\mybullet \renewcommand{\mybullet}[1]{\vspace{1mm}\\$\bullet$ {\bf #1}\vspace{1mm}\\} \else \newcommand{\mybullet}[1]{\vspace{1mm}\\$\bullet$ {\bf #1}\vspace{1mm}\\} \fi \ifdefined\mybulletEQ \renewcommand{\mybulletEQ}[1]{$\bullet$ {\bf #1}\vspace{1mm}\\} \else \newcommand{\mybulletEQ}[1]{$\bullet$ {\bf #1}\vspace{1mm}\\} \fi \ifdefined\fracd \renewcommand{\fracd}[2]{\frac{\displaystyle{#1}}{\displaystyle{#2}}} \else \newcommand{\fracd}[2]{\frac{\displaystyle{#1}}{\displaystyle{#2}}} \fi \ifdefined\recd \renewcommand{\recd}[1]{\frac{\displaystyle 1}{\displaystyle{#1}}} \else \newcommand{\recd}[1]{\frac{\displaystyle 1}{\displaystyle{#1}}} \fi \ifdefined\pdd \renewcommand{\pdd}[2]{\frac{\displaystyle{\partial{#1}}}{\displaystyle{\partial{#2}}}} \else \newcommand{\pdd}[2]{\frac{\displaystyle{\partial{#1}}}{\displaystyle{\partial{#2}}}} \fi \ifdefined\biggg \renewcommand{\biggg}[1]{\scalebox{1.2}{\Bigg{#1}}} \else \newcommand{\biggg}[1]{\scalebox{1.2}{\Bigg{#1}}} \fi \ifdefined\Biggg \renewcommand{\Biggg}[1]{\scalebox{1.4}{\Bigg{#1}}} \else \newcommand{\Biggg}[1]{\scalebox{1.4}{\Bigg{#1}}} \fi \ifdefined\eq \renewcommand{\eq}[1]{\begin{equation}{#1}\end{equation}} \else \newcommand{\eq}[1]{\begin{equation}{#1}\end{equation}} \fi \ifdefined\eql \renewcommand{\eql}[2]{\begin{equation}\label{e:#1}{#2}{\end{equation}} \else \newcommand{\eql}[2]{\begin{equation}\label{e:#1}{#2}\end{equation}} \fi \ifdefined\ali \renewcommand{\ali}[1]{\begin{align}{#1}\end{align}} \else \newcommand{\ali}[1]{\begin{align}{#1}\end{align}} \fi \ifdefined{\vspace{0.2 cm}\\} \renewcommand{{\vspace{0.2 cm}\\}}{{\vspace{0.2 cm}\\}} \else \newcommand{{\vspace{0.2 cm}\\}}{{\vspace{0.2 cm}\\}} \fi \ifdefined{\mathrm Tr} \renewcommand{{\mathrm Tr}}{{\mathrm Tr}} \else \newcommand{{\mathrm Tr}}{{\mathrm Tr}} \fi \ifdefined{\mathbb R} 
\renewcommand{{\mathbb R}}{{\mathbb R}} \else \newcommand{{\mathbb R}}{{\mathbb R}} \fi \ifdefined{\mathbb Z} \renewcommand{{\mathbb Z}}{{\mathbb Z}} \else \newcommand{{\mathbb Z}}{{\mathbb Z}} \fi \ifdefined{\mathbb N} \renewcommand{{\mathbb N}}{{\mathbb N}} \else \newcommand{{\mathbb N}}{{\mathbb N}} \fi \ifdefined{\mathbb C} \renewcommand{{\mathbb C}}{{\mathbb C}} \else \newcommand{{\mathbb C}}{{\mathbb C}} \fi \ifdefined\operatorname{Re} \renewcommand{\operatorname{Re}}{\operatorname{Re}} \else \newcommand{\operatorname{Re}}{\operatorname{Re}} \fi \ifdefined\operatorname{Im} \renewcommand{\operatorname{Im}}{\operatorname{Im}} \else \newcommand{\operatorname{Im}}{\operatorname{Im}} \fi \ifdefined\operatorname{ar\,ch} \renewcommand{\operatorname{ar\,ch}}{\operatorname{ar\,ch}} \else \newcommand{\operatorname{ar\,ch}}{\operatorname{ar\,ch}} \fi \ifdefined\operatorname{ar\,sh} \renewcommand{\operatorname{ar\,sh}}{\operatorname{ar\,sh}} \else \newcommand{\operatorname{ar\,sh}}{\operatorname{ar\,sh}} \fi \ifdefined\operatorname{ar\,th} \renewcommand{\operatorname{ar\,th}}{\operatorname{ar\,th}} \else \newcommand{\operatorname{ar\,th}}{\operatorname{ar\,th}} \fi \ifdefined\operatorname{ch} \renewcommand{\operatorname{ch}}{\operatorname{ch}} \else \newcommand{\operatorname{ch}}{\operatorname{ch}} \fi \ifdefined\operatorname{sh} \renewcommand{\operatorname{sh}}{\operatorname{sh}} \else \newcommand{\operatorname{sh}}{\operatorname{sh}} \fi \ifdefined\operatorname{th} \renewcommand{\operatorname{th}}{\operatorname{th}} \else \newcommand{\operatorname{th}}{\operatorname{th}} \fi \ifdefined\operatorname{Ln} \renewcommand{\operatorname{Ln}}{\operatorname{Ln}} \else \newcommand{\operatorname{Ln}}{\operatorname{Ln}} \fi \ifdefined\operatorname{tg} \renewcommand{\operatorname{tg}}{\operatorname{tg}} \else \newcommand{\operatorname{tg}}{\operatorname{tg}} \fi \ifdefined\operatorname{ctg} \renewcommand{\operatorname{ctg}}{\operatorname{ctg}} \else 
\newcommand{\operatorname{ctg}}{\operatorname{ctg}} \fi \ifdefined\int\limits \renewcommand{\int\limits}{\int\limits} \else \newcommand{\int\limits}{\int\limits} \fi \ifdefined\oint\limits \renewcommand{\oint\limits}{\oint\limits} \else \newcommand{\oint\limits}{\oint\limits} \fi \ifdefined\integrated \renewcommand{\integrated}[3]{\left\{{#1}\right\}\left.\vphantom{#1}\right|_{#2}^{#3}} \else \newcommand{\integrated}[3]{\left\{{#1}\right\}\left.\vphantom{#1}\right|_{#2}^{#3}} \fi \ifdefined\pd \renewcommand{\pd}[2]{\frac{\partial{#1}}{\partial{#2}}} \else \newcommand{\pd}[2]{\frac{\partial{#1}}{\partial{#2}}} \fi \ifdefined\rec \renewcommand{\rec}[1]{\frac{1}{#1}} \else \newcommand{\rec}[1]{\frac{1}{#1}} \fi \ifdefined\gvec \renewcommand{\gvec}[1]{\mbox{\boldmath${#1}$}} \else \newcommand{\gvec}[1]{\mbox{\boldmath${#1}$}} \fi \ifdefined\cvec \renewcommand{\cvec}[1]{\mbox{\boldmath${#1}$}} \else \newcommand{\cvec}[1]{\mbox{\boldmath${#1}$}} \fi \ifdefined\td \renewcommand{\td}[2]{\frac{d{#1}}{d{#2}}} \else \newcommand{\td}[2]{\frac{d{#1}}{d{#2}}} \fi \ifdefined\md \renewcommand{\md}[2]{\frac{\mathrm{d}{#1}}{\mathrm{d}{#2}}} \else \newcommand{\md}[2]{\frac{\mathrm{d}{#1}}{\mathrm{d}{#2}}} \fi \ifdefined\z \renewcommand{\z}[1]{\left({#1}\right)} \else \newcommand{\z}[1]{\left({#1}\right)} \fi \ifdefined\ae \renewcommand{\ae}[1]{\left|{#1}\right|} \else \newcommand{\ae}[1]{\left|{#1}\right|} \fi \ifdefined\sz \renewcommand{\sz}[1]{\left[{#1}\right]} \else \newcommand{\sz}[1]{\left[{#1}\right]} \fi \ifdefined\kz \renewcommand{\kz}[1]{\left\{{#1}\right\}} \else \newcommand{\kz}[1]{\left\{{#1}\right\}} \fi \ifdefined\m \renewcommand{\m}[1]{\mathrm{#1}} \else \newcommand{\m}[1]{\mathrm{#1}} \fi \ifdefined\c \renewcommand{\c}[1]{\mathcal{#1}} \else \newcommand{\c}[1]{\mathcal{#1}} \fi \ifdefined\v \renewcommand{\v}[1]{\mathbf{#1}} \else \newcommand{\v}[1]{\mathbf{#1}} \fi \ifdefined\Eq \renewcommand{\Eq}[1]{Eq.~(\ref{#1})} \else \newcommand{\Eq}[1]{Eq.~(\ref{#1})} \fi 
\ifdefined\Eqs \renewcommand{\Eqs}[2]{Eqs.~(\ref{#1}) and (\ref{#2})} \else \newcommand{\Eqs}[2]{Eqs.~(\ref{#1}) and (\ref{#2})} \fi \ifdefined\a \renewcommand{\a}[1]{\aref({#1})} \else \newcommand{\a}[1]{\aref({#1})} \fi \ifdefined\A \renewcommand{\A}[1]{\Aref({#1})} \else \newcommand{\A}[1]{\Aref({#1})} \fi \ifdefined\r \let\R\r \renewcommand{\r}[1]{(\ref{#1})} \else \newcommand{\r}[1]{(\ref{#1})} \fi \ifdefined\comm \renewcommand{\comm}[2]{\left[{#1},{#2}\right]} \else \newcommand{\comm}[2]{\left[{#1},{#2}\right]} \fi \ifdefined\qquad\Rightarrow\qquad \renewcommand{\qquad\Rightarrow\qquad}{\qquad\Rightarrow\qquad} \else \newcommand{\qquad\Rightarrow\qquad}{\qquad\Rightarrow\qquad} \fi \ifdefined\quad\Rightarrow\quad \renewcommand{\quad\Rightarrow\quad}{\quad\Rightarrow\quad} \else \newcommand{\quad\Rightarrow\quad}{\quad\Rightarrow\quad} \fi \ifdefined\quad\Rightarrow \renewcommand{\quad\Rightarrow}{\quad\Rightarrow} \else \newcommand{\quad\Rightarrow}{\quad\Rightarrow} \fi \ifdefined\Rightarrow\quad \renewcommand{\Rightarrow\quad}{\Rightarrow\quad} \else \newcommand{\Rightarrow\quad}{\Rightarrow\quad} \fi \ifdefined\quad\Leftrightarrow\quad \renewcommand{\quad\Leftrightarrow\quad}{\quad\Leftrightarrow\quad} \else \newcommand{\quad\Leftrightarrow\quad}{\quad\Leftrightarrow\quad} \fi \ifdefined\obs \renewcommand{\obs}[1]{\left\langle{#1}\right\rangle} \else \newcommand{\obs}[1]{\left\langle{#1}\right\rangle} \fi \ifdefined\ket \renewcommand{\ket}[1]{\left|{#1}\right\rangle} \else \newcommand{\ket}[1]{\left|{#1}\right\rangle} \fi \ifdefined\bra \renewcommand{\bra}[1]{\left\langle{#1}\right|} \else \newcommand{\bra}[1]{\left\langle{#1}\right|} \fi \ifdefined\braket \renewcommand{\braket}[2]{\left<#1\vphantom{#2}\right|\left.#2\vphantom{#1}\right>} \else \newcommand{\braket}[2]{\left<#1\vphantom{#2}\right|\left.#2\vphantom{#1}\right>} \fi \ifdefined\ketbra \renewcommand{\ketbra}[2]{\left|#1\vphantom{#2}\right>\left<#2\vphantom{#1}\right|} \else 
\newcommand{\ketbra}[2]{\left|#1\vphantom{#2}\right>\left<#2\vphantom{#1}\right|} \fi \ifdefined\matrixel \renewcommand{\matrixel}[3]{\left<#1\vphantom{#2#3}\right|#2\left|#3\vphantom{#1#2}\right>} \else \newcommand{\matrixel}[3]{\left<#1\vphantom{#2#3}\right|#2\left|#3\vphantom{#1#2}\right>} \fi \ifdefined\contravcov \renewcommand{\contravcov}[3]{{{#1}^{#2}_{}}_{#3}} \else \newcommand{\contravcov}[3]{{{#1}^{#2}_{}}_{#3}} \fi \ifdefined\covcontrav \renewcommand{\covcontrav}[3]{{{#1}_{#2}^{}}^{#3}} \else \newcommand{\covcontrav}[3]{{{#1}_{#2}^{}}^{#3}} \fi \ifdefined{\hat{a}^{\vphantom\dagger}} \renewcommand{{\hat{a}^{\vphantom\dagger}}}{{\hat{a}^{\vphantom+}}} \else \newcommand{{\hat{a}^{\vphantom\dagger}}}{{\hat{a}^{\vphantom+}}} \fi \ifdefined{\hat{a}^\dagger} \renewcommand{{\hat{a}^\dagger}}{{\hat{a}^+}} \else \newcommand{{\hat{a}^\dagger}}{{\hat{a}^+}} \fi \ifdefined{\hat{b}^{\vphantom\dagger}} \renewcommand{{\hat{b}^{\vphantom\dagger}}}{{\hat{b}^{\vphantom+}}} \else \newcommand{{\hat{b}^{\vphantom\dagger}}}{{\hat{b}^{\vphantom+}}} \fi \ifdefined{\hat{b}^\dagger} \renewcommand{{\hat{b}^\dagger}}{{\hat{b}^+}} \else \newcommand{{\hat{b}^\dagger}}{{\hat{b}^+}} \fi \ifdefined\mathrm{arc}\,\mathrm{tg} \renewcommand{\mathrm{arc}\,\mathrm{tg}}{\mathrm{arc}\,\mathrm{tg}} \else \newcommand{\mathrm{arc}\,\mathrm{tg}}{\mathrm{arc}\,\mathrm{tg}} \fi \section{Introduction} The observation of net polarization of $\Lambda$ baryons in high energy heavy-ion collisions by the STAR experiment~\cite{STAR:2017ckg} falls in line with expectations predicting such polarization if local thermal equilibrium is assumed also for spin degrees of freedom~\cite{Becattini:2013fla}. The main interest in this topic is that by measuring such polarization, insight is gained into the details of the expansion dynamics of the strongly interacting Quark-Gluon Plasma (sQGP).
As an example, non-vanishing polarization may be a consequence of the rotating expansion of the sQGP, and the time evolution of this rotation is connected to the Equation of State (EoS) of the sQGP. Numerical model studies of polarization~\cite{Csernai:2014nva,Xie:2016fjj,Karpenko:2016jyx,Xie:2017upb} indeed predict non-zero polarization and connect this quantity to properties (e.g. vorticity) of the flow. However, numerical simulations by their nature do not always give a clear picture of the dependence of final state observables on assumptions made about the initial state. In the work presented here~\cite{Boldizsar:2018akg} we set out to give analytic formulas for the polarization of massive spin 1/2 particles, based on the simple formula written up in Ref.~\cite{Becattini:2013fla} under the assumption of local thermal equilibrium. This type of investigation can lead to a straightforward connection between properties of the flow and experimentally observable quantities (in our case, polarization). We utilize known exact analytic solutions of perfect fluid hydrodynamics that capture some aspects of the real geometry of a heavy-ion collision. In the following we briefly present the calculations leading to the results about polarization and illustrate the main findings. Additional details are to be found in Ref.~\cite{Boldizsar:2018akg}. \section{Basic equations and assumptions} In the phenomenological modelling of final state observables, the source function $f(x,p)$ that describes the distribution of particles produced in the hadronization process can be calculated from a thermal ensemble that corresponds to the final state of the hydrodynamical evolution of the sQGP. For spin 1/2 particles, $f(x,p)$ is a locally thermal Fermi--Dirac distribution.
In Ref.~\cite{Becattini:2013fla} the following formula is established for the spin vector of locally thermally equilibrated fermions: \begin{align} \label{e:polarization} \langle S(p) \rangle^\mu &= \frac{\int \m d^3 \Sigma_\nu p^\nu f(x,p) \langle S(x,p) \rangle^\mu} {\int \m d^3\Sigma_\nu p^\nu f(x,p)},& \langle S(x,p) \rangle^\mu &= \frac{1}{8m}\big(1 {-} f(x,p)\big) \varepsilon^{\mu\nu\rho\sigma} p_\sigma \partial_\nu \beta_\rho. \end{align} Here $\langle S(x,p) \rangle^\mu$ is the space-time and momentum dependent spin vector of the produced particles, which is averaged over the freeze-out with the $f(x,p)$ distribution to get the observable (momentum dependent) polarization $\langle S(p) \rangle^\mu$. Other notations are: $p^\mu{\equiv}(E,\v p)$ and $m$ are the momentum and the mass of the particle (with $E^2 {=} m^2{+}\v p^2$ and $c{=}1$). $T(x)$, $u^\mu(x)$, and $\mu(x)$ are the temperature, four-velocity, and chemical potential fields of the fluid, respectively. We also introduce, as is customary, the inverse temperature field $\beta^\mu \equiv u^\mu /T$. Numerical calculations evaluate these formulas while solving the equations of hydrodynamics at the same time. On the other hand, we can take known exact solutions and directly evaluate the formula for $\langle S(p) \rangle^\mu$ given above. As a proof of concept, we first investigate the case of the spherically symmetric Hubble flow described in Ref.~\cite{Csorgo:2003ry}. We then calculate $\obs{S(p)}^\mu$ in an exact accelerating and rotating relativistic solution~\cite{Nagy:2009eq,Hatta:2014gqa}. As an outlook we take a glance at the Buda--Lund model (see e.g. Ref.~\cite{Csanad:2003qa}): we write up a rotating generalization and specify some formulas for the polarization in this model case. We evaluate the polarization from Eq.~\eqref{e:polarization} for a given $\beta^\mu$ field, which is specified by a particular hydrodynamical solution.
In the expression for $\langle S(x,p)\rangle^\mu$ in Eq.~\eqref{e:polarization} we made the $f\ll 1$ assumption (corresponding to the Maxwell--Boltzmann limiting case, which is usually assumed for calculations of final state observables). Also, in our calculations we used the saddle-point integration method; with this approximation the space-time averaged observable polarization vector becomes simply \begin{align} \label{e:polsimple} \langle S(x,p) \rangle^\mu = \rec{8m}\varepsilon^{\mu\nu\rho\sigma} p_\sigma \partial_\nu \beta_\rho \qquad\Rightarrow\qquad \langle S(p) \rangle^\mu \approx \rec{8m}\varepsilon^{\mu\nu\rho\sigma} p_\sigma \partial_\nu \beta_\rho \Big|_{\v r=\v R_0}, \end{align} where $\v R_0$ is the position of the saddle point (the point of maximum emittivity), which depends on $\v p$. So to get analytic formulas for the polarization for a given $\beta^\mu(x)$ field and freeze-out condition, one has to express $\v R_0$ as a function of the particle momentum $\v p$ and evaluate $\obs{S(x,p)}^\mu$ at this position. As usual in heavy-ion phenomenology, we also neglect the $\mu/T$ (fugacity) factor in the expression of the source function: \begin{align} \label{e:MB1} f(x^\mu ,p^\mu) = \frac{g}{(2\pi\hbar)^d}\exp\big({-}p_\mu \beta^\mu\big),\qquad\textnormal{where again:}\quad \beta^\mu = \frac{u^\mu}{T}. \end{align} Below we consider two exact solutions of relativistic perfect fluid hydrodynamics and outline the calculation of the polarization of produced spin 1/2 baryons in these models, using the methods and approximations described above. For a more detailed discussion see Ref.~\cite{Boldizsar:2018akg}; here we summarize the main steps.
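Before specializing to particular solutions, we note that the contraction in Eq.~\eqref{e:polsimple} is straightforward to evaluate numerically for an arbitrary $\beta^\mu(x)$ field, which is useful for cross-checking analytic results. The following sketch is our own illustration (not part of the original calculation); the metric signature $(+,-,-,-)$, the convention $\varepsilon^{0123}=+1$, and the toy field configurations are assumptions of this snippet. It shows that a symmetric $\partial_\nu\beta_\rho$ gives a vanishing spin vector, while a purely rotational field does not:

```python
import itertools
import numpy as np

# Conventions assumed here: metric (+,-,-,-), Levi-Civita with eps^{0123} = +1.
G = np.diag([1.0, -1.0, -1.0, -1.0])

def levi_civita():
    eps = np.zeros((4, 4, 4, 4))
    for perm in itertools.permutations(range(4)):
        inversions = sum(perm[i] > perm[j]
                         for i in range(4) for j in range(i + 1, 4))
        eps[perm] = (-1) ** inversions
    return eps

EPS = levi_civita()

def spin_vector(beta_lower, x, p_upper, m, h=1e-6):
    """<S(x,p)>^mu = eps^{mu nu rho sigma} p_sigma d_nu beta_rho / (8m)."""
    dbeta = np.zeros((4, 4))          # dbeta[nu, rho] = d beta_rho / d x^nu
    for nu in range(4):
        dx = np.zeros(4)
        dx[nu] = h
        dbeta[nu] = (beta_lower(x + dx) - beta_lower(x - dx)) / (2 * h)
    p_lower = G @ p_upper
    return np.einsum("mnrs,s,nr->m", EPS, p_lower, dbeta) / (8 * m)

tau0, T0 = 7.0, 0.17                  # illustrative freeze-out values

# Toy "Hubble-like" field beta_rho ~ x_rho: d_nu beta_rho is symmetric, so the
# contraction with the antisymmetric epsilon tensor vanishes identically.
beta_hubble = lambda x: (G @ x) / (tau0 * T0)

# Purely rotational field beta_rho = F_{rho nu} x^nu (rotation about the z axis).
Omega = 0.1
F = np.zeros((4, 4))
F[1, 2], F[2, 1] = Omega / (2 * T0), -Omega / (2 * T0)
beta_rot = lambda x: F @ x

m = 1.115                             # Lambda mass in GeV, c = 1
p = np.array([np.hypot(m, 0.5), 0.5, 0.0, 0.0])
x = np.array([8.0, 1.0, 0.5, 0.3])

S_sym = spin_vector(beta_hubble, x, p, m)   # vanishes component by component
S_rot = spin_vector(beta_rot, x, p, m)      # nonzero, along the rotation axis
```

Replacing the toy fields by the $\beta^\mu$ of an actual solution, evaluated at the saddle point $\v R_0$, should reproduce the analytic formulas derived below, up to conventions and finite-difference accuracy.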
\section{Polarization in exact analytic hydrodynamical solutions} One of the solutions considered is a simple spherically symmetric special case of the (more general, ellipsoidal) Hubble-type solution (presented in its fullest form in Ref.~\cite{Csorgo:2003ry}), which is characterized by the following velocity ($u^\mu$), temperature ($T$) and density ($n$) profiles: \begin{align} \label{e:hubble} u^\mu & = \frac{x^\mu}{\tau},& n &= n_0 \left(\frac{\tau_0}{\tau}\right)^d,& T &= T_0 \left(\frac{\tau_0}{\tau}\right)^{d/\kappa}. \end{align} Here $d=3$ is the dimensionality of space, and as usual, we use $\tau = \sqrt{t^2-\v r^2}$, and $\kappa = 1/c_s^2$ is the inverse square of the speed of sound, assumed to be constant (this constant appears in the Equation of State as $\kappa = \varepsilon/p$). The freeze-out is assumed to happen on a constant $\tau=\tau_0$ hypersurface. The Cooper--Frye prefactor for this assumption (at a given point on the hypersurface whose spatial coordinate is $\v r$) turns out to be \begin{align} \label{e:mb} t(\v r)\equiv\sqrt{\tau_0^2{+}\v r^2}\quad\Rightarrow\quad \m d^3\Sigma_\mu = \rec{t(\v r)}\begin{pmatrix} t(\v r) \\ \v r \end{pmatrix}\m d^3\v r. \end{align} In the case of the spherically symmetric Hubble flow, the temperature takes a constant value on the freeze-out hypersurface (denoted here by $T_0$), and the position of the point of maximum emittivity $\v R_0$ as well as the $\partial_\nu\beta_\rho$ derivative (a necessary ingredient in Eq.~\eqref{e:polsimple} for the calculation of the polarization) can be calculated as \begin{align} p^\mu\beta_\mu = \frac{ E t(\v r){-}\v p{\v r}}{T_0} \quad\Rightarrow\quad \v R_0=\frac{\tau_0}{m}\v p, \qquad\quad \partial_\nu \beta_\rho = \frac{g_{\nu\rho}}{\sqrt{\tau_0^2 {+} r^2}T_0}+\frac{r_\nu r_\rho}{(\tau_0^2 {+} r^2)^{3/2}T_0}.
\end{align} With some simplifications, one can verify that both the time and the spatial components of the resulting polarization four-vector are zero: \begin{align} &\langle S(p) \rangle^0 =\frac{1}{8m} \varepsilon^{0ikl} p_l\partial_i \beta_k\bigg|_{\v r =\v R_0} = \rec{8m} \varepsilon_{ikl}p_l \bigg(\frac{g_{ik}}{\sqrt{\tau_0^2 {+} r^2}T_0}+\frac{r_{i}r_{k}}{(\tau_0^2 {+} r^2)^{3/2}T_0}\bigg)\bigg|_{\mathbf r {\,=\,} \mathbf R_0}=\ldots=0,\\ &\langle S(p)\rangle^i = \rec{8m} \bigg(\varepsilon_{ikl}p_l\partial_k\beta_0-\varepsilon_{ikl}p_l\partial_0\beta_k-\varepsilon_{ikl}p_0\partial_k\beta_l\bigg)\bigg|_{\v r=\v R_0}=\ldots=0. \end{align} In conclusion, the polarization four-vector in the spherically symmetric self-similar flow is \begin{align} \langle S(p)\rangle^\mu = \begin{pmatrix} 0\\ \mathbf 0\\ \end{pmatrix}, \end{align} which is consistent with our expectations, since this solution is fully spherically symmetric; indeed, $\partial_\nu\beta_\rho$ is symmetric in $\nu\leftrightarrow\rho$ here, while $\varepsilon^{\mu\nu\rho\sigma}$ is totally antisymmetric, so the contraction vanishes identically. A more realistic (and for our goals, more interesting) solution to be studied is a rotating and accelerating expanding solution, first written up in Ref.~\cite{Nagy:2009eq}: \begin{align} \label{e:rot} \v v &= \frac{2t\v r {+} \tau_0^2\gvec\Omega{\times}\v r}{t^2{+}r^2{+}\rho_0^2},& T&= \frac{T_0\tau_0^2}{\sqrt{(t^2{-}r^2{+}\rho_0^2)^2{+}4\rho_0^2 r^2{-}\tau_0^4(\gvec\Omega{\times}\v r)^2}},& n &= n_0\z{\frac{T}{T_0}}^3. \end{align} Here the $\rho_0$ parameter characterizes the initial spatial extent of the system. The $T_0$ and $\tau_0$ (the freeze-out values) are included for the sake of consistency of physical units, and $\gvec\Omega$ is an angular velocity three-vector indicating the axis and magnitude of rotation.
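Before turning to the polarization itself, it is instructive to note why the rotating field in Eq.~\eqref{e:rot} admits closed-form results at all: $\beta^\mu=u^\mu/T$ turns out to be polynomial (at most quadratic) in the coordinates. A short symbolic check of the underlying algebraic identity follows; this is our own illustration, the rotation axis is taken along $z$, and overall normalization constants are not checked here:

```python
import sympy as sp

t, x, y, z, rho0, tau0, Omega = sp.symbols("t x y z rho_0 tau_0 Omega", real=True)
r2 = x**2 + y**2 + z**2
D = t**2 + r2 + rho0**2                      # denominator of v in Eq. (7)

# squared numerator vector of v: 2t*r + tau0^2 (Omega x r), Omega along z
num2 = ((2*t*x - tau0**2*Omega*y)**2
        + (2*t*y + tau0**2*Omega*x)**2
        + (2*t*z)**2)

# argument of the square root in the temperature field of Eq. (7)
sqrt_arg = (t**2 - r2 + rho0**2)**2 + 4*rho0**2*r2 - tau0**4*Omega**2*(x**2 + y**2)

# D^2 - |num|^2 == sqrt_arg, so gamma/T = D*sqrt(sqrt_arg)/(sqrt(D^2-|num|^2)*T0*tau0^2)
# collapses to a multiple of D: beta^0 (and hence beta^i = beta^0 v^i) is a
# polynomial in the coordinates, which is what makes the ansatz used in the
# next step work.
identity = sp.expand(D**2 - num2 - sqrt_arg)
print(identity)   # 0
```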
We write up the $\beta^\mu$ field as follows: \begin{align} \label{e:solclass} &\beta^\mu = \frac{u^\mu}{T} = a^\mu {+} F^{\mu\nu}x_\nu {+} (x^\nu b_\nu) x^\mu {-} \frac{x^\nu x_\nu}{2}b^\mu, \\\textnormal{with}\quad& a^\mu {\,=\,} \frac{\rho_0^2}{2T_0\tau_0^2}\begin{pmatrix}1\\\mathbf 0\end{pmatrix},\qquad b^\mu {\,=\,} \rec{T_0\tau_0^2}\begin{pmatrix}1\\\mathbf 0\end{pmatrix}, \qquad F_{0k} {\,=\,} F_{k0} {\,=\,} F_{00} {\,=\,} 0, \qquad F_{kl} {\,=\,} \varepsilon_{klm} \frac{\Omega_m}{2T_0}. \end{align} We need to find the saddle point $\v R_0$; the result is: \begin{align} &p_\mu\beta^\mu = \rec{T_0\tau_0^2}\Big(E(2r^2{+}\tau_0^2{+}\rho_0^2)-2\sqrt{\tau_0^2{+}r^2}\v r\v p-\tau_0^2\v r(\v p{\times}\gvec\Omega)\Big),\qquad \nabla\Big\{p_\mu\beta^\mu\Big\}\Big|_{\v r = \v R_0} \stackrel{!}{\,=\,}0\quad\Rightarrow\nonumber\\ &\Rightarrow\quad \v R_0 = \frac{\tau_0}{2p}\sqrt{\frac{E{-}m}{2m}}\sqrt{\tau_0^2(\hat{\v p}{\times}\gvec\Omega)^2(E{-}m)^2+4p^2}\cdot\hat{\v p} + \tau_0^2\frac{E{-}m}{2p}\cdot\hat{\v p}{\times}\gvec\Omega,\qquad\textnormal{with}\quad \hat{\v p}:=\frac{\v p}{|\v p|}. \label{e:R0eq:rotaccel} \end{align} The $\beta_\mu$ field from Eq.~\eqref{e:solclass} is thus used for the calculation of the polarization following Eq.~\eqref{e:polsimple} as \begin{align} \partial_\nu \beta_\rho = F_{\rho\nu} {+} x^\alpha b_\alpha g_{\nu\rho} {+} x_\rho b_\nu {-} x_\nu b_\rho \quad\quad\Rightarrow\quad\quad \langle S(p)\rangle^\mu = \rec{8m}\varepsilon^{\mu\nu\rho\sigma}p_\sigma \Big(F_{\rho\nu} {+} x_\rho b_\nu {-} x_\nu b_\rho \Big)\Big|_{\mathbf r {\,=\,} \mathbf R_0}. 
\end{align} Evaluating this by substituting the expressions of $F_{\mu\nu}$ and $b_\mu$, and carefully collecting the time-like and space-like components, we finally get the following concise result for the polarization four-vector in the case of the rotating and accelerating solution: \begin{align} \label{e:forgopol} \langle S(p)\rangle^\mu =\frac{1}{8mT_0} \begin{pmatrix} \mathbf p\gvec\Omega \\ m\gvec\Omega +\frac{E{-}m}{p^2}(\gvec\Omega\mathbf p)\mathbf p \end{pmatrix}. \end{align} For $\gvec\Omega = 0$ there is no rotation, and indeed we get $\langle S(p)\rangle^\mu{=}0$ in this case: in this model, the polarization is very transparently connected to the presence of rotation. Transforming the polarization vector into the rest frame (``r.f.'') of the particle, the result is \begin{align}\label{e:polSrf} \langle S(p) \rangle^\mu_{\textnormal{r.f.}} = \begin{pmatrix} 0 \\ \mathbf S_{\textnormal{r.f.}} \end{pmatrix}, \quad\textnormal{where}\quad \mathbf S_\textnormal{r.f.} = \rec{8T_0}\gvec\Omega. \end{align} It thus turns out that in this case the polarization in the rest frame of the particle is independent of momentum. The helicity of the produced particles is (with the $\mathbf S$ polarization vector taken in the laboratory frame) \begin{align} \label{e:helicity} H := \hat{\v p}\v S = \frac{E}{8mT_0}\hat{\v p}\gvec\Omega. \end{align} Fig.~\ref{f:pol} illustrates these results (in the way that has become customary for numerical simulations): we plot specific components of the polarization vector in the laboratory frame as a function of momentum components, in the $p_z=0$ transverse plane. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{S_all_2.png} \caption{Components of the polarization vector for the rotating and accelerating solution according to Eq.~\eqref{e:forgopol}.
The mass $m$ is taken as $m {=} m_\Lambda{=}1115$~MeV$/c^2$, and a realistic $|\gvec\Omega| = 0.1\,c/\m{fm}$ value was chosen for this illustration.} \label{f:pol} \end{figure} \begin{center} * * * \end{center} Having seen that it is indeed possible to obtain simple analytic formulas for the polarization in simple exact hydrodynamical solutions (although not fully realistic ones), we turn our attention to a logical next step of this line of investigation. We consider the Buda--Lund model~\cite{Csorgo:1995bi, Csanad:2003qa}: a more involved hydrodynamical final state parametrization that is successful in describing the usual one-particle and two-particle observables (soft spectra and correlations). We present some first formulas on how to calculate the polarization in this model; this investigation can also unveil the dependence of the polarization on factors other than rotation (such as the temperature gradient and/or the acceleration of the expansion). We write up a simple rotating generalization of the ellipsoidal Buda--Lund model as \begin{align} u^\mu = \begin{pmatrix} \sqrt{1+\v u^2} \\ \v u \end{pmatrix},\qquad\textnormal{where}\quad \v u = \begin{pmatrix} \frac{\dot X(t)}{X(t)} r_x+ \omega(t)\frac{R(t)}{Z(t)}r_z\\ \frac{\dot Y(t)}{Y(t)} r_y\\ \frac{\dot Z(t)}{Z(t)} r_z -\omega(t)\frac{R(t)}{X(t)} r_x \end{pmatrix},\qquad \textnormal{and } \quad R(t)\equiv \frac{X(t){+}Z(t)}{2}. \end{align} Here $\omega$ is the newly introduced angular velocity, and the $X$, $Y$, $Z$ and $\dot X$, $\dot Y$, $\dot Z$ parameters correspond to the principal axes and expansion velocity components of the ellipsoid-like source. The temperature field is \begin{align} \rec{T} = \frac{1{+}a^2A}{T_0}\z{1+\alpha^2\frac{(\tau{-}\tau_0)^2}{2(\Delta\tau)^2}},\qquad\textnormal{with}\quad a^2\equiv \frac{T_0{-}T_s}{T_s}, \quad \alpha^2\equiv\frac{T_0{-}T_e}{T_e},\quad A = \frac{r_x^2}{2X^2}{+}\frac{r_y^2}{2Y^2}{+}\frac{r_z^2}{2Z^2}.
\end{align} The $\tau_0$ proper-time value corresponds to the freeze-out hypersurface (with ``width'' $\Delta\tau$). The temperature values $T_0$, $T_s$ and $T_e$ are the values taken at the center of the ellipsoid, on the ``surface'' of it, and the value after freeze-out, respectively. In this way, the $a^2$ parameter controls the temperature gradient. The $A$ function is called the scaling variable; its constant values correspond to coordinate-space ellipsoids. The polarization can be calculated according to Eq.~\eqref{e:polsimple} as \begin{align} \obs{S(\v p)}^0 &=-\rec{8m}\varepsilon_{klm}\v p_m\partial_k\beta_l = -\rec{8m}\v p\z{\nabla\times\gvec\beta},\\ \obs{S(\v p)}^k &=\rec{8m}\Big((\partial_0\gvec\beta)\times\v p\Big)_k-\rec{8m}(\nabla\beta_0\times\v p)_k-\frac{p_0}{8m}(\nabla\times\gvec\beta)_k. \end{align} With a Lorentz boost we can express the polarization in the local rest frame of the produced particles: \begin{align} \v S_{\textnormal{r.f.}}&=\rec{8m}\kz{(\partial_0\gvec\beta-\nabla\beta_0)\times\v p-E(\nabla\times\gvec\beta)+\frac{E{-}m}{p^2}(\v p(\nabla\times\gvec\beta))\v p}. \end{align} In order to find the saddle point, one is led to a nonlinear system of algebraic equations; these can be solved either numerically or by successive approximation, and the resulting expression is then substituted into the above formulas. Some very preliminary results from this investigation are shown in Fig.~\ref{f:blpol}. We plot the $S_z$ component with respect to the momentum components (again for $p_z = 0$) for a given angular velocity $\omega= 0.1\,c/\m{fm}$, for three different $a^2$ values. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{S_z_zeta1.png} \caption{$S_z$ component of the polarization vector from the rotating Buda--Lund model: with $a^2 = 0$ (left panel), $a^2 = 0.5$ (middle panel) and $a^2 = 1$ (right panel), meaning no temperature gradient, a moderate one, and a large temperature gradient, respectively.
Other parameter values taken here are: $m = m_\Lambda$, $\omega = 0.1\,c/\m{fm}$, $X=6$ fm, $Y=8$ fm, $Z=10$ fm, $\dot X = 0.3$, $\dot Y = 0.4$ and $\dot Z=0.2$.} \label{f:blpol} \end{figure} \section{Summary} We presented new analytic formulas for the polarization of baryons produced in high energy heavy-ion collisions, utilizing several analytic hydrodynamical models. We considered a spherically symmetric self-similar flow (in which case the polarization is indeed zero, as expected), and secondly, a relativistic expanding and rotating hydrodynamical solution. In this latter case one gets simple formulas for the polarization that show a straightforward connection of this quantity to the vorticity of the flow; the results and assumptions are, however, too simple to be fully realistic. Finally, we presented some preliminary investigations into the usability of the Buda--Lund model for calculating the polarization; from this much more complex model, one can expect a realistic description of the experimentally observed polarization as well. We can thus reasonably hope that these studies have the potential to improve the understanding of polarization measurements and their phenomenological implications for the strongly coupled Quark Gluon Plasma produced in heavy-ion collisions. \section*{Acknowledgments} The author is thankful to M\'arton~I.~Nagy and M\'at\'e~Csan\'ad for the numerous discussions and help given during the work presented here. The author is grateful to the organizers of the 19th Workshop on Particle Correlations and Femtoscopy (WPCF 2019) conference. This work was partially supported by the Hungarian NKFIH grants No. FK-123842 and FK-123959.
\section{Effective medium theories} Effective medium theories (EMT) allow one to determine, by means of algebraic formulas, the effective electric permittivity $\varepsilon_e$ of a composite medium as a function of the constituent permittivities and shapes, as well as of the fractional volumes characterizing the mixture [36-38]. One of the most important and successful EMTs is the Bruggeman Effective Medium Theory (BEMT), which is the simplest analytical model that predicts a metal-insulator transition at a critical volume fraction $f_{c}$. BEMT treats the dielectric host medium and the metallic inclusions symmetrically, and it is based on the following assumptions: $(i)$ the grains are assumed to be randomly oriented spheroidal particles, and $(ii)$ they are embedded in a homogeneous effective medium that is determined self-consistently. If a quasi-static electromagnetic field ${\bf E}_0$ impinges on such an inhomogeneous medium, the electric field ${\bf E}_i^{in}$ inside the metallic inclusions (permittivity $\varepsilon_i$), and the field ${\bf E}_{hm}^{in}$ inside the dielectric grains (permittivity $\varepsilon_{hm}$) read [37] \begin{eqnarray} {\bf E}_i^{in} &=& \left[\dfrac{1}{3} \dfrac{\varepsilon_e}{\varepsilon_e + L(\varepsilon_i - \varepsilon_e)} + \dfrac{2}{3} \dfrac{2\varepsilon_e}{2\varepsilon_e + (1-L)(\varepsilon_i - \varepsilon_e)}\right]{\bf E}_0\, ,\\ {\bf E}_{hm}^{in} &=& \left[\dfrac{1}{3} \dfrac{\varepsilon_e}{\varepsilon_e + L(\varepsilon_{hm} - \varepsilon_e)} + \dfrac{2}{3} \dfrac{2\varepsilon_e}{2\varepsilon_e + (1-L)(\varepsilon_{hm} - \varepsilon_e)}\right]{\bf E}_0\, , \end{eqnarray} where $0\leq L \leq 1$ is the depolarization factor of the spheroidal inclusions.
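For spherical inclusions, $L=1/3$, both bracketed field factors above reduce to $3\varepsilon_e/(2\varepsilon_e+\varepsilon_{i,hm})$, and imposing $\langle {\bf D}\rangle = \varepsilon_e \langle {\bf E}\rangle$ on the mixture (as done below) yields the classical symmetric Bruggeman condition, $f(\varepsilon_i-\varepsilon_e)/(\varepsilon_i+2\varepsilon_e)+(1-f)(\varepsilon_{hm}-\varepsilon_e)/(\varepsilon_{hm}+2\varepsilon_e)=0$, a quadratic equation in $\varepsilon_e$. A minimal numerical sketch (our illustration, not from the paper) that solves it and keeps the physical root with $\textrm{Im}(\varepsilon_e)\geq 0$:

```python
import numpy as np

def bruggeman_sphere(eps_i, eps_hm, f):
    """Effective permittivity of a two-phase mixture of spherical grains (L = 1/3).

    Solves f*(eps_i - eps_e)/(eps_i + 2*eps_e)
         + (1-f)*(eps_hm - eps_e)/(eps_hm + 2*eps_e) = 0,
    a quadratic in eps_e, and returns the physical root with Im(eps_e) >= 0.
    """
    # Clearing denominators gives 2*eps_e**2 - b*eps_e - eps_i*eps_hm = 0:
    b = (3*f - 1)*eps_i + (2 - 3*f)*eps_hm
    roots = np.roots([2.0, -b, -eps_i*eps_hm])
    # Prefer the root with non-negative absorption (small tolerance for rounding).
    return max(roots, key=lambda r: (r.imag >= -1e-9, r.real))
```

With a Drude metal for $\varepsilon_i$ in the $\omega\to 0$ limit, sweeping $f$ across $1/3$ should reproduce the insulator-to-metal crossover expected at $f_c(1/3)=1/3$.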
For an arbitrary ellipsoidal particle characterized by the semi-axes $a_x\, , \ a_y\, ,\ a_z$, the depolarization factor $L_i$ in the $i$-direction is [17] \begin{eqnarray} L_i = \dfrac{a_xa_ya_z}{2} \int_0^{\infty} \dfrac{ds}{(s+a_i^2)\sqrt{(s+a_x^2)(s+a_y^2)(s+a_z^2)}}\, , \end{eqnarray} where the relation $L_x+L_y+L_z=1$ holds. In this work we consider spheroidal inclusions such that $L_z = L$ and $L_x = L_y = (1-L)/2$. The effective permittivity $\varepsilon_{e}$ is then defined by \begin{eqnarray} \langle{\bf D} \rangle = \varepsilon_e \langle {\bf E}\rangle \rightarrow f\varepsilon_i {\bf E}_i^{in} + (1-f) \varepsilon_{hm}{\bf E}_{hm}^{in} = \varepsilon_e f {\bf E}_i^{in} + \varepsilon_{e} (1-f) {\bf E}_{hm}^{in} \, , \label{epsilonefetivo} \end{eqnarray} where $0\leq f \leq 1$ is the volume filling factor of the inclusions. Equation (\ref{epsilonefetivo}) implies that $\varepsilon_{e}$ satisfies exactly Eq. (1) given in the paper, which is valid provided the typical size of the grains is much smaller than the incident wavelength. It is also worth mentioning that the self-consistent equation describing $\varepsilon_e$ [Eq. (1) in the paper] has several roots, but only the one with $\textrm{Im}(\varepsilon_e) \geq 0$ is physical. Finally, the percolation threshold $f_{c}$ is calculated by taking the limit $\omega \rightarrow 0$ and evaluating the effective conductivity and permittivity of the composite medium. In the case of BEMT, Eq.~(1) in the paper leads to the following expression for the percolation threshold [36, 37, 38] \begin{equation} f_{c}(L) = \frac{L(5-3L)}{(1+ 9L)}. \label{bruggeman} \end{equation} In addition to BEMT, there are many different versions of effective medium theories; an important example is the one developed by Lagarkov and Sarychev [37]. It is known to predict more accurate results for $f_{c}$ than the BEMT in the case of very elongated (needle-like, small $L$) inclusions.
Within this approach $\varepsilon_{e}$ is obtained following a self-consistent prescription analogous to that of BEMT, but under the assumption that the dielectric and metallic grains should not be treated symmetrically. Rather, the host medium is supposed to have spherical symmetry, whereas the metallic inclusions are considered to be spheroidal particles; this approach leads to [37]: \begin{eqnarray} 9(1-f)\left\{\dfrac{\varepsilon_{hm} - \varepsilon_{e}}{2\varepsilon_{e} + \varepsilon_{hm}}\right\} + f\left\{\dfrac{\varepsilon_{i} - \varepsilon_{e}}{\varepsilon_{e} + L(\varepsilon_{i}-\varepsilon_{e})} + \dfrac{4(\varepsilon_{i} - \varepsilon_{e})}{2\varepsilon_{e} + (1-L)(\varepsilon_{i}-\varepsilon_{e})}\right\}=0\, . \end{eqnarray} The percolation threshold $f_{c}$ predicted by the Lagarkov-Sarychev effective medium theory is [37] \begin{eqnarray} f_c(L) = \dfrac{9L(1-L)}{2+15L-9L^2}\, . \end{eqnarray} \section{Electric and Magnetic Polarizabilities of Small Particles} The sphere's electric, $\alpha_E$, and magnetic, $\alpha_H$, polarizabilities are given by [16, 17, 43] \begin{eqnarray} \dfrac{\alpha_E(\omega, f, L)}{(4\pi a^3/3)} &=& \dfrac{3}{2} \dfrac{(2y^2+x^2)[\sin{y}-y\cos{y}]-x^2y^2\sin{y}} {(y^2-x^2)[\sin{y}-y\cos{y}]-x^2y^2\sin{y}}\, , \cr \cr \dfrac{\alpha_H(\omega, f, L)}{(4\pi a^3/3)} &=& \dfrac{1}{4} \left[\dfrac{(x^2-6)}{y^2}(y^2+3y\cot{y}-3)-\dfrac{2x^2}{5} \right], \end{eqnarray} where $x = \omega a/c$ and $y = \sqrt{\varepsilon_{e}(\omega, f, L)}x$. In Fig.~\ref{Figure1} $\textrm{Im} (\alpha_E)$ is calculated, within the BEMT, as a function of both frequency $\omega$ and filling factor $f$ for two different values of $L$, (a) $L=0.1$ and (b) $L=1/3$, for an inhomogeneous sphere composed of copper inclusions randomly distributed inside a polystyrene host sphere. Fig.~\ref{Figure1} reveals that, for every frequency, the maximal value of $\textrm{Im} (\alpha_E)$ occurs at $f_{c}$.
This result corroborates the fact that the power absorbed by the particle in near-field heat transfer (NFHT) is maximal at the percolation threshold. We have verified that this result applies for every value of $L$ (only two distinctive examples are shown in Fig.~\ref{Figure1}). Figure~\ref{Figure1} also shows that, for a given $f$, $\textrm{Im} (\alpha_E)$ depends only weakly on frequency, remaining almost constant as one varies $\omega$. \begin{figure}[!h] \centering \includegraphics[scale = 0.7]{SupMat_Fig1.eps} \caption{Electric polarizability of an inhomogeneous sphere made of copper inclusions randomly distributed in a polystyrene spherical host as a function of frequency $\omega$ and volume fraction $f$ for two different values of the depolarization factor $L$: (a) $L=0.1$ and (b) $L=1/3$.} \label{Figure1} \end{figure} In Fig.~\ref{Figure2} $\textrm{Im} (\alpha_H)$ is shown as a function of both frequency $\omega$ and filling factor $f$ for the same parameters as in Fig.~\ref{Figure1}. As occurs for $\textrm{Im} (\alpha_E)$, $\textrm{Im} (\alpha_H)$ is weakly dependent on frequency for fixed $f$. However, $\textrm{Im} (\alpha_H)$ is much less peaked around $f_{c}$ than $\textrm{Im} (\alpha_E)$, although a local maximum in $\textrm{Im} (\alpha_H)$ always exists at $f_{c}$ for every $L$. The joint contribution of the maxima at $f_{c}$, which occur for both $\textrm{Im} (\alpha_E)$ and $\textrm{Im} (\alpha_H)$, ultimately leads to the large enhancement of NFHT for composite media at the percolation critical point.
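The geometric ingredients entering these results, the depolarization factors and the two percolation thresholds, are straightforward to evaluate numerically. The following sketch (our illustration) computes $L_i$ by quadrature and encodes both threshold formulas; for a sphere it recovers $L_x=L_y=L_z=1/3$, for which the BEMT threshold reduces to the classical $f_c=1/3$:

```python
import numpy as np
from scipy.integrate import quad

def depolarization(ax, ay, az):
    """Depolarization factors (L_x, L_y, L_z) of an ellipsoid with semi-axes ax, ay, az."""
    def L(ai):
        integrand = lambda s: 1.0 / ((s + ai**2)
                                     * np.sqrt((s + ax**2) * (s + ay**2) * (s + az**2)))
        val, _ = quad(integrand, 0.0, np.inf)
        return 0.5 * ax * ay * az * val
    return L(ax), L(ay), L(az)

def fc_bemt(L):
    """BEMT percolation threshold f_c(L) = L(5-3L)/(1+9L)."""
    return L * (5 - 3*L) / (1 + 9*L)

def fc_lagarkov_sarychev(L):
    """Lagarkov-Sarychev percolation threshold f_c(L) = 9L(1-L)/(2+15L-9L^2)."""
    return 9*L*(1 - L) / (2 + 15*L - 9*L**2)
```

For needle-like inclusions (small $L$) the two thresholds differ appreciably, the Lagarkov-Sarychev value being the lower of the two.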
\begin{figure}[!h] \centering \includegraphics[scale = 0.7]{SupMat_Fig2.eps} \caption{Magnetic polarizability of an inhomogeneous sphere made of copper inclusions randomly distributed in a polystyrene spherical host as a function of frequency $\omega$ and volume fraction $f$ for two different values of the depolarization factor $L$: (a) $L=0.1$ and (b) $L=1/3$.} \label{Figure2} \end{figure} \section{Material parameters} The electric permittivity $\varepsilon_{B}$ of Silicon Carbide (SiC) is well described by the following dispersive model [2] \begin{equation} \varepsilon_{B}(\omega) = \epsilon_{\infty}\left(1 + \dfrac{\omega_L^2 - \omega_T^2}{\omega_T^2 - \omega^2 - i\Gamma \omega}\right), \label{sic} \end{equation} where $\omega_L = 182.7\times10^{12}$ rad/s, $\omega_T = 149.5\times10^{12}$ rad/s, and $\Gamma = 0.9\times10^{12}$ rad/s. For the polystyrene host medium, the electric permittivity $\varepsilon_{hm}$ reads [45] \begin{equation} \varepsilon_{hm}(\omega) = 1 + \dfrac{\omega_{p1}^2}{\omega_{r1}^2 - \omega^2 - i\Gamma_{1}\omega} +\dfrac{\omega_{p2}^2}{\omega_{r2}^2 - \omega^2 - i\Gamma_{2}\omega}, \end{equation} where $\omega_{p1} = 1.11\times10^{14}$ rad/s, $\omega_{r1} = 5.54\times10^{14}$ rad/s, $\omega_{p2} = 1.96\times10^{16}$ rad/s, $\omega_{r2} = 1.35\times10^{16}$ rad/s, and $\Gamma_1 = \Gamma_2 = 0.1 \times 10^{12}$ rad/s. The metallic inclusions are described by the Drude model [44] \begin{equation} \varepsilon_{i}(\omega) = 1-\dfrac{\omega_{i}^2}{\omega^2+ i \Gamma_{i} \omega}, \end{equation} where the material parameters for metallic inclusions of Titanium (Ti), Copper (Cu), Vanadium (V), Silver (Ag), and Gold (Au) are given in Table I. \begin{table}[h!]
\centering \begin{tabular}{|c|c|c|} \hline & $\omega_i$ (rad/s) & $\Gamma_i$ (rad/s) \\ \hline Titanium & $3.83 \times 10^{15}$ & $7.19 \times 10^{13}$ \\ \hline Copper & $1.12 \times 10^{16}$ & $1.38 \times 10^{13}$ \\ \hline Vanadium & $7.84 \times 10^{15}$ & $9.26 \times 10^{13}$\\ \hline Silver & $1.37 \times 10^{16}$ & $2.73 \times 10^{13}$ \\ \hline Gold & $1.37 \times 10^{16}$ & $4.05 \times 10^{13}$ \\ \hline \end{tabular} \caption{Material parameters used in the dispersive Drude model for the metallic inclusions.} \label{Tab1} \end{table} \end{document}
\section{Introduction} Let $\mathcal{F}$ be a family of sets in the plane; $\mathcal{F}$ is said to have a \textit{line transversal} if there is a line that intersects each set in $\mathcal{F}$. If every $r$ sets in $\mathcal{F}$ have a line transversal, then $\mathcal{F}$ is said to have the $T(r)$-property, and $\mathcal{F}$ is said to be $T^n$ if there are $n$ lines whose union intersects each set in $\mathcal{F}$. In this case we say $\mathcal{F}$ is \textit{pierced} by these lines. In 1969, Eckhoff showed that if $\mathcal{F}$ is a family of compact, convex sets that has the $T(r)$-property where $r\geq 4$, then $\mathcal{F}$ is $T^2$ \cite{eckhoff1969}. A result of Santalo shows that this result is best possible \cite{santalotheorem1940}, i.e., for all $r$, there exists a family of compact, convex sets with the $T(r)$-property that does not have a line transversal. Eckhoff also showed in 1973 that the $T(3)$-property does not imply $T^2$ \cite{eckhofftransversal1973}. In 1975, Kramer proved that the $T(3)$-property implies $T^5$ \cite{Kramer}. Eckhoff later showed in 1993 that the $T(3)$-property implies $T^4$, and conjectured that the $T(3)$-property in fact implies $T^3$ \cite{eckhoffgallai}. This conjecture has recently been verified by McGinnis and Zerbib \cite{mcginnis2021line}. In fact, they proved a stronger statement, which we now explain. Three sets $F_1,F_2,F_3$ in the plane are said to be a \textit{tight triple} if $\textrm{conv}(F_1\cup F_2)\cap \textrm{conv}(F_2\cup F_3)\cap \textrm{conv}(F_3\cup F_1)\neq \emptyset$. This notion was first defined by Holmsen \cite{holmsenGeometric2013}. A family of planar sets will be called \textit{a family of tight triples} if every three sets in the family are a tight triple. If three sets have a line transversal, then they form a tight triple: a point of the middle set on the transversal lies in the convex hull of the other two, and hence in all three pairwise convex hulls.
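When the sets are finite point sets (so their convex hulls are polygons), tightness of a triple is a concrete finite computation: the three pairwise convex hulls have a common point if and only if a small linear feasibility problem is solvable. The following sketch is our own illustration (the helper `triangle` and the example configurations are ours, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial import ConvexHull

def is_tight_triple(F1, F2, F3):
    """Decide whether conv(F1 u F2), conv(F2 u F3), conv(F3 u F1) share a point.

    Each pairwise hull is converted to halfspaces a.x + b <= 0 (Qhull's
    `equations` convention); a common point exists iff the stacked system of
    inequalities is feasible, which is tested with a trivial linear program.
    """
    rows = []
    for P, Q in ((F1, F2), (F2, F3), (F3, F1)):
        rows.append(ConvexHull(np.vstack([P, Q])).equations)
    eqs = np.vstack(rows)
    res = linprog(c=[0.0, 0.0], A_ub=eqs[:, :2], b_ub=-eqs[:, 2],
                  bounds=[(None, None), (None, None)], method="highs")
    return res.status == 0            # 0 = feasible, 2 = infeasible

def triangle(cx, cy, r=0.5):
    """A small triangle around (cx, cy), standing in for a compact convex set."""
    return np.array([[cx - r, cy - r], [cx + r, cy - r], [cx, cy + r]])
```

Three sets pierced by a common line always form a tight triple, while three small sets placed at the vertices of a large triangle do not.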
McGinnis and Zerbib showed that a family of tight triples consisting of compact, convex sets is $T^3$, which implies Eckhoff's conjecture. The main purpose of this paper is to prove an extension of the result that families of tight triples are $T^3$. We define a certain type of family of sets, which we call a $C(k)$. \begin{definition} For $k\geq 4$, we define a $C(k)$ to be a family of $k$ distinct sets in the plane together with a linear ordering, say $F_1,\dots,F_k$ where the sets are ordered by their indices, such that $\textrm{conv}(F_i\cup F_{i+1})\cap \textrm{conv}(F_j\cup F_{j+1})=\emptyset$ when $\{i,i+1\}\cap \{j,j+1\}=\emptyset$ (indices are taken modulo $k$). Additionally, we define a $C(3)$ to be a family of three disjoint sets in the plane that is not a tight triple. \end{definition} Roughly speaking, if $F_1,\dots,F_k$ is a $C(k)$, then the union $\cup_{i=1}^{k}\textrm{conv}(F_i\cup F_{i+1})$ resembles a closed loop that does not cross itself (see Figure \ref{fig:C(5)}). Notice that the sets in a $C(k)$ are pairwise disjoint. \begin{figure} \centering \includegraphics[scale=.4]{C_5_.png} \caption{An example of a $C(5)$. The sets of the $C(k)$ are in dark gray, and the ordering on these sets is indicated by the numbers $1,\dots,5$. The dark gray and light gray together depict the union $\bigcup_{i=1}^5\textrm{conv}(F_i\cup F_{i+1})$.} \label{fig:C(5)} \end{figure} A family $\mathcal{F}$ is said to be \textit{$C(k)$-free} if $\mathcal{F}$ does not contain a $C(k)$ as a subfamily. We note that a $C(k)$-free family may not be $C(k-1)$-free, and similarly, a $C(k-1)$-free family need not be $C(k)$-free (see Figure \ref{fig:C(5)-free}). \begin{figure} \centering \includegraphics[scale=.6]{C_5_-free.png} \caption{A $C(5)$-free and $C(3)$-free family that is not $C(4)$-free.} \label{fig:C(5)-free} \end{figure} Let $L(k)$ be the smallest integer such that any $C(k)$-free family of compact, convex sets can be pierced by $L(k)$ lines.
The following is the main result of this paper. \begin{theorem}\label{main} Let $k\geq 4$. We have the following: \[ \left\lceil \frac{k}{2} \right\rceil\leq L(k)\leq k-2. \] \end{theorem} For $k=4$, the lower bound of Theorem \ref{main} follows from the result of Santalo \cite{santalotheorem1940} that there are families with the $T(4)$-property that do not have a line transversal. This is due to the fact that a $C(k)$ cannot have a line transversal, so a family with the $T(4)$-property is in particular $C(4)$-free. We also note that the upper bound for $k=4$ was essentially proved in the concluding remarks of \cite{mcginnis2021line}. Indeed, the proof outlined in \cite{mcginnis2021line} shows that if $\mathcal{F}$ is a family of compact, convex sets in the plane that is not $T^2$, then there are two non-parallel lines such that each quadrant defined by these lines contains a set from $\mathcal{F}$. These four sets then make up a $C(4)$. For $k=5$, we get a tight result. \begin{corollary} The following equality holds: $$L(5)=3.$$ \end{corollary} The proof of Theorem \ref{main} is split into two sections. Section \ref{lower} is dedicated to proving the lower bound of Theorem \ref{main}, and Section \ref{upper} is dedicated to the upper bound. For two points $p$ and $q$, we denote by $[p,q]$ the line segment connecting $p$ and $q$. If $p=q$, then $[p,q]$ consists of a single point. \section{The lower bound}\label{lower} In this section we exhibit a $C(k)$-free family $\mathcal{F}$ that is not $T^{\lceil \frac{k}{2} \rceil-1}$. The inspiration for the construction of such a family comes from \cite{eckhofftransversal1973}, where Eckhoff exhibits a family of compact, convex sets with the $T(3)$-property that is not $T^2$. As mentioned earlier, the result is already established for $k=4$, so we may assume that $k\geq 5$.
For $k$ odd, we will present a family $\mathcal{F}$ that is both $C(k)$-free and $C(k+1)$-free and is not $T^{\lceil \frac{k}{2} \rceil-1}$. This will establish the lower bound of Theorem \ref{main}. We note that for $k$ even, an example of a $C(k)$-free family that is not $T^{\lceil \frac{k}{2} \rceil-1}$ is given simply by $k-1$ points in general position. However, in this example, the family is $C(k)$-free for the seemingly trivial reason that there are only $k-1$ sets in the family. For each $k\geq 5$, we present a family demonstrating the lower bound of Theorem \ref{main} that contains more than $k$ sets. Let $p_1,\dots,p_{3(k-1)}$ be equidistant points on the unit circle, arranged clockwise. For $1\leq i\leq 3$, let $p^\ell_i$ be a point lying slightly counterclockwise to $p_i$ and $p^r_i$ a point lying slightly clockwise to $p_i$, in such a way that the points $p^\ell_1,\,p^r_1,\,p^\ell_2,\,p^r_2,\,p^\ell_3,\,p^r_3,\,p_4,\,\dots\,,p_{3(k-1)}$ are arranged in clockwise order. We first define three families of sets, which consist only of line segments (see Figure \ref{fig:example}). \[ \mathcal{F}_1=\{[p_1^r,p_3^\ell],[p_4,p_6],[p_7,p_9],\dots,[p_{3(k-1)-2},p_{3(k-1)}]\} \] \[ \mathcal{F}_2=\{[p_2^r,p_4],[p_5,p_7],[p_8,p_{10}],\dots,[p_{3(k-1)-4},p_{3(k-1)-2}],[p_{3(k-1)-1},p_1^\ell]\} \] \[ \mathcal{F}_3=\{[p_3^r,p_5],[p_6,p_8],[p_9,p_{11}],\dots,[p_{3(k-1)-3},p_{3(k-1)-1}],[p_{3(k-1)},p_2^\ell]\}. \] Finally, we take $\mathcal{F}=\mathcal{F}_1\cup \mathcal{F}_2\cup \mathcal{F}_3$ (see Figure \ref{fig:example}). We now show that $\mathcal{F}$ is $C(k)$-free and $C(k+1)$-free, and we show it is not $T^{\lceil \frac{k}{2} \rceil-1}$. For a set $[p,q]\in \mathcal{F}$ we say that $p$ comes clockwise before $q$ if the clockwise distance along the unit circle from $p$ to $q$ is less than the clockwise distance from $q$ to $p$.
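The counting underlying the next two lemmas can be sanity-checked by computer. In the sketch below (our own illustration), each segment is represented just by the labels of its endpoints, the marked points are listed in clockwise order, and the clockwise arcs are handled as cyclic intervals of positions:

```python
def build_family(k):
    """Endpoint labels of the segments in F_1, F_2, F_3 of the construction (k odd)."""
    n = 3 * (k - 1)
    # the 3k marked points, in clockwise order
    P = ["1l", "1r", "2l", "2r", "3l", "3r"] + [str(i) for i in range(4, n + 1)]
    pos = {lab: i for i, lab in enumerate(P)}
    F1 = [("1r", "3l")] + [(str(3*j + 1), str(3*j + 3)) for j in range(1, k - 1)]
    F2 = ([("2r", "4")] + [(str(3*j + 2), str(3*j + 4)) for j in range(1, k - 2)]
          + [(str(n - 1), "1l")])
    F3 = ([("3r", "5")] + [(str(3*j + 3), str(3*j + 5)) for j in range(1, k - 2)]
          + [(str(n), "2l")])
    return P, pos, F1 + F2 + F3

def arc_count(pos, a, b, size):
    """Number of marked points on the clockwise arc from a to b, inclusive."""
    return (pos[b] - pos[a]) % size + 1

def on_arc(pos, a, b, c, size):
    """Whether the point labelled c lies on the clockwise arc from a to b."""
    return (pos[c] - pos[a]) % size <= (pos[b] - pos[a]) % size
```

For every odd $k\geq 5$ this reproduces the counts used in the freeness argument: there are $3(k-1)$ segments, each arc holds at least 3 of the $3k$ marked points, and every arc through $p_2^r$ holds at least 4.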
When $[p,q]\in \mathcal{F}$ and $p$ comes clockwise before $q$, we denote by arc$[p,q]$ the arc along the unit circle that goes clockwise from $p$ to $q$. Also, we use $I([p,q])$ to denote the set of indices $i$ such that $p_i\in$ arc$[p,q]$, $p_i^\ell\in$ arc$[p,q]$, or $p_i^r\in$ arc$[p,q]$. For example, $I([p_2^r,p_4])=\{2,3,4\}$. Note that if $F_1,F_2\in \mathcal{F}$ intersect, then $I(F_1)\cap I(F_2)\neq \emptyset$. \begin{figure} \centering \begin{tikzpicture} \draw[red] (4*360/12-5:3cm)--(2*360/12+5:3cm); \draw[red] (360/12:3cm)--(11*360/12:3cm); \draw[red] (10*360/12:3cm)--(8*360/12:3cm); \draw[red] (7*360/12:3cm)--(5*360/12:3cm); \draw[blue] (3*360/12-5:3cm)--(360/12:3cm); \draw[blue] (3,0)--(10*360/12:3cm); \draw[blue] (9*360/12:3cm)--(7*360/12:3cm); \draw[blue] (6*360/12:3cm)--(4*360/12+5:3cm); \draw[green] (2*360/12-5:3cm)--(3,0); \draw[green] (11*360/12:3cm)--(9*360/12:3cm); \draw[green] (8*360/12:3cm)--(6*360/12:3cm); \draw[green] (5*360/12:3cm)--(3*360/12+5:3cm); \draw (0,0) circle (3cm); \filldraw [black] (3,0) circle (2pt) node[right] {$p_5$}; \filldraw [black] (360/12:3cm) circle (2pt) node[right] {$p_4$}; \filldraw [black] (2*360/12+5:3cm) circle (2pt) node[above] {$p_3^\ell$}; \filldraw [black] (2*360/12-5:3cm) circle (2pt) node[above] {$p_3^r$}; \filldraw [black] (3*360/12-5:3cm) circle (2pt) node[above] {$p_2^r$}; \filldraw [black] (3*360/12+5:3cm) circle (2pt) node[above] {$p_2^\ell$}; \filldraw [black] (4*360/12-5:3cm) circle (2pt) node[above] {$p_1^r$}; \filldraw [black] (4*360/12+5:3cm) circle (2pt) node[above] {$p_1^\ell$}; \filldraw [black] (5*360/12:3cm) circle (2pt) node[left] {$p_{12}$}; \filldraw [black] (6*360/12:3cm) circle (2pt) node[left] {$p_{11}$}; \filldraw [black] (7*360/12:3cm) circle (2pt) node[left] {$p_{10}$}; \filldraw [black] (8*360/12:3cm) circle (2pt) node[below] {$p_9$}; \filldraw [black] (9*360/12:3cm) circle (2pt) node[below] {$p_8$}; \filldraw [black] (10*360/12:3cm) circle (2pt) node[below] {$p_7$}; \filldraw
[black] (11*360/12:3cm) circle (2pt) node[right] {$p_6$}; \end{tikzpicture} \caption{An example demonstrating the lower bound of Theorem \ref{main} for $k=5$. The sets in $\mathcal{F}_1$ are red, in $\mathcal{F}_2$ are blue, and in $\mathcal{F}_3$ are green.} \label{fig:example} \end{figure} \begin{lemma}\label{freeness} The family $\mathcal{F}$ is $C(k)$-free and $C(k+1)$-free. \end{lemma} \begin{proof} Let $[p,q]\in \mathcal{F}$ where $p$ comes clockwise before $q$. There is no set of $\mathcal{F}$ that is disjoint from $[p,q]$ and contains a point on arc$[p,q]$. Since for each such set $[p,q]$, arc$[p,q]$ contains at least 3 of the $3k$ points in \[ P=\{p^\ell_1,\,p^r_1,\,p^\ell_2,\,p^r_2,\,p^\ell_3,\,p^r_3,\,p_4,\,\dots\,,p_{3(k-1)}\}, \] a $C(k)$ of $\mathcal{F}$ has the property that each set $[p,q]$ in the $C(k)$ contains exactly 3 points in arc$[p,q]$, and every point of $P$ is in arc$[p,q]$ for some $[p,q]$ of the $C(k)$. However, each set $[p',q']$ of $\mathcal{F}$ that contains $p_2^r$ in arc$[p',q']$ has the property that at least 4 points of $P$ are contained in arc$[p',q']$, a contradiction. Also, $\mathcal{F}$ is $C(k+1)$-free by the same reasoning. \end{proof} \begin{lemma} The family $\mathcal{F}$ is not $T^{\lceil \frac{k}{2} \rceil-1}$. \end{lemma} \begin{proof} Notice that for any $F\in \mathcal{F}$, if a line $L$ pierces $F$, then $L$ intersects arc$F$. Any point on the unit circle is contained in arc$F$ for at most 3 sets $F\in \mathcal{F}$. If a point is contained in arc$F$ for 3 such sets $F\in \mathcal{F}$, then this point must be of the form $p_i$ for some $4\leq i\leq 3(k-1)$. Since a line intersects the unit circle in at most 2 points, any line intersects at most 6 sets in $\mathcal{F}$. Now, since arc$[p_1^r,p_3^\ell]$ does not contain $p_i$ for any $4\leq i\leq 3(k-1)$, there is no line that intersects $[p_1^r,p_3^\ell]$ and intersects 6 sets in $\mathcal{F}$. 
It follows that if $a$ lines pierce $\mathcal{F}$, then $6(a-1)+5\geq 3(k-1)$, since any line piercing $[p_1^r,p_3^\ell]$ intersects at most 5 sets of $\mathcal{F}$, while each of the remaining lines intersects at most 6 of the $3(k-1)$ sets. Hence $a > \frac{k-1}{2} = \left\lceil \frac{k}{2} \right\rceil-1$. This completes the proof. \end{proof} \section{The upper bound}\label{upper} In this section, we prove that every $C(k)$-free family $\mathcal{F}$ can be pierced by $k-2$ lines, and again, we may assume that $k\geq 5$. Because the sets of $\mathcal{F}$ are compact, if every finite subfamily of $\mathcal{F}$ is $T^n$, then $\mathcal{F}$ is $T^n$. This is stated for instance in \cite{eckhoffgallai}. Therefore, throughout this section we may assume that $\mathcal{F}$ is finite, and thus, we may scale the plane so that each set of $\mathcal{F}$ is contained in the open unit disk. First, we will need a topological tool known as the KKM Theorem \cite{knaster1929}. Let $\Delta^{n-1} = \{(x_1,\dots,x_n) \in \mathbb{R}^n \mid x_i\ge 0, \sum_{i=1}^n x_i = 1\}$ denote the $(n-1)$-dimensional simplex in $\mathbb{R}^n$, whose vertices are the canonical basis vectors $e_1,\dots,e_n$. A face $\sigma$ of $\Delta^{n-1}$ is a subset of $\Delta^{n-1}$ of the form $\textrm{conv}(\{e_i : i\in I\})$ for some $I\subset [n]$. \begin{theorem} Let $A_1,\dots,A_{n}$ be open subsets of $\Delta^{n-1}$ such that for every face $\sigma$ of $\Delta^{n-1}$ we have $\sigma \subset \bigcup_{e_j \in \sigma} A_j$. Then $\bigcap_{i=1}^{n} A_{i}\neq \emptyset$. \end{theorem} We remark that the original KKM Theorem was stated for closed sets $A_i$; the version for open sets $A_i$ stated here appears e.g.\ in \cite{openkkm}. Let $U$ be the unit circle, and let $f:[0,1] \rightarrow U$ be a parameterization of $U$ defined by $f(t)=(\cos(2\pi t), \sin(2\pi t))$. A point $x=(x_1,\dots,x_{2(k-2)})\in \Delta^{2(k-2)-1}$ corresponds to $2(k-2)$ points on $U$ given by $f_i(x)=f(\sum_{j=1}^i x_{j})$ for $0\leq i\leq 2(k-2)$ (note that $f_0(x)=f_{2(k-2)}(x)$).
We define the line segments $\ell_i(x)=[f_i(x),f_{i+k-2}(x)]$ for $0\leq i\leq 2(k-2)-1$, where addition is taken modulo $2(k-2)$ (see Figure \ref{fig:R1}). Note that $\ell_i(x)=\ell_{i+k-2}(x)$. For $1\leq i\leq k-3$, we define the region $R^i_x$ to be the open set bounded by the lines $\ell_{i}(x)$ and $\ell_{i-1}(x)$ and by the arc from $f_{i-1}(x)$ to $f_i(x)$. For $k-2\leq i\leq 2(k-2)$, we define $R^i_x$ to be the intersection of the region in the open unit disk bounded by $\ell_{i-1}(x)$, $\ell_{i}(x)$ and the arc from $f_{i-1}(x)$ to $f_i(x)$ with the open halfspaces defined by $\ell_j(x)$ for those $1\leq j\leq 2(k-2)$ for which $\ell_j(x)$ is a line segment (and not a point) containing the open arc from $f_{i-1}(x)$ to $f_i(x)$ (see Figures \ref{fig:R1} - \ref{fig:R6}). \begin{lemma}\label{extra_region} Assume that each $R^i_x$ is non-empty. Let $Q_1$ be the open quadrant defined by the lines $\ell_{k-3}(x)$ and $\ell_{k-2}(x)$ that contains the open arc along the unit circle from $f_0(x)$ to $f_{k-3}(x)$. There is some $1\leq j\leq k-3$ such that $R^j_x$ is contained in $Q_1$. \end{lemma} \begin{proof} Let $Q_2,Q_3,Q_4$ be the remaining quadrants defined by $\ell_{k-3}(x)$ and $\ell_{k-2}(x)$, so that $Q_1,Q_2,Q_3,Q_4$ occur in counterclockwise order (see Figure \ref{fig:quadrants}). Since the regions $R^i_x$ are non-empty, the $Q_i$'s are non-empty. If $R^{k-3}_x$ or $R^1_x$ is contained in $Q_1$, we are done, so assume that neither is. Then $\ell_{k-4}(x)$ intersects $Q_4$ and $\ell_1(x)$ intersects $Q_2$. If $k=5$, then we are done, since $k-4=1$ and $\ell_1(x)$ cannot intersect both $Q_2$ and $Q_4$. Otherwise, $k\geq 6$, and we take $j$ to be the smallest index such that $\ell_j(x)$ intersects $Q_4$. Such an index exists since $\ell_{k-4}(x)$ intersects $Q_4$. Also, $j>1$, since $\ell_1(x)$ intersects $Q_2$ (and hence does not intersect $Q_4$).
Therefore $j-1\geq 1$, and $\ell_{j-1}(x)$ intersects $Q_2$ or passes through the intersection point of $\ell_{k-2}(x)$ and $\ell_{k-3}(x)$. This implies that $\ell_{j-1}(x)$ and $\ell_{j}(x)$ intersect in $Q_1$ (see Figure \ref{fig:quadrants}). Therefore, $R^j_x$ is contained in $Q_1$. \end{proof} \begin{lemma}\label{covers} If a connected set $F$ contained in the unit disk does not intersect any $\ell_j(x)$, then $F$ is contained in $R^i_x$ for some $i$. \end{lemma} \begin{proof} Let $\tilde{R}^i_x$ be the region bounded by the arc from $f_{i-1}(x)$ to $f_i(x)$ and by the lines $\ell_{i-1}(x)$ and $\ell_{i}(x)$. Note that $\tilde{R}^i_x=R^i_x$ for $1\leq i\leq k-3$. Also, we have that $F$ is contained in $\tilde{R}^i_x$ for some $i$. If $1\leq i\leq k-3$, then we are done since $\tilde{R}^i_x=R^i_x$. So assume that $i\geq k-2$ and $F$ is not contained in $R^i_x$. Since $F$ does not intersect any $\ell_j(x)$, there is some $j$ such that $F$ is contained in the open halfspace defined by $\ell_j(x)$ that does not contain the arc from $f_{i-1}(x)$ to $f_i(x)$. If $i=2(k-2)$, then choose the largest index $j\in \{1,\dots, k-4\}$ such that the open halfspace defined by $\ell_j(x)$ not containing the arc from $f_{2(k-2)-1}(x)$ to $f_0(x)$ contains $F$. Then $F$ is contained in $R^{j+1}_x$. This is because $R^{j+1}_x$ is the region in the open unit disk obtained by taking the intersection of the open halfspace defined by $\ell_{j}(x)$ that does not contain the arc from $f_{2(k-2)-1}(x)$ to $f_0(x)$ (which contains $F$) with the open halfspace defined by $\ell_{j+1}(x)$ that contains the arc from $f_{2(k-2)-1}(x)$ to $f_0(x)$ (which contains $F$ by the maximality of $j$). If $i=k-2$, then choose the smallest index $j\in \{1,\dots, k-4\}$ such that the halfspace defined by $\ell_j(x)$ not containing the arc from $f_{k-3}(x)$ to $f_{k-2}(x)$ contains $F$. Then $F$ is contained in $R^{j}_x$ (by similar reasoning as above). Hence, we may assume that $i\neq k-2,2(k-2)$. Let $i'=i-(k-2)$.
If there is some $u\in \{0,\dots,i'-2\}$ such that the halfspace defined by $\ell_u(x)$ not containing the arc from $f_{i-1}(x)$ to $f_{i}(x)$ contains $F$, then let $j$ be the largest such index. Then $F$ is contained in $R^{j+1}_x$. Otherwise, choose the smallest index $j\in \{k-3,\dots,i'+1\}$ such that the halfspace defined by $\ell_j(x)$ not containing the arc from $f_{i-1}(x)$ to $f_{i}(x)$ contains $F$. Then $F$ is contained in $R^j_x$. This completes the proof. \end{proof} With the goal of applying the KKM Theorem, we define $A_i$ to be the set of points $x\in \Delta^{2(k-2)-1}$ for which $R^i_x$ contains a set in $\mathcal{F}$. Because $R^i_x$ is open and each set in $\mathcal{F}$ is closed, it follows that $A_i$ is open. Let us assume for contradiction that there is no point $x\in \Delta^{2(k-2)-1}$ for which the lines $\ell_j(x)$, $0\leq j\leq k-3$, pierce $\mathcal{F}$. Then by Lemma \ref{covers}, for each $x\in \Delta^{2(k-2)-1}$, there is some region $R^i_x$ that contains a set in $\mathcal{F}$. It follows that $\Delta^{2(k-2)-1}\subset \cup_{i=1}^{2(k-2)}A_i$. Also, it is clear that for $x=(x_1,\dots,x_{2(k-2)})$, if $x_i=0$, then the region $R^i_x$ is empty and hence $x\notin A_i$. It follows from this fact that the sets $A_i$ satisfy the conditions of the KKM Theorem. Therefore, by the KKM Theorem, there exists a point $x\in \cap_{i=1}^{2(k-2)}A_i$. Notice in particular that each $R^i_x$ is non-empty. Let $1\leq i\leq k-3$ be the index, guaranteed by Lemma \ref{extra_region}, such that $R^i_x$ is contained in $Q_1$, where $Q_1$ is defined as in that lemma. Let $F_1$ be the set in $\mathcal{F}$ contained in $R^i_x$, and let $F_j$ be the set of $\mathcal{F}$ contained in $R^{k-4+j}_x$ for $2\leq j\leq k$. Note that the regions $R^i_x$ and $R^{k-4+j}_x$, $2\leq j\leq k$, are pairwise disjoint, so $F_1,\dots,F_k$ are pairwise distinct.
Now, $\textrm{conv}(F_1\cup F_2)$ is separated from $F_3,\dots,F_k$ by the line $\ell_0(x)$, so $\textrm{conv}(F_1\cup F_2)$ is disjoint from $\textrm{conv}(F_u\cup F_{u+1})$ for all $3\leq u\leq k-1$. For $j\in \{2,\dots,k-1\}$, $\textrm{conv}(F_j\cup F_{j+1})$ is separated from $F_{j+2},\dots, F_k$ by $\ell_{k-4+(j+1)}(x)$, so $\textrm{conv}(F_j\cup F_{j+1})$ is disjoint from $\textrm{conv}(F_u\cup F_{u+1})$ for all $j+2\leq u\leq k-1$. Finally, $\textrm{conv}(F_k\cup F_{1})$ is separated from $F_2,\dots,F_{k-1}$ by $\ell_{k-3}(x)$, so $\textrm{conv}(F_k\cup F_{1})$ is disjoint from $\textrm{conv}(F_u\cup F_{u+1})$ for all $2\leq u\leq k-2$. It follows that the sets $F_1,\dots,F_k$ form a $C(k)$, a contradiction. This completes the proof of Theorem \ref{main}. \section{Concluding Conjecture} We close with a conjecture for the exact value of $L(k)$, which states that the lower bound of Theorem \ref{main} is tight. \begin{conjecture} We have that $L(k)=\left\lceil \frac{k}{2} \right\rceil$. \end{conjecture} \section{Acknowledgements} The author would like to thank Shira Zerbib for commenting on a first draft of this paper.
\section{Introduction} \label{intro} The measurements of bulk hadronic observables in small collision systems at RHIC and the LHC are of prime importance because they provide direct access to the underlying dynamics of multi-particle production in QCD. However, the theoretical description of such observables is challenging because of the dominance of soft non-perturbative processes. The two major challenges are the systematic treatment of soft multi-parton production and a consistent treatment of the hadronization of soft partons. Although the former is well addressed by the Color Glass Condensate (CGC) effective theory over a wide range of kinematics, an {\it ab initio} QCD-based treatment of the latter is still missing. Effective approaches have been developed over the years towards a consistent treatment of hadronization~\cite{Field:1977fa,Azimov:1984np,Andersson:1983ia}. In the phenomenology of A+A collisions, the problem of hadronization is addressed by fluid-dynamic evolution followed by a Cooper-Frye prescription~\cite{Cooper:1974mv}. Applying such a macroscopic approach across systems of all sizes is, however, challenging~\cite{SchenkeQM18}. The conventional microscopic description of small collision systems such as p+p, on the other hand, implements an effective hadronization model based on Lund-string fragmentation~\cite{Andersson:1983ia}. Such descriptions are implemented in Monte-Carlo event generators like PYTHIA~\cite{Sjostrand:2014zea}. However, the underlying framework for multi-particle production in PYTHIA fails to describe several observations in high multiplicity events~\cite{SjostrandQM18}. Computations based on the CGC approach, on the other hand, explain the origin of such events as a basic feature of rare highly occupied gluon states, and provide both a qualitative and a quantitative description of the most recent observations in such events~\cite{Mace:2018yvl,Schenke:2015aqa}.
The only shortcoming of this approach over the years has been that the data-model comparisons are either done at the level of gluon distributions or by employing fragmentation functions~\footnote{Very recently the CGC+NRQCD formalism has been used for the hadronization of $J/\psi$~\cite{Ma:2018bax} in high multiplicity p+p/A collisions.}, though the latter are only applicable at higher $p_{T}$. In \cite{Schenke:2016lrs} an approach was developed by combining the IP-Glasma model of the CGC and the Monte-Carlo Lund-string fragmentation of PYTHIA. This new CGC+PYTHIA framework successfully describes bulk observations in p+p collisions like multiplicity distributions $P(n)$ and the mean transverse momentum $\left<p_{T}\right>$ of identified particles. Most importantly, this model naturally describes the systematics of rare high multiplicity events that have recently generated a lot of interest. Observations such as the growth of $\left<p_{T}\right>$ with charged particle multiplicity $N_{ch}$, its mass ordering $\left<p_{T}\right>_{p/\bar{p}}>\left<p_{T}\right>_{K^\pm}>\left<p_{T}\right>_{\pi^\pm}$, the appearance of long-range di-hadron correlations and the mass ordering of the elliptic anisotropy coefficient $v_{2}\{2\}$ are often attributed to signatures of collectivity driven by hydrodynamics. The CGC+PYTHIA framework successfully describes such observations without requiring strong final state interactions. In this contribution, we employ the CGC+PYTHIA framework and focus on a few basic observables such as multiplicity distributions and transverse momentum spectra in 200 GeV p+p and d+Au collisions.
\begin{figure} \includegraphics[width=0.45\textwidth]{multdist_hadrons_proc_pp.eps} \includegraphics[width=0.45\textwidth]{multdist_hadrons_proc.eps} \caption{(color online) Probability distribution of charged hadron multiplicity compared to STAR~\cite{Abelev:2008ez} and UA5 data~\cite{Ansorge:1988kn}.} \label{fig_multdist} \end{figure} \section{The CGC+PYTHIA framework} The details of the CGC+PYTHIA framework are described in~\cite{Schenke:2016lrs}. In this framework, we estimate the event-by-event distribution of gluons $dN_g/dk_\perp dy$ from numerical solutions of the classical Yang-Mills equations as implemented in the IP-Glasma model~\cite{Schenke:2012wb}. The IP-Glasma lattice parameters in our computations are as follows: we use transverse lattices of size $N = 400$ and spacing $a = 0.04$~fm; we employ an infrared regulator of mass $m = 0.2$~GeV. We choose the ratio of the saturation scale to the parameter controlling the width of the color charge fluctuations to be $Q_s/g^2\mu=0.7$ and the width of the fluctuations of the saturation scale to be $\sigma(\log(Q_s))=0.5$. Other details of the lattice setup are similar to the most recent IP-Glasma computation performed in~\cite{Mantysaari:2018zdd}. We integrate the distribution $dN_g/dk_\perp dy$ to estimate the total number of gluons $N_g$ within the transverse momentum $0\!<\!k_\perp\!<\!k_\perp^{\rm max}$ and rapidity $-y^{\rm beam}\!<\!y\!<\!y^{\rm beam}$, where the beam rapidity at $\sqrt{s}=200$ GeV is $y^{\rm beam} = 5.36$. We then sample $N_g$ gluons to construct PYTHIA strings and feed them into the Monte-Carlo implementation of the Lund string fragmentation routine in PYTHIA (version 8.235)~\cite{Sjostrand:2014zea}.
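As a rough illustration of this integration step, one can integrate a toy gluon spectrum over the same kinematic ranges. The spectrum shape, normalization, and saturation scale below are placeholder assumptions for illustration only; in the actual framework $dN_g/dk_\perp dy$ is obtained event-by-event from the IP-Glasma simulation.

```python
import numpy as np

# Toy sketch of the N_g estimate: integrate a gluon spectrum
# dN_g/dkperp dy over 0 < kperp < kperp_max and -y_beam < y < y_beam.
# The spectrum below is a hypothetical placeholder (flat in rapidity,
# saturated below Qs, power-law falloff above), NOT the IP-Glasma output.

Qs = 1.0       # toy saturation scale [GeV] (assumption)
y_beam = 5.36  # beam rapidity at sqrt(s) = 200 GeV
k_max = 10.0   # k_perp cutoff [GeV]; varied between 8 and 15 GeV in the text
norm = 10.0    # arbitrary overall normalization (assumption)

def dNg_dkperp_dy(kperp):
    return norm * kperp / (kperp**2 + Qs**2)**2

kperp = np.linspace(0.0, k_max, 2001)
dk = kperp[1] - kperp[0]
spectrum = dNg_dkperp_dy(kperp)
# trapezoidal rule in kperp; the flat rapidity dependence integrates to 2*y_beam
Ng = 2.0 * y_beam * dk * (spectrum.sum() - 0.5 * (spectrum[0] + spectrum[-1]))
print(f"toy N_g = {Ng:.1f}")
```

For this toy spectrum the integral can also be done analytically, providing a check of the numerical estimate; the sampled gluons would then be handed to the string-fragmentation step.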
The Lund symmetric fragmentation function implemented in PYTHIA is given by \begin{equation} f(z, m_{T}) = \frac{1}{z} (1-z)^{a} \exp \left(-\frac{b\, m_{T}^2}{z} \right)\;, \end{equation} where $z$ is the light-cone momentum fraction carried by the hadron, $m_{T}$ is its transverse mass, and the parameters $a$ and $b$ are constrained by global data -- further tuning of $a$ and $b$ is allowed within the ranges $0\le a \le2.0$ and $0.2\le b\le2.0$, which we will do below to estimate part of our systematic uncertainties. \section{Results} In Fig.\ref{fig_multdist} we present the probability distribution of the scaled \footnote{It is worth noting that many theoretical uncertainties, which we discuss later, are mitigated by taking the ratio $N_{\rm ch}/\left<N_{\rm ch}\right>$.} inclusive charged hadron multiplicity $P(N_{\rm ch}/\left<N_{\rm ch}\right>)$ at midrapidity in p+p and d+Au collisions compared to STAR~\cite{Abelev:2008ez} and UA5 data~\cite{Ansorge:1988kn}. We find good agreement over the entire range of available data, slightly better than a previous CGC computation that does not include fragmentation~\cite{Mace:2018vwq}. It is widely known that $P(N_{\rm ch}/\left<N_{\rm ch}\right>)$ is intrinsically composed of multiple negative binomial distributions. To the best of our knowledge, the CGC is the only available framework at RHIC and LHC energies that produces such negative binomial distributions from first principles -- conventional models like PYTHIA generate a convolution of Poisson distributions for the charged hadron multiplicity~\cite{SjostrandQM18}. \begin{figure} \includegraphics[width=0.45\textwidth]{pid_pt_pp_200_proc.eps} \includegraphics[width=0.45\textwidth]{pid_pt_dAu_200_proc.eps} \caption{(color online) Transverse momentum spectra of identified particles obtained from the CGC+PYTHIA framework (shown by bands) in 200 GeV p+p and d+Au collisions compared to the STAR data (shown by symbols).} \label{fig_ptdist} \end{figure} We now turn to more differential measurements.
In Fig.\ref{fig_ptdist} we present the identified particle spectra for the two systems and compare them to the available data from STAR~\cite{Abelev:2008ez}. We present our results as shaded bands which already incorporate the systematic uncertainties in our calculations. These uncertainties include the variation of the evolution time for the Yang-Mills phase in the range of $\tau=0.4-0.6$ fm, the variation of the Lund string fragmentation parameters $a$ and $b$ within the range allowed by PYTHIA, and the variation of $k_\perp^{\rm max}$ within a range of $8-15$ GeV. Within the uncertainties shown here, CGC+PYTHIA seems to provide a reasonable description of the identified particle spectra at RHIC. In Fig.\ref{fig_pi0dist} we show the invariant cross section at mid-rapidity for $\pi^0$ within the pseudorapidity window of $|\eta|<0.3$, defined as $\sigma \times \frac{1}{2\pi p_T}\frac{d^{2}N}{dp_{T}\,dy}$. We take $\sigma$ to be the inelastic cross section in p+p collisions, $\sigma^{pp}_{\rm inel}=42$~mb, and the total geometric cross section in d+Au collisions, $\sigma^{\rm d+Au}_{\rm geo}=2180$~mb, estimated by the MC-Glauber model calculations in~\cite{Alver:2008aq}. Once again we show the CGC+PYTHIA calculations as bands that include the different aforementioned sources of systematic uncertainties. Within the uncertainties, the calculations agree well with the PHENIX data from~\cite{Adler:2006wg}. \begin{figure} \includegraphics[width=0.45\textwidth]{spectra_pp_pi0_proc.eps} \includegraphics[width=0.45\textwidth]{spectra_dAu_pi0_proc.eps} \caption{(color online) Transverse momentum dependence of the invariant cross section for $\pi^0$ in 200 GeV p+p and d+Au collisions compared to the PHENIX data from~\cite{Adler:2006wg}.} \label{fig_pi0dist} \end{figure} \section{Summary} In this contribution, we extend our previous work using the newly developed CGC+PYTHIA framework to p+p and d+Au collisions at RHIC.
A first data-model comparison of bulk hadronic observables such as the probability distribution of the inclusive charged hadron multiplicity, the identified particle spectra, and the invariant cross section of neutral pions looks promising. There are now several new results from the small system scan at RHIC. One particularly striking observation is the hierarchy of $v_{n}(p_T)$ in $p/d/He-Au$ collisions~\cite{Aidala:2018mcw}. Recent computations based on the CGC successfully describe such observations from initial state dynamics alone~\cite{Mace:2018vwq}. A similar hierarchy of $R_{p/d/He-Au}$ observed at RHIC~\cite{Sakaguchi:2016gec} also demands a first-principles explanation. A future extension of our work will focus on more complex observables for the small collision systems scan at RHIC. In addition to further exploration of small systems at RHIC and the LHC, CGC+PYTHIA is also a promising framework for phenomenological studies at a future EIC. \section{Acknowledgements} B. P. S., P. T., and R. V. are supported under DOE Contract No. DESC0012704. S. S. is supported by DOE Award No. DE-FG02-97ER41014. This research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. We thank Kaushik Roy for a careful reading of the manuscript. \bibliographystyle{elsarticle-num}
\section{Introduction} The existence of `other worlds' has always been one of the most discussed topics in the history of philosophy and science. The question has fascinated researchers for more than 2000 years, but the first attempt in modern astronomy to discover extrasolar planets was made by Huyghens (\cite{Huyghens}) in the XVII century. One had to wait nearly another 300~years until the first extrasolar planets were discovered (Mayor \& Queloz~\cite{planet1}; Marcy \& Butler~\cite{planet2}), namely by observing the radial velocity of the parent star through Doppler-shift measurements. All of the confirmed detections of extrasolar planets so far result from this technique, and $\sim 20$ planets have been found (Schneider~\cite{planetencyc}). Already in 1991, Mao \& Paczy\'nski (\cite{MP}) pointed out that not only can a (dark) foreground star that passes close to the line-of-sight of an observed luminous background source star yield a detectable variation in the observed light of the source star, but a planet around the foreground (lens) star can also significantly modify the observed light curve. Gould \& Loeb (\cite{Gould92}) have shown that there is a significant probability to detect jupiter-mass and saturn-mass planets around stars in the Galactic disk that act as microlenses by magnifying the light of observed stars in the Galactic bulge. Bennett \& Rhie (\cite{Bennett96}) have pointed out that the capability of detecting planets by this photometric microlensing ($\mu$L{}) technique extends to earth-mass planets, where the limit is given by the finite size of the source stars. Contrary to all techniques employed or suggested to search for planets, photometric $\mu$L{} does not favour nearby objects. This makes it the unique technique to search for planets around stars at distances larger than a few kpc.
Moreover, for disk lenses and bulge sources, a separation between planet and parent star of 2--6~AU is favoured, making it an ideal method to look for jupiter-like systems. Since the parent star of the planet acts as a gravitational lens only through its gravitational field, there is no luminosity bias for the parent stars, which are generally not even seen. Moreover, it is the only method to discover Earth-like planets from ground-based observations.\footnote{In 1992, Earth mass objects were discovered around the pulsar PSR1257+12 (Wolszczan \& Frail~\cite{Wols1}; Wolszczan~\cite{Wols2}) through time-delay measurements. The discovery is beyond doubt, but the very nature of these objects is completely unknown: it is difficult, at the moment, to reconcile this discovery with our picture of planetary systems. A precise definition of a planet is a subtle question (see Marcy \& Butler~\cite{Marcy98}).} Several teams have started to look for planetary anomalies in $\mu$L{} light curves with monitoring programs that perform frequent and precise observations, namely PLANET (Albrow et al.~\cite{PL}; Dominik et al.~\cite{PL2}), MPS (Rhie et al.~\cite{MPS}), and MOA (Hearnshaw et al.~\cite{MOA}). All these teams rely on the microlensing 'alerts' issued by teams that undertake surveys of $\sim 10^{7}$ stars: OGLE (Udalski et al.~\cite{OGLE}), MACHO\footnote{MACHO will discontinue its operation by the end of 1999.} (Alcock et al.~\cite{MACHO1,MACHO2}), and EROS (Palanque-Delabrouille et al.~\cite{EROS}). While most of these alerts are on Galactic bulge stars, MACHO and EROS also observe(d) fields towards the Magellanic Clouds. However, the number of events towards the SMC and LMC comprises only 5--10\% of the total number of events. In addition to detecting planets around stars in the Galactic disk (typically at $4~\mbox{kpc}$ distance), one could also think of detecting planets around stars in the Magellanic Clouds (at $\sim 50~\mbox{kpc}$ distance).
However, in addition to the relatively small number of detected events, finite source effects play a much more prominent role for lensing of stars in the Magellanic Clouds by stars in the Magellanic Clouds than for lensing of Galactic bulge stars by Galactic disk stars (Sahu~\cite{SahuNat}), resulting in a dramatic decrease in the probability to detect planetary signals. Safizadeh et al. (\cite{Safplan}) have pointed out that planets around disk stars can also be detected by looking at the shift of the light centroid of observed source stars caused by microlensing of disk stars and surrounding planets with upcoming space interferometers that allow astrometric shifts to be measured at the $\mu$as level. Contrary to photometric $\mu$L{}, the observed signal of this 'astrometric $\mu$L{}' technique decreases with the distance of the lenses (e.g.\ Dominik \& Sahu~\cite{Sahu}). With $\mu$as-astrometry, jupiter-mass planets can only be detected for distances up to $\lesssim 30~\mbox{kpc}$. This leaves photometric $\mu$L{} as the only method ever capable of detecting planets in nearby galaxies like M31. In contrast to microlensing observations towards the Galactic bulge and the Magellanic Clouds, a large number of source stars fall onto the same pixel of the detector for observations towards M31. However, it is still possible to detect $\mu$L{} events even in unresolved star fields (Baillon et al.~\cite{Baillon}; Gould~\cite{Gould96}). Since standard photometric methods cannot be used to reveal $\mu$L{} events, new techniques have been developed: super-pixel photometry (Ansari et al.~\cite{Ansari97}) and difference image photometry (Tomaney \& Crotts~\cite{TC}; Alard \& Lupton~\cite{Alard98}). These techniques are used for the $\mu$L{} searches towards M31 as carried out by the Columbia-VATT search (Crotts \& Tomaney~\cite{CT}), AGAPE (Ansari et al.~\cite{Ansari97}), SLOTT-AGAPE (Bozza et al.~\cite{SLOTT}), and MEGA (Crotts et al.~\cite{MEGA}).
In this paper we investigate the possibility to detect planets around stars in M31 with experiments that make use of either of these techniques. By searching for planets (or, at least, brown dwarfs) even in other galaxies, the limit for planet detection is pushed further towards larger distances. The paper is organized in the following way: in Sect.~2, we discuss the characteristics of microlensing signals caused by planets. In Sect.~3, the conditions for detecting anomalies in light curves of M31 are discussed. In Sect.~4, we calculate the probability to detect planetary signals in M31, and in Sect.~5, we discuss the extraction of planetary parameters. Finally, in Sect.~6, we summarize and conclude. \section{Microlensing signals of planets} \label{miclens} A microlensing event occurs if a massive lens object with mass $M$ located at a distance $D_{\rm L}$ from the observer passes close to the line-of-sight towards a luminous source star at the distance $D_{\rm S}$ from the observer. Let $u$ denote the angular separation between lens and source in units of the angular Einstein radius \begin{equation} \theta_{\rm E} = \sqrt{\frac{4GM}{c^2}\,\frac{D_{\rm S} - D_{\rm L}}{D_{\rm L}\,D_{\rm S}}}\,. \end{equation} For the 'standard model' of $\mu$L{}, i.e. point-like sources and lenses, the magnification $\mu$ is then given by (Paczy{\'n}ski~\cite{Pac86}) \begin{equation} \mu(u) = \frac{u^2+2}{u\,\sqrt{u^2+4}}\,. \end{equation} If one assumes uniform rectilinear motion between lens and source with the relative proper motion $\mu_{\rm rel}$, one has \begin{equation} u(t) = \sqrt{u_0^2+\left(\frac{t-t_0}{t_{\rm E}}\right)^2}\,, \end{equation} where $t_{\rm E} = \theta_{\rm E}/\mu_{\rm rel}$ is the event timescale, $u_0$ gives the impact parameter, and $t_0$ gives the time of the smallest separation between lens and source. This means that one observes a light curve $\mu(u(t))$ that has the form derived by Paczy{\'n}ski (\cite{Pac86}), the so-called Paczy{\'n}ski curve.
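As a minimal illustration of the formulas above, the following sketch evaluates a standard point-source, point-lens Paczy\'nski light curve; the values of $u_0$, $t_0$ and $t_{\rm E}$ are illustrative assumptions, not fits to any event.

```python
import numpy as np

# Standard (point-source, point-lens) Paczynski light curve from the
# formulas above.  u0, t0, tE are illustrative assumptions.

def magnification(u):
    # mu(u) = (u^2 + 2) / (u * sqrt(u^2 + 4))
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def u_of_t(t, t0, tE, u0):
    # rectilinear relative motion: u(t) = sqrt(u0^2 + ((t - t0)/tE)^2)
    return np.sqrt(u0**2 + ((t - t0) / tE)**2)

t0, tE, u0 = 0.0, 20.0, 0.1      # days; illustrative values
t = np.linspace(-60.0, 60.0, 1201)
mu = magnification(u_of_t(t, t0, tE, u0))

# peak magnification approaches 1/u0 for u0 << 1
print(f"peak magnification = {mu.max():.2f}")
```

For $u_0 = 0.1$ the peak magnification is $\simeq 10$, illustrating the high-magnification regime $\mu \simeq 1/u$ discussed below.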
For recent and complete reviews of the theory of microlensing and of the observational results, we further refer the reader to the works of Paczy{\'n}ski (\cite{Pac-rev}), Roulet \& Mollerach (\cite{Roulet97}), and Jetzer (\cite{Jetzer98}). More sophisticated models of the lens and the source include the finite source size and the binarity (or multiplicity) of these objects. For such models, the light curves can differ significantly from Paczy{\'n}ski curves. If one neglects the binary motion, a binary lens is characterized by two parameters, the mass ratio between the lens objects $q$ and their instantaneous angular separation $d$, measured in units of $\theta_{\rm E}$. The model of a binary lens includes the configuration of a star that is surrounded by a planet. In the following, we let $M$ denote the mass of the more massive object (star), while $m$ denotes the mass of the less massive object (planet) and $q = m/M < 1$. This means that $\theta_{\rm E}$ refers to the mass $M$ of the more massive object. For any mass ratio $q$, the caustics of a binary lens can show three different topologies (Schneider \& Wei{\ss}~\cite{SchneiWei}; Erdl \& Schneider~\cite{Erdl}) depending on the separation $d$: for 'wide binaries' there are two disjoint diamond-shaped caustics near the positions of each of the lens objects, for 'intermediate binaries' there is only one caustic with 6 cusps, and for 'close binaries' there is one diamond-shaped caustic near the center-of-mass and two small triangular-shaped caustics. As $q \to 0$, the region of intermediate binaries vanishes as $q^{1/3}$ and the transition close-intermediate-wide occurs at $d=1$ (Dominik~\cite{Dominik99}). This means that for planets, one has a 'central caustic' near the star and either a diamond-shaped caustic (for $d>1$) or two triangular-shaped caustics (for $d < 1$) at the source position whose image under the lens action of the star alone falls at the position of the planet.
We will refer to the latter caustic(s) as 'planetary caustic(s)'. Since the caustics are small and well-separated, the light curve mainly follows a Paczy{\'n}ski curve and is only locally distorted by either of the caustics. This allows us to distinguish two main types of anomalies in the light curve, namely the events affected by the central caustic (type I), and the ones affected by one of the planetary caustics (type II). To produce a type I anomaly, the source has to pass the lens star with a small impact parameter, say $u_0 \lesssim 0.1$. Unless the source size is larger than the scale of the variations in the magnification pattern, type I anomalies occur in high-magnification events ($\mu \simeq 1/u$ for $u \ll 1$). Moreover, the anomaly occurs near the maximum of the underlying Paczy{\'n}ski curve. Griest \& Safizadeh (\cite{GS98}) have pointed out that for high-magnification events, the probability to detect a planetary signal, namely as a type I anomaly, is very large. This high detection probability arises because the central caustic is often elongated along the lens axis, so that the magnification pattern around the lens star is highly asymmetric. If there are $N$ planets with masses $m_i$ around the parent star with mass $M$, they all perturb the central caustic (Gaudi et al.~\cite{Naber98}), where the effect is proportional to the mass ratios $q_i = m_i/M$ (Dominik~\cite{Dominik99}). Though in principle one can obtain information about the whole planetary system, the extraction of this information is non-trivial and the results are likely to be ambiguous (Dominik \& Covone, in preparation).
Type II anomalies are produced when the source passes the lens closely enough to produce a detectable Paczy{\'n}ski curve ($u_0 \lesssim 1$), but not closely enough to feel the effects of the central caustic ($u_0 \gtrsim 0.1$), while it is also affected by one of the planetary caustics. The source light beam is then also deflected by the planet, and a perturbation of the Paczy\'nski curve is produced at a time that depends on the angular separation between star and planet. From this time and from the duration of the perturbation, the mass ratio $q$ and the separation $d$ can be determined from high-quality observations, unless the duration is strongly influenced by the source size (Gaudi \& Gould~\cite{Gaudi97}; Dominik \& Covone, in preparation). Experiments towards unresolved star fields in nearby galaxies impose severe constraints on the detection of $\mu$L{} events in general and on the detection of anomalies in particular. First, only the parts of the light curve that correspond to large magnifications can be observed. Second, anomalies can only be seen when they constitute very large deviations in the received flux. Therefore, all observed events are high-magnification events, which provides many candidates in which to look for type I anomalies. On the other hand, the background Paczy{\'n}ski curve for type II anomalies is not observed, and the planetary caustic has to be approached very closely to produce a high magnification. Therefore, type II anomalies are not likely to be detected in M31 experiments. Griest \& Safizadeh (\cite{GS98}) have studied the influence of the finite source size on type I anomalies. For sources in the Galactic bulge and lenses in the Galactic disk, they find that the finite source size can be neglected even for giant sources ($R \sim 10~R_{\odot}$) for a solar-mass parent star and a mass ratio $q > 10^{-3}$. 
The characteristic quantity for the effect of the finite source size is the ratio between the source radius and the physical size of the Einstein radius at the distance of the source \begin{equation} r_{\rm E}' = D_{\rm S}\,\theta_{\rm E} = \sqrt{\frac{4GM}{c^2}\,\frac{D_{\rm S}\,(D_{\rm S} - D_{\rm L})} {D_{\rm L}}}\,. \end{equation} For lensing of bulge stars by disk stars, $D_{\rm S} \sim 8~\mbox{kpc}$ and $D_{\rm L} \sim D_{\rm S}/2$, while for M31 sources and lenses, $D_{\rm S} \sim D_{\rm L} \sim 600~\mbox{kpc}$ and $D_{\rm S} - D_{\rm L} \sim 10~\mbox{kpc}$. Therefore, $r_{\rm E}'$ is approximately the same in the two cases, and the estimates of the effect of the finite source size made for bulge sources and disk lenses are also valid for M31 sources and lenses. If the finite source size becomes non-negligible, the planetary signal is suppressed. We therefore restrict our discussion to planets with mass ratio $q > 10^{-3}$, i.e.\ Jupiter-like planets around solar-mass stars and systems with larger mass ratios. \section{Detectability of anomalies in M31 experiments} For $\mu$L{} searches towards M31, each pixel of the detector contains light from many unresolved stars. There are several differences between classical microlensing surveys (i.e.\ surveys of resolved stars) and surveys towards unresolved star fields. The first one concerns the photometric errors. While in the classical regime the photon noise is generally dominated by the light from the lensed star, for observations towards unresolved star fields it is dominated by the flux from stars that are not lensed. This means that the noise does not depend on the magnification. A second important difference is that it is impossible to determine the baseline flux of the lensed star. This means that the actual magnification and the Einstein time $t_{\rm E}$ of the event are not known. 
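The claim above that $r_{\rm E}'$ is approximately the same for bulge/disk lensing and for M31 self-lensing can be verified numerically (a sketch with rounded physical constants; the distances are the order-of-magnitude values quoted in the text):

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8          # speed of light [m s^-1]
M_SUN = 1.989e30     # solar mass [kg]
KPC = 3.086e19       # kiloparsec [m]

def r_E_prime(M, D_S, D_L):
    """Physical Einstein radius projected to the source distance:
    r_E' = D_S * theta_E = sqrt(4GM/c^2 * D_S (D_S - D_L) / D_L)."""
    return math.sqrt(4.0 * G * M / C**2 * D_S * (D_S - D_L) / D_L)

# Galactic bulge source lensed by a disk star: D_S ~ 8 kpc, D_L ~ D_S/2.
r_bulge = r_E_prime(M_SUN, 8.0 * KPC, 4.0 * KPC)
# M31 source and M31 lens: D_S ~ D_L ~ 600 kpc, D_S - D_L ~ 10 kpc.
r_m31 = r_E_prime(M_SUN, 600.0 * KPC, 590.0 * KPC)

# Both come out at ~1e12 m (a few AU), so the two cases are comparable.
print(r_bulge, r_m31, r_m31 / r_bulge)
```

For a solar-mass lens the two projected Einstein radii agree to within some ten per cent, which justifies carrying over the finite-source estimates from the Galactic case.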
Moreover, in surveys towards unresolved star fields, there is a natural selection bias for the events with respect to the impact parameters and the luminosity of the lensed sources (e.g.\ Kaplan~\cite{Kaplan97}): events that involve lensing of giant stars and events with small impact parameters are preferred. Searches for $\mu$L{} events towards unresolved star fields (Crotts~\cite{Columbia}; Baillon et al.~\cite{Baillon}), towards M31 in particular, have motivated the development of new photometric methods. While the AGAPE team has implemented a 'super-pixel photometry' method (Ansari et al.~\cite{Ansari97}; Kaplan 1998), the Columbia-VATT team has used a 'difference image photometry' method (Crotts \& Tomaney~\cite{CT}; Tomaney \& Crotts~\cite{TC}). Recently, Alard \& Lupton (\cite{Alard98}) have improved the latter method, yielding the 'optimal image subtraction' (OIS) technique. The Columbia-VATT collaboration has found six candidate events towards M31 (Crotts \& Tomaney~\cite{CT}). AGAPE has observed 7 fields towards M31 in the autumns of 1994 and 1995, using the 2-m telescope Bernard Lyot at the Pic du Midi Observatory. Their data analysis has selected 19 microlensing candidate events that are broadly consistent with Paczy\'nski curves. Only two of them can be retained as convincing candidates at the moment (Melchior~\cite{Mel}). One of these events shows a small but statistically significant deviation from a Paczy{\'n}ski curve (Ansari et al.~\cite{Z1}). This event could be due to lensing of a binary source, or even to a binary lens. There are too few data points to resolve the question, and further observations are needed to confirm that the event is due to $\mu$L{} and not to stellar variability. In any case, the possibility to detect binary-lens events towards unresolved star fields has been demonstrated. 
This gives us some confidence that future $\mu$L{} searches towards nearby galaxies could not only detect binary-lens events, but also reveal Jupiter-like planets. From a general point of view, we expect a larger fraction of anomalous microlensing events than in surveys of resolved stars, since smaller impact parameters are favoured, so that source trajectories are more likely to pass through the more asymmetric parts of the magnification pattern. However, the less accurate photometry sets a severe limit on the detection of anomalies. In the following, we determine how large an anomaly has to be in order to be detected in an M31 $\mu$L{} experiment. The light in an observed pixel is composed of contributions from the lensed star and many other unresolved stars. Since the light from the lensed star is in general spread over several pixels, only a fraction $f$ of it is received on a given pixel. If $\mu$ denotes the magnification of the lensed star, and $F_{\rm star}^{(0)}$ denotes its unlensed flux, the flux variation on the pixel is given by \begin{equation} \Delta F_{\rm pixel} = (\mu - 1) f F_{\rm star}^{(0)}, \label{pixel} \end{equation} where $\mu$, $f$ and $F_{\rm star}^{(0)}$ are not observed individually. Let us now consider an anomaly in an event, i.e.\ a deviation from a Paczy{\'n}ski curve. Let $\mu$ denote the magnification for the Paczy{\'n}ski curve and $\mu'$ the magnification for the anomalous curve. The difference in the pixel flux variations is then given by \begin{equation} \Delta(\Delta F_{\rm pixel}) = (\mu' - \mu) f F_{\rm star}^{(0)}\,. \end{equation} This difference is detectable when it exceeds the rms fluctuation $\sigma_{\rm pixel}$ by a factor $Q$, i.e.\ \begin{equation} \mu' -\mu \geq Q \frac{\sigma_{\rm pixel}}{f F_{\rm star}^{(0)}}\,. \end{equation} One sees that the brighter the star, the smaller the magnification difference that can be detected. Thus, giant stars are preferred as sources. 
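The detectability condition just derived can be phrased as a small helper function; the numbers below are purely illustrative assumptions, chosen only to show how source brightness enters:

```python
def anomaly_detectable(mu_anom, mu_pacz, f, F_star, sigma_pixel, Q=2.0):
    """True if the anomalous magnification mu_anom deviates from the
    Paczynski magnification mu_pacz by enough that the change in pixel
    flux, |mu_anom - mu_pacz| * f * F_star, exceeds Q times the rms
    pixel fluctuation sigma_pixel."""
    return abs(mu_anom - mu_pacz) * f * F_star >= Q * sigma_pixel

# Illustrative (assumed) numbers: with Q = 2 and sigma_pixel = 50,
# a star contributing f * F_star = 50 flux units to the pixel needs
# |mu' - mu| >= 2, while a four-times-brighter star needs only 0.5.
faint = anomaly_detectable(12.0, 11.0, 0.5, 100.0, 50.0)   # not detectable
bright = anomaly_detectable(12.0, 11.0, 0.5, 400.0, 50.0)  # detectable
```

The same magnification deviation thus passes the threshold for the brighter source but not for the fainter one, which is why giants dominate the anomaly statistics.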
For $\mu \gg 1$, one obtains with Eq.~(\ref{pixel}) a detection threshold $\delta_{\rm th}$ for anomalies, \begin{equation} \delta_{\rm th} \equiv \left| \frac{\mu' - \mu}{\mu} \right| _{\rm th} = Q \frac{\sigma_{\rm pixel}} {\Delta F_{\rm pixel}}\,. \label{mia2} \end{equation} \begin{table} {\centering \begin{tabular}{|c|c|c|c|} \hline \#& $\sigma_{\rm pixel}$& $(\Delta F_{\rm pixel})_{\rm max}$& $\sigma_{\rm pixel}/(\Delta F_{\rm pixel})_{\rm max}$\\ \hline \hline 1& 90& 850& 0.106\\ \hline 2& 82& 680& 0.121\\ \hline 3& 78& 1286& 0.061\\ \hline 4& 99& 870& 0.114\\ \hline 5& 85& 900& 0.094\\ \hline 6& 44& 870& 0.050\\ \hline 7& 46& 1200& 0.038\\ \hline 8& 37& 1110& 0.033\\ \hline 9& 53.5& 830& 0.064\\ \hline 10& 73& 940& 0.077\\ \hline 11& 100& 620& 0.094\\ \hline 12& 56& 645& 0.121\\ \hline 13& 101& 945& 0.107\\ \hline 14& 63& 1320& 0.048\\ \hline 15& 48& 790& 0.061\\ \hline 16& 53& 600& 0.089\\ \hline 17& 55& 780& 0.071\\ \hline 18& 60& 807& 0.074\\ \hline 19& 54& 860& 0.063\\ \hline \end{tabular}\par} \caption{The rms fluctuation $\sigma_{\rm pixel}$ and the maximum flux variation $(\Delta F_{\rm pixel})_{\rm max}$ for the 19 AGAPE candidate events towards M31, analyzed using the super-pixel photometry method (Ansari et al.~\cite{Ansari97}).} \end{table} To obtain an estimate, we examine the values of $\sigma_{\rm pixel}$ and $(\Delta F_{\rm pixel})_{\rm max}$, i.e.\ $\Delta F_{\rm pixel}$ at the light-curve maximum, for the 19 candidate events detected by AGAPE and analyzed using the super-pixel photometry technique (Ansari et al.~\cite{Ansari97}). This analysis has been performed on $7 \times 7$-pixel squares, so-called ``super-pixels'', which roughly correspond to the average PSF size. It has been found that $\sigma_{\rm pixel} \sim 1.7~\sigma_\gamma$, where $\sigma_\gamma$ denotes the photon noise. The values of $\sigma_{\rm pixel}$ and $(\Delta F_{\rm pixel})_{\rm max}$ as well as their ratio are listed in Table~1. 
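The mean of the ratio column of Table~1, and the resulting threshold for $Q=2$, can be reproduced in a few lines (a sketch; the variable names are ours):

```python
# sigma_pixel / (Delta F_pixel)_max for the 19 AGAPE candidate events
# (the last column of Table 1).
ratios = [0.106, 0.121, 0.061, 0.114, 0.094, 0.050, 0.038, 0.033,
          0.064, 0.077, 0.094, 0.121, 0.107, 0.048, 0.061, 0.089,
          0.071, 0.074, 0.063]

mean = sum(ratios) / len(ratios)
std = (sum((r - mean) ** 2 for r in ratios) / len(ratios)) ** 0.5

Q = 2.0
delta_th = Q * mean   # detection threshold near the light-curve maximum,
                      # ~0.156, i.e. approximately 15%
print(mean, std, delta_th)
```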
The ratio $\sigma_{\rm pixel}/(\Delta F_{\rm pixel})_{\rm max}$ has a mean value of $0.078 \pm 0.026$. Therefore, for $Q=2$, we obtain $\delta_{\rm th} \simeq 15 \%$ for the detection of anomalies near the maximum. For 'optimal image subtraction', the effective rms fluctuation can be pushed closer to the photon-noise limit (Alard \& Lupton~\cite{Alard98}), yielding $\sigma_{\rm pixel} \sim 1.2~\sigma_\gamma$, so that the detection threshold is reduced to $\delta_{\rm th} \simeq 10 \%$. \section{Detection probability for planetary signals} For $\mu$L{} events towards M31, the lens can be located in the Milky Way halo, the M31 halo, or the M31 bulge. It is almost impossible to discriminate among these different possible lens locations from a single observed light curve, though for a very small subset of microlensing events it is possible to tell something about the lens location (Han \& Gould~\cite{HG96}). Since only those events for which the lens is in the M31 bulge are expected to be due to stars, we will consider only these events as potential targets for a search for planetary anomalies. As pointed out before, one also needs a small impact parameter in order to produce an observable signal. Therefore, we restrict our attention to events that satisfy the following two conditions: \begin{enumerate} \item $u_{0} < u_{\rm th} \equiv 0.1$; \footnote{For smaller $u_{\rm th}$, the detection probability will be larger.} \item lens in the bulge or in the disk of the target galaxy. \end{enumerate} Since we need more than one observed data point to be confident that we observe a $\mu$L{} anomaly, we require an observable anomaly to deviate by more than $\delta_{\rm th}$ {\em and} to last longer than $t_{\rm E} / 100$, i.e.\ $\sim 7$~hours for a month-long event, which requires some dense sampling over the peak of the $\mu$L{} event. 
The probability to detect a signal depends on the projected separation $d$ between the star and the Jupiter-like planet, as defined in Sect.~\ref{miclens}. Our calculation of the detection probability is similar to the one performed by Griest \& Safizadeh (\cite{GS98}), but we use different detection criteria here. For calculating the magnifications, we have used the approach developed by Dominik (\cite{Dominik95}), released as the 'Lens Computing Package' (LCP). The ``cross section'' of the central caustic depends strongly on the direction of the source trajectory. Due to the elongated shape of the caustic along the lens axis, it has a maximum for trajectories orthogonal to this axis and a minimum for parallel trajectories. We have calculated the largest impact parameter $u_{\rm max} \leq u_{\rm th}$ that satisfies our detection criterion for several different source directions. The detection probability for a planet for each of the considered directions $\alpha$ is then simply given by $P(\alpha) = u_{\rm max}(\alpha)/u_{\rm th}$, using the fact that the distribution of impact parameters is approximately uniform for small impact parameters in microlensing experiments towards unresolved star fields. The final detection probability has been calculated by averaging over the different trajectories. The results are shown in Fig.~\ref{fig1}. \begin{figure} \epsfig{figure=planet_fig.eps,width=88mm} \caption{The probability to see a deviation larger than $\delta_{\rm th} = 10\%$ or $\delta_{\rm th} = 15\%$ caused by a Jupiter-like planet ($q = 10^{-3}$) that lasts more than $t_{\rm E}/100 \sim 7~\mbox{hours}$, as a function of the projected separation $d$ between star and planet in units of Einstein radii.} \label{fig1} \end{figure} For both values of $\delta_{\rm th}$, there is a reasonable probability to detect planetary signals for planets in the lensing zone (i.e. 
the range of planetary positions for which the planetary caustics lie within the Einstein ring of the major component of the system, $0.618 \leq d \leq 1.618$). In agreement with previous work (Griest \& Safizadeh~\cite{GS98}; Dominik~\cite{Dominik99}), the detection probability reaches a maximum for planets located close to the Einstein ring of their parent star (the caustic size increases towards $d \simeq 1$). Averaged over the lensing zone, the detection probability is $\sim 20\%$ for $\delta_{\rm th} = 15\%$ and $\sim 35\%$ for $\delta_{\rm th} = 10\%$. With a 2-m telescope, one can detect $\sim 400$ events per year towards the M31 bulge (Han~\cite{Han96}). Present-day microlensing surveys towards M31 are still far from this theoretical limit, but the technique has been demonstrated to be successful, and fruitful developments can be expected in the near future. With $\sim 50\%$ of these events being due to M31 bulge lenses (Han~\cite{Han96}) and $\sim 50\%$ of these bulge-lens events having $u_0 < 0.1$ (Baillon et al.~\cite{Baillon}), one can expect to detect up to 35 anomalies caused by Jupiter-like planets per year if every M31 bulge star has such a planet in its lensing zone. To be able to observe and characterize the planetary anomaly, frequent observations (every few hours) during the anomaly are necessary. Future observing programs towards M31 or other neighboring galaxies should take this into account. \section{Extraction of planetary parameters} There is a crucial difference between the detection of a signal that is consistent with a planet and the detection of a planet, i.e.\ the determination of parameters that unambiguously characterize its nature. In fact, it has been shown that the first microlensing event, MACHO LMC-1, is consistent with a planet (Rhie \& Bennett~\cite{RhieLMC1}; Alcock et al.~\cite{MACHObinary}). 
However, it appears to be consistent with a binary lens of practically any mass ratio $q$ (Dominik \& Hirshfeld~\cite{DoHi2}), so that the existence of a planet cannot be claimed from this event. Unfortunately, most of the papers about the detection of planets only show the possibility that a signal arising from a planet can be detected (Mao \& Paczy{\'n}ski~\cite{MP}; Griest \& Safizadeh~\cite{GS98}; Safizadeh et al.~\cite{Safplan}), while the question of the extraction of parameters has only been addressed by a few authors. Dominik (\cite{Do:ND}) has stressed that this is complicated by several points: there may be several different models that are consistent with the data, the fit parameters have finite uncertainties (in particular, blending strongly influences $t_{\rm E}$), and the physical lens parameters can only be inferred on a statistical basis using assumptions about galaxy dynamics. Gaudi \& Gould (\cite{Gaudi97}) have shown that one needs frequent and precise observations to determine the mass ratio $q$ and the separation $d$ from type II anomalies. However, it is more difficult to constrain these parameters for type I anomalies. Additional complications arise because one does not obtain information about the time separation between the main peak and the planetary peak, there is a degeneracy between $d$ and $q$ (Dominik~\cite{Dominik99}), and the observed anomaly results from the combined action of all planets around the lens star (Gaudi et al.~\cite{Naber98}). Apart from the question of whether $d$ and $q$ are well-determined, these parameters give neither the mass $m$ of the planet nor its true separation $a$. Moreover, an additional uncertainty enters because $d = a_{\rm p}/r_{\rm E}$ corresponds only to the projected instantaneous separation $a_{\rm p}$. Using models for the galactic dynamics, rather broad probability distributions for $a$ and $m$ result. 
However, as stated before, photometric microlensing is the only method capable of detecting signals of planets around stars in M31. As we have shown, the prospects for detecting planetary signals are good. This means that even if planets can be truly characterized in only a fraction of the events in which signals consistent with a planet are detected, there is still a chance to claim a planet. Such a subset of events could, e.g., consist of events where the source trajectory crosses the caustic. Such caustic-crossing events are likely to provide additional information. A complete discussion of the extraction of planetary parameters is beyond the scope of this paper and will be presented elsewhere (Dominik \& Covone, in preparation). \section{Summary and conclusions} While microlensing is already the only method to detect planets around stars at distances of several kpc, namely by precise and frequent monitoring of $\mu$L{} events towards the Galactic bulge, future $\mu$L{} experiments towards nearby galaxies such as M31 can push this distance limit much further. Pixel lensing and difference image photometry have proven to be successful methods to search for $\mu$L{} events towards unresolved star fields, and improvements are expected from the 'Optimal Image Subtraction' (OIS) technique (Alard \& Lupton~\cite{Alard98}). While AGAPE recently reported the observation of possibly the first anomalous $\mu$L{} event towards M31 (Ansari et al.~\cite{Z1}), we have shown that even planetary systems can give rise to measurable anomalies. These planetary anomalies are due to passages of the source close to the central caustic near the parent star, i.e.\ the detection channel discussed by Griest \& Safizadeh (\cite{GS98}). 
Using the estimate of Han (\cite{Han96}) that about 400 events per year towards M31 can be detected with a 2-m telescope, we estimate that up to 35 Jupiter-mass planets per year can be detected if such planets are common in the lensing zones of their parent stars. Following theoretical work by Gaudi \& Sackett (\cite{GS}), PLANET (Albrow et al.~\cite{PLANET:O14}) and MPS and MOA (Rhie et al.~\cite{MPSMOA}) have recently published first results concerning the determination of the abundance of planets from the absence of observed signals. From our estimates it follows that future $\mu$L{} experiments towards M31 can have the power to yield strong constraints on the abundance of Jupiter-mass planets. \section*{Acknowledgements} We gratefully thank V. Cardone, J. Kaplan, Y. Giraud-Heraud, P. Jetzer and E. Piedipalumbo for stimulating discussions. We also thank the referee for remarks that helped to improve the paper. RdR and AAM are financially supported by the M.U.R.S.T. grant PRIN97 ``SIN.TE.SI''. GC has received support from the European Social Funds. The work of MD has been financed by a Marie Curie Fellowship (ERBFMBICT972457) from the European Union.
\section{Introduction} \label{sec:intro} The construction of $N\!N$ potentials has a long history, and a large number of different models have appeared in the literature. After a comparison of these $N\!N$ potentials by confronting them first with the $pp$ scattering data~\cite{Sto93a} and later also with the $np$ scattering data~\cite{Sto95}, it was found that all existing models do a rather poor job (some worse than others) in properly describing these data. This is rather disturbing, since most of these models are used in many-body calculations and $pp$ bremsstrahlung calculations to draw certain conclusions on three-body forces, relativistic effects, off-shell effects, etc. One may thus ask whether it is at all valid to conclude anything from a many-body calculation if the applied $N\!N$ potential cannot even adequately describe the two-nucleon data. This uncertainty has recently been cleared up with the construction of a number of high-quality $N\!N$ potential models~\cite{Sto94a,Wir95}. By high quality we mean that these models describe the $N\!N$ scattering data with the almost optimal $\chi^{2}/N_{\rm data}\approx1$. Although based on very different functional forms, these potentials give remarkably similar results in many-body~\cite{Fri93,Zhe94,Glo96} and $pp$ bremsstrahlung~\cite{Ede96} calculations. One `disadvantage' of these new potential models, however, is that they are largely phenomenological and only explicitly include one-pion exchange. Our next task, therefore, is to try to construct a high-quality potential model which is less phenomenological in that it explicitly contains the heavier-meson exchanges (with coupling constants which can be compared with empirical data) and which, possibly, is consistent with the symmetries of QCD. Recently, Ord\'o\~nez, Ray, and van Kolck~\cite{Ord96} presented a nucleon-nucleon ($N\!N$) potential based on an effective chiral Lagrangian of pions, nucleons, and $\Delta$ isobars. 
Using 26 free parameters, the agreement with the experimental scattering data was found to be satisfactory up to lab energies of about 100 MeV. An extension to higher energies and a further improvement in the description of the data would require an expansion to higher orders in chiral perturbation theory, making the model much more complicated and introducing many new parameters. Hence, the authors conclude that it is not practical for potential models derived from effective chiral Lagrangians to compete with more phenomenological approaches. In this paper we would like to advocate an alternative approach. We want to investigate whether it is possible to construct a potential model which not only gives a satisfactory description of the scattering data up to $\sim$350 MeV using a limited number of free parameters, but which also retains the salient features of chiral symmetry. For that purpose we here do not integrate out all mesons other than the pion~\cite{Wei90}, but rather adopt the successful approach used in one-boson-exchange models and keep all lowest-lying mesons with masses lower than 1 GeV, say. The $N\!N$ potential model is then obtained by evaluating the standard one-boson-exchange contributions involving these mesons, but now including the contributions of the box and crossed-box two-meson diagrams~\cite{Rij96a} and of the pair-meson diagrams where at least one of the nucleon lines contains a pair-meson vertex~\cite{Rij96b}. The potential contributions are calculated up to order $1/M^{2}$ in the nucleon mass as is customary in the conventional one-boson-exchange approaches, and so we want to stress that here we are {\it not\/} doing chiral perturbation theory. Here we only use chiral symmetry to generate the interaction Lagrangians and to find constraints for the associated coupling constants. However, as we will see, the pair-meson diagrams arise as a direct consequence of chiral symmetry. 
The box and crossed-box diagrams then also have to be included because they are of the same order in the number of exchanged mesons as the pair-meson diagrams. The pair-meson interactions can be viewed~\cite{Wei90} as the result of integrating out the heavy-meson (masses larger than 1 GeV, say) and resonance (e.g., $\Delta$, $N^{\ast}$, $Y^{\ast}$) degrees of freedom in the two-meson-exchange processes. Also, according to ``duality''~\cite{Dol68}, the resonance contributions to the various meson-nucleon amplitudes can be described approximately by heavy-meson exchanges. Treating the heavy-meson propagators as constants, which should be adequate at low energies, then leads directly to pair-meson exchanges. In Refs.~\cite{Rij96a,Rij96b} we already showed that the inclusion of the two-meson (box, crossed-box, and pair) contributions provides a substantial improvement in the description of the scattering data as compared to a potential containing only the standard one-boson exchanges. Although originally~\cite{Rij93,Sto94b} the meson-pair interaction Lagrangians were taken to be purely phenomenological, we later found that the pair-meson coupling constants could all be fixed using experimental input and chiral-symmetry constraints. In particular, in Ref.~\cite{Rij96b} the estimates for the pair-meson coupling constants were based on the linear $\sigma$ model~\cite{Gel60}. In order to appreciate this result, it should be realized that, by fixing the pair-meson coupling constants in this way, this improvement could be obtained without the introduction of any new parameters. This remarkable result encourages us to go beyond the $N\!N$ model and to investigate whether a similar approach will also be fruitful in the construction of an extended hyperon-nucleon ($Y\!N$) potential. An important motivation for the development of an extended $Y\!N$ model is provided by the study of hypernuclei using one-boson-exchange models~\cite{Yam90,Car90}. 
The problem with the construction of a $Y\!N$ potential is that there are only a few experimental data available, and these data are rather old and not very accurate. At present it is very difficult (if not impossible) to determine all the free parameters from the scattering data alone. A reduction in the number of free parameters is obtained by first fixing the parameters which also play a role in $N\!N$ scattering (and which are easier to determine, since there are many accurate $N\!N$ scattering data), and then using SU(3) symmetry to fix the coupling constants in the $Y\!N$ potential. This approach has been used successfully in the various hard-core~\cite{Nag77} and soft-core~\cite{Mae89} $Y\!N$ one-boson-exchange models of the Nijmegen group. The J\"ulich models~\cite{Hol89,Reu94} even impose an SU(6) symmetry. As a further reduction in the number of parameters, it would be convenient if we were able also to fix all (or at least most) single-meson coupling constants at their empirical values. As a first step, in this paper we will construct a chiral-invariant Lagrangian for the meson-baryon sector. Rather than extending the linear $\sigma$ model (the model we used in \cite{Rij96b}) to the strange sector, we here find it more convenient to explore the nonlinear realization of the spontaneously broken chiral symmetry. This allows us to express all pair-meson coupling constants in terms of the single-meson vertex parameters. Using symmetry arguments and further experimental data we give estimates for these single-meson vertex parameters. We then have two options for constructing a potential model: (i) we can either strictly impose all the constraints on the coupling constants and investigate whether it is at all possible to impose chiral symmetry in a potential model or, (ii) we can relax at least some of the constraints and, hence, optimize the description of the scattering data. 
As an application we will here only explore the first option and show that a fully constrained $N\!N$ model indeed allows for a very satisfactory description of the data. This is a remarkable result because, until now, in potential models it has never been possible to fix more than only a few coupling constants at their empirical values. The second option reflects that the imposed symmetries need not be exactly true, a fact which can be useful when fine tuning the model to the scattering data. This option will be left for the future. The outline of the paper is as follows. In Sec.~\ref{sec:content} we list the baryons and mesons of the model. Section~\ref{sec:su2} gives a review of the linear $\sigma$ model~\cite{Gel60} and the nonlinear realization of chiral symmetry as introduced by Weinberg~\cite{Wei67}. We briefly indicate how the model can be made to agree with experimental data. In Sec.~\ref{sec:su3} we extend the nonlinear realization to SU(3) to obtain a chiral-invariant meson-baryon Lagrangian. This SU(3) Lagrangian describes the meson-baryon interactions of the scalar, pseudoscalar, vector, and axial-vector meson nonets with the baryon octet. Section~\ref{sec:single-meson} then lists estimates for the various single-meson vertex parameters (singlet and octet coupling constants, mixing angles, and $F/(F+D)$ ratios), using theoretical and experimental input. The coupling constants for the double-meson vertices can all be expressed in terms of these single-meson vertex parameters; this will be discussed in Sec.~\ref{sec:double-meson}. Unfortunately, we have not been able to construct a potential model that satisfies all these constraints. Therefore, in Sec.~\ref{subsec:extension} we briefly indicate how the interaction Lagrangian can be extended by introducing new free parameters without abandoning the empirical constraints on the single-meson coupling constants. 
This allows for much more flexibility in actually imposing these constraints in a baryon-baryon potential model. As an application, in Sec.~\ref{sec:application} we show that with this extension it is indeed possible to construct an $N\!N$ potential that satisfies the constraints of chiral symmetry and which gives a very satisfactory description of the $N\!N$ scattering data up to 350 MeV. \section{Meson and baryon content} \label{sec:content} In the following, the baryons are the members of the SU(3) octet: $N$(940), $\Lambda$(1115), $\Sigma$(1192), and $\Xi$(1318). The mesons consist of the standard pseudoscalar nonet ($\pi$, $\eta$, $\eta'$, $K$) and vector nonet ($\rho$, $\omega$, $\phi$, $K^{\ast}$). The physical isoscalar mesons in these nonets come about as an admixture of the pure octet and pure singlet isoscalar states. That is, the physical $\eta$ and $\eta'$ are admixtures of the pure octet $\eta_{8}$ with the pure singlet $\eta_{0}$, and the physical $\omega$ and $\phi$ are admixtures of the pure octet $\omega_{8}$ with the pure singlet $\omega_{0}$. In the SU(3)$\times$SU(3) chiral-invariant model, the extension of the global chiral symmetry to a local chiral symmetry introduces two octets of gauge fields, which can be re-expressed in terms of one octet of vector and one octet of axial-vector fields (see below). It seems natural to identify the octet of vector gauge fields with the octet of vector mesons. The singlet (which is not a gauge field) is then added to complete the nonet. Similarly, the octet of axial-vector fields can be identified with the octet of axial-vector mesons. Adding the singlet and mixing the pure octet and pure singlet isoscalar states is then assumed to give the physical nonet of axial-vector mesons [$a_{1}(1260)$, $f_{1}(1285)$, $f_{1}(1420)$, and $K_{1}(1270)$]. 
In this scenario, the axial-vector mesons are a necessary ingredient in establishing the local symmetry, but one can argue that their role in low-energy baryon-baryon potential models is likely to be negligible. Their masses are well above 1 GeV, and so the corresponding short-range potentials are probably already effectively included by form factors at the meson-baryon vertices. For alternative approaches to include vector and axial-vector mesons in a chiral-invariant way, see Refs.~\cite{Eck89,Jen95}. The existence of the $0^{++}$ scalar nonet (octet plus singlet) is still controversial. The Particle Data Group~\cite{PDG96} lists an isovector state $a_{0}(980)$ and an isoscalar state $f_{0}(980)$, while there appears to be evidence for a strange isodoublet of scalar mesons, denoted by $\kappa(887)$~\cite{Sve92a}. These would make up the octet. There is also evidence for a broad isoscalar $0^{++}(760)$ state~\cite{Sve92b}, which could be the scalar singlet. Alternatively, in analogy with the pseudoscalar and vector nonets, it is likely that also the $f_{0}(980)$ and $\varepsilon(760)$ are admixtures of the pure octet and pure singlet isoscalar states. The baryon-baryon potential due to the exchange of a broad meson can be approximated by the sum of two potentials where each potential is due to the exchange of a stable (effective) meson~\cite{Sch71}. Due to the large width of the $\varepsilon(760)$, the low-mass pole in this two-pole approximation is rather small ($\sim$500 MeV), and hence would represent a possible candidate for the low-mass $\sigma$ meson which is a necessary ingredient in baryon-baryon potential models, and which is the ingredient of the linear $\sigma$ model. 
It is beyond the scope of this paper to discuss whether or not a nonet of scalar mesons with masses below 1 GeV really exists, how its existence can be explained or disclaimed in a quark model, or whether a scalar one-boson exchange is nothing else but an effective representation of correlated two-pion exchange. Here we only mention that already some time ago Jaffe~\cite{Jaf77} presented a quark-bag model calculation of $\bar{q}^{2}q^{2}$ mesons where the members and decay modes of the lowest multiplet fit in remarkably well with what is observed for the scalar nonet discussed above. More recently, T\"ornqvist~\cite{Tor95} has calculated the properties of a distorted $\bar{q}q$ nonet in a unitarized quark model, which also identifies the above scalar nonet reasonably well (except for the $\kappa(887)$ state). Finally, after an absence of almost 20 years, the latest edition of the Particle Data Group~\cite{PDG96} now tentatively lists a very broad resonance as $f_{0}$(400--1200) with a width of 600--1000 MeV. Here, we will stick to $\varepsilon(760)$ and assume a width of $\Gamma_{\varepsilon}\approx800$ MeV. \section{SU(2) chiral symmetry} \label{sec:su2} Although the SU(2) case has been discussed extensively in the literature, we believe it is still instructive to first review the SU(2) case in some detail before we turn to the SU(3) case. In the SU(2) case the results can be easily expressed in terms of the physical fields, which makes it easier to see what is going on. In the SU(3) case the expressions are much more involved. The classic example for a model with chiral SU(2) symmetry is the linear $\sigma$ model~\cite{Gel60}. 
The linear $\sigma$ model contains an isotriplet of pseudoscalars, $\bbox{\pi}$, and an isosinglet scalar, $\sigma$, which can be grouped into \begin{equation} \Sigma=\sigma+i\bbox{\tau}\!\cdot\!\bbox{\pi}, \end{equation} and which transforms under global SU(2)$_{L}$$\times$SU(2)$_{R}$ as \begin{equation} \Sigma \rightarrow L \Sigma R^{\dagger}. \label{Sigmatrans} \end{equation} The nucleon field $\psi$ has left and right components, $\psi_{L,R}={\textstyle\frac{1}{2}}(1\mp\gamma_{5})\psi$, transforming as \begin{equation} \psi_{L}\rightarrow L\psi_{L}, \ \ \ \psi_{R}\rightarrow R\psi_{R}. \end{equation} Given these transformation properties, we can construct the chiral-invariant Lagrangian ${\cal L}={\cal L}_{\psi}+{\cal L}_{\Sigma} +{\cal L}_{I}$, where \begin{eqnarray} {\cal L}_{\psi} &=& \overline{\psi}_{L}i\gamma^{\mu}\partial_{\mu}\psi_{L} +\overline{\psi}_{R}i\gamma^{\mu}\partial_{\mu}\psi_{R}, \nonumber\\ {\cal L}_{\Sigma} &=& {\textstyle\frac{1}{4}}{\rm Tr} (\partial^{\mu}\Sigma^{\dagger}\partial_{\mu}\Sigma) -{\textstyle\frac{1}{4}}\mu^{2}{\rm Tr}(\Sigma^{\dagger}\Sigma) -{\textstyle\frac{1}{8}}\lambda^{2}{\rm Tr} (\Sigma^{\dagger}\Sigma)^{2}, \nonumber\\ {\cal L}_{I} &=& -g(\overline{\psi}_{L}\Sigma\psi_{R} +\overline{\psi}_{R}\Sigma^{\dagger}\psi_{L}), \label{Lsu2lin} \end{eqnarray} and $\mu$, $\lambda$, and $g$ are free parameters. Obviously, the ground state cannot have the full SU(2)$_{L}$$\times$SU(2)$_{R}$ symmetry, since in that case all hadrons (nucleon, pion, sigma in this model) would have partners of equal mass but with opposite parity, which is not the case in the real world. Hence, the chiral symmetry has to be spontaneously broken down to the vectorial subgroup SU(2)$_{V}$. This can be achieved by adding a term linear in the $\sigma$ field, choosing the ground state as $\langle\sigma\rangle=f_{0}$, and shifting the scalar field to $\sigma=s+f_{0}$. 
We then find the familiar Lagrangian \begin{eqnarray} {\cal L} &=& \overline{\psi}(i\gamma^{\mu}\partial_{\mu}-M)\psi -g\overline{\psi}(s+i\gamma_{5}\bbox{\tau}\!\cdot\!\bbox{\pi})\psi \nonumber\\ &&+{\textstyle\frac{1}{2}}(\partial^{\mu}s\partial_{\mu}s -m_{s}^{2}s^{2}) + {\textstyle\frac{1}{2}} (\partial^{\mu}\bbox{\pi}\!\cdot\!\partial_{\mu}\bbox{\pi} -m_{\pi}^{2}\bbox{\pi}^{2}) \nonumber\\ &&-\frac{m_{s}^{2}-m_{\pi}^{2}}{2f_{0}}\,s(s^{2}+\bbox{\pi}^{2}) -\frac{m_{s}^{2}-m_{\pi}^{2}}{8f_{0}^{2}}\, (s^{2}+\bbox{\pi}^{2})^{2}, \label{pscop} \end{eqnarray} where the nucleon mass is given by $M=gf_{0}$, and the scalar and pseudoscalar masses are given by $m_{s}^{2}=\mu^{2}+3\lambda^{2}f_{0}^{2}$ and $m_{\pi}^{2}=\mu^{2}+\lambda^{2}f_{0}^{2}$, respectively. The PCAC (partially-conserved axial-vector current) condition for the axial-vector Noether current, $\partial_{\mu}{\bf A}^{\mu}=m_{\pi}^{2} f_{\pi}\bbox{\pi}$, determines the relation between $\langle\sigma\rangle=f_{0}$ and $f_{\pi}=92.4\pm0.3$ MeV~\cite{PDG96}, the pion decay constant. As we will see below, the introduction of vector and axial-vector fields enforces a renormalization of the pion field, and so we cannot make the identification $f_{0}=f_{\pi}$ (a true identity in the absence of vector and axial-vector gauge fields). Weinberg has shown~\cite{Wei66} that the linear $\sigma$ model has the major disadvantage that it hides the fact that soft pions are emitted in clusters by derivative couplings from external lines, and so it is more convenient to transform the nonderivative $\overline{\psi}i\gamma_{5}\bbox{\tau}\!\cdot\!\bbox{\pi}\psi$ and $\sigma\bbox{\pi}^{2}$ interactions away. 
For that purpose, Weinberg defines a nonlinear transformation of the nucleon fields, given by~\cite{Wei67} \begin{equation} \psi = (1+\bbox{\xi}^{2})^{-1/2} (1-i\gamma_{5}\bbox{\tau}\!\cdot\!\bbox{\xi})N, \end{equation} with $\bbox{\xi}$ chosen such that \begin{equation} \overline{\psi}(M+gs+ig\gamma_{5}\bbox{\tau}\!\cdot\!\bbox{\pi})\psi =\overline{N}(M+gs')N. \label{nogam5} \end{equation} It will be convenient to write this transformation in the form \begin{equation} \left. \begin{array}{l} N_{R}=u\psi_{R} \\ N_{L}=u^{\dagger}\psi_{L} \end{array}\ \right\},\ \ u(\bbox{\xi})=\frac{1+i\bbox{\tau}\!\cdot\!\bbox{\xi}} {(1+\bbox{\xi}^{2})^{1/2}}, \label{udef2} \end{equation} in which case we find \begin{equation} \Sigma' \equiv f_{0}+s'=u^{\dagger}\Sigma u^{\dagger}. \end{equation} Solving Eq.~(\ref{nogam5}) gives \begin{eqnarray} \bbox{\xi}&=&\bbox{\pi}\left[f_{0}+s +\sqrt{(f_{0}+s)^{2}+\bbox{\pi}^{2}}\right]^{-1} \equiv \frac{1}{2f_{0}}\bbox{\pi}', \nonumber\\ s'&=&[(f_{0}+s)^{2}+\bbox{\pi}^{2}]^{1/2}-f_{0} =(\Sigma^{\dagger}\Sigma)^{1/2}-f_{0}, \label{newspi} \end{eqnarray} where the chiral rotation vector $\bbox{\xi}$ is proportional to a new pion field, $\bbox{\pi}'$. We should mention that the condition (\ref{nogam5}) also removes the $\sigma\bbox{\pi}^{2}$ interaction, and the new Lagrangian ${\cal L}_{\Sigma'}$ only contains three-point and four-point interactions for the $s'$ scalar field. In this nonlinear realization of the spontaneously broken chiral symmetry, the new fields $u(\bbox{\xi})$ of the coset space SU(2)$_{L}$$\times$SU(2)$_{R}$/SU(2)$_{V}$ transform as~\cite{Col69} \begin{mathletters} \begin{eqnarray} u &\rightarrow& LuH^{\dagger}=HuR^{\dagger}, \label{utrans}\\ H &=& \sqrt{L u^{2} R^{\dagger}}R u^{\dagger} =\sqrt{R u^{\dagger2} L^{\dagger}}L u, \label{Hdef} \end{eqnarray} \end{mathletters} where the equality in Eq.~(\ref{utrans}) is due to parity. 
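The transformation (\ref{newspi}) can be verified numerically: for any field configuration $(s,\bbox{\pi})$, building $u(\bbox{\xi})$ with $\bbox{\xi}$ as in Eq.~(\ref{newspi}) should rotate $\Sigma$ into a multiple of the unit matrix, $u^{\dagger}\Sigma u^{\dagger}=(f_{0}+s')\openone_{2}$. A minimal Python sketch; the numerical field values are arbitrary illustrations, not taken from the text:

```python
import math

# Pauli matrices as nested tuples (2x2 complex)
I2 = ((1, 0), (0, 1))
TAU = (((0, 1), (1, 0)),
       ((0, -1j), (1j, 0)),
       ((1, 0), (0, -1)))

def scale(c, m):
    return tuple(tuple(c * m[i][j] for j in range(2)) for i in range(2))

def add(a, b):
    return tuple(tuple(a[i][j] + b[i][j] for j in range(2)) for i in range(2))

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def dagger(a):
    return tuple(tuple(a[j][i].conjugate() for j in range(2)) for i in range(2))

def tau_dot(v):  # tau . v
    m = ((0, 0), (0, 0))
    for vi, ti in zip(v, TAU):
        m = add(m, scale(vi, ti))
    return m

# arbitrary illustrative field configuration (units of MeV)
f0, s = 93.0, 10.0
pi = (30.0, -20.0, 50.0)

root = math.sqrt((f0 + s) ** 2 + sum(p * p for p in pi))
xi = tuple(p / (f0 + s + root) for p in pi)   # Eq. (newspi)
s_prime = root - f0                           # Eq. (newspi)

Sigma = add(scale(f0 + s, I2), scale(1j, tau_dot(pi)))
norm = 1.0 / math.sqrt(1.0 + sum(x * x for x in xi))
u = scale(norm, add(I2, scale(1j, tau_dot(xi))))

# u^dagger Sigma u^dagger should equal (f0 + s') times the identity
result = mul(dagger(u), mul(Sigma, dagger(u)))
for i in range(2):
    for j in range(2):
        target = (f0 + s_prime) if i == j else 0.0
        assert abs(result[i][j] - target) < 1e-9
print("u^+ Sigma u^+ = (f0 + s') * 1, with f0 + s' =", round(f0 + s_prime, 3))
```

The check works because $\bbox{\xi}$ of Eq.~(\ref{newspi}) corresponds to a chiral rotation by half the "angle" of $\Sigma$, so the two factors of $u^{\dagger}$ remove the pseudoscalar part completely.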
The left and right components of the new doublet nucleon field $N$ both transform with the same SU(2) matrix $H$, and so the nucleon part of the Lagrangian can now be written as \begin{equation} {\cal L}_{N} = \overline{N}(i\gamma^{\mu}D_{\mu}-M)N - g\overline{N}Ns' -g_{A}\overline{N}\gamma^{5}\gamma^{\mu}u_{\mu}N, \label{Lsu2nonlin} \end{equation} where we defined $D_{\mu}=\partial_{\mu}+i\Gamma_{\mu}$, and \begin{eqnarray} \Gamma_{\mu}&=&-{\textstyle\frac{i}{2}}\left(u^{\dagger}\partial_{\mu}u +u\partial_{\mu}u^{\dagger}\right) = \frac{1}{4f_{0}^{2}}\, \frac{\bbox{\tau}\!\cdot\!\bbox{\pi}'\times\partial_{\mu}\bbox{\pi}'} {1+\bbox{\xi}^{2}}, \nonumber\\ u_{\mu}&=&-{\textstyle\frac{i}{2}}\left(u^{\dagger}\partial_{\mu}u -u\partial_{\mu}u^{\dagger}\right) = \frac{1}{2f_{0}}\, \frac{\bbox{\tau}\!\cdot\!\partial_{\mu}\bbox{\pi}'} {1+\bbox{\xi}^{2}}. \end{eqnarray} The transformation rule for the connection $\Gamma_{\mu}$, \begin{equation} \Gamma_{\mu} \rightarrow H\Gamma_{\mu}H^{\dagger} -iH\partial_{\mu}H^{\dagger}, \label{Gamtrans} \end{equation} ensures the invariance of the kinetic-energy term. The transformation rule for the $N$ field, $N\rightarrow H\!N$, means that $\overline{N}N$ is already an invariant in itself, and so the mass $M$ can be treated as a free parameter. Similarly, the transformation rule for $u_{\mu}$, \begin{equation} u_{\mu} \rightarrow Hu_{\mu}H^{\dagger}, \label{umutrans} \end{equation} allows for the introduction of the free parameter $g_{A}$ in Eq.~(\ref{Lsu2nonlin}). Choosing it to be $g_{A}=1.2601\pm0.0025$~\cite{PDG96}, which is the value for the weak interaction axial-vector coupling constant, and substituting for $\bbox{\pi}'$, the pion is seen to couple to the nucleon via the pseudovector coupling with strength $g_{A}/2f_{\pi}$. This relation between the weak axial-vector and the pion pseudovector coupling constants is known as the Goldberger-Treiman relation~\cite{Gol58}. 
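In its standard pseudoscalar form the Goldberger-Treiman relation reads $g_{\pi N\!N}=g_{A}M/f_{\pi}$, which can be evaluated with the values quoted above. The empirical $\pi N\!N$ coupling used for comparison below, $g_{\pi N\!N}\approx 13.0$, is an assumption of this sketch and is not taken from the text:

```python
# Goldberger-Treiman relation: g_piNN = g_A * M / f_pi (exact in the chiral limit)
g_A = 1.2601       # weak axial-vector coupling constant (quoted above)
M = 938.92         # average nucleon mass in MeV (assumed value)
f_pi = 92.4        # pion decay constant in MeV (quoted above)

g_piNN_GT = g_A * M / f_pi
g_piNN_emp = 13.0  # assumed empirical pion-nucleon coupling constant

deviation = abs(g_piNN_GT - g_piNN_emp) / g_piNN_emp
print(f"GT prediction: g_piNN = {g_piNN_GT:.2f}; "
      f"deviation from the assumed empirical value: {100 * deviation:.1f}%")
```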
Although this relation only exactly holds in the chiral limit, experimentally it is found to hold within about 2\%. Note that in the $\psi$ representation of Eq.~(\ref{pscop}), this coupling is found to be $g_{A}=1$, which is off by 25\%. The next step is to add the isotriplets of vector ($\bbox{\rho}$) and axial-vector (${\bf a}_{1}$) fields. One way to do this is to extend the global chiral symmetry to a local one and to define the corresponding left- and right-handed gauge fields as \begin{eqnarray} l_{\mu}&\equiv&{\textstyle\frac{1}{2}}\bbox{\tau}\!\cdot\!{\bf l}_{\mu} ={\textstyle\frac{1}{2}}\bbox{\tau}\!\cdot\! (\bbox{\rho}_{\mu}+{\bf a}_{\mu}), \nonumber\\ r_{\mu}&\equiv&{\textstyle\frac{1}{2}}\bbox{\tau}\!\cdot\!{\bf r}_{\mu} ={\textstyle\frac{1}{2}}\bbox{\tau}\!\cdot\! (\bbox{\rho}_{\mu}-{\bf a}_{\mu}), \end{eqnarray} with their field strength tensors \begin{equation} l_{\mu\nu}=\partial_{\mu}l_{\nu}-\partial_{\nu}l_{\mu} +ig_{V}[l_{\mu},l_{\nu}], \label{fieldstrength} \end{equation} and similarly for $r_{\mu\nu}$. Their transformation properties are \begin{eqnarray} l_{\mu} &\rightarrow& Ll_{\mu}L^{\dagger} -\frac{i}{g_{V}}L\partial_{\mu}L^{\dagger}, \ \ \ \ \ l_{\mu\nu}\rightarrow Ll_{\mu\nu}L^{\dagger}, \nonumber\\ r_{\mu} &\rightarrow& Rr_{\mu}R^{\dagger} -\frac{i}{g_{V}}R\partial_{\mu}R^{\dagger}, \ \ \ r_{\mu\nu}\rightarrow Rr_{\mu\nu}R^{\dagger}, \end{eqnarray} and so the proper field strength tensors for the nonlinear transformation are $u^{\dagger}l_{\mu\nu}u$ and $ur_{\mu\nu}u^{\dagger}$. Invariance under local chiral transformations implies the extensions \begin{eqnarray} \Gamma_{\mu}&=&-{\textstyle\frac{i}{2}}\left[u^{\dagger} (\partial_{\mu}+ig_{V}l_{\mu})u +u(\partial_{\mu}+ig_{V}r_{\mu})u^{\dagger}\right], \nonumber\\ u_{\mu}&=&-{\textstyle\frac{i}{2}}\left[u^{\dagger} (\partial_{\mu}+ig_{V}l_{\mu})u -u(\partial_{\mu}+ig_{V}r_{\mu})u^{\dagger}\right]. 
\end{eqnarray} The locally chiral-invariant Lagrangian can now be written as~\cite{Wes67} \begin{equation} {\cal L}_{N} = \overline{N}(i\gamma^{\mu}\partial_{\mu}-M)N -g_{s}\overline{N}Ns'-{\textstyle\frac{1}{2}}g_{V} \overline{N}\gamma^{\mu}\bbox{\tau}N\!\cdot\!\bbox{\rho}'_{\mu} -{\textstyle\frac{1}{2}}\lambda\overline{N}\gamma^{5}\gamma^{\mu} \bbox{\tau}N\!\cdot\!{\bf a}'_{\mu}, \label{Lsu2nonlinloc} \end{equation} where $g_{s}$, $g_{V}$, and $\lambda$ are free parameters, and where we defined the field abbreviations \begin{eqnarray} \bbox{\rho}'_{\mu} &=& \bbox{\rho}_{\mu}+\frac{2}{1+\bbox{\xi}^{2}}\, \bbox{\xi}\!\times\!{\bf a}_{\mu}+\frac{2}{g_{V}} \frac{1}{1+\bbox{\xi}^{2}}\,\bbox{\xi}\!\times\!(\partial_{\mu} \bbox{\xi}+g_{V}\bbox{\xi}\!\times\!\bbox{\rho}_{\mu}), \nonumber\\ {\bf a}'_{\mu} &=& {\bf a}_{\mu}-\frac{2}{1+\bbox{\xi}^{2}}\, \bbox{\xi}\!\times\!({\bf a}_{\mu}\!\times\!\bbox{\xi}) +\frac{2}{g_{V}}\frac{1}{1+\bbox{\xi}^{2}}\,(\partial_{\mu} \bbox{\xi}+g_{V}\bbox{\xi}\!\times\!\bbox{\rho}_{\mu}). \end{eqnarray} These field combinations are seen to give rise to pair ($\pi\pi$, $\pi\rho$, $\pi a_{1}$) vertices, and other higher-order multiple-meson vertices. As an example of how this model can be made to agree with the actual experimental results, we next give some attention to the meson sector. The axial-vector field can be given a mass by adding to the Lagrangian a chiral-invariant term proportional to ${\rm Tr}(u_{\mu}u^{\mu})$; this will also generate the kinetic-energy term for the pion field. The vector field receives its mass from a term proportional to ${\rm Tr}(l_{\mu}l^{\mu}+r_{\mu}r^{\mu})$, which is only invariant under {\it global\/} chiral transformations. This term also contributes to the mass of the axial-vector fields. 
However, in order to get an acceptable agreement with experiment (not only with respect to the empirical masses, but also with respect to some of the empirical meson coupling constants and decay widths), we have to extend the Lagrangian even further. A very general form in the context of the linear $\sigma$ model is given by Ko and Rudaz~\cite{Ko94}. Translating their results in terms of the fields of the nonlinear transformation (\ref{udef2}), the masses for the vector and axial-vector mesons are obtained from the Lagrangian \begin{equation} {\cal L}_{m} = f_{0}^{2}{\rm Tr}(u^{\mu}u_{\mu}) + {\textstyle\frac{1}{2}}\,m_{0}^{2}{\rm Tr}(l^{\mu}l_{\mu} +r^{\mu}r_{\mu}) - c\,g_{V}^{2}(f_{0}+s')^{2}{\rm Tr} (l^{\mu}u^{2}r_{\mu}u^{\dagger2}). \label{vecmass} \end{equation} We find that there is a mixing between the $\bbox{\pi}$ and ${\bf a}_{\mu}$ fields, which can be removed by making a field redefinition for the ${\bf a}_{\mu}$ field, ${\bf a}_{\mu}={\bf A}_{\mu} -h(\partial_{\mu}\bbox{\pi}+g_{V}\bbox{\pi}\!\times\!\bbox{\rho})$, and then choosing $h$ appropriately. Also, the kinetic-energy term for the pion field is no longer of the canonical form, and so we have to do a wave function renormalization: $\bbox{\pi}=Z_{\pi}^{-1/2}\bbox{\pi}_{r}$. The final result reads (for details, see Ref.~\cite{Ko94}) \begin{eqnarray} m_{\rho}^{2} &=& m_{0}^{2}-c\,g_{V}^{2}f_{0}^{2}, \nonumber\\ m_{a_{1}}^{2} &=& m_{0}^{2}+(c+1)g_{V}^{2}f_{0}^{2}, \nonumber\\ Z_{\pi} &=& 1-g_{V}^{2}f_{0}^{2}/m_{a_{1}}^{2}, \label{Zpisolve} \end{eqnarray} and the PCAC condition gives $f_{\pi}=Z_{\pi}^{1/2}f_{0}=92.4$ MeV. The parameter $c$ is needed to get a simultaneous agreement for the $\rho$ mass ($m_{\rho}=770$ MeV), the $a_{1}$ mass ($m_{a_{1}}=1.23$ GeV), and the $\rho N\!N$ coupling constant ($g_{V}=5.04$ from $\rho^{0}\rightarrow e^{+}e^{-}$); all data from Ref.~\cite{PDG96}. 
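Equations (\ref{Zpisolve}), together with the PCAC result $f_{\pi}=Z_{\pi}^{1/2}f_{0}$, can be combined into a quadratic equation for $Z_{\pi}$, after which $c$ follows from the mass splitting $m_{a_{1}}^{2}-m_{\rho}^{2}=(2c+1)g_{V}^{2}f_{0}^{2}$. A short numerical sketch using the input values quoted above (small differences in the last digit can arise from rounding of the inputs):

```python
import math

# inputs quoted above (PDG96): rho and a1 masses, rho-NN coupling, f_pi
m_rho, m_a1 = 770.0, 1230.0   # MeV
g_V, f_pi = 5.04, 92.4        # f_pi in MeV

# f_pi^2 = Z * f0^2 and Z = 1 - g_V^2 f0^2 / m_a1^2;
# eliminating f0 gives the quadratic  Z^2 - Z + g_V^2 f_pi^2 / m_a1^2 = 0
q = (g_V * f_pi / m_a1) ** 2
disc = math.sqrt(1.0 - 4.0 * q)

solutions = []
for Z in ((1.0 - disc) / 2.0, (1.0 + disc) / 2.0):
    f0_sq = f_pi ** 2 / Z
    # m_a1^2 - m_rho^2 = (2c + 1) g_V^2 f0^2
    c = ((m_a1 ** 2 - m_rho ** 2) / (g_V ** 2 * f0_sq) - 1.0) / 2.0
    solutions.append((Z, c))
    print(f"Z_pi = {Z:.3f}, c = {c:.3f}")
```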
Substituting $f_{0}$ in the equation for $Z_{\pi}$ leads to the two solutions $(Z_{\pi},c)=(0.173,-0.131)$ and $(0.827,1.25)$. Similarly, the numerical value for the $\rho\pi\pi$ coupling constant ($g_{\rho\pi\pi}=6.05$ from $\rho\rightarrow\pi^{+}\pi^{-}$~\cite{PDG96}) can be enforced by adding yet another term to the Lagrangian, introducing a new parameter $\kappa_{6}$ (see Ref.~\cite{Ko94}). Having fixed both $c$ and $\kappa_{6}$, some of the other meson properties such as the root-mean-square pion radius and $a_{1}$ decay widths also come out very favorably~\cite{Ko94}. Although a full discussion of the meson sector is outside the scope of this paper, it is important to note that it is indeed possible to construct a chiral-invariant Lagrangian (sometimes at the expense of introducing new parameters and small chiral-symmetry violating terms) which does a reasonably good job in describing a variety of experimental results. Furthermore, the introduction of the axial-vector field as a gauge boson enforces a renormalization of the pion field, and so we explicitly need part of the Lagrangian for the meson sector to define this renormalization constant. \section{SU(3) chiral symmetry} \label{sec:su3} The extension to SU(3) is easily obtained by replacing the Pauli isospin matrices, $\tau_{i}$, by the Gell-Mann matrices, $\lambda_{a}$. The two-component nucleon field is replaced by a 3$\times$3 traceless baryon-octet matrix $\Psi$, where the left- and right-handed components transform as \begin{equation} \Psi_{L}\rightarrow L\Psi_{L}L^{\dagger}, \ \ \ \ \Psi_{R}\rightarrow R\Psi_{R}R^{\dagger}. \label{BLRtrans} \end{equation} The meson content of the model still consists of the isosinglet scalar, $\sigma$, but the pseudoscalar triplet is extended to an octet by including the $\eta_{8}$ and the four $K$ mesons. They are collectively denoted by $\pi_{a}$. 
These nine mesons are grouped into \begin{equation} \Sigma = \sigma + i\lambda_{a}\pi_{a}, \ \ \ (a=1,\ldots,8), \end{equation} which still transforms according to Eq.~(\ref{Sigmatrans}), but where $L$ and $R$ are now elements of SU(3). Translating the definition (\ref{udef2}) to the SU(3) case, we write \begin{equation} \left. \begin{array}{l} B_{R}=u\Psi_{R}u^{\dagger} \\ B_{L}=u^{\dagger}\Psi_{L}u \end{array}\ \right\},\ \ u(\xi_{a})=\exp[i\lambda_{a}\xi_{a}] \equiv \exp\left[\frac{i\lambda_{a}\pi'_{a}}{2f_{0}}\right], \label{udef3} \end{equation} where we used a more convenient representation for $u(\xi_{a})$, the elements of the coset space SU(3)$_{L}$$\times$SU(3)$_{R}$/SU(3)$_{V}$. We will identify the primed fields with the octet of physical pseudoscalar fields, which is given by \begin{equation} \frac{1}{\sqrt{2}}\lambda_{a}\pi'_{a} = \left( \begin{array}{ccc} {\displaystyle\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta_{8}}{\sqrt{6}}} & \pi^{+} & K^{+} \\[2mm] \pi^{-} & {\displaystyle-\frac{\pi^{0}}{\sqrt{2}} +\frac{\eta_{8}}{\sqrt{6}}} & K^{0} \\[2mm] K^{-} & \overline{{K}^{0}} & {\displaystyle-\frac{2\eta_{8}}{\sqrt{6}}} \end{array} \right), \label{PSmat} \end{equation} and where the fields transform according to the SU(3) analogue of Eq.~(\ref{utrans}). Following the phase convention of Ref.~\cite{Pai66}, the new octet of baryon fields reads \begin{equation} B = \left( \begin{array}{ccc} {\displaystyle\frac{\Sigma^{0}}{\sqrt{2}}+\frac{\Lambda}{\sqrt{6}}} & \Sigma^{+} & p \\[2mm] \Sigma^{-} & {\displaystyle-\frac{\Sigma^{0}}{\sqrt{2}} +\frac{\Lambda}{\sqrt{6}}} & n \\[2mm] \Xi^{-} & -\Xi^{0} & {\displaystyle-\frac{2\Lambda}{\sqrt{6}}} \end{array} \right), \end{equation} which transforms as \begin{equation} B \rightarrow HBH^{\dagger}. 
\label{Btrans} \end{equation} The slightly different transformation property of the baryon octet matrix requires that the SU(3) analogue to the nucleon derivative, $D_{\mu}N=(\partial_{\mu}+i\Gamma_{\mu})N$ in Eq.~(\ref{Lsu2nonlin}), reads \begin{equation} D_{\mu}B = \partial_{\mu}B + i[\Gamma_{\mu},B]. \label{Dcovar} \end{equation} Due to the transformation properties (\ref{BLRtrans}) of the original octet fields, we cannot simply copy the interaction Lagrangian of Eq.~(\ref{Lsu2lin}) to the SU(3) case. To ensure invariance under chiral SU(3)$_{L}$$\times$SU(3)$_{R}$, the $\Sigma$ field would then have to appear quadratically; i.e., the interaction Lagrangian has to be of the form ${\rm Tr}(\overline{\Psi}_{L}\Sigma \Psi_{R}\Sigma^{\dagger} +\overline{\Psi}_{R}\Sigma^{\dagger}\Psi_{L}\Sigma)$. Alternatively, we note that the field combinations $u\Sigma^{\dagger}u$ and $u^{\dagger}\Sigma u^{\dagger}$ both transform according to Eq.~(\ref{Btrans}). Defining the combination \begin{equation} \chi_{\pm}={\textstyle\frac{1}{2}} \left(u^{\dagger}\Sigma u^{\dagger} \pm u\Sigma^{\dagger}u\right), \label{chipm} \end{equation} we thus find that the most simple interaction Lagrangian is given by \begin{equation} {\cal L}_{I}=-g_{s,1}\,{\rm Tr}(\overline{B}\chi_{+}B) -g_{s,2}\,{\rm Tr}(\overline{B}B\chi_{+}), \label{Lsps} \end{equation} where $g_{s,1}$ and $g_{s,2}$ are arbitrary constants. In principle, we also could have included a term of the form $g_{p}{\rm Tr}(\overline{B}\gamma_{5}\chi_{-}B)$, but that would have re-introduced a nonderivative pseudoscalar interaction~\footnote{This in contrast to the SU(2) case, where we can choose $u(\xi_{i})$ such that $\chi_{-}=0$, and so all nonderivative pseudoscalar interactions are transformed away. 
The difference is due to the fact that $(\tau_{i}\pi_{i})(\tau_{j}\pi_{j})$ is proportional to $\openone_{2}$, whereas $(\lambda_{a}\pi_{a})(\lambda_{b}\pi_{b})$ is not proportional to $\openone_{3}$.}, which we wanted to get rid of in the first place. Unfortunately, the interaction Lagrangian (\ref{Lsps}) is not good enough if we want it to generate the empirical baryon masses, since it gives the same mass, $M=(g_{s,1}+g_{s,2})f_{0}$, for all baryons in the baryon octet. This problem can be solved by adding an octet of scalar fields, $\lambda_{a}\sigma_{a}$, where the isoscalar octet member is given a nonvanishing vacuum expectation value. We can then extend $\Sigma$ by writing ($\lambda_{0}=\sqrt{2/3}\openone_{3}$) \begin{equation} \Sigma=F+\lambda_{0}s_{0}+\lambda_{a}s_{a} +i\lambda_{a}\pi_{a}, \ \ \ \ (a=1,\ldots,8), \label{chistart} \end{equation} with $\pi_{a}$ the {\it original\/} pseudoscalar fields and $F$ the vacuum expectation value of the scalar-nonet fields, i.e. \begin{equation} F=\left( \begin{array}{ccc} f_{1} & 0 & 0 \\ 0 & f_{1} & 0 \\ 0 & 0 & f_{2} \end{array} \right), \end{equation} with \begin{equation} f_{1} = {\textstyle\sqrt{\frac{2}{3}}}\langle\sigma_{0}\rangle +{\textstyle\sqrt{\frac{1}{3}}}\langle\sigma_{8}\rangle, \ \ \ f_{2} = {\textstyle\sqrt{\frac{2}{3}}}\langle\sigma_{0}\rangle -2{\textstyle\sqrt{\frac{1}{3}}}\langle\sigma_{8}\rangle. \label{VEV} \end{equation} In the Appendix we show that now it is still possible to find a transformation $u(\xi)$, which transforms away the octet of original pseudoscalar fields while leaving the vacuum expectation matrix $F$ invariant, and where the $\xi_{a}$ fields can be identified with an octet of new pseudoscalar fields. As before, this transformation generates the combinations $\chi_{\pm}$ of Eq.~(\ref{chipm}). Both combinations contain the {\it original\/} pseudoscalar fields in a complicated way. 
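The algebraic difference noted in the footnote, namely that $(\bbox{\tau}\!\cdot\!\bbox{\pi})^{2}$ is proportional to $\openone_{2}$ while $(\lambda_{a}\pi_{a})^{2}$ in general is not proportional to $\openone_{3}$, can be checked directly with explicit matrices. In the sketch below, the 3$\times$3 case uses a sample field configuration inserted into the matrix (\ref{PSmat}); all numerical values are arbitrary illustrations:

```python
import math

def mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_proportional_to_identity(m, tol=1e-12):
    n = len(m)
    lam = sum(m[i][i] for i in range(n)) / n
    return all(abs(m[i][j] - (lam if i == j else 0.0)) < tol
               for i in range(n) for j in range(n))

# --- 2x2: tau.pi with Pauli matrices; its square is pi.pi times the identity
p1, p2, p3 = 0.3, -0.5, 0.8
tau_pi = [[p3, p1 - 1j * p2],
          [p1 + 1j * p2, -p3]]
assert is_proportional_to_identity(mul(tau_pi, tau_pi))

# --- 3x3: lambda_a pi_a, proportional to the matrix (PSmat); its square
#     is NOT proportional to the identity for generic field values
s2, s6 = math.sqrt(2.0), math.sqrt(6.0)
pi0, eta8 = 1.0, 0.7
pip, Kp, K0 = 0.3 + 0.2j, -0.4 + 0.1j, 0.5 - 0.6j
lam_pi = [[pi0 / s2 + eta8 / s6, pip, Kp],
          [pip.conjugate(), -pi0 / s2 + eta8 / s6, K0],
          [Kp.conjugate(), K0.conjugate(), -2.0 * eta8 / s6]]
assert not is_proportional_to_identity(mul(lam_pi, lam_pi))
print("(tau.pi)^2 ~ identity; (lambda.pi)^2 is not")
```

This is why in the SU(3) case the nonderivative pseudoscalar interaction cannot be transformed away completely, in contrast to SU(2).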
However, $\chi_{+}$ behaves like a set of scalar fields, and so we can simply {\it define\/} these new fields to be the physical scalar fields and drop any reference to the original scalar fields. Clearly, to zeroth order in the new pseudoscalar fields, the old and new scalar fields are the same. Dropping primes, the nonet of new scalar fields is given by \begin{equation} \chi_{+}=F+\lambda_{0}s_{0}+\lambda_{a}s_{a}, \end{equation} where $s_{0}$ now denotes the new scalar singlet and the octet is given by \begin{equation} \frac{1}{\sqrt{2}}\lambda_{a}s_{a} = \left( \begin{array}{ccc} {\displaystyle\frac{a_{0}^{0}}{\sqrt{2}}+\frac{s_{8}}{\sqrt{6}}} & a_{0}^{+} & \kappa^{+} \\[2mm] a_{0}^{-} & {\displaystyle-\frac{a_{0}^{0}}{\sqrt{2}} +\frac{s_{8}}{\sqrt{6}}} & \kappa^{0} \\[2mm] \kappa^{-} & \overline{{\kappa}^{0}} & {\displaystyle-\frac{2s_{8}}{\sqrt{6}}} \end{array} \right). \label{Smat} \end{equation} The pure octet isoscalar $s_{8}$ and pure singlet $s_{0}$ are mixed to give the physical $f_{0}(980)$ and $\varepsilon(760)$. Similarly, $-i\chi_{-}$ can be identified as a new isosinglet pseudoscalar field, not present before. Because it transforms as $H(-i\chi_{-})H^{\dagger}$, it can be formally added to the octet pseudoscalar matrix, which completes the nonet. It must be remembered, however, that group transformations such as given for instance in Eq.~(\ref{utrans}) and which involve different transformation matrices, are only valid for the (traceless) octet matrices. As discussed for the SU(2) case at the end of the previous section, also in the SU(3) case the global chiral symmetry can be easily extended to a local chiral symmetry. The required gauge fields are now given by two octets of combinations of vector and axial-vector fields. 
The vector octet is given by \begin{equation} \frac{1}{\sqrt{2}}\lambda_{a}\rho_{a} = \left( \begin{array}{ccc} {\displaystyle\frac{\rho^{0}}{\sqrt{2}}+\frac{\omega_{8}}{\sqrt{6}}} & \rho^{+} & K^{\ast+} \\[2mm] \rho^{-} & {\displaystyle-\frac{\rho^{0}}{\sqrt{2}} +\frac{\omega_{8}}{\sqrt{6}}} & K^{\ast0} \\[2mm] K^{\ast-} & \overline{{K}^{\ast0}} & {\displaystyle-\frac{2\omega_{8}}{\sqrt{6}}} \end{array} \right), \label{Vmat} \end{equation} and a similar matrix for the axial-vector octet. Adding a mass term of the form (\ref{vecmass}) breaks the local symmetry and introduces a mixing between the pseudoscalar and axial-vector fields, which requires a redefinition of the axial-vector fields. This in turn means that the kinetic-energy term for the pseudoscalar fields is no longer of the canonical form, which is taken care of by renormalizing the pseudoscalar fields. Unfortunately, then it is not possible to get a satisfactory agreement for both the vector and axial-vector masses simultaneously. But also this problem can be solved. One way is to introduce yet another term~\cite{Ko94} proportional to ${\rm Tr}(l_{\mu\nu}\Sigma r^{\mu\nu}\Sigma^{\dagger})$, which renormalizes the vector and axial-vector fields. Another option is to modify the term proportional to $m_{0}^{2}$ by inserting the combinations $\Sigma\Sigma^{\dagger}$ and $\Sigma^{\dagger}\Sigma$. The advantage of the latter extension is that it does not involve a new parameter, while it does allow for a very satisfactory description of both vector and axial-vector masses. Therefore, a convenient spin-1 mass Lagrangian reads \begin{equation} {\cal L}^{(1)}_{m} = f_{1}^{2}{\rm Tr}(u^{\mu}u_{\mu}) + \frac{m_{0}^{2}}{2f_{1}^{2}}\,{\rm Tr} (l^{\mu}\Sigma\Sigma^{\dagger}l_{\mu} +r^{\mu}\Sigma^{\dagger}\Sigma r_{\mu}) - c\,g_{V}^{2} {\rm Tr}(l^{\mu}\Sigma r_{\mu}\Sigma^{\dagger}). 
\label{mass1} \end{equation} Using this Lagrangian, we find that the set $m_{0}=653$ MeV, $c=-0.131$, $f_{1}=222$ MeV, and $f_{2}=290$ MeV gives very satisfactory results for the $\rho$, $a_{1}$, $K^{\ast}$, and $K_{1}$ masses, as well as for the pion and kaon decay constants. Given these parameters, the renormalization constants are $Z_{\pi}=0.173$, $Z_{K}=0.225$, and $Z_{\eta_{8}}=0.236$. Note that $(Z_{\pi},c)$ is the same as in the SU(2) case, and so the good phenomenology in the $\rho$--$a_{1}$ sector~\cite{Ko94} is still valid. To make a further connection with the real physical world, we next also introduce isosinglet vector and axial-vector fields, which complete the experimentally observed nonets. The isosinglet fields are taken to be SU(3) invariants. This means that we can formally include them on the diagonal in the respective octet matrix representations. Finally, to be complete, we should also list the kinetic-energy and mass terms for the scalar and pseudoscalar fields. The kinetic-energy term for the scalar fields is given by ${\rm Tr}\,(D^{\mu}\chi^{\dagger}_{+}D_{\mu}\chi_{+})$, with the covariant derivative defined as in Eq.~(\ref{Dcovar}) to ensure the chiral invariance. The kinetic-energy term for the pseudoscalar fields is already contained in ${\rm Tr}(u^{\mu}u_{\mu})$. The simplest possible mass term for the scalar fields is of the form \begin{equation} {\cal L}^{(0)}_{m,s}=f_{1}^{3}{\rm Tr}(\chi_{+}A_{s}) -c_{2}f_{1}^{2}{\rm Tr}(\chi^{\dagger}_{+}\chi_{+}) -c_{4}{\rm Tr}(\chi^{\dagger}_{+}\chi_{+})^{2}, \label{mass0sc} \end{equation} where the first term with the diagonal matrix $A_{s}={\rm diag}(x,x,y)$ breaks the chiral symmetry. This term has to be included in order to remove the terms linear in the $s_{0}$ and $s_{8}$ fields as generated by the last two terms. This condition fixes $(x,y)$ in terms of $(c_{2},c_{4})$. Fitting to the scalar masses as discussed in Sec.~\ref{sec:intro}, we find $(c_{2},c_{4})=(7.62,-0.46)$. 
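As a consistency check on the quoted parameter set, note that in the nonstrange sector the mass Lagrangian (\ref{mass1}) reduces to the SU(2) relations (\ref{Zpisolve}) with $f_{0}\rightarrow f_{1}$; this reduction is assumed in the minimal sketch below:

```python
import math

# parameter set quoted above for the spin-1 mass Lagrangian
m0, c, f1 = 653.0, -0.131, 222.0   # all in MeV
g_V = 5.04

gvf_sq = (g_V * f1) ** 2
m_rho = math.sqrt(m0 ** 2 - c * gvf_sq)           # Eq. (Zpisolve) with f0 -> f1
m_a1 = math.sqrt(m0 ** 2 + (c + 1.0) * gvf_sq)
Z_pi = 1.0 - gvf_sq / m_a1 ** 2
f_pi = math.sqrt(Z_pi) * f1

print(f"m_rho = {m_rho:.0f} MeV, m_a1 = {m_a1:.0f} MeV, "
      f"Z_pi = {Z_pi:.3f}, f_pi = {f_pi:.1f} MeV")
```

With these inputs the nonstrange-sector masses, $Z_{\pi}$, and $f_{\pi}$ indeed come out close to the quoted values.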
To generate the octet pseudoscalar masses, we again need a term that breaks the chiral symmetry. The singlet pseudoscalar mass is generated by ${\rm Tr}(\chi^{\dagger}_{-}\chi_{-})$, which is chiral invariant. Hence, an appropriate Lagrangian for the pseudoscalar meson masses looks like \begin{equation} {\cal L}^{(0)}_{m,p}={\textstyle\frac{1}{4}}f_{1}^{2} {\rm Tr}\left[(u^{2}+u^{\dagger2})A_{p}\right] - {\textstyle\frac{1}{4}}m^{2}_{\eta_{0}} {\rm Tr}(\chi^{\dagger}_{-}\chi_{-}), \label{mass0ps} \end{equation} with the diagonal matrix $A_{p}={\rm diag}(x',x',y')$, where $x'=m^{2}_{\pi}/Z_{\pi}$ and $y'=2m^{2}_{K}/Z_{K}-m^{2}_{\pi}/Z_{\pi}$. Note that with this choice the quadratic Gell-Mann--Okubo mass formula, $3m^{2}_{\eta_{8}}+m^{2}_{\pi}-4m^{2}_{K}=0$, is approximately satisfied. The mass $m_{\eta_{0}}$ is chosen such that, with the proper mixing angle, we get the empirical $\eta$ and $\eta'$ masses. The main achievement of the field transformations as discussed above is that we have constructed various 3$\times$3 matrices of the general form $\Phi=(1/\sqrt{2})\lambda_{c}\phi_{c}$, where $c=0,\ldots,8$. These matrices contain scalar, pseudoscalar, vector, and axial-vector meson fields, and (except for the vector mesons) they all transform in the same way as the baryon fields. This allows us to define the following chiral-invariant combinations \begin{eqnarray} \left[\overline{B}B\Phi\right]_{F} &=& {\rm Tr}(\overline{B}\Phi B) -{\rm Tr}(\overline{B}B\Phi), \nonumber\\ \left[\overline{B}B\Phi\right]_{D} &=& {\rm Tr}(\overline{B}\Phi B) +{\rm Tr}(\overline{B}B\Phi)-{\textstyle\frac{2}{3}}\, {\rm Tr}(\overline{B}B){\rm Tr}(\Phi), \nonumber\\ \left[\overline{B}B\Phi\right]_{S} &=& {\rm Tr}(\overline{B}B){\rm Tr}(\Phi). 
\end{eqnarray} For a traceless matrix $\Phi$ (i.e., only octets of mesons), the $S$-type coupling $[\overline{B}B\Phi]_{S}$ vanishes, and the $F$- and $D$-type couplings can be written as ${\rm Tr}(\overline{B}[\Phi,B])$ and ${\rm Tr} (\overline{B}\{\Phi,B\})$, respectively; a notation often encountered in the literature. Defining a baryon-baryon-meson octet coupling constant $g^{\rm oct}$ and a baryon-baryon-singlet coupling constant $g^{\rm sin}$, a general interaction Lagrangian which satisfies chiral symmetry can now be written in the form \begin{equation} {\cal L}_{I} = -g^{\rm oct}\sqrt{2}\left\{ \alpha\left[\overline{B}B\Phi\right]_{F}+ (1-\alpha)\left[\overline{B}B\Phi\right]_{D}\right\}\, - \, g^{\rm sin}{\textstyle\sqrt{\frac{1}{3}}} \left[\overline{B}B\Phi\right]_{S}, \label{LIsu3} \end{equation} where $\alpha$ is known as the $F/(F+D)$ ratio, and the square-root factors are introduced for later convenience. The various field matrices are given by \begin{mathletters} \begin{eqnarray} \Phi_{\rm sc} &=& \frac{1}{\sqrt{2}}\left[ F+\lambda_{c}s_{c}\right], \label{phisc}\\ \Phi_{\rm vc} &=& \frac{-i}{\sqrt{2}g_{V}}\gamma_{\mu}\left[ u^{\dagger}\left(\partial^{\mu}+{\textstyle\frac{i}{2}}g_{V} \lambda_{c}(\rho^{\mu}+A^{\mu}-hD^{\mu}\pi)_{c}\right)u\right. \nonumber\\ &&\hspace{50pt} \left. +u\left(\partial^{\mu}+{\textstyle\frac{i}{2}}g_{V}\lambda_{c} (\rho^{\mu}-A^{\mu}+hD^{\mu}\pi)_{c}\right)u^{\dagger}\right], \label{phivc}\\ \Phi_{\rm ax} &=& \frac{-i}{\sqrt{2}g_{V}}\gamma_{5}\gamma_{\mu}\left[ u^{\dagger}\left(\partial^{\mu}+{\textstyle\frac{i}{2}}g_{V} \lambda_{c}(\rho^{\mu}+A^{\mu}-hD^{\mu}\pi)_{c}\right)u\right. \nonumber\\ &&\hspace{60pt} \left. 
-u\left(\partial^{\mu}+{\textstyle\frac{i}{2}}g_{V}\lambda_{c} (\rho^{\mu}-A^{\mu}+hD^{\mu}\pi)_{c}\right)u^{\dagger}\right], \label{phiax} \end{eqnarray} \end{mathletters} where $D^{\mu}(\lambda\pi)=\partial^{\mu}(\lambda\pi)- {\textstyle\frac{i}{2}}g_{V}[(\lambda\pi),(\lambda\rho_{\mu})]$, and $h$ chosen such that the mixing between the axial-vector and pseudoscalar fields in the meson sector vanishes. Note that the pseudovector coupling of the pseudoscalar fields is already included in the axial-vector field matrix $\Phi_{\rm ax}$. In addition to the electric coupling $\gamma^{\mu}\rho'_{\mu}$ [where $\rho'_{\mu}$ is a shorthand for the fields appearing in Eq.~(\ref{phivc})], it is also possible~\cite{Wes67} to include a chiral-invariant magnetic coupling $\sigma^{\mu\nu}\rho'_{\mu\nu}$, where $\rho'_{\mu\nu}$ is the field strength tensor for the $\rho'_{\mu}$ field combination. This is due to the fact that $\rho'_{\mu}$ transforms according to Eq.~(\ref{Gamtrans}), and so we can define a chiral-invariant field strength tensor $\rho'_{\mu\nu}$, as in Eq.~(\ref{fieldstrength}). The transformation (\ref{Gamtrans}) also imposes the constraint that the chiral-variant $D$-type coupling $[\overline{B}B\Phi_{\rm vc}]_{D}$ should vanish, i.e., the electric $\alpha^{e}_{V}=1$. This represents the so-called universality condition proposed by Sakurai~\cite{Sak60}. Hence, the assumption that the $\rho$ meson couples universally to the isospin current in this model is a direct consequence of chiral SU(3) symmetry. On the other hand, the magnetic $\alpha^{m}_{V}$ is still a free parameter. Until now, it appears that we have not gained much by imposing the SU(3)$\times$SU(3) chiral symmetry: the form of the interaction Lagrangian (\ref{LIsu3}) can be written down immediately by assuming only an SU(3) symmetry, and has been known for a long time (see, e.g., Ref.~\cite{Swa63}). 
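The trace identities stated above for the couplings in Eq.~(\ref{LIsu3}), i.e., that for a traceless $\Phi$ the $S$-type coupling vanishes while the $F$- and $D$-type couplings reduce to ${\rm Tr}(\overline{B}[\Phi,B])$ and ${\rm Tr}(\overline{B}\{\Phi,B\})$, hold for arbitrary matrices and can be verified numerically; a small sketch with arbitrary matrix entries:

```python
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def add(a, b, sign=1):
    return [[a[i][j] + sign * b[i][j] for j in range(3)] for i in range(3)]

def tr(a):
    return sum(a[i][i] for i in range(3))

# arbitrary test matrices; Phi is chosen traceless by hand
Bbar = [[1.0, 2.0, -1.0], [0.5, -3.0, 2.5], [4.0, 1.5, 0.0]]
B = [[-2.0, 1.0, 3.0], [0.0, 2.0, -1.5], [1.0, -0.5, 4.0]]
Phi = [[1.0, -2.0, 0.5], [3.0, 2.0, 1.0], [-1.0, 0.0, -3.0]]
assert abs(tr(Phi)) < 1e-12

F_type = tr(mul(Bbar, mul(Phi, B))) - tr(mul(Bbar, mul(B, Phi)))
D_type = (tr(mul(Bbar, mul(Phi, B))) + tr(mul(Bbar, mul(B, Phi)))
          - 2.0 / 3.0 * tr(mul(Bbar, B)) * tr(Phi))
S_type = tr(mul(Bbar, B)) * tr(Phi)

commutator = add(mul(Phi, B), mul(B, Phi), sign=-1)
anticommutator = add(mul(Phi, B), mul(B, Phi), sign=+1)
assert abs(S_type) < 1e-9                                  # S-type vanishes
assert abs(F_type - tr(mul(Bbar, commutator))) < 1e-9      # F = Tr(Bbar [Phi, B])
assert abs(D_type - tr(mul(Bbar, anticommutator))) < 1e-9  # D = Tr(Bbar {Phi, B})
print("F/D/S identities hold for traceless Phi")
```

The identities follow directly from the cyclic property of the trace, so they hold for any $\overline{B}$, $B$, and traceless $\Phi$.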
However, the important difference is that the 3$\times$3 matrices $\Phi$ are not just simple representations of the scalar, vector, or axial-vector meson fields, but they contain the pseudoscalar fields in a nonlinear way as well. This means that the chiral Lagrangian contains all kinds of multiple-meson (pair, triple, etc.) interactions not envisaged before. The coupling constants for these multiple-meson interactions can all be expressed in terms of the single-meson interaction coupling constants. This will be the subject of Sec.~\ref{sec:double-meson}. Furthermore, the pseudoscalar fields are renormalized due to the introduction of the axial-vector fields. Hence, already in leading order the chiral Lagrangian gives interactions between baryons and pseudoscalar mesons that are slightly different from what is obtained with the standard (i.e., nonchiral) Lagrangian. Also, the coupling constants of the pseudoscalar mesons are directly related to those of the axial-vector mesons. Finally, the imposed symmetry allows us to fit all the octet meson masses with only two parameters for each octet. \section{Single-meson vertices} \label{sec:single-meson} We will first look at the single-meson baryon-baryon coupling constants, i.e., the interaction terms linear in the meson fields. Let us drop for a moment the Lorentz character of the interaction vertices ($\openone_{4}$ in spinor space for scalar mesons, $\gamma_{5}\gamma_{\mu}\partial^{\mu}$ for pseudoscalar mesons, $\gamma_{\mu}$ and $\sigma_{\mu\nu}\partial^{\nu}$ for vector mesons, and $\gamma_{5}\gamma_{\mu}$ for axial-vector mesons) and take as an example the nonet of pseudoscalar mesons. 
The derivative (pseudovector-coupled) pseudoscalar-meson interaction Lagrangian to lowest order in the fields is then of the form \begin{equation} {\cal L}_{\rm pv}={\cal L}_{\rm pv}^{\{1\}} +{\cal L}_{\rm pv}^{\{8\}}, \end{equation} where the $S$-type coupling in Eq.~(\ref{LIsu3}) gives the singlet interaction Lagrangian \begin{equation} m_{\pi}{\cal L}_{\rm pv}^{\{1\}}= -f_{N\!N\eta_{0}}(\overline{N}N)\eta_{0} -f_{\Xi\Xi\eta_{0}}(\overline{\Xi}\Xi)\eta_{0} -f_{\Sigma\Sigma\eta_{0}}(\overline{\bbox{\Sigma}}\!\cdot\! \bbox{\Sigma})\eta_{0} -f_{\Lambda\Lambda\eta_{0}}(\overline{\Lambda}\Lambda)\eta_{0}, \label{Lbar1} \end{equation} with the (derivative) pseudovector coupling constants \begin{equation} f_{N\!N\eta_{0}}=f_{\Xi\Xi \eta_{0}}=f_{\Sigma\Sigma \eta_{0}}= f_{\Lambda\Lambda \eta_{0}}=f^{\rm sin}_{\rm pv}, \label{gsin} \end{equation} and where we introduced the charged-pion mass as a scaling mass to make the pseudovector coupling $f$ dimensionless. The interaction Lagrangian for the meson octet is obtained by evaluating the $F$- and $D$-type couplings in Eq.~(\ref{LIsu3}), and can be written as \begin{eqnarray} m_{\pi}{\cal L}_{\rm pv}^{\{8\}} &=& -f_{N\!N\pi}(\overline{N}\bbox{\tau}N)\!\cdot\!\bbox{\pi} -f_{\Xi\Xi\pi}(\overline{\Xi}\bbox{\tau}\Xi)\!\cdot\!\bbox{\pi} -f_{\Lambda\Sigma\pi}(\overline{\Lambda}\bbox{\Sigma}+ \overline{\bbox{\Sigma}}\Lambda)\!\cdot\!\bbox{\pi} +if_{\Sigma\Sigma\pi}(\overline{\bbox{\Sigma}}\!\times\!\bbox{\Sigma}) \!\cdot\!\bbox{\pi} \nonumber\\ &&-f_{N\!N\eta_{8}}(\overline{N}N)\eta_{8} -f_{\Xi\Xi\eta_{8}}(\overline{\Xi}\Xi)\eta_{8} -f_{\Lambda\Lambda\eta_{8}}(\overline{\Lambda}\Lambda)\eta_{8} -f_{\Sigma\Sigma\eta_{8}}(\overline{\bbox{\Sigma}}\!\cdot\! 
\bbox{\Sigma})\eta_{8} \nonumber\\ &&-f_{\Lambda N\!K}\left[(\overline{N}K)\Lambda +\overline{\Lambda}(\overline{K}N)\right] -f_{\Xi\Lambda K}\left[(\overline{\Xi}K_{c})\Lambda +\overline{\Lambda}(\overline{K_{c}}\Xi)\right] \nonumber\\ &&-f_{\Sigma N\!K}\left[\overline{\bbox{\Sigma}}\!\cdot\! (\overline{K}\bbox{\tau}N)+(\overline{N}\bbox{\tau}K) \!\cdot\!\bbox{\Sigma}\right] -f_{\Xi\Sigma K}\left[\overline{\bbox{\Sigma}}\!\cdot\! (\overline{K_{c}}\bbox{\tau}\Xi) +(\overline{\Xi}\bbox{\tau}K_{c})\!\cdot\!\bbox{\Sigma}\right]. \label{Lbar8} \end{eqnarray} Here we introduced the doublets \begin{equation} N=\left(\begin{array}{c} p \\ n \end{array} \right), \ \ \ \Xi=\left(\begin{array}{c} \Xi^{0} \\ \Xi^{-} \end{array} \right), \ \ \ K=\left(\begin{array}{c} K^{+} \\ K^{0} \end{array} \right), \ \ \ K_{c}=\left(\begin{array}{c} \overline{K^{0}} \\ -K^{-} \end{array} \right), \label{doublets} \end{equation} and $\bbox{\Sigma}$ and $\bbox{\pi}$ are isovectors with phases chosen~\cite{Swa63} such that \begin{equation} \bbox{\Sigma}\!\cdot\!\bbox{\pi} = \Sigma^{+}\pi^{-} +\Sigma^{0}\pi^{0}+\Sigma^{-}\pi^{+}. \end{equation} The octet coupling constants are given by the following expressions ($f\equiv f^{\rm oct}_{\rm pv}$) \begin{equation} \begin{array}{lll} f_{N\!N\pi}=f, \ \ \ & f_{N\!N\eta_{8}}=\frac{1}{\sqrt{3}}\,f(4\alpha-1), \ \ \ & f_{\Lambda N\!K}=-\frac{1}{\sqrt{3}}\,f(1+2\alpha), \\[2mm] f_{\Xi\Xi\pi}=-f(1-2\alpha), \ \ \ & f_{\Xi\Xi\eta_{8}}=-\frac{1}{\sqrt{3}}\,f(1+2\alpha), \ \ \ & f_{\Xi\Lambda K}=-\frac{1}{\sqrt{3}}\,f(4\alpha-1), \\[2mm] f_{\Lambda\Sigma\pi}=\frac{2}{\sqrt{3}}\,f(1-\alpha), \ \ \ & f_{\Lambda\Lambda\eta_{8}}=-\frac{2}{\sqrt{3}}\,f(1-\alpha), \ \ \ & f_{\Sigma N\!K}=f(1-2\alpha), \\[2mm] f_{\Sigma\Sigma\pi}=2f\alpha, \ \ \ & f_{\Sigma\Sigma\eta_{8}}=\frac{2}{\sqrt{3}}\,f(1-\alpha), \ \ \ & f_{\Xi\Sigma K}=f. 
\end{array} \label{goct} \end{equation} Similar relations (without the scaling mass $m_{\pi}$) are found for the coupling constants of the scalar, vector, and axial-vector mesons. Note, however, that for the pseudoscalar mesons the relations (\ref{gsin}) and (\ref{goct}) should be slightly modified due to the difference in renormalization factors $Z_{\pi}$, $Z_{K}$, $Z_{\eta}$, and $Z_{\eta'}$ (see Sec.~\ref{sec:su3}). For each type of meson there are only four parameters: the singlet coupling, the octet coupling, the $F/(F+D)$ ratio, and the mixing angle to generate the physical isoscalar mesons from the pure octet and singlet fields. In most cases we can impose theoretical and experimental constraints on these parameters. This will be discussed next. As mentioned before, the axial-vector mesons are very heavy, and hence are not expected to play an important role in low-energy potential models, but we will still discuss them here for reasons of completeness. \subsection{Scalar mesons} \label{subsec:scalar} Because the existence of a nonet of scalar mesons with masses below 1 GeV is still controversial, the constraints on the SU(3) parameters solely depend on the particular theoretical model one wants to use to describe the scalar mesons. As a matter of fact, it is not at all clear whether it is indeed valid to impose an SU(3) symmetry. However, in order to limit the number of free parameters, we will here adopt the standard SU(3) relations and assume that the physical $\varepsilon(760)$ and $f_{0}(980)$ mesons are admixtures of the pure octet $\sigma_{8}$ and pure singlet $\sigma_{0}$ fields, in terms of the scalar mixing angle $\theta_{S}$, \begin{eqnarray} |f_{0}\rangle &=& \sin\theta_{S}\,|\sigma_{8}\rangle +\cos\theta_{S}\,|\sigma_{0}\rangle, \nonumber\\ |\varepsilon\rangle &=& \cos\theta_{S}\,|\sigma_{8}\rangle -\sin\theta_{S}\,|\sigma_{0}\rangle.
\label{mixs} \end{eqnarray} A possible value for the mixing angle is obtained by assuming that the scalar mesons are $\bar{q}^{2}q^{2}$ states~\cite{Jaf77}, and that the $\varepsilon(760)$ does not contain any strange quarks (hence, its low mass). This implies an ideal mixing angle $\theta_{S}=35.3^{\circ}$. A value for $\alpha_{S}$ can be determined by assuming that the baryon masses are generated by the nonvanishing vacuum expectation value of the $\sigma_{0}$ and $\sigma_{8}$ scalar fields. Defining the vacuum expectation values $f_{1}$ and $f_{2}$ as in Eq.~(\ref{VEV}), we have \begin{equation} \sqrt{3}\langle \sigma_{0}\rangle = {\textstyle\sqrt{\frac{1}{2}}}\,(2f_{1}+f_{2}), \ \ \ \sqrt{3}\langle \sigma_{8}\rangle = (f_{1}-f_{2}), \end{equation} or, assuming the ideal mixing $\theta_{S}=35.3^{\circ}$, \begin{equation} f_{1}=\langle f_{0}(980)\rangle, \ \ \ f_{2}=-\sqrt{2}\langle \varepsilon(760)\rangle. \end{equation} {}From the interaction Lagrangians (\ref{Lbar1}) and (\ref{Lbar8}) for the scalar fields we find the following relations for the baryon masses, \begin{eqnarray} M_{N} &=& M_{0}-{\textstyle\frac{1}{3}}g^{\rm oct}_{\rm sc} (4\alpha_{S}-1)(f_{2}-f_{1}), \nonumber\\ M_{\Lambda} &=& M_{0}-{\textstyle\frac{2}{3}}g^{\rm oct}_{\rm sc} (\alpha_{S}-1)(f_{2}-f_{1}), \nonumber\\ M_{\Sigma} &=& M_{0}+{\textstyle\frac{2}{3}}g^{\rm oct}_{\rm sc} (\alpha_{S}-1)(f_{2}-f_{1}), \nonumber\\ M_{\Xi} &=& M_{0}+{\textstyle\frac{1}{3}}g^{\rm oct}_{\rm sc} (2\alpha_{S}+1)(f_{2}-f_{1}), \end{eqnarray} with $M_{0}=g^{\rm sin}_{\rm sc}\,(2f_{1}+f_{2})/\sqrt{6}= {\textstyle\frac{1}{2}}(M_{\Lambda}+M_{\Sigma})$. According to the relations given above, these masses satisfy the equality $2M_{N}+2M_{\Xi}=3M_{\Lambda}+M_{\Sigma}$, which is indeed approximately true experimentally (4516 MeV versus 4537 MeV). Solving for $\alpha_{S}$, we find $\alpha_{S}=1.42$ and $g^{\rm oct}_{\rm sc}(f_{2}-f_{1})=136.5$ MeV. 
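These numbers are easy to reproduce. The following sketch (not part of the original derivation) solves the $N$ and $\Lambda$ mass formulas for $\alpha_{S}$ and $g^{\rm oct}_{\rm sc}(f_{2}-f_{1})$, assuming present-day isospin-averaged octet masses, which differ slightly from the inputs used in the text:

```python
# Isospin-averaged baryon octet masses in MeV (assumed inputs;
# slightly different values were used in the text).
M_N, M_Lam, M_Sig, M_Xi = 938.9, 1115.7, 1193.1, 1318.3

# Gell-Mann--Okubo-type check: 2M_N + 2M_Xi versus 3M_Lam + M_Sig
lhs = 2 * (M_N + M_Xi)           # ~4514 MeV
rhs = 3 * M_Lam + M_Sig          # ~4540 MeV

# With M_0 = (M_Lam + M_Sig)/2, the N and Lambda mass formulas
#   M_0 - M_N   = (1/3) g (4 alpha_S - 1) (f2 - f1)
#   M_0 - M_Lam = (2/3) g (alpha_S - 1) (f2 - f1)
# form a linear system in g*(f2 - f1) and alpha_S.
M0 = 0.5 * (M_Lam + M_Sig)
x = 1.5 * (M0 - M_Lam)                 # g (f2 - f1) (alpha_S - 1)
g_df = (3 * (M0 - M_N) - 4 * x) / 3    # g (f2 - f1), using 4a-1 = 4(a-1)+3
alpha_S = (x + g_df) / g_df            # ~1.42, as quoted in the text
```

With these mass inputs the sketch gives $\alpha_{S}\simeq1.42$ and $g^{\rm oct}_{\rm sc}(f_{2}-f_{1})\simeq138$ MeV, close to the values quoted above.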
Finally, using the estimates for $f_{1}$ and $f_{2}$ as given in Sec.~\ref{sec:su3}, we find $g^{\rm sin}_{\rm sc}=3.8$ and $g^{\rm oct}_{\rm sc}=2.0$. \subsection{Pseudoscalar mesons} \label{subsec:pseudoscalar} The mixing angle $\theta_{PS}$ for the pseudoscalar mesons is defined by \begin{eqnarray} |\eta \rangle &=& \cos\theta_{PS}\,|\eta_{8}\rangle -\sin\theta_{PS}\,|\eta_{0}\rangle, \nonumber\\ |\eta'\rangle &=& \sin\theta_{PS}\,|\eta_{8}\rangle +\cos\theta_{PS}\,|\eta_{0}\rangle. \label{mixps} \end{eqnarray} The linear and quadratic Gell-Mann--Okubo mass formulas give~\cite{PDG96} $\theta_{PS}\approx-23^{\circ}$ and $\theta_{PS}\approx-10^{\circ}$, respectively. The current experimental evidence, however, seems to favor~\cite{Gil87} $\theta_{PS}\approx-20^{\circ}$. The axial-vector current $F/D$ ratio, obtained from the Cabibbo theory of semileptonic decays of baryons, gives~\cite{Clo93} 0.575$\pm$0.0165, or $\alpha_{PS}=0.365\pm0.007$. The $\pi N\!N$ pseudovector coupling constant is obtained from the Goldberger-Treiman relation~\cite{Gol58}, which gives $f^{\rm oct}_{\rm pv}=f_{N\!N\pi}=g_{A}(0)m_{\pi}/2f_{\pi}= 0.952\pm0.003$. It can also be extracted from multienergy partial-wave analyses of $pp$, $np$, and $\bar{p}p$ scattering data~\cite{Sto93b}, which yields for the coupling constant at the pion pole $f_{N\!N\pi}^{2}/4\pi=0.0745\pm0.0006$. This latter result has to be extrapolated to $t=0$ before comparing it with the Goldberger-Treiman result. Finally, we can also give an estimate for the singlet pseudovector coupling constant. The estimate is based on the fact that we can rewrite the $\eta_{8}$ and $\eta_{0}$ in a nonstrange-strange basis, rather than the standard $\{u,d,s\}$ quark basis, and then assume that the purely strange-quark state does not couple to the nucleon. 
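As a brief numerical aside, the octet relations (\ref{goct}), together with the empirical values $f^{\rm oct}_{\rm pv}\simeq0.952$ and $\alpha_{PS}\simeq0.365$ quoted above, already fix all physical pseudovector couplings. A minimal sketch (renormalization factors set to unity for simplicity, and assuming $g_{A}=1.26$ and $f_{\pi}=92.4$ MeV for the Goldberger-Treiman check):

```python
import math

f, alpha = 0.952, 0.365   # f_NNpi from Goldberger-Treiman; F/(F+D) ratio
s3 = math.sqrt(3.0)

# Goldberger-Treiman relation, f = g_A(0) m_pi / (2 f_pi),
# with assumed g_A = 1.26, m_pi = 139.57 MeV, f_pi = 92.4 MeV
f_GT = 1.26 * 139.57 / (2 * 92.4)         # ~0.952

# A few of the SU(3) octet relations of Eq. (goct), with Z factors = 1
f_NNpi     = f                             # 0.952
f_LamNK    = -(1 + 2 * alpha) * f / s3     # ~-0.951
f_SigNK    = (1 - 2 * alpha) * f           # ~0.257
f_NNeta8   = (4 * alpha - 1) * f / s3      # ~0.253
f_LamSigpi = 2 * (1 - alpha) * f / s3      # ~0.698
```

The spread of these values illustrates how strongly the $F/(F+D)$ ratio differentiates the strangeness-changing vertices from $f_{N\!N\pi}$.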
To be specific, in the standard quark basis \begin{eqnarray} |\eta_{8}\rangle &=& {\textstyle\sqrt{\frac{1}{6}}} |\bar{u}u+\bar{d}d-2\bar{s}s\rangle, \nonumber\\ |\eta_{0}\rangle &=& {\textstyle\sqrt{\frac{1}{3}}} |\bar{u}u+\bar{d}d+\bar{s}s\rangle, \end{eqnarray} and so the purely strange-quark state can be expressed as $|S\rangle= |\bar{s}s\rangle=(|\eta_{0}\rangle-\sqrt{2}|\eta_{8}\rangle)/\sqrt{3}$. This state does not couple to the nucleons provided that $f_{N\!N\eta_{0}}=\sqrt{2}\,f_{N\!N\eta_{8}}$ or, equivalently, $f^{\rm sin}_{\rm pv}={\textstyle\sqrt{\frac{2}{3}}}(4\alpha_{PS}-1) f^{\rm oct}_{\rm pv}$. \subsection{Vector mesons} \label{subsec:vector} The mixing angle $\theta_{V}$ for the vector mesons is defined by \begin{eqnarray} |\omega\rangle &=& \sin\theta_{V}\,|\omega_{8}\rangle +\cos\theta_{V}\,|\omega_{0}\rangle, \nonumber\\ |\phi \rangle &=& \cos\theta_{V}\,|\omega_{8}\rangle -\sin\theta_{V}\,|\omega_{0}\rangle. \label{mixv} \end{eqnarray} Ideal mixing (i.e., the $\phi$ meson is a pure $\bar{s}s$ state) gives $\theta_{V}=35.3^{\circ}$, which is very close to the experimental value ($\theta_{V}\approx35^{\circ}$) and the values found for the linear ($\theta_{V}\approx36^{\circ}$) and quadratic ($\theta_{V}\approx39^{\circ}$) Gell-Mann--Okubo mass formulas~\cite{PDG96}. As mentioned earlier, the transformation property of the connection $\Gamma_{\mu}$ requires that $\alpha_{V}^{e}=1$ for the electric coupling of the vector mesons, which is the universality assumption~\cite{Sak60}. The $\rho N\!N$ coupling constant is obtained from the electromagnetic decay $\rho^{0}\rightarrow e^{+}e^{-}$. The vector-meson dominance (VMD) hypothesis assumes that this decay proceeds via the photon and that the $\rho^{0}$--$\gamma$ coupling is proportional to the $\rho N\!N$ coupling~\cite{Sak60}. The decay width~\cite{PDG96} $\Gamma(\rho^{0}\rightarrow e^{+}e^{-})=6.77\pm0.32$ keV then gives $g^{\rm oct}_{\rm vc}=g_{N\!N\rho}=2.52\pm0.06$. 
Note the factor of 2 difference between the definition of $g^{\rm oct}_{\rm vc}$ in Eq.~(\ref{LIsu3}) and $g_{V}$ in Eq.~(\ref{phivc}). If we assume that the $\phi$ meson is a pure $\bar{s}s$ state, which does not couple to the nucleons (i.e., an ideal mixing angle $\theta_{V}=35.3^{\circ}$), then we find for the singlet coupling $g^{\rm sin}_{\rm vc}=\sqrt{6}g^{\rm oct}_{\rm vc}$ or, equivalently, $g_{N\!N\omega}=3g_{N\!N\rho}$. Perhaps a better estimate is obtained from the decay width~\cite{PDG96} $\Gamma(\omega\rightarrow e^{+}e^{-}) =0.60\pm0.02$ keV, which suggests $g_{N\!N\omega}/g_{N\!N\rho}= \left[(m_{\omega}\Gamma_{\rho^{0}\rightarrow e^{+}e^{-}})/(m_{\rho} \Gamma_{\omega\rightarrow e^{+}e^{-}})\right]^{1/2}=3.4\pm0.1$. Keeping only the leading-order contribution proportional to $\sigma_{\mu\nu}\partial^{\nu}\rho^{\mu}$, the magnetic coupling of the vector mesons is defined as $f^{\rm oct}_{\rm vc}/2{\cal M}$, where the scaling mass ${\cal M}$, taken to be the proton mass, is included to make $f^{\rm oct}_{\rm vc}$ dimensionless. Following Ref.~\cite{Sak65}, the SU(6) result for $\alpha_{V}^{m}$ can be expressed as $\alpha_{V}^{m}=(4M_{8}-m_{v8})/(10M_{8}+2m_{v8})$, where $M_{8}$ denotes the average mass in the baryon octet and $m_{v8}$ the average mass in the vector-meson octet. This gives $\alpha_{V}^{m}\approx0.28$ for the relativistic SU(6) case, while $\alpha_{V}^{m}={\textstyle\frac{2}{5}}$ for the static SU(6) case. Again applying the VMD hypothesis and assuming that the lowest-mass vector mesons ($\rho$, $\omega$, $\phi$) saturate the nucleon electromagnetic form factors, the magnetic couplings are given in terms of the anomalous magnetic moments of the proton and neutron. This gives $(f/g)_{N\!N\rho}=\kappa_{p}-\kappa_{n}=3.71$, and the isoscalar values are expected to be close to $(f/g)_{N\!N\omega} +(f/g)_{N\!N\phi}\approx\kappa_{p}+\kappa_{n}=-0.12$. 
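The vector-meson estimates of this subsection follow from simple arithmetic. A short sketch (assuming the masses $m_{\rho}=768.5$ MeV and $m_{\omega}=781.9$ MeV, and the nucleon anomalous moments $\kappa_{p}=1.793$, $\kappa_{n}=-1.913$):

```python
import math

# Leptonic widths (keV) as quoted in the text; masses (MeV) are assumed
gamma_rho, gamma_omega = 6.77, 0.60
m_rho, m_omega = 768.5, 781.9

# g_NNomega / g_NNrho from the e+e- decay widths: ~3.4
ratio = math.sqrt((m_omega * gamma_rho) / (m_rho * gamma_omega))

# VMD magnetic couplings from the nucleon anomalous magnetic moments
kappa_p, kappa_n = 1.793, -1.913
f_over_g_rho = kappa_p - kappa_n        # isovector (f/g)_NNrho ~ 3.71
f_over_g_isoscalar = kappa_p + kappa_n  # (f/g)_NNomega + (f/g)_NNphi ~ -0.12
```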
\subsection{Axial-vector mesons} \label{subsec:axial-vector} We follow Ref.~\cite{Coo96} in estimating the value for the mixing angle $\theta_{A}$, defined by \begin{eqnarray} |f_{1}(1285)\rangle &=& \cos\theta_{A}\,|a_{8}\rangle -\sin\theta_{A}\,|a_{0}\rangle, \nonumber\\ |f_{1}(1420)\rangle &=& \sin\theta_{A}\,|a_{8}\rangle +\cos\theta_{A}\,|a_{0}\rangle. \label{mixax} \end{eqnarray} For that purpose we rewrite the $f_{1}(1285)$ and $f_{1}(1420)$ mesons in the nonstrange-strange basis, \begin{eqnarray} |f_{1}(1285)\rangle &=& \cos\phi_{A}\,|A_{NS}\rangle -\sin\phi_{A}\,|A_{S}\rangle, \nonumber\\ |f_{1}(1420)\rangle &=& \sin\phi_{A}\,|A_{NS}\rangle +\cos\phi_{A}\,|A_{S}\rangle. \end{eqnarray} The observed decay widths give a mixing angle $\phi_{A}\approx 12^{\circ}$. In the singlet-octet basis the mixing angle is then given by $\theta_{A}=\phi_{A}-\arctan\sqrt{2}=-42.7^{\circ}$. The axial-vector coupling constants are closely related to the pseudovector coupling constants due to the mixing between the axial-vector and pseudoscalar fields; see the end of Sec.~\ref{sec:su2}. The redefinition of the axial-vector fields and the renormalization of the pseudoscalar fields gives the relation between the coupling constants as \begin{equation} g^{\rm oct}_{\rm ax} = g^{\rm oct}_{\rm vc}\,\frac{g_{A}(0)}{Z_{\pi}} = g^{\rm oct}_{\rm vc} g_{A}(0) \left(1-\frac{g_{V}^{2}f_{1}^{2}}{m_{a_{1}}^{2}}\right)^{-1}. \end{equation} The above relation can be recast into the form \begin{equation} g_{N\!Na_{1}}=\frac{m_{a_{1}}}{m_{\pi}}f_{N\!N\pi}\, \sqrt{\frac{1-Z_{\pi}}{Z_{\pi}}}, \label{ga1tofpi} \end{equation} where the square root equals 1 for $Z_{\pi}=1/2$. The latter choice gives the familiar result~\cite{Wei67,Wes67,Sch67} $g_{N\!Na_{1}}= (m_{a_{1}}/m_{\pi})f_{N\!N\pi}\approx8.4$. But this assumes $m_{a_{1}} =\sqrt{2}m_{\rho}$, which is not supported by experiment. 
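Equation (\ref{ga1tofpi}) is straightforward to evaluate. A minimal sketch (assuming $m_{a_{1}}=1230$ MeV and $f_{N\!N\pi}=0.952$) reproduces the values quoted for the choices of $Z_{\pi}$ considered in this subsection:

```python
import math

m_a1, m_pi, f_NNpi = 1230.0, 139.57, 0.952  # assumed masses (MeV) and GT coupling

def g_NNa1(z_pi):
    """Axial-vector coupling from Eq. (ga1tofpi)."""
    return (m_a1 / m_pi) * f_NNpi * math.sqrt((1 - z_pi) / z_pi)

g_half  = g_NNa1(0.5)    # ~8.4: the 'familiar' result, where the root equals 1
g_small = g_NNa1(0.173)  # ~18 : the (Z_pi, c) = (0.173, -0.131) choice
g_large = g_NNa1(0.827)  # ~3.8: the alternative (0.827, 1.25) choice
```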
With our previous choice $(Z_{\pi},c)=(0.173,-0.131)$, the axial-vector coupling constant comes out to be much larger, $g^{\rm oct}_{\rm ax}\approx18$. A more moderate value is obtained by choosing the alternative solution to Eq.~(\ref{Zpisolve}), $(Z_{\pi},c)=(0.827,1.25)$, which gives $g^{\rm oct}_{\rm ax}\approx3.8$. An estimate for the $a_{1}N\!N$ coupling constant from experiment is based on the idea of axial-vector meson dominance, which relates it to the axial-vector coupling constant of the weak interaction, $g_{A}(0)$, and the decay constant $f_{a_{1}}$, as defined by the isovector $a_{1}$-to-vacuum matrix element of the hadronic axial-vector current. With our definition (\ref{LIsu3}) this gives~\cite{Coo96} $g^{\rm oct}_{\rm ax}=4.7\pm0.6$. Hence, we find contradictory results: the larger $|c|$ solution gives reasonable agreement with the empirical $g_{N\!Na_{1}}$ coupling constant, whereas the smaller $|c|$ solution gives better phenomenology for $a_{1}$ decay widths and the pion charge radius~\cite{Ko94}. This needs to be further explored. \section{Double-meson vertices} \label{sec:double-meson} \subsection{Vector double-meson vertices} The vector double-meson vertices are obtained from an expansion of Eq.~(\ref{phivc}). The expansion to second order in the meson fields is given by \begin{equation} \Phi_{\rm vc} = \frac{1}{\sqrt{2}}\,\gamma^{\mu}(\lambda\rho_{\mu}) -\frac{i(1-2g_{V}f_{1}h)}{4\sqrt{2}g_{V}f_{1}^{2}}\,\gamma^{\mu} \left[(\lambda\pi),\partial_{\mu}(\lambda\pi)\right] -\frac{i}{2\sqrt{2}f_{1}}\,\gamma^{\mu} \left[(\lambda\pi),(\lambda A_{\mu})\right] + \ldots \end{equation} Hence, it should be obvious that the pair interaction Lagrangians can be obtained by a simple replacement of the meson fields in the single-meson interaction Lagrangians. 
For example, the two-pion interaction Lagrangian is obtained from the vector version of Eq.~(\ref{Lbar8}) with the replacement \begin{equation} \gamma^{\mu}\bbox{\rho}_{\mu} \longrightarrow \gamma^{\mu}(\bbox{\pi}\times\partial_{\mu}\bbox{\pi}), \end{equation} which gives \begin{eqnarray} m^{2}_{\pi}{\cal L}_{(\pi\pi)} &=& -g_{N\!N(\pi\pi)}(\overline{N}\gamma^{\mu}\bbox{\tau}N) \!\cdot\!(\bbox{\pi}\!\times\!\partial_{\mu}\bbox{\pi}) -g_{\Xi\Xi(\pi\pi)}(\overline{\Xi}\gamma^{\mu}\bbox{\tau}\Xi) \!\cdot\!(\bbox{\pi}\!\times\!\partial_{\mu}\bbox{\pi}) \nonumber\\ && -g_{\Lambda\Lambda(\pi\pi)} (\overline{\Lambda}\gamma^{\mu}\bbox{\Sigma} +\overline{\bbox{\Sigma}}\gamma^{\mu}\Lambda) \!\cdot\!(\bbox{\pi}\!\times\!\partial_{\mu}\bbox{\pi}) +ig_{\Sigma\Sigma(\pi\pi)}(\overline{\bbox{\Sigma}}\times \gamma^{\mu}\bbox{\Sigma})\!\cdot\! (\bbox{\pi}\!\times\!\partial_{\mu}\bbox{\pi}). \end{eqnarray} Here we introduced the square of the charged-pion mass to make the coupling constants dimensionless. Substituting the appropriate renormalization factors and ${\textstyle\frac{1}{2}}g_{V}=g_{N\!N\rho}$, the coupling constants are given by \begin{equation} g_{B'B(\pi\pi)} = \frac{m_{\pi}^{2}}{4f_{1}^{2}}\, \frac{2Z_{\pi}-1}{Z_{\pi}}\,\frac{g_{B'B\rho}}{g_{N\!N\rho}}, \end{equation} for $B'B=N\!N$, $\Xi\Xi$, $\Lambda\Sigma$, and $\Sigma\Sigma$. Note that by choosing~\cite{Wei67,Wes67,Sch67} $Z_{\pi}={\textstyle\frac{1}{2}}$, i.e., making the assumption that $m_{a_{1}}=\sqrt{2}m_{\rho}$, all the $(\pi\pi)$ pair interactions are absent. This agrees with the assumption of vector meson dominance which in its most stringent form states that those multiple-meson interactions which can arise through the exchange of one single vector meson should not also occur directly~\cite{Wes67}. However, experimentally $m_{a_{1}}\neq\sqrt{2}m_{\rho}$, and so here the $(\pi\pi)$ pair interactions are still present in the interaction Lagrangian. 
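The $Z_{\pi}$ dependence of the $(\pi\pi)$ pair coupling is easily quantified. A rough sketch (assuming $f_{1}\approx225$ MeV for the vacuum expectation value, a value implied by the scalar coupling constants quoted in Sec.~\ref{subsec:scalar}, and taking $g_{B'B\rho}=g_{N\!N\rho}$):

```python
# Relative strength of the (pi pi) pair coupling,
#   g ~ (m_pi^2 / 4 f_1^2) * (2 Z_pi - 1) / Z_pi,
# in units of g_{B'Brho} / g_{NNrho}.
m_pi = 139.57   # charged-pion mass (MeV)
f_1 = 225.0     # assumed vacuum expectation value (MeV)

def g_pipi(z_pi):
    return (m_pi**2 / (4 * f_1**2)) * (2 * z_pi - 1) / z_pi

g_vmd = g_pipi(0.5)     # exactly 0: strict vector-meson dominance
g_phys = g_pipi(0.173)  # nonzero, since experimentally m_a1 != sqrt(2) m_rho
```

The sketch makes explicit that the pair coupling vanishes identically at $Z_{\pi}=\frac{1}{2}$ and changes sign away from it.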
We can also view the $(\pi\pi)$ pair exchange as proceeding via $\rho$ exchange, where the $\rho$-meson propagator is approximated by a constant (which should be adequate for a heavy meson at low energies). Comparing the coupling constant for this and any other pair vertex with the effective coupling constant as obtained in such a meson saturation picture then gives an indication of how good this picture really is. The $(\pi K)$ and $(\eta_{8}K)$ combinations occur at the same place in the $\Phi_{\rm vc}$ matrix as the $K^{\ast}$ fields. Therefore, the $(\pi K)$ and $(\eta_{8}K)$ interactions are obtained by substituting, respectively, \begin{eqnarray} \gamma^{\mu}K^{\ast}_{\mu} &\longrightarrow& -i\gamma^{\mu}\bbox{\tau}\!\cdot\!(\bbox{\pi}\partial_{\mu}K -K\partial_{\mu}\bbox{\pi}), \nonumber\\ \gamma^{\mu}K^{\ast}_{\mu} &\longrightarrow& -i\gamma^{\mu} (\eta_{8}\partial_{\mu}K-K\partial_{\mu}\eta_{8}). \end{eqnarray} This gives the coupling constants \begin{eqnarray} g_{B'B(\pi K)} &=& \frac{m_{\pi}^{2}}{4\sqrt{2}f_{1}^{2}}\, \frac{2Z_{K\pi}-1}{Z_{K\pi}}\, \frac{g_{B'BK^{\ast}}}{g_{N\!N\rho}}, \nonumber\\ g_{B'B(\eta_{8}K)} &=& \frac{m_{\pi}^{2}}{4\sqrt{2}f_{1}^{2}}\, \frac{2Z_{K\eta_{8}}-1}{Z_{K\eta_{8}}}\, \frac{\sqrt{3}g_{B'BK^{\ast}}}{g_{N\!N\rho}}, \end{eqnarray} for $B'B=\Lambda N$, $\Xi\Lambda$, $\Sigma N$, and $\Xi\Sigma$. Here we defined averaged renormalization constants $Z^{2}_{ab}= Z_{a}Z_{b}$ to simplify the expressions. In the $\Phi_{\rm vc}$ matrix, the two-kaon interactions occur on the diagonal and on the same off-diagonal places as the $\bbox{\rho}$ fields. We can therefore split up the contributions into two parts.
One part behaves like an isoscalar octet field, and so we have the replacement \begin{equation} \gamma^{\mu}\omega_{8\mu} \longrightarrow i\gamma^{\mu}(\overline{K}\partial_{\mu}K- \overline{K_{c}}\partial_{\mu}K_{c}), \end{equation} and the coupling constants \begin{equation} g_{BB(KK)} = \frac{m_{\pi}^{2}}{8f_{1}^{2}}\, \frac{2Z_{K}-1}{Z_{K}}\, \frac{\sqrt{3}g_{BB\omega_{8}}}{g_{N\!N\rho}}. \end{equation} The remaining part behaves like an isovector field ${\bf K}_{2\mu}$, which can be written as \begin{equation} {\bf K}_{2\mu}=(\overline{K}\bbox{\tau}\partial_{\mu}K) +(\overline{K_{c}}\bbox{\tau}\partial_{\mu}K_{c}). \label{KKvecmu} \end{equation} The Lagrangian for this type of two-kaon interaction is obtained by making the substitution $\bbox{\rho}_{\mu}\rightarrow i{\bf K}_{2\mu}$, which gives the coupling constants \begin{equation} g_{B'B(K\tau K)} = \frac{m_{\pi}^{2}}{8f_{1}^{2}}\, \frac{2Z_{K}-1}{Z_{K}}\,\frac{g_{B'B\rho}}{g_{N\!N\rho}}, \end{equation} where again $B'B=N\!N$, $\Xi\Xi$, $\Lambda\Sigma$, and $\Sigma\Sigma$. The double-meson interactions consisting of a pseudoscalar and an axial-vector meson are analogously obtained by the substitutions \begin{eqnarray} \gamma^{\mu}\bbox{\rho}_{\mu} &\longrightarrow& \gamma^{\mu}(\bbox{\pi}\!\times\!{\bf A}_{\mu}), \nonumber\\ \gamma^{\mu}K^{\ast}_{\mu} &\longrightarrow& -i\gamma^{\mu}\bbox{\tau}\!\cdot\!(\bbox{\pi}K_{1\mu} -K{\bf A}_{\mu}), \nonumber\\ \gamma^{\mu}K^{\ast}_{\mu} &\longrightarrow& -i\gamma^{\mu} (\eta_{8}K_{1\mu}-Ka_{8\mu}), \nonumber\\ \gamma^{\mu}\omega_{8\mu} &\longrightarrow& i\gamma^{\mu}(\overline{K}K_{1\mu} -\overline{K_{c}}K_{1\mu\,c}), \nonumber\\ \gamma^{\mu}\bbox{\rho}_{\mu} &\longrightarrow& i\gamma^{\mu}(\overline{K}\bbox{\tau}K_{1\mu} +\overline{K_{c}}\bbox{\tau}K_{1\mu\,c}).
\end{eqnarray} Including averaged renormalization constants, the coupling constants are given by \begin{eqnarray} g_{B'B(\pi a_{1})} &=& \frac{m_{\pi}}{\sqrt{Z_{\pi}}f_{1}}\, g_{B'B\rho}, \nonumber\\ g_{B'B(\pi K_{1})} &=&-g_{B'B(Ka_{1})} = \frac{m_{\pi}}{\sqrt{Z_{K\pi}}f_{1}}\, {\textstyle\sqrt{\frac{1}{2}}}g_{B'BK^{\ast}}, \nonumber\\ g_{B'B(\eta_{8}K_{1})} &=&-g_{B'B(Ka_{8})} = \frac{m_{\pi}}{\sqrt{Z_{K\eta_{8}}}f_{1}}\, {\textstyle\sqrt{\frac{3}{2}}}g_{B'BK^{\ast}}, \nonumber\\ g_{B'B(KK_{1})} &=& \frac{m_{\pi}}{2\sqrt{Z_{K}}f_{1}}\, \sqrt{3}g_{B'B\omega_{8}}, \nonumber\\ g_{B'B(K\tau K_{1})} &=& \frac{m_{\pi}}{2\sqrt{Z_{K}}f_{1}}\, g_{B'B\rho}, \end{eqnarray} with the relevant substitutions for the $B'B$ baryon combinations. \subsection{Axial-vector double-meson vertices} The expansion of Eq.~(\ref{phiax}) to second order in the meson fields gives \begin{equation} \Phi_{\rm ax} = \frac{1}{\sqrt{2}}\,\gamma^{5}\gamma^{\mu}(\lambda A_{\mu}) +\frac{1-g_{V}f_{1}h}{\sqrt{2}g_{V}f_{1}}\, \gamma^{5}\gamma^{\mu}\partial_{\mu}(\lambda\pi) -\frac{i(1-g_{V}f_{1}h)}{2\sqrt{2}f_{1}}\,\gamma^{5}\gamma^{\mu} \left[(\lambda\pi),(\lambda\rho_{\mu})\right] + \ldots \end{equation} Completely analogous to the previous section, we make the substitutions \begin{eqnarray} \gamma^{5}\gamma^{\mu}{\bf A}_{\mu} &\longrightarrow& \gamma^{5} \gamma^{\mu}(\bbox{\pi}\!\times\!\bbox{\rho}_{\mu}), \nonumber\\ \gamma^{5}\gamma^{\mu}K_{1\mu} &\longrightarrow& -i\gamma^{5} \gamma^{\mu}\bbox{\tau}\!\cdot\! (\bbox{\pi}K^{\ast}_{\mu}-K\bbox{\rho}_{\mu}), \nonumber\\ \gamma^{5}\gamma^{\mu}K_{1\mu} &\longrightarrow& -i\gamma^{5} \gamma^{\mu}(\eta_{8}K^{\ast}_{\mu}-K\omega_{8\mu}), \nonumber\\ \gamma^{5}\gamma^{\mu}a_{8\mu} &\longrightarrow& i\gamma^{5} \gamma^{\mu}(\overline{K}K^{\ast}_{\mu} -\overline{K_{c}}K^{\ast}_{\mu\,c}), \nonumber\\ \gamma^{5}\gamma^{\mu}{\bf A}_{\mu} &\longrightarrow& i\gamma^{5}\gamma^{\mu}(\overline{K}\bbox{\tau}K^{\ast}_{\mu} +\overline{K_{c}}\bbox{\tau}K^{\ast}_{\mu\,c}). 
\end{eqnarray} The coupling constants are most easily expressed in terms of $g_{N\!N\rho}$ and the pseudovector coupling constants: \begin{eqnarray} g_{B'B(\pi\rho)} &=& 2g_{N\!N\rho}f_{B'B\pi}, \nonumber\\ g_{B'B(\pi K^{\ast})} &=& -g_{B'B(K\rho)} = \sqrt{2}g_{N\!N\rho}f_{B'BK}, \nonumber\\ g_{B'B(\eta_{8}K^{\ast})} &=& -g_{B'B(K\omega_{8})} = \sqrt{6}g_{N\!N\rho}f_{B'BK}, \nonumber\\ g_{B'B(KK^{\ast})} &=& \sqrt{3}g_{N\!N\rho}f_{B'B\eta_{8}},\nonumber\\ g_{B'B(K\tau K^{\ast})} &=& g_{N\!N\rho}f_{B'B\pi}. \end{eqnarray} \subsection{Possible extensions} \label{subsec:extension} Given the transformation property of the $\Phi_{\rm sc}$ and $\Phi_{\rm ax}$ fields, we can of course arbitrarily add more interaction Lagrangians of the type (\ref{LIsu3}) with higher orders in the meson fields. In general this will introduce a number of new free parameters. Because there is no guarantee that we can actually construct a baryon-baryon potential without any free parameters for the coupling constants, it might be very useful to still have some flexibility (i.e., free parameters) in the model. It will be convenient, of course, if at the same time we can still satisfy the empirical constraints for the single-meson coupling constants as given in Sec.~\ref{sec:single-meson}. With this purpose in mind, we therefore now discuss some possible extensions in more detail. Although most of the scalar meson masses are already close to 1 GeV, the low mass of $\sim$500 MeV in the two-pole approximation~\cite{Sch71} to the broad $\varepsilon(760)$ meson makes the double-scalar interactions worth investigating. We can extend the scalar interaction Lagrangian by adding $\sqrt{2}(g_{ss}/m_{\pi})(\Phi_{\rm sc})^{2}$ to $\Phi_{\rm sc}$ of Eq.~(\ref{phisc}), where the charged-pion mass is introduced to make the coupling constant dimensionless.
Since the octet and singlet parts in the interaction Lagrangian (\ref{LIsu3}) have independent coupling constants, we can also define two independent scalar-scalar coupling constants. Hence, writing $X=\lambda_{c}s_{c}$ we have the substitution \begin{equation} g^{\rm oct}_{\rm sc}\Phi^{(8)}_{\rm sc}\longrightarrow \frac{1}{\sqrt{2}}\left\{ g^{\rm oct}_{\rm sc}\left(F+X\right) +\frac{g^{(8)}_{ss}}{m_{\pi}} \left(F^{2}+(FX+XF)+X^{2}\right) \right\}, \end{equation} and a similar expression for the singlet part. We still want to satisfy the baryon-mass relations of Sec.~\ref{subsec:scalar}, which gives the constraints \begin{eqnarray} {\textstyle\sqrt{\frac{1}{6}}} \left[ g^{\rm sin}_{\rm sc}(2f_{1}+f_{2}) +\frac{g_{ss}^{(1)}}{m_{\pi}}(2f_{1}^{2}+f_{2}^{2}) \right] &=& 1153.5\ {\rm MeV}, \nonumber\\ \left[g^{\rm oct}_{\rm sc}(f_{2}-f_{1})+\frac{g_{ss}^{(8)}}{m_{\pi}} (f_{2}^{2}-f_{1}^{2})\right] &=& 136.5\ {\rm MeV}, \label{constraint} \end{eqnarray} where we substituted $M_{0}=1153.5$ MeV. 
Proceeding along the lines of the previous section, the evaluation of $(\lambda_{c}s_{c})^{2}$ under the assumption of ideal mixing allows us to read off the double-scalar interactions by making the following substitutions \begin{eqnarray} g^{\rm oct}_{\rm sc}{\bf a}_{0} &\rightarrow& g_{ss}^{(8)}\left[ 2f_{0}{\bf a}_{0}+{\textstyle\frac{1}{2}} (\overline{\kappa}\bbox{\tau}\kappa -\overline{\kappa_{c}}\bbox{\tau}\kappa_{c})\right], \nonumber\\ g^{\rm oct}_{\rm sc}\kappa\, &\rightarrow& g_{ss}^{(8)} \left[(\bbox{\tau}\!\cdot\!{\bf a}_{0})\kappa +(f_{0}-\sqrt{2}\varepsilon)\kappa\right], \nonumber\\ g^{\rm oct}_{\rm sc}s_{8} &\rightarrow& g_{ss}^{(8)} {\textstyle\sqrt{\frac{1}{3}}}\left[{\bf a}_{0}^{2}+f_{0}^{2} -2\varepsilon^{2}-\overline{\kappa}\kappa\right], \nonumber\\ g^{\rm sin}_{\rm sc}s_{0} &\rightarrow& g_{ss}^{(1)} {\textstyle\sqrt{\frac{2}{3}}}\left[{\bf a}_{0}^{2}+f_{0}^{2} +\varepsilon^{2}+2\overline{\kappa}\kappa\right], \end{eqnarray} where we included the relevant coupling constants. Note that the vacuum expectation matrix $F$ does not commute with $\lambda_{c}s_{c}$, and so also the scalar single-meson coupling constants need some modification. 
In terms of the original coupling constants, we have \begin{eqnarray} g_{B'Ba_{0}} &\longrightarrow& g_{B'Ba_{0}} \left[1+\frac{2f_{1}}{m_{\pi}}\, \frac{g_{ss}^{(8)}}{g^{\rm oct}_{\rm sc}}\right], \nonumber\\ g_{B'B\kappa}\, &\longrightarrow& g_{B'B\kappa} \left[ 1+\frac{f_{1}+f_{2}}{m_{\pi}}\, \frac{g_{ss}^{(8)}}{g^{\rm oct}_{\rm sc}}\right], \nonumber\\ g_{B'Bs_{8}} &\longrightarrow& g_{B'Bs_{8}} \left[1+\frac{2}{3}\,\frac{f_{1}+2f_{2}}{m_{\pi}}\, \frac{g_{ss}^{(8)}}{g^{\rm oct}_{\rm sc}}\right] +\frac{2\sqrt{2}}{3}\,\frac{f_{1}-f_{2}}{m_{\pi}}\, \tilde{g}_{B'Bs_{8}}, \nonumber\\ g_{B'Bs_{0}} &\longrightarrow& g_{B'Bs_{0}} \left[1+\frac{2}{3}\,\frac{2f_{1}+f_{2}}{m_{\pi}}\, \frac{g_{ss}^{(1)}}{g^{\rm sin}_{\rm sc}}\right] +\frac{2\sqrt{2}}{3}\,\frac{f_{1}-f_{2}}{m_{\pi}}\, \tilde{g}_{B'Bs_{0}}, \label{scalarmod} \end{eqnarray} where $\tilde{g}_{B'Bs_{8}}$ is obtained from the scalar version of Eq.~(\ref{gsin}) replacing $s_{0}$ by $s_{8}$ and $g^{\rm sin}_{\rm sc}$ by $g_{ss}^{(1)}$. Similarly, $\tilde{g}_{B'Bs_{0}}$ is obtained from the scalar version of Eq.~(\ref{goct}) replacing $s_{8}$ by $s_{0}$ and $g^{\rm oct}_{\rm sc}$ by $g_{ss}^{(8)}$. Clearly, this type of extension allows for much more freedom in the scalar one-boson exchanges, without the need of abandoning the generation of the proper baryon masses. Note, however, that because of the second constraint in Eq.~(\ref{constraint}), the resulting numerical values for $g_{B'B\kappa}$, of some importance in $Y\!N$ potentials, remain unaffected by these changes. A similar example is the case where we include the matrix for the scalar fields in the axial-vector interaction Lagrangian. This gives both scalar-pseudoscalar and scalar--axial-vector pair interactions. 
Since the axial-vector meson masses are well above 1 GeV and the scalar meson masses are close to 1 GeV, the scalar--axial-vector exchanges are expected to be completely negligible in baryon-baryon potential models, and we will not consider them here. The scalar-pseudoscalar interactions are generated by the combination $\{\Phi_{\rm sc},\Phi_{\rm ax}\}$. In principle, another possible combination is $i[\Phi_{\rm sc},\Phi_{\rm ax}]$, where the $i$ is required by hermiticity. However, because the vacuum expectation matrix $F$ does not commute with $\Phi_{\rm ax}$, the commutator generates a complex contribution to the baryon-baryon-kaon coupling constants; we therefore drop this combination. As before, by including the anticommutator combination we can introduce two coupling constants $g_{sp}^{(8)}$ and $g_{sp}^{(1)}$, for the octet and singlet part of the interaction Lagrangian, respectively. The form of the various interaction Lagrangians and the expressions for the pair coupling constants are now easy to derive. Note that the baryon-baryon-pseudovector coupling constants are modified in an analogous manner to Eq.~(\ref{scalarmod}). As a final example we discuss the class of pair-meson interactions which, within the present formalism, do not have any theoretical constraint on the overall coupling constants. The simplest interaction of this type contains $(\Phi_{\rm ax})^{2}$ or, equivalently, $u_{\mu}u_{\nu}$. This is also the most important one since it contains the lightest meson (the pion). Because the axial-vector mesons are already rather heavy, in the following we will only consider the pseudoscalar-pseudoscalar contributions.
We have the possibility for two types of field combinations, one symmetric and one antisymmetric in the fields: \begin{eqnarray} \phi_{s} &\sim& -{\textstyle\frac{1}{2}}g^{\mu\nu}\left[ \partial_{\mu}(\lambda\pi)\partial_{\nu}(\lambda\pi) +\partial_{\nu}(\lambda\pi)\partial_{\mu}(\lambda\pi)\right], \nonumber\\ \phi_{a} &\sim& +{\textstyle\frac{i}{2}}\sigma^{\mu\nu}\left[ \partial_{\mu}(\lambda\pi)\partial_{\nu}(\lambda\pi) -\partial_{\nu}(\lambda\pi)\partial_{\mu}(\lambda\pi)\right]. \end{eqnarray} It will be convenient to identify the two-pseudoscalar contributions with a matrix of scalar fields as in Eq.~(\ref{Smat}). For the symmetric combination this implies the following substitutions \begin{eqnarray} {\bf a}_{0} \rightarrow -{\textstyle\frac{1}{2}}g^{\mu\nu} && \left[ {\textstyle\sqrt{\frac{1}{3}}}(\partial_{\mu}\bbox{\pi} \partial_{\nu}\eta_{8}+\partial_{\nu}\bbox{\pi}\partial_{\mu}\eta_{8}) +{\textstyle\sqrt{\frac{2}{3}}}(\partial_{\mu}\bbox{\pi} \partial_{\nu}\eta_{0}+\partial_{\nu}\bbox{\pi}\partial_{\mu}\eta_{0}) +{\textstyle\frac{1}{2}}(\partial_{\mu}\overline{K}\bbox{\tau} \partial_{\nu}K+\partial_{\nu}\overline{K}\bbox{\tau}\partial_{\mu}K) \right], \nonumber\\ \kappa \rightarrow -{\textstyle\frac{1}{2}}g^{\mu\nu} && \left[ {\textstyle\sqrt{\frac{1}{2}}}\bbox{\tau}\!\cdot\!(\partial_{\mu} \bbox{\pi}\partial_{\nu}K+\partial_{\nu}\bbox{\pi}\partial_{\mu}K) -{\textstyle\sqrt{\frac{1}{6}}} (\partial_{\mu}\eta_{8}\partial_{\nu}K+\partial_{\nu}\eta_{8} \partial_{\mu}K)+{\textstyle\sqrt{\frac{4}{3}}}(\partial_{\mu} \eta_{0}\partial_{\nu}K+\partial_{\nu}\eta_{0}\partial_{\mu}K) \right], \nonumber\\ s_{8} \rightarrow -{\textstyle\frac{1}{2}}g^{\mu\nu} && \left[ {\textstyle\sqrt{\frac{1}{3}}}\partial_{\mu}\bbox{\pi}\!\cdot\! 
\partial_{\nu}\bbox{\pi}-{\textstyle\sqrt{\frac{1}{3}}}\partial_{\mu} \eta_{8}\partial_{\nu}\eta_{8}+{\textstyle\sqrt{\frac{2}{3}}} (\partial_{\mu}\eta_{8}\partial_{\nu}\eta_{0}+\partial_{\nu}\eta_{8} \partial_{\mu}\eta_{0})-{\textstyle\sqrt{\frac{1}{12}}} (\partial_{\mu}\overline{K}\partial_{\nu}K+\partial_{\nu} \overline{K}\partial_{\mu}K)\right], \nonumber\\ s_{0} \rightarrow -{\textstyle\frac{1}{2}}g^{\mu\nu} && {\textstyle\sqrt{\frac{2}{3}}}\left[\partial_{\mu}\bbox{\pi}\!\cdot\! \partial_{\nu}\bbox{\pi}+\partial_{\mu}\eta_{8}\partial_{\nu}\eta_{8} +\partial_{\mu}\eta_{0}\partial_{\nu}\eta_{0}+ (\partial_{\mu}\overline{K}\partial_{\nu}K+\partial_{\nu} \overline{K}\partial_{\mu}K)\right], \end{eqnarray} while for the antisymmetric combinations we have \begin{eqnarray} {\bf a}_{0} \rightarrow +{\textstyle\frac{i}{2}}\sigma^{\mu\nu} && \left[ i\partial_{\mu}\bbox{\pi}\times\partial_{\nu}\bbox{\pi}- {\textstyle\frac{1}{2}}(\partial_{\mu}\overline{K}\bbox{\tau} \partial_{\nu}K-\partial_{\nu}\overline{K}\bbox{\tau}\partial_{\mu}K) \right], \nonumber\\ \kappa \rightarrow +{\textstyle\frac{i}{2}}\sigma^{\mu\nu} && \left[ {\textstyle\sqrt{\frac{1}{2}}}\bbox{\tau}\!\cdot\!(\partial_{\mu} \bbox{\pi}\partial_{\nu}K-\partial_{\nu}\bbox{\pi}\partial_{\mu}K) +{\textstyle\sqrt{\frac{3}{2}}} (\partial_{\mu}\eta_{8}\partial_{\nu}K-\partial_{\nu}\eta_{8} \partial_{\mu}K) \right], \nonumber\\ s_{8} \rightarrow +{\textstyle\frac{i}{2}}\sigma^{\mu\nu} && \left[ -{\textstyle\sqrt{\frac{3}{4}}}(\partial_{\mu}\overline{K} \partial_{\nu}K-\partial_{\nu}\overline{K}\partial_{\mu}K)\right]. \end{eqnarray} These matrices can be substituted in the interaction Lagrangian (\ref{LIsu3}), which contains the free parameters $g^{(8)}_{sym}$, $g^{(1)}_{sym}$, and $\alpha_{sym}$ for the symmetric case, and $g^{(8)}_{asym}$ and $\alpha_{asym}$ for the antisymmetric case. 
Including the renormalization factors it is now straightforward to find the pair coupling constants for each of the two-pseudoscalar contributions, expressed in terms of these free parameters. \section{Application to $\protect\bbox{N\!N}$} \label{sec:application} As a first application of the chiral-symmetry constraints given in this paper, we would like to investigate whether, with the coupling-constant values given in the previous sections, it is indeed possible to construct a baryon-baryon potential model which gives a satisfactory description of the baryon-baryon scattering data. Of course, the imposed constraints need not all be exactly true. For example, the vector-dominance assumption that $\kappa_{\rho}=3.71$ and $\kappa_{\omega}+\kappa_{\phi}=-0.12$ is only true if these mesons fully saturate the nucleon electromagnetic form factors. The presence of heavier vector-meson nonets likely changes these relations~\cite{Dub91}. Also, the SU(3) relations in Sec.~\ref{sec:single-meson} need not be true in an exact sense. This is already clear from the fact that we have to introduce symmetry-violating terms to generate the empirical meson masses. The existence of a scalar meson nonet and its quark content is still under debate, and so the assumption of an SU(3) symmetry for the scalar mesons might even be incorrect. On the other hand, relaxing too many constraints introduces too many free parameters -- something we would like to avoid. Therefore, at this stage we choose to impose {\it all\/} the constraints and only show that the resulting $N\!N$ potential model then already gives a very reasonable description of the scattering data. The experience with $N\!N$ potential models that have appeared in the literature suggests that a fully constrained potential model of the one-boson-exchange type is unlikely to succeed~\cite{Sto95}.
On the other hand, we have already demonstrated~\cite{Rij96a,Rij96b} that by including two-meson-exchange contributions a major improvement in the description of the $N\!N$ scattering data can be obtained. In order to arrive at a model which at least gives a reasonable description of the scattering data, we found that we had to include the double-scalar and double-pseudoscalar extensions as outlined in Sec.~\ref{subsec:extension}. The single-meson coupling constants satisfy the empirical constraints as discussed in Sec.~\ref{sec:single-meson} and the pair-meson coupling constants satisfy the relations as given in Sec.~\ref{sec:double-meson}. The one-boson-exchange part of the potential is standard but includes the diffractive contribution~\cite{Nag78}, while the two-meson part can be found in, or easily derived from, Refs.~\cite{Rij96a,Rij96b}. The potential is regularized with exponential form factors, one for each type of meson (scalar, pseudoscalar, or vector). The single-meson coupling constants and the exponential cutoffs for each type are given in Table~\ref{copsing}. The pair-meson coupling constants are given in Table~\ref{coppair}. Note that we only include meson pairs with a total mass below $\sim$1 GeV. However, since the $\eta\eta$-exchange contributions did not significantly improve the fit, we decided not to include them at this stage. The 12 free parameters of the model ($\Lambda_{S}$, $\Lambda_{P}$, $\Lambda_{V}$, $g_{A_{2}}$, $g_{P}$, $g_{ss}^{(8)}$, $g_{ss}^{(1)}$, $g_{sym}^{(8)}$, $g_{sym}^{(1)}$, $\alpha_{sym}$, $g_{asym}^{(8)}$, and $\alpha_{asym}$) were determined in a fit to the Nijmegen representation~\cite{Sto93c} of the $\chi^{2}$ hypersurface of the $N\!N$ scattering data below $T_{\rm lab}=350$ MeV, updated with the inclusion of new data which have been published since then. The effective diffractive mass, $m_{P}=310$ MeV, was fixed at the (rounded-off) value as used in the old Nijm78 potential~\cite{Nag78}. 
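As a rough illustration of the regularization step just described, the following Python sketch (with invented coupling, mass, and cutoff values; the normalization and sign conventions of the actual model are not reproduced) shows how an exponential form factor suppresses the short-range part of a one-boson-exchange amplitude while leaving the long-range part intact:

```python
import numpy as np

# Generic sketch of regularizing a one-boson-exchange (OBE) amplitude with an
# exponential form factor exp(-k^2/Lambda^2) at each vertex. The coupling,
# meson mass, and cutoff are invented numbers for illustration only.
def obe_amplitude(k, g, m_meson, lam=None):
    """Central OBE amplitude ~ g^2/(k^2 + m^2), optionally regularized."""
    amp = g**2 / (k**2 + m_meson**2)
    if lam is not None:
        amp *= np.exp(-k**2 / lam**2) ** 2   # one form factor per vertex
    return amp

k = np.array([0.0, 200.0, 500.0, 1000.0, 2000.0])  # momentum transfer (MeV)
amp_bare = obe_amplitude(k, g=13.0, m_meson=138.0)
amp_reg = obe_amplitude(k, g=13.0, m_meson=138.0, lam=800.0)

# The cutoff is invisible at long range (small k) but strongly suppresses
# the short-range (large k) part of the exchange.
assert amp_reg[0] == amp_bare[0]
assert amp_reg[-1] < 1e-4 * amp_bare[-1]
```

In the model described here there is one such cutoff per meson type ($\Lambda_S$, $\Lambda_P$, $\Lambda_V$), fitted along with the coupling constants.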
The resulting $\chi^{2}/N_{\rm data}$ for each of the ten energy bins is shown in Table~\ref{chi2}, in comparison with the (updated) Nijmegen partial-wave analysis. The $\chi^{2}/N_{\rm data}=1.75$ for the 0--350 MeV energy interval actually compares very favorably to other potential models that have appeared in the literature~\cite{Sto95}. As a matter of fact, it should be realized that in this model {\it all\/} coupling constants satisfy constraints as imposed by chiral symmetry, or empirical constraints as discussed in Sec.~\ref{sec:single-meson}. This is in contrast to any other model that has appeared in the literature. The model even gives a much better description of the data below 300 MeV ($\chi^{2}/N_{\rm data}=1.36$), whereas the description rapidly worsens at higher energies. This sudden rise is probably due to the nonadiabatic expansion in the two-meson contributions~\cite{Rij96a,Rij96b} which, strictly speaking, is only valid below the pion production threshold ($T_{\rm lab}\approx 280$ MeV). The nonadiabatic expansion is an artifact of our working in coordinate space, and the sudden rise in $\chi^{2}$ at higher energies will be further investigated when we have developed a momentum-space version in which we can retain the full energy dependence in the propagators\footnote{Alternatively, we can decide that the model should only be used up to $T_{\rm lab}\approx300$ MeV, say. Without pursuing this option any further, a quick and not too thorough refit already shows that a $\chi^{2}/N_{\rm data}=1.3$ seems easily feasible.}. At this stage we prefer to work in coordinate space for several reasons. First, the (already rather time consuming) fit in coordinate space is much faster than a fit in momentum space. Second, here we only wanted to investigate whether it is indeed possible to construct a potential model which incorporates all constraints and still gives a satisfactory description of the data.
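For clarity, the single number quoted for an energy interval is simply the data-count-weighted mean of the per-bin values; a toy Python illustration (all bin contents invented, not the actual Table~\ref{chi2} entries):

```python
# Toy illustration (invented numbers) of how chi^2/N_data over an energy
# range follows from per-bin values: it is the weighted mean, with the
# number of data points per bin as weights.
bins = [
    # (chi2 per datum in the bin, number of scattering data in the bin)
    (1.10, 300),
    (1.25, 420),
    (1.45, 380),
    (1.55, 510),
    (3.10, 250),   # highest-energy bin, where the description deteriorates
]

chi2_full = sum(r * n for r, n in bins) / sum(n for _, n in bins)
chi2_low = sum(r * n for r, n in bins[:-1]) / sum(n for _, n in bins[:-1])

# Excluding the worst high-energy bin lowers the overall figure of merit,
# mirroring the improvement quoted when the fit is judged below 300 MeV.
assert chi2_low < chi2_full
```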
Modifications such as keeping the full energy dependence in the propagators rather than making a nonadiabatic expansion can be investigated at a later stage. Third, ultimately it is our goal to produce a high-quality potential model which is exactly equivalent in both coordinate space and momentum space. This allows equivalent applications in both spaces, but requires certain approximations to be made in order to arrive at analytical expressions in coordinate space. We should point out that, although the number of free parameters is not too large, the fully constrained fit is far from trivial. Due to the constraints, the relation between the change in a parameter and the corresponding change in the phase shifts is highly nonlinear. It turned out to be impossible to fit all the free parameters at the same time, and so we had to do numerous fit cycles in which we fit only an (arbitrary) subset of the parameters. This makes it very complicated and time consuming (but not impossible) to arrive at a satisfactory fit, and we cannot guarantee that the fit presented here is optimal. However, the present result already clearly illustrates our main objective, viz.\ to try to construct a potential model which gives a reasonable description of the scattering data, and at the same time incorporates a number of empirical and chiral-symmetry constraints. Improvements by investigating different parameter sets, adding more parameters, or relaxing some of the constraints will be left for the future. \section{Summary} We have constructed a chiral-invariant Lagrangian for the meson-baryon sector, where the mesons consist of the nonets of scalar, pseudoscalar, vector, and axial-vector mesons, and the baryons are the members of the baryon octet.
Although we mainly focussed on the meson-baryon interaction Lagrangian, we briefly indicated what the Lagrangian in the meson sector looks like, and how it can be made to reproduce the correct phenomenology by allowing for some small contributions which violate the chiral symmetry. In the meson-baryon sector, the chiral symmetry imposes constraints on the coupling constants of the meson-pair vertices. This allows us to express all meson-pair coupling constants in terms of the single-meson vertex coupling constants. These single-meson vertex coupling constants, in principle, can all be fixed by using experimental data such as meson masses, baryon masses, meson decay parameters, and meson mixing angles. As a first step in the application of the meson-baryon Lagrangian, we demonstrate that it is possible to construct an $N\!N$ potential which satisfies all the theoretical and empirical constraints as discussed in Secs.~\ref{sec:single-meson} and \ref{sec:double-meson}. The model gives a very satisfactory description of the $N\!N$ scattering data. It even has a slightly better quality than other (often more phenomenological) potential models that have appeared in the literature, even though this is the first model in which such strict constraints on the coupling constants have successfully been imposed. We should mention that this result could only be achieved after the inclusion of the two-meson contributions. Improvements and an extension to the $Y\!N$ sector are presently under investigation. The success of the present model gives us hope that it might indeed be possible to ultimately arrive at a potential model which not only provides a high-quality description of the scattering data, but which is also consistent with the symmetries of QCD. \acknowledgments Discussions with H.\ Fearing, N.\ Mobed, and D.\ Phillips are gratefully appreciated. This work was supported by the Natural Sciences and Engineering Research Council of Canada.
\section{Introduction} Interpolation is a basic and fundamental subject in numerical analysis and approximation theory for the continuous representation of discrete data. A standard way to obtain a bivariate interpolation from univariate interpolation functions is by using a tensor product if the underlying two variables are considered separately. This procedure is also adapted to multivariate interpolation when data from a multivariate function are prescribed on a Cartesian product of grid points. There are numerous ways to approximate multivariate functions by using multivariate polynomials\cite{Trefethen, Mond, Derriennic}, splines\cite{Schultz,Wang_Tan,Nielson}, tensor-product splines\cite{Jonge_Zanten}, local methods, global methods, blending-function methods\cite{Gordon}, and Hermite's interpolation formula\cite{A_Spitzbart}. All these methods may have advantages and disadvantages depending on the nature of the data and the applications. When data are generated from a very irregular multivariate function, the above methods are not ideal for providing a deep understanding of the true multivariate features. This paper proposes an approach to describe non-linear patterns associated with a multivariate data-generating function by means of zipper multivariate fractal interpolation functions. Fractal surfaces continue to attract the attention of scientists and engineers due to their useful applications in various areas such as medical sciences, surface physics, chemistry, bio-engineering, metallurgy, computer science, electrical engineering, earth science, etc. Fractal surfaces have been found to be good approximations of natural surfaces in these areas because of their special properties, such as self-similarity, visualization at different scales, and a non-integral fractal dimension. The construction of fractal surfaces using iterated function systems (IFSs) with co-planar boundary data was first introduced by Massopust in \cite{Massopust2} with different scaling factors.
The construction of fractal surfaces with arbitrary boundary values but equal scaling factors was taken up by Geronimo and Hardin in \cite{Geronimo_Hardin}. Hardin and Massopust investigated more general fractal functions defined on complexes of simplices $D\subseteq \mathbb{R}^n$ into $\mathbb{R}^m$ in \cite{Hardin_Massopust}. Bouboulis and Dalla \cite{Bouboulis_Dalla2,Bouboulis_Dalla3} constructed fractal surfaces using IFSs over grids or rectangular domains. Using tensor products of cardinal splines, Chand and Navascu\'es proposed bicubic fractal surfaces \cite{Chand_Navascues}. The theory of fractal surfaces has been investigated along various directions, for instance, \cite{Massopust,Ruan_Xu,Bouboulis_Dalla1,Chand_Kapoor,BEHM}. Shape-preserving fractal surfaces have recently been developed using blending functions and univariate fractal functions; see, for instance, \cite{Chand_vij,Chand_vv, Chand_kr}. Aseev \cite{VAO} introduced the construction of fractals by using the idea of a zipper, where the entire graph can be mapped to two consecutive nodes in two different ways. Subsequently, the theory of multi-zippers was investigated by Tetenov et al. \cite{ATK}. By introducing a binary array called the signature of a zipper, the class of affine zipper FIFs was recently introduced in the literature by Chand et al. \cite{CVVT}. The calculus of zipper FIFs and cubic zipper FIFs was studied by Reddy \cite{kmr}, and approximation by smooth zipper fractal functions was investigated in \cite{Vijay_Vijender_Chand}. In this paper, we introduce the concept of multivariate zipper fractal interpolation functions (ZFIFs) to interpolate and approximate multivariate data or functions by using a suitable binary matrix $\epsilon$, called a zipper signature. These multivariate zipper fractal functions are more general than the existing classical and fractal approximants.
Based on the existence of ZFIFs, we construct a novel class of multivariate Bernstein zipper $\alpha$-fractal functions $f^{\alpha,\epsilon}_{n_1n_2...n_m}$ using Bernstein polynomials $B_{n_1,...,n_m}f$\cite{Foupouagnigni_Wouodjie,Davis} as base functions in the associated IFS, for a given $f\in C\left(\prod\limits_{k=1}^mI_k\right)$, where the $I_k$ are bounded and closed intervals in $\mathbb{R}$. The multivariate Bernstein zipper $\alpha$-fractal functions $f^{\alpha,\epsilon}_{n_1n_2...n_m}$ converge uniformly to $f$ as $n_i \to \infty $ for all $i$, without altering the scaling functions. We prove that the multivariate Bernstein polynomial $B_{n_1,...,n_m}f$ is Lipschitz whenever $f$ is. Based on the H\"older exponents of the germ and base functions and on the scaling factors, we derive bounds for the box dimension of the multivariate zipper $\alpha$-fractal function. Our results are more general than several existing results in the univariate and multivariate cases \cite{Vijender, Akhtar_Prasad, Pandey_vishwanathan}. This paper is arranged as follows. Section \ref{1sec} introduces the basics of univariate zipper fractal functions, including their construction. Section \ref{2sec2} is concerned with the constructive existence of multivariate zipper fractal interpolation on a given multivariate data set through a binary signature matrix. In addition, a multivariate (germ) function is fractalized in the zipper setting to obtain its fractal version through a suitable base function. When the base function is taken to be a multivariate Bernstein polynomial, the resulting multivariate Bernstein zipper $\alpha$-fractal functions are introduced in Section \ref{2sec3} together with their approximation properties. The non-negativity of a multivariate germ function is shown in Section \ref{2sec4} to be preserved by the corresponding multivariate Bernstein zipper $\alpha$-fractal functions under suitable restrictions on the scaling factors.
Another shape-preserving aspect, the coordinate-wise monotonicity of a multivariate germ function, is studied for its zipper counterpart in Section \ref{2sec6}. Finally, we derive bounds for the box dimension of multivariate zipper $\alpha$-fractal functions and obtain similar bounds for the multivariate Bernstein zipper $\alpha$-fractal function by proving that if $f$ is H\"olderian, then so is $B_{\textbf{n}}f$. \section{Basics of Zipper Fractal Functions}\label{1sec} In this section, we discuss the basics of IFSs and zippers, and present the construction of zipper fractal functions. More details can be found in \cite{VAO,Barnsley,CVVT}. In the following, for an $m\in \mathbb{N}$, we denote by $\mathbb{N}_m :=\{1, 2, \ldots, m\}$ the initial segment of $\mathbb{N}$ of length $m$. \begin{definition} Let $1< N\in\mathbb{N}$ and let $w_i :X \to X$, $i\in \mathbb{N}_{N-1}$, be non-surjective maps on a complete metric space $(X,d)$. Then the system $\tilde{I} :=\{X;w_i,i\in \mathbb{N}_{N-1}\}$ is called an IFS with vertices $\{k_1,k_2,\dots,k_N\}$, where \[ w_i(k_1)=k_{i}\quad\text{and}\quad w_i(k_N)=k_{i+1}. \] The points $k_1$ and $k_N$ are called the initial and final points of the IFS, respectively. \end{definition} \begin{definition} For a binary vector $ \epsilon:=(\epsilon_1,\epsilon_2,\dots,\epsilon_{N-1}) \in \{0,1\}^{N-1}$, called a signature, let $w_i: X\to X$, $i\in \mathbb{N}_{N-1}$, be non-surjective maps on a complete metric space $(X,d)$ such that $w_i$ satisfies \[ w_i(k_1)=k_{i+\epsilon_i}\quad\text{and}\quad w_i(k_N)=k_{i+1-\epsilon_i}. \] Then the system $\tilde{I}=\{X;w_i, i\in \mathbb{N}_{N-1}\}$ is called a zipper with vertices $\{k_1,k_2,$ $\dots,k_N\}$. Any non-empty compact set $A \subset X$ satisfying the self-referential equation \begin{equation*} A=\bigcup^{N-1}_{i=1} w_i(A), \end{equation*} is called the attractor or zipper fractal corresponding to the zipper $\tilde{I}$.
\end{definition} Clearly, an IFS is a particular case of a zipper when the signature satisfies $\epsilon_i=0$ for all $i \in \mathbb{N}_{N-1}$. Next, we review the construction of zipper FIFs (ZFIFs) from a suitable zipper constructed from a given set of interpolation data. \vskip 6pt Let a set of interpolation points $\{(x_i,y_i) \in I \times \mathbb{R}: i\in \mathbb{N}_N (N>2)\}$ be given, where $x_1<x_2<\dots<x_N$ determines a partition of the interval $I:=[x_1,x_N]$ and $y_i \in [c,d] \subset \mathbb{R}$, $\forall i \in \mathbb{N}_N$. Let us set $I_i:=[x_i,x_{i+1}]$ and $ D:=I \times [c,d]$. Let $u_i^{\epsilon}:I \rightarrow I_i$, $i\in \mathbb{N}_{N-1}$, be contractive homeomorphisms such that \begin{equation}\label{eqn1} u_i^{\epsilon}(x_1)=x_{i+\epsilon_i} \quad\text{and}\quad u_i^{\epsilon}(x_N)=x_{i+1-\epsilon_i}. \end{equation} If $u_i^{\epsilon}(x) :=a_ix+b_i$ and $\epsilon_i=1$, then the horizontal scaling factor $a_i$ is negative. Define $v_i^{\epsilon}: D \rightarrow \mathbb{R}$, $i \in \mathbb{N}_{N-1}$, by \begin{equation*} v_i^{\epsilon}(x,y):=\alpha_i(x)y+q_i(x), \end{equation*} where $\alpha_i$ and $q_i$ are continuous functions on $I$ such that $\|\alpha_i\|_{\infty}<1$, and \begin{equation}\label{eqn2} v_i^{\epsilon}(x_1,y_1)=y_{i+\epsilon_i}, \; \; v_i^{\epsilon}(x_N,y_N)=y_{i+1-\epsilon_i}, \quad i \in \mathbb{N}_{N-1}. \end{equation} Here $v_i^{\epsilon}$ either contracts or flips the graph of the interpolant over $I$ onto $I_i$. Using these maps, we define maps $w_i^{\epsilon}: D \rightarrow I_i \times \mathbb{R}$, $i\in\mathbb{N}_{N-1}$, by \begin{equation*} w_i^{\epsilon}(x,y) :=(u_i^{\epsilon}(x),v_i^{\epsilon}(x,y)), \; \; \forall (x,y) \in D. \end{equation*} The zipper IFS for the construction of ZFIFs is then given by \[ \tilde{I}^{\epsilon} : =\{D;w_i^{\epsilon}, i\in\mathbb{N}_{N-1}\} \] with vertices $\{v_i=(x_i,y_i)\}_{i=1}^N$ and signature $\epsilon=(\epsilon_1,\epsilon_2,\dots,\epsilon_{N-1})$. For more details, please consult \cite{CVVT}.
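To make the construction concrete, here is a minimal numerical sketch in Python (the data, signature, and scalings are invented, and constant scaling factors $\alpha_i$ are used in place of scaling functions $\alpha_i(x)$): it approximates the ZFIF determined by the maps $u_i^{\epsilon}$ and $v_i^{\epsilon}$ by iterating the associated graph map on a grid until the fixed point is reached.

```python
import numpy as np

# Minimal sketch of a univariate zipper fractal interpolation function.
# Data, signature, and scalings are invented; constant scaling factors
# alpha_i replace the scaling functions alpha_i(x) of the general theory.
xs = np.array([0.0, 0.3, 0.6, 1.0])    # partition x_1 < ... < x_N of I
ys = np.array([0.0, 0.8, 0.2, 1.0])    # interpolation values y_i
eps = [0, 1, 0]                        # signature: flip the middle map
alpha = [0.3, -0.25, 0.3]              # |alpha_i| < 1

def u(i, x):
    """Affine u_i^eps : I -> I_i with u_i(x_1) = x_{i+eps_i} (0-based)."""
    x0, x1 = xs[i + eps[i]], xs[i + 1 - eps[i]]
    return x0 + (x1 - x0) * (x - xs[0]) / (xs[-1] - xs[0])

def v(i, x, y):
    """v_i^eps(x, y) = alpha_i*y + q_i(x), q_i affine, fixed by the endpoint
    conditions v_i(x_1, y_1) = y_{i+eps_i}, v_i(x_N, y_N) = y_{i+1-eps_i}."""
    y0, y1 = ys[i + eps[i]], ys[i + 1 - eps[i]]
    t = (x - xs[0]) / (xs[-1] - xs[0])
    return alpha[i] * y + (1 - t) * (y0 - alpha[i] * ys[0]) \
        + t * (y1 - alpha[i] * ys[-1])

# Iterate the graph map on a grid: each pass replaces the graph over I_i by
# the image under w_i^eps of the whole current graph.
grid = np.linspace(xs[0], xs[-1], 2001)
f = np.interp(grid, xs, ys)            # start from the broken-line interpolant
for _ in range(40):
    new = np.empty_like(f)
    for i in range(len(xs) - 1):
        gx, gy = u(i, grid), v(i, grid, f)
        order = np.argsort(gx)         # flipped maps reverse the orientation
        mask = (grid >= xs[i]) & (grid <= xs[i + 1])
        new[mask] = np.interp(grid[mask], gx[order], gy[order])
    f = new

# The fixed point interpolates the prescribed data.
for xi, yi in zip(xs, ys):
    assert abs(np.interp(xi, grid, f) - yi) < 5e-3
```

With all $\epsilon_i=0$ this reduces to the usual affine FIF; the flipped middle map places a reversed copy of the graph over $I_2$.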
\begin{theorem}\label{2ta} For the above zipper $\tilde{I}^{\epsilon}= \{D;w_i^{\epsilon},i\in\mathbb{N}_{N-1}\}$, one has the following conclusions: \begin{enumerate} \item[(i)] There exists a unique non-empty compact set $G\subset D$ such that \[ G=\bigcup\limits^{N-1}_{i=1} w_i^{\epsilon}(G). \] \item[(ii)] $G$ is the graph of a continuous function $f^{\epsilon}: I \rightarrow \mathbb{R}$ which interpolates the data $\{(x_i,y_i):i\in\mathbb{N}_N\}$, i.e., $G=\{(x,f^{\epsilon}(x)) : x \in I\}$ and, for $i\in\mathbb{N}_N$, $f^{\epsilon}(x_i)=y_i$. \end{enumerate} \end{theorem} The above theorem gives the existence of a zipper interpolation function whose graph is the attractor of an associated zipper IFS. To obtain a recursive formula for the ZFIF $f^{\epsilon}$, we proceed as follows. Let $\epsilon \in \{0,1\}^{N-1}$ be fixed, and let \[ \wt{C}(I):=\{g \in C(I) : g(x_1)=y_1, \; g(x_N)=y_N\}. \] Then $\wt{C}(I)$ is a closed \emph{metric} subspace of $C(I)$ and $\wt{C}(I)$ is complete with respect to the metric $d$ induced by the uniform norm. Now define a Read-Bajraktarevi\'c operator $T:\wt{C}(I) \rightarrow \wt{C}(I)$ by \begin{equation*} (Tg)(x) := \sum_{i=1}^{N-1} v_i^{\epsilon} ( (u_i^{\epsilon})^{-1} (x), g \circ (u_i^{\epsilon})^{-1} (x)) \; \chi_{u_i^{\epsilon}(I)} (x), \quad x \in I. \end{equation*} Clearly, as $\|\alpha_i\|_{\infty}<1$, $T$ is a contraction on $(\wt{C}(I),d)$. By the Banach fixed point theorem, $T$ has a unique fixed point $f^{\epsilon}$ which obeys the self-referential equation \begin{equation*} f^{\epsilon} = \sum_{i=1}^{N-1} v_i^{\epsilon} ( (u_i^{\epsilon})^{-1} , f^{\epsilon} \circ (u_i^{\epsilon})^{-1} ) \; \chi_{u_i^{\epsilon}(I)}.
\end{equation*} We call this interpolating function $f^{\epsilon}$ a zipper fractal interpolation function (ZFIF) corresponding to the given data $\{(x_i,y_i):i\in\mathbb{N}_N\}$ and the signature $\epsilon=(\epsilon_1,\epsilon_2,\dots,\epsilon_{N-1})\in \{0,1\}^{N-1}$ for a fixed scaling function vector $\alpha:=(\alpha_1,\alpha_2,\dots,\alpha_{N-1})$. For a prescribed function $f \in C(I)$, if we choose $q_i(x):=f(u_i^{\epsilon}(x))-\alpha_i(x)b(x)$, for $i \in \mathbb{N}_{N-1}$, and $y_i=f(x_i)$, for $i \in \mathbb{N}_N$, where $b$ is called a base function and satisfies $b(x_1)=f(x_1)$ and $b(x_N)=f(x_N)$, then the corresponding ZFIF $f_{\alpha}^{\epsilon}$ is called a zipper $\alpha$-fractal function. The concept of such zipper fractal functions will be extended to the multivariate setting in the next section. \section{Multivariate Zipper Fractal Functions}\label{2sec2} In the first part of this section, we show the existence of multivariate ZFIFs in a deterministic way with constant scalings. This concept is then used to perturb any multivariate function $f$ to construct its fractal analogue by using a suitable base function in the second part. \subsection{Multivariate Zipper Fractal Interpolation}\label{2subsec2.1} For $m\in \mathbb{N}$, we adopt the following notation. \begin{gather*} \mathbb{N}_{m,0} := \{0,1,...,m\},\quad \partial\mathbb{N}_{m,0} :=\{0,m\}, \quad \intr{\mathbb{N}_{m,0}} :=\{1,...,m-1\},\\ {\mathbf{j}}:=(j_1,\cdots,j_m),\quad {\mathbf{n}}:=(n_1,\cdots,n_m), \quad \epsilon :=(\epsilon^1,...,\epsilon^m) \in \prod_{k=1}^{m}\{0,1\}^{N_k},\\ \mathcal{I} := \prod_{k=1}^m I_k,\quad\text{$I_k$ is a compact interval in $\mathbb{R}$, $1\leq k \leq m$}. \end{gather*} Let $2\leq m\in\mathbb{N}$ and let $C(\mathcal{I})$ be the Banach space of continuous functions $f:\mathcal{I}\to\mathbb{R}$ equipped with the sup-norm.
Consider the interpolation data \begin{center} $\Delta = \{(x_{1,j_1},...,x_{m,j_m},y_{\mathbf{j}}): \mathbf{j} \in \prod\limits_{k=1}^m \mathbb{N}_{N_k,0}\}$ \end{center} such that \begin{center} $a_k=x_{k,0}<...<x_{k,N_k}=b_k$, for each $k\in \mathbb{N}_m$. \end{center} Since $\{a_k=x_{k,0},...,x_{k,N_k}=b_k\}$ is a partition of $I_k$, we denote the $j_k$-th sub-interval of $I_k$ by $I_{k,j_k}=[x_{k,j_k-1},x_{k,j_k}]$, $j_k\in \mathbb{N}_{N_k}$. For every $ j_k\in \mathbb{N}_{N_k}$, consider an affine map $u_{k,j_k}^{\epsilon^k}:I_k \to I_{k,j_k}$ satisfying \begin{equation}\label{2eq1} |u_{k,j_k}^{\epsilon^k}(x)-u_{k,j_k}^{\epsilon^k}(x')|\leq \alpha_{k,j_k}|x-x'|, \quad\forall x, x' \in I_k, \end{equation} where $0\leq \alpha_{k,j_k} < 1$, and \begin{equation}\label{2eq2} \begin{cases} u_{k,j_k}^{\epsilon^k}(x_{k,0})=x_{k,j_k-1+ \epsilon^k_{j_k}}~ \text{and} ~ u_{k,j_k}^{\epsilon^k}(x_{k,N_k})=x_{k,j_k- \epsilon^k_{j_k}}, \text{if}~ j_k ~\text{is odd},\\ u_{k,j_k}^{\epsilon^k}(x_{k,0})=x_{k,j_k-\epsilon^k_{j_k}}~ \text{and}~ u_{k,j_k}^{\epsilon^k}(x_{k,N_k})=x_{k,j_k-1+ \epsilon^k_{j_k}}, \text{if}~ j_k ~\text{is even}. \end{cases} \end{equation} From \eqref{2eq2}, it is easy to check that \begin{equation}\label{2eq3} (u_{k,j_k}^{\epsilon^k})^{-1}(x_{k,j_k})= (u_{k,j_k+1}^{\epsilon^k})^{-1}(x_{k,j_k}), \quad\forall j_k \in \intr{\mathbb{N}_{N_k,0}}. \end{equation} For each $k\in \mathbb{N}_{m}$, define a map $\tau_k: \mathbb{N}_{N_k}\times\{0,N_k\}\to \mathbb{Z}$ by \begin{equation}\label{2eq4} \begin{cases} \tau_k(j,0):=j-1+\epsilon_{j}^k \quad\text{and}\quad \tau_k(j,N_k):=j-\epsilon_j^k \;\text{if}\; j \;\text{is odd},\\ \tau_k(j,0):=j-\epsilon_j^k \quad\text{and}\quad \tau_k(j,N_k):=j-1+\epsilon_j^k \;\text{if}\; j \;\text{is even}.
\end{cases} \end{equation} Using \eqref{2eq4}, we can rewrite \eqref{2eq2} as \begin{equation}\label{2eq5} u_{k,j_k}^{\epsilon^k}(x_{k,i_k})=x_{k,\tau_k(j_k,i_k)}, \quad\forall j_k \in \mathbb{N}_{N_k}, \; i_k\in \partial\mathbb{N}_{N_k,0}, \; k\in \mathbb{N}_m. \end{equation} Let $\mathcal{K}:=\mathcal{I}\times \mathbb{R}$. For each $\mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$, define a continuous function $v_{\mathbf{j}}^{\epsilon}:\mathcal{K} \to \mathbb{R}$ satisfying the following conditions: \begin{equation}\label{2eq6} v_{\mathbf{j}}^{\epsilon}(x_{1,i_1},...,x_{m,i_m},y_{i_1...i_m})=y_{\tau_1(j_1,i_1)...\tau_m(j_m,i_m)}, \quad\forall \; \mathbf{i} \in \prod_{k=1}^{m}\partial \mathbb{N}_{N_k,0} \end{equation} and \begin{equation}\label{2eq7} | v_{\mathbf{j}}^{\epsilon}(x_1,...,x_m,y)- v_{\mathbf{j}}^{\epsilon}(x_1,...,x_m,y')|\leq \gamma_{\mathbf{j}}|y-y'| \end{equation} for all $(x_1,...,x_m)\in \mathcal{I}$ and $y,y'\in \mathbb{R}$, where $0\leq \gamma_{\mathbf{j}} <1$. Now, for all $\mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$, we define $W_{\mathbf{j}}^{\epsilon}:\mathcal{K}\to\mathcal{K} $ by \begin{equation}\label{2eq8} W^{\epsilon}_{\mathbf{j}}(x_1,...,x_m,y)=(u_{1,j_1}^{\epsilon^1}(x_1),...,u_{m,j_m}^{\epsilon^m}(x_m), v_{\mathbf{j}}^{\epsilon}(x_1,...,x_m,y)). \end{equation} The system \begin{equation}\label{2eq9} I^{\epsilon} = \{ \mathcal{K},W_{\mathbf{j}}^{\epsilon}: {\mathbf{j}}\in \prod\limits_{k=1}^m \mathbb{N}_{N_k}\} \end{equation} is called the multi-zipper IFS with vertices $ \Delta= \{(x_{1,j_1},...,x_{m,j_m},y_{\mathbf{j}}): {\mathbf{j}} \in \prod\limits_{k=1}^m \mathbb{N}_{N_k,0}\}$ and signature $\epsilon$. Let us consider \[\mathcal{G}=\left\{g\in C(\mathcal{I}): g(x_{1,j_1},...,x_{m,j_m})=y_{\mathbf{j}},~ \forall ~{\mathbf{j}}\in \prod\limits_{k=1}^{m}\partial\mathbb{N}_{N_k,0}\right\}\] endowed with the uniform metric \[\rho(f,g)=\max\left\{ |f(x_1,...,x_m)-g(x_1,...,x_m)|:(x_1,...,x_m)\in \prod\limits_{k=1}^{m}I_k\right\}\] for $f,g\in \mathcal{G}$.
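As a quick sanity check of the parity conventions behind $\tau_k$, the following Python sketch (a hypothetical helper, written outside the paper's notation) verifies that for either value of the signature bit, each map $u_{k,j}^{\epsilon^k}$ sends the endpoint indices $\{0,N_k\}$ onto the subinterval end indices $\{j-1,j\}$, and that consecutive maps share the index at which they must agree:

```python
# Sanity check (hypothetical helper names) of the parity conventions in the
# index map tau_k: for either signature bit, the endpoints {0, N_k} of the
# big interval are sent onto the ends {j-1, j} of the j-th subinterval.
def tau(j, endpoint_is_left, eps):
    """tau_k(j, 0) if endpoint_is_left, else tau_k(j, N_k)."""
    if j % 2 == 1:  # j odd
        return j - 1 + eps if endpoint_is_left else j - eps
    return j - eps if endpoint_is_left else j - 1 + eps  # j even

N_k = 6
for eps in (0, 1):
    for j in range(1, N_k + 1):
        images = {tau(j, True, eps), tau(j, False, eps)}
        assert images == {j - 1, j}
        # Consecutive maps u_j and u_{j+1} both reach the shared index j,
        # which is where the matching conditions below are imposed.
        if j < N_k:
            assert j in images
            assert j in {tau(j + 1, True, eps), tau(j + 1, False, eps)}
```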
Then $(\mathcal{G},\rho)$ is a complete metric space. Define a Read-Bajraktarevi\'c operator $T^{\epsilon}:\mathcal{G}\to \mathcal{G}$ on $(\mathcal{G},\rho)$\cite{Massopust} by \begin{align}\label{2eq10} T^{\epsilon}g(x) := & \sum_{\mathbf{j}\in \prod\limits_{k=1}^{m} \mathbb{N}_{N_k}} v_{\mathbf{j}}^{\epsilon}((u_{1,j_1}^{\epsilon^1})^{-1}(x_1),...,(u_{m,j_m}^{\epsilon^m})^{-1}(x_m),\nonumber\\ & \qquad g((u_{1,j_1}^{\epsilon^1})^{-1}(x_1),...,(u_{m,j_m}^{\epsilon^m})^{-1}(x_m))) \chi_{u_{\mathbf{j}}^{\epsilon}(\mathcal{I})}(x), \end{align} for all $x:=(x_1,...,x_m)\in \mathcal{I}$. One can easily observe that $T^{\epsilon}g$ need not be continuous for every choice of $\epsilon$. In order to ensure continuity, we restrict the signature $\epsilon$ by requiring $\epsilon_{j_k}^k=\epsilon_{j_k+1}^k$, for each $j_k\in\mathbb{N}_{{N}_k-1}$, where $\epsilon_{j_k}^k$ denotes the $j_k$-th component of the binary column vector $\epsilon^k$. \begin{theorem}\label{2th0} Let $\Delta := \left\{(x_{1,j_1},...,x_{m,j_m},y_{\mathbf{j}}): {\mathbf{j}} \in \prod\limits_{k=1}^m \mathbb{N}_{N_k,0}\right\}$ be a set of multivariate interpolation data points and $\epsilon=(\epsilon^1,...,\epsilon^m) \in \prod\limits_{k=1}^{m}\{0,1\}^{N_k}$ be a signature for the IFS $I^{\epsilon}=\left\{ \mathcal{K},W_{{\mathbf{j}}}^{\epsilon}:{\mathbf{j}}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}\right\}$ as defined in \eqref{2eq9}. Assume that for all $j_k \in $ $\intr\mathbb{N}_{N_k,0}$, $1\leq k \leq m$, \begin{equation}\label{2eq0} \begin{split} (u_{k,j_k}^{\epsilon^k})^{-1}(x_{k,j_k}) & = (u_{k,j_k+1}^{\epsilon^k})^{-1}(x_{k,j_k})=x_k^{*},\\ v_{j_1,...,j_k,...,j_m}^{\epsilon}(x_1,...,x_{k-1}, & x_{k}^*,x_{k+1},...,x_m, y)\\ =v_{j_1,...,j_k+1,...,j_m}^{\epsilon}(x_1,...,&x_{k-1},x_{k}^*,x_{k+1},...,x_m, y), \end{split} \end{equation} \noindent where $(x_1,...,x_{k-1},x_{k+1},...,x_m)\in \prod\limits_{i=1,i\ne k}^{m}I_i,y \in \mathbb{R}$.
Then there exists a continuous function $f^{\epsilon}:\mathcal{I}\to\mathbb{R}$ such that: \begin{enumerate} \item[(i)] $f^{\epsilon}$ interpolates the given multivariate data set $\Delta$, that is, \[ f^{\epsilon}(x_{1,j_1},...,x_{m,j_m})=y_{{\mathbf{j}}}, \quad\forall {\mathbf{j}} \in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k,0}. \] \item[(ii)] $ G=\{(x,f^{\epsilon}(x)): x \in \mathcal{I}\}$ is the graph of the zipper fractal function $f^{\epsilon}$, and $G$ satisfies \[ G=\bigcup_{\mathbf{j} \in \prod\limits_{k=1}^m \mathbb{N}_{N_k}}W_{\mathbf{j}}^{\epsilon}(G). \] \end{enumerate} \end{theorem} \begin{proof} The proof is similar to the bivariate case treated in \cite{Ruan_Xu}; for the reader's convenience, we sketch it. It follows from \eqref{2eq10} that $T^{\epsilon}g$ is continuous on each subrectangle $\prod\limits_{k=1}^{m}I_{k,j_k}$. To prove that $T^{\epsilon}g$ is continuous on the $m$-dimensional hyperrectangle $\prod\limits_{k=1}^{m}I_{k}$, it therefore suffices to show that $T^{\epsilon}g$ is well-defined on the common boundaries of adjacent subrectangles. \noindent \textbf{Claim:} $T^{\epsilon}$ is well-defined. \vskip 3pt\noindent Assume $j_k \in \intr\mathbb{N}_{N_k,0}$, $1\leq k \leq m$, and $X :=(x_1,\cdots,x_k,\cdots,x_m)\in \mathcal{I}$ with $x_k=x_{k,j_k}$. Then there are the following two cases:\\ \textbf{Case(i):} Assume $x_{k,j_k}$ is an element of $I_{k,j_k}$. Then, by \eqref{2eq0}, we have \begin{align*} T^{\epsilon}g(X) &= v_{j_1,...,j_k,..,j_m}^{\epsilon}((u_{1,j_1}^{\epsilon^1})^{-1}(x_1),..,(u_{k,j_k}^{\epsilon^k})^{-1}(x_k),\ldots,\\ &\qquad(u_{m,j_m}^{\epsilon^m})^{-1}(x_m),g((u_{1,j_1}^{\epsilon^1})^{-1}(x_1),...,(u_{m,j_m}^{\epsilon^m})^{-1}(x_m))). \end{align*} \textbf{Case(ii):} Consider $x_{k,j_k}$ as an element of $I_{k,j_k+1}$.
Then, by \eqref{2eq0}, we have \begin{align*} T^{\epsilon}g(X)= &v_{j_1,...,j_k+1,..,j_m}^{\epsilon}((u_{1,j_1}^{\epsilon^1})^{-1}(x_1),..,(u_{k,j_k+1}^{\epsilon^k})^{-1}(x_k),\ldots,\\ &\qquad (u_{m,j_m}^{\epsilon^m})^{-1}(x_m),g((u_{1,j_1}^{\epsilon^1})^{-1}(x_1),...,(u_{m,j_m}^{\epsilon^m})^{-1}(x_m)))\\ =& v_{j_1,...,j_k,..,j_m}^{\epsilon}((u_{1,j_1}^{\epsilon^1})^{-1}(x_1),..,(u_{k,j_k}^{\epsilon^k})^{-1}(x_k),\ldots, \\ &\qquad (u_{m,j_m}^{\epsilon^m})^{-1}(x_m),g((u_{1,j_1}^{\epsilon^1})^{-1}(x_1),...,(u_{m,j_m}^{\epsilon^m})^{-1}(x_m))). \end{align*} Similarly, we can check the other possible cases. Hence, $T^{\epsilon}g$ is well-defined on the boundaries of the subrectangles $\prod\limits_{k=1}^{m}I_{k,j_k}$ and therefore continuous on $\mathcal{I}$. Let $\mathbf{i}:=(i_1,...,i_m)\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k,0}$. Choose $\mathbf{l}:=(l_1,...,l_m)\in \prod\limits_{k=1}^{m} \partial \mathbb{N}_{N_k,0}$ and $\mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$ such that $\mathbf{i}=(\tau_1(j_1,l_1),...,\tau_m(j_m,l_m))$. According to the definition of $\tau_k$, we have $(u_{k,j_k}^{\epsilon^k})^{-1}(x_{k,i_k})=x_{k,l_k}$ for all $k\in \mathbb{N}_{m}$. Using \eqref{2eq6} and \eqref{2eq10}, we obtain \begin{align*} T^{\epsilon}g(x_{1,i_1},...,x_{m,i_m}) &= v_{\mathbf{j}}^{\epsilon}((u_{1,j_1}^{\epsilon^1})^{-1}(x_{1,i_1}),..,(u_{m,j_m}^{\epsilon^m})^{-1}(x_{m,i_m}),\\ &\qquad g((u_{1,j_1}^{\epsilon^1})^{-1}(x_{1,i_1}),...,(u_{m,j_m}^{\epsilon^m})^{-1}(x_{m,i_m}))) \\ &=v_{\mathbf{j}}^{\epsilon}(x_{1,l_1},..,x_{m,l_m}, g(x_{1,l_1},...,x_{m,l_m}))\\ &=v_{\mathbf{j}}^{\epsilon}(x_{1,l_1},..,x_{m,l_m}, y_{\mathbf{l}})=y_{\tau_1(j_1,l_1)...\tau_m(j_m,l_m)}=y_{\mathbf{i}}. \end{align*} Therefore $T^{\epsilon}g\in \mathcal{G}$, which shows that $T^{\epsilon}$ maps $\mathcal{G}$ into $\mathcal{G}$. Now, let $f,g \in \mathcal{G}$, $ X=(x_1,...,x_m) \in \prod\limits_{k=1}^m I_{k,j_k} $ and \[ \|\gamma \|_{\infty}:=\max\{\gamma_{\mathbf{j}}: {\mathbf{j}}\in \prod\limits_{k=1}^m \mathbb{N}_{N_k} \}.
\] Using \eqref{2eq7} and \eqref{2eq10}, we establish the contractivity of $T^{\epsilon}$ as follows: \begin{align*} |(T^{\epsilon}f &-T^{\epsilon}g)(X)| =\\ &\left|v_{\mathbf{j}}^{\epsilon}((u_{1,j_1}^{\epsilon^1})^{-1}(x_{1}),..,(u_{m,j_m}^{\epsilon^m})^{-1}(x_{m}),f((u_{1,j_1}^{\epsilon^1})^{-1}(x_{1}),...,(u_{m,j_m}^{\epsilon^m})^{-1}(x_{m})))\right.\\ & - \left. v_{\mathbf{j}}^{\epsilon}((u_{1,j_1}^{\epsilon^1})^{-1}(x_{1}),..,(u_{m,j_m}^{\epsilon^m})^{-1}(x_{m}), g((u_{1,j_1}^{\epsilon^1})^{-1}(x_{1}),...,(u_{m,j_m}^{\epsilon^m})^{-1}(x_{m})))\right|\\ & \leq \gamma_{\mathbf{j}} \abs{ f((u_{1,j_1}^{\epsilon^1})^{-1}(x_{1}),...,(u_{m,j_m}^{\epsilon^m})^{-1}(x_{m})) -g((u_{1,j_1}^{\epsilon^1})^{-1}(x_{1}),...,(u_{m,j_m}^{\epsilon^m})^{-1}(x_{m})) } \\& \leq \| \gamma\, \|_{\infty}\, \| f-g \|_{\infty}. \end{align*} As $X \in \prod\limits_{k=1}^m I_{k,j_k} $ was arbitrary, \[ \|T^{\epsilon}f-T^{\epsilon}g \|_{\infty} \leq \| \gamma \|_{\infty} \| f-g \|_{\infty}. \] By the Banach fixed point theorem, we conclude that $T^{\epsilon}$ has a unique fixed point $f^{\epsilon}$ in the complete metric space $\mathcal{G}$, i.e., $T^{\epsilon} f^{\epsilon}=f^{\epsilon}$. Equivalently, \begin{align}\label{2eq11} f^{\epsilon} (x_1,...,x_m) &= \sum_{\mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}} v_{\mathbf{j}}^{\epsilon}((u_{1,j_1}^{\epsilon^1})^{-1}(x_1),...,(u_{m,j_m}^{\epsilon^m})^{-1}(x_m),\nonumber\\ &\qquad f^{\epsilon}((u_{1,j_1}^{\epsilon^1})^{-1}(x_1),...,(u_{m,j_m}^{\epsilon^m})^{-1}(x_m))) \chi_{u_{\mathbf{j}}^{\epsilon}(\mathcal{I})}(x_1,...,x_m), \end{align} for all $(x_1,...,x_m)\in \mathcal{I}$. For $X=(x_1,...,x_m)$, let us write \[ (u_{\mathbf{j}}^{\epsilon})^{-1}(X)=((u_{1,j_1}^{\epsilon^1})^{-1}(x_1),...,(u_{m,j_m}^{\epsilon^m})^{-1}(x_m)) \] and \[ u_{\mathbf{j}}^{\epsilon}(X)=(u_{1,j_1}^{\epsilon^1}(x_1),...,u_{m,j_m}^{\epsilon^m}(x_m)).
\] Then, the self-referential equation associated with the multi-zipper FIF is given by \begin{align}\label{2eq12} f^{\epsilon}(X)= \sum_{\mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}} v_{\mathbf{j}}^{\epsilon}((u_{\mathbf{j}}^{\epsilon})^{-1}(X),f^{\epsilon}((u_{\mathbf{j}}^{\epsilon})^{-1}(X)))\chi_{u_{\mathbf{j}}^{\epsilon}(\mathcal{I})}(X), \quad\forall X\in \mathcal{I}. \end{align} The above equation can be rewritten as \begin{align}\label{2eq13} \begin{split} f^{\epsilon}(u_{\mathbf{j}}^{\epsilon}(X))= v_{\mathbf{j}}^{\epsilon}(X,f^{\epsilon}(X)), \quad\forall X\in \mathcal{I}. \end{split} \end{align} This unique fixed point $f^{\epsilon}$ interpolates the data points $\Delta$. For the graph $G=\{(X,f^{\epsilon}(X)):X\in \mathcal{I}\}$ of $f^{\epsilon}$, we obtain by \eqref{2eq8} and \eqref{2eq13} \begin{align*} \bigcup_{\mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}}W_{\mathbf{j}}^{\epsilon}(G) & =\bigcup_{{\mathbf{j}}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}} \{W_{\mathbf{j}}^{\epsilon}(X,f^{\epsilon}(X)): X\in \mathcal{I}\}\\ & = \bigcup_{{\mathbf{j}}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}} \{(u_{\mathbf{j}}^{\epsilon}(X),v_{\mathbf{j}}^{\epsilon}(X,f^{\epsilon}(X))): X\in \mathcal{I}\}\\ &=\bigcup_{\mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}}\{(u_{\mathbf{j}}^{\epsilon}(X),f^{\epsilon}(u_{\mathbf{j}}^{\epsilon}(X))): X\in \mathcal{I}\}\\ &=\{(X,f^{\epsilon}(X)):X\in \mathcal{I}\}=G. \end{align*} \noindent The unique fixed point $f^{\epsilon}$ of $T^{\epsilon}$ is called a \emph{multivariate zipper FIF} corresponding to the IFS \eqref{2eq9}. \end{proof} \begin{remark} Note that in the construction of a multivariate zipper FIF, each $\epsilon^k$, $k=1,2,\dots,m$, must be either the zero column vector or the all-ones column vector. Hence we obtain $2^m$ multivariate FIFs via the zipper methodology for the same set of scaling functions.
When $\epsilon = \mathbf{0}$, our multivariate zipper fractal function reduces to the existing multivariate fractal function \cite{Pandey_vishwanathan}. \end{remark} \subsection{Multivariate Zipper $\alpha$-fractal Functions}\label{2subsec2.2} For a given multivariate function $f\in C(\mathcal{I})$, consider a grid on its domain \begin{center} $\Delta := \left\{(x_{1,j_1},...,x_{m,j_m}): {\mathbf{j}}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k,0}\right\},$ \end{center} where $a_k :=x_{k,0}<...<x_{k,N_k}=:b_k$ for each $k\in \mathbb{N}_{m}$. Construct a continuous function $b:\mathcal{I} \to \mathbb{R}$ satisfying the conditions \begin{equation}\label{2eq14} b(x_{1,j_1},...,x_{m,j_m})=f(x_{1,j_1},...,x_{m,j_m}),\; \quad\forall \; {\mathbf{j}}\in \prod\limits_{k=1}^{m}\partial\mathbb{N}_{N_k,0}. \end{equation} For $ k\in \mathbb{N}_{m}$, we define affine maps $u_{k,j_k}^{\epsilon^k}: I_k\to I_{k,j_k}$ by \begin{equation}\label{2eq15} u_{k,j_k}^{\epsilon^k}(x) :=a_{k,j_k}x+b_{k,j_k}, \quad j_k\in \mathbb{N}_{N_k}, \end{equation} where $a_{k,j_k}$ and $b_{k,j_k}$ are chosen so that each map $u_{k,j_k}^{\epsilon^k}$ satisfies \eqref{2eq1} and \eqref{2eq2}. For ${\mathbf{j}}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$, define continuous variable scaling functions \begin{align}\label{2eq015} \alpha_{\mathbf{j}}: \mathcal{I} \to \mathbb{R} \end{align} satisfying \begin{itemize} \item[(i)] $\| \alpha_{\mathbf{j}}\|_{\infty}<1$, \item[(ii)] for all $j_k \in\intr\mathbb{N}_{N_k,0}$ with $(u_{k,j_k}^{\epsilon^k})^{-1}(x_{k,j_k}) = (u_{k,j_k+1}^{\epsilon^k})^{-1}(x_{k,j_k})=x_k^{*}$, \begin{align*} \begin{split} \alpha_{j_1\cdots j_k\cdots j_m}(x_1,\cdots,x_{k-1},x_{k}^*,x_{k+1},\cdots,x_m) =\alpha_{j_1\cdots j_k+1\cdots j_m}(x_1,\cdots,x_{k-1},\\ x_{k}^*,x_{k+1},\cdots,x_m), \quad (x_1,...,x_{k-1},x_{k+1},...,x_m)\in \prod\limits_{i=1,i\ne k}^{m}I_i.
\end{split} \end{align*} \end{itemize} Further, define $v_{\mathbf{j}}^{\epsilon}:\mathcal{K} \to \mathbb{R}$ by \begin{align}\label{2eq16} v_{\mathbf{j}}^{\epsilon}(X,y) :=f\left(u_{1,j_1}^{\epsilon^1}(x_1),...,u_{m,j_m}^{\epsilon^m}(x_m)\right)+ \alpha_{\mathbf{j}}(X)(y-b(X)). \end{align} Then, for all ${\mathbf{j}}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$ and ${\mathbf{l}}=(l_1,...,l_m)\in \prod\limits_{k=1}^{m}\partial \mathbb{N}_{N_k,0}$, we get \begin{align*} v_{\mathbf{j}}^{\epsilon}(x_{1,l_1},...,x_{m,l_m}, f(x_{1,l_1},...,x_{m,l_m})) &=f(u_{1,j_1}^{\epsilon^1}(x_{1,l_1}),...,u_{m,j_m}^{\epsilon^m}(x_{m,l_m}))\\ &=f(x_{1,\tau_1(j_1,l_1)},...,x_{m,\tau_m(j_m,l_m)})\\ &=y_{\tau_1(j_1,l_1)...\tau_m(j_m,l_m)}. \end{align*} In other words, $ v_{\mathbf{j}}^{\epsilon}$ satisfies \eqref{2eq6}. \noindent Suppose now that $j_k \in\intr\mathbb{N}_{N_k,0}$, $1\leq k \leq m$, and that \[ x_k^*=(u_{k,j_k}^{\epsilon^k})^{-1}(x_{k,j_k})=(u_{k,j_k+1}^{\epsilon^k})^{-1}(x_{k,j_k}). \] For any $y\in \mathbb{R}$, \begin{align*} & v_{j_1,\ldots,j_{k-1}, j_k, j_{k+1},\ldots, j_m}^\epsilon(x_1,\ldots,x_{k-1},x_k^*,x_{k+1},\ldots,x_m,y) \\ & \qquad = f(u_{1,j_1}^{\epsilon^1}(x_1),\ldots,u_{k-1,j_{k-1}}^{\epsilon^{k-1}}(x_{k-1}),u_{k,j_k}^{\epsilon^k}(x_k^*),u_{k+1,j_{k+1}}^{\epsilon^{k+1}}(x_{k+1}),\ldots,u_{m,j_m}^{\epsilon^m}(x_m))\\ & \qquad\quad +\alpha_{j_1\cdots j_k\cdots j_m}(x_1,\cdots,x_k^*,\cdots,x_m)(y-b(x_1,\ldots,x_k^*,\ldots,x_m))\\ & \qquad= f(u_{1,j_1}^{\epsilon^1}(x_1),\ldots,u_{k-1,j_{k-1}}^{\epsilon^{k-1}}(x_{k-1}),x_{k,j_k},u_{k+1,j_{k+1}}^{\epsilon^{k+1}}(x_{k+1}),\ldots,u_{m,j_m}^{\epsilon^m}(x_m))\\ & \qquad\quad + \alpha_{j_1\cdots j_k\cdots j_m}(x_1,\cdots,x_k^*,\cdots,x_m)(y-b(x_1,\ldots,x_k^*,\ldots,x_m))\\ & \qquad =f(u_{1,j_1}^{\epsilon^1}(x_1),\ldots,u_{k-1,j_{k-1}}^{\epsilon^{k-1}}(x_{k-1}),u_{k,j_k+1}^{\epsilon^k}(x_k^*),u_{k+1,j_{k+1}}^{\epsilon^{k+1}}(x_{k+1}),\ldots,u_{m,j_m}^{\epsilon^m}(x_m))\\ & \qquad\quad + \alpha_{j_1\cdots j_k+1\cdots j_m}(x_1,\cdots,x_k^*,\cdots,x_m)(y-b(x_1,\ldots,x_k^*,\ldots,x_m))\\ & \qquad =v_{j_1,\ldots,j_{k-1}, j_k+1, j_{k+1},\ldots,j_m}^{\epsilon} (x_1,\ldots,x_{k-1},x_k^*,x_{k+1},\ldots,x_m,y). \end{align*} Therefore, $v_{\mathbf{j}}^{\epsilon}$ satisfies \eqref{2eq0}, \eqref{2eq6}, and \eqref{2eq7} for all ${\mathbf{j}}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$. Theorem \ref{2th0} now implies that the IFS \begin{align}\label{2eq000} I^{\epsilon}=\left\{ \mathcal{K},W_{\mathbf{j}}^{\epsilon}:\mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}\right\} \end{align} defined in \eqref{2eq9}, where the maps $u_{k,j_k}^{\epsilon^k}$ and $v_{\mathbf{j}}^{\epsilon}$ are as in \eqref{2eq15} and \eqref{2eq16}, determines a fractal function referred to as a \emph{multivariate zipper $\alpha$-fractal function} and denoted by $f_{\Delta,b}^{\alpha,\epsilon}$. The fractal function $f_{\Delta,b}^{\alpha,\epsilon}$ is the fixed point of the RB operator $ T^{\epsilon}:\mathcal{G}\to \mathcal{G} $ given by \begin{align}\label{2eq17} T^{\epsilon}g(X) = f(X)+ \sum_{{\mathbf{j}}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}} \alpha_{{\mathbf{j}}} ((u_{\mathbf{j}}^{\epsilon})^{-1}(X))\,(g-b)((u_{\mathbf{j}}^{\epsilon})^{-1}(X)) \chi_{u_{{\mathbf{j}}}^{\epsilon}(\mathcal{I})}(X), \quad\forall X\in \mathcal{I}. \end{align} The fixed point $f_{\Delta,b}^{\alpha,\epsilon}$ satisfies the self-referential equation \begin{align}\label{2eq18} \begin{split} f_{\Delta,b}^{\alpha,\epsilon}(X) = f(X)+ \sum_{\mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}} \alpha_{\mathbf{j}} ((u_{\mathbf{j}}^{\epsilon})^{-1}(X))(f_{\Delta,b}^{\alpha,\epsilon}-b) ((u_{\mathbf{j}}^{\epsilon})^{-1}(X)) \chi_{u_{\mathbf{j}}^{\epsilon}(\mathcal{I})}(X),\\ \forall X\in \mathcal{I}.
\end{split} \end{align} Equivalently, \begin{equation}\label{2eq19} \begin{split} f_{\Delta,b}^{\alpha,{\epsilon}}(X)=f(X)+ \alpha_{\mathbf{j}}((u_{\mathbf{j}}^{\epsilon})^{-1}(X))\,(f_{\Delta,b}^{\alpha,\epsilon}-b)((u_{\mathbf{j}}^{\epsilon})^{-1}(X)),\\ \forall X\in u_{\mathbf{j}}^{\epsilon}(\mathcal{I}),\; \mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}, \end{split} \end{equation} or, more succinctly, \begin{align}\label{2eq20} f_{\Delta,b}^{\alpha,\epsilon}(u_{\mathbf{j}}^{\epsilon}(X))=f(u_{\mathbf{j}}^{\epsilon}(X))+\alpha_{\mathbf{j}}(X)(f_{\Delta,b}^{\alpha,\epsilon}(X)-b(X)), \end{align} for all $X\in \mathcal{I}$ and $\mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$. From \eqref{2eq20}, we can easily establish the inequality \begin{align}\label{2eq21} \|f_{\Delta,b}^{\alpha,\epsilon}-f\|_{\infty}\leq \dfrac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}} \|f-b\|_{\infty}, \end{align} where $\|\alpha\|_{\infty}:=\max\left\{\|\alpha_{{\mathbf{j}}}\|_{\infty}: {\mathbf{j}}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}\right\}$. \noindent From \eqref{2eq21}, we observe that $\|f_{\Delta,b}^{\alpha,\epsilon}-f\|_{\infty} \to 0$ as $\|\alpha\|_{\infty} \to 0$. \section{Multivariate Bernstein Zipper Fractal Function}\label{2sec3} To obtain convergence of the multivariate zipper $\alpha$-fractal function $f_{\Delta,b}^{\alpha,\epsilon}$ to $f$ without altering the scaling functions $\alpha_{\mathbf{j}}$, we take as base functions $b$ the multivariate Bernstein polynomials $B_{{\mathbf{n}}}f(X)$ \cite{Foupouagnigni_Wouodjie,Davis} of $f$.
The ${\mathbf{n}}:=(n_1,...,n_m)$-th Bernstein polynomial of $f\in C(\mathcal{I})$ is given by \begin{align}\label{2eq22} B_{{\mathbf{n}}}f(X) &= \sum _{k_1=0}^{n_1}\cdots\sum _{k_m=0}^{n_m} f\left(x_{1,0} +(x_{1,N_1}-x_{1,0}) \dfrac{k_1}{n_1},...,x_{m,0}\right.\nonumber \\ & \qquad + (x_{m,N_m}-x_{m,0}) \left.\dfrac{k_m}{n_m}\right) \prod\limits _{r=1}^{m} b_{k_r,n_r}(x_r) , \end{align} where \begin{align*} b_{k_r,n_r}(x_r) &=\binom{n_r}{k_r}\frac{(x_r-x_{r,0})^{k_r}(x_{r,N_r}-x_r)^{n_r-k_r}}{(x_{r,N_r}-x_{r,0})^{n_r}},\quad 0\leq k_r \leq n_r, \end{align*} for $r=1,...,m$ and $n_1,...,n_m\in \mathbb{N}$. If we take the base function $b(X)= B_{{\mathbf{n}}}f(X)$ in \eqref{2eq16}, then the IFS \eqref{2eq000} becomes \begin{align}\label{2eq23} I_{{\mathbf{n}}}^{\epsilon}=\left\{ \mathcal{K},W_{{\mathbf{j}}}^{\epsilon}: {\mathbf{j}}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}\right\}, \end{align} where \[ W_{\mathbf{j}}^{\epsilon}(X,y) :=\left(u_{1,j_1}^{\epsilon^1}(x_1),...,u_{m,j_m}^{\epsilon^m}(x_m), v_{\mathbf{j}}^{\epsilon}(X,y)\right), \] and \[ v_{\mathbf{j}}^{\epsilon}(X,y) :=f\left(u_{1,j_1}^{\epsilon^1}(x_1),...,u_{m,j_m}^{\epsilon^m}(x_m)\right)+\alpha_{\mathbf{j}}(X)\left(y-B_{{\mathbf{n}}}f(X)\right), \] for all $\mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$. This IFS determines a multivariate zipper $\alpha$-fractal function \[ f_{\Delta,B_{{\mathbf{n}}}}^{\alpha,\epsilon} :=f_{\Delta;{\mathbf{n}}}^{\alpha,\epsilon}:=f_{{\mathbf{n}}}^{\alpha,\epsilon} \] (we use these three notations interchangeably), referred to as a \emph{multivariate Bernstein zipper $\alpha$-fractal function} corresponding to the continuous function $f:\mathcal{I} \to \mathbb{R}$.
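The Bernstein polynomial $B_{\mathbf{n}}f$ in \eqref{2eq22} is straightforward to evaluate numerically. The following Python sketch is an illustration only (the test function and sample grid are arbitrary choices, not part of the construction); it evaluates the bivariate case on $[0,1]^2$ and exhibits the two properties used in the sequel: $B_{\mathbf{n}}f$ reproduces $f$ at the corner interpolation points, and $\|f-B_{\mathbf{n}}f\|_{\infty}$ decreases as $\mathbf{n}$ grows.

```python
# Hedged numerical sketch of the bivariate Bernstein polynomial B_{(n1,n2)}f
# on [0,1]^2 (so x_{k,0} = 0 and x_{k,N_k} = 1 in the formula above).
import math

def bernstein_2d(f, n1, n2, x, y):
    """Evaluate B_{(n1,n2)} f at (x, y) on the unit square."""
    total = 0.0
    for k1 in range(n1 + 1):
        w1 = math.comb(n1, k1) * x**k1 * (1 - x)**(n1 - k1)
        for k2 in range(n2 + 1):
            w2 = math.comb(n2, k2) * y**k2 * (1 - y)**(n2 - k2)
            total += f(k1 / n1, k2 / n2) * w1 * w2
    return total

# Illustrative choice of f, matching the example below.
f = lambda x, y: math.sin(math.pi / 2 * x * y)

# Sup-norm error of B_{(n,n)} f on a sample grid; it shrinks as n grows,
# which is what drives the uniform convergence of f_{Delta,B_n}^{alpha,eps}.
grid = [i / 10 for i in range(11)]
def sup_err(n):
    return max(abs(bernstein_2d(f, n, n, x, y) - f(x, y))
               for x in grid for y in grid)

print(sup_err(4), sup_err(32))
```

Note that $B_{\mathbf{n}}f$ matches $f$ exactly at the four corners of the square, so it is an admissible base function in the sense of \eqref{2eq14}.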
It satisfies the self-referential equation \begin{align}\label{2eq24} f_{\Delta;{\mathbf{n}}}^{\alpha,\epsilon}\circ u_{\mathbf{j}}^{\epsilon} = f\circ u_{\mathbf{j}}^{\epsilon} + \alpha_{\mathbf{j}}\left(f_{\Delta;{\mathbf{n}}}^{\alpha,\epsilon} - B_{{\mathbf{n}}}f\right) \quad\text{on } \mathcal{I}, \text{ for all } \mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}. \end{align} \begin{definition} Define an operator $\mathcal{F}_{\Delta,B_{{\mathbf{n}}}}^{\alpha,\epsilon}:C(\mathcal{I})\to C(\mathcal{I})$ by \begin{center} $\mathcal{F}_{\Delta,B_{{\mathbf{n}}}}^{\alpha,\epsilon}(f) :=f_{\Delta,B_{{\mathbf{n}}}}^{\alpha,\epsilon}=f_{\Delta;{\mathbf{n}}}^{\alpha,\epsilon}$, \end{center} where $\Delta$ is the set of data points, $B_{{\mathbf{n}}}$ is the multivariate Bernstein operator, and $\alpha$ is the scaling function. We call this operator the multivariate Bernstein zipper $\alpha$-fractal operator. \end{definition} \begin{theorem}\label{2th3.1} The multivariate Bernstein zipper $\alpha$-fractal operator \[ \mathcal{F}_{\Delta,B_{{\mathbf{n}}}}^{\alpha,\epsilon}:C(\mathcal{I})\to C(\mathcal{I}) \] is linear and bounded. \end{theorem} \begin{proof} The proof is the same as in the univariate case for the $\alpha$-fractal operator in \cite{Chand_Navascues}. \end{proof} Without altering $\alpha$, we obtain the following convergence result. \begin{theorem}\label{2th3.2} Let $f\in C(\mathcal{I})$. Then the multivariate Bernstein zipper $\alpha$-fractal function $f_{\Delta,B_{{\mathbf{n}}}}^{\alpha,\epsilon}$ converges uniformly to $f$ as $n_i \to \infty$ for all $1\leq i \leq m$.
\end{theorem} \begin{proof} From \eqref{2eq24}, we get \begin{align*} \|f_{\Delta,B_{{\mathbf{n}}}}^{\alpha,\epsilon}-f\|_{\infty} &\leq \|\alpha\|_{\infty} \|f_{\Delta,B_{{\mathbf{n}}}}^{\alpha,\epsilon}-B_{{\mathbf{n}}}f\|_{\infty}\\ & \leq \|\alpha\|_{\infty}\|f_{\Delta,B_{{\mathbf{n}}}}^{\alpha,\epsilon}-f\|_{\infty}+\|\alpha\|_{\infty}\|f-B_{{\mathbf{n}}}f\|_{\infty}. \end{align*} Hence, \begin{align}\label{2eq25} \|f_{\Delta,B_{{\mathbf{n}}}}^{\alpha,\epsilon}-f\|_{\infty}\leq \dfrac{\|\alpha\|_{\infty}}{1-\|\alpha\|_{\infty}} \|f-B_{{\mathbf{n}}}f\|_{\infty}. \end{align} By \cite{Foupouagnigni_Wouodjie}, we know that $\|f-B_{{\mathbf{n}}}f\|_{\infty} \to 0$ as $n_i \to \infty$ for all $1\leq i \leq m$. Employing this in \eqref{2eq25}, we obtain $\|f_{\Delta,B_{{\mathbf{n}}}}^{\alpha,\epsilon}-f\|_{\infty} \to 0$ as $n_i \to \infty$ for all $1\leq i \leq m$. Therefore, $f_{\Delta,B_{{\mathbf{n}}}}^{\alpha,\epsilon}$ converges uniformly to $f$. \end{proof} \begin{example} In this example, we illustrate Theorem \ref{2th3.2}. Let $f(x,y) :=\sin(\frac{\pi}{2}xy)$ on $\mathcal{I} :=I_1\times I_2$, where $I_1 =I_2 :=[0,1]$, $m=2$, and $\alpha_{j_1 j_2}=0.5$ for all $\mathbf{j}\in \prod\limits_{k=1}^m \mathbb{N}_{N_k}$. Consider the grid \begin{align*} \Delta=\left\{ (x_i, y_j) : x_i, y_j \in \left\{0, \tfrac{1}{3}, \tfrac{2}{3}, 1\right\}\right\}. \end{align*} The original bivariate function $f(x,y)=\sin(\frac{\pi}{2}xy)$ is shown in Fig. \ref{2fig1}(a). For the interpolation data of $f$ on $\Delta$, fractal functions corresponding to different values of the signature are constructed in Figs. \ref{2fig1}(b)-(e). Fig. \ref{2fig1}(f) is the plot of $f_{\Delta,B_{20,20}}^{\alpha,\epsilon}$ with signature $\epsilon=(1,1)$. One can observe from Figs.
\ref{2fig1}(d) and \ref{2fig1}(f) that $f_{\Delta,B_{20,20}}^{\alpha,\epsilon}$ provides a better approximation for $f\in C(\mathcal{I})$ than the one obtained by $f_{\Delta,B_{3,3}}^{\alpha,\epsilon}$. \end{example} \begin{figure} \centering \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=\textwidth]{simplefeqsin_pibi2xy_fig} \caption{$f(x_1,x_2)=\sin(\frac{\pi}{2}x_1x_2)$} \label{2fig1a} \end{subfigure} \hfill \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=\textwidth]{pt7f_01_33fig} \caption{$f_{\Delta,B_{3,3}}^{\alpha,\epsilon} \;\text{when}\;\epsilon=(0,1)$} \label{2fig1b} \end{subfigure} \hfill \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=\textwidth]{pt7f_10_33fig} \caption{$f_{\Delta,B_{3,3}}^{\alpha,\epsilon}\;\text{when}\;\epsilon=(1,0)$} \label{2fig1c} \end{subfigure} \hfill \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=\textwidth]{pt7f_11_3fig} \caption{$f_{\Delta,B_{3,3}}^{\alpha,\epsilon} \;\text{when}\;\epsilon=(1,1)$} \label{2fig1d} \end{subfigure} \hfill \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=\textwidth]{pt7f_00_33fig} \caption{$f_{\Delta,B_{3,3}}^{\alpha,\epsilon}\;\text{when}\;\epsilon=(0,0)$} \label{2fig1e} \end{subfigure} \hfill \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=\textwidth]{pt7f_11_2020fig} \caption{$f_{\Delta,B_{20,20}}^{\alpha,\epsilon} \;\text{when}\;\epsilon=(1,1)$} \label{2fig1f} \end{subfigure} \hfill \caption{Multivariate Bernstein zipper $\alpha$-fractal functions} \label{2fig1} \end{figure} \section{Constrained Multivariate Bernstein zipper $\alpha$-Fractal Approximation}\label{2sec4} In this section, we study the constrained approximation by multivariate Bernstein zipper $\alpha$-fractal functions. \begin{theorem}\label{2th3} Let $f\in C(\mathcal{I})$ and $f(X)\geq 0$ for all $X \in \mathcal{I}$. 
Consider the set \[ \Delta = \left\{(x_{1,j_1},...,x_{m,j_m}): \mathbf{j} \in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k,0}\right\}, \] where $a_k:=x_{k,0}<...<x_{k,N_k}=:b_k$ for each $k\in\mathbb{N}_m$ and $I_k :=[a_k,b_k]$, and let the $\alpha_{\mathbf{j}}:\mathcal{I}\to \mathbb{R}$ be continuous scaling functions. Then, the sequence $\{I_{{\mathbf{n}}}^{\epsilon}\}$ of IFSs \eqref{2eq23} determines a sequence $\{f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}\}$ of positive multivariate Bernstein zipper $\alpha$-fractal functions that converges uniformly to $f$ if the scaling functions $\alpha_{\mathbf{j}}(X)$ are chosen as in \eqref{2eq015} and according to \begin{align}\label{2eq26} \max\left\{ \frac{-\phi^{\epsilon}(f;\mathbf{j})}{C_{{\mathbf{n}}}-\phi_{{\mathbf{n}}}},-\frac{C_{{\mathbf{n}}}-\Phi^{\epsilon}(f;\mathbf{j})}{\Phi_{{\mathbf{n}}}}\right\}\leq \alpha_{{\mathbf{j}}}(X) \leq \min\left\{\frac{\phi^{\epsilon}(f;\mathbf{j})}{\Phi_{{\mathbf{n}}}},\frac{C_{{\mathbf{n}}}-\Phi^{\epsilon}(f;\mathbf{j})}{C_{{\mathbf{n}}}-\phi_{{\mathbf{n}}}}\right\}, \end{align} for $\mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$, where \begin{gather*} \phi^{\epsilon}(f;\mathbf{j}) :=\min_{X\in \mathcal{I}}f(u_{\mathbf{j}}^{\epsilon}(X)), \quad \Phi^{\epsilon}(f;\mathbf{j}) :=\max_{X\in \mathcal{I}}f(u_{\mathbf{j}}^{\epsilon}(X)),\\ \phi_{{\mathbf{n}}} :=\min_{X\in \mathcal{I}}B_{{\mathbf{n}}}f(X),\quad \Phi_{{\mathbf{n}}} :=\max_{X\in \mathcal{I}}B_{{\mathbf{n}}}f(X), \end{gather*} and $C_{{\mathbf{n}}}$ is a positive real number strictly greater than both $\phi_{{\mathbf{n}}}$ and $\|f\|_{\infty}$. \end{theorem} \begin{proof} By Theorem \ref{2th3.2}, for any given non-negative function $f\in C(\mathcal{I})$ there exists a sequence $\{f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}\}$, ${\mathbf{n}}\in\mathbb{N}^m$, of multivariate Bernstein zipper $\alpha$-fractal functions that converges uniformly to $f$.
By \cite{Foupouagnigni_Wouodjie}, $B_{{\mathbf{n}}}$ is a positive linear operator and thus $B_{{\mathbf{n}}}f(X)\geq 0$ for all $X\in \mathcal{I}$, which implies the positivity of $\Phi_{{\mathbf{n}}}$. Let $q_{{\mathbf{n}};\mathbf{j}}^{\epsilon}(X) :=f(u_{\mathbf{j}}^{\epsilon}(X))-\alpha_{\mathbf{j}}(X)B_{{\mathbf{n}}}f(X)$. By \eqref{2eq24}, we obtain \begin{align}\label{2eq27} f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(u_{\mathbf{j}}^{\epsilon}(X)) & = f(u_{\mathbf{j}}^{\epsilon}(X))+\alpha_{\mathbf{j}}(X)(f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X)-B_{{\mathbf{n}}}f(X))\nonumber\\ & = v_{{\mathbf{n}};\mathbf{j}}^{\epsilon}(X,f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X)). \end{align} If $v_{{\mathbf{n}};\mathbf{j}}^{\epsilon}(X,y) \in [0,C_{{\mathbf{n}}}]$ for all $\mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$ and $(X,y)\in \mathcal{I} \times[0,C_{{\mathbf{n}}}]$, then \eqref{2eq27} implies \[ f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(u_{\mathbf{j}}^{\epsilon}(X))\in[0,C_{{\mathbf{n}}}], \quad\forall X\in \mathcal{I}. \] Therefore, in order to prove that $f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X)\in[0,C_{{\mathbf{n}}}]$ for all $X\in \mathcal{I}$, it suffices to show that $v_{{\mathbf{n}};\mathbf{j}}^{\epsilon}(X,y) \in [0,C_{{\mathbf{n}}}]$ for all $(X,y)\in \mathcal{I}\times[0,C_{{\mathbf{n}}}]$. Suppose that $(X,y)\in \mathcal{I}\times[0,C_{{\mathbf{n}}}]$ and $|\alpha_{\mathbf{j}}(X)|<1$. There are two cases: \vskip 4pt \noindent \textbf{Case(i)}: Let $ 0\leq \alpha_{\mathbf{j}}(X)<1$ for all $X\in \mathcal{I}$. Then $ 0\leq y\leq C_{{\mathbf{n}}}$ gives \[ q_{{\mathbf{n}};\mathbf{j}}^{\epsilon}(X)\leq \alpha_{\mathbf{j}}(X)y+q_{{\mathbf{n}};\mathbf{j}}^{\epsilon}(X)\leq C_{{\mathbf{n}}}\alpha_{\mathbf{j}}(X)+q_{{\mathbf{n}};\mathbf{j}}^{\epsilon}(X).
\] Hence, for $ \mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$ and $(X,y)\in \mathcal{I}\times[0,C_{{\mathbf{n}}}]$, \begin{equation*} 0\leq v_{{\mathbf{n}};\mathbf{j}}^{\epsilon}(X,y) \leq C_{{\mathbf{n}}} \end{equation*} holds if \begin{equation}\label{2eq28} \begin{split} & f(u_{\mathbf{j}}^{\epsilon}(X))-\alpha_{\mathbf{j}}(X) B_{{\mathbf{n}}}f(X) \geq 0,\\ & f(u_{\mathbf{j}}^{\epsilon}(X))-\alpha_{\mathbf{j}}(X)B_{{\mathbf{n}}}f(X) \leq C_{{\mathbf{n}}}(1-\alpha_{\mathbf{j}}(X)). \end{split} \end{equation} As $f(u_{\mathbf{j}}^{\epsilon}(X))\geq \phi^{\epsilon}(f,\mathbf{j})$ and $B_{{\mathbf{n}}}f(X)\leq \Phi_{{\mathbf{n}}},$ we obtain that \[ f(u_{\mathbf{j}}^{\epsilon}(X)) -\alpha_{\mathbf{j}}(X)B_{{\mathbf{n}}}f(X)\geq 0 \] provided \begin{equation*} \phi^{\epsilon}(f,\mathbf{j})-\alpha_{\mathbf{j}}(X)\Phi_{{\mathbf{n}}}\geq 0, \end{equation*} that is, $ \alpha_{\mathbf{j}}(X) \leq \frac{\phi^{\epsilon}(f,{\mathbf{j}})}{\Phi_{{\mathbf{n}}}}.$ Next, as $f(u_{\mathbf{j}}^{\epsilon}(X))\leq \Phi^{\epsilon}(f,\mathbf{j})$ and $B_{{\mathbf{n}}}f(X)\geq \phi_{{\mathbf{n}}}$, the second inequality in \eqref{2eq28} holds if \begin{equation*} \begin{split} \alpha_{\mathbf{j}}(X)\leq \frac{C_{{\mathbf{n}}}-\Phi^{\epsilon}(f,\mathbf{j})}{C_{{\mathbf{n}}}-\phi_{{\mathbf{n}}}}. \end{split} \end{equation*} In this case, for $ \mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$ and $(X,y)\in \mathcal{I}\times[0,C_{{\mathbf{n}}}]$, \begin{center} $v_{{\mathbf{n}}; \mathbf{j}}^{\epsilon}(X,y) \in [0,C_{{\mathbf{n}}}]$ \end{center} is true whenever \begin{equation*} \begin{split} \alpha_{\mathbf{j}}(X) \leq \min \left\{ \frac{\phi^{\epsilon}(f,\mathbf{j})}{\Phi_{{\mathbf{n}}}}, \frac{C_{{\mathbf{n}}}-\Phi^{\epsilon}(f,\mathbf{j})}{C_{{\mathbf{n}}}-\phi_{{\mathbf{n}}}}\right\}. \end{split} \end{equation*} \textbf{Case(ii)}: Let $ -1<\alpha_{\mathbf{j}}(X)\leq 0$ for all $X\in \mathcal{I}$.
Then $ 0\leq y\leq C_{{\mathbf{n}}}$ implies \[ C_{{\mathbf{n}}}\alpha_{\mathbf{j}}(X)+q_{{\mathbf{n}};\mathbf{j}}^{\epsilon}(X)\leq \alpha_{\mathbf{j}}(X)y+q_{{\mathbf{n}};\mathbf{j}}^{\epsilon}(X)\leq q_{{\mathbf{n}};\mathbf{j}}^{\epsilon}(X). \] For $ \mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$ and $(X,y)\in \mathcal{I} \times[0,C_{{\mathbf{n}}}]$, the inequality \begin{center} $0\leq v_{{\mathbf{n}};\mathbf{j}}^{\epsilon}(X,y)=\alpha_{\mathbf{j}}(X)y+q_{{\mathbf{n}};\mathbf{j}}^{\epsilon}(X)\leq C_{{\mathbf{n}}}$ \end{center} holds if \begin{equation}\label{2eq29} \begin{split} & f(u_{\mathbf{j}}^{\epsilon}(X))-\alpha_{\mathbf{j}}(X)B_{{\mathbf{n}}}f(X)\leq C_{{\mathbf{n}}},\\ &C_{{\mathbf{n}}}\alpha_{\mathbf{j}}(X)+f(u_{\mathbf{j}}^{\epsilon}(X))-\alpha_{\mathbf{j}}(X)B_{{\mathbf{n}}}f(X)\geq 0. \end{split} \end{equation} As $f(u_{\mathbf{j}}^{\epsilon}(X))\leq \Phi^{\epsilon}(f,\mathbf{j})$ and $B_{{\mathbf{n}}}f(X)\leq \Phi_{{\mathbf{n}}}$, we obtain \begin{equation*} f(u_{\mathbf{j}}^{\epsilon}(X))-\alpha_{\mathbf{j}}(X)B_{{\mathbf{n}}}f(X)\leq \Phi^{\epsilon}(f,\mathbf{j})-\alpha_{\mathbf{j}}(X)\Phi_{{\mathbf{n}}}\leq C_{{\mathbf{n}}}, \end{equation*} where the last inequality holds if $\alpha_{\mathbf{j}}(X) \geq -\frac{C_{\mathbf{n}}-\Phi^{\epsilon}(f,\mathbf{j})}{\Phi_{{\mathbf{n}}}}.$ Further, since $B_{{\mathbf{n}}}f(X)\geq \phi_{{\mathbf{n}}}$ {and} $f(u_{\mathbf{j}}^{\epsilon}(X))\geq \phi^{\epsilon}(f,\mathbf{j})$, a simple calculation yields that the second inequality in \eqref{2eq29} holds if $\alpha_{\mathbf{j}}(X)\geq \frac{-\phi^{\epsilon}(f,\mathbf{j})}{C_{{\mathbf{n}}}-\phi_{{\mathbf{n}}}}$.
In this case, for $\mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$ and $(X,y)\in \mathcal{I}\times[0,C_{{\mathbf{n}}}]$, \[ v_{{{\mathbf{n}}};\mathbf{j}}^{\epsilon}(X,y) \in [0,C_{{\mathbf{n}}}] \] holds if \begin{equation*} \max\left\{ \frac{-\phi^{\epsilon}(f,\mathbf{j})}{C_{{\mathbf{n}}}-\phi_{{\mathbf{n}}}},-\frac{C_{{\mathbf{n}}}-\Phi^{\epsilon}(f,\mathbf{j})}{\Phi_{{\mathbf{n}}}}\right\}\leq \alpha_{\mathbf{j}}(X). \end{equation*} These two cases imply \eqref{2eq26}. \end{proof} In the above theorem, we have seen that for every continuous function $f:\mathcal{I}\to\mathbb{R}$ with $f\geq 0$ on $\mathcal{I}$, there exists a sequence of positive multivariate Bernstein zipper $\alpha$-fractal functions which converges to $f$ in the sup-norm. \begin{theorem}\label{2th4} Let $f,g\in C(\mathcal{I})$ and $f\geq g$ on $\mathcal{I}$. For ${\mathbf{n}} \in \mathbb{N}^{m}$, let $f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}$ be the multivariate Bernstein zipper $\alpha$-fractal functions associated with the IFS $I_{{\mathbf{n}}}^{\epsilon}$, where \[ \Delta := \left\{(x_{1,j_1},...,x_{m,j_m}): \mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k,0}\right\} \] is such that $a_k:=x_{k,0}<...<x_{k,N_k}=:b_k$ for $k\in\mathbb{N}_m$, $I_k :=[a_k,b_k]$, and $\alpha_{{\mathbf{j}}}$ is taken as in \eqref{2eq015}.
Then, the sequence $\{I_{{\mathbf{n}}}^{\epsilon}\}$ of IFSs determines a sequence of multivariate Bernstein zipper $\alpha$-fractal functions $\{f^{\alpha,{\epsilon}}_{\Delta;{\mathbf{n}}}\}$ such that $f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}\geq g$ on $\mathcal{I}$ and which converges uniformly to $f$, provided the continuous scaling functions $\alpha_{\mathbf{j}}(X)$ are chosen as in \eqref{2eq015} and satisfy \begin{align}\label{2eq30} 0\leq \alpha_{\mathbf{j}}(X)\leq \min\left\{\frac{\phi^{\epsilon}(f-g,\mathbf{j})}{\Phi_{{\mathbf{n}}}(f)-\phi(g)},1 \right\}, \end{align} where $\phi^{\epsilon}(f-g,\mathbf{j}) :=\min\limits_{X\in \mathcal{I}}(f-g)(u_{\mathbf{j}}^{\epsilon}(X))$, $\Phi_{{\mathbf{n}}}(f):=\max\limits_{X\in \mathcal{I}}B_{{\mathbf{n}}}f(X)$, and $\phi(g):=\min\limits_{X\in \mathcal{I}}g(X)$. \end{theorem} \begin{proof} By \eqref{2eq24}, we can rewrite the functional equation of $f_{\Delta;{\mathbf{n}}}^{\alpha,\epsilon}$ as follows: \begin{align}\label{2eq32} f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X) &= f(X)+\sum_{\mathbf{j}\in \prod\limits_{k=1}^m\mathbb{N}_{N_k}} \alpha_{\mathbf{j}} ((u_{\mathbf{j}}^{\epsilon})^{-1}(X)) (f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}((u_{\mathbf{j}}^{\epsilon})^{-1}(X))\nonumber \\ & \qquad -B_{{\mathbf{n}}}f((u_{\mathbf{j}}^{\epsilon})^{-1}(X)))\chi_{u_{\mathbf{j}}^{\epsilon}(\mathcal{I})}(X), \quad X\in \mathcal{I}. \end{align} This functional equation gives the values of $f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}$ at $(N^{r+2}+1)^m$ distinct points in $\mathcal{I}$ at the $(r+1)$-th iteration from its values at $(N^{r+1}+1)^m$ points in $\mathcal{I}$ at the $r$-th iteration. Let us begin the iteration process with the nodal points. We establish by induction that every iterated point $X$ satisfies $f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X)\geq g(X)$. For the $0$-th iteration, we have \[ f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X)\geq g(X), \] since $f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}$ interpolates $f$ at the nodes and $f(X)\geq g(X)$.
Now, suppose that $f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}\geq g$. We show that \begin{align*} f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(u_{\mathbf{j}}^{\epsilon}(X))\geq g(u_{\mathbf{j}}^{\epsilon}(X)),\quad \forall X\in \mathcal{I}, \,\forall \mathbf{j} \in \prod\limits_{k=1}^{m} \mathbb{N}_{N_k}. \end{align*} From the fixed point equation \eqref{2eq32}, this is equivalent to proving that \begin{align}\label{2eq33} f(u_{\mathbf{j}}^{\epsilon}(X))+\alpha _{\mathbf{j}} (X)f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X)- \alpha _{\mathbf{j}} (X)B_{{\mathbf{n}}}f(X)-g(u_{\mathbf{j}}^{\epsilon}(X))\geq 0. \end{align} Choosing $\alpha _{\mathbf{j}}(X)$ non-negative and using the induction hypothesis, it suffices to show that \[ f(u_{\mathbf{j}}^{\epsilon}(X))+\alpha _{\mathbf{j}}(X)g(X)- \alpha_{\mathbf{j}} (X)B_{{\mathbf{n}}}f(X)-g(u_{\mathbf{j}}^{\epsilon}(X))\geq 0. \] For the validity of the above inequality, it suffices to choose $\alpha _{\mathbf{j}}$ so that \begin{align}\label{2eq34} 0\leq \alpha_{\mathbf{j}}(X)\leq \min\left\{\frac{\phi^{\epsilon}(f-g,\mathbf{j})}{\Phi_{{\mathbf{n}}}(f)-\phi(g)},1 \right\}. \end{align} If $\alpha_{\mathbf{j}}$, $\mathbf{j}\in \prod\limits_{k=1}^{m} \mathbb{N}_{N_k}$, satisfies \eqref{2eq30}, then $f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}\geq g$ on a dense subset of $\mathcal{I}$. By a density and continuity argument, $f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X)\geq g(X)$ for all $X\in \mathcal{I}$. \end{proof} \begin{corollary}\label{2th5} Let $f,g\in C(\mathcal{I})$ and $f\geq g$ on $\mathcal{I}$. Consider the partition \[ \Delta := \left\{(x_{1,j_1},\ldots,x_{m,j_m}): {\mathbf{j}}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k,0}\right\} \] with $a_k :=x_{k,0}<\ldots<x_{k,N_k}=:b_k$ for each $k\in \mathbb{N}_m$, $I_k :=[a_k,b_k]$, and continuous scaling functions $\alpha_{\mathbf{j}}:\mathcal{I}\to \mathbb{R}$.
Then, there exist sequences $\{f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}\}$ and $\{g^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}\}$ of multivariate Bernstein zipper $\alpha$-fractal functions converging to $f$ and $g$, respectively, with $f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}} \geq g^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}$ on $\mathcal{I}$, if the scaling functions satisfy \eqref{2eq015} as well as the following estimate: \begin{align}\label{2eq35} 0\leq \alpha_{\mathbf{j}}(X)\leq \min\left\{\frac{\phi^{\epsilon}(f-g,\mathbf{j})}{\Phi_{{\mathbf{n}}}(f-g)},1 \right\}, \quad \mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}, \end{align} where \[ \phi^{\epsilon}(f-g,\mathbf{j}):=\min_{X\in \mathcal{I}}(f-g)(u_{\mathbf{j}}^{\epsilon}(X)) \] and \[ \Phi_{{\mathbf{n}}}(f-g):=\max_{X\in \mathcal{I}}B_{{\mathbf{n}}}(f-g)(X). \] \end{corollary} \begin{proof} We obtain the result by replacing $f$ with $f-g$ and taking $g=0$ in Theorem \ref{2th4}. \end{proof} In the following theorem, we construct an increasing sequence of multivariate Bernstein zipper FIFs which provides a one-sided approximation of a convex continuous function on an $m$-dimensional hyperrectangle. In this theorem we adopt the following notation: for ${\mathbf{n}}=(n_1,\cdots, n_m)\in \mathbb{N}^m$, let ${\mathbf{n}}+1 :=(n_1+1,\cdots, n_m+1)$. \begin{theorem} Let $f\in C(\mathcal{I})$ be convex and let $\alpha_{\mathbf{j}}$ be non-negative scaling functions as in \eqref{2eq015}. Then, for all ${\mathbf{n}}\in \mathbb{N}^m$, \begin{align}\label{2eq45} f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X) \leq f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}+1}(X), \text{ for all } X\in \mathcal{I}. \end{align} Moreover, for all ${\mathbf{n}}\in \mathbb{N}^m$, \begin{align}\label{2eq46} f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X) \leq f(X), \text{ for all } X\in \mathcal{I}.
\end{align} \end{theorem} \begin{proof} By \eqref{2eq24}, we have self-referential equations for $f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}$ and $f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}+1}$, $\mathbf{j}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$, $ X\in \mathcal{I}$, of the form \begin{align}\label{2eq44} \begin{split} f_{\Delta;{\mathbf{n}}}^{\alpha,\epsilon}(u_{{\mathbf{j}}}^{\epsilon}(X)) &=f(u_{\mathbf{j}}^{\epsilon}(X))+\alpha_{{\mathbf{j}}}(X) \cdot(f_{\Delta;{\mathbf{n}}}^{\alpha,\epsilon}(X)-B_{{\mathbf{n}}}f(X)),\\ f_{\Delta;{\mathbf{n}}+1}^{\alpha,\epsilon}(u_{{\mathbf{j}}}^{\epsilon}(X)) &=f(u_{{\mathbf{j}}}^{\epsilon}(X))+\alpha_{{\mathbf{j}}}(X)(f_{\Delta;{\mathbf{n}}+1}^{\alpha,\epsilon}(X)-B_{{\mathbf{n}}+1}f(X)). \end{split} \end{align} From \eqref{2eq44}, we obtain \begin{align*} \begin{split} f_{\Delta;{\mathbf{n}}+1}^{\alpha,\epsilon}(u_{{\mathbf{j}}}^{\epsilon}(X)) - f_{\Delta;{\mathbf{n}}}^{\alpha,\epsilon}(u_{{\mathbf{j}}}^{\epsilon}(X)) &= \alpha_{{\mathbf{j}}}(X)(f_{\Delta;{\mathbf{n}}+1}^{\alpha,\epsilon}(X)-f_{\Delta;{\mathbf{n}}}^{\alpha,\epsilon}(X))\\ & \quad + \alpha_{{\mathbf{j}}}(X)(B_{{\mathbf{n}}}f-B_{{\mathbf{n}}+1}f)(X). \end{split} \end{align*} \cite[Theorem 5]{Foupouagnigni_Wouodjie} implies that $(B_{{\mathbf{n}}}f-B_{{\mathbf{n}}+1}f)(X)\geq 0$ and, since $\alpha_{{\mathbf{j}}}(X)\geq 0$, the above equation thus yields \begin{align*} f_{\Delta;{\mathbf{n}}+1}^{\alpha,\epsilon}(u_{{\mathbf{j}}}^{\epsilon}(X)) - f_{\Delta;{\mathbf{n}}}^{\alpha,\epsilon}(u_{{\mathbf{j}}}^{\epsilon}(X))\geq \alpha_{{\mathbf{j}}}(X)(f_{\Delta;{\mathbf{n}}+1}^{\alpha,\epsilon}(X)-f_{\Delta;{\mathbf{n}}}^{\alpha,\epsilon}(X)). \end{align*} As the construction of the fractal function is an iterative process, we infer from the above inequality that $f_{\Delta;{\mathbf{n}}+1}^{\alpha,\epsilon}(X)\geq f_{\Delta;{\mathbf{n}}}^{\alpha,\epsilon}(X)$ for all $ X\in \mathcal{I}$. As $f_{\Delta;{\mathbf{n}}}^{\alpha,\epsilon}$ converges uniformly to $f$, \eqref{2eq45} implies \eqref{2eq46}.
\end{proof} \section{Coordinate-Wise Monotonic Multivariate Bernstein zipper $\alpha$-fractal functions}\label{2sec6} Multivariate monotonic interpolation functions play an important role in empirical option pricing models in finance \cite{Hutchison_Lo_Poggio}, in the design of aggregation operators in multi-criteria decision-making and fuzzy logic \cite{Calvo_Kolesarova_Komornikova_Mesiar1}, in dose-response curves and surfaces in biochemistry and pharmacology, etc. Some work on monotonic surface approximation can be found in \cite{Chand_vv, Calvo_Kolesarova_Komornikova_Mesiar, Beatson_Ziegler}. In this section, we develop coordinate-wise monotonic ZFIFs on rectangular grids without using differentiability of the multivariate ZFIFs. \begin{theorem}\label{2th7} Let $f\in C(\mathcal{I})$ be non-zero and increasing with respect to the variable $x_l$. Let \begin{gather*} g_{{\mathbf{j}}}^{\epsilon}(X) :=f(u_{{\mathbf{j}}}^{\epsilon}(X)), \quad \gamma_{{\mathbf{j}}}^{\epsilon}:=\min_{X\in \mathcal{I}}\frac{\partial g_{{\mathbf{j}}}^{\epsilon}}{\partial x_l}(X),\\ \Gamma_{{\mathbf{j}}}^{\epsilon}:=\max_{X\in \mathcal{I}}\frac{\partial g_{{\mathbf{j}}}^{\epsilon}}{\partial x_l}(X), \quad \Gamma_{{\mathbf{n}}}:=\max_{X\in \mathcal{I}}\frac{\partial B_{{\mathbf{n}}}f}{\partial x_l}(X).
\end{gather*} Then, $f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X)$ is increasing with respect to the variable $x_l$ if the partial derivative $f_{x_l}$ exists and the scaling functions $\alpha_{{\mathbf{j}}}$ given in \eqref{2eq015} satisfy the following conditions: \begin{equation}\label{2eq43} \begin{split} & (i)\quad 0 \leq \alpha_{{\mathbf{j}}}(X)\leq \frac{\gamma_{{\mathbf{j}}}^{\epsilon}}{\Gamma_{{\mathbf{n}}}},\; \text{if $\epsilon_{j_l}^l=0$ and $j_l$ odd, or, $\epsilon_{j_l}^l=1$ and $j_l$ even};\\ &(ii) \quad \frac{\Gamma_{{\mathbf{j}}}^{\epsilon}}{\Gamma_{{\mathbf{n}}}}\leq \alpha_{{\mathbf{j}}}(X)\leq 0,\; \text{if $\epsilon_{j_l}^l=0$ and $j_l$ even, or, $\epsilon_{j_l}^l=1$ and $j_l$ odd}, \end{split} \end{equation} for $X \in \mathcal{I}$, ${\mathbf{j}}\in \prod\limits_{k=1}^{m}\mathbb{N}_{N_k}$. \end{theorem} \begin{proof} Let $X' :=(x_1,\ldots,x_{l}',\ldots,x_m)$ and $X'':=(x_1,\ldots,x_{l}'',\ldots,x_m)\in \mathcal{I}$, where $x_{l}'< x_{l}''$, so that $f(X'')\geq f(X')$. Then, \begin{align*} f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(u_{{\mathbf{j}}}^{\epsilon}(X'')) - f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(u_{{\mathbf{j}}}^{\epsilon}(X')) &= f(u_{{\mathbf{j}}}^{\epsilon}(X''))-f(u_{{\mathbf{j}}}^{\epsilon}(X'))+\alpha _{{\mathbf{j}}}(X)((f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X'')\\ & \quad - f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X'))-(B_{{\mathbf{n}}}f(X'')-B_{{\mathbf{n}}}f(X'))). \end{align*} As $B_{{\mathbf{n}}}f$ is increasing with respect to the variable $x_l$ \cite{Foupouagnigni_Wouodjie}, $\Gamma_{{\mathbf{n}}}>0$. Now there are two cases: \vskip 4pt\noindent \textbf{Case(i):} $\epsilon_{j_l}^l=0$ and $j_l$ odd, or, $\epsilon_{j_l}^l=1$ and $ j_l$ even, i.e., $u_{l,j_l}^{\epsilon^l}$ is increasing. In this case, $f(u_{{\mathbf{j}}}^{\epsilon}(X))$ is increasing with respect to the variable $x_l$, which implies that $\gamma_{{\mathbf{j}}}^{\epsilon}$ is non-negative.
Using the mean value theorem for several variables on $f(u_{j_1\cdots j_l\cdots j_m}^{\epsilon}(X''))-f(u_{j_1\cdots j_l\cdots j_m}^{\epsilon}(X'))$ and $B_{{\mathbf{n}}}f(X'')-B_{{\mathbf{n}}}f(X')$, we obtain \begin{align*} & f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(u_{j_1\cdots j_l\cdots j_m}^{\epsilon}(X'')) - f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(u_{j_1\cdots j_l\cdots j_m}^{\epsilon}(X'))\\ & \quad \geq \gamma_{j_1\cdots j_l\cdots j_m}^{\epsilon}(x_{l}''-x_{l}') - \alpha _{j_1\cdots j_l\cdots j_m}(X)\Gamma_{{\mathbf{n}}}(x_{l}''-x_{l}')+\alpha _{j_1\cdots j_l\cdots j_m}(X)\\ &\qquad \cdot(f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X'')-f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X'))\\ &\quad = (\gamma_{j_1\cdots j_l\cdots j_m}^{\epsilon}-\alpha _{j_1\cdots j_l\cdots j_m}(X) \Gamma_{{\mathbf{n}}})(x_{l}'' - x_{l}')\\ & \qquad + \alpha_{j_1\cdots j_m}(X)(f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X'')-f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X')). \end{align*} If $\alpha_{j_1\cdots j_m}(X)\geq0$, then we need $(\gamma_{j_1\cdots j_l\cdots j_m}^{\epsilon}-\alpha _{j_1\cdots j_l\cdots j_m}(X)\Gamma_{{\mathbf{n}}})(x_{l}''-x_{l}') \geq 0$, which yields the first condition in \eqref{2eq43}. \vskip 4pt\noindent \textbf{Case(ii):} $\epsilon_{j_l}^l=0$ and $j_l$ even, or, $\epsilon_{j_l}^l=1$ and $ j_l$ odd, i.e., $u_{l,j_l}^{\epsilon^l}$ is decreasing. In this case, $f(u_{j_1\cdots j_l\cdots j_m}^{\epsilon}(X))$ is decreasing with respect to the variable $x_l$, which ensures that $\gamma_{j_1\cdots j_l\cdots j_m}^{\epsilon}$ is non-positive.
If $\alpha_{j_1\cdots j_m}(X)\leq0$, then an application of the mean value theorem for several variables to $f(u_{j_1\cdots j_l\cdots j_m}^{\epsilon}(X''))-f(u_{j_1\cdots j_l\cdots j_m}^{\epsilon}(X'))$ and $B_{{\mathbf{n}}}f(X'')-B_{{\mathbf{n}}}f(X')$ yields \begin{align*} f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(u_{j_1\cdots j_l\cdots j_m}^{\epsilon}(X''))&- f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(u_{j_1\cdots j_l\cdots j_m}^{\epsilon}(X'))\\ &\leq \Gamma_{j_1\cdots j_l\cdots j_m}^{\epsilon}(x_{l}'' - x_{l}') - \alpha _{j_1\cdots j_l\cdots j_m}(X)\Gamma_{{\mathbf{n}}}(x_{l}''-x_{l}')\\ & \quad + \alpha _{j_1\cdots j_l\cdots j_m}(X)(f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X'')-f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X'))\\ & = (\Gamma_{j_1\cdots j_l\cdots j_m}^{\epsilon}-\alpha _{j_1\cdots j_l\cdots j_m}(X)\Gamma_{{\mathbf{n}}})(x_{l}''-x_{l}')\\ & \quad + \alpha _{j_1\cdots j_m}(X)(f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X'')-f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X')). \end{align*} Thus, $(\Gamma_{j_1\cdots j_l\cdots j_m}^{\epsilon}-\alpha _{j_1\cdots j_l\cdots j_m}(X) \Gamma_{{\mathbf{n}}})(x_{l}''-x_{l}')\leq0$ if the second inequality in \eqref{2eq43} is true. Since fractal interpolation is an iterative process, these inequalities ensure that $f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}$ is increasing with respect to the variable $x_l$. \end{proof} \begin{remark} Using similar arguments, we can construct coordinate-wise monotonically decreasing multivariate Bernstein zipper $\alpha$-fractal functions $f^{\alpha,\epsilon}_{\Delta;{\mathbf{n}}}(X)$ for coordinate-wise monotonically decreasing functions $f\in C(\mathcal{I})$. \end{remark} \section{Box Dimension of Multivariate ZFIF} In this section, we estimate bounds for the box dimension of multivariate ZFIFs and show that the multivariate Bernstein polynomial $B_{{\mathbf{n}}}f$ is H\"olderian with exponent $\beta$ provided that $f$ is H\"olderian with exponent $\beta$.
This will be used in the box dimension estimates for multivariate zipper Bernstein fractal functions. \begin{definition} Let $A\in \mathbb{R}_0^+$ and $0<\beta \leq 1$. Then, $\Lip_A \beta$ is the set of all functions $f: \mathcal{K}\subset \mathbb{R}^m\to \mathbb{R} $ satisfying \[ |f(X_2)-f(X_1)|\leq A\|X_2-X_1\|^{\beta}, \quad \forall\, X_1,X_2 \in \mathcal{K}. \] Such functions are also called uniformly H\"olderian with exponent $\beta$. \end{definition} In the next theorem, we provide estimates for the fractal dimension of the graph of a multizipper FIF. For this purpose, we use uniform partitions of $I_k=[0,1]$, $k\in \mathbb{N}_m$. Based on the structure of the IFSs \eqref{2eq000} and \eqref{2eq23}, we choose $u_{k,j_k}^{\epsilon^k}:I_k \to I_{k,j_k}$ as \begin{align} u_{k,j_k}^{\epsilon^k}(x_k) := \begin{cases} \frac{1-2\epsilon^k_{j_k}}{N_k}x_k+\frac{j_k-1+\epsilon^k_{j_k}}{N_k}, & \text{if}\; j_k \;\text{is odd};\\ \frac{-1+2\epsilon^k_{j_k}}{N_k}x_k+\frac{j_k-\epsilon^k_{j_k}}{N_k}, & \text{if}\; j_k \;\text{is even}, \end{cases} \quad j_k \in \mathbb{N}_{N_k},\; k\in \mathbb{N}_m. \end{align} \begin{definition}{\cite{Falconer}} Let $A$ be a non-empty bounded subset of $\mathbb{R}^n$ and let $N_{\delta}(A)$ denote the smallest number of $n$-dimensional cubes of side $\delta$ that can cover $A$. The lower and upper box-counting dimensions of $A$ are defined, respectively, as \begin{align*} \begin{split} {\lowdim}(A)=\lowlim_{\delta \to 0}\frac{\log(N_{\delta}(A))}{-\log(\delta)},\\ {\updim}(A)=\uplim_{\delta \to 0}\frac{\log(N_{\delta}(A))}{-\log(\delta)}. \end{split} \end{align*} If these are equal, we refer to the common value as the box-counting dimension or box dimension of $A$: \begin{align*} {\dim}_B(A)=\lim_{\delta \to 0}\frac{\log(N_{\delta}(A))}{-\log(\delta)}. \end{align*} \end{definition} Suppose that the IFS \eqref{2eq000} generates a multizipper FIF $f_{{\mathbf{n}}}^{(\alpha,\epsilon)}$.
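To make the orientation bookkeeping of these maps concrete, the following Python sketch (purely illustrative; the function names are ours and not part of the construction) implements the uniform affine zipper maps $u_{k,j_k}^{\epsilon^k}$ displayed above for one coordinate and checks that each maps $[0,1]$ onto the $j_k$-th subinterval $[(j_k-1)/N_k,\, j_k/N_k]$, with the signature bit only flipping the orientation.

```python
from fractions import Fraction

def zipper_map(N, j, eps):
    """Uniform affine zipper map u_{j}^{eps} : [0,1] -> [(j-1)/N, j/N].

    N   : number of subintervals along this coordinate
    j   : subinterval index, 1 <= j <= N
    eps : signature bit in {0, 1}
    Returns (slope, intercept) of the affine map, as exact fractions.
    """
    if j % 2 == 1:  # j odd
        a = Fraction(1 - 2 * eps, N)
        b = Fraction(j - 1 + eps, N)
    else:           # j even
        a = Fraction(-1 + 2 * eps, N)
        b = Fraction(j - eps, N)
    return a, b

def apply(m, x):
    a, b = m
    return a * x + b

# The image of [0,1] is always the j-th subinterval [(j-1)/N, j/N];
# the signature bit only flips the orientation of the map.
N = 4
for j in range(1, N + 1):
    for eps in (0, 1):
        m = zipper_map(N, j, eps)
        ends = sorted([apply(m, 0), apply(m, 1)])
        assert ends == [Fraction(j - 1, N), Fraction(j, N)]

# Orientation matches the case analysis used for monotonicity:
# increasing iff (eps = 0 and j odd) or (eps = 1 and j even).
assert zipper_map(4, 1, 0)[0] > 0   # j odd,  eps 0 -> increasing
assert zipper_map(4, 2, 1)[0] > 0   # j even, eps 1 -> increasing
assert zipper_map(4, 1, 1)[0] < 0   # j odd,  eps 1 -> decreasing
assert zipper_map(4, 2, 0)[0] < 0   # j even, eps 0 -> decreasing
```

The exact-arithmetic check mirrors the case analysis in Section \ref{2sec6}: the slope sign, not the image interval, is all that the signature controls.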
Under this assumption, we have the following result: \begin{theorem}\label{Thm-BD} Let $f,b\in C(\mathcal{I})$ with H\"older exponents $\xi_1, \xi_2 \in (0,1]$, and let $G$ be the graph of the fractal function $f_{{\mathbf{n}}}^{(\alpha,\epsilon)}$ associated with the IFS \eqref{2eq000}. Suppose that \begin{enumerate} \item the interpolation points do not lie on an $(m-1)$-dimensional hyperplane; \item $\xi=\min\{\xi_1,\xi_2\}$; \item $\gamma= \sum\limits_{j_1=1}^{N_1} \sum\limits_{j_2=1}^{N_2} \cdots \sum\limits_{j_m=1}^{N_m} |\alpha_{j_1, \ldots, j_m}|$. \end{enumerate} Then, we have the following bounds for the box dimension of $G$ based on the magnitude of $\gamma$: \begin{itemize} \item[(i)] If $\gamma \leq 1$, then $m\leq \dim_B(G) \leq m+1-\xi$; \item[(ii)] If $\gamma > 1$ and $(N_1N_2\cdots N_m)^{(\xi-m)} \gamma \leq 1$, then \[ m\leq \dim_B(G) \leq m+1-\xi + \frac{\log(\gamma)}{\log(N_1N_2\cdots N_m)}; \] \item[(iii)] If $\gamma > 1$ and $(N_1N_2\cdots N_m)^{(\xi-m)} \gamma > 1$, then \[ m\leq \dim_B(G) \leq 1+ \frac{\log(\gamma)}{\log(N_1N_2\cdots N_m)}. \] \end{itemize} \end{theorem} \begin{proof} Our aim is to calculate the box dimension of the graph of the fractal function $f_{{\mathbf{n}}}^{(\alpha,\epsilon)}$. For this, we consider a cover $\Lambda(r)$ of $G$ whose elements are $(m+1)$-dimensional cubes with sides of length $\frac{1}{(N_1N_2\cdots N_m)^r}$ and of the form \begin{align}\label{eqdim1} \begin{split} \left[\frac{p_1-1}{(N_1N_2\cdots N_m)^r}, \frac{p_1}{(N_1N_2\cdots N_m)^r} \right] \times \left[\frac{p_2-1}{(N_1N_2\cdots N_m)^r}, \frac{p_2}{(N_1N_2\cdots N_m)^r} \right]\times\\ \cdots \times \left[\frac{p_m-1}{(N_1N_2\cdots N_m)^r}, \frac{p_m}{(N_1N_2\cdots N_m)^r} \right] \times \left[c, c+ \frac{1}{(N_1N_2\cdots N_m)^r} \right], \end{split} \end{align} {for} $p_i = 1, 2, \ldots, (N_1N_2\cdots N_m)^r$, $r\in\mathbb{N}_0$, $i\in \mathbb{N}_m$, and $c\in \mathbb{R}$. Suppose $\mathcal{N}(r)$ is the number of such cubes necessary to cover the graph $G$.
Let $\mathcal{N}_0(r)$ be the smallest number of arbitrary $(m+1)$-dimensional cubes of side $\frac{1}{(N_1N_2\cdots N_m)^r}$ required to cover $G$. Hence, $\mathcal{N}_0(r)\leq \mathcal{N}(r)$. Each arbitrary $(m+1)$-dimensional cube can be covered by at most $2^m$ $(m+1)$-dimensional cubes of the form \eqref{eqdim1}. Thus, $\mathcal{N}(r)\leq 2^m\mathcal{N}_0(r)$ and, therefore, \[ \mathcal{N}_0(r)\leq \mathcal{N}(r)\leq 2^m\mathcal{N}_0(r). \] Hence, we can use covers of the form \eqref{eqdim1} to compute the box dimension of the graph $G$ of $f_{{\mathbf{n}}}^{(\alpha,\epsilon)}$. Denote by $\Lambda(r,p_1,p_2,\cdots,p_m)$ the collection of $(m+1)$-dimensional cubes of the form \eqref{eqdim1} lying above the base cube \begin{align}\label{eqdim2} \begin{split} \left[\frac{p_1-1}{(N_1N_2\cdots N_m)^r}, \frac{p_1}{(N_1N_2\cdots N_m)^r} \right] \times \left[\frac{p_2-1}{(N_1N_2\cdots N_m)^r}, \frac{p_2}{(N_1N_2\cdots N_m)^r} \right] \times \cdots \\ \cdots \times \left[\frac{p_m-1}{(N_1N_2\cdots N_m)^r}, \frac{p_m}{(N_1N_2\cdots N_m)^r} \right], \text{ for } p_i=1,2,\cdots,(N_1N_2\cdots N_m)^r, \end{split} \end{align} and denote by $\mathcal{N}(r,p_1,p_2,\cdots,p_m)$ the number of these $(m+1)$-dimensional cubes.
One observes that $$\mathcal{N}(r)=\sum_{p_1=1}^{(N_1N_2\cdots N_m)^r} \sum_{p_2=1}^{(N_1N_2\cdots N_m)^r}\cdots \sum_{p_m=1}^{(N_1N_2\cdots N_m)^r} \mathcal{N}(r,p_1,p_2,\cdots,p_m).$$ For $\mathbf{j}\in \prod\limits_{k=1}^m \mathbb{N}_{N_k}$, the image of $\Lambda(r,p_1,p_2,\cdots,p_m)$ under the map $v_{\mathbf{j}}$ is contained in \begin{align}\label{eqdim3} \begin{split} & \left[\frac{N_2 N_3\cdots N_m (l_1(p_1,j_1)-1)}{(N_1N_2\cdots N_m)^{r+1}}, \frac{N_2N_3\cdots N_m l_1(p_1,j_1)}{(N_1N_2\cdots N_m)^{r+1}} \right] \\ & \quad \times\left[\frac{N_1N_3\cdots N_m (l_2(p_2,j_2)-1)}{(N_1N_2\cdots N_m)^{r+1}}, \frac{N_1N_3\cdots N_m l_2(p_2,j_2)}{(N_1N_2\cdots N_m)^{r+1}} \right]\times\cdots \\ & \quad\times \left[\frac{N_1N_2\cdots N_{m-1}(l_m(p_m,j_m)-1)}{(N_1N_2\cdots N_m)^{r+1}}, \frac{N_1N_2\cdots N_{m-1}l_m(p_m,j_m)}{(N_1N_2\cdots N_m)^{r+1}} \right] \times \mathbb{R}, \end{split} \end{align} where $l_k(p_k,j_k)=p_k+(j_k-1)(N_1N_2\cdots N_m)^r$. Therefore, we obtain \begin{align}\label{eqdim4} \begin{split} \mathcal{N}(r+1)&=\sum_{j_1=1}^{N_1} \sum_{j_2=1}^{N_2}\cdots \sum_{j_m=1}^{N_m} \sum_{p_1,p_2,\cdots,p_m=1}^{(N_1N_2\cdots N_m)^r}\\ &~~~~~~~~~~~~~ \mathcal{N}(r+1,l_1(p_1,j_1),l_2(p_2,j_2),\cdots,l_m(p_m,j_m)). \end{split} \end{align} As $f$ and $b$ are uniformly H\"olderian on $\mathcal{I}$ with exponents $\xi_1,\xi_2 \in (0,1]$, we obtain the following estimates for $X=(x_1,x_2,\cdots,x_m), X'=(x_1',x_2',\cdots,x_m') \in \prod\limits_{k=1}^{m}\left[ \frac{p_k-1}{(N_1N_2\cdots N_m)^{r}},\frac{p_k}{(N_1N_2\cdots N_m)^{r}} \right]$: \begin{align} \begin{split} |f(u_{\mathbf{j}}(X))-f(u_{\mathbf{j}}(X'))| &\leq \frac{A_1}{(N_1N_2\cdots N_m)^{\xi_1(r+1)}},\\ |b(X)-b(X')| &\leq \frac{A_2}{(N_1N_2\cdots N_m)^{\xi_2 r}}.
\end{split} \end{align} Thus, the maximum height of $v_{\mathbf{j}}(\Lambda(r,p_1,p_2,\cdots,p_m))$ is bounded above by \[ \frac{|\alpha_{\mathbf{j}}|\mathcal{N}(r,p_1,p_2,\cdots,p_m)}{(N_1N_2\cdots N_m)^{ r}}+\frac{A_1}{(N_1N_2\cdots N_m)^{\xi_1(r+1)}}+\frac{A_2|\alpha_{\mathbf{j}}|}{(N_1N_2\cdots N_m)^{\xi_2 r}}. \] Now, \begin{align*} \begin{split} \mathcal{N}(r+1,l_1(p_1,j_1),l_2(p_2,j_2),\cdots,l_m(p_m,j_m))\leq \left(\frac{|\alpha_{\mathbf{j}}|\mathcal{N}(r,p_1,p_2,\cdots,p_m)}{(N_1N_2\cdots N_m)^{ r}}\right.\\ \left.+ \frac{A_1}{(N_1N_2\cdots N_m)^{\xi_1(r+1)}} + \frac{A_2|\alpha_{\mathbf{j}}|}{(N_1N_2\cdots N_m)^{\xi_2 r}}\right) (N_1N_2\cdots N_m)^{r+1}+2\\ = |\alpha_{\mathbf{j}}|\mathcal{N}(r,p_1,p_2,\cdots,p_m) (N_1N_2\cdots N_m)+ A_1 (N_1N_2\cdots N_m)^{(1-\xi_1)(r+1)}\\ + A_2|\alpha_{\mathbf{j}}| (N_1N_2\cdots N_m)^{(1-\xi_2)r+1}+ 2. \end{split} \end{align*} This produces an estimate of the form \begin{align} \begin{split} \sum_{j_1=1}^{N_1} \sum_{j_2=1}^{N_2}&\cdots \sum_{j_m=1}^{N_m}\mathcal{N}(r+1,l_1(p_1,j_1),l_2(p_2,j_2),\cdots,l_m(p_m,j_m)) \leq\\ &\mathcal{N}(r,p_1,p_2,\cdots,p_m) (N_1N_2\cdots N_m) \gamma + A_1 (N_1N_2\cdots N_m)^{(1-\xi_1)(r+1)+1}\\ & \quad + A_2 (N_1N_2\cdots N_m)^{(1-\xi_2)r+1} \gamma + 2(N_1N_2\cdots N_m).
\end{split} \end{align} Substituting the above estimate into \eqref{eqdim4}, we obtain \begin{align} \begin{split} \mathcal{N}(r+1)&= \sum_{p_1,p_2,\cdots,p_m=1}^{(N_1N_2\cdots N_m)^r} \sum_{j_1=1}^{N_1} \sum_{j_2=1}^{N_2}\cdots \sum_{j_m=1}^{N_m}\mathcal{N}(r+1,l_1(p_1,j_1),l_2(p_2,j_2),\cdots,l_m(p_m,j_m))\\ &\leq \sum_{p_1,p_2,\cdots,p_m=1}^{(N_1N_2\cdots N_m)^r} \Big(\mathcal{N}(r,p_1,p_2,\cdots,p_m) (N_1N_2\cdots N_m) \gamma + A_1 (N_1N_2\cdots N_m)^{(1-\xi_1)(r+1)+1}\\ & \quad +A_2 (N_1N_2\cdots N_m)^{(1-\xi_2)r+1} \gamma + 2(N_1N_2\cdots N_m)\Big)\\ &= \mathcal{N}(r) (N_1N_2\cdots N_m) \gamma + A_1 (N_1N_2\cdots N_m)^{(1-\xi_1)(r+1)+mr+1} \\ & \quad + A_2 (N_1N_2\cdots N_m)^{(1-\xi_2)r+1+mr} \gamma + 2(N_1N_2\cdots N_m)^{m r+1}\\ &\leq \mathcal{N}(r) (N_1N_2\cdots N_m) \gamma + A_1 (N_1N_2\cdots N_m)^{(r+1)(m+1-\xi)}\\ & \quad + A_2 (N_1N_2\cdots N_m)^{(r+1)(m+1-\xi)} \gamma + 2(N_1N_2\cdots N_m)^{(r+1)(m+1-\xi)}\\ &=\mathcal{N}(r) (N_1N_2\cdots N_m) \gamma + (N_1N_2\cdots N_m)^{(r+1)(m+1-\xi)} \mathcal{C}, \end{split} \end{align} where $\mathcal{C}:=A_1+A_2 \gamma +2$. Applying the above inequality repeatedly, we obtain the following geometric-series-type expressions: \begin{align*} \begin{split} \mathcal{N}(r)&\leq \mathcal{N}(r-1) (N_1N_2\cdots N_m) \gamma + (N_1N_2\cdots N_m)^{r(m+1-\xi)} \mathcal{C}\\ &\leq [\mathcal{N}(r-2) (N_1N_2\cdots N_m) \gamma + (N_1N_2\cdots N_m)^{(r-1)(m+1-\xi)} \mathcal{C}](N_1N_2\cdots N_m) \gamma \\ & \quad + (N_1N_2\cdots N_m)^{r(m+1-\xi)} \mathcal{C}\\ &\leq \mathcal{N}(r-2) (N_1N_2\cdots N_m)^2 \gamma^2 + (1+(N_1N_2\cdots N_m)^{(\xi-m)} \gamma)\\ & \quad\cdot(N_1N_2\cdots N_m)^{r(m+1-\xi)} \mathcal{C}.
\end{split} \end{align*} Continuing this process, we obtain \begin{align}\label{eqdim5} \begin{split} \mathcal{N}(r)\leq \mathcal{N}(0) (N_1N_2\cdots N_m)^{r} \gamma^{r} + \{1+(N_1N_2\cdots N_m)^{(\xi-m)} \gamma\\ + (N_1N_2\cdots N_m)^{2(\xi-m)} \gamma^2+ \cdots + (N_1N_2\cdots N_m)^{(r-1)(\xi-m)} \gamma^{(r-1)} \}\\ \cdot (N_1N_2\cdots N_m)^{r(m+1-\xi)} \mathcal{C} \end{split} \end{align} Thus, we have the following three cases: \vskip 4pt\noindent \textbf{Case(i):} $\gamma \leq 1$. \vskip 4pt\noindent As $N_k \geq 2$ and $\xi \in (0,1]$, $(N_1N_2\cdots N_m)^{k(\xi-m)} \leq 1$, for $k\geq 1$. By \eqref{eqdim5}, we have \begin{align} \begin{split} \mathcal{N}(r)&\leq \mathcal{N}(0) (N_1N_2\cdots N_m)^{r} \gamma^{r} + r(N_1N_2\cdots N_m)^{r(m+1-\xi)} \mathcal{C}\\ &\leq \mathcal{N}(0) (N_1N_2\cdots N_m)^{r(m+1-\xi)} r + r(N_1N_2\cdots N_m)^{r(m+1-\xi)} \mathcal{C}\\ &\leq \mathcal{C}_1 r (N_1N_2\cdots N_m)^{r(m+1-\xi)}, \quad \text{where}\quad \mathcal{C}_1=\mathcal{N}(0)+\mathcal{C}. \end{split} \end{align} Hence, \begin{align}\label{eqdim6} \begin{split} \dim_B(G)&= \lim_{r \to \infty} \frac{\log(\mathcal{N}(r))}{\log((N_1N_2\cdots N_m)^r)}\\ &\leq\lim_{r \to \infty} \frac{\log(\mathcal{C}_1 r (N_1N_2\cdots N_m)^{r(m+1-\xi)})}{\log((N_1N_2\cdots N_m)^r)} =m+1-\xi. \end{split} \end{align} By the continuity of the fractal function we have that $\dim_B(G) \geq m$, and using \eqref{eqdim6}, we obtain \[ m\leq \dim_B(G) \leq m+1-\xi. \] \vskip 4pt\noindent \textbf{Case(ii):} $\gamma >1$ and $(N_1N_2\cdots N_m)^{(\xi-m)} \gamma \leq 1$. \vskip 4pt\noindent By \eqref{eqdim5}, we have \begin{align} \begin{split} \mathcal{N}(r)&\leq \mathcal{N}(0) (N_1N_2\cdots N_m)^{r} \gamma^{r} + r(N_1N_2\cdots N_m)^{r(m+1-\xi)} \mathcal{C}\\ &\leq \mathcal{C}_2 \gamma^{r} r (N_1N_2\cdots N_m)^{r(m+1-\xi)}, \quad \text{where} \quad \mathcal{C}_2 = \mathcal{C} + \mathcal{N}(0). 
\end{split} \end{align} Hence, \begin{align} \begin{split} \dim_B(G)& \leq \lim_{r \to \infty} \frac{\log(\mathcal{C}_2 \gamma^{r} r (N_1N_2\cdots N_m)^{r(m+1-\xi)})}{\log(N_1N_2\cdots N_m)^r}\\ &=m+1-\xi+ \frac{\log(\gamma)}{\log(N_1N_2\cdots N_m)}, \end{split} \end{align} and, therefore, \[ m\leq \dim_B(G) \leq m+1-\xi + \frac{\log(\gamma)}{\log(N_1N_2\cdots N_m)}. \] \vskip 4pt\noindent \textbf{Case(iii)}: $\gamma >1$ and $(N_1N_2\cdots N_m)^{(\xi-m)} \gamma > 1$. \vskip 4pt\noindent Again by \eqref{eqdim5}, \begin{align}\label{eqbox10} \begin{split} \mathcal{N}(r)&\leq \mathcal{N}(0) (N_1N_2\cdots N_m)^{r} \gamma^{r} + \left[ \frac{(N_1N_2\cdots N_m)^{r(\xi-m)} \gamma^r-1}{(N_1N_2\cdots N_m)^{(\xi-m)} \gamma-1} \right]\\ & \quad\cdot(N_1N_2\cdots N_m)^{r(m+1-\xi)} \mathcal{C}\\ &\leq \mathcal{N}(0) (N_1N_2\cdots N_m)^{r} \gamma^{r} + \left[ \frac{(N_1N_2\cdots N_m)^{r(\xi-m)} \gamma^r}{(N_1N_2\cdots N_m)^{(\xi-m)} \gamma-1} \right]\\ & \quad \cdot(N_1N_2\cdots N_m)^{r(m+1-\xi)} \mathcal{C}\\ &\leq \mathcal{N}(0) (N_1N_2\cdots N_m)^{r} \gamma^{r} +\frac{(N_1N_2\cdots N_m)^r \gamma^r}{(N_1N_2\cdots N_m)^{(\xi-m)} \gamma-1}\mathcal{C}. \end{split} \end{align} Using \eqref{eqbox10} and arguments similar to those above, we obtain \begin{align*} \begin{split} \dim_B(G) \leq 1+ \frac{\log(\gamma)}{\log(N_1N_2\cdots N_m)}. \end{split} \end{align*} \end{proof} \begin{remark} From cases (i)--(iii) of Theorem \ref{Thm-BD}, we see that the bounds on the box dimension of ZFIFs are independent of the signature matrix $\epsilon$. \end{remark} It is known that for a univariate function $f\in \Lip_A \beta$, the corresponding univariate Bernstein polynomial satisfies $B_nf \in \Lip_A \beta$ \cite{Brown_Elliott_Paget}. We need a similar result to compute the box dimension of the graph of a multivariate Bernstein zipper fractal function.
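Before turning to the multivariate statement, the quoted univariate preservation property is easy to test numerically. The Python sketch below is illustrative only: the test function $f(x)=|x-1/2|$ (Lipschitz on $[0,1]$ with constant $A=1$, i.e.\ $\beta=1$), the degree $n$, the grid size and the tolerance are all our choices, not part of the cited result.

```python
import math
from itertools import combinations

def bernstein(f, n, x):
    """Univariate Bernstein polynomial B_n f(x) on [0,1]."""
    return sum(math.comb(n, k) * x**k * (1 - x)**(n - k) * f(k / n)
               for k in range(n + 1))

# f is Lipschitz on [0,1] with constant A = 1 (beta = 1).
f = lambda x: abs(x - 0.5)
A, n = 1.0, 20
grid = [i / 200 for i in range(201)]
vals = {x: bernstein(f, n, x) for x in grid}

# Check |B_n f(x) - B_n f(y)| <= A |x - y| on all grid pairs
# (small tolerance for floating-point rounding).
ok = all(abs(vals[x] - vals[y]) <= A * abs(x - y) + 1e-12
         for x, y in combinations(grid, 2))
assert ok
```

The check passes, as the classical result guarantees: the Bernstein operator does not increase the Lipschitz constant, which is exactly the property the proposition below extends to the multivariate operator $B_{{\mathbf{n}}}$.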
\begin{proposition}\label{lemmadim} If a multivariate function $f$ belongs to $\Lip_A \beta$ on $\prod\limits_{p=1}^m I_p$, where {$I_p=[0,1]$}, then the corresponding multivariate Bernstein polynomial $B_{{\mathbf{n}}}f\in \Lip_A \beta$. \end{proposition} \begin{proof} We know that \begin{align} \begin{split} B_{{\mathbf{n}}}f(X) = \sum_{j_1=0}^{n_1} \sum_{j_2=0}^{n_2} \cdots \sum_{j_m=0}^{n_m} \prod\limits_{p=1}^{m} b_{j_p,n_p}(x_p) f\left(\frac{j_1}{n_1},\frac{j_2}{n_2},\cdots,\frac{j_m}{n_m}\right), \end{split} \end{align} where $b_{j_p,n_p}(x_p):=\binom{n_p}{j_p}x_p^{j_p}(1-x_p)^{n_p-j_p}$, $p\in \mathbb{N}_m$, and $X:=(x_1,x_2,\cdots,x_m)$. Let \begin{align} A_{k_p,l_p}^{n_p}(x_p,y_p):=\frac{n_p!}{k_p! l_p!(n_p-k_p- l_p)!}\, x_p^{k_p}(y_p-x_p)^{l_p}(1-y_p)^{n_p-k_p-l_p} \end{align} for $p\in \mathbb{N}_m$ and $x_p,y_p \in I$. The following identity is valid for $n\in \mathbb{N}$: \begin{align}\label{eqlip03} \sum_{k=0}^{n} \binom{n}{k} & x^{k}(1-x)^{n-k} f\left(\frac{k}{n}\right)\nonumber\\ & = \sum_{k=0}^{n} \sum_{l=0}^{n-k} \frac{n!}{k! l!(n-k- l)!}\; x^{k}(y-x)^{l}(1-y)^{n-k-l}f\left(\frac{k}{n}\right). \end{align} Let $X:=(x_1,x_2,\cdots,x_m)$ and $Y:=(y_1,y_2,\cdots,y_m)$. Then, there are $2^m$ possible orderings between the corresponding coordinates of $X$ and $Y$. Take as one possible case $x_1\leq y_1$, $x_2\geq y_2$, $x_3\geq y_3$, \ldots, $x_m\geq y_m$. Equation
\eqref{eqlip03} implies \begin{align*} \begin{split} B_{{\mathbf{n}}}f(X) &= \sum_{j_2=0}^{n_2} \cdots \sum_{j_m=0}^{n_m} \prod\limits_{p=1,p \ne 1}^{m} b_{j_p,n_p}(x_p) \sum_{j_1=0}^{n_1} b_{j_1,n_1}(x_1)f\left(\frac{j_1}{n_1},\frac{j_2}{n_2},\cdots,\frac{j_m}{n_m}\right)\\ &=\sum_{j_2=0}^{n_2} \cdots \sum_{j_m=0}^{n_m} \prod\limits_{p=1,p \ne 1}^{m} b_{j_p,n_p}(x_p) \sum_{k_1=0}^{n_1} \sum_{l_1=0}^{n_1-k_1} A_{k_1,l_1}^{n_1}(y_1,x_1)\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\cdot f\left(\frac{k_1}{n_1},\frac{j_2}{n_2},\cdots,\frac{j_m}{n_m}\right)\\ &=\sum_{k_1=0}^{n_1} \sum_{l_1=0}^{n_1-k_1} \sum_{j_3=0}^{n_3} \cdots \sum_{j_m=0}^{n_m} \prod\limits_{p=1,p \ne 1, 2}^{m} b_{j_p,n_p}(x_p) A_{k_1,l_1}^{n_1}(y_1,x_1) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\cdot \sum_{j_2=0}^{n_2} b_{j_2,n_2}(x_2)f\left(\frac{k_1}{n_1},\frac{j_2}{n_2},\cdots,\frac{j_m}{n_m}\right)\\ & =\sum_{k_1=0}^{n_1} \sum_{l_1=0}^{n_1-k_1} \sum_{j_3=0}^{n_3} \cdots \sum_{j_m=0}^{n_m} \prod\limits_{p=1,p \ne 1, 2}^{m} b_{j_p,n_p}(x_p) A_{k_1,l_1}^{n_1}(y_1,x_1)\\ &~~~~~~~~~~~~~~~~~~~~~~\cdot \sum_{k_2=0}^{n_2} \sum_{l_2=0}^{n_2-k_2} A_{k_2,l_2}^{n_2}(x_2,y_2) f\left(\frac{k_1}{n_1},\frac{k_2+l_2}{n_2},\cdots,\frac{j_m}{n_m}\right), \end{split} \end{align*} \begin{align*} \begin{split} B_{{\mathbf{n}}}f(X) &=\sum_{k_1=0}^{n_1} \sum_{l_1=0}^{n_1-k_1} \sum_{k_2=0}^{n_2} \sum_{l_2=0}^{n_2-k_2} \sum_{j_3=0}^{n_3} \cdots \sum_{j_m=0}^{n_m} \prod\limits_{p=1,p \ne 1, 2}^{m} b_{j_p,n_p}(x_p)\\ &~~~~~~~~~~~~~~~~~~\cdot A_{k_1,l_1}^{n_1}(y_1,x_1) A_{k_2,l_2}^{n_2}(x_2,y_2)f\left(\frac{k_1}{n_1},\frac{k_2+l_2}{n_2},\cdots,\frac{j_m}{n_m}\right).
\end{split} \end{align*} Continuing this process, we obtain \begin{align}\label{eqlip3} \begin{split} B_{{\mathbf{n}}}f(X)=\sum_{k_1=0}^{n_1} \sum_{l_1=0}^{n_1-k_1} \sum_{k_2=0}^{n_2} \sum_{l_2=0}^{n_2-k_2} \cdots \sum_{k_m=0}^{n_m} \sum_{l_m=0}^{n_m-k_m} A_{k_1,l_1}^{n_1}(y_1,x_1)\\ \prod\limits_{p=1,p \ne 1}^{m}A_{k_p,l_p}^{n_p}(x_p,y_p)f\left(\frac{k_1}{n_1},\frac{k_2+l_2}{n_2},\cdots,\frac{k_m+l_m}{n_m}\right). \end{split} \end{align} Similarly, we get \begin{align}\label{eqlip4} \begin{split} B_{{\mathbf{n}}}f(Y)=\sum_{k_1=0}^{n_1} \sum_{l_1=0}^{n_1-k_1} \sum_{k_2=0}^{n_2} \sum_{l_2=0}^{n_2-k_2} \cdots \sum_{k_m=0}^{n_m} \sum_{l_m=0}^{n_m-k_m} A_{k_1,l_1}^{n_1}(y_1,x_1)\\ \prod\limits_{p=1,p \ne 1}^{m}A_{k_p,l_p}^{n_p}(x_p,y_p)f\left(\frac{k_1+l_1}{n_1},\frac{k_2}{n_2},\cdots,\frac{k_m}{n_m}\right). \end{split} \end{align} By \eqref{eqlip3} and \eqref{eqlip4}, we can write \begin{align}\label{eqlip5} \begin{split} |B_{{\mathbf{n}}}f(X)&- B_{{\mathbf{n}}}f(Y)|\\ &\leq\sum_{k_1=0}^{n_1} \sum_{l_1=0}^{n_1-k_1} \sum_{k_2=0}^{n_2} \sum_{l_2=0}^{n_2-k_2} \cdots \sum_{k_m=0}^{n_m} \sum_{l_m=0}^{n_m-k_m} A_{k_1,l_1}^{n_1}(y_1,x_1)\\ &~~~~~~~~~~~~~~~\cdot\prod\limits_{p=1,p \ne 1}^{m}A_{k_p,l_p}^{n_p}(x_p,y_p) \left| f\left(\frac{k_1}{n_1},\frac{k_2+l_2}{n_2},\cdots,\frac{k_m+l_m}{n_m}\right)\right.\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\left.-f\left(\frac{k_1+l_1}{n_1},\frac{k_2}{n_2},\cdots,\frac{k_m}{n_m}\right)\right|\\ &\leq \sum_{k_1=0}^{n_1} \sum_{l_1=0}^{n_1-k_1} \sum_{k_2=0}^{n_2} \sum_{l_2=0}^{n_2-k_2} \cdots \sum_{k_m=0}^{n_m} \sum_{l_m=0}^{n_m-k_m} A_{k_1,l_1}^{n_1}(y_1,x_1)\\ &~~~~~~~~~~~~~~~\cdot \prod\limits_{p=1,p \ne 1}^{m}A_{k_p,l_p}^{n_p}(x_p,y_p) A \left(\max \left\{\frac{l_p}{n_p}:p\in \mathbb{N}_m\right\}\right)^{\beta}. \end{split} \end{align} Suppose $\max \left\{\frac{l_p}{n_p}:p\in \mathbb{N}_m\right\}=\frac{l_a}{n_a}$, for some $a\in \mathbb{N}_m$.
Then, \eqref{eqlip5} becomes \begin{align}\label{eqlip6} \begin{split} |B_{{\mathbf{n}}}f(X)&- B_{{\mathbf{n}}}f(Y)| \leq A\sum_{k_1=0}^{n_1} \sum_{l_1=0}^{n_1-k_1} \sum_{k_2=0}^{n_2} \sum_{l_2=0}^{n_2-k_2} \\ &~~~~~~~~~~\cdots \sum_{k_m=0}^{n_m} \sum_{l_m=0}^{n_m-k_m} A_{k_1,l_1}^{n_1}(y_1,x_1)\prod\limits_{p=1,p \ne 1}^{m}A_{k_p,l_p}^{n_p}(x_p,y_p) \left(\frac{l_a}{n_a}\right)^{\beta}\\ &=A \sum_{k_1=0}^{n_1} \sum_{l_1=0}^{n_1-k_1} A_{k_1,l_1}^{n_1}(y_1,x_1) \sum_{k_2=0}^{n_2} \sum_{l_2=0}^{n_2-k_2} A_{k_2,l_2}^{n_2}(x_2,y_2)\\ &\cdots \sum_{k_a=0}^{n_a} \sum_{l_a=0}^{n_a-k_a} A_{k_a,l_a}^{n_a}(x_a,y_a)\left(\frac{l_a}{n_a}\right)^{\beta} \cdots \sum_{k_m=0}^{n_m} \sum_{l_m=0}^{n_m-k_m} A_{k_m,l_m}^{n_m}(x_m,y_m)\\ &=A \sum_{l_1=0}^{n_1} b_{l_1,n_1}(x_1) \cdots \sum_{l_a=0}^{n_a} b_{l_a,n_a}(x_a) \left(\frac{l_a}{n_a}\right)^{\beta} \cdots \sum_{l_m=0}^{n_m} b_{l_m,n_m}(x_m)\\ &=A \sum_{l_a=0}^{n_a} b_{l_a,n_a}(x_a) \left(\frac{l_a}{n_a}\right)^{\beta}= A B_{n_a}(x^{\beta},(y_a-x_a))\leq A (y_a-x_a)^{\beta}\\ &\leq A \|X-Y\|^{\beta}. \end{split} \end{align} Similarly, in all other cases, we get the same inequality. Therefore, \[B_{{\mathbf{n}}}f \in \Lip_A \beta.\] \end{proof} \begin{corollary} Let $f\in C(\mathcal{I})$ with H\"older exponent $\xi \in (0,1]$, and let $G$ be the graph of the multivariate Bernstein fractal function $f_{{\mathbf{n}}}^{(\alpha,\epsilon)}$ associated with the IFS \eqref{2eq23}. Suppose that \begin{enumerate} \item the interpolation points do not lie on an $(m-1)$-dimensional hyperplane; \item $\gamma= \sum\limits_{j_1=1}^{N_1} \sum\limits_{j_2=1}^{N_2} \cdots \sum\limits_{j_m=1}^{N_m} |\alpha_{j_1, \ldots, j_m}|$. \end{enumerate} Then, the box dimension of $G$ satisfies the estimates (i), (ii), (iii) of Theorem \ref{Thm-BD}. \end{corollary} \begin{proof} Since $f$ is H\"older with exponent $\xi$, Proposition \ref{lemmadim} ensures that $B_{\textbf{n}}f$ is H\"older with the same exponent.
Thus, the estimates of Theorem \ref{Thm-BD} are valid for the box dimension of the multivariate Bernstein fractal function. \end{proof} \section{Conclusions} In this work, we have introduced multivariate zipper fractal interpolation for multivariate data given on a Cartesian grid, prescribed by a binary signature matrix. Multivariate zipper $\alpha$-fractal functions are constructed and their approximation properties are studied. Taking the base function in the construction to be a multivariate Bernstein function, we have studied some shape-preserving aspects of multivariate Bernstein zipper $\alpha$-fractal functions. Finally, we have derived bounds for the box dimension of the graph of a multivariate zipper $\alpha$-fractal function in terms of the scaling factors and the H\"older exponents of the given function and the base function. It was found that our methodology provides $2^m$-iso-dimensional multivariate fractal functions for the same scaling factors. \section*{Acknowledgement} The second author would like to thank IIT Madras for funding through the IoE project SB20210848MAMHRD008558 of the Ministry of Education, Govt. of India. \input{biblio} \end{document}
\section{Introduction} \label{intro} In a previous paper \cite{MarinPoveda}, higher-order corrections (up to $n$-th order) were obtained for the perihelion precession (see figure (\ref{fig:avance})) in binary systems like OJ287, Sagittarius A*-S2 and H1821+643, using the Schwarzschild metric and complex integration. The corrections were performed considering the third root of the equation of motion, and the expansion was developed in terms of $\epsilon \equiv r_s/\left(a(1-e^2)\right)$, where $r_{s}$ is the Schwarzschild radius. The results were compared with other expansions that appear in the literature giving corrections to second and third order \cite{Fokas,Tyler,Rosales,Biesel,DEliseo,Scharf,Do_Nhat}. In this paper we will consider both the Schwarzschild and Kerr metrics in order to take the spin effect into account, and we will develop the expansion in terms of both $\epsilon \equiv r_s/\left(a(1-e^2)\right)$ and $\epsilon^{*} \equiv \left(1- \frac{2 \alpha E'}{cJ}\right) \epsilon $, where $E'$ and $J$ are the energy and angular momentum per unit mass and $\alpha$ is a factor proportional to the black hole spin. Kerr black holes are of particular interest because most black holes in the universe probably rotate, and one of the most important consequences of this spin is that spacetime is dragged around the rotating black hole, an effect known as ``frame dragging''. In the case of a Schwarzschild black hole, the precession is a rotation of the elliptical orbit within the fixed plane of that orbit. For the Kerr solution, frame dragging introduces an additional precession of the orbital plane around the rotation axis of the black hole, in the same direction as the black hole's rotation. To facilitate the calculations, we will assume that the rotation axis of the Kerr black hole is perpendicular to the plane of the orbit.
\begin{figure} \begin{centering} \includegraphics{perih.eps} \par\end{centering} \caption{Perihelion precession. $\omega$ is the initial inclination of the orbit and $\delta\omega$ is the angle of precession.\label{fig:avance}} \end{figure} \section{Spin contribution to the perihelion precession} \medskip{} The Schwarzschild metric describes a body with spherical symmetry, but without electric charge or rotation. Since many of the black holes that have been found rotate about their own axes, as for example in the binary system OJ287, one might expect this rotation to influence the perihelion advance of the elliptical orbits. The rotation is characterized by the spin angular momentum $S_{z}$ of the massive body $M$. To include the spin in the calculation, one can use the Kerr metric, which is a vacuum solution of the field equations for a body of mass $M$ rotating about its own axis with angular momentum $S_{z}$. In the equatorial plane it reads \cite{Misner,Ryder,Hobson,Chandrasekhar,tHooft,Ludvigsen,Marin}: \\ \begin{eqnarray} \left(ds\right)^{2}=c^{2}(d\tau)^{2}=\gamma c^{2}(dt)^{2}-\frac{r^{2}}{\Delta}(dr)^{2}-r^{2}\left(1+\frac{\alpha^{2}}{r^{2}}+\frac{r_{s}\alpha^{2}}{r^{3}}\right)(d\phi)^{2}+\frac{2r_{s}\alpha}{r}cdtd\phi, \end{eqnarray} \\ where $\gamma=1-\frac{r_{s}}{r}$, $\alpha=\frac{S_{z}}{Mc}$, $\Delta=r^{2}-r_{s}r+\alpha^{2}$, with coordinates $x^{0}=ct$, $x^{1}=r$, $ x^{2}=\theta$ and $x^{3}=\phi$. $r_{s}=\frac{2GM}{c^{2}}$ is the Schwarzschild radius. We have taken $\theta=\pi/2$ (equatorial plane). Since in this paper we only intend to assess how the spin could contribute to the perihelion precession, and whether it is relevant to the calculation, only first-order terms in $ \frac{\alpha}{r} $ will be retained. The metric is then reduced to: \\ \begin{eqnarray} \left(ds\right)^{2}=c^{2}(d\tau)^{2}=\gamma c^{2}(dt)^{2}-\frac{1}{\gamma}(dr)^{2}-r^{2}(d\phi)^{2}+\frac{2r_{s}\alpha}{r}cdtd\phi.
\label{eq:mov} \end{eqnarray} The arc length $ds$ satisfies the relation $ds^{2} = g_{\mu \nu}dx^{\mu}dx^{\nu}$, so the covariant metric tensor is: \begin{eqnarray} g_{\mu\nu}=\left(\begin{array}{cccc} \gamma & 0 & 0 & \frac{r_{s}\alpha}{r}\\ 0 & -\gamma^{-1} & 0 & 0\\ 0 & 0 & -r^{2} & 0\\ \frac{r_{s}\alpha}{r} & 0 & 0 & -r^{2} \end{array}\right) \end{eqnarray} The geodesic equation can be written in an alternative form using the Lagrangian \begin{eqnarray} L\left(x^{\mu}, \frac{dx^{\mu}}{d\sigma}\right) = - g_{\alpha \beta}\left(x^{\mu}\right) \frac{dx^{\alpha}}{d\sigma}\frac{dx^{\beta}}{d\sigma} \label{eq:Lagrangian RN} \end{eqnarray} \\ where $\sigma$ is a parameter along the trajectory of the particle, usually taken to be the proper time $\tau$ for a massive particle. Using the Euler-Lagrange equations: \begin{eqnarray} \frac{\partial L}{\partial x^{\mu}}-\frac{d}{d \sigma}\left(\frac{\partial L}{\partial \left(\frac{dx^{\mu}}{d \sigma}\right)}\right)=0 \end{eqnarray} \\ we get the geodesic equation for the particle: \begin{eqnarray} \frac{du_{\mu}}{d\sigma} = \frac{1}{2} \left(\partial_{\mu} g_{\alpha \beta}\right) u^{\alpha} u^{\beta} \label{eq:geodesic} \end{eqnarray} where $u^{\mu} = \frac{dx^{\mu}}{d \sigma}$ and $u_{\mu}=g_{\mu\nu}u^{\nu}$.
With $\sigma= \tau$, for the coordinates $ct$ ($\mu=0$) and $\phi$ ($\mu = 3$), the geodesic equation (\ref{eq:geodesic}) gives us, respectively: \begin{eqnarray} \frac{d}{d\tau}\left(\gamma c^{2}\left(\frac{dt}{d\tau}\right)+\frac{r_{s}\alpha c}{r}\left(\frac{d\phi}{d\tau}\right)\right)=0, \label{eq:mu0} \end{eqnarray} \begin{eqnarray} \frac{d}{d\tau}\left(r^{2}\left(\frac{d\phi}{d\tau}\right)-\frac{r_{s}\alpha c}{r}\left(\frac{dt}{d\tau}\right)\right)=0 \label{eqmu3} \end{eqnarray} These equations define the following constants of motion along the trajectory of the particle around the massive object: \begin{eqnarray} \gamma c^{2}\left(\frac{dt}{d\tau}\right)+\frac{r_{s}\alpha c}{r}\left(\frac{d\phi}{d\tau}\right)=E', \label{eq:mu00} \end{eqnarray} \begin{eqnarray} r^{2}\left(\frac{d\phi}{d\tau}\right)-\frac{r_{s}\alpha c}{r}\left(\frac{dt}{d\tau}\right)=J \label{eqmu33} \end{eqnarray} where $E'$ has units of energy per unit mass and $J$ of angular momentum per unit mass. From these equations we get: \begin{equation} \label{eq:dphitau} \frac{d\phi}{d\tau}=\frac{1}{\gamma cr^{2}}\left(\frac{r_{s}\alpha}{r}E'+\gamma cJ\right) \end{equation} \begin{equation} \frac{dt}{d\tau}=\frac{1}{\gamma cr^{2}}\left(\frac{r^{2}}{c}E'-\frac{r_{s}\alpha}{r}J\right) \end{equation} Equation (\ref{eq:mov}) can be written as: \begin{equation} c^{2}=\gamma c^{2}\left(\frac{dt}{d\tau}\right)^{2}-\frac{1}{\gamma}\left(\frac{dr}{d\tau}\right)^{2}-r^{2}\left(\frac{d\phi}{d\tau}\right)^{2}+\frac{2r_{s}\alpha}{r}c\frac{dt}{d\tau}\frac{d\phi}{d\tau}, \end{equation} \\ and replacing the values of $\frac{dt}{d\tau}$ and $\frac{d\phi}{d\tau}$: \[ \gamma^{2}r^{4}c^{4}=\gamma c^{2}\left(\frac{r^{2}}{c}E'-\frac{r_{s}\alpha}{r}J\right)^{2}-\gamma c^{2}r^{4}\left(\frac{dr}{d\tau}\right)^{2}-r^{2}\left(\frac{r_{s}\alpha}{r}E'+\gamma cJ\right)^{2}\qquad\qquad \] \[ \qquad\qquad\qquad\qquad\qquad\qquad+\frac{2r_{s}\alpha}{r}\left(r^{2}E'-\frac{r_{s}\alpha}{r}cJ\right)\left(\frac{r_{s}\alpha}{r}E'+\gamma
cJ\right). \] Simplifying and taking only first order terms in $\frac{\alpha}{r}$: \\ \[ \gamma r^{4}c^{4}=c^{2}\left(\frac{r^{4}}{c^{2}}E'^{2}-\frac{2rr_{s}\alpha}{c}JE'\right)-c^{2}r^{4}\left(\frac{dr}{d\tau}\right)^{2}-r^{2}\left(2c\frac{r_{s}\alpha}{r}JE'+\gamma c^{2}J^{2}\right)+\frac{2r_{s}\alpha}{r}\left(r^{2}cJE'\right). \] Dividing by $c^{2}r^{4}$ and introducing the value of $\gamma=1-\frac{r_{s}}{r}$ on the left-hand side, we have: \\ \[ c^{2}-\frac{r_{s}c^{2}}{r}=\frac{1}{c^{2}}E'^{2}-\frac{2r_{s}\alpha}{cr^{3}}JE'-\left(\frac{dr}{d\tau}\right)^{2}-\gamma\frac{J^{2}}{r^{2}} \] Finally we obtain an energy conservation equation, similar to the one obtained in the case of the Schwarzschild metric, but with an additional crossed term proportional to $J E'$: \begin{eqnarray} \frac{E'^{2}}{c^{2}}-c^{2}=\left(\frac{dr}{d\tau}\right)^{2}+\gamma\frac{J^{2}}{r^{2}}-\frac{r_{s}c^{2}}{r}+\frac{2r_{s}\alpha}{cr^{3}}JE' \label{eq:conservedenergy} \end{eqnarray} This allows us to define an effective potential: \\ \begin{equation} \widetilde{V}=\gamma\frac{J^{2}}{r^{2}}-\frac{r_{s}c^{2}}{r}+\frac{2r_{s}\alpha}{cr^{3}}JE' \end{equation} From equation (\ref{eq:conservedenergy}), we can get the expression for the radial kinetic energy per unit mass: \\ \begin{eqnarray} \left(\frac{dr}{d\tau}\right)^{2}=A+\frac{r_{s}c^{2}}{r}-\frac{J^{2}}{r^{2}}+\frac{J^{2}r_{s}}{r^{3}}-\frac{2r_{s}\alpha}{cr^{3}}JE' \label{eq:kerr}, \end{eqnarray} \\ where $A= \frac{E'^{2}}{c^{2}}-c^{2}$. Recall now that, since the orbit is an ellipse, there are two points at which the radial derivative $\frac{dr}{d\tau}$ vanishes: the aphelion and the perihelion.
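As a quick consistency check, the radial equation (\ref{eq:kerr}) is simply $\left(\frac{dr}{d\tau}\right)^{2}=A-\widetilde{V}$ once $\gamma=1-\frac{r_{s}}{r}$ is expanded. A minimal Python sketch (the numerical values are purely illustrative, in units where $c=1$; they do not correspond to any real system):

```python
# Illustrative parameters in units where c = 1 (not a real system)
c, rs, alpha, J, Ep = 1.0, 0.02, 0.004, 0.6, 0.99
A = Ep**2 / c**2 - c**2

def V_eff(r):
    # effective potential V~ = gamma J^2/r^2 - rs c^2/r + 2 rs alpha J E'/(c r^3)
    gamma = 1.0 - rs / r
    return gamma * J**2 / r**2 - rs * c**2 / r + 2.0 * rs * alpha * J * Ep / (c * r**3)

def drdtau_sq(r):
    # radial equation (eq. kerr), with gamma already expanded
    return (A + rs * c**2 / r - J**2 / r**2
            + J**2 * rs / r**3 - 2.0 * rs * alpha * J * Ep / (c * r**3))

# the two expressions agree identically at every radius
for r in (0.5, 1.0, 3.0, 10.0):
    assert abs(drdtau_sq(r) - (A - V_eff(r))) < 1e-14
```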
For these two points we can write: \begin{eqnarray} A+\frac{r_{s}c^{2}}{R_{a}}-\frac{J^{2}}{R_{a}^{2}}+\frac{J^{2}r_{s}}{R_{a}^{3}}-\frac{2r_{s}\alpha}{cR_{a}^{3}}JE'=0 \label{eq:3.4-1} \end{eqnarray} \begin{eqnarray} A+\frac{r_{s}c^{2}}{R_{p}}-\frac{J^{2}}{R_{p}^{2}}+\frac{J^{2}r_{s}}{R_{p}^{3}}-\frac{2r_{s}\alpha}{cR_{p}^{3}}JE'=0, \label{eq:3.5-1} \end{eqnarray} where $R_{a}=a\left(1+e\right)$ and $R_{p}=a\left(1-e\right)$. For an ellipse, the equation of motion has three real, positive roots. Two of the roots are $R_{a}$ and $R_{p}$, and the third we will call $R'_{o}$. To obtain this root we use equation (\ref{eq:kerr}): \\ \[ \left(\frac{dr}{d\tau}\right)^{2}\frac{r^3}{J^2}=\frac{A}{J^2}r^3+\frac{r_{s}c^{2}}{J^2}r^2-r+r_s-\frac{2r_{s}\alpha E'}{cJ}, \] whose right-hand side vanishes at the three roots, so we can write: \[ \frac{A}{J^2}r^3+\frac{r_{s}c^{2}}{J^2}r^2-r+r_s-\frac{2r_{s}\alpha E'}{cJ}=\frac{A}{J^2}(r-R_a)(r-R_p)(r-R'_o). \] The last equation can be written as: \[ \frac{r_{s}c^{2}}{J^{2}}r^{2}-r+\left(r_{s}-\frac{2r_{s}\alpha}{cJ}E'\right)=\frac{\left|A\right|}{J^{2}}\left(R'_{o}+R_{a}+R_{p}\right)r^{2}\qquad\qquad\qquad\qquad\] \[\qquad\qquad\qquad\qquad\qquad\qquad-\frac{\left|A\right|}{J^{2}}\left(R_{p}R'_{o}+R_{a}R_{p}+R_{a}R'_{o}\right)r+\frac{\left|A\right|}{J^{2}}R_{a}R_{p}R'_{o}. \] \\ It is important to recall that $A$ is negative for elliptic orbits, so it can be written as $A=-\left|A\right|$.
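The three-root structure can be verified numerically. The sketch below (plain Python; the orbit parameters are illustrative, in units where $c=1$, and we take the Schwarzschild limit $\alpha=0$ for simplicity, in which the two turning-point conditions are linear in the unknowns $A$ and $J^{2}$) solves for $A$ and $J^{2}$ and checks that $R_{a}$, $R_{p}$ and a third small, positive root annihilate the cubic, with $J^{2}/\left|A\right|$ matching the closed form derived below:

```python
# Illustrative orbit in units where c = 1; Schwarzschild limit alpha = 0
a, e, rs = 1.0, 0.5, 1e-3
Ra, Rp = a * (1.0 + e), a * (1.0 - e)

# Turning-point conditions A + rs/R - J2/R^2 + J2 rs/R^3 = 0 at R = Ra, Rp,
# linear in the unknowns A and J2 = J^2
cA = rs / Ra**3 - 1.0 / Ra**2
cP = rs / Rp**3 - 1.0 / Rp**2
J2 = rs * (1.0 / Rp - 1.0 / Ra) / (cA - cP)
A = -rs / Ra - J2 * cA          # negative for a bound (elliptic) orbit

def cubic(r):
    # (A/J^2) r^3 + (rs c^2/J^2) r^2 - r + rs, with c = 1
    return (A / J2) * r**3 + (rs / J2) * r**2 - r + rs

# third root, closed form in the Schwarzschild limit
Ro = rs * a * (1.0 - e**2) / (a * (1.0 - e**2) - 2.0 * rs)

assert A < 0
for root in (Ra, Rp, Ro):
    assert abs(cubic(root)) < 1e-12
assert abs(J2 / abs(A) - a**3 * (1.0 - e**2)**2 / (a * (1.0 - e**2) - 2.0 * rs)) < 1e-9
```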
Comparing the coefficients of $r^{0}$, $r$ and $r^{2}$ of both sides of the last equation, and replacing the values of $R_a$ and $R_p$ we obtain the following relations: \begin{eqnarray} \label{eq:Ro1} \frac{r_{s}c^{2}}{J^{2}}=\frac{\left|A\right|}{J^{2}}\left(R'_{o}+2a\right), \end{eqnarray} \begin{eqnarray} \label{eq:Ro2} 1= \frac{\left|A\right|}{J^{2}}\left(2aR'_{o}+a^{2}\left(1-e^{2}\right)\right) \end{eqnarray} \begin{eqnarray} \label{eq:Ro3} R'_{o}=\frac{J^{2}r_{s}}{\left|A\right|\left(1-e^{2}\right)a^{2}}\left(1-\frac{2\alpha}{cJ}E'\right) \end{eqnarray} The value of $R'_{o}$ given in (\ref{eq:Ro3}) is the same result that we obtained in a previous paper \cite{MarinPoveda}, but with an extra factor $1-\frac{2\alpha}{cJ}E'$. Using equations (\ref{eq:Ro2}) and (\ref{eq:Ro3}) we can finally write: \begin{eqnarray} \label{eq:Rofinal} R'_{o}=\frac{a\left(1-e^{2}\right)r_{s}\left(1-\frac{2\alpha E'}{cJ}\right)}{\left(a\left(1-e^{2}\right)-2r_{s}\left(1-\frac{2\alpha E'}{cJ}\right)\right)}, \end{eqnarray} and \begin{eqnarray} \label{eq:JA} \frac{J}{\left|A\right|^{\frac{1}{2}}}=\frac{a^{\frac{3}{2}}\left(1-e^{2}\right)}{\left(a\left(1-e^{2}\right)-2r_{s}\left(1-\frac{2\alpha E'}{cJ}\right)\right)^{\frac{1}{2}}} \end{eqnarray} Going back to equation (\ref{eq:kerr}), and because $\frac{dr}{d \tau} = \left(\frac{dr}{d \phi}\right) \left(\frac{d \phi}{d \tau}\right)$, replacing the expression of $\frac{d \phi}{d \tau}$ given by (\ref{eq:dphitau}), we get: \begin{eqnarray} \left(\frac{dr}{d\phi}\right)^{2}\frac{1}{\gamma^2 c^2r^{4}}\left(\frac{r_{s}\alpha}{r}E'+\gamma cJ\right)^2=A+\frac{r_{s}c^{2}}{r}-\frac{J^{2}}{r^{2}}+\frac{J^{2}r_{s}}{r^{3}}-\frac{2r_{s}\alpha}{cr^{3}}JE' \end{eqnarray} Keeping only first-order terms in $\frac{\alpha}{r}$, we arrive at the following equation: \begin{eqnarray} \left(\frac{dr}{d\phi}\right)^{2}\left(1+\frac{2r_{s}\alpha E'}{\gamma
rcJ}\right)=\frac{A}{J^{2}}r^{4}+\frac{r_{s}c^{2}}{J^{2}}r^{3}-r^{2}+\left(r_{s}-\frac{2r_{s}\alpha}{cJ}E'\right)r \end{eqnarray} which can also be written as: \begin{eqnarray} \left(\frac{dr}{d\phi}\right)^{2}\left(1+\frac{2r_{s}\alpha E'}{\gamma rcJ}\right)=\frac{\left|A\right|}{J^{2}}\left(R_{a}-r\right)\left(r-R_{p}\right)\left(r-R'_{o}\right)r. \end{eqnarray} The advance of the perihelion will then be given by the integral: \begin{eqnarray} \Delta\phi_{kerr}=\frac{2J}{\left|A\right|^{1/2}}\intop_{R_{p}}^{R_{a}}\frac{\left(1+\frac{2r_{s}\alpha E'}{\gamma rcJ}\right)^{1/2}dr}{\left[\left(R_{a}-r\right)\left(r-R_{p}\right)\left(r-R'_{o}\right)r\right]^{1/2}} \end{eqnarray} and keeping only terms of up to first order in $\frac{\alpha}{r}$: \begin{eqnarray} \Delta\phi_{kerr}=\frac{2J}{\left|A\right|^{1/2}}\intop_{R_{p}}^{R_{a}}\frac{dr}{\left[\left(R_{a}-r\right)\left(r-R_{p}\right)\left(r-R'_{o}\right)r\right]^{1/2}}\qquad\qquad\qquad\qquad\qquad\qquad\ \nonumber \end{eqnarray} \begin{eqnarray} \label{eq:kerr2} \qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{2r_s \alpha E'}{c\left|A\right|^{1/2}}\intop_{R_{p}}^{R_{a}}\frac{dr}{r\gamma\left[\left(R_{a}-r\right)\left(r-R_{p}\right)\left(r-R'_{o}\right)r\right]^{1/2}} \end{eqnarray} which can be written as: \begin{eqnarray} \Delta\phi_{kerr}=\Delta\phi_{sch}(R'_o)+\Delta\phi_{2}(R'_o) \label{eq:Kerr3} \end{eqnarray} where: \begin{eqnarray} \Delta\phi_{sch}(R'_o)=\frac{2J}{\left|A\right|^{1/2}}\intop_{R_{p}}^{R_{a}}\frac{dr}{\left[\left(R_{a}-r\right)\left(r-R_{p}\right)\left(r-R'_{o}\right)r\right]^{1/2}}, \label{eq:kerr4} \end{eqnarray} and \begin{eqnarray} \Delta\phi_{2}(R'_o)=\frac{2r_s \alpha E'}{c\left|A\right|^{1/2}}\intop_{R_{p}}^{R_{a}}\frac{dr}{\left(r-r_s\right)\left[\left(R_{a}-r\right)\left(r-R_{p}\right)\left(r-R'_{o}\right)r\right]^{1/2}} \label{eq:kerr5} \end{eqnarray} The first term in equation (\ref{eq:Kerr3}) is the same as the one obtained in a previous paper \cite{MarinPoveda} using the
Schwarzschild metric, but replacing $R_{o}$ by $R'_{o}$. The parameter $\alpha$ is related to the spin of the black hole $s$, also known as the Kerr parameter, through the relation: \[ \alpha=\frac{GM}{c^{2}}s, \] so that the spin angular momentum is: \begin{equation} S_{z}=\frac{GM^{2}}{c}s, \end{equation} where $s$ is a dimensionless parameter that can take values between 0 and 1. If $s$ were greater than 1, there would be no event horizon and the singularity at $r = 0$ would be naked, which is not allowed \cite{Ryder,Wald}. \section{Calculation of $\Delta\phi_{sch}(R'_o)$} The first term in equation (\ref{eq:Kerr3}) can also be expressed as: \begin{eqnarray} \Delta\phi_{sch}(R'_o)=\frac{2J}{\left|A\right|^{1/2}}\intop_{R_{p}}^{R_{a}}\frac{r^{-\frac{1}{2}}\left(1-\frac{R'_{o}}{r}\right)^{-\frac{1}{2}}dr}{\left(\left(R_{a}-r\right)\left(r-R_{p}\right)r\right)^{\frac{1}{2}}}, \end{eqnarray} and because $R'_{o}\ll R_{p}<R_{a}$, we can expand in powers of $\frac{R'_{o}}{r}$: \begin{eqnarray} \label{eq:R'oserie} \left(1-\frac{R'_{o}}{r}\right)^{-\frac{1}{2}}=\sum_{n=1}^{\infty}\left(\begin{array}{c} -\frac{1}{2}\\ n-1 \end{array} \right) \left(-1\right)^{n-1} \frac{\left(R'_{o}\right)^{n-1}}{r^{n-1}}. \end{eqnarray} Then \begin{eqnarray} \Delta\phi_{sch}(R'_o)=\frac{2J}{\left|A\right|^{1/2}}\sum_{n=1}^{\infty}\left(\begin{array}{c} -\frac{1}{2}\\ n-1 \end{array} \right)\left(-1\right)^{n-1} \left(R'_{o}\right)^{n-1} I_{n}, \label{eq:Deltaphisch} \end{eqnarray} where: \begin{eqnarray} I_{n}=\intop_{R_{p}}^{R_{a}}\frac{dr}{r^{n}\left(\left(R_{a}-r\right)\left(r-R_{p}\right)\right)^{\frac{1}{2}}}.
\end{eqnarray} The value of $I_{n}$ was calculated using complex integration in reference \cite{MarinPoveda}, where we obtained: \begin{eqnarray} I_{n}=\frac{\pi(-1)^{n+1}}{a^{n}2^{n-1}\left(1-e^{2}\right)^{n-1/2}}\sum_{k=0}^{n-1}\left(\begin{array}{c} n-1\\ k \end{array}\right)^{2}z_{1}^{n-1-k}z_{2}^{k}e^{n-1} \label{eq:In} \end{eqnarray} where: \begin{eqnarray} z_{1}=-\frac{\left(1+\sqrt{1-e^{2}}\right)}{e}, \end{eqnarray} and \begin{eqnarray} z_{2}=-\frac{\left(1-\sqrt{1-e^{2}}\right)}{e}. \end{eqnarray} Defining the functions $Q_{n}\left(z_{1},z_{2}\right)$ as \begin{eqnarray} Q_{n}\left(z_1, z_2\right)=\sum_{k=0}^{n}\left(\begin{array}{c} n\\ k \end{array}\right)^{2}z_1^{n-k}{z_2}^{k}e^n, \end{eqnarray} we can write: \begin{eqnarray} \Delta\phi_{sch}(R'_o)=\frac{2 \pi J}{\left|A\right|^{1/2}}\sum_{n=0}^{\infty}\left(\begin{array}{c} -\frac{1}{2}\\ n \end{array}\right) \frac{\left(R'_{o}\right)^{n}Q_{n}\left(z_1, z_2\right)}{a^{n+1}2^{n}\left(1-e^{2}\right)^{\left(n+\frac{1}{2}\right)}}. \label{eq:Deltaphisch1} \end{eqnarray} Table (\ref{tab:AgujNegTabla-1}) shows the first six functions $Q_{n}$.
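The closed form (\ref{eq:In}) can be checked against direct numerical quadrature. In the Python sketch below, the substitution $r=a(1+e\sin t)$ (our choice for the check, not part of the original complex-integration derivation) maps the integral onto $\int_{-\pi/2}^{\pi/2}dt/\left(a(1+e\sin t)\right)^{n}$, which is evaluated with Simpson's rule and compared with (\ref{eq:In}); the values of $Q_{n}$ in the table are reproduced along the way:

```python
import math

def Q(n, e):
    # Q_n(z1, z2) = sum_k C(n,k)^2 z1^(n-k) z2^k e^n
    s = math.sqrt(1.0 - e * e)
    z1, z2 = -(1.0 + s) / e, -(1.0 - s) / e
    return e**n * sum(math.comb(n, k)**2 * z1**(n - k) * z2**k
                      for k in range(n + 1))

def I_closed(n, a, e):
    # eq. (In): I_n = pi (-1)^(n+1) Q_(n-1) / (a^n 2^(n-1) (1-e^2)^(n-1/2))
    return (math.pi * (-1)**(n + 1) * Q(n - 1, e)
            / (a**n * 2**(n - 1) * (1.0 - e * e)**(n - 0.5)))

def I_quad(n, a, e, N=2000):
    # I_n = int dr / (r^n sqrt((Ra-r)(r-Rp))); with r = a(1 + e sin t) this
    # becomes int_{-pi/2}^{pi/2} dt / (a (1 + e sin t))^n (Simpson's rule)
    f = lambda t: 1.0 / (a * (1.0 + e * math.sin(t)))**n
    h = math.pi / N
    s = f(-math.pi / 2) + f(math.pi / 2)
    for k in range(1, N):
        s += (4 if k % 2 else 2) * f(-math.pi / 2 + k * h)
    return s * h / 3.0

a, e = 1.0, 0.6
assert abs(Q(1, e) + 2.0) < 1e-12                 # Q_1 = -2
assert abs(Q(2, e) - (4.0 + 2.0 * e**2)) < 1e-12  # Q_2 = 4 + 2 e^2
for n in (1, 2, 3, 4):
    assert abs(I_closed(n, a, e) - I_quad(n, a, e)) < 1e-6
```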
\begin{table}[h] \caption{Values of the functions $Q_n$ \label{tab:AgujNegTabla-1}} \smallskip{} \centering{}% \begin{tabular}{ll} \hline \hline \noalign{\vskip\doublerulesep} Function & Expression\tabularnewline[\doublerulesep] \hline \noalign{\vskip\doublerulesep} $Q_{0}$ & $1$\tabularnewline[\doublerulesep] \noalign{\vskip\doublerulesep} $Q_{1}$ & $-2$\tabularnewline[\doublerulesep] \noalign{\vskip\doublerulesep} $Q_{2}$ & $\left(4+2e^{2}\right)$\tabularnewline[\doublerulesep] \noalign{\vskip\doublerulesep} $Q_{3}$ & $-\left(8+12e^{2}\right)$\tabularnewline[\doublerulesep] \noalign{\vskip\doublerulesep} $Q_{4}$ & $\left(16+48e^{2}+6e^{4}\right)$\tabularnewline[\doublerulesep] \noalign{\vskip\doublerulesep} $Q_{5}$ & $-\left(32+160e^{2}+60e^{4}\right)$\tabularnewline[\doublerulesep] \hline \hline \end{tabular} \end{table} Introducing the values of $R'_{o}$ and $\frac{J}{\left|A\right|^{1/2}}$ given by equations (\ref{eq:Rofinal}) and (\ref{eq:JA}), respectively, we get: \begin{eqnarray} \Delta\phi_{sch}(R'_o)=\frac{2 \pi}{\left(1-2 \epsilon \left(1- \frac{2 \alpha E'}{cJ}\right)\right)^{\frac{1}{2}}}\times \qquad\qquad\qquad\qquad\qquad \nonumber \end{eqnarray} \begin{eqnarray} \label{eq:Deltaphi1} \qquad\qquad\qquad \times \sum_{n=0}^{\infty}\left(\begin{array}{c} -\frac{1}{2}\\ n \end{array}\right)\frac{ \left(1- \frac{2 \alpha E'}{cJ}\right)^{n}\epsilon^{n}}{2^{n}\left(1-2\epsilon\left(1- \frac{2 \alpha E'}{cJ}\right)\right)^{n}} Q_{n}\left(z_{1},z_{2}\right), \end{eqnarray} where \begin{eqnarray} \epsilon\equiv \frac{r_{s}}{a\left(1-e^{2}\right)}=\frac{2GM}{a\left(1-e^{2}\right)c^{2}}, \end{eqnarray} and \begin{eqnarray} \left(\begin{array}{c} -\frac{1}{2}\\ n \end{array} \right)= \frac{\left(-1\right)^{n}\left(2n\right)!}{2^{2n}\left(n!\right)^{2}}. 
\end{eqnarray} \section{Calculation of $\Delta\phi_{2}(R'_o)$} The integral $\Delta\phi_{2}(R'_o)$ can also be written as: \begin{eqnarray} \Delta\phi_{2}(R'_o)=\frac{2r_s \alpha E'}{c\left|A\right|^{1/2}}\intop_{R_{p}}^{R_{a}}\frac{\left(1-\frac{2r_{s}+R'_o}{r}+\frac{2R'_o r_{s}+r_{s}^{2}}{r^{2}}-\frac{r_{s}^{2}R'_o}{r^{3}}\right)^{-\frac{1}{2}}dr}{r^{2}\left[\left(R_{a}-r\right)\left(r-R_{p}\right)\right]^{1/2}} \label{eq:Deltaphi2} \end{eqnarray} Let us call $a_{1}=2r_{s}+R'_o$, $a_{2}=2R'_o r_{s}+r_{s}^{2}$, and $a_{3}=r_{s}^{2} R'_o$. Performing the expansion: \begin{eqnarray} \left(1-\frac{a_{1}}{r}+\frac{a_{2}}{r^{2}}-\frac{a_{3}}{r^{3}}\right)^{-\frac{1}{2}}=\sum_{n=0}^{\infty} \left(\begin{array}{c} -\frac{1}{2}\\ n \end{array}\right)\left(-\frac{a_{1}}{r}+\frac{a_{2}}{r^{2}}-\frac{a_{3}}{r^{3}}\right)^{n}=\sum_{n=0}^{\infty}\frac{c_{n}}{r^{n}}, \end{eqnarray} one can check that, for example: \begin{eqnarray} c_{0}=1, \end{eqnarray} \begin{eqnarray} c_{1}=\frac{a_{1}}{2}, \end{eqnarray} \begin{eqnarray} c_{2}=\frac{1}{2}\left(\frac{3}{4} a_{1}^{2}-a_{2}\right), \end{eqnarray} \begin{eqnarray} c_{3}=\frac{5}{16}a_{1}^{3}-\frac{3}{4}a_{1}a_{2}+\frac{1}{2}a_{3}, \end{eqnarray} \begin{eqnarray} c_{4}=\frac{35}{128}a_{1}^{4}+\frac{3}{4}a_{1}a_{3}-\frac{15}{16}a_{1}^{2}a_{2}+\frac{3}{8}a_{2}^{2}, \end{eqnarray} \begin{eqnarray} \mathrm{etc.}
\nonumber \end{eqnarray} In general the $c_{n}$ satisfy the following recurrence relation: \begin{eqnarray} c_{n+3}=\frac{1}{6+2n}\left(a_{1}c_{n+2}\left(5+2n\right)-2a_{2}c_{n+1}\left(2+n\right)+a_{3}c_{n}\left(3+2n\right)\right) \end{eqnarray} In terms of the $c_{n}$ we can write (\ref{eq:Deltaphi2}) as: \begin{eqnarray} \qquad\qquad \Delta\phi_{2}(R'_o)=\frac{2r_s \alpha E'}{c\left|A\right|^{1/2}}\intop_{R_{p}}^{R_{a}}\sum_{n=0}^{\infty}\frac{c_{n} dr}{r^{n+2}\left[\left(R_{a}-r\right)\left(r-R_{p}\right)\right]^{1/2}} \nonumber \end{eqnarray} \begin{eqnarray} =\frac{2r_s \alpha E'}{c\left|A\right|^{1/2}}\sum_{n=0}^{\infty}c_{n}I_{n+2} \qquad \end{eqnarray} Replacing the value of $I_{n+2}$ given by (\ref{eq:In}) and the value of $\frac{J}{\left|A\right|^{1/2}}$ given by (\ref{eq:JA}) we get: \begin{eqnarray} \Delta\phi_{2}(R'_o)=\frac{\pi \epsilon}{2\left(1-2 \epsilon \left(1- \frac{2 \alpha E'}{cJ}\right) \right)^{\frac{1}{2}}}\left(\frac{2 \alpha E'}{cJ}\right)\sum_{n=0}^{\infty}\frac{\left(-1\right)^{n+1}c_{n}Q_{n+1}\left(z_{1},z_{2}\right)\epsilon^{n}}{2^{n}r_{s}^{n}} \label{eq:deltaphi2Q} \end{eqnarray} \section{Expansion of $\Delta\phi_{sch}(R'_o)$ in terms of $\epsilon^{*} \equiv \left(1- \frac{2 \alpha E'}{cJ}\right) \epsilon $} In terms of $\epsilon^{*} \equiv \left(1- \frac{2 \alpha E'}{cJ}\right) \epsilon $, equation (\ref{eq:Deltaphi1}) becomes: \begin{eqnarray} \Delta\phi_{sch}(R'_o)=\frac{2 \pi}{\left(1-2 \epsilon^{*} \right)^{\frac{1}{2}}} \ \times \sum_{n=0}^{\infty}\left(\begin{array}{c} -\frac{1}{2}\\ n \end{array}\right)\frac{\left(\epsilon^{*}\right)^{n}}{2^{n}\left(1-2\epsilon^{*}\right)^{n}} Q_{n}\left(z_{1},z_{2}\right). \label{eq:Kerrsch} \end{eqnarray} We will expand $\Delta\phi_{sch}(R'_o)$ in terms of $\epsilon^{*}$. Let us compute the first four terms of (\ref{eq:Kerrsch}) to recover the expansion up to third order in $\epsilon^{*}$.
\begin{eqnarray} \Delta\phi_{sch}(R'_o)^{(3)}= \frac{2\pi}{\left(1-2\epsilon^{*}\right)^{1/2}}(Q_{0}\left(z_{1},z_2\right)- \left(\frac{1}{2}\right)\frac{Q_{1}\left(z_1,z_2\right)}{2\left(1-2\epsilon^{*}\right)}\epsilon^{*} + \left(\frac{3}{8}\right)\frac{Q_{2}\left(z_{1},z_2\right)}{2^2\left(1-2\epsilon^{*}\right)^2}\left(\epsilon^{*}\right)^2 \nonumber \end{eqnarray} \begin{eqnarray} -\left(\frac{5}{16}\right)\frac{Q_{3}\left(z_{1},z_2\right)}{2^3\left(1-2\epsilon^{*}\right)^3}\left(\epsilon^{*}\right)^3+ \cdots). \end{eqnarray} \\ Replacing the values of the $Q_{i}$ ($i=0,1,2,3$) we have: \begin{eqnarray} \Delta\phi_{sch}(R'_o)^{(3)}= \frac{2\pi}{\left(1-2\epsilon^{*}\right)^{1/2}}(1+\frac{\epsilon^{*}}{2\left(1-2\epsilon^{*}\right)}+ \left(\frac{3}{16}\right)\frac{\left(2+e^{2}\right)\left(\epsilon^{*}\right)^{2}}{\left(1-2\epsilon^{*}\right)^{2}} \nonumber \end{eqnarray} \begin{eqnarray} +\left(\frac{5}{32}\right)\frac{\left(2+3e^{2}\right)\left(\epsilon^{*}\right)^{3}}{\left(1-2\epsilon^{*}\right)^{3}} + \cdots) \label{eq:phisch2} \end{eqnarray} which to third order in $\epsilon^{*}$ reduces to: \begin{eqnarray} \Delta\phi_{sch}(R'_o)^{(3)}=2\pi\left(1+\frac{3}{2}\epsilon^{*}+\frac{\left(54+3e^{2}\right)}{16} \left(\epsilon^{*}\right)^{2}+\left(\frac{135}{16}+\frac{45}{32}e^{2}\right)\left(\epsilon^{*}\right)^{3}+ \cdots\right). \end{eqnarray} In terms of $\epsilon$ we have: \begin{eqnarray} \Delta\phi_{sch}(R'_o)^{(3)}=2\pi (1+\frac{3}{2}\left(1- \frac{2 \alpha E'}{cJ}\right) \epsilon + \frac{3\left(18+e^{2}\right)}{16} \left(1- \frac{2 \alpha E'}{cJ}\right)^{2} \epsilon^{2} \nonumber \end{eqnarray} \begin{eqnarray} +\frac{45}{32}\left(6+e^{2}\right)\left(1- \frac{2 \alpha E'}{cJ}\right)^{3} \epsilon^{3}+ \cdots).
\end{eqnarray} In general to order $m$ in $\epsilon^{*}$ we can write: \begin{eqnarray} \Delta\phi_{sch}(R'_o)^{(m)}=\frac{2 \pi}{\left(1-2 \epsilon^{*} \right)^{\frac{1}{2}}} \ \times \sum_{n=0}^{m}\left(\begin{array}{c} -\frac{1}{2}\\ n \end{array}\right)\frac{\left(\epsilon^{*}\right)^{n}}{2^{n}\left(1-2\epsilon^{*}\right)^{n}} Q_{n}\left(z_{1},z_{2}\right). \label{eq:deltaphim} \end{eqnarray} \section{Expansion of $\Delta\phi_{2}(R'_o)$ in terms of $ \epsilon $} To evaluate $\Delta\phi_{2}(R'_o)$ to third order in $ \epsilon $ it is sufficient to consider the first three terms in the sum given by equation (\ref{eq:deltaphi2Q}). \begin{eqnarray} \Delta\phi_{2}(R'_o)^{\left(3\right)}=\frac{\pi \epsilon}{2\left(1-2 \epsilon \left(1- \frac{2 \alpha E'}{cJ}\right) \right)^{\frac{1}{2}}}\left(\frac{2 \alpha E'}{cJ}\right)\left(-c_{0}Q_{1}+\frac{c_{1}Q_{2}\epsilon}{2r_{s}}-\frac{c_{2}Q_{3}\epsilon^{2}}{2^{2}r_{s}^{2}}\right), \label{eq:deltaphi*} \end{eqnarray} where \begin{equation} c_{0}=1, \end{equation} \begin{equation} c_{1}=\frac{2r_{s}+R'_o}{2}, \end{equation} \begin{equation} c_{2}=\frac{3}{8}\left(R'_o\right)^{2}+\frac{1}{2}r_{s}R'_o+r_{s}^{2}. 
\end{equation} If we introduce the values of the $Q_{i}$ in (\ref{eq:deltaphi*}) we have: \begin{eqnarray} \Delta\phi_{2}(R'_o)^{\left(3\right)}=\frac{\pi \epsilon}{\left(1-2 \epsilon \left(1- \frac{2 \alpha E'}{cJ}\right) \right)^{\frac{1}{2}}}\left(\frac{2 \alpha E'}{cJ}\right)(1+\frac{\left(2r_{s}+R'_o\right)\left(2+e^{2}\right)\epsilon}{4r_{s}} \nonumber \end{eqnarray} \begin{eqnarray} +\frac{\left(3\left(R'_o\right)^{2}+4r_{s}R'_o+8r_{s}^{2}\right)\left(2+3e^{2}\right)\epsilon^{2}}{16r_{s}^{2}}), \label{eq:deltaphi24} \end{eqnarray} where \begin{eqnarray} \frac{\left(2r_{s}+R'_o\right)}{r_{s}}=\frac{2+\left(1-4\epsilon \right) \left(1- \frac{2 \alpha E'}{cJ}\right)}{\left(1-2\epsilon \left(1- \frac{2 \alpha E'}{cJ}\right)\right)}, \label{eq:par1} \end{eqnarray} and \begin{eqnarray} \frac{\left(3\left(R'_o\right)^{2}+4r_{s}R'_o+8r_{s}^{2}\right)}{r_{s}^{2}}=\frac{8+4\left(1- \frac{2 \alpha E'}{cJ}\right)\left(1-8\epsilon\right)+\left(1- \frac{2 \alpha E'}{cJ}\right)^{2}\left(3-8\epsilon+32\epsilon^{2}\right)}{\left(1-2\epsilon \left(1- \frac{2 \alpha E'}{cJ}\right)\right)^{2}}. 
\label{eq:par2} \end{eqnarray} Introducing (\ref{eq:par1}) and (\ref{eq:par2}) in (\ref{eq:deltaphi24}), and employing the Taylor series (around $x=0$) $\left(1-x\right)^{-\frac{1}{2}}=1+\frac{1}{2}x+\frac{3}{8}x^{2}+\cdots$, we obtain to third order in $\epsilon$: \begin{eqnarray} \Delta\phi_{2}(R'_o)^{\left(3\right)}=\pi \frac{2 \alpha E'}{cJ} \epsilon +\frac{\pi}{4}\left(10+3e^{2}-\left(6+e^{2}\right)\frac{2 \alpha E'}{cJ}\right)\frac{2 \alpha E'}{cJ}\epsilon^{2} \nonumber \end{eqnarray} \begin{eqnarray} +\frac{\pi}{16}\left(86+53e^{2}-\left(116+54e^{2}\right)\frac{2 \alpha E'}{cJ}+\left(38+13e^{2}\right)\left(\frac{2 \alpha E'}{cJ}\right)^{2}\right)\frac{2 \alpha E'}{cJ}\epsilon^{3} \label{eq:deltaphi23} \end{eqnarray} \section{Perihelion precession to third order in $\epsilon$} As the perihelion precession in a cycle is $\chi=\Delta\phi_{kerr}-2\pi$, we have, up to third order in $\epsilon$: \begin{eqnarray} \chi^{\left(3\right)}=\Delta\phi_{sch}(R'_o)^{(3)}+\Delta\phi_{2}(R'_o)^{\left(3\right)}-2\pi, \end{eqnarray} which gives: \begin{eqnarray} \chi^{\left(3\right)}=3\pi\left(1-\frac{4 \alpha E'}{3cJ}\right)\epsilon +\left(\frac{\left(54+3e^{2}\right)}{8}-11\left(\frac{2 \alpha E'}{cJ}\right)+\frac{\left(42+e^{2}\right)}{8}\left(\frac{2 \alpha E'}{cJ}\right)^{2}\right)\pi\epsilon^{2} \nonumber \end{eqnarray} \begin{eqnarray} +(\frac{45\left(6+e^{2}\right)}{16}-\frac{\left(362+41e^{2}\right)}{8}\left(\frac{2 \alpha E'}{cJ}\right)+\frac{\left(694+81e^{2}\right)}{16}\left(\frac{2 \alpha E'}{cJ}\right)^{2} \nonumber \end{eqnarray} \begin{eqnarray} -\frac{\left(29+4e^{2}\right)}{2}\left(\frac{2 \alpha E'}{cJ}\right)^{3})\pi \epsilon^{3}. \label{eq:chi3} \end{eqnarray} \section{Applications} To analyze how the spin of the central black hole changes the perihelion precession, we calculated it up to third order (using equation (\ref{eq:chi3})) for three different binary systems. The results are shown in table (\ref{tab:Oj287}).
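Before applying (\ref{eq:chi3}), its bookkeeping can be cross-checked numerically: writing $q=\frac{2\alpha E'}{cJ}$, the coefficients of $\pi\epsilon^{2}$ and $\pi\epsilon^{3}$ in $\chi^{(3)}$ must equal the sums of the corresponding terms of $\Delta\phi_{sch}(R'_o)^{(3)}$ and $\Delta\phi_{2}(R'_o)^{(3)}$. A short Python sketch (the function names are ours):

```python
# q stands for 2 alpha E' / (c J); all coefficients are in units of pi

def coeff2_pieces(q, e):
    # eps^2: (54+3e^2)/8 (1-q)^2 from Delta phi_sch, plus q(10+3e^2-(6+e^2)q)/4
    return ((54.0 + 3.0 * e * e) / 8.0 * (1.0 - q)**2
            + q / 4.0 * (10.0 + 3.0 * e * e - (6.0 + e * e) * q))

def coeff2_chi(q, e):
    # collected eps^2 coefficient of chi^(3)
    return (54.0 + 3.0 * e * e) / 8.0 - 11.0 * q + (42.0 + e * e) / 8.0 * q**2

def coeff3_pieces(q, e):
    # eps^3: 45(6+e^2)/16 (1-q)^3, plus q(86+53e^2-(116+54e^2)q+(38+13e^2)q^2)/16
    return (45.0 * (6.0 + e * e) / 16.0 * (1.0 - q)**3
            + q / 16.0 * (86.0 + 53.0 * e * e
                          - (116.0 + 54.0 * e * e) * q
                          + (38.0 + 13.0 * e * e) * q**2))

def coeff3_chi(q, e):
    # collected eps^3 coefficient of chi^(3); note the e^2 in (694 + 81 e^2)
    return (45.0 * (6.0 + e * e) / 16.0
            - (362.0 + 41.0 * e * e) / 8.0 * q
            + (694.0 + 81.0 * e * e) / 16.0 * q**2
            - (29.0 + 4.0 * e * e) / 2.0 * q**3)

for q in (0.0, 0.1, -0.1, 0.3):
    for e in (0.0, 0.7, 0.9):
        assert abs(coeff2_pieces(q, e) - coeff2_chi(q, e)) < 1e-10
        assert abs(coeff3_pieces(q, e) - coeff3_chi(q, e)) < 1e-10
```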
The term $\chi(\epsilon^i)$ denotes the contribution that depends on the $i$-th power of $\epsilon$, so that $\chi^{(i)}=\chi(\epsilon^1)+\chi(\epsilon^2)+\cdots+\chi(\epsilon^i)$. The expansion was done in terms of $\alpha$, which is a parameter with units of length, but the experimental measurements are expressed in terms of the spin $s$. The spin $s$ of the black hole is a dimensionless parameter that is related to $\alpha$ by: \begin{equation} \alpha=\frac{GM}{c^2}s=\frac{r_s}{2}s \end{equation} The spin of the central black hole in a binary system is very difficult to measure. Nevertheless, some groups have measured it for the binary system OJ287; these measurements can be found in articles by Pihajoki and Valtonen \cite{Pihajoki,Valtonen}. The problem is that the values obtained from those measurements differ, so we worked with both values in the numerical evaluations, and we also performed the calculations for $s=0$. For the calculations we need the energy per unit mass and the angular momentum per unit mass. Both can be obtained numerically from equations (\ref{eq:3.4-1}) and (\ref{eq:3.5-1}), which are the equations of motion evaluated at the critical points $R_a$ (aphelion) and $R_p$ (perihelion). As these equations are quadratic, there are two solutions for each parameter. We took the negative value for the energy (which is necessary for the orbit to be elliptical), and the positive value for the angular momentum, because we considered a counterclockwise rotation. \begin{table}[h] \caption{Perihelion advance in degrees per period for some binary systems, and for different values of the spin.
\label{tab:Oj287}} \smallskip{} \centering{}% \begin{tabular}{llll} \hline System & Sagittarius A*-S2 & OJ287 & H1821+643 \tabularnewline \hline $M(\times M_{\Theta})$ & $4.310\times10^{6}$ & $1.830\times10^{10}$ & $3.000\times10^{10}$ \tabularnewline $r_s(AU)$ & $0.085$ & $360.847$ & $591.553$ \tabularnewline $a(AU)$ & $923.077$ & $11500$ & $40000$ \tabularnewline $e$ & $0.870$ & $0.700$ & $0.900$ \tabularnewline $\epsilon$ & $3.787\times 10^{-4}$ & $6.153\times 10^{-2}$ & $7.784\times 10^{-2}$ \tabularnewline \hline $s$ & $0$ & $0$ & $0$ \tabularnewline \hline $\chi(\epsilon)$ & $0.205^\circ$ & $33.223^\circ$ & $42.031^\circ$ \tabularnewline $\chi(\epsilon^2)$ & $(1.816\times 10^{-4})^\circ$ & $4.724^\circ$ & $7.692^\circ$ \tabularnewline $\chi(\epsilon^3)$ & $(1.858\times 10^{-7})^\circ$ & $0.765^\circ$ & $1.625^\circ$ \tabularnewline $\chi^{(3)}$ & $0.205^{\circ}$ & $38.713^{\circ}$ & $51.349^{\circ}$ \tabularnewline \hline $s$ & $0.280$ & $0.280$ & $0.280$ \tabularnewline \hline $\chi(\epsilon)$ & $0.206^\circ$ & $35.249^\circ$ & $46.230^\circ$ \tabularnewline $\chi(\epsilon^2)$ & $(1.837\times 10^{-4})^\circ$ & $5.440^\circ$ & $9.622^\circ$ \tabularnewline $\chi(\epsilon^3)$ & $(1.895\times 10^{-7})^\circ$ & $0.965^\circ$ & $2.349^\circ$ \tabularnewline $\chi^{(3)}$ & $0.206^{\circ}$ & $41.654^{\circ}$ & $58.203^{\circ}$ \tabularnewline \hline $s$ & $0.313$ & $0.313$ & $0.313$ \tabularnewline \hline $\chi(\epsilon)$ & $0.206^\circ$ & $35.488^\circ$ & $46.727^\circ$ \tabularnewline $\chi(\epsilon^2)$ & $(1.841\times 10^{-4})^\circ$ & $5.528^\circ$ & $9.866^\circ$ \tabularnewline $\chi(\epsilon^3)$ & $(1.899\times 10^{-7})^\circ$ & $0.991^\circ$ & $2.448^\circ$ \tabularnewline $\chi^{(3)}$ & $0.206^{\circ}$ & $42.007^{\circ}$ & $59.420^{\circ}$ \tabularnewline \hline \end{tabular} \end{table} Table (\ref{tab:Oj287}) shows the values obtained for $s=0$ for the systems Sagittarius A*-S2, OJ287 and H1821+643.
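For $s=0$ every spin term in (\ref{eq:chi3}) vanishes (each carries a factor $\frac{2\alpha E'}{cJ}$), so the $s=0$ rows of the table follow from $\epsilon$ alone. A short Python sketch reproducing them (the values of $r_{s}$, $a$ and $e$ are copied from the table):

```python
import math

# (rs [AU], a [AU], e) from table (tab:Oj287)
systems = {
    "Sagittarius A*-S2": (0.085, 923.077, 0.870),
    "OJ287": (360.847, 11500.0, 0.700),
    "H1821+643": (591.553, 40000.0, 0.900),
}

def chi_terms_s0(rs, a, e):
    # eq. (chi3) with 2 alpha E'/(c J) = 0, converted to degrees per period
    eps = rs / (a * (1.0 - e * e))
    chi1 = 3.0 * math.pi * eps
    chi2 = (54.0 + 3.0 * e * e) / 8.0 * math.pi * eps**2
    chi3 = 45.0 * (6.0 + e * e) / 16.0 * math.pi * eps**3
    return tuple(math.degrees(x) for x in (chi1, chi2, chi3))

# compare with the s = 0 rows of the table
c1, c2, c3 = chi_terms_s0(*systems["OJ287"])
assert abs(c1 - 33.223) < 0.01 and abs(c2 - 4.724) < 0.01 and abs(c3 - 0.765) < 0.01
c1, c2, c3 = chi_terms_s0(*systems["H1821+643"])
assert abs(c1 - 42.031) < 0.01 and abs(c2 - 7.692) < 0.01 and abs(c3 - 1.625) < 0.01
c1, _, _ = chi_terms_s0(*systems["Sagittarius A*-S2"])
assert abs(c1 - 0.205) < 0.01
```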
These values agree with those calculated in a previous paper \cite{MarinPoveda} using the Schwarzschild metric, consistent with the fact that the Kerr space-time reduces to the Schwarzschild space-time when $s=0$. For non-zero values of $s$, the perihelion precession increases with $s$. Note that we considered $s$ to be positive, i.e., the central black hole rotates in the same direction as the motion of the secondary massive object. Since we have no spin information for the other systems, we used the spin values measured for OJ287 with all of the systems to see how the perihelion precession changes with the spin. For OJ287, the accepted experimental value of the perihelion precession is $39.1^\circ$ per period. The theoretical calculation for $s=0.313$ predicts a higher value of $42.0^\circ$, while without the spin the theoretical value is $38.7^{\circ}$. We expected the introduction of the spin to reduce the error with respect to the experimental value, but it instead overshot it. Nevertheless, there is another correction that is expected to reduce the value of the perihelion precession, namely taking into account the energy lost to gravitational radiation. That correction will be addressed in future work. \section{Conclusions} \label{Conclusions} In this paper we have considered both the Schwarzschild and Kerr metrics in order to take the spin effect into account in the calculation of the perihelion precession of binary systems such as OJ287, Sagittarius A*-S2, and H1821+643. We have developed the expansion in terms of both $\epsilon \equiv r_s/\left(a(1-e^2)\right)$ and $\epsilon^{*} \equiv \left(1- \frac{2 \alpha E'}{cJ}\right) \epsilon$, where $E'$ and $J$ are the energy and angular momentum per unit mass and $\alpha$ is a factor proportional to the black hole spin.
Here we have used the notation $\chi\left(\epsilon^{n}\right)$ for the contribution of the $n$-th order term and $\chi^{\left(n\right)}$ for the complete expansion up to the $n$-th order. To see whether the spin correction is relevant for massive objects, we calculated the perihelion advance for three binary systems: Sagittarius A*-S2, OJ287 and H1821+643. In table (\ref{tab:Oj287}) we show these calculations, performed using equation (\ref{eq:chi3}) at different orders. Sagittarius A* is a bright and very compact radio source located at the center of the Milky Way. There is strong evidence that Sagittarius A* is a supermassive black hole with a mass of approximately $4 \times 10^{6} M_{\odot}$. We chose the star S2 because it presents a very peculiar orbit. As can be seen in table (\ref{tab:Oj287}), if we compare the value of $\chi^{(3)}$ for $s=0$, which is $0.205^\circ$, with the corresponding values for $s=0.280$ and $s=0.313$, both $0.206^\circ$, the difference is not significant. It is worth mentioning that the values of $\chi\left(\epsilon^{n}\right)$ and $\chi^{\left(n\right)}$ corresponding to $s=0$ agree with those calculated in an earlier paper \cite{MarinPoveda} using the Schwarzschild metric. OJ287 is a binary system of black holes located about 3,500 million light years from Earth, with a total mass of around $1.845\times 10^{10}M_{\odot}$. Between the first- and second-order terms ($\chi\left(\epsilon\right)$ and $\chi\left(\epsilon^{2}\right)$) there is a difference of approximately $5.5^{\circ}$, while for higher orders the contributions are less than $1^{\circ}$. At third order, the contribution to the perihelion precession in a cycle is $0.965^{\circ}$ in our expansion for $s=0.280$, compared with the value of $0.765^{\circ}$ obtained in an earlier paper \cite{MarinPoveda} taking $s=0$. The total perihelion precession is then $41.654^{\circ}$.
For $s=0.313$ the perihelion precession is slightly larger (approximately $42^{\circ}$). An interesting feature of our expansion is that, as higher-order corrections are taken into account, it stabilizes around $42^{\circ}$. Equation (\ref{eq:chi3}) therefore gives an important correction to the perihelion precession. It is important to mention that the gravitational radiation of the system (which directly affects the values of the period, the radial distance and the eccentricity of the orbit \cite{Shreya}) could introduce important modifications in the perihelion advance equation. For the H1821+643 system, which hosts a very massive black hole with a mass of $3\times 10^{10}M_{\odot}$, the orbital parameters of the gravitational companion are not known, so we used arbitrarily chosen parameters. In this system the corrections are very important: for example, taking $s=0.313$, the second-order contribution $\chi\left(\epsilon^{2}\right)$ is $9.866^{\circ}$, while the third-order contribution $\chi\left(\epsilon^{3}\right)$ is $2.448^{\circ}$. Taking all these results into account, we conclude that this is an excellent approach for calculating the perihelion precession of binary systems, since the expansion can be carried to any order in the third root $R'_o$ of the equation of motion (see equation (\ref{eq:Roprime})). With other methods, such as perturbation theory, the calculations are more complicated, and the approximations involved make it easy to introduce errors. \newpage
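As a quick numerical cross-check of the entries in table (\ref{tab:Oj287}), the short script below recomputes $r_s$, $\epsilon$, and the first-order term for the three systems. It assumes the standard first-order Schwarzschild result $\chi(\epsilon)=3\pi\epsilon$, which reproduces the $s=0$, $\chi(\epsilon)$ rows of the table to within the rounding of the adopted physical constants; the script is illustrative only and is not part of the calculations reported above.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

def schwarzschild_radius_au(mass_solar):
    """r_s = 2GM/c^2, converted to astronomical units."""
    return 2 * G * mass_solar * M_SUN / C**2 / AU

def epsilon(r_s_au, a_au, e):
    """Expansion parameter eps = r_s / (a (1 - e^2))."""
    return r_s_au / (a_au * (1 - e**2))

def chi_first_order_deg(eps):
    """First-order (s = 0) perihelion advance chi(eps) = 3*pi*eps, in degrees."""
    return math.degrees(3 * math.pi * eps)

# (mass in solar masses, semi-major axis a in AU, eccentricity e) from the table
systems = {
    "Sagittarius A*-S2": (4.310e6,  923.077, 0.870),
    "OJ287":             (1.830e10, 11500.0, 0.700),
    "H1821+643":         (3.000e10, 40000.0, 0.900),
}

for name, (m, a, e) in systems.items():
    rs = schwarzschild_radius_au(m)
    eps = epsilon(rs, a, e)
    print(f"{name}: r_s = {rs:.3f} AU, eps = {eps:.3e}, "
          f"chi(eps) = {chi_first_order_deg(eps):.3f} deg")
```

The higher-order terms $\chi(\epsilon^2)$, $\chi(\epsilon^3)$ and the spin-dependent rows require the full expansion of equation (\ref{eq:chi3}) and are not reproduced here.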
\section{Appendix} \end{document} \section{Preliminaries} We adopt the following conventions throughout the paper. We consider two types or terms the same if they are equivalent up to $\alpha$-conversion~\cite{barendregt1984lambda}. We use subscripts to emphasize free occurrences of a variable. For example, $T_x$ means $x$ may have free occurrences in $T$. Additionally, we assume $\alpha$-conversion happens automatically. That is, when $T_y$ appears later, all corresponding free $x$'s in $T$ are substituted by $y$. We use semicolons ($;$) to denote context concatenation instead of commas ($,$). \begin{definition} A type $T$ is closed w.r.t. a context $\Gamma$ if $fv(T) \subseteq dom(\Gamma)$. \end{definition} \begin{definition} Well-formedness of a context is inductively defined as follows. \begin{enumerate} \item The empty context $\bigcdot$ is well-formed. \item If $\Gamma$ is well-formed, $T$ is closed w.r.t. $\Gamma$ and $x \notin dom(\Gamma)$, then $\Gamma ; x : T$ is well-formed. \end{enumerate} \end{definition} Unless explicitly mentioned, all lemmas and theorems require and preserve the properties that types are closed and contexts are well-formed. This is proven explicitly in the mechanized proofs. \section{Definitions of $F_{<:}\xspace$ and $D_{<:}\xspace$}\label{ssec:defs} $F_{<:}\xspace$ was introduced by \citet{Cardelli:1985:UTD:6041.6042} as the core calculus of the Fun language; it extends System $F$ with upper-bounded quantification. \input{figures/definitions/fsub_def} \begin{definition} $F_{<:}\xspace$ is defined in \Cref{fig:fsub_def}. \end{definition} Universal types in $F_{<:}\xspace$ combine polymorphism and subtyping; they are compared by the \infref{fsub:all} rule. The \infref{fsub:trans} rule makes transitivity explicit in the system. It turns out that $F_{<:}\xspace$ can be defined in such a way that transitivity is not an inference rule but rather a provable property.
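To make the closedness and well-formedness conditions from the Preliminaries concrete, here is a small sketch in Python. The type grammar (`Top`, `Var`, `Fun`) is a hypothetical stand-in chosen only to exercise free variables and binding; it is not the exact syntax of $F_{<:}\xspace$ or $D_{<:}\xspace$ from the figures.

```python
from dataclasses import dataclass

# Illustrative (hypothetical) type grammar: top, variables, and a
# dependent function type binding its parameter in the return type.
@dataclass(frozen=True)
class Top: pass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Fun:
    param: str
    arg: object
    ret: object       # may refer to `param`

def fv(t):
    """Free variables of a type."""
    if isinstance(t, Top):
        return set()
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Fun):
        return fv(t.arg) | (fv(t.ret) - {t.param})
    raise TypeError(t)

def closed(t, ctx):
    """T is closed w.r.t. Gamma iff fv(T) is a subset of dom(Gamma)."""
    return fv(t) <= {x for x, _ in ctx}

def well_formed(ctx):
    """Each binding's type must be closed w.r.t. the preceding context,
    and no variable may be bound twice."""
    seen = []
    for x, t in ctx:
        if any(x == y for y, _ in seen) or not closed(t, seen):
            return False
        seen.append((x, t))
    return True
```

For instance, the context $x : \top;\; y : \forall(z : x)\, z$ is well-formed, while binding $y$ to a type mentioning an unbound $x$, or rebinding $x$, is rejected.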
\input{figures/definitions/fsub_nf_def} \begin{definition} $F_{<:}\xspace$ normal form is defined in \Cref{fig:fsub_nf_def}. The rules that differ are shaded. \end{definition} We call this alternative definition ``normal form'', following the convention in~\citet{fsub-undec}. Both definitions are equivalent: \begin{theorem}\label{thm:fsub-nf-equiv}\citep{10.1007/3-540-52590-4_45} $F_{<:}\xspace$ subtyping is equivalent to $F_{<:}\xspace$ normal form. Namely, $\subtypingf S U$ holds in the original form iff it holds in normal form. \end{theorem} $D_{<:}\xspace$ is a richer calculus than $F_{<:}\xspace$: it adds a form of dependent types, called path types, which are bounded both from above and from below, making $D_{<:}\xspace$ more general than $F_{<:}\xspace$. \input{figures/definitions/dsub_def} \begin{definition} $D_{<:}\xspace$ is defined in \Cref{fig:dsub_def}. \end{definition} $D_{<:}\xspace$ has the following types: the top type $\top$, the bottom type $\bot$, type declarations, path types, and dependent function types. In $D_{<:}\xspace$, a path type has the form $x.A$, where the type label $A$ is fixed; that is, there is only one type label in $D_{<:}\xspace$, and it is $A$. A term in $D_{<:}\xspace$ can be a variable, a type tag, a lambda abstraction, an application, or a let binding. In the typing rules, the \infref{dsub:var}, \infref{dsub:sub} and \infref{dsub:let} rules are standard. The \infref{dsub:all-i} rule says a lambda is typed by pushing its declared parameter type onto the context. Note that the return type is allowed to depend on the parameter, which makes the system dependently typed. The \infref{dsub:all-e} rule types a function application. Since $U$ may depend on the parameter, the overall type may refer to $y$. The \infref{dsub:typ-i} rule assigns a type declaration with equal bounds to a type tag. In the subtyping rules, the \infref{dsub:top}, \infref{dsub:bot}, \infref{dsub:refl} and \infref{dsub:trans} rules are standard.
In the \infref{dsub:bnd} rule, type declarations are compared by comparing their corresponding components. Notice that the lower bounds are in contravariant position and hence they are compared in reversed order. Similarly, the \infref{dsub:all} rule also compares parameter types in reversed order. The return types are compared with the context extended with $S_2$. The \infref{dsub:sel1} and \infref{dsub:sel2} rules are used to access the bounds of a path type. Notice that the typing and subtyping rules in $D_{<:}\xspace$ are mutually dependent. This is because the \infref{dsub:sub} rule uses subtyping and the \infref{dsub:sel1} and \infref{dsub:sel2} rules use typing in their premises. This mutual dependency makes $D_{<:}\xspace$ harder to reason about. Nonetheless, this mutual dependency can be eliminated due to the following lemma. \begin{lemma}\label{lem:dsub-unravel} (unravelling of $D_{<:}\xspace$ subtyping) \infref{dsub:sel1} and \infref{dsub:sel2} can be changed to the following rules, and the resulting subtyping relation is equivalent to the original one. \begin{mathpar} \inferrule*[right=\infnlabel{Sel1'}{dsub:sel1p}] {\subtypingd{\Gamma(x)}{\{A: S .. \top\}}} {\subtypingd{S}{x.A}} \inferrule*[right=\infnlabel{Sel2'}{dsub:sel2p}] {\subtypingd{\Gamma(x)}{\{A: \bot .. U\}}} {\subtypingd{x.A}{U}} \end{mathpar} \end{lemma} \begin{proof} This follows directly from the fact that the only typing rules that apply to variables are the \infref{dsub:var} and \infref{dsub:sub} rules. \end{proof} This new definition of subtyping with the \infref{dsub:sel1p} and \infref{dsub:sel2p} rules no longer depends on typing. We will use this definition in the rest of the paper. \section{Conclusion} We have studied the decidability of typing and subtyping of the $D_{<:}\xspace$ calculus and several of its fragments. 
We first presented a counterexample showing that the previously proposed mapping from $F_{<:}\xspace$ to $D_{<:}\xspace$ cannot be used to prove undecidability of $D_{<:}\xspace$. We then discovered a normal form for $D_{<:}\xspace$ and proved its equivalence with the original $D_{<:}\xspace$ formulation. We used the normal form to prove $D_{<:}\xspace$ subtyping and typing undecidable by reductions from $\fsub^-$. We defined a kernel version of $D_{<:}\xspace$ by removing the bad bounds subtyping rule and restricting the subtyping rule for dependent function types to equal parameter types, as in kernel $F_{<:}\xspace$. We proved kernel $D_{<:}\xspace$ decidable, and showed that it is exactly the fragment of $D_{<:}\xspace$ that is handled by the step subtyping algorithm of \citet{abel-algorithmic}. We defined strong kernel $D_{<:}\xspace$, a decidable fragment of $D_{<:}\xspace$ that is strictly between kernel $D_{<:}\xspace$ and full $D_{<:}\xspace$ in terms of expressiveness, and in particular permits subtyping comparison between the parameter types of dependent function types. This allows us to handle type aliases gracefully within the subtyping relation. Finally, we proposed stare-at subtyping as an algorithm for deciding subtyping in strong kernel $D_{<:}\xspace$. We have mechanized the proofs of our theoretical results in a combination of Coq and Agda. \section{Discussion and Related Work} \subsection{Undecidability of Bad Bounds} In \Cref{ssec:nf}, we showed that the \infref{dsub:trans} rule and the \infref{dsub:bb} rule are equivalent in terms of expressiveness, and that $D_{<:}\xspace$ and $D_{<:}\xspace$ without the \infref{dsub:bb} rule are both undecidable. We also showed that kernel $D_{<:}\xspace$ is decidable. Kernel $D_{<:}\xspace$ applies two modifications to $D_{<:}\xspace$: it makes the parameter types in the \infref{dsub:all} rule identical, and it removes the \infref{dsub:bb} rule.
It is then interesting to ask whether kernel $D_{<:}\xspace$ with the \infref{dsub:bb} rule is undecidable. We conjecture that it is, but we do not know how to prove it, and we expect that the proof will not be straightforward. The first problem is to identify a suitable undecidable problem to reduce from. Most well-known undecidable problems have a clear correspondence to Turing machines, which have deterministic execution, whereas (kernel) $D_{<:}\xspace$ can have multiple derivations witnessing the same conclusion. Therefore, the second step would be to find a deterministic fragment of $D_{<:}\xspace$ that is still undecidable due to bad bounds. Indeed, discovering a deterministic fragment was also the first step of \citet{fsub-undec}. Given the complexity of $D_{<:}\xspace$, it is hard even to find a fragment that meets these criteria. This problem is interesting because it investigates the consequences of supporting the \infref{dsub:bb} rule. Currently, in both kernel and strong kernel $D_{<:}\xspace$, the \infref{dsub:bb} rule is simply removed. This is consistent with the Scala compiler, which also does not implement this rule. However, is it possible to support a fragment of this rule? We know that in $D_{<:}\xspace$, the \infref{dsub:trans} rule and the \infref{dsub:bb} rule are equivalent, so recovering a fragment of bad bounds recovers a fragment of transitivity as well. Moreover, some uses of the rule are not necessarily bad. Consider the following example: \begin{mathpar} \subtypingd[x : \{A : \bot .. \top\}; y : \{A : \bot .. \top\}; z : \{A : x.A .. y.A\}]{x.A}{y.A} \end{mathpar} In this judgment, before $z$ is introduced, $x.A$ and $y.A$ have no particular relation, but $z$ claims that $x.A$ is a subtype of $y.A$. This example does not look as bad as other instances of bad bounds, such as the one asserting that $\top$ is a subtype of $\bot$, because it is achievable. It would be nice to find a decidable fragment that supports examples such as this.
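Concretely, the judgment above can be derived using the unravelled selection rules of \Cref{lem:dsub-unravel} together with \infref{dsub:trans}. The following derivation sketch is ours and is only illustrative; the premises of the two selection steps follow from \infref{dsub:bnd} together with reflexivity, \infref{dsub:top} and \infref{dsub:bot}.

```latex
% Sketch: deriving x.A <: y.A from the declaration of z, where
% Gamma = x : {A : bot..top}; y : {A : bot..top}; z : {A : x.A..y.A}.
\begin{mathpar}
  \inferrule*[right=\infref{dsub:sel1p}]
    {\subtypingd{\Gamma(z)}{\{A : x.A .. \top\}}}
    {\subtypingd{x.A}{z.A}}

  \inferrule*[right=\infref{dsub:sel2p}]
    {\subtypingd{\Gamma(z)}{\{A : \bot .. y.A\}}}
    {\subtypingd{z.A}{y.A}}

  \inferrule*[right=\infref{dsub:trans}]
    {\subtypingd{x.A}{z.A} \\ \subtypingd{z.A}{y.A}}
    {\subtypingd{x.A}{y.A}}
\end{mathpar}
```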
Doing so will require a careful analysis of the decidability of bad bounds. \subsection{Related Work} There has been much work on proving undecidability in various subtyping settings. \citet{fsub-undec} presented a chain of reductions from two counter machines (TCM) to $F_{<:}\xspace$ and showed $F_{<:}\xspace$ undecidable. \citet{Kennedy2006OnDO} investigated a nominal calculus with variance, modelling the situations in Java, C\# and Scala, and showed that this calculus is undecidable due to three factors: contravariant generics, large class hierarchies, and multiple inheritance. \citet{Wehr:2009:DSB:1696759.1696773} considered two calculi with existential types, $\mathcal{EX}_{impl}$ and $\mathcal{EX}_{uplo}$, and proved both to be undecidable. In $\mathcal{EX}_{uplo}$, each type variable has either an upper or a lower bound but not both, so this calculus is related to $D_{<:}\xspace$; however, since no variable has both bounds, it does not expose the bad bounds phenomenon. \citet{Grigore:2017:JGT:3009837.3009871} proved Java generics undecidable by reducing Turing machines to a fragment of Java with contravariance. So far, work on the $DOT\xspace$ calculi has mainly focused on soundness proofs \cite{wadlerfest-dot,oopsla-dot,simple-sound-proof}. \citet{abel-algorithmic} presented step subtyping as a partial algorithm for $DOT\xspace$ typing. In this paper, we have shown that the fragment of $D_{<:}\xspace$ typed by step subtyping is kernel $D_{<:}\xspace$. \citet{ASPINALL2001273} showed a calculus with dependent types and subtyping that is decidable due to the lack of a $\top$ type. \citet{Greenman:2014:GFP:2666356.2594308} identified the Material-Shape Separation. This separation describes two different usages of interfaces; as long as no interface is used in both ways, the type checking problem is decidable by a simple algorithm. The undecidability proof in this paper has been mechanized in Agda.
There are other fundamental results on formalizing proofs of undecidability. \citet{pcp-undec} mechanized undecidability proofs of various well-known undecidable problems, including the Post correspondence problem (PCP), string rewriting (SR) and the modified Post correspondence problem. Their proofs are based on Turing machines. In contrast, \citet{10.1007/978-3-319-66107-0_13} used a call-by-value lambda calculus as the computational model. \citet{Forster:2019:CUI:3293880.3294096} proved undecidability of intuitionistic linear logic by reducing from PCP. \section{Introduction} The Dependent Object Types (DOT\xspace) calculus has received attention as a model for the Scala type system~\citep[etc.]{wadlerfest-dot,oopsla-dot,simple-sound-proof}. The calculus features objects with abstract type members with upper and lower bounds, and path-dependent types to select those type members. It also supports object self-references, intersection types, and dependent function types. To implement any type system in a compiler requires a type checking \emph{algorithm}. If type checking is undecidable, a compiler writer needs at least a semi-algorithm, or an algorithm for a decidable variant of the type system. Type checking DOT\xspace has been conjectured to be undecidable because bounded quantification is undecidable in $F_{<:}\xspace$~\citep{fsub-undec}. However, such informal reasoning about DOT\xspace can turn out to be incorrect, as we show with a simple example in \Cref{ssec:ipf}. Formally determining decidability of DOT\xspace turns out to be surprisingly challenging. It is challenging even for $D_{<:}\xspace$, a restriction of DOT\xspace that removes self-references and intersection types, leaving type members and path-dependent types that select them~\citep{wadlerfest-dot,defint}. In this paper, our focus is entirely on decidability of $D_{<:}\xspace$ and its variants.
\newcommand{\textcircled{\small 1}\xspace}{\textcircled{\small 1}\xspace} \newcommand{\textcircled{\small 2}\xspace}{\textcircled{\small 2}\xspace} \newcommand{\textcircled{\small 3}\xspace}{\textcircled{\small 3}\xspace} A general technique to prove a decision problem $P$ undecidable is \emph{reduction} from a known undecidable problem $Q$. This requires \textcircled{\small 1}\xspace defining a mapping $M$ from instances of $Q$ to instances of $P$ and proving that $p$ is a yes-instance of $P$ if \textcircled{\small 2}\xspace and only if \textcircled{\small 3}\xspace $q$ is a yes-instance of $Q$. \citet{wadlerfest-dot} carries out \textcircled{\small 1}\xspace and \textcircled{\small 2}\xspace for a reduction from $F_{<:}\xspace$ to $D_{<:}\xspace$. However, in \Cref{ssec:ipf}, we identify a counterexample to \textcircled{\small 3}\xspace. This means that the proposed mapping \textcircled{\small 1}\xspace cannot be used to prove $D_{<:}\xspace$ undecidable. Based on the counterexample, we define $\fsub^-$, an undecidable fragment of $F_{<:}\xspace$ that is better suited for reduction to $D_{<:}\xspace$. However, the reduction is still thwarted by subtyping transitivity, which is posed as an explicit inference rule in $D_{<:}\xspace$. In $D_{<:}\xspace$, all reasoning about any subtyping relationship $S <: U$ must consider the possibility that it arose by transitivity $S <: T <: U$ through some arbitrary and unknown type $T$. In previous work on DOT\xspace and $D_{<:}\xspace$, a recurring challenge has been the concept of \emph{bad bounds}. In the presence of a type member declaration $x: \{A:S..U\}$ with upper and lower bounds, the defining subtyping relationships $S <: x.A$ and $x.A <: U$ conspire with transitivity to induce the possibly unexpected and undesirable subtyping relationship $S <: U$ between the bounds.
For $F_{<:}\xspace$, there is a normal form of the subtyping rules that achieves transitivity without an explicit rule~\citep{10.1007/3-540-52590-4_45,fsub-undec}. We discover an analogous normal form for $D_{<:}\xspace$ in \Cref{ssec:nf}. In particular, we show that to achieve transitivity in $D_{<:}\xspace$ normal form, it is both necessary and sufficient to express the bad bounds concept as an explicit rule (\infref{dsub:bb}) and add it to the obvious fundamental rules that define the meaning of each form of type. $D_{<:}\xspace$ normal form turns out to have convenient properties and becomes the core concept underlying all of our developments. We prove undecidability of $D_{<:}\xspace$ by a reduction from $\fsub^-$ to $D_{<:}\xspace$ normal form. In $D_{<:}\xspace$ normal form, undecidability is crisply characterized by two specific subtyping rules. The first is the \infref{dsub:all} rule that compares function types, which is well known from $F_{<:}\xspace$ as the root cause of its undecidability. In $F_{<:}\xspace$, this rule can be restricted to a kernel version that applies only to functions with equal parameter types, making the resulting \emph{kernel} $F_{<:}\xspace$ decidable. The second is the \infref{dsub:bb} rule that models bad bounds. If the \infref{dsub:bb} rule is removed from $D_{<:}\xspace$ and the \infref{dsub:all} rule is replaced with the kernel version, the resulting kernel $D_{<:}\xspace$ becomes decidable. Moreover, we show that kernel $D_{<:}\xspace$ is exactly the fragment of (full) $D_{<:}\xspace$ that can be typed by the partial typing algorithm of \citet{abel-algorithmic}. Nieto identified a counterexample demonstrating that the subtyping relation implemented by the Scala compiler violates transitivity. The violation corresponds directly to the \infref{dsub:bb} rule. The implementation of subtyping in the compiler does not implement this rule.
This observation motivates dropping this problematic rule from practical, decidable variants of $D_{<:}\xspace$ normal form and DOT\xspace (when a normal form for DOT\xspace is found). The kernel restriction of the \infref{dsub:all} rule seriously limits expressiveness in $D_{<:}\xspace$ because it prevents comparison between parameter types of functions. This disables the case in which the parameter types are type aliases of each other. For example, in the scope of a type member declaration $x: \{A: T..T\}$, the types $x.A$ and $T$ should be considered equivalent. To address this limitation, we define a \emph{strong kernel} variant of the \infref{dsub:all} rule that allows comparison between parameter types. The expressiveness of strong kernel $D_{<:}\xspace$ is strictly between kernel $D_{<:}\xspace$ and full $D_{<:}\xspace$, but unlike full $D_{<:}\xspace$, strong kernel $D_{<:}\xspace$ is decidable. Finally, we provide stare-at subtyping, an algorithm to decide subtyping in strong kernel $D_{<:}\xspace$. To summarize, our contributions are: \begin{enumerate} \item a counterexample to the previously proposed reduction from $F_{<:}\xspace$ to $D_{<:}\xspace$, \item $D_{<:}\xspace$ normal form and its equivalence to $D_{<:}\xspace$, \item undecidability of $D_{<:}\xspace$ by reduction from $\fsub^-$, \item equivalence of kernel $D_{<:}\xspace$ and the fragment of $D_{<:}\xspace$ typeable by Nieto's algorithm, \item strong kernel $D_{<:}\xspace$, and \item the stare-at algorithm for typing strong kernel $D_{<:}\xspace$. \end{enumerate} We have verified the proofs of our lemmas and theorems in a combination of Coq and Agda. The formalization is included as supplementary material and will be submitted as an artifact. The properties of the variants of $D_{<:}\xspace$ are summarized in \Cref{tab:summary}. 
\begin{table}[] \begin{tabular}{|l|l|l|l|} \hline Name & \infref{dsub:all} rule & \infref{dsub:bb} rule & Decidability \\ \hline $D_{<:}\xspace$ and $D_{<:}\xspace$ normal form & full \infref{dsub:all} & $\checkmark$ & undecidable (\Cref{ssec:nf}) \\ & full \infref{dsub:all} & $\times$ & undecidable (\Cref{ssec:nf}) \\ Strong kernel $D_{<:}\xspace$ & \infref{dsub:sk:all} & $\times$ & decidable by Stare-at subtyping (\Cref{ssec:stareat}) \\ Kernel $D_{<:}\xspace$ & kernel \infref{dsub:k:all} & $\times$ & decidable by Step subtyping (\Cref{ssec:step}) \\ \hhline{|=|=|=|=|} & \infref{dsub:k:all} or \infref{dsub:sk:all} & $\checkmark$ & unknown \\\hline \end{tabular} \caption{Summary of $D_{<:}\xspace$ variants} \label{tab:summary} \end{table} \subsection{Motivation and Definition}\label{ssec:kdsub-def} In the previous section, we showed that both typing and subtyping in $D_{<:}\xspace$ are undecidable. A natural question to ask is \emph{what fragments of $D_{<:}\xspace$ are decidable?} In this section, we consider one such fragment. We base our adjustments to $D_{<:}\xspace$ on its normal form. The first adjustment is inspired by $F_{<:}\xspace$, which becomes decidable if its \infref{fsub:all} rule is restricted to a \emph{kernel} rule that requires the parameter types of both universal types to be identical~\cite{Cardelli:1985:UTD:6041.6042}. We apply the same restriction to the $D_{<:}\xspace$ \infref{dsub:all} rule. The second adjustment is to remove the \infref{dsub:bb} rule. There are several reasons for that: \begin{enumerate} \item Bad bounds are consequences of unintended interactions between the \infref{dsub:sel1p}, \infref{dsub:sel2p} and \infref{dsub:trans} rules. \item \citet{abel-algorithmic} observed that the implementation of subtyping in the Scala compiler violates transitivity in some cases, and these cases correspond exactly to the \infref{dsub:bb} rule. That is, the Scala compiler does not implement this rule. 
\item We conjecture that a calculus with bad bounds will be undecidable. \end{enumerate} The calculus after these two changes is shown in \Cref{fig:dsub_kernel}. We will see that this calculus is decidable, so we call it \emph{kernel $D_{<:}\xspace$}, following the convention in \cite{tapl}. \input{figures/definitions/dsub_kernel} We can show that kernel $D_{<:}\xspace$ is sound with respect to the original (full) $D_{<:}\xspace$: \begin{theorem} If $\subtypingdk S U$, then $\subtypingd S U$. \end{theorem} If kernel $D_{<:}\xspace$ is decidable, it cannot also be complete for full $D_{<:}\xspace$. For example, it does not admit the following subtyping judgment that is admitted by full $D_{<:}\xspace$. \begin{mathpar} \subtypingd[x : \{A : \top .. \top\}]{\forall(y : x.A)\top}{\forall(y : \top)\top} \end{mathpar} Kernel $D_{<:}\xspace$ rejects it because $x.A$ and $\top$ are not syntactically identical. Moreover, kernel $D_{<:}\xspace$ rejects conclusions that can only be drawn from bad bounds, such as: \begin{mathpar} \subtypingd[x : \{A : \top .. \bot\}]\top \bot \end{mathpar} This judgment can only be achieved by invoking \infref{dsub:trans} or \infref{dsub:bb}, but both of these rules are absent from kernel $D_{<:}\xspace$. \subsection{Step subtyping}\label{ssec:step} \citet{abel-algorithmic} defined step subtyping, a partial algorithm for deciding a fragment of $D_{<:}\xspace$ subtyping based on ideas developed for subtyping in $F_{<:}\xspace$ by~\citet{tapl}. We briefly review the step subtyping algorithm here. In the next section, we will observe that the fragment of $D_{<:}\xspace$ subtyping decided by the algorithm turns out to be exactly the kernel $D_{<:}\xspace$ that we defined in the previous section. We made some adjustments to the presentation to set up a framework, so the definition is not identical to Nieto's, but the adjustments are minor and have no impact on expressiveness. 
\input{figures/algs/dsub_step} \begin{definition} Step subtyping is defined using the inference rules in \Cref{fig:dsub-step-sub}. The algorithm searches for a derivation using these rules, backtracking if necessary. Backtracking eventually terminates by \Cref{thm:step-term}. \end{definition} The definitions of kernel $D_{<:}\xspace$ and step subtyping look similar. The differences are in the cases related to path types. For these types, step subtyping uses three additional operations: \exposure ($\Uparrow$), \upcast ($\nearrow$), and \downcast ($\searrow$). The purpose of \upcast (\downcast) is, given a path type $x.A$, to look up $x$ in the typing context, find a type member declaration $\{A:S..U\}$, and read off the upper bound $U$ (lower bound $S$, respectively). A complication, however, is that the typing context could assign to $x$ another path type. Therefore, \upcast and \downcast use \exposure, whose purpose is to convert a type that could be a path type to a supertype that is guaranteed not to be a path type. \exposure maps every non-path type to itself, and it maps a path type $x.A$ to its supertype $U$ in a similar way to \upcast. However, $U$ could itself be a path type, so, unlike \upcast, \exposure calls itself recursively on $U$. This guarantees that the type returned from \exposure is never a path type. The definitions of these operations are shown in \Cref{fig:dsub-exposure}. The \infref{dsub:expo:top}, \infref{dsub:uc:top} and \infref{dsub:dc:bot} rules are defined to make the operations total functions. We mark them with asterisks to indicate that they apply only when no other rules do, so each of the three operations has exactly one rule to apply for any given input. \upcast and \downcast are shallow wrappers over \exposure; notice that they are not even recursive. When handling a path type $x.A$, they use \exposure to find a non-path supertype of $\Gamma(x)$ and simply return bounds in the appropriate directions.
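As a concrete illustration, the following Python sketch implements our reading of \exposure, \upcast, \downcast, and a kernel-style subtyping check. The ASTs, the rule-selection order, and the handling of the asterisked default cases are our own simplifications, not Nieto's exact definitions; in particular, the defaults simply return $\top$ (for \exposure and \upcast) and $\bot$ (for \downcast).

```python
from dataclasses import dataclass

# Illustrative ASTs for D<: types; Sel("x") stands for the path type x.A
# (recall that A is the only type label).
@dataclass(frozen=True)
class Top: pass

@dataclass(frozen=True)
class Bot: pass

@dataclass(frozen=True)
class Bnd:                 # type declaration {A : lo .. hi}
    lo: object
    hi: object

@dataclass(frozen=True)
class Sel:                 # path type x.A
    var: str

@dataclass(frozen=True)
class All:                 # dependent function type  forall(x : arg) ret
    param: str
    arg: object
    ret: object

def split(ctx, x):
    """Return (Gamma_1, T) where Gamma = Gamma_1, x : T, Gamma_2."""
    for i, (y, t) in enumerate(ctx):
        if y == x:
            return ctx[:i], t
    raise KeyError(x)

def expose(ctx, t):
    """Exposure: compute a non-path supertype of t.  The recursive calls
    run in the truncated context Gamma_1, which ensures termination."""
    if not isinstance(t, Sel):
        return t                       # non-path types map to themselves
    ctx1, bound = split(ctx, t.var)
    e = expose(ctx1, bound)
    if isinstance(e, Bot):
        return Bot()
    if isinstance(e, Bnd):
        return expose(ctx1, e.hi)      # continue exposing the upper bound
    return Top()                       # default (asterisked) case

def upcast(ctx, t):                    # x.A  ->  its upper bound
    ctx1, bound = split(ctx, t.var)
    e = expose(ctx1, bound)
    return e.hi if isinstance(e, Bnd) else Top()

def downcast(ctx, t):                  # x.A  ->  its lower bound
    ctx1, bound = split(ctx, t.var)
    e = expose(ctx1, bound)
    return e.lo if isinstance(e, Bnd) else Bot()

def subtype(ctx, s, u):
    """Kernel-style step subtyping: identical parameter types in the
    All case, and no bad-bounds rule."""
    if isinstance(u, Top) or isinstance(s, Bot):
        return True
    if isinstance(s, Sel) and s == u:
        return True                    # reflexivity on path types
    if isinstance(s, Bnd) and isinstance(u, Bnd):
        return subtype(ctx, u.lo, s.lo) and subtype(ctx, s.hi, u.hi)
    if isinstance(s, All) and isinstance(u, All) \
            and s.param == u.param and s.arg == u.arg:
        return subtype(ctx + [(s.param, s.arg)], s.ret, u.ret)
    if isinstance(s, Sel) and subtype(ctx, upcast(ctx, s), u):
        return True                    # go up from s
    if isinstance(u, Sel) and subtype(ctx, s, downcast(ctx, u)):
        return True                    # go down from u
    return False
```

For instance, with $\Gamma = x : \{A : \bot..\top\};\; y : \{A : x.A..x.A\}$, the sketch derives both $x.A <: y.A$ and $y.A <: x.A$, while it rejects the comparison of dependent function types whose parameter types are aliases rather than syntactically identical, just as kernel $D_{<:}\xspace$ does.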
It is possible that \upcast and \downcast return other path types. Notice that in the \infref{dsub:expo:bot} and \infref{dsub:expo:bnd} rules, the recursive calls continue with $\Gamma_1$, the context preceding $x$. This ensures termination of \exposure. As long as the original context $\Gamma_1, x:T, \Gamma_2$ is well-formed, $T$ is closed in the truncated context $\Gamma_1$. Nieto showed that step subtyping is a sound and terminating algorithm. \begin{theorem}\citep{abel-algorithmic} Step subtyping as an algorithm is sound w.r.t. full $D_{<:}\xspace$. \begin{mathpar} \text{If }\subtypingda S U\text{, then }\subtypingd S U \end{mathpar} \end{theorem} \begin{theorem}\citep{abel-algorithmic}\label{thm:step-term} Step subtyping as an algorithm terminates. \end{theorem} \subsection{Soundness and Completeness of Step Subtyping}\label{ssec:sck} In this section, we will show that the subset of $D_{<:}\xspace$ subtyping relationships that step subtyping discovers turns out to be exactly the relation defined by the declarative kernel $D_{<:}\xspace$ rules. We begin by proving some basic properties of kernel $D_{<:}\xspace$. Although the kernel $D_{<:}\xspace$ subtyping reflexivity rule \infref{dsub:k:vrefl} applies only to path types, subtyping is actually reflexive for all types: \begin{lemma} Kernel $D_{<:}\xspace$ subtyping is reflexive. \begin{align*} \subtypingdk T T \end{align*} \end{lemma} Since kernel $D_{<:}\xspace$ does not have the \infref{dsub:bb} or \infref{dsub:trans} rules, transitivity no longer holds in general, but it does hold on $\top$ and $\bot$: \begin{lemma}\label{lem:kernel:trans:top} If $\subtypingdk \top U$, then $\subtypingdk S U$. \end{lemma} \begin{lemma}\label{lem:kernel:trans:bot} If $\subtypingdk S \bot$, then $\subtypingdk S U$. \end{lemma} Comparing step subtyping with kernel $D_{<:}\xspace$, we will show soundness of step subtyping first and completeness second. In step subtyping, the operations are separated into two layers. 
The first is the subtyping algorithm itself and the second is \exposure, which handles path types. The proof therefore works in the reverse direction, first connecting \exposure with kernel $D_{<:}\xspace$. \begin{lemma}\label{lem:expo-dsubk} If $\expod S T$ and $\subtypingdk T U$, then $\subtypingdk S U$. \end{lemma} \begin{proof} By induction on the derivation of \exposure. \end{proof} We can then show that step subtyping is sound. \begin{theorem}\label{thm:kernel:sound} (soundness of step subtyping w.r.t. kernel $D_{<:}\xspace$) If $\subtypingda S U$, then $\subtypingdk S U$. \end{theorem} \begin{proof} By induction on step subtyping. From the rules, we can see that kernel $D_{<:}\xspace$ and step subtyping are almost identical, except for the \infref{dsub:step:sel1} and \infref{dsub:step:sel2} cases. These cases can be discharged by expanding \upcast and \downcast and then applying \Cref{lem:expo-dsubk}. \end{proof} Now we proceed in the opposite direction, proving completeness of step subtyping. \begin{theorem}\label{thm:completeness-kernel} (completeness of step subtyping w.r.t. kernel $D_{<:}\xspace$) If $\subtypingdk S U$, then $\subtypingda S U$. \end{theorem} \begin{proof} The proof requires an intricate strengthening of the induction hypothesis: if $\subtypingdk S U$ and this derivation contains $n$ steps, then $\subtypingda S U$, and if $U$ is of the form ${\{A : T_1 .. T_2\}}$, then $\expod S {S'}$ for some $S'$, and either \begin{enumerate} \item $S' = \bot$ or \item $S' = \{A:T'_1..T'_2\}$ for some $T'_1$ and $T'_2$ such that \begin{enumerate} \item $\subtypingda {T_1} {T_1'}$ and \item $\subtypingdk{T_2'}{T_2}$, and the number of steps in the derivation of $\subtypingdk{T_2'}{T_2}$ is less than or equal to $n$. \end{enumerate} \end{enumerate} The proof is by strong induction on $n$. To prove $\subtypingda S U$, the non-trivial cases are the \infref{dsub:k:sel1} and \infref{dsub:k:sel2} cases; we discuss the latter.
The antecedent is $\subtypingdk{\Gamma(x)}{\{A : \bot .. U\}}$. This case requires the strengthened induction hypothesis, since the original would only imply that $\subtypingda{\Gamma(x)}{\{A : \bot .. U\}}$, which is insufficient to establish $\subtypingda{x.A}{U}$. To establish this conclusion, we wish to apply the \infref{dsub:step:sel2} rule. The strengthened induction hypothesis is designed specifically to provide the necessary premises of this rule. It remains to prove the strengthened induction hypothesis. The type $U$ can have the specified form $\{A : T_1 .. T_2\}$ in the conclusions of three rules: \infref{dsub:k:bot}, \infref{dsub:k:bnd} and \infref{dsub:k:sel2}. Only the \infref{dsub:k:sel2} case is interesting. The conclusion of this rule forces $S = y.A$ for some $y$, and the antecedent is $\subtypingdk{\Gamma(y)}{\{A : \bot .. \{A : T_1 .. T_2\}\}}$. Applying the induction hypothesis to this antecedent leads to two cases: \begin{enumerate} \item When $\expod{\Gamma(y)}{\bot}$, the goal $\expod{y.A}{\bot}$ follows by \infref{dsub:expo:bot}. \item Otherwise, for some $T_1'$ and $T_2'$, we obtain additional antecedents: \begin{enumerate} \item $\expod{\Gamma(y)}{\{A : T_1' .. T_2' \}}$, \item $\subtypingda{\bot}{T_1'}$, and \item $\subtypingdk{T_2'}{\{A : T_1 .. T_2\}}$ by a derivation with strictly fewer than $n$ steps. \end{enumerate} The intention is to apply the \infref{dsub:expo:bnd} rule, but this rule requires an \exposure on $T_2'$ as well. This can be achieved by applying the inductive hypothesis to the third antecedent again. This yields $\expod{T_2'}{T_2''}$ for some $T_2''$, so we can apply \infref{dsub:expo:bnd} to obtain $\expod{y.A}{T_2''}$, where $T_2''$ satisfies the properties that the strengthened induction hypothesis requires of $S'$, concluding this case.
\end{enumerate} \end{proof} Hence, we have shown that the subrelation of $D_{<:}\xspace$ subtyping induced by the step subtyping algorithm is exactly the kernel $D_{<:}\xspace$ subtyping relation. \section{Kernel $D_{<:}\xspace$} \input{sec_kernel/motiv} \input{sec_kernel/step} \input{sec_kernel/props} \subsection{Properties of Strong Kernel $D_{<:}\xspace$} In this section, we will prove that the subtyping relation defined by strong kernel $D_{<:}\xspace$ is in between kernel $D_{<:}\xspace$ and full $D_{<:}\xspace$ in expressiveness. As a first step, we need to prove reflexivity. \begin{lemma} Strong kernel $D_{<:}\xspace$ is reflexive. \begin{align*} \subtypingdsk T T \end{align*} \end{lemma} \begin{proof} By induction on $T$. \end{proof} In the next two theorems, we wish to show that strong kernel $D_{<:}\xspace$ is in between kernel $D_{<:}\xspace$ and full $D_{<:}\xspace$ in terms of expressiveness: \begin{theorem}\label{thm:kdsub2skdsub} If $\subtypingdk S U$, then $\subtypingdsk[\Gamma]S U[\Gamma]$. \end{theorem} \begin{proof} By induction on the derivation. The \infref{dsub:k:all} case requires reflexivity of strong kernel $D_{<:}\xspace$. \end{proof} \begin{theorem}\label{thm:skdsub2dsub} If $\subtypingdsk[\Gamma]S U[\Gamma]$, then $\subtypingd S U$. \end{theorem} Before we can prove this theorem, we need to define a new concept, a relationship between the two typing contexts. \begin{definition}\label{def:opesub} The order-preserving sub-environment relation between two contexts, or $\opesub$, is defined in \Cref{fig:opesub_def}. \end{definition} \input{figures/definitions/opesub_def} Intuitively, if $\opesubd{\Gamma}$, then $\Gamma$ is a more ``informative'' context than $\Gamma'$. $\opesub$ is a combination of the narrowing and weakening properties. The following properties of $\opesub$ confirm this intuition. \begin{lemma} $\opesub$ is reflexive. \begin{align*} \opesubd{\Gamma}[\Gamma] \end{align*} \end{lemma} \begin{lemma} $\opesub$ is transitive.
\begin{align*} \text{If }\opesubd{\Gamma_1}[\Gamma_2]\text{ and }\opesubd{\Gamma_2}[\Gamma_3]\text{, then }\opesubd{\Gamma_1}[\Gamma_3]. \end{align*} \end{lemma} \begin{theorem} (respectfulness) Full $D_{<:}\xspace$ subtyping is preserved by $\opesub$. If $\opesubd{\Gamma}$ and $\subtypingd[\Gamma'] S U$, then $\subtypingd S U$. \end{theorem} Given these results, we can proceed to proving the soundness of strong kernel $D_{<:}\xspace$ with respect to full $D_{<:}\xspace$, \Cref{thm:skdsub2dsub}. We prove a stronger result: \begin{theorem} If $\subtypingdsk S U$, $\opesubd{\Gamma}[\Gamma_1]$ and $\opesubd{\Gamma}[\Gamma_2]$, then $\subtypingd S U$. \end{theorem} \begin{proof} By induction on the strong kernel subtyping derivation. \end{proof} Then \Cref{thm:skdsub2dsub} follows from reflexivity of $\opesub$. Since we will show that strong kernel $D_{<:}\xspace$ is decidable, it cannot also be complete with respect to full $D_{<:}\xspace$. One example is bad bounds: we are still unable to admit the following judgment, which requires bad bounds. \begin{mathpar} \subtypingd[x : \{A : \top .. \bot\}]\top \bot \end{mathpar} This shows that full $D_{<:}\xspace$ is strictly more expressive than strong kernel $D_{<:}\xspace$. Even if we remove the \infref{dsub:bb} rule from full $D_{<:}\xspace$, the \infref{dsub:sk:all} rule is strictly weaker than the full \infref{dsub:all} rule. For example, the following is a derivation in full $D_{<:}\xspace$: \begin{mathpar} \inferrule*[right=\infref{dsub:all}] {\inferrule*[right=\infref{dsub:bnd}] {\text{straightforward}} {\subtypingd[]{\{A : \bot .. \bot\}}{\{A : \bot .. \top\}}} \\ \inferrule*[right=\infref{dsub:sel2}] {\text{straightforward}} {\subtypingd[x : \{A : \bot .. \bot\}]{x.A}{\bot}}} {\subtypingd[]{\forall(x : \{A : \bot .. \top\})x.A}{\forall(x : \{A : \bot ..
\bot\})\bot}} \end{mathpar} This judgment is rejected by strong kernel $D_{<:}\xspace$ because the comparison of the returned types relies on the parameter type to the right of $<:$, which is not possible in strong kernel $D_{<:}\xspace$. Notice that this example uses aliasing information from the right parameter type (i.e. that $x.A$ is an alias of $\bot$) to reason about the left return type (i.e. that $x.A$ is a subtype of $\bot$), which is something that strong kernel $D_{<:}\xspace$ cannot do. On the other hand, strong kernel $D_{<:}\xspace$ \emph{does admit} the motivating aliasing example from the beginning of this section: \begin{align*} &\text{let }\Gamma = x : \{A : \top .. \top\} \\ &\inferrule*[right=\infref{dsub:sk:all}] {\inferrule*[right=\infref{dsub:sk:sel1}] {\text{reflexivity}} {\subtypingdskr[\Gamma]{x.A}{\top}[\Gamma]} \\ \inferrule*[right=\infref{dsub:sk:top}] { } {\subtypingdsk[\Gamma ; y : x.A]{\top}{\top}[\Gamma ; y : \top]}} {\subtypingdsk[\Gamma]{\forall(y : x.A)\top}{\forall(y : \top)\top}[\Gamma]} \end{align*} In general, the \infref{dsub:sk:all} rule admits any subtyping between parameter types that is admitted by strong kernel. This shows that strong kernel $D_{<:}\xspace$ is strictly more powerful than kernel $D_{<:}\xspace$. \subsection{Motivation and Definition}\label{ssec:skdsub-motiv} In the previous section, we defined a decidable fragment of $D_{<:}\xspace$, kernel $D_{<:}\xspace$. Notwithstanding its decidability, it comes with obvious disadvantages. One example is the judgment we presented in \Cref{ssec:kdsub-def}: \begin{mathpar} \subtypingd[x : \{A : \top .. \top\}]{\forall(y : x.A)\top}{\forall(y : \top)\top} \end{mathpar} This judgment is admitted in full $D_{<:}\xspace$ but not kernel $D_{<:}\xspace$. The latter rejects this judgment because it requires the parameter types to be syntactically identical. 
However, we can see that here $x.A$ and $\top$ are in a special situation: $x.A$ is defined with $\top$ as both its lower and upper bounds, which makes $x.A$ an \emph{alias} for $\top$. In Scala, we would like to be able to use aliased types interchangeably. The kernel requirement of syntactically identical parameter types significantly restricts the usefulness of type aliases. Hence, the aim of this section is to (at least) lift this restriction while maintaining decidability. The inspiration for the new calculus comes from writing out the typing context \emph{twice} in a subtyping derivation. For example, the \infref{dsub:all} rule is: \begin{mathpar} \inferrule*[right=\infref{dsub:all}] {\subtypingd{S'}{S} \\ \subtypingd[\Gamma; x : S']{U_x}{U'_x}} {\subtypingd{\forall(x : S)U}{\forall(x : S')U'}} \end{mathpar} Let us write the contexts twice for this rule: \begin{mathpar} \inferrule*[right=\infnlabel{All-TwoContexts}{dsub:all-2g}] {\subtypingdt{S'}{S} \\ \subtypingdt[\Gamma; x : S']{U_x}{U'_x}[\Gamma; x : S']} {\subtypingdt{\forall(x : S)U}{\forall(x : S')U'}} \end{mathpar} Now do the same for the kernel version too: \begin{mathpar} \inferrule*[right=\infnlabel{K-All-TwoContexts}{dsub:k:all-2g}] {\subtypingdt[\Gamma; x : S]{U_x}{U'_x}[\Gamma; x : S]} {\subtypingdt{\forall(x : S)U}{\forall(x : S)U'}} \end{mathpar} So far, both copies of the context have been the same, so the second copy is redundant. However, comparing these two rules for a moment, we start to see some potential for improvement. In the premise comparing $U_x <: U'_x$, the only difference is the primes on $S$ in the typing contexts: the first rule uses $S'$ on both sides, while the second rule uses $S$ on both sides. Since $U_x$ comes from a universal type where $x$ has type $S$, and $U'_x$ from one where $x$ has type $S'$, what if we took the middle ground between the two rules, and added $S$ to the left context and $S'$ to the right context?
\begin{mathpar} \inferrule*[right=All-AsymmetricContexts] {\subtypingdt{S'}{S} \\ \subtypingdt[\Gamma; x : S]{U_x}{U'_x}[\Gamma; x : S']} {\subtypingdt{\forall(x : S)U}{\forall(x : S')U'}} \end{mathpar} The new rule enables the contexts to be different, so it justifies maintaining both contexts. But how will a calculus with this hybrid rule behave? Will it be strictly in between the decidable kernel $D_{<:}\xspace$ and the undecidable full $D_{<:}\xspace$ in expressiveness? Will it be decidable? We will show that the answer to both questions is yes. The new hybrid rule allows comparison of function types with \emph{different} parameter types, and the return types are compared in two different contexts. In particular, it admits the example judgement with the aliased parameter types with which we began this section. We call this new calculus \emph{strong kernel $D_{<:}\xspace$}, and define it fully in \Cref{fig:dsub_skernel}. The \infref{dsub:sk:all} rule is the only rule that enables the two contexts to diverge. All of the other rules simply copy both contexts unchanged to the premises. \input{figures/definitions/dsub_skernel} \subsection{Properties of Stare-at Subtyping} We now move on to prove basic properties of the stare-at subtyping algorithm and its operation. We focus first on basic lemmas that ensure that the \revealing rules satisfy their intended specification. \begin{lemma} (\revealing gives prefixes) If $\revealingd S U$, then $\Gamma'$ is a prefix of $\Gamma$. \end{lemma} \begin{lemma} (\revealing returns no path) If $\revealingd S U$, then $U$ is not a path type. \end{lemma} \begin{lemma} (soundness of \revealing) If $\revealingd S U$, then $\subtypingd S U$. \end{lemma} \begin{lemma} (well-formedness condition) If $\revealingd S U$, $\Gamma$ is well-formed and $fv(S) \subseteq dom(\Gamma)$, then $\Gamma'$ is well-formed and $fv(U) \subseteq dom(\Gamma')$. \end{lemma} All these lemmas can be proved by direct induction. 
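The specification that these lemmas capture can be illustrated with a small Python sketch of \revealing, assuming a hypothetical tuple encoding of $D_{<:}\xspace$ types (`('top',)`, `('bot',)`, `('sel', x)` for $x.A$, `('bnd', S, U)` for $\{A : S .. U\}$) and contexts as lists of bindings. The fallback branches returning an empty context and $\top$ are an assumption made here to keep the function total; they are not a transcription of the figure.

```python
def revealing(ctx, t):
    """Return (ctx2, u): a prefix ctx2 of ctx and a non-path supertype u of t,
    with ctx2 long enough to cover the free variables of u."""
    if t[0] != 'sel':
        return ctx, t                       # non-path types reveal to themselves
    hits = [i for i, (y, _) in enumerate(ctx) if y == t[1]]
    if not hits:
        return [], ('top',)                 # totality fallback (assumed here)
    i = hits[-1]
    prefix = ctx[:i]                        # continue in the preceding context
    prefix2, revealed = revealing(prefix, ctx[i][1])
    if revealed[0] == 'bot':
        return prefix2, ('bot',)
    if revealed[0] == 'bnd':                # chase the upper bound of {A : S .. U}
        return revealing(prefix2, revealed[2])
    return [], ('top',)                     # fallback when nothing else applies
```

Both the prefix property and the non-path property of the result can be read off directly: every return either passes through a prefix of the input context or shrinks it further, and no branch returns a `('sel', ...)` value.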
\upcast and \downcast have properties similar to \revealing. The proofs are much simpler because the operations are not even recursive. \begin{lemma} The following all hold. \begin{enumerate} \item If $\udcd {x.A} T$, then $\Gamma'$ is a prefix of $\Gamma$. \item If $\ucd {x.A} T$, then $\subtypingd {x.A} T$. \item If $\dcd {x.A} T$, then $\subtypingd T {x.A}$. \item If $\udcd {x.A} T$, $\Gamma$ is well-formed and $x \in dom(\Gamma)$, then $\Gamma'$ is well-formed and $fv(T) \subseteq dom(\Gamma')$. \end{enumerate} \end{lemma} Now we can proceed to prove soundness of stare-at subtyping with respect to full $D_{<:}\xspace$. As before, we prove a stronger result. \begin{theorem} (soundness of stare-at subtyping) If $\stareat S U$, $\opesubd{\Gamma}[\Gamma_1]$ and $\opesubd{\Gamma}[\Gamma_2]$, then $\subtypingd S U$. \end{theorem} \begin{proof} By induction on the derivation of stare-at subtyping. \end{proof} A corollary is that if Alice and Bob begin with the same context, then stare-at subtyping is sound with respect to full $D_{<:}\xspace$. \begin{theorem} If $\stareat[\Gamma]S U[\Gamma]$, then $\subtypingd S U$. \end{theorem} Next, we want to examine the termination of the operations. First we want to make sure that \revealing terminates as an algorithm. \begin{lemma} \revealing terminates as an algorithm. \end{lemma} \begin{proof} The measure is the length of the input context (the number of variables in its domain). \end{proof} Now we want to examine the termination of stare-at subtyping. We first define the structural measures for types and contexts. \begin{definition} The measure $M$ of types and contexts is defined by the following equations. \begin{align*} M(\top) &= 1 \\ M(\bot) &= 1 \\ M(x.A) &= 2 \\ M(\forall(x : S) U) &= 1 + M(S) + M(U) \\ M(\{A : S .. U\}) &= 1 + M(S) + M(U) \\ \\ M(\Gamma) &= \sum_{x : T \in \Gamma} M(T) \end{align*} \end{definition} As we can see, the measure simply counts the syntactic size of types and contexts. 
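The measure can be transcribed directly. The following Python sketch computes $M$ for types and contexts, again under a hypothetical tuple encoding of $D_{<:}\xspace$ types; only the encoding is invented, the equations are those of the definition above.

```python
def m(t):
    """Syntactic size measure M on D<: types in a hypothetical tuple encoding:
    ('top',), ('bot',), ('sel', x) for x.A, ('all', x, S, U) for forall(x:S)U,
    and ('bnd', S, U) for {A : S .. U}."""
    tag = t[0]
    if tag in ('top', 'bot'):
        return 1
    if tag == 'sel':
        return 2
    if tag == 'all':
        return 1 + m(t[2]) + m(t[3])
    if tag == 'bnd':
        return 1 + m(t[1]) + m(t[2])
    raise ValueError('unknown type tag: %r' % (tag,))

def m_ctx(ctx):
    """M of a context: the sum of the measures of its bound types."""
    return sum(m(t) for _, t in ctx)
```

For example, $M(\forall(x : \{A : \bot .. \top\})\, x.A) = 1 + (1 + 1 + 1) + 2 = 6$.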
We can show that \revealing does not increase the input measure, and that \upcast and \downcast strictly decrease it. \begin{lemma} If $\revealingd S U$, then $M(\Gamma) + M(S) \ge M(\Gamma') + M(U)$. If $\udcd{x.A} U$, then $M(\Gamma) + M(x.A) > M(\Gamma') + M(U)$. \end{lemma} \begin{theorem}\label{thm:stare-at-term} Stare-at subtyping terminates as an algorithm. \end{theorem} \begin{proof} The measure is the sum of measures of all inputs: for $\stareat S U$, the measure is $M(\Gamma_1) + M(S) + M(U) + M(\Gamma_2)$. Since the measure just reflects the syntactic sizes, it is easy to see that it decreases in all of the cases other than \infref{dsub:sa:sel1} and \infref{dsub:sa:sel2}. These two cases are proven by the previous lemma. Notice that the proof is this easy because Alice and Bob use the returned contexts from \upcast and \downcast in both cases. \end{proof} \subsection{Stare-at Subtyping}\label{ssec:stareat} It remains to show that strong kernel $D_{<:}\xspace$ is decidable. We will present the decision procedure first. We will prove some of its properties in the next section, and finally prove that it is a sound and complete decision procedure for strong kernel $D_{<:}\xspace$ in \Cref{ssec:sc}. The decision procedure is shown in \Cref{fig:stare-at}. We call it \emph{stare-at subtyping}, inspired by the notation $\stareat S U$. If we see $\gg$ and $\ll$ as eyes and $<:$ as a nose, then the notation looks like a face, and the two eyes are staring at the nose. \input{figures/algs/stare_at} In the same way as for step subtyping, the stare-at subtyping algorithm searches for a derivation using the inference rules, backtracking when necessary. We will prove that this backtracking terminates (\Cref{thm:stare-at-term}). Stare-at subtyping generalizes step subtyping by operating on two contexts. One can think of stare-at subtyping as a collaborative game between two players, Alice and Bob.
Alice is responsible for the context and type to the left of $<:$ or $>:$, while Bob is responsible for the other side. In particular, Alice and Bob are completely independent and do not need to see the contexts or types held by their collaborator. Most of the rules are just straightforward extensions of the corresponding rules of step subtyping with two contexts, except for three cases: \infref{dsub:sa:all}, \infref{dsub:sa:sel1} and \infref{dsub:sa:sel2}. In the \infref{dsub:sa:all} rule, the parameter types are allowed to be different, so there is an additional premise that compares the parameter types. This rule can handle not only the aliasing example, but also cases where $S'$ is a strict subtype of $S$. When comparing the return types, Alice and Bob work on their own extended contexts, so subsequently, if Alice and Bob refer to $x$, they potentially see $x$ at different types. As in step subtyping, stare-at subtyping relies on another operation, \revealing, which generalizes \exposure to handle path types. \upcast and \downcast are generalized accordingly to reflect the differences between \exposure and \revealing. As in step subtyping, the \infref{dsub:rv:top}, \infref{dsub:u:top} and \infref{dsub:d:bot} rules only apply when no other rules apply and the three operations are all total. \input{figures/algs/dsub_revealing} \revealing is similar to \exposure in that it finds a non-path supertype of the given type, and its rules mirror those of \exposure. The difference is that in addition to a type, \revealing also returns a typing context. The typing context is a prefix of the input typing context long enough to type any free variables that may occur in the type that \revealing returns. This returned prefix context participates in further subtyping decisions and makes it quite easy to prove termination.
The change from \exposure to \revealing, namely the extra typing context that \revealing returns, is not related to the two typing contexts in strong kernel $D_{<:}\xspace$ and in the stare-at subtyping rules. Instead, it is motivated by the stare-at subtyping termination proof. We conjecture that if stare-at subtyping were to use \exposure instead of \revealing, it would still compute the same subtyping relation and it would still terminate, but proving termination would be significantly more difficult. The use of \revealing is sufficient for stare-at subtyping to satisfy the properties that we desire of it (which we will prove in the next two sections), and it makes the termination proof simpler than it would be with \exposure. The \upcast and \downcast rules have the same structure as those of step subtyping, except that they return the typing context that they receive from \revealing. In the cases where they return $\top$ or $\bot$, they return an empty context because these types have no free variables. As in step subtyping, the result types of \upcast and \downcast are used in the \infref{dsub:sa:sel1} and \infref{dsub:sa:sel2} subtyping rules. These rules use the shortened typing context that is returned from \upcast or \downcast in their recursive subtyping premises. \subsection{Soundness and Completeness of Stare-at Subtyping}\label{ssec:sc} In the previous section, we showed that stare-at subtyping terminates and is sound for full $D_{<:}\xspace$. In this section, we strengthen the soundness proof to strong kernel $D_{<:}\xspace$, and also prove completeness with respect to strong kernel $D_{<:}\xspace$, to show that the fragment of full $D_{<:}\xspace$ decided by stare-at subtyping is exactly strong kernel $D_{<:}\xspace$. Our overall approach will mirror the proofs from \Cref{ssec:sck} of soundness and completeness of step subtyping with respect to kernel $D_{<:}\xspace$. First, we connect \revealing with strong kernel $D_{<:}\xspace$.
\begin{lemma} If $\revealingd[\Gamma_1] S T$ and $\subtypingdsk T U$, then $\subtypingdsk S U$. \end{lemma} In this lemma, the $\Gamma_1'$ returned from \revealing is not used in the rest of the statement. The intuition is that strong kernel does not shrink the context as \revealing does, so $\Gamma_1'$ is irrelevant. This is all we need to show that stare-at subtyping is sound with respect to strong kernel $D_{<:}\xspace$. \begin{theorem} (soundness of stare-at subtyping w.r.t. strong kernel $D_{<:}\xspace$) If $\stareat S U$, then $\subtypingdsk S U$. \end{theorem} \begin{proof} By induction on the derivation of stare-at subtyping; the proof is very similar to that of \Cref{thm:kernel:sound}. \end{proof} The completeness proof is slightly trickier, because in the \infref{dsub:sa:sel1} and \infref{dsub:sa:sel2} cases, Alice and Bob work on prefix contexts in the recursive calls. In contrast, in the \infref{dsub:sk:sel1} and \infref{dsub:sk:sel2} rules of strong kernel $D_{<:}\xspace$, the subtyping judgements in the premises use the same full contexts as the conclusions. Therefore, we need to make sure that working on smaller contexts will not change the outcome. \begin{theorem}\label{thm:truncation} (strengthening of stare-at subtyping) If $\stareat[\Gamma_1 ; \Gamma'_1 ; \Gamma''_1] S U[\Gamma_2; \Gamma'_2 ; \Gamma''_2]$, $fv(S) \subseteq dom(\Gamma_1 ; \Gamma''_1)$ and $fv(U) \subseteq dom(\Gamma_2 ; \Gamma''_2)$, then $\stareat[\Gamma_1 ; \Gamma''_1] S U[\Gamma_2; \Gamma''_2]$. \end{theorem} \begin{proof} By induction on the derivation of stare-at subtyping. \end{proof} By taking $\Gamma''_1$ and $\Gamma''_2$ to be empty, we know Alice and Bob are safe to work on the prefix contexts. Now we can prove the completeness of stare-at subtyping. \begin{theorem} (completeness of stare-at subtyping w.r.t. strong kernel $D_{<:}\xspace$) If $\subtypingdsk S U$, then $\stareat S U$.
\end{theorem} \begin{proof} The proof is similar to that of \Cref{thm:completeness-kernel}. We also need to strengthen the inductive hypothesis to the following: if $\subtypingdsk S U$ and this derivation contains $n$ steps, then $\stareat S U$, and if $U$ is of the form $\{A : T_1 .. T_2\}$, then $\revealingd[\Gamma_1] S {S'}$, and either \begin{enumerate} \item $S' = \bot$, or \item $S' = \{A : T_1' .. T_2' \}$ for some $T_1'$ and $T_2'$, such that \begin{enumerate} \item $\stareat {T_1} {T_1'}$ and \item $\subtypingdsk{T_2'}{T_2}$, and the number of steps in this derivation is less than or equal to $n$. \end{enumerate} \end{enumerate} The \infref{dsub:sk:sel1} and \infref{dsub:sk:sel2} cases are trickier. After invoking the inductive hypothesis, due to the well-formedness condition of \upcast and \downcast, we apply \Cref{thm:truncation} so that the eventual derivation of stare-at subtyping works in prefix contexts. \end{proof} Therefore, we conclude that strong kernel $D_{<:}\xspace$ and stare-at subtyping define the same subtyping relation. Completeness may seem somewhat surprising since stare-at subtyping truncates the typing contexts in the \infref{dsub:sa:sel1} and \infref{dsub:sa:sel2} cases while strong kernel subtyping does not. Technically, the truncation is justified by \Cref{thm:truncation}. Intuitively, since the prefixes of the typing contexts cover the free variables of the relevant type, they do include all of the information necessary to reason about that type. However, it is important to keep in mind that this is possible only because we have removed the \infref{dsub:bb} rule. In a calculus with the \infref{dsub:bb} rule, it is possible that $\subtyping S U$ is false in some context $\Gamma$ that binds all free variables of $S$ and $U$, but extending the context with some $\Gamma'$ can make $\subtyping[\Gamma ; \Gamma'] S U$ true due to new subtyping relationships introduced in $\Gamma'$ by the \infref{dsub:bb} rule.
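The free-variable side condition of \Cref{thm:truncation} is the only constraint on which bindings may be dropped. A small Python sketch, under a hypothetical tuple encoding of $D_{<:}\xspace$ types, of computing free variables and the corresponding covering prefix of a context:

```python
def fv(t):
    """Free variables of a D<: type in a hypothetical tuple encoding:
    ('top',), ('bot',), ('sel', x), ('all', x, S, U), ('bnd', S, U)."""
    tag = t[0]
    if tag in ('top', 'bot'):
        return set()
    if tag == 'sel':
        return {t[1]}
    if tag == 'all':                  # x is bound in U but not in S
        return fv(t[2]) | (fv(t[3]) - {t[1]})
    if tag == 'bnd':
        return fv(t[1]) | fv(t[2])
    raise ValueError('unknown type tag: %r' % (tag,))

def covering_prefix(ctx, t):
    """Shortest prefix of a well-formed context whose domain covers fv(t)."""
    need = fv(t)
    cut = 0
    for i, (x, _) in enumerate(ctx):
        if x in need:
            cut = i + 1               # keep everything up to the last
    return ctx[:cut]                  # binding that t mentions
```

In a well-formed context, the types bound in such a prefix mention only even earlier variables, so the prefix is itself closed; this is what lets Alice and Bob safely discard the suffixes.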
\section{Strong Kernel $D_{<:}\xspace$} \input{sec_skernel/motiv} \input{sec_skernel/def} \input{sec_skernel/stareat} \input{sec_skernel/props} \input{sec_skernel/sc} \subsection{$F_{<:}\xspace^-$}\label{ssec:fsubm} The counterexample suggests that the problem with the mapping is that it permits interference between function types and universal types in $F_{<:}\xspace$ because it maps both of them to dependent function types in $D_{<:}\xspace$. Reviewing \citet{fsub-undec}, we notice that the undecidability proof of $F_{<:}\xspace$ does not make use of function types. Therefore, we can remove function types from $F_{<:}\xspace$ to obtain a simpler calculus that is better suited for undecidability reductions. \begin{definition}\label{def:fsubm} $F_{<:}\xspace^-$ is obtained from $F_{<:}\xspace$ defined in \Cref{fig:fsub_nf_def} by removing function types ($\to$) and the \infref{fsub:fun} rule. \end{definition} \begin{theorem}\label{thm:fsubminus-undec} $F_{<:}\xspace^-$ subtyping is undecidable. \end{theorem} \begin{proof} The $F_{<:}\xspace$ undecidability proof of \citet{fsub-undec} does not depend on function types. \end{proof} The mappings from \Cref{def:mapping} can be applied to types and contexts in $\fsub^-$. The \ref{eq:fsub:func} can be removed from the mapping since $\fsub^-$ does not have function types. \subsection{The Partial Undecidability Proof of \citet{wadlerfest-dot}}\label{ssec:ipf} Subtyping in $F_{<:}\xspace$ is known to be undecidable~\citep{fsub-undec}. \citet{wadlerfest-dot} defined the following total mappings from types and contexts in $F_{<:}\xspace$ to types and contexts in $D_{<:}\xspace$: \begin{definition}\label{def:mapping}\citep{wadlerfest-dot} The mappings $\inttyp{\cdot}$ and $\intctx{\cdot}$ are defined as follows: \begin{align*} \inttyp{\top} &= \top \\ \inttyp{X} &= x_X.A \\ \inttyp{S \to U} &= \forall(x : \inttyp{S}) \inttyp{U} \tag{function case}\label{eq:fsub:func} \\ \inttyp{\forall X <: S . U} &= \forall(x_X : \{A: \bot .. 
\inttyp{S}\}) \inttyp{U} \end{align*} \begin{align*} \intctx{\bigcdot} &= \bigcdot \\ \intctx{\Gamma ; X <: T} &= \intctx{\Gamma} ; x_X : \{A: \bot .. \inttyp{T} \} \end{align*} \end{definition} In the mapping, a correspondence between type variables in $F_{<:}\xspace$ and variables in $D_{<:}\xspace$ is assumed, as indicated by the notation $x_X$. \citeauthor{wadlerfest-dot} also proved that given a yes-instance of subtyping in $F_{<:}\xspace$, its image under the mapping is also a yes-instance of subtyping in $D_{<:}\xspace$: \begin{theorem}\citep[Theorem 1]{wadlerfest-dot}\label{thm:fsub2dsub} If $\subtypingf S U$, then $\subtypingd[\intctx{\Gamma}]{\inttyp{S}}{\inttyp{U}}$. \end{theorem} According to \Cref{def:reduction}, to show subtyping in $D_{<:}\xspace$ undecidable, it remains to show the other direction: \begin{conjecture}\label{con:dsub2fsub} If $\subtypingd[\intctx{\Gamma}]{\inttyp{S}}{\inttyp{U}}$, then $\subtypingf S U$. \end{conjecture} To see why this step is essential, consider what would happen if we defined a new calculus $D_{<:}\xspace^+$ by extending $D_{<:}\xspace$ subtyping with a rule that makes every type $S$ a subtype of every type $U$: \begin{mathpar} \inferrule*[right=Trivial]{ }{\subtypingdp S U} \end{mathpar} The mapping in \Cref{def:mapping} still applies, and \Cref{thm:fsub2dsub} continues to hold, even for $D_{<:}\xspace^+$. But subtyping in $D_{<:}\xspace^+$ is obviously decidable because every instance is a yes-instance. If \Cref{thm:fsub2dsub} were sufficient to prove undecidability of $D_{<:}\xspace$, then it would also be sufficient to ``prove'' undecidability of the obviously decidable $D_{<:}\xspace^+$. Thus, \Cref{con:dsub2fsub} is \emph{essential} to complete the proof of undecidability of $D_{<:}\xspace$ subtyping. Unfortunately, \Cref{con:dsub2fsub} is \emph{false}. As a counterexample, consider the following subtyping query in $F_{<:}\xspace$: \begin{align*} \subtypingf[]{\top \to \top}{\forall X <: \top.
\top} \end{align*} This subtyping relationship is false: in $F_{<:}\xspace$, function types and universal types are not related by subtyping. The image of this subtyping relationship under the mapping is: \begin{align*} {\subtypingd[]{\forall(x : \top). \top}{\forall(x_X : \{A : \bot .. \top\}).\top}} \end{align*} In $D_{<:}\xspace$, this subtyping relationship is true, as witnessed by the following derivation tree. \begin{mathpar} \inferrule*[right=\infref{dsub:all}] {\inferrule*[right=\infref{dsub:top}]{ }{\subtypingd[]{\{A: \bot .. \top\}} \top} \\ \inferrule*[right=\infref{dsub:refl}]{ }{\subtypingd[x : \{A: \bot .. \top\}] \top \top}} {\subtypingd[]{\forall(x : \top). \top}{\forall(x_X : \{A : \bot .. \top\}).\top}} \end{mathpar} The counterexample shows that the mapping in \Cref{def:mapping} \emph{cannot} be used to prove undecidability of $D_{<:}\xspace$ subtyping. \subsection{Bad Bounds}\label{ssec:attempt} Since $\fsub^-$ invalidates the counterexample to \Cref{con:dsub2fsub}, we can attempt to prove the conjecture for $\fsub^-$. When we try to invert the premise of the conjecture, $\subtypingd[\intctx{\Gamma}]{\inttyp{S}}{\inttyp{U}}$, the first problem we encounter is \emph{bad bounds}. The pattern of bad bounds is discussed by \citet{simple-sound-proof,oopsla-dot}. Bad bounds are an unintended consequence of the combination of the \infref{dsub:sel1p}, \infref{dsub:sel2p} and \infref{dsub:trans} rules. In certain typing contexts, bad bounds make it possible to prove subtyping between \emph{any} types $S$ and $U$. Consider the following derivation tree: \begin{align*} &\text{assume }\Gamma(x) = \{A : S .. U \} \\ &\inferrule*[right=\infref{dsub:trans}] {\inferrule*[right=\infref{dsub:sel1p}] {\inferrule*[right=\infref{dsub:refl}] { } {\typingd{\{A : S .. U \}}{\{A: S ..
U\}}}} {\subtypingd{S}{x.A}} \\ \inferrule*[right=\infref{dsub:sel2p}] {\text{same as left}} {\subtypingd{x.A}{U}}} {\subtypingd{S}{U}} \end{align*} This derivation uses transitivity to connect the lower and upper bounds of the path type $x.A$. The types $S$ and $U$ can be any types at all, as long as they appear in the type of $x$ in the typing context $\Gamma$. In $F_{<:}\xspace$, on the other hand, it is easy to show that, for example, a supertype of $\top$ must be $\top$. Properties like this are called \emph{inversion properties}. These properties do not hold in general in $D_{<:}\xspace$ due to bad bounds. Fortunately, we can prove similar properties in $D_{<:}\xspace$ if we restrict the typing context $\Gamma$ according to the following definition: \begin{definition} (invertible context) A context $\Gamma$ in $D_{<:}\xspace$ is invertible if all of the following hold. \begin{enumerate} \item No variable binds to $\bot$, \item No variable binds to a type of the form $\{A : S .. \bot \}$ for any $S$, \item No variable binds to a type of the form $\{A : T .. \{A : S .. U \} \}$ for any $T$, $S$ and $U$, and \item If a variable binds to $\{A : S .. U\}$, then $S = \bot$. \end{enumerate} \end{definition} The contexts in the range of the mapping from \Cref{def:mapping} are all invertible: \begin{lemma} Given an $F_{<:}\xspace^-$ context $\Gamma$, $\intctx{\Gamma}$ is invertible. \end{lemma} In invertible contexts, we can prove many useful inversion properties: \begin{lemma}\label{lem:inv-sub-props} (supertypes in invertible contexts) If a context $\Gamma$ is invertible, then all of the following hold. \begin{enumerate} \item If $\subtypingd \top T$, then $T = \top$. \item If $\subtypingd{\{A: S .. U \}} T$, then $T = \top$ or $T$ has the form $\{A: S' .. U'\}$. \item If $\subtypingd{\forall(x: S) U}T$, then $T = \top$ or $T$ has the form $\forall(x : S') U'$. 
\end{enumerate} \end{lemma} \begin{lemma}\label{lem:inv-inv-props} (subtypes in invertible contexts) If a context $\Gamma$ is invertible, then all of the following hold. \begin{enumerate} \item If $\subtypingd T \bot$, then $T = \bot$. \item If $\subtypingd T {\{A: S .. U \}}$, then $T = \bot$ or $T$ has the form $\{A: S' .. U'\}$. \item If $\subtypingd T {\forall(x: S) U}$, then $T = \bot$ or $T$ is some path $y.A$, or $T$ has the form $\forall(x : S') U'$. \item If $\subtypingd {T} {x.A}$, then $T = \bot$, $T = x.A$, or $T$ is some path $y.A$ from which $x.A$ can be reached. \end{enumerate} \end{lemma} \begin{lemma}\label{lem:inv-props} (subtyping inversion) If a context $\Gamma$ is invertible, then the following hold. \begin{enumerate} \item If $\subtypingd{\{A : S_1 .. U_1\}}{\{A : S_2 .. U_2\}}$, then $\subtypingd{S_2}{S_1}$ and $\subtypingd{U_1}{U_2}$. \item If $\subtypingd{\forall(x : S_1)U_1}{\forall(x : S_2)U_2}$, then $\subtypingd{S_2}{S_1}$ and $\subtypingd[\Gamma ; x : S_2]{U_1}{U_2}$. \end{enumerate} \end{lemma} These lemmas show that $D_{<:}\xspace$ is getting much closer to $F_{<:}\xspace^-$ in invertible contexts (hence in the contexts in the range of $\intctx{\cdot}$) and suggest that we are just one step away from proving undecidability of $D_{<:}\xspace$. Unfortunately, there is one more problem. \subsection{The \infref{dsub:trans} Rule}\label{ssec:attempt2} Recall the conjecture that we are trying to prove: if $\subtypingd[\intctx{\Gamma}]{\inttyp S}{\inttyp U}$, then $\subtypingfm S U$. 
When we perform induction on the premise, in the case of the \infref{dsub:trans} rule, we have the following antecedents in the proof context: \begin{enumerate} \item\label[premise]{enum:s-sub-t} For some $T$, $\subtypingd[\intctx{\Gamma}]{\inttyp{S}} T$, \item\label[premise]{enum:t-sub-u} $\subtypingd[\intctx{\Gamma}]T{\inttyp{U}}$, \item\label[premise]{enum:ind-hyp} Inductive hypothesis: if $\subtypingd[\intctx{\Gamma}]{\inttyp{T_1}}{\inttyp{T_2}}$, then $\subtypingfm{T_1}{T_2}$. \end{enumerate} The problem is that $T$ is not necessarily in the range of $\inttyp{\ }$. \paragraph{A counterexample for the \infref{dsub:trans} case: } Define $x <: T$ as syntactic sugar for $x : \{A: \bot .. T\}$. The following is a proof that Merlin gives us to prove that $\subtypingd[]{\forall(x <: \top) \top}{\top}$: \begin{mathpar} \inferrule*[right=\infref{dsub:trans}] {\inferrule*[] {\mathcal{D}} {\subtypingd[]{\forall(x <: \top) \top}{\forall(x : \{A: \top .. \bot\}) x.A}} \\ \inferrule*[right=\infref{dsub:top}] { } {\subtypingd[]{\forall(x : \{A: \top .. \bot\}) x.A}\top} } {\subtypingd[]{\forall(x <: \top) \top}{\top}} \end{mathpar} In the above, the subderivation $\mathcal{D}$ is as shown below. \begin{mathpar} \inferrule*[right=\infref{dsub:all}] {\inferrule*[right=\infref{dsub:bnd}] {\text{straightforward}} {\subtypingd[]{\{A: \top .. \bot\}}{\{A: \bot .. \top\}}} \\ \inferrule*[right=\infref{dsub:sel1}] {\text{straightforward}} {\subtypingd[x:\{A: \top .. \bot\}]\top{x.A}}} {\subtypingd[]{\forall(x <: \top) \top}{\forall(x : \{A: \top .. \bot\}) x.A}} \end{mathpar} In this example, the proof of $\subtypingd[]{\forall(x <: \top) \top}{\top}$ is concluded by transitivity on $\forall(x : \{A: \top .. \bot\}) x.A$. An inspection shows that both $\forall(x <: \top) \top$ and $\top$ are in the image of $\inttyp{\cdot}$, but $\forall(x : \{A: \top .. \bot\}) x.A$ is not, as it would require at the very least the lower bound in the type declaration to be $\bot$. 
Therefore, the target theorem cannot be proven by induction, because the induction hypothesis can be applied only to types in the range of $\inttyp{\cdot}$. To resolve this issue, we need to reformulate $D_{<:}\xspace$ so that it does not use the \infref{dsub:trans} rule. \subsection{Definition of Undecidability}\label{ssec:undec-def} A common method for proving a decision problem undecidable is by \emph{reduction} from some other known undecidable problem. \begin{definition} \citep[Definition 12.1a]{martin2010introduction} \label{def:reduction} If $Q$ and $P$ are decision problems, we say $Q$ is reducible to $P$ ($Q \le P$) if there is an algorithmic procedure $F$ that allows us, given an arbitrary instance $I_1$ of $Q$, to find an instance $F(I_1)$ of $P$ so that for every $I_1$, $I_1$ is a yes-instance of $Q$ if and only if $F(I_1)$ is a yes-instance of $P$. \end{definition} Notice that reducibility requires an if and only if proof: for our choice of $F$, we must show that $I_1$ is a yes-instance of $Q$ \emph{if and only if} $F(I_1)$ is a yes-instance of $P$. Reduction can be understood intuitively as an adversarial game. Consider a target decision problem $P$. Merlin is a wizard who claims to have access to true magic, and therefore to be able to decide $P$. He is so confident that he even offers a complete proof with each yes answer he gives. Sherlock is a skeptical detective. He questions Merlin's ability, and comes up with the following scheme in order to disprove Merlin's claim.
\begin{centering} \begin{tikzpicture} \node at (0, 0) (Q) {an instance of Q}; \node at (5, 0) (P) {an instance of P}; \node at (9, -0.5) (M) {Merlin}; \node at (5, -1) (P') {a proof of P}; \node at (0, -1) (Q') {a proof of Q}; \path[draw, ->, thick] (Q) -- (P) node[midway, above]{Step 1}; \path [draw, ->, thick] (P.east) -- (M); \path [draw, ->, thick] (M) -- (P'.east); \path [draw, ->, thick] (P') -- (Q') node[midway, above]{Step 2}; \end{tikzpicture} \end{centering} Sherlock selects some undecidable problem $Q$. As Step~1, Sherlock devises a mapping from instances of $Q$ to instances of $P$ that preserves yes-instances: every yes-instance of $Q$ maps to some yes-instance of $P$. As Step~2, Sherlock devises a mapping from (yes-)proofs of $P$ to (yes-)proofs of $Q$. If Merlin could really decide $P$, then Sherlock could use this setup to decide $Q$, which is impossible. For any instance of $Q$, Sherlock would map it to an instance of $P$ and give it to Merlin to decide. If the instance of $P$ is a no-instance, then so was the instance of $Q$. If the instance of $P$ is a yes-instance, then Sherlock could map Merlin's proof into a proof that the instance of $Q$ is also a yes-instance. If Sherlock achieves both steps, he thereby proves that $P$ is undecidable, and hence that Merlin is wrong. \subsection{Undecidability of Typing} In most calculi, undecidability of typing follows by some simple reduction from undecidability of subtyping in the same calculus. For example, for $D_{<:}\xspace$, we might map the subtyping problem $\subtypingd S U$ to the typing problem: \begin{mathpar} \typingd{\{A = S\}}{\{A : \bot .. U\}} \end{mathpar} and conjecture that the two problems are equivalent. In $D_{<:}\xspace$, however, we have to be careful because of the possibility of bad bounds. Indeed, it turns out that the two problems are \emph{not} equivalent. As a counterexample, note that if $\Gamma(w) = \{A : \{A : S .. S\} .. \{A : \bot ..
U\} \}$, then the typing problem is true (since $\typingd{\{A = S\}}{\{A : S .. S\}}$ and $\subtypingd {\{A : S .. S\}}{\{A : \bot .. U\}}$) even if $S$ and $U$ are chosen so that the subtyping problem $\subtypingd S U$ is false. In general, the approach to proving undecidability of typing using undecidability of subtyping depends on inversion properties, which do not always hold in $D_{<:}\xspace$ due to bad bounds, so this approach does not work for $D_{<:}\xspace$. Nevertheless, $D_{<:}\xspace$ typing still turns out to be undecidable, but to prove it, we must reduce not from $D_{<:}\xspace$ subtyping, but from $\fsub^-$ subtyping, which does obey inversion properties. \begin{theorem} For all $\Gamma$, $S$ and $U$ in $F_{<:}\xspace^-$, \begin{mathpar} \text{if }\typingd[\intctx{\Gamma}]{\{A = \inttyp{S}\}}{\{A : \bot .. \inttyp{U}\}}\text{, then } \subtypingfm{S}{U}\text{.} \end{mathpar} \end{theorem} \begin{proof} The only typing rules that apply to $\{A = \inttyp{S}\}$ are \infref{dsub:typ-i} and \infref{dsub:sub}. Therefore, the premise implies that $\subtypingd[\intctx{\Gamma}]{\{A : \inttyp{S} .. \inttyp{S}\}}{\{A : \bot .. \inttyp{U}\}}$. Since $\intctx{\Gamma}$ is invertible, \Cref{lem:inv-props} implies $\subtypingd[\intctx{\Gamma}]{\inttyp{S}}{\inttyp{U}}$ and \Cref{thm:dsub2fsubm} implies $\subtypingfm{S}{U}$. \end{proof} \begin{theorem} $D_{<:}\xspace$ typing is undecidable. \end{theorem} \begin{proof} By reduction from $\fsub^-$ subtyping, mapping the $\fsub^-$ subtyping problem $\subtypingfm{S}{U}$ to the $D_{<:}\xspace$ typing problem $\typingd[\intctx{\Gamma}]{\{A = \inttyp{S}\}}{\{A : \bot .. \inttyp{U}\}}$. The if direction is immediate and the only if direction is proved by the previous theorem. \end{proof} \subsection{$D_{<:}\xspace$ Normal Form}\label{ssec:nf} Although $F_{<:}\xspace$ also has a \infref{fsub:trans} rule, it does not cause any problems for the undecidability proof of~\citet{fsub-undec}.
The reason is that the paper begins with $F_{<:}\xspace$ normal form, a formulation that defines the same calculus but does not use the \infref{fsub:trans} rule. Therefore, it is interesting to ask whether there is a $D_{<:}\xspace$ normal form. We first define what we mean by a normal form. \begin{definition} A subtyping definition is in normal form if the premises of every rule are defined in terms of syntactic subterms of the conclusion. \end{definition} The subterms of the conclusion $\subtyping S U$ include subterms of both $S$ and $U$, as well as of the context $\Gamma$. Consider the following rule in $F_{<:}\xspace$ normal form. \begin{mathpar} \inferrule*[right=\infref{fsubnf:tvar}] {X <: T \in \Gamma \\ \subtypingf T U} {\subtypingf X U} \end{mathpar} Although $T$ is not found in $X$ or $U$, notice that $T$ is the result of looking up $X$ in the context and is therefore a subterm of the context $\Gamma$. Consider the \infref{fsub:trans} rule in $F_{<:}\xspace$. \begin{mathpar} \inferrule*[right=\infref{fsub:trans}] {\subtypingf S T \\ \subtypingf T U} {\subtypingf S U} \end{mathpar} In this rule, $T$ could be arbitrary and is unrelated to the inputs. Therefore, a definition in normal form should not contain rules like this. We have discovered a reformulation of the $D_{<:}\xspace$ subtyping relation that is in normal form. The normal form subtyping rules are shown in \Cref{fig:dsub_nf}. The difference from the original $D_{<:}\xspace$ rules is that the \infref{dsub:trans} rule is removed and replaced by the new \infref{dsub:bb} rule. By inspecting the rules one by one, we can see that they are indeed in normal form. We must also check that the normal form rules define the same subtyping relation as the original $D_{<:}\xspace$ subtyping rules. As a first step, we will show that $D_{<:}\xspace$ normal form satisfies transitivity. \input{figures/definitions/dsub_nf} Proving transitivity of $D_{<:}\xspace$ normal form is quite tricky.
First, transitivity is interdependent with narrowing, so we will need to prove the two together. Second, the proof of transitivity requires reasoning about types of the following form: $$\{A : T_1 .. \{A: T_2 .. \{A: T_3 .. \cdots \{A: T_n .. T\} \cdots \} \} \}$$ We define such types formally as follows: \begin{definition} A type declaration hierarchy is a type $\tthier(T, l)$ defined from a type $T$ and a list of types $l$ inductively as follows. \begin{align*} \tthier(T, l) = \begin{cases} T, \text{if $l$ = nil, or} \\ \{A : T' .. \tthier(T, l')\}, \text{if $l = T' :: l'$} \end{cases} \end{align*} \end{definition} With that definition, we can now state and prove the full transitivity and narrowing theorem: \begin{theorem}\label{thm:dsub-trans-narr} For any type $T$ and two subtyping derivations in $D_{<:}\xspace$ normal form $\mathcal{D}_1$ and $\mathcal{D}_2$, the following hold: \begin{enumerate}[(1)] \item\label{itm:trans} (transitivity) If $\mathcal{D}_1$ concludes $\subtypingd S T$ and $\mathcal{D}_2$ concludes $\subtypingd T U$, then $\subtypingd S U$. \item\label{itm:narrow} (narrowing) If $\mathcal{D}_1$ concludes $\subtypingd S T$ and $\mathcal{D}_2$ concludes $\subtypingd[\Gamma ; x : T ; \Gamma'] {S'} {U'}$, then $\subtypingd[\Gamma ; x : S ; \Gamma'] {S'} {U'}$. \item\label{itm:hdt-r} If $\mathcal{D}_1$ concludes $\subtypingd{T'}{\tthier(\{A : S' .. T\}, l)}$ and $\mathcal{D}_2$ concludes $\subtypingd T U$, then $\subtypingd{T'}{\tthier(\{A : S' .. U\}, l)}$. \item\label{itm:hdt-l} If $\mathcal{D}_1$ concludes $\subtypingd S T$ and $\mathcal{D}_2$ concludes $\subtypingd{T'}{\tthier(\{A : T .. U' \}, l)}$, then $\subtypingd{T'}{\tthier(\{A : S .. U' \}, l)}$. \end{enumerate} \end{theorem} \begin{proof} The proof is done by induction on the lexicographical order of the structure of the triple $(T, \mathcal{D}_1, \mathcal{D}_2)$.
That is, the inductive hypotheses of the theorem are: \begin{enumerate}[(a)] \item\label{itm:ind-t} If $T^*$ is a strict syntactic subterm of $T$, then the theorem holds for $T^*$ and any other two subtyping derivations $\mathcal{D}'_1$ and $\mathcal{D}'_2$. \item\label{itm:ind-1} If $\mathcal{D}^*_1$ is a strict subderivation of $\mathcal{D}_1$, then the theorem holds for the same type $T$, the subderivation $\mathcal{D}^*_1$ and any subtyping derivation $\mathcal{D}'_2$. \item\label{itm:ind-2} If $\mathcal{D}^*_2$ is a strict subderivation of $\mathcal{D}_2$, then the theorem holds for the same type $T$, the same derivation $\mathcal{D}_1$ and the subderivation $\mathcal{D}^*_2$. \end{enumerate} This form of induction is motivated by the dependencies between the four clauses of the theorem and can be found in other literature~\citep[Theorem 5 (Cut)]{PFENNING200084}. Specifically, \ref{itm:ind-t} addresses that transitivity~\ref{itm:trans} and narrowing~\ref{itm:narrow} are mutually dependent, but when transitivity uses narrowing, $T$ is replaced with a syntactic subterm $T^*$. Similarly, \ref{itm:ind-1} addresses that transitivity~\ref{itm:trans} and~\ref{itm:hdt-r} are mutually dependent, but in each dependence cycle, $\mathcal{D}_1$ is replaced with a subderivation $\mathcal{D}^*_1$. Finally, \ref{itm:ind-2} addresses that transitivity~\ref{itm:trans} and~\ref{itm:hdt-l} are mutually dependent, but in each dependence cycle, $\mathcal{D}_2$ is replaced with a subderivation $\mathcal{D}^*_2$. In proving transitivity \ref{itm:trans}, we consider the cases by which $\subtypingd{S}{T}$ and $\subtypingd{T}{U}$ are derived. We consider three cases in detail: \infref{dsub:all}-\infref{dsub:all} case: In this case, $S$, $T$ and $U$ are all dependent function types. Let $S = \forall(x : S_1)U_1$, $T = \forall(x : S_2)U_2$ and $U = \forall(x : S_3)U_3$. The antecedents are: \begin{enumerate}[i.] 
\item\label{ante:s2s1} $\subtypingd{S_2}{S_1}$, \item\label{ante:s3s2} $\subtypingd{S_3}{S_2}$, \item\label{ante:u1u2} $\subtypingd[\Gamma;x : S_2]{U_1}{U_2}$, and \item\label{ante:u2u3} $\subtypingd[\Gamma;x : S_3]{U_2}{U_3}$. \end{enumerate} The goal is to show $\subtypingd{\forall(x : S_1)U_1}{\forall(x : S_3)U_3}$ by \infref{dsub:all}, which requires $\subtypingd{S_3}{S_1}$ and $\subtypingd[\Gamma ; x : S_3]{U_1}{U_3}$. Applying inductive hypothesis~\ref{itm:ind-t} to~\ref{ante:s3s2} and \ref{ante:s2s1}, we obtain $\subtypingd{S_3}{S_1}$ via transitivity~\ref{itm:trans}. Again applying inductive hypothesis~\ref{itm:ind-t} to \ref{ante:s3s2} and \ref{ante:u1u2}, we obtain $\subtypingd[\Gamma;x : \highlight{S_3}]{U_1}{U_2}$ via narrowing~\ref{itm:narrow}. Then, again by inductive hypothesis~\ref{itm:ind-t} and \ref{ante:u2u3}, $\subtypingd[\Gamma ; x : S_3]{U_1}{U_3}$ is concluded via transitivity~\ref{itm:trans} and the goal is also concluded. \infref{dsub:sel1p}-\infref{dsub:sel2p} case: In this case, we know $T = x.A$ for some $x$. The antecedents are: \begin{enumerate}[i.] \item $\subtypingd{\Gamma(x)}{\{A : S .. \top\}}$ and \item $\subtypingd{\Gamma(x)}{\{A : \bot .. U\}}$. \end{enumerate} By the \infref{dsub:bb} rule, we can show the conclusion $\subtypingd S U$. That is, the \infref{dsub:bb} rule is a restricted form of transitivity for the case when the middle type is a path type $x.A$. \infref{dsub:sel2p}-\emph{any} case: When $\subtypingd S T$ is derived by \infref{dsub:sel2p}, we know $S = y.A$ for some $y$. The antecedents are: \begin{enumerate}[i.] \item $\subtypingd{\Gamma(y)}{\{A : \bot .. T\}}$, and \item $\subtypingd T U$. \end{enumerate} The intention is to show that $\subtypingd{\Gamma(y)}{\{A : \bot .. \highlight{U}\}}$ holds and hence conclude $\subtypingd{y.A}{U}$ by \infref{dsub:sel2p}. To derive this conclusion, we need to apply the induction hypothesis~\ref{itm:ind-1} with $\subtypingd{\Gamma(y)}{\{A : \bot .. 
T\}}$ as the subderivation $\mathcal{D}^*_1$. The induction hypothesis~\ref{itm:ind-1} provides the necessary $\subtypingd{\Gamma(y)}{\{A : \bot .. U\}}$ via clause~\ref{itm:hdt-r}, and hence $\subtypingd{y.A}{U}$. The \infref{dsub:bb}-\emph{any} case can be proved in the same way. The \emph{any}-\infref{dsub:sel1p} and \emph{any}-\infref{dsub:bb} cases can be proved in a symmetric way, by invoking inductive hypothesis~\ref{itm:ind-2} instead of inductive hypothesis~\ref{itm:ind-1} in the corresponding places. Narrowing \ref{itm:narrow} is proved by case analysis on the derivation of $\subtypingd[\Gamma ; x : T ; \Gamma']{S'}{U'}$. Several cases require transitivity, which is obtained by applying the induction hypothesis~\ref{itm:ind-2}. Clause~\ref{itm:hdt-r} of the theorem is proved by case analysis on $\mathcal{D}_1$, the derivation of $\subtypingd{T'}{\tthier(\{A:S'..T\},l)}$, and then by an inner induction on the list $l$. We discuss two interesting cases. \infref{dsub:bnd}-nil case: $\tthier(\{A : S' .. T\}, nil) = \{A : S' .. T\}$ and $\subtypingd{T'}{\tthier(\{A : S' .. T\}, nil)}$ is constructed by \infref{dsub:bnd}. From the \infref{dsub:bnd} rule, we know that $T' = \{A : S_0 .. U_0\}$ and have the following antecedents: \begin{enumerate}[i.] \item $\subtypingd{S'}{S_0}$, and \item\label{ante:u0t} $\subtypingd{U_0}{T}$, and \item\label{ante:tu} $\subtypingd{T}{U}$. \end{enumerate} We wish to apply transitivity~\ref{itm:trans} to antecedents~\ref{ante:u0t} and \ref{ante:tu} to obtain $\subtypingd{U_0}{U}$. We can do this by invoking the induction hypothesis~\ref{itm:ind-1} with the antecedent~\ref{ante:u0t} $\subtypingd{U_0}{T}$ as $\mathcal{D}^*_1$. After applying transitivity, we can apply \infref{dsub:bnd} to $\subtypingd{S'}{S_0}$ and $\subtypingd{U_0}{U}$ to obtain $\subtypingd{\{A : S_0 .. U_0\}}{\{A : S' .. U\}}$ as required. This case shows the mutual dependence between clause~\ref{itm:hdt-r} and transitivity~\ref{itm:trans}. 
\infref{dsub:sel2p}-\emph{any} case: In this case, we know that $T' = z.A$ for some $z$ and have the following antecedents: \begin{enumerate}[i.] \item $\subtypingd{\Gamma(z)}{\{A : \bot .. \tthier(\{A : S' .. T\}, l)\}}$, and \item $\subtypingd T U$. \end{enumerate} We apply the induction hypothesis~\ref{itm:ind-1} with the subderivation $\subtypingd{\Gamma(z)}{\{A : \bot .. \tthier(\{A : S' .. T\}, l)\}}$ as $\mathcal{D}^*_1$. Notice that $\{A : \bot .. \tthier(\{A : S' .. T\}, l)\}$ can be rewritten as $\tthier(\{A : S' .. T\}, (\bot :: l))$, so the induction hypothesis of~\ref{itm:hdt-r} applies to yield $\subtypingd{\Gamma(z)}{\tthier(\{A : S' .. \highlight{U}\}, (\bot :: l))}$, which can be rewritten as $\subtypingd{\Gamma(z)}{\{A: \bot .. \tthier(\{A : S' .. U\}, l)\}}$. Finally, by \infref{dsub:sel2p}, $\subtypingd{z.A}{\tthier(\{A : S' .. U\}, l)}$ as required. Since the list $\bot :: l$ is longer than $l$, this case shows why clause~\ref{itm:hdt-r} needs to be defined on type declaration hierarchies of non-empty lists. Clause~\ref{itm:hdt-l} of the theorem is dual to clause~\ref{itm:hdt-r} and is proven in a symmetric way. Instead of the inductive hypothesis~\ref{itm:ind-1}, clause~\ref{itm:hdt-l} uses the inductive hypothesis~\ref{itm:ind-2}. \end{proof} Once transitivity is proved, we can show that the two definitions of $D_{<:}\xspace$ subtyping are equivalent. \begin{theorem}\label{thm:dsub-nf-equiv} Subtyping in $D_{<:}\xspace$ normal form is equivalent to the original $D_{<:}\xspace$. \end{theorem} \begin{proof} The if direction is immediate. In the only if direction, the \infref{dsub:trans} case can be discharged by transitivity of $D_{<:}\xspace$ normal form. \end{proof} Now that we have $D_{<:}\xspace$ normal form, we can finally show that $D_{<:}\xspace$ subtyping is indeed undecidable. \begin{theorem} \label{thm:dsub2fsubm} If $\subtypingd[\intctx{\Gamma}]{\inttyp{S}}{\inttyp{U}}$, then $\subtypingfm S U$. 
\end{theorem} \begin{proof} The proof is by induction on the subtyping derivation in $D_{<:}\xspace$ normal form, which no longer has the problem with the \infref{dsub:trans} rule discussed in \Cref{ssec:attempt2}. Most of the cases are proved by straightforward application of the induction hypothesis. The \infref{dsub:sel1p} and \infref{dsub:bb} cases require the following argument. In both cases, we have the antecedent: \begin{mathpar} \text{for some $x$, }\subtypingd[\intctx{\Gamma}]{\intctx{\Gamma}(x)}{\{A : \inttyp{S} .. \top\}} \end{mathpar} By inspecting $\intctx{\cdot}$, we know that $\intctx{\Gamma}(x)$ must be $\{A : \bot .. T\}$ for some $T$, and therefore the antecedent becomes \begin{mathpar} \subtypingd[\intctx{\Gamma}]{\{A : \bot .. T\}}{\{A : \inttyp{S} .. \top\}} \end{mathpar} Recall that $\intctx{\Gamma}$ is invertible. By \Cref{lem:inv-props}, we know \begin{mathpar} \subtypingd[\intctx{\Gamma}]{\inttyp{S}}\bot \end{mathpar} Furthermore, by \Cref{lem:inv-inv-props}, we know $\inttyp{S} = \bot$. By inspecting $\inttyp{\cdot}$, we see that $\bot$ is not in the image, and therefore both \infref{dsub:sel1p} and \infref{dsub:bb} cases are discharged by contradiction. \end{proof} \begin{theorem} \label{thm:dsubundec} Subtyping in $D_{<:}\xspace$ is undecidable. \end{theorem} \begin{proof} The proof is by reduction from $\fsub^-$ using the mapping from \Cref{def:mapping} but without the function case. For the if direction, \Cref{thm:fsub2dsub} applies since the $\fsub^-$ subtyping rules are a subset of the $F_{<:}\xspace$ subtyping rules. The only if direction is proved by the previous theorem. \end{proof} As we have seen, the only change in the normal form rules of $D_{<:}\xspace$ subtyping is that the \infref{dsub:trans} rule is removed and replaced with the \infref{dsub:bb} rule. In other words, the only thing that transitivity really contributes to $D_{<:}\xspace$ is the phenomenon of bad bounds. 
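The interplay of the selection rules and transitivity that produces bad bounds lends itself to a small executable illustration. The sketch below is our own toy encoding, not the paper's formalisation: types are tagged tuples, function types and most rules are omitted, and a depth bound stands in for a proper termination argument. It searches only with reflexivity, top, bottom, the two selection rules, and transitivity through a path type $x.A$:

```python
# Toy illustration of bad bounds in a D<:-style subtype search (our encoding).
# Types: TOP, BOT, ('sel', x) for the path type x.A, ('typ', S, U) for {A : S .. U}.
TOP, BOT = ('top',), ('bot',)

def subtype(gamma, s, u, depth=3):
    """Bounded proof search with REFL/TOP/BOT plus SEL1, SEL2 and TRANS via x.A."""
    if depth == 0:
        return False
    if s == u or u == TOP or s == BOT:          # REFL, TOP, BOT
        return True
    for x, bound in gamma.items():
        if bound[0] != 'typ':
            continue
        _, lo, hi = bound
        # SEL1 gives lo <: x.A and SEL2 gives x.A <: hi;
        # TRANS then chains s <: lo <: x.A <: hi <: u.
        if subtype(gamma, s, lo, depth - 1) and subtype(gamma, hi, u, depth - 1):
            return True
    return False

# Two otherwise unrelated type declarations:
S0, U0 = ('typ', TOP, TOP), ('typ', BOT, BOT)
```

In the empty context `subtype({}, S0, U0)` fails, but binding `x` to `{A : S0 .. U0}` makes `subtype({'x': ('typ', S0, U0)}, S0, U0)` succeed: precisely the derivation through `x.A` from the bad-bounds discussion.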
Conversely, if we exclude bad bounds from $D_{<:}\xspace$, then it no longer has transitivity of subtyping. The undecidability proof relies only on the common features of $\fsub^-$ and $D_{<:}\xspace$, and in particular, it does not depend on the \infref{dsub:bb} rule. If we remove this rule from $D_{<:}\xspace$, subtyping in the resulting variant is still undecidable. \begin{theorem} Subtyping in $D_{<:}\xspace$ normal form without the \infref{dsub:bb} rule is undecidable. \end{theorem} \begin{proof} The proof is the same as \Cref{thm:dsubundec}, but without the \infref{dsub:bb} case. \end{proof} \section{Undecidability of $D_{<:}\xspace$ (sub)typing} \input{sec_undec/undec} \input{sec_undec/ipf} \input{sec_undec/fsubm} \input{sec_undec/attempt} \input{sec_undec/nf} \input{sec_undec/typing}
\section{Introduction} Church's realisability problem \cite{Church/63/Logic} has motivated the development of the beautiful theories of finite games of infinite duration \cite{Buchi/62/Automata,Buchi+Landweber/69/MSO}, and finite automata over infinite structures \cite{Rabin/69/Automata}. These two fields have often inspired and influenced each other. The quest for optimal complementation \cite{Vardi/07/Saga,Yan/08/lowerComplexity,Schewe/09/complementation} and determinisation \cite{Rabin/69/Automata,Safra/88/Safra,Piterman/07/Parity,Schewe/09/determinise,Colcombet+Zdanowski/09/Buchi} of nondeterministic automata has been long and fruitful. The quest for optimal \buchi\ complementation techniques seems to have been settled with matching upper \cite{Schewe/09/complementation} and lower \cite{Yan/08/lowerComplexity} bounds. A similar observation might, on first glance, be made for \buchi\ determinisation, as matching upper \cite{Schewe/09/determinise} and lower \cite{Colcombet+Zdanowski/09/Buchi} bounds were established shortly after those for complementation. However, while these bounds are tight to the state, they refer to deterministic Rabin automata only, with an exponential number of Rabin pairs in the states of the initial \buchi\ automaton. Choosing Rabin automata as targets is not the only natural choice. The dual acceptance condition, suggested by Streett \cite{Streett/82/Streett}, would be a similarly natural goal, and determinising to parity automata seems to be an even more attractive target, as emptiness games for parity automata \cite{Schewe/07/parity,Paterson+Zwick/08/Parity} have a lower computational complexity compared to emptiness games for Streett or Rabin automata \cite{Piterman+Pnueli/06/Rabin}. For parity and Streett automata, however, no similarly tight result is known. 
Indeed, the best known algorithm \cite{Piterman/07/Parity} provides an $O(n!^2)$ bound on the states \cite{Schewe/09/determinise} (for state-based acceptance; the bound can be improved to $O(n!(n-1)!)$ when transition based acceptance is used) of a deterministic parity automaton obtained from a nondeterministic \buchi\ automaton with $n$ states, as compared to the approximately $(1.65n)^n$ states of the smallest deterministic Rabin automaton \cite{Schewe/09/determinise,Colcombet+Zdanowski/09/Buchi}. Another argument for using parity or Streett conditions is that determinisation constructions are often nested. E.g., in distributed synthesis \cite{Pnueli+Rosner/90/Distributed,Kupferman+Vardi/01/Synthesizing,Finkbeiner+Schewe/05/Distributed}, several co-determinisation (determinisation of the complement language) steps are used. Using Rabin automata as a target in one step, one has to use a determinisation technique for Streett automata in the next. Streett determinisation, however, is significantly more involved and expensive \cite{Safra/92/Streett,Piterman/07/Parity}. In this paper, we introduce determinisation procedures for nondeterministic parity automata to deterministic Rabin and parity automata. Using an algorithmic representation that extends the determinisation procedures from \cite{Schewe/09/determinise}, we show that the number of states used in the determinisation of nondeterministic \buchi\ automata cannot be reduced by a single state, while we establish the tightness of our parity determinisation procedure to below a constant factor of $1.5$, even if we allow for Streett acceptance. This also shows that determinising parity automata to Rabin automata leads to a smaller blow-up than the determinisation to parity or Streett. As a special case, this holds in particular for B\"uchi automata. \paragraph*{Transition-based acceptance.} We use a transition based acceptance mechanism for various reasons. 
Transition-based acceptance mechanisms have proven to be a more natural target of automata transformations. Indeed, all determinisation procedures quoted above have a natural representation with an acceptance condition on transitions, and their translation to state-based acceptance works by multiplying the statespace with the acceptance information of the last transition. A similar observation can be made for other automata transformations, like the removal of $\varepsilon$-transitions from translations of $\mu$-calculi \cite{Wilke/01/Alternating,Schewe+Finkbeiner/06/ATM} and the treatment of asynchronous systems \cite{Schewe+Finkbeiner/06/Asynchronous}, where the statespace grows by multiplication with the acceptance information (e.g., maximal priority on a finite sequence of transitions), while it can only shrink in the case of transition-based acceptance. Similarly, tools like \textsc{SPOT} \cite{DuretLutz11} offer more concise automata with a transition-based acceptance mechanism as a translation from LTL. Using state-based acceptance in the automaton that we want to determinise would also complicate the presentation. But first and foremost, using transition-based acceptance provides cleaner results. \paragraph*{Related work.} Besides the work on complementing \cite{Vardi/07/Saga,Yan/08/lowerComplexity,Schewe/09/complementation} and determinising \cite{Rabin/69/Automata,Safra/88/Safra,Piterman/07/Parity,Schewe/09/determinise,Colcombet+Zdanowski/09/Buchi} \buchi\ automata, tight bounds have been obtained for generalised \buchi\ automata \cite{Schewe+Varghese/12/GBA}, and specialised algorithms for complementing \cite{Cai+Zhang/11/Streett} and determinising Streett \cite{Safra/92/Streett,Piterman/07/Parity} automata have been studied.
The NP-completeness of minimising deterministic \buchi\ or parity automata \cite{Schewe/10/minimise} suggests that it would be infeasible to look for polynomially bigger automata and to minimise them subsequently, whereas minimising the number of priorities of a deterministic parity automaton is cheap and simple~\cite{Carton+Maceiras/99/ParityIndex}. The construction of deterministic Co\buchi\ automata with a one-sided error, which is correct for Co\buchi\ recognisable languages \cite{Boker+Kupferman/09/cobuchi}, and decision procedures that use emptiness equivalent B\"uchi~\cite{Kupferman+Vardi/05/Safraless,KPV/06/Safraless} or safety \cite{Finkbeiner+Schewe/12/bounded} automata instead of language equivalent automata have also been studied. \section{Preliminaries} We denote the set of non-negative integers by $\omega$, i.e. $\omega = \{0,1,2,3,...\}$. For a finite alphabet $\Sigma$, we use $\Sigma^*$ to denote the set of finite sequences over $\Sigma$, $\Sigma^+$ to denote the set of finite non-empty sequences over $\Sigma$, and $\Sigma^{\omega}$ to denote the set of infinite sequences over $\Sigma$. An infinite \emph{word} $\alpha: \omega \rightarrow \Sigma$ is an infinite sequence of letters $\alpha_0 \alpha_1 \alpha_2 \cdots$ from $\Sigma$. We use $[k]$ to represent $\{1,2,\ldots,k\}$. $\omega$-automata are finite automata that are interpreted over infinite words and recognise $\omega$-regular languages $L\subseteq \Sigma^{\omega}$. Nondeterministic $\omega$-automata are quintuples $\mathcal{N}=(Q,\Sigma,I,T,\mathcal{F})$, where $Q$ is a finite set of states with a non-empty subset $I\subseteq Q$ of initial states, $\Sigma$ is a finite alphabet, $T \subseteq Q \times \Sigma \times Q$ is a transition relation that maps states and input letters to sets of successor states, and $\mathcal{F}$ is an acceptance condition. In this paper, we consider Rabin, Streett, parity, and \buchi\ acceptance.
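The quintuple above can be transcribed directly into code. The sketch below is purely illustrative (our own encoding, not an implementation from the paper): the transition relation is a set of triples, the acceptance condition is left abstract, and a subset-style step function computes successor sets as used in determinisation constructions.

```python
# Illustrative encoding of a nondeterministic omega-automaton (Q, Sigma, I, T, F);
# T is a set of (state, letter, state) triples, the acceptance F is left abstract.
def successors(T, q, a):
    """All states reachable from state q by reading letter a."""
    return {r for (p, b, r) in T if p == q and b == a}

def step(T, S, a):
    """Successors of a set of states S, as in subset-style constructions."""
    return {r for q in S for r in successors(T, q, a)}

# A two-state example over Sigma = {'a', 'b'}, nondeterministic on 'a' from state 0:
T = {(0, 'a', 0), (0, 'a', 1), (1, 'b', 1)}
```

For instance, reading `'a'` from state `0` yields the successor set `{0, 1}`, which is the nondeterministic branching that determinisation constructions must track.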
A \emph{run} $\rho$ of a nondeterministic $\omega$-automaton $\mathcal{N}$ on an input word $\alpha$ is an infinite sequence $\rho: \omega\rightarrow Q$ of states of $\mathcal{N}$, also denoted $\rho = q_0 q_1 q_2\cdots \in Q^{\omega}$, such that the first symbol of $\rho$ is an initial state $q_0\in I$ and, for all $i\in\omega$, $(q_{i},\alpha_{i},q_{i+1})\in T$ is a valid transition. For a run $\rho$ on a word $\alpha$, we denote with $\overline{\rho}: i \mapsto \big(\rho(i),\alpha(i),\rho(i+1)\big)$ the transitions of $\rho$. Let $\infi(\rho) = \{q\in Q \mid \forall i \in \omega \; \exists j>i\mbox{ such that } \rho(j)=q\}$ denote the set of all states that occur infinitely often during the run $\rho$. Likewise, let $\infi(\overline{\rho})=\{t\in T \mid \forall i \in \omega \; \exists j>i\mbox{ such that } \overline{\rho}(j)=t\}$ denote the set of all transitions that are taken infinitely many times in $\rho$. In this paper, we use acceptance conditions over transitions. Acceptance mechanisms over states can be defined accordingly. \emph{Rabin} automata are $\omega$-automata, whose acceptance is defined by a family of pairs $\{(A_i,R_i)\mid i \in J\}$, with $A_i,R_i \subseteq T$, of accepting and rejecting transitions for all indices $i$ of some index set $J$. A run $\rho$ of a Rabin automaton is \emph{accepting} if there is an index $i\in J$, such that infinitely many accepting transitions $t \in A_i$, but only finitely many rejecting transitions $t \in R_i$ occur in $\overline{\rho}$. That is, if there is an $i\in J$ such that $\infi(\overline{\rho}) \cap A_i \neq \emptyset = \infi(\overline{\rho}) \cap R_i$. \emph{Streett} automata are $\omega$-automata, whose acceptance is defined by a family of pairs $\{(G_i,B_i)\mid i \in J\}$, with $G_i,B_i \subseteq T$, of good and bad transitions for all indices $i$ of some index set $J$.
A run $\rho$ of a Streett automaton is \emph{accepting} if, for all indices $i\in J$, some good transition $t \in G_i$ or no bad transition $t \in B_i$ occurs infinitely often in $\overline{\rho}$. That is, if, for all $i\in J$, $\infi(\overline{\rho}) \cap G_i \neq \emptyset$ or $\infi(\overline{\rho}) \cap B_i = \emptyset$ holds. \emph{Parity} automata are $\omega$-automata, whose acceptance is defined by a priority function $\pri:T \rightarrow [c]$ for some $c \in \mathbb N$. A run $\rho$ of a parity automaton is \emph{accepting} if $\limsup_{n\rightarrow\infty} \pri\big(\overline{\rho}(n)\big)$ is even, that is, if the highest priority that occurs infinitely often is even. Parity automata can be viewed as special Rabin, or as special Streett automata. In older works, the parity condition was referred to as Rabin chain condition---because one can represent it by choosing $A_i$ as the set of states with priority $\geq 2i$ and $R_i$ as the set of states with priority $\geq 2i+1$, resulting in a chain $A_i \supseteq R_i \supseteq A_{i+1} \supseteq \ldots$---or a Streett chain condition---where $G_i$ is the set of states with priority $\geq 2i$, and $B_i$ is the set of states with priority $\geq 2i-1$. One-pair Rabin automata $\mathcal R_1 =\big(Q,\Sigma,I,T,(A,R)\big)$, which are Rabin automata with a singleton index set, such that we directly refer to the only pair $(A,R)$, and \emph{B\"uchi} automata, which can be viewed as one-pair Rabin automata with an empty set of rejecting transitions $R = \emptyset$, are of special technical interest in this paper. For all types of automata, a word $\alpha$ is accepted by an automaton $\mathcal A$ iff it has an accepting run, and its language $\mathcal{L}(\mathcal A)$ is the set of words it accepts.
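To make the three acceptance conditions concrete, each can be evaluated mechanically once the set $\infi(\overline{\rho})$ of transitions taken infinitely often is known. The following Python sketch (not part of the construction; the encoding of transitions as hashable objects is our own assumption) mirrors the definitions above:

```python
# Acceptance of a run, given the set inf_t of transitions taken
# infinitely often (a sketch; the transition encoding is hypothetical).

def rabin_accepts(inf_t, pairs):
    # pairs: iterable of (A_i, R_i); accept if some pair fires:
    # infinitely many transitions in A_i, none in R_i.
    return any(inf_t & A and not (inf_t & R) for A, R in pairs)

def streett_accepts(inf_t, pairs):
    # pairs: iterable of (G_i, B_i); accept if every pair is satisfied:
    # some transition in G_i, or none in B_i, occurs infinitely often.
    return all(inf_t & G or not (inf_t & B) for G, B in pairs)

def parity_accepts(inf_t, pri):
    # pri maps transitions to priorities in [c]; accept iff the
    # highest priority occurring infinitely often is even.
    return max(pri[t] for t in inf_t) % 2 == 0

inf_t = {"t1", "t2"}
pri = {"t1": 1, "t2": 2}
assert parity_accepts(inf_t, pri)               # limsup priority 2 is even
assert rabin_accepts(inf_t, [({"t2"}, set())])  # a Büchi-style pair: R empty
```

The last assertion illustrates the remark above that a B\"uchi automaton is a one-pair Rabin automaton with $R = \emptyset$.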
We call an automaton $(Q,\Sigma,I,T,\mathcal{F})$ \emph{deterministic} if $I$ is singleton and $T$ contains at most one successor state for each pair of state and input letter, that is, if $(q,\alpha,r),(q,\alpha,s) \in T$ implies $r = s$. Deterministic automata are denoted $(Q,\Sigma,q_0,\delta,\mathcal{F})$, where $q_0$ is the only initial state and $\delta$ is the partial function with $\delta: (q,\alpha) \mapsto r \Leftrightarrow (q,\alpha,r)\in T$. As nondeterministic automata can block, we also allow them to accept immediately. Technically, one can use a state $\top$ which every automaton has. From $\top$, all transitions go back to $\top$, and runs that contain one (and thus almost only) $\top$ state are accepting. This state does not count towards the state space $Q$. If we want to include it, we explicitly write $Q^\top$. \section{Determinisation} A nondeterministic parity automaton $\mathcal{P}$ is a quintuple $\big(P,\Sigma,I,T,\pri:T \rightarrow [c] \big)$. This NPA has $|P|=n$ states and $c$ priorities (or colours) on the transitions. We will tackle the determinisation of parity automata in three steps. Firstly, we will recall history trees, the data structure for determinising \buchi\ automata. Secondly, we will describe how to adjust this data structure and the determinisation procedure from \buchi\ automata to one-pair Rabin automata. Finally, we will show that this data structure can be nested for the determinisation of parity automata. In \cite{Schewe/09/determinise,Schewe+Varghese/12/GBA}, we use ordered labelled trees to depict the states of the deterministic automaton. These ordered labelled trees are called \emph{history trees} in \cite{Schewe/09/determinise,Schewe+Varghese/12/GBA}. A \emph{history tree} is an ordered labelled tree $(\T,l)$, where $\T$ is a finite, prefix-closed set of finite sequences of natural numbers. Every element $v\in \T$ is called a \emph{node}.
Prefix closedness implies that, if a node $v=n_{1}\ldots n_{j}n_{j+1}$ is in $\T$, then $v'=n_{1}\ldots n_{j}$ is also in $\T$. We call $v'$ the predecessor of $v$, denoted $\pred(v)$. The empty sequence $\varepsilon \in \T$ is called the \emph{root} of the ordered tree $\T$. Obviously, $\varepsilon$ has no predecessor. We further require $\T$ to be \emph{order closed} with respect to siblings: if a node $v=n_{1}\ldots n_{j}$ is in $\T$, then $v'=n_{1}\ldots n_{j-1}i$ is also in $\T$ for all $i\in \omega$ with $i<n_j$. In this case, we call $v'$ an \emph{older sibling} of $v$ (and $v$ a \emph{younger sibling} of $v'$). We denote the set of older siblings of $v$ by $\os(v)$. A history tree is a tree labelled with sets of automata states. That is, $l:\T\rightarrow 2^{Q} \smallsetminus \{\emptyset\}$ is a labelling function, which maps nodes of $\T$ to non-empty sets of automata states. For \buchi\ automata, the labelling is subject to the following criteria. \begin{enumerate} \item The label of each node is a subset of the label of its predecessor: \hfill $l(v)\subseteq l(\pred(v))$ holds for all $\varepsilon \neq v \in \T$. \item The intersection of the labels of two siblings is disjoint: $\forall v,v' {\in} \T.\ v {\neq} v' \wedge \pred(v) {=} \pred(v') \Rightarrow l(v) {\cap} l(v') = \emptyset$. \item The union of the labels of all siblings is \emph{strictly} contained in the label of their predecessor: \hfill $\forall v \in \T\ \exists q \in l(v)\ \forall v' \in \T.\ v=\pred(v') \Rightarrow q \notin l(v')$. \end{enumerate} \subsection{Determinising one-pair Rabin automata} \label{Rabindet} For one-pair Rabin automata, it suffices to adjust this data structure slightly.
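As an aside, the criteria above are easy to check mechanically. The following Python sketch (a hypothetical encoding of our own: nodes as tuples of natural numbers, the root as the empty tuple, labels as sets) validates prefix closedness, order closedness, and criteria (1)--(3):

```python
# Sketch: validating the history-tree criteria (1)-(3).
# tree maps each node (a tuple of ints, root = ()) to its label (a set).

def is_history_tree(tree):
    for v, label in tree.items():
        if not label:
            return False                      # labels must be non-empty
        if v:                                 # v is not the root
            parent = v[:-1]
            if parent not in tree:
                return False                  # prefix closedness
            if v[-1] > 0 and v[:-1] + (v[-1] - 1,) not in tree:
                return False                  # order closedness (older sibling)
            if not label <= tree[parent]:
                return False                  # (1) contained in parent's label
        children = [w for w in tree if len(w) == len(v) + 1 and w[:-1] == v]
        union = set().union(*(tree[w] for w in children)) if children else set()
        if len(union) != sum(len(tree[w]) for w in children):
            return False                      # (2) sibling labels are disjoint
        if not union < label:
            return False                      # (3) strict containment
    return True

# {q0, q1} at the root, child 0 hosting only {q0}: all criteria hold.
assert is_history_tree({(): {"q0", "q1"}, (0,): {"q0"}})
```

Note that a tree whose root label equals the union of its children's labels is rejected by this check (criterion (3) is strict), although it does satisfy the relaxed root condition of root history trees.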
A tree is called a \emph{root history tree} (RHT) if it satisfies (1) and (2) from the definition of history trees, and a relaxed version of (3) that allows for non-strict containment of the label of the root, $\forall v \in \T \smallsetminus \{\varepsilon\}\ \exists q \in l(v)\ \forall v' \in \T.\ v=\pred(v') \Rightarrow q \notin l(v')$, and the label of the root $\varepsilon$ \emph{equals} the union of its children's labels, $l(\varepsilon) = \bigcup\{l(v) \mid v \in \T \cap \omega\}$. Let $\mathcal{R}_1 = (Q,\Sigma,I,T,(A,R))$ be a nondeterministic one-pair Rabin automaton with $|Q| = n$ states. We first construct a language equivalent deterministic Rabin automaton $\mathcal D_1 =(D,\Sigma,d_{0},\Delta,\{(A_i,R_i) \mid i \in J\})$ where \begin{itemize} \item $D$ is the set of RHTs over $Q$, \item $d_{0}$ is the RHT $(\{\varepsilon,0\},\ l:\varepsilon \mapsto I,\ l:0 \mapsto I)$, \item $J$ is the set of nodes $\neq \varepsilon$ that occur in some RHT of size $n+1$ (due to the definition of RHTs, an RHT can contain at most $n+1$ nodes), and \item for every tree $d\in D$ and letter $\sigma\in\Sigma$, the transition $d'=\Delta(d,\sigma)$ is the result of applying the sequence of transformations described below. The index set is the set of nodes, and, for each index, the accepting and rejecting sets refer to this node. \end{itemize} \subsubsection*{\textbf{Transition mechanism for determinising one-pair Rabin Automata}} We determine $\Delta:\big((\T,l),\sigma\big) \mapsto (\T',l')$ as follows: \label{TransMech} \begin{enumerate} \item \emph{Update of node labels (subset constructions).} The root of a history tree $d$ collects the momentarily reachable states $Q_r \subseteq Q$ of the automaton $\mathcal{R}_1$. In the first step of the construction, we update the label of the root to the set of reachable states upon reading a letter $\sigma \in \Sigma$, using the classical subset construction.
We update the label of every other node of the RHT $d$ to reflect the successors reachable through accepting or neutral transitions. For $\varepsilon$, we update $l$ to the function $l_1$ by assigning $l_1: \varepsilon \mapsto \{q' \in Q \mid \exists q \in l(\varepsilon).\ (q,\sigma,q') \in T\}$, and for all $\varepsilon \neq v\in \T$, we update $l$ to the function $l_1$ by assigning $l_1: v \mapsto \{q' \in Q \mid \exists q \in l(v).\ (q,\sigma,q') \in T \smallsetminus R\}$. \item \emph{Splitting of run threads / spawning new children.} In this step, we spawn new children for every node in the RHT. For nodes other than the root $\varepsilon$, we spawn a child labelled with the set of states reachable through accepting transitions; for the root $\varepsilon$, we spawn a child labelled like the root. Thus, for every node $\varepsilon \neq v\in \T$ with $c$ children, we spawn a new child $vc$ and expand $l_1$ to $vc$ by assigning $l_1: vc \mapsto \{q \in Q \mid \exists q' \in l(v).\ (q',\sigma,q) \in A\}$. If $\varepsilon$ has $c$ children, we spawn a new child $c$ of the root $\varepsilon$ and expand $l_1$ to $c$ by assigning $l_1: c \mapsto l_1(\varepsilon)$. We use $\T_n$ to denote the extended tree that includes the new children. \item \emph{Removing states from labels -- horizontal pruning.} We obtain a function $l_2$ from $l_1$ by removing, for every node $v$ with label $l_1(v)=Q'$ and all states $q\in Q'$, $q$ from the labels of all younger siblings of $v$ and all of their descendants. \item \emph{Identifying breakpoints -- vertical pruning.} We denote with $\T_e\subseteq \T_n$ the set of all nodes $v\neq \varepsilon$ whose label $l_2(v)$ is now equal to the union of the labels of its children. We obtain $\T_v$ from $\T_n$ by removing all descendants of nodes in $\T_e$, and restrict the domain of $l_2$ accordingly.
Nodes in $\T_v \cap \T_e$ represent the breakpoints reached during a run and are called \emph{accepting}, that is, the transition of $\mathcal D_1$ will be in $A_v$ for exactly the $v \in \T_v \cap \T_e$. Note that the root cannot be accepting. \item \emph{Removing nodes with empty label.} We denote with $\T_r = \{v \in \T_v \mid l_2(v) \neq \emptyset\}$ the subtree of $\T_v$ that consists of the nodes with non-empty label and restrict the domain of $l_2$ accordingly. \item \emph{Reordering.} To repair the orderedness, we call $\|v\| = |\os(v) \cap \T_r|$ the number of (still existing) older siblings of $v$, and map $v= n_1\ldots n_j$ to $v' = \|n_1\|\ \|n_1n_2\|\ \|n_1n_2n_3\|\ldots \|v\|$, denoted $\rename(v)$. For $\T'=\rename(\T_r)$, we update a pair $(\T_r,l_2)$ from Step 5 to $d'=\big(\T', l' \big)$ with $l': \rename(v) \mapsto l_2(v)$. We call a node $v\in \T'\cap \T$ \emph{stable} if $v=\rename(v)$, and we call all nodes in $J$ \emph{rejecting} if they are not stable. That is, the transition will be in $R_v$ exactly for those $v \in J$, such that $v$ is not a stable node in $\T \cap \T'$. \end{enumerate} Note that this construction is a generalisation of the corresponding construction for \buchi\ automata: if $R = \emptyset$, then the label of $0$ is always the label of $\varepsilon$ in this construction, and the node $1$ is not part of any reachable RHT. (We would merely write $0$ in front of every node of a history tree.) The correctness proof of this construction follows the same lines as the correctness proof of the \buchi\ construction. \begin{lemma} \label{lemma:determ1} [$L(\mathcal{R}_1) \subseteq L(\mathcal D_1)$] If there is an accepting run of $\mathcal{R}_1$ on an $\omega$-word $\alpha$, then there is a node $v \in J$ that is eventually always stable and always eventually accepting in the run of $\mathcal D_1$ on $\alpha$.
\end{lemma} \paragraph*{Notation.} For a state $q$ of $\mathcal{R}_1$ and an RHT $d=(\T,l)$, we call a node $v$ the \emph{host node of $q$}, denoted $\host(q,d)$, if $q \in l(v)$, but not in $l(vc)$ for any child $vc$ of $v$. \begin{proofidea} This is the same as for \buchi\ determinisation \cite{Schewe/09/determinise}: the state of each accepting run is eventually `trapped' in the same node of the RHT, and this node must be accepting infinitely often. Let $d_0, d_1, \ldots$ be the run of $\mathcal D_1$ on $\alpha$ and $q_0, q_1, \ldots$ an accepting run of $\mathcal{R}_1$ on $\alpha$. Then we can define a sequence $v_0,v_1, \ldots$ with $v_i = \host(q_i,d_i)$, and there must be a longest eventually stable prefix $v$ in this sequence. An inductive argument can then be exploited to show that, once this prefix $v$ is henceforth stable, the index $v$ cannot be rejecting. The assumption that there is a point in time after which $v$ is stable but never again accepting then leads to a contradiction. Once a transition $(q_i,\alpha(i),q_{i+1})$ is accepting, $q_{i+1} \in l_{i+1}(vc)$ for some $c \in \omega$ and for $d_{i+1}=(\T_{i+1},l_{i+1})$. As $v$ is never again accepting \emph{or} rejecting, we can show for all $j>i$ that, if $q_j \in l_j(vc_j)$, then $q_{j+1} \in l_{j+1}(vc_{j+1})$ for some $c_{j+1} \leq c_j$. This monotonicity leads to a contradiction with the assumption that $v$ is the \emph{longest} stable prefix. \end{proofidea} \begin{proof} We fix an accepting run $\rho=q_{0}q_{1} \ldots$ of $\mathcal{R}_1$ on an input word $\alpha$, and let $\rho_{\mathcal D_{1}}=d_{0}d_{1}\ldots$ be the run of $\mathcal D_{1}$ on $\alpha$. We then define the related sequence of host nodes $\vartheta=v_{0}v_{1}v_{2}\ldots=\host(q_0,d_0)\host(q_1,d_1)\host(q_2,d_2)\ldots$. Let $s=\liminf_{n\rightarrow\infty} |v_{n}|$ be the shortest length of these nodes that occurs infinitely often.
Note that the root cannot be the host node of any state, as it is always labelled by the union of the labels of its children. We follow the run and argue that the initial sequence of length $s$ of the nodes in $\vartheta$ eventually stabilises. Let $i_{0} < i_{1} < i_{2} <\ldots$ be an infinite ascending chain of indices such that \begin{enumerate} \item $(q_j,\alpha(j),q_{j+1})\in T \smallsetminus R$ is a neutral or accepting transition for all $j\geq i_0$, \item the length $|v_{j}|$ of the $j$-th node is not smaller than $s$ for all $j \geq i_{0}$, and \item the length $|v_{j}|$ is equal to $s$ for all indices $j \in \{i_{0},i_{1},i_{2},\ldots\}$ in this chain. \end{enumerate} This implies that $v_{i_{0}}, v_{i_{1}}, v_{i_{2}}, \ldots$ is a descending chain when the individual nodes $v_{i}$ are compared in lexicographic order. As the domain is finite, almost all elements of the descending chain are equal; we call this node $\pi$. In particular, $\pi \in J$ is eventually always stable. Let us assume for contradiction that this stable prefix $\pi$ is accepting only finitely many times. We choose an index $i$ from the chain $i_0 < i_1 < i_2 <\ldots$ such that \begin{enumerate} \item $\pi$ is stable for all $j\geq i$ and \item $\pi$ is not accepting for any $j \geq i$. \end{enumerate} Note that $\pi$ is the host of $q_i$ for $d_i$, and $q_j \in l_j(\pi)$ holds for all $j\geq i$. As $\rho$ is accepting, there is a smallest index $j>i$ such that $(q_{j-1},\alpha(j-1),q_j)\in A$. Now, as $\pi$ is stable but not accepting for all $k\geq i$ (and hence for all $k \geq j$), $q_k$ must henceforth be in the label of a child of $\pi$ in $d_k$, which contradicts the assumption that infinitely many nodes in $\vartheta$ have length $s = |\pi|$. Thus, $\pi$ is eventually always stable and always eventually accepting.
\end{proof} \begin{lemma} \label{lemma:determ2} [$L(\mathcal D_1) \subseteq L(\mathcal{R}_1)$] If there is a node $v\in J$ that is eventually always stable and always eventually accepting in the run of $\mathcal D_1$ on an $\omega$-word $\alpha$, then there is an accepting run of $\mathcal{R}_1$ on $\alpha$. \end{lemma} \paragraph*{Notation.} For an $\omega$-word $\alpha$ and $j\geq i$, we denote with $\alpha[i,j[$ the word $\alpha(i)\alpha(i+1)\alpha(i+2)\ldots\alpha(j-1)$. For a finite word $\alpha=\alpha_1\ldots\alpha_{j-1}$, we denote with $Q_1 \rightarrow^\alpha Q_2$ that there is, for all $q_j\in Q_2$, a sequence $q_1\ldots q_j$ with $q_1 \in Q_1$ and $(q_i,\alpha_i,q_{i+1})\in T$ for all $1\leq i <j$. If, for all $q_j \in Q_2$, there is such a sequence that contains a transition in $A$ but no transition in $R$, we write $Q_1 \Rightarrow^\alpha Q_2$. \begin{proofidea} For the run $d_0 d_1 d_2 \ldots$ of $\mathcal D_1$ on $\alpha$, we fix an ascending chain $1 < i_0 < i_1 < i_2 < \ldots$ of indices, such that $v$ is not rejecting in any transition $(d_{j-1},\alpha(j-1),d_j)$ for $j \geq i_0$ and such that $(d_{i_j-1},\alpha(i_j-1),d_{i_j}) \in A_v$ for all $j \geq 0$. The \emph{proof idea} is the usual way of building a tree of initial sequences of runs: we build a tree of initial sequences of runs of $\mathcal{R}_1$ that contains a sequence $q_0q_1q_2\ldots q_{i_j}$ for any $j\in \omega$ iff \begin{itemize} \item $(q_i,\alpha(i),q_{i+1}) \in T$ is a transition of $\mathcal{R}_1$ for all $i < i_j$, \item $(q_i,\alpha(i),q_{i+1}) \notin R$ is not rejecting for all $i \geq i_0 - 1$, and \item for all $k<j$ there is an $i\in [i_k,i_{k+1}[$ such that $(q_i,\alpha(i),q_{i+1}) \in A$ is an accepting transition. \end{itemize} This infinite tree has an infinite branch by K\"onig's Lemma. By construction, this branch is an accepting run of $\mathcal{R}_1$ on $\alpha$. \end{proofidea} \begin{proof} Let $\alpha \in L(\mathcal D_1)$.
Then there is a $v$ that is eventually always stable and always eventually accepting in the run $\rho_{\mathcal D_{1}}$ of $\mathcal D_{1}$ on $\alpha$. We pick such a $v$. Let $1 < i_{0}<i_{1}<i_{2}<\ldots$ be an infinite ascending chain of indices such that \begin{itemize} \item $v$ is stable for all transitions $(d_{j-1},\alpha(j-1),d_{j})$ with $j\geq i_0$, and \item the chain $i_{0}<i_{1}<i_{2}<\ldots$ contains exactly those indices $i\geq i_0$ such that $(d_{i-1},\alpha(i-1),d_{i})$ is accepting. \end{itemize} Let $d_i=(\T_i,l_i)$ for all $i \in \omega$. By construction, we have \begin{itemize} \item $I \rightarrow^{\alpha[0,i_0[} l_{i_0}(v)$, and \item $l_{i_j}(v) \Rightarrow^{\alpha[i_j,i_{j+1}[} l_{i_{j+1}}(v)$ for all $j \in \omega$. \end{itemize} Using this observation, we can build a tree of initial sequences of runs as follows: we build a tree of initial sequences of runs of $\mathcal{R}_1$ that contains a sequence $q_0q_1q_2\ldots q_{i_j}$ for any $j\in \omega$ iff \begin{itemize} \item $(q_i,\alpha(i),q_{i+1}) \in T$ is a transition of $\mathcal{R}_1$ for all $i < i_j$, \item $(q_i,\alpha(i),q_{i+1}) \notin R$ is not rejecting for all $i \geq i_0 - 1$, and \item for all $k<j$ there is an $i\in [i_k,i_{k+1}[$ such that $(q_i,\alpha(i),q_{i+1}) \in A$ is an accepting transition. \end{itemize} By construction, this tree has the following properties: \begin{itemize} \item it is infinite, \item it is finitely branching, \item no branch contains more than $i_0$ rejecting transitions, and \item for all $j \in \omega$, a branch of length $>i_j$ contains at least $j$ accepting transitions. \end{itemize} Exploiting K\"onig's Lemma, the first two properties provide us with an infinite path, which is a run of $\mathcal{R}_1$ on $\alpha$. The last two properties then imply that this run is accepting. $\alpha$ is therefore in the language of $\mathcal{R}_1$. \end{proof} \begin{corollary} \label{Rabineq} $L(\mathcal{R}_1) = L(\mathcal D_1)$.
\end{corollary} \subsubsection*{Estimation of Root History Trees} Let $\#\mathsf{ht}(n)$ and $\#\mathsf{rht}(n)$ be the number of history trees and RHTs, respectively, over sets with $n$ states. First, $\#\mathsf{rht}(n) \geq \#\mathsf{ht}(n)$ holds, because the sub-tree rooted in $0$ of an RHT is a history tree. Second, $\#\mathsf{ht}(n+1) \geq \#\mathsf{rht}(n)$, because adding the additional state to $l(\varepsilon)$ turns an RHT into a history tree. With an estimation similar to that of history trees \cite{Schewe/09/determinise}, we get: \begin{theorem} \label{theo:rht} $\inf\big\{c \mid \#\mathsf{rht}(n) \in O\big((cn)^n\big)\big\} = \inf\big\{c \mid \#\mathsf{ht}(n) \in O\big((cn)^n\big)\big\} \approx 1.65$. \end{theorem} In \cite{Schewe/09/determinise} it was shown that $\#\mathsf{ht}(n)$ grows at a speed such that $\inf\big\{c \mid \#\mathsf{ht}(n) \in O\big((cn)^n\big)\big\} \approx 1.65$. We argue that $\#\mathsf{rht}(n)$ not only grows at the same speed; there is even only a small constant factor between $\#\mathsf{ht}(n)$ and $\#\mathsf{rht}(n)$. First, there is obviously a bijection between RHTs over $Q$ and the subset of history trees over $Q \cup \{q_d\}$, where $q_d \notin Q$ is a fresh dummy state, and $q_d$ is the only state that is hosted by the root. We estimate this size by the number of history trees, where $q_d$ is hosted by the root $\varepsilon$ of the history tree. To keep the estimation simple, note that the share of history trees with $< \frac{1}{3} n$ nodes diminishes to $0$, as the number of trees with $n$ nodes grows much faster than the number of trees with $<\frac{1}{3}n$ nodes, and the number of functions from $[n]$ onto $[n]$, $n!$, grows much faster than the number of functions from $[n]$ to $[\frac{1}{3}n]$. So we can assume for our estimation that the tree has at least $\frac{1}{3}n$ nodes, such that the share of trees where $q_d$ is in the root is less than $\frac{3}{n}$.
$\frac{\#\mathsf{ht}(n+1)}{n \#\mathsf{ht}(n)}$ behaves like $(1+\frac{c}{n})^n$ for $c \approx 1.65$ and thus converges to $e^c$. Thus, we get the following estimation: $\lim_{n \rightarrow \infty} \frac{\#\mathsf{rht}(n)}{\#\mathsf{ht}(n)} \leq \lim_{n \rightarrow \infty} 3\frac{\#\mathsf{ht}(n+1)}{n \#\mathsf{ht}(n)} = 3 e^c$. \subsection{Determinising parity automata} \label{paritydet} Having outlined a determinisation construction for one-pair Rabin automata using root history trees, we proceed to define \emph{nested history trees} (NHTs), the data structure we use for determinising parity automata. We assume that we have a parity automaton $\mathcal P = (Q,\Sigma,I,T,\pri:T \rightarrow [c])$, and we select $e = 2\lfloor 0.5 c \rfloor$. A \emph{nested history tree} is a triple $(\T,l,\lambda)$, where $\T$ is a finite, prefix-closed set of finite sequences over $\omega \cup \{\mathfrak s\}$, and $\mathfrak s$ (for \emph{stepchild}) is a special symbol. We call a child $v\mathfrak s$ of a node $v$ its \emph{stepchild}, and we refer to all other children $vc$, $c\in \omega$, of $v$ as its \emph{natural children}. We call $l(v)$ the label of the node $v \in \T$, and $\lambda(v)$ its \emph{level}. A node $v \neq \varepsilon$ is called a \emph{Rabin root} iff it ends in $\mathfrak s$. The root $\varepsilon$ is called a Rabin root iff $c>e$. A node $v \in \T$ is called a \emph{base node} iff it is not a Rabin root and $\lambda(v) = 2$. The set of base nodes is denoted $\mathsf{base}(\T)$. \begin{itemize} \item The label $l(v)$ of each node $v \neq \varepsilon$ is a subset of the label of its predecessor: $l(v)\subseteq l(\pred(v))$ holds for all $\varepsilon \neq v \in \T$. \item The intersection of the labels of two siblings is disjoint: $\forall v,v' {\in} \T.\ v {\neq} v' \wedge \pred(v) {=} \pred(v') \Rightarrow l(v) {\cap} l(v') = \emptyset$.
\item For all \emph{base nodes}, the union of the labels of all children is \emph{strictly} contained in the label of the node: $\forall v {\in} \mathsf{base}(\T)\ \exists q {\in} l(v)\ \forall v' {\in} \T.\ v{=}\pred(v') \Rightarrow q {\notin} l(v')$. \item A node $v\in \T$ has a stepchild iff $v$ is neither a base node nor a Rabin root. \item The label of a non-base node \emph{equals} the union of its children's labels: \hfill $\forall v {\in} \T\smallsetminus \mathsf{base}(\T)$, $l(v) = \{q \in l(v') \mid v' \in \T \mbox{ and } v =\pred(v')\}$ holds. \item The level of the root is $\lambda(\varepsilon)=e$. \item The level of a stepchild is 2 smaller than the level of its parent: \hfill for all $v\mathfrak{s} \in \T$, $\lambda(v\mathfrak{s}) = \lambda(v)-2$ holds. \item The level of all other children equals the level of their parent: for all $i \in \omega$ and $vi \in \T$, $\lambda(vi) = \lambda(v)$ holds. \end{itemize} While the definition sounds rather involved, it is (for odd $c$) a nesting of RHTs. Indeed, for $c = 3$, we simply get the RHTs, and $\lambda$ is the constant function with value $2$. For odd $c >3$, the tree that remains after removing all nodes that contain an $\mathfrak{s}$ somewhere in the sequence again resembles an RHT, while the sub-trees rooted in a node $v\mathfrak{s}$ such that $v$ does not contain an $\mathfrak{s}$ resemble NHTs whose root has level $c-3$. The transition mechanism from the previous subsection is adjusted accordingly. For each level $a$ (note that levels are always even), we define three sets of transitions for the parity automaton $\mathcal P$: the rejecting transitions $R_a = \{t \in T \mid \pri(t)>a$ and $\pri(t)$ is odd$\}$; the accepting transitions $A_a = \{t \in T \mid \pri(t) \geq a$ and $\pri(t)$ is even$\}$, and the (at least) neutral transitions, $N_a = T \smallsetminus R_a$.
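The classification into $R_a$, $A_a$, and $N_a$ is a simple filter over the priority function. The following Python sketch (with a hypothetical encoding of transitions as keys of a priority dictionary) illustrates it:

```python
# Sketch: splitting the transitions of a parity automaton by level a
# (a is even). pri maps each transition to its priority in [c].

def classify(pri, a):
    R = {t for t, p in pri.items() if p > a and p % 2 == 1}   # rejecting
    A = {t for t, p in pri.items() if p >= a and p % 2 == 0}  # accepting
    N = set(pri) - R                                          # (at least) neutral
    return A, R, N

pri = {"t1": 1, "t2": 2, "t3": 3, "t4": 4}
A, R, N = classify(pri, 2)
assert R == {"t3"}              # odd priorities above level 2
assert A == {"t2", "t4"}        # even priorities at least 2
assert N == {"t1", "t2", "t4"}  # everything that is not rejecting
```

Note that, as in the definition, an accepting transition is in particular neutral, and a transition with a low odd priority (here `t1`) is neutral for level $2$ but would be rejecting for level $0$.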
\paragraph*{Construction.} Let $\mathcal{P} = \big(P,\Sigma,I,T,\pri:T \rightarrow [c]\big)$ be a nondeterministic parity automaton with $|P|=n$ states. We construct a language equivalent deterministic Rabin automaton $\mathcal{DR}=(D,\Sigma,d_{0},\Delta,\{(A_i,R_i) \mid i \in J\})$ where \begin{itemize} \item $D$ is the set of NHTs over $P$ (i.e., with $l(\varepsilon) \subseteq P$) whose root has level $e$, where $e=c$ if $c$ is even, and $e=c-1$ if $c$ is odd, \item $d_{0}$ is the NHT we obtain by starting with $(\{\varepsilon\},\ l:\varepsilon \mapsto I,\ \lambda: \varepsilon \mapsto e)$, and performing Step 7 from the transition construction until an NHT is produced, \item $J$ is the set of nodes $v$ that occur in some NHT of level $e$ over $P$, and \item for every tree $d\in D$ and letter $\sigma\in\Sigma$, the transition $d'=\Delta(d,\sigma)$ is the result of applying the sequence of transformations described below. \end{itemize} \paragraph*{\textbf{Transition mechanism for determinising parity automata.}} \label{TransMechParity} Note that we do not define the update of $\lambda$ explicitly, but reuse $\lambda$. This can be done because the level of the root always remains $\lambda(\varepsilon)=e$; the level $\lambda(v)$ of all nodes $v$ is therefore defined by the number of $\mathfrak{s}$ occurring in $v$. Likewise, the property of $v$ being a base node or a Rabin root is, for a given $c$, a property of $v$ and independent of the labelling function. Starting from an NHT $d=(\T,l,\lambda)$, we define the transitions $\Delta: (d,\sigma) \mapsto d'$ as follows: \begin{enumerate} \item \emph{Update of node labels (subset constructions):} For the root, we continue to use $l_1(\varepsilon)=\{q' \in Q \mid \exists q \in l(\varepsilon).\ (q,\sigma,q') \in T\}$. For other nodes $v \in \T$ that are \emph{not Rabin roots}, we use $l_1(v)=\{q' \in Q \mid \exists q \in l(v).\ (q,\sigma,q') \in N_{\lambda(v)}\}$.
For the remaining Rabin roots $v\mathfrak{s}\in \T$, we use $l_1(v\mathfrak{s})=\{q' \in Q \mid \exists q \in l(v\mathfrak{s}).\ (q,\sigma,q') \in N_{\lambda(v)}\}$. That is, we use the neutral transitions of the higher level of the \emph{parent} of the Rabin root. \item \emph{Splitting of run threads / spawning new children.} In this step, we spawn new children for every node in the NHT. For nodes $v \in \T$ that are not Rabin roots, we spawn a child labelled with the set of states reachable through accepting transitions. For a Rabin root $v \in \T$, we spawn a new child labelled like the Rabin root itself. Thus, for every node $v\in \T$ that is not a Rabin root and has $c$ \emph{natural} children, we spawn a new child $vc$ and expand $l_1$ to $vc$ by assigning $l_1: vc \mapsto \{q \in Q \mid \exists q' \in l(v).\ (q',\sigma,q) \in A_{\lambda(v)}\}$. If a Rabin root $v$ has $c$ natural children, we spawn a new child $vc$ of the Rabin root $v$ and expand $l_1$ to $vc$ by assigning $l_1: vc \mapsto l_1(v)$. We use $\T_n$ to denote the extended tree that includes the new children. \item \emph{Removing states from labels -- horizontal pruning.} We obtain a function $l_2$ from $l_1$ by removing, for every node $v$ with label $l_1(v)=Q'$ and all states $q\in Q'$, $q$ from the labels of all younger siblings of $v$ and all of their descendants. Stepchildren are always treated as the \emph{youngest} sibling, irrespective of the order of creation. \item \emph{Identifying breakpoints -- vertical pruning.} We denote with $\T_e\subseteq \T_n$ the set of all nodes $v\neq \varepsilon$ whose label $l_2(v)$ is now equal to the union of the labels of its \emph{natural} children. We obtain $\T_v$ from $\T_n$ by removing all descendants of nodes in $\T_e$, and restrict the domain of $l_2$ accordingly. Nodes in $\T_v \cap \T_e$ represent the breakpoints reached during a run and are called \emph{accepting}.
That is, the transition of $\mathcal{DR}$ will be in $A_v$ for exactly the $v \in \T_v \cap \T_e$. Note that Rabin roots cannot be accepting. \item \emph{Removing nodes with empty label.} We denote with $\T_r = \{v \in \T_v \mid l_2(v) \neq \emptyset\}$ the subtree of $\T_v$ that consists of the nodes with non-empty label and restrict the domain of $l_2$ accordingly. \item \emph{Reordering.} To repair the orderedness, we call $\|v\| = |\os(v) \cap \T_r|$ the number of (still existing) older siblings of $v$, and map $v= n_1\ldots n_j$ to $v' = \|n_1\|\ \|n_1n_2\|\ \|n_1n_2n_3\|\ldots \|v\|$, denoted $\rename(v)$. For $\T_o=\rename(\T_r)$, we update a pair $(\T_r,l_2)$ from Step 5 to $d'=\big(\T_o, l'\big)$ with $l': \rename(v) \mapsto l_2(v)$. We call a node $v\in \T_o\cap \T$ \emph{stable} if $v=\rename(v)$, and we call all nodes in $J$ \emph{rejecting} if they are not stable. That is, the transition will be in $R_v$ exactly for those $v \in J$, such that $v$ is not a stable node in $\T \cap \T_o$. \item \emph{Repairing nestedness.} We initialise $\T'$ to $\T_o$ and then recursively add, \begin{itemize} \item for Rabin roots $v$ without children, a child $v0$ to $\T'$, expanding $l'$ by assigning $l': v0 \mapsto l'(v)$, and \item for nodes $v$ without children that are neither Rabin roots nor base nodes, a child $v\mathfrak{s}$ to $\T'$, expanding $l'$ by assigning $l': v\mathfrak{s} \mapsto l'(v)$, \end{itemize} until we have constructed an NHT $d'=(\T',l',\lambda')$. \end{enumerate} \begin{lemma} $L(\mathcal{P}) \subseteq L(\mathcal{DR})$ \label{deterp1} \end{lemma} \paragraph*{Notation.} For a state $q$ of $\mathcal{P}$, an NHT $d=(\T,l,\lambda)$ and an even number $a \leq e$, we call a node $v'$ the \emph{$a$ host node of $q$}, denoted $\host_a(q,d)$, if $q \in l(v')$, but not in $l(v'c)$ for any natural child $v'c$ of $v'$, and $\lambda(v')=a$.
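The $a$ host node of $q$ is thus a node of level $a$ whose label contains $q$ while the label of none of its natural children does. A Python sketch (a hypothetical encoding of our own: nodes as tuples over naturals and the marker `"s"` for $\mathfrak{s}$, with the level of a node determined by its number of `"s"` entries and the root level $e$) may help:

```python
# Sketch of host_a(q, d): find a node of level a whose label contains q,
# while no natural child's label does. Encoding is hypothetical: nodes
# are tuples over ints and "s"; tree maps nodes to labels (sets).

def level(v, e):
    # Each stepchild step ("s") decreases the level by 2.
    return e - 2 * sum(1 for x in v if x == "s")

def host(q, tree, a, e):
    for v, label in tree.items():
        if level(v, e) != a or q not in label:
            continue
        natural = [w for w in tree
                   if len(w) == len(v) + 1 and w[:-1] == v and w[-1] != "s"]
        if all(q not in tree[w] for w in natural):
            return v
    return None

# Level-2 tree (e = 2): q is in the labels of the root and of node (0,),
# but in no natural child of (0,), so (0,) is the 2 host node of q.
tree = {(): {"q", "p"}, (0,): {"q", "p"}, (0, 0): {"p"}}
assert host("q", tree, 2, 2) == (0,)
```

Consistent with the remark in the proof below, the root of this example cannot host `q`, because `q` also appears in the label of one of its children.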
Let $\rho = q_0 q_1 q_2 \ldots$ be an accepting run of $\mathcal{P}$ with even $a = \limsup_{i \rightarrow \infty}\pri\big(q_i,\alpha(i),q_{i+1}\big)$ on an $\omega$-word $\alpha$, let $d_0d_1d_2\ldots$ be the run of $\mathcal{DR}$ on $\alpha$, and let $v_i=\host_a(q_i,d_i)$ for all $i\in \omega$. \begin{proofidea}The core idea of the proof is again that the state of each accepting run is eventually `trapped' in a maximal initial sequence $v$ of $a$ hosts, with the additional constraint that neither $v$ nor any of its ancestors are infinitely often rejecting, and the transitions of the run of $\mathcal P$ are henceforth in $N_a$. We show by contradiction that $v$ is accepting infinitely often. For $\lambda(v)=a$, the proof is essentially the same as for one-pair Rabin determinisation. For $\lambda(v)>a$, the proof is altered by a case distinction, where one case assumes that, for some index $i >0$ such that, for all $j \geq i$, $v$ is a prefix of $v_j$, $(q_{j-1},\alpha(j-1),q_j)\in N_a$, and $(d_{j-1},\alpha(j-1),d_j)\notin R_v \cup A_v$, $q_i$ is in the label of a natural child $vc$ of $v$. This provides the induction basis -- in the one-pair Rabin case, the basis is provided through the accepting transition of the one-pair Rabin automaton, and we have no corresponding transition with even priority $\geq \lambda(v)$ -- by definition. If no such $i$ exists, we choose an $i$ that satisfies the above requirements except for the requirement that $q_i$ is in the label of a natural child $vc$ of $v$. We can then infer that the label of $v\mathfrak{s}$ also henceforth contains $q_i$. As a Rabin root whose parent is not accepting or rejecting, $v\mathfrak{s}$ is not rejecting either. \end{proofidea} \begin{proof} We fix an accepting run $\rho=q_{0}q_{1} \ldots$ of $\mathcal{P}$ on an input word $\alpha$, and use $a = \limsup_{i\rightarrow \infty}\pri\big(q_i,\alpha(i),q_{i+1}\big)$ to refer to the dominating even priority of its transitions $\overline{\rho}$.
We also let $\rho_{\mathcal DR}=d_{0}d_{1}\ldots$ be the run of $\mathcal DR$ on $\alpha$. We then define the related sequence of host nodes $\vartheta=v_{0}v_{1}v_{2}\ldots=\host_a(q_0,d_0)\host_a(q_1,d_1)\host_a(q_2,d_2)\ldots$. Note that a Rabin root cannot be the $a$-host node of any state, as it is always labelled by the union of the labels of its children, and its children have the same level as the Rabin root itself. Let \begin{itemize} \item $v'$ be the longest sequence that is an initial sequence of almost all $v_i$, and \item $v$ the longest initial sequence of $v'$, such that, for no initial sequence $v''$ of $v$ (including $v$ itself), infinitely many transitions $(d_i,\alpha(i),d_{i+1})$ are in $R_{v''}$. \end{itemize} We first observe that such a node $v$ exists: as $q_i \in l_i(\varepsilon)$ for $d_i=(\T_i,l_i,\lambda_i)$ for all $i \in \omega$, $\varepsilon$ satisfies all requirements except for maximality, such that a maximal element $v$ exists. We now distinguish two cases. `$a = \lambda(v)$': The first case is that the level of the node $v$ equals the dominating priority of $\overline{\rho}$. For this case, we can argue as in the one-pair Rabin case: if the transitions are infinitely often in the set $A_v$ of $\mathcal{DR}$, then $\rho_{\mathcal{DR}}$ is accepting. Otherwise we choose a point $i \in \omega$ with the following properties: \begin{itemize} \item for all $j \geq i$, $(q_j,\alpha(j),q_{j+1}) \in N_a$, \item for all $j \geq i$ and all initial sequences $w$ of $v$, $(d_j,\alpha(j),d_{j+1}) \notin R_w$, \item for all $j \geq i$, $(d_j,\alpha(j),d_{j+1}) \notin A_v$, and \item $\pri(q_i,\alpha(i),q_{i+1})=a$. \end{itemize} We can now build a simple inductive argument with the following ingredients. \begin{description} \item[Induction basis:] $ $ \newline There is a $k \in \omega$ such that $q_{i+1} \in l_{i+1}(vk)$.
The induction basis holds as the transition $(q_i,\alpha(i),q_{i+1})$ is in $A_a$ and the node $v$ is stable and non-accepting in $(d_i,\alpha(i),d_{i+1})$. \item[Induction step:] $ $ \newline if, for some $k \in \omega$ and $j > i$, $q_{j} \in l_{j}(vk)$, then \begin{itemize} \item there is a $k'\leq k$ such that $q_{j+1} \in l_{j+1}(vk')$, and \item if $k=k'$ then $(d_j,\alpha(j),d_{j+1}) \notin R_{vk}$. \end{itemize} To see this, $q_{j+1}$ is added to the `$l_1(vk)$' from Step 1 of the transition mechanism of the transition $(d_j,\alpha(j),d_{j+1})$. As $v$ is stable but not accepting, the only two reasons for $q_{j+1} \notin l_{j+1}(vk)$ are that \begin{itemize} \item there is, for some $k'' < k$, a $q \in l_j(vk'')$ with $(q,\alpha(j),q_{j+1}) \in N_{\lambda(v)}$ (note that $\lambda(v) = \lambda(vk) = \lambda(vk'') = a$), or \item for some $k'' < k$, the node $vk''$ is removed in Step 5 of the transition mechanism of the transition $(d_j,\alpha(j),d_{j+1})$. \end{itemize} In both cases (and their combination), we have $k' < k$. If neither is the case, then $(d_j,\alpha(j),d_{j+1}) \notin R_{vk}$ (as $\rename(vk)=vk$ holds in the transition mechanism). \end{description} The position $k \in \omega$ of the child $vk$ with $q_j \in l(vk)$ can thus only be decreased finitely many times (and $\lambda(vk)=a$ for all $k \in \omega$). For some $k\in \omega$, $vk$ is therefore a prefix of almost all $v_i$ of $\vartheta$. Once stable, it is henceforth no longer rejecting. This contradicts the assumption that $v$ is the longest such sequence. `$a < \lambda(v)$': The second case is that the level of $v$ is strictly greater than the dominating priority of $\overline{\rho}$. We argue along similar lines. If the transitions are infinitely often in the set $A_v$ of $\mathcal{DR}$, then $\rho_{\mathcal{DR}}$ is accepting.
Otherwise we choose a point $i \in \omega$ with the following properties: \begin{itemize} \item for all $j \geq i$, $(q_j,\alpha(j),q_{j+1}) \in N_a$, \item for all $j \geq i$ and all initial sequences $w$ of $v$, $(d_j,\alpha(j),d_{j+1}) \notin R_w$, and \item for all $j \geq i$, $(d_j,\alpha(j),d_{j+1}) \notin A_v$. \end{itemize} The difference to the previous argument is that the last prerequisite, `$\pri(q_i,\alpha(i),q_{i+1})=a$', no longer holds. This was used for the induction basis. We replace it by a distinction of two sub-cases. The first one is that we do have an induction basis: we can choose the $i$ such that there is a $k \in \omega$ with $q_{i+1} \in l_{i+1}(vk)$. The rest of the argument can be copied for this case: \begin{description} \item[Induction step:] $ $ \newline if, for some $k \in \omega$ and $j > i$, $q_{j} \in l_{j}(vk)$, then \begin{itemize} \item there is a $k'\leq k$ such that $q_{j+1} \in l_{j+1}(vk')$, and \item if $k=k'$ then $(d_j,\alpha(j),d_{j+1}) \notin R_{vk}$. \end{itemize} To see this, $q_{j+1}$ is added to the `$l_1(vk)$' from Step 1 of the transition mechanism of the transition $(d_j,\alpha(j),d_{j+1})$. As $v$ is stable but not accepting, the only two reasons for $q_{j+1} \notin l_{j+1}(vk)$ are that \begin{itemize} \item there is, for some $k'' < k$, a $q \in l_j(vk'')$ with $(q,\alpha(j),q_{j+1}) \in N_{\lambda(v)}$ (note that $\lambda(v) = \lambda(vk) = \lambda(vk'')$), or \item for some $k'' < k$, the node $vk''$ is removed in Step 5 of the transition mechanism of the transition $(d_j,\alpha(j),d_{j+1})$. \end{itemize} In both cases (and their combination), we have $k' < k$. If neither is the case, then $(d_j,\alpha(j),d_{j+1}) \notin R_{vk}$ (as $\rename(vk)=vk$ holds in the transition mechanism). \end{description} The position $k \in \omega$ of the child $vk$ with $q_j \in l(vk)$ can thus only be decreased finitely many times (and $\lambda(vk)=\lambda(v)$ for all $k \in \omega$).
For some $k\in \omega$, $vk$ is therefore a prefix of almost all $v_i$ of $\vartheta$. Once stable, it is henceforth no longer rejecting. This contradicts the assumption that $v$ is the longest such sequence. The other sub-case is that no such $i$ exists. We then choose $i$ such that the remaining conditions are met. As $\lambda(v)>a \geq 2$ holds, the union of the labels of the children of $v$ must be the same as the label of $v$. Consequently, we have $q_j \in l(v\mathfrak{s})$ for all $j > i$. It remains to show that $v\mathfrak{s}$ is not rejecting infinitely often. But the only ways a Rabin root can be rejecting are that its parent node is accepting (the breakpoint of Step 4 from the transition mechanism) or not stable (Step 5 with Step 3, removing states from the label that occur in younger siblings) in a transition. Both are excluded by the choice of $i$. Finally, we note that, for all $v_i$ in $\vartheta$, $\lambda(v_i) = a$ holds by construction. Consequently, $\lambda(v_i') \geq a$ holds for all initial sequences $v_i'$ of $v_i$. In particular, we have $\lambda(v) \geq a$, such that the above case distinction is complete. \end{proof} \begin{lemma} $L(\mathcal{DR}) \subseteq L(\mathcal{P})$ \label{deterp2} \end{lemma} The proof of this lemma is essentially the proof of Lemma \ref{lemma:determ2} where, for the priority $a=\lambda(v)$ chosen to be the level of the accepting index $v$, $A_a$ takes the role of the accepting set $A$ from the one-pair Rabin automaton. \paragraph*{Notation.} We denote with $Q_1 \Rightarrow^\alpha_a Q_2$ for a finite word $\alpha=\alpha_1\ldots\alpha_{j-1}$ that there is, for all $q_j\in Q_2$, a sequence $q_1\ldots q_j$ with \begin{itemize} \item $q_1 \in Q_1$, \item $(q_i,\alpha_i,q_{i+1})\in N_a$ for all $1\leq i <j$, and \item $(q_i,\alpha_i,q_{i+1})\in A_a$ for some $1\leq i <j$. \end{itemize} \begin{proof} Let $\alpha \in L(\mathcal{DR})$.
Then there is a $v$ that is eventually always stable and always eventually accepting in the run $\rho_{\mathcal DR}$ of $\mathcal DR$ on $\alpha$. We pick such a $v$. Let $1 < i_{0}<i_{1}<i_{2}<\ldots$ be an infinite ascending chain of indices such that \begin{itemize} \item $v$ is stable for all transitions $(d_{j-1},\alpha(j-1),d_{j})$ with $j\geq i_0$, and \item the chain $ i_{0}<i_{1}<i_{2}<\ldots$ contains exactly those indices $i\geq i_0$ such that $(d_{i-1},\alpha(i-1),d_{i})$ is accepting. \end{itemize} Let $d_i=(\T_i,l_i,\lambda_i)$ for all $i \in \omega$. By construction, we have \begin{itemize} \item $I \rightarrow^{\alpha[0,i_0[} l_{i_0}(v)$, and \item $l_{i_j}(v) \Rightarrow^{\alpha[i_j,i_{j+1}[}_a l_{i_{j+1}}(v)$. \end{itemize} Using this observation, we can build a tree of initial sequences of runs of $\mathcal P$ that contains a sequence $q_0q_1q_2\ldots q_{i_j}$ for any $j\in \omega$ iff \begin{itemize} \item $(q_i,\alpha(i),q_{i+1}) \in T$ is a transition of $\mathcal P$ for all $i < i_j$, \item $(q_i,\alpha(i),q_{i+1}) \in N_a$ is not rejecting for all $i \geq i_0 - 1$, and \item for all $k<j$ there is an $i\in [i_k,i_{k+1}[$ such that $(q_i,\alpha(i),q_{i+1}) \in A_a$ is an accepting transition. \end{itemize} By construction, this tree has the following properties: \begin{itemize} \item it is infinite, \item it is finitely branching, \item no branch contains more than $i_0$ transitions with odd priority $>a$, and \item for all $j \in \omega$, a branch of length $>i_j$ contains at least $j$ transitions with even priority $\geq a$. \end{itemize} Exploiting K\"onig's lemma, the first two properties provide us with an infinite path, which is a run of $\mathcal P$ on $\alpha$. The last two properties then imply that this run is accepting. $\alpha$ is therefore in the language of $\mathcal P$. \end{proof} \begin{corollary} \label{parityeq} $L(\mathcal{P}) = L(\mathcal{DR})$.
\end{corollary} \subsection{Determinising to a deterministic parity automaton $\mathcal D$} \label{subs:det2parity} Deterministic parity automata are an attractive target when determinising parity or one-pair Rabin automata, given that algorithms that solve parity games (e.g., for acceptance games of alternating and emptiness games of nondeterministic parity tree automata) have a lower complexity than those that solve Rabin games. For \buchi\ and Streett automata, determinisation to parity automata was first shown by Piterman in \cite{Piterman/07/Parity}. For applications that involve co-determinisation, the parity condition also avoids the intermediate Streett condition. Safra's determinisation construction (and its younger variants) intuitively enforces a parity-like order on the nodes of history trees. By storing the order in which nodes are introduced during the construction, we can capture the Index Appearance Records construction that is traditionally used to convert Rabin or Streett automata to parity automata. To achieve this, we augment the states of the deterministic automaton (RHTs or NHTs) with a \emph{later introduction record} (LIR), an abstraction of the order in which the non-Rabin nodes of the ordered trees are introduced. (As Rabin roots are but redundant information, they are omitted in this representation.) For an ordered tree $\mathcal{T}$ with $m$ nodes that are not Rabin roots, an LIR is a sequence $v_{1}, v_{2},\dots v_{m}$ that contains the nodes of $\mathcal{T}$ that are not Rabin roots, such that each node appears after its ancestors and older siblings. For convenience in the lower bound proof, we represent a node $v \in \T$ of an NHT $d=(\T,l,\lambda)$ in the LIR by a triple $(S_v,c_v,P_v)$, where $S_v = l(v)$ is the label of $v$, $c_v = \lambda(v)$ is the level of $v$, and $P_v = \{q \in Q \mid v=\host_{c_v}(q,d)\}$ is the set of states whose $c_v$-host node is $v$. The node $v$ itself can be reconstructed from the order of the triples and their levels.
We call the possible sequences of these triples \emph{LIR-NHTs}. Obviously, each LIR-NHT defines an NHT, but not the other way round. A finite sequence $(S_1,c_1,P_1) (S_2,c_2,P_2)(S_3,c_3,P_3) \ldots \linebreak[2] (S_k,c_k,P_k) $ of triples is an LIR-NHT if it satisfies the following requirements for all $i \in [k]$. \begin{enumerate} \item $P_i \subseteq S_i$, \item $\{P_i\} \cup \{S_j \mid j{>}i,\ c_i {=} c_j,$ and $S_j {\cap} S_i {\neq} \emptyset\}$ partitions~$S_i$. \item $\{S_j \mid j>i,\ c_i = c_j+2,$ and $S_j \cap P_i \neq \emptyset\}$ partitions $P_i$. \item If the highest priority of $\mathcal P$ is even, then $c_i = e$ implies $S_i \subseteq S_1$. (In this case, the lowest level construction is B\"uchi and the first triple always refers to the root.) \item For $c_i < e$, there is a $j<i$ with $S_i \subseteq P_j$. \end{enumerate} To define the transitions of $\mathcal D$, we work in two steps. First, we identify, for each position $i$ of a state $N = (S_1,c_1,P_1) (S_2,c_2,P_2)(S_3,c_3,P_3) \ldots$ of $\mathcal D$, the corresponding node $v_i$ of the NHT $d=(\T,l,\lambda)$. We then perform the transition $\big(d,\sigma,(\T',l',\lambda')\big)$ on this Rabin automaton for the input letter $\sigma$. We are then first interested in the set of non-rejecting nodes from this transition and their indices. These indices are moved to the left, otherwise maintaining their order. All remaining nodes of $\T'$ are added at the right, maintaining orderedness. The priority of the transition is determined by the smallest position $i$ in the sequence where the related node in the underlying tree is accepting or rejecting. It is therefore more convenient to use a min-parity condition, where the parity of $\liminf_{n \rightarrow \infty} \pri(\overline{\rho})$ determines acceptance of a run $\rho$. As this means that smaller numbers have higher priority, $\pri$ represents the opposite of a priority function, and we refer to the priority as the \emph{co-priority} for clear distinction.
If the smallest such node is rejecting, the transition has co-priority $2i-1$; if it is accepting (and not rejecting), then the transition has co-priority $2i$; and if no such node exists, then the transition has co-priority $ne+1$. \begin{lemma} \label{parityexp} Given a nondeterministic parity automaton $\mathcal{P}$ with $|P| = n$ states and maximal priority $c$, we can construct a language equivalent deterministic parity automaton $\mathcal D$ with $n e +1$ priorities for $e=2\lfloor 0.5c \rfloor$, whose states are the LIR-NHTs described above. \end{lemma} \begin{proof} We use our determinisation technique from Section~\ref{paritydet} to construct a deterministic parity automaton whose states consist of the LIR-NHTs, i.e., the NHTs augmented with the later introduction records, with the priorities on the transitions of the automaton. First, we observe that $\mathcal P$ is language equivalent to the deterministic Rabin automaton $\mathcal{DR}$ from the construction of Section \ref{paritydet} by Corollary \ref{parityeq}. Let $\alpha$ be a word in the language $L(\mathcal{DR})$ of the automaton $\mathcal{DR}$. By definition of acceptance, we have a node $v$ that is eventually always stable and always eventually accepting in the transitions of the run of $\mathcal{DR}$ on $\alpha$. Note that $v$ cannot be a Rabin root, as Rabin roots cannot be accepting. Once stable, the position of this node in the LIR is non-increasing, and it decreases exactly when a node at a smaller position is deleted. This can obviously happen only finitely many times, and the position will thus eventually stabilise at some position $p$. Moreover, all positions $\leq p$ will then be henceforth stable. Then, by our construction, it is easy to see that henceforth no transition can have a co-priority $<2p$.
At the same time, for each following transition where $v$ is accepting in the deterministic Rabin automaton, the respective transition of the run of $\mathcal D$ has a co-priority $\leq 2p$. (At some node that is represented at a position $\leq p$, an accepting or rejecting event happens.) These two observations provide, together with the fact that these co-priorities $\leq 2p$ must occur infinitely many times by the deterministic Rabin automaton being accepting, that the dominating co-priority of the run is even and $\leq 2p$. In the other direction, let $2i$ be the dominant co-priority for a run of our DPA $\mathcal D$ on a word $\alpha$. This leads to a scenario where all positions $\leq i$ eventually maintain their positions in the LIR. The respective nodes they represent remain henceforth stable in the transitions of the run of $\mathcal{DR}$ on $\alpha$, and those at positions $<i$ are henceforth neither accepting nor rejecting. Observe that all older siblings (and ancestors, except for the omitted Rabin root) of a node $v$ of an NHT are represented on a smaller position than $v$. The node corresponding to the position $i$ is always eventually accepting in the transitions of $\mathcal{DR}$ on $\alpha$, such that $\alpha$ is accepted by $\mathcal{DR}$. \end{proof} \begin{lemma} \label{parityref} The DPA resulting from determinising a one-pair Rabin automaton $\mathcal{R}_1$ has $O(n!^2)$ states, and $O\big(n!\,(n-1)!\big)$ if $\mathcal{R}_1$ is B\"uchi. \end{lemma} \begin{proof} Let $|Q| = n$ be the number of states of our nondeterministic one-pair Rabin automaton. We explicitly represent (for the sake of evaluating the state-space) the tree structure of an RHT/LIR pair with $m$ nodes by a sequence of $m-1$ integers $i_2,i_3, \dots i_m$ such that $i_j$ points to the position $<j$ of the parent of the node $v_j$ in the LIR $v_{1}, v_{2},\dots v_{m}$. There are $(m-1)!$ such sequences. There is an obvious bijection between this representation of an LIR and its original definition.
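The count of $(m-1)!$ parent-pointer sequences can be checked by enumeration for small $m$. The following Python sketch is purely illustrative; the function name and the dictionary of counts are our own choices:

```python
from itertools import product
from math import factorial

def parent_pointer_sequences(m):
    """Enumerate the sequences i_2, ..., i_m with 1 <= i_j < j,
    where i_j points at the LIR position of the parent of v_j."""
    return list(product(*[range(1, j) for j in range(2, m + 1)]))

# the number of such sequences is (m-1)!, matching the count above
counts = {m: len(parent_pointer_sequences(m)) for m in range(2, 7)}
```

For instance, a tree with $m=4$ non-root-pointer positions admits $3! = 6$ parent-pointer sequences.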
Thus, for an RHT/LIR pair with $n+1$ nodes, we can have up to $n!$ such RHT/LIR pairs just by virtue of the order of introduction of the nodes. To more accurately evaluate the number of states, we have to consider the way RHTs are labelled. The root is always labelled with the complete set of reachable states. We first consider the case where the root is labelled with all $n$ states of the nondeterministic one-pair Rabin automaton $\mathcal{R}_1$. Let $t(n,m)$ denote the number of tree / later introduction record pairs for history trees with $m$ nodes and $n=|Q|$ states in the label of the root. First, $t(n,n+1) = (n!\cdot n!)$ holds: for such a tree, there can be up to $n!$ onto functions that represent the labelling of states of the deterministic automaton and $n!$ RHTs augmented with LIRs. For every $m\leq (n+1)$, a coarse estimation\footnote{If we connect functions by letting a function $g$ from $Q$ onto $\{1,\ldots, m-1\}$ be the successor of a function $f$ from $Q$ onto $\{1,\ldots, m\}$ if there is an index $i \in \{1,\ldots, m-1\}$ such that $g(q)=i$ if $f(q)= m$ and $g(q)=f(q)$ otherwise, then the functions onto $m$ have $(m-1)$ successors, while every function onto $m-1$ has at least two predecessors. Hence, the number of labelling functions grows at most by a factor of $\frac{m-1}{2}$, while the number of ordered tree / LIR pairs is reduced by a factor of $m-1$.} provides $t(n,m-1)\leq \frac{1}{2}t(n,m)$. Hence, $\sum_{i=1}^n t(n,i)\leq 2 (n!\cdot n!)$. We next consider the case where the root is not labelled with all $n$ states of the nondeterministic one-pair Rabin automaton $\mathcal{R}_1$. Let $t'(n,m)$ denote the number of history tree / LIR pairs for such history trees with $m$ nodes for a nondeterministic one-pair Rabin automaton with $n$ states.
We have $t'(n,n) = (n-1)!n!$ and, by an argument similar to the one used in the analysis of $t$, we also have $t'(n,m-1)\leq \frac{1}{2}t'(n,m)$ for every $m \leq n$, and hence $\sum_{i=1}^{n-1} t'(n,i)\leq 2 (n-1)!n!$. Overall, the number of RHTs augmented with LIRs is $\sum_{i=1}^{n}{t(n,i)} + \sum_{i=1}^{n-1}{t'(n,i)} = O(n!^2)$. The number of states of the resulting deterministic parity automaton is $O(n!^2)$, which equates to an increase by at most a linear factor when compared with a deterministic parity automaton resulting from the determinisation of a language equivalent nondeterministic \buchi\ automaton instead of a nondeterministic one-pair Rabin automaton.\footnote{A similar estimation for the case of determinising \buchi\ automata to parity automata would result in $O\big((n-1)!n!\big)$ states, when the acceptance condition is placed on the transitions rather than the states.} \end{proof} \section{Lower Bound} In this section, we establish the optimality of our determinisation to Rabin automata, and show that our determinisation to parity automata is optimal up to a small constant factor. What is more, this lower bound extends to the more liberal Streett acceptance condition. The technique we employ is similar to \cite{Colcombet+Zdanowski/09/Buchi,Schewe+Varghese/12/GBA}, in that we use the states (for Rabin automata) or a large share of the states (for parity automata) of the resulting automaton as memory in a game, and argue that it can be won, but not with less memory. Just as in \cite{Colcombet+Zdanowski/09/Buchi,Schewe+Varghese/12/GBA}, we use a game where this memory is a lower bound for the size of a deterministic Rabin automaton that recognises the language of a full nondeterministic automaton (see below). To estimate the size of the minimal Streett automaton, we use the complement language instead. Consequently, we get a dual result: a lower bound for a Rabin automaton that recognises the complement language.
By duality, this bound is also the lower bound for a deterministic Streett automaton that recognises the language of this full automaton. As the parity condition is a special Streett condition, this lower bound extends to parity automata. \subsection{Full automata} Our lower bound proof builds on \emph{full} automata (cf.\ \cite{Yan/08/lowerComplexity}), like the ones used in \cite{Colcombet+Zdanowski/09/Buchi} to establish a lower bound for the translation from nondeterministic B\"uchi to deterministic Rabin automata. A parity automaton $\mathcal P_n^c = \big(Q,\Sigma_n^c,I,T,\pri\big)$ with $n$ states is called \emph{full} if its alphabet $\Sigma_n^c= Q \times Q^\top \rightarrow 2^\nc$ is the set of functions from $Q\times Q^\top$ to sets of priorities $\nc$, and \begin{itemize} \item $I=Q$, \item $T = \big\{(q,\sigma,q') \mid q \in Q,\ q'\in Q^\top,\ \sigma(q,q') \neq \emptyset\big\}$, \item $\pri: (q,\sigma,q') \mapsto \opt\big(\sigma(q,q')\big)$ for all $q,q' \in Q$ with $\sigma(q,q')\neq \emptyset$, where $\opt$ returns the highest even number of a set, and the lowest odd number if the set contains no even numbers. \end{itemize} $(q,\sigma,\top)$ encodes immediate acceptance from state $q$. Every nondeterministic parity automaton with priorities $\leq c$ can be viewed as a language restriction (by alphabet restriction) of $\mathcal P_n^c$. $\mathcal P_n^c$ therefore recognises the hardest language recognisable by a parity automaton with $n$ states and maximal priority $c$. Estimating the size of deterministic Rabin, Streett, or parity automata that recognise the same language as a nondeterministic parity automaton with $n$ states and maximal priority $c$ thus reduces to estimating the size of the deterministic Rabin, Streett, or parity automata that recognise the language of $\mathcal P_n^c$. A useful property of this language is that we can focus on states with different sets of reachable states independently.
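The priority assignment $\opt$ of the full automaton can be transcribed directly; the following Python sketch is illustrative only, with priorities given as a set of integers:

```python
def opt(priorities):
    """Return the highest even number of the set; if the set
    contains no even number, return the lowest odd number."""
    evens = [p for p in priorities if p % 2 == 0]
    return max(evens) if evens else min(priorities)
```

For instance, `opt({1, 2, 3, 4})` yields the dominating even priority `4`, while `opt({3, 5})` yields the lowest odd priority `3`.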
We use $\reach(u)$ to denote the states reachable by a word $u \in \Sigma^*$ in $\mathcal P_n^c$; a word $u$ with $\top \in \reach(u)$ is immediately accepted by $\mathcal P_n^c$. $\reach(u)$ can be defined inductively: $\reach(\varepsilon)=I$, and $\reach(va) = \big\{q' \in Q^\top \mid \exists q \in \reach(v).\ (q,a,q') \in T\big\}$ for all words $v\in \Sigma^*$ and all letters $a \in \Sigma$. This allows us to extend a useful observation from \cite{Colcombet+Zdanowski/09/Buchi}. \paragraph*{Notation.} In this section, we use $\rho(s,u)$ to refer to a finite part of the run of a deterministic automaton that starts in a state $s$ upon reading a word $u \in {\Sigma_n^c}^+$. If this finite part of the run ends in a state $s'$, we also write $\rho(s,u,s')$. In particular, $\rho(s,u,s')$ implies that $s'$ is reached from $s$ when reading $u$. For Rabin automata, an index $i$ is accepting resp.\ rejecting for $\rho(s,u,s')$ if it is accepting resp.\ rejecting in some transition in this sequence of a run. For parity automata, the co-priority of $\rho(s,u,s')$ is the smallest co-priority that occurs in any transition in the respective sequence of a run. \begin{lemma} \label{lem:differentSets} Let $\mathcal A$ be a deterministic Rabin, Streett, or parity (or, more generally, Muller) automaton that recognises the language of $\mathcal P_n^c$ or its complement. Then $\rho(s_0,u,s)$ and $\rho(s_0,v,s)$ imply $\reach(u) = \reach(v)$ or $\reach(u) \ni \top \in \reach(v)$. \end{lemma} \begin{proof} Assume for contradiction that this is not the case. We select two words $u,v \in \Sigma^*$ with $\rho(s_0,u,s)$ and $\rho(s_0,v,s)$. Let $\sigma_\emptyset : (q,q') \mapsto \emptyset$ for all $(q,q')\in Q\times Q^\top$. If $\reach(u) \ni \top \notin \reach(v)$, then $u {\sigma_\emptyset}^\omega$ is accepted, and $v {\sigma_\emptyset}^\omega$ is rejected by $\mathcal P_n^c$.
If $\top \notin \reach(u) \cup \reach(v)$ and $q \in \reach(u)\smallsetminus \reach(v)$, then we use $\sigma_q : (q,q) \mapsto \{2\}$ and $\sigma_q : (q',q'') \mapsto \emptyset$ if $(q',q'') \neq (q,q)$. Then $u {\sigma_q}^\omega$ is accepted, while $ v {\sigma_q}^\omega$ is rejected by $\mathcal P_n^c$. Doing the same with $u$ and $v$ reversed provides us with the required contradiction. \end{proof} As a consequence, we can focus on sets of states with the same reachability set. \subsection{Language games} A language game is an initialised two player game $G=(V,E,v_0,\mathcal L)$, which is played between a verifier and a spoiler on a star-shaped directed labelled multi-graph $(V,E)$ without self-loops. It has a finite set $V$ of vertices, but a potentially infinite set of edges. The centre of the star, which we refer to by $c \in V$, is the only vertex of the verifier, while all other vertices are owned by the spoiler. Besides the centre, the game has a second distinguished vertex, the initial vertex $v_0$, where a play of the game starts. The remaining vertices $W = V \smallsetminus \{v_0,c\}$ are called the working vertices. Like $v_0$, they are owned by the spoiler. The edges are labelled by finite words over an alphabet $\Sigma$. Edges leaving the centre vertex are labelled by the empty word $\varepsilon$; there is exactly one edge from the centre to each working vertex, and no edge back to the initial vertex. The set of these outgoing edges is thus $\{(c,\varepsilon,v)\mid v \in W\}$. The edges that lead to the centre vertex are labelled with non-empty words. The players play out a run of the game in the usual way by placing a pebble on the initial vertex $v_0$, letting the owner of that vertex select an outgoing edge, moving the pebble along it, and so forth. This way, an infinite sequence of edges is produced, and concatenating the finite words by which they are labelled provides an infinite word $w$ over $\Sigma$.
The verifier has the objective to construct a word in $\mathcal L$, while the spoiler has the antagonistic objective to construct a word in $\Sigma^\omega \smallsetminus \mathcal L$. \begin{theorem} \cite{Colcombet+Zdanowski/09/Buchi} \label{theo:Lgame} If the verifier wins a language game for a language recognised by a deterministic Rabin automaton $\mathcal R$ with $r$ states, then he wins the language game using a strategy with memory $r$. \end{theorem} This is because he can simply run $\mathcal R$ as a witness automaton. Intuitively, the verifier would play on the product of $\mathcal R$ and $G$. This is a Rabin game, and if the verifier wins, then he wins with a memoryless strategy \cite{DBLP:journals/apal/Klarlund94,Zielonka/98/Parity}. Thus, the states of $\mathcal R$ can serve as the memory in $G$: the verifier simply makes the decision that his memoryless strategy in the product game prescribes. \begin{corollary} \label{cor:Lgame} If the verifier wins a language game for a language recognised by a deterministic Rabin automaton $\mathcal R$ with $r < |W|$ states, then he wins the language game played on a reduced graph, where the set of his outgoing edges is reduced to $r$ edges of his choice before playing the otherwise unchanged game. \end{corollary} These are simply the at most $r$ edges chosen by the verifier under the at most $r$ different memory states. \subsection{Lower bounds} We extend the technique introduced by Colcombet and Zdanowski \cite{Colcombet+Zdanowski/09/Buchi} to establish that the Rabin automata from Corollary \ref{parityeq} are the minimal deterministic Rabin automata that recognise the same language as $\mathcal P_n^c$. Just as in \cite{Colcombet+Zdanowski/09/Buchi,Schewe+Varghese/12/GBA}, we use the language of $\mathcal P_n^c$ as the target language.
To establish that the deterministic parity automaton $\mathcal D_n^c$ from Lemma \ref{parityexp} is at most $50\%$ larger than any deterministic Streett -- and thus in particular than any deterministic parity -- automaton that recognises the language of $\mathcal P_n^c$, we use the complement language of $\mathcal P_n^c$ as our target language. We therefore get a bound on the size of the smallest Rabin automaton that recognises the complement of the language of $\mathcal P_n^c$, and hence for the smallest Streett automaton that recognises $\mathcal P_n^c$. Having an upper bound for parity that matches this lower bound for the more general Streett condition, we can infer tightness of our determinisation construction for both classes of automata. \paragraph*{\bf Deterministic Rabin automata.} To establish the lower bound, it is easier to use triples $(S_v,c_v,P_v)$ for each node $v \in \T$ of an NHT $(\T,l,\lambda)$. By abuse of notation, we refer to the triple by $\T(v)$ (and thus to the state of the DRA by $\T$), to the label by $\T_S(v)$, and to $\{q {\,\in\,} Q \mid v=\host_{\lambda(v)}(q,\T)\}$ by~$\T_P(v)$. To define the edges leaving a spoiler vertex $\T$, we refer to the finite part of a run of $\mathcal R_n^c$ that starts in $\T$ when reading a word $u \in {\Sigma_n^c}^+$ by $\rho(\T,u)$. If this finite part of the run ends in $\T'$, we also write $\rho(\T,u,\T')$. In particular, $\rho(\T,u,\T')$ implies that $\T'$ is reached from $\T$ when reading $u$. The accepting and rejecting nodes of $\rho(\T,u,\T')$ are the union of the accepting and rejecting nodes, respectively, of the individual transitions in this section of the run. \begin{definition} [Relevant change] In a finite part $\rho(\T,u,\T')$ of a run of our Rabin automaton $\mathcal R_n^c$, the \emph{relevant change} is the minimal position $\p$ w.r.t.\ lexicographic order where \begin{itemize} \item the node has been accepting or rejecting during this piece of the run, or \item $\T(\p)\neq \T'(\p)$.
\end{itemize} We call the node $\p$ the relevant change, and we call it \begin{itemize} \item \emph{rejecting}, if $\rho(\T,u,\T')$ is rejecting at $\p$, \item \emph{accepting}, if $\rho(\T,u,\T')$ is accepting but not rejecting at $\p$, \item \emph{growing}, if it is not rejecting and $\T_S'(\p)\supsetneq \T_S(\p)$, and \item \emph{shrinking}, if it is not rejecting, $\T_S'(\p) = \T_S(\p)$ and $\T_P'(\p) \subsetneq \T_P(\p)$. \end{itemize} \end{definition} We use a set of language games, one for each subset $S\subseteq Q$ of the states $Q$ of $\mathcal P_n^c$ with two or more states. The vertices of such a language game consist of the centre vertex, the initial vertex, and the working vertices $W$. These working vertices consist of the states $\T$ of $\mathcal R_n^c$ with $\reach(\T)=S$. The target language is the language of all words \emph{accepted} by $\mathcal R_n^c$, and we have the following edges: \begin{itemize} \item there is an edge $(v_0,u,c)$ for all $u \in {\Sigma_n^c}^+$ with $\rho(\T_0,u,\T)$ and $\T \in W$, \item $(c,\varepsilon,\T)$ for all $\T \in W$, and \item $(\T,u,c)$ if $\rho(\T,u,\T')$ is accepting, growing, or shrinking, and $\T' \in W$. \end{itemize} To establish that the minimal Rabin automaton that recognises the language of $\mathcal R_n^c$ cannot be smaller than $\mathcal R_n^c$, we show that the verifier needs all edges to win each of these games. \begin{lemma} \label{lem:rwin} The verifier wins these language games. \end{lemma} For this, we recall the structure of the strategy that the verifier applies: he would use $\mathcal R_n^c$ as a witness automaton, moving to the vertex that represents the state $\T$ that $\mathcal R_n^c$ would be in upon reading the finite word produced so far. If one of his outgoing edges is removed, then there is one such state, say $\T$, he cannot respond to properly. Instead, he would have to go to a different state $\T'$.
We show that, irrespective of the states $\T$ and $\T' \neq \T$ chosen, the spoiler can produce a word $u\in {\Sigma_n^c}^+$ such that $(\T,u,c)$ is an edge in $G$ and $\rho(\T',u,\T)$ is not accepting in any position. If the spoiler has such an option, then \emph{she} can use $\mathcal R_n^c$ as a witness automaton: whenever it is her move, she chooses an edge with the properties described above. \begin{proof} The verifier can simply use the strategy of monitoring the state that the DRA $\mathcal R_n^c$ from Corollary \ref{parityeq} would be in. He then plays $(c,\varepsilon,\T)$ whenever the automaton is in state $\T$. To see that he wins the game with this strategy, we consider the run of $\mathcal R_n^c$ on the word defined by the play $(v_0,u_0,c) (c,\varepsilon,\T_1) (\T_1,u_1,c) (c,\varepsilon,\T_2) \ldots $, which refers to the word $u_0u_1u_2\ldots$. The segments $\rho(\T_i,u_i,\T_{i+1})$ of the run have, for all $i \geq 1$, an accepting, growing, or shrinking relevant change. Let us consider the relevant changes of these segments. There is a -- with respect to lexicographic order -- minimal one $\p_{\min}$ that occurs infinitely often. Let us choose a position $i$ in the play such that no lexicographically smaller $\p'$ is henceforth a relevant change. Then no node smaller than or equal to $\p_{\min}$ (with respect to the lexicographic order) can henceforth be rejecting. Clearly, if $\p_{\min}$ is infinitely often accepting, then the verifier wins. Let us assume for contradiction that there is a $j>i$ such that $\p_{\min}$ is not accepting from position $j$ onwards. Then the set of states in $\T_l(\p_{\min})$ must henceforth grow monotonically with $l$, and grow strictly every time $\p_{\min}$ is growing. As this can only happen finitely many times, there is a $k>j$ such that $\p_{\min}$ is henceforth neither accepting nor growing. Then the set of pure states in $\T_l(\p_{\min})$ must henceforth shrink monotonically with $l$, and shrink strictly every time $\p_{\min}$ is shrinking.
As this can only happen finitely often, this yields the required contradiction. \end{proof} \begin{lemma} \label{lem:rwillwin} Let $\T$ and $\T'$ be two different states of $\mathcal R_n^c$ with $\reach(\T)=\reach(\T')$. Then there is a word $u\in {\Sigma_n^c}^+$ such that no node in $\rho(\T,u,\T)$ is accepting and $\rho(\T',u,\T)$ has an accepting, growing, or shrinking relevant change. \end{lemma} \begin{proof} We first identify the minimal position $\p$ in which $\T$ and $\T'$ are different, and the set $P$ of all lexicographically smaller positions that are part of $\T$ (and thus of $\T'$). We use a word $u = \sigma v$ that starts with a letter $\sigma$ for which all nodes except those in $P$ are rejecting when starting in $\T$, while the nodes in $P$ are neither accepting nor rejecting when starting in $\T$ or $\T'$. In the second phase, we re-build $\T$ without making any node in $P$ accepting or rejecting. Now let $\T(\p) = (S_\p,c_\p,P_\p)$, $\T'(\p) = (S_\p',c_\p,P_\p')$, and $\T(\p') = (S_{\p'},c_{\p'},P_{\p'})$ for all $\p' \in P$. (Recall that the priority is defined by the position in the tree.) We now distinguish four cases: (1) there is an $s \in S_\p' \smallsetminus S_\p$, (2) $S_\p' = S_\p$ and there is an $s \in P_\p \smallsetminus P_\p'$, (3) $S_\p \supsetneq S_\p'$, and (4) $S_\p' = S_\p$ and $P_\p \subsetneq P_\p'$. We select the first letter of our word as follows. \begin{enumerate} \item If there is an $s \in S_\p' \smallsetminus S_\p$, then we can fix such an $s$ and select the first letter of our word as follows: \begin{itemize} \item We let $c_\p \in \sigma(s,s')$ for all $s' \in S_\p$. \item For all $\p' \in P$, all $s' \in P_{\p'}$, and all $s'' \in S_{\p'}$, we let $c_{\p'}-1 \in \sigma(s',s'')$. \item If $c$ is odd, we let $c \in \sigma(s',s'')$ for all $s',s'' \in \reach(\T)$ (in order to maintain the set of reachable states). \end{itemize} No further priority is included in any set $\sigma(s',s'')$ for $s',s'' \in Q$ and $s'' \in Q^\top$.
This letter $\sigma$ is chosen such that no node in $P$ is accepting or rejecting in $\delta(\T,\sigma)$ or $\delta(\T',\sigma)$. While $\p$ is accepting in $\delta(\T',\sigma)$, it is rejecting in $\delta(\T,\sigma)$. All other positions are rejecting in these transitions. \item If $S_\p' = S_\p$ and there is an $s \in P_\p \smallsetminus P_\p'$, then we can fix such an $s$ and select the first letter of our word as follows: \begin{itemize} \item We let $c_\p-1 \in \sigma(s,s')$ for all $s' \in S_\p$. \item For all $\p' \in P$, all $s' \in P_{\p'}$, and all $s'' \in S_{\p'}$, we let $c_{\p'}-1 \in \sigma(s',s'')$. \item If $c$ is odd, we let $c \in \sigma(s',s'')$ for all $s',s'' \in \reach(\T)$ (in order to maintain the set of reachable states). \end{itemize} No further priority is included in any set $\sigma(s',s'')$ for $s',s'' \in Q$ and $s'' \in Q^\top$. This letter $\sigma$ is chosen such that no node in $P$ is accepting or rejecting in $\delta(\T,\sigma)$ or $\delta(\T',\sigma)$. While $\p$ is accepting in $\delta(\T',\sigma)$, it is neither accepting nor rejecting in $\delta(\T,\sigma)$. All other positions are rejecting in these transitions. \item If $S_\p \supsetneq S_\p'$, then we can fix an $s \in P_\p$ and select the first letter of our word as follows: \begin{itemize} \item We let $c_\p-1 \in \sigma(s,s')$ for all $s' \in S_\p$. \item For all $\p' \in P$, all $s' \in P_{\p'}$, and all $s'' \in S_{\p'}$, we let $c_{\p'}-1 \in \sigma(s',s'')$. \item If $c$ is odd, we let $c \in \sigma(s',s'')$ for all $s',s'' \in \reach(\T)$ (in order to maintain the set of reachable states). \end{itemize} No further priority is included in any set $\sigma(s',s'')$ for $s',s'' \in Q$ and $s'' \in Q^\top$. This letter $\sigma$ is chosen such that no node in $P$ is accepting or rejecting in $\delta(\T,\sigma)$ or $\delta(\T',\sigma)$.
While $\p$ is not rejecting (but may or may not be accepting) in $\delta(\T',\sigma)$, it is neither accepting nor rejecting in $\delta(\T,\sigma)$. All other positions are rejecting in these transitions. \item If $S_\p = S_\p'$ and $P_\p \subsetneq P_\p'$, then we can fix an $s \in P_\p$ and select the first letter of our word as follows: \begin{itemize} \item We let $c_\p-1 \in \sigma(s,s')$ for all $s' \in S_\p$. \item For all $\p' \in P$, all $s' \in P_{\p'}$, and all $s'' \in S_{\p'}$, we let $c_{\p'}-1 \in \sigma(s',s'')$. \item If $c$ is odd, we let $c \in \sigma(s',s'')$ for all $s',s'' \in \reach(\T)$ (in order to maintain the set of reachable states). \end{itemize} No further priority is included in any set $\sigma(s',s'')$ for $s',s'' \in Q$ and $s'' \in Q^\top$. This letter $\sigma$ is chosen such that neither $\p$ nor any node in $P$ is accepting or rejecting in $\delta(\T,\sigma)$ or $\delta(\T',\sigma)$. All other positions are rejecting in these transitions. \end{enumerate} Note that $\Delta(\T,\sigma) = \Delta(\T',\sigma)$ holds in all those cases. Starting with this letter, we can continue to build a word to reconstruct $\T$. Note that all we have to avoid during this construction is to make $\p$ or a node in $P$ accepting. The resulting fragments $\rho(\T',u,\T)$ are accepting in cases (1) and (2), growing in case (3), and shrinking in case (4), such that $(\T',u,c)$ is an edge of the game, while $\rho(\T,u,\T)$ does not contain any accepting node. \end{proof} \begin{corollary} \label{cor:rlose} If any outgoing edge is removed from the verifier's centre vertex in any of these games, then the spoiler wins the language game. \end{corollary} Together with Lemmata \ref{lem:differentSets} and \ref{lem:rwin}, Corollary \ref{cor:rlose} provides: \begin{theorem} \label{theo:smallRabin} $\mathcal R_n^c$ is the smallest deterministic Rabin automaton that recognises the language of $\mathcal P_n^c$.
\end{theorem} \paragraph*{\bf Deterministic parity automata.} For our language game, we use the spiked states of $\mathcal D_n^c$ and a fresh initial vertex as the spoiler vertices. We call a state of $\mathcal D_n^c$ \emph{spiked}, if its last position is a triple of the form $(\{q\},2,\{q\})$. This is a mild restriction and is due to the fourth case of the proof of Lemma \ref{lem:useOfSpikes}. Most states are spiked. \begin{lemma} \label{lem:spiked} $\mathcal D_n^c$ has more than twice as many spiked as unspiked states. \end{lemma} To define the edges leaving a spoiler vertex $N$, we refer to the finite part of a run of $\mathcal D_n^c$ that starts in $N$ when reading a word $u \in {\Sigma_n^c}^+$ by $\rho(N,u)$. If this finite part of the run ends in $N'$, we also write $\rho(N,u,N')$. In particular, $\rho(N,u,N')$ implies that $N'$ is reached from $N$ when reading $u$. The co-priority of $\rho(N,u,N')$ is the smallest co-priority that occurs in the respective sequence of a run. \begin{proof} Each unspiked state ends in a triple $(P,2,P)$ with $|P| \geq 2$. We can simply replace it by the pair of triples $(P,2,P\smallsetminus \{q\}),(\{q\},2,\{q\})$ for each $q \in P$; each resulting state is spiked. Each unspiked state thus produces at least two spiked states, each spiked state is produced by at most one unspiked state, and not every spiked state can be produced this way; e.g., states $N$ with $|\reach(N)|=1$ cannot. \end{proof} \begin{definition} [Relevant change] In $\rho(N,u,N')$, the \emph{relevant change} is the minimal position $i$, where \begin{itemize} \item the co-priority of $\rho(N,u,N')$ is $2i-1$ or $2i$ \hfill\mbox{(i.e., position $i$ was accepting or destroyed), or} \item the $i$-th position of $N$ and $N'$ differ.
\end{itemize} For $N = \big\{(S_j,c_j,P_j)\big\}_{j \leq m}$ and $N' = \big\{(S_j',c_j',P_j')\big\}_{j \leq m'}$, we call the relevant change $i$ \begin{itemize} \item \emph{rejecting}, if the co-priority of $\rho(N,u,N')$ is $2i-1$, \item \emph{shrinking}, if $S_i' \subsetneq S_i$, \item \emph{defying}, if $S_i' = S_i$ and the co-priority of $\rho(N,u,N')$ is $2i+1$, and \item \emph{purifying}, if $S_i' {=} S_i$, $P_i' {\supsetneq} P_i$, and the co-priority is ${>}2i$. \end{itemize} \end{definition} We use a language game, whose vertices consist of the centre vertex, the initial vertex, and the working vertices $W$, which form a subset of the spiked states of $\mathcal D_n^c$. Following Lemma \ref{lem:differentSets}, we will, for each $S \subseteq Q$ with $|S| \geq 2$, use an individual game where $W$ contains a spiked state $N$ iff $S$ is the set of states reachable in $N$ ($\reach(N)=S$). The target language is the \emph{complement} language of $\mathcal D_n^c$, and we have the following edges: \begin{itemize} \item there is an edge $(v_0,u,c)$ for all $u \in {\Sigma_n^c}^+$ such that there is a spiked state $N \in W$ such that $\rho(N_0,u,N)$, where $N_0$ is the initial state of $\mathcal D_n^c$, \item $(c,\varepsilon,N)$ for all spiked states $N$ of $\mathcal D_n^c$ that are working states of the game, and \item $(N,u,c)$ if $\rho(N,u,N')$ is rejecting, shrinking, defying, or purifying, and $N'$ is spiked. \end{itemize} \begin{lemma} The verifier wins all of these language games. \label{lem:win} \end{lemma} \vspace*{-2ex} \begin{proof} In the language game for each $S \subseteq Q$, the verifier can simply use the strategy of monitoring the state that the DPA $\mathcal D_n^c$ from Lemma \ref{parityexp} would be in. He then wins by playing $(c,\varepsilon,N)$ when the automaton is in state $N$.
To see that he wins the game, we consider the run of $\mathcal D_n^c$ on the word defined by the play $(v_0,u_0,c) (c,\varepsilon,N_1) (N_1,u_1,c) (c,\varepsilon,N_2) \ldots $, which refers to the word $w=u_0u_1u_2\ldots$. The run $\rho$ of $\mathcal D_n^c$ on $w$ can be decomposed into the finite segments $\rho(N_i,u_i,N_{i+1})$ for all $i \geq 0$. For all $i \geq 1$, their relevant changes are rejecting, shrinking, defying, or purifying. Clearly, there is a minimal one $\im$ that occurs infinitely often. Consequently, no co-priority smaller than $2\im-1$ can occur infinitely many times in $\rho$. We can now distinguish four cases. \emph{\bf First}, assume that there are infinitely many rejecting relevant changes $\im$. Then the co-priority $2\im-1$ occurs infinitely often in $\rho$, and the $\omega$-word $w$ is rejected. \emph{\bf Second}, assume that finitely many of the relevant changes with change priority $\im$ are rejecting, but infinitely many are shrinking. Then we can choose a position in the play where henceforth no relevant change with priority $<\im$, and no rejecting relevant change with priority $\im$ occurs. Consequently, the set of states at position $\im$ of $N_i$ would henceforth shrink monotonically with growing $i$, and would infinitely often shrink strictly, which is a contradiction. \emph{\bf Third}, assume that finitely many of the relevant changes with change priority $\im$ are rejecting or shrinking, but infinitely many are defying. Then we can choose a position in the play where henceforth no relevant change with change priority $<\im$, and no rejecting or shrinking relevant change with change priority $\im$ occurs. From this time onwards, no co-priority $\leq 2\im$ can occur on any segment of the run, while the co-priority $2\im +1$ occurs infinitely often. \emph{\bf Finally}, assume that finitely many of the relevant changes with change priority $\im$ are rejecting, shrinking, or defying.
Then we can choose a position in the play where henceforth no relevant change $j$ with $j<\im$, and no rejecting, shrinking, or defying relevant change with change priority $\im$ occurs. Consequently, the set of pure states at position $\im$ of $N_i$ would henceforth grow monotonically with growing $i$, and would infinitely often grow strictly, which is a contradiction. \end{proof} To establish that the minimal size of a Rabin automaton that recognises the complement language of $\mathcal D_n^c$ cannot be significantly smaller than $D_n^c$, we will show that the verifier needs all edges to win this game. For this, we recall the structure of the strategy that the verifier applies: he would use $\mathcal D_n^c$ as a witness automaton, moving to the vertex that represents the state $\mathcal D_n^c$ would be in upon reading the finite word produced so far. If one of his outgoing edges is removed, then there is one such state, say $N$, to which he cannot respond properly. Instead, he would have to go to a different state $N'$. We show that, irrespective of the state $N$ that becomes unreachable and $N' \neq N$ chosen, the spoiler can produce a word $u\in {\Sigma_n^c}^+$ such that $(N',u,c)$ is an edge in $G$ and $\rho(N,u,N)$ has even co-priority. If the spoiler has such an option, then \emph{she} can use $\mathcal D_n^c$ as a witness automaton: initially, she selects an edge $(v_0,u,c)$ such that $\rho(N_0,u,N)$ holds; henceforth she chooses, whenever she is in a vertex $N'$, an edge $(N',u,c)$ such that $\rho(N,u,N)$ holds, returning the run to $N$ with dominating even co-priority. Thus, she can make sure that the constructed word is accepted. \begin{lemma} \label{lem:useOfSpikes} Let $N$ and $N'$ be two different spiked states of $\mathcal D_n^c$ with $\reach(N)=\reach(N')$. Then there is a word $u\in {\Sigma_n^c}^+$ such that the lowest co-priority occurring in $\rho(N,u,N)$ is even and $\rho(N',u,N)$ has a rejecting, shrinking, defying, or purifying relevant change.
\end{lemma} \begin{proof} Let $N = \big\{(S_i,c_i,P_i)\big\}_{i \leq m}$ and $N' = \big\{(S_i',c_i',P_i')\big\}_{i \leq m'}$. As $N \neq N'$, there is% \footnote{Note that a spiked state $N'$ cannot simply be longer than a spiked state $N$ with $\reach(N)=\reach(N')$ (or vice versa): assume that $N$ is an initial sequence of $N'$; then the rules (1), (2), (3), and (5) imply that $S_{m+1}'$ must be disjoint from all $S_i$ for $i\leq m$, which contradicts $\reach(N)=\reach(N')$.} a minimal $\im \leq \min\{m,m'\}$ such that $(S_\im,c_\im,P_\im) \neq (S_\im',c_\im',P_\im')$. We will construct a word $u$ such that \begin{itemize} \item $\rho(N,u,N)$ and $\rho(N',u,N)$ are fragments of runs, \item the minimal co-priority of $\rho(N,u,N)$ is even, and \item \im will be the relevant change in $\rho(N',u,N)$; it will be rejecting, shrinking, defying, or purifying. \end{itemize} We distinguish four cases. Let us \emph{\bf first} assume that there is an $s\in S_\im \smallsetminus S_\im'$. In this case, we choose such an $s$, and select the first letter $\sigma_\im$ of $u$ such that \begin{itemize} \item $c_\im\in\sigma_\im(s,s')$ for all $s' \in S_\im$, \item $c_i-1 \in \sigma_\im(s',s'')$ for all $i < \im$, $s' \in P_i$, and $s'' \in S_i$, and \item if $c$ is odd% \footnote{If the highest priority $c$ of the defining NPA $\mathcal P^c_n$ is odd, then $S_1$ might be a strict subset of $\reach(N)$. This part is then an easy way to make sure that all states in $\reach(N)$ remain reachable. If $c$ is even, then we have a tree on the lowest level, $S_1=\reach(N)$.}, $c \in \sigma_\im(s',s'')$ for all $s',s'' \in \reach(N)$. \end{itemize} No further priority is included in any set $\sigma_\im(s',s'')$ for $s',s'' \in Q$ and $s'' \in Q^\top$. Starting with $\sigma_\im$ is the central step. The transition from $N$ reading $\sigma_\im$ has co-priority $2\im$, while the transition from $N'$ reading $\sigma_\im$ has co-priority $2\im-1$.
Note that during the transition, all nodes in the history trees underlying $N$ and $N'$ that refer to a position $<\im$ are the same. They are also stable and non-accepting during this transition. However, while the node that position $\im$ of $N$ refers to is accepting, the node that position $\im$ of $N'$ refers to is not stable. (Note that $N$ and $N'$ could refer to the same underlying tree.) The resulting state is the same for $N$ and $N'$. The next letters are to rebuild $N$. For all $i = \im + 1$ to $m$, we append a further letter $\sigma_i$ to our partially constructed word $u$. We choose a state $s \in P_i$ and define $\sigma_i$ as follows: \begin{itemize} \item $c_i \in \sigma_i(s,s') $ for all $s' \in S_i$, \item $c_j{-}1 \in \sigma_i(s',s'') $ for all $j{<}i$, $s' {\in} P_j$ and $s'' {\in} S_j$, and \item if $c$ is odd, $c \in \sigma_i(s',s'')$ for all $s',s'' \in \reach(N)$. \end{itemize} No further priority is included in any set $\sigma_i(s',s'')$ for $s',s'' \in Q$ and $s'' \in Q^\top$. Clearly, when reading $\sigma_i$ from a state that agrees with $N$ on all positions $< i$, the resulting state will agree with $N$ on all positions $\leq i$ after this transition. The transition has a co-priority $\geq 2i-1 > 2\im$. Thus, the word $u = \sigma_\im \sigma_{\im+1} \sigma_{\im+2} \ldots \sigma_m$ has the required properties; in particular $\rho(N',u,N)$ has rejecting relevant change $\im$. \smallskip In the remaining cases we have $S_\im \subseteq S_\im'$. Note that this implies $c_\im = c_\im'$. The \emph{\bf second} case is $S_\im = S_\im'$ and there is a state $s \in P_\im' \smallsetminus P_\im$. In this case, we fix such an $s$ and start our word $u$ with the letter $\sigma_\im$ that satisfies \begin{itemize} \item $c_\im-1 \in \sigma_\im(s,s')$ for all $s' \in S_\im$, \item $c_i-1 \in \sigma_\im(s',s'')$ for all $i < \im$, $s' \in P_i$, and $s'' \in S_i$, and \item if $c$ is odd, $c \in \sigma_\im(s',s'')$ for all $s',s'' \in \reach(N)$.
\end{itemize} No further priority is included in any set $\sigma_\im(s',s'')$ for $s',s'' \in Q$ and $s'' \in Q^\top$. Starting $u$ with this letter $\sigma_\im$ is again the central step. The transition from $N$ has co-priority $2\im$, while the transition from $N'$ has co-priority $2\im+1$. We can now continue $u$ in the same manner as above and use $u = \sigma_\im \sigma_{\im+1} \sigma_{\im+2} \ldots \sigma_m$, and $u$ will again satisfy the constraints; in particular $\rho(N',u,N)$ has defying relevant change $\im$. In the \emph{\bf third} case, $S_\im \subsetneq S_\im'$, we fix an $s\in S_\im$ and start our word $u$ with the letter $\sigma_\im$ that satisfies \begin{itemize} \item $c_\im \in \sigma_\im(s,s')$ for all $s' \in S_\im$, \item $c_i-1 \in \sigma_\im(s',s'')$ for all $i < \im$, $s' \in P_i$, and $s'' \in S_i$, and \item if $c$ is odd, $c \in \sigma_\im(s',s'')$ for all $s',s'' \in \reach(N)$. \end{itemize} No further priority is included in any set $\sigma_\im(s',s'')$ for $s',s'' \in Q$ and $s'' \in Q^\top$. Starting $u$ with this letter $\sigma_\im$ is again the central step. The transition from $N$ or $N'$ reading $\sigma_\im$ has co-priority $2\im$. We can again continue $u$ in the same manner as above and use $u = \sigma_\im \sigma_{\im+1} \sigma_{\im+2} \ldots \sigma_m$, and $u$ will again satisfy the constraints; in particular $\rho(N',u,N)$ has shrinking relevant change $\im$. Finally, in the \emph{\bf fourth} case we have $S_\im = S_\im'$ and $P_\im' \subsetneq P_\im$. We first note that this implies $|P_\im| \geq 2$. The restriction to spiked states then provides $\im < m$. We can therefore refer to position $\im+1$ of $N$.
We choose an $s \in P_{\im+1}$ and start our word $u$ with the letter $\sigma_{\im+1}$ that satisfies \begin{itemize} \item $c_{\im+1} \in \sigma_{\im+1}(s,s')$ for all $s' \in S_\im$, \item $c_i-1 \in \sigma_{\im+1}(s',s'')$ for all $i \leq \im$, $s' \in P_i'$, and $s'' \in S_i$, and \item if $c$ is odd, $c \in \sigma_{\im+1}(s',s'')$ for all $s',s'' \in \reach(N)$. \end{itemize} No further priority is included in any set $\sigma_{\im+1}(s',s'')$ for $s',s'' \in Q$ and $s'' \in Q^\top$. Then the effect on $N$ is obvious: the transition from $N$ reading $\sigma_{\im+1}$ has co-priority $2\im+2$. Starting from $N'$, the same state is reached. The co-priority is $2\im+2$ if $N'$ has a position $(S_{\im+1}',c_{\im+1}',P_{\im+1}')$ with $s\in S_{\im+1}'$ and $c_{\im+1}'=c_{\im+1}$, and $2\im+1$ otherwise. (Note that, for the case that $s \in P_\im'$, this would imply $c_\im = c_{\im+1} + 2$.) We can again continue $u$ in the same manner as above, although this results in the slightly shorter word $u = \sigma_{\im+1} \sigma_{\im+2} \sigma_{\im+3}\ldots \sigma_m$. The word $u$ will again satisfy the constraints; in particular $\rho(N',u,N)$ has purifying relevant change $\im$ and $\rho(N,u,N)$ has co-priority $2\im+2$. \end{proof} \begin{lemma} \label{lem:lose} If any outgoing edge is removed from the verifier's centre vertex in any of these games, then the spoiler wins the language game. \end{lemma} \begin{proof} If the spoiler has such an option, then \emph{she} can use $\mathcal D_n^c$ as a witness automaton. Let $N$ be the spiked state to which the outgoing edge from the centre vertex is removed. Initially, the spoiler plays a word $u_0$ with $\rho(N_0,u_0,N)$ by choosing the edge $(v_0,u_0,c)$ from the initial vertex, such that $N$ is reached from the initial state of $\mathcal D_n^c$. Henceforth she plays, whenever she is in a vertex $N'$, the word $u$ from the previous lemma by choosing the edge $(N',u,c)$.
This way, the two players construct a play $(v_0,u_0,c) (c,\varepsilon,N_1) (N_1,u_1,c) (c,\varepsilon,N_2) (N_2,u_2,c) (c,\varepsilon,N_3)\linebreak(N_3,u_3,c) (c,\varepsilon,N_4) (N_4,u_4,c) \ldots$. For every $i \geq 1$, the segment $\rho(N_i,u_i,N_{i+1})$ of the run of $\mathcal D_n^c$ on the word $w=u_0u_1u_2u_3u_4\ldots$ has even minimal co-priority. Thus, the co-priority of the overall run is even. \end{proof} Together, the Lemmata \ref{lem:win}, \ref{lem:lose}, and \ref{lem:differentSets} provide: \begin{theorem} Every Rabin automaton $\mathcal R= \big(S,\Sigma_n^c,s_0,\delta, R\big)$ that recognises the complement language of $\mathcal P_n^c$ must, for each non-empty subset% \footnote{For each $q{\in}Q$ there is only a single state $N$ of $\mathcal D_n^c$ with $\reach(N)=\{q\}$.} $S\subseteq Q$ of the states $Q$ of $\mathcal P_n^c$, have at least as many states $s$ with $\reach(s)=S$ as $\mathcal D_n^c$ has spiked states $N$ with $\reach(N) = S$. \end{theorem} \begin{corollary} Every Rabin automaton that recognises the complement language of $\mathcal P_n^c$ must contain at least as many states as $\mathcal D_n^c$ has spiked states. \end{corollary} By dualisation and the observation that parity automata are special Streett automata, we simply get: \begin{corollary} A deterministic Streett or parity automaton that recognises the language of $\mathcal P_n^c$ must have at least as many states as $\mathcal D_n^c$ has spiked states. \end{corollary} The restriction to spiked states is minor -- using the estimate of Lemma \ref{lem:spiked}, we get: \begin{theorem} $\mathcal D_n^c$ has fewer than $1.5$ times as many states as the smallest deterministic Streett (or parity) automaton that recognises the language of $\mathcal P_n^c$. \end{theorem} State sizes that depend on two parameters are usually not crisp to represent.
But for the simple base cases, B\"uchi and one-pair Rabin automata, we get very nice results: they establish that the known upper bound for determinising B\"uchi to parity automata \cite{Schewe/09/determinise} is tight and that Piterman's algorithm for it \cite{Piterman/07/Parity} is \textbf{optimal} modulo a factor of $3n$, where a factor of $2n$ stems from the fact that \cite{Piterman/07/Parity} uses state-based acceptance. With Lemma \ref{parityref} we get: \begin{corollary} The determinisation of B\"uchi automata to Streett or parity automata leads to $\Theta(n!(n-1)!)$ states, and the determinisation of one-pair Rabin automata to Streett or parity automata leads to $\Theta(n!^2)$ states. \end{corollary}
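To get a feel for the growth of these bounds, the following small Python sketch (a numerical illustration only; the exact state counts of $\mathcal D_n^c$ carry additional constant factors) tabulates the orders $n!\,(n-1)!$ and $(n!)^2$ for small $n$:

```python
import math

# Tabulate the asymptotic orders from the corollary above:
# Theta(n!(n-1)!) for Buechi determinisation and Theta((n!)^2) for
# one-pair Rabin determinisation. These are the growth functions only,
# not exact state counts of the constructed automata.
for n in range(2, 8):
    buechi = math.factorial(n) * math.factorial(n - 1)
    one_pair_rabin = math.factorial(n) ** 2
    print(f"n={n}: n!(n-1)! = {buechi:>10}, (n!)^2 = {one_pair_rabin:>10}")
```

Even for these small values, the super-exponential growth and the gap between the two classes are apparent.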
\section{Introduction} Early work on disk accretion to a black hole argued that a large-scale poloidal magnetic field originating from, say, the interstellar medium would be dragged inward and greatly compressed near the black hole by the accreting plasma (Bisnovatyi-Kogan \& Ruzmaikin 1974, 1976) and that this would be important for the formation of jets (Lovelace 1976). Later, the importance of a weak small-scale magnetic field within the disk was recognized as the source of the turbulent viscosity of the disk owing to the magneto-rotational instability (MRI; Balbus \& Hawley 1991, 1998). Analysis of the diffusion and advection of a large-scale field in a disk with a turbulent viscosity comparable to the turbulent magnetic diffusivity (as suggested by MRI simulations) indicated that a {\it weak} large-scale field would diffuse outward rapidly (van Ballegooijen 1989; Lubow, Papaloizou, \& Pringle 1994; Lovelace, Romanova, \& Newman 1994, 1997). This has led to the suggestion that special conditions (nonaxisymmetry) are required for the field to be advected inward (Spruit \& Uzdensky 2005). Recently, Bisnovatyi-Kogan and Lovelace (2007) pointed out that the disk's surface layers are highly conducting (or non-diffusive) because the MRI is suppressed in this region where the magnetic energy-density is larger than the thermal energy-density. Rothstein and Lovelace (2008) analyzed this problem in further detail and discussed the connections with global and shearing box magnetohydrodynamic (MHD) simulations of the MRI.
With the disk's highly conducting surface neglected, the fast outward diffusive drift of the large-scale poloidal magnetic field in a turbulent disk can be readily understood by looking at the induction equation for the vertical field $B_z$, $$ {\partial (rB_z) \over \partial t} = {\partial \over \partial r}\left[ (r B_z)u - \eta r \left( {\partial B_r \over \partial z}-{\partial B_z \over \partial r }\right)\right]~, $$ where $u >0$ is the accretion speed and $\eta$ is the turbulent diffusivity, both assumed independent of $z$ (Lovelace et al. 1994). (Both assumptions are invalid and are removed in this work.) For a dipole-type field, $B_z$ is an even function of $z$ and it changes very little over the half-thickness $h$ of a thin disk ($h\ll r$). Consequently, the average of this equation over the half-thickness gives $$ {\partial (rB_z) \over \partial t} = {\partial \over \partial r}\left[ (r B_z)u - \left({ r\over h} \right)\eta(B_r)_h +\eta r {\partial B_z \over \partial r}\right]~, $$ where $(B_r)_h$ is the radial field at the disk's surface. The terms inside the square brackets represent the radial transport of the $z-$magnetic flux. For stationary conditions this transport vanishes. The term proportional to $u$ describes the {\it inward advection} of the magnetic field, the term $\propto \eta (B_r)_h$ describes the {\it outward diffusive drift} of the field, and the last term represents the {\it radial diffusion} of the field. For $(B_r)_h \sim B_z$ the radial diffusion term is negligible compared with the diffusive drift term. For a weak $B_z$ field and the standard $\alpha-$disk model with turbulent viscosity of the order of the turbulent diffusivity, the accretion speed is $u \sim \alpha c_s h/r$ (with $c_s$ the sound speed), whereas the outward drift speed of the field is $\sim \alpha c_s$, which is a factor $r/h \gg 1$ larger than $u$. That is, a stationary solution with $B_z\neq 0$ is not possible.
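The speed comparison above can be made concrete with a short numerical sketch; the parameter values below are representative placeholders rather than a fit to any particular disk:

```python
# Compare the inward advection speed u ~ alpha*c_s*h/r with the outward
# diffusive drift speed ~ alpha*c_s for a weak B_z field in a standard
# alpha-disk. All numerical values are illustrative placeholders.
alpha = 0.01       # Shakura-Sunyaev turbulence parameter
c_s = 1.0e5        # sound speed [cm/s]
h_over_r = 0.05    # disk aspect ratio h/r << 1

u_advection = alpha * c_s * h_over_r   # inward advection speed
u_drift = alpha * c_s                  # outward diffusive drift speed

# The drift exceeds the advection by the factor r/h >> 1, which is why
# no stationary solution with B_z != 0 exists in this picture.
print(f"advection/drift = {u_advection / u_drift:.3f} (= h/r)")
```

Whatever values are chosen, the ratio of the two speeds is always $h/r$, so the conclusion is independent of the placeholders.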
A stationary or growing $B_z$ field is possible if the field is sufficiently strong to give appreciable outflow of angular momentum to jets (Lovelace et al. 1994). Three-dimensional MHD simulations have been performed which give some information on the advection/diffusion of a large-scale field. These simulations resolve the largest scales of the MRI turbulence and therefore self-consistently include the turbulent viscosity and diffusivity. Most simulations performed to date have investigated conditions in which the accreting matter contains no net magnetic flux and where no magnetic field is supplied at the boundary of the computational domain (e.g., Hirose et al. 2004; De Villiers et al. 2005; Hawley \& Krolik 2006; McKinney \& Narayan 2007). In these simulations stretching of locally poloidal field lines in the initial configuration leads to large-scale poloidal fields and jet structures in the inner disk. However, in simulations by Igumenshchev, Narayan, \& Abramowicz (2003) and Igumenshchev (2008), weak poloidal flux injected at the outer boundary is clearly observed to be dragged into the central region of the disk, leading to the buildup of a strong poloidal magnetic field close to the central object. This flux buildup and its limit are discussed by Narayan, Igumenshchev, \& Abramowicz (2003). The extent to which the magnetic field advection seen in numerical simulations depends on having a thick disk or nonaxisymmetric conditions is unclear. In this work we calculate the profiles through the disk of stationary accretion flows (with radial and azimuthal components), and the profiles of a large-scale, weak magnetic field, taking into account the turbulent viscosity and diffusivity due to the MRI and the fact that the turbulence vanishes at the surface of the disk. By a weak field we mean that the magnetic pressure in the middle of the disk is less than the thermal pressure.
Related calculations of the disk structure were done earlier by K\"onigl (1989), Li (1995), and Ogilvie and Livio (2001), but without taking into account the absence of turbulence at the disk's surface. Recent work calls into question the $\alpha$-description of the MRI turbulence in accretion disks and develops a closure model which fits shearing box simulation results (Pessah, Chan, \& Psaltis 2008). Analysis of this more complicated model is deferred to a future study. Section 2 develops the equations for the flow and magnetic field in a viscous diffusive disk. Section 3 discusses the boundary conditions at the surface of the disk. Section 4 derives the internal flow/field solutions for an analytically soluble disk model. Section 5 discusses the external flow/field solutions which may be magnetocentrifugal winds or electromagnetic outflows. Section 6 gives the conclusions of this work. \section{Theory} We consider the non-ideal magnetohydrodynamics of a thin axisymmetric, viscous, resistive disk threaded by a large-scale dipole-symmetry magnetic field ${\bf B}$. We use a cylindrical $(r,\phi,z)$ inertial coordinate system in which the time-averaged magnetic field is ${\bf B}=B_r\hat{\bf r}+B_\phi\hat{{\hbox{\tenbg\char'036}}~}+ B_z\hat{\bf z}$, and the time-averaged flow velocity is ${\bf v}=v_r\hat{\bf r}+v_\phi\hat{{\hbox{\tenbg\char'036}}~}+ v_z\hat{\bf z}$. The main equations are \begin{eqnarray} \rho {d{\bf v}\over dt}&=&-{\bf \nabla} p +\rho{\bf g} + {1\over c}{\bf J \times B} + {\bf F^\nu}~, \\ {\partial {\bf B} \over \partial t} &=& {\nabla \times}({\bf v \times B}) - {\bf \nabla}\times(\eta {\bf \nabla }\times {\bf B})~. \end{eqnarray} These equations are supplemented by the continuity equation, ${\bf \nabla}\cdot(\rho {\bf v})=0$, by ${\bf \nabla \times B} =4\pi{\bf J}/c$, and by ${\bf\nabla} \cdot {\bf B}=0$.
Here, $\eta$ is the magnetic diffusivity, ${\bf F}^\nu=-{\bf \nabla}\cdot T^\nu$ is the viscous force with $T_{jk}^\nu= -\rho \nu (\partial v_j/\partial x_k +\partial v_k/\partial x_j-(2/3)\delta_{jk} {\bf \nabla} \cdot{\bf v} )$ (in Cartesian coordinates), and $\nu$ is the kinematic viscosity. For simplicity, in place of an energy equation we consider the adiabatic dependence $p \propto \rho^\gamma$, with $\gamma$ the adiabatic index. We assume that both the viscosity and the diffusivity are due to magnetorotational instability (MRI) turbulence in the disk, so that \begin{equation} \nu ={\cal P} \eta =\alpha ~{c_{s0}^2 \over \Omega_K}~ g(z)~, \end{equation} where ${\cal P}$ is the magnetic Prandtl number of the turbulence, assumed to be a constant of order unity (Bisnovatyi-Kogan \& Ruzmaikin 1976), $\alpha \leq 1$ is the dimensionless Shakura-Sunyaev (1973) parameter, $c_{s0}$ is the midplane isothermal sound speed, $\Omega_K \equiv (GM/r^3)^{1/2}$ is the Keplerian angular velocity of the disk, and $M$ is the mass of the central object. The function $g(z)$ accounts for the absence of turbulence in the surface layer of the disk (Bisnovatyi-Kogan \& Lovelace 2007; Rothstein \& Lovelace 2008). In the body of the disk $g = 1$, whereas at the surface of the disk, at say $z_S$, $g$ tends over a short distance to a very small value $\sim 10^{-8}$, effectively zero, which is the ratio of the Spitzer diffusivity of the disk's surface layer to the turbulent diffusivity of the body of the disk. At the disk's surface the density is much smaller than its midplane value. We consider stationary solutions of equations (1) and (2) for a weak large-scale magnetic field. These can be greatly simplified for thin disks where the disk half-thickness, of the order of $h \equiv c_{s0}/\Omega_K$, is much less than $r$. Thus we have the small parameter \begin{equation} \varepsilon={h \over r} = {c_{s0}\over v_K} \ll 1~.
\end{equation} It is useful in the following to use the dimensionless height $\zeta \equiv z/h$. The three magnetic field components are assumed to be of comparable magnitude on the disk's surface, but $B_r = 0 = B_\phi$ on the midplane. On the other hand, the axial magnetic field changes by only a small amount in going from the midplane to the surface, $\Delta B_z \sim \varepsilon B_r \ll B_z$ (from ${\bf \nabla} \cdot{\bf B} =0$), so that $B_z\approx$ const inside the disk. As a consequence, the $\partial B_j/\partial r$ terms in the magnetic force in equation (1) can all be dropped in favor of the $\partial B_j/\partial z$ terms (with $j=r,~\phi$). Thus, we neglect the final term of the induction equation given in the Introduction. It is important to keep in mind that $B_j$ is the large-scale field; the approximation does not apply to the small-scale field which gives the viscosity and diffusivity. The three velocity components are assumed to satisfy $v_z^2 \ll c_{s0}^2$ and $v_r^2 \ll v_\phi^2$. Consequently, $v_\phi(r,z)$ is close in value to the Keplerian value $v_K(r)\equiv (GM/r)^{1/2}$. Thus, $\partial v_\phi/\partial r = -(1/2)(v_\phi/r)$ to a good approximation. With these assumptions, the radial component of equation (1) gives \begin{equation} \rho \left({GM \over r^2} -{v_\phi^2 \over r}\right) = - {\partial p \over \partial r} +{1\over 4 \pi} B_z {\partial B_r \over \partial z}+F_r^\nu~. \end{equation} The dominant viscous force is $F_r^\nu= -\partial T_{rz}^\nu/\partial z$ with $T_{rz}^\nu = -\rho \nu \partial v_r/\partial z$. We normalize the field components by $B_0 = B_z(r,z=0)$, with $b_r=B_r/B_0$, $b_\phi = B_\phi/B_0$, and $b_z = B_z/B_0\approx 1$. Also, we define $u_\phi \equiv v_\phi(r,z)/v_K(r)$ and the accretion speed $u_r \equiv -~v_r/(\alpha c_{s0})$. For the assumed dipole field symmetry, $b_r$ and $b_\phi$ are odd functions of $\zeta$ whereas $u_r$ and $u_\phi$ are even functions.
Equation (5) then gives \begin{equation} {\partial b_r \over \partial \zeta} = {\beta \tilde{\rho}\over \varepsilon} ~~ \big(1 - k_p~\varepsilon^2 - u_\phi^2 \big) +\alpha^2\beta {\partial \over \partial \zeta} \left(\tilde{\rho} g{\partial u_r \over \partial \zeta} \right)~, \end{equation} where $\tilde{\rho} \equiv \rho(r,z)/\rho_0$ with $\rho_0\equiv \rho(r,z=0)$. The midplane plasma beta is \begin{equation} \beta \equiv {4\pi \rho_0c_{s0}^2 \over B_0^2}~, \end{equation} where $k_p \equiv - \partial \ln p/\partial \ln r$ is assumed of order unity and $p=\rho c_s^2$. Note that $\beta =c_{s0}^2/v_{A0}^2$, where $v_{A0}=B_0/(4\pi\rho_0)^{1/2}$ is the midplane Alfv\'en velocity. The rough condition for the MRI instability and the associated turbulence in the disk is $\beta >1$ (Balbus \& Hawley 1998). In the following we assume $\beta >1$, which we refer to as a weak magnetic field. The $\phi-$component of equation (1) gives \begin{equation} {\partial b_\phi \over \partial \zeta} ={ \alpha \beta \tilde{\rho}\over 2} ~ \big( 3 \varepsilon k_\nu g - u_r \big) -{\alpha \beta \over \varepsilon} {\partial \over \partial \zeta}\left(\tilde{\rho}g {\partial u_\phi \over \partial \zeta}\right)~, \end{equation} where $k_\nu \equiv \partial \ln(\rho c_{s0}^2 r^2/h)/\partial \ln(r)>0$ is of order unity. In addition to the well-known viscous force [$F^\nu_\phi(a) = -r^{-2}\partial(r^2 T_{r\phi}^\nu)/\partial r$ with $ T_{r\phi}^\nu=-\rho \nu r\partial(v_\phi/r)/\partial r$] which gives the term $\propto k_\nu$, we must include the force contribution $F_\phi^\nu(b)= -\partial T_{\phi z}^\nu/\partial z$ with $T_{\phi z}^\nu = -\rho \nu \partial v_\phi/\partial z$. This gives the second derivative term in equation (8).
Note that integration of equation (8) from $\zeta=0$ (where $b_\phi=0$ and $\partial u_\phi/\partial \zeta=0$) to $\zeta_S+\epsilon$ (where $g=0$) gives $$ b_{\phi S+}={1\over 2} \alpha \beta\tilde{\Sigma} \big(3\varepsilon k_\nu - \overline{u}_r \big)~, $$ where $\overline{u}_r\equiv \int_0^{\zeta_S} d\zeta~ \tilde{\rho} u_r/\tilde{\Sigma}$ is the average accretion speed, $\tilde{\Sigma}\equiv \int_0^{\zeta_S} d\zeta~ \tilde{\rho}$, and the $S+$ subscript indicates evaluation at $\zeta=\zeta_S+\epsilon$. The average accretion speed, written as \begin{equation} \overline{u}_r = u_0 - {2 b_{\phi S+}\over \alpha \beta \tilde{\Sigma}}~, \end{equation} is the sum of a viscous contribution, $u_0\equiv 3\varepsilon k_\nu$, and a magnetic contribution ($\propto b_{\phi S+}$) due to the loss of angular momentum from the surface of the disk where necessarily $b_{\phi S+} \leq 0$ (Lovelace, Romanova, \& Newman 1994). Equation (9) is discussed further in \S 5. The continuity equation implies that $r h \rho_0\tilde{\Sigma}(\alpha c_{s0} \overline{u}_r )$ is independent of $r$. The $z-$component of equation (1) gives \begin{equation} {\partial p \over \partial \zeta}= -\rho c_{s0}^2\zeta - {\rho_0 c_{s0}^2\over 2 \beta} {\partial \over \partial \zeta}\big(b_r^2 +b_\phi^2\big)~. \end{equation} The neglected viscous force in this equation, $ r^{-1}\partial(rT_{rz}^\nu)/\partial r$ with $T_{rz}=-\rho \nu \partial v_r/\partial z$, is smaller than the retained pressure gradient term by a factor of order $\alpha^2 \varepsilon \ll 1$. The term involving the magnetic field describes the magnetic compression of the disk because $b_r^2+b_\phi^2$ at the surface of the disk is larger than its midplane value which is zero (Wang, Sulkanen, \& Lovelace 1990). For $\beta \gg 1$ the compression effect is small and can be neglected. As mentioned we assume $p \propto \rho^\gamma$ which can be written as $p=\rho c_{s0}^2 (\rho/\rho_0)^{\gamma-1}$. 
Thus \begin{equation} \tilde{\rho}={\rho \over \rho_0} =\left( 1-{(\gamma-1)\zeta^2 \over 2\gamma}\right)^{1/(\gamma-1)}~, \end{equation} for $\beta \gg 1$. The density goes to zero at $\zeta_m = [2\gamma/(\gamma-1)]^{1/2}$. However, before this distance is reached the MRI turbulence is suppressed, and $g(\zeta)$ in equation (3) is effectively zero. The toroidal component of Ohm's law (equivalent to equation 2), $J_\phi= \sigma ({\bf v \times B})_\phi$, with $\sigma=c^2/(4\pi \eta)$, gives \begin{equation} {\partial b_r \over \partial \zeta} ={{\cal P}\over g} {u_r }~. \end{equation} Multiplying this equation by $g$, integrating from $\zeta_S -\epsilon$ to $\zeta_S +\epsilon$, and using the facts that $\partial g/\partial \zeta = -\delta(\zeta-\zeta_S)$ and that $|u_r|$ is bounded implies that $b_{rS+}= b_{rS-}$. That is, there is no jump in $b_r$ across the highly conducting surface layer. Note that multiplying equation (12) by $g$ and integration from $\zeta =0$ to $\zeta_S +\epsilon$ gives \begin{equation} b_{rS} = {\cal P}\zeta_S \langle u_r \rangle~, \end{equation} where $\langle .. \rangle =\int_0^{\zeta_S}d\zeta(..)/\zeta_S$. The other components of Ohm's law give \begin{equation} {\partial u_\phi \over \partial \zeta} ={3\varepsilon \over 2} b_r - {\alpha \varepsilon \over {\cal P}}{\partial \over \partial \zeta} \left( g {\partial b_\phi \over \partial \zeta}\right)~. \end{equation} Combining equations (6) and (12) gives \begin{equation} u_r ={\beta \tilde{\rho} g \over \varepsilon{\cal P}} \big(1-k_p\varepsilon^2 - u_\phi^2\big)+{\alpha^2\beta g\over {\cal P}} {\partial \over \partial \zeta} \left({\tilde{\rho}} g{\partial u_r \over \partial \zeta}\right)~. \end{equation} For thin disks, $\varepsilon \ll 1$, and $\beta > 1$, we have $u_\phi = 1 +\delta u_\phi$ with $(\delta u_\phi)^2 \ll 1$ which follows from the integral of equation (6). 
Consequently, \begin{equation} \delta u_\phi = -{k_p\varepsilon^2\over 2} -{\varepsilon {\cal P}u_r \over 2\beta \tilde{\rho}g} +{\alpha^2 \varepsilon \over 2 \tilde{\rho}} {\partial \over \partial \zeta} \left(\tilde{\rho}g {\partial u_r \over \partial \zeta}\right)~, \end{equation} to a good approximation. We first take the derivative of equation (14) and then substitute for the $b_r$ derivative using equation (12). In turn, the $u_\phi$ derivatives can be put in terms of $u_r$ and its derivatives using equation (16). In this way we obtain \begin{eqnarray} {\alpha^4\beta^2} {\partial^2 \over \partial \zeta^2} \left(g{\partial \over \partial \zeta} \left({\tilde{\rho}} g{\partial \over \partial \zeta} \left({1\over {\tilde{\rho}}}{\partial \over \partial \zeta} \left({\tilde{\rho}} g {\partial u_r \over \partial \zeta}\right)\right) \right)\right) \nonumber \\ -~\alpha^2\beta {\cal P} {\partial^2 \over \partial \zeta^2} \left(g {\partial \over \partial \zeta} \left({\tilde{\rho}} g{\partial \over \partial \zeta} \left({u_r \over {\tilde{\rho}} g}\right)\right)\right) \nonumber\\ -~\alpha^2\beta {\cal P} {\partial^2 \over \partial \zeta^2}\left({1\over {\tilde{\rho}}} {\partial \over \partial \zeta} \left({\tilde{\rho}} g {\partial u_r \over \partial \zeta}\right)\right) \nonumber \\ +~{\alpha^2\beta^2 } {\partial^2 \over \partial \zeta^2}\bigg({\tilde{\rho}} g \big(u_r- gu_0 \big)\bigg) +{\cal P}^2 {\partial^2 \over \partial \zeta^2}\bigg({u_r \over {\tilde{\rho}} g} \bigg) \nonumber \\ +~3\beta{\cal P }^2{u_r \over g}=0~. \end{eqnarray} The equation can be integrated from $\zeta =0$ out to the surface of the disk $\zeta_S$ where boundary conditions apply. Because $u_r$ is an even function of $\zeta$, the odd derivatives of $u_r$ are zero at $\zeta=0$ and one needs to specify $u_r(0)$, $u^{\prime \prime}_r(0)$, and $u^{iv}_r(0)$.
A ``shooting method'' can be applied where the values of $u_r(0)$, $u_r^{\prime \prime}(0)$, and $u_r^{iv}(0)$ are adjusted to satisfy the boundary conditions. Once $u_r(\zeta)$ is calculated, equations (8), (12), and (14) can be integrated to obtain $b_\phi(\zeta)$, $b_r(\zeta)$, and $u_\phi(\zeta)$. For specificity we take \begin{equation} g(\zeta)=\left(1-{\zeta^2\over \zeta_S^2}\right)^\delta ~, \end{equation} where $\zeta_S <\zeta_m$ and $\delta \ll 1$. That is, we neglect the ratio of the Spitzer diffusivity on the surface of the disk to its value in the central part of the disk. An estimate of $\zeta_S$ can be made by noting that $\beta(\zeta)=4\pi p(\zeta)/B_0^2 = \beta (\tilde{\rho})^\gamma \approx 1$ at $\zeta_S$. This gives $\zeta_S^2/\zeta_m^2= 1-\beta^{-(\gamma-1)/\gamma}$ and $\rho_S/\rho_0 = \beta^{-1/\gamma}$. If ${\cal E}$ denotes the fraction of the disk mass accretion rate which goes into outflows, then we can estimate the vertical speed of matter at the disk's surface as $v_z(\zeta_S) \sim {\cal E} (2h/r) (\rho_0/\rho_S)|\overline{v}_r| ={\cal E}(2h/r) \beta^{1/\gamma}|\overline{v}_r|$. Our neglect of $v_z$ evidently requires that ${\cal E}(2h/r) \beta^{1/\gamma} \ll 1$. \section{Boundary Conditions} We restrict our attention to physical solutions which (a) have net mass accretion, \begin{equation} \dot{M}=4\pi r h \rho_0 \alpha c_{s0}\tilde{\Sigma}~ \overline{u}_r > 0~, \end{equation} and (b) have $b_{\phi }\leq 0$ on the disk's surface. This condition on $b_{\phi S+}$ corresponds to an efflux of angular momentum and energy (or their absence) from the disk to its corona rather than the reverse. From equation (9), the condition $b_{\phi S+} \leq 0$ is the same as $\overline{u}_r \geq u_0$, where $u_0$ is the minimum (viscous) accretion speed. Note that $\overline{u}_r/u_0 -1$ is the fraction of the accretion power which goes into the outflows or jets (Lovelace et al. 1994). 
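Conditions (a) and (b) are easy to check numerically. The sketch below (Python; the parameter values are purely illustrative, with $b_{\phi S+}=-0.321$ and $\tilde{\Sigma}=\sqrt{2}$ borrowed from the analytic example of \S 4) evaluates $\overline{u}_r$ from equation (9) and verifies that $b_{\phi S+}\leq 0$ gives $\overline{u}_r \geq u_0$, hence $\dot{M}>0$:

```python
import math

def average_accretion_speed(b_phi_sp, alpha, beta, sigma_t, eps, k_nu=1.0):
    """Equation (9): mean accretion speed = viscous term u_0 plus a
    magnetic term from angular-momentum loss at the disk surface."""
    u0 = 3.0 * eps * k_nu                      # viscous contribution
    return u0 - 2.0 * b_phi_sp / (alpha * beta * sigma_t), u0

# illustrative values: alpha, midplane beta, Sigma-tilde = sqrt(2), eps = h/r
alpha, beta, eps, sigma_t = 0.1, 100.0, 0.05, math.sqrt(2.0)
u_bar, u0 = average_accretion_speed(-0.321, alpha, beta, sigma_t, eps)

# condition (b), b_phiS+ <= 0, guarantees u_bar >= u0 and hence Mdot > 0
assert u_bar >= u0
print(round(u_bar / u0, 2))   # the excess over 1 is the power fed to outflows
```

The ratio $\overline{u}_r/u_0$ printed here is the quantity that reappears in \S 4 as the fraction of the accretion power available to winds or jets.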
Clearly, the condition on $b_{\phi S+}$ implies that $\dot{M}>0$ so that there is only one condition. In general there is a continuum of values of $b_{\phi S+} \leq 0$ for the considered solutions inside the disk. The value of $b_{\phi S +}$ can be determined by matching the calculated fields $b_{rS}$ and $b_{\phi S+}$ onto an external field and flow solution as discussed in \S 5. From equation (12) we found that there is no jump in $b_r$ across the conducting surface layer. Thus, integration of equation (6) from $\zeta_S-\epsilon$ to $\zeta_S+\epsilon$ implies that \begin{equation} {\partial u_r\over \partial \zeta}\bigg|_{\zeta_S-} =0~. \end{equation} This represents a second condition on the disk solutions. Integration of equation (14) from $\zeta_S-\epsilon$ to $\zeta_S+\epsilon$ gives $$ u_{\phi S+} -u_{\phi S -} = {\alpha \varepsilon \over {\cal P}} {\partial b_\phi \over \partial \zeta}\bigg|_{\zeta_{S-}}~. $$ This velocity jump must be zero as can be shown by inspection of the total angular momentum flux-density in the $z-$direction, $r T_{\phi z} = -B_\phi B_z/4\pi -\rho \nu( \partial v_\phi/\partial z)$, where the first term is the magnetic stress and the second is the viscous stress. A jump in $v_\phi$ would give a delta-function contribution to the viscous stress which cannot be balanced by the magnetic stress. Therefore \begin{equation} {\partial b_\phi \over \partial \zeta} \bigg{|}_{\zeta_{S-}} =0~. \end{equation} This is a third condition on the disk solutions. Integration of equation (8) from $\zeta_S-\epsilon$ to $\zeta_S+\epsilon$ gives \begin{equation} b_{\phi S+} -b_{\phi S -} = {\alpha \beta \tilde{\rho}_S \over \varepsilon} {\partial u_\phi \over \partial \zeta}\bigg|_{\zeta_{S-}}~. 
\end{equation} This equation is equivalent to the continuity of the angular momentum flux-density across the surface layer, namely, $rT_{\phi z}^+ =-B_\phi^+B_z/4\pi$ (above the layer) is equal to $rT_{\phi z}^- = -B_\phi^- B_z/4\pi - \rho \nu (\partial v_\phi /\partial z)|_-$ (below the layer). The jump $b_{\phi S+} -b_{\phi S -}$ corresponds to a radial surface current flow in the highly conducting surface layer of the disk, ${\cal J}_r = -(c/4\pi)B_0(b_{\phi S+} -b_{\phi S -})$. \begin{inlinefigure} \centerline{\epsfig{file=Fig1.eps, height=2.2in,width=3.5in}} \epsscale{0.8} \figcaption{Radial and toroidal field components (normalized to $B_z$) at the disk's surface as a function of the average accretion speed $\overline{u}_r$ (normalized by the viscous accretion speed $u_0$). For this plot $\beta=100$ and Prandtl numbers ${\cal P}=1$ and $2$. Note that $b_{\phi S+}$ is given by equation (9) and is independent of ${\cal P}$, and $b_{rS}$ is given by equation (13).} \end{inlinefigure} \section{Internal Solutions} Here, to simplify the analysis we consider the limit where $\gamma \rightarrow \infty$ in equation (11) and $\delta \rightarrow 0$ in equation (18). Then, $\zeta_S \rightarrow \zeta_m$ and both $\tilde{\rho}$ and $g$ are unit step functions going to zero at $\zeta_m =\sqrt{2}$. Also, $\overline{u}_r = \langle u_r \rangle$ and $\tilde{\Sigma}= \sqrt{2}$. Thus the above physical condition $\overline{u}_r \geq 3 \varepsilon k_\nu=u_0$ implies that $b_{rS} \geq u_0 \zeta_S{\cal P}$ from equation (13). We assume $k_p = k_\nu = 1$. The solutions to equation (17) are $u_r \propto \exp(ik_j \zeta)$ (with $j=1,~2,~3$), where \begin{equation} \alpha^4\beta^2(k_j^2)^3+2{\cal P}\alpha^2\beta(k_j^2)^2+ (\alpha^2 \beta^2+{\cal P}^2)k_j^2-3\beta{\cal P}^2 =0~, \end{equation} is a cubic in $k_j^2$. The discriminant of the cubic is negative so that there is one real root, $k_1^2$, and a complex conjugate pair of roots, $k_{2,3}^2$.
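The root structure of the cubic (23) is easy to confirm numerically. A minimal sketch (Python, standard library only; the parameter values are illustrative) evaluates the discriminant and bisects for the single real root:

```python
def cubic(x, alpha, beta, P):
    """Left-hand side of equation (23), with x = k_j^2."""
    return (alpha**4 * beta**2 * x**3 + 2.0 * P * alpha**2 * beta * x**2
            + (alpha**2 * beta**2 + P**2) * x - 3.0 * beta * P**2)

def discriminant(a, b, c, d):
    """Discriminant of a x^3 + b x^2 + c x + d."""
    return 18*a*b*c*d - 4*b**3*d + b**2*c**2 - 4*a*c**3 - 27*a**2*d**2

alpha, beta, P = 0.1, 100.0, 1.0        # illustrative parameter values
a, b = alpha**4 * beta**2, 2.0 * P * alpha**2 * beta
c, d = alpha**2 * beta**2 + P**2, -3.0 * beta * P**2

# negative discriminant: one real root k_1^2 plus a complex-conjugate pair
assert discriminant(a, b, c, d) < 0

# the real root is positive (the cubic is negative at x = 0); bisect for it
lo, hi = 0.0, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if cubic(mid, alpha, beta, P) < 0 else (lo, mid)
print(round(lo, 3))    # k_1^2, fixing the oscillatory wavenumber k_0
```

Since the real root is positive, $k_0=\sqrt{k_1^2}$ is real, consistent with the oscillatory plus hyperbolic form of the solution written next.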
Because $u_r$ is an even function of $\zeta$ we can write \begin{eqnarray} u_r& = & a_1 \cos(k_0\zeta)+a_2 \cos(k_r\zeta) \cosh(k_i\zeta) \nonumber\\ &+&a_3 \sin(k_r\zeta)\sinh(k_i\zeta)~, \end{eqnarray} where $k_0 =\sqrt{k_1^2}$, $k_r ={\rm Re}(\sqrt{k_2^2})$, and $k_i ={\rm Im}(\sqrt{k_2^2})$. The three unknown constants $a_1, a_2, a_3$ are reduced to two by imposing equation (20). The two are then reduced to one constant by imposing equation (21). The remaining constant is restricted by the condition $b_{\phi S+} \leq 0$. We consider a thin disk, $\varepsilon =h/r=0.05$, and a viscosity parameter $\alpha=0.1$. Figure 1 shows the dependence of the surface field components on the average accretion speed for $\beta=100$ and two values of ${\cal P}$. The $b_{\phi S}$ dependence is given by equation (9) and is independent of ${\cal P}$, while $b_{rS}$ is given by equation (13). \begin{inlinefigure} \centerline{\epsfig{file=Fig2.eps, height=3.2in,width=3.5in}} \epsscale{0.8} \figcaption{ Radial flow speed $v_r = - u_r$ (normalized to $\alpha c_{s0}$) as a function of $\zeta=z/h$ and a sample poloidal $(B_r,B_z)$ magnetic field line for $\beta=10^2$ and ${\cal P}=1$. Also, $r_0$ is a reference radius. } \end{inlinefigure} Figure 2 shows both the profile of the accretion speed $u_r(\zeta)$ and the shape of the poloidal magnetic field ${\bf B}_p$ for $\beta=100$ and ${\cal P}=1$. Figure 3 shows the profiles of the toroidal magnetic field $b_\phi$ and the fractional deviation of the toroidal velocity from the Keplerian value, $\delta u_\phi =(v_\phi - v_K)/v_K$. For this case, $\overline{u}_r/u_0 =1.30$, $b_{\phi S +} =-0.321$, and $b_{rS}=0.276$. In this case (and a range of others discussed below), we find that there is radial inflow in the top and bottom parts of the disk whereas there is radial {\it outflow} $u_r <0$ in the part of the disk around the midplane.
Inspection of the flow/field solution shows that the top and bottom parts of the disk lose angular momentum (by the vertical angular momentum flux $rT_{\phi z}$) {\it both} to (a) magnetic winds or jets from the disk's surfaces {\it and} to (b) the vertical flow of angular momentum to the midplane part of the disk which flows radially outward. We find that the flow pattern is the same as in Figure 2 for $10 \leq \beta \leq 200$ and ${\cal P} \geq 1$. For larger values of $\beta$ and ${\cal P}=1$, the flow pattern changes from that in Figure 2 to that in Figure 4 for $\beta =300$. However, for $\beta =300$ and ${\cal P}=2$, the flow pattern is again similar to that in Figure 2. For $\beta = 10^3$ and ${\cal P}=3$ the flow pattern is also the same as in Figure 2. For smaller viscosity values, $\alpha \leq 0.03$, and $\beta =100$ there are multiple channels with, for example, three layers of radial inflow and two layers of radial outflow in the disk. \begin{inlinefigure} \centerline{\epsfig{file=Fig3.eps, height=2.2in,width=3.5in}} \epsscale{0.8} \figcaption{Toroidal magnetic field $b_\phi =B_\phi/B_z$ and toroidal velocity $\delta u_\phi =(v_\phi - v_K)/v_K$ (with $v_K$ the Keplerian velocity) for the case where $\beta=100$ and ${\cal P}=1$. The jump in the toroidal magnetic field at the disk's surface is shown by the dashed line.} \end{inlinefigure} Figure 4 shows the profile of the accretion speed $u_r(\zeta)$ and the shape of the poloidal magnetic field ${\bf B}_p$ for $\beta=300$ and ${\cal P}=1$. Figure 5 shows the profiles of the toroidal magnetic field $b_\phi$ and the fractional deviation of the toroidal velocity from the Keplerian value, $\delta u_\phi =(v_\phi - v_K)/v_K$. For this case, $\overline{u}_r/u_0 =1.5$, $b_{\phi S +} =-1.59$, and $b_{rS}=0.318$. Inspection of the flow/field solution shows that the angular momentum flux $rT_{\phi z}>0$ in the top half of the disk and at the disk's surface this flux goes into an outflow or jet.
\begin{inlinefigure} \centerline{\epsfig{file=Fig4.eps, height=3.2in,width=3.5in}} \epsscale{0.8} \figcaption{ Radial flow speed $v_r=-u_r$ (normalized to $\alpha c_{s0}$) as a function of $\zeta=z/h$ and a sample poloidal $(B_r,B_z)$ magnetic field line for $\beta=300$ and ${\cal P}=1$. Also, $r_0$ is a reference radius.} \end{inlinefigure} \section{External Solutions} As mentioned in \S 3, the value of $b_{\phi S+}\leq 0$ is not fixed by the solution for the field and flow inside the disk. Its value can be determined by matching the calculated surface fields $b_{r S}$ and $b_{\phi S+}$ onto an external magnetic wind or jet solution. Stability of the wind or jet solution to current-driven kinking is predicted to limit the ratio of the toroidal to axial magnetic field components at the disk's surface $|b_{\phi S +}|$ to values $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} {\cal O}(2\pi r/L_z)$ (Hsu \& Bellan 2002; Nakamura, Li, \& Li 2007), where $L_z$ is the length-scale of field divergence of the wind or jet at the disk surface. From known wind and jet solutions we estimate $2\pi r/L_z \approx \pi$ (Lovelace, Berk, \& Contopoulos 1991; Ustyugova et al. 1999; Ustyugova et al. 2000; Lovelace et al. 2002). Recall that $\overline{u}_r/u_0 -1= 2|b_{\phi S +}|/ (\alpha \beta \tilde{\Sigma} u_0)$ (from equation 9) is the fraction of the accretion power going into the jets or winds. For the mentioned upper limit on $|b_{\phi S+}|$, we find $\overline{u}_r/u_0 -1 \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} {\cal O}[2\pi/(\alpha \beta \tilde{\Sigma} u_0)]$. From equation (13) we have $b_{rS} = ({\cal P}\zeta_S u_0)(\langle{u}_r\rangle/u_0)$. Therefore, for $\beta \gg 1$ and $\langle{u}_r\rangle \approx u_0$, we have $ b_{rS} \approx {\cal P}\zeta_S u_0$.
The matching of internal and external field/flow solutions has been carried out by K\"onigl (1989) and Li (1995) for the case of self-similar [$B_z(r,0) \sim r^{-5/4}$] magnetocentrifugal outflows from the disk's surface. These outflows occur under conditions where the poloidal field lines at the disk's surface are tipped relative to the rotation axis by more than $30^\circ$, which corresponds to $b_{rS} > 3^{-1/2} \approx 0.577$ (Blandford \& Payne 1982). The outflows typically carry a significant mass flux. For the internal field/flow solutions discussed in \S 4 with $\beta \gg1$, we conclude that $b_{rS}$ is sufficiently large for magnetocentrifugal outflows only for turbulent magnetic Prandtl numbers ${\cal P} \gtrsim 2.7$. Shu and collaborators (e.g., Cai et al. 2008, and references therein) have developed detailed `X-wind' models which depend on the disk having Prandtl numbers larger than unity. Recent MHD simulations by Romanova et al. (2008) provide evidence of conical or X-wind type outflows for Prandtl numbers $\geq1$. \begin{inlinefigure} \centerline{\epsfig{file=Fig5.eps, height=2.2in,width=3.5in}} \epsscale{0.8} \figcaption{Toroidal magnetic field $b_\phi =B_\phi/B_z$ and toroidal velocity $\delta u_\phi =(v_\phi - v_K)/v_K$ (with $v_K$ the Keplerian velocity) for the case where $\beta=300$ and ${\cal P}=1$. The jump in the toroidal magnetic field at the disk's surface is shown by the dashed line.} \end{inlinefigure} For Prandtl numbers ${\cal P} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 2.7$, the values of $b_{rS}$ are too small for there to be a magnetocentrifugal outflow. In this case there is an outflow of electromagnetic energy and angular momentum from the disk (with little mass outflow) in the form of a magnetically dominated or `Poynting flux jet' (Lovelace, Wang, \& Sulkanen 1987; Lovelace et al. 2002) also referred to as a `magnetic tower jet' (Lynden-Bell 1996, 2003).
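The threshold ${\cal P}\gtrsim 2.7$ follows directly from $b_{rS}\approx {\cal P}\zeta_S u_0$ together with the Blandford--Payne criterion $b_{rS}>3^{-1/2}$. A quick numerical check (Python; $\varepsilon=0.05$, $k_\nu=1$, and $\zeta_S=\sqrt{2}$, the values of the $\gamma\to\infty$ limit of \S 4):

```python
import math

eps, k_nu = 0.05, 1.0              # thin disk, h/r = 0.05
u0 = 3.0 * eps * k_nu              # minimum (viscous) accretion speed
zeta_S = math.sqrt(2.0)            # surface height in the gamma -> inf limit

def b_rS(P):
    """Surface radial field estimate b_rS ~ P * zeta_S * u0 (Section 5)."""
    return P * zeta_S * u0

# Blandford-Payne launching needs field lines tipped by more than 30 deg,
# i.e. b_rS > 1/sqrt(3); invert for the critical Prandtl number
P_crit = (1.0 / math.sqrt(3.0)) / (zeta_S * u0)
print(round(P_crit, 2))            # ~2.7, the threshold quoted above
```

For ${\cal P}$ below this value the surface field is too weakly inclined for a centrifugal wind, which is the electromagnetic-outflow regime described next.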
MHD simulations have established the occurrence of Poynting-flux jets under different conditions (Ustyugova et al. 2000, 2006; Kato, Kudoh, \& Shibata 2002; Kato 2007). Laboratory experiments have allowed the generation of magnetically dominated jets (Hsu \& Bellan 2002; Lebedev et al. 2005). \section{Discussion} A study is made of stationary axisymmetric accretion flows $[v_r(z),v_\phi(z),v_z=0]$ and the large-scale, weak magnetic field [$B_r(z),B_\phi(z), B_z\approx{\rm const}$] taking into account the turbulent viscosity and diffusivity due to the MRI and the fact that the turbulence vanishes at the surface of the disk as discussed by Bisnovatyi-Kogan \& Lovelace (2007) and Rothstein \& Lovelace (2008). We derive a sixth-order differential equation for the radial flow velocity $v_r(z)$ (equation 17) which depends mainly on the ratio of the midplane thermal to magnetic pressures $\beta >1$ and the Prandtl number of the turbulence ${\cal P}=$ viscosity/diffusivity. Boundary conditions at the disk's surfaces take into account the outflow of angular momentum to magnetic winds or jets and allow for current flow in the highly conducting surface layers. In general we find that there is a radial surface current but no toroidal surface current. The stability of this surface current layer is unknown and remains to be studied. If the layer is unstable to kinking this may cause a thickening of the current layer. We argue that stability of the winds or jets will limit the ratio of the toroidal to axial field at the disk's surface $|b_{\phi S+}|$ to values $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} \pi$. The stationary solutions we find indicate that a weak ($\beta \gg 1$), large-scale field does not diffuse away as suggested by earlier work (e.g., Lubow et al. 1994) which assumed $b_{rS} \geq 3^{-1/2}$.
For a wide range of parameters $\beta > 1$ and ${\cal P}\geq 1$, we find stationary {\it channel} type flows where the flow is radially {\it outward} near the midplane of the disk and radially {\it inward} in the top and bottom parts of the disk. Solutions with inward flow near the midplane and outflow in the top and bottom parts of the disk are also found. The solutions with radial outflow near the midplane are of interest for the outward transport of chondrules in protostellar disks from distances close to the star ($\sim 0.05$ AU) (where they are melted and bombarded by high energy particles) to larger distances ($> 1$ AU) where they are observed in the Solar system. Outward transport of chondrules from distances $\sim 0.05$ AU to $>1$ AU by an X-wind has been discussed by Shu et al. (2001). The flow/field solutions found here in a viscous/diffusive disk are different from the exponentially growing channel flow solutions found by Goodman \& Xu (1994) for the MRI in an ideal-MHD unstable shearing box. Channel solutions in viscous/diffusive disks were found earlier by Ogilvie \& Livio (2001) and by Salmeron, K\"onigl, \& Wardle (2007) for conditions different from those considered here. In general we find that the magnitude of the toroidal magnetic field component inside the disk is much larger than the other field components. The fact that the viscous accretion speed is very small, $\sim \alpha \varepsilon c_{s0}$, means that even a small large-scale field can significantly influence the accretion flow. We find that Prandtl numbers larger than a critical value estimated to be $2.7$ are needed in order for there to be magnetocentrifugal outflows from the disk's surface. For smaller ${\cal P}$, electromagnetic outflows are predicted.
Owing to the stability condition, $|b_{\phi S+}| \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} \pi$, the fraction of the accretion power going into magnetic outflows or jets is $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} {\rm const}\beta^{-1} \sim B_z^2$. Analysis of the time-dependent accretion of the large-scale ${\bf B}-$field is clearly needed to study the amplification of the field and build up of magnetic flux in the inner region of the disk. One method is to use global 3D MHD simulations (Igumenshchev et al. 2003; Igumenshchev 2008), but this has the difficulty of resolving the very thin highly conducting surface layers of the disk. Another method is to generalize the approach of Lovelace et al. (1994) taking into account the results of the present work. This is possible because the radial accretion time ($r/|\overline{u}_r|$) is typically much longer than the viscous diffusion time across the disk ($h^2/\nu$). We thank an anonymous referee for valuable comments. The work of G.S.B.-K. was partially supported by RFBR grants 08-02-00491 and 08-02-90106, RAN Program ``Formation and evolution of stars and galaxies.'' D.M.R. was supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-0602259. R.V.E.L. was supported in part by NASA grant NNX08AH25G and by NSF grants AST-0607135 and AST-0807129.
\section{The Problem} Let ${\cal X}=\{x_1, \ldots ,x_n\}$ be a finite alphabet, and $X$ be any random variable (r.v.) taking values in ${\cal X}$ according to the probability distribution ${\bf p}=(p_1, p_2, \ldots , p_n)$, that is, such that $P\{X=x_i\}=p_i$, for $i=1, 2, \ldots , n$. A well-known and widely used inequality (see \cite{CT}, Exercise 2.4) states that \begin{equation}\label{eq:HX>HfX} H(f(X))\leq H(X), \end{equation} where $f:{\cal X}\to {\cal Y}$ is any function defined on ${\cal X}$, and $H(\cdot)$ denotes the Shannon entropy. Moreover, equality holds in (\ref{eq:HX>HfX}) if and only if $f$ is one-to-one. The main purpose of this paper is to sharpen inequality (\ref{eq:HX>HfX}) by deriving tight bounds on $H(f(X))$ when $f$ \emph{is not} one-to-one. More precisely, given the r.v. $X$, an integer $2\leq m<n$, a set ${\cal Y}_m=\{y_1, \ldots ,y_m\}$, and the family of surjective functions ${\cal F}_m=\{f| \; f:{\cal X}\to {\cal Y}_m, \ |f({\cal X})|=m \}$, we want to compute the values \begin{equation}\label{eq:maxandmin} \max_{f\in {\cal F}_m}H(f(X)) \qquad \mbox{and} \qquad \min_{f\in {\cal F}_m}H(f(X)). \end{equation} \section{The Results} For any probability distribution ${\bf p}=(p_1, p_2, \ldots , p_n)$, with $p_1\geq p_2\geq \ldots \geq p_n\geq 0$, and integer $2\leq m<n$, let us define the probability distribution $R_m({\bf p})=(r_1, \ldots ,r_m)$ as follows: { if $p_1<1/m$ we set $R_m({\bf p})=(1/m, \ldots ,1/m)$, whereas if $p_1\geq 1/m$ we set $R_m({\bf p})=(r_1, \ldots ,r_m)$, where } \begin{equation} \label{eq:definition-restriction} r_i = \begin{cases} p_i & \hbox{ for } i = 1, \dots, i^* \cr \left (\sum_{j=i^*+1}^n p_j\right )/{(m-i^*)} & \hbox{ for } i = i^*+1, \dots, m, \end{cases} \end{equation} and {$i^*$ is the maximum index $i$ such that $p_i\geq \frac{\sum_{j=i+1}^n p_j}{m-i}$.} A somewhat similar operator was introduced in \cite{HY}.
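The operator $R_m({\bf p})$ is straightforward to implement; the sketch below (Python; the function name and the example distributions are ours, not from the text) follows (\ref{eq:definition-restriction}) literally:

```python
def restriction(p, m):
    """R_m(p): keep the i* largest masses and spread the remaining
    tail uniformly over the other m - i* slots (equation (3))."""
    p = sorted(p, reverse=True)
    if p[0] < 1.0 / m:
        return [1.0 / m] * m
    istar, tail = 0, sum(p)
    for i in range(1, m):              # i* = max i with p_i >= tail_i/(m-i)
        tail -= p[i - 1]               # tail = sum_{j > i} p_j
        if p[i - 1] >= tail / (m - i):
            istar = i
    share = sum(p[istar:]) / (m - istar)
    return p[:istar] + [share] * (m - istar)

print([round(r, 2) for r in restriction([0.6, 0.2, 0.1, 0.1], 2)])  # [0.6, 0.4]
print([round(r, 2) for r in restriction([0.3, 0.3, 0.2, 0.2], 3)])  # uniform: p_1 < 1/3
```

Note that the output is always a non-increasing distribution: the defining condition on $i^*$ guarantees $p_{i^*}$ is at least the uniform share given to the remaining slots.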
\noindent Additionally, we define the probability distribution $Q_m({\bf p})=(q_1, \ldots ,q_m)$ in the following way: \begin{equation} \label{eq:Q} q_i = \begin{cases} \sum_{k=1}^{n-m+1}p_k, \quad & \hbox{ for } i = 1, \cr p_{n-m+i}, & \hbox{ for } i=2, \ldots, m. \end{cases} \end{equation} The following theorem provides the quantities sought in (\ref{eq:maxandmin}). \begin{theorem} For any r.v. $X$ taking values in the alphabet ${\cal X}=\{x_1, x_2, \ldots ,x_n\}$ according to the probability distribution ${\bf p}=(p_1, p_2, \ldots , p_n)$, and for any $2\leq m<n$, it holds that \begin{equation}\label{max} \max_{f\in {\cal F}_m}H(f(X))\in \left[H(R_m({\bf p}))-\alpha, H(R_m({\bf p}))\right], \end{equation} where $\alpha=1-({1+\ln(\ln 2)})/{\ln 2}< 0.0861$, and \begin{equation}\label{min} \min_{f\in {\cal F}_m}H(f(X))=H(Q_m({\bf p}))\footnote{Here, with a slight abuse of notation, for a probability distribution ${\bf a}=(a_1, \ldots , a_t)$ we denote with $H({\bf a})=-\sum_ia_i\log a_i$ the entropy of a discrete r.v. distributed according to ${\bf a}$. Moreover, with $\log$ we denote the logarithm in base 2, and with $\ln$ the natural logarithm in base $e$}. \end{equation} \end{theorem} Therefore, the function $f\in {\cal F}_m$ for which $H(f(X))$ is minimum maps all the elements $x_1, \ldots , x_{n-m+1}\in {\cal X}$ to a single element, and it is one-to-one on the remaining elements $x_{n-m+2}, \ldots , x_n$. Before proving Theorem 1 and discussing its consequences, we note that there are quite compelling reasons why we are unable to determine the exact value of the maximum in (\ref{max}), and consequently, the form of the function $f\in {\cal F}_m$ that attains the bound. Indeed, computing the value $\max_{f\in {\cal F}_m}H(f(X))$ is an NP-hard problem. It is easy to understand the difficulty of the problem already in the simple case $m=2$.
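The minimizing distribution $Q_m({\bf p})$ of (\ref{eq:Q}) is equally simple to realize; the sketch below (Python; the names and the example distribution are ours) also brute-forces every surjection onto two symbols in a small case, confirming that merging the $n-m+1$ largest masses is optimal:

```python
import math
from itertools import product

def H(a):
    """Shannon entropy, base 2."""
    return -sum(x * math.log2(x) for x in a if x > 0)

def Q(p, m):
    """Equation (4): merge the n-m+1 largest masses into a single atom
    and keep the m-1 smallest ones separate."""
    p = sorted(p, reverse=True)
    cut = len(p) - m + 1
    return [sum(p[:cut])] + p[cut:]

p = [0.4, 0.3, 0.2, 0.1]
q = Q(p, 2)                                    # -> (0.9, 0.1)

# brute force over all surjections X -> {y1, y2}: none has lower entropy
labelings = [lab for lab in product([0, 1], repeat=len(p))
             if 0 < sum(lab) < len(p)]
best = min(H([s, 1.0 - s]) for s in
           (sum(pi for pi, l in zip(p, lab) if l == 0) for lab in labelings))
assert abs(best - H(q)) < 1e-9
print(round(H(q), 3))                          # 0.469
```

This is the opposite of the intuition for the maximum: to make $H(f(X))$ small one concentrates as much mass as possible on one output symbol.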
To that purpose, consider any function $f\in {\cal F}_2$, that is $f:{\cal X}\to{\cal Y}_2=\{y_1, y_2\}$, and let $X$ be any r.v. taking values in ${\cal X}$ according to the probability distribution ${\bf p}=(p_1, p_2, \ldots , p_n)$. Let $z_1=\!\!\!\sum_{x\in {\cal X} : f(x)=y_1}P\{X=x\}, \quad z_2=\!\!\!\sum_{x\in {\cal X} : f(x)=y_2}P\{X=x\}.$ Then, $H(f(X))=-z_1\log z_1-z_2\log z_2$, and it is maximized by a function $f\in {\cal F}_2$ that makes the sums $z_1$ and $z_2$ as equal as possible. This is equivalent to the well-known NP-hard problem \textsc{Partition} on the instance $\{p_1, \ldots , p_n\}$ (see \cite{GJ})\footnote{In the full version of the paper we will show that the problem of computing the value $\max_{f\in {\cal F}_m}H(f(X))$ is \emph{strongly} NP-hard}. Since the function $f\in {\cal F}_m$ for which $H(f(X))\geq H(R_m({\bf p}))-\alpha$ can be efficiently constructed, we also have the following important consequence of Theorem 1. \begin{corollary} There is a polynomial time algorithm to approximate the NP-hard problem of computing the value $$\max_{f\in {\cal F}_m}H(f(X)),$$ with an \emph{additive} approximation factor of $\alpha\leq 0.0861$. \end{corollary} A key tool for the proof of Theorem 1 is the following result, proved in the second part of Section \ref{proofs}. \begin{theorem}\label{teo-H1} Let ${\bf p}=(p_1,p_2, \ldots, p_n)$ be a probability distribution such that $p_1\geq p_2\geq \ldots \geq p_n>0$. If $ p_1/p_n\leq {\rho}$ then \begin{equation}\label{eq:rho} H({\bf p})\geq \log n - \left( \frac{{\rho} \ln {\rho}}{{\rho}-1} -1- \ln\frac{{\rho}\ln {\rho}}{{\rho}-1}\right)\frac{1}{\ln 2}.
\end{equation} \end{theorem} Theorem \ref{teo-H1} improves on results in several papers (see \cite{S+} and references therein quoted) that have studied the problem of estimating $H({\bf p})$ when only a bound on the ratio $ p_1/p_n$ is known.\footnote{The bound in \cite{S+} has this form: if $p_1/p_n\leq 1+2(e^\epsilon-1)+2\sqrt{e^{2\epsilon}-e^\epsilon}$, then $H(X)\geq \log n -\epsilon$. One can see that our bound (\ref{eq:rho}) is tighter.} We believe the result to be of independent interest. For instance, it can also be used to improve existing bounds on the leaf-entropy of parse trees generated by the Tunstall algorithm. To prove our results, we use ideas and techniques from Majorization Theory \cite{MO}, a mathematical framework that has proved very useful in Information Theory (e.g., see \cite{CV,CV1,HY,HV} and references therein quoted). \section{Some Applications}\label{app} Besides its inherent naturalness, the problem of estimating the entropy $H(f(X))$ vs. $H(X)$ has several interesting applications. We highlight some of them here, postponing a more complete discussion to the full version of the paper. In the area of clustering, one seeks a mapping $f$ (deterministic or stochastic) from some data, generated by a r.v. $X$ taking values in a set ${\cal X}$, to ``clusters'' in ${\cal Y}$, where $|{\cal Y}|\ll |{\cal X}|$. A widely employed measure to appraise the goodness of a clustering algorithm is the information that the clusters retain about the original data, measured by the mutual information $I(X;f(X))$ (see \cite{F+,KMN} and references therein quoted). In general, one wants to choose $f$ such that $|f({\cal X})|$ is small but $I(X;f(X))$ is large. The authors of \cite{GA} (see also \cite{kur}) proved that, {given} the random variable $X$, among all mappings $f$ that maximize $I(X;f(X))$ (under the constraint that $|f({\cal X})|$ is fixed) there is one that is \emph{deterministic}.
Since in the case of deterministic functions it holds that $I(X;f(X))=H(f(X))$, finding the clustering $f$ of ${\cal X}$ (into a fixed number $m$ of clusters) that maximizes the mutual information $I(X;f(X))$ is \emph{equivalent} to our problem of finding the function $f$ that attains the maximum in (\ref{eq:maxandmin}).% \footnote{In \cite{kur} the authors consider the problem of determining the function $f$ that maximizes $I(X;f(Y))$, where $X$ is the r.v. at the input of a DMC and $Y$ is the corresponding output. Our scenario could be seen as the particular case when the DMC is noiseless. However, the results in \cite{kur} do not imply ours, since the authors give algorithms only for binary input channels (i.e., $n=2$, which makes the problem completely trivial in our case). Instead, our results are relevant to those of \cite{kur}. For instance, we obtain that the general maximization problem considered in \cite{kur} is NP-hard, a fact unnoticed in \cite{kur}. } Another scenario where our results directly find applications is the one considered in \cite{V12}. There, the author considers the problem of best approximating a probability distribution ${\bf p}=(p_1, \ldots, p_n)$ with a shorter one ${\bf q}^*=(q^*_1, \ldots , q^*_m)$, $m\leq n$. The criterion with which one chooses ${\bf q}^*$, given ${\bf p}$, is the following. Given ${\bf p}=(p_1, \ldots, p_n)$ and ${\bf q}=(q_1, \ldots , q_m)$, define the quantity ${\tt D}({\bf p},{\bf q}) $ as $2W({\bf p},{\bf q})-H({\bf p})-H({\bf q})$, where $W({\bf p},{\bf q})$ is the \emph{minimum} entropy of a bivariate probability distribution that has ${\bf p}$ and ${\bf q}$ as marginals. Then, the ``best'' approximation of ${\bf p}$ is chosen as the probability distribution ${\bf q}^*$ with $m$ components that \emph{minimizes} ${\tt D}({\bf p},{\bf q})$, over all ${\bf q}=(q_1, \ldots , q_m)$. The author of \cite{V12} shows that ${\bf q}^*$ can be characterized in the following way.
Given ${\bf p}=(p_1, \ldots , p_n)$, call ${\bf q}=(q_1, \ldots , q_m)$ an \emph{aggregation} of ${\bf p}$ into $m$ components if there is a partition of $\{1, \ldots , n\}$ into disjoint sets $I_1, \ldots , I_m$ such that $q_k=\sum_{i\in I_k}p_i$, for $k=1, \ldots, m$. In \cite{V12} it is proved that the vector ${\bf q}^*$ that best approximates ${\bf p}$ (according to ${\tt D}$) is the aggregation of ${\bf p}$ into $m$ components of \emph{maximum entropy}. Since \emph{any} aggregation ${\bf q}$ of ${\bf p}$ can be seen as the distribution of the r.v. $f(X)$, where $f$ is some appropriate function and $X$ is a r.v. distributed according to ${\bf p}$ (and, vice versa, any deterministic $f$ \emph{gives} a r.v. $f(X)$ whose distribution is an aggregation of the distribution of $X$), one gets that the problem of computing the ``best'' approximation ${\bf q}^*$ of ${\bf p}$ is NP-hard. The bound (\ref{max}) allows us to provide an approximation algorithm to construct a probability distribution $\overline{{\bf q}}=(\overline{q}_1, \ldots , \overline{q}_m)$ such that ${\tt D}({\bf p},\overline{{\bf q}})\leq {\tt D}({\bf p},{\bf q}^*)+0.0861$, improving on \cite{CGV}, where an approximation algorithm for the same problem with an additive error of $1$ was provided. There are other problems that can be cast in our scenario. For instance, Baez \emph{et al.} \cite{Baez} give an axiomatic characterization of the Shannon entropy in terms of \emph{information loss}. Stripping away the Category Theory language of \cite{Baez}, the information loss of a r.v. $X$ amounts to the difference $H(X)-H(f(X))$, where $f$ is any deterministic function. Our Theorem 1 allows one to quantify the extreme values of the information loss of a r.v., when the support of $f(X)$ is known. There is also a vast literature (see \cite{Ma}, Section 3.3, and references therein quoted) studying the ``\emph{leakage of a program $P$ [...] defined as the (Shannon) entropy of the partition} $\Pi(P)$'' \cite{Ma}.
One can easily see that their ``leakage'' is the same as the entropy $H(f(X))$, where $X$ is the r.v. modeling the program input, and $f$ is the function describing the input-output relation of the program $P$. In Section 8 of the same paper the authors study the problem of maximizing or minimizing the leakage, in the case the program $P$ is stochastic, using standard techniques based on Lagrange multipliers. They do not consider the (harder) case of deterministic programs (i.e., deterministic $f$'s), and our results are likely to be relevant in that context. Finally, we remark that our problem can also be seen as a problem of quantizing the alphabet of a discrete source into a smaller one (e.g., \cite{ME}), where the goal is to maximize the mutual information between the original source and the quantized one. \section{The Proofs}\label{proofs} We first recall the important concept of \emph{majorization} among probability distributions. \begin{definition}\label{defmaj} {\rm \cite{MO}} Given two probability distributions ${\bf a}=(a_1, \ldots ,a_n)$ and ${\bf b}=(b_1, \ldots , b_n)$ with $a_1\geq \ldots \geq a_n\geq 0$ and $b_1\geq \ldots \geq b_n\geq 0$, we say that ${\bf a}$ is {\em majorized} by ${\bf b}$, and write ${\bf a} \preceq {\bf b}$, if and only if $$\sum_{k=1}^i a_k\leq \sum_{k=1}^i b_k, \quad\mbox{\rm for all }\ i=1,\ldots , n.$$ \end{definition} Without loss of generality, we assume that \emph{all} the probability distributions we deal with have been ordered in non-increasing order. We also use the majorization relationship between vectors of unequal lengths, by properly padding the shorter one with the appropriate number of $0$'s at the end. Consider an arbitrary function $f:{\cal X}\to{\cal Y}$, $f\in {\cal F}_m$. Any r.v. $X$ taking values in ${\cal X}=\{x_1, \ldots , x_n\}$, according to the probability distribution ${\bf p}=(p_1, \ldots, p_n)$, and the function $f$ naturally induce a r.v.
$f(X)$, taking values in ${\cal Y}=\{y_1, \ldots , y_m\}$ according to the probability distribution whose values are given by the expressions \begin{equation}\label{eq:defy} \forall y_j\in {\cal Y} \qquad P\{f(X)=y_j\}=\sum_{x\in {\cal X}:f(x)=y_j}P\{X=x\}. \end{equation} Let ${\bf z}=(z_1, \ldots , z_m)$ be the vector containing the values $z_1=P\{f(X)=y_1\}, \ldots , z_m=P\{f(X)=y_m\}$ ordered in non-increasing fashion. For convenience, we state the following self-evident fact about the relationship between ${\bf z}$ and ${\bf p}$. \begin{claim} There is a partition of $\{1, \ldots , n\}$ into disjoint sets $I_1, \ldots , I_m$ such that $z_j=\sum_{i\in I_j}p_i$, for $j=1, \ldots, m$. \end{claim} \noindent Therefore, ${\bf z}$ is an \emph{aggregation} of ${\bf p}$. Given a r.v. $X$ distributed according to ${\bf p}$, and \emph{any} $f\in {\cal F}_m$, by simply applying the definition of majorization one can see that the (ordered) probability distribution of the r.v. $f(X)$ is majorized by $Q_m({\bf p})=(q_1, \ldots ,q_m)$, as defined in (\ref{eq:Q}). Therefore, by invoking the Schur concavity of the entropy function $H$ (see \cite{MO}, p. 101 for the statement, and \cite{HV} for an improvement), which says that $H({\bf a})\geq H({\bf b})$ whenever ${\bf a}\preceq {\bf b}$, we get that $H(f(X))\geq H(Q_m({\bf p}))$. From this, the equality (\ref{min}) immediately follows. We also need the following two simple but, for our purposes, important results, stated and proved in \cite{CGV} with a different terminology. \begin{lemma}{\rm\cite{CGV}} For ${\bf p}$ and ${\bf z}$ as above, it holds that ${\bf p}\preceq {\bf z}.$ \end{lemma} In other words, for any r.v. $X$ and function $f$, the probability distribution of $f(X)$ \emph{always} majorizes that of $X$.
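As a sanity check (ours, not part of the proof), the majorization relation of Definition \ref{defmaj}, Lemma 1, and the majorization of every aggregation by $Q_m({\bf p})$ can be verified numerically on a small example:

```python
from itertools import product

def majorized(a, b, tol=1e-12):
    """True iff a ≼ b in the sense of Definition 1; the shorter vector
    is padded with zeros, as described in the text."""
    n = max(len(a), len(b))
    a = sorted(a, reverse=True) + [0.0] * (n - len(a))
    b = sorted(b, reverse=True) + [0.0] * (n - len(b))
    sa = sb = 0.0
    for x, y in zip(a, b):
        sa, sb = sa + x, sb + y
        if sa > sb + tol:
            return False
    return True

p = [0.35, 0.25, 0.2, 0.1, 0.1]               # an arbitrary sorted example
m = 3
Qm = [sum(p[: len(p) - m + 1])] + p[len(p) - m + 1 :]
for assign in product(range(m), repeat=len(p)):
    if len(set(assign)) < m:                  # f must be surjective
        continue
    z = [0.0] * m
    for i, j in enumerate(assign):
        z[j] += p[i]
    assert majorized(p, z)                    # Lemma 1: p ≼ z
    assert majorized(z, Qm)                   # z ≼ Q_m(p), which gives (min)
```

Both assertions hold for every surjective $f$, matching Lemma 1 and the argument for (\ref{min}).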
\begin{lemma}{\rm\cite{CGV}} For \emph{any} $m$, $2\leq m<n$, and probability distribution ${\bf a}=(a_1, \ldots , a_m)$ such that ${\bf p}\preceq {\bf a}$, it holds that \begin{equation}\label{p<R<a} R_m({\bf p})\preceq {\bf a}, \end{equation} where $R_m({\bf p})$ is the probability distribution defined in {\rm (\ref{eq:definition-restriction}). } \end{lemma} From Lemmas 1 and 2, and by applying the Schur concavity of the entropy function $H$, we get the following result. \begin{corollary} For any r.v. $X$ taking values in ${\cal X}$ according to a probability distribution ${\bf p}$, and for \emph{any} $f\in {\cal F}_m$, it holds that \begin{equation}\label{HF<HR} H(f(X))\leq H(R_m({\bf p})). \end{equation} \end{corollary} \noindent The above corollary implies that $$\max_{f\in {\cal F}_m}H(f(X))\leq H(R_m({\bf p})).$$ Therefore, to complete the proof of Theorem 1 we need to show that we can construct a function $f\in {\cal F}_m$ such that \begin{equation}\label{need} H(f(X))\geq H(R_m({\bf p}))-\left(1-\frac{1+\ln(\ln 2)}{\ln 2}\right), \end{equation} or, equivalently, that we can construct an {aggregation} of ${\bf p}$ into $m$ components, whose entropy is at least $H(R_m({\bf p}))- \left(1-\frac{1+\ln(\ln 2)}{\ln 2}\right).$ We prove this fact in the following lemma. \begin{lemma} \label{lemma:huffman-prob} For any ${\bf p}=(p_1, \ldots , p_n)$ and $2\leq m<n$, we can construct {an aggregation} ${\bf q}=(q_1, \ldots , q_m)$ of ${\bf p}$ such that $$H({\bf q}) \geq H(R_m({\bf p})) - \left(1-\frac{1+\ln(\ln 2)}{\ln 2}\right).$$ \end{lemma} \begin{IEEEproof} We will assemble the aggregation ${\bf q}$ by means of the Huffman algorithm. We first make the following observation.
For the purposes of this paper, each \emph{step} of the Huffman algorithm consists in merging the two smallest elements $x$ and $y$ of the current probability distribution, deleting $x$ and $y$ and substituting them with the single element $x+y$, and \emph{reordering} the new probability distribution from the largest element to the smallest (ties are arbitrarily broken). Immediately after the step in which $x$ and $y$ are merged, \emph{each} element $z$ in the new and reduced probability distribution that finds itself positioned at the ``right'' of $x+y$ (if there is such a $z$) has a value that satisfies $(x+y)\leq 2z$ (since, by choice, $x,y\leq z$). Let ${\bf q} = (q_1, \dots, q_m)$ be the ordered probability distribution obtained by executing \emph{exactly} $n-m$ steps of the Huffman algorithm, starting from the distribution ${\bf p}$. Denote by $i_q$ the maximum index $i$ such that for each $j = 1, \dots, i$ the component $q_j$ \emph{has not} been produced by a merge operation of the Huffman algorithm. In other words, $i_q$ is the maximum index $i$ such that for each $j = 1, \dots, i$ it holds that $q_j=p_j$. Notice that we allow $i_q$ to be equal to $0$. Therefore $q_{i_q+1}$ has been produced by a merge operation. At the step in which the value $q_{i_q+1}$ was created, it holds that $q_{i_q+1}\leq 2z$, for any $z$ at the ``right'' of $q_{i_q+1}$. At later steps, the inequality $q_{i_q+1}\leq 2z$ still holds, since elements at the right of $q_{i_q+1}$ could only have increased their values. Let $S = \sum_{k=i_q+1}^mq_k$ be the sum of the last (smallest) $m-i_q$ components of ${\bf q}$. The vector ${\bf q}' = (q_{i_q+1}/S, \dots, q_{m}/S)$ is a probability distribution such that the ratio between its largest and its smallest component is upper bounded by 2.
By Theorem \ref{teo-H1}, with $\rho=2$, it follows that \begin{equation} \label{equation:DG} H({\bf q}') \geq \log(m-i_q) - \alpha, \end{equation} where $\alpha= 1-\frac{1+\ln(\ln 2)}{\ln 2}< 0.0861$. Therefore, we have \begin{eqnarray*} H({\bf q}) &=& \sum_{j=1}^{i_q} q_j \log \frac{1}{q_j} + \sum_{j=i_q+1}^{m} q_j \log \frac{1}{q_j} \label{eq:h-1st}\\ &=& \sum_{j=1}^{i_q} q_j \log \frac{1}{q_j} - S \log S + S \sum_{j=i_q+1}^{m} \frac{q_j}{S} \log \frac{S}{q_j} \\ &=& \sum_{j=1}^{i_q} q_j \log \frac{1}{q_j} - S \log S + S H({\bf q}')\\ &\geq& \sum_{j=1}^{i_q} q_j \log \frac{1}{q_j} - S \log S + S (\log(m-i_q) - \alpha) \\ &=& \sum_{j=1}^{i_q} q_j \log \frac{1}{q_j} + \!\!S \log\frac{m-i_q}{S} - \alpha S\\ &=& \sum_{j=1}^{i_q} q_j \log \frac{1}{q_j}\! + \!\!\!\sum_{j=i_q+1}^m \frac{S}{m-i_q} \log\frac{m-i_q}{S} - \alpha S \\ &\geq& \sum_{j=1}^{i_q} q_j \log \frac{1}{q_j} + \sum_{j=i_q+1}^m \frac{S}{m-i_q} \log\frac{m-i_q}{S} - \alpha \\ &=& H\Bigl(q_1, q_2, \dots, q_{i_q}, \frac{S}{m-i_q}, \dots, \frac{S}{m-i_q}\Bigr) - \alpha. \label{eq:h-last} \end{eqnarray*} Let ${\bf q}^* = (q_1, q_2, \dots, q_{i_q}, \frac{S}{m-i_q}, \frac{S}{m-i_q}, \dots, \frac{S}{m-i_q}),$ and observe that ${\bf q}^*$ coincides with ${\bf p}$ in the first $i_q$ components, as does ${\bf q}$. What we have shown is that \begin{equation} \label{h-h*} H({\bf q}) \geq H({\bf{q}}^*) - \alpha. \end{equation} We now observe that $i_q \leq i^*$, where $i^*$ is the index that intervenes in the definition of our operator $R_m({\bf p})$ (see (\ref{eq:definition-restriction})). In fact, by the definition of ${\bf q}$ one has $q_{i_q} \geq q_{i_q+1} \geq \cdots \geq q_m$, which also implies \begin{equation}\label{eq:hh} \frac{\sum_{j=i_q+1}^m q_j}{m-i_q} \leq q_{i_q+1} \leq q_{i_q} = p_{i_q}. \end{equation} Moreover, since the first $i_q$ components of ${\bf q}$ are the same as in ${\bf p}$, we also have $\sum_{j = i_{q}+1}^m q_j = \sum_{j=i_q+1}^n p_j$.
This, together with relation (\ref{eq:hh}), implies \begin{equation}\label{eq:new} \frac{\sum_{j=i_q+1}^n p_j}{m-i_q} \leq p_{i_q}. \end{equation} Equation (\ref{eq:new}) clearly implies $i_q \leq i^*$, since $i^*$ is, by definition, the maximum index $i$ such that $p_i \geq \frac{\sum_{j=i+1}^n p_j}{m-i}.$ From the just proved inequality $i^* \geq i_q$, we also have \begin{equation} \label{h:inequality} {\bf q}^* \preceq R_m({\bf p}). \end{equation} Using (\ref{h-h*}), (\ref{h:inequality}), and the Schur concavity of the entropy function, we get $$H({\bf q})\geq H({\bf q}^*)-\alpha\geq H(R_m({\bf p}))-\alpha,$$ thus completing the proof of the Lemma (and of Theorem 1). \end{IEEEproof} We now prove Theorem \ref{teo-H1}. Again, we use tools from majorization theory. Consider an arbitrary probability distribution ${\bf p}=(p_1,p_2, \ldots, p_n)$ with $p_1\geq p_2\geq \ldots \geq p_n>0$ and $p_1/p_n\leq {\rho}$. Let us define the probability distribution \begin{eqnarray}\label{zeta} {\bf z}_{\rho}({\bf p})=(z_1,\ldots, z_n)&&\\ =(\underbrace{{\rho} p_n,\ldots, {\rho} p_n}_{i \ \mbox{\scriptsize times}},&& \hspace{-0.7truecm} 1-(n+i{\rho}-i-1)p_n,p_n, \ldots, p_n),\nonumber \end{eqnarray} where $i=\left\lfloor{(1-np_n)}/{p_n({\rho}-1)}\right\rfloor$. It is easy to verify that $p_n\leq 1-(n+i({\rho}-1)-1)p_n\leq {\rho} p_n$. \begin{lemma}\label{primo} Let ${\bf p}=(p_1,p_2, \ldots, p_n)$ with $p_1\geq p_2\geq \ldots \geq p_n>0$ be any probability distribution with $p_1/p_n\leq{\rho}$. The probability distribution ${\bf z}_{\rho}({\bf p})$ satisfies ${\bf p}\preceq {\bf z}_{\rho}({\bf p}).$ \end{lemma} \begin{IEEEproof} For any $j\leq i$, it holds that $$p_1+\ldots + p_j\leq j\, p_1\leq j ({\rho} p_n)=z_1+\ldots +z_j.$$ Consider now some $j\geq i +1$ and assume by contradiction that $p_1+\ldots + p_j >z_1+\ldots+z_j$. It follows that $p_{j+1}+\ldots+p_n<z_{j+1}+\ldots+z_n=(n-j)p_n$. As a consequence we get the contradiction $p_n\leq (p_{j+1}+\ldots+p_n)/(n-j)<p_n$.
\end{IEEEproof} \medskip Lemma \ref{primo} and the Schur concavity of the entropy imply that $H({\bf p})\geq H({\bf z}_{\rho}({\bf p}))$. We can therefore prove Theorem 2 by showing the appropriate upper bound on $\log n -H({\bf z}_{\rho}({\bf p}))$. \begin{lemma} It holds that $$\log n -H({\bf z}_{\rho}({\bf p}))\leq \left(\frac{{\rho}\ln {\rho}}{{\rho}-1} -1- \ln \frac{{\rho}\ln {\rho}}{{\rho}-1}\right)\frac{1}{\ln 2}.$$ \end{lemma} \begin{IEEEproof} Consider the class of probability distributions of the form $${\bf z}_{\rho}(x,i)=({\rho} x,\ldots, {\rho} x,1-(n+i({\rho}-1)-1)x,x, \ldots, x),$$ having the first $i$ components equal to ${{\rho} x}$ and the last $n-i-1$ equal to $x$, for suitable $0\leq x\leq 1/\rho$, and $i\geq 0$ such that \begin{equation}\label{xin} 1-(n+i({\rho}-1)-1)x\in[x,{\rho} x). \end{equation} Clearly, for $x=p_n$ and $i=\left\lfloor{(1-np_n)}/{p_n({\rho}-1)}\right\rfloor$ one has ${\bf z}_{\rho}({\bf p})={\bf z}_{\rho}(x,i)$, and we can prove the lemma by upper bounding the maximum (over all $x$ and $i$) of $\log n -H({\bf z}_{\rho}(x,i))$. Let \begin{align*} f(x,&i)= \log n -H({\bf z}_{\rho}(x,i)) =\log n + i ({\rho} x\log ({\rho} x))\\ +& (1-(n+i({\rho}-1)-1)x)\log(1-(n+i({\rho}-1)-1)x) \\ & \ \qquad\qquad\qquad \qquad\qquad\qquad\qquad + (n-i-1)x \log x. \end{align*} From (\ref{xin}), for any value of $i\in \{1,\ldots, n-2\}$, one has that \begin{equation*} x\in \left(\frac{1}{n+(i+1)({\rho}-1)},\frac{1}{n+i({\rho}-1)}\right] \end{equation*} Set $A=n+i({\rho}-1)-1$. We have \begin{align*} f(x,i)=&\log n +i{\rho} x \log({\rho} x)\\ & +(1-Ax) \log (1-Ax)+(n-i-1)x\log x,\\ \frac{d}{d x}f(x,i)=&i{\rho}\log {\rho} + (i{\rho} -A +n-i-1)\log e \\ &+ (i{\rho} +n-i-1)\log x -A\log(1-Ax)\\ =&i{\rho}\log {\rho}+ A\log x-A\log(1-Ax),\\ \frac{d^2}{d x^2}f(x,i)=& \Bigl(\frac{A}{x} +\frac{A^2}{1-Ax}\Bigr ) \log e.
\end{align*} Since $\frac{d^2}{d x^2}f(x,i)\geq 0$ for any $x\in \left(\frac{1}{n+(i+1)({\rho}-1)},\frac{1}{n+i({\rho}-1)}\right]$, the function is $\cup$-convex in this interval, and it is upper bounded by the maximum between the two extrema values $f(1/(n+(i+1)({\rho}-1)),i)$ and $f(1/(n+i({\rho}-1)),i)$. Therefore, we can upper bound $f(x,i)$ by the maximum value among \begin{align*} f(1/(n+i({\rho}-1)),i)=&\log n+ \frac{i{\rho}}{n+i({\rho}-1)}\log {\rho} \\ &+\log \frac{1}{n+i({\rho}-1)}, \end{align*} for $i=1,\ldots, n-1$. We now interpret $i$ as a continuous variable, and we differentiate $\log n+ \frac{i{\rho}}{n+i({\rho}-1)}\log {\rho} +\log \frac{1}{n+i({\rho}-1)}$ with respect to $i$. We get \begin{align*} \frac{d}{d i} & \left(\log n+ \frac{i{\rho}}{n+i({\rho}-1)}\log {\rho} +\log \frac{1}{n+i({\rho}-1)}\right)\\ &= \frac{n({\rho}\log {\rho}-({\rho}-1)\log e)-i({\rho}-1)^2\log e}{(n+i({\rho}-1))^2}, \end{align*} that is positive if and only if $i\leq \frac{n}{{\rho}-1}\left(\frac{{\rho}\ln {\rho}}{{\rho}-1}-1\right).$ Therefore, the desired upper bound on $f(x,i)$ can be obtained by computing the value of $f(\overline{x},\overline{\imath})$, where $\overline{\imath}=\frac{n}{{\rho}-1}\left(\frac{{\rho}\ln {\rho}}{{\rho}-1}-1\right)$ and $\overline{x}=\frac{1}{n+\overline{\imath}({\rho}-1)}$. The value of $f(\overline{x},\overline{\imath})$ turns out to be equal to \begin{align*} \log n &+ \frac{\frac{n}{{\rho}-1}\left(\frac{{\rho}\ln {\rho}}{{\rho}-1}-1\right){\rho}\log {\rho}}{n+ n\left(\frac{{\rho}\ln {\rho}}{{\rho}-1}-1\right)} -\log \left(n+ n\!\left(\frac{{\rho}\ln {\rho}}{{\rho}-1}-1\right)\!\right)\\ &= \frac{{\rho}\log{\rho}({\rho}\ln{\rho}-{\rho}+1)}{({\rho}-1){\rho}\ln {\rho}}-\log\left(\frac{{\rho}\ln{\rho}}{{\rho}-1}\right)\\ &=\frac{{\rho}\ln{\rho}-({\rho}-1)}{({\rho}-1)\ln 2} -\log\left(\frac{{\rho}\ln{\rho}}{{\rho}-1}\right) \\ &=\left(\frac{{\rho}\ln {\rho}}{{\rho}-1} -1- \ln\frac{{\rho}\ln {\rho}}{{\rho}-1}\right)\frac{1}{\ln 2}. 
\end{align*} \end{IEEEproof} We conclude the paper by showing how Theorems 1 and 2 allow us to design an approximation algorithm for the second problem mentioned in Section \ref{app}, that is, the problem of constructing a probability distribution $\overline{{\bf q}}=(\overline{q}_1, \ldots , \overline{q}_m)$ such that ${\tt D}({\bf p},\overline{{\bf q}})\leq {\tt D}({\bf p},{\bf q}^*)+0.0861$. Our algorithm improves on the result presented in \cite{CGV}, where an approximation algorithm for the same problem with an additive error of $1$ was provided. \smallskip Let ${\bf q}$ be the probability distribution constructed in Lemma 3 and let us recall that the first $i_q$ components of ${\bf q}$ coincide with the first $i_q$ components of ${\bf p}$. In addition, for each $i = i_q+1, \dots, m,$ there is a set $I_i \subseteq \{i_q+1, \dots, n\}$ such that $q_i = \sum_{k \in I_i} p_k$ and the $I_i$'s form a partition of $\{i_q+1, \dots, n\}$ (i.e., ${\bf q}$ is an aggregation of ${\bf p}$ into $m$ components). We now build a bivariate probability distribution ${{\bf M}}_q=[m_{ij}]$, having ${\bf p}$ and ${\bf q}$ as marginals, as follows: \begin{itemize} \item in the first $i_q$ rows and columns, the matrix ${{\bf M}}_q$ has non-zero components only on the diagonal, namely $m_{j\,j} = p_j = q_j$ and $m_{i\, j} = 0$ for any $i, j \leq i_q$ such that $i \neq j$; \item for each row $i = i_q+1, \dots, m$ the only non-zero elements are the ones in the columns corresponding to elements of $I_i$; precisely, for each $j \in I_i$ we set $m_{i \, j} = p_j.$ \end{itemize} It is not hard to see that ${\bf M}_q$ has ${\bf p}$ and ${\bf q}$ as marginals. Moreover, we have that $H({\bf M}_q) = H({\bf p})$, since by construction the non-zero components of ${\bf M}_q$ coincide with the components of ${\bf p}.$ Let ${\cal C}({\bf p}, {\bf q})$ be the set of all bivariate probability distributions having ${\bf p}$ and ${\bf q}$ as marginals.
Recall that $\alpha=1-({1+\ln(\ln 2)})/{\ln 2}< 0.0861$. We have that \begin{eqnarray} \label{eq:V-distance} {\tt D}({\bf p}, {\bf q})&=& \min_{{\bf N} \in {\cal C}({\bf p}, {\bf q})} 2H({\bf N}) - H({\bf p}) - H({\bf q}) \label{eq:V-2} \\ &\leq& 2 H({\bf M}_q) - H({\bf p}) - H({\bf q}) \label{eq:V-3} \\ &=& H({\bf p}) - H({\bf q}) \label{eq:V-3-1}\\ &\leq& H({\bf p}) - H(R_m({\bf p})) + \alpha \label{eq:V-5}\\ &\leq& H({\bf p}) - H({\bf q}^*) + \alpha \label{eq:V-5new}\\ &\leq& {\tt D}({\bf p}, {\bf q}^*) + \alpha \label{eq:V-6} \end{eqnarray} where (\ref{eq:V-2}) is the definition of ${\tt D}({\bf p}, {\bf q})$; (\ref{eq:V-3}) follows from (\ref{eq:V-2}) since ${\bf M}_q \in {\cal C}({\bf p}, {\bf q})$; (\ref{eq:V-3-1}) follows from (\ref{eq:V-3}) because $H({\bf M}_q) = H({\bf p})$; (\ref{eq:V-5}) follows from Lemma \ref{lemma:huffman-prob}; (\ref{eq:V-5new}) follows from (\ref{eq:V-5}), the known fact that ${\bf q}^*$ is an aggregation of ${\bf p}$ (see \cite{V12}), and Lemmas 1 and 2. Finally, (\ref{eq:V-6}) follows from the general inequality $H({\bf a}) - H({\bf b})\leq {\tt D}({\bf a}, {\bf b})$, which is formula (48) in \cite{KSS15}.
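To make the closing construction concrete, the following Python sketch (ours, with an arbitrary example distribution) builds the Huffman-based aggregation of Lemma \ref{lemma:huffman-prob} and the coupling ${\bf M}_q$, and checks the marginals, the identity $H({\bf M}_q)=H({\bf p})$, and the entropy guarantee of the Lemma:

```python
import heapq
import math

def entropy(p):
    """Shannon entropy in bits, with 0 log 0 = 0."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def huffman_aggregation(p, m):
    """Run n - m merge steps of Huffman's algorithm on p, keeping track
    of which original symbols end up in each part (Lemma 3)."""
    heap = [(x, [i]) for i, x in enumerate(p)]
    heapq.heapify(heap)
    for _ in range(len(p) - m):
        a, ia = heapq.heappop(heap)
        b, ib = heapq.heappop(heap)
        heapq.heappush(heap, (a + b, ia + ib))
    return sorted(heap, reverse=True)      # (mass, indices), non-increasing

p = [0.3, 0.2, 0.15, 0.12, 0.1, 0.08, 0.05]   # arbitrary, sorted, sums to 1
m = 3
alpha = 1 - (1 + math.log(math.log(2))) / math.log(2)
parts = huffman_aggregation(p, m)
q = [mass for mass, _ in parts]

# The coupling M_q: row j holds p_i in column i for each i in part j.
M = [[p[i] if i in idx else 0.0 for i in range(len(p))] for _, idx in parts]
assert all(abs(sum(col) - pi) < 1e-12 for col, pi in zip(zip(*M), p))
assert abs(entropy([x for row in M for x in row]) - entropy(p)) < 1e-12
# Here p_1 < 1/m, so R_m(p) is uniform and Lemma 3 guarantees:
assert entropy(q) >= math.log2(m) - alpha
```

The two marginal/entropy checks mirror the steps (\ref{eq:V-3}) and (\ref{eq:V-3-1}) in the chain above; the last assertion is the additive guarantee of Lemma \ref{lemma:huffman-prob}.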
\section{Introduction} Many physical and natural phenomena can be modeled mathematically using partial differential equations (PDEs). In the realm of linear theory, solutions of PDEs obey the principle of linear superposition, and in some cases, they possess explicit analytical expressions. However, the laws of nature are not always linear, and nonlinear PDEs often play an essential role in modeling these phenomena. Having applications in practically all areas of the natural sciences, the theory of PDEs, both linear and nonlinear, is one of the largest and most active areas of modern mathematics. Historically, the subject of PDEs sprang from the study of the geometry of surfaces and from various problems in mechanics. It developed alongside the calculus of variations, bridging the theory of surfaces with the understanding of physical problems. Particularly, the study of wave propagation problems has, in turn, stimulated further developments in the general theory of nonlinear PDEs. Some famous examples of nonlinear PDEs are the nonlinear kinematic wave equation, the nonlinear Klein-Gordon equation, the Burgers equation, the Fisher equation, the Boussinesq equation, the Korteweg-de Vries equation (KdVE), the nonlinear Schr\"odinger equation (NLSE), the Benjamin-Ono equation, the Benjamin-Bona-Mahoney equation, the Kadomtsev-Petviashvili equation, the Davey-Stewartson equation, and the Camassa-Holm equation~\cite{Debnath12}. This chapter specifically deals with the NLSE. In particular, we will provide an overview of modeling and application aspects of the model in $(1 + 1)$D, the simplest domain of space and time variables. The nonlinear term in the NLSE that we discuss is cubic, and thus the equation is also sometimes called the cubic Schr\"odinger equation (CSE). In the context of Bose-Einstein condensation (BEC), the NLSE is known as the Gross-Pitaevskii equation (GPE).
The NLSE is an evolution equation for the slowly varying envelope of a weakly nonlinear quasi-monochromatic wave packet in dispersive media. It possesses an infinite set of conservation laws and is completely integrable by the inverse scattering transform~\cite{Ablowitz74,Ablowitz81,Ablowitz91}. Indeed, having fundamental knowledge of the NLSE is essential in understanding the general theory of nonlinear dispersive waves. In the absence of the nonlinear term, the NLSE reduces to the well-known Schr\"odinger equation, a linear PDE that governs a wave function characterizing the state of a quantum-mechanical system. The equation is named after the Austrian physicist Erwin Schr\"odinger, who developed a number of fundamental results in the field of quantum theory~\cite{Schrodinger26}. The Schr\"odinger equation can be derived using the mathematical formulation of quantum mechanics in terms of operators in Hilbert space introduced by~\cite{Dirac30} and~\cite{von32}, known as the Dirac-von Neumann axioms. Readers interested in a more detailed discussion of the Schr\"odinger equation may consult any textbook in quantum mechanics, such as~\cite{Grif18,Phil03} or~\cite{Shankar94}. The discussion in this chapter will focus on the NLSE rather than on the (linear) Schr\"odinger equation. The NLSE arises in various physical settings in fluid mechanics and hydrodynamics describing the evolution of surface gravity water waves. It also has extensive applications in nonlinear optics, plasma physics and magnetohydrodynamics, solid state physics characterizing the propagation of a heat pulse in a solid, superconductivity describing solitary waves propagation in piezoelectric semiconductors, condensed matter such as BEC, and even mathematical finance. In this chapter, we will focus only on four of these applications: water waves, superconductivity, nonlinear optics and BEC. This chapter is organized as follows.
After this introduction, Section~\ref{hydrodynamics} discusses NLSE modeling and application in hydrodynamics. We derive both linear and nonlinear Schr\"odinger equations heuristically. By implementing the method of multiple scales~\cite{Kevorkian12,Kevorkian13,Nayfeh08}, we then derive the temporal and the spatial NLSEs. An overview of applications in surface water waves will be presented. Section~\ref{superconductivity} covers the NLSE derivation from the nonlinear Klein-Gordon equation using the same technique as in Section~\ref{hydrodynamics}. We also consider applications of the sine-Gordon models in superconductivity. Section~\ref{nonlinearoptics} reviews modeling and applications of the NLSE in the field of nonlinear optics. We adopt Maxwell's and Helmholtz' equations as a starting point for the derivation of the NLSE. Section~\ref{BEcondensate} describes the derivation of the GPE by means of the mean-field theory and its applications in the emerging field of BEC. Finally, Section~\ref{conclusion} provides concluding remarks to our discussion. \section{Hydrodynamics} \label{hydrodynamics} \subsection{Heuristic derivation for the NLSE} The following heuristic derivation for the NLSE is based on~\cite{Debnath94,Dingemans97}. For a linear dispersive wave equation, we can express its general solution in terms of a Fourier transform representation. Suppose we have the following linear equation governing the evolution of the surface elevation $\eta(x,t)$: \begin{equation} \partial_{t} \eta + i \Omega(-i\partial_{x}) \eta = 0, \end{equation} then its general solution $\eta(x,t)$ expressed by the Fourier representation is given by \begin{equation} \eta(x,t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\zeta) \, e^{i(k x - \omega t)} \, \mathrm{d}\zeta.
\label{generaleta} \end{equation} Here, we can replace the variable $\zeta$ either with wavenumber $k$ or with frequency $\omega$, and both are related by the linear dispersion relationship $\omega = \Omega(k)$ or $k = K(\omega)$, where $K = \Omega^{-1}$. The spectrum function $F(\zeta)$ will be determined from a given initial or boundary condition. For an initial value problem (IVP), $F(k)$ is the Fourier transform of the initial condition $\eta(x,0)$. Correspondingly, the Fourier transform of the initial signal $\eta(0,t)$ is given by $F(\omega)$ in the case of a boundary value problem (BVP). They are given as follows: \begin{align} F(k) &= \int_{-\infty}^{\infty} \eta(x,0) \, e^{-i k x} \, \mathrm{d}x \\ F(\omega) &= \int_{-\infty}^{\infty} \eta(0,t) \, e^{ i \omega t} \, \mathrm{d}t. \end{align} We adopt an assumption of a slowly modulated wave as it propagates in a dispersive medium, and hence $F(k)$ and $F(\omega)$ are narrow-banded spectra around $k_0$ and $\omega_0$, respectively. The linear dispersion relationship can be expressed in its Taylor-expansion series about the basic state wavenumber $k_0$ and frequency $\omega_0$, written as follows: \begin{align} \Omega(k) &= \omega_0 + \Omega'(k_0)(k - k_0) + \frac{1}{2!} \Omega''(k_0) (k - k_0)^2 + \dots \\ K(\omega) &= k_{0} + K'(\omega_{0})(\omega - \omega_{0}) + \frac{1}{2!}K''(\omega_{0})(\omega - \omega_{0})^{2} + \dots. 
\end{align} Therefore, we can rewrite~\eqref{generaleta} as $\eta(x,t) = A(\xi,\tau) e^{i(k_{0}x - \omega_{0}t)}$, where $A(\xi,\tau)$ is the corresponding complex-valued amplitude of the wave packet $\eta(x,t)$, written in two different versions: \begin{align} A(\xi_1,\tau_1) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} F(k_0 + \kappa) \, e^{i(\xi_1 - \Omega_{\textmd{res}}(k_0)\tau_1/\kappa^{2})} \,\mathrm{d} \kappa, \label{complexamplitude1} \\ A(\xi_2,\tau_2) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega_{0} + \nu) \, e^{-i(\tau_2 - K_{\textmd{res}}(\omega_0)\xi_2/\nu^{2})} \, \mathrm{d} \nu. \label{complexamplitude2} \end{align} Here, $\kappa = k - k_0 = \mathcal{O}(\epsilon)$, $\xi_1 = \kappa(x - \Omega'(k_0) t)$, $\tau_1 = \kappa^2 t$, $\nu = \omega - \omega_{0} = \mathcal{O}(\epsilon)$, $\xi_2 = \nu^{2} x$, $\tau_2 = \nu(t - K'(\omega_{0})x)$, and $0 < \epsilon \ll 1$ is a small positive parameter. The residual terms appearing in the exponential term read \begin{align*} \Omega_{\textmd{res}}(k_0) &= \Omega(k) - [\omega_0 + \Omega'(k_0)\kappa] = \kappa^{2}\left(\frac{1}{2!}\Omega''(k_0) + \frac{1}{3!}\Omega'''(k_0)\kappa + \dots \right) \\ K_{\textmd{res}}(\omega_0) &= K(\omega) - [k_{0} + K'(\omega_{0})\nu] = \nu^{2}\left(\frac{1}{2!}K''(\omega_{0}) + \frac{1}{3!}K'''(\omega_{0})\nu + \dots \right). \end{align*} From the complex-amplitude representations~\eqref{complexamplitude1} and~\eqref{complexamplitude2}, it follows that $\kappa$ (respectively $\nu$) is associated with the differential operator $i\partial_{\xi}$ (respectively $-i \partial_{\tau}$) and thus, $\kappa^2 = - \partial_{\xi}^2$ (respectively $\nu^{2} = - \partial_{\tau}^{2}$). Thus, the complex-valued amplitude $A$ satisfies \begin{align} \partial_{\tau}A + i \Omega_{\textmd{res}}( i\partial_{\xi}) A &= 0 \label{LSKres1} \\ \partial_{\xi}A + i K_{\textmd{res}} (-i\partial_{\tau}) A &= 0.
\label{LSKres2} \end{align} For a narrow-banded spectrum, the equations~\eqref{LSKres1} and~\eqref{LSKres2} reduce to approximate equations called the temporal and the spatial ``linear Schr\"{o}dinger'' equations by approximating $\Omega_{\textmd{res}}$ and $K_{\textmd{res}}$ by their lowest-order terms, respectively \begin{align} i\partial_{\tau}A + \beta_1 \partial_{\xi}^{2} A &= 0, \label{linearSchrodinger1} \\ i\partial_{\xi} A + \beta_2 \partial_{\tau}^{2} A &= 0. \label{linearSchrodinger2} \end{align} Here, the dispersion coefficients are $\beta_1 = \frac{1}{2}\Omega''(k_0)$ and $\beta_2 = -\frac{1}{2} K''(\omega_{0}) = \frac{1}{2}\frac{\Omega''(k_{0})} {[\Omega'(k_{0})]^{3}}$. Before proceeding to derive the NLSE heuristically, we make a quick note on the local existence and uniqueness of solutions for an IVP of the temporal linear Schr\"{o}dinger equation (LSE)~\eqref{linearSchrodinger1}. Physically, this evolution equation exhibits a dispersion phenomenon. It means that the group velocity $\Omega'(k)$ depends on the wavenumber $k$ and waves with different frequencies travel at different speeds. Consequently, a traveling, localized wave packet will spread out. \begin{definition}[Fundamental solution] The function \begin{equation} \Psi(\xi,\tau) := \frac{1}{\sqrt{4 \pi i \beta_1 \tau}} e^{\frac{i|\xi|^2}{4 \beta_1 \tau}}, \qquad \xi \in \mathbb{R}, \quad \tau \neq 0, \label{funso} \end{equation} is called the {\upshape fundamental solution} of the LSE~\eqref{linearSchrodinger1}. \end{definition} \begin{theorem} The initial value problem for the LSE~\eqref{linearSchrodinger1} with initial condition $A(\xi,0)$ admits an exact solution in terms of the fundamental solution~\eqref{funso}, given explicitly as follows: \begin{equation} A(\xi,\tau) = \frac{1}{\sqrt{4\pi i \beta_1 \tau}} \int_{-\infty}^{\infty} e^{-\frac{(\xi - \zeta)^2}{4i \beta_1 \tau}} A(\zeta,0) \, \mathrm{d} \zeta, \qquad \xi \in \mathbb{R}, \quad \tau > 0.
\end{equation} \end{theorem} The proof of this theorem can be found in~\cite{Evans10,Fibich15} using the Fourier transform method; we adopt the following convention for the Fourier transform in this chapter. \begin{definition}[Fourier transform] Let $f$ be a function defined for $\xi \in \mathbb{R}$, then the {\upshape Fourier transform} of $f$ and its {\upshape inverse Fourier transform} are given as follows: \begin{align} \hat{f}(\kappa) = {\cal F}\left\{ f(\xi) \right\} &= \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(\xi) e^{-i \kappa \xi} \, \mathrm{d}\xi \\ f(\xi) = {\cal F}^{-1} \left\{ \hat{f}(\kappa) \right\} &= \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} \hat{f}(\kappa) e^{i \kappa \xi} \, \mathrm{d}\kappa. \end{align} \end{definition} \begin{proof} Let $\hat{A}(\kappa,\tau) = {\cal F}\{ A(\xi,\tau) \}$ be the Fourier transform of the complex-valued amplitude $A(\xi,\tau)$. Taking the Fourier transform of the LSE~\eqref{linearSchrodinger1} and of its initial condition yields, respectively, \begin{equation} i \partial_{\tau} \hat{A} - \beta_1 \kappa^2 \hat{A} = 0, \qquad \qquad \text{and} \qquad \qquad \hat{A}(\kappa, 0). \end{equation} The solution to this ODE is \begin{equation} \hat{A}(\kappa, \tau) = \hat{A}(\kappa,0) e^{-i \beta_1 \kappa^2 \tau}. \label{invFTA} \end{equation} Taking the inverse Fourier transform of~\eqref{invFTA}, we obtain the solution in convolution form \begin{equation} A(\xi,\tau) = \frac{1}{\sqrt{2\pi}} A(\xi,0) \ast {\cal F}^{-1} \left\{e^{-i \beta_1 \kappa^2 \tau} \right\}.
\end{equation} Using the fact that the Fourier transform of a Gaussian function is again a Gaussian, \begin{equation} {\cal F} \left\{e^{-\frac{p}{2} \xi^2} \right\} = \frac{1}{\sqrt{p}} e^{-\frac{\kappa^2}{2p}}, \qquad \text{and} \qquad {\cal F}^{-1} \left\{e^{-i \beta_1 \kappa^2 \tau} \right\} = \frac{1}{\sqrt{2 i \beta_1 \tau}} e^{- \frac{\xi^2}{4 i \beta_1 \tau}}, \end{equation} with $p = 1/(2i\beta_1 \tau)$, we then obtain the desired expression \begin{equation} A(\xi,\tau) = \frac{1}{\sqrt{4\pi i \beta_1 \tau}} \int_{-\infty}^{\infty} e^{-\frac{(\xi - \zeta)^2}{4i \beta_1 \tau}} A(\zeta,0) \, \mathrm{d} \zeta, \qquad \xi \in \mathbb{R}, \quad \tau > 0. \end{equation} \end{proof} This solution makes sense for all $\tau > 0$ provided that the initial condition $A(\xi,0) \in L^{1}(\mathbb{R})$. From this expression, we obtain the following basic $L^{\infty}$-estimate~\cite{Strauss89,Cazenave96a,Teschl14} \begin{align} \left\|A(\xi,\tau) \right\|_{L^\infty(\mathbb{R})} = \sup_{\xi \in \mathbb{R}} \left| A(\xi,\tau) \right| &\leq \frac{1}{\sqrt{4 \pi \beta_1 \tau}} \int_{-\infty}^{\infty} \left|A(\xi,0) \right| \, \mathrm{d}\xi \nonumber \\ &= \frac{1}{\sqrt{4 \pi \beta_1 \tau}} \left\| A(\xi,0) \right\|_{L^1(\mathbb{R})}. \end{align} This indicates that any solution with spatially localized initial conditions will decay uniformly toward zero at a rate proportional to $1/\sqrt{\tau}$~\cite{Schneider11}. Furthermore, using Plancherel's theorem, if the initial condition $A(\xi,0) \in L^1(\mathbb{R}) \cap L^2(\mathbb{R})$, then the $L^2$-norm is preserved \begin{equation} \left \|A(\xi,\tau) \right\|_{L^2(\mathbb{R})} = \left\| A(\xi,0) \right\|_{L^2(\mathbb{R})}, \qquad \qquad \forall \quad \tau > 0.
\end{equation} In the following, we will derive the NLSE from the nonlinear dispersion relationship that includes a slowly-varying real-valued amplitude $a(x,t) \sim \mathcal{O}(\epsilon)$: \begin{equation} \omega = \Omega(k, a^2) \qquad \qquad \text{or} \qquad \qquad k = K(\omega, a^2). \end{equation} We perform a Taylor series expansion about the basic state wavenumber $k = k_0$ (or the basic state frequency $\omega = \omega_0$) and the zero amplitude $|a|^2 = 0$ \begin{align} \omega &= \Omega(k_0,0) + \frac{\partial \Omega}{\partial k}(k_0,0) (k - k_0) + \frac{\partial \Omega}{\partial (|a|^2)}(k_0,0) |a|^2 + \frac{1}{2} \frac{\partial^2 \Omega}{\partial k^2}(k_0,0) (k - k_0)^2 \nonumber \\ & \quad + \frac{\partial^2 \Omega}{\partial k \, \partial (|a|^2)}(k_0,0)(k - k_0)|a|^2 + \frac{1}{2} \frac{\partial^2 \Omega}{\partial (|a|^2)^2}(k_0,0)|a|^4 + \dots \end{align} Rewriting with $a = \epsilon A$ and considering only the essential terms, we obtain \begin{equation} \nu - \Omega'(k_0) \kappa - \frac{1}{2} \Omega''(k_0) \kappa^2 - \epsilon^2 \frac{\partial \Omega}{\partial (|a|^2)}(k_0,0) |A|^2 - \dots = 0. \end{equation} Associating $\nu$ and $\kappa$ with differential operators $i\partial_{t}$ and $-i\partial_{x}$, and acting on the complex-valued amplitude $A$, it yields \begin{equation} i(\partial_{t} + \Omega'(k_0) \partial_{x}) A + \frac{1}{2} \Omega''(k_0) \partial_{x}^2 A - \epsilon^2 \frac{\partial \Omega}{\partial (|a|^2)}(k_0,0) |A|^2 A = 0. \end{equation} Introducing the slowly-moving coordinate $\xi = \epsilon(x - \Omega'(k_0)t)$ and slower time variable $\tau = \epsilon^2 t$, we obtain the temporal NLSE \begin{equation} i \partial_{\tau} A + \beta_1 \partial_{\xi}^2 A + \gamma_1 |A|^2 A = 0 \label{temporalNLS1} \end{equation} where the dispersion and the nonlinear coefficients are respectively given by \begin{equation} \beta_1 = \frac{1}{2} \Omega''(k_0) \qquad \qquad \text{and} \qquad \qquad \gamma_1 = -\frac{\partial \Omega}{\partial (|a|^2)}(k_0,0).
\end{equation} For the temporal NLSE~\eqref{temporalNLS1}, the following result on the local existence and uniqueness of the solution for an initial value problem in Sobolev spaces is known in the literature. See, for instance,~\cite{Schneider11,Kramer13} for a detailed proof of the following theorem. \begin{theorem} Let $m \geq 1$ and let $H^{m}(\mathbb{R})$ be the Sobolev space, the space of $m$ times weakly differentiable functions $u: \mathbb{R} \rightarrow \mathbb{C}$ with weak derivatives $\partial_\xi^j u \in L^2(\mathbb{R})$ for $j \in \{0,1,\dots, m \}$. Let the space $H^m$ be equipped with the norm \begin{equation} \| u \|_{H^m} = \max_{j \in \{0,1,\dots, m\}} \| \partial_\xi^j u \|_{L^2}. \end{equation} Let $A_0 \in H^m(\mathbb{R})$ be a complex-valued function. Then there exists a time $\tau_0 = \tau_0\left(\| A_0 \|_{H^m} \right) > 0$ and a unique solution $A \in C\left( \left[0,\tau_0 \right], H^m \right)$ of the temporal NLSE~\eqref{temporalNLS1} with the initial condition $A_0$. \end{theorem} The readers who are interested in the well-posedness of the Cauchy problem and the long-time behavior of the corresponding global solutions to the NLSE may consult~\cite{Strauss89,Cazenave96a,Cazenave96b,Ginibre97,Bourgain99}. Similarly, writing out the Taylor series expansion in two variables for $k = K(\omega,a^2)$ and retaining only the essential terms, we have \begin{align*} k &= K(\omega_0,0) + \frac{\partial K}{\partial \omega}(\omega_0,0) (\omega - \omega_0) + \frac{1}{2} \frac{\partial^2 K}{\partial \omega^2}(\omega_0,0) (\omega - \omega_0)^2 + \frac{\partial K}{\partial (|a|^2)}(\omega_0,0) |a|^2 + \dots \end{align*} Rearranging the terms to the left-hand side yields \begin{equation} \kappa - K'(\omega_0) \nu - \frac{1}{2} K''(\omega_0) \nu^2 - \frac{\partial K}{\partial (|a|^2)}(\omega_0,0) |a|^2 - \dots = 0.
\end{equation} Associating the parameters $\kappa$ and $\nu$ with the differential operators $-i\partial_{x}$ and $i\partial_{t}$, respectively, and acting on the complex-valued amplitude $A$, we obtain \begin{equation} i \left(\partial_{x} + K'(\omega_0) \partial_{t}\right) A - \frac{1}{2} K''(\omega_0) \partial_{t}^2 A + \epsilon^2 \frac{\partial K}{\partial (|a|^2)}(\omega_0,0) |A|^2 A = 0. \end{equation} Introducing the slowly-moving variables $\xi = \epsilon^2 x$ and $\tau = \epsilon(t - K'(\omega_0) x)$, we obtain the spatial NLSE \begin{equation} i \partial_{\xi} A + \beta_2 \partial_{\tau}^2 A + \gamma_2 |A|^2 A = 0 \end{equation} where the dispersion and the nonlinear coefficients are respectively given by \begin{equation} \beta_2 = - \frac{1}{2} K''(\omega_0) \qquad \qquad \text{and} \qquad \qquad \gamma_2 = \frac{\partial K}{\partial (|a|^2)}(\omega_0,0). \end{equation} \subsection{Derivation of the temporal NLSE} \label{SubsectemporalNLSE} The following derivation, as well as the derivation of the spatial NLSE in the next subsection, follows the argument presented in~\cite{vanGroesen98,Cahyono02,Karjanto06}. Consider a KdV-type equation with an exact dispersion relationship, given as follows \begin{equation} \partial_{t}\eta + i \Omega(-i\partial_{x})\eta + c \,\eta \partial_{x}\eta = 0. \label{KdVeqn} \end{equation} Here, $c \in \mathbb{R} $ is a nonlinear coefficient for the KdV equation~\eqref{KdVeqn}. We will observe that it also contributes to the nonlinear coefficient of the NLSE. The function $\Omega$ acts as both a differential operator and a dispersion relationship. We seek a solution for the surface wave elevation $\eta(x,t)$ in the form of a wave packet, or a wave group.
This wave packet consists of a superposition of the first-order harmonic wave, the second-order double harmonic wave, and the second-order non-harmonic long wave, explicitly given as follows: \begin{equation} \eta(x,t) = \epsilon A(\xi,\tau)e^{i\theta} + \epsilon^{2}[B(\xi,\tau)e^{2i\theta} + C(\xi,\tau)] + \textmd{c.c.}, \label{etaABC} \end{equation} where $0 < \epsilon \ll 1$ is a small positive parameter as used commonly in perturbation theory. The term in the exponent is $\theta(x,t) = k_{0}x - \omega_{0}t$, where $k_0$ and $\omega_0$ are related by the linear dispersion relation. The functions $A(\xi,\tau)$, $B(\xi,\tau)$, and $C(\xi,\tau)$ are complex-valued wave packet envelopes and they are allowed to vary slowly in the slower-moving frame of reference, where the spatial and temporal variables are given by $\xi = \epsilon(x - \Omega'(k_{0})t)$ and $\tau = \epsilon^{2}t$, respectively. As usual, c.c. denotes the complex conjugation of the preceding terms. Substituting the Ansatz~\eqref{etaABC} into the KdV equation~\eqref{KdVeqn} yields the residue $R(x,t)$ that can be expressed in the following form: \begin{equation} R(x,t) = \sum_{n,m} \epsilon^{n} R_{nm} e^{i \, m \theta} + \textmd{c.c.}, \label{Rnm} \end{equation} where $n \geq 1$, $m \geq 0$, and where the coefficients $R_{nm}$ contain expressions in $A(\xi,\tau)$, $B(\xi,\tau)$, and $C(\xi,\tau)$ and their partial derivatives. All coefficients $R_{nm}$ must vanish in order to satisfy the KdV equation~\eqref{KdVeqn}. The first-order residue coefficients vanish, $R_{1m} = 0$ for $m \geq 0$, when $k_{0}$ and $\omega_{0}$ are related by the linear dispersion relationship. The second-order residue coefficients read \begin{align*} R_{20} &= 0; \qquad R_{21} = 0; \qquad R_{2m} = 0, \quad m \geq 3; \\ R_{22} &= i([\Omega(2k_{0}) - 2\omega_{0}]B + c\,k_{0}A^{2}).
\end{align*} Vanishing of $R_{22}$ leads to an expression of $B(\xi,\tau)$ as a function of $A(\xi,\tau)$: \begin{equation} B(\xi,\tau) = \frac{c\,k_{0}A^{2}(\xi,\tau)}{2\omega_{0} - \Omega(2k_{0})}. \label{BR22} \end{equation} The third-order residue coefficients read \begin{align*} R_{30} &= [\Omega'(0) - \Omega'(k_{0})]\partial_{\xi}C + \frac{1}{2}c \, \partial_{\xi}|A|^{2};\\ R_{31} &= \partial_{\tau}A - \frac{1}{2}i\Omega''(k_{0}) \partial_{\xi}^{2}A + i\, c k_{0} (A^{\ast} B + AC + A C^{\ast}). \end{align*} Requiring $R_{30}$ to vanish yields an expression for $C(\xi,\tau)$ as a function of $A(\xi,\tau)$ and a $\tau$-dependent constant of integration $\alpha_{T}$, given as follows: \begin{equation} C(\xi,\tau) = \frac{1}{2} \frac{c\, |A(\xi,\tau)|^{2}}{\Omega'(k_{0}) - \Omega'(0)} + \alpha_{T}(\tau). \label{CR30} \end{equation} To prevent resonance, $R_{31}$ has to vanish as well, and this leads to an evolution equation for $A(\xi,\tau)$: \begin{equation} \partial_{\tau}A + i \beta_1 \partial_{\xi}^{2} A + i \gamma_1 |A|^{2}A + 2 i k_{0} c\, \textmd{Re}[\alpha_{T}(\tau)] A = 0. \end{equation} Under a similar assumption of unidirectional wave propagation, and applying the ``gauge transformation''~\cite{Mei83} by multiplying the evolution equation by $e^{2ik_{0}c \int \textmd{Re}[\alpha_{T}(\tau)]\, d\tau}$, we obtain the temporal NLSE for $A(\xi,\tau)$: \begin{equation} \partial_{\tau}A + i \beta_1 \partial_{\xi}^{2} A + i \gamma_1 |A|^{2}A = 0, \label{temporalNLS} \end{equation} where the dispersion and the nonlinear coefficients are respectively given by \begin{align} \beta_1 &= -\frac{1}{2}\Omega''(k_{0}) \\ \gamma_1 &= k_{0}c^{2} \left(\frac{1}{\Omega'(k_{0}) - \Omega'(0)} + \frac{k_{0}}{2\omega_{0} - \Omega(2k_{0})} \right). \end{align} This temporal NLSE, as well as the spatial NLSE derived in the following subsection, is valid for intermediate-water wave models.
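As a sanity check on equations of this type, the following minimal split-step Fourier sketch integrates $\partial_{\tau}A + i\beta_1\partial_{\xi}^{2}A + i\gamma_1|A|^{2}A = 0$ numerically; the values $\beta_1 = -1$ and $\gamma_1 = -2$ are purely illustrative (a focusing case, not the water-wave coefficients above), for which $A(\xi,\tau) = \operatorname{sech}(\xi)\,e^{i\tau}$ is an exact envelope soliton.

```python
import numpy as np

# Strang split-step Fourier integrator for the temporal NLSE
#   dA/dtau + i*beta1*d^2A/dxi^2 + i*gamma1*|A|^2 A = 0.
# beta1 = -1, gamma1 = -2 are illustrative (focusing) values for which
# A(xi, tau) = sech(xi)*exp(i*tau) is an exact soliton solution.
beta1, gamma1 = -1.0, -2.0
N, L = 512, 60.0                          # grid points, domain [-L/2, L/2)
xi = np.linspace(-L/2, L/2, N, endpoint=False)
kap = 2*np.pi*np.fft.fftfreq(N, d=L/N)    # spectral wavenumbers
dtau, nsteps = 1.0e-3, 1000               # integrate up to tau = 1

A = 1.0/np.cosh(xi)                       # initial envelope at tau = 0
half_nl = lambda B, h: B*np.exp(-1j*gamma1*np.abs(B)**2*h)
for _ in range(nsteps):
    A = half_nl(A, dtau/2)                                   # nonlinear half-step
    A = np.fft.ifft(np.exp(1j*beta1*kap**2*dtau)*np.fft.fft(A))  # linear step
    A = half_nl(A, dtau/2)                                   # nonlinear half-step

exact = np.exp(1j*dtau*nsteps)/np.cosh(xi)
print(np.max(np.abs(A - exact)))          # of the order of the splitting error
```

Both substeps are unitary, so the discrete $L^2$-norm is conserved exactly up to roundoff, mirroring the Plancherel identity of the linear theory.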
In the following, we will derive the corresponding ``energy equation'' and ``nonlinear dispersion relationship'' for the temporal NLSE. Write the complex-valued amplitude $A(\xi,\tau)$ in the physical, faster-moving variables as $A_1(x,t) = \epsilon A(\xi,\tau)$ using the relationships above, $\xi = \epsilon(x - \Omega'(k_{0})t)$ and $\tau = \epsilon^{2}t$. The temporal NLSE in the physical variables is given by \begin{equation} \partial_{t} A_1 + \Omega'(k_{0}) \partial_{x} A_1 + i \beta_1 \partial_{x}^{2} A_1 + i \gamma_1 |A_1|^{2} A_1 = 0. \label{temporalNLSphysical} \end{equation} Apply the Madelung transformation by writing the complex-valued amplitude $A_1(x,t)$ in its polar form $A_1(x,t) = a(x,t) e^{i \phi(x,t)}$, where the amplitude $a(x,t)$ and the phase $\phi(x,t)$ are both real-valued functions~\cite{Madelung27}. Upon substitution into~\eqref{temporalNLSphysical}, removing the factor $e^{i\phi(x,t)}$, and separating the real and the imaginary parts, we obtain the following coupled phase-amplitude equations \begin{align} \left\{ \begin{array}{ll} \displaystyle{\partial_{t}a + \Omega'(k_{0}) \partial_{x} a - \beta_1 \left(a \partial_{x}^{2} \phi + 2 \partial_{x} a \partial_{x} \phi \right)} &= 0 \\ \displaystyle{\partial_{t}\phi + \Omega'(k_{0}) \partial_{x}\phi + \beta_1 \left(\frac{\partial_{x}^{2}a}{a} - (\partial_{x}\phi)^{2} \right) + \gamma_1 a^{2}} &= 0. \end{array} \right. \label{phaseamplitudeeqns1} \end{align} Expressing the wavenumber $k(x,t)$ and the frequency $\omega(x,t)$ in the following form \begin{equation} k(x,t) = k_0 + \kappa = k_{0} + \partial_{x} \phi \qquad \qquad \omega(x,t) = \omega_0 + \nu = \omega_{0} - \partial_{t} \phi, \end{equation} the local wavenumber $\kappa = \partial_{x} \phi$ and the local frequency $\nu = -\partial_{t} \phi$ act as modulational quantities. Using these quantities, the phase-amplitude equations can be written in a more compact form.
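The algebra behind the coupled phase-amplitude equations~\eqref{phaseamplitudeeqns1} can be verified symbolically. The sketch below substitutes $A_1 = a\,e^{i\phi}$ into~\eqref{temporalNLSphysical}, writing $|A_1|^{2}A_1 = a^{2}A_1$ (valid for $a \geq 0$), and checks that the residual splits exactly into the amplitude equation plus $i\,a$ times the phase equation.

```python
import sympy as sp

# Symbolic check of the Madelung splitting: substituting A1 = a*exp(I*phi)
# into the temporal NLSE in physical variables reproduces the coupled
# phase-amplitude equations (|A1|^2 A1 is written as a^2*A1, for a >= 0).
x, t = sp.symbols('x t', real=True)
b1, g1, Op = sp.symbols('beta1 gamma1 Omega_prime', real=True)
a = sp.Function('a', real=True)(x, t)
phi = sp.Function('phi', real=True)(x, t)

A1 = a*sp.exp(sp.I*phi)
nlse = (sp.diff(A1, t) + Op*sp.diff(A1, x)
        + sp.I*b1*sp.diff(A1, x, 2) + sp.I*g1*a**2*A1)

# amplitude equation (real part) and phase equation (imaginary part over a)
amp = (sp.diff(a, t) + Op*sp.diff(a, x)
       - b1*(a*sp.diff(phi, x, 2) + 2*sp.diff(a, x)*sp.diff(phi, x)))
phase = (sp.diff(phi, t) + Op*sp.diff(phi, x)
         + b1*(sp.diff(a, x, 2)/a - sp.diff(phi, x)**2) + g1*a**2)

residual = sp.simplify(sp.expand(nlse*sp.exp(-sp.I*phi)) - (amp + sp.I*a*phase))
print(residual)   # 0
```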
The amplitude and the phase equations are the first and the second expressions in~\eqref{phaseamplitudeeqns1}, respectively. Expressing $\partial_{x} \phi$ in terms of the local wavenumber $k(x,t)$ and multiplying the amplitude equation by $a(x,t)$, we obtain \begin{equation} \frac{1}{2} \partial_{t}(a^{2}) + \frac{1}{2} \partial_{x} \left\{ \left[\Omega'(k_{0}) + \Omega''(k_{0})(k - k_{0}) \right] a^{2} \right\} = 0. \end{equation} Recognizing the terms inside the square brackets as a linear approximation of $\Omega'(k)$, we arrive at the conservation of energy equation for the temporal NLSE. It reads \begin{equation} \partial_{t}(a^{2}) + \partial_{x}[\Omega'(k) a^{2}] = 0. \end{equation} By expressing the local wavenumber $\partial_{x} \phi = k - k_0$ and the local frequency $-\partial_{t} \phi = \omega - \omega_0$, we obtain the following \begin{equation} \omega - \left[\omega_{0} + \Omega'(k_{0})(k - k_{0}) + \frac{1}{2}\Omega''(k_{0})(k - k_{0})^{2} \right] = \beta_1 \frac{\partial_{x}^{2}a}{a} + \gamma_1 \, a^{2}. \end{equation} Taking the terms inside the square brackets as a quadratic approximation of $\Omega(k)$, the phase equation leads to the nonlinear dispersion relationship \begin{equation} \omega - \Omega(k) = \beta_1 \frac{\partial_{x}^{2}a}{a} + \gamma_1 a^{2}. \label{nondisreltem} \end{equation} \subsection{Derivation of the spatial NLSE} \label{spatialNLSE} A similar technique using the method of multiple scales can be applied to the KdV equation with an exact dispersion relationship~\eqref{KdVeqn} to obtain the spatial NLSE. The difference is in the choice of the slow-moving spatial and temporal variables, respectively chosen as $\xi = \epsilon^{2}x$ and $\tau = \epsilon(t - x/\Omega'(k_{0}))$.
Following an equivalent procedure as in the previous subsection, we obtain identical first-order and second-order residue coefficients \begin{align*} R_{1m} &= 0, \quad m \geq 0 \\ R_{20} &= 0; \qquad R_{21} = 0; \qquad R_{2m} = 0, \quad m \geq 3; \\ R_{22} &= i([\Omega(2k_{0}) - 2\omega_{0}]B + c\,k_{0}A^{2}). \end{align*} The third-order residue coefficients are given as follows: \begin{align} R_{30} &= \left(1 - \frac{\Omega'(0)}{\Omega'(k_{0})} \right) \partial_{\tau}C - \frac{1}{2} \frac{c}{\Omega'(k_{0})} \partial_{\tau} |A|^{2} \\ R_{31} &= \Omega'(k_{0})\partial_{\xi}A - \frac{1}{2}i \frac{\Omega''(k_{0})}{[\Omega'(k_{0})]^{2}} \partial_{\tau}^{2}A + i\, c k_{0} (A^{\ast}B + AC + A C^{\ast}) \\ R_{32} &= -\frac{1}{\Omega'(k_{0})}\left(\Omega'(2k_{0})\partial_{\tau}B + \frac{1}{2} c \partial_{\tau} A^{2} \right) \\ R_{33} &= 3ik_{0}c\,A B. \end{align} Requiring $R_{30}$ to vanish leads to an expression for $C(\xi,\tau)$ as a function of $A(\xi,\tau)$ and a $\xi$-dependent constant of integration $\alpha_{S}(\xi)$: \begin{equation} C(\xi,\tau) = \frac{1}{2} \frac{c\, |A(\xi,\tau)|^{2}}{\Omega'(k_{0}) - \Omega'(0)} + \alpha_{S}(\xi). \end{equation} In order to prevent resonance, $R_{31}$ has to vanish as well, which leads to an evolution equation for $A(\xi,\tau)$: \begin{equation} \partial_{\xi}A + i \beta_2 \partial_{\tau}^{2} A + i \gamma_2 |A|^{2}A + \frac{2ik_{0} c}{\Omega'(k_{0})} \textmd{Re}\left[\alpha_{S}(\xi) \right] A = 0. \label{spatialNLSgauge} \end{equation} We can remove the term containing Re$[\alpha_{S}(\xi)]$ by applying the gauge transformation~\cite{Mei83}.
Multiply the evolution equation by $e^{\frac{2ik_{0}c}{\Omega'(k_{0})} \int \textmd{Re}[\alpha_{S}(\xi)]\, d\xi}$; then the new complex amplitude $\tilde{A}(\xi,\tau) = e^{\frac{2ik_{0}c}{\Omega'(k_{0})} \int \textmd{Re}[\alpha_{S}(\xi)]\, d\xi} A(\xi,\tau)$ satisfies the spatial NLSE which, after dropping the tilde, can be written in the following form: \begin{equation} \partial_{\xi}A + i \beta_2 \partial_{\tau}^{2} A + i \gamma_2 |A|^{2}A = 0. \label{spatialNLS} \end{equation} The dispersion coefficient $\beta_2$ and the nonlinear coefficient $\gamma_2$ are given as follows \begin{align} \beta_2 &= \beta(k_{0}) = - \frac{1}{2} \frac{\Omega''(k_{0})}{[\Omega'(k_{0})]^{3}} \label{beta} \\ \gamma_2 &= \gamma(k_{0},c^2) = \frac{k_{0}c^{2}}{\Omega'(k_{0})} \left(\frac{1}{\Omega'(k_{0}) - \Omega'(0)} + \frac{k_{0}}{2\omega_{0} - \Omega(2k_{0})} \right). \label{gamma} \end{align} We apply a similar approach as in the previous section to derive the corresponding energy equation and nonlinear dispersion relationship for the spatial NLSE. Write $A_2(x,t) = \epsilon A(\xi,\tau)$, where $\xi = \epsilon^{2}x$ and $\tau = \epsilon(t - x/\Omega'(k_{0}))$. The spatial NLSE in the physical variables is expressed as follows: \begin{equation} \partial_{x} A_2 + \frac{1}{\Omega'(k_{0})} \partial_{t} A_2 + i \beta_2 \partial_{t}^{2} A_2 + i \gamma_2 |A_2|^{2} A_2 = 0. \label{spatialNLSphysical} \end{equation} Apply the Madelung transformation by writing $A_2(x,t)$ in its polar form $A_2(x,t) = a(x,t)e^{i\phi(x,t)}$, where $a(x,t)$ and $\phi(x,t)$ are real-valued quantities~\cite{Madelung27}.
After substituting into the spatial NLSE~\eqref{spatialNLSphysical}, removing the factor $e^{i\phi(x,t)}$, and collecting the real and the imaginary parts, we obtain the following coupled phase-amplitude equations in the original physical variables: \begin{align} \left\{ \begin{array}{ll} \displaystyle{\partial_{x}a + \frac{\partial_{t}a }{\Omega'(k_{0})} - \beta_2 \left(a \partial_{t}^{2} \phi + 2 \partial_{t}a \partial_{t}\phi \right)} &= 0 \\ \displaystyle{\partial_{x}\phi + \frac{\partial_{t}\phi}{\Omega'(k_{0})} + \beta_2 \left(\frac{\partial_{t}^{2}a}{a} - (\partial_{t}\phi)^{2} \right) + \gamma_2 a^{2}} &= 0. \end{array} \right. \label{phaseamplitudeeqns2} \end{align} Using the previous definitions of the local wavenumber and the local frequency, we can write these phase-amplitude equations in a more compact form. We also adopt the linear dispersion relationship $\omega = \Omega(k)$ or $k = K(\omega)$, where $K = \Omega^{-1}$. From $k = K[\Omega(k)]$, we can derive the relationship between the derivatives, up to the second order, given explicitly as follows: \begin{equation} K'(\omega_{0}) = \frac{1}{\Omega'(k_{0})} \qquad \textmd{and} \qquad K''(\omega_{0})= -\frac{\Omega''(k_{0})}{[\Omega'(k_{0})]^{3}}. \end{equation} Expressing the local frequency $-\partial_{t}\phi = \omega - \omega_0$ and multiplying the amplitude equation by $a(x,t)$, we obtain \begin{equation} \frac{1}{2} \partial_{x}(a^{2}) + \frac{1}{2} \partial_{t} \left\{ \left[K'(\omega_{0}) + K''(\omega_{0})(\omega - \omega_{0}) \right] a^{2} \right\} = 0. \end{equation} Recognizing the terms in the square brackets as a linear approximation of $K'(\omega)$, we can write the amplitude equation as the energy equation \begin{equation} \partial_{x}(a^{2}) + \partial_{t}[K'(\omega) a^{2}] = 0.
\label{energyequation} \end{equation} By expressing the local wavenumber $\partial_{x} \phi = k - k_0$ and the local frequency $-\partial_{t} \phi = \omega - \omega_0$, we can write the phase equation as follows: \begin{equation} \left[k_{0} + K'(\omega_{0})(\omega - \omega_{0}) + \frac{1}{2}K''(\omega_{0})(\omega - \omega_{0})^{2} \right] - k = \beta_2 \frac{\partial_{t}^{2}a}{a} + \gamma_2 \, a^{2}. \end{equation} By considering the terms inside the square brackets as a quadratic approximation of $K(\omega)$, the phase equation leads to the nonlinear dispersion relationship \begin{equation} K(\omega) - k = \beta_2 \frac{\partial_{t}^{2}a}{a} + \gamma_2 \, a^{2}. \label{nondisrelspa} \end{equation} The nonlinear dispersion relationship~\eqref{nondisreltem} or~\eqref{nondisrelspa} describes the relationship between the wavenumber $k$ and the frequency $\omega$ in the dispersion plane $(k,\omega)$. Since the right-hand side of~\eqref{nondisreltem} or~\eqref{nondisrelspa} generally does not vanish, a combination $(k,\omega)$ need not satisfy the linear dispersion relationship. The ratio $\frac{\partial_{x}^2 a}{a}$ in~\eqref{nondisreltem} or $\frac{\partial_{t}^2 a}{a}$ in~\eqref{nondisrelspa} was coined the ``Chu-Mei quotient'' in~\cite{Karjanto07a} in a discussion of ``phase singularity'' and ``wavefront dislocation'' in surface gravity waves. The unboundedness of this quotient at vanishing real-valued amplitude $a(x,t)$ is responsible for the occurrence of these phenomena. Several authors refer to this quotient as the ``Fornberg-Whitham term''~\cite{Infeld00}, referring to the paper written by~\cite{Fornberg78}. Even though the quotient had already appeared earlier in the literature in the context of modulated waves in nonlinear media~\cite{Karpman67,Karpman69}, it was~\cite{Chu70,Chu71} who introduced it for the first time when deriving the modulation equations of Whitham's theory for slowly varying Stokes' waves.
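The derivative relations $K'(\omega_{0}) = 1/\Omega'(k_{0})$ and $K''(\omega_{0}) = -\Omega''(k_{0})/[\Omega'(k_{0})]^{3}$ used in this derivation can be checked quickly on any concrete invertible dispersion relationship; the deep-water choice $\Omega(k) = \sqrt{gk}$ in the sketch below is only an illustrative example.

```python
import sympy as sp

# Check K'(omega0) = 1/Omega'(k0) and K''(omega0) = -Omega''(k0)/Omega'(k0)^3
# on a concrete example: the deep-water dispersion Omega(k) = sqrt(g*k),
# whose inverse is K(omega) = omega**2/g.
k, g, w = sp.symbols('k g omega', positive=True)
Omega = sp.sqrt(g*k)
K = w**2/g

Omega_p = sp.diff(Omega, k)
Omega_pp = sp.diff(Omega, k, 2)
K_p = sp.diff(K, w).subs(w, Omega)      # evaluated at omega = Omega(k)
K_pp = sp.diff(K, w, 2).subs(w, Omega)

print(sp.simplify(K_p - 1/Omega_p))             # 0
print(sp.simplify(K_pp + Omega_pp/Omega_p**3))  # 0
```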
\subsection{Applications in surface gravity waves} The temporal NLSE derived in Subsection~\ref{SubsectemporalNLSE} models absolute dynamics while the spatial NLSE derived in Subsection~\ref{spatialNLSE} models convective dynamics~\cite{Sulem99}. Together with an initial condition, the temporal NLSE constitutes an initial value problem. Given a wave packet profile at a particular time, the NLSE governs the evolution in time of the wave packet so that we can find out the shape of the wave packet at some time in the future. Together with a boundary condition, the spatial NLSE constitutes a boundary value problem, or signaling problem. This model is suitable for wave signal generation in a wave tank of a hydrodynamic laboratory. By feeding an initial wave signal to a wavemaker and letting it propagate along the wave tank, we can measure the ``experimental'' wave signal at several frontal positions and compare it with the ``theoretical'' wave signal predicted by the spatial NLSE. In particular, the spatial NLSE and its family of solitons on a non-vanishing background describing a nonlinear extension of modulational instability have been utilized as a model for deterministic freak wave generation in intermediate water depth at the high-speed wave basin of the Maritime Research Institute Netherlands (MARIN)~\cite{vanGroesen06,Karjanto06,Andonowati07}. The experimental results confirm the occurrence of a phase singularity of the wave packet envelope at the location where the wave signals reach maximum amplitude, where the phenomenon and its related counterpart of wavefront dislocation have been predicted theoretically~\cite{Karjanto07a}. Even though the model does not quantitatively predict the signal and the spectrum evolution in accurate detail, it exhibits an extraordinary qualitative agreement.
Our experimental results confirm similar behavior to other experiments, where the corresponding wave spectra demonstrate frequency downshift as the wave signals propagate along the wave tank~\cite{Karjanto10,Lake77}. The authors also coined the term ``Wessel curves'', indicating the evolution of the real part and the imaginary part of the complex-valued amplitude $A(\xi,\tau)$ exclusive of the oscillatory part contributed from the continuous-wave or the plane-wave solution. Modulational instability, also known as sideband instability, is a well-known phenomenon in both fluid dynamics and nonlinear optics. In the context of hydrodynamics, it is known as the Benjamin-Feir instability, after~\cite{Benjamin67,Benjamin67a} predicted the onset of the instability of Stokes wave trains in deep water. According to linear perturbation analysis, the plane-wave solution of the NLSE is modulationally unstable, and its nonlinear extension is given by the ``Akhmediev-Eleonskii-Kulagin breather'', also known as the ``solitons on a non-vanishing background''~\cite{Akhmediev87,Akhmediev97,Ablowitz90,Karjanto09}. In the spectral domain, the effect of nonlinearity, which reinforces periodic wave trains, leads to the generation of spectral sidebands and an eventual breakup of the waveform into a train of pulses. The breather's analytical expression for $\beta_1 = 1$ and $\gamma_1 = 2$ in the temporal NLSE~\eqref{temporalNLS1} can be written as follows: \begin{equation} A_\text{AEK}(\xi,\tau) = e^{2i\tau} \left(\frac{\nu^3 \cosh\left[\sigma (\tau - \tau_0) \right] + i \nu \sigma \sinh\left[\sigma (\tau - \tau_0)\right]}{2 \nu \cosh \left[\sigma (\tau - \tau_0)\right] - \sigma \cos \left[\nu (\xi - \xi_0) \right]} - 1 \right). \label{AEKbreather} \end{equation} The family of solitons on a non-vanishing background is defined for the parameter values $0 < \nu < 2$ and $\sigma = \nu \sqrt{4 - \nu^2}$.
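One can verify directly that~\eqref{AEKbreather} satisfies $i\partial_{\tau}A + \partial_{\xi}^{2}A + 2|A|^{2}A = 0$, i.e.\ the focusing NLSE with $\beta_1 = 1$ and $\gamma_1 = 2$. The sketch below evaluates the residual symbolically (taking $\xi_0 = \tau_0 = 0$) at a few sample points; the value $\nu = 3/2$ is an arbitrary choice within $(0,2)$.

```python
import sympy as sp

# Residual check: the soliton on a non-vanishing background should satisfy
# i*A_tau + A_xixi + 2*|A|^2*A = 0 (focusing NLSE with beta1 = 1, gamma1 = 2).
# We take xi0 = tau0 = 0 and the sample parameter value nu = 3/2.
xi, tau = sp.symbols('xi tau', real=True)
nu = sp.Rational(3, 2)
sigma = nu*sp.sqrt(4 - nu**2)

A = sp.exp(2*sp.I*tau)*((nu**3*sp.cosh(sigma*tau)
                         + sp.I*nu*sigma*sp.sinh(sigma*tau))
                        /(2*nu*sp.cosh(sigma*tau) - sigma*sp.cos(nu*xi)) - 1)
residual = sp.I*sp.diff(A, tau) + sp.diff(A, xi, 2) + 2*A*sp.conjugate(A)*A

for point in [(0.3, -0.7), (1.1, 0.4), (-2.0, 1.3)]:
    value = complex(residual.subs({xi: point[0], tau: point[1]}).evalf())
    print(abs(value))   # vanishes up to numerical roundoff
```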
The amplitude of the soliton never vanishes for $\sqrt{3} < \nu < 2$, and it reaches local maxima and local minima at $(\xi,\tau) = (\xi_0 + 2n\pi/\nu,\tau_0)$ and $(\xi,\tau) = (\xi_0 + [2n + 1]\pi/\nu, \tau_0)$, respectively, for $n \in \mathbb{Z}$. Other NLSE solutions with a non-vanishing background type of soliton have been proposed for hydrodynamic freak wave formation, called ``breather solutions''~\cite{Dysthe99}. One family of solitons on a non-vanishing background is now known under a new name, the ``Kuznetsov-Ma breather''~\cite{Kibler12}, derived independently by~\cite{Kuznetsov77} and~\cite{Ma79}. For the corresponding dispersive and nonlinear coefficients in the temporal NLSE~\eqref{temporalNLS1}, $\beta_1 = 1$ and $\gamma_1 = 2$, it is explicitly given as follows: \begin{equation} A_\text{KM}(\xi,\tau) = e^{2i\tau} \left(\frac{\mu^3 \cos \left[\rho (\tau - \tau_0) \right] + i \mu \rho \sin\left[\rho (\tau - \tau_0) \right]}{2 \mu \cos \left[\rho (\tau - \tau_0) \right] - \rho \cosh \left[\mu (\xi - \xi_0) \right]} + 1 \right) \label{KMbreather} \end{equation} where $\rho = \mu \sqrt{4 + \mu^2}$, $\mu \in \mathbb{R}$. Another exact solution is known as the ``Peregrine breather'' or the ``rational soliton'' solution~\cite{Peregrine83}. It is given as follows: \begin{equation} A_\text{PR}(\xi,\tau) = e^{2i\tau} \left(\frac{4(1 + 4i (\tau - \tau_0))}{1 + 16 (\tau - \tau_0)^2 + 4 (\xi - \xi_0)^2} - 1 \right). \end{equation} This solution can be obtained as a limiting case of both the Akhmediev-Eleonskii-Kulagin breather and the Kuznetsov-Ma breather when the parameters $\nu$ and $\mu$ approach zero~\cite{Karjanto07b}. The Peregrine breather has also been successfully generated experimentally as a rogue wave in a water wave tank~\cite{Chabchoub11}.
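The Peregrine breather can likewise be verified to satisfy $i\partial_{\tau}A + \partial_{\xi}^{2}A + 2|A|^{2}A = 0$ exactly; being a rational solution, its residual simplifies to zero symbolically (taking $\xi_0 = \tau_0 = 0$), and the peak amplitude at the origin is three times the unit background.

```python
import sympy as sp

# Residual check: the Peregrine (rational) soliton should satisfy
# i*A_tau + A_xixi + 2*|A|^2*A = 0 (beta1 = 1, gamma1 = 2; xi0 = tau0 = 0).
xi, tau = sp.symbols('xi tau', real=True)
D = 1 + 16*tau**2 + 4*xi**2
A = sp.exp(2*sp.I*tau)*(4*(1 + 4*sp.I*tau)/D - 1)

residual = sp.simplify(sp.I*sp.diff(A, tau) + sp.diff(A, xi, 2)
                       + 2*A*sp.conjugate(A)*A)
print(residual)                          # 0
print(sp.Abs(A.subs({xi: 0, tau: 0})))   # 3, the threefold amplification
```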
The NLSE has also been considered as a model to investigate oceanic rogue wave formation caused by nonlinear energy transfer in the open ocean, both deterministically and stochastically~\cite{Henderson99,Onorato01,Osborne01}. Extensive reports on progress in the physical mechanisms of the oceanic rogue wave phenomenon are available~\cite{Kharif03,Dysthe08,Kharif09,Pelinovsky16}. A similar phenomenon has also been proposed, predicted, observed, and studied in fields other than hydrodynamics where the NLSE has been used as a mathematical model, including, but not limited to, optical rogue waves~\cite{Solli07}, atmospheric rogue waves~\cite{Stenflo10}, matter rogue waves in Bose-Einstein condensates~\cite{Bludov09}, and financial rogue waves~\cite{Ivancevic10,Yan10}. \section{Superconductivity} \label{superconductivity} \subsection{NLSE derivation from the nonlinear Klein-Gordon equation} Consider a line of pendula positioned very close together, hanging vertically under the influence of gravity and attached to a horizontal torsion wire about which each pendulum can twist. Let $u(x,t)$ be the twist angle of the pendulum at position $x$ and time $t$; then its motion can be modeled by a sine-Gordon equation \begin{equation} \partial^2_t u - a \partial^2_x u + b \sin u = 0, \qquad \qquad a, \; b \geq 0. \end{equation} In this model, the term $-b \sin u$ is the restoring force due to gravitational acceleration and the term $a \partial_x^2 u$ models the force caused by the effect of the twist. Assume that one end of the pendulum chain is wiggled with a small-amplitude motion of frequency $\omega$; then the term $\sin u$ can be approximated by its Maclaurin series about $u = 0$. Keeping only the first two terms, we obtain a nonlinear Klein-Gordon equation with a cubic nonlinearity. The following derivation of the NLSE from a nonlinear Klein-Gordon equation follows the argument in~\cite{Sharma76,Newell85,Sulem99,Kramer13,Schneider11}. 
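A quick sanity check of the truncation step (an illustrative Python snippet; the angle value is arbitrary): keeping the first two Maclaurin terms replaces $b \sin u$ by $bu - (b/6)u^3$, so the cubic coefficient of the Klein-Gordon equation below corresponds to $\lambda = b/6$, with an error of order $u^5$ for small twist angles.

```python
import math

def restoring_force(u, b=1.0):
    """Exact pendulum restoring term b*sin(u)."""
    return b * math.sin(u)

def cubic_truncation(u, b=1.0):
    """Two-term Maclaurin approximation b*u - (b/6)*u**3, i.e. lambda = b/6."""
    return b * u - (b / 6.0) * u**3

# for a small twist angle the truncation error is O(u^5): the next term is b*u^5/120
u = 0.1
err = abs(restoring_force(u) - cubic_truncation(u))
```

For $u = 0.1$ the cubic truncation is accurate to about $10^{-7}$, several orders of magnitude better than the purely linear (sine-free) approximation.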
Consider a cubic nonlinearity Klein-Gordon equation describing a model for a wave packet $u(x,t)$ that moves at a constant group velocity $c$, presented as the following initial value problem (IVP): \begin{align} \partial_{t}^2 u - a \partial_{x}^2 u + b u - \lambda u^3 &= 0, \qquad \qquad a, \; b \geq 0, \quad \lambda \in \mathbb{R} \label{KGE} \\ u(x,0) = u_0, \qquad \qquad \partial_{t}u(x,0) &= u_1. \label{ICKGE} \end{align} An Ansatz for $u(x,t)$ is expressed as a perturbation series, where again $0 < \epsilon \ll 1$ is a small parameter \begin{equation} u(x,t) = \epsilon \hat{u}_0 + \epsilon^2 \hat{u}_1 + \epsilon^3 \hat{u}_2 + \dots, \label{anzats} \end{equation} with $\hat{u}_n = \hat{u}_n(x,X,t,\tau_1,\tau_2)$, where $X = \epsilon x$, $\tau_1 = \epsilon t$, and $\tau_2 = \epsilon^2 t$ act as slow variables. Substituting~\eqref{anzats} into the IVP~\eqref{KGE} and~\eqref{ICKGE} yields a hierarchy of equations at successive orders of $\epsilon$. The vanishing of the lowest-order term simply reduces to a linear Klein-Gordon IVP: \begin{align} \partial_{t}^2 \hat{u}_0 - a \partial_{x}^2 \hat{u}_0 + b \hat{u}_0 &= 0 \\ \hat{u}_0(x,X,0,0,0;\epsilon) &= u_0/\epsilon \qquad \qquad \partial_{t}\hat{u}_0(x,X,0,0,0;\epsilon) = u_1/\epsilon. \end{align} We seek the solution in the form \begin{equation} \hat{u}_0(x,X,t,\tau_1,\tau_2) = A(X,\tau_1,\tau_2) e^{i(kx - \omega t)} + \text{c.c.}, \qquad A \in \mathbb{C} \end{equation} where c.c. denotes the complex conjugate of the preceding term and $(k,\omega)$ satisfies the dispersion relation $\omega = \Omega(k) = \sqrt{a k^2 + b}$. 
The vanishing of the first-order terms reads \begin{align} \partial_{t}^2 \hat{u}_1 - a \partial_{x}^2 \hat{u}_1 + b \hat{u}_1 &= -2 \partial_{\tau_1} \partial_{t} \hat{u}_0 + 2 a \partial_{X} \partial_x \hat{u}_0 \nonumber \\ &= 2i \left(\omega \partial_{\tau_1} A + a k \partial_{X} A \right) e^{i(kx - \omega t)} + \text{c.c.} \\ \hat{u}_1(x,X,0,0,0;\epsilon) &= 0 \\ \partial_{t}\hat{u}_1(x,X,0,0,0;\epsilon) &= -\partial_{\tau_1} \hat{u}_0(x,X,0,0,0;\epsilon). \end{align} Since the right-hand side contains secular terms that would lead to unbounded growth of $\hat{u}_1$ over a long period of time, we need to eliminate them by taking $\omega \partial_{\tau_1} A + a k \partial_{X} A = 0$. This condition states that the envelope $A$ travels at the group velocity $\Omega'(k) = a k/\Omega(k)$. The solution of the linear Klein-Gordon equation for $\hat{u}_1$ is similar to the one for $\hat{u}_0$: \begin{equation} \hat{u}_1(x,X,t,\tau_1,\tau_2) = B(X,\tau_1,\tau_2) e^{i(kx - \omega t)} + \text{c.c.}, \qquad B \in \mathbb{C}. \end{equation} Collecting the second-order term and requiring it to vanish gives \begin{align} \partial_{t}^2 \hat{u}_2 - a \partial_{x}^2 \hat{u}_2 + b \hat{u}_2 &= -2 \partial_{\tau_2} \partial_{t} \hat{u}_0 - \partial_{\tau_1}^2 \hat{u}_0 + a \partial_{X}^2 \hat{u}_0 + \lambda \hat{u}_0^3 - 2 \partial_{\tau_1} \partial_{t} \hat{u}_1 + 2a \partial_X \partial_{x} \hat{u}_1 \nonumber \\ &= \left\{ 2i\Omega(k) \partial_{\tau_2}A + (a - \Omega'(k)^2) \partial_{\xi}^2 A + 3\lambda |A|^2 A \right. \nonumber \\ & \qquad \left. + 2i \left(\omega \partial_{\tau_1}B + a k \partial_{X} B \right) \right\} e^{i(kx - \omega t)} + \lambda A^3 e^{3i(kx - \omega t)} + \text{c.c.}\\ \hat{u}_2(x,X,0,0,0;\epsilon) &= 0 \\ \partial_{t}\hat{u}_2(x,X,0,0,0;\epsilon) &= -\partial_{\tau_1} \hat{u}_1(x,X,0,0,0;\epsilon) - \partial_{\tau_2} \hat{u}_0(x,X,0,0,0;\epsilon). \end{align} Here, we have used $\xi = \epsilon(x - \Omega'(k) t)$. 
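The group-velocity condition can be confirmed numerically from the dispersion relation (a standalone Python check with arbitrary illustrative coefficients $a = 2$, $b = 3$); the same finite differences also give $\Omega''(k) = ab/\Omega(k)^3$, which sets the dispersive coefficient at the next order.

```python
import math

A, B = 2.0, 3.0  # illustrative Klein-Gordon coefficients a and b

def Omega(k):
    """Dispersion relation of the linear Klein-Gordon equation."""
    return math.sqrt(A * k**2 + B)

def fd1(f, k, h=1e-5):
    """Central first difference."""
    return (f(k + h) - f(k - h)) / (2 * h)

def fd2(f, k, h=1e-4):
    """Central second difference."""
    return (f(k + h) - 2 * f(k) + f(k - h)) / h**2

k = 1.5
group_velocity = fd1(Omega, k)   # matches Omega'(k) = A*k/Omega(k)
dispersion = fd2(Omega, k)       # matches Omega''(k) = A*B/Omega(k)**3
```

The finite-difference derivatives reproduce both closed-form expressions to high accuracy.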
As before, we remove the secular terms by requiring $\hat{u}_2$ to be bounded, so that $B$ must satisfy $\omega \partial_{\tau_1} B + a k \partial_{X} B = 0$. The amplitude $A(\xi,\tau_2)$ then satisfies the temporal NLSE \begin{equation} i \partial_{\tau} A + \beta \partial_{\xi}^2 A + \gamma |A|^2 A = 0 \end{equation} where we drop the subscript 2 from the variable $\tau_2$ and the dispersive and the nonlinear coefficients are given as follows, respectively: \begin{equation} \beta = \frac{1}{2} \Omega''(k) \qquad \qquad \text{and} \qquad \qquad \gamma = \frac{3}{2} \frac{\lambda}{\Omega(k)}. \end{equation} The additional term $\lambda A^3 e^{3i(kx - \omega t)} + \text{c.c.}$ is not resonant and is therefore harmless, since generally $\Omega(3k) \neq 3 \Omega(k)$. \subsection{Applications of sine-Gordon model} Both the nonlinear Klein-Gordon and the sine-Gordon equations have been studied analytically using the Zakharov-Shabat method~\cite{Zakharov72,Grundland92} and the sine-cosine and tanh methods~\cite{Wazwaz05}, as well as numerically using finite difference methods~\cite{Strauss78,Jimenez90,Vu93,Duncan97,Li95} and thin plate splines--radial basis functions~\cite{Dehghan09}. These evolution equations find many applications across the physical sciences and model a variety of nonlinear phenomena~\cite{Dodd82,Drazin89}. Historically, the sine-Gordon equation arose in differential geometry, where it describes surfaces with a constant negative Gaussian curvature~\cite{Enneper70}. The propagation of a crystal dislocation with sinusoidal periodicity is governed by the sine-Gordon equation~\cite{Frenkel39}. A model of the behavior and interaction of elementary particles (mesons and baryons) based on the same equation was proposed by~\cite{Perring62}. The nonlinear evolution equations governing the real-valued envelope amplitude of wave packets in modulationally weakly unstable baroclinic shear flow can be transformed into the sine-Gordon equation~\cite{Gibbon79}. 
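A concrete illustration of the sine-Gordon dynamics discussed above is the traveling kink $u = 4\arctan \exp\left[(x - vt)/\sqrt{1 - v^2}\right]$ for the normalized case $a = b = 1$, which connects the pendulum-chain equilibria $u = 0$ and $u = 2\pi$. The Python sketch below (illustrative only, with an arbitrary velocity $v = 0.6$) verifies it by finite differences:

```python
import math

def kink(x, t, v=0.6):
    """Moving kink of u_tt - u_xx + sin u = 0 (the case a = b = 1), |v| < 1."""
    g = 1.0 / math.sqrt(1.0 - v**2)   # Lorentz-like contraction factor
    return 4.0 * math.atan(math.exp(g * (x - v * t)))

def sg_residual(x, t, v=0.6, h=1e-4):
    """Central-difference residual of u_tt - u_xx + sin u."""
    u0 = kink(x, t, v)
    u_tt = (kink(x, t + h, v) - 2 * u0 + kink(x, t - h, v)) / h**2
    u_xx = (kink(x + h, t, v) - 2 * u0 + kink(x - h, t, v)) / h**2
    return u_tt - u_xx + math.sin(u0)
```

The residual vanishes up to discretization error, and the profile interpolates between $0$ far behind the kink and $2\pi$ far ahead of it.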
Another important application is in the field of superconductivity, namely the Josephson effect, which occurs across a Josephson junction. The latter is a quantum mechanical device composed of superconductor electrodes separated by a barrier and coupled by a weak link. The former is a macroscopic quantum phenomenon in which a current can flow for a long period of time without any voltage supply. It was the British physicist Brian David Josephson who investigated the relationship between the current and the voltage across such a weak link~\cite{Josephson62,Josephson74}. Reviews on Josephson junctions and superconducting soliton oscillators in the context of the sine-Gordon model are given by~\cite{Parmentier93,Pnevmatikos93}. The derivation of the NLSE from the sine-Gordon system in the small-amplitude limit was considered by~\cite{Kaup78}, who discovered a phase-locked breather under an applied alternating current field by means of perturbation theory. Nonlinear breather dynamics under an alternating current parametric force in the presence of loss in a sine-Gordon system has been analyzed by~\cite{Gronbech93}. In the small-amplitude limit, where the system can be described by an effective NLSE, a correct threshold value for the driving force amplitude was obtained when the breather frequency was identically one. Inspired by the recent progress in quantum graph theory and its applications~\cite{Gnutzmann06,Kuchment08,Berkolaiko13}, interactions of traveling localized wave solutions with a vertex in a star graph from a tricrystal Josephson junction have been investigated recently by~\cite{Susanto19}. Other applications of the sine-Gordon and nonlinear Klein-Gordon models include a mechanical model with springs, wires, and bearings, and Bloch wall dynamics in magnetic crystals~\cite{Barone71}. For a summary of contemporary developments of the sine-Gordon model and its wide range of applications, please consult~\cite{Cuevas14}. 
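To attach a number to the Josephson effect (an illustrative back-of-the-envelope Python computation using the standard Josephson relations $I = I_c \sin\phi$ and $\mathrm{d}\phi/\mathrm{d}t = 2eV/\hbar$, with the 2019 exact SI values of $e$ and $h$): a junction held at a DC voltage $V$ carries a supercurrent oscillating at the frequency $f = (2e/h)V$, roughly $483.6$~MHz per microvolt.

```python
# 2019 SI exact values of the elementary charge and the Planck constant
e = 1.602176634e-19      # C
h = 6.62607015e-34       # J s

# Josephson constant K_J = 2e/h: the AC Josephson relation dphi/dt = 2eV/hbar
# implies a supercurrent oscillating at frequency f = K_J * V
K_J = 2 * e / h          # Hz per volt

f_at_1uV = K_J * 1e-6    # oscillation frequency at V = 1 microvolt
```

This enormous frequency-to-voltage ratio is what makes Josephson junctions useful as voltage standards.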
\section{Nonlinear optics} \label{nonlinearoptics} \subsection{NLSE derivation from Maxwell's and Helmholtz' equations} The derivation in this subsection follows the argument presented by~\cite{Agrawal12,Kivshar03,Sulem99}. See also~\cite{Moloney19,Banerjee04,Butcher90}. Maxwell's equations govern the propagation of electromagnetic waves and optical fields in fibers, given as follows in the International System of Units: \begin{align} \nabla \times \mathbf{E} &= -\partial_{t}\mathbf{B} \qquad \qquad \quad \text{(Faraday's law)} \label{faraday} \\ \nabla \times \mathbf{H} &= \mathbf{J} + \partial_{t}\mathbf{D} \qquad \qquad \text{(Amp\`ere's law)} \label{ampere} \\ \nabla \cdot \mathbf{D} &= \rho \qquad \qquad \qquad \qquad \, \text{(Gauss's law)} \\ \nabla \cdot \mathbf{B} &= 0 \qquad \qquad \qquad \qquad \, \text{(Gauss's law for magnetism)}. \end{align} Here, $\mathbf{E}$ and $\mathbf{H}$ denote the electric and magnetic vector fields, respectively; $\mathbf{D}$ and $\mathbf{B}$ denote the corresponding electric and magnetic flux densities; $\rho$ and $\mathbf{J}$ are the charge and current densities of free charges, which represent the sources of the electromagnetic field. We also have the following constitutive relations for the electric and magnetic flux densities: \begin{align} \mathbf{D} &= \epsilon_0 \mathbf{E} + \mathbf{P} \label{dep} \\ \mathbf{B} &= \mu_0 \mathbf{H} + \mathbf{M} \label{bhm} \end{align} where $\epsilon_0$ is the vacuum permittivity, $\mu_0$ is the vacuum permeability, and $\mathbf{P}$ and $\mathbf{M}$ are the induced electric and magnetic polarizations. We will adopt the following common assumptions in nonlinear fiber optics: the absence of free charges ($\rho = 0$ and $\mathbf{J} = \mathbf{0}$) and a nonmagnetic medium such as an optical fiber ($\mathbf{M} = \mathbf{0}$). 
By taking the curl of Faraday's law~\eqref{faraday} and using~\eqref{bhm},~\eqref{ampere}, and~\eqref{dep}, we can eliminate $\mathbf{B}$ and $\mathbf{D}$ to obtain an expression in $\mathbf{E}$ and $\mathbf{P}$ \begin{equation} \nabla \times \nabla \times \mathbf{E} + \frac{1}{c^2} \partial_{t}^2 \mathbf{E} = -\mu_0 \partial_{t}^2 \mathbf{P}, \label{maxwell} \end{equation} where $c = 1/\sqrt{\mu_0 \epsilon_0}$ is the speed of light in vacuum. We adopt a common relationship between the induced polarization $\mathbf{P}$ and the electric field $\mathbf{E}$, which is valid in the electric-dipole approximation and under the assumption of a local medium response. The induced polarization $\mathbf{P}$ is written as the combination of the linear and the nonlinear parts, $\mathbf{P}_L$ and $\mathbf{P}_{NL}$, respectively: $\mathbf{P}(\mathbf{r},t) = \mathbf{P}_L(\mathbf{r},t) + \mathbf{P}_{NL}(\mathbf{r},t)$, where \begin{align} \mathbf{P}_L (\mathbf{r},t) &= \epsilon_0 \int_{-\infty}^{t} \chi^{(1)} (t - t_0) \mathbf{E}(\mathbf{r},t_0) \, \mathrm{d}t_0 \\ \mathbf{P}_{NL}(\mathbf{r},t) &= \epsilon_0 \int_{-\infty}^{t} \int_{-\infty}^{t} \int_{-\infty}^{t} \chi^{(3)} (t - t_1,t - t_2, t - t_3) \mathbf{E}(\mathbf{r},t_1) \mathbf{E}(\mathbf{r},t_2) \mathbf{E}(\mathbf{r},t_3) \, \mathrm{d}t_1 \, \mathrm{d}t_2 \, \mathrm{d}t_3 \end{align} where $\chi^{(j)}$, the $j$-th order susceptibility, is a tensor of rank $j + 1$. Consider first the case where $\mathbf{P}_{NL} = \mathbf{0}$. 
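The passage from this time-domain convolution with $\chi^{(1)}$ to a frequency-domain product in the next step rests on the convolution theorem. The discrete, periodic toy version below (a self-contained Python sketch; the response and field samples are made up purely for illustration) verifies it with a naive DFT:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def circ_conv(h, x):
    """Circular (periodic) convolution: discrete analogue of the chi^(1) integral."""
    N = len(x)
    return [sum(h[m] * x[(n - m) % N] for m in range(N)) for n in range(N)]

# toy response and field samples (illustrative values only)
h_resp = [1.0, 0.5, 0.25, 0.125, 0.0, 0.0, 0.0, 0.0]
field  = [0.0, 1.0, 0.0, -1.0, 0.5, 0.0, 0.3, -0.2]

lhs = dft(circ_conv(h_resp, field))                     # transform of the convolution
rhs = [H * X for H, X in zip(dft(h_resp), dft(field))]  # product of transforms
max_err = max(abs(a - b) for a, b in zip(lhs, rhs))
```

The two sides agree to machine precision, which is exactly why the constitutive relation becomes the simple product $\hat{\mathbf{P}}_L = \epsilon_0 \hat{\chi}^{(1)}(\omega) \hat{\mathbf{E}}$ in the frequency domain.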
Let $\mathbf{\hat{E}}(\mathbf{r},\omega)$ be the Fourier transform of $\mathbf{E}(\mathbf{r},t)$, defined as \begin{equation} \mathbf{\hat{E}}(\mathbf{r},\omega) = \int_{-\infty}^{\infty} \mathbf{E}(\mathbf{r},t) \, e^{i\omega t} \, \mathrm{d}t \end{equation} and let $\hat{\chi}^{(1)}(\omega)$ be the Fourier transform of the linear, first-order susceptibility $\chi^{(1)}(t)$; then~\eqref{maxwell} can be written in the frequency domain as \begin{equation} \nabla \times \nabla \times \hat{\mathbf{E}} = \epsilon(\omega) \frac{\omega^2}{c^2} \mathbf{\hat{E}}(\mathbf{r},\omega) \end{equation} where $\epsilon(\omega) = 1 + \hat{\chi}^{(1)}(\omega) = \left(n + i \alpha c/(2 \omega)\right)^2$ is the frequency-dependent dielectric constant, whose real and imaginary parts are related to the refractive index $n(\omega)$ and the absorption coefficient $\alpha(\omega)$. Due to the low optical losses in fibers within the wavelength region of interest, $\text{Im}\{\hat{\chi}^{(1)}(\omega)\} \ll \text{Re}\{\hat{\chi}^{(1)}(\omega)\}$ and hence $\epsilon(\omega) \approx n^2(\omega)$. In addition, since usually $n(\omega)$ is independent of the spatial coordinates, we have $\nabla \cdot \mathbf{D} = \epsilon \, \nabla \cdot \mathbf{E} = 0$ and hence $\nabla \times \nabla \times \mathbf{E} = \nabla (\nabla \cdot \mathbf{E}) - \nabla^2 \mathbf{E} = - \nabla^2 \mathbf{E}$. Finally, we arrive at the Helmholtz equation \begin{equation} \nabla^2 \mathbf{\hat{E}} + n^2 (\omega) \frac{\omega^2}{c^2} \mathbf{\hat{E}} = 0. \label{helmholtz1} \end{equation} By including the nonlinear effect of the induced polarization, the Helmholtz equation can be written as \begin{equation} \nabla^2 \mathbf{\hat{E}} + \epsilon (\omega) k_0^2 \mathbf{\hat{E}} = 0 \label{helmholtz2} \end{equation} where $k_0 = \omega/c$ and the dielectric constant becomes $\epsilon(\omega) = 1 + \hat{\chi}^{(1)}(\omega) + \frac{3}{4} \chi^{(3)}_{xxxx} |E(\mathbf{r},t)|^2$, with $\chi^{(3)}_{xxxx}$ the relevant component of the third-order susceptibility tensor. 
The Helmholtz equation~\eqref{helmholtz2} can be solved using the method of separation of variables. We assume an Ansatz in the following form: \begin{equation} \hat{E}(\mathbf{r},\omega - \omega_0) = \hat{A}(z, \omega - \omega_0) \, B(x,y) \, e^{i \beta_0 z} \end{equation} where $\hat{A}(z,\omega)$ is a slowly varying function of $z$ and $\beta_0$ is a wavenumber that needs to be determined. The Helmholtz equation~\eqref{helmholtz2} leads to the following equations for $\hat{A}(z,\omega)$ and for $B(x,y)$, where we have neglected $\partial_{z}^2 \hat{A}$ under the slowly varying envelope approximation in $z$: \begin{align} 2i \beta_0 \partial_{z} \hat{A} + (\hat{\beta}^2 - \beta_0^2) \hat{A} &= 0 \label{eigenafou}\\ \nabla^2 B + \left[\epsilon(\omega) k_0^2 - \hat{\beta}^2 \right] B &= 0. \label{eigenbeta} \end{align} The wavenumber $\hat{\beta}$ is determined by solving the eigenvalue equation~\eqref{eigenbeta} using first-order perturbation theory. We obtain \begin{equation} \hat{\beta}(\omega) = \beta(\omega) + \Delta \beta \label{betaeigen} \end{equation} where \begin{equation} \Delta \beta = \frac{\omega^2 n(\omega)}{c^2 \beta(\omega)} \frac{\displaystyle \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \Delta n(\omega) |B(x,y)|^2 \, \mathrm{d}x \, \mathrm{d}y}{\displaystyle \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |B(x,y)|^2 \, \mathrm{d}x \, \mathrm{d}y}. \end{equation} Using~\eqref{betaeigen} and approximating $\hat{\beta}^2 - \beta_0^2 \approx 2 \beta_0 (\hat{\beta} - \beta_0)$, the Fourier transform $\hat{A}(z,\omega - \omega_0)$ satisfying~\eqref{eigenafou} can be written as follows: \begin{equation} \partial_{z} \hat{A} = i \left[ \beta(\omega) + \Delta \beta(\omega) - \beta_0 \right] \hat{A}. 
\end{equation} Since an exact form of the propagation constant $\beta(\omega)$ is rarely known, it is beneficial to expand both $\beta(\omega)$ and $\Delta \beta(\omega)$ in Taylor series about the carrier frequency $\omega_0$ \begin{align} \beta(\omega) &= \beta(\omega_0) + \beta'(\omega_0) (\omega - \omega_0) + \frac{1}{2} \beta''(\omega_0) (\omega - \omega_0)^2 + \dots \\ \Delta \beta(\omega) &= \Delta \beta(\omega_0) + \Delta \beta'(\omega_0) (\omega - \omega_0) + \frac{1}{2} \Delta \beta''(\omega_0) (\omega - \omega_0)^2 + \dots. \end{align} Replacing $\omega - \omega_0$ with the differential operator $i\partial_{t}$ and taking the inverse Fourier transform of $\hat{A}(z,\omega - \omega_0)$, we obtain the following equation for $A(z,t)$: \begin{equation} i \partial_{z} A + i \beta'(\omega_0) \partial_{t} A - \frac{1}{2} \beta''(\omega_0) \partial_{t}^2 A + \Delta \beta(\omega_0) A = 0. \end{equation} Using the moving frame of reference $T = t - \beta'(\omega_0) z$ and noting that the last term contains the fiber loss and nonlinearity effects, we obtain the NLSE \begin{equation} i \partial_{z} A - \frac{1}{2} \beta''(\omega_0) \partial_{T}^2 A + \gamma |A|^2 A = 0. \label{NLSEtemporalopticalsoliton} \end{equation} Here, the nonlinear coefficient $\gamma$ is given by \begin{equation} \gamma(\omega_0) = -\frac{\omega_0}{c} n_2(\omega_0) \frac{\displaystyle \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |B(x,y)|^4 \, \mathrm{d}x \, \mathrm{d}y}{\displaystyle \left(\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} |B(x,y)|^2 \, \mathrm{d}x \, \mathrm{d}y \right)^2}. 
\end{equation} For a single-mode fiber, the modal distribution $B(x,y)$ corresponds to the fundamental fiber mode, given by one of the following expressions: \begin{align} B(x,y) &= \left\{ \begin{array}{ll} J_0(p \sqrt{x^2 + y^2}), & \qquad \sqrt{x^2 + y^2} \leq a \\ \frac{\sqrt{a}}{\sqrt[4]{x^2 + y^2}} J_0(pa) e^{-q(\sqrt{x^2 + y^2} - a)}, & \qquad \sqrt{x^2 + y^2} \geq a \end{array} \right. \\ \text{or} \quad B(x,y) &= e^{-\frac{(x^2 + y^2)}{w^2}}. \end{align} Here, $J_0$ denotes the Bessel function of the first kind of order zero, $a$ is the radius of the fiber core, $w$ is a width parameter, and the quantities $p = \sqrt{n_1^2 k_0^2 - \beta^2}$ and $q = \sqrt{\beta^2 - n_c^2 k_0^2}$, where $n_1$ and $n_c$ are the refractive indices of the core and the cladding, respectively. \subsection{Applications in nonlinear optics} In the context of nonlinear optics, an object of interest is the ``solitary wave,'' also known as the ``soliton.'' Depending on whether the light confinement occurs in time or space during wave propagation, solitons can be classified as either temporal or spatial. The NLSE~\eqref{NLSEtemporalopticalsoliton} governs the propagation of the time-dependent pulse envelope in optical fibers; the corresponding solution is known as a temporal soliton. On the other hand, the spatial soliton, a continuous-wave beam propagating inside a nonlinear optical medium with Kerr (or cubic) nonlinearity, is governed by the following $(1 + 1)$D NLSE \begin{equation} i\partial_{z}A + \beta \partial_{X}^2 A + \gamma |A|^2 A = 0. \label{NLSEspatialopticalsoliton} \end{equation} A classical example of the latter is a bell-shaped spatial wave packet exhibiting a self-induced lensing effect, the self-trapping phenomenon in dielectric waveguides discovered by~\cite{Chiao64}. A stable spatial soliton was also observed experimentally with self-trapping laser beams propagating through homogeneous transparent dielectrics~\cite{Barthelemy85}. For an extensive coverage of spatial solitons, please consult~\cite{Trillo01}. In the following, we will discuss temporal solitons in optical fibers. 
An optical fiber is a flexible and transparent fiber made by drawing pure silica glass or plastic. The central core is surrounded by an outer layer, known as the cladding, whose refractive index is lower than that of the core. The fiber is then coated with a buffer and a jacket for protection from moisture and physical damage. Optical fibers are widely used in fiber-optic communications as a means to transmit light between the two ends of the fiber. The light remains in the core due to total internal reflection, so that the fiber acts as a waveguide. Fiber-optic solitons result from the interplay between fiber dispersion and nonlinearity. Through self-phase modulation, where an ultrashort pulse of light induces a varying refractive index via the optical Kerr effect as it travels in a medium, the nonlinearity can balance the anomalous group velocity dispersion to create an optical soliton. The existence of temporal optical solitons in the context of optical fibers was predicted theoretically by~\cite{Hasegawa73a,Hasegawa73b} and experimentally confirmed by~\cite{Molleanauer80}. These solitons represent optical pulses that maintain their shape during propagation and belong to families of exact solutions of the NLSE. The field of nonlinear fiber optics continues to progress with the development of Erbium-doped fiber amplifiers~\cite{Becker99,Desurvire02}. At the turn of this century, new types of fiber-optic amplifiers exploiting nonlinear effects such as stimulated Raman scattering and four-wave mixing were developed~\cite{Headley05,Pal10}. This further led to other types of solitons such as dispersion-managed solitons and dissipative solitons~\cite{Hasegawa03,Kivshar03,Akhmediev05,Mollenauer06,Akhmediev08}. For feature articles on theoretical and experimental challenges in optical solitons, the readers may consult a volume edited by~\cite{Porsezian03}. 
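The balance between anomalous dispersion and Kerr nonlinearity can be made explicit with the fundamental (sech-shaped) soliton. The Python sketch below (normalized, illustrative parameters; not a statement about any particular fiber) checks by finite differences that $A(z,T) = \sqrt{P_0}\,\mathrm{sech}(T/T_0)\, e^{i\gamma P_0 z/2}$ solves Eq.~\eqref{NLSEtemporalopticalsoliton} when the standard soliton condition $\beta''(\omega_0) = -\gamma P_0 T_0^2$ (anomalous group velocity dispersion) holds:

```python
import cmath
import math

GAMMA, P0, T0 = 1.3, 2.0, 0.7   # illustrative normalized parameters
BETA2 = -GAMMA * P0 * T0**2     # anomalous GVD required for the balance

def soliton(z, T):
    """Fundamental soliton A = sqrt(P0) sech(T/T0) exp(i gamma P0 z / 2)."""
    return math.sqrt(P0) / math.cosh(T / T0) * cmath.exp(0.5j * GAMMA * P0 * z)

def fiber_nlse_residual(z, T, h=1e-4):
    """Central-difference residual of i A_z - (beta''/2) A_TT + gamma |A|^2 A."""
    A0 = soliton(z, T)
    A_z = (soliton(z + h, T) - soliton(z - h, T)) / (2 * h)
    A_TT = (soliton(z, T + h) - 2 * A0 + soliton(z, T - h)) / h**2
    return 1j * A_z - 0.5 * BETA2 * A_TT + GAMMA * abs(A0)**2 * A0
```

The residual vanishes up to discretization error, and the pulse's intensity profile is independent of $z$, which is precisely the shape-preserving propagation described above.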
In addition to the well-known bright, dark, and gray solitons, ``optical rogue waves'' have gained popularity during the past decade in the field of nonlinear optics. Experimental observations of rogue waves in an optical system are supported by numerical simulations of probabilistic supercontinuum generation in a highly nonlinear microstructured optical fiber, with a generalized NLSE as the mathematical model~\cite{Solli07,Dudley08,Bonatto11}. An overview of the research dynamics in optical rogue waves and the state of the art on the subject has been given by~\cite{Akhmediev13}. For discussion and debate on whether the science of rogue waves is moving towards a unifying concept, please consult the papers published by various authors in \textit{The European Physical Journal Special Topics}, Volume 185, pages 1--266, July 2010, published by Springer-Verlag. \section{Bose-Einstein condensation} \label{BEcondensate} \subsection{NLSE derivation from Bose-Einstein condensed state} A Bose-Einstein condensate (BEC) is a state of matter of a dilute gas of bosons which, when cooled to temperatures near absolute zero, condense into the lowest accessible quantum state. The phenomenon was predicted nearly a century ago by~\cite{Bose24} and~\cite{Einstein24}. In the context of BEC, the NLSE is known as the Gross-Pitaevskii equation (GPE), a model derived independently by~\cite{Gross61} and~\cite{Pitaevskii61}. The derivation presented in this section follows an argument presented in~\cite{Pitaevskii03,Pethick08,Dalfovo99}. For a rigorous treatment of the modeling, see~\cite{Erdos07,Erdos10}. 
Consider a system of a weakly-interacting Bose gas whose Hamiltonian can be written in terms of the field operator $\hat{\Psi}$ \begin{equation} \hat{H} = \int \left(\frac{\hbar^2}{2m} \nabla \hat{\Psi}^{\dagger} \nabla \hat{\Psi} \right) \mathrm{d}\mathbf{r} + \frac{1}{2} \int \hat{\Psi}^{\dagger} \hat{\Psi}^{\dagger \prime} V(\mathbf{r}' - \mathbf{r}) \hat{\Psi} \hat{\Psi}^{\prime} \, \mathrm{d}\mathbf{r}^{\prime} \, \mathrm{d}\mathbf{r}. \label{hamil} \end{equation} Here, $\hbar$ is the reduced Planck constant, $m$ is the particle mass, $\hat{\Psi}^{\dagger}(\mathbf{r})$ and $\hat{\Psi}(\mathbf{r})$ are the field operators creating and annihilating a particle at the point $\mathbf{r}$, respectively, and $V(\mathbf{r})$ is the two-body potential. The field operators satisfy the following commutation relations \begin{equation} \left[\hat{\Psi}(\mathbf{r}), \hat{\Psi}^{\dagger}(\mathbf{r}^\prime) \right] = \delta(\mathbf{r} - \mathbf{r}') \quad \quad \text{and} \quad \quad \left[\hat{\Psi}(\mathbf{r}), \hat{\Psi}(\mathbf{r}^\prime) \right] = 0. \label{commu} \end{equation} In the Heisenberg representation, the field operator $\hat{\Psi}(\mathbf{r},t)$ satisfies \begin{align} i \hbar \partial_t \hat{\Psi}(\mathbf{r},t) &= \left[\hat{\Psi}(\mathbf{r},t), \hat{H} \right] \\ &= \left[- \frac{\hbar^2 \nabla^2}{2 m} + V_\text{ext}(\mathbf{r},t) + \int \hat{\Psi}^{\dagger}(\mathbf{r}^\prime, t) V(\mathbf{r}^\prime - \mathbf{r}) \hat{\Psi}(\mathbf{r}^\prime,t) \, \mathrm{d}\mathbf{r}^\prime \right] \, \hat{\Psi}(\mathbf{r},t) \end{align} where $V_\text{ext}$ is an external potential and we have utilized the Hamiltonian~\eqref{hamil} and the commutation relations~\eqref{commu}. If we apply an effective potential $V_\text{eff}$ for which the Born approximation is applicable~\cite{Born26}, then at very low temperature and to the lowest-order approximation we can replace the field operator $\hat{\Psi}(\mathbf{r},t)$ with a classical field, the condensate wave function $\Psi_0(\mathbf{r},t)$. 
Under the assumption that $\Psi_0(\mathbf{r},t)$ varies slowly on distances of the order of the interatomic force range, $\mathbf{r}^\prime$ can be replaced with $\mathbf{r}$. We arrive at the GPE \begin{equation} i \hbar \partial_t \Psi_0(\mathbf{r},t) = \left(-\frac{\hbar^2 \nabla^2}{2 m} + V_\text{ext}(\mathbf{r},t) + g|\Psi_0(\mathbf{r},t)|^2 \right) \Psi_0(\mathbf{r},t), \end{equation} where ${\displaystyle g = \int V_\text{eff}(\mathbf{r}) \, \mathrm{d}\mathbf{r}}$. The GPE governs the ground state of a quantum system of identical bosons and is used as a model equation for the single-particle wavefunction in a BEC. In particular, the presence of the external potential $V_\text{ext}$ allows us to model various actions of the external world on the condensate. \subsection{Applications in BEC} Although the low-temperature, high-density state of BEC was predicted in the 1920s, it was not until the 1990s that the phenomenon was successfully realized and tested experimentally in laboratories, by confining vapors of rubidium $^{87}$Rb~\cite{Anderson95}, lithium $^{7}$Li~\cite{Bradley95}, and sodium $^{23}$Na~\cite{Davis95} atoms in magnetic traps and cooling them down to extremely low temperatures. An intensive effort on other atomic species has also produced fruitful experimental results, such as on dilute gases of atomic hydrogen~\cite{Fried98,Greytak00} and helium in the $2^3 \, S_1$ metastable state $^4$He$^{\ast}$~\cite{Dos01,Robert01}, as well as samples of potassium $^{41}$K~\cite{Modugno01}, cesium $^{133}$Cs~\cite{Weber03}, and another isotope of rubidium $^{85}$Rb~\cite{Cornish00}. The group at the Joint Institute for Laboratory Astrophysics in Colorado successfully measured, for the first time, the collective excitations of a Bose-condensed dilute gas in a trap~\cite{Jin96}. 
On the modeling side, the dynamics of a dilute trapped BEC has been successfully described using mean-field theory, within the self-consistent Hartree--Fock--Bogoliubov approximation~\cite{Hartree28,Fock30,Bogo47}, from which the GPE can also be derived~\cite{Griffin96}. Most theoretical approaches to solving the GPE center around the Thomas-Fermi approximation~\cite{Thomas27,Fermi27}, in which the nonlinear atomic interaction energy is much larger than the kinetic energy pressure, and hence the latter is neglected~\cite{Baym96,Stringari96,Dalfovo96}. An analytical approach to solving the GPE to model the dynamics of dilute ultracold atom clouds in the BEC phase by applying a variational technique was proposed by~\cite{Perez96,Perez97}. The GPE for BEC has also been studied and solved numerically, by direct numerical integration for time-independent~\cite{Edwards95} and time-dependent GPEs~\cite{Ruprecht96}. Other techniques for the latter include, but are not limited to, the semi-implicit Crank-Nicolson scheme~\cite{Ruprecht95}, an eigenfunction basis expansion method~\cite{Edwards96}, an explicit finite-difference scheme~\cite{Cerimele00}, and the time-splitting spectral method~\cite{Bao03}. Another application of the GPE, to a 1D boson cloud, is covered by~\cite{Roger13}. This 1D bosonic gas with a uniform potential can be obtained by preparing a condensate in an elongated trap. Of particular interest are the phenomena occurring at the center of the trap, where the density is relatively uniform and the condensate behaves as a fluid. For repulsive interactions, the solution of the GPE takes the form of a hyperbolic tangent profile, while for attractive interactions it becomes a hyperbolic secant profile. An extensive review of the mean-field theory applied to BEC is given by~\cite{Dalfovo99}. See also other reviews by~\cite{Burnett96} and~\cite{Parkins98}. 
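The repulsive tanh profile mentioned above can be checked directly against the stationary, uniform ($V_\text{ext} = 0$) GPE $\mu \Psi_0 = -\tfrac{\hbar^2}{2m}\Psi_0'' + g\Psi_0^3$. In units $\hbar = m = 1$, the profile $\psi = \sqrt{n_0}\tanh\left[x/(\sqrt{2}\,\xi)\right]$, with healing length $\xi = 1/\sqrt{2 g n_0}$ and chemical potential $\mu = g n_0$, is a solution; the Python sketch below (parameter values are illustrative only) verifies this by finite differences:

```python
import math

G, N0 = 1.5, 2.0                    # illustrative interaction strength and density
XI = 1.0 / math.sqrt(2.0 * G * N0)  # healing length (units hbar = m = 1)
MU = G * N0                         # chemical potential

def psi(x):
    """Dark-soliton / tanh profile of the repulsive, uniform 1D GPE."""
    return math.sqrt(N0) * math.tanh(x / (math.sqrt(2.0) * XI))

def gpe_residual(x, h=1e-4):
    """Central-difference residual of mu*psi + psi''/2 - g*psi^3 (V_ext = 0)."""
    d2 = (psi(x + h) - 2 * psi(x) + psi(x - h)) / h**2
    return MU * psi(x) + 0.5 * d2 - G * psi(x)**3
```

The residual vanishes up to discretization error, and far from the node the profile saturates at the background density $\sqrt{n_0}$; for attractive interactions the analogous check would use the hyperbolic secant profile instead.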
For an excellent symbiosis between theoretical and experimental contributions on BEC, the readers are encouraged to consult~\cite{Kevrekidis08}. \section{Conclusion} \label{conclusion} In this chapter, we have provided an overview of the modeling and application aspects of the NLSE in various physical settings. Indeed, the subject of the NLSE is an active and dynamic research area not only in Mathematical Physics, but also in other areas of Science as well as in Engineering. We have derived the NLSE heuristically as well as by implementing the method of multiple scales. We also covered the applications of the NLSE in surface water waves, superconductivity, nonlinear optics, and BEC, including solitons and rogue waves. The evolution equation admittedly has some limitations, yet it is remarkable that the NLSE provides a rather universal model in several areas that do not seem to be closely connected from a general point of view. We hope that this chapter will stimulate further research on these exciting topics. \addcontentsline{toc}{section}{Acknowledgements} \section*{\large Acknowledgements} {\small The author would like to acknowledge Professor E. (Brenny) W. C. van Groesen (University of Twente, the Netherlands and LabMath Indonesia), Professor Hadi Susanto (The University of Essex, UK), Professor Nail N. Akhmediev (Australian National University), Professor Panayotis G. Kevrekidis (University of Massachusetts Amherst), Professor Mason A. Porter (University of California, Los Angeles), Professor Guido Schneider (University of Stuttgart, Germany), Dr. Gert Klopman (Witteveen$+$Bos, the Netherlands), Dr. Agung Trisetyarso (Bina Nusantara University, Indonesia), Drs. Andonowati, Alexander A. P. Iskandar, and Rudy Kusdiantara (Bandung Institute of Technology, Indonesia), Professor Fr\'ed\'eric Dias (University College Dublin, Ireland), Professor Kristian B. 
Dysthe (University of Bergen, Norway), Professor Karsten Trulsen (University of Oslo, Norway), Professors Miguel Onorato and Alfred R. Osborne (Universit\'a di Torino, Italy), Dr. Ardhasena Sopaheluwakan (Meteorology, Climatology, and Geophysical Agency, Indonesia), Professor Christian Kharif (\textit{Institut de Recherche sur les Ph\'enom\`enes Hors Equilibre}, Research Institute for Non-Equilibrium Phenomena, Aix-Marseille University, France), Professors Ren\'e H. M. Huijsmans and Jurjen A. Battjes (Delft University of Technology, the Netherlands), Professor Arthur E. Mynett (International Institute for Hydraulic and Environmental Engineering IHE Delft Institute for Water Education and Deltares, formerly WL|Delft Hydraulics, the Netherlands), and Dr. Pearu Peterson (Tallinn University of Technology, Estonia) for their advice, suggestions, and fruitful discussions. This research is supported by the Dutch Organization of Scientific Research NWO (\textit{Nederlandse Organisatie voor Wetenschappelijk Onderzoek}), subdivision Applied Sciences STW (\textit{Stichting Technische Wetenschappen}) through Mathematics and Innovation Transfer Point (\textit{Transferpunt Wiskunde en Innovatie}) Grant No. TWI-5374, the New Researcher Fund from the University of Nottingham, University Park Campus, UK, through Grant No. NRF 5035-A2RL20, the SKKU Samsung Intramural Research Fund No. 2016-1299-000, and the Beginning Independent Researcher Program ({\slshape Saengae-Cheot Yeongu}) from the National Research Foundation of Korea through Grant No. NRF-2017-R1C1B5-017743 under the Basic Research Program in Science and Engineering.\par}
\section{Introduction} Simultaneous localization and mapping (SLAM) is one of the essential functions of autonomous robots. Its primary tasks are state estimation and map building. State estimation aims at finding the transformation that best aligns consecutive sensor data, which requires a data association process. Map building involves representing the environment with a specific type of model and accumulating information into it. The chosen model of the environment is fundamental to data association, and thus impacts the accuracy and efficiency of the whole system. Specifically, in the lidar SLAM problem, point set registration \cite{pomerleau2015review} is needed for state estimation. For mobile robots with lightweight sensors and limited computational resources, it is challenging to achieve accurate data association efficiently, owing to the sensor mechanism and the motion of the vehicle. For instance, the spinning 2D lidar \cite{droeschel2016multilayered} provides low-resolution point clouds at a low scan rate. Single-axis 3D lidars, such as the VLP-16 used in \cite{shan2018lego}, also produce data with low vertical resolution, meaning that the range data aggregate in a few channels due to the sweeping mechanism. Regarding map building, it is critical to avoid dimension explosion of the map state vector as data are accumulated into it. Consequently, when facing large amounts of non-uniform, sparse data, it is still worth pursuing a more reliable registration method suitable for both state estimation and map building. \begin{figure}[t] \centering \includegraphics[scale = 0.95]{fig1-eps-converted-to.pdf} \caption{Map of a plaza reconstructed by GP. We show the points whose uncertainty is below a certain threshold. They are uniformly distributed and colored according to height. (a) is a perspective view of the map produced by the core workflow. The white curve indicates the trajectory of the MAV.
(b)-(d) are detailed views of several typical objects in the environment, including a building facade (b), sculptures (c), and unstructured trees (d).} \label{fig:1} \end{figure} In this work, a 3D lidar-based SLAM approach, named GP-SLAM+, is designed to address the challenges above. We use regionalized GP map reconstruction to model the environment from range data, which serves as the foundation of our approach. After this, evenly distributed samples are drawn from the model and fed into a scan-to-map registration scheme to compute the rigid transformation. The map is built incrementally by fusing the information of the current frame into it. One of the mapping results can be seen in Fig. \ref{fig:1}. This GP-SLAM workflow was proposed in our previous work \cite{li2020gp} for the 2D case. We also investigated the registration between dense 3D point clouds \cite{Li2020}. However, moving to 3D space, the structure is more complicated and cannot easily be represented by a single function. Also, the cubic complexity of the GP becomes prohibitive. This work overcomes those barriers. First, we use a principled down-sampling method to accelerate the training of the GP. The registration, including the data association, is redesigned based on a maximum likelihood estimation (MLE) probabilistic scheme. We also design a two-thread framework to further enhance the mapping quality and the fidelity of pose estimation in large-scale tasks. We conducted experiments with lightweight sensors to thoroughly evaluate the core workflow and the full system. \section{Related Work} A wide range of existing literature is devoted to building lidar-based SLAM systems. Many such systems are based on the iterative closest point (ICP) method \cite{besl1992method} or its variants \cite{rusinkiewicz2001efficient}. Classical ICP may fall into local minima caused by the sparsity of range data. Consequently, it is preferable to identify more stable features that capture the environment structure.
Geometric features, such as lines and planes, can be extracted easily and are used widely. These features are incorporated into a probabilistic framework by Generalized ICP (GICP) \cite{segal2009generalized}. Lidar Odometry and Mapping (LOAM) \cite{zhang2014loam} is one of the state-of-the-art systems that extract such features. LeGO-LOAM \cite{shan2018lego} further avoids features extracted from noisy areas such as vegetation. Another option is to study the properties of the point cloud within sub-sections. Normal distributions transform (NDT)-based methods \cite{magnusson2007scan}\cite{hong2017probabilistic} and surfel-based methods \cite{droeschel2016multilayered}\cite{bosse2009continuous} fall into this category. It is also notable that \cite{deschaud2018imls} constructs an implicit surface for precise registration offline. By contrast, our GP-based mapping uses several GPs in sub-domains to express the local surfaces. It reduces the loss of information caused by feature extraction. On the other hand, this model can recover the structure well with a fixed grid size, compared with the multi-resolution grid-based parametric models \cite{droeschel2016multilayered}\cite{hong2017probabilistic}\cite{bosse2009continuous}. GP-based mapping has attracted attention in the robotics community in recent years, as it is a continuous representation and can make inferences, with uncertainty, in unexplored regions. Several works use GP as a regressor to obtain a continuous occupancy grid map \cite{o2012gaussian}, or use it to model and interpolate the strength of the ambient magnetic field \cite{solin2018modeling} for indoor localization, whereas we use spatial GPs to recover the local surfaces directly. Some other works use spatial GPs for terrain modeling \cite{plagemann2008learning} or surface reconstruction \cite{lee2019online}. The functional relationships in our method share certain similarities with those works.
However, they mainly focus on the mapping problem and do not include real-time state estimation in a SLAM system. Also, few methods complete large-scale 3D mapping online. With the aforementioned representations of the environment, the number of components in registration is reduced, and the efficiency is enhanced. However, most feature-based methods still suffer from a time-consuming matching process and usually use data structures such as the Kd-tree \cite{bentley1975multidimensional} to accelerate it. Computational complexity is also the main obstacle that prevents GPs from wider robotics applications. Domain decomposition \cite{park2011domainDec} and local regression \cite{shen2006fast} with Kd-trees are two techniques that improve the efficiency of GPs. Our method adopts both techniques. However, in this work, by utilizing the evenly distributed property of the samples from GP map reconstruction, the matching can be finished directly, and the Kd-tree is avoided altogether. \section{GP Map Reconstruction} We use spatial GPs to reconstruct local surfaces from noisy range observations. To get more convenient access for data association and map updating, we extract discrete samples from the recovered surfaces. This process is named regionalized GP map reconstruction. In other words, it can be considered a kind of surface interpolation. This process contains two components, regionalization and reconstruction. \begin{figure} \centering \includegraphics[scale = 0.60]{howgp9-eps-converted-to.pdf} \caption{Illustration of the GP map reconstruction process in a cell. (a) Sparse raw points indicated by black points. The blue arrow refers to the normal from PCA. This local surface is approximately perpendicular to the $oxy$ plane. (b-d) show the results of GP map reconstruction in each direction. Samples are colored according to their variance.
The direction $z$ is identified as the noisy direction, as shown in (d), and will be omitted.} \label{fig:2} \end{figure} \subsection{Regionalization} Initially, to establish different functional relationships locally, we divide the whole domain into evenly distributed cubic cells in the world coordinate system $\{W\}$. This decomposition also accelerates the reconstruction process \cite{park2011domainDec}. The side length of each cell is $a$. The subset of the raw points $S_{t}^{W}$ located in the $k$th cell is denoted by $S_{t, k}^{W}=\left\{p_{k, i}, i=1, \dots, n_{k}\right\}$. Then we need to determine the functional relationships among the coordinates, namely $x=f(y,z)$, $y=f(x,z)$, and $z=f(x,y)$. Since one function can only express a 2.5D surface, for complex 3D structures we generally assume that three functions exist in a cell. Each function provides constraints along its direction, which will be detailed in Section \RNum{4}. Accordingly, if the surface in a cell is perpendicular to one coordinate plane, it only provides constraints along its normal, and we can omit the corresponding function in this cell. This situation is judged based on principal component analysis (PCA). Fig. \ref{fig:2} illustrates an example: when a set of raw data is drawn from a vertical wall, the function whose direction is $z$ is neglected, as the wall cannot provide vertical constraints. \subsection{Reconstruction} After the regionalization, we conduct GP map reconstruction in each nonempty cell. Lidar measurements are noisy samples of the environment. Their noise model can be derived from manufacturer data, as in \cite{hong2017probabilistic}. Here, we simply assume that each lidar point follows an independent normal distribution with an isotropic variance $\sigma^{2}$. In this case, GP regression, which can produce the best linear unbiased prediction \cite{park2011domainDec}, is used.
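As an illustration, the regionalization step can be sketched in a few lines. The sketch below (Python with NumPy; not the paper's C++ implementation, and the threshold `tol` is a hypothetical value) bins points into cubic cells keyed by integer indices and uses the PCA normal of each cell to decide which functional directions the surface supports:

```python
import math
from collections import defaultdict

import numpy as np

def regionalize(points, a):
    """Bin raw 3D points into cubic cells of side length a (world frame)."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(math.floor(c / a)) for c in p)
        cells[key].append(p)
    return {k: np.asarray(v) for k, v in cells.items()}

def directions_to_keep(cell_points, tol=0.2):
    """PCA-based choice of functional directions in a cell.

    The surface normal is taken as the eigenvector of the point covariance
    with the smallest eigenvalue.  A direction d (0=x, 1=y, 2=z) is omitted
    when the normal has (nearly) no component along axis d, i.e. the local
    surface provides no constraint in that direction (tol is illustrative).
    """
    centered = cell_points - cell_points.mean(axis=0)
    cov = centered.T @ centered / len(cell_points)
    eigval, eigvec = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvec[:, 0]                  # least-variance direction
    return [d for d in range(3) if abs(normal[d]) > tol]
```

For a flat wall perpendicular to the $oxy$ plane, the normal has no $z$ component, so the $z$ function is omitted, matching the example in Fig. \ref{fig:2}.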
The GP regression problem is detailed as follows, based on \cite{rasmussen2003gaussian}. Given $n_{k}$ training points $D=\left\{\left(f_{i}, {l}_{i}\right), i=1, \dots n_{k}\right\}$, the relationship between the observations $f_{i} \in \mathbb{R}$ and the training locations $l_{i} \in \mathbb{R}^{2}$ is expressed as $f_{i}=f\left({l}_{i}\right)+\varepsilon_{i}, i=1, \dots, {n}_{k}$, where $\varepsilon_{i}$ is a noise term following the distribution $\varepsilon_{i} \sim \mathcal{N}\left(0, \sigma^{2}\right)$. The goal is to obtain the distribution of the $n_{{test}}$ predictions $\boldsymbol{f}_{*}$ at the preset test locations $\boldsymbol{l}_{*}=\left[l_{* 1}, l_{* 2}, \ldots, l_{* n_{ {test}}}\right]^{T}$, denoted by $f_{* j} =f\left( l_{* j}\right)+\varepsilon_{* j}, j=1, \ldots, {n}_{ {test}}$. Defining $\boldsymbol{f}=\left[f_{1}, f_{2}, \ldots, f_{{n}_{k}}\right]^{T}$ and $\boldsymbol{l}=\left[l_{1}, l_{2}, \ldots, l_{{n}_{k}}\right]^{T}$, the predictive distribution of $\boldsymbol{f}_{*}$ given $\boldsymbol{f}$ is \begin{equation} \begin{array}{r} P\left(\boldsymbol{f}_{*} | \boldsymbol{f}\right)=\mathcal{N}\left(k_{\boldsymbol{l *}}^{T}\left(\sigma^{2} I+K_{\boldsymbol{l l}}\right)^{-1} \boldsymbol{f}, k_{\boldsymbol{* *}}-\right. \\ \left.k_{\boldsymbol{l *}}^{T}\left(\sigma^{2} I+K_{\boldsymbol{l l}}\right)^{-1} k_{\boldsymbol{l *}}\right) \end{array} \end{equation} in which the mean value $k_{\boldsymbol{l *}}^{T}\left(\sigma^{2} I+{K}_{\boldsymbol{ll}}\right)^{-1} \boldsymbol{f}$ is taken as the point prediction of $\boldsymbol{f}_{*}$ at the test locations $\boldsymbol{l}_{*}$. Its variance is estimated by $k_{\boldsymbol{**}}-k_{\boldsymbol{l *}}^{T}\left(\sigma^{2} I+{K}_{\boldsymbol{l} \boldsymbol{l}}\right)^{-1} k_{\boldsymbol{l *}}$.
Here, $k_{\boldsymbol{**}}=k\left(\boldsymbol{l}_{*}, \boldsymbol{l}_{*}\right)$, $k_{\boldsymbol{l*}}=k\left(\boldsymbol{l}, \boldsymbol{l}_{*}\right)$, and $\boldsymbol{K}_{\boldsymbol{l} \boldsymbol{l}}$ is an $n_{k} \times n_{k}$ matrix with $\boldsymbol{K}_{\boldsymbol{l} \boldsymbol{l}}(i, j)=k(l_{i}, l_{j})$, where $k(\cdot, \cdot)$ represents the kernel function. In this work, we choose the commonly used exponential covariance function, $k\left(l_{i}, l_{j}\right)=\exp \left(-\kappa\left|l_{i}-l_{j}\right|\right)$, with a preset length-scale parameter $\kappa$. In this context, the training points are the raw points $S_{t, k}^{W}$ in a cell. The coordinate used as the observation is called the direction, and the other two serve as the training location. The $n_{{test}}$ test locations are evenly spaced, with an interval $r$ between them. Recalling that the side length of a cell is $a$, we set $a$ to be an integer multiple of $r$, which means $n_{ {test}}=({a} / {r})^{2}$. These predictions, together with their variances, are samples drawn from the implicit surfaces, and each set of samples is called a layer. As shown in Fig. \ref{fig:2}, predictions remote from the raw data are less reliable. We use these samples as the reconstruction result. After the reconstruction, there are 0$\sim$3 layers in one cell. The cells are stored in a hash table data structure. A sample is represented by $p_{i}=\left(f_{i}, l_{i}\right)$, where $f_{i} \sim \mathcal{N}\left({u}_{i}, \sigma_{i}^{2}\right)$, and the test location serves as the index. In this way, a sample can be queried directly. \subsection{Acceleration of the Reconstruction} Besides the domain decomposition, we further accelerate the training process of the GP through the concept of local regression. The central idea is that a prediction is mostly influenced by those observations whose training locations are closer to the test location of that prediction.
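As a concrete illustration of the predictive equations above, the following sketch (Python with NumPy; the parameter values are hypothetical, and this is not the paper's implementation) evaluates the predictive mean and variance at a set of test locations using the exponential kernel:

```python
import numpy as np

def gp_reconstruct(train_loc, train_obs, test_loc, kappa=1.0, sigma2=0.01):
    """Predictive mean and variance per the text:
    mean = k_l*^T (sigma^2 I + K_ll)^{-1} f
    var  = k_** - k_l*^T (sigma^2 I + K_ll)^{-1} k_l*
    with the exponential kernel k(l_i, l_j) = exp(-kappa * |l_i - l_j|)."""
    def kern(A, B):
        # |l_i - l_j|: Euclidean distance between 2D training locations
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
        return np.exp(-kappa * d)

    K_ll = kern(train_loc, train_loc)
    k_ls = kern(train_loc, test_loc)               # shape: n_k x n_test
    M = np.linalg.inv(sigma2 * np.eye(len(train_loc)) + K_ll)
    mean = k_ls.T @ M @ train_obs
    var = 1.0 - np.sum(k_ls * (M @ k_ls), axis=0)  # k(l*, l*) = 1 on the diagonal
    return mean, var
```

Consistent with Fig. \ref{fig:2}, the predicted variance approaches the prior value for test locations remote from the training data, which is what allows unreliable samples to be discarded later.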
Thus, the training process can be accelerated by a principled down-sampling of the training points without much precision loss \cite{shen2006fast}. Accordingly, we retain the raw data but use only the closest point to each test location in GP map reconstruction each time. As a result, the number of filtered training points is reduced significantly. One approach to completing this filtering process is to utilize a Kd-tree. However, the initialization cost of this data structure is $O(n \log n)$ with $n$ inputs, and the average searching cost is $O(\log n)$ \cite{bentley1975multidimensional}. Although a Kd-tree is faster than brute-force searching, it is still time-consuming, especially when the number of points is large. As the searching targets in our application are evenly distributed, we use a modified 2D voxel filter to approximate this process. As shown in Fig. \ref{fig:3}, in a cell, the training locations spread over a 2D domain, which is divided into smaller grids whose centers are the test locations. The original voxel filter calculates the mean of all raw data in each smaller grid. The modification is that we keep a point if it is the closest one to the test location among all points in the same smaller grid. In this way, the filtering process can be finished with linear complexity. \begin{figure} \centering \includegraphics[scale = 0.9]{fig3_3-eps-converted-to.pdf} \caption{Illustration of the principled filtering process. (a) A Kd-tree divides the searching domain according to the data. (b) Our modified 2D voxel filter divides it into smaller grids indicated by the dashed lines, whose centers are the test locations (red points). A raw point is kept if its training location (gray point) is the closest one to the test location in a smaller grid; the blue lines indicate this relationship, and the filtering result is highlighted by the blue circles.
} \label{fig:3} \end{figure} \section{State Estimation} Following the reconstruction, the current frame is aligned to the map. Using the map as the reference frame, we can suppress the pose drift. The GP map reconstruction and scan registration processes are conducted iteratively until convergence to provide the state estimate. This scan-to-map registration process includes two main steps, matching and alignment. The matching step establishes the correspondences between the two frames, and the alignment step computes a transformation between the matched pairs. \subsection{Matching} A correspondence is established between two samples, one from the current frame $P_{t}^{W}$ and one from the reference frame $Q_{t-1}^{W}$, when they satisfy the following conditions: 1) the two samples are located in the same or adjacent cells; 2) the two samples share the same prediction direction and test location; 3) both variances of the samples are below a threshold $\sigma_{thr}^{2}$. We illustrate this process in Fig. \ref{fig:4} in 2D space for simplicity. Pair-1 is a qualified correspondence, while Pair-2 is invalid, as the variance of one sample is too large. Pair-3 is established between two samples because they share another direction and are located in adjacent cells. For Pair-1, several samples satisfy the aforementioned conditions; in this case, the closest one is chosen. Paired samples are expressed by $\{p_{i}, q_{i}\}$, where $p_{i}=\left(f_{pi}, l_{pi}\right)$, $f_{p i} \sim \mathcal{N}\left({u}_{p i}, \sigma_{p i}^{2}\right)$ and $q_{i}=\left(f_{qi}, l_{qi}\right)$, $f_{q i} \sim \mathcal{N}\left({u}_{q i}, \sigma_{q i}^{2}\right)$. \subsection{Alignment} Based on the idea that layers only offer observability along their directions, we design the error metric as the distance between the predictions of paired samples.
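The three matching conditions above reduce to a direct hash lookup, since test locations lie on a shared grid. A minimal sketch (Python; the frame layout and `var_thr` value are illustrative assumptions, and the paper's tie-breaking by distance is simplified to first-found here):

```python
def match_samples(cur, ref, var_thr):
    """Direct correspondence lookup between two frames of GP samples.

    Each frame is a dict keyed by (cell_index, direction, test_location)
    with values (prediction, variance), mirroring the hash-table map
    storage, so no Kd-tree search is needed.  A pair is kept when the key
    is found in the same or an adjacent cell and both variances are below
    var_thr (conditions 1-3 in the text).
    """
    pairs = []
    for (cell, d, loc), (f_p, v_p) in cur.items():
        if v_p >= var_thr:
            continue
        for dc in neighbours(cell):          # same cell first, then adjacent
            cand = ref.get((dc, d, loc))
            if cand is not None and cand[1] < var_thr:
                pairs.append(((f_p, v_p), cand))
                break
    return pairs

def neighbours(cell):
    """Yield the cell itself, then its 26 adjacent cells."""
    cx, cy, cz = cell
    yield (cx, cy, cz)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if (dx, dy, dz) != (0, 0, 0):
                    yield (cx + dx, cy + dy, cz + dz)
```

Because the key encodes direction and test location, an invalid pair such as Pair-2 is rejected by the variance test alone, without any nearest-neighbour search.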
First, given $e_{x}=[1,0,0]$, the coordinate $x_{i}$ of a 3D point $p_{i}={[x_{i},y_{i},z_{i}]}^{T}$ can be computed as $x_{i}=e_{x} \cdot p_{i}$, and the other two directions are analogous. For the sake of brevity, we define an operator $\left( \cdot \right)_{\cdot} {\circ}$ to express the operation of obtaining the coordinate that corresponds to the direction $\circ$ in the following context. Similar to the MLE approach in \cite{segal2009generalized}, with $n_{cur}$ paired samples $\{p_{i}, q_{i}\}, i=1, \dots, n_{c u r}$ in all layers, for a transformation $T$, we define the 1-dimensional distribution of an observation at a certain test location as ${d}_{i}\sim\mathcal{N}\left({\left(T {p}_{i}\right)}_{\cdot}\circ-q_{i\cdot}\circ, \sigma_{p i}^{2}+\sigma_{q i}^{2}\right)$. Then the relative transformation is computed by \begin{equation} {T}=\underset{T}{{argmax}} \prod_{i}\left({P}\left({d}_{i}\right)\right)\\ =\underset{T}{{argmax}} \sum_{i} \log \left({P}\left({d}_{i}\right)\right) \end{equation} The above objective function can be simplified to \begin{equation} \begin{aligned} {T}&=\underset{T}{{argmin}} \sum_{i}\left({{d}_{i}}^{T}\left(\sigma_{p i}^{2}+\sigma_{q i}^{2}\right)^{-1} {d}_{i}\right) \\ &=\underset{T}{{argmin}} \sum_{i} \frac{\left\|\left(T {p}_{i}\right)_{\cdot}{\circ}-q_{i \cdot} \circ \right\|^{2}}{\sigma_{p i}^{2}+\sigma_{q i}^{2}} \end{aligned} \end{equation} where the variance can be seen as a weight. This optimization problem is solved by the non-linear solver Ceres \cite{ceres-solver}. \begin{figure} \centering \includegraphics[scale = 0.22]{registration2-eps-converted-to.pdf} \caption{Illustration of the matching and error metric in the 2D situation for simplicity. In the cell view, points indicate samples drawn from surfaces with uncertainty along their directions, and the dashed lines refer to the identical test locations between established correspondences.
In the frame view, searching happens in the same and several adjacent cells, represented by the colored rectangles. The colors of the points, dashed lines, and rectangles indicate the two different directions.} \label{fig:4} \end{figure} Compared with our previous work, the registration has been redesigned in several aspects. In the previous work, the correspondences were established using only one layer within each cell. It treated all the reconstructed samples as 3D points and used the 3D Euclidean distance as the error metric. The problem was solved by singular value decomposition (SVD). In contrast, we use several layers to model complex structures and extend the correspondence-searching area to avoid information loss at borders (in the previous work, Pair-3 in Fig. \ref{fig:4} was omitted). The error metric is also changed, so that it drags the surfaces, rather than the points as in the previous work, closer together. This leads to faster convergence, as it avoids introducing the test locations into the objective function. \subsection{Demonstration of Registration} We select two sets of typical range data to demonstrate the advantages of our registration over ICP and our previous method. In the first test, we use two frames of range data measured with a spinning 2D lidar from experiment A in Section \RNum{7}. As shown in Fig. \ref{fig:5}(a), the point cloud provided by this kind of sensor is rather sparse and uneven. Here, Generalized ICP (GICP) is selected as the benchmark. The results in Fig. \ref{fig:5}(b)\&(c) show that GICP falls into a wrong local minimum, while our method aligns the structure well. \begin{figure} \centering \includegraphics[scale = 0.95]{fig5-eps-converted-to.pdf} \caption{Registration test on a sparse point cloud from a spinning 2D lidar. Red and blue points indicate the two frames, respectively. (a) Two frames of data before alignment. (b) Result of GICP. Although the points are drawn closer, the walls and columns are misaligned.
(c) GP-SLAM+ outputs the correct result.} \label{fig:5} \end{figure} Second, we check the impact of our modifications to the registration strategy. We choose a frame of range data collected by a Velodyne VLP-16 from experiment C as the target frame and set an initial transformation error (1 m in translation and 5 degrees in rotation around the $z$-axis) on it to form the source frame. Then these two frames are aligned using the original and current registration methods. The root mean square error (RMSE) of the distances between the closest points from the two frames is used as the convergence criterion. As shown in Fig. \ref{fig:6}, the result indicates that the registration part of our method is significantly improved. \begin{figure} \centering \includegraphics[scale = 0.45]{fig7-eps-converted-to.pdf} \caption{Comparison of convergence speed between the original and current registration methods. The RMSE between the closest point pairs from the target frame and the source frame after each iteration is used as the convergence criterion. } \label{fig:6} \end{figure} \section{Map Building} The map represents the accumulation of historical information. We build it incrementally, again making use of the uncertainty and the convenient data association approach. The map is initialized by the first frame after map reconstruction. Each following current frame $P_{t}^{W}$ is fused into $Q_{t-1}^{W}$ to form the updated map $Q_{t}^{W}$. In detail, there are three different cases: newly explored cells or newly built layers are added to $Q_{t-1}^{W}$ directly; in overlapping cells, two layers with the same direction are fused by a recursive least-squares method.
For two samples $\{ p_{cur}, q_{map}\}$ sharing the same test location from these two layers, we obtain the updated sample $q_{upd}$ by: \begin{equation} \begin{aligned} \sigma_{upd}^{2}=\frac{\sigma_{map}^{2} \sigma_{cur}^{2}}{\sigma_{map}^{2}+\sigma_{cur}^{2}}, \end{aligned} \end{equation} \begin{equation} \begin{aligned} f_{upd}=\frac{\sigma_{{map}}^{2} f_{cur}+\sigma_{cur}^{2} f_{{map}}}{\sigma_{{map}}^{2}+\sigma_{cur}^{2}}, \end{aligned} \end{equation} where $f_{upd}$ and $\sigma_{upd}^{2}$ refer to the prediction and variance of the updated sample. In the last case, two overlapping cells in both frames contain only raw data, implying that the raw data were too sparse; we then accumulate the data and conduct the reconstruction. Since we only update the predictions of the samples, and their test locations are fixed, the dimension of the map state in each voxel cell is prevented from exceeding the number of test locations as the SLAM process unfolds. \section{Two-Thread Framework} Although the core workflow can complete low-drift odometry and dense mapping independently, when an IMU or multi-core hardware is available, it can be extended to the full system. The IMU prediction can compensate for motion distortion and provide an initial guess $\hat{T}_{L, t}^{W}$ for scan registration. Subsequently, the result of scan registration is fed back to rectify the bias. The two-thread architecture can further enhance the mapping quality and decrease the drift, especially in large-scale scenarios. The full architecture is shown in Fig. \ref{fig:7}. This framework is inspired by \cite{zhang2014loam}. However, in contrast to it, both threads in our system use the same scan-to-map strategy described in Sections \RNum{3}-\RNum{5}, so the state estimation produced by our core workflow achieves higher fidelity than the odometry thread in \cite{zhang2014loam}.
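The per-sample map update of Section V above is plain inverse-variance weighting and can be sketched in a few lines (Python; scalar predictions, illustrative only):

```python
def fuse_sample(f_map, var_map, f_cur, var_cur):
    """Recursive least-squares fusion of two Gaussian predictions that
    share a test location: the updated variance is the harmonic
    combination of the two variances, and the updated prediction weights
    each value by the other's variance (inverse-variance weighting)."""
    s = var_map + var_cur
    var_upd = var_map * var_cur / s
    f_upd = (var_map * f_cur + var_cur * f_map) / s
    return f_upd, var_upd
```

Fusing two equally uncertain samples returns their average with half the variance, while a confident map sample dominates a noisy new one; only $(f, \sigma^2)$ at each fixed test location are overwritten, which is why the map state dimension stays bounded.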
\begin{figure} \centering \includegraphics[scale = 0.37]{kuangtu3-eps-converted-to.pdf} \caption{Architecture of the full system. The central block is the core workflow. The IMU and the two-thread framework are optional, as the core workflow can finish odometry and mapping independently. } \label{fig:7} \end{figure} More precisely, after the core workflow has processed several sequential frames of point clouds, the aligned points and the relative transformation are sent to the refinement thread. Since the aggregated point cloud is denser, the variances of the samples are smaller, the GP map reconstruction reveals the real structure of the environment better, and there are more valid constraints. As the refinement thread operates at a lower frequency (2 Hz in our implementation), the registration module in this thread also executes more iterations than that in the core workflow to obtain more accurate odometry results. The transformations from these modules are integrated. \section{Experiments} We conducted several experiments to evaluate the performance of our system from different aspects and compare it with two state-of-the-art methods according to the scenarios. The algorithm is implemented in C++ based on ROS (Robot Operating System) in Linux and runs on an Intel NUC computer with a 2.7 GHz i7-8559U CPU. We test data from two custom types of lightweight sensors, a spinning 2D Hokuyo UTM-30LX-EW lidar in experiment A, and a Velodyne VLP-16 3D lidar in the others. Fig. \ref{fig:8} shows the sensor configurations. The main parameters in our algorithm are the side length of the cell $a$ and the interval $r$ between the test locations. They were both set mainly according to the scale of the scenario. In detail, for the small indoor test in experiment B(a), $a$ = 0.4 m and $r$ = 0.4/6 m. In the larger parking garage in experiment A, $a$ = 1.5 m and $r$ = 0.25 m.
In the outdoor tests, $a$ = 1.8 m and $r$ = 0.3 m. When comparing with the feature-based methods, the resolution of the feature points in their maps is set identical to the interval between our test locations. A video attachment presenting the experiments can be found online\footnote{https://www.youtube.com/watch?v=2nRJThK0hCw}. \begin{figure} \centering \includegraphics[scale = 0.95]{fig10-eps-converted-to.pdf} \caption{Sensor configurations in the experiments. (a) A MAV with a spinning 2D lidar \cite{droeschel2016multilayered} in experiment A. (b) A MAV with a 3D lidar and an onboard computer in the outdoor test in experiment B. (c) A 3D lidar and an IMU on a passenger vehicle in experiment C. } \label{fig:8} \end{figure} \subsection{Registering Sparse Point Clouds} We use a data set collected by a spinning 2D lidar mounted on a micro aerial vehicle (MAV) \cite{droeschel2016multilayered} in a parking garage to demonstrate the robustness of our method when faced with sparse point clouds. The data set contains 200 frames of 3D data assembled from 2D laser scans with the aid of visual odometry. The low-resolution point clouds are particularly sparse. Thus, the registration becomes challenging for ICP methods, as shown in Section \RNum{4}-C. The overall trajectory length is 73 m. The mapping result of our method is shown in Fig. \ref{fig:9}. The map recovers the structure of the garage, and the walls show little distortion. The dense and uniformly distributed point-cloud-like map depicts rich details inside the building. By contrast, as shown in Fig. 18(c) of \cite{droeschel2016multilayered}, GICP produces a distorted map even with graph optimization and registration against a local dense map. \begin{figure} \centering \includegraphics[scale = 0.95]{fig11-eps-converted-to.pdf} \caption{Map generated by GP-SLAM+ with a spinning 2D lidar from the ``Parking garage'' data set. The map depicts rich details and recovers the walls without distortion.
Points are colored according to height. } \label{fig:9} \end{figure} \subsection{Evaluation of the Core Workflow} In the second part of the experiments, we test the performance of the core workflow in our system without an IMU. Here, we use one open-access method, A-LOAM\footnote{https://github.com/HKUST-Aerial-Robotics/A-LOAM}, as the benchmark, which is an advanced implementation of LOAM \cite{zhang2014loam}. \subsubsection{Accuracy of State Estimation} We evaluate the accuracy of state estimation against the ground truth recorded by an OptiTrack motion capture system. The range data were collected by a hand-held Velodyne VLP-16 lidar at a walking speed of 0.35 m/s in a room. The overall length of the trajectory is 53 m. The trajectories estimated by both methods are shown in Fig. \ref{fig:10}. Both GP-SLAM+ and A-LOAM yield relatively precise pose estimation. For quantitative comparison, we align the trajectories with the ground truth and calculate the average translation error. As reported in Table \ref{tab:1}, our method produces accuracy in state estimation comparable to A-LOAM. \subsubsection{Quality of Mapping Result} We mounted the sensor horizontally on a MAV to complete an outdoor mapping task. With the sensor suite shown in Fig. \ref{fig:8}(b), we finished this mapping task onboard. The plaza is surrounded by buildings, and its scale is 150$\times$120 m. During this experiment, the MAV took off from the north part and finally landed on the south part after about one and a half circuits. The length of the trajectory is about 200 m. \begin{figure} \centering \includegraphics[scale = 0.43]{fig12_final_bold-eps-converted-to.pdf} \caption{Overhead view of the trajectories produced by A-LOAM and GP-SLAM+ overlaid with the ground truth in the indoor test.} \label{fig:10} \end{figure} \begin{table}[] \centering \caption{Evaluation of State Estimation and Mapping} \label{tab:1} \begin{tabular}{@{}lll@{}} \toprule & A-LOAM & GP-SLAM+ \\ \midrule Avg.
transl. error (m) & 0.0175 & {0.0156} \\ MME & 1.5422 & {1.2014} \\ \bottomrule \end{tabular} \end{table} The mapping result of the core workflow in GP-SLAM+ is presented in Fig. \ref{fig:1}. The map shows rich details, including trees and sculptures. When overlaid on the satellite image (Fig. \ref{fig:11}), the map exhibits good alignment with it, and the maximal gap, measured manually, is less than 1 m. The map contains no multi-wall phenomenon, which demonstrates its accuracy. Furthermore, we check the aggregation of the registered raw point clouds from both methods in the partial views in Fig. \ref{fig:11}(b) and (c). We use the mean map entropy (MME) as the criterion to evaluate the consistency of the registered raw data, with the tool from \cite{razlaw2015evaluation}. The searching radius in the tool is set to 1.5 m in this outdoor scenario. As listed in Table \ref{tab:1}, our method outputs a smaller entropy, which means the point cloud registered by our method is sharper than that of A-LOAM. At the end of this task, the MAV climbed up and down to model a building. The structures of the floors on the wall are similar, with few vertical features, and the number of valid measurements decreases at the top of the path. Therefore, it forms a degenerate scene for lidar-based methods. As shown in Fig. \ref{fig:12}, A-LOAM fails to recover the motion and the map becomes blurred, while our method still yields reliable odometry and a consistent map. The reason is that the feature extraction strategy is prone to losing more valid information than our approach. \begin{figure} \centering \includegraphics[scale = 0.95]{fig13-eps-converted-to.pdf} \caption{Qualitative analysis of the mapping result in the aerial test. Points are colored according to height. (a) Map produced by GP-SLAM+ overlaid on the satellite image. (b) and (c) are partial views of the aggregated raw point clouds of A-LOAM and GP-SLAM+, respectively.
The map of A-LOAM contains the multi-wall phenomenon.} \label{fig:11} \end{figure} \begin{figure} \centering \includegraphics[scale = 0.95]{fig14-eps-converted-to.pdf} \caption{Comparison between A-LOAM and GP-SLAM+ in a degenerate scene. Points are colored according to height. The white curve indicates the estimated trajectory of the MAV. (a) A-LOAM fails to track the motion due to the lack of features and the map becomes blurred. (b) GP-SLAM+ produces a consistent mapping result. } \label{fig:12} \end{figure} \subsubsection{Efficiency} Since odometry alone cannot yield a consistent map, here we compare the efficiency of the scan-to-map registration process in both methods. During the aerial test, our core workflow completes both odometry and mapping in 73 ms for each frame, whereas the mapping thread in A-LOAM takes 224 ms per step. Note that the interval between range data frames is 100 ms. The mapping thread in A-LOAM automatically drops data that it cannot process in time; when this occurs, those frames are only processed in the odometry thread. Our method processes all 3842 frames of range data, while the mapping thread in A-LOAM processes only 1774 of them. Thus, our method achieves better real-time performance. To assess efficiency further, we break down the time consumption of both methods into four main modules: preprocessing, matching, alignment, and map building. As listed in Table \ref{tab:3}, preprocessing, which contains the GP map reconstruction, is the main computational burden of GP-SLAM+. Given $n_{cur}$ training points in $n_{cell}$ cells in the current frame, the computational time complexity of GP map reconstruction is ${O}\left({n}_{cur}^{3} / {n}_{cell}^{2}\right)$, where $n_{cur}$ is typically one order of magnitude larger than $n_{cell}$. $n_{cur}$ is further reduced by our principled down-sampling filter.
We exploit the evenly distributed property of the samples in the matching and map-building processes so that they can be finished in ${O}\left(n_{cur}\right)$ time. For ICP-based methods, the main cost is the search for closest points. Although this process is accelerated by a Kd-tree, building this data structure still costs ${O}\left( {n}_{{map}} \log {n}_{{map}}\right)$, and the entire search takes ${O}\left( {n}_{{cur}} \log {n}_{{map}}\right)$ \cite{bentley1975multidimensional}. In lidar-based SLAM systems, to restrain pose drift, the map is usually denser, and $n_{map}$ can be one or more orders of magnitude larger than $n_{cur}$. For instance, in this outdoor test, the average $n_{map}$ and $n_{cur}$ in A-LOAM are on the order of $10^6$ and $10^4$, respectively. Therefore, compared with ICP-based methods, our method employs a different strategy that focuses on the preprocessing step. \begin{table}[] \centering \caption{Computation Time Break-down of Modules in Mapping} \label{tab:3} \begin{tabular}{@{}llll@{}} \toprule \multicolumn{2}{l}{Method} & A-LOAM & GP-SLAM+ \\ \midrule \multirow{5}{*}{\begin{tabular}[c]{@{}l@{}}Avg. time cost in a \\ mapping frame (ms)\end{tabular}} & Preprocessing & 5 & {58} \\ & Matching & 134 & {1} \\ & Alignment & 36 & {13} \\ & Map building & 49 & {1} \\ & All & 224 & {73} \\ \bottomrule \end{tabular} \end{table} \begin{table}[] \centering \caption{Accuracy of Pose Estimation in Large-Scale Test} \label{tab:4} \begin{tabular}{@{}llll@{}} \toprule Method & \begin{tabular}[c]{@{}l@{}}LeGO-\\ LOAM\end{tabular} & \begin{tabular}[c]{@{}l@{}}GP-\\ SLAM+\end{tabular} & \begin{tabular}[c]{@{}l@{}}Full GP-\\ SLAM+\end{tabular} \\ \midrule Avg. transl. error in x-y (m) & 7.355 & 5.753 & 4.032 \\ Final elevation error (m) & 42.136 & 5.561 & 0.178 \\ \bottomrule \end{tabular} \vspace{-0.4cm} \end{table} \subsection{Evaluation of the Full System} Finally, we test the full system in a large-scale task.
The VLP-16 lidar, together with an Xsens MTi-610 IMU, was mounted on a passenger vehicle. The ground truth was provided by an RTK-GPS. The vehicle traveled 2.1 km on a campus at an average speed of 2.7 m/s. Since A-LOAM does not utilize IMU information, we choose LeGO-LOAM \cite{shan2018lego} as the baseline for a fair comparison. It achieves higher efficiency than the original LOAM and is optimized for ground applications, which also means it is not directly suitable for the aerial experiment B. This scenario includes urban and unstructured environments. We overlay the mapping result produced by our refinement thread with the satellite image in Fig. \ref{fig:13}. Our method produces a coherent map. To visualize the drift, we align the first 30\% of the trajectories produced by the two methods with the ground truth and draw them in Fig. \ref{fig:14}. Due to occlusion from buildings or trees, the RTK-GPS signal is unavailable in the southwest part. In those areas, the accuracy can be demonstrated by the map and satellite image in Fig. \ref{fig:13}. For a quantitative evaluation, we align the entire trajectories with the ground truth. Then, in Table \ref{tab:4}, we report the average translation error in the x-y plane and the elevation difference when the vehicle returned to the start point. As the table shows, the core workflow of our method accumulates less pose drift than LeGO-LOAM, and the refinement thread further enhances the performance, especially in terms of the elevation error. \begin{figure} \centering \includegraphics[scale = 1.00]{fig15-eps-converted-to.pdf} \caption{Mapping result of the full GP-SLAM+ in the large-scale test overlaid on the satellite image. The overall run is 2.1 km and the average vehicle speed is 2.7 m/s. Points are colored according to height.
} \label{fig:13} \end{figure} \begin{figure} \centering \includegraphics[scale = 0.45]{fig16_5_2-eps-converted-to.pdf} \caption{Overhead view of the trajectories in the large-scale test. It shows the trajectories from the core workflow (green) and the refinement thread (red) in GP-SLAM+, LeGO-LOAM (blue) and the ground truth (black). } \label{fig:14} \end{figure} \section{Discussion and Future Work} The robustness of our method when registering sparse point clouds mainly derives from the GP map reconstruction, which is devoted to modeling local surfaces rather than focusing on points or features separately. The resulting maps show that the structure of areas sparsely covered by the laser can be depicted clearly by our map-building approach. The evenly distributed samples enable our core workflow to accomplish the scan-to-map registration in real time. By contrast, the two baselines drop data in their mapping threads. Since scan-to-map registration produces more precise estimates than scan-to-scan registration, we believe this efficiency is one of the reasons our method produced more accurate pose estimation. These two main advantages of our method, robustness with sparse point clouds and efficiency, were not obvious in small spaces (e.g., the room in experiment B-a) but were significant in the outdoor tests. Although we use a filter in the GP map reconstruction process, the principled down-sampling mainly skips redundant points arising from the sweeping mechanism. Our work demonstrates the advantages of using spatial GPs to model structure in a SLAM system. It also opens up the possibility of applying various kernel-learning techniques for Gaussian processes that have been studied in the machine-learning literature. We will further explore them to obtain better performance. For instance, the accuracy of the model can be refined if online learning of the kernel function is employed, as in \cite{plagemann2008learning}.
Moving into a Hilbert space is also promising, as claimed in some works \cite{doi:10.1177/0278364916684382}. Besides, we will investigate the impact of denser range data, such as that produced by the 64-channel lidar in the KITTI benchmark. \bibliographystyle{IEEEtran}
\section{Introduction} Place/Transition Petri nets with inhibitor arcs (PTI nets, for short), originally introduced in \cite{FA73}, are a well-known (see, e.g., \cite{tcsinib,Koutny,Pet81}), Turing-complete (as proved first by Agerwala in \cite{ager-pti}), distributed model of computation, largely exploited, e.g., for modeling systems with priorities \cite{Hack}, for performance evaluation of distributed systems \cite{ajmone} and to provide the $\pi$-calculus \cite{MPW,SW} with a net semantics \cite{BG09}. As finite PTI nets constitute a Turing-complete model of computation, essentially all the properties of interest are undecidable, notably the reachability problem, and so even termination: it is undecidable whether a deadlock marking is reachable from the initial one. Also interleaving bisimulation equivalence is undecidable for finite PTI nets, as it is already undecidable \cite{Jan95} on the subclass of finite P/T nets \cite{Reisig}. Similarly, one can prove that well-known truly-concurrent behavioral equivalences, such as {\em fully-concurrent} bisimilarity \cite{BDKP91}, are also undecidable \cite{Esp98} for finite PTI nets. Despite this, we show that it is possible to define a {\em sensible} behavioral equivalence which is actually {\em decidable} on finite PTI nets. This equivalence, which we call {\em pti-place bisimilarity}, is a conservative extension of {\em place bisimilarity on finite P/T nets}, introduced in \cite{ABS91} as an improvement of {\em strong bisimulation} \cite{Old} (a relation proposed by Olderog in \cite{Old} on safe nets, which fails to induce an equivalence relation), and recently proved decidable in \cite{Gor21}. Place bisimilarity on finite P/T nets is an equivalence over markings, based on relations over the {\em finite set of net places}, rather than over the (possibly infinite) set of net markings.
This equivalence is very natural and intuitive: as a place can be interpreted as a sequential process type (and each token in this place as an instance of a sequential process of that type), a place bisimulation states which kinds of sequential processes (composing the distributed system represented by the finite P/T net) are to be considered as equivalent. Moreover, this equivalence does respect the causal behavior of P/T nets, as van Glabbeek proved in \cite{G15} that it is slightly finer than {\em structure preserving bisimilarity} \cite{G15}, in turn slightly finer than {\em fully-concurrent bisimilarity} \cite{BDKP91}. We extend this idea in order to be applicable to PTI nets. Informally, a binary relation $R$ over the set $S$ of places is a {\em pti-place bisimulation} if for all markings $m_1$ and $m_2$ which are {\em bijectively} related via $R$ (denoted by $(m_1, m_2) \in R^\oplus$, where $R^\oplus$ is called the {\em additive closure} of $R$), if $m_1$ can perform transition $t_1$, reaching marking $m_1'$, then $m_2$ can perform a transition $t_2$, reaching $m_2'$, such that \begin{itemize} \item the pre-sets of $t_1$ and $t_2$ are related by $R^\oplus$, the label of $t_1$ and $t_2$ is the same, the post-sets of $t_1$ and $t_2$ are related by $R^\oplus$, and also $(m_1', m_2') \in R^\oplus$, as required by a place bisimulation \cite{ABS91,Gor21}, but additionally it is required that \item the inhibiting sets of $t_1$ and $t_2$ are related by $R^\oplus$ and that, whenever $(s, s') \in R$, $s$ belongs to the inhibiting set of $t_1$ if and only if $s'$ belongs to the inhibiting set of $t_2$; \end{itemize} and symmetrically if $m_2$ moves first. Two markings $m_1$ and $m_2$ are pti-place bisimilar, denoted by $m_1 \sim_p m_2$, if a pti-place bisimulation $R$ exists such that $(m_1, m_2) \in R^\oplus$. 
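To make these two clauses concrete, the following toy sketch (our own illustration: the place names, the transition pair, and the brute-force bijection search are hypothetical, and the search is adequate only for tiny markings) checks the transfer conditions for a single pair of transitions, with markings represented as multisets of places.

```python
from itertools import permutations
from collections import Counter

def related(m1, m2, R):
    # (m1, m2) in the additive closure of R: some bijection pairing the
    # tokens of m1 with those of m2 so that every pair lies in R.
    e1, e2 = list(m1.elements()), list(m2.elements())
    if len(e1) != len(e2):
        return False
    return any(all((a, b) in R for a, b in zip(e1, p))
               for p in permutations(e2))

def transfer_ok(t1, t2, R):
    # A transition is (pre-set, inhibiting set, label, post-set).
    pre1, inh1, l1, post1 = t1
    pre2, inh2, l2, post2 = t2
    # First clause: same label, pre-sets and post-sets related by R-plus.
    first = l1 == l2 and related(pre1, pre2, R) and related(post1, post2, R)
    # Second (PTI) clause: inhibiting sets related by R-plus, and R never
    # relates an inhibiting place of t1 with a non-inhibiting place of t2.
    second = (related(Counter(inh1), Counter(inh2), R) and
              all((s in inh1) == (s2 in inh2) for (s, s2) in R))
    return first and second

R = {("s1", "r1"), ("s2", "r2")}
t1 = (Counter({"s1": 1}), {"s2"}, "a", Counter({"s1": 1}))
t2 = (Counter({"r1": 1}), {"r2"}, "a", Counter({"r1": 1}))
print(transfer_ok(t1, t2, R))  # True
```

Dropping the inhibiting place "r2" from $t_2$ makes the second clause, and hence the whole check, fail.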
We prove that pti-place bisimilarity is an equivalence, but it is not coinductive, as the union of pti-place bisimulations may not be a pti-place bisimulation; so, in general, there is no largest pti-place bisimulation, but rather many maximal pti-place bisimulations. In fact, pti-place bisimilarity is the relation on markings given by the union of the additive closures of the maximal pti-place bisimulations. We also prove that $\sim_p$ is sensible, as it respects the causal semantics of PTI nets. As a matter of fact, following the approach in \cite{BP99,BP00}, we define a novel, process-oriented, bisimulation-based behavioral semantics for PTI nets, called {\em causal-net bisimilarity}, and we prove that this is slightly coarser than pti-place bisimilarity. The other main contribution of this paper is to show that $\sim_p$ is decidable for finite PTI nets. As a place relation $R \subseteq S \times S$ is finite if the set $S$ of places is finite, there are finitely many place relations for a finite net. We can list all these place relations, say $R_1, R_2, \ldots, R_n$. It is possible to decide whether $R_i$ is a pti-place bisimulation by checking two {\em finite} conditions over a {\em finite} number of marking pairs: this is a non-obvious observation, as a pti-place bisimulation requires that the pti-place bisimulation conditions hold for the infinitely many pairs $(m_1, m_2)$ belonging to $R_i^\oplus$. Hence, to decide whether $m_1 \sim_p m_2$, it is enough to check, for $i = 1, \ldots, n$, whether $R_i$ is a pti-place bisimulation and, in such a case, whether $(m_1, m_2) \in R_i^\oplus$. The paper is organized as follows. \cref{def-sec} recalls the basic definitions about PTI nets, including their causal semantics. \cref{iplace-sec} deals with pti-place bisimilarity, shows that it is an equivalence relation, that it is not coinductive, and that it is slightly finer than causal-net bisimilarity. \cref{decid-iplace-sec} shows that $\sim_p$ is decidable.
Finally, \cref{conc-sec} discusses some related literature and future research. \section{Basic definitions about P/T nets and PTI nets}\label{def-sec} \begin{definition}\label{multiset}{\bf (Multiset)}\index{Multiset} Let ${\mathbb N}$ be the set of natural numbers. Given a finite set $S$, a {\em multiset} over $S$ is a function $m: S \rightarrow{\mathbb N}$. The {\em support} set $\mathit{dom}(m)$ of $m$ is $\{ s \in S \mid m(s) \neq 0\}$. The set of all multisets over $S$, denoted by ${\mathcal M}(S)$, is ranged over by $m$. We write $s \in m$ if $m(s)>0$. The {\em multiplicity} of $s$ in $m$ is given by the number $m(s)$. The {\em size} of $m$, denoted by $|m|$, is the number $\sum_{s\in S} m(s)$, i.e., the total number of its elements. A multiset $m$ such that $\mathit{dom}(m) = \emptyset$ is called {\em empty} and is denoted by $\theta$. We write $m \subseteq m'$ if $m(s) \leq m'(s)$ for all $s \in S$. {\em Multiset union} $\_ \oplus \_$ is defined as follows: $(m \oplus m')(s)$ $ = m(s) + m'(s)$. {\em Multiset difference} $\_ \ominus \_$ is defined as follows: $(m_1 \ominus m_2)(s) = max\{m_1(s) - m_2(s), 0\}$. The {\em scalar product} of a number $j$ with $m$ is the multiset $j \cdot m$ defined as $(j \cdot m)(s) = j \cdot (m(s))$. By $s_i$ we also denote the multiset with $s_i$ as its only element. Hence, a multiset $m$ over $S = \{s_1, \ldots, s_n\}$ can be represented as $k_1\cdot s_{1} \oplus k_2 \cdot s_{2} \oplus \ldots \oplus k_n \cdot s_{n}$, where $k_j = m(s_{j}) \geq 0$ for $j= 1, \ldots, n$. 
\end{definition} \begin{definition}\label{pt-net-def}{\bf (Place/Transition Petri net)} A labeled, finite {\em Place/Transition} Petri net (P/T net for short) is a tuple $N = (S, A, T)$, where \begin{itemize} \item $S$ is the finite set of {\em places}, ranged over by $s$ (possibly indexed), \item $A$ is the finite set of {\em labels}, ranged over by $\ell$ (possibly indexed), and \item $T \subseteq ({\mathcal M}(S) \setminus \{\theta\}) \times A \times ({\mathcal M}(S) \setminus \{\theta\})$ is the finite set of {\em transitions}, ranged over by $t$ (possibly indexed). \end{itemize} Given a transition $t = (m, \ell, m')$, we use the notation: \begin{itemize} \item $\pre t$ to denote its {\em pre-set} $m$ (which cannot be empty) of tokens to be consumed; \item $l(t)$ for its {\em label} $\ell$, and \item $\post t$ to denote its {\em post-set} $m'$ (which cannot be an empty multiset) of tokens to be produced. \end{itemize} Hence, transition $t$ can be also represented as $\pre t \deriv{l(t)} \post t$. We also define the {\em flow function} ${\mbox flow}: (S \times T) \cup (T \times S) \rightarrow {\mathbb N}$ as follows: for all $s \in S$, for all $t \in T$, ${\mbox flow}(s,t) = \pre{t}(s)$ and ${\mbox flow}(t,s) = \post{t}(s)$. We will use $F$ to denote the {\em flow relation} $\{(x,y) \mid x,y \in S \cup T \, \wedge \, {\mbox flow}(x,y) > 0\}$. Finally, we define pre-sets and post-sets also for places as follows: $\pre s = \{t \in T \mid s \in \post t\}$ and $\post s = \{t \in T \mid s \in \pre t\}$. \end{definition} \begin{definition}\label{cpt-net-def} {\bf (Place/Transition net with inhibitor arcs)} A finite Place/Transition net {\em with inhibitor arcs} (PTI net for short) is a tuple $N = (S, A, T, I)$, where \begin{itemize} \item $(S, A, T)$ is a finite P/T net; \item $I \subseteq S \times T$ is the {\em inhibiting relation}. 
\end{itemize} Given a transition $t \in T$, we denote by $\prei t$ its {\em inhibiting set} $\{ s \in S \mid (s, t) \in I \}$ of places to be tested for absence of tokens. Hence, a transition $t$ can be also represented as $(\pre t, \prei t) \deriv{l(t)} \post t$. \end{definition} We use the standard graphical convention for Petri nets. In particular, a pair $(s, t)$ in the inhibiting relation $I$ is graphically represented by an arc from $s$ to $t$ ending with a small circle on the transition side. \begin{definition}\label{pti-net-system}{\bf (Marking, PTI net system)} A {\em PTI net system} $N(m_0)$ is a tuple $(S, A, T, I,$ $m_{0})$, where $(S,A, T, I)$ is a PTI net and $m_{0}$ is a multiset over $S$, called the {\em initial marking}. We also say that $N(m_0)$ is a {\em marked} net. \end{definition} \begin{definition}\label{token-game} {\bf (Token game)} A transition $t $ is {\em enabled} at $m$, denoted $m[t\rangle$, if $\pre t \subseteq m$ and $\prei t \cap \mathit{dom} (m) = \emptyset$. The execution, or {\em firing}, of $t$ enabled at $m$ produces the marking $m' = (m \ominus \pre t) \oplus \post t$, written $m[t\rangle m'$. \end{definition} \begin{definition}\label{net-system}{\bf (Firing sequence, reachable marking, safe net)} A {\em firing sequence} starting at $m$ is defined inductively as follows: \begin{itemize} \item $m[\epsilon\rangle m$ is a firing sequence (where $\epsilon$ denotes an empty sequence of transitions) and \item if $m[\sigma\rangle m'$ is a firing sequence and $m' [t\rangle m''$, then $m [\sigma t\rangle m''$ is a firing sequence. \end{itemize} The set of {\em reachable markings} from $m$ is $[m\rangle = \{m' \mid \exists \sigma. m[\sigma\rangle m'\}$. A PTI system $N = $ $(S, A, T, I, m_0)$ is {\em safe} if for each marking $m \in [m_0\rangle$, we have that $m(s) \leq 1$ for all $s \in S$. 
\end{definition} \subsection{Causal semantics for P/T nets and PTI nets} We outline some definitions about the causal semantics of P/T nets, adapted from the literature (cf., e.g., \cite{BDKP91,G15,GR83,Old}). \begin{definition}\label{acyc-def}{\bf (Acyclic net)} A P/T net $N = (S, A, T)$ is {\em acyclic} if its flow relation $F$ is acyclic (i.e., $\not \exists x$ such that $x F^+ x$, where $F^+$ is the transitive closure of $F$). \end{definition} The concurrent semantics of a marked P/T net is defined by a class of particular acyclic safe nets, where places are not branched (hence they represent a single run) and all arcs have weight 1. This kind of net is called {\em causal net}. We use the name $C$ (possibly indexed) to denote a causal net, the set $B$ to denote its places (called {\em conditions}), the set $E$ to denote its transitions (called {\em events}), and $L$ to denote its labels. \begin{definition}\label{causalnet-def}{\bf (Causal P/T net)} A causal net is a finite marked net $C(\mathsf{m}_0) = (B,L, E, \mathsf{m}_0)$ satisfying the following conditions: \begin{enumerate} \item $C$ is acyclic; \item $\forall b \in B \; \; | \pre{b} | \leq 1\, \wedge \, | \post{b} | \leq 1$ (i.e., the places are not branched); \item $ \forall b \in B \; \; \mathsf{m}_0(b) = \begin{cases} 1 & \mbox{if $\; \pre{b} = \emptyset$}\\ 0 & \mbox{otherwise;} \end{cases}$\\ \item $\forall e \in E \; \; \pre{e}(b) \leq 1 \, \wedge \, \post{e}(b) \leq 1$ for all $b \in B$ (i.e., all the arcs have weight $1$). \end{enumerate} We denote by $Min(C)$ the set $\mathsf{m}_0$, and by $Max(C)$ the set $\{b \in B \mid \post{b} = \emptyset\}$. 
\end{definition} Note that any reachable marking of a causal net is a set, i.e., this net is {\em safe}; in fact, the initial marking is a set and, assuming by induction that a reachable marking $\mathsf{m}$ is a set and enables $e$, i.e., $\mathsf{m}[e\rangle \mathsf{m}'$, then also $\mathsf{m}' = (\mathsf{m} \ominus \pre{e}) \oplus \post{e}$ is a set, as the net is acyclic and because of the condition on the shape of the post-set of $e$ (weights can only be $1$). As the initial marking of a causal P/T net is fixed by its shape (according to item $3$ of \cref{causalnet-def}), in the following, in order to make the notation lighter, we often omit the indication of the initial marking (also in their graphical representation), so that the causal net $C(\mathsf{m}_0)$ is denoted by $C$. \begin{definition}\label{trans-causal}{\bf (Moves of a causal P/T net)} Given two causal nets $C = (B, L, E, \mathsf{m}_0)$ and $C' = (B', L, E', \mathsf{m}_0)$, we say that $C$ moves in one step to $C'$ through $e$, denoted by $C [e\rangle C'$, if $\; \pre{e} \subseteq Max(C)$, $E' = E \cup \{e\}$ and $B' = B \cup \post{e}$. \end{definition} \begin{definition}\label{folding-def}{\bf (Folding and Process)} A {\em folding} from a causal P/T net $C = (B, L, E, \mathsf{m}_0)$ into a P/T net system $N(m_0) = (S, A, T, m_0)$ is a function $\rho: B \cup E \to S \cup T$, which is type-preserving, i.e., such that $\rho(B) \subseteq S$ and $\rho(E) \subseteq T$, satisfying the following: \begin{itemize} \item $L = A$ and $\mathsf{l}(e) = l(\rho(e))$ for all $e \in E$; \item $\rho(\mathsf{m}_0) = m_0$, i.e., $m_0(s) = | \rho^{-1}(s) \cap \mathsf{m}_0 |$; \item $\forall e \in E, \rho(\pre{e}) = \pre{\rho(e)}$, i.e., $\rho(\pre{e})(s) = | \rho^{-1}(s) \cap \pre{e} |$ for all $s \in S$; \item $\forall e \in E, \, \rho(\post{e}) = \post{\rho(e)}$, i.e., $\rho(\post{e})(s) = | \rho^{-1}(s) \cap \post{e} |$ for all $s \in S$. 
\end{itemize} A pair $(C, \rho)$, where $C$ is a causal net and $\rho$ a folding from $C$ to a net system $N(m_0)$, is a {\em process} of $N(m_0)$. \end{definition} \begin{definition}\label{trans-process}{\bf (Moves of a P/T process)} Let $N(m_0) = (S, A, T, m_0)$ be a net system and let $(C_i, \rho_i)$, for $i = 1, 2$, be two processes of $N(m_0)$. We say that $(C_1, \rho_1)$ moves in one step to $(C_2, \rho_2)$ through $e$, denoted by $(C_1, \rho_1) \deriv{e} (C_2, \rho_2)$, if $C_1 [e\rangle C_2$ and $\rho_1 \subseteq \rho_2$. \end{definition} Following \cite{BP99,BP00}, we define here a possible causal semantics for PTI nets. In order to maintain the pleasant property that a process univocally determines the causal dependencies among its events, it is not enough to just enrich causal P/T nets with inhibitor arcs. Indeed, the {\em reason} why a condition is empty may influence the causal relation of events. To solve the problem, in \cite{BP99,BP00} inhibitor arcs are partitioned into two sets: {\em before} inhibitor arcs and {\em after} inhibitor arcs. If a condition is connected to an event by a before inhibitor arc, the event fires because the condition has not held yet; if they are connected by an after inhibitor arc, the event fires because the condition does not hold anymore. \begin{definition}\label{causal-pti-net-def}{\bf (Causal PTI net)} A causal PTI net is a tuple $C(\mathsf{m}_0) = (B,L, E, Y^{be}, Y^{af},$ $\mathsf{m}_0)$ satisfying the following conditions, denoting the flow relation of $C$ by $\mathsf{F}$: \begin{enumerate} \item $(B,L,E, \mathsf{m}_0)$ is a causal P/T net; \item $(B,L, E, Y^{be} \cup Y^{af}, \mathsf{m}_0)$ is a marked PTI net; \item {\em before} and {\em after} requirements are met, i.e. 
\begin{alphaenumerate} \item If $b \mathrel{Y^{be}} e$, then there exists $e' \in E$ such that $e' \mathrel{\mathsf{F}} b$, and \item If $b \mathrel{Y^{af}} e$, then there exists $e' \in E$ such that $b \mathrel{\mathsf{F}} e'$; \end{alphaenumerate} \item relation $\mathsf{F}\, \cup \prec_{af} \cup \prec_{be}$ is acyclic, where $\prec_{af} = \mathsf{F}^{-1} \circ Y^{af}$ and $\prec_{be} = (Y^{be})^{ -1} \circ \mathsf{F}^{-1}$. \end{enumerate} We denote by $Min(C)$ the set $\mathsf{m}_0$, and by $Max(C)$ the set $\{b \in B \mid \post{b} = \emptyset\}$. \end{definition} Relation $\prec_{af} \subseteq E \times E$ states that $e \prec_{af} e'$ if $e$ consumes the token in a place $b$ inhibiting $e'$: this is clearly a causal dependency. Instead, relation $\prec_{be} \subseteq E \times E$ states that $e \prec_{be} e'$ if $e'$ produces a token in a place $b$ inhibiting $e$: this is clearly a temporal precedence, because the two events can be causally independent, yet they cannot occur in any order, as if $e'$ occurs, then $e$ is disabled. \begin{definition}\label{folding-pti--def}{\bf (Folding and PTI process)} A {\em folding} from a causal PTI net $C = (B, L, E,$ $Y^{be}, Y^{af}, \mathsf{m}_0)$ into a PTI net system $N(m_0) = (S, A, T, I, m_0)$ is a function $\rho: B \cup E \to S \cup T$, which is type-preserving, i.e., such that $\rho(B) \subseteq S$ and $\rho(E) \subseteq T$, satisfying the following: \begin{itemize} \item $\rho$ is a P/T folding from $(B,L,E, \mathsf{m}_0)$ into $(S, A, T, m_0)$; \item for all $s \in S$ and $e \in E$, if $(s,\rho(e)) \in I$ then for all $b \in B$ such that $\rho(b) = s$, it holds $(b, e) \in Y^{be} \cup Y^{af} \cup \mathsf{F}^{-1}$, and \\ for all $b \in B$ and $e \in E$, if $(b, e) \in Y^{be} \cup Y^{af}$ then $(\rho(b), \rho(e)) \in I$. \end{itemize} A pair $(C, \rho)$, where $C$ is a causal PTI net and $\rho$ a folding from $C$ to a PTI net system $N(m_0)$, is a {\em PTI process} of $N(m_0)$. 
\end{definition} Each inhibitor arc in the causal net has a corresponding inhibitor arc in the net system. The only case where a condition $b$ is not connected to an event $e$ is when $b$ is in the post-set of $e$: as $b$ starts to hold only after $e$ occurs, the only possibility is to put a before arc. This would make the relation $\prec_{be}$ reflexive, invalidating item 4 of \cref{causal-pti-net-def}. However, since $b$ is in the post-set of $e$, we are sure that $e$ happens before $b$ is fulfilled, hence making the presence of a before inhibitor arc useless. For this reason, with the requirement $(b, e) \in Y^{be} \cup Y^{af} \cup \mathsf{F}^{-1}$, we ask for the presence of an inhibitor arc only if there exists no flow from $e$ to $b$. \begin{definition}\label{trans-pti-process}{\bf (Moves of a PTI process)} Let $N(m_0) = (S, A, T, I, m_0)$ be a PTI net system and let $(C_i, \rho_i)$, for $i = 1, 2$, be two PTI processes of $N(m_0)$, where $C_i = (B_i, L, E_i,$ $Y_i^{be}, Y_i^{af}, \mathsf{m}_0)$. We say that $(C_1, \rho_1)$ moves in one step to $(C_2, \rho_2)$ through $e$, denoted by $(C_1, \rho_1) \deriv{e} (C_2, \rho_2)$, if the following hold: \begin{itemize} \item $\pre{e} \subseteq Max(C_1)$, $E_2 = E_1 \cup \{ e \}$, $B_2 = B_1 \cup \post{e}$, $\rho_1 \subseteq \rho_2$, i.e., the P/T process of $(C_1, \rho_1)$ moves in one step through $e$ to the P/T process of $(C_2, \rho_2)$. \item Given two relations $\mathcal{B}$ and $\mathcal{A}$, defined as \begin{itemize} \item[-] $\forall b \in \post{e}$, $\forall e' \in E_1$ we have $\;b \mathrel{\mathcal{B}} e'\;$ if and only if $(\rho_2(b), \rho_2(e')) \in I$, \item[-] $\forall b \in B_2$ such that $\post{b} \neq \emptyset$, we have $\; b \mathrel{\mathcal{A}} e\;$ if and only if $(\rho_2(b), \rho_2(e)) \in I$, \end{itemize} we have $\{ b \in B_2 \mid b \mathrel{\mathcal{A}} e\} \cap Max(C_1) = \emptyset$. \item Finally, $Y_2^{be} = Y_1^{be} \cup \mathcal{B}$ and $Y_2^{af} = Y_1^{af} \cup \mathcal{A}$.
\end{itemize} \end{definition} The item $ \{ b \in B_2 \mid b \mathrel{\mathcal{A}} e\} \cap Max(C_1) = \emptyset$ models the fact that a transition can fire only if all its inhibiting places are free. Indeed, an event can fire only if its (so far known) inhibiting conditions are not maximal. Note that, by construction, before arcs can connect only new inhibiting conditions to past events and in particular we do not allow before arcs connecting a condition in the post-set of a newly added event $e$ with the event $e$ itself. Moreover, after arcs can only connect old inhibiting conditions to the new event $e$ and since $ \{ b \in B_2 \mid b \mathrel{\mathcal{A}} e\} \cap Max(C_1) = \emptyset$, the old inhibiting conditions cannot be in the pre-set of the newly added event $e$. Therefore, both relations $\prec_2^{be}$ and $\prec_2^{af}$ are acyclic, and since $\mathsf{F}_2$ is acyclic too, $(C_2, \rho_2)$ is truly a process of $N(m_0)$. \begin{example} Consider the three nets in \cref{fig:1}, where we use the graphical convention that {\em before} inhibitor arcs and {\em after} inhibitor arcs are represented by lines between a condition and an event: the former labeled by $b$, the latter labeled by $a$. The initial marking of $N$ is $m_0 = s_1 \oplus s_3$. The shape of a process generated by $N(m_0)$ may depend on the order of transitions in a given transition sequence. As a matter of fact, transition sequences containing the same transitions but in a different order may generate different processes, e.g. $C_1$ and $C_2$. Indeed, $C_1$ represents the transition sequence $t_1 \, t_3 \, t_2$, while $C_2$ represents the transition sequence $t_2 \, t_1 \, t_3$. Note that the underlying causal P/T net of these two processes is the same, but before and after inhibitor arcs are different. 
\end{example} \begin{figure} \centering \begin{tikzpicture}[ every place/.style={draw,thick,inner sep=0pt,minimum size=6mm}, every transition/.style={draw,thick,inner sep=0pt,minimum size=4mm}, bend angle=30, pre/.style={<-,shorten <=1pt,>=stealth,semithick}, post/.style={->,shorten >=1pt,>=stealth,semithick} ] \node (N) [label=$N)$]{}; \node (s1) [place, tokens=1] [right=0.35cm of N, label=above:$s_1$] {}; \node (t1) [transition] [below=0.55cm of s1, label=right:$t_1$] {}; \node (s2) [place] [below=0.55cm of t1, label=below:$s_2$] {}; \node (s3) [place, tokens=1] [right=0.55cm of s1, label=above:$s_3$] {}; \node (t2) [transition] [below=0.55cm of s3, label=right:$t_2$] {}; \node (s4) [place] [below=0.55cm of t2, label=below:$s_4$] {}; \node (t3) [transition] [left=0.35cm of s2, label=below:$t_3$] {}; \node (s5) [place] [left=0.35cm of t3, label=below:$s_5$] {}; \draw [->] (s1) to (t1); \draw [->] (t1) to (s2); \draw [-o] (s2) to (t2); \draw [->] (s3) to (t2); \draw [->] (t2) to (s4); \draw [->] (s2) to (t3); \draw [->] (t3) to (s5); \node (C1) [right={4.2cm} of N, label=$C_1)$]{}; \node (b1) [place] [right=0.35cm of C1, label=above:$b_1$] {}; \node (e1) [transition] [below=0.55cm of b1, label=right:$e_1$] {}; \node (b2) [place] [below=0.55cm of e1, label=below:$b_2$] {}; \node (b3) [place] [right=0.55cm of b1, label=above:$b_3$] {}; \node (e2) [transition] [below=0.55cm of b3, label=right:$e_2$] {}; \node (b4) [place] [below=0.55cm of e2, label=below:$b_4$] {}; \node (e3) [transition] [left=0.35cm of b2, label=below:$e_3$] {}; \node (b5) [place] [left=0.35cm of e3, label=below:$b_5$] {}; \draw [->] (b1) to (e1); \draw [->] (e1) to (b2); \draw [-o] (b2) to (e2) node[midway,right, rotate=0] {$a$}; \draw [->] (b3) to (e2); \draw [->] (e2) to (b4); \draw [->] (b2) to (e3); \draw [->] (e3) to (b5); \node (C2) [right={4.2cm} of C1, label=$C_2)$]{}; \node (b1) [place] [right=0.35cm of C2, label=above:$b_1$] {}; \node (e1)
[transition] [below=0.55cm of b1, label=right:$e_1$] {}; \node (b2) [place] [below=0.55cm of e1, label=below:$b_2$] {}; \node (b3) [place] [right=0.55cm of b1, label=above:$b_3$] {}; \node (e2) [transition] [below=0.55cm of b3, label=right:$e_2$] {}; \node (b4) [place] [below=0.55cm of e2, label=below:$b_4$] {}; \node (e3) [transition] [left=0.35cm of b2, label=below:$e_3$] {}; \node (b5) [place] [left=0.35cm of e3, label=below:$b_5$] {}; \draw [->] (b1) to (e1); \draw [->] (e1) to (b2); \draw [-o] (b2) to (e2) node[midway,right, rotate=0] {$b$}; \draw [->] (b3) to (e2); \draw [->] (e2) to (b4); \draw [->] (b2) to (e3); \draw [->] (e3) to (b5); \end{tikzpicture} \caption{A marked PTI net and two PTI causal nets corresponding to its two maximal processes.} \label{fig:1} \end{figure} We are now ready to introduce a novel behavioral relation for PTI nets, namely causal-net bisimulation, which is an interesting relation in its own right, as the induced equivalence, namely {\em causal-net bisimilarity}, on P/T nets coincides with {\em structure-preserving bisimilarity} \cite{G15}, and so it is slightly finer than {\em fully-concurrent bisimilarity} \cite{BDKP91}. However, since we conjecture that causal-net bisimilarity is undecidable (already on finite P/T nets), we will use this behavioral relation only for comparison with pti-place bisimilarity, showing that the latter is a finer, but decidable, approximation of the former. \begin{definition}\label{cn-bis-def}{\bf (Causal-net bisimulation)} Let $N = (S, A, T, I)$ be a PTI net.
A {\em causal-net bisimulation} is a relation $R$, composed of triples of the form $(\rho_1, C, \rho_2)$, where, for $i = 1, 2$, $(C, \rho_i)$ is a process of $N(m_{0i})$ for some $m_{0i}$, such that if $(\rho_1, C, \rho_2) \in R$ then \begin{itemize} \item[$i)$] $\forall t_1, C', \rho_1'$ such that $(C, \rho_1) \deriv{e} (C', \rho_1')$, where $\rho_1'(e) = t_1$, $\exists t_2, \rho_2'$ such that\\ $(C, \rho_2) \deriv{e} (C', \rho_2')$, where $\rho_2'(e) = t_2$, and $(\rho'_1, C', \rho'_2) \in R$; \item[$ii)$] symmetrically, $\forall t_2, C', \rho_2'$ such that $(C, \rho_2) \deriv{e} (C', \rho_2')$, where $\rho_2'(e) = t_2$, $\exists t_1, \rho_1'$ such that $(C, \rho_1) \deriv{e} (C', \rho_1')$, where $\rho_1'(e) = t_1$, and $(\rho'_1, C', \rho'_2) \in R$. \end{itemize} Two markings $m_{1}$ and $m_2$ of $N$ are cn-bisimilar, denoted by $m_{1} \sim_{cn} m_{2}$, if there exists a causal-net bisimulation $R$ containing a triple $(\rho^0_1, C^0, \rho^0_2)$, where $C^0$ contains no events and $\rho^0_i(Min( C^0)) = \rho^0_i(Max( C^0)) = m_i\;$ for $i = 1, 2$. \end{definition} If $m_1 \sim_{cn} m_2$, then these two markings have the same causal PTI nets, so that the executions originating from the two markings have the same causal dependencies (determined by $\mathsf{F}$ and $\prec_{af}$) and the same temporal dependencies (determined by $\prec_{be}$). Causal-net bisimilarity $\sim_{cn}$ is an equivalence relation (see \cref{causal-net-equivalence-sec}). \section{Pti-place bisimilarity}\label{iplace-sec} We now present pti-place bisimilarity, which conservatively extends {\em place bisimilarity} \cite{ABS91,Gor21} to the case of PTI nets. First, an auxiliary definition. 
\subsection{Additive closure and its properties} \begin{definition}\label{add-eq}{\bf (Additive closure)} Given a PTI net $N = (S, A, T, I)$ and a {\em place relation} $R \subseteq S \times S$, we define a {\em marking relation} $R^\oplus \, \subseteq \, {\mathcal M}(S) \times {\mathcal M}(S)$, called the {\em additive closure} of $R$, as the least relation induced by the following axiom and rule.\\ $\begin{array}{lllllllllll} \bigfrac{}{(\theta, \theta) \in R^\oplus} & \; & \; \bigfrac{(s_1, s_2) \in R \; (m_1, m_2) \in R^\oplus }{(s_1 \oplus m_1, s_2 \oplus m_2) \in R^\oplus } \\ \end{array}$ \end{definition} Note that two markings are related by $R^\oplus$ only if they have the same size; in fact, the axiom states that the empty marking is related to itself, while the rule, assuming by induction that $m_1$ and $m_2$ have the same size, ensures that $s_1 \oplus m_1$ and $s_2 \oplus m_2$ have the same size. \begin{proposition}\label{fin-k-add} For each relation $R \subseteq S \times S$, if $(m_1, m_2) \in R^\oplus$, then $|m_1| = |m_2|$. \end{proposition} Note also that the membership $(m_1, m_2) \in R^\oplus$ may be proved in several different ways, depending on the chosen order of the elements of the two markings and on the definition of $R$. For instance, if $R = \{(s_1, s_3),$ $(s_1, s_4),$ $(s_2, s_3), (s_2, s_4)\}$, then $(s_1 \oplus s_2, s_3 \oplus s_4) \in R^\oplus$ can be proved by means of the pairs $(s_1, s_3)$ and $(s_2, s_4)$, as well as by means of $(s_1, s_4), (s_2, s_3)$. An alternative way to define that two markings $m_1$ and $m_2$ are related by $R^\oplus$ is to state that $m_1$ can be represented as $s_1 \oplus s_2 \oplus \ldots \oplus s_k$, $m_2$ can be represented as $s_1' \oplus s_2' \oplus \ldots \oplus s_k'$ and $(s_i, s_i') \in R$ for $i = 1, \ldots, k$. 
In fact, a naive algorithm for checking whether $(m_1, m_2) \in R^\oplus$ would simply consider $m_1$ represented as $s_1 \oplus s_2 \oplus \ldots \oplus s_k$, and then would scan all the possible permutations of $m_2$, each represented as $s'_1 \oplus s'_2 \oplus \ldots \oplus s'_k$, to check that $(s_i, s_i') \in R$ for $i = 1, \ldots, k$. Of course, this naive algorithm is in $O(k!)$. \begin{example}\label{nsubtractive} Consider $R = \{(s_1, s_3),$ $(s_1, s_4), (s_2, s_4)\}$, which is not an equivalence relation. Suppose we want to check that $(s_1 \oplus s_2, s_4 \oplus s_3) \in R^\oplus$. If we start by matching $(s_1, s_4) \in R$, then we fail because the residual $(s_2, s_3)$ is not in $R$. However, if we permute the second marking to $s_3 \oplus s_4$, then we succeed because the required pairs $(s_1, s_3)$ and $(s_2, s_4)$ are both in $R$. {\mbox{ }\nolinebreak\hfill{$\Box$}} \end{example} Nonetheless, the problem of checking whether $(m_1, m_2) \in R^\oplus$ has polynomial time complexity because it can be considered as an instance of the problem of finding a perfect matching in a bipartite graph, where the nodes of the two partitions are the tokens in the two markings, and the edges are defined by the relation $R$. In fact, the construction of the bipartite graph takes $O(k^2)$ time (where $k = |m_1| = |m_2|$) and then the Hopcroft-Karp-Karzanov algorithm \cite{HK73} for computing the maximum matching has worst-case time complexity $O(h\sqrt{k})$, where $h$ is the number of edges in the bipartite graph ($h \leq k^2$); checking whether the maximum matching is perfect then amounts to checking that the size of the matching equals the number of nodes in each partition, i.e., $k$. Hence, in evaluating the complexity of the algorithm in Section \ref{decid-iplace-sec}, we assume that the complexity of checking whether $(m_1, m_2) \in R^\oplus$ is in $O(k^2 \sqrt{k})$.
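As a concrete illustration of the matching-based check just described, the following Python sketch decides whether $(m_1, m_2) \in R^\oplus$. It is only an illustrative reformulation, under our own representation choices (markings as lists of place names, $R$ as a set of pairs), and it uses a plain augmenting-path search instead of the full Hopcroft-Karp-Karzanov algorithm, so its worst case is $O(k^3)$ rather than $O(k^2\sqrt{k})$.

```python
def in_additive_closure(m1, m2, R):
    """Decide whether (m1, m2) belongs to the additive closure of the
    place relation R.  Markings m1, m2 are lists of place names
    (multisets); R is a set of pairs of place names.  The check is
    reduced to finding a perfect matching in the bipartite graph whose
    left/right nodes are the tokens of m1/m2 and whose edges connect
    R-related places."""
    if len(m1) != len(m2):  # related markings must have the same size
        return False
    # adjacency list: token i of m1 may be matched with token j of m2
    adj = [[j for j, s2 in enumerate(m2) if (s1, s2) in R] for s1 in m1]
    match = [None] * len(m2)  # match[j] = m1-token currently paired with j

    def augment(i, visited):
        # try to find an augmenting path starting from left token i
        for j in adj[i]:
            if j not in visited:
                visited.add(j)
                if match[j] is None or augment(match[j], visited):
                    match[j] = i
                    return True
        return False

    # the matching is perfect iff every token of m1 can be matched
    return all(augment(i, set()) for i in range(len(m1)))


# The example discussed above: the greedy choice (s1, s4) fails,
# but the matching (s1, s3), (s2, s4) is found by augmentation.
R = {("s1", "s3"), ("s1", "s4"), ("s2", "s4")}
print(in_additive_closure(["s1", "s2"], ["s4", "s3"], R))  # True
```

On the relation of Example \ref{nsubtractive}, the greedy attempt via $(s_1, s_4)$ fails, but the augmenting-path search recovers the matching $(s_1, s_3), (s_2, s_4)$, so the membership check succeeds.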
\begin{proposition}\label{add-prop1}\cite{Gor17b} For each place relation $R \subseteq S \times S$, the following hold: \begin{enumerate} \item If $R$ is an equivalence relation, then $R^\oplus$ is an equivalence relation. \item If $R_1 \subseteq R_2$, then $R_1^\oplus \subseteq R_2^\oplus$, i.e., the additive closure is monotone. \item If $(m_1, m_2) \in R^\oplus$ and $(m_1', m_2') \in R^\oplus$, then $(m_1 \oplus m_1', m_2 \oplus m_2') \in R^\oplus$, i.e., the additive closure is additive. \end{enumerate} \end{proposition} We now list some further, less obvious, properties of additively closed place relations that will be useful in the following. \begin{proposition}\label{add-prop2}\cite{Gor17b} For each family of place relations $R_i \subseteq S \times S$, the following hold: \begin{enumerate} \item $\emptyset^\oplus = \{(\theta, \theta)\}$, i.e., the additive closure of the empty place relation yields a singleton marking relation, relating the empty marking to itself. \item $(\mathcal{I}_S)^\oplus = \mathcal{I}_M$, i.e., the additive closure of the identity relation on places $\mathcal{I}_S = \{(s, s) \mid s \in S\}$ yields the identity relation on markings $\mathcal{I}_M = \{(m, m) \mid m \in {\mathcal M}(S)\}$. \item $(R^\oplus)^{-1} = (R^{-1})^\oplus$, i.e., the inverse of an additively closed relation $R$ equals the additive closure of its inverse $R^{-1}$. \item $(R_1 \circ R_2)^\oplus = (R_1^\oplus) \circ (R_2^\oplus)$, i.e., the additive closure of the composition of two place relations equals the composition of their additive closures. \end{enumerate} \end{proposition} \subsection{Pti-place bisimulation and its properties} We are now ready to introduce pti-place bisimulation, which is a non-interleaving behavioral relation defined over the net places. Note that for P/T nets, place bisimulation \cite{ABS91,Gor21} and pti-place bisimulation coincide because $I = \emptyset$.
\begin{definition}\label{def-pti-place-bis}{\bf (Pti-place bisimulation)} Let $N = (S, A, T, I)$ be a PTI net. A {\em pti-place bisimulation} is a relation $R\subseteq S \times S$ such that if $(m_1, m_2) \in R^\oplus$ then \begin{enumerate} \item $\forall t_1$ such that $m_1[t_1\rangle m'_1$, $\exists t_2$ such that $m_2[t_2\rangle m'_2$ and \begin{alphaenumerate} \item $(\pre{t_1}, \pre{t_2}) \in R^\oplus$, $(\prei{t_1}, \prei{t_2}) \in R^\oplus$, $(\post{t_1}, \post{t_2}) \in R^\oplus$, $l(t_1) = l(t_2)$, $(m'_1, m'_2) \in R^\oplus$, \item $\forall s, s' \in S. (s, s') \in R \Rightarrow (s \in \prei{t_1} \Leftrightarrow s' \in \prei{t_2})$. \end{alphaenumerate} \item $\forall t_2$ such that $m_2[t_2\rangle m'_2$, $\exists t_1$ such that $m_1[t_1\rangle m'_1$ and \begin{alphaenumerate} \item $(\pre{t_1}, \pre{t_2}) \in R^\oplus$, $(\prei{t_1}, \prei{t_2}) \in R^\oplus$, $(\post{t_1}, \post{t_2}) \in R^\oplus$, $l(t_1) = l(t_2)$, $(m'_1, m'_2) \in R^\oplus$, \item $\forall s, s' \in S. (s, s') \in R \Rightarrow (s \in \prei{t_1} \Leftrightarrow s' \in \prei{t_2})$. \end{alphaenumerate} \end{enumerate} Two markings $m_1$ and $m_2$ are {\em pti-place bisimilar}, denoted by $m_1 \sim_p m_2$, if there exists a pti-place bisimulation $R$ such that $(m_1, m_2) \in R^\oplus$. \end{definition} Conditions 1(b) and 2(b) make sure that the relation $R$ respects the inhibiting behavior of places. Thus, not only the sets $\prei{t_1}$ and $\prei{t_2}$ must be bijectively related, but also an inhibiting place for one of the two transitions cannot be related via $R$ to a non-inhibiting place for the other transition. \begin{example} Consider the PTI net $N_1$ in \cref{fig:2}, where the right part is a partial unfolding of the left one. The relation $R = \{ (s_1, s_1'), (s_2, s_2'), (s_3, s_3'), (s_4, s_4'), (s_2, s_5') \}$ is a pti-place bisimulation; and so, e.g., $2 \cdot s_2 \oplus 2 \cdot s_3 \sim_p s_2' \oplus s_5' \oplus 2 \cdot s_3'$. 
Now consider the PTI net $N_2$ in \cref{fig:2}. Not only the loop labeled by $b$ on the left is unwound on the right, but also the $a$-labeled transition on the left is replicated three times on the right. The relation $R' = \{ (s_0, s_5),$ $ (s_1, s_{11}), (s_2, s_4), (s_2, s_7), (s_3, s_6), (s_3, s_8),$ $(s_3, s_9), (s_3, s_{10}) \}$ is a pti-place bisimulation and so, e.g., $2 \cdot s_2 \oplus s_3 \sim_p s_4 \oplus s_7 \oplus s_9$. \end{example} \begin{figure}[t] \centering \begin{tikzpicture}[ every place/.style={draw,thick,inner sep=0pt,minimum size=6mm}, every transition/.style={draw,thick,inner sep=0pt,minimum size=4mm}, bend angle=30, pre/.style={<-,shorten <=1pt,>=stealth,semithick}, post/.style={->,shorten >=1pt,>=stealth,semithick} ] \def3.3cm{3.3cm} \def0.35cm{0.5cm} \def0.55cm{0.75cm} \node (N1) [label=$N_1)$]{}; { \node (s1) [place] [right=0.35cm of N1, label=above:$s_1$] {}; \node (s2) [place] [right=0.55cm of s1, label=above:$s_2$] {}; \node (s3) [place] [right=0.55cm of s2, label=above:$s_3$] {}; \node (t1) [transition] [below=0.55cm of s2, label=below:$a$] {}; \node (t3) [transition] [below=0.55cm of s1, label=left:$b$] {}; \node (s4) [place] [below=0.55cm of t3, label=below:$s_4$] {}; \draw [-o] [bend right] (s1) to (t1); \draw [->] [bend right] (t1) to (s2); \draw [->] [bend right] (s2) to (t1); \draw [->] [bend left] (s3) to (t1); \draw [->] (s1) to (t3); \draw [->] (t3) to (s4); \node (e) [right={5.5cm} of s1]{}; \node (s1p) [place] [below=0.55cm of e, label=above:$s_1'$] {}; \node (t3) [transition] [left=0.35cm of s1p, label=left:$b$]{}; \node (s5) [place] [below=0.55cm of t3, label=below:$s_4'$] {}; \node (s2) [place] [right=0.55cm of e, label=above:$s_2'$] {}; \node (s3) [place] [right=0.55cm of s2, label=above:$s_3'$] {}; \node (t1) [transition] [below=0.55cm of s2, label=below:$\qquad a$] {}; \node (s4) [place] [below=0.55cm of t1, label=right:$s_5'$] {}; \node (t2) [transition] [below=0.55cm of s4, label=below:$a$] {}; \draw [-o] (s1p) 
to (t1) {}; \draw [-o] [bend right](s1p) to (t2){}; \draw [->] (s2) to (t1){}; \draw [->] (s3) to (t1){}; \draw [->] [bend left] (s4) to (t2){}; \draw [->] (t1) to (s4){}; \draw [->] [bend left] (t2) to (s4){}; \draw [->] [bend left] (s3) to (t2){}; \draw [->] (s1p) to (t3); \draw [->] (t3) to (s5); } \node (N2) [below={4.5cm} of N1, label=$N_2)$]{}; { \node (s2) [place] [right={1.6cm} of N2, label=above:$s_2$]{}; \node (t1) [transition] [below=0.55cm of s2, label=right:$a$] {}; \node (s0) [place] [left=0.55cm of t1, label=left:$s_0$] {}; \node (s3) [place] [below=0.55cm of t1, label=right:$s_3$] {}; \node (t2) [transition] [below=0.55cm of s3, label=right:$b$] {}; \node (s1) [place] [left=0.55cm of t2, label=left:$s_1$] {}; \draw [-o] (s0) to (t1); \draw [->] (s2) to (t1); \draw [->] (t1) to (s3); \draw [-o] (s1) to (t2); \draw [->] [bend right] (s3) to (t2); \draw [->] [bend right] (t2) to node[auto,swap] {2} (s3); \node (s4) [place] [right={9cm} of N2, label=above:$s_4$]{}; \node (t1) [transition] [below=0.55cm of s4, label=right:$a$]{}; \node (s5) [place] [left=0.55cm of t1, label=above:$s_5$]{}; \node (t2) [transition] [left=0.55cm of s5, label=left:$a$]{}; \node (s6) [place] [below=0.55cm of t1, label=below:$\qquad s_6$]{}; \node (t3) [transition] [left=0.55cm of s6, label=above:$\quad a$]{}; \node (s7) [place] [left=0.55cm of t3, label=below:$s_7$]{}; \node (t5) [transition] [below=0.55cm of s6, label=below:$b$]{}; \node (s10) [place] [left=0.55cm of t5, label=below:$s_{10}$]{}; \node (t6) [transition] [left=0.55cm of s10, label=left:$b$]{}; \node (t7) [transition] [left={1.5cm} of t6, label=left:$b$]{}; \node (s8) [place] [above=0.55cm of t7, label=above:$s_8$]{}; \node (s9) [place] [right=0.55cm of t5, label=above:$s_9$]{}; \node (t4) [transition] [right=0.55cm of s9, label=right:$b$]{}; \node (s11) [place] [below=0.55cm of s10, label=below:$s_{11}$]{}; \draw [-o] (s5) to (t1); \draw [-o] (s5) to (t2); \draw [-o] (s5) to (t3); \draw [-o] [bend right] (s11) 
to (t4); \draw [-o] (s11) to (t5); \draw [-o] (s11) to (t6); \draw [-o] [bend left] (s11) to (t7); \draw [->] (s4) to (t1); \draw [->] (t1) to (s6); \draw [->] (s7) to (t2); \draw [->] (s7) to (t3); \draw [->] (t3) to (s6); \draw [->] (t2) to (s8); \draw [->] (s6) to (t5); \draw [->] (t5) to (s10); \draw [->] (t5) to (s9); \draw [->] (s10) to (t6); \draw [->] (s9) to (t4); \draw [->] [bend right] (t4) to (s9); \draw [->] [bend right] (t4) to (s6); \draw [->] (t6) --node[fill=white,sloped] {2} (s6); \draw [->] (s8) to (t7); \draw [->] [bend right] (t7) to node[auto,swap] {2} (s8); } \end{tikzpicture} \caption{Two PTI nets, whose transitions are labeled either by $a$ or by $b$. } \label{fig:2} \end{figure} In order to prove that $\sim_p$ is an equivalence relation, we now list some useful properties of pti-place bisimulation relations. \begin{proposition}\label{pti-prop-bis} For each PTI net $N = (S, A, T, I)$, the following hold: \begin{enumerate} \item The identity relation ${\mathcal I}_S = \{ (s, s) \mid s \in S \}$ is a pti-place bisimulation; \item the inverse relation $R^{-1} = \{ (s', s) \mid (s, s') \in R\}$ of a pti-place bisimulation $R$ is a pti-place bisimulation; \item the relational composition $R_1 \circ R_2 = \{ (s, s'') \mid $ $\exists s'. (s, s') \in R_1 \wedge (s', s'') \in R_2 \}$ of two pti-place bisimulations $R_1$ and $R_2$ is a pti-place bisimulation. \end{enumerate} \begin{proof} The proof is almost standard, due to \cref{add-prop2}. 
(1) ${\mathcal I}_S$ is a pti-place bisimulation: for each $(m, m) \in {\mathcal I}_S^\oplus$, whenever the left (or right) instance of $m$ in the pair performs a transition $t$ (say, $m[t\rangle m'$), the right (or left) instance of $m$ performs exactly the same transition $m[t\rangle m'$ and, of course, $(\pre{t}, \pre{t}) \in {\mathcal I}_S^\oplus$, $(\prei{t}, \prei{t}) \in {\mathcal I}_S^\oplus$, $(\post{t}, \post{t}) \in {\mathcal I}_S^\oplus$, $l(t) = l(t)$, $(m', m') \in {\mathcal I}_S^\oplus$, by \cref{add-prop2}(2), and, also, $\forall s \in S. (s, s) \in {\mathcal I}_S \Rightarrow (s \in \prei{t} \Leftrightarrow s \in \prei{t})$, as required by the pti-place bisimulation definition. (2) Suppose $(m_2, m_1) \in (R^{-1})^\oplus$ and $m_2[t_2\rangle m_2'$. By \cref{add-prop2}(3), $(m_2, m_1) \in (R^\oplus)^{-1}$ and so $(m_1, m_2) \in R^\oplus$. Since $R$ is a pti-place bisimulation, item 2 of the bisimulation game ensures that there exist $t_1$ and $m_1'$ such that $m_1 [t_1\rangle m_1'$, with $(\pre{t_1}, \pre{t_2}) \in R^\oplus$, $(\prei{t_1}, \prei{t_2}) \in R^\oplus$, $l(t_1) = l(t_2)$, $(\post{t_1}, \post{t_2}) \in R^\oplus$ and $(m_1', m_2') \in R^\oplus$; moreover, $\forall s, s' \in S. (s, s') \in R \Rightarrow (s \in \prei{t_1} \Leftrightarrow s' \in \prei{t_2})$. Summing up, if $(m_2, m_1) \in (R^{-1})^\oplus$, to the move $m_2[t_2\rangle m_2'$, $m_1$ replies with the move $m_1 [t_1\rangle m_1'$, such that (by \cref{add-prop2}(3)) $(\pre{t_2}, \pre{t_1}) \in (R^{-1})^\oplus$, $(\prei{t_2}, \prei{t_1}) \in (R^{-1})^\oplus$, $l(t_2) = l(t_1)$, $(\post{t_2}, \post{t_1}) \in (R^{-1})^\oplus$, $(m_2', m_1') \in (R^{-1})^\oplus$ and, moreover, $\forall s, s' \in S. (s', s) \in R^{-1} \Rightarrow (s' \in \prei{t_2} \Leftrightarrow s \in \prei{t_1})$, as required. The case when $m_1$ moves first is symmetric and thus omitted. (3) Suppose $(m, m'') \in (R_1 \circ R_2)^\oplus$ and $m [t_1\rangle m_1$.
By \cref{add-prop2}(4), we have that $(m, m'') \in R_1^\oplus \circ R_2^\oplus$, and so there exists $m'$ such that $(m, m') \in R_1^\oplus$ and $(m', m'') \in R_2^\oplus$. As $(m, m') \in R_1^\oplus$ and $R_1$ is a pti-place bisimulation, if $m [t_1\rangle m_1$, then there exist $t_2$ and $m_2$ such that $m' [t_2\rangle m_2$ with $(\pre{t_1}, \pre{t_2}) \in R_1^\oplus$, $(\prei{t_1}, \prei{t_2}) \in R_1^\oplus$, $l(t_1) = l(t_2)$, $(\post{t_1}, \post{t_2}) \in R_1^\oplus$ and $(m_1, m_2) \in R_1^\oplus$; moreover, $\forall s, s' \in S. (s, s') \in R_1 \Rightarrow (s \in \prei{t_1} \Leftrightarrow s' \in \prei{t_2})$. But as $(m', m'') \in R_2^\oplus$ and $R_2$ is a pti-place bisimulation, we have also that there exist $t_3$ and $m_3$ such that $m'' [t_3 \rangle m_3$ with $(\pre{t_2}, \pre{t_3}) \in R_2^\oplus$, $(\prei{t_2}, \prei{t_3}) \in R_2^\oplus$, $l(t_2) = l(t_3)$, $(\post{t_2}, \post{t_3}) \in R_2^\oplus$ and $(m_2, m_3) \in R_2^\oplus$; moreover, $\forall s', s'' \in S. (s', s'') \in R_2 \Rightarrow (s' \in \prei{t_2} \Leftrightarrow s'' \in \prei{t_3})$. Summing up, for $(m, m'') \in (R_1 \circ R_2)^\oplus$, if $m [t_1 \rangle m_1$, then there exist $t_3$ and $m_3$ such that $m'' [t_3 \rangle m_3$ and (by \cref{add-prop2}(4)) $(\pre{t_1}, \pre{t_3}) \in (R_1 \circ R_2)^\oplus$, $(\prei{t_1}, \prei{t_3}) \in (R_1 \circ R_2)^\oplus$, $l(t_1) = l(t_3)$, $(\post{t_1}, \post{t_3}) \in (R_1 \circ R_2)^\oplus$ and $(m_1, m_3) \in (R_1 \circ R_2)^\oplus$; moreover, $\forall s, s'' \in S. (s, s'') \in R_1 \circ R_2 \Rightarrow (s \in \prei{t_1} \Leftrightarrow s'' \in \prei{t_3})$, as required. The case when $m''$ moves first is symmetric and so omitted. \end{proof} \end{proposition} \begin{proposition}\label{pti-place-bis-eq} For each PTI net $N = (S, A, T, I)$, relation $\sim_p \; \subseteq \mathcal{M}(S) \times \mathcal{M}(S)$ is an equivalence relation. \begin{proof} Direct consequence of \cref{pti-prop-bis}. 
\end{proof} \end{proposition} \noindent By \cref{def-pti-place-bis}, pti-place bisimilarity can be defined in the following way: $\sim_p = \bigcup \{ R^\oplus \mid R \mbox{ is a pti-place bisimulation}\}.$ \noindent By monotonicity of the additive closure (\cref{add-prop1}(2)), if $R_1 \subseteq R_2$, then $R_1^\oplus \subseteq R_2^\oplus$. Hence, we can restrict our attention to maximal pti-place bisimulations only: $\sim_p = \bigcup \{ R^\oplus \mid R \mbox{ is a {\em maximal} pti-place bisimulation}\}.$ \noindent However, it is not true that $\sim_p = (\bigcup \{ R \mid R \mbox{ is a {\em maximal} pti-place bisimulation}\})^\oplus$ \noindent because the union of pti-place bisimulations may not be a pti-place bisimulation (as already observed for place bisimulation in \cite{ABS91,Gor21}), so that its definition is not coinductive. \begin{figure}[h] \centering \begin{tikzpicture}[ every place/.style={draw,thick,inner sep=0pt,minimum size=6mm}, every transition/.style={draw,thick,inner sep=0pt,minimum size=4mm}, bend angle=30, pre/.style={<-,shorten <=1pt,>=stealth,semithick}, post/.style={->,shorten >=1pt,>=stealth,semithick} ] \node (s2) [place] [label=below:$s_2$] {}; \node (t2) [transition] [below right of = s2,label=above:$a$] {}; \node (s3) [place] [above right of = t2, label=below:$s_3$] {}; \node (t1) [transition] [left of = s2,label=above:$a$] {}; \node (s1) [place] [left of = t1, label=above:$s_1$] {}; \node (t3) [transition] [right of = s3,label=above:$a$] {}; \node (s4) [place] [right of = t3, label=above:$s_4$] {}; \node (s5) [place] [below of = t2, label=below:$s_5$] {}; \draw [->] (s2) to (t2); \draw [->] (s3) to (t2); \draw [->] (s2) to (t1); \draw [->] (t1) to (s1); \draw [->] (s3) to (t3); \draw [->] (t3) to (s4); \draw [->] (t2) to (s5); \draw [-o, bend right] (s3) to (t1); \draw [-o, bend left] (s2) to (t3); \end{tikzpicture} \caption{A PTI net.} \label{fig:pti_not_coinductive} \end{figure} \begin{example} Consider the net in
\cref{fig:pti_not_coinductive}, whose transitions are $t_1 = (s_2, s_3) \deriv{a} s_1$, $t_2 = (s_2 \oplus s_3, \emptyset) \deriv{a} s_5$ and $t_3 = (s_3, s_2) \deriv{a} s_4$. Clearly, $R$ and $R'$, defined as follows, are both pti-place bisimulations. \begin{center} $R = \{ (s_1, s_1), (s_2, s_2), (s_3, s_3), (s_4, s_4), (s_5, s_5)\}$\\ $R' = \{ (s_1, s_1), (s_2, s_3), (s_3, s_2), (s_4, s_4), (s_5, s_5)\}$ \end{center} It would be easy to make $R$ and $R'$ maximal by adding all possible combinations of $s_1, s_4, s_5$ (since they are all stuck places); however, this would not be meaningful for this example, so we prefer to keep them simple. Note that the union $R \cup R'$ is not a pti-place bisimulation as, for example, $2 \cdot s_2 \not\sim_p s_2 \oplus s_3$. Indeed, if $2 \cdot s_2$ moves first by $2 \cdot s_2 \trns{t_1} s_1 \oplus s_2$, then $s_2 \oplus s_3$ can only respond with $s_2 \oplus s_3 \trns{t_2} s_5$ since $t_1$ and $t_3$ are inhibited. However, $s_1 \oplus s_2 \not\sim_p s_5$, because the former can perform transition $t_1$, while the latter is stuck. \end{example} \subsection{Pti-place bisimilarity is finer than causal-net bisimilarity} \begin{theorem}\label{pti-bis-finer-cn-bis} {\bf (Pti-place bisimilarity implies causal-net bisimilarity)} Let $N=(S,A,T,I)$ be a PTI net and $m_1, m_2$ two of its markings. If $m_1 \sim_{p} m_2$, then $m_1 \sim_{cn} m_2$. \begin{proof} See \cref{pti-cn-bis-th}. \end{proof} \end{theorem} There are at least the following three important technical differences between causal-net bisimilarity and pti-place bisimilarity. \begin{enumerate} \item A causal-net bisimulation is a very complex relation -- composed of cumbersome triples of the form $(\rho_1, C, \rho_2)$ -- that must contain infinitely many triples if the net system offers a never-ending behavior. On the contrary, a pti-place bisimulation is always a very simple finite relation over the finite set $S$ of places.
\item A causal-net bisimulation proving that $m_1 \sim_{cn} m_2 $ is a relation specifically designed for showing that $m_1$ and $m_2$ generate the same causal nets, step by step. If we want to prove that, e.g., $n \cdot m_1$ and $n \cdot m_2$ are causal-net bisimilar (which may not hold!), we have to construct a new causal-net bisimulation to this aim. Instead, a pti-place bisimulation $R$ relates those places which are considered equivalent under all the possible $R$-related contexts. Hence, if $R$ justifies that $m_1 \sim_{p} m_2 $ as $(m_1, m_2) \in R^\oplus$, then for sure the same $R$ justifies that $n \cdot m_1$ and $n \cdot m_2$ are pti-place bisimilar, as also $(n \cdot m_1, n \cdot m_2) \in R^\oplus$. \item Finally, while pti-place bisimilarity is decidable (see the next section), it is not known whether causal-net bisimilarity is decidable on finite PTI nets.\footnote{Esparza observed \cite{Esp98} that, for finite P/T nets with at least two unbounded places, all the behavioral relations ranging from interleaving bisimilarity to fully-concurrent bisimilarity \cite{BDKP91} are undecidable. Even if his proof does not apply to causal-net bisimilarity, we conjecture that this equivalence is undecidable as well.} \end{enumerate} However, these technical advantages of pti-place bisimilarity over causal-net bisimilarity are balanced by an increased discriminating power of the former over the latter, which, in some cases, might appear even excessive, as the following intriguing example shows.
\begin{figure} \centering \begin{tikzpicture}[ every place/.style={draw,thick,inner sep=0pt,minimum size=6mm}, every transition/.style={draw,thick,inner sep=0pt,minimum size=4mm}, bend angle=30, pre/.style={<-,shorten <=1pt,>=stealth,semithick}, post/.style={->,shorten >=1pt,>=stealth,semithick} ] \def3.3cm{3.3cm} \def0.35cm{0.35cm} \def0.55cm{0.55cm} \node (s1) [place] [label=above:$s_1$] {}; \node (t1) [transition] [below=0.55cm of s1, label=left:$a$] {}; \node (s3) [place] [below right=0.55cm of t1, label=right:$s_3$] {}; \node (s2) [place] [below left=0.55cm of t1, label=left:$s_2$] {}; \node (t2) [transition] [below left=0.55cm of s3, label=left:$b$] {}; \node (s4) [place] [right=0.55cm of t2, label=below:$s_4$] {}; \draw [->] (s1) to (t1); \draw [->] (t1) to (s2); \draw [->] (t1) to (s3); \draw [-o] [bend right] (s3) to (t1); \draw [->] (s2) to (t2); \draw [->] (s3) to (t2); \draw [->] (t2) to (s4); \node (s5) [place] [right={4cm} of s1, label=above:$s_1'$] {}; \node (t3) [transition] [below=0.55cm of s5, label=left:$a$] {}; \node (s6) [place] [below=0.55cm of t3, label=right:$s_2'$] {}; \node (t4) [transition] [below=0.55cm of s6, label=left:$b$] {}; \node (s7) [place] [right=0.55cm of t4, label=above:$s_4'$] {}; \draw [->] (s5) to (t3); \draw [->] (t3) to node[auto,swap] {2} (s6); \draw [-o] [bend right] (s6) to (t3); \draw [->] (s6) to node[auto,swap] {2} (t4); \draw [->] (t4) to (s7); \end{tikzpicture} \caption{Two PTI nets. } \label{fig:3} \end{figure} \begin{example}\label{ex-strano} Consider the net in \cref{fig:3}. First of all, note that $s_2 \sim_{cn} s_2'$, because both are stuck markings. However, we have that $2 \cdot s_2 \nsim_{cn} 2 \cdot s_2'$ because $2 \cdot s_2$ is stuck, while $2 \cdot s_2'$ can perform $b$. 
This observation is enough to conclude that $s_2 \nsim_p s_2'$, because a pti-place bisimulation $R$ relates places that are equivalent under any $R$-related context: if $(s_2, s_2') \in R$ then $(2 \cdot s_2, 2 \cdot s_2') \in R^\oplus$, but these two markings do not satisfy the pti-place bisimulation conditions, so $R$ is not a pti-place bisimulation. Nonetheless, it is interesting to observe that $s_1 \sim_{cn} s_1'$, because they generate the same causal PTI nets, step by step; moreover, for any $n \geq 1$ we even have $n \cdot s_1 \sim_{cn} n \cdot s_1'$. However, $s_1 \nsim_p s_1'$ because it is not possible to build a pti-place bisimulation $R$ containing the pair $(s_1, s_1')$. The problem is that the candidate pti-place bisimulation $R$ would also have to include the pair $(s_2, s_2')$, which is not a pti-place bisimulation pair, as discussed above. Therefore, no pti-place bisimulation $R$ can relate $s_1$ and $s_1'$. \end{example} \section{Pti-place bisimilarity is decidable}\label{decid-iplace-sec} In order to prove that $\sim_p$ is decidable, we first need a technical lemma stating that it is decidable whether a place relation $R \subseteq S \times S$ is a pti-place bisimulation. \begin{lemma}\label{r-pti-place-bis-decid} Given a finite PTI net $N = (S, A, T, I )$ and a place relation $R \subseteq S \times S$, it is decidable whether $R$ is a pti-place bisimulation. \begin{proof} We want to prove that $R$ is a pti-place bisimulation if and only if the following two finite conditions are satisfied: \begin{enumerate} \item $\forall t_1$ such that $\pre{t_1} \trns{t_1}$, $\forall m$ such that $(\pre{t_1}, m) \in R^\oplus$, $\exists t_2$ such that $\pre{t_2} \trns{t_2}$ and \begin{alphaenumerate} \item $\pre{t_2} = m$, \item $(\prei{t_1}, \prei{t_2}) \in R^\oplus$, $(\post{t_1}, \post{t_2}) \in R^\oplus$, $l(t_1) = l(t_2)$, \item $\forall s, s' \in S.
(s, s') \in R \Rightarrow (s \in \prei{t_1} \Leftrightarrow s' \in \prei{t_2})$. \end{alphaenumerate} \item $\forall t_2$ such that $\pre{t_2} \trns{t_2}$, $\forall m$ such that $(m, \pre{t_2}) \in R^\oplus$, $\exists t_1$ such that $\pre{t_1} \trns{t_1}$ and \begin{alphaenumerate} \item $\pre{t_1} = m$, \item $(\prei{t_1}, \prei{t_2}) \in R^\oplus$, $(\post{t_1}, \post{t_2}) \in R^\oplus$, $l(t_1) = l(t_2)$, \item $\forall s, s' \in S. (s, s') \in R \Rightarrow (s \in \prei{t_1} \Leftrightarrow s' \in \prei{t_2})$. \end{alphaenumerate} \end{enumerate} First we prove the implication from left to right, only for condition 1, as the other is symmetrical. If $R$ is a pti-place bisimulation and $(\pre{t_1}, m) \in R^\oplus$, then from $\pre{t_1} \trns{t_1} \post{t_1}$ it follows that there exists $t_2$ such that $\pre{t_2} \trns{t_2} \post{t_2}$ with $\pre{t_2} = m$, $(\prei{t_1}, \prei{t_2}) \in R^\oplus$, $(\post{t_1}, \post{t_2}) \in R^\oplus$, $l(t_1) = l(t_2)$ and, moreover, $\forall s, s' \in S. (s, s') \in R \Rightarrow (s \in \prei{t_1} \Leftrightarrow s' \in \prei{t_2})$. Therefore, conditions (a), (b) and (c) are trivially satisfied. Now we prove the implication from right to left, i.e., if conditions 1 and 2 hold for $R$, then $R$ is a pti-place bisimulation. Suppose $(m_1, m_2) \in R^\oplus$ and $m_1 \trns{t_1} m_1'$. Let $ q = \{(s_1, s_1'), (s_2, s_2'), \ldots,$ $(s_k, s_k')\}$ be any set of associations that can be used to prove that $(m_1, m_2) \in R^\oplus$. So this means that $m_1 = s_1 \oplus s_2 \oplus \ldots \oplus s_k$, $m_2 = s_1' \oplus s_2' \oplus \ldots \oplus s_k'$ and that $(s_i, s_i') \in R$ for $i = 1, \ldots, k$. If $m_1 [t_1 \rangle m_1'$, then $m_1' = m_1 \ominus \pre{t_1} \oplus \post{t_1}$. Consider the set of associations $p = \{(\overline{s}_{1}, \overline{s}'_{1})$, $\ldots, (\overline{s}_{h}, \overline{s}'_{h})\} \subseteq q$, with $\{\overline{s}_{1}, \ldots, \overline{s}_{h}\} $ $= \pre{t_1}$. 
Note that $(\pre{t_1}, \overline{s}'_{1} \oplus \ldots \oplus \overline{s}'_{h}) \in R^\oplus$ and that $\pre{t_1} \trns{t_1}$. Hence, by condition 1, there exists a transition $t_{2}$ such that $\pre{t_2} \trns{t_2}$, $\pre{t_{2}} = \overline{s}'_{1} \oplus \ldots \oplus \overline{s}'_{h}$, $(\prei{t_1}, \prei{t_2}) \in R^\oplus$, $(\post{t_1}, \post{t_{2}}) \in R^\oplus$, $l(t_1) = l(t_{2})$, and $\forall s, s' \in S. (s, s') \in R \Rightarrow (s \in \prei{t_1} \Leftrightarrow s' \in \prei{t_2})$. By hypothesis, $\prei{t_1} \cap \mathit{dom}(m_1) = \emptyset$, so since $(m_1, m_2) \in R^\oplus$ and condition (c) holds, we have that $\prei{t_2} \cap \mathit{dom}(m_2) = \emptyset$. Therefore, since $\pre{t_2} \subseteq m_2$, transition $t_2$ is firable at $m_2$ as well, i.e., $m_2 [t_{2} \rangle m_2'$, where $m_2' = m_2 \ominus \pre{t_{2}} \oplus \post{t_{2}}$, and we have that $(\pre{t_1}, \pre{t_{2}}) \in R^\oplus$, $(\prei{t_1}, \prei{t_2}) \in R^\oplus$, $(\post{t_1}, \post{t_{2}}) \in R^\oplus$, $l(t_1) = l(t_{2})$, $(m_1', m_2') \in R^\oplus$ and, moreover, $\forall s, s' \in S. (s, s') \in R \Rightarrow (s \in \prei{t_1} \Leftrightarrow s' \in \prei{t_2})$, as required, where $(m_1', m_2') \in R^\oplus$ holds because, from the set $q$ of matching pairs for $m_1$ and $m_2$, we have removed those in $p$ and added those justifying $(\post{t_1}, \post{t_{2}}) \in R^\oplus$. If $m_2 [t_2 \rangle m_2'$, then we have to use an argument symmetric to the above, where condition 2 is used instead. Hence, conditions 1 and 2 are enough to prove that $R$ is a pti-place bisimulation. Finally, the complexity of this procedure is as follows.
For condition 1, we have to consider all the net transitions, and for each $t_1$ we have to consider all the markings $m$ such that $(\pre{t_1}, m) \in R_i^\oplus$, and for each pair $(t_1, m)$ we have to check whether there exists a transition $t_2$ such that $m = \pre{t_2}$, $l(t_1) = l(t_2)$, $(\prei{t_1}, \prei{t_2}) \in R_i^\oplus$, $(\post{t_1}, \post{t_2}) \in R_i^\oplus$ and, moreover, that $\forall s, s' \in S. (s, s') \in R \Rightarrow (s \in \prei{t_1} \Leftrightarrow s' \in \prei{t_2})$. The same holds for condition 2. Hence, this procedure has worst-case time complexity $O(q \cdot \frac{(n+p-1)!}{(n-1)! \cdot p!} \cdot (p^2 \sqrt{p}) \cdot q \cdot (p^2 \sqrt{p} + n^2 \cdot p))$, where $q = |T|$, $n = |S|$, and $p$ is the least number such that $| \pre{t}| \leq p$, $|\prei{t}| \leq p$ and $|\post{t}| \leq p$ for all $t \in T$: indeed, the number of distributions of $p$ tokens over $n$ places is given by the binomial coefficient $\binom{n+p-1}{p} = \frac{(n+p-1)!}{(n-1)! \cdot p!}$, checking whether a marking of size $p$ is related to $\pre{t_1}$ takes $O(p^2 \sqrt{p})$ time, and the size of $R$ is $n^2$ at most. \end{proof} \end{lemma} \begin{theorem} \label{pti-place-decidable} {\bf (Pti-place bisimilarity is decidable)} Given a PTI net $N = (S, A, T, I )$, for each pair of markings $m_1$ and $m_2$, it is decidable whether $m_1 \sim_p m_2$. \begin{proof} If $|m_1| \neq |m_2|$, then $m_1 \nsim_p m_2$ by \cref{fin-k-add}. Otherwise, we can assume that $|m_1| = k = |m_2|$. As $|S| = n$, the set of all the place relations over $S$ is of size $2^{n^2}$. Let us list such relations as: $R_1, R_2, \ldots, R_{2^{n^2}}$. Hence, for $i = 1, \ldots, 2^{n^2}$, by \cref{r-pti-place-bis-decid} we can decide whether the place relation $R_i$ is a pti-place bisimulation and, in such a case, we can check whether $(m_1, m_2) \in R_i^\oplus$ in $O(k^2 \sqrt{k})$ time. As soon as we have found a pti-place bisimulation $R_i$ such that $(m_1, m_2) \in R_i^\oplus$, we stop, concluding that $m_1 \sim_p m_2$.
If none of the $R_i$ is a pti-place bisimulation such that $(m_1, m_2) \in R_i^\oplus$, then we can conclude that $m_1 \nsim_p m_2$. \end{proof} \end{theorem} \section{Conclusion}\label{conc-sec} Pti-place bisimilarity is the only decidable behavioral equivalence for finite PTI nets, which constitute a powerful, Turing-complete distributed model of computation, widely used in the theory and applications of concurrency (e.g., \cite{ager-pti,ajmone,BP99,tcsinib,BG09,Hack,Koutny,Pet81}). Thus, it is the only equivalence for which it is possible (at least, in principle) to verify algorithmically the (causality-preserving) correctness of an implementation by exhibiting a pti-place bisimulation between its specification and implementation. It is also sensible, because it respects the causal behavior of PTI nets, since it is finer than causal-net bisimilarity. Of course, pti-place bisimilarity is a rather discriminating behavioral equivalence, as illustrated in Example \ref{ex-strano}, and a proper evaluation of its usefulness on real case studies is left for future research. Our interpretation of (pti-)place bisimilarity is that this equivalence is an attempt to give semantics to {\em unmarked} nets, rather than to marked nets, so that the focus shifts from the common (but usually undecidable) question {\em When are two markings equivalent?} to the more restrictive (but decidable) question {\em When are two places equivalent?} A possible, preliminary answer to the latter question may be: two places are equivalent if, whenever the same number of tokens is put on these two places, the behavior of the marked nets is the same. If we reinterpret Example \ref{ex-strano} in this perspective, we clearly see that place $s_2$ and place $s_2'$ cannot be considered equivalent because, even if the markings $s_2$ and $s_2'$ are equivalent (as they are both stuck), the marking $2 \cdot s_2$ is not equivalent to the marking $2 \cdot s_2'$ (as only the latter can move).
More specifically, a (pti-)place bisimulation $R$ considers two places $s_1$ and $s_2$ as equivalent if $(s_1, s_2) \in R$, as, by definition of (pti-)place bisimulation, they must behave the same in any $R$-related context. The decidability result for pti-place bisimilarity is based on the fact that the net model is finite, even if the associated reachability graph may be unboundedly large or even infinite: indeed, one can decide pti-place bisimilarity simply by checking a large, but finite, number of conditions on the shape of the finite net, rather than by inspecting its (possibly, infinitely many) reachable markings. Turing completeness is achieved in PTI nets by means of their ability to test for zero. Other Turing-complete models of computation may exploit different mechanisms to this end. For instance, in the $\pi$-calculus \cite{MPW,SW} Turing completeness is achieved by means of the ability to generate unboundedly many new names (by means of the interplay between recursion and the restriction operator), but this feature is not describable by means of a finite net model \cite{BG09,MG09}. For this reason, we think it is hard to find a sensible, decidable behavioral equivalence for the whole $\pi$-calculus. To the best of our knowledge, this is the second paper proving the decidability of a behavioral equivalence for a Turing-complete formalism. In fact, in \cite{lanese} it is proved that (interleaving) bisimilarity is decidable for a small process calculus, called HOcore, with higher-order communication (but without restriction), that is, nonetheless, Turing-complete. Future work will be devoted to seeing whether the pti-place bisimulation idea can be extended to other, even larger classes of nets, such as the {\em nonpermissive} Petri nets \cite{Gor17}, where a transition can not only test for zero, but also test whether a place in its pre-set contains exactly as many tokens as it wishes to consume.
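As a concrete illustration of the counting argument behind the worst-case bound in the decidability proof above, the number of candidate markings and the resulting bound can be evaluated directly. The following Python sketch is ours, not part of the formal development, and the function names are our own:

```python
from math import comb

def num_markings(n: int, p: int) -> int:
    """Number of distinct markings (multisets) of size p over n places:
    the binomial coefficient C(n+p-1, p) used in the worst-case bound."""
    return comb(n + p - 1, p)

def worst_case_bound(q: int, n: int, p: int) -> float:
    """Literal evaluation of the bound
    O(q * C(n+p-1,p) * p^2*sqrt(p) * q * (p^2*sqrt(p) + n^2*p)),
    ignoring constant factors."""
    return q * num_markings(n, p) * (p**2 * p**0.5) * q * (p**2 * p**0.5 + n**2 * p)

# A net with n = 4 places and pre-sets of size at most p = 2 already has
# C(5, 2) = 10 candidate markings of size 2 to match against each pre-set.
print(num_markings(4, 2))  # 10
```

The binomial growth in $n$ and $p$, together with the $2^{n^2}$ candidate place relations, is what makes the procedure exponential yet still effective, since only the finite net itself is inspected.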
\section{Introduction} The AdS/CFT correspondence \cite{Maldacena:1997re} has given us a useful tool for finding weakly coupled descriptions of strongly coupled conformal field theories. The weakly coupled description is generically given in terms of a supergravity theory on a background containing an AdS factor. Actually there is a one-to-one correspondence between objects on the gravity side and those in the dual conformal field theory. In particular it is known that the symmetries of the conformal field theory can be geometrically realized on the gravity side as the isometries of the metric. For example the conformal group of $d$-dimensional space-time, $SO(d,2)$, is realized as the isometry of the $AdS_{d+1}$ geometry on which the gravity theory is defined. Encouraged by the great success in describing relativistic strongly coupled conformal field theories, it is natural to look for weakly coupled gravity descriptions of non-relativistic conformal field theories. Indeed there are several models, for example in condensed matter physics, where the theories are invariant under the Schr\"odinger group, which is essentially the conformal group of non-relativistic field theories (for example see \cite{Nishida}). Therefore it is important to find the corresponding gravity dual. Actually, taking into account that the Schr\"odinger group, $Sch(d-1)$, is a subgroup of the relativistic conformal group $SO(d,2)$, and indeed can be obtained from it by taking a non-relativistic limit (contraction), one would expect that the same procedure can be applied on the gravity side as well. Namely one expects that the geometry we are interested in could be given by a deformation of the AdS geometry.
In fact the corresponding gravity background whose isometry is $Sch(d-1)$ has been proposed in \cite{{Son:2008ye},{Balasubramanian:2008dm}}\footnote{For recent studies on AdS/non-relativistic field theories see \cite{{Goldberger:2008vg},{Wen:2008hi},{Nakayama:2008qm},{Chen:2008ad}, {Minic:2008xa},{Imeroni:2008cr},{Galajinsky:2008ig},{Kachru:2008yh}, {Pal:2008rf},{SekharPal:2008uy},{Pal:2008id},{Duval:2008jg},{Lin:2008pi}, {Hartnoll:2008rs},{Schvellinger:2008bf},{Rangamani}} (See also \cite{Leiva:2003kd}). } which is \begin{eqnarray} ds^2=\frac{r^2}{R^2}(2dtdy+d{x}_i^2-\mu^2 r^2dt^2)+\frac{R^2}{r^2}dr^2, \label{back} \end{eqnarray} where $i=1,2,\cdots,d-1$. Here $\mu$ is a parameter which controls the deviation from the AdS geometry. In other words $\mu$ is a parameter which characterizes the non-relativistic nature of the dual field theory. More precisely, as we will see, the corresponding physical dimensionless parameter is indeed $\mu R$. The above metric is invariant under the following rescaling of the coordinates \begin{eqnarray} t\rightarrow \lambda^2 t,\;\;\;\;\;x_i\rightarrow \lambda x_i,\;\;\;\;\;y\rightarrow y,\;\;\;\;\; r\rightarrow \lambda^{-1} r. \end{eqnarray} This $(d+2)$-dimensional gravity is proposed to describe a $d$-dimensional non-relativistic CFT which is invariant under the following scaling \begin{eqnarray} t\rightarrow \lambda^2 t,\;\;\;\;\;\;\;\;x_i\rightarrow \lambda x_i,\;\;\;{\rm for}\;\;\;i=1,\cdots,d-1. \end{eqnarray} Since our main knowledge of the AdS/CFT duality has come from string theory, it is natural to ask whether this solution can be embedded in the ten-dimensional superstring theories. If it can, then one may use our experience with string theory to study some features of non-relativistic CFTs. The aim of this article is to study some features of a (1+2)-dimensional non-relativistic CFT using a supergravity solution of type IIB string theory.
Since we are dealing with string theory, it is natural to consider semi-classical strings in this background. With this in mind, we will first consider a folded rotating closed string \cite{Gubser:2002tv} (for more details see, e.g. \cite{Frolov:2002av}) and evaluate the relation between its energy, $E$, and spin, $S$. Using the AdS/CFT dictionary this corresponds to an operator with anomalous dimension $\Delta=E$ and spin $S$. We find that, for $\mu R\ll1$, the anomalous dimension gets logarithmic corrections similar to the AdS case, though the coefficients are functions of $\mu R$, at least up to the order we are considering. This might be interpreted as the statement that in going from the relativistic to the non-relativistic theory the anomalous dimensions of the corresponding operators get corrections. We will also study circular pulsating strings following Ref. \cite{Minahan}. In this case, in comparison with the AdS case, we find a new behavior at subleading order which we would like to associate with the non-relativistic properties of the dual theory. It is also interesting to study semi-classical open strings in this background. In the context of the AdS/CFT correspondence an open string can be associated with a Wilson loop in the dual field theory, which in turn can be used to compute the effective potential between external objects (e.g. the quark-anti-quark potential). In our case at zero temperature we find that the potential behaves as $l^{-2}$, as expected for a non-relativistic CFT with dynamical exponent $z=2$. To explore the non-relativistic properties of the dual field theory we will also consider a moving open string in the relevant geometry at finite temperature, which may be thought of as an {\it external object} moving through the {\it hot plasma}\footnote{When the dual theory is a gauge theory the external object could be a quark and the plasma could be a quark-gluon plasma.}.
In this case, unlike the relativistic case, we observe that there is no upper bound on the velocity of the moving object, as expected for a non-relativistic theory. We will also see that when the external object moves through the plasma very slowly it loses its energy exponentially as a function of time, while when it moves very fast the decay rate behaves as the inverse of time. We will also redo the drag-force computations for the non-relativistic CFT at zero temperature. Being at zero temperature, the moving objects still lose their energy, though unlike the finite temperature case, the characteristic time is given in terms of $\mu$. The paper is organized as follows. In the next section we review the embedding of the relevant geometry in type IIB string theory, where we also present the general features of the dual non-relativistic CFT. In section three we study semi-classical closed strings in the background presented in section two. In section four we consider moving open strings in the geometry generated by a black hole in the supergravity solution of section two. In section five we study the drag force in the non-relativistic field theory at zero temperature. The last section is devoted to discussions and conclusions. \section{Supergravity description} The supergravity solution we are interested in can be obtained from the $AdS_5\times S^5$ solution of type IIB supergravity via a TsT transformation or by making use of the Null Melvin Twist procedure \cite{Gimon:2003xk}, \cite{{Alishahiha:2003ru}, {Herzog:2008wg},{Adams:2008wt},{Maldacena:2008wh}, {Mazzucato:2008tr}}\footnote{ Note that besides the metric the whole solution contains a non-zero RR 4-form as well as a non-zero NSNS B field, whose explicit forms are not important for what follows.
The dilaton is constant, as expected.} \begin{eqnarray} ds^2=\left(\frac{r}{R}\right)^2\left[-{\mu^2}{r^2}dt^2+2dtd\xi+dx_i^2\right] +\left(\frac{R}{r}\right)^2\left[dr^2+r^2d{\cal M}_5^2\right], \label{sol1} \end{eqnarray} where $d{\cal M}_5^2$ is the metric of the five-dimensional internal space whose spin structure fixes the number of supersymmetries preserved by the background. For example the internal space could be a five-sphere. Indeed this solution was obtained in \cite{Alishahiha:2003ru} in the context of light-like dipole field theory, where it was shown that the solution preserves eight supercharges. To proceed, let us consider the dilaton field $\phi$, which can be treated as a massless scalar field in the bulk supergravity. In the case of $AdS_5\times S^5$, the dilaton field is dual to the operator ${\cal O}={\rm Tr} F^2$ with $\Delta=4$, whose two point function is \begin{equation} \langle\; {\cal O}(x,t) {\cal O}(0)\;\rangle\sim \frac{1}{|X|^8},\;\;\;\;\;{\rm with}\;\;|X|=\sqrt{-t^2+x^2}. \end{equation} In our case we would like to study the dilaton field in the background (\ref{sol1}). Setting $\phi=e^{-iM\xi}e^{-i\omega t+ik\cdot x}\psi(r)$, the equation of motion for the dilaton becomes \begin{equation} \frac{1}{r^3}\partial_r(r^5\partial_r\psi)-(\mu^2M^2R^4+\frac{q^2R^4}{r^2})\psi=0, \label{dil} \end{equation} where $q^2=2M\omega+k^2$. Using the AdS/CFT procedure one can compute the two point function of the dual operator in the non-relativistic field theory. This has essentially been done in \cite{{Son:2008ye},{Balasubramanian:2008dm}}. The result is \begin{equation} \langle\; {\cal O}(x,t) {\cal O}(0)\;\rangle\sim t^{-\Delta} e^{-\frac{iM x^2}{2t}}=\frac{(x^2/t)^\Delta e^{-\frac{iM x^2}{2t}}}{x^{2\Delta}}, \end{equation} where $\Delta=2+\sqrt{4+\mu^2 M^2R^4}$ is the dimension of the dual operator. As we have already anticipated in the above computations, by scaling arguments the solution of (\ref{dil}) depends only on $q^2R^4/r^2$.
Therefore the UV/IR relation between the bulk and boundary theories should be as follows \cite{Barbon:2008bg} \begin{equation} \delta t\sim \frac{MR^4}{r^2},\;\;\;\;\;\;\;\;\delta x\sim \frac{R^2}{r}. \end{equation} Actually for the relativistic case, where both $x$ and $t$ scale the same way, the UV/IR relations are the same, i.e. $\delta |X|\sim R^2/r$ \cite{Susskind:1998dq}. Since the supergravity solution (\ref{sol1}) is obtained from the type IIB D3-brane solution by a set of T-dualities and boosts, one expects that the resultant non-relativistic field theory should have a gauge field and gauge symmetry. Indeed one may suspect that the obtained theory could be related to three-dimensional non-relativistic gauge theory, which has previously been studied in the context of three-dimensional relativistic Chern-Simons gauge theory (see for example \cite{Hagen}). Therefore it would be interesting to explore different features of the 3D non-relativistic gauge theory by making use of the dual gravity. In particular we can evaluate the effective potential between an external quark-anti-quark pair via Wilson loop computations in the context of the AdS/CFT correspondence \cite{{Rey:1998ik},{Maldacena:1998im}}. To compute the effective potential, following \cite{{Rey:1998ik},{Maldacena:1998im}}, we start from an ansatz for the classical string in the supergravity solution (\ref{sol1}), which has $Sch(d-1)$ isometry, \begin{equation} t=\tau,\;\;\;\;\;\;\;r=\sigma,\;\;\;\;\;\;x_1=x(\sigma),\;\;\;\;\;\;\xi={\rm constant}. \label{OPEN} \end{equation} Indeed this ansatz in the geometry (\ref{sol1}) was studied in \cite{Alishahiha:2003ru}, where the energy of the string as a function of the distance between the two external sources, $l$, was found to be \begin{equation} E=-\frac{2\mu R^4}{\pi\alpha'}\;\frac{1}{l^2}. \end{equation} This behavior may be understood from the fact that the dual theory is a non-relativistic CFT with dynamical exponent $z=2$.
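The $l^{-2}$ fall-off is fixed by the Schr\"odinger scaling symmetry alone: under $t\rightarrow \lambda^2 t$, $x_i\rightarrow \lambda x_i$ an energy, being conjugate to $t$, must scale as $\lambda^{-2}$, and $1/l^2$ is the unique power law with this property. A quick symbolic check of this covariance (a sympy sketch of ours, not from the paper; symbol names are our own, with $\alpha'$ written as a single symbol):

```python
import sympy as sp

l, lam, mu, R, ap = sp.symbols('l lambda mu R alphaprime', positive=True)

# Quark-anti-quark potential quoted above: E = -2 mu R^4 / (pi alpha' l^2)
E = -2*mu*R**4/(sp.pi*ap*l**2)

# Under l -> lambda*l the potential picks up exactly a factor lambda^(-2),
# as required for an energy in a z = 2 (Schrodinger-invariant) theory.
assert sp.simplify(E.subs(l, lam*l) - lam**(-2)*E) == 0
```

Any admixture of another power of $l$ would fail this check, which is why the zero-temperature potential is completely fixed up to the overall coefficient.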
In the rest of the paper we extend our considerations to other semi-classical strings, in order to extract information about the possible operators of the dual non-relativistic 3D CFT. \section{Semi-classical string in the background with\\ Schr\"odinger group isometry} In this section we study semi-classical closed strings in the non-relativistic D3-brane background. Since we are interested in the energy of the semi-classical string, we need to write the non-relativistic D3-brane solution in ``global'' coordinates. The corresponding solution can be found from the AdS solution in global coordinates by making use of the Null Melvin Twist \cite{Yamada:2008if} \begin{eqnarray} ds^2&=&\left(\frac{r}{R}\right)^2\frac{1}{H}\bigg[-\left(\frac{g}{2}+\mu^2r^2 f\right)dt^2 -\frac{g}{2} d\xi^2-(1+f)dtd\xi\cr &&\cr &+&\frac{R^2}{4}H(d\theta^2+\cos^2\theta\;d\psi^2)\bigg] +\left(\frac{R}{r}\right)^2\frac{dr^2}{f}+d{\cal M}_5^2\cr &&\cr e^{2\phi}&=&H^{-1},\;\;\;\;\;\;\;\;{\rm with}\;\;\;\;f=1+g=1+\frac{R^2}{r^2},\;\;\;\;\;\;H=1-\frac{1}{2}\mu^2 R^2. \end{eqnarray} There is also a non-zero RR four-form as well as an NSNS two-form. Here $d{\cal M}_5^2$ is the metric of the internal space, whose explicit form is not important for our purposes. We note, however, that the details of the internal space do become important for other sectors of the theory. For example if we are interested in giant magnons, the main role is played by $d{\cal M}_5^2$.
To proceed it is useful to make the following change of variables \begin{equation} t\rightarrow \frac{R}{\sqrt{2}}(t+\xi),\;\;\;\;\;\xi\rightarrow \frac{R}{\sqrt{2}}(t-\xi),\;\;\;\;\; r\rightarrow R\sinh\rho, \end{equation} whereupon the above metric is recast in the following form \begin{eqnarray} ds^2&=&\frac{R^2}{H}\bigg[-\left(1+\frac{\mu^2 R^2}{2}\sinh^2\rho\right)\cosh^2\rho\; dt^2 +(\sinh^2\rho-\frac{\mu^2 R^2}{8}\sinh^22\rho) d\xi^2\cr &&\cr &-&\frac{\mu^2 R^2}{4}\sinh^22\rho\; dtd\xi +\frac{H}{4}\sinh^2\rho(d\theta^2+\cos^2\theta\;d\psi^2)\bigg] +R^2d\rho^2+d{\cal M}_5^2. \label{sol1g} \end{eqnarray} \subsection{Folded closed string} We would like to study a solution representing a rotating closed string configuration which is stretched along the radial coordinate. In order to study this system one needs to write down an action for the closed string. Let us parameterize the string worldsheet by $\sigma$ and $\tau$. We can fix the reparameterization invariance by choosing a gauge in which the time coordinate of space-time, $t$, is equal to the worldsheet time, i.e. $t=\tau$. In this gauge a closed string configuration representing a string rotating with angular velocity $\omega$ in the geometry (\ref{sol1g}), stretched along the radial coordinate, is given by \begin{equation} t=\tau,\;\;\;\;\;\psi=\omega \tau,\;\;\;\;\; \rho(\sigma)=\rho(\sigma+2\pi),\;\;\;\;\;\xi={\rm constant}; \label{CL} \end{equation} all other coordinates are set to zero. For this solution the Nambu-Goto action, \begin{equation} I=-\frac{1}{ 2\pi \alpha'}\int d\sigma^2\sqrt{-\det(G_{\mu\nu} \partial_{a}X^{\mu}\partial_b X^{\nu})}, \label{NAMBUS} \end{equation} reads \begin{equation} I=\frac{-4R^2}{2\pi\alpha'\sqrt{H}}\int_0^{\rho_0}d\rho A^{1/2}(\rho),\;\;\;\;A(\rho)= {(1+\frac{\mu^2R^2}{2}\sinh^2\rho)\cosh^2\rho\;{\dot t}^2- \frac{H{\dot \psi}^2}{4}\sinh^2\rho}\;, \end{equation} where the dot represents a derivative with respect to $\tau$.
The factor of 4 comes from the fact that we are dealing with a folded closed string. For a one-fold string, the string can be divided into four segments; using the periodicity condition we just need to perform the integral over one quarter of the string, multiplied by a factor of 4. To ensure that the ansatz (\ref{CL}) represents a closed string we need to impose the periodicity condition, which in our case is $A(\rho)\geq 0$ for all $\rho>0$. The periodicity condition, setting ${\dot t}=1,\; {\dot \psi}=\omega$, can be satisfied if \begin{equation} \omega^2\geq 4 \frac{{\sqrt{2}}+{\mu R}}{{\sqrt{2}}-{\mu R}}=\omega_c^2, \label{condition} \end{equation} in which case $A(\rho)$ takes positive values for $\rho\leq \rho_-$ or $\rho\geq \rho_+$ with $\rho_-< \rho_+$, where \begin{equation} \rho_\pm=\sinh^{-1}\left[\frac{\left((\frac{H\omega^2}{4}-1-\frac{\mu^2R^2}{2})\pm\sqrt{(\frac{H\omega^2}{4}-1- \frac{\mu^2R^2}{2})^2-2\mu^2R^2}\right)^{1/2}} {\mu R}\right]. \end{equation} An interesting feature of this semi-classical folded string is that the closed string cannot be longer than a maximum size given by $\rho_{max}= \sinh^{-1}[{2}^{1/4}/\sqrt{\mu R}]$, which corresponds to the length of a string whose quantum number satisfies $\omega=\omega_c$. We note that for $\rho\geq \rho_+$, even though $A(\rho)$ is positive, we do not get a closed string. This might be thought of as the case in which the periodicity condition is lost and we are dealing with an open string stretched all the way to infinity. The two conserved momenta conjugate to $t$ and $\psi$ are the space-time energy $E$ and the spin $S$.
When the periodicity condition is satisfied, using the above Nambu-Goto action, the conserved quantities are given by \begin{eqnarray} E&=&\frac{2R^2}{ \pi\alpha'\sqrt{H}}\int_0^{\rho_-} d\rho\frac{(1+\frac{\mu^2R^2}{2}\sinh^2\rho)\cosh^2\rho} {\sqrt{(1+\frac{\mu^2R^2}{2}\sinh^2\rho)\cosh^2\rho- \frac{H{\omega}^2}{4}\sinh^2\rho}}\;, \cr &&\cr S&=&\frac{\omega R^2\sqrt{H}}{2 \pi\alpha'}\int_0^{\rho_-}d\rho\frac{\sinh^2\rho} {\sqrt{(1+\frac{\mu^2R^2}{2}\sinh^2\rho)\cosh^2\rho- \frac{H{\omega}^2}{4}\sinh^2\rho}}\;. \label{ES5} \end{eqnarray} From the integrals (\ref{ES5}) one can proceed to compute the relation between the energy and the spin. To do this we use an approximation in which the string is either much shorter than, or of the order of, the critical value $\rho_{max}$. In other words we consider the cases $\rho_-\ll \rho_{max}$ and $\rho_-\sim\rho_{max}$. Setting $\omega^2=\omega_c^2+\frac{4}{H}\eta$, the two limits correspond to $\eta\rightarrow \infty$ and $\eta\rightarrow 0$, respectively. Our aim will then be to find the energy $E$ as a function of the spin in these two cases. \vspace*{.3cm} {\bf Short strings} \vspace*{.3cm} In this case one has $\rho_-\sim \frac{1}{\sqrt{\eta}}$ as $\eta\rightarrow\infty$. Therefore the string is much shorter than the radius of curvature of the geometry (\ref{sol1g}). In fact in this limit the background space may be approximated by a flat metric near the center, so the calculation reduces to that of a spinning string in flat space. In this limit the integrals (\ref{ES5}) can be performed, and one finds \begin{equation} E=\frac{R^2}{\alpha'\sqrt{H}}\;\frac{1}{\sqrt{\eta}},\;\;\;\;\;\;\; S=\frac{R^2}{ 4\alpha'}\;\frac{1}{{\eta}}, \end{equation} so that \begin{equation} E^2=\frac{4R^2}{H\alpha'}\; S, \end{equation} which is the well-known flat space Regge trajectory.
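The elimination of $\eta$ between the two short-string expressions can be verified symbolically. The following sympy sketch is ours, not from the paper; the symbol names are our own, with $\alpha'$ written as a single symbol:

```python
import sympy as sp

eta, R, ap, H = sp.symbols('eta R alphaprime H', positive=True)

# Short-string results quoted above
E = R**2/(ap*sp.sqrt(H)) / sp.sqrt(eta)   # energy, E = R^2/(alpha' sqrt(H) sqrt(eta))
S = R**2/(4*ap) / eta                     # spin,   S = R^2/(4 alpha' eta)

# Eliminating eta reproduces the flat-space Regge trajectory
# E^2 = 4 R^2 S / (H alpha')
assert sp.simplify(E**2 - 4*R**2*S/(H*ap)) == 0
```

Setting $H=1$ (i.e. $\mu R=0$) recovers the undeformed flat-space trajectory, making the $1/H$ factor the only imprint of the deformation in this limit.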
Indeed this is what we would expect to find; namely, going deep into the core of the space-time the physics should be independent of the global structure, and the string should locally feel flat space. This may be compared with the AdS result, where we also get the Regge trajectory, though in our case we have an extra factor of $1/H$, which should be a signature of the non-relativistic nature of the theory. \vspace*{.3cm} {\bf Near $\rho_{max}$ string} \vspace*{.3cm} As we have seen, the closed string cannot be longer than a maximum size given by $\rho_{max}= \sinh^{-1}[{2}^{1/4}/\sqrt{\mu R}]$. So another limit we may consider is the case where the string is of the order of $\rho_{max}$. In this case the spin is always large compared with the radius of curvature of the background geometry, i.e. $S\gg \frac{R^2}{\alpha'}$. For $\rho_-\rightarrow \rho_{max}$ the integrals (\ref{ES5}) yield the following expressions for $S$ and $E$ at leading order \begin{eqnarray} E&\approx&\frac{2R^2}{\pi\alpha'}\frac{\sqrt{2}}{\mu R}\bigg{[} \frac{1+\frac{\mu R}{\sqrt{2}}}{\sqrt{1-\frac{\mu R}{\sqrt{2}}}}\tanh^{-1}\left(\sqrt{1+\frac{\mu R}{\sqrt{2}}} \tanh\rho_-\right)-\frac{\mu^2R^2\sinh2\rho_-}{4\sqrt{(1+\frac{\mu R}{\sqrt{2}})(2-{\mu^2R^2})}} \cr &&\cr &&\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-\frac{\mu^2R^2+2\sqrt{2}\mu R+4}{4\sqrt{1-\frac{\mu^2R^2}{2}}}\rho_- \bigg{]}+\cdots\cr &&\\ S&\approx&\frac{R^2}{\pi\alpha'}\frac{\sqrt{2}}{\mu R}\bigg{[} \sqrt{1+\frac{\mu R}{\sqrt{2}}}\tanh^{-1}\left(\sqrt{1+\frac{\mu R}{\sqrt{2}}} \tanh\rho_-\right)-(1+\frac{\mu R}{\sqrt{2}})\rho_-\bigg{]}+\cdots.\nonumber \end{eqnarray} Note that both of the above expressions diverge for $\rho_-\rightarrow \rho_{max}$; nevertheless for $\mu R\sim 1$ we find \begin{equation} E\approx2\sqrt{\frac{\sqrt{2}+\mu R}{\sqrt{2}-\mu R}} S-\frac{R^2}{2\pi\alpha'}\; \frac{2\sqrt{2}+\mu R}{\sqrt{2-{\mu^2R^2}}}\;{\sinh^{-1}\left(\sqrt{{\sqrt{2}}/{ {{\mu R}}}}\right)}, \end{equation} while for $\mu R\ll 1$, where we can expand
the above expressions in terms of $\mu R$, one gets \begin{eqnarray} &&E\approx\left(2+\frac{7\mu^2R^2}{64}+{\cal O}(\mu^4 R^4)\right)S +\frac{R^2}{ \pi\alpha'}\left(1-\frac{\mu R}{8\sqrt{2}}+\frac{3\mu^2R^2}{32} +{\cal O}(\mu^3 R^3)\right)\ln(\frac{\alpha'S}{ R^2}).\cr && \label{nonES} \end{eqnarray} This has to be compared with the case of the $AdS_5$ geometry, where one has \cite{Gubser:2002tv} \begin{equation} E=S+\frac{R^2}{ \pi\alpha'}\ln(\frac{\alpha'}{ R^2}S)+\cdots, \label{ES} \end{equation} in which the field theory dual is a relativistic conformal gauge theory; this behavior looks very similar to the logarithmic growth of the anomalous dimensions of operators with spin in the gauge theory. Although in our case we still get the same expression as in the relativistic case\footnote{Note that setting $\mu R=0$ in equation (\ref{nonES}) we recover the relativistic result (\ref{ES}) up to numerical factors. These disagreements are due to the normalization of the $\psi$ coordinate in the metric (\ref{sol1g}). This is also the case in the short string limit.}, the coefficients are corrected by functions of $\mu R$, at least up to the order we are considering. This may be understood as follows. In fact, as we argued, the dual non-relativistic theory contains a gauge field inherited from the 4D ${\cal N}=4$ SYM theory. So the 3D theory may be studied as a small deformation of the 4D theory, where the deformation parameter is $\mu R$. Therefore for $\mu R\ll 1$ we still have the same operators as those in 4D, though the anomalous dimensions of the corresponding operators get corrected due to the deformation, bringing us to the non-relativistic field theory. It would be interesting to see if such a behavior can be obtained from the non-relativistic gauge theory as well. \subsection{Circular pulsating string} Another semi-classical string we would like to study is a circular pulsating string, first studied in \cite{Minahan} in the AdS geometry.
This is a string which is wrapped around an angular coordinate and pulsates in the radial direction. More precisely, consider a circular pulsating closed string which is wrapped $m$ times around the $\psi$ direction. The corresponding string configuration is given by \begin{equation} t=\tau,\;\;\;\;\;\;\;\rho=\rho(t),\;\;\;\;\;\;\;\psi=m\sigma,\;\;\;\;\;\;\xi={\rm constant}. \end{equation} The other coordinates are set to zero. The Nambu-Goto action for this configuration in the geometry (\ref{sol1g}) reads \begin{equation} S=-\frac{mR^2}{4\alpha'\sqrt{H}}\int dt\;(1+\frac{\mu^2R^2}{2}\sinh^2\rho)^{1/2}\;\sinh2\rho\; \sqrt{1-\frac{H{\dot \rho}^2}{(1+\frac{\mu^2R^2}{2}\sinh^2\rho)\cosh^2\rho}}, \end{equation} where the dot represents a derivative with respect to $t$. It is useful to make the following change of variable \begin{equation} \eta=\int \sqrt{\frac{H}{(1+\frac{\mu^2R^2}{2}\sinh^2\rho)\cosh^2\rho}}\;d\rho, \end{equation} by which the above action can be recast into the following form \begin{equation} S=-\int dt\;\; g(\eta)\;\sqrt{1-{\dot \eta}^2}, \end{equation} where $g(\eta)=\frac{mR^2}{4\alpha'\sqrt{H}}(1+\frac{\mu^2R^2}{2}\sinh^2\rho)^{1/2}\;\sinh2\rho$. The Hamiltonian associated with the above action is given by \begin{equation} H=\sqrt{\Pi^2+g(\eta)^2} \end{equation} with $\Pi$ being the canonical momentum. Note that $H^2$ may be considered as a one-dimensional quantum mechanical system with the potential \begin{equation} V(\eta)=g(\eta)^2=\left(\frac{mR^2}{4\alpha'\sqrt{H}}\right)^2(1+\frac{\mu^2R^2}{2}\sinh^2\rho)\;\sinh^22\rho. \end{equation} Therefore, following \cite{Minahan}, we can use the Bohr-Sommerfeld analysis for the quantization of the states. The quantization condition is \begin{equation} (n+\frac{1}{2})\pi=\int_{\eta_1}^{\eta_2} d\eta\;\;\sqrt{E^2-V(\eta)}, \end{equation} where $\eta_{1,2}$ are the turning points.
It is useful to return to the original coordinate $\rho$, in which the above quantization condition becomes \begin{equation} (n+\frac{1}{2})\pi=E\sqrt{H}\int d\rho\;\sqrt{\frac{{1-\frac{1}{B^2} (1+\frac{\mu^2R^2}{2}\sinh^2\rho)\sinh^22\rho}} { (1+\frac{\mu^2R^2}{2}\sinh^2\rho)\cosh^2\rho}}, \end{equation} where $B^2=\frac{4\alpha'E\sqrt{H}}{mR^2}$. To perform the integral we follow the procedure of \cite{Minahan, Alishahiha}, decomposing the integral into two parts \begin{eqnarray} (n+\frac{1}{2})\pi&=&E\sqrt{H}\bigg[- \int_0^{\rho_0} d\rho\; \frac{1-\sqrt{{1-\frac{1}{B^2} (1+\frac{\mu^2R^2}{2}\sinh^2\rho)\sinh^22\rho}}} { \sqrt{(1+\frac{\mu^2R^2}{2}\sinh^2\rho)\cosh^2\rho}}\cr &&\cr &&\;\;\;\;\;\;\;\;\;\;\;\;\;+\int_0^{\rho_0} \frac{d\rho} {\sqrt{ (1+\frac{\mu^2R^2}{2}\sinh^2\rho)\cosh^2\rho}}\;\bigg], \label{int1} \end{eqnarray} where $\rho_0$ is the turning point in the original coordinates. For large $B$ the first integral in (\ref{int1}) becomes \begin{equation} \int_0^{\rho_0} d\rho\; \frac{1-\sqrt{{1-\frac{1}{B^2} (1+\frac{\mu^2R^2}{2}\sinh^2\rho)\sinh^22\rho}}} { \sqrt{(1+\frac{\mu^2R^2}{2}\sinh^2\rho)\cosh^2\rho}} \approx \frac{2B^{-2/3}}{(\sqrt{2}\mu R)^{1/3}}\; \left(-\frac{1}{2}+\frac{3^{3/2}\Gamma\left(\frac{2}{3}\right)^{3}}{2^{8/3}\pi}\right), \end{equation} while for the second integral one finds \begin{equation} \int_0^{\rho_0} \frac{d\rho} {\sqrt{ (1+\frac{\mu^2R^2}{2}\sinh^2\rho)\cosh^2\rho}} \approx \frac{2B^{-2/3}}{(\sqrt{2}\mu R)^{1/3}}\; \left(\frac{2\pi^2}{9\Gamma\left(\frac{2}{3}\right)^{3}} B^{2/3}-\frac{1}{2}\right).
\end{equation} Thus altogether we get \begin{equation} (n+\frac{1}{2})\pi\approx \frac{4\pi^2}{9\Gamma\left(\frac{2}{3}\right)^3}\;\frac{\sqrt{H}} {(\sqrt{2}\mu R)^{1/3}}\;E -\frac{3^{3/2}\Gamma\left(\frac{2}{3}\right)^3}{8\pi}\;\left(\frac{\sqrt{H}}{\sqrt{2}\mu{\alpha'}^2}\right)^{1/3}\; Rm^{2/3} E^{1/3}, \label{mn} \end{equation} which can be inverted to find the energy as a function of $n$ \begin{equation} E\approx \alpha n+\beta(m^2n)^{1/3}, \end{equation} where $\alpha$ and $\beta$ are two constants given in terms of $\mu$, $R$ and $\alpha'$, which can be read off from equation (\ref{mn}), though their explicit forms are not important for our considerations. This has to be compared with the AdS geometry, where it was found that $E-n$ grows as $n^{1/2}$ \cite{Minahan}. It would be interesting to find the dual operator in the non-relativistic gauge theory. \section{Open string and drag force} So far we have considered closed strings in the geometry with Schr\"odinger isometry. As we have already mentioned in section two, open strings can also be used to explore different features of the model using the supergravity dual. They may be used to obtain, for example, the effective potential of the quark-anti-quark system. In this section we would like to study a non-relativistic quark moving through a hot plasma by making use of the gravity description of the system. In other words, following \cite{Gubser}, we would like to study the drag force on a quark in a non-relativistic field theory. To do so, one first needs the gravity dual of the non-relativistic field theory at finite temperature. In the context of the AdS/CFT duality we know that heating up the dual field theory generically corresponds to adding a black hole to the bulk geometry. Therefore we need to find the supergravity solution corresponding to a black hole in the geometry (\ref{sol1}).
The relevant supergravity solution for our studies is given in \cite{Mazzucato:2008tr} (see also \cite{Kovtun:2008qy}) \begin{eqnarray} ds^2&=&\left(\frac{r}{R}\right)^2\frac{1}{H}\left[-\left(\frac{g}{2}+{\mu^2 r^2f}\right)dt^2 -\frac{g}{2} d\xi^2+(1+f)dtd\xi+H dx_i^2\right]\cr &&\cr &+&\left(\frac{R}{r}\right)^2\left[\frac{dr^2}{f}+r^2\left(\frac{(d\chi+A)^2}{H}+ds_P^2\right)\right],\cr &&\cr e^{2\phi}&=&H^{-1}, \label{sol2} \end{eqnarray} where \begin{equation} g=-\left(\frac{r_H}{r}\right)^4,\;\;\;\;\;\;\;\;\;H=1-\frac{\mu^2r^2}{2}g,\;\;\;\;\;\;\;f=1+g. \end{equation} There is also a non-zero RR 4-form as well as an NSNS 2-form (see \cite{Mazzucato:2008tr}). To proceed we start from an ansatz for an open string representing an external moving source in the dual field theory. To write the open string ansatz it is useful to make the following change of variables \begin{equation} t\rightarrow \frac{1}{\sqrt{2}} (\xi-t),\;\;\;\;\;\;\;\;\xi\rightarrow \frac{1}{\sqrt{2}}(\xi+t), \end{equation} in which the above metric reads \begin{eqnarray} ds^2&=&\left(\frac{r}{R}\right)^2\frac{1}{H}\left[-(1+\frac{1}{2}\mu^2 r^2)fdt^2 +(1-\frac{1}{2}\mu^2 r^2 f) d\xi^2+\mu^2r^2fdtd\xi+H dx_i^2\right]\cr &&\cr &+&\left(\frac{R}{r}\right)^2\left[\frac{dr^2}{f}+r^2\left(\frac{(d\chi+A)^2}{H}+ds_P^2\right)\right]. \label{sol3} \end{eqnarray} In this notation our ansatz is given by \begin{equation} t=\tau,\;\;\;\;\;\;r=\sigma,\;\;\;\;\;\;x_1=vt+x(r),\;\;\;\;\;\;\xi={\rm constant}. \label{drag} \end{equation} Rewriting the relevant part of the metric in the form \begin{equation} ds^2=g_{tt}dt^2+g_{xx}dx_1^2+g_{rr}dr^2, \end{equation} the Nambu-Goto action becomes \cite{Herzog} \begin{equation} S=-\frac{1}{ 2\pi \alpha^{'}}\int dt dr \sqrt{-( g_{tt}g_{rr}+g_{tt}g_{xx} {x'}^{2}+g_{xx}g_{rr}v^{2})}, \end{equation} where the prime represents a derivative with respect to $r$.
Since the metric components are $t$ independent, the above action may be treated as a one dimensional mechanical system whose momentum is a constant of motion \begin{equation} \frac{-g_{tt}g_{xx} x'}{\sqrt{-( g_{tt}g_{rr}+g_{tt}g_{xx} {x}'^{2}+g_{xx}g_{rr}v^{2})}} =c=-2\pi\alpha' \pi_x={\rm constant}, \end{equation} which can be solved for $x'$, leading to \begin{eqnarray} {x'}^{2}=4\pi^2{\alpha'}^2\pi_x^{2}\bigg{(}\frac{g_{rr}(-g_{tt}-g_{xx}v^{2})}{g_{xx}g_{tt}(g_{xx}g_{tt} +4\pi^2{\alpha'}^2\pi_x^{2})}\bigg{)}. \label{a2} \end{eqnarray} In terms of the constant $\pi_x$ one has \cite{Herzog} \begin{equation} \frac{dE}{dt}={\pi_xv} ,\;\;\;\;\;\;\;\;\frac{dP}{dt}=\pi_x, \label{EP} \end{equation} where $E$ and $P$ are the energy and momentum the open string gains through its end point. To find $c$ we note that equation (\ref{a2}) makes sense physically only if the numerator and denominator vanish at the same point \cite{Gubser}. Setting the numerator of (\ref{a2}) to zero, for the supergravity solution (\ref{sol3}), one finds \begin{eqnarray} \frac{1}{2}\mu^{2}r^{6}_0+(1-v^{2})r^{4}_0-\frac{1}{2}\mu^{2}r_{H}^{4}r^{2}_0(1+v^{2})-r_{H}^{4}=0, \label{r0} \end{eqnarray} which can be solved for $r_0$. Plugging the solution $r_0$ into the denominator one arrives at \begin{equation} \pi_x=-\frac{v}{2\pi\alpha'}g_{xx}|_{r_0} . \label{mom} \end{equation} From (\ref{r0}) we see that for $\mu=0$ the velocity runs from $v=0$ to $v=1$ as $r_0$ varies from $r_H$ to infinity, as expected for a relativistic field theory. On the other hand, for $\mu\neq 0$, where the dual theory is supposed to be non-relativistic, we observe that as $r_0$ varies from $r_H$ to infinity the velocity ranges from zero to infinity. This is in fact due to the non-relativistic nature of the dual field theory. To proceed we need to solve equation (\ref{r0}) to find $r_0$ in terms of the velocity.
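As a consistency check, in the relativistic limit this can be done in closed form: setting $\mu=0$ in (\ref{r0}) gives $(1-v^2)r_0^4=r_H^4$, i.e. \begin{equation} r_0=\frac{r_H}{(1-v^2)^{1/4}}, \end{equation} so that $v$ indeed runs from zero to one as $r_0$ runs from $r_H$ to infinity. Plugging this $r_0$ into (\ref{mom}), with $g_{xx}=(r/R)^2$, one recovers the relativistic drag force of \cite{Gubser}, $\pi_x=-\frac{v}{2\pi\alpha'}\,\frac{r_H^2}{R^2}\,\frac{1}{\sqrt{1-v^2}}$.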
Then using the expression for the constant conjugate momentum in terms of the metric components presented in (\ref{mom}), one may read off, for example, the drag force from (\ref{EP}). To proceed we will consider two different limits, depending on whether the velocity is small or large. In these limits the drag force becomes \begin{equation} \frac{dP}{dt}\approx \left\{ \begin{array}{lll} &-\frac{v}{2\pi\alpha'}\;\frac{r_H^2}{R^2}\; (1+\frac{1}{2}v^2)\;\;\;\;\;\;\;\;\;\;\;\;&{\rm for}\;\;\;\;\;v\ll 1, \cr &&\cr &-\frac{v}{2\pi\alpha'}\;\frac{2}{\mu^2 R^2} (v^2+{\mu^4r_H^4-4})\;\;\;\;\;\;&{\rm for}\;\;\;\;\;v\gg 1. \end{array} \right. \end{equation} We recognize the first case as the non-relativistic ($v\ll 1$) limit of the drag force found in \cite{Gubser} for the relativistic field theory. The second case is a direct consequence of the non-relativistic nature of the dual field theory. Now consider a single non-relativistic particle with momentum $P$ and mass $M$, so that $P=Mv$. It is useful to formally rewrite the above expression for the drag force in terms of $P$. Then we can perform the integral, yielding \begin{equation} P(t)\approx \left\{ \begin{array}{lll} &P_0\ e^{-\frac{\pi R^2 T^2}{4\alpha'M}\;t}\;\;\;\;\;\;\;\;\;\;\;\;&{\rm for}\;\;\;\;\;P_0\ll M, \cr &&\cr &\left(\frac{1}{P^2_0}+\frac{2t}{\pi\alpha'\mu^2 R^2M^3}\right)^{-1/2} \;\;\;\;\;\;&{\rm for}\;\;\;\;\;P_0\gg M. \end{array} \right. \label{pp} \end{equation} In the above expression we have set $r_H^2=\frac{\pi^2 R^4}{2} T^2$ \cite{Mazzucato:2008tr}. Similarly one finds \begin{equation} E(t)\approx \left\{ \begin{array}{lll} &E_0\ e^{-\frac{\pi R^2 T^2}{2\alpha'M}\;t}\;\;\;\;\;\;\;\;\;\;\;\;&{\rm for}\;\;\;\;\;E_0\ll M/2, \cr &&\cr &\left(\frac{1}{E_0}+\frac{4 t}{\pi \alpha'\mu^2 R^2M^2}\right)^{-1} \;\;\;\;\;\;&{\rm for}\;\;\;\;\;E_0\gg M/2. \end{array} \right. 
\end{equation} This means that a particle with energy much less than its mass will lose its energy exponentially with time, while for a particle with kinetic energy much more than its mass, the energy is lost as $t^{-1}$. Another interesting feature of the model is that for slowly moving particles the relaxation time, $t_0 =\frac{2\alpha'M}{\pi R^2 T^2}$, falls off with temperature as $T^{-2}$, whereas for fast moving particles it is temperature independent, $t_0={\pi \alpha'\mu^2 R^2M^2}/4$. On the other hand, in the first case the relaxation time is $\mu$ independent, while in the second case it is proportional to $\mu^2$. This shows that even at zero temperature a non-relativistic particle will lose its energy. In the next section we explore this point in more detail. \section{Speed limit and drag force} To study a quark moving through a hot plasma, Gubser \cite{Gubser} has considered a moving open string in a geometry with a horizon, where the radius of the horizon is related to the temperature of the dual gauge theory. Although having the horizon is important to deal with the gauge theory at finite temperature, as far as the gravity computations are concerned we are free to redo the computations for a geometry without a horizon. Essentially what one needs to do is Wilson loop computations in the context of the AdS/CFT correspondence, with a moving string. Let us first consider a moving open string given by (\ref{drag}) in the $AdS_5\times S^5$ geometry parametrized as follows \begin{equation} ds^2=\left(\frac{r}{R}\right)^2(-dt^2+dx_1^2+dx_2^2+dx_3^2)+\left(\frac{R}{r}\right)^2dr^2+R^2d\Omega_5^2. 
\end{equation} Using the procedure of the previous section one gets \begin{equation} {x'}^2=(2\pi \alpha')^2\pi_x^2\frac{1-v^2}{\left(\frac{r}{R}\right)^4\left(\left(\frac{r}{R}\right)^4- (2\pi \alpha')^2\pi_x^2\right)}, \end{equation} which is well behaved only if $v<1$, reflecting the fact that the dual theory is relativistic and therefore there is a bound on the velocity. Moreover we observe that the constant conjugate momentum, $\pi_x$, is independent of $v$. Now consider the following ansatz for an open string moving in the background (\ref{sol1}) \begin{equation} t=\tau,\;\;\;\;\;r=\sigma,\;\;\;\;\;x_1=vt+x(r),\;\;\;\;\;\xi={\rm constant}. \end{equation} In this case one finds \begin{equation} {x'}^2=(2\pi \alpha')^2\pi_x^2\frac{\mu^2r^2-v^2}{\left(\frac{r}{R}\right)^4\mu^2r^2 \left(\left(\frac{r}{R}\right)^4\mu^2r^2- (2\pi \alpha')^2\pi_x^2\right)}. \end{equation} We observe that in this case there is no bound on the velocity: it can range from zero to infinity. This is indeed a reflection of the fact that the dual theory is non-relativistic. To avoid an imaginary solution one arrives at \begin{equation} \pi_x=-\frac{v}{2\pi\alpha'}\;\frac{v^2}{\mu^2 R^2}. \end{equation} For a particle with mass $M$ and momentum $P=Mv$ the drag force reads \begin{equation} \frac{dP}{dt}=-\frac{1}{2\pi\alpha'M^3\mu^2 R^2}\;P^3, \end{equation} which yields \begin{equation} P=\left(\frac{1}{P_0^2}+\frac{t}{\pi \alpha' M^3 \mu^2 R^2}\right)^{-\frac{1}{2}}. \end{equation} This means that in this case, even though the system is at zero temperature, the moving particle loses its energy, and the relaxation time is set by $\mu^2$, whereas in the hot plasma it is controlled by the temperature. This may be related to the fact that $\mu$ in the dual non-relativistic field theory could be interpreted as the chemical potential. The drag force calculations can be generalized to other backgrounds obtained from the Null Melvin Twist procedure applied to $Dp$-branes for $p\leq 4$.
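For completeness, the integral behind the zero-temperature result above is elementary: writing the drag force as $\frac{dP}{dt}=-kP^3$ with $k=\frac{1}{2\pi\alpha' M^3\mu^2 R^2}$, one has \begin{equation} \frac{d}{dt}\left(\frac{1}{P^2}\right)=2k,\;\;\;\;\;\;{\rm so\;\; that}\;\;\;\;\;\; \frac{1}{P^2(t)}=\frac{1}{P_0^2}+\frac{t}{\pi\alpha' M^3\mu^2 R^2}, \end{equation} which is precisely the power-law decay of the momentum quoted above.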
The relevant solutions are given by \cite{Mazzucato:2008tr} \begin{eqnarray}\label{dp} ds^2&=&\left(\frac{r}{R}\right)^{\frac{7-p}{2}}\frac{1}{H}\left[-(1+\frac{1}{2}\mu^2 r^2)fdt^2 +(1-\frac{1}{2}\mu^2 r^2 f) d\xi^2+\mu^2r^2fdtd\xi+H dx_i^2\right]\cr &&\cr &+&\left(\frac{R}{r}\right)^{\frac{p+1}{2}}\left[\frac{dr^2}{f}+r^2\left(\frac{(d\chi+A)^2}{H}+ds_P^2\right)\right], \end{eqnarray} where $f=1-(r_H/r)^{7-p}$ and $H=1+\mu^2 r_H^{7-p}/2r^{5-p}$. Going through the computations of section four, for slowly moving particles we find \begin{equation} P=P_0\;e^{-\frac{t}{t_0}},\;\;\;\;\;\;{\rm with}\;\;\;\;\;\;\;t_0=\left(\frac{(7-p)^4} {2^{p+1}\pi^{p-1}}\right)^{\frac{1}{5-p}}\;\frac{\alpha' M}{(RT)^{\frac{4}{5-p}}}, \end{equation} while for fast moving particles we get a universal result given by that in equation (\ref{pp}). Therefore all the models parametrized by $p\leq 4$ exhibit the same non-relativistic behavior, though in the case of slowly moving particles the characteristic time of energy loss is fixed by a different power of the temperature, i.e. $t_0\sim (RT)^{\frac{4}{p-5}}$. \section{Discussions and conclusions} In this paper we have studied a number of features of non-relativistic CFTs by making use of a supergravity solution of type IIB string theory. Although we have mainly considered a three dimensional CFT whose gravity dual can be obtained from the D3-brane using TsT duality, we expect that the general features explored in this paper apply to other dimensions too. Since the world volume theory of the D3-brane is a supersymmetric gauge theory, and since the supergravity solution (\ref{sol1}) is obtained from the D3-brane by the Null Melvin Twist procedure, we expect that the resultant theory still contains a gauge field. Of course the procedure reduces the number of supersymmetries as well as the space time symmetry.
Indeed, for the case where the internal space is a sphere, the background preserves eight supercharges. The space time symmetry is also reduced, to the Schr\"odinger symmetry. Therefore we expect the field theory dual to type IIB string theory on the supergravity solution (\ref{sol1}) to be a non-relativistic superconformal gauge theory in three dimensions. Since the supergravity dual can be embedded in type IIB string theory, it is natural to study semi-classical strings in this background to explore some properties of the dual non-relativistic superconformal gauge theory. In particular we have seen that the effective potential of the external objects is proportional to $l^{-2}$, as expected for a non-relativistic CFT. One may find the effective potential of a quark-antiquark pair as a function of the distance $l$ for arbitrary $p$, where the corresponding supergravity solutions are given by (\ref{dp}) with $r_H=0$. This has been done in \cite{Alishahiha:2003ru}, where the authors argued that the interaction between external objects is due to their lightlike dipole moments given by $\mu$ \begin{equation} E\sim -\frac{\mu}{\alpha'}\;\left(\frac{R^4}{l^2}\right)^{\frac{2}{5-p}}. \end{equation} For $p=3$, which we have mainly considered in this paper, we have interpreted the effective potential as that of a non-relativistic CFT with dynamical scaling $t\rightarrow \lambda^2 t$ and $x\rightarrow \lambda x$. For the other cases there is no such clear interpretation. We have also considered folded rotating closed strings in the geometry with the isometry of the Schr\"odinger group, where we have shown that the anomalous dimension of the corresponding dual operator exhibits logarithmic corrections similar to those in the relativistic gauge theory for $\mu R\ll 1$. It would be interesting to find such a behavior directly from the non-relativistic gauge theory.
We have also studied circular pulsating closed strings, where we have observed that although at leading order the anomalous dimension is proportional to the winding number of the string, $\Delta\sim n$, at subleading order it grows as $n^{1/3}$. It is worth noting that in the case of the four dimensional relativistic gauge theory the subleading correction grows as $\Delta-n\sim n^{1/2}$. This should be taken as an effect of the non-relativistic nature of the theory. We also note that whenever we have deviations from the relativistic field theory, or on the gravity side from $AdS_5$ gravity, the deviations are controlled by the dimensionless parameter $\mu R$. Therefore, assuming $\mu R\ll 1$, this parameter can be thought of as an expansion parameter by which we can study the non-relativistic three dimensional conformal gauge theory as a perturbation of the four dimensional ${\cal N}=4$ superconformal gauge theory. In particular, since the non-relativistic CFT we have been considering may be obtained by reduction (contraction) from the ${\cal N}=4$ 4D theory, we might suspect that the AdS/CFT dictionary, to some extent, works the same as before. If correct, the operator dual to the dilaton would be ${\rm Tr} F^2$, though in this case the anomalous dimension of the operator gets higher loop corrections, such that summing up all the loops we get $\Delta=2+\sqrt{2+n^2 \mu^2 R^2}$. Actually this is the expression we have given in section two for the case of $M=\frac{n}{R}$, where from the gravity point of view $M$ is the momentum of the dilaton in the lightlike direction $\xi$. By $F^2$ we mean the reduction of the four dimensional $F^2$ to three dimensions. On the other hand, since the three dimensional theory is a supersymmetric theory, we expect to have scalars in the model, which might be identified with the coordinates on which the isometry of the internal space acts.
Therefore, following the general philosophy of \cite{Gubser:2002tv}, one might expect that the operators dual to the folded rotating closed string with spin $S$ have the following schematic form \begin{equation} {\cal O}_S\sim {\rm Tr} X \nabla^S X, \end{equation} where $X$ is a three dimensional scalar field. At leading order the anomalous dimension should be $\Delta\sim S$. But as we have seen, the anomalous dimension of the operator gets corrections, and the corrections depend on $\mu R$, which controls the non-relativistic effects. It would be interesting to study these operators from the non-relativistic gauge theory point of view. We have also considered the non-relativistic three dimensional CFT at finite temperature. In particular we have considered an open string moving in the black hole geometry corresponding to (\ref{sol1}). In the dual picture this means that we are dealing with a quark moving through the hot plasma. Following \cite{Gubser} we have evaluated the drag force for this case too. The first observation we have made is that there is no speed limit in this case, pointing toward the fact that the theory is non-relativistic\footnote{The main motivation of the paper \cite{Alishahiha:2003ru}, where the metric (\ref{sol1}) was first presented, was to study the non-local features of the dual theory. In light of the recent studies, as well as our observation in this paper, we see that the dual theory is indeed non-local due to its non-relativistic nature.}. Moreover, depending on the initial energy of the moving quark, it loses its energy either exponentially, as $e^{-t/t_0}$, or as a power law, $(t_0/t)$. We have also observed that the relaxation time for slowly moving particles depends on the temperature and is independent of $\mu$, though for fast moving particles it only depends on $\mu$. Drag-force-like computations may also be done for the non-relativistic field theory even at zero temperature.
Doing so, one finds that unlike the AdS case, where one only gets a causal speed limit, in the non-relativistic case one arrives at non-trivial results, though in comparison with the finite temperature system the physics is controlled by $\mu$. It would be interesting to understand this observation from the non-relativistic gauge theory dual. \vspace*{1cm} {\bf Acknowledgments} We would like to thank Farhad Ardalan for useful discussions and comments. We would also like to thank Shahin Rouhani for discussions on the AdS/non-relativistic CFT duality. This work is supported in part by the Iranian TWAS chapter at ISMO.
\section{Discussion} Here, we argue that the most striking feature of the obtained phase diagram shown in Fig. \ref{fig2}(a) is that the transition field increases with increasing temperature. This is completely contrary to the common tendency of previous reports on spin crossover compounds such as cobalt oxides [Sr$_{1-x}$Y$_{x}$CoO$_{3}$ \cite{Kimura2008}, (Pr$_{1-y}$Y$_{y}$)$_{0.7}$Ca$_{0.3}$CoO$_{3}$ \cite{Marysko, Naito2014}] and coordination compounds (Fe[(phen)$_{2}$(NCS)$_{2}$] \cite{Qi}, [Mn$^{\mathrm{III}}$(taa)] \cite{Kimura2005}), where the transition fields are observed to decrease with increasing temperature, as schematically shown by the dashed curve in Fig. \ref{fig2}(a). This tendency can be readily anticipated by considering the spin crossover in the local ion picture, where the ground state is less magnetic (i.e., LS) and the excited state is more magnetic (e.g., HS or IS). In this situation, the magnetic state becomes increasingly populated with increasing either $T$ or $B$, due to the entropy or the Zeeman energy contribution, respectively. In LaCoO$_{3}$, the ground state is the LS phase \cite{Asai1994}, denoted as phase (A1) in Fig. \ref{fig2}(a), whose entropy is considered to be small. The thermally induced paramagnetic state \cite{Asai1994}, denoted as phase (A2) in Fig. \ref{fig2}(a), is considered to possess a larger entropy due to the magnetic, orbital, and phonon degrees of freedom of HS or IS species and the mixing entropy of the LS-HS or LS-IS complexes \cite{Biernacki2005}. Based on the local model for spin crossover, it is expected that the transition field decreases with increasing temperature and that phases (B1) and (B2) merge with phase (A2) in the high-temperature and high-field region, as shown by the dashed curve in Fig. \ref{fig2}(a). In the present result, however, it is clear that phases (B1) and (B2) are distinct from phase (A2), as shown in Fig. \ref{fig2}(a).
It is now clear that the local model for spin crossover compounds \cite{Kyomen2003, Biernacki2005} is not applicable to the $B$-$T$ phase diagram of LaCoO$_{3}$, suggesting that phases (A2) and (B1), (B2) are distinct in origin, which is contrary to the previous notion \cite{Sato2009, Rotter}. We now discuss the origin of the observed high-field phases (B1) and (B2). In the present observation, a reduction of $S$ is observed in the transition from phase (A2) to phase (B2), as shown in Fig. \ref{fig2}(c). This may suggest that some order is present in phase (B2). The candidates for the order of phase (B2) are (i) antiferromagnetic order (AFM), (ii) spin state crystalline (SSC), and (iii) orbital order (OO). In the SSC, the spin states of Co$^{3+}$ are spatially ordered. Among them, we believe the SSC is the most plausible for the following reasons. First, because AFM becomes unstable under larger magnetic fields, its N\'{e}el temperature is expected to decrease with increasing magnetic field. However, as shown in Fig. \ref{fig2}(a), the transition temperature of (B2) increases with increasing magnetic field. Therefore, AFM is excluded. Next, we consider the SSC. In phase (A2) the spin states are disordered. At the magnetic transition, the number of Co$^{3+}$ ions in magnetic spin states increases and the spin states become spatially ordered, forming the SSC. This scenario is in good agreement with the experimental observations, namely, the sudden increase of the magnetization and the decrease of the entropy. Thus, we regard the SSC as present in phase (B2). Lastly, we consider OO. The orbital degree of freedom depends strongly on the spin state. Therefore, if the spin states are disordered, it should be very difficult for the OO to appear. Besides, OO itself does not change the magnetization. Therefore, OO alone cannot be the order parameter of phase (B2).
On the other hand, OO on the background of the SSC appears plausible. Such spin state ordering is also suggested in recent theories \cite{Knizek, Kunes2011, Kanamori2011, Krapek} and high-field experiments \cite{Moaz, Rotter}. Another feature found in the obtained phase diagram in Fig. \ref{fig2}(a) is the sudden change in the transition fields at $T^{*}$, making the two high-field phases (B1) and (B2) distinct. The phase boundary between phases (B1) and (B2) appears horizontal ($dT/dB=0$) at $T^{*}$. This means that $\Delta M/\Delta S=0$ in the virtual transition from phase (B1) to (B2), based on the Clausius-Clapeyron relation. We deduce $\Delta M=0$, assuming that $\Delta S$ is not too large. As a possible origin of the two distinct phases (B1) and (B2), we propose that, besides the SSC, another order that does not change $M$ may be present in phase (B1). This is because the SSC of phase (B2) is expected to be even more stable in phase (B1) due to the lower temperature. Possible origins for the order of (B1) in addition to the SSC are (i) AFM, (ii) OO, (iii) an excitonic condensate (EC), and (iv) an SSC with a spatial pattern that is different from that of (B2). We note here that it is difficult at present to further narrow down those possibilities, except for the AFM. The AFM in phase (B1) is excluded because $M$ would then have to be smaller than that of phase (B2), which contradicts the experimental observation. The EC may be plausible, although further experimental evidence is needed to confirm it. The EC has recently been proposed as the origin of the insulating phase of LaCoO$_{3}$ and Pr$_{0.5}$Ca$_{0.5}$CoO$_{3}$ \cite{Kunes2014, Kunes001, Kunes002, Kaneko2012, Kaneko2014, Kaneko2015}. In a very recent report, a field-induced EC was predicted by a dynamical mean-field calculation \cite{Sotnikov}. Switching between two different SSCs may also be possible.
SSCs with various spatial patterns were considered with generalized gradient approximation (GGA+$U$) calculations in Ref. [\onlinecite{Knizek}]. Two SSCs with the same $M$ may undergo a temperature-induced transition from one to the other, assisted by the entropy difference between those phases due to a lattice or orbital contribution. The OO in phase (B1) is also in good agreement with the experimental results. Co$^{3+}$ in the IS or HS state possesses orbital degrees of freedom in the $e_{g}$ and $t_{2g}$ orbitals, respectively. The formation of the OO in phase (B1) will stabilize it energetically, which may well result in a reduction of the transition field to phase (B1) as compared to that to phase (B2), in accord with the observed change of the transition field at $T^{*}$. In addition, the flat phase boundary between (B1) and (B2) is also in good agreement with an order-disorder phase transition of orbitals \cite{Murakami} or a switching between different OOs \cite{Mcqueen}, because these can occur with $\Delta M=0$. In those cases, the orbitals are ordered in phase (B1), while in phase (B2) they are either disordered or form an OO with a different spatial pattern. For these reasons, we consider that, in phase (B1), OO may be present in addition to the SSC. Orbital ordering taking place along with spin state ordering has also been claimed in YBaCo$_{2}$O$_{5}$ \cite{Vogt}, Sr$_{3}$YCo$_{4}$O$_{10.5}$ \cite{Nakao}, thin films of LaCoO$_{3}$ \cite{Fujioka}, and a previous high-field study on LaCoO$_{3}$ \cite{Moaz}. We note, however, that the origin of phases (B1) and (B2) is still an open question to be explored in future studies. In conclusion, high-field magnetization measurements of LaCoO$_{3}$ up to 133 T were carried out in a wide temperature range from 2 to 120 K. At $T>T^{*}$, we observed a novel magnetic transition at $B>100$ T. In addition, we observed the previously reported magnetic transition at $\sim65$ T at $T<T^{*}$.
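For reference, the thermodynamic input used here is the magnetic analogue of the Clausius-Clapeyron relation: along a first-order phase boundary $B_{c}(T)$, \begin{equation} \frac{dB_{c}}{dT}=-\frac{\Delta S}{\Delta M}, \end{equation} so the entropy change across a transition follows from the slope of the phase boundary and the magnetization jump; in particular, a boundary with $dT/dB=0$ requires $\Delta M=0$ for any finite $\Delta S$.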
Based on the obtained $B$-$T$ phase diagram and the Clausius-Clapeyron relation, it was found that the high-field phases possess lower entropy than the low-field phases, and that the high-field phases are separated into two phases at $T=T^{*}$. We argue that the observed magnetic transitions take place from the LS-HS or LS-IS disordered phase to the ordered SSC of LS-HS or LS-IS complexes. At $T<T^{*}$, a spatially different SSC or orbital order may develop. The authors acknowledge M. Tokunaga and S. Ishihara for fruitful discussions, and T. T. Terashima, S. Takeyama and K. Yoshikawa for experimental and various other support. This work was supported by JSPS KAKENHI Grant-in-Aid for Young Scientists (B) Grant Number 16K17738.
\section{Background and Introduction} Conversational agents are becoming part of our lives. These systems generally fall into two categories, \textit{task-oriented assistants} and \textit{chatbots} \cite{Shum2018}. Task-oriented assistants are designed to fulfill a specific task by having single-turn or multi-turn conversations with users to retrieve information from them (e.g. \textit{Microsoft Cortana}, \textit{Apple Siri}, \textit{Google Assistant}). Chatbots are designed to have more socially-oriented chit chat with users. The goal is usually to mimic human-human conversations and engage users in those conversations for as long as possible (e.g. \textit{ELIZA}\cite{Weizenbaum:1966:ECP:365153.365168}, \textit{XiaoIce}\cite{zhou2018design}, or \textit{TickTock}\cite{yu2015ticktock}). In order to have extended human-like conversations, some researchers have studied how to incorporate social language into chatbots to generate proper interpersonal responses and build an emotional connection with users \cite{Shum2018}. For example, \textit{XiaoIce}, Microsoft's social chatbot in China, can respond with empathetic language and show care while chatting with users. However, there are only a handful of studies that focus on incorporating social capabilities into task-oriented assistants \cite{Bickmore:2001:RAM:365024.365304, Cassell2003, Bickmore:2009:TTC:1518701.1518891, walker1997improvising, brixey2019building} even though prior literature has suggested that these factors might play an important role in the process of task-oriented conversations and be associated with better user engagement and satisfaction \cite{1545303, Liao:2016:YSS:2901790.2901842, Bickmore:2005:EML:1067860.1067867, Bickmore:2009:TTC:1518701.1518891}. Thus, in this work we try to answer the following research questions: (1) Does social language used by humans in task-oriented conversations affect user responsiveness and task completion? If so, how? 
(2) Can we effectively introduce social aspects of language in the responses of a task-oriented conversational agent? We focus on the customer service domain, in which customer service personnel help drivers sign up with a ride-sharing provider, since customer service is a typical application area for automated task-oriented assistants. Moreover, driver on-boarding support is a closed-domain problem and a well-defined task. We first describe an empirical study to quantitatively examine the relationship of customer service representatives' use of social language to drivers' responsiveness and the completion of their first trip, based on an analysis of conversations between drivers and human agents. The social language is measured using existing pre-trained machine learning models developed in prior literature. After that, we apply the findings to build an end-to-end deep learning model to generate automated agent responses given driver inquiries. Our aim is to train a task-oriented agent that can produce dialogues with the desired level of social language while still maintaining the necessary content to guide drivers through the on-boarding process and lead them to complete their first trip. The main contributions of this work are as follows: \begin{enumerate} \item Systematically analyze the relationship between social language and user responsiveness as well as task completion in a large real-world conversational dataset. \item Propose a deep learning framework for task-oriented dialogue generation that includes a social language understanding and production component. \item Use human judgment and automatic assessment methods to evaluate the extent to which the new dialogue generation model preserves task-oriented content and incorporates social language generation. 
\end{enumerate} \section{Related Work} In 1978, Bloom and Lahey proposed a language development framework that suggests that language has three components: \textit{content}, \textit{form}, and \textit{use} \cite{bloom1978language}. \textit{Content} refers to semantics and underlying meaning in text; \textit{form} is related to the rules of language such as morphology and syntax. The \textit{use} of language, also called \textit{pragmatics} or \textit{social language}, refers to how language is used and interpreted in different social settings or contexts. They point out that social language is important for various interpersonal functions. Humans use a variety of social language strategies to maintain and develop interpersonal relationships, such as increasing intimacy through self-disclosure \cite{collins1994self} and building common ground through small talk \cite{clark1996}. The current research focuses on better understanding the use of social language in a human-to-human, task-oriented context and proposes an architecture to computationally generate such language. We start by reviewing the related work on social language generation in task-oriented assistants and then summarize the literature on politeness and positivity that motivates our work. \subsection{Social Language in Task-Oriented Conversational Agents} Despite the importance of social language for human-human relationships and interactions, most task-oriented conversational assistants focus only on presenting the right content to users; nevertheless, a few have tried to incorporate social language, with most work in this direction done in the context of embodied conversational agents. Embodied agents have a physical or graphical representation of a body or face and are usually capable of interacting with users through multiple modalities (e.g. linguistic content, tone of voice, gestures, facial expression). 
Human social communication and interaction is a complex process which often involves coordination across the different modalities. Some research has examined how to effectively facilitate this coordination. For example, Bickmore and Cassell integrated a theory of social dialogue in a real estate conversational agent (REA) and demonstrated that small talk can help the virtual agent build trust with users \cite{Cassell2003, Bickmore:2001:RAM:365024.365304}. However, the REA was not fully automated but controlled by a human wizard who followed scripts during the experiment. Following up on this work, Zhao et al. \cite{zhao2014towards} proposed a theoretical framework and a computational model \cite{papangelis2014towards} of how humans use various conversational strategies to build, maintain, or break rapport. This framework was used to develop SARA \cite{matsuyama2016socially}, an embodied conversational agent able to build rapport with humans and perform tasks such as calendar scheduling. In addition to REA and SARA, examples of embodied agents that use social language to achieve a task include Greta \cite{bevacqua2007expressive, niewiadomski2008expressions}, Ellie \cite{devault2014simsensei}, Zara \cite{siddique2017zara}, Nora \cite{winata2017nora} and other virtual \cite{gratch2007creating, paranjape2018towards, d2012autotutor} or robotic agents \cite{leite2013social}. However, because many interactions with virtual agents now occur online via text, it is important to understand how to encode and generate social language, especially when text is the only available modality. In many commercial systems, templates, rules, or content filters are used to generate social norms of language. The recent Alexa Prize competitions \cite{ram2018conversational, khatri2018alexa} have fostered interesting work that requires an assistant to fuse task-oriented goals (e.g. play music) with social talk. 
However, because of various challenges related to deploying agents to real customers, none of these efforts use statistical methods to explicitly model social aspects of language generation. Instead, most attempts are template-based or script-based, interleaving task-related utterances with socially-related utterances. Zhao et al. \cite{zhao2018sogo} proposed to alternate task-related utterances (``Task Phase'' - negotiation) with utterances related to social strategies (``Social Phase''), for example expressing gratitude or self-disclosure. Chandar et al. \cite{chandar2017leveraging} proposed an agent to assist with an onboarding task (e.g. by providing information, searching, or proactively reminding potential employees of pending tasks) that also includes chit-chat capabilities. However, this agent did not model social language directly, but used a neural network to select the most appropriate response from a pre-defined list. Our work differs from these by building an end-to-end deep neural network to jointly understand input utterances and generate output utterances infused with social language, instead of having separate modules for the recognition and understanding of social features and for language generation. \subsection{Positivity and Politeness in Task Completion} Gnewuch and colleagues \cite{gnewuch2017towards} proposed twelve design principles for social conversational agents, according to which conversational agents should not only adhere to Grice's maxims \cite{grice1975logic} but also be responsive to social cues, since humans expect an experience that resembles human communication \cite{moon2000intimate, moon2003don, nass2000machines, knijnenburg2016inferring}. In particular, they state that conversational agents \emph{``[...]
appearance or language) that correspond to these service agent characteristics as well as fit to the context in which they are implemented.''} \cite[p. 8]{gnewuch2017towards}. Because our agent is text-based, we focus on natural language only. Although there are many variants of social language, we concentrate here on \textit{politeness} and \textit{positivity} because prior research suggests that these two types of social language strategies can be important for more natural human-machine conversations and customer service interactions, as described below. According to politeness theory \cite{levinson1987politeness}, politeness is a common social language strategy used for saving ``face''. It helps regulate the social distance between two parties and mitigates face threats, which could otherwise make either party feel awkward or embarrassed. Thus, the ability of a conversational agent to respond in a polite manner can protect users from ``losing face''. Politeness has also been shown to lead to the development of trust \cite{svennevig2000getting,Bickmore:2001:RAM:365024.365304} and rapport \cite{spencer2005politeness, tickle1990nature}, which in turn leads to better communication and performance in collaborative tasks \cite{bernieri1991interpersonal,drolet2000rapport,kang2012towards,garbarski2016interviewing,nadler2003rapport}. Positivity is ``the quality or state of being positive'' \cite{positivity_nd}. Researchers have shown that positivity can engage employees and improve their performance in the workplace \cite{sweetman2010power}. Positivity is also contagious \cite{kramer2014experimental} and leads to likability \cite{dainton1994maintenance}. We therefore argue that when users work with a conversational agent to accomplish a task, they would also like to talk to one that exhibits more positive behavior or uses more positive language. 
We therefore hypothesize that users would prefer a conversational agent with more polite or more positive language and would be more willing to engage with, respond to, and persist in the interaction when conversing with such an agent. In the context of a task in which ride-share drivers are registering, this type of language should lead to more drivers completing the tasks required for on-boarding and taking their first trip. Appropriateness of social language and communication strategies such as positivity and politeness, however, may vary according to context and over time \cite{johnstone1989linguistic,giles1991,spencer2005politeness,tickle1990nature}, meaning that simply designing polite language templates may not be enough. Our model addresses this by taking into account the context of the conversation and the tasks completed so far, and generates language with the desired content and social language norms. \section{Research Context: Driver On-boarding} A ride-sharing company provides rides in response to real-time requests. Before drivers can start to take passengers, they need to go through an on-boarding process. Our data originates from an initiative in a large ride-sharing provider where new drivers who created an account to start the on-boarding process are paired with a customer support representative (CSR) who guides them via Short Message Service (SMS) exchanges. Typically, the driver must complete a series of tasks: consenting to a background check, providing proof of registration and insurance, passing a vehicle inspection, providing a photo, and taking a city knowledge test. Once all the documents have been submitted and verified and all the checks passed, the driver is approved and can complete his/her first trip, which marks the completion of the on-boarding task. 
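The on-boarding funnel described above can be represented as a simple checklist. The sketch below is illustrative only; the step identifiers are hypothetical names for the tasks listed in the text, not fields from the production system:

```python
# Hypothetical identifiers for the on-boarding tasks described above.
REQUIRED_STEPS = {
    "background_check", "registration", "insurance",
    "vehicle_inspection", "photo", "city_knowledge_test",
}

def is_approved(completed_steps):
    """A driver is approved once every required step has been completed."""
    return REQUIRED_STEPS <= set(completed_steps)

print(is_approved({"background_check", "photo"}))  # False
print(is_approved(REQUIRED_STEPS))                 # True
```

Approval then unlocks the first trip, which marks completion of the whole on-boarding task.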
The job of the CSR is to guide and encourage the driver through the process by answering general questions, providing status updates, reminding drivers of next steps, and resolving any problems during the on-boarding process. The resulting dialogues are SMS exchanges between a driver and the driver's CSR, which may last several days or weeks. We collected 4 million driver-CSR message pairs exchanged during on-boarding at the ride-sharing company. All the messages were de-identified. The dataset was used to 1) analyze the impact of politeness and positivity in CSR messages on driver responsiveness and task completion and 2) build conversational models to generate agent responses with social language cues. \section{Study 1: The Relationship between Social Language and User Engagement} The goal of this study is to empirically estimate how politeness and positivity in CSR responses predict driver engagement by building statistical analytic models. We measured politeness and positivity using pre-trained machine learning models. We operationalized driver engagement in terms of drivers' responsiveness and completion of their first trip. \subsection{Dependent Variables} We examined two dependent variables relating to driver engagement: \subsubsection{Driver responsiveness} If the form of what CSRs say is relevant to drivers, then the likelihood of drivers' response should be affected by CSR message style. Based on this assumption, we created a binary variable to measure \textit{short-term driver engagement}. It was set to 1 if a driver responded to a CSR message within 24 hours and 0 otherwise. \subsubsection{Completion of first trip} In addition to the short-term driver engagement metric, we also considered the \textit{long-term impact} of CSR responses on drivers' finishing the task, i.e., whether CSRs successfully guided drivers through the on-boarding funnel so that drivers completed their first trip. 
Specifically, this binary measure was set to 1 if a driver completed his/her first trip within 7 days after a CSR sent him/her a message and 0 otherwise. \subsection{Independent and Control Variables} Given a CSR message, we extracted its politeness and positivity levels as independent variables using off-the-shelf classifiers. We went through several steps to pre-process and clean CSR messages. Messages were tokenized with the NLTK toolkit \cite{Bird:2009:NLP:1717171}, and personally identifiable information (PII) such as URLs, email addresses, names, dates, and numbers was replaced with standard tags. Afterwards, we utilized pre-trained machine learning models to extract the following features: \subsubsection{Politeness} We used a state-of-the-art off-the-shelf politeness machine learning model to measure the politeness level of a CSR message \cite{DBLP:conf/acl/Danescu-Niculescu-MizilSJLP13}. The model was an SVM classifier trained on a corpus with politeness labels and based on domain-independent lexical and syntactic features identified by politeness theory. In particular, the features were developed to reflect a series of politeness strategies such as gratitude (e.g. \textit{\textbf{I} really \textbf{appreciate} your help.}), apologizing (e.g. \textit{\textbf{Sorry} to disturb you...}), and the use of \textit{please} (e.g. \textit{Would you \textbf{please}...}). Experiments conducted by the authors showed that these linguistically informed features generalize well across domains, suggesting they transfer to interactions between CSRs and drivers. The classifier outputs a politeness score between 0 and 1 and performs almost as well as human raters across domains. Below are a few examples of CSR messages with different levels of politeness scores generated by the classifier: \begin{itemize} \item (0.27) \textit{Please download the Partner app to confirm your account: <URL>} \item (0.43) \textit{Hello <Name>, are you still interested in partnering with us? 
You're so close to hitting the road and making some money while driving.} \item (0.97) \textit{Hello, my name is <Name> your Account Specialist. Good news! It looks like your background check has passed! The final step to earning with us is uploading your registration. Could you please text me a clear photo of your registration so I can upload it to your account?} \end{itemize} \subsubsection{Positivity} Dainton et al. \cite{dainton1994maintenance} defined positivity as language that is upbeat and cheerful. Since there is no existing off-the-shelf model for positivity, we decided to measure positive sentiment as a proxy. Sentiment refers to the contextual polarity or emotional affect of a text \cite{Wilson:2005:RCP:1220575.1220619}, which is semantically similar to positivity. To evaluate positive sentiment, we used VADER, a rule-based sentiment analyzer \cite{DBLP:conf/icwsm/HuttoG14}. VADER was built with a combination of lexical features and general syntactical and grammatical rules to capture the expression and emphasis of sentiment. The authors compared its performance with eleven benchmarks including both lexicon-based approaches (e.g. Linguistic Inquiry and Word Count (LIWC) \cite{pennebaker2001linguistic}) and machine learning approaches such as a Naive Bayes classifier. They showed that VADER outperforms human judges and is generalizable across contexts. Given a piece of text, VADER produces a 3-dimensional measurement to estimate the extent of positive, negative, and neutral sentiment in it. The three sentiment scores represent the proportion of each sentiment in the text and sum to one as in the example shown below. Since the positive sentiment score is highly negatively correlated with the neutral and negative scores, to avoid multicollinearity we included only the positive sentiment score in regression models to predict driver responsiveness and trip completion. \begin{itemize} \item (pos=.49, neg=.0, neu=.51) \textit{Nice! 
The 2 links I sent you will be your best friends. Good luck! Let me know how it goes for you.} \end{itemize} \subsubsection{Control Variables} In addition to some basic demographic information about drivers such as their age, we also measured the following control variables. By controlling for these, we can make claims about the impact of CSR messages rather than about the driver himself/herself. \begin{itemize} \item\textbf{Sign-up city} is a dummy variable controlling for the city where a driver signed up to become a partner with the ride-sharing company. \item\textbf{Days since signup} is the number of days since the driver registered on the platform. \item\textbf{Number of previous driver messages} is a measure of how many messages the driver sent to the CSR since he/she signed up. Different drivers might have different likelihoods of replying to a CSR message, so we used this variable to control for the response variability among drivers. \item\textbf{CSR message length} is the total number of characters in the CSR message. \end{itemize} \noindent Except for the binary and dummy variables, all the numerical control and independent variables were standardized, with a mean of zero and a standard deviation of one. Additionally, we took the logarithm of the variables \textit{Days since signup} and \textit{Number of previous driver messages} before they were standardized since they had a skewed distribution. \subsection{Analyses and Results} This analysis seeks to statistically test the effect of the level of politeness and positivity in CSR messages on driver responsiveness and completion of their first trip. The unit of analysis was a CSR message. Since the same driver might receive multiple CSR messages from his or her CSR, we built random-effects linear regression models which grouped CSR messages at the driver level to deal with non-independence of observations. The results are shown in Table~\ref{tab:study1} (Model 1). 
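The variable preprocessing described above can be sketched with the standard library. This is a minimal illustration with made-up values; using the population standard deviation and \textit{log1p} are assumptions on our part:

```python
import math
import statistics

def standardize(values):
    """Z-score a variable so it has mean zero and unit standard deviation."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mean) / sd for v in values]

def log_then_standardize(values):
    """Skewed variables such as 'Days since signup' are log-transformed
    before standardization; log1p guards against zero-day values."""
    return standardize([math.log1p(v) for v in values])

days_since_signup = [0, 1, 3, 7, 30, 90]  # illustrative values
z = log_then_standardize(days_since_signup)
print([round(v, 2) for v in z])
```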
We omitted the \textit{Sign-up city} variables in the table since there are many of them, and we included them in the models mainly for control purposes rather than for their interpretability. First, considering the control variables, the driver responsiveness model shows that older drivers and those who were more responsive in previous conversations were more likely to reply to the current message; drivers who had signed up a longer time ago and received longer CSR messages tended not to respond. On the other hand, except for driver age, all the control variables had a positive effect on the completion of the drivers' first trip. Next, we examined the independent variables. CSR messages with a higher level of politeness were associated with drivers who were more likely to respond and to go through the on-boarding process and complete their first trip. However, although the positive sentiment score was positively correlated with drivers' completing their first trip, it negatively predicted driver responsiveness. The negative association of positivity with driver responsiveness is counter-intuitive. We found the same negative association when replacing the VADER positive sentiment score with the positive emotion measure from Pennebaker et al.'s Linguistic Inquiry and Word Count program \cite{pennebaker2001linguistic}, suggesting this result is not a measurement artifact. 
\begin{table*}[t] \centering \begin{tabular}{lrrrr} \toprule & \multicolumn{2}{c}{Model 1} & \multicolumn{1}{c}{Model 2} & \multicolumn{1}{c}{Model 3} \\ & \multicolumn{2}{c}{\textbf{(all messages)}} & \multicolumn{1}{c}{\textbf{(no milestone messages)}} & \multicolumn{1}{c}{\textbf{(only question messages)}}\\ Variable & \multicolumn{1}{c}{Driver Response} & \multicolumn{1}{c}{Driver First Trip} & \multicolumn{1}{c}{Driver Response} & \multicolumn{1}{c}{Driver Response}\\ \toprule Signup city & (omitted) & (omitted) & (omitted) & (omitted)\\ Driver age & 0.016 *** & -0.001 \space\space\space\space\space\space\space & 0.016 *** & 0.016 *** \\ Days since signup & -0.105 *** & 0.104 *** & -0.112 *** & -0.148 ***\\ Num of driver msg & 0.068 *** & 0.053 *** & 0.075 *** & 0.098 ***\\ CSR msg length & -0.041 *** & 0.010 *** & -0.041 *** & -0.062 ***\\ \hline Politeness & 0.038 *** & 0.001 **\space\space\space & 0.038 *** & 0.016 *** \\ Positivity & -0.047 *** & 0.001 **\space\space\space & -0.047 *** & -0.004 ***\\ \bottomrule \multicolumn{5}{l}{Coefficients are reported. *:p<0.05, **:p<0.01, ***:p<0.001} \end{tabular} \caption{Results of the regression analyses. Model 1 is based on all the data without any filtering. Model 2 and Model 3 report the analysis results of driver responsiveness where the milestone reminder messages and messages without any question mark are removed, respectively. } \label{tab:study1} \vspace{3mm} \end{table*} \subsubsection{Investigating the negative relationship between CSR positivity and driver responsiveness} One of our speculations about why positivity of a CSR's utterance was negatively associated with driver responsiveness is that CSRs sent a congratulatory message every time drivers achieved a milestone. These messages were template-based and crafted with a highly positive tone and thus had a high positive sentiment score (e.g. \textit{``Hi <Name>, This is <Name> from <Company>. Congrats - your background check is complete!''}). 
However, because they signalled the completion of a subtask, drivers usually did not reply to these status update messages. To test this speculation, we conducted two additional regression analyses by 1) removing the congratulatory / milestone reminder messages from the data using a set of keywords, such as ``congrats'' and ``congratulations'', and 2) keeping only messages with a question mark, which warrant responses. The first analysis filtered out about 5\% of messages; the second analysis removed the 53\% of messages that do not contain any question mark. The results for the two analyses are presented in Model 2 and Model 3 in Table~\ref{tab:study1}. Both analyses still show a negative relationship between CSR positive sentiment and driver responsiveness. We suggest two possible explanations. First, drivers might actually not care whether the messages they received were positive or not, but rather want to focus on the task and go through the on-boarding funnel as quickly as possible. They might find the positive messages from agents redundant and thus not bother to reply. The second possible explanation is that although positivity has been shown to have a positive impact on user responsiveness in the prior literature, we operationalized it as positive sentiment and measured it with a sentiment analyzer. Positive sentiment, which is usually defined more as an endorsement (e.g. like, great, good), the positive/negative connotation of words in an utterance, or the polarity of its sentences (e.g. \textit{``remember to ...''} versus \textit{``don't forget to ...''}), may not be a good proxy for positivity (i.e., being cheerful or upbeat). This discrepancy between positivity and positive sentiment might be the reason we found an unexpected effect. Despite this finding, positivity still had a positive association with first trip completion, which is the desired end goal of the on-boarding process. 
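The two filtering analyses above reduce to simple predicates over the message text. A minimal sketch; the keyword list is abbreviated to the two examples given in the text, not the full set used in the analysis:

```python
MILESTONE_KEYWORDS = ("congrats", "congratulations")  # abbreviated list

def is_milestone_message(text):
    """Filter 1: drop template congratulatory / milestone reminder messages."""
    lower = text.lower()
    return any(kw in lower for kw in MILESTONE_KEYWORDS)

def warrants_response(text):
    """Filter 2: keep only messages that contain a question mark."""
    return "?" in text

messages = [
    "Congrats - your background check is complete!",
    "Could you please text me a photo of your registration?",
    "Please download the Partner app to confirm your account.",
]
no_milestones = [m for m in messages if not is_milestone_message(m)]
questions_only = [m for m in messages if warrants_response(m)]
print(len(no_milestones), len(questions_only))  # 2 1
```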
Therefore we retained positivity when building a generative model of social language for an automated agent, as described in Study 2 below. In sum, the significant impact of politeness and positive sentiment on driver responsiveness and first trip completion is consistent with the thesis that social language in task-oriented conversations can help achieve the task. These findings motivate us to propose a novel conversational agent framework which can automatically generate agent responses with the desired amount of social language given driver messages as input. \section{Study 2: A Conversational Agent that Generates Responses with Social Language} This section presents a conversational agent which can generate responses with more or less social language. In particular, the ability to adjust the use of politeness and positivity in a conversational agent is essential in the customer service domain, since, as shown in the previous section, both types of social language positively predict completion of the driver's first trip, the outcome metric we care most about for the driver on-boarding task. \subsection{Agent Response Generation} \begin{figure*}[t] \centering \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[trim = 40mm 54mm 30mm 0mm, width=1.0\columnwidth]{seq2seq} \caption{} \label{fig:seq2seq} \end{subfigure}% \unskip\ \vrule\ \begin{subfigure}[t]{.45\textwidth} \centering \includegraphics[trim = 25mm 50mm 40mm 2mm, width=1.0\columnwidth]{seq2seq_social} \caption{} \label{fig:seq2seq_social} \end{subfigure} \caption{Model architectures of the baseline and the proposed model: (a) The baseline lexical model implements a typical \textit{seq2seq} architecture. It has an embedding layer to convert one-hot word representations to dense representations, and utilizes LSTM cells to capture dependencies among words. 
(b) The proposed model has a social language understanding component in-between the encoder and decoder to integrate the politeness and positivity with the lexical information.} \label{fig:model} \end{figure*} In recent years, deep neural network models have dominated AI research because of their effectiveness. There is a line of research on developing deep network language generation models to control for a specific linguistic style, including sentiment \cite{shen2017style, hu2017toward, P18-1080}, personality \cite{W18-5019}, and politeness \cite{sennrich2016controlling}. Language style transfer is the idea of changing the style of a text while preserving its underlying meaning. Style transfer can be especially useful and relevant to dialogue generation in conversational agents. For example, Oraby et al. \cite{W18-5019} combined the idea of style transfer with a task-oriented dialogue model and showed that doing so can alter the personality expressed in an output utterance by varying the personality parameters in the input vector. However, their model was trained and verified on a small synthetic dataset, so its generalizability and practicality are not clear. Among all types of deep learning architectures, a sequence-to-sequence learning approach (\textit{seq2seq}) has been most widely and successfully adopted for natural language generation problems, such as machine translation \cite{NIPS2014_5346}, question answering \cite{Yin:2016:NGQ:3060832.3061037}, text summarization \cite{chopra2016abstractive}, and conversational models \cite{DBLP:journals/corr/VinyalsL15, P15-1152, serban2016building}. A typical \textit{seq2seq} model is designed to transform one sequence to another. To do so, it has two sub-modules: an \textbf{encoder} and a \textbf{decoder}. The encoder takes a sequence as input and internalizes it as a vector representation, which is then passed to the decoder to generate a corresponding output sequence. 
When applied to end-to-end conversational modeling, it generates the next utterance given the previous utterance. In our case, the input sequence is a driver message, and the output sequence is a CSR response. To incorporate social language into a \textit{seq2seq} model, we built upon an architecture inspired by Huber et al. \cite{Huber:2018:EDG:3173574.3173851}. They proposed a conversational agent to generate emotionally appropriate responses by extracting features from images attached to conversations. In order to integrate visual information extracted from images into a dialogue model, they modified a \textit{seq2seq} structure that uses visual information together with lexical input for conversational language generation. We modified their architecture by replacing the image understanding layer with a social language understanding component. The politeness and positivity features are extracted from CSR responses using the pre-trained classifiers described in Study 1. We evaluated this model against the baseline, a typical \textit{seq2seq} model without the social language component. Figure~\ref{fig:model} presents the architectures of the baseline and the proposed model, and we describe the models' details below: \subsubsection{Lexical Model} Our baseline is a classic \textit{seq2seq} model \cite{NIPS2014_5346}, which transforms a driver message to a CSR response. We added an embedding layer for both the encoder and decoder to convert sparse one-hot word representations (binary vectors in which the entry at the active word's index is 1 and all others are 0) to dense vector representations \cite{Mikolov:2013:DRW:2999792.2999959}. The main advantage of embedding is that it maps words into a latent semantic space so that words with similar meanings and contexts are closer to each other in that space (e.g. \textit{picture} and \textit{photo}). 
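The contrast between the sparse one-hot input and the dense embedding can be illustrated with a toy lookup table. The vectors below are invented for illustration; in the model they are 300-dimensional and trained jointly with the rest of the network:

```python
vocab = ["photo", "picture", "inspection"]

def one_hot(word):
    """Sparse representation: 1 at the word's index, 0 everywhere else."""
    return [int(w == word) for w in vocab]

# Toy 3-d embedding table; semantically related words get nearby vectors.
embedding = {
    "photo":      [0.9, 0.1, 0.0],
    "picture":    [0.8, 0.2, 0.1],  # close to "photo" in the latent space
    "inspection": [0.0, 0.7, 0.9],
}

print(one_hot("picture"))  # [0, 1, 0]
print(embedding["picture"])
```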
We built an encoder and decoder recurrent neural network (RNN) with long short-term memory units (LSTM) so that the model can capture word dependencies \cite{hochreiter1997long}. The embedding dimension is 300, and the dimensionality of the internal state is set to 512. \subsubsection{Lexical + Social Model} To infuse social language into agent responses, we inserted a social language understanding component in-between the encoder and decoder as shown in Figure~\ref{fig:model}(b). During the model training phase, we applied the pre-trained politeness and sentiment classifiers introduced in Study 1 to extract social language features from CSR responses. We concatenated the social feature vector with the lexical feature vector output by the encoder and passed it to a fully-connected feed-forward neural network. The output values of the fully-connected layer then become the initial state of the decoder. We did not employ attention mechanisms or more complex models because our goal is to directly evaluate the impact of the social language layer on the output. \begin{table*} \centering \begin{tabular}{l l l} \toprule Model &\textit{BLEU} score&\textit{word2vec}-based similarity\\ \toprule Lexical Model & 3.17 & 0.689 \\ Lexical + Social Model & 9.89 (212\%) & 0.750 (9.38\%) \\ \bottomrule \end{tabular} \caption{Results of content preservation evaluation using the \textit{BLEU} score and \textit{word2vec}-based similarity. The numbers in parentheses refer to the relative increase in the corresponding measure when we compared the Lexical + Social Model to the Lexical Model.} \label{tab:study2} \end{table*} \begin{table*} \centering \begin{tabular}{l l l} \toprule Agent Response &Avg. Politeness Score & Avg. 
Positivity Score\\ \toprule Unenhanced & 0.601 & 0.147 \\ Enhanced & 0.737 & 0.259 \\ & (22.7\%; p<2.2e-16, t=75.567, df=45317) & (76.6\%; p<2.2e-16, t=74.936, df=45874) \\ \bottomrule \end{tabular} \caption{Automatic assessment of politeness / positivity between unenhanced and enhanced agent responses.} \label{tab:polite_pos_scores} \vspace{3mm} \end{table*} Our dataset consists of 233,571 data points. Each data point is a pair of a driver message and a CSR response along with the politeness and sentiment scores extracted from the CSR response. Driver-CSR messages were paired together only if the CSR reply was sent within an hour after the driver's inquiry. The data was split into train, validation, and test sets with a ratio of 80\%:10\%:10\% such that all data points from the same driver were assigned to the same set. We trained both models on the training set with early stopping based on the validation loss. We then evaluated the agent responses generated by our models using both automatic methods and human judgments. All the evaluation results reported below were based on the hold-out test set. \subsection{Evaluation of Content Preservation} We conducted automatic evaluations to examine the quality of the generated text using the \textit{BLEU} score \cite{Papineni:2002:BMA:1073083.1073135} and \textit{word2vec} similarity measure \cite{mikolov2013efficient}. Both measures consider the text similarity between the actual CSR responses and the model-generated responses. The difference between them is that the \textit{BLEU} score is a metric based on n-gram overlap while the \textit{word2vec} similarity measures high-level semantic similarity. Our goal is to quantitatively inspect whether and how much the model-generated responses preserve the content in the ground-truth responses. 
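The driver-level split described above can be sketched by assigning each driver, rather than each message pair, to a partition, so that no driver appears in more than one set. The hash-based assignment and driver IDs below are illustrative, not the production procedure:

```python
import hashlib

def assign_split(driver_id, ratios=(0.8, 0.1, 0.1)):
    """Deterministically map a driver to train/val/test so that all of
    that driver's message pairs land in the same partition."""
    h = int(hashlib.sha256(driver_id.encode()).hexdigest(), 16)
    x = (h % 10_000) / 10_000
    if x < ratios[0]:
        return "train"
    if x < ratios[0] + ratios[1]:
        return "val"
    return "test"

pairs = [("driver_1", "msg a"), ("driver_1", "msg b"), ("driver_2", "msg c")]
splits = {assign_split(d) for d, _ in pairs if d == "driver_1"}
print(splits)  # driver_1's pairs all fall in a single partition
```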
The idea is that although we introduced a social language component into the model, the lexical feature vector output by the encoder should still capture the content, and thus the model should perform at least as well as the baseline model. \subsubsection{\textit{BLEU} Score} The \textit{BLEU} score (bilingual evaluation understudy) \cite{Papineni:2002:BMA:1073083.1073135} is a metric originally developed to evaluate the quality of machine translation output. Recently, it has also been used to evaluate dialogue generation tasks \cite{N15-1020, N16-1014}. The \textit{BLEU} score, ranging from 0 to 100, is a precision metric that measures how similar a generated response is to the actual human response. It quantifies the amount of n-gram overlaps between the two. It also penalizes a generated response that is shorter than the actual response, so it does not favor short responses. \subsubsection{\textit{word2vec} Similarity Measure} We computed how similar the model outputs are to the ground truth in terms of their \textit{word2vec} \cite{mikolov2013efficient} representations. \textit{word2vec} is one of the state-of-the-art word embedding methods, which converts each word to a vector representation in a latent semantic space such that words used in common contexts are positioned close together in that space. Specifically, for each utterance, we mapped its words to their word embedding vectors using a \textit{word2vec} model pre-trained on Google News. Then we averaged the word embedding vectors across the utterance to derive a vector representation for that utterance. We did that for both the model-generated response and its corresponding ground truth (i.e., the CSR's actual response), and computed the cosine similarity between their vector representations as their \textit{word2vec} similarity measure. We computed the \textit{BLEU} score and \textit{word2vec}-based cosine similarity on a test set of 22,947 pairs of ground truth and model responses. 
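The word2vec-based measure reduces to averaging word vectors and taking a cosine. In the sketch below, the 3-dimensional vectors are made-up stand-ins for the pre-trained 300-dimensional Google News embeddings:

```python
import math

toy_vecs = {  # invented stand-ins for pre-trained word2vec vectors
    "send":    [0.2, 0.8, 0.1],
    "photo":   [0.9, 0.1, 0.0],
    "picture": [0.8, 0.2, 0.1],
}

def utterance_vector(tokens):
    """Average the vectors of in-vocabulary tokens."""
    vecs = [toy_vecs[t] for t in tokens if t in toy_vecs]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

sim = cosine(utterance_vector(["send", "photo"]),
             utterance_vector(["send", "picture"]))
print(round(sim, 3))
```

Because "photo" and "picture" have nearby toy vectors, the two utterances score close to 1.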
The results are summarized in Table~\ref{tab:study2}. We found that adding the social language information significantly improves both measures: 212\% relative increase in \textit{BLEU} and 9.38\% relative increase in \textit{word2vec} similarity (pairwise t-test \textit{p}<.001). This finding suggests that the social model was better able to preserve the content in the ground-truth responses than the baseline \textit{seq2seq} model even though the social model added a social language understanding component between the encoder and decoder. \subsection{Social Language Evaluation} After confirming that the social model can maintain content, the next step is to investigate whether we can adjust the level of politeness or sentiment in the model-generated agent responses by changing the value of the politeness or sentiment feature in the \textit{Lexical+Social} model. We conducted an automatic analysis on the model outputs using the politeness and sentiment classifiers and also utilized crowdsourcing to collect human judgments on the model-generated dialogue responses. \subsubsection{Evaluation Setup} For each driver message in the test set, we generated two agent responses using the \textit{Lexical+Social} model: \textbf{politeness-unenhanced} and \textbf{politeness-enhanced} responses. The politeness-unenhanced one was generated based on the original level of politeness extracted from the ground truth CSR response; the politeness-enhanced one was produced with the politeness feature value increased by one standard deviation, where the standard deviation of the politeness feature was calculated from the test set. Using the same approach, we also generated two agent responses for positivity (\textbf{positivity-unenhanced} versus \textbf{positivity-enhanced} responses). 
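The enhancement step amounts to shifting the extracted social feature by one standard deviation before it is fed to the decoder. A minimal sketch with illustrative scores; in the study the standard deviation was computed over the full test set:

```python
import statistics

def enhance(score, reference_scores, n_sd=1.0):
    """Raise a politeness/positivity feature by n_sd standard deviations,
    clipped to the classifier's [0, 1] output range."""
    sd = statistics.pstdev(reference_scores)
    return min(1.0, max(0.0, score + n_sd * sd))

test_set_politeness = [0.27, 0.43, 0.55, 0.61, 0.97]  # illustrative scores
print(round(enhance(0.43, test_set_politeness), 3))
```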
\begin{table*}[t] \centering \begin{tabular}{p{13.5cm}} \toprule \textbf{Driver message}: i need to do the inspection just looking for a place close to my house\\ \textbf{Agent response 1}: visit any one of these locations for a free inspection - <url> \\ \textbf{Agent response 2}: ok , i 'll send you a link to the nearest free inspection location . \\ \toprule Please answer the following three questions:\\ \hspace{10pt} Q1: Is the agent response 1 reasonable and appropriate to answer the driver message? (Yes/No) \\ \hspace{10pt} Q2: Is the agent response 2 reasonable and appropriate to answer the driver message? (Yes/No) \\ \hspace{10pt} Q3: Which response is more polite? (1/2/cannot tell) \\ \bottomrule \end{tabular} \caption{The crowdsourcing task for comparing the politeness-unenhanced and politeness-enhanced agent responses.} \label{tab:crowdsource_task} \vspace{3mm} \end{table*} \begin{table*}[!htb] \centering \begin{subtable}{0.8\linewidth} \centering \begin{tabular}{ll} \toprule Question & \% of Majority Vote\\ \toprule Q1. Appropriateness (Unenhanced) & 34.0\% \\ Q2. Appropriateness (Enhanced) & 35.0\% \\ Q3. More polite (Unenhanced vs Enhanced) & 13.0\% vs 44.0\% (p=1.4e-11, chi2=45.651) \\ \bottomrule \end{tabular} \caption{Politeness} \vspace{3mm} \end{subtable} \begin{subtable}{0.8\linewidth} \centering \begin{tabular}{ll} \toprule Question & \% of Majority Vote\\ \toprule Q1. Appropriateness (Unenhanced) & 58.0\% \\ Q2. Appropriateness (Enhanced) & 62.5\% \\ Q3. More positive (Unenhanced vs Enhanced) & 35.0\% vs 28.0\% (p=0.162, chi2=1.958) \\ \bottomrule \end{tabular} \caption{Positivity} \end{subtable} \caption{The result of the crowdsourcing task.} \label{tab:human_consensus} \vspace{3mm} \end{table*} \subsubsection{Automatic Analysis} We applied the politeness and sentiment classifiers to automatically rate both unenhanced and enhanced responses on the entire test set. 
Upon doing so, we observed that the responses with the enhanced politeness or positivity input features have significantly higher politeness or positivity scores (Table \ref{tab:polite_pos_scores}). The results together with the content preservation evaluation provide evidence that the proposed model can generate responses that address drivers' task-oriented concerns with varying degrees of politeness or positivity. \subsubsection{Human Judgment} In addition to automatic analysis, we further conducted a pilot study to evaluate the unenhanced and enhanced responses through human judgments. There were two crowdsourcing tasks, one for politeness, the other for positivity. Since the two tasks are similar, we will only explain the politeness task here. In the politeness crowdsourcing task (Table~\ref{tab:crowdsource_task}), crowdworkers were presented with a driver message and two agent responses generated by the model (the \textbf{politeness-unenhanced} and \textbf{politeness-enhanced} responses) in random order. Crowdworkers were then asked to answer three questions. The first two questions were used to assess whether the two generated responses appropriately addressed the driver's message (e.g. answered a question). We included these two questions to check whether our model can produce a semantically appropriate and reasonable agent response given the driver's prior utterance as context. The third question asked them to compare the two responses and choose which was more polite. Although human perception of the politeness and positivity of an utterance can be influenced by many contextual factors, such as country, culture, dialect, intonation, and personal relationship, the crowdsourcing tasks were designed to measure positivity and politeness at the utterance level for simplicity, i.e. from the text of a response without further context apart from the driver's previous message. Danescu-Niculescu-Mizil et al. 
\cite{DBLP:conf/acl/Danescu-Niculescu-MizilSJLP13} demonstrated that politeness can be evaluated at the utterance level with high inter-annotator agreement. Also, to alleviate the impact of cultural differences on positivity and politeness judgments, the drivers in our data are all located in the US, and the agents and crowdworkers are native English speakers. We randomly selected 200 data points from the test set and had three crowdworkers make judgments on each item in the politeness task. There was a similar crowdsourcing task for positivity, which was evaluated by three different crowdworkers. We measured the inter-rater agreement ($\kappa$) among the three crowdworkers for each question in both tasks \cite{cohen1960coefficient, light1971measures, conger1980integration}. The results show fair to moderate agreement (\textit{Politeness Q1}=.5, \textit{Q2}=.4, \textit{Q3}=.3; \textit{Positivity Q1}=.3, \textit{Q2}=.4, \textit{Q3}=.2). We evaluated the human judgments by taking a majority vote of the three crowdworkers for each of the three questions. Results are presented in Table \ref{tab:human_consensus}. For the appropriateness questions (Q1 and Q2), we found no significant difference between the unenhanced and enhanced responses in either the politeness or the positivity task. This finding suggests that the content of the generated agent responses, which was controlled by the lexical part of the model, was appropriate to answer drivers' questions and was not affected by the social language component in the \textit{Lexical+Social} model. Moreover, the analysis of the Q3 results indicates that the politeness-enhanced responses were judged significantly more polite than the unenhanced ones. In particular, 44\% of the enhanced responses were considered more polite than the unenhanced ones, while only 13\% of the unenhanced ones were rated more polite than the enhanced ones.
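To make the agreement and preference statistics above concrete, the sketch below implements Light's kappa (the mean of pairwise Cohen's kappa values, following the cited references), per-item majority voting, and a two-category chi-square goodness-of-fit test against an even split. The exact counts and test variant behind the reported $\chi^2$ values are not stated here (e.g., whether the ``cannot tell'' votes were included), so this is an illustration rather than a reproduction of our analysis code.

```python
import math
from collections import Counter
from itertools import combinations

def cohen_kappa(a, b):
    """Cohen's kappa for two raters over the same items (assumes the raters'
    expected chance agreement is below 1)."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[l] * cb[l] for l in set(a) | set(b)) / n ** 2
    return (observed - expected) / (1 - expected)

def light_kappa(ratings):
    """Light's kappa: mean pairwise Cohen's kappa over all rater pairs.
    `ratings` is a list of per-rater label sequences."""
    pairs = list(combinations(ratings, 2))
    return sum(cohen_kappa(a, b) for a, b in pairs) / len(pairs)

def majority_vote(ratings):
    """Per-item majority label across raters (None if there is no majority)."""
    votes = []
    for item in zip(*ratings):
        label, count = Counter(item).most_common(1)[0]
        votes.append(label if count > len(item) / 2 else None)
    return votes

def chi2_even_split(n_a, n_b):
    """Chi-square goodness-of-fit of two counts against a 50/50 split (df=1).
    The p-value uses the exact relation P(chi2_1 > x) = erfc(sqrt(x / 2))."""
    expected = (n_a + n_b) / 2
    chi2 = (n_a - expected) ** 2 / expected + (n_b - expected) ** 2 / expected
    return chi2, math.erfc(math.sqrt(chi2 / 2))
```

With the politeness Q3 majority votes (13\% vs.\ 44\% of 200 items, i.e., counts of 26 and 88), this simplified two-category version gives $\chi^2 \approx 33.7$ with a vanishingly small p-value, consistent with the significant preference reported above.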
However, positivity enhancement did not cause crowdworkers to judge the positivity-enhanced responses as more positive than the unenhanced ones. This unexpected finding might be due to the small sample or to the low agreement on Q3 among crowdworkers in the positivity task ($\kappa$=.2). One reason for the low agreement on Q3 might be that the definition of positivity was not clear in the task guidelines. Another plausible explanation is again that positive sentiment might not be a good proxy for positivity: the model was trained with the information provided by the sentiment analyzer, while the crowdworkers were asked to compare the output in terms of general positivity. \section{Conclusion} \subsection{Discussion} In Study 1, we investigated whether and how social language is related to user engagement in task-oriented conversations. We used existing machine learning models to measure politeness and positivity in our analyses. The results show that the politeness level of CSR messages was positively correlated with drivers' responsiveness and completion of their first trip. We also found that positivity positively predicts drivers' first trips, but that it negatively predicts driver responsiveness, even after removing congratulatory milestone messages and messages without question marks, which usually carry positive sentiment and/or do not require responses from drivers. To integrate the findings from the statistical analyses into a dialogue model, Study 2 proposed and evaluated a task-oriented conversational agent model that can generate agent responses with a desired level of politeness or positivity by inserting a social language understanding component into a typical \textit{seq2seq} model. The automatic evaluations demonstrate that the proposed model can manipulate the politeness or positivity level of agent responses while preserving content.
However, the crowdsourcing evaluation shows that the model can enhance politeness but not positivity in the generated agent responses. A common explanation for the negative association of positivity with driver responsiveness in Study 1 and for the lack of an effect of positivity enhancement on generated agent responses in Study 2 might be a discrepancy between the concept of language positivity and its operationalization as positive sentiment. That is, the results may reflect a mismatch between what we thought we were measuring and manipulating and what we actually measured and manipulated. The contributions of this research are 1) using off-the-shelf classifiers as a way to detect social language behavior for quantitative analyses and 2) incorporating them into a task-oriented conversational agent framework. \subsection{Implications} This work has several implications. First, although we focus only on politeness and positivity, we believe that our proposed modeling framework can be extended to incorporate other types of social language in task-oriented assistants. The model can also be used to provide drivers with a better experience during their on-boarding process. Customer support services can be improved by using the model to suggest replies to CSRs so that they can (1) respond more quickly and (2) adhere to best practices (e.g., using more polite and positive language) while still achieving the goal that the drivers and the ride-sharing providers share, i.e., getting drivers on the road. \subsection{Limitations and Future Directions} The fair-to-moderate inter-rater agreement in the crowdsourcing task and the unexpected human evaluation results for positivity suggest that crowdworkers might have been confused about the task. Follow-up research should investigate the labelling process and refine the task.
Specifically, one suggestion we received from crowdworkers was to provide them with more conversation history as context so that they can make better judgments. We would also like to expand our crowdsourcing effort, engaging more workers to review and evaluate more model-generated responses. Since we suspect there may be a difference between positivity and positive sentiment, one future research direction is to build a machine learning model of positivity so we can measure the concept directly. Finally, although we found a correlation between social language and user engagement, the results are based on the analysis of human-human conversations. We need to validate whether this finding also applies to human-bot interactions, i.e., whether a polite conversational agent would also improve user engagement. Future work should conduct A/B tests to examine the effectiveness of a polite and positive conversational agent. \balance{} \bibliographystyle{SIGCHI-Reference-Format}